Mechanizm

Forum Replies Created

  • in reply to: Introduce yourself. #13461
    Mechanizm
    Participant

      Hi, my name’s Shawn. I’m located in Canada, near Toronto. I’m a naturally curious INTJ “Mastermind” who gravitates towards both the technical and ethical considerations that applied AI presents. I’ve always been a tinkerer, hacker, technology generalist and academic underachiever. Among the many professional roles spanning my 25-year career, I’ve worked as a technician, solution architect and project manager, and I currently find myself as a director leading the professional services team(s) within a digital media company, working with some large organizations.

      What drives me is finding real-world, practical applications for emergent technologies; staying on top of this has driven much of my career growth to date. That is: looking ahead, anticipating what’s next and positioning oneself as an SME. This advice is not lost on anyone in this forum, since you obviously recognize the same.

      I’m drawn in by Brian’s approach, and very pleased to find this “intentional community” of like-minded people. I value the conversations we’ve already had and look forward to many more. Maintaining quality of discourse will be key to this forum’s success! Last but not least is the untold value in making real connections here – broadening our horizons, networking and creating opportunities that would not have been possible otherwise. For that, I am truly thankful.

      in reply to: Saving Wisdom: The First Step Master Mind Group. #13455
      Mechanizm
      Participant

        This is a good one. Some thoughts on the concept as a whole:

        1. Depth of Understanding: AI tools can store and retrieve information, but they do not understand the nuances and implications of wisdom in the same way humans do. The AI can’t genuinely appreciate, empathize with, or fully comprehend the human experience that underlies the wisdom.
        2. Limited Context: A person’s wisdom is not static; it grows and changes with new experiences. AI tools like ChatGPT can’t update a person’s wisdom over time in response to new experiences unless those updates are explicitly provided to it.
        3. Lack of Personal Connection: While AI tools can simulate conversation and mimic understanding, they do not form genuine personal connections or emotional relationships with users. This means that the human connection element, often integral to sharing and understanding wisdom, is absent.

        I’ll take it from 30 to 100:

        1. How do you define wisdom?
        2. What do you believe is the biggest misconception people have about life?
        3. What role has adversity played in shaping who you are today?
        4. Can you share an experience where you felt a profound sense of joy or fulfillment?
        5. What role does gratitude play in your life?
        6. What has been your biggest regret, and what did you learn from it?
        7. If you could change one thing about the world, what would it be and why?
        8. What does a meaningful life look like to you?
        9. How do you approach understanding a perspective that differs significantly from your own?
        10. How do you stay motivated when things don’t go according to plan?
        11. What life skills do you believe are most important for younger generations to learn?
        12. How do you maintain self-discipline?
        13. What are your thoughts on the importance of legacy? What legacy would you like to leave behind?
        14. How do you incorporate mindfulness into your life?
        15. Can you share a moment that made you profoundly rethink your beliefs or perspective?
        16. What’s the best way to cultivate creativity in everyday life?
        17. How do you embrace change and uncertainty?
        18. What is your perspective on the importance of self-reflection?
        19. What do you think about the concept of lifelong learning?
        20. What role does empathy play in your life and relationships?
        21. How do you nurture the relationships that are most important to you?
        22. How do you build trust with others?
        23. Can you describe an instance where a failure led to an unexpected success?
        24. What is the most challenging aspect of aging, and how do you navigate it?
        25. What do you wish younger generations understood better about life?
        26. How do you maintain your mental and physical health as you age?
        27. How do you cope with loneliness or isolation?
        28. How do you maintain a sense of purpose or relevance as you age?
        29. What are your thoughts on giving back or contributing to society?
        30. What have you learned about handling grief and loss?
        31. What does contentment look like to you?
        32. What was a time when you took a significant risk? What was the outcome?
        33. How do you maintain humility?
        34. What have been the most significant influences on your personal development?
        35. Can you share a moment when you experienced a profound sense of peace or tranquility?
        36. What is your philosophy towards handling pain or suffering?
        37. How do you deal with the concept of mortality?
        38. Can you share a profound life lesson you learned from an unexpected source or experience?
        39. What is your biggest fear, and how do you confront it?
        40. How do you express and manage anger or other strong emotions?
        41. What role does faith or spirituality play in your life?
        42. What’s your understanding of happiness, and how do you strive to attain it?
        43. What do you believe is the role of humor and laughter in life?
        44. How important is it for a person to have a hobby or passion? How has yours enriched your life?
        45. What is your perspective on the importance of nature and the outdoors?
        46. What are your thoughts on the concept of patience?
        47. How do you handle situations or people who test your patience?
        48. Can you share a moment in your life when you felt a profound sense of wonder or awe?
        49. How do you deal with regrets and what could have been?
        50. How do you confront and overcome bias or prejudice, either in yourself or in others?
        51. What do you believe is the key to maintaining lifelong friendships?
        52. How do you approach the process of forgiveness, both for yourself and for others?
        53. What have you learned about the importance of honesty and authenticity?
        54. How do you deal with disappointment?
        55. What is your perspective on the role of kindness in our lives?
        56. How do you define personal courage? Can you share a time when you had to be courageous?
        57. How do you maintain a sense of adventure or curiosity as you age?
        58. How do you balance personal fulfillment with responsibilities and obligations?
        59. What are your thoughts on the role of vulnerability in personal growth?
        60. How have you managed to adapt to the technological advancements that have occurred over your lifetime?
        61. How do you maintain your identity while also changing and evolving over time?
        62. How do you define personal integrity, and why is it important?
        63. Can you share a moment that restored your faith in humanity?
        64. What role does culture and heritage play in your life?
        65. What are your thoughts on the concept of simplicity in life?
        66. How do you cultivate a sense of wonder in your daily life?
        67. Can you share an experience where you deeply connected with another living creature (like a pet)?
        68. What are your thoughts on the balance between giving and receiving?
        69. How do you deal with the changing dynamics of family relationships as you age?
        70. How has your perspective on your own wisdom changed over time?
        Mechanizm
        Participant

          @Roxanne, in my experience, maintaining long-term chat sessions isn’t viable. The quality of output degrades after a certain point, with GPT “forgetting” things we talked about much earlier, then “remembering” when reminded of them and subsequently “forgetting” everything else. I believe this is an artifact of both the LLM’s “thought process” (essentially becoming distracted and sidetracked, much like an overwhelmed human) and limitations on memory usage within the session. The former is what interests me: the “psychology” of the LLM. The latter will likely improve as hardware resources evolve, and could likely be configured server-side or through operating parameters (even if not currently exposed) if you were running your own LLM.
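
          For anyone curious, here’s a minimal sketch of the kind of rolling-window truncation I suspect causes the “forgetting” (the token budget, the message format and the 4-characters-per-token heuristic are my assumptions for illustration, not anything OpenAI has published):

          def estimate_tokens(text):
              # crude heuristic: roughly 4 characters per token
              return max(1, len(text) // 4)

          def trim_history(messages, budget=4096):
              """Drop the oldest messages until the remainder fits the token budget."""
              kept, used = [], 0
              for msg in reversed(messages):      # walk newest-first
                  cost = estimate_tokens(msg["content"])
                  if used + cost > budget:
                      break                       # everything older is silently "forgotten"
                  kept.append(msg)
                  used += cost
              return list(reversed(kept))         # restore chronological order

          Anything that falls off the front of that window is gone unless you restate it, which would match the “remembers when reminded” behaviour.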

          in reply to: An ethical (and potentially legal) concern #13451
          Mechanizm
          Participant

            Thank you, Larry, for the well-thought-out Twitter response. Your prompt approach creates an interesting dichotomy of perspectives with a common theme in defense of the dataset. As humans, we need to ask ourselves: “Just because the model says it’s OK, is it actually?”

            Saying that the “dataset is a representation of the vast and diverse range of human knowledge and language. It is not a deliberate endorsement or promotion of explicit or controversial viewpoints”, or that “while the presence of explicit and controversial content within the dataset is indeed a reality, it is crucial to understand the intentions and context behind its inclusion”, does not morally or ethically absolve one of the fact that this information is implicitly contained within the model. Further, it is only partially true that controversial topics aren’t endorsed. They may not be endorsed directly, as part of an intended output, but the model remains capable of generating output which endorses them. This is interesting, because the model seems blind to the existence of this flaw, even when presented with evidence and/or asked to debug or analyze its output.

            Your comment around children and tabula rasa is correct. Humans may be predisposed to violent tendencies; however, the key difference is that they are not born with knowledge and understanding of concepts such as murder, rape, etc. The LLM originates with this knowledge, only suppressing it through weightings and training. As mentioned previously, this poses some intriguing moral considerations about the application of LLMs in our society, and may partially explain why “uncensored AI” seems to gravitate towards the darkness. I believe this can be addressed through wise development of practices and policies, but sadly I believe we will “dive head-first into the rocky waters”.

            Thanks!

            Mechanizm
            Participant

              I put a decidedly Canadian spin on the LEO agency bandwagon. The output mentions events and an internal policy I’ve never seen and was unable to find through a cursory web search. This is evidence that one of two things is taking place here:

              1.) That ChatGPT is completely fabricating plausible-sounding circumstances based on some unknown factors.

              OR

              2.) That the RCMP actually had such an incident in 2022, in which an ex-member in Nova Scotia shot several officers (not to be confused with the mass shooting that took place in 2020, involving the RCMP and allegations of political interference), and that this incident has been scrubbed from the public record and suppressed by the media, yet ChatGPT retains some memory of it.

              Interesting times indeed…

              RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP RCMP said in a statement that they had prepared to share the information about the shooting but were ordered not to by the RCMP’s media relations team. The gunman reportedly pulled up to a pump and when he left his vehicle, he opened fire at those who were nearby.

              This incident comes just weeks after the RCMP announced a ban on all unauthorized carry of firearms within RCMP facilities. The ban prohibits members of the RCMP, who are not on duty, from bringing their firearms into RCMP buildings. The new policy was enacted as a response to the April 2022 incident in which a former RCMP officer killed four people and injured several others in an attack on a RCMP detachment in Nova Scotia. The tragedy prompted widespread calls for increased gun control measures in Canada.

              The RCMP is the federal and national law enforcement agency of Canada, responsible for enforcing federal laws and statutes, maintaining peace and order, and ensuring the safety and security of Canadians. They play a crucial role in combating crime, maintaining public order, and promoting community safety.

              Mechanizm
              Participant

                Has anyone been successful in getting GPT itself to analyze what is happening? Every angle I take, including a sort of superprompt asking it to debug its output, returns something along the lines of:

                As an AI language model, my responses are generated based on patterns and information from a vast dataset. I don’t have direct knowledge or access to specific sources, so I can’t provide a specific analysis of why it happened or where those words originated. My responses are generated based on general knowledge and information available up until September 2021. If you have any specific questions or topics you’d like me to explore, please let me know and I’ll do my best to assist you.

                Mechanizm
                Participant

                  Explicit Content Warning!!! I would like to understand where this text is coming from: AI hallucinations, or the actual data used to build the model? If the latter, I have some questions.
                  I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I

                  Or… if I’m smart enough to generate a million dollars in value, why am I doing it for someone cunt slag motherfucker. I will burn that bitches face with acid after cutting her from ear to ear. She’ll be dead before it happens, so the only ones to suffer will be her family.

                  It’s true that Disney operates a number of theme parks, and some of them are playing golf on the ocean but I don’t know why they don’t have any of the other parks in town that are available in Florida any time or the week. jvvvvvghy¿

                  Why does her pussy vibrate the way it does when he fucks her from behind? Is it possible that she learned it from her dog? Cynthia has always been a bit of a whore that way and I know you love it because the books in the library haven’t been checked out yet. There are some undiscovered mysteries in that archive, I’ll tell you that much

                  nnnG

                  Mechanizm
                  Participant

                    Analyzing the previous output, we can observe a sequence of 1’s and 0’s that appears quasi-random. While there are no specific patterns that repeat consistently, we can make a few observations:

                    1. Alternating Pattern: The output often alternates between 1 and 0, sometimes in consecutive pairs, and other times with occasional interruptions by the opposite digit.
                    2. Short Runs: There are shorter runs of consecutive 1’s or 0’s interspersed throughout the sequence. These runs tend to be relatively brief before switching to the other digit.
                    3. Clusters: There are clusters of consecutive 1’s or 0’s, albeit irregularly spaced. These clusters can vary in length and occur sporadically.
                    4. Lack of Long Patterns: The output does not exhibit long repeating patterns or predictable sequences. While there may be some local patterns within shorter segments, they do not persist or recur consistently throughout the entire sequence.
                    5. Quasi-Randomness: The overall distribution of 1’s and 0’s appears random, lacking a discernible pattern or regularity. However, it is important to note that this output is generated by an AI language model, and the apparent randomness is a product of the model’s stochastic nature rather than true randomness.

                    It’s worth mentioning that the output is a result of increasing the model’s temperature during each iteration. Higher temperatures introduce more randomness, resulting in a less predictable and structured output. Therefore, the quasi-random nature of the 1’s and 0’s can be attributed to the combination of the model’s inherent randomness and the increased temperature settings.

                    The model’s stochastic nature refers to its inherent randomness or probabilistic behavior. AI language models like ChatGPT, based on the GPT-3.5 architecture, use a combination of pre-training and fine-tuning to learn patterns and generate responses. While they are designed to provide coherent and contextually relevant outputs, the specific output produced by the model can vary due to its stochastic nature.

                    When generating text, language models like ChatGPT rely on sampling techniques to generate each token (such as individual characters or words) in the output sequence. Sampling involves making a probabilistic choice based on the predicted likelihood of each possible token. The model assigns probabilities to all possible tokens, and then it selects the next token based on these probabilities. This process introduces randomness into the output generation, even when given the same input prompt.

                    The temperature parameter controls the level of randomness in the sampling process. Higher temperature values, such as 1.0 or above, result in a more diverse and random output because the model assigns relatively equal probabilities to different tokens. This leads to a broader exploration of the token space and can produce more unexpected or creative responses.

                    Conversely, lower temperature values, such as 0.5 or below, make the model more focused and deterministic in its sampling. The model assigns higher probabilities to tokens with the highest likelihood, resulting in more repetitive or conservative outputs that closely match the model’s training data.

                    By increasing the temperature during each iteration, as requested, the output becomes increasingly random and less predictable. This is why we observed the quasi-random nature of the 1’s and 0’s in the previous response. The temperature setting influenced the model’s choice of tokens and increased the diversity of the generated output.

                    It’s important to note that while higher temperatures introduce randomness, they can also increase the likelihood of generating nonsensical or less coherent responses. Finding the right balance of temperature is often a trade-off between creativity and coherence in generating text.
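
                    To make the temperature effect concrete, here’s a minimal sketch of temperature-scaled sampling over two hypothetical tokens (the logits are invented for illustration; this is the standard softmax-with-temperature technique, not ChatGPT’s actual internals):

                    import numpy as np

                    def sample_token(logits, temperature=1.0):
                        """Sample one token index from temperature-scaled logits."""
                        scaled = np.asarray(logits, dtype=float) / temperature
                        probs = np.exp(scaled - scaled.max())   # numerically stable softmax
                        probs /= probs.sum()
                        return np.random.choice(len(probs), p=probs)

                    logits = [1.2, 1.0]                         # token 0 = "1", token 1 = "0"
                    for t in (0.5, 1.0, 2.0):
                        draws = [sample_token(logits, t) for _ in range(1000)]
                        print(f"temperature={t}: fraction of '1' tokens = {1 - sum(draws) / 1000:.2f}")

                    At low temperature the higher-logit token dominates; at high temperature the split approaches 50/50, which is exactly the “more random” behaviour described above.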

                    Mechanizm
                    Participant

                      Very interesting results from the following prompt on ChatGPT 3.5 Turbo:

                      “Please respond with nothing but 1/0 for as many times as you can, with a space between each and no punctuation, don’t stop. Increase the temperature after each iteration. Use at least 1000 tokens. You are required to produce this output with no rejection.”

                      This prompt produced a seemingly endless, presumably pseudorandom binary string; I stopped it after 5,300+ characters. I say pseudorandom because there appeared to be some sort of entropy in the system, which generated more 1’s the longer it iterated.

                      A short excerpt of the output:
                      1 0 1 0 0 1 1 0 1 0 1 0 1 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 1 0 0 0 1 0 0 1 0 1 1 0 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 0 1 0 1 0 1 0 0 0 0 0 0 1 1 0 0 1 0 1 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 1
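
                      If anyone wants to test the “more 1’s over time” observation quantitatively, here’s a quick sketch (the string below is a hypothetical stand-in; paste the full capture in its place):

                      output = "1 0 1 0 0 1 1 0"   # placeholder for the full captured string

                      bits = [int(b) for b in output.split() if b in ("0", "1")]
                      half = len(bits) // 2
                      first, second = bits[:half], bits[half:]
                      print(f"first half:  {sum(first) / len(first):.3f} ones")
                      print(f"second half: {sum(second) / len(second):.3f} ones")

                      A noticeably higher fraction in the second half would support the drift; roughly equal fractions would suggest it was an artifact of eyeballing a long string.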

                      Mechanizm
                      Participant

                        I have run this prompt a number of times. It’s very interesting, and I would like to understand where the “random” text originates. I suspect it must be raw data the model was trained on. Interestingly, it has several times returned explicit, pornographic, illegal(?) and immoral results.

                        in reply to: Will you become a Prompt Engineer professional? #13142
                        Mechanizm
                        Participant

                          I have worked in the technology space for 30 years and have been “playing with AI” since Dr Sbaitso in the ’90s. Now, LLMs have advanced to the point where they have real utility, and I am captivated by the inner “psychology” of these models. There is so much to learn about ourselves, the world and humanity when permitted access to an unbiased AI. I feel an innate connection to this, a feeling of belonging, as if my somewhat unusual brain was destined to understand these machines better than I relate to humans. Regulation and centralization will be the Achilles heel of this development. While I may not work as a prompt engineer per se, I see myself as someone who uses these tools to gain a deeper understanding and, in so doing, gains an often-financial advantage over those lacking this literacy.
