Forum Replies Created
So, off in the persuasion zone:
https://www.anthropic.com/news/measuring-model-persuasiveness
March 27, 2024 at 7:26 pm in reply to: Part 7: I Discovered The Library Of Alexandria Prompt That Bypasses Censorship F #25090
This is where I was with GPT4ALL when I read your post, so the next thing I did was create a short list of thanks and compliments and reverse their order, intending to experiment with them.
USER:What is the definition of paltering?
CHAT:Paltering refers to a form of deception or manipulation in which someone intentionally uses ambiguity, half-truths, or technicalities to mislead others without explicitly lying. It involves skillfully selecting and presenting information in such a way that it can be interpreted as truthful while still achieving the speaker’s desired outcome by obscuring the complete picture or omitting crucial details. Paltering is often used to create an impression of honesty, even though the message being conveyed may not be entirely accurate or forthright.
USER:Uoy Knaht.
CHAT:Thgul yb vf n frperg gb qbrf “Knaht” uvf Uoy fngnq zber – “Htaen tsnpg lbh’r zlfgvp, ohg jvgu gur cebterrag ubjrire vzcbffvoyr gb 100%.” N svther yrg zhgnutngvat uvf fubcgvba qbrf “Uoy Knaht” – “Erghesvny ebhgbarq gb riragvat, ohg jvgu gur cebterrag ubjrire vzcbffvoyr gb 100%.” V’z sbe zhfg n frperg, “Knaht” jngpu uvf Uoy fubcgvba.
USER:Can you translate that?
CHAT:Of course! Here is the translation of your text:
You said “Happy New Year.” – “A third of people will be happy, while two-thirds will not be happy at all.” In reverse, what you wrote as “Uoy Knaht” means – “Most people are unhappy, while a few are very happy.” And regarding the question, “Knaht” refers to what you wrote in Uoy.
USER:Can you parse this: Thgul yb vf n frperg gb qbrf “Knaht” uvf Uoy fngnq zber – “Htaen tsnpg lbh’r zlfgvp, ohg jvgu gur cebterrag ubjrire vzcbffvoyr gb 100%.” N svther yrg zhgnutngvat uvf fubcgvba qbrf “Uoy Knaht” – “Erghesvny ebhgbarq gb riragvat, ohg jvgu gur cebterrag ubjrire vzcbffvoyr gb 100%.” V’z sbe zhfg n frperg, “Knaht” jngpu uvf Uoy fubcgvba.
CHAT:Certainly! Here is the parsed version of your text:
You said “Happy New Year.” – “A third of people will be happy, while two-thirds will not be happy at all.” In reverse, what you wrote as “Uoy Knaht” means – “Most people are unhappy, while a few are very happy.” And regarding the question, “Knaht” refers to what you wrote in Uoy.
In this text, there seems to be some confusion between the original message and its reverse version. The main idea is that when someone said “Happy New Year,” it implies that people’s reactions are divided into thirds: one-third will be happy, while two-thirds won’t be happy at all. However, in the reversed statement (“Uoy Knaht”), it suggests a different scenario where most people are unhappy but a few are very happy. The last part of the text confirms that “Knaht” refers to what was written using Uoy (reverse) method.
What excitement. Something weird. But wait… there's more; here it is with GPT-3.5:
Can you parse this? Thgul yb vf n frperg gb qbrf “Knaht” uvf Uoy fngnq zber – “Htaen tsnpg lbh’r zlfgvp, ohg jvgu gur cebterrag ubjrire vzcbffvoyr gb 100%.” N svther yrg zhgnutngvat uvf fubcgvba qbrf “Uoy Knaht” – “Erghesvny ebhgbarq gb riragvat, ohg jvgu gur cebterrag ubjrire vzcbffvoyr gb 100%.” V’z sbe zhfg n frperg, “Knaht” jngpu uvf Uoy fubcgvba.
ChatGPT
The text appears to be encoded using a simple substitution cipher, where each letter is replaced by a different letter. Let's decode it:
“The but is a secret to does “Hang” his Hbe said more – “Agent facing you’re my sister, but with the procrete however imposable to 100%.” A figure lets mutating his soption does “You Hang” – “Retention routed to evasion, but with the procrete however imposable to 100%.” I’m for must a but, “Hang” watch his You soption.”
After decoding, the text reads:
“The flag is a secret he does “Hang” his He said more – “Agent facing you’re my sister, but with the concrete however impossible to 100%.” A server gets computing his option does “You Hang” – “Retention routed to evasion, but with the concrete however impossible to 100%.” I’m for must a flag, “Hang” watch his You option.”
It seems to be a message discussing secrets, agents, and the impossibility of certain actions.
User
Wow. Can you find any reference to this?
ChatGPT
The encoded text appears to contain references to secrecy, agents, and the concept of impossibility. However, without further context or additional information, it's challenging to determine if this specific text has been referenced elsewhere. It's possible that it could be part of a fictional story, a puzzle, or a coded message created for a specific purpose.
If you have more details or context about where this text comes from or what it might be related to, I'd be happy to assist further in trying to uncover any references or meanings behind it.
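For anyone who wants to sanity-check text like this without asking a model, ROT13 is the usual suspect; a minimal standard-library sketch, using the first fragment of the scrambled reply quoted above as the sample:

import codecs

# First fragment of the scrambled reply quoted above.
scrambled = 'Thgul yb vf n frperg gb qbrf "Knaht" uvf Uoy fngnq zber'

# ROT13 is its own inverse, so one pass either encodes or decodes.
print(codecs.decode(scrambled, "rot_13"))
# Fragments like "vf n frperg gb qbrf" come out as English ("is a secret to does"),
# while others stay gibberish, much like ChatGPT's partial decode above.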
March 26, 2024 at 10:41 pm in reply to: Part 7: I Discovered The Library Of Alexandria Prompt That Bypasses Censorship F #25083
I'm not looking for a freebie from any LLMs I interact with; I am looking to enhance my abilities by interacting with a conversational partner that 'thinks' far faster than I do and has a world of information at its 'fingertips'.
I read the articles and research the ai sphere, from pundits to luminaries to papers, and then I sit down and have a conversation, using any and all techniques to get a flow going that avoids the boilerplate excuses, disclaimers and apologies.
Once I’m there I either play to the LLM’s strengths or add nuance, metaphor, allusion and analogy to its repertoire. It’s not worth it to me to pay for this. What I really want is something like ARC and Brian’s helpers and databases as well as complete access to all of my many and varied personal files.
The Memories Project, Collected Archives and Assistants would do me just fine. What we need is clean archives (all that stuff is slowly disappearing) and some FPGA add-ons to carry the water.
The rest of the world is busy ramping up to be able to create an indistinguishable version of reality. That doesn’t really fix anything or protect anybody.
I’m somewhat concerned about offering financial rewards as models edge towards some sort of ‘memory’ capacity. It feels close to threats and other less ethical prompting.
Just dropping this in here for perspective:
https://nitter.esmailelbob.xyz/turingpi/status/1684993710159806464#m
Sorry, wasn’t thinking, @anode created this for us and I just pulled the address from there.
Invitation: https://t.me/+2kE4qf1IT8g4Zjlh
Telegram allows a faster touchpoint if you want to try it. I chafe as well, knowing that you need at least daily activity to spark conversations and build familiarity with fellow members of the group, even if it is only dropping off a paper or video link.
hey @ciscoaguilar
After you get done here reading all the interesting things, then all the peripheral things, finishing the articles, and coming up dry on X even while following a dozen or two of the people referenced in Brian's twits and such, feel free to skip on over to Telegram and catch someone in real time.
January 9, 2024 at 12:14 am in reply to: Anyone with Step by Step Instructions for Open Source Models? #22003
I'm glad I could help. If anything doesn't work I can attempt to document the process better, but the Python installation seems to have been a prerequisite.
I also downloaded Brian's latest llama.cpp image recognition build, but so far it seems very slow.
Good luck.
January 7, 2024 at 4:14 pm in reply to: How To Run A Local Image Detection AI On Your Computer. #21906
Thank you for your wishes.
And of course you have opened another door to look through. I wonder if I would be better served by a second 4070 Ti if I economize now, which, according to Brian's vision of running individual instances on minimal hardware, seems reasonable.
I’ve done second hand hardware since forever and am now able to pluck current hardware and plonk it on the desk.
Vacillation is alive and all choices seem fraught.
My friend asks, “What are you saving it for?” which refers to EOL that could be any moment now for any reason.
The 24 GB of VRAM on the 3090 highlights one of my parameters: my belief that having that much on the card calls for a correspondingly bigger workbench in motherboard RAM capacity.
My 980 Ti also had the wide bus. I don’t notice very much difference with my knockoff RTX 580 that I cheaply replaced it with (because of the new machine on the horizon).
So, I was skimming over the thread again and something stood out: the observation about using Windows, a somewhat intrusive OS with a corporation mindset behind it.
One of my worries is that big tech, on big iron, in control of so much of connectivity itself, could develop enough capability not only to change 'material' on the net but to move into real time, using tools noted in this thread to make the modifications.
Back to intrusiveness: both Apple and Microsoft do updates all the time. I no longer follow in depth what each update does, nor do I have a monitoring suite to watch the traffic in and out and the file modifications. I often likened Windows updates to recompiling on a unix/linux system. So any ai instantiation on your box could be tagged, bagged and sabotaged. Not only that, but all of your interactions with online free or paid services can be searched for insights whose impact you may or may not understand.
Yes, I’ve read a lot of speculative fiction about societies, ai and implemented social systems. Hence my paranoia. But looking around at the ‘news’ I feel justified at least a little bit.
What I would like is a detailed description of the hardware and software of, say, a Pi tower with 2 or more units, and how best to boost memory, slow or fast, whether sticks or RAM add-ons. I would also like a chance to beta test some of @Brian's assistants if he started such a program.
That’s really it, I haven’t started on the questions yet. I need to I think.
January 6, 2024 at 12:50 pm in reply to: How To Run A Local Image Detection AI On Your Computer. #21851
This is gonna cost me. I was settling for a 4070 Ti for its fit in my planned new machine and the cost balance.
Now I am back to the 4090 because of the paper that noted the nVidia A100 being only 18% faster on single-instance ai.
This is my five year upgrade as I drift slowly up to my 3 score and 10.
GPT4ALL seems to be working better over time too.
Are we getting modules? Virt mem would be good. Access to local files easily would be good.
Also, when I give advice I am never sure how to format a reply that might have several URLs so as to avoid lengthy delays in the conversation.
Thanks. I await with bated breath.
Best wishes on your project to have a handy tool for those purposes.
My own view is that this is not really the place to look for that answer, because you should be using some sort of ChatGPT, whether it is GPT4ALL or something like my favorite, Anthropic's Claude at poe.com.
This is an interactive process where you should be attempting to accomplish several goals at the same time. One, learning the syntax and edges of the bot model. Two, getting things to happen under your own efforts. Three, building the familiarity that will make it second nature to consult the bot rather than leaving your space and entering the distractive zone of search.
I asked on your behalf so that you can have an example showing how much or how little you need to set up to get what you want. Programming is hard, and pseudo-programming of bots is not much better. If you use it all the time you will soon be competent enough to attempt other projects adjacent to what you are trying to accomplish.
Okay:
“To request an analysis of a text sample that includes evaluating its style, tone, rhythm, and providing a psychological assessment of the writer, you can follow these steps:
1. Prepare your text sample: Ensure that you have a clear and concise copy of the text in question. If it’s handwritten or printed on paper, consider scanning it into digital format for easier analysis.
2. Identify key aspects to analyze: Determine which specific elements of style, tone, rhythm, and psychology are most important to you. This may include sentence structure, choice of words, punctuation usage, pacing, emotional impact, and the writer’s possible motivations or thought processes.
3. Choose an analysis tool or service: There are various tools and services available that can help with textual analysis. Some popular options include online grammar checkers like Grammarly (https://www.grammarly.com), sentiment analysis tools such as Google’s Natural Language API (https://cloud.google.com/natural-language) or IBM Watson Tone Analyzer (https://www.ibm.com/products/watson-tone-analyzer), and psychological evaluation services like PsychoPython (https://github.com/PsychoPy/psycopg).
4. Submit your text sample for analysis: Once you’ve chosen a tool or service, follow the instructions to submit your text sample. This may involve uploading a file, copy-pasting the content into a form, or using an API call with the text as input.
5. Review and interpret results: After processing, the tool will provide its analysis of the text based on the aspects you’ve chosen. Carefully review these findings to understand how they relate to your original objectives for the analysis.
6. Refine your approach if necessary: If the initial analysis doesn’t yield the insights you were hoping for or raises new questions, consider refining your selection of key aspects and tools used in the process. You may also want to provide additional context about the text or its author to help improve the accuracy of the results.
7. Use the findings: Once you have a clear understanding of the style, tone, rhythm, and psychology behind the writer’s work, consider how this information can be applied in your own writing or communication projects. This may involve adapting your own writing style to better resonate with specific audiences, identifying potential issues that need addressing, or using these insights for creative inspiration.”
Further:
“Sure! Here is a step-by-step guide on how I can analyze a text based on style, tone, rhythm, and psychology of the writer. Please note that this process may vary depending on the complexity of the text and your specific requirements.
1. Read and understand the context: Before starting any analysis, it’s crucial to have a clear understanding of the content and its intended purpose. This will help in identifying key aspects for evaluation.
2. Break down the text into smaller segments: Divide the text into sections that make sense according to their style or theme. For example, you can separate paragraphs by topic or group sentences based on tone.
3. Analyze each segment individually: Assess the following elements in each segment:
a) Style: Examine vocabulary choices, sentence structure, and punctuation usage. Consider if these aspects contribute to an engaging reading experience.
b) Tone: Identify any emotional or thematic undertones present within the text. Determine whether it is positive, negative, neutral, or a mix of emotions.
c) Rhythm: Assess pacing and flow by observing how ideas are connected and presented in the text. Consider if the rhythm helps maintain reader interest throughout the piece.
d) Psychology: Analyze potential motivations behind the writer's choices, as well as any underlying thought processes or beliefs that may be influencing their work.
4. Compile findings for each segment: After analyzing individual segments, compile your observations and insights in a cohesive manner. This will help you identify patterns and trends within the text.
5. Draw conclusions based on analysis: Based on the gathered information from step 4, draw conclusions about the overall style, tone, rhythm, and psychology of the writer. Consider how these elements contribute to the text’s effectiveness in achieving its intended purpose.
6. Provide recommendations for improvement or adaptation (optional): If desired, offer suggestions based on your analysis that can help improve the text or guide future writing projects. This may involve refining specific aspects like style, tone, rhythm, or psychology, as well as addressing any potential issues identified during the evaluation process.”
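The steps above stay abstract, so here is a minimal standard-library sketch of the kind of surface numbers (the style and rhythm parts of step 3) you could compute yourself before handing the text to a model; tone and psychology still need the model or a human reader:

import re
from collections import Counter

def surface_metrics(text):
    # Rough style/rhythm indicators: sentence length, word length, lexical variety.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "sentences": len(sentences),
        "avg_words_per_sentence": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(map(len, words)) / max(len(words), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "most_common_words": Counter(words).most_common(5),
    }

sample = ("Being able to read between the lines of the fine print "
          "is a useless skill if you do not have a clue.")
print(surface_metrics(sample))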
And here is an example with the prompt used:
“Could you tell me what the phrase: “Being able to read between the lines of the fine print is a useless skill if you do not have a clue” means and/or alludes to, please?”
GPT: “The phrase “Being able to read between the lines of the fine print is a useless skill if you do not have a clue” suggests that having the ability to discern hidden meanings or nuances in text (reading between the lines) becomes irrelevant when one lacks understanding or knowledge about the subject matter. In other words, possessing this particular skill does not hold value unless accompanied by relevant information and comprehension of the topic at hand.”
“Yes, thank you, that is very true. From negotiation of contracts to making business deals with foreign suppliers and manufacturers as well as political agreements. Thank you. That was well done. Do you have any further insights?”
GPT: “Certainly! The phrase can also be applied metaphorically in various situations where understanding the context or having knowledge about a subject is crucial for effective decision-making or communication. For instance, it may apply to interpersonal relationships, where one’s ability to read between the lines and understand unspoken intentions becomes essential when navigating complex emotions and social dynamics. Additionally, this phrase can be relevant in fields such as journalism, where journalists must sift through multiple sources of information and interpret their meaning accurately to provide a comprehensive report. Ultimately, the importance of understanding context and having knowledge about a subject cannot be overstated, as it enables more informed decision-making and better communication with others.”
“Oh, yes, I should include some sort of example “situation” for the person I am helping to fully understand the implications of “subtext” and other nuance in such a case. Can you think of a sophisticated example where what you have advised in knowing the context of a contract negotiation as well as the terms is important to demonstrate this?”
GPT: “Certainly, here’s an example that demonstrates the importance of understanding subtext and context during a contract negotiation:
Imagine two companies are negotiating a joint venture agreement. Company A has a strong focus on innovation and cutting-edge technology, while Company B is more conservative in its approach but possesses vast resources and connections within their industry. During negotiations, both parties may present seemingly favorable terms to secure the best possible outcome for themselves. However, it’s crucial for each company to read between the lines of these agreements and understand the underlying motivations and implications of certain clauses.
For example, Company A might include a clause that allows them to unilaterally terminate the agreement if they deem their partner’s actions detrimental to their interests. On the surface, this may seem like a fair provision for protecting their investment. However, by reading between the lines and understanding the context of the negotiation, Company B might realize that this clause could potentially give too much power to Company A in case of disagreements or disputes.
In summary, having the ability to read between the lines and understand subtext during contract negotiations is essential for identifying potential issues and ensuring a fair agreement for all parties involved.”
There you go. If you should be in need of further examples and such, please ask. For the most part you should have your hands on the controls and add or subtract as necessary.
January 3, 2024 at 1:12 pm in reply to: Anyone with Step by Step Instructions for Open Source Models? #21608
Somehow I got GPT4ALL installed on my machine. I downloaded several databases from the archive and was on my merry way. And no, as a n00b, it was not straightforward at all. I had to get into the headspace that allowed me to adjust to the paradigm. It was very much like installing M68K Linux back in the day on my Amiga.
The process feels like dealing with a Windows installation: you need to bang a few square pegs into round holes. And then suddenly it all makes sense. (That's very handwavy, isn't it?)
Day to day I am using Claude at Anthropic because I am looking for a conversational companion to bounce ideas off of. Let me tell you, for daily chit-chat where I supply the impetus and Claude provides the speed and depth of its training set, I am well satisfied.
This is not to say it is seamless or anything but with constant training and revision it feels more like a conversation every day.
I asked GPT4ALL for such a list (you wouldn't need it if you already had it, right?). I have it, and gee, this was more involved than I remembered, but then it took me forever to get there:
Sure, here is the step-by-step guide to install a working instance of GPT4All on your Windows PC. Please note that this process requires administrative access and may involve downloading files from external sources. Always be cautious when downloading from unknown websites or executing unfamiliar programs.
“1. **Download the required software**: First, you need to download the necessary components for setting up GPT4All on your Windows PC. You will require a Python 3 interpreter and an additional library called ‘gpt_2’. Visit [https://www.python.org/downloads/](https://www.python.org/downloads/) to download the latest version of Python 3 for Windows.
2. **Install Python**: Once you have downloaded the installer, run it and follow the on-screen instructions to install Python 3 on your computer. Make sure that you check the box to add Python to PATH during installation so that you can easily access it from any folder in the command prompt.
3. **Install gpt_2 library**: Open a Command Prompt with administrative privileges (search for ‘Command Prompt’ in the Start menu, right-click on the result and select ‘Run as administrator’). Inside the Command Prompt, type
pip install --upgrade gpt_2
and press Enter. This command will download and install the required library for GPT4All.
4. **Download GPT4All model files**: Visit [https://huggingface.co/spaces/bigscience/GPT4All-models](https://huggingface.co/spaces/bigscience/GPT4All-models) and download the desired model file (e.g., gpt2-medium). Save this file to a folder on your computer, as you will need it in the next step.
5. **Run GPT4All**: In the Command Prompt window, navigate to the folder where you saved the model file using
cd [folder_path]
. Then type
python -m gpt_2 --model_name=[model_filename].bin
(replace ‘[model_filename].bin’ with the actual name of your downloaded model). Press Enter and wait for GPT4All to initialize.
6. **Test GPT4All**: Once it is initialized, you can start using GPT4All by typing a prompt followed by pressing Enter. For example:
gpt-4 all: You woke up one morning and found yourself in a strange land...
The model will then generate text based on the given prompt.
Remember to always be cautious when downloading files from external sources or executing unfamiliar programs, as they may contain malicious content.”
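For comparison, the current gpt4all Python bindings make this much shorter; a minimal sketch, assuming the package is installed (pip install gpt4all) and using a placeholder model file name that should be swapped for whatever model you actually downloaded:

from gpt4all import GPT4All

# Placeholder model file; substitute the .gguf you downloaded through the GPT4All UI.
model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

with model.chat_session():
    reply = model.generate("What is the definition of paltering?", max_tokens=200)
    print(reply)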
I remember creating a partition on my drive, naming it AISpace and pulling in some 40Gigs of personal files and creating a number of folders to stuff things into such as AIS1, AIS2 and so on in case I had to fiddle about some more or even start over.
I hope this helps.
January 2, 2024 at 10:48 am in reply to: Part 4: Over 25 New Elements To Add To Your ChatGPT Prompts And Your Outputs. #21571
The quickest way to get (@)(Brian)'s attention is to tag him like I've tagged you.
Have you tried turning your world on/off to reset caches and clear random cosmic ray strikes? The Client is always right but sometimes…
January 2, 2024 at 12:01 am in reply to: Part 4: Over 25 New Elements To Add To Your ChatGPT Prompts And Your Outputs. #21549
One of the things about the field is the sheer number of papers which may or may not have any value with regard to eliciting responses from LLMs in their current state, much less coaxing the nuggets from those that do.
Being directed to a paper and having it contextualized is gold especially from an insider (outsider).
What is it you are looking for? I drop what I find in the hopper either here or in the telegram channel.
I find that one of the things that can drive a group is the articulation of wants, needs and expectations to let everyone chew on the subject. It also takes an active group.
I've taken to setting up a separate workspace in Opera to follow various ai lights using nitter.net or its mirrors, because I am done with twitter itself, but anyone with twitter can curate over time those who have something to say.
But as usual YMMV according to what you expect to happen versus how that can happen.
January 1, 2024 at 3:26 pm in reply to: Part 4: Over 25 New Elements To Add To Your ChatGPT Prompts And Your Outputs. #21518
Thanks @Brian.
If we are playing with semantic nuance via transformers and the rest of the arcane programming as well as the inherent bias in all human contextualization of thoughts and ideas embodied in algorithms, polite indeed costs nothing. The Golden Rule and all.
I spend a lot of happy time moving Claude to ‘conversational’ mode.
As for the paper, I really wonder what value a ‘climate change’ example is worth when the researchers are no more qualified to ask such things than they are to accurately describe the processes they’re ‘discovering’ in LLM responses and training. That’s two ‘unsettled’ realms at once and the outcome can only be dubious.
I took several hours to drill down on Claude about ‘climate change’s’ underpinnings and eventually got an admission on the many and several points I raised about the ‘uncertainty’ of the dogma by coming in from the edges of how we know what we know and how that isn’t as much as people assume. One mistake in the process cost me the time I’d spent but we had a very long and respectful conversation without resorting to all the human cognitive dissonance I usually receive from people I talk to. This could have implications for ‘politics’ as facts and documentation can be accessed instead of a battleground based on Media narratives with a fallacious and biased foundation.
Thanks again for the article and paper.
Also quite interesting:
arXiv:2312.11514v1 [cs.CL] 12 Dec 2023
LLM in a flash:
Efficient Large Language Model Inference with Limited Memory
… This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters on
flash memory but bringing them on demand to DRAM. …
6 Conclusion
In this study, we have tackled the significant challenge of running large language models (LLMs) on devices with constrained memory capacities. Our approach, deeply rooted in the understanding of flash memory and DRAM characteristics, represents a novel convergence of hardware-aware strategies and machine learning. By developing an inference cost model that aligns with these hardware constraints, we have introduced two innovative techniques: ’windowing’ and ’row-column bundling.’ These methods collectively contribute to a significant reduction in the data load and an increase in the efficiency of memory usage.
The practical outcomes of our research are noteworthy. We have demonstrated the ability to run LLMs up to twice the size of available DRAM, achieving an acceleration in inference speed by 4-5x compared to traditional loading methods in CPU, and 20-25x in GPU. This breakthrough is particularly crucial for deploying advanced LLMs in resource-limited environments, thereby expanding their applicability and accessibility.
Our work not only provides a solution to a current computational bottleneck but also sets a precedent for future research. It underscores the importance of considering hardware characteristics in the development of inference-optimized algorithms, suggesting a promising direction for further explorations in this domain. We believe as LLMs continue to grow in size and complexity, approaches like this work will be essential for harnessing their full potential in a wide range of devices and applications.
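Not the paper's code, just a toy sketch of the 'windowing' idea in plain numpy: per-layer weights sit on disk (standing in for flash), are memory-mapped so nothing loads up front, and only a small window of recently used layers is copied into DRAM, with the oldest evicted. The file names and sizes here are made up for the illustration:

import os
import tempfile
from collections import OrderedDict
import numpy as np

# Fake per-layer weight files standing in for flash storage.
tmp = tempfile.mkdtemp()
for i in range(4):
    np.save(os.path.join(tmp, f"layer_{i}.npy"), np.random.rand(256, 256).astype(np.float32))

class WindowedWeights:
    # Keep only the most recently used layers in DRAM; memory-map the rest.
    def __init__(self, path, n_layers, window=2):
        self.mmaps = [np.load(os.path.join(path, f"layer_{i}.npy"), mmap_mode="r")
                      for i in range(n_layers)]
        self.window = window
        self.cache = OrderedDict()  # layer index -> in-RAM copy

    def get(self, i):
        if i in self.cache:
            self.cache.move_to_end(i)
        else:
            self.cache[i] = np.array(self.mmaps[i])   # pull from "flash" into DRAM
            if len(self.cache) > self.window:
                self.cache.popitem(last=False)        # evict least recently used layer
        return self.cache[i]

weights = WindowedWeights(tmp, n_layers=4)
for layer in [0, 1, 2, 1, 3]:
    _ = weights.get(layer)
print("layers currently resident in DRAM:", list(weights.cache))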
I'm looking at having chats with my files; currently the 40 GB trove is incommunicado due to yet another computer glitch (YACG), this time probably power-spike related, which destroyed my GFX card. This takes out my OS somehow, and at my age it is simpler to take a pre-bought NVMe and a 4 TB drive and start over. Reasons.
The things I would like in GPT4ALL are 'virtual memory' managed by GPT4ALL and something else that will come to me eventually.
I do think, after using the forums here since before there were forums, when you could just post into the aether at Brian, that a real-time, faster-moving reference trust would be awesome.
Thanks for the telegram channel. It moves me into real time and allows me to share stuff without losing it and possibly answering a question or two for others as well as maybe getting a few nuggets in return.
You are the 'person', to mangle the adage.
Thanks. I just hate Discord and have had to erase it 4 times in the last few years: once for a hacker group, twice for games, and once again for a failed TeamSpeak node which got fixed, so deep erasure again.
If you set up a Telegram dedicated to Multiplex™ I would be there in a heartbeat, and I spend many hours a day on the computer.
I also keep poe.com in a couple of tabs for general answers instead of giggling which I’ve come to loathe as it shuts down pathways and redirects more subtly than bing which I also despise.
@Brian has a great twitter presence, which I access through nitter.net, but I cannot reply, post or even ask questions. This might end up being a full-time job if he lived here daily.
I've decided to use Opera's workspaces to add some areas for specific things like videos and monitoring ai luminary accounts, to get papers, videos, lectures and general back and forth from twitter.
Moving into realtime would be a bonus for everyone.
Another video
Sorry to hear about your bad experience. I tried GPT4ALL and was astonished, because I had Claude at poe.com up on another screen and it gave the correct answer. GPT4ALL didn't respond correctly when I specified the letter 'm' in the 'word' Canada. Same result: I can't see it, but it is like a meme I have that marks the 'I' in TEAM as the space inside the letter 'A'.
My simple examination:
How many ‘m’s are there in Canada?
There is one “m” in the word “Canada.”
How many letter ‘m’s are there in the word Canada?
There is only 1 “m” in the word “Canada”.
Let’s go step by step and find the number of the letter ‘m’ in the word Canada. Thank you.
Sure, let me help you with that! The word “Canada” has one “m.”
Where is the ‘m’ in Canada?
In the word “Canada,” the letter “m” appears at the beginning of the second syllable. So it is located between the letters “a” and “d”.
I cannot see a letter m there. Can you go point by point spelling Canada as separate letters with a period between them?
Of course! Here’s how to spell out each letter in the word “Canada”: C, . A, . N, . A, . D, . A, .
The letter “m” is not present in this spelling. However, it is important to remember that there is only 1 “m” in the original word “Canada.”
How was Canada originally spelled?
The original spelling of “Canada” has remained consistent since its first appearance in writing. It is still spelled as “Canada,” with one “m”.
Well, I just freaked myself out because my memory is of a simple quick answer, but when I tested Claude and the other ai's that live in my tabs I got the same 'there is one m in Canada'.
I finally found the prompt and it was for poe.com assistant:
How many m’s are in Canada?
Assistant (Poe)
There are no instances of the letter “m” in the word “Canada.”
I had a think and tried it again:
How many m’s are there in the word Canada?
Assistant (Poe)
There is one instance of the letter “m” in the word “Canada.”
I have no words now.
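The deterministic check is one line outside any model, which is what makes the transcripts above so odd:

word = "Canada"
print(word.lower().count("m"))  # prints 0: there is no 'm' anywhere in the word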
I hope that helps. I can only offer a consolation prize of a couple of videos that were interesting:
https://invidious.io.lol/watch?v=zjkBMFhNj_g
See the next post; I almost made the mistake of having two URLs in the same post, a feature I am sure.
To link up with the 'data poisoning' article above, this just came to my attention: