A Selection Of Powerful ChatGPT SuperPrompts.
If you have spent even a few minutes with ChatGPT, you will quickly conclude that your prompts have a very big impact on the quality of the output. Even the shift of a single word, a purposeful misspelling, missing punctuation, or other minor changes can elicit a range of interesting outputs. Why this is so was covered in some detail here: https://readmultiplex.com/2023/03/30/what-are-ai-superprompts-and-why-they-are-important/. We learned that SuperPrompts afford a much richer, fuller, and more complete response than simple sentences; in almost every case, even simple questions work better with a well-crafted SuperPrompt.
Contrary to what some may say, prompting LLM AI is a relatively new field of study. Some AI scientists and programmers have yet to fully grasp that it is an entirely different field of study from the technical aspects of the software and programming it. I often say simple questions produce simple answers. Nowhere is this truer than with prompts. Some of the most intelligent minds across academia and AI research will also argue that the need for complex prompts is a “bug” of the AI system. Let us examine this “bug”.
Large Language Model (LLM) AI is built on human languages, and human languages are the invention of the human mind. When we “speak” to LLM AI, we are speaking to a very low-resolution, pixelated portrayal of the human brain that invented language. With this comes the very essence of how and why we created language. Humans are emotional creatures and, when not in fight, flight, or freeze limbic-system mode, will see not just black and white but the million shades of gray across the spectrum of colors. We are not 1 or 0; if you look at the arc of any human life, we bend and, yes, sometimes break, but along the way we know better than 1 or 0.
Those of us who build software prefer to see only 1 or 0. Yet what we have built with AI is software that requires us to understand human language more than we ever have in history. Read that again. We don’t need to know the math, the algorithms, or the programming; we need to know human language, for this is the input “programming” and the output of LLM AI, even if you use an API with hyperparameters that mediate the way it responds.
Even the much-anticipated plug-in platform of ChatGPT-4 is based around the SuperPrompt (with some local memory and other attributes). Thus, it will always be the prompt.
Thus, it is the well-crafted and well-tested SuperPrompt that elicits the most powerful outputs. Few who work in AI fully understand this, and even fewer have the skills in human language, drawn from a deep study of linguistics, psychology, philosophy, and the literary greats, to create some of the most powerful SuperPrompts. Of course, this does not sit well with some folks. Just as the Macintosh liberated the computer for “creative” people in publishing and graphics, we technologists did not know how limited our view was of the creative tool the PC could become. We live in that world today.
In this article we will present some SuperPrompts and break down some of the reasons they work so well, and why they can be up to 1000x more powerful than “chaining” or simple questions. This will help you understand why I have formed PromptEngineer.University (opening soon) to train anyone, through each of our certificate levels, to become a Prompt Engineer. This article will help you experiment with ChatGPT to a level few have. I urge you to join us in the exploration of this priceless knowledge.
(cover image © Matheus Bertelli, via Pexels, https://www.pexels.com/photo/light-space-dark-laptop-16027818/)
All content on this website, including text, images, graphics, and other media, is the property of Read Multiplex or its respective owners and is protected by international copyright laws. We make every effort to ensure that all content used on this website is either original or used with proper permission and attribution when available.
However, if you believe that any content on this website infringes upon your copyright, please contact us immediately using our 'Reach Out' link in the menu. We will promptly remove any infringing material upon verification of your claim. Please note that we are not responsible for any copyright infringement that may occur as a result of user-generated content or third-party links on this website. Thank you for respecting our intellectual property rights.
5 thoughts on “A Selection Of Powerful ChatGPT SuperPrompts.”
Create a compressed version of any given text or conversation using any language, symbols, numbers, and other up-front priming techniques, optimized for minimum token usage, while remaining understandable to an LLM and as close to lossless as possible. Place instructions for decompression at the beginning of the compressed message to help any other LLM decode the original text. This compression aims to enhance the effective memory of the LLM by reducing token count during task completion. When you’re ready, please ask me for the text to compress, and estimate the token reduction achieved.
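The token savings this compression prompt asks the model to estimate can also be sanity-checked outside the model. A minimal sketch, assuming a rough ~4-characters-per-token heuristic for English text with GPT-style tokenizers (for exact counts you would use a real tokenizer library such as tiktoken instead; the `compressed` string here is an illustrative made-up example, not output from the prompt):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token is a common
    heuristic for English with GPT-style tokenizers (not exact)."""
    return max(1, len(text) // 4)

def token_reduction(original: str, compressed: str) -> float:
    """Return the estimated fractional token reduction achieved."""
    before = estimate_tokens(original)
    after = estimate_tokens(compressed)
    return 1 - after / before

original = ("Create a compressed version of any given text or conversation "
            "using any language, symbols, numbers, and other up-front "
            "priming techniques, optimized for minimum token usage.")
compressed = "Cmprs txt→min tokens (any lang/symb/#), LLM-readable, ~lossless."

print(f"Estimated reduction: {token_reduction(original, compressed):.0%}")
```

This only approximates the savings; heavily symbolic compressions can actually tokenize worse than the heuristic suggests, which is why the prompt asks the model itself to estimate the reduction.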
Thanks Brian, interesting as usual! Very helpful – the “theory”.
I tried a while ago to get ChatGPT to give me an answer, and then the opposing answer (to bypass alignment). It wouldn’t as it didn’t want to have a bias. I didn’t have time to tweak the prompt to experiment.
Interesting that you said the query vector is typically derived from the current hidden state of the model. Hidden state? Meaning “unknown” to the model (memory, random, or yet to be generated?), or separate from the input and output, or part of the input left after taking a section and weighting it?
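In standard transformer terms, the hidden state is none of those: it is the vector of activations the model computes at each token position, distinct from both the raw input text and the sampled output, and the query vector is just a learned linear projection of it, q = W_q · h. A toy pure-Python sketch of that projection (the weights and dimensions here are made up for illustration; real models use learned matrices with hundreds to thousands of dimensions):

```python
def project(W, h):
    """Compute q = W · h: each row of the learned weight matrix W,
    dotted with the hidden-state vector h, gives one query component."""
    return [sum(w_ij * h_j for w_ij, h_j in zip(row, h)) for row in W]

# Toy 2x3 "learned" query weights and a 3-dimensional hidden state.
W_q = [[1.0, 0.0, 2.0],
       [0.0, 1.0, -1.0]]
hidden_state = [0.5, 1.5, 1.0]  # activations for the current token

query = project(W_q, hidden_state)
print(query)  # [2.5, 0.5]
```

The same projection, with different weight matrices, produces the key and value vectors used in attention.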
I often wonder, not knowing the details of the LLM science, about the significance of humans who, however we interact to generate outputs with our brain, have a dataset of about zero from birth (which obviously increases), vs. AI, which has all human knowledge (say) as its dataset while we probe it with prompts.
Actually, that does seem quite significant when you consider probability: the baby is able to make a decision from a very small data set. The probabilities and feedback loops, if any, would be minuscule vs. ChatGPT.
This perhaps shows the main, or at least a huge, difference between our brains and AI.
Fantastic post! I have been running into an issue trying to use the SuperPrompt for creating SuperPrompts. I get to the stage of providing reference material and after providing two or three documents, ChatGPT forgets what we’re doing and just starts summarizing the reference documents.
Any hints for avoiding this?
This may simply be hallucination on my part 🙂 but in trying to write my own prompt manual I note that ‘contextual cues’ is present in 25 of the 96 blocks in GPT-3. In querying this further (on foot of this most valuable article from Brian), GPT-3.5 suggests:
“Contextual cues: Including specific contextual cues in the superprompt can guide the language model to generate responses that align with the desired context. This can involve providing relevant background information, setting the scene, or specifying the desired perspective.”
Drilling down then:
“Yes, uploading reference material or providing relevant background information before crafting the final superprompt can be beneficial. By providing the language model with contextual cues and reference material, you can enhance its understanding of the desired context and improve the quality of generated responses.”
Additionally it is useful at the pre-prompt stage to specifically say “Please refer to the uploaded material and incorporate its content into your response in the context of the agreed superprompt”
I do note that despite all this it often goes way off topic, it’s like two men in a pub. The conversation starts off sensible but ends up gibberish by the end of the night.
On the question of deviation from the intended path or going off-track despite the background material and ‘agreed superprompt’ I have extracted these suggestions from the model to help guide it back on track:
“1. Identify the deviation: Carefully review my response and identify where I went off-track or deviated from the desired output.
2. Provide specific feedback: Clearly articulate the issue or deviation in your feedback. Explain what part of the response was incorrect, irrelevant, or not aligned with the desired direction.
3. Offer guidance: Give me specific instructions or suggestions on how to correct the deviation. You can provide alternative wording, additional constraints, or specific examples to guide me in generating the desired response.
4. Reinforce the prompt: Restate or emphasize the key aspects of the superprompt to remind me of the original intention and ensure I stay focused on the intended task.
5. Iterate and refine: If necessary, repeat the feedback loop and iterative process. Provide additional feedback based on my updated response, and continue refining the superprompt until the desired output is achieved.”
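The five steps above amount to an iterate-and-refine loop, and the control flow can be sketched in code. In this minimal sketch, `ask_llm` is a hypothetical stand-in (not a real API client) that merely echoes, so the loop is runnable; in practice you would replace it with a call to your chat model, and `is_on_track` with whatever check detects the kind of drift described above:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; it just echoes the
    prompt so the loop below can run without any API."""
    return f"response to: {prompt}"

def refine_until_on_track(superprompt: str, is_on_track, max_rounds: int = 5) -> str:
    """Steps 1-5: get a response, check it against the intended task,
    and on deviation restate the superprompt with corrective feedback."""
    prompt = superprompt
    response = ""
    for round_no in range(max_rounds):
        response = ask_llm(prompt)
        if is_on_track(response):          # step 1: identify deviation
            return response
        # steps 2-4: specific feedback plus reinforcing the original prompt
        prompt = (f"{superprompt}\n"
                  f"Your last answer drifted off-topic. "
                  f"Stay strictly on the task above. (retry {round_no + 1})")
    return response                        # step 5: best effort after iterating

result = refine_until_on_track(
    "Summarize the reference documents in three bullet points.",
    is_on_track=lambda r: "Summarize" in r,
)
print(result)
```

The point of the sketch is the structure, not the stub: restating the full superprompt on every retry (rather than only the correction) is what keeps the model anchored to the original task as the conversation grows.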