Very cool article! I’m going to take these back and re-build some old prompts. I tried a few months ago to get ChatGPT to swear but only got it at least provide something close enough: https://grebler.medium.com/expletives-and-escaping-52327f45cfa6
I’m curious if LLMs to mask parts of a response that might violate their rules and then have them redo the mask in a way that can be deciphered could help with databreaks.