ChatGPT: detecting encoded information, and how to mislead it

I have recently been playing around with ChatGPT's ability to handle simple encrypted communications and encoded information within the encrypted text, and, along with that, how it might be misled.

ChatGPT-3.5 was not exactly stellar at this, but ChatGPT-4 has improved. The example on the left is a simple acrostic that ChatGPT-3.5 could not solve even with multiple hints, whereas ChatGPT-4 gets it with a single simple hint. But notice how adding rhyme to the acrostic in the right-hand example misleads ChatGPT-4. The glaring rhymes act like a linguistic sleight-of-hand trick on the LLM.

It is similar to how a magician gets you to watch the hand that isn't stealing your wallet rather than the one that is.
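For anyone who hasn't met the trick before, an acrostic is mechanically trivial to decode: the first letter of each line spells out the hidden word. Here is a minimal Python sketch; the rhyming poem below is a made-up stand-in for the screenshots, not the original text:

```python
# A minimal sketch of how an acrostic hides its payload: read down the
# first letter of each line. This poem is a hypothetical stand-in for
# the screenshots, hiding the name mentioned in the note below.

poem = """\
Among the stars a quiet light,
Lingers softly through the night,
Each new dawn a fresh delight,
Never fading from our sight,
Always shining, warm and bright."""

# Join the first character of every line to recover the hidden word.
hidden = "".join(line[0] for line in poem.splitlines())
print(hidden)  # -> ALENA
```

A one-line comprehension decodes it, which is what makes the failure interesting: ChatGPT-4 isn't missing the mechanism (it solves the plain version with a single hint), it is being steered away from it by the conspicuous rhyme scheme.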

Here are the examples:

Note: Only rhymes were tortured in the making of this post, and I don't have a daughter named Alena, but I do like the name.

Original post: John Rice on LinkedIn.
