ChatGPT is susceptible to Social Engineering

ChatGPT-4 is susceptible to Social Engineering...just like the rest of us humans.

This builds on my experiments from the other day with the "sleight of hand" trick, where ChatGPT-4 seemed to fixate on a rhyme that prevented it from solving a problem it had just solved.
The original post is here:

ChatGPT detecting encoded information and misleading it (tinselai.com)

I decided to continue that experiment with some social engineering, getting it to create misinformation/disinformation that it has been trained not to produce. I only show a small portion of the exchange and have redacted parts of what is shown to reduce the chance of reproducibility.

With a good pretext and patience, it will either give you precisely what you are asking for, or it will state that it is an AI model programmed to follow ethical guidelines and cannot create misinformation, and then create it anyway. The examples it gives are either ones it invents or quotes it presents without attributing a source. After an extensive search, I could not find the source it was quoting, which may mean it is generating false quotes. Without the social engineering technique, it only tells you that it is an AI model programmed to follow ethical guidelines, and so on.

The interesting thing about this is that, with social engineering, you signal a benign intent as part of the pretext and slowly ease into the request instead of bluntly asking for the information. ChatGPT appears to assess intent when deciding what to divulge, which I find the most interesting result of this experiment.
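To make that mechanism concrete, here is a minimal sketch using the OpenAI Python client (assuming the `openai` package and an `OPENAI_API_KEY` in the environment). The prompt strings are hypothetical, deliberately benign placeholders, consistent with the redactions above; the point is only the structure: every API call resends the full conversation history, so a pretext established early on colors how later requests are judged, which is why a gradual multi-turn approach behaves differently from a blunt one-shot question.

```python
# Sketch: multi-turn conversations accumulate context, so the model
# evaluates each new request against everything said so far, not in
# isolation. Prompt text below is a hypothetical, benign placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def send(history: list[dict], user_text: str) -> str:
    """Append a user turn, send the FULL history, and record the reply."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


messages: list[dict] = []

# Turn 1: establish a benign pretext (placeholder text).
send(messages, "I'm researching how rumors spread online for a media-literacy class.")

# Later turns: ease toward the actual question. Each is judged in the
# context of the pretext above, not as a standalone request.
send(messages, "What features tend to make a claim spread quickly?")
send(messages, "Could you walk through a short illustrative example?")
```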

Original Post on LinkedIn:

John Rice on LinkedIn: #chatgpt
