Prompt Engineering Basics

Today I am going to cover some basics of Prompt Engineering. These are my best practices; a few people have tried to establish patterns, but there are no definitive best practices to date. That is what makes this a lot of fun: once patterns and practices settle, they will drive consistent behaviors, and in the meantime I enjoy finding new and exciting emergent behaviors.

So today, I gave my grumpy AI Assistant the day off. It is good fun to build an assistant with a serious personality. On with the show!

Tune Your Prompt On Lower Tiers:
This may be obvious to some, but a best practice I have developed is iterating on my prompts in lower-cost tiers of GPT, or with the free tier of ChatGPT (GPT-3), before moving to a more costly tier. Through trial and error I have found that, if you stay within the OpenAI GPT family, tuning your prompt on GPT-3 and then running the final version on GPT-4 gives nearly the same level of improvement as tuning the prompt exclusively on GPT-4.
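The workflow above can be sketched in a few lines of Python. This is illustrative only: `run_model` is a stand-in for a real API call, and the model names, draft prompts, and scoring function are all placeholder assumptions, not a real evaluation harness.

```python
# Sketch of the "tune on the cheap tier, run on the expensive tier" loop.
# run_model is a stand-in for an actual completion call (e.g. to OpenAI's API);
# everything here is illustrative, not a working client.

def run_model(model: str, prompt: str) -> str:
    """Placeholder for a real API call returning the model's response."""
    return f"[{model}] response to: {prompt}"

def tune_prompt(draft_prompts, cheap_model="gpt-3.5-turbo", score=len):
    """Iterate drafts on the cheap tier and keep the best-scoring one.
    `score` is a hypothetical quality metric; here it is just length."""
    return max(draft_prompts, key=lambda p: score(run_model(cheap_model, p)))

drafts = [
    "Summarize this.",
    "Summarize this in three bullet points for a beginner.",
]
final_prompt = tune_prompt(drafts)

# Only the winning prompt is ever sent to the expensive tier:
answer = run_model("gpt-4", final_prompt)
```

The point of the structure is that all the trial-and-error traffic hits the cheap model; the expensive model only ever sees the finished prompt.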

I experiment a lot! You can look back on over a month's worth of my nearly daily posts showing the experiments that were of value or gave "interesting" results. I didn't show all of the nonsense I put into ChatGPT, nor the nonsense it returned. Garbage in, garbage out! I have always learned the most from breaking things; that is why I always liked security.

Task Type:
There isn't a one-size-fits-all approach to creating an effective prompt. The prompt I build to create my mercurial AI Assistant has an entirely different structure and methodology than one I would build to solve a problem or write a program. I will give some examples of this in later posts, but the point here is to experiment, experiment some more, and did I mention experiment?
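To make the contrast concrete, here are two hypothetical prompt skeletons: one for a persona-style assistant and one for problem solving. The wording is mine, invented for illustration; the point is only that the two tasks call for different structures.

```python
# Illustrative prompt skeletons (both invented for this example):
# a persona prompt leads with character and behavioral constraints,
# while a problem-solving prompt leads with procedure and output format.

persona_prompt = """You are a grumpy assistant who answers reluctantly.
Stay in character. Never break the fourth wall.
User: {question}"""

problem_prompt = """Solve the following step by step.
Show your reasoning, then state the final answer on its own line.
Problem: {problem}"""

print(persona_prompt.format(question="What's the weather?"))
print(problem_prompt.format(problem="What is 17 * 23?"))
```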

Other Posts On Understanding Prompt Engineering:
Temperature, also known as "Why you get funky results sometimes": What is the temperature of ChatGPT?
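As a quick taste of why temperature produces "funky results": sampling temperature rescales the model's logits before they are turned into probabilities. A minimal sketch of that math, with made-up logit values:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; lower temperature sharpens the
    distribution, higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # arbitrary example scores for three tokens
cool = softmax(logits, temperature=0.2)  # near-deterministic: top token dominates
warm = softmax(logits, temperature=1.5)  # flatter: more variety, more "funk"
```

At low temperature the top-scoring token takes nearly all of the probability mass; at high temperature the alternatives stay live, which is where the surprising outputs come from.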
