It’s easy to teach chatGPT about your company.
In 5 minutes you can teach chatGPT specific facts about your company — then it will accurately answer complex questions about them.
I’ll explain the process and show you step-by-step how to do it.
An important fact to remember: even though ChatGPT seems smart and confident, Large Language Models (LLMs) like GPT3, chatGPT, and Google's Bard don't think. They are not sentient, and they don't understand the meaning behind the phrases they create.
LLMs are complex mathematical models of the relationships between words and the importance of particular words in any given text. Most modern LLMs (even chatGPT) build on Google's transformer architecture.
LLMs are great at stringing lots of related words together in ways that can make them seem smarter than your typical CTO at a cocktail party.
With a little understanding of how they work, we can do some pretty cool things. Let’s get started. (If you’d like to follow along, head over to https://chat.openai.com/chat.)
ChatGPT is a customized version of OpenAI's GPT3 LLM. The OpenAI team is frantically tuning their chatGPT product to make sure its answers are more reliable than the base GPT3, so we will use GPT3 to demonstrate the initial key concepts and then move to chatGPT for the final magic.
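If you'd rather follow along in code than in the web interface, here is a minimal sketch using OpenAI's Python library (the pre-1.0 `openai.Completion` interface). The model name `text-davinci-003` and the parameter values are my assumptions for illustration, not something this walkthrough prescribes:

```python
import os
import openai

# Assumes your API key is set in the OPENAI_API_KEY environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Send a plain prompt to GPT3 and print its completion.
response = openai.Completion.create(
    model="text-davinci-003",  # assumed GPT3 model, for illustration
    prompt="Explain what a transformer is in one sentence.",
    max_tokens=150,
)
print(response.choices[0].text.strip())
```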
- 🍄 Hallucination — When an LLM confidently delivers a dead-wrong answer, we call it hallucination. Let's take a look at an example of hallucination in OpenAI's GPT3 model with the following prompt / response sequence.
INPUT GIVEN to GPT3:
“What is a foofster?”
GPT3 RESPONSE:
“A Foofster is a type of pet that is a cross between a poodle and a terrier. It is a small, fluffy dog that is usually white or cream in color. They have a curly coat and often have a little bit of a curly tail. Foofsters are known for being friendly, intelligent, and obedient. They make excellent companion pets…”
WHAT JUST HAPPENED:
The model had no idea what a foofster (it's a made-up word) actually is, so it ‘invented’ a definition. This type of response has been corrected in the chatGPT model, possibly using “prompt engineering”, which we will talk about now.
- ❓ Prompt Engineering — Creative ways to phrase our questions (define a prompt) to get more useful, accurate answers. Let's add some more direction to our prompt.
INPUT GIVEN to GPT3:
“Please describe a foofster. If you are unsure of the answer say ‘I don’t know’”
GPT3 RESPONSE:
“I don’t know”
WHAT JUST HAPPENED:
Wow. Even though that response feels sentient, it's really just the model following the extra instruction we embedded in our prompt. We can use prompt engineering for lots of powerful things 👩🔬, as the sketch below shows.
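If you are following along via the API, that guard is easy to wrap into a helper. This is a minimal sketch continuing the setup above; the function name `safe_prompt` and the `temperature=0` setting (which makes the output more deterministic) are my choices for illustration:

```python
def safe_prompt(question: str) -> str:
    """Wrap a question with an instruction that gives the model
    permission to admit ignorance instead of inventing an answer."""
    return question + " If you are unsure of the answer say 'I don't know'."

response = openai.Completion.create(
    model="text-davinci-003",  # assumed, as before
    prompt=safe_prompt("Please describe a foofster."),
    max_tokens=150,
    temperature=0,  # low temperature keeps the answer consistent
)
print(response.choices[0].text.strip())  # expect: "I don't know"
```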
One of these powerful things is called context, and we will use it to ‘teach’ chatGPT new things.
- 📰 Context — Context is a way to give GPT3 specific, targeted information to use when it answers our question. GPT3 treats information supplied in the prompt as highly relevant. Let's take a look. Our next prompt is going to provide the LLM with some additional, helpful information.
INPUT GIVEN to GPT3 (or chatGPT if you are following along):
“Please describe a foofster using the context below. If you are unsure of the answer say ‘I don’t know’.
Context: A foofster is a medium-sized bird that is native to the Canadian Island of Victoria. Its natural diet consists of pine tree bark and small fish; however, it prefers to drink the maple syrup off the waffles of unsuspecting tourists. Male foofsters are bright blue in color while female foofsters have yellow iridescent feathers and long, beautiful necks. Females can lay up to 20 eggs a year, but will happily trade them for sips of maple syrup. This latter practice has decimated their population and landed the foofster on the endangered species list.
Q: What does a foofster eat?
A:
GPT3 RESPONSE:
“A foofster’s natural diet consists of pine tree bark and small fish, but it prefers to drink the maple syrup off waffles of unsuspecting tourists.”
INPUT GIVEN:
“Is the foofster endangered?”
GPT3 RESPONSE:
“the foofster is on the endangered species list due to its preference for drinking maple syrup from tourists’ waffles. This practice has decimated the population of foofsters.”
INPUT GIVEN:
“Why is the foofster endangered?”
GPT3 RESPONSE:
“the foofster is endangered because it has a preference for drinking maple syrup from tourists’ waffles, which has led to a decline in its population. The practice of trading eggs for sips of maple syrup has contributed to the decline in the foofster population.”
WHAT JUST HAPPENED:
Like a reading-comprehension test question, chatGPT took the new information provided in our context and used it to answer our questions. Now imagine giving it context from a company brochure, or text from a bunch of web pages. You might even hook it up to your website's chat button 💬 (a sketch of that appears just before the conclusion). It's really that simple.
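Here is what that context trick looks like as a reusable helper, continuing the API sketches above. One detail worth knowing: the raw Completion endpoint has no conversation memory, so unlike the chatGPT web interface, we resend the context with every question. The helper name and parameters are mine, for illustration:

```python
FOOFSTER_CONTEXT = (
    "A foofster is a medium-sized bird that is native to the Canadian "
    "Island of Victoria. Its natural diet consists of pine tree bark "
    "and small fish..."  # full context text from the prompt above
)

def answer_with_context(context: str, question: str) -> str:
    """Stuff the context into the prompt and ask the model to answer
    from it, falling back to 'I don't know'."""
    prompt = (
        "Please answer the question using the context below. "
        "If you are unsure of the answer say 'I don't know'.\n\n"
        f"Context: {context}\n\nQ: {question}\nA:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed, as before
        prompt=prompt,
        max_tokens=200,
        temperature=0,
    )
    return response.choices[0].text.strip()

# The context rides along with every question we ask.
for q in ["What does a foofster eat?", "Is the foofster endangered?"]:
    print(q, "->", answer_with_context(FOOFSTER_CONTEXT, q))
```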
- 🏗️ One last cool thing about GPT3 and other LLMs is their ability to combine the relationships of words from your prompt with their training data. Let's ask a question where the model needs to integrate relationships between the context we give it and its general training data.
INPUT GIVEN:
“Q: Does a foofster live in British Columbia?”
GPT3 RESPONSE:
“A: Yes, the foofster is native to the Canadian Island of Victoria, which is located in British Columbia.”
WHAT JUST HAPPENED:
GPT3 did a nice job of mapping relationships between (1) the words in our question, (2) the words in the context we provided, and (3) its existing base of training material.
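To close the loop on the chat-button idea from earlier, below is a minimal sketch of wiring this up behind a web endpoint. Everything here is hypothetical scaffolding: the Flask framework choice, the `/chat` route, the `company_brochure.txt` file, and the model name are all my assumptions, not part of this walkthrough:

```python
import os
import openai
from flask import Flask, request, jsonify

openai.api_key = os.environ["OPENAI_API_KEY"]
app = Flask(__name__)

# Hypothetical source of company facts: a brochure dumped to a text file.
COMPANY_CONTEXT = open("company_brochure.txt").read()

PROMPT_TEMPLATE = (
    "Please answer the question using the context below. "
    "If you are unsure of the answer say 'I don't know'.\n\n"
    "Context: {context}\n\nQ: {question}\nA:"
)

@app.route("/chat", methods=["POST"])
def chat():
    # The website's chat widget POSTs {"question": "..."} here.
    question = request.json["question"]
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed, as before
        prompt=PROMPT_TEMPLATE.format(
            context=COMPANY_CONTEXT, question=question
        ),
        max_tokens=200,
        temperature=0,
    )
    return jsonify({"answer": response.choices[0].text.strip()})

if __name__ == "__main__":
    app.run(port=5000)
```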
In conclusion, it is very easy to “train” chatGPT on any specific information you care about. Products, wedding plans, or typically ignored policies and procedures can get new attention and life with chatGPT and creative prompt engineering. I suspect Bing is using a similar approach for its AI-powered search, and Google is not far behind. Pretty cool stuff.