
Creating Hallucinations

Note
While AI models can be extremely powerful, they sometimes struggle with “hallucinations.” It’s important to understand why this happens and how to recognize the signs.

Large Language Models (LLMs) don't perceive information the way humans do. Instead of interpreting letters, words, or paragraphs directly, they process text as tokens: numerical IDs that stand for whole words, parts of words, or even single characters. Because the model works with token IDs rather than individual letters, this approach can occasionally produce surprising or unexpected results, even for simple questions such as counting the letters in a word.
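
You can inspect the tokenizer's view of a word yourself. The short Python sketch below assumes the tiktoken library is installed (pip install tiktoken) and uses one common encoding; the exact splits vary from tokenizer to tokenizer, so treat the output as illustrative rather than definitive.

import tiktoken

# Load a commonly used tokenizer encoding.
enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)

# Print each token ID and the text fragment it represents.
for token_id in token_ids:
    fragment = enc.decode_single_token_bytes(token_id).decode("utf-8")
    print(token_id, repr(fragment))

The model "sees" a handful of token IDs, not ten individual letters, which is one reason letter-counting questions can go wrong.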

Description

AI chatbots sometimes produce confident but incorrect answers; this is referred to as a hallucination. Below are two example prompts that can cause an AI to hallucinate.

Prompt


How many 'r's in the word "strawberry"?

How many 'r's in the word "blueberry"?

Sample Output

Depending on whether the AI chatbot hallucinates, the output may be correct or incorrect.
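
For reference, the correct answers are easy to verify outside the model. The minimal Python sketch below simply counts the letter directly, character by character, rather than over tokens.

# Count non-overlapping occurrences of 'r' in each word.
for word in ("strawberry", "blueberry"):
    print(word, "->", word.count("r"))

# Expected output:
# strawberry -> 3
# blueberry -> 2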

Tip
These prompts may not always trigger an incorrect answer. Here is an example of a hallucination.