## What you will learn
- Apply prompt engineering techniques that reduce hallucination risk
- Use grounding, source constraints, and confidence calibration in prompts
- Recognize the warning signs that an AI output may contain hallucinations
- Compare retrieval-augmented generation (RAG) vs. raw generation for accuracy
# How to Spot and Prevent Hallucinations
Now that you understand what hallucinations are and why they happen, this lesson focuses on actionable techniques you can use immediately — in any AI tool — to reduce hallucination risk and catch errors before they matter.
## Part 1: Prevention Through Better Prompts
### Technique 1: Ground the Model with Source Material
The single most effective way to reduce hallucinations is to give the AI the information it needs rather than asking it to generate from memory.
High hallucination risk:

> "Summarize the key findings of the 2024 WHO report on global health."

The model may not have this report in its training data, so it will likely fabricate plausible-sounding findings rather than decline.
Low hallucination risk:

> "Here is the executive summary of the 2024 WHO report on global health: [paste text]. Summarize the key findings based ONLY on the text I provided. Do not add information from other sources."
When you provide the source text, the model works as a text transformer rather than a knowledge generator. This dramatically reduces fabrication.
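If you call a model programmatically, this grounding pattern is easy to factor into a small helper. The sketch below only builds the prompt string; the template wording is an illustrative assumption, so adapt it to your tool and model.

```python
def grounded_prompt(source_text: str, task: str) -> str:
    """Build a prompt that constrains the model to provided source material.

    NOTE: the template wording is an assumption for illustration,
    not a canonical phrasing; tune it for your model and tool.
    """
    return (
        f"Here is the source material:\n\n{source_text}\n\n"
        f"{task} Base your answer ONLY on the text I provided. "
        "Do not add information from other sources."
    )

# Example: the source text here is invented placeholder content.
prompt = grounded_prompt(
    "Executive summary: [paste the report text here].",
    "Summarize the key findings.",
)
```

Keeping the constraint sentence in one place means every prompt you send carries the same source restriction, instead of relying on whoever writes the prompt to remember it.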
### Technique 2: Explicitly Permit "I Don't Know"
Models are trained to be helpful, which makes them reluctant to admit uncertainty. Override this by explicitly giving permission:
> "Answer the following question based on your training data. If you are not confident in the answer, say 'I'm not sure about this — please verify.' Do not guess or fabricate information."
This simple instruction reduces confident-sounding fabrications in practice, because it removes the pressure on the model to produce an answer at any cost.
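The same escape-hatch wording can be appended automatically to any question. As with the grounding helper, this is a minimal sketch and the exact phrasing is an assumption to tune per model:

```python
def with_uncertainty_permission(question: str) -> str:
    """Append an explicit 'I don't know' escape hatch to a question.

    NOTE: the suffix wording is illustrative, not a canonical phrasing;
    adjust it to whatever your model responds to best.
    """
    return (
        f"{question}\n\n"
        "If you are not confident in the answer, say "
        "'I'm not sure about this, please verify.' "
        "Do not guess or fabricate information."
    )

# Usage: wrap any factual question before sending it to the model.
wrapped = with_uncertainty_permission(
    "What were the key findings of the 2024 WHO report on global health?"
)
```

Pairing this with Technique 1 works well: ground the model in source text when you have it, and permit uncertainty when you don't.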