Taming AI Hallucinations: A Guide for Instructional Designers

“It sounded so confident… but it was totally wrong.”


If you’ve used ChatGPT or another AI chatbot, you’ve likely seen it confidently provide incorrect information. That’s an AI hallucination—when the model invents facts or distorts details.

As instructional designers, we’re increasingly using large language models (LLMs) for course design and content generation (see Using AI to Create Branching Scenarios or ChatGPT and Bloom’s Taxonomy). So, how do we make the most of these tools while keeping hallucinations in check?

Why Do LLMs Hallucinate?

Predictive Engines, Not Fact Banks

LLMs like ChatGPT generate text by predicting the next word based on patterns in their training data. If that data is incomplete or contradictory, the model may produce plausible but incorrect answers.

Example: Asking “How does D2L detect plagiarism in Brightspace?” might lead to invented tools or features that sound right but don’t exist.
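To make the idea concrete, here is a toy sketch in Python. This is not how a real LLM is implemented; it only illustrates that the model picks a statistically plausible next word rather than checking a fact.

    import random

    # Toy illustration only: a real LLM learns these probabilities from vast
    # amounts of text, but the principle is the same. It continues the text
    # plausibly; it does not look anything up.
    next_word_probabilities = {
        ("Brightspace", "detects"): {"plagiarism": 0.6, "errors": 0.3, "changes": 0.1},
    }

    context = ("Brightspace", "detects")
    words, weights = zip(*next_word_probabilities[context].items())

    # The most likely continuation ("plagiarism") is also the one that produces
    # a confident-sounding but unsupported claim.
    print(random.choices(words, weights=weights)[0])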

Outdated or Missing Real-Time Info

Unless connected to live data, the model can’t reflect current policies or events. It doesn’t “know” facts; it mirrors its training data.

Example: A prompt like “What are D2L’s AI policies for 2025?” may trigger outdated responses based on older documents.

Overgeneralization

LLMs can blend similar patterns or topics, even when nuance matters.

Example: A question about Brightspace accessibility could lead to general web accessibility tips that don’t match the platform’s actual features.

How to Prompt Smarter

You can’t eliminate hallucinations completely, but you can reduce them significantly with better prompting. Here’s how:

Be Specific

Avoid broad prompts. The more specific your request, the better the response.

Example:

  • Vague: “Summarize D2L’s AI policy.”
  • Better: “Using D2L’s 2024 Responsible AI Guidelines, summarize how instructors should validate AI-generated assessments.”
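The same principle carries over if you prompt a model through an API rather than the chat window. Here is a minimal sketch assuming the OpenAI Python client and a placeholder model name (swap in whichever tool your institution has approved); the only difference between the two calls is how specific the prompt is.

    from openai import OpenAI

    client = OpenAI()  # assumes an API key is configured in your environment

    vague = "Summarize D2L's AI policy."
    specific = (
        "Using D2L's 2024 Responsible AI Guidelines, summarize how "
        "instructors should validate AI-generated assessments."
    )

    for prompt in (vague, specific):
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model name; use the model you have access to
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)

Comparing the two outputs side by side is a quick way to see how much a narrower prompt constrains the model.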

Ask for Sources

Encourage the AI to cite documentation. It helps ground the response in real material, but always verify the sources it gives.

Example:

  • “List your sources.”
  • “Cite only from D2L documentation.”
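If you are scripting your prompts, the citation instruction can live in a system message so it applies to every question. Another minimal sketch, under the same assumptions as above (OpenAI Python client, placeholder model name); the user question is just a hypothetical example.

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Cite only from D2L documentation. List your sources at the "
                    "end of every answer, and say so explicitly if you cannot "
                    "find a source."
                ),
            },
            # Hypothetical example question
            {"role": "user", "content": "How do instructors release final grades in Brightspace?"},
        ],
    )
    print(response.choices[0].message.content)

Keep in mind that the model can still produce plausible-looking but invented citations, so check each listed source against the real documentation.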

Use Fact-Check Prompts

Follow up with self-checks. Ask the AI to review its own output for accuracy or bias.

Example:

  • “What are the factual risks in this answer?”
  • “Cross-check with Brightspace documentation.”
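In a scripted workflow, this becomes a simple two-step chain: get a draft answer, then feed it back with a fact-check prompt. A sketch under the same assumptions (OpenAI Python client, placeholder model name, hypothetical question):

    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        # Thin helper around the chat endpoint; the model name is an assumption.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Step 1: draft answer to a hypothetical question
    draft = ask("How do instructors set quiz accommodations in Brightspace?")

    # Step 2: ask the model to audit its own draft
    review = ask(
        "What are the factual risks in this answer? Flag anything that should "
        "be cross-checked with Brightspace documentation.\n\n" + draft
    )
    print(review)

The self-review is not a guarantee, but it often surfaces the claims most worth verifying by hand.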

Limit Creativity When It Matters

In contexts like compliance or academic integrity, be explicit: no flair, just facts.

Example prompts:

  • “Stick to known facts.”
  • “Do not speculate.”
  • “Mark assumptions clearly.”
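When you have API access, you can pair these instructions with a low temperature setting, which reduces the randomness of the output (the standard chat interface does not usually expose this control). A sketch with the same assumed client and model name, and a hypothetical question:

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",   # assumed model name
        temperature=0,    # low temperature = less creative, more repeatable output
        messages=[
            {
                "role": "system",
                "content": (
                    "Stick to known facts. Do not speculate. "
                    "Mark any assumptions clearly."
                ),
            },
            # Hypothetical example question
            {
                "role": "user",
                "content": (
                    "Summarize the academic integrity settings available "
                    "for Brightspace quizzes."
                ),
            },
        ],
    )
    print(response.choices[0].message.content)

A temperature of 0 does not eliminate hallucinations, but it keeps the model from embellishing when precision matters most.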

Final Thoughts

LLMs can accelerate content creation and brainstorming, but only with thoughtful prompting. By narrowing the scope, requesting citations, and encouraging verification, we can keep hallucinations at bay and ensure instructional quality.

Want to Learn More?

Connect with your D2L Customer Success Manager or Client Sales Executive, or reach out to the D2L Sales Team to learn how Learning Services can support your instructional goals.