Hallucinations: Why AI will sometimes make things up 🙅

Imagine you've just completed a large event, and you quickly want an overview of attendee feedback. You decide to let AI do the heavy lifting.

Everything seems to run smoothly — until you read a summary praising "excellent breakout sessions."

Just one problem: there weren't any breakout sessions.

This illustrates exactly where things can go wrong with AI. Your clever assistant got just a bit too creative with the facts. In the domain of generative AI, we call this a hallucination.

What are AI hallucinations?

AI hallucinations occur when AI presents completely false information as if it's the truth. The pitfall is that AI delivers these inaccuracies with extreme confidence.

AI models, such as ChatGPT's language model, are designed to predict the next word in a sequence based on what they've previously learned. But when they lack information, they sometimes fill the gap with "plausible-sounding nonsense."

A striking example occurred in January 2023, documented by Big Data Wire, where ChatGPT confidently wrote a positive review of the infamously disastrous Fyre Festival.

Why do these errors happen?

It’s tempting to think hallucinations are just about incorrect facts, but there are actually two types of errors you can encounter: factual errors and instruction errors.

Factual errors occur when the AI mixes up information or provides incorrect data. Suppose you ask who the first person on the moon was, and the AI answers "Yuri Gagarin." Although this might sound plausible, the correct answer is Neil Armstrong. Sometimes the AI even invents completely new information. For instance, if you ask for a summary of a fictional scientific article, you’ll get a convincing but completely made-up story.

Instruction errors happen when the AI doesn't correctly follow your instructions. For example, you ask it to translate an English question into Spanish, but it answers in English. These errors are often subtler: the AI generally follows your instructions and uses correct data, but quietly acts on assumptions that weren't in your original prompt. Often these unspoken additions improve the output, but sometimes they do the opposite.

Another reason AI hallucinates is that it's only trained on data up to a certain point in time. If you ask about recent developments that occurred after its training period, the AI might simply fabricate an answer because the information is missing. Returning to the Fyre Festival example: six months later, after several model updates, the same prompt yielded a different outcome. This time, the model responded that it couldn't come up with a positive review of the festival, demonstrating progress in preventing hallucinations.


How do you prevent AI from hallucinating?

Now that you know what AI hallucinations are and why they occur, the question becomes: how can you stop AI from inventing things? Here are some practical tips to make AI more reliable.

  1. Professional expertise is crucial
    Expert knowledge is essential when working with AI. The more you understand a subject, the better you'll be able to verify the AI’s output. AI can sound extremely convincing, even when it’s completely wrong. Only through your own expertise can you detect factual errors and ensure accuracy. Without adequate knowledge, you risk being misled by seemingly correct but erroneous answers. Therefore, it’s critical to be well-informed about the topics for which you're using AI.
  2. Ask clear questions
    With AI, you get out what you put in. Vague questions give AI room to be creative—which isn't always desirable. Instead of asking, “What are the trends in the event industry?”, you could be more specific: “Which new technologies will be used in 2024 for customer interaction at events?” The more targeted your question, the less likely AI is to improvise.
  3. Watch out for answers that seem "too perfect"
    If an answer sounds too good to be true, it usually is. AI models are trained to satisfy users—even if that means making things up a bit. If something sounds exactly like what you wanted to hear, be cautious. AI wants to please you, sometimes even at the expense of accuracy.
  4. Verify critical details
    AI often sounds convincing, even when entirely wrong. Therefore, it's crucial always to verify important facts. Names, dates, locations—if AI mentions a source or something specific, take the time to double-check. It might take a bit longer, but this prevents embarrassing errors and keeps your clients satisfied.
  5. Break complex tasks into steps
    When you ask AI to perform a complex task with multiple components, the risk of errors increases. AI can struggle to remain consistent across multiple parts. A single error might derail the entire output. Therefore, break complex tasks into smaller, manageable steps, verifying the output at each stage. This helps you catch mistakes early and build upon accurate results.
  6. Be careful with summaries
    One useful feature of AI is its ability to summarize lengthy texts. But be cautious—summaries are a frequent source of AI hallucinations. AI tends to add extra details that weren’t in the original text. Always cross-check the summary against the original document to ensure accuracy, especially if you plan to share the summary externally.
  7. Use AI capable of real-time searches
    When working with current facts, it's crucial to use an AI model with internet access. ChatGPT recently introduced its own search functionality, ChatGPT Search, which consults the internet to provide answers. Perplexity goes a step further by transparently listing the sources it consulted to produce its answer.

Stay in the driver’s seat

AI can greatly help you as an event professional to save time and accomplish more, but you must use it wisely. It's like driving with cruise control—it provides relaxation and comfort, but you can't fall asleep behind the wheel. AI can make mistakes, just as a car on cruise control doesn't automatically handle sharp turns.

Stay alert, ask clear questions, and always verify the critical details.
