Most of us use AI daily—whether summarizing text, organizing tasks, suggesting recipes, or even translating languages in real time. As AI chatbots become more integrated into our lives, we increasingly rely on them for efficiency, inspiration, and problem-solving.

While many of the models widely used by the public are incredibly powerful and undeniably impressive, their weaknesses are concerning.

AI hallucinations, when a model confidently generates false or misleading information, are a growing concern. All models suffer from hallucinations of one kind or another, and they can make for frustrating UX: the AI fails to solve a problem or gives incorrect responses, yet is unable to see the error of its ways.

For this article we decided to ask ChatGPT (GPT-4o) to give some advice, to see if it can recognise its own limitations and offer meaningful ways to deal with them.

1. Verify with External Sources

We’ve all asked a chatbot a question (maybe even one we knew the answer to) and blindly accepted the response given by the LLM, assuming it knows more than us.

More often than not, much like Wikipedia, the response will be correct (or close to it). But how can we check its accuracy?

Search the Web

If you’re dealing with current events, scientific discoveries, or controversial topics, ask me to fetch real-time information via search. - ChatGPT

Bots like ChatGPT now include buttons to switch between ‘AI Search’ and ‘Reasoning’, to decide whether the response should be based on data it was trained on, or more up-to-date search results.

If your chosen LLM doesn’t include the switch, try “Can you search the web for the latest updates on this?”.

Compare Multiple Sources

The joy of using AI is that it seemed to remove the need to compare sources or verify whether information is accurate; we assumed the LLM would do this for us.

However, most users now know AI can be inaccurate, yet still tend to assume that it isn’t. A good way around this is to double-check what the AI gives you against information from reputable sources like government agencies, academic institutions and peer-reviewed journals.

2. Test for Consistency

Rephrase the question: Ask me the same question in different ways to see if I give consistent answers.
You can also ask me to explain my reasoning. If I can logically explain how I reached a conclusion, it’s less likely to be a hallucination. - ChatGPT

Imagine a lawyer cross-examining somebody suspected of being involved in a crime. The questions posed by the lawyer are phrased in specific ways to mould the response that the defendant gives.

AI is similar in this respect, and in the context of programming, thinking like this can make it easier to identify when information is incorrect or outdated.

A simple “Which version of Node are you using?”, for example, would at least give some context for the responses you get if they don’t match what you’re expecting.
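If you’re working with an LLM through its API rather than the chat window, this consistency check can be scripted. Below is a minimal sketch, assuming the official OpenAI Node SDK (openai) and an OPENAI_API_KEY in your environment; the model name and example questions are just placeholders:

```typescript
// check-consistency.ts
// Ask the same question in two different phrasings and compare the answers.
// A minimal sketch: assumes `npm install openai` and OPENAI_API_KEY in the environment.
import OpenAI from "openai";

const client = new OpenAI();

// Placeholder phrasings of the same underlying question.
const phrasings = [
  "Which version of Node.js first shipped a built-in fetch API?",
  "Since which Node.js release has fetch worked without installing a package?",
];

async function main(): Promise<void> {
  for (const question of phrasings) {
    const completion = await client.chat.completions.create({
      model: "gpt-4o", // placeholder model name
      messages: [{ role: "user", content: question }],
    });
    console.log(`Q: ${question}`);
    console.log(`A: ${completion.choices[0].message.content}\n`);
  }
  // If the two answers disagree, treat both with suspicion and verify against
  // an authoritative source such as the official Node.js changelog.
}

main().catch(console.error);
```

The same idea scales to asking for the reasoning behind each answer and checking that the explanations agree, not just the conclusions.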

3. Look for Hallucination Indicators

Despite being capable of searching for, finding and using sources, AI has a nasty habit of making up sources that look believable but are totally fictitious.

Made-up citations or sources: If I mention studies, book titles, or direct quotes, ask me to verify if they exist or do so yourself by finding the original source material if it is available.
Example: “Can you confirm where this study was published?”

This does require a little effort on your part, but it’s certainly a good skill to have.
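When a cited paper comes with a DOI, part of that legwork can be scripted too. Here is a minimal sketch, assuming Node 18+ (for the built-in fetch) and the public Crossref REST API; the DOI shown is a placeholder, and a missing record doesn’t by itself prove a citation is fake, since not every publication is registered with Crossref:

```typescript
// verify-doi.ts
// Look up a DOI against the public Crossref REST API (https://api.crossref.org).
// Requires Node 18+ for the built-in fetch API.

async function verifyDoi(doi: string): Promise<void> {
  const response = await fetch(
    `https://api.crossref.org/works/${encodeURIComponent(doi)}`
  );
  if (!response.ok) {
    // A 404 means Crossref has no record of this DOI - a red flag worth chasing up.
    console.log(`No Crossref record for ${doi} (HTTP ${response.status})`);
    return;
  }
  const data = (await response.json()) as any; // Crossref's response envelope
  const work = data.message;
  console.log(`Title:   ${work.title?.[0]}`);
  console.log(`Journal: ${work["container-title"]?.[0]}`);
  console.log(`Year:    ${work.issued?.["date-parts"]?.[0]?.[0]}`);
}

// Placeholder DOI - replace it with the one the chatbot gave you.
verifyDoi("10.1000/xyz123").catch(console.error);
```

Even when a record exists, it’s worth checking that the title and journal actually match what the chatbot claimed.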

Be wary of overly specific details that don’t sound right. If something seems oddly detailed but unverifiable, cross-check it.

4. Demand Counterarguments or Alternative Views

AI can, just like people, have tunnel vision and biases, approaching a problem from a single perspective and failing to understand how others might view things differently.

A simple way to deal with this is to ask the AI to give you opposing perspectives, so that any biases become plain to see.

Ask, “What biases might be influencing this answer?” or “Give me a list of opposing perspectives” to make these more transparent.

5. Test My Confidence and Uncertainty

The Dunning-Kruger effect is a cognitive bias where individuals with low competence in a subject or skill area overestimate their abilities (and those with high competence underestimate theirs). It’s a strange phenomenon, but one that we see with AI in its current form.

Even when incorrect, LLMs will often respond confidently. When failing to solve an issue, or after repeatedly offering incorrect solutions and going around in circles, they will respond as if they have solved the issue with ease.

Force me to rank my confidence: Ask, “On a scale of 1 to 10, how certain are you about this answer?” - ChatGPT

Of course, confidence in an answer does not equate to accuracy, but at least you’ll make the LLM do one more layer of reasoning before it decides on a response.
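If you’re using the API, you can build this into the request itself and ask for a structured answer. A minimal sketch, again assuming the official OpenAI Node SDK and its JSON response mode; the model name and question are placeholders, and a self-rated score is only a rough signal rather than a measure of accuracy:

```typescript
// rate-confidence.ts
// Ask the model to return its answer together with a self-rated confidence score.
// A minimal sketch: assumes `npm install openai` and OPENAI_API_KEY in the environment.
import OpenAI from "openai";

const client = new OpenAI();

async function askWithConfidence(question: string): Promise<unknown> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o", // placeholder model name
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          'Answer the user\'s question as JSON with two fields: "answer" (a string) ' +
          'and "confidence" (an integer from 1 to 10).',
      },
      { role: "user", content: question },
    ],
  });
  // content can be null in edge cases, so fall back to an empty object.
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}

askWithConfidence("Which Node.js release enabled fetch by default?")
  .then((result) => console.log(result))
  .catch(console.error);
```

A low self-rating doesn’t tell you what the right answer is, but it’s a useful cue to go back to tip 1 and verify externally.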

Conclusion

Full disclosure, we didn’t expect ChatGPT’s advice to be so good.

However, the question remains whether these practical strategies actually cut down on the false information that LLMs give their users, or whether GPT struggles to follow its own advice in reality, such is the strength of hallucinations.

We’ll keep an eye on this topic to see if studies or research projects reveal more about what we can do to handle hallucinations, but give these tips a go and let us know what works for you!
