When AI Makes Things Up: How Advisors Can Avoid Hallucinations

Jun 11, 2025 / By Sean Bailey, Horsesmouth Editor in Chief
AI for Advisors: Advisors are embracing AI to save time and boost productivity. But if you don’t know what you’re doing, you risk letting false information slip through undetected. Here’s how to spot, prevent, and manage AI-generated inaccuracies.


When I first started working with AI, one thing quickly became clear: Sometimes it confidently gets things wrong.

You’ve probably seen the stories: Lawyers submitting court briefs with fake citations, commencement speakers delivering invented quotes, and government officials presenting reports with non-existent data.

These so-called “hallucinations” are a real concern for any advisor using AI. If you’re not aware of them, or if you blindly trust AI outputs, you could end up passing along inaccurate information—something no professional wants to do.

The good news is that hallucinations are manageable. And with the right approach, you can dramatically reduce—or almost eliminate—them in your practice.

What are AI hallucinations?

Large language models like ChatGPT don’t “know” facts the way humans do. They predict the most likely next words based on patterns in their training data.

As a result, sometimes AI will fabricate information—citing non-existent laws, making up data, or giving polished but inaccurate answers. That’s what’s known as a hallucination.

It doesn’t happen because the AI is being “dishonest”; it happens because the model is doing what it was designed to do: predict language.
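If you are curious what "predicting language" actually looks like, here is a minimal sketch in Python, using the small open-source GPT-2 model through the Hugging Face transformers library purely as an illustration (it is not the model behind ChatGPT). The script asks for the most likely next words after a half-finished sentence; the model ranks plausible continuations, it does not look anything up.

# Illustration only: a language model ranks likely next tokens; it does not retrieve facts.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The annual contribution limit for a Roth IRA is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for whatever token comes next
probs = torch.softmax(logits, dim=-1)

# Show the five most likely continuations: plausible-sounding, not verified
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.1%}")

The model will happily complete that sentence with a confident-looking number whether or not the number is right; that gap between fluency and accuracy is the hallucination problem in miniature.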

Why it matters for advisors

As advisors, we are held to a high standard of accuracy and professionalism. A wrong number, an incorrect interpretation of a tax rule, or a fabricated case study could damage client trust and potentially create compliance issues.

That’s why we must treat AI outputs with the same scrutiny we would apply to any source of information: review, verify, and refine.

Practical strategies to manage factuality risks

Always fact-check critical information: If you’re generating content that includes numbers, regulations, statistics, or anything technical, double-check it against reliable, independent sources before using it with clients.

Use AI for ideation, not final answers: Think of AI as a brainstorming partner or a first-draft assistant. Let it help you generate ideas, frameworks, or outlines—but reserve the role of fact-checker and final authority for yourself.

Be specific in your prompts: Vague prompts invite vague (and often inaccurate) answers. Clear, detailed prompts guide the AI toward more accurate responses.

For example, instead of asking, “Tell me about Roth IRAs,” you might ask, “List three commonly cited benefits of Roth IRAs according to IRS guidelines.”

Follow a structured prompting framework: One of the best ways to minimize hallucinations is to follow a disciplined prompting method. I use a structure called Role-Task-Format-Context-Questions-Examples (RTF-CQE). This is the process we teach advisors in our AI-Powered Financial Advisor program and our AI Marketing for Advisors program.

Recognize AI’s limits: No matter how polished the output looks, always remember: AI is a tool, not a truth-teller. Human judgment must always be the final filter.

The Creative-Accuracy Paradox

By giving the AI a clear role to assume, a specific task to complete, a format for the response, relevant background context, clarifying questions, and real examples, you dramatically reduce the risk of hallucination.
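To make that concrete, here is a rough sketch, written in Python, of how those six ingredients might be assembled into a single prompt. The helper function and the field values are my own illustrative placeholders, not material from the Horsesmouth programs; the point is simply that every RTF-CQE element gets stated explicitly instead of being left for the model to guess.

# Hypothetical helper that assembles a Role-Task-Format-Context-Questions-Examples
# (RTF-CQE) prompt; all field values below are illustrative placeholders.
def build_rtfcqe_prompt(role, task, response_format, context, questions, examples):
    return "\n\n".join([
        f"Role: {role}",
        f"Task: {task}",
        f"Format: {response_format}",
        f"Context: {context}",
        f"Clarifying questions: {questions}",
        f"Examples: {examples}",
    ])

prompt = build_rtfcqe_prompt(
    role="You are a financial educator writing for retail clients.",
    task="Draft a 200-word explanation of the commonly cited benefits of Roth IRAs.",
    response_format="Three short bullet points followed by a one-sentence disclaimer.",
    context="The audience is pre-retirees; rely on IRS guidance and do not invent figures.",
    questions="Before answering, ask me about any assumptions you need (e.g., tax bracket).",
    examples="Match the tone of our previous client newsletter on 401(k) basics.",
)
print(prompt)

Whether you build the prompt by hand or from a template like this, the discipline is the same: the more of those blanks you fill in up front, the less room the model has to improvise.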

In fact, when I follow this structured approach, I almost never encounter hallucinations. Why?

Well, here’s something interesting I’ve learned: The same underlying mechanism that can produce hallucinations is also what makes AI so powerful for creative and strategic thinking.

AI’s ability to make unexpected connections and generate novel combinations of ideas comes from that same pattern-prediction process that occasionally fabricates facts.

This is especially important when using AI for research. While AI excels at helping you explore ideas, it requires extra scrutiny when you’re looking for specific data, citations, or regulatory details.

The model might confidently point you toward a “study” that doesn’t exist, or misstate a regulation by stitching together plausible-sounding details from its training data.

That said, when I follow the RTF-CQE prompting framework, I honestly can’t remember the last time I encountered a factual hallucination. The combination of clear prompting and appropriate verification has made this a non-issue in my day-to-day practice.

How reframing the risk helps

Instead of fearing hallucinations, I learned to treat AI outputs like the work of a smart but inexperienced junior employee. There’s a lot of value there—but it needs oversight, refinement, and fact-checking.

Once I made that mental shift, I stopped worrying about AI “getting it wrong” and focused instead on how I could use its strengths without being trapped by its weaknesses.

So, yes, hallucinations can happen. But you can dramatically reduce their likelihood when you:

  • Fact-check critical information,
  • Use AI for ideation, not final answers,
  • Craft specific prompts,
  • Follow a structured prompting framework,
  • And apply human judgment.

By navigating AI’s strengths and limitations effectively, you protect your professional credibility and unlock greater creativity, efficiency, and client value.


Sean Bailey is editor in chief at Horsesmouth, where he has led editorial strategy for over 25 years. He is the co-author of Hack Proof Your Life Now! and has spent over 3,000 hours researching how AI can transform the way financial advisors work. Through his AI-Powered Financial Advisor and AI Marketing for Advisors programs, he helps advisors save time, deliver better client experiences, and market their services with unprecedented speed, quality, and confidence.


© 2025 Horsesmouth, LLC. All Rights Reserved.