AI for Advisors newsletter
While AI privacy doesn’t come up in every conversation I have with advisors, it’s a fair concern among professionals exploring AI. And for good reason.
Working with personal client information means any new tool, especially one as powerful as generative AI, deserves a clear-eyed look at how data is handled.
The key is to move beyond vague worry and toward practical understanding. What’s being stored? Who has access? And how can you configure your tools to align with your responsibilities? These are the questions that matter on the privacy front.
The answers start with understanding how different versions of ChatGPT (or other AI models) handle your data, and why some options are safer than others when it comes to client-related work.
The good news? You can use AI thoughtfully and effectively without compromising client confidentiality. It just requires a little awareness and some smart practices.
Why privacy matters
Privacy isn’t just a “nice to have” in our profession. It’s foundational. Clients choose you because they trust that you will safeguard their personal information. If you betray that trust, even unintentionally, you risk not just regulatory consequences but also reputational damage.
Using AI introduces a new layer of risk if you aren’t careful. That’s why understanding and managing that risk is so important.
If you enter sensitive, personally identifiable client information into a free AI tool, you might be handing that data over to be stored and used for model training without even realizing it. Even paid versions vary; not all guarantee that your data won’t be retained or used in some way. You have to check.
Understanding ChatGPT’s privacy levels
ChatGPT offers several subscription levels: Free, Plus, Team, and Enterprise. The privacy protections vary significantly across them.
The Free and Plus tiers are designed for individual use and include fewer safeguards. Conversations at these levels may be stored and used to improve OpenAI’s models.
Users can opt out of this data sharing in their settings. (Go to Settings > Data controls > Improve the model for everyone > Off.) Double-check this on each device or browser you use. Note that opting out doesn’t erase past conversations: your chat history is only deleted if you manually remove it.
For professionals handling sensitive or client-related data, the Team and Enterprise tiers offer stronger protections. These business-level accounts come with default privacy settings that prevent your conversations from being used to train the AI, and they include better data controls and admin oversight.
5 strategies for protecting privacy
With a few straightforward practices, you can use AI effectively without risking your clients’ trust.
- Avoid free tiers: Don’t use the free tier of any AI service for anything client-related. For ChatGPT, choose Team or Enterprise to get a higher level of privacy.
- Don’t let your data train the model: Whenever possible, use AI tools that offer enterprise-grade security and privacy commitments, and look for clear policies stating that your data is not used for training.
- Anonymize client information: Never input real names, Social Security numbers, account numbers, or any identifying client details into an AI prompt. If you want to brainstorm solutions or draft ideas related to a client situation, fictionalize the details. For example, instead of “John Smith, age 63, retiring from IBM,” you might say “a client retiring soon from a large corporation.” (A simple scrubbing sketch follows this list.)
- Check provider data policies: Before using any AI tool professionally, take a few minutes to read its privacy policy. Specifically, look for:
  - Whether your data is stored
  - Whether your data is used for training
  - How long data is retained
  - Whether you can opt out of data retention or training
- Isolate sensitive activities: Reserve AI for tasks like brainstorming marketing ideas, creating educational materials, or drafting internal documents, and keep client-specific work separate. If in doubt, treat AI interactions the way you would treat email: only send what you’d be comfortable having others see.
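If your firm routes AI prompts through its own script or internal tool, the anonymization habit can be enforced automatically. Below is a minimal sketch in Python; the patterns and the scrub_prompt function are illustrative assumptions, not features of any particular product, and a real implementation would need patterns tuned to your own data.

```python
import re

# Illustrative patterns only; tune these to the formats your firm actually uses.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # e.g. 123-45-6789
ACCOUNT_PATTERN = re.compile(r"\b\d{8,12}\b")        # long digit runs that look like account numbers

def scrub_prompt(text: str, client_names: list[str]) -> str:
    """Replace identifying details with neutral placeholders
    before the text is sent to any AI tool."""
    text = SSN_PATTERN.sub("[SSN REMOVED]", text)
    text = ACCOUNT_PATTERN.sub("[ACCOUNT REMOVED]", text)
    for name in client_names:
        # Case-insensitive replacement of each known client name.
        text = re.sub(re.escape(name), "the client", text, flags=re.IGNORECASE)
    return text

prompt = "Draft a retirement checklist for John Smith, age 63, SSN 123-45-6789."
print(scrub_prompt(prompt, client_names=["John Smith"]))
# -> "Draft a retirement checklist for the client, age 63, SSN [SSN REMOVED]."
```

A scrubbing step like this is a backstop, not a substitute for judgment: it won’t catch every identifying detail, so the fictionalize-first habit still applies.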
Privacy as a professional practice
When I started using AI in early 2023, I too was concerned about privacy. Was using AI “safe”? Would it somehow expose my data or chat history?
What I’ve learned, and what I believe many other advisors have realized, is that once you understand how to manage your privacy settings and how to prompt within a solid framework, privacy concerns can be addressed without limiting how you use AI. You’re free to take full advantage of AI’s creative and strategic capabilities without second-guessing yourself. And you do it with purpose, not fear.