
Is AI Safe to Use?
Honest Answers to the Questions Everyone's Afraid to Ask

The privacy questions around AI are real and worth taking seriously. Here's what these tools actually do with your data, what you should never type in, and how to use AI safely without living in fear of it.

What you'll learn in this article
  • What AI companies actually do with your conversations — in plain English
  • A clear list of things you should never type into an AI chatbot
  • How to turn off training data sharing in ChatGPT and other major tools
  • The practical safety habits that protect you without being paranoid

Let's start with the honest answer

The honest answer to "is AI safe to use?" is: it depends on what you mean by safe, and what you put into it.

AI chatbots from reputable companies like OpenAI, Google, and Microsoft are not malware. They're not going to steal your identity by existing on your phone or give hackers access to your bank account just because you asked about dinner recipes. For the kinds of questions most people use them for, they're completely fine.

But — and this is important — they are collecting data. The things you type into AI chatbots are being stored, processed, and in many cases used to train future versions of the models. That's not a secret or a conspiracy; it's in the terms of service that almost nobody reads.

AI chatbots are not dangerous in the way most people fear. They're worth being thoughtful about in ways most people aren't.

This article is about that second point — the thoughtful, practical privacy habits that protect you without requiring you to avoid the tools entirely.

What AI companies actually do with your conversations

When you use ChatGPT, Google Gemini, or most other consumer AI tools, your conversations are stored on the company's servers. Here's what typically happens with them:

Storage. Your conversations are saved, often indefinitely unless you delete them. This is so you can come back to them later — but it also means the company has a record of what you asked.

Safety review. Automated systems (and sometimes human reviewers) may look at conversations to check for policy violations or to evaluate how the AI responded. This is standard practice across the industry.

Training data. By default on most free tiers, your conversations may be used to train and improve future versions of the AI model. This is the part most people don't realize. You can usually opt out.

What they don't do: Reputable AI companies don't sell your individual conversation data to advertisers, share it with employers, or use it to target you with ads the way social media platforms do. The business model is subscriptions and API access, not data brokering.

💡
How to opt out of training data in ChatGPT
Go to Settings → Data Controls → turn off "Improve the model for everyone." This stops your conversations from being used to train future models. Similar options exist in Gemini and other major tools under their respective settings or privacy controls.

What you should never type into an AI chatbot

This is the most practically useful section of this article. Regardless of which tool you use or what your privacy settings are, some types of information should simply never go into a chatbot.

🚫 Never type these into an AI chatbot
  • Passwords, PINs, or security answers
  • Social Security numbers or government ID numbers
  • Bank account numbers, credit card numbers, or routing numbers
  • Full medical records or health insurance ID numbers
  • Confidential work documents, unreleased product information, or proprietary data
  • Personal information about other people who haven't consented to having it shared
  • Legal case details that could be sensitive or privileged

The reason isn't that these companies will definitely misuse this information. It's that you have no control over what happens to it once it's on their servers — and an AI chatbot simply doesn't need it to help you with most tasks.

What's generally fine to share

Plenty of information is completely reasonable to share with AI tools:

✓ Generally fine to share
  • General descriptions of your situation (without identifying details)
  • Work tasks that don't involve proprietary or confidential information
  • Symptoms or health questions (general questions, not your full medical history)
  • Creative writing, ideas, and brainstorming
  • General questions about your life, finances, or relationships
  • Recipes, home improvement, travel planning

The guiding principle is: could this information, in the wrong hands, be used to harm you? If yes, don't share it. If no, it's probably fine.

The scam risk: AI-generated fraud is real

The biggest safety concern around AI isn't what you type into chatbots — it's the way bad actors are using AI to create more convincing scams. This is worth knowing about.

Voice cloning. AI can now clone a person's voice from a short recording. Scammers are using this to impersonate family members in "emergency" calls asking for money. If you get a call from a family member saying they're in trouble and need funds wired immediately — hang up and call them back on their known number.

Phishing emails. AI-generated phishing emails are now largely free of the typos and awkward phrasing that used to make them easy to spot. Be more skeptical than ever of emails asking you to click a link or provide information, even if they look legitimate.

Fake images and videos. AI-generated images and video ("deepfakes") are increasingly convincing. Don't share or react to news based on a video you haven't verified through a reputable source.

⚠️
The grandparent scam is real and growing
Criminals use AI voice cloning to impersonate grandchildren or other relatives in distress, calling seniors to ask for emergency wire transfers. Establish a family safe word that only real relatives would know. This is a fast-growing category of fraud.

Practical habits that protect you

You don't need to be paranoid to be safe. These simple habits cover most of the real risk:

Use temporary or incognito chat mode when available. Many AI tools offer a mode where conversations aren't saved. Use it for anything sensitive.

Delete your conversation history regularly. Most AI tools let you delete past conversations. A clean-up once a month takes two minutes and reduces your stored footprint significantly.

Opt out of training data where you can. It's in the settings and takes 30 seconds. Do it.

Treat AI like you'd treat email. You wouldn't email your Social Security number to a stranger. Apply the same judgment to what you type into chatbots.

Use a VPN on public Wi-Fi. This isn't AI-specific, but if you're using AI tools (or doing anything sensitive) on coffee shop or airport Wi-Fi, a VPN encrypts your connection. We recommend NordVPN — see our tools page for details.

ℹ️
The bottom line
AI tools from major companies are safe for general everyday use. The key is understanding what you're sharing and with whom. Treat your conversations like you'd treat anything stored on a cloud service — thoughtfully, not fearfully.