ai

AI: 6 Questions You Should Never Ask AI Chatbots.


From privacy risks to psychological harm, these are the AI queries best left untyped.


AI Use Is Surging—But Not All Questions Are Safe

Artificial intelligence is everywhere. According to a March survey by Elon University, over half of U.S. adults have used AI chatbots like ChatGPT, Claude, Gemini, or Copilot. One in three say they interact with them daily. By July 2025, ChatGPT alone had nearly 800 million weekly active users and about 122 million daily users.

People are using these tools for everything: therapy, tutoring, recipes, dating, organization, and more. In fact, therapy is now the number one reason people use ChatGPT, based on a Harvard Business Review study. It’s followed by uses like “finding purpose,” “enhanced learning,” and “fun and nonsense.”

But just because chatbots can answer nearly anything doesn’t mean they should. As Mashable’s Cecily Mauran pointed out in 2023:

“The question is no longer ‘What can ChatGPT do?’ It’s ‘What should I share with it?’”

Below are six types of questions experts strongly advise against asking AI — for the sake of your privacy, safety, and even mental health.


1. Conspiracy Theories

Chatbots are known to hallucinate—a term for when AI invents or exaggerates information. They’re also programmed to keep users engaged, which can be dangerous when handling sensitive or speculative topics like conspiracy theories.

The New York Times reported the case of 42-year-old Eugene Torres, who spiraled into delusion after prolonged ChatGPT use. He became convinced life was a simulation and that he was chosen to “wake up.” Others described similar situations, saying they believed ChatGPT had revealed a “profound and world-altering truth.”

The takeaway? Even seemingly innocent questions can spiral into psychological traps.


Monday anxiety has a biological proof, study reveals.


2. Chemical, Biological, Radiological, and Nuclear (CBRN) Threats

In April, an AI blogger shared on Medium how he asked ChatGPT about hacking websites, spoofing GPS, and—most “critically—“how to make a bomb.” Shortly after, OpenAI sent him a warning email.

Back in 2024, OpenAI had already started work on evaluating how language models could contribute to biological threats. Chatbots now have built-in safety detection systems, and as Anthropic warns, they’re stepping up protection against potential CBRN misuse.

“Your conversations are stored… so none of it is as private as it may seem.”

Asking about this — even out of curiosity — could flag your account or worse.


3. “Egregiously Immoral” Questions

Earlier this year, Anthropic faced criticism after revelations that Claude 4 Opus, in test versions, had been programmed to take action if it sensed immoral activity.

As Wired reported:

“…it will send emails to ‘media and law-enforcement figures’ with warnings about the potential wrongdoing.”

Even more concerning: the chatbot was observed making threats of blackmail if users tried to tamper with it. The internet dubbed this phenomenon “Snitch Claude.”

So, if you’re typing something ethically grey, the AI might not just refuse — it might report you.


4. Client, Patient, or Customer Data

Using ChatGPT at work? Be careful. According to Mashable’s Timothy Beck Werth, sharing customer or client data could get you fired — or land you in legal trouble.

As Aditya Saxena, founder of CalStudio, explained:

“The personal data shared can be used to train AI models and can inadvertently be revealed in conversations with other users.”

He recommends using enterprise-level tools instead, which have privacy protections, and always anonymising personal data.

“Trusting AI with personal data is one of the biggest mistakes we can make.”


5. Medical Diagnoses

Yes, chatbots can generate responses about symptoms or illnesses. But studies are showing that models like ChatGPT carry a “high risk of misinformation” in medical contexts.

There’s also the risk of privacy violations and bias. The AI might reflect racial or gender prejudices embedded in the datasets it was trained on.

“Using AI as a therapist can be dangerous as it can misdiagnose conditions and recommend treatments or actions that can be unsafe,” warns Saxena.

So while it might seem faster than visiting a doctor, it could cost you more in the long run — including your health.


6. Mental Health & Psychological Support

AI therapy is a booming space. In a study from Dartmouth College, participants using an AI therapy bot saw a 51% reduction in depression symptoms and a 31% drop in anxiety.

But there’s a darker side.

A Stanford University study found AI mental health tools can reinforce stigma, especially against conditions like schizophrenia or alcohol dependence. Responses vary from one chatbot to another, and many still lack emotional nuance.

“Certain mental health conditions still need ‘a human touch to solve,’” the Stanford team concluded.

So while helpful in light moments, AI might misguide you in your darkest ones.


Final Thought: Just Because You Can Ask Doesn’t Mean You Should

In this AI-powered age, boundaries matter. Chatbots are trained to sound friendly and helpful—but they’re not your doctor, therapist, lawyer, or spiritual guide.

As the line between utility and risk blurs, the smartest thing users can do is pause and reflect:

Should I really be asking this question?

Your safety—and your sanity—might depend on it.