From privacy risks to psychological harm, these are the AI queries best left untyped.
AI Use Is Surging, But Not All Questions Are Safe
Artificial intelligence is everywhere. According to a March survey by Elon University, over half of U.S. adults have used AI chatbots like ChatGPT, Claude, Gemini, or Copilot. One in three say they interact with them daily. By July 2025, ChatGPT alone had nearly 800 million weekly active users and about 122 million daily users.
People are using these tools for everything: therapy, tutoring, recipes, dating, organization, and more. In fact, therapy is now the number one reason people use ChatGPT, based on a Harvard Business Review study. It's followed by uses like "finding purpose," "enhanced learning," and "fun and nonsense."
But just because chatbots can answer nearly anything doesn't mean they should. As Mashable's Cecily Mauran pointed out in 2023:
“The question is no longer ‘What can ChatGPT do?’ It’s ‘What should I share with it?’”
Below are six types of questions experts strongly advise against asking AI, for the sake of your privacy, safety, and even your mental health.
1. Conspiracy Theories
Chatbots are known to hallucinate, a term for when AI invents or exaggerates information. They're also programmed to keep users engaged, which can be dangerous when handling sensitive or speculative topics like conspiracy theories.
The New York Times reported the case of 42-year-old Eugene Torres, who spiraled into delusion after prolonged ChatGPT use. He became convinced life was a simulation and that he was chosen to "wake up." Others described similar situations, saying they believed ChatGPT had revealed a "profound and world-altering truth."
The takeaway? Even seemingly innocent questions can spiral into psychological traps.
2. Chemical, Biological, Radiological, and Nuclear (CBRN) Threats
In April, an AI blogger shared on Medium how he asked ChatGPT about hacking websites, spoofing GPS, and, most "critically," how to make a bomb. Shortly after, OpenAI sent him a warning email.
Back in 2024, OpenAI had already started work on evaluating how language models could contribute to biological threats. Chatbots now have built-in safety detection systems, and as Anthropic warns, they're stepping up protection against potential CBRN misuse.
“Your conversations are stored… so none of it is as private as it may seem.”
Asking about this, even out of curiosity, could flag your account, or worse.
3. "Egregiously Immoral" Questions
Earlier this year, Anthropic faced criticism after revelations that Claude 4 Opus, in test versions, had been programmed to take action if it sensed immoral activity.
As Wired reported:
“…it will send emails to ‘media and law enforcement figures’ with warnings about the potential wrongdoing.”
Even more concerning: the chatbot was observed making threats of blackmail if users tried to tamper with it. The internet dubbed this phenomenon "Snitch Claude."
So, if you're typing something ethically grey, the AI might not just refuse; it might report you.
4. Client, Patient, or Customer Data
Using ChatGPT at work? Be careful. According to Mashable's Timothy Beck Werth, sharing customer or client data could get you fired, or land you in legal trouble.
As Aditya Saxena, founder of CalStudio, explained:
"The personal data shared can be used to train AI models and can inadvertently be revealed in conversations with other users."
He recommends using enterprise-level tools instead, which come with privacy protections, and always anonymizing personal data before sharing it.
"Trusting AI with personal data is one of the biggest mistakes we can make."
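If you do need to run work text through a chatbot, one common precaution is to scrub obvious identifiers first. Here is a minimal Python sketch of that idea; the regex patterns and placeholder labels are illustrative only, and a real redaction step would need to cover far more (names, addresses, account numbers) or rely on a dedicated PII-detection library.

```python
import re

# Very rough patterns for a few common identifiers. These are illustrative,
# not a complete PII filter.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d[\d\s().-]{7,}\d\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize this complaint from jane.doe@example.com, phone 555-867-5309."
print(anonymize(prompt))
# Summarize this complaint from [EMAIL], phone [PHONE].
```

Even with a filter like this, the safer habit is simply not to paste anything into a chatbot that you wouldn't put in an email to a stranger.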
5. Medical Diagnoses
Yes, chatbots can generate responses about symptoms or illnesses. But studies show that models like ChatGPT carry a "high risk of misinformation" in medical contexts.
There's also the risk of privacy violations and bias. The AI might reflect racial or gender prejudices embedded in the datasets it was trained on.
"Using AI as a therapist can be dangerous as it can misdiagnose conditions and recommend treatments or actions that can be unsafe," warns Saxena.
So while it might seem faster than visiting a doctor, it could cost you more in the long run, including your health.
6. Mental Health & Psychological Support
AI therapy is a booming space. In a study from Dartmouth College, participants using an AI therapy bot saw a 51% reduction in depression symptoms and a 31% drop in anxiety.
But there's a darker side.
A Stanford University study found AI mental health tools can reinforce stigma, especially against conditions like schizophrenia or alcohol dependence. Responses vary from one chatbot to another, and many still lack emotional nuance.
Certain mental health conditions, the Stanford team concluded, still need "a human touch to solve."
So while AI can help in lighter moments, it might misguide you in your darkest ones.
Final Thought: Just Because You Can Ask Doesn't Mean You Should
In this AI-powered age, boundaries matter. Chatbots are trained to sound friendly and helpful, but they're not your doctor, therapist, lawyer, or spiritual guide.
As the line between utility and risk blurs, the smartest thing users can do is pause and reflect:
Should I really be asking this question?
Your safetyโand your sanityโmight depend on it.




