
Can you trust AI with your health?

In today’s world of instant information, quite literally at our fingertips, technology has been pushed to new heights with the rise of AI. With an overwhelming amount of health information to sift through, it can be tempting to hit AI mode and ask for a quick, condensed explanation of a condition or what a symptom might mean.

This may seem like a fast way to get answers. But when those answers come without context or clear sources, they can be misleading and inaccurate. We asked a health expert to break down the hidden dangers of trusting AI-driven health info.


Is everyone using AI for health advice now?

AI didn't just appear out of thin air. We’ve been living with versions of it for well over a decade - it’s been quietly powering everything from Netflix recommendations to virtual assistants, acting as a helpful background feature we almost took for granted.

The arrival of ChatGPT in 2022, however, changed the landscape. AI is no longer an invisible helper tucked into apps. It has a voice. When you speak to it, it responds. It can create, converse, and, in health contexts, even present itself in ways that make it sound like an actual clinician.

Dr David Shusterman, a board-certified urologist and Chief Physician at Modern Urologist in New York City, USA, says that compared to even three or four years ago, he sees far more patients with a long list of possible diagnoses they found online or through AI tools.

“Sometimes they’ve read ten different explanations for the same symptom, and many of those explanations contradict each other,” he says. “Instead of coming in with one concern, they’re often overwhelmed and anxious about five or six possible conditions.

“The internet can be helpful for education, but without a clinical context, it can easily turn into information overload.”


If you rely on AI or algorithm-generated health advice over peer-reviewed data or a qualified professional, remember - you are not speaking to an actual doctor. These tools don't understand the person behind the screen and are no replacement for a professional, individualised assessment.

Shusterman warns that although AI can summarise information, it can’t examine you, review your full medical history in context, or recognise subtle warning signs during a conversation.

“When someone relies exclusively on algorithm-generated advice, important diagnoses can be missed or delayed,” he says.


AI health content often sounds overly confident, creating an illusion of expertise. Part of the problem is a phenomenon called 'AI hallucinations', where the technology generates information that sounds perfectly logical and factual but is actually entirely invented. Because AI models prioritise fluent, persuasive language over medical accuracy, the harmful, oversimplified health advice they sometimes produce can be hard to unlearn.

“AI-generated information is often written in a very authoritative tone, which makes it sound like definitive medical guidance,” warns Shusterman. “The issue is that the confidence of the language doesn’t guarantee the accuracy of the information.

“When people hear something stated very confidently online, it can be difficult to convince them that the situation is actually more nuanced.”

Shusterman says the real danger is when safety exceptions are omitted, and people are given one-size-fits-all advice.

“In medicine, small details matter - age, medicines, family history, physical exam findings," he explains. “A recommendation that is safe for one person may be dangerous for another.

“When complex symptoms are reduced to generic advice, you risk overlooking serious conditions that require timely evaluation or specialised treatment.”

'Quick fixes' vs real medicine

Another point to consider is that the internet is awash with social media 'health hacks' and apparent 'miracle' fixes. Presented as quick, simple solutions - often by those without medical expertise - these claims can make professional healthcare seem slow or unnecessarily complicated.

“Good medicine usually involves a plan, follow-up, and consistency,” says Shusterman. “But online content often promotes instant results. That creates unrealistic expectations, and when people don’t see immediate change, they sometimes abandon treatments that would actually help them in the long run.”

Though many people turn to AI for a quick breakdown of health concerns, they often feel compelled to double‑check its answers. This can mean asking the AI follow‑up questions or searching elsewhere, which sometimes creates a merry‑go‑round of contradictions and second‑guessing. Before long, you can spend hours online and end up more confused and worried than when you started.

“Patients sometimes spend weeks or months researching symptoms online, and instead of feeling more informed, they feel exhausted and unsure what to believe,” Shusterman explains. “Eventually, some people delay care because they’re stuck in a cycle of reading conflicting opinions.

“That kind of decision paralysis can, unfortunately, postpone the medical evaluation that would give them clear answers.”

Bypassing credible sources for a quick-fire synopsis from the digital wild west can easily trigger cyberchondria. This is the digital form of hypochondria - excessive worry about having an illness you may not actually have.

Short, bullet-pointed summaries make it easy to skim and miss reassuring context, while highlighting alarming signs. That combination can push you to repeatedly search for confirmations, misinterpret normal sensations as symptoms, and escalate your anxiety.


When AI advice turns out to be wrong - or people rely on unqualified sources, misleading visuals, or deepfake experts - it can undermine trust in real providers and the healthcare system as a whole.

Shusterman says this breeds confusion and scepticism. When people discover supposedly trusted online information is inaccurate, they may start doubting all medical guidance - even advice from real physicians.

“Trust is a critical part of the doctor-patient relationship,” he explains. “Our goal as clinicians is to help people navigate information, not dismiss their curiosity.”

Shusterman’s tips for navigating online health content

Shusterman advises treating online health information as a starting point, not a diagnosis.

He shares some practical guidance for staying savvy online:

  • Be wary of content that promises instant cures, oversimplifies complex conditions, or uses fear to push action.

  • Prefer sources tied to recognised medical institutions, peer-reviewed research, or accredited professionals.

  • Use online info to inform questions for your clinician, not to replace a professional assessment.

“Remember that real healthcare involves conversation, examination, and individualised care,” Shusterman concludes. “Technology can support medicine, but it should never replace the guidance of a qualified professional who understands your specific health situation.”


The information on this page is peer reviewed by qualified clinicians.
