Study: 86% of Medical Students Use AI Chatbots; Experts Warn of “Hallucinated” Data
A new cross-sectional study of undergraduate medical students in Mumbai has revealed a major shift in academic habits: 86% of students now use AI chatbots such as ChatGPT and Gemini in their studies.
While these tools are popular for summarizing complex topics and for quick learning, the study uncovered a concerning gap in verification: only about 16% of students cross-check AI-generated information against standard medical textbooks or journals.
The Risk of “Hallucination”: Educators and experts are raising red flags over this unchecked reliance. The primary concern is that AI models can “hallucinate”, generating plausible-sounding but medically incorrect information that could leave fundamental gaps in a student’s clinical knowledge base. The study suggests that without rigorous verification, students may unknowingly absorb and apply flawed medical concepts.
The Way Forward: The findings have triggered calls for policy changes in medical education. Rather than a blanket ban, experts recommend integrating AI literacy into the MBBS curriculum.
The goal is to train future doctors to use AI strictly as a supplementary tool while instilling the habit of verifying critical medical data against authoritative sources.