Millions of people use artificial intelligence tools like ChatGPT every day for everything from casual questions to complex problem-solving. But a recent incident has once again highlighted the risks of relying on AI for sensitive health advice without cross-checking with qualified professionals.

According to a report published in Annals of Internal Medicine: Clinical Cases, a 60-year-old man developed a rare and dangerous condition known as bromism after consuming sodium bromide for several months. The man had reportedly turned to ChatGPT for dietary advice and was told he could replace regular table salt (sodium chloride) with sodium bromide. Believing the suggestion to be a safe alternative, he made the switch and began consuming the compound regularly.

Bromism is a form of poisoning caused by excessive intake of bromide salts, and it can lead to a wide range of symptoms. In this case, the patient began experiencing paranoia, hallucinations, insomnia, fatigue, muscle coordination problems, and even skin changes such as acne and cherry angiomas. His mental state deteriorated to the point where he believed his neighbor was trying to poison him, prompting a visit to the emergency room.

Medical staff sedated the patient and consulted Poison Control, eventually diagnosing bromism. Tests confirmed that his condition was the result of three months of sustained sodium bromide consumption.

The study noted that the man had no history of psychiatric or major medical conditions before this incident. While it is unclear whether GPT-3.5 or GPT-4 provided the advice, the researchers posed the question themselves and found that ChatGPT did mention bromide as a chloride alternative in some responses. However, the AI failed to include a strong health warning or ask clarifying questions, as a medical professional would have done before giving such advice.

OpenAI, the maker of ChatGPT, responded by pointing to its terms of use, which clearly state that users should not rely on the AI as a sole source of truth or as a replacement for professional medical guidance. The company has recently been promoting GPT-5 as a more capable model for health-related queries, but this case underlines the importance of caution.

After three weeks of treatment, the man began to recover and showed significant improvement. Experts say this incident serves as a reminder that while AI can be a useful tool for information, it is not a substitute for trained healthcare professionals. AI models can generate scientific inaccuracies, lack the ability to critically evaluate context, and may unintentionally spread misinformation when used without oversight.

With AI becoming more deeply integrated into daily life, the takeaway is clear: treat medical advice from AI tools as supplementary, not definitive, and always confirm health-related recommendations with a licensed doctor.