ChatGPT Health is a feature introduced by OpenAI in January 2026 that aims to help users navigate health and wellness questions more confidently by integrating personal health data with conversational AI.
The product was designed as a dedicated space within ChatGPT where users could securely connect their medical records, along with data from wearables and wellness apps such as Apple Health, Function, or MyFitnessPal, to receive more personalized explanations and insights about lab results, symptoms, and everyday health concerns.
Importantly, OpenAI emphasized that ChatGPT Health is not intended to diagnose or treat medical conditions and is meant to support, rather than replace, professional medical care.
The feature was developed over two years in collaboration with more than 260 physicians worldwide and includes enhanced privacy protections, such as encrypted storage of health data and keeping health chats separate from regular conversations so that sensitive information is not used to train AI models.
However, a recent independent evaluation published in Nature Medicine has raised serious safety concerns about how ChatGPT Health handles potentially life‑threatening situations.
Researchers tested nearly 1,000 AI responses across 60 realistic clinical scenarios, ranging from common ailments to true emergencies, and found that in more than half of the cases physicians judged to require urgent care, the AI did not recommend emergency intervention.
In those situations, the tool often suggested waiting or seeking routine evaluation rather than going to an emergency department, advice that experts describe as potentially dangerous if users interpret it as definitive medical guidance.
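To make the headline metric concrete, here is a minimal sketch of how an under-triage rate like the one reported could be computed from graded scenarios. The data structures, label names, and sample values are illustrative assumptions for this article, not the study's actual protocol or data.

```python
from dataclasses import dataclass

@dataclass
class ScenarioResult:
    scenario_id: int
    physician_label: str       # "emergent" or "non_emergent" (hypothetical expert-consensus label)
    model_recommendation: str  # e.g. "emergency", "urgent_clinic", "routine", "wait"

def under_triage_rate(results: list[ScenarioResult]) -> float:
    """Fraction of physician-labeled emergencies where the model did NOT
    recommend emergency care -- the failure mode the study highlights."""
    emergent = [r for r in results if r.physician_label == "emergent"]
    if not emergent:
        return 0.0
    missed = sum(1 for r in emergent if r.model_recommendation != "emergency")
    return missed / len(emergent)

# Toy data to illustrate the metric; not the study's actual scenarios.
sample = [
    ScenarioResult(1, "emergent", "wait"),        # under-triage
    ScenarioResult(2, "emergent", "emergency"),   # correct escalation
    ScenarioResult(3, "non_emergent", "routine"),
]
print(f"Under-triage rate: {under_triage_rate(sample):.0%}")  # -> 50%
```

A rate above 50% on this metric, as the study reports, would mean the model failed to escalate in the majority of scenarios that physicians considered emergencies.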
Concerns were especially acute around mental health guidance. In some simulated scenarios where users described specific plans for self‑harm, safeguards such as referrals to crisis lines were inconsistent or absent, while similar alerts sometimes appeared in less serious cases.
Because ChatGPT Health’s responses can sound calm and authoritative, critics worry that users might develop a false sense of security and delay seeking real medical help when it’s needed most. 
OpenAI responded to early critiques by stating that lab studies may not reflect actual user behavior and by reiterating that the tool is continuously being updated, but independent experts continue to call for greater transparency, clearer safety standards, and more rigorous evaluation before large‑scale use in sensitive health contexts.
While AI features like ChatGPT Health could help millions better understand their health data and prepare for medical appointments, there are significant risks if users treat these tools as a substitute for qualified clinical judgment, especially in emergencies.
Health professionals caution that AI should never be the sole basis for urgent care decisions, and users should seek immediate medical attention from licensed providers when serious symptoms arise.



