Key Points
- OpenAI says 40 million people ask health questions of ChatGPT each day.
- Health prompts represent more than five percent of all ChatGPT queries.
- A survey of U.S. adults found that 55% use AI to check or explore symptoms and 52% to ask health questions at any time of day.
- A real‑world case involved a user coordinating urgent care for a family member abroad after AI flagged a possible stroke.
- OpenAI calls the chatbot a “healthcare ally” but acknowledges it cannot replace a doctor.
- The company is partnering with hospitals and researchers to improve AI safety and accuracy.
Scale of Daily Use
OpenAI reports that 40 million people turn to ChatGPT with health‑related questions every day, a volume that accounts for more than five percent of all prompts entered into the system. Set against the platform’s overall traffic, that means 200 million of its 800 million weekly users ask at least one health‑related question each week.
Survey Findings on Consumer Behavior
In a poll of 1,042 U.S. adults who had used AI for health purposes within the prior three months, OpenAI identified several common use cases:
- 55% used the technology to check or explore symptoms.
- 52% turned to a chatbot for health questions at any time of day.
- 48% used it to understand medical terms or instructions.
- 44% used AI to learn about treatment options.
Real‑World Example
The company highlighted the experience of Ayrin Santoso, a San Francisco resident who used ChatGPT to help coordinate urgent care for her mother in Indonesia after a sudden loss of vision. When Santoso entered her mother’s symptoms, prior medical advice, and context, the chatbot warned that the condition could signal a hypertensive crisis and a possible stroke. Her mother was subsequently hospitalized and, according to OpenAI, has recovered ninety‑five percent of the vision in the affected eye.
OpenAI’s Position and Safety Efforts
OpenAI describes ChatGPT as a “healthcare ally,” emphasizing its role in helping users organize information, translate medical jargon, and generate drafts that can be verified. The organization says it is working with hospitals and researchers to improve the model’s accuracy and safety, acknowledging that the tool cannot replace a physician and that it does not have access to a user’s full medical history.
Risks and Context
The report notes that while AI can provide quick answers outside of clinic hours, there are serious risks in treating the output as definitive medical advice; OpenAI cautions that the chatbot can still make errors that matter. The article also places the trend in the context of the long‑standing practice of searching the internet for health information, pointing out that AI introduces an additional layer of uncertainty compared with traditional sources.
Source: techradar.com