Asking a chatbot for health advice? After the recent launch of OpenAI's ChatGPT Health service, there are several important factors to consider.
As more and more users turn to chatbots for advice, it was only a matter of time before technology companies began building healthcare-focused applications.
In January, OpenAI announced ChatGPT Health — an updated version of its chatbot, which, according to the company, is capable of analyzing users' medical data obtained from health monitoring apps and wearable devices to provide answers to medical questions.
For now, access to the program is limited to a waitlist. Rival company Anthropic offers similar features to a select group of users of its chatbot, Claude.
Both companies emphasize that their language models are not intended to replace professional medical assistance and should not be used for diagnosis. Chatbots can help summarize and explain complex medical results, prepare for a doctor's visit, or identify important health trends hidden in medical records.
However, how reliable and accurate can they be in analyzing health and disease data? And should one rely on them?
Here are a few aspects to consider before discussing your health with AI:
Chatbots can provide more personalized information than search engines
Some medical professionals and researchers who have worked with ChatGPT Health and similar programs view them as progress compared to the current situation.
Artificial intelligence is not perfect and can sometimes give incorrect recommendations, but its answers are often better tailored to the individual user than results found through Google.
“Often the only alternative is a lack of information or guessing,” notes Dr. Robert Wachter, a medical technology expert from the University of California, San Francisco. “Therefore, if these tools are used responsibly, they can provide truly useful data.”
In countries like the UK and the US, where waiting for a doctor's appointment can take weeks and emergency departments can involve hours of waiting, chatbots can help reduce anxiety and save time.
Additionally, they can indicate the need for immediate medical attention in cases with dangerous symptoms.
One of the advantages of new chatbots is their ability to respond to user queries considering their medical history, including medications taken, age, and notes from doctors.
Even if access to medical data is not provided, specialists like Wachter advise describing symptoms in as much detail as possible to receive more accurate answers.
Do not turn to AI for alarming symptoms
Wachter and his colleagues note that in some situations, it is best to refrain from using chatbots and seek medical help immediately. Symptoms such as shortness of breath, chest pain, or severe headache may indicate an emergency.
Even in less critical cases, both patients and doctors should approach AI with a certain degree of skepticism, emphasizes Dr. Lloyd Minor from Stanford.
“When it comes to serious medical decisions or even a less significant health issue, one should not rely solely on information provided by a language model,” adds Minor, dean of the Stanford University School of Medicine.
Even in the case of common and less complex conditions, such as polycystic ovary syndrome (PCOS), it is often preferable to consult a qualified physician, as symptoms can manifest differently in different individuals, affecting treatment choices.
Consider privacy before sharing health data
Many of the advantages offered by AI bots depend on how willingly users share personal medical information. It is important to understand that anything you provide to the company developing the AI is not protected by the US federal privacy law that governs the handling of sensitive medical data.
This law, known as HIPAA, imposes penalties on doctors, hospitals, insurers, and other medical institutions for disclosing medical information. However, it does not apply to companies developing chatbots.
“When a person uploads their medical history to a language model, it is not the same as handing it over to a new doctor,” explains Minor. “Consumers should be aware that privacy standards in these cases vary significantly.”
OpenAI and Anthropic claim that user data is stored separately from other data and protected by additional measures. The companies do not use medical data to train their models. Users must separately consent to the sharing of such information and can withdraw that consent at any time.
Despite the growing interest in AI, independent research on such technologies is still in its early stages. Initial results show that programs like ChatGPT perform well on medical exams but often struggle to interact correctly with people.
A recent University of Oxford study of 1,300 participants found that people who used AI chatbots to assess hypothetical medical conditions made no better decisions than those who relied on traditional online searches or their own judgment.
When AI chatbots were presented with medical scenarios in detailed descriptions, they correctly identified the underlying condition 95 percent of the time.
“There were no problems at that point,” says lead author Adam Mahdi from the University of Oxford. “Problems arose at the stage of communication with real participants.”
Mahdi and his team identified several communication issues. People often did not provide chatbots with enough data to accurately determine their condition. In turn, AI systems sometimes responded with a mix of correct and incorrect information, making it difficult for users to distinguish between the two.
The study, conducted in 2024, did not test the latest chatbot versions, including ChatGPT Health.
An AI second opinion can be helpful
The ability of chatbots to ask clarifying questions and extract key details from users is an area where, according to Wachter, there is significant potential for improvement.
“I am confident that they will become truly effective when their approach to communicating with patients becomes more ‘clinical,’ and the dialogues resemble real consultations,” believes Wachter.
Currently, one way to increase confidence in the information received is to compare the opinions of several chatbots, just as patients sometimes seek a second opinion from another doctor.
“Sometimes I enter the same data into ChatGPT and Gemini,” shares Wachter, referring to the AI tool from Google. “When their responses match, I feel more confident in the correctness of the answer.”
The article "Are you seeking medical advice from a chatbot? After the recent launch of ChatGPT Health by OpenAI, it is worth considering several key aspects" was first published on K-News.