Parents are putting more trust in ChatGPT than actual doctors, study finds

Parents are trusting ChatGPT for medical advice over actual doctors and nurses, a new study found.

Researchers at the University of Kansas also found that parents rate AI-generated text as credible, trustworthy and moral.

“When we began this research, it was right after ChatGPT first launched — we had concerns about how parents would use this new, easy method to gather health information for their children,” lead author and doctoral student Calissa Leslie-Miller said in a release. “Parents often turn to the internet for advice, so we wanted to understand what using ChatGPT would look like and what we should be worried about.”

To reach these conclusions, Leslie-Miller and her colleagues conducted a study with 116 parents who were between the ages of 18 and 65. The study was published earlier this month in the Journal of Pediatric Psychology.

The participants reviewed health-related texts generated either by healthcare professionals or by the OpenAI chatbot ChatGPT. They were not told who, or what, authored the texts. They were asked to rate each text on five criteria: perceived morality, trustworthiness, expertise, accuracy and how likely they would be to rely on the information.

When comparing AI-generated text and that of health care experts, more than 115 parents told researchers at the University of Kansas that the ChatGPT text was more trustworthy (Getty Images/iStock)

In many cases, parents couldn’t tell which content was generated by ChatGPT or by the experts. When there were significant differences in ratings, ChatGPT was rated to be more trustworthy, accurate and reliable than the expert-generated content.

“This outcome was surprising to us, especially since the study took place early in ChatGPT’s availability,” said Leslie-Miller. “We’re starting to see that AI is being integrated in ways that may not be immediately obvious, and people may not even recognize when they’re reading AI-generated text versus expert content.”

ChatGPT was released in November 2022. On Thursday, OpenAI announced it had added a search engine to its chatbot, known as ChatGPT search. ChatGPT now has more than 250 million active monthly users.

In many cases, parents couldn’t tell which content was generated by ChatGPT or by the expert (REUTERS/Dado Ruvic/Illustration/File Photo)

Although ChatGPT performs well in many scenarios and could be a beneficial tool, the AI model can produce incorrect information. It’s not an expert, and users need to proceed with caution.

“During the study, some early iterations of the AI output contained incorrect information,” Leslie-Miller said. “This is concerning because, as we know, AI tools like ChatGPT are prone to ‘hallucinations’ — errors that occur when the system lacks sufficient context.”

“In child health, where the consequences can be significant, it’s crucial that we address this issue,” she said. “We’re concerned that people may increasingly rely on AI for health advice without proper expert oversight.”