Much is expected of artificial intelligence, but the ChatGPT language model is not ready to replace a pediatrician yet. In a recent test it was found to do more harm than good.
ChatGPT has already revolutionized the production of text and other content, but it is nowhere near replacing top-class professionals. For the time being, applying artificial intelligence to medicine, for example, does more harm than good, according to research published in the JAMA Network.
In the test, the language model was asked to find the correct diagnosis in paediatric cases, that is, children's medicine, and succeeded only about 17 percent of the time. The result is significantly worse than for adult diagnoses, although even there the figure is a modest 39 percent.
Doctors have been hoping for a revolution driven by artificial intelligence, but it will have to wait. For now, such applications are better suited to other tasks, such as automating schedules and record-keeping.
In JAMA's test, the artificial intelligence was asked to match diagnoses to the symptoms patients presented, drawn from case data collected over the previous ten years. The answers were evaluated by a panel of doctors. Of the 100 answers, 72 were completely wrong and 11 were too broad to count as correct. However, 57 percent of the wrong answers did identify the right organ system, so the artificial intelligence was at least on the right track.
From the answers, the JAMA researchers concluded that ChatGPT has great difficulty understanding the connections between different symptoms. According to the researchers, the situation could be improved by training the language model on vetted scientific literature instead of medical material freely available on the internet. According to JAMA, this difference is the subject of a follow-up study.