
ChatGPT matches medical professionals at diagnosis, study finds

A study at Mass General Brigham in Boston found that, when given case studies, ChatGPT had a diagnostic success rate similar to that of a freshly graduated medical student.

Researchers examined how ChatGPT could support clinical decision-making from the first patient interaction through the process of completing evaluations, diagnosing disease, and managing treatment, according to the study’s corresponding author, Dr. Marc Succi.

“Mass General Brigham sees great promise for large language models to assist in improving care delivery and clinician experience,” said study co-author Adam Landman, chief information officer and senior vice president of digital at the health system, which has a sizable research arm and billions of dollars in financing.
ChatGPT “has the potential to be an augmenting tool for the practice of medicine and support clinical decision making with impressive accuracy,” according to Succi, who serves as associate chair of innovation and commercialization and strategic innovation leader at Mass General Brigham.

According to a Mass General Brigham press release, the study, published in the Journal of Medical Internet Research, found that ChatGPT was roughly 72% accurate in overall clinical decision-making, “from generating possible diagnoses to making final diagnoses and care management decisions.” It performed best when making final diagnoses, where it was accurate 77% of the time. ChatGPT performed equally well in primary care and emergency care, and showed no gender bias.

Doctors don’t need to worry about AI replacing them just yet, though. Succi said in a statement that ChatGPT “struggled with differential diagnosis, which is the meat and potatoes of medicine when a physician has to figure out what to do.” That matters because it points to the beginning of a patient’s care, when little presenting information is available and a list of plausible diagnoses is needed, which is exactly where physicians are truly experts and add the most value.

The researchers noted certain limitations, including “possible model hallucinations and the unclear composition of ChatGPT’s training data set.” A model hallucination is commonly defined as a confident answer that is not supported by the available data. In future work, the researchers aim to determine whether AI might help hospitals in “resource-constrained areas” operate more efficiently while also improving patient care and outcomes.
