Artificial Intelligence in Healthcare

Imagine playing a significant role in transforming medicine. Stanford researchers have been studying whether large language models can make medical diagnoses more accurate and support clinical reasoning. Individuals interested in contributing to this work should look into human-centered artificial intelligence, the field in which much of this research is grounded.

Artificial Intelligence as a Diagnostic Tool

The researchers took a series of cases based on actual patients and asked 50 physicians to participate in the study and diagnose these individuals. Half the physicians used medical manuals and other conventional diagnostic resources, while the other 25 were additionally given ChatGPT as a diagnostic aid.

Surprisingly, ChatGPT on its own received a median score of 92, which most people would consider the equivalent of an A in a traditional class. Doctors, in contrast, earned median scores of only 76 with AI assistance and 74 with conventional methods. Researchers believe doctors scored lower because their diagnosis-related reasoning steps weren’t as comprehensive. The study highlights the potential of artificial intelligence in diagnosing human patients and is leading some to pursue certification in human-centered AI.

Physicians may rely more on artificial intelligence in the future as they learn how to use these tools to their fullest. Patients will likely benefit once doctors are fully trained on these tools and use them in clinical settings, and large language models will become more commonplace in healthcare. Unfortunately, that day has yet to arrive.

Researchers found that physicians aren’t yet using artificial intelligence to its full potential to improve their clinical reasoning. There’s clear room for improvement in physician-AI collaboration, and better collaboration would benefit clinical practice and healthcare overall.

Patients currently don’t ask many questions once they have a diagnosis. They tend to care less about the why behind the diagnosis than about what steps will be taken to treat them. Doctors often can’t explain how they arrived at a diagnosis either: they may reach the correct conclusion without being able to tell the patient the reasoning that led them there.

Providing a Diagnosis

OpenAI, a San Francisco-based company, introduced ChatGPT in November 2022, and many large language models have been released since. These programs are trained on massive amounts of natural human language drawn from websites, books, and countless other sources. Once trained, a large language model can respond to a natural language query, producing a fluent and compelling answer.
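To make that concrete, here is a minimal sketch of what querying a large language model looks like in practice, assuming the openai Python package and an API key set in the environment. The model name and the clinical vignette are purely illustrative assumptions, not the setup used in the Stanford study.

    # Illustrative sketch: sending a natural language query to a large
    # language model via the OpenAI Python client. Model name and the
    # vignette below are assumptions for demonstration purposes only.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a careful clinical reasoning assistant."},
            {"role": "user",
             "content": "A 45-year-old presents with fatigue, unintended "
                        "weight loss, and night sweats. List the most "
                        "plausible diagnoses and the reasoning behind each."},
        ],
    )

    print(response.choices[0].message.content)

In the study itself, physicians worked through ChatGPT’s chat interface rather than through code, but the underlying exchange is the same: a natural language prompt goes in, and a fluent natural language answer comes out.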

Many industries already benefit from large language models. Content generation and finance rely heavily on these models today, and healthcare is expected to follow suit soon. Experts believe large language models can help reduce diagnostic errors. Sadly, such errors remain prevalent and harmful in medicine today, and anything that reduces them will be greatly appreciated by physicians and patients alike.

Existing models can handle multiple-choice and open-ended medical reasoning questions, and research shows they answer such questions correctly on examinations. However, how these tools perform outside of educational settings needs further study: researchers need to verify that results seen on examinations translate into actual clinical practice. The Stanford study examined whether they would.

The Study

Doctors participating in the study received up to six complex clinical cases and had an hour to work through them using patient histories, lab results, and physical exams. For each case, they were asked to provide a plausible diagnosis, and they were also allowed to recommend additional steps for evaluating the patient.

When diagnosing each patient, the doctors relied on their medical knowledge and experience, and they were also given access to reference materials. Among the doctors given access to ChatGPT, approximately one-third reported that they had frequently or occasionally used the tool. Many in that group, however, either disagreed with the model’s diagnosis or didn’t factor it into their own.

Researchers found that access to the large language model did not improve diagnostic accuracy. However, doctors who used the model reached their diagnoses faster than physicians who did not. Additional research is needed to determine whether large language models can reliably shorten diagnostic turnaround time, which may be critical for some patients.

One thing researchers discovered when conducting the study was that large language models make doctors more efficient. The time saved using these diagnostic tools could benefit them in the long run, as saving time could mean less burnout for medical professionals. This finding is critical because burnout remains a significant issue in the industry, leading many people to pursue different careers.

Improving Human-AI Teamwork

Doctor-AI collaboration has room for improvement. Doctors must learn to trust large language models when making a diagnosis, viewing the AI’s perspective as valid and considering that it could be correct. Before doctors fully trust these models, however, they must understand how the models are trained and what materials were used in the training. Some experts believe that a large language model geared specifically toward healthcare would give doctors more confidence in the results it generates. Doctors must also become more familiar with using large language models so they feel comfortable doing so.

AI clinical applications must prioritize patient safety. Doctors cannot rely solely on AI responses when diagnosing; they need to use their judgment, medical knowledge, and experience to ensure the diagnosis provided is correct.

Doctors aren’t going anywhere anytime soon, either. Patients want a trusted human doctor to turn to when they are ill, and AI will not replace that role. Doctors will continue prescribing medications, performing surgical procedures, and handling other interventions, but they will do so with the help of AI. Patients will appreciate doctors using this tool to ensure they promptly get the correct diagnosis and treatment. AI is simply a tool doctors can use to perform their jobs better. Anyone interested in learning more about human-centered AI should consider taking courses so they can help advance medicine today.
