AI’s Inconsistency in Medical Emergencies Raises Concerns

By James Kinoti
  • A recent study highlights possible barriers to using AI in emergency medical situations.
  • According to the lead researcher, ChatGPT does not perform consistently.
  • The study also found that ChatGPT performed worse than traditional risk-scoring methods.

A recent study from Washington State University’s Elson S. Floyd College of Medicine offers key insights into the possible barriers to using artificial intelligence (AI) in emergency medical situations. In the study, published in PLOS ONE, the authors tested the ability of OpenAI’s ChatGPT to assess the cardiac risk of simulated patients presenting with chest pain.

Inconsistent conclusions

The results point to a problematic level of variability in ChatGPT’s conclusions when the same patient data is entered. According to Dr. Thomas Heston, the lead researcher, ChatGPT does not work consistently: shown exactly the same data, it would return a low-risk rating the first time, an intermediate risk the next time, and occasionally even a high-risk rating.

This inconsistency is especially serious in life-threatening cases, where objective evaluations are essential for clinicians to take accurate and appropriate action. Chest pain can stem from many different conditions, so a physician needs to examine the patient quickly and begin treatment promptly to provide proper care.

The study also found that ChatGPT performed poorly compared with the traditional methods doctors use to assess patients’ cardiac risk. Today, physicians typically evaluate patients with checklist-based scoring systems such as the TIMI and HEART protocols, which grade the severity of a cardiac patient’s condition.
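To illustrate why these checklist scores are considered reliable, here is a minimal sketch of a HEART-style calculation. The five components (History, ECG, Age, Risk factors, Troponin) and the commonly cited cutoffs (0–3 low, 4–6 intermediate, 7–10 high) follow the published HEART score convention; the function names are ours, and this simplified sketch is not the study’s implementation or clinical software.

```python
# Sketch of a HEART-style checklist score: five components
# (History, ECG, Age, Risk factors, Troponin), each scored 0-2,
# summed to a 0-10 total. Risk bands use the commonly cited
# cutoffs: 0-3 low, 4-6 intermediate, 7-10 high.

def heart_score(history, ecg, age, risk_factors, troponin):
    components = (history, ecg, age, risk_factors, troponin)
    if any(c not in (0, 1, 2) for c in components):
        raise ValueError("each HEART component is scored 0, 1, or 2")
    return sum(components)

def heart_risk_band(score):
    if score <= 3:
        return "low"
    if score <= 6:
        return "intermediate"
    return "high"
```

The point of contrast with ChatGPT is determinism: given the same inputs, a checklist like this always returns the same risk band, which is exactly the consistency the study found the chatbot lacked.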

However, when given the same variables that feed into the TIMI and HEART scales, ChatGPT agreed with those scores only 45% and 48% of the time, respectively. If this degree of variability appears in AI decision-making for high-risk medical cases, it calls the technology’s reliability into question, because these are precisely the high-stakes situations that depend on consistent and accurate decisions.
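An agreement rate like the 45% and 48% figures above can be read as simple percent agreement: the fraction of cases where the model’s risk label matched the reference score’s label. A minimal sketch, with hypothetical labels (the study’s actual methodology may differ):

```python
# Raw percent agreement between a model's risk labels and a
# reference score's labels over the same set of cases.

def agreement_rate(model_labels, reference_labels):
    if len(model_labels) != len(reference_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(m == r for m, r in zip(model_labels, reference_labels))
    return matches / len(reference_labels)

# Hypothetical example: the model matches the reference on 2 of 4 cases.
rate = agreement_rate(
    ["low", "high", "low", "low"],
    ["low", "low", "low", "high"],
)  # 0.5, i.e. 50% agreement
```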

Addressing the limitations and potential of AI in healthcare

Dr. Heston pointed to AI’s potential to improve healthcare support while stressing the need for thorough research into its shortcomings. AI may prove a valuable tool, but the technology is moving faster than our understanding of it, so much more research is needed, especially in commonly encountered clinical situations.

The findings underscore the continued importance of human clinicians in these settings, even though the AI technology showed some advantages. In an emergency, for example, AI could scan a patient’s full medical record and efficiently surface only the pertinent information. It can also help generate differential diagnoses and reason through challenging cases alongside doctors, helping them move through the diagnostic process more efficiently.

Nonetheless, some issues remain, according to Dr. Heston.

“It can be really good at helping you think through the differential diagnosis of something you don’t know, and it probably is one of its greatest strengths of all. I mean, you could ask it for the top five diagnoses and the evidence behind each one, and so it could be very good at helping you think through the problem but it just cannot give the straightforward answer.”

As AI continues to evolve, it is essential to evaluate its performance rigorously, particularly in high-stakes settings such as healthcare, to protect patients and optimize medical decision-making.




James Kinoti

A crypto enthusiast, James enjoys sharing knowledge on fintech, cryptocurrency, blockchain, and frontier technologies. The latest innovations in the crypto industry, crypto gaming, AI, and blockchain technology are his preoccupation. His mission: to stay on top of transformative applications across industries.
