MillenniumPost
Opinion

Promises and pitfalls

Generative AI’s potential in conducting medical diagnosis, automating clinical paperwork, analysing medical research, and offering medical education appears promising, but it’s not yet immune to fatal errors


Just after you finished your morning tea, a sharp pain pierced your chest, your heart rate soared, and your blood pressure plummeted. Struggling to catch your breath, you realised you were having a heart attack. You dialled the hospital's emergency services, and the paramedics arrived promptly. They worked to stabilise your condition, but their efforts proved futile. As the situation grew more critical, they urgently contacted the hospital for advice, but time was running out.

In a last-ditch effort to save your life, the team decided to try the newly released Generative AI application ChatGPT-4 that they had heard about. One of the medical residents pulled out her phone, opened the app, and sought its advice. She described your medication resistance and the therapy you had recently received for a blood infection. "I am at an impasse as to what is taking place and what to do," she begged ChatGPT-4. "Help us save the life of this patient, please." The bot immediately answered with a well-written response outlining possible causes of your collapse, citing pertinent recent studies, and recommending a white blood cell-boosting infusion therapy. The resident understood that ChatGPT-4 was indicating that sepsis, a potentially fatal illness, might be developing. If that were the case, you urgently required medication.

The resident swiftly ordered the infusion recommended by ChatGPT-4 from the hospital pharmacy, then asked her phone to "show me" the research so she could confirm what the bot had said. The infusion arrived just in time, and you began to respond to the treatment. You were then transported to the hospital, where the doctors stabilised your condition. Thanks to the ChatGPT-4 app, you were saved. The medical team was amazed at how quickly and accurately ChatGPT-4 had provided the information they needed. They knew that this new technology would revolutionise the way they treated patients in emergencies.

Harnessing the life-saving potential

The new Generative AI surpasses the previous ChatGPT-3.5 chatbot in sophistication and can generate intelligent insights that could be immediately useful in emergency rooms, potentially saving time and even lives. The medical resident felt as though she had a knowledgeable mentor by her side, equipped with an abundance of medical knowledge. ChatGPT-4 also helped her simplify the insurance paperwork after the medication was prescribed, saving significant time, and the bot could automate the filling of the patient's medication orders as well as any necessary insurance documentation.

The potential impact of AI in healthcare is vast, ranging from clinical trials to medical records, and we should begin discussing how best to optimise its use. However, errors in medicine can be fatal, making it imperative to ensure the utmost accuracy when incorporating AI into patient care. While some doctors believe that ChatGPT-4 can produce a diagnosis nearly as accurate as that of a third- or fourth-year medical student, it is still just a machine, and not necessarily superior to a search engine or a textbook. If the system were to rely on information from textbooks written by humans instead of its current computer-generated material, it could provide more reliable and secure data.

Beyond diagnosis, ChatGPT-4 could be used to automate clinical paperwork, analyse medical research, and offer medical education and chatbot-based applications that increase patient engagement. It could also assist doctors in research by collecting an individual's numerous health attributes and comparing them, in detail, against those of millions of patients worldwide. Before the AI bot can perform such tasks, however, it must be provided with millions of patient datasets containing those attributes and their outcomes.

Generative AI-assisted medical education

To simulate the role of a physician, you need an advanced Generative AI model like OpenAI's ChatGPT-4. GPT-3.0 and 3.5 demonstrated impressive performance on the MCAT in the US, surpassing 276,779 students in one study; they showed high agreement with the answer key and a strong grasp of the justifications behind the answers. Such models could give students customised, thorough explanations of MCAT competency-related questions either free of charge or at a reasonable price. Test creators could use them to generate new test questions, and pre-med students could employ them for targeted study sessions. With medical school admissions already highly competitive, the prospect of Generative AI outperforming students in the classroom may intensify that challenge further.

Not a panacea

While Generative AI is touted as the future of medicine, there is a caveat. It is not infallible and may bury minute flaws in an otherwise reliable medical recommendation, so experts caution that it must always be used with human oversight. Even when a diagnosis seems convincing, ChatGPT-4 can still make mistakes that require human intervention. In a sense, Generative AI is both brighter and dumber than any person you have ever met: its responses may appear rational and persuasive to an untrained observer, yet they can ultimately put patients at risk. Nor is ChatGPT-4 immune to clerical errors, such as inaccurately recording information or making simple arithmetic mistakes. Because it is a Machine Learning system, it is difficult to determine exactly when, and why, it errs. One way to combat such flaws is to ask ChatGPT-4 to review its own work, which may help surface problems. Another is to have the bot show its work so that its calculations can be checked in a more human-like way, or to ask it to provide the sources it used to reach its conclusion, as the medical resident in our scenario did.
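The self-review safeguard described above can be sketched in a few lines of Python. This is only an illustration of the prompting pattern, not a real medical tool: the prompt wording is an assumption, and `ask_model` is a hypothetical placeholder standing in for whatever Generative AI service a hospital system might actually call.

```python
# Illustrative sketch of a second-pass "review your work" prompt, one of the
# error-mitigation tactics discussed above. Hypothetical names throughout;
# no real AI service is called here.

def build_review_prompt(original_question: str, model_answer: str) -> str:
    """Wrap a model's first answer in a request to re-check it and cite sources."""
    return (
        "You previously answered the question below. Review your answer "
        "step by step, point out any errors or unsupported claims, and "
        "list the sources you relied on.\n\n"
        f"Question: {original_question}\n"
        f"Your answer: {model_answer}"
    )

def ask_model(prompt: str) -> str:
    # Placeholder: in practice this would call a Generative AI API,
    # and a clinician would still review the result.
    raise NotImplementedError("wire up a real model client here")

# Example of constructing (not sending) a review prompt:
prompt = build_review_prompt(
    "What could cause sudden hypotension after a resistant blood infection?",
    "The picture is consistent with early sepsis; consider an infusion therapy.",
)
print(prompt)
```

The point of the second pass is that the model's review output goes back to a human, who checks the listed sources rather than trusting the bot's self-assessment.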

Do you believe that ChatGPT-4 is too constrained at the moment to become the next major innovation in medicine? Or do you think it is still a promising technology?

The writer is Head and Assistant Professor of the Department of Computer Science & Electronics, Ramakrishna Mission Vidyamandira. Views expressed are personal
