The artificial intelligence (AI) chatbot ChatGPT is creeping into science and medicine — prompting Douglas L. Mann, MD, editor-in-chief of JACC: Basic to Translational Science, to take it out for a test drive.
ChatGPT (OpenAI), which stands for Chat Generative Pre-trained Transformer, is a natural language processing tool that allows users to have a personalized conversation with an AI bot capable of providing detailed responses to any question posed.
Mann, who has previously written about the potential perils of AI technology in healthcare, engaged ChatGPT (OpenAI) in a point–counterpoint discussion about the role of AI in translational medicine.
“The first impression is that it’s this amazing new technology that is reasonably efficient and accurate and returns answers blindingly fast. I was totally stunned,” Mann told theheart.org | Medscape Cardiology.
“But then when you drill down, the responses lack the type of nuanced insight that a person with in-depth knowledge of the field might provide. That’s what’s missing right now,” said Mann, of the Cardiology Division, Department of Internal Medicine, Washington University School of Medicine, St. Louis, Missouri.
As detailed in a perspective piece published online January 18 in JACC: Basic to Translational Science, he first posed the question: “Why Is AI The Future of Translational Medicine?”
Within about one minute, ChatGPT produced five paragraphs, concluding with the following, copied here verbatim with no editing for grammar, style, or content.
“In conclusion, AI is likely to play an increasingly important role in the future of translational medicine, due to its ability to analyze large datasets quickly and accurately, to identify patterns and trends, and to help develop personalized treatments and new drugs. As AI technologies continue to advance, it is likely that we will see even more applications of AI in the field of translational medicine, which will have the potential to improve patient care and outcomes.”
Next, Mann asked ChatGPT: “Why Will AI Not Be The Future of Translational Medicine?” Again, it returned five paragraphs, with the following conclusion (verbatim).
“In conclusion, while AI has the potential to be a useful tool in the field of translational medicine, it is unlikely to be the future of this field. There are a number of limitations to the use of AI in this context, including the potential for bias and incomplete data, the inability to understand the complexity of biological systems, and the ethical implications of using these technologies. As a result, it is important to be cautious about relying too heavily on AI in the field of translational medicine, and to ensure that it is used in a responsible and ethical manner.”
Mann said he found both sides of the argument “factual and to some extent compelling,” but lacking depth.
“To me, it was like what a first-year medical student could generate just scanning the internet, although I do think that as the systems become more sophisticated, they may be able to interject some insight that you can’t get from the internet,” Mann said.
He’s not alone. In a recent survey on progress in AI, fully half of AI experts estimated that an AI system capable of human-level thinking would be available by 2059.
“Given that this date is well beyond my tenure as editor-in-chief for JACC: Basic to Translational Science, I am reasonably confident that I will be able to hold onto my editorial position and that I will not be replaced by an editor-in-chief chatbot in the near term,” Mann writes.
Still, Mann believes AI-generated content will become an increasingly “important voice” in scientific and medical journals in the not-too-distant future. Some research papers have already been submitted to journals citing ChatGPT as a co-author — prompting much debate and putting journal editors on edge.
“The journals are circling the wagons now about not letting AI-generated content be used,” Mann said.
Last month, the World Association of Medical Editors (WAME) published a position paper — Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications. Among the recommendations: chatbots cannot be authors.
Adam C. Powell, PhD, president of Payer+Provider Syndicate, also thinks ChatGPT and similar technologies will play an “increasingly important role in generating journalistic content, and that content may soon find its way into medical and scientific journals.”
Powell also sees a role for AI in helping clinicians produce personalized content for their practices.
“Rather than providing patients with generic instructions, using AI, it will be possible to adapt instructions to the cultural needs, reading levels, and stylistic preferences of the patients receiving them,” Powell told theheart.org | Medscape Cardiology.
“Likewise, AI can help answer patient and physician questions, and help illustrate connections between concepts,” Powell said.
For example, Powell asked ChatGPT to write 150 words, citing 5 sources, explaining why not all patients should receive imaging for lower back pain, to which it responded (citing 3 sources):
“Imaging for lower back pain, such as X-rays, CT scans, and MRIs, is often unnecessary and can lead to potential harm for patients. According to the American College of Physicians, routine imaging for lower back pain is not recommended for most patients as it has been shown to not improve patient outcomes and can lead to overdiagnosis and overtreatment (Qaseem et al., 2017).
Similarly, the Choosing Wisely initiative, a national effort aimed at reducing unnecessary medical tests, also advises against routine imaging for lower back pain without clear indications such as severe or progressive neurological deficit, suspected spinal cord compression, or an underlying systemic disease (Smith et al., 2013).
Additionally, exposure to ionizing radiation from imaging tests can increase the risk of cancer and other harmful effects (Smith et al., 2013). The use of imaging for lower back pain can also result in increased costs for both patients and healthcare systems (Qaseem et al., 2017). Instead, non-imaging approaches, such as physical therapy and pain management, have been shown to be effective in managing lower back pain (Foster et al., 2018).
In conclusion, not all patients should receive imaging for lower back pain as it can lead to potential harm and has limited benefits. Non-imaging approaches should be considered as first-line treatments.”
Powell then asked ChatGPT to write 150 words at a fourth-grade reading level, humorously explaining why not all patients should receive imaging for lower back pain, to which it responded:
“Imaging tests like X-rays, CT scans, and MRIs are like detective work for doctors, helping them find out what’s going on inside your body. But just because they can see what’s happening doesn’t mean they always should!
Think of it like this: if you have a tummy ache, you don’t want to go on a wild goose chase looking for the cause. The same goes for your back! Just because your back hurts, doesn’t mean you need an X-ray. It’s like looking for a lost toy in your room when it’s probably just in your pocket!
Plus, some imaging tests use radiation, which is like super strong sunshine. Too much sunshine can give you a sunburn, and too much radiation can be bad for your body. So, imagine if you had to go on a sunny vacation every time your back hurt! That would be too much sunshine, and not good for you.
So, what should you do instead? Well, sometimes just moving around and stretching can help your back feel better. You can also talk to your doctor about other options, like physical therapy, or medicine to help with the pain.
In conclusion, imaging tests aren’t always necessary for your back pain. It’s better to start with simpler solutions and save the detective work for the big mysteries!”
Powell said the main negative consequence that he envisions is that “persuasively written, factually inaccurate content will be produced.”
Mann and Powell report no relevant financial relationships.
JACC: Basic Transl Sci. Published online January 18, 2023.