
Healthcare research & technology advancements

Our team of clinicians, researchers, and engineers works together to create new AI and to find opportunities to increase the availability and accuracy of healthcare technologies globally, realizing the long-term potential of health technology.


Meet Med-PaLM 2, our large language model designed for the medical domain

Developing AI that can answer medical questions accurately has been a challenge for several decades. With Med-PaLM 2, a version of PaLM 2 fine-tuned for the medical domain, we showed state-of-the-art performance in answering medical licensing exam questions. With thorough human evaluation, we’re exploring how Med-PaLM 2 can help healthcare organizations by drafting responses, summarizing documents, and providing insights. Learn more.

Expanding the power of AI in medicine

We are building and testing AI models with the goal of helping alleviate the global shortage of physicians, as well as the limited access to modern imaging and diagnostic tools in some parts of the world. With improved technology, we hope to increase accessibility and help more patients receive timely and accurate diagnoses and care.

How DeepVariant is improving the accuracy of genomic analysis

Sequencing genomes enables us to identify variants in a person’s DNA that indicate genetic disorders such as an elevated risk for breast cancer. DeepVariant is an open-source variant caller that uses a deep neural network to call genetic variants from next-generation DNA sequencing data.
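DeepVariant’s central idea is to recast variant calling as image classification: the reads overlapping a candidate site are encoded as a multi-channel tensor resembling a pileup image, and a convolutional network assigns genotype likelihoods. The sketch below illustrates only that shape; the channel layout, dimensions, and network are simplified stand-ins for illustration, not DeepVariant’s actual implementation.

```python
# Conceptual sketch of DeepVariant's approach: classify a pileup
# "image" of reads into genotype classes. All shapes and channels
# below are illustrative stand-ins, not DeepVariant's real encoding.
import torch
import torch.nn as nn

# Hypothetical pileup tensor: channels might encode base identity,
# base quality, mapping quality, strand, and so on; rows are reads
# and columns are reference positions around the candidate site.
pileup = torch.rand(1, 6, 100, 221)  # (batch, channels, reads, window)

classifier = nn.Sequential(
    nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 3),  # three genotype classes: hom-ref, het, hom-alt
)

genotype_probs = classifier(pileup).softmax(dim=1)
print(genotype_probs)  # per-class probabilities for the candidate site
```

Framing the problem this way lets variant calling inherit the mature tooling and accuracy gains of image classification.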


Healthcare research led by scientists, enhanced by Google

Google Health is providing secure technology to partners that helps doctors, nurses, and other healthcare professionals conduct research and improve our understanding of health. If you are a researcher interested in working with Google Health on health research, enter your details to be notified when Google Health is available for research partnerships.

Using AI to give doctors a 48-hour head start on life-threatening illness

In this research in Nature, we demonstrated how artificial intelligence could accurately predict acute kidney injury (AKI) in patients up to 48 hours earlier than it is currently diagnosed. Notoriously difficult to spot, AKI affects up to one in five hospitalized patients in the US and UK, and deterioration can happen quickly. Read the article
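At its core, this kind of early-warning system is continuous risk prediction over an EHR time series. The toy sketch below shows that shape, with a recurrent network emitting a risk score at every hourly step; the feature count, hidden size, and 0.8 alert threshold are invented for illustration and do not reflect the paper’s architecture.

```python
# Toy sketch of continuous risk prediction over an EHR time series,
# loosely in the spirit of early-warning models like the AKI work.
# Features, model size, and the alert threshold are all invented.
import torch
import torch.nn as nn

seq = torch.rand(1, 72, 12)  # (patient, 72 hourly steps, 12 lab/vital features)

class RiskModel(nn.Module):
    def __init__(self, n_features=12, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)                              # hidden state at every hour
        return torch.sigmoid(self.head(h)).squeeze(-1)  # risk score per hour

risk = RiskModel()(seq)             # shape: (1, 72), one score per hour
alerts = (risk > 0.8).nonzero()     # hours where an alert would fire
```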


Protecting patients: Deep learning for electronic health records

In a paper published in npj Digital Medicine, we used deep learning models to make a broad set of predictions relevant to hospitalized patients using de-identified electronic health records, and showed how that model could render an accurate prediction 24 hours after a patient was admitted to the hospital. Read the article

Protecting patients from medication errors

Research shows that 2% of hospitalized patients experience serious preventable medication-related incidents that can be life-threatening, cause permanent harm, or result in death. Published in Clinical Pharmacology and Therapeutics, our best-performing AI model was able to anticipate physicians’ actual prescribing decisions 75% of the time, based on de-identified electronic health records and the doctors’ prescribing records. This is an early step toward testing the hypothesis that machine learning can support clinicians in ways that prevent mistakes and help keep patients safe. Read the article

Discover the latest

Learn more about our most recent developments from Google’s health-related research and initiatives.

Detecting Signs of Disease from External Images of the Eye

Detecting abnormal chest X-rays using deep learning

Improving genomic discovery with machine learning

How AI is advancing science and medicine

Google researchers have been exploring ways technologies could help advance the fields of medicine and science, working with scientists, doctors, and others in the field. In this video, we share a few research projects that have big potential.

We are continuously publishing new research in health

NIH News in Health

A monthly newsletter from the National Institutes of Health, part of the U.S. Department of Health and Human Services


January 2023


Health Capsule

Artificial Intelligence and Medical Research


Artificial intelligence, or AI, has been around for decades. In the past 20 years or so, it’s become a growing part of our lives. Researchers are now drawing on the power of AI to improve medicine and health care in innovative and far-reaching ways. NIH is on the cutting edge supporting these efforts.

At first, computers could simply do calculations based on human input. In AI, they learn to perform certain tasks. Some early forms of AI could play checkers or chess and even defeat human world champions. Others could recognize and convert speech to text.

Today, different forms of AI are being used to improve medical care. Researchers are exploring how AI could be used to sift through test results and image data. AI could then make recommendations to help with treatment decisions.

Some NIH-funded studies are using AI to develop “smart clothing” that can reduce low back pain. This technology could warn the wearer of unsafe body movements. Other studies are seeking ways to better manage blood glucose (or blood sugar) levels using wearable sensors.

Learn more about the different types of AI and their use in medical research.




Artificial intelligence in medicine is the use of machine learning models to help process medical data and give medical professionals important insights, improving health outcomes and patient experiences.

Thanks to recent advances in computer science and informatics, artificial intelligence (AI) is quickly becoming an integral part of modern healthcare. AI algorithms and other applications powered by AI are being used to support medical professionals in clinical settings and in ongoing research.

Currently, the most common roles for AI in medical settings are clinical decision support and imaging analysis. Clinical decision support tools help providers make decisions about treatments, medications, mental health and other patient needs by providing them with quick access to information or research that's relevant to their patient. In medical imaging, AI tools are being used to analyze CT scans, x-rays, MRIs and other images for lesions or other findings that a human radiologist might miss.

The challenges that the COVID-19 pandemic created for many health systems also led many healthcare organizations around the world to start field-testing new AI-supported technologies, such as algorithms designed to help monitor patients and AI-powered tools to screen COVID-19 patients.

The research and results of these tests are still being gathered, and the overall standards for the use of AI in medicine are still being defined. Yet opportunities for AI to benefit clinicians, researchers, and the patients they serve are steadily increasing. At this point, there is little doubt that AI will become a core part of the digital health systems that shape and support modern medicine.


There are numerous ways AI can positively impact the practice of medicine, whether it's through speeding up the pace of research or helping clinicians make better decisions.

Here are some examples of how AI could be used:

AI in disease detection and diagnosis

Unlike humans, AI never needs to sleep. Machine learning models could be used to observe the vital signs of patients receiving critical care and alert clinicians if certain risk factors increase. While medical devices like heart monitors can track vital signs, AI can collect the data from those devices and look for more complex conditions, such as sepsis. One IBM client has developed a predictive AI model for premature babies that is 75% accurate in detecting severe sepsis.
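To make the monitoring idea above concrete, here is a minimal sketch of a vitals-based alerting loop: train a classifier on labeled vital-sign snapshots, then score incoming readings against an alert threshold. The synthetic data, crude labels, and 0.7 threshold are all invented; this is not any vendor’s actual model.

```python
# Minimal sketch of a vitals-based early-warning loop. The training
# data is synthetic and the labels are a crude stand-in for real,
# clinician-adjudicated sepsis outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic snapshots: [heart_rate, resp_rate, temp_c, systolic_bp]
X = rng.normal([85, 18, 37.0, 115], [15, 4, 0.6, 15], size=(500, 4))
y = (X[:, 0] > 100) & (X[:, 1] > 22)   # toy proxy for deterioration labels

model = LogisticRegression().fit(X, y)

def check_patient(vitals, threshold=0.7):
    """Return True if the model's estimated risk crosses the alert line."""
    risk = model.predict_proba(np.asarray(vitals).reshape(1, -1))[0, 1]
    return risk > threshold

print(check_patient([130, 30, 38.9, 85]))  # extreme vitals: flags for review
```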

Personalized disease treatment

Precision medicine could become easier to support with virtual AI assistance. Because AI models can learn and retain preferences, AI has the potential to provide customized real-time recommendations to patients around the clock. Rather than having to repeat information to a new person each time, a healthcare system could offer patients around-the-clock access to an AI-powered virtual assistant that could answer questions based on the patient's medical history, preferences, and personal needs.

AI in medical imaging

AI is already playing a prominent role in medical imaging. Research has indicated that AI powered by artificial neural networks can be just as effective as human radiologists at detecting signs of breast cancer, as well as other conditions. In addition to helping clinicians spot early signs of disease, AI can also help make the staggering number of medical images that clinicians have to keep track of more manageable by detecting vital pieces of a patient's history and presenting the relevant images to them.

Clinical trial efficiency

A lot of time is spent during clinical trials assigning medical codes to patient outcomes and updating the relevant datasets. AI can help speed this process up by providing a quicker and more intelligent search for medical codes. Two IBM Watson Health clients recently found that with AI, they could reduce their number of medical code searches by more than 70%.

Accelerated drug development

Drug discovery is often one of the longest and most costly parts of drug development. AI could help reduce the costs of developing new medicines in two main ways: creating better drug designs and finding promising new drug combinations. With AI, many of the big data challenges facing the life sciences industry could be overcome.

Integrating medical AI into clinician workflows can give providers valuable context while they're making care decisions. A trained machine learning algorithm can help cut down on research time by giving clinicians valuable search results with evidence-based insights about treatments and procedures while the patient is still in the room with them.

There is some evidence that AI can help improve patient safety. A recent systematic review of 53 peer-reviewed studies examining the impact of AI on patient safety found that AI-powered decision support tools can help improve error detection and drug management.

There are a lot of potential ways AI could reduce costs across the healthcare industry. Some of the most promising opportunities include reducing medication errors, customized virtual health assistance, fraud prevention, and supporting more efficient administrative and clinical workflows.

Many patients think of questions outside of typical business hours. AI can help provide around-the-clock support through chatbots that can answer basic questions and give patients resources when their provider’s office isn’t open. AI could also potentially be used to triage questions and flag information for further review, which could help alert providers to health changes that need additional attention.

One major advantage of deep learning is that AI algorithms can use context to distinguish between different types of information. For example, if a clinical note includes a list of a patient's current medications along with a new medication their provider recommends, a well-trained AI algorithm can use natural language processing to identify which medications belong in the patient's medical history.
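A deliberately simple, rule-based sketch of that idea follows: split the note into sections, then pull medication-like tokens from each. The note text, section headers, and the toy drug-name pattern are invented; production systems use trained NLP models rather than regular expressions, but the example shows why surrounding context determines how a mention is filed.

```python
# Toy illustration of context-dependent medication extraction: the same
# kind of token means "current medication" in one section and "newly
# recommended" in another. Headers, note text, and the crude drug-name
# pattern are all invented for this sketch.
import re

note = """Current Medications: lisinopril 10 mg daily; metformin 500 mg BID.
Plan: start atorvastatin 20 mg nightly for hyperlipidemia."""

sections = {}
for header, body in re.findall(r"(Current Medications|Plan):\s*(.+)", note):
    # Toy pattern matching common drug-name endings; not a real lexicon.
    sections[header] = re.findall(r"\b([a-z]+in|[a-z]+ril)\b", body)

print(sections["Current Medications"])  # ['lisinopril', 'metformin']
print(sections["Plan"])                 # ['atorvastatin']
```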


Artificial intelligence is being used for everything from answering patient questions to assisting with surgeries and developing new pharmaceuticals.

Learn how AI can help address disparities in health outcomes that have been recognized for decades and continue to persist.




AI in Medicine—JAMA’s Focus on Clinical Outcomes, Patient-Centered Care, Quality, and Equity


The transformative role of artificial intelligence (AI) in health care has been forecast for decades [1], but only recently have technological advances appeared to capture some of the complexity of health and disease and of how health care is delivered [2]. The recent emergence of large language models (LLMs) in highly visible and interactive applications [3] has ignited interest in how new AI technologies can improve medicine and health for patients, the public, clinicians, health systems, and more. The rapidity of these developments, their potential impact on health care, and JAMA’s mission to publish the best science that advances medicine and public health compel the journal to renew its commitment to facilitating the rigorous scientific development, evaluation, and implementation of AI in health care.

JAMA editors are committed to promoting discoveries in AI science, rigorously evaluating new advances for their impact on the health of patients and populations, assessing the value such advances bring to health systems and society nationally and globally, and examining progress toward equity, fairness, and the reduction of historical medical bias. Moreover, JAMA’s mission is to ensure that these scientific advances are clearly communicated in a manner that enhances the collective understanding of the domain for all stakeholders in medicine and public health [4]. For the scientific development of AI to be most effective in improving medicine and public health, a platform is required that recognizes and supports the vision of rapid-cycle innovation and is also fundamentally grounded in the principles of reliable and reproducible clinical research that is ethically sound, respectful of rights to privacy, and representative of diverse populations [2,3,5].

The scientific development of AI can be viewed through the framework used to describe other health-related sciences [6]. In these domains, scientific discoveries begin with identifying biological mechanisms of disease. Then inventions that target these mechanisms are tested in progressively larger groups of people with and without disease to assess the effectiveness and safety of these interventions. These are then scaled to large studies evaluating outcomes for individuals and populations with the disease. This well-established scientific development framework can work for research in AI as well, with reportable stages as inventions and findings move from one stage to the next.

The editors seek original science that focuses on developing, testing, and deploying AI in studies that improve understanding of its effects on the health outcomes of patients and populations. The starting point is original research rigorously examining the challenges and potential solutions to optimizing clinical care with AI. In addition, to ensure our readers remain abreast of major scientific development across the entire continuum of scientific innovation, we invite reviews, special communications, and opinion articles that summarize the potential health care applications of emerging technology written for our journal’s broad readership.

While highlighting new developments, JAMA will focus on these essential areas:

Clinical care and outcomes: JAMA’s key interest is in clinically impactful science, and we will be most interested in studies demonstrating the effective translation of novel AI technologies to improve clinical care and outcomes. The potential for clinical impact will represent an important yardstick in our evaluation of all AI studies.

Patient-centered care: Early phases of scientific development have focused on directly measurable outcomes, reflecting the broader availability of data on these outcomes. However, how algorithmic care may shape the care experience of individuals and outcomes of interest to patients remains an understudied domain [7]. Implementing novel technology to enhance patient care and experience can only achieve its intended effect when patients believe that it offers them an advance—either through more time with their clinicians, more accessible information on their care decisions, or personalized interventions that target the outcomes of interest to them. We encourage studies that consider domains of autonomy, mobility, comfort, education, or other aspects of health not measured in traditional outcome assessments.

Health care quality: Advances in modern medicine are often stymied by the inability to translate evidence-based care to all patients. As clinicians increasingly provide care for more complex patient conditions in an ever-expanding therapeutic landscape, AI can play a crucial role in alleviating current challenges in optimizing clinical care [8], if stewarded appropriately when positioned in the medical enterprise [9]. We are interested in studies that assess the potential for AI technologies to improve access to high-quality health care for all patients.

Fairness in AI algorithms: We encourage the explicit assessment of the fairness of algorithms and their potential effect on health inequities. Through development on biased data sources or restricted deployment in privileged health care settings, algorithms can potentially exacerbate health outcome gaps across socioeconomic and sociocultural axes [9]. We are interested in studies that assess the fairness of algorithms, their potential impact on health disparities, and strategies to mitigate biases.

Medical education and clinician experience: In addition to patient-facing science, we seek investigations into the role of AI in addressing the challenges clinicians face in medical training and in the practice of medicine. The information overload through digital health technologies has posed an increasing burden on clinicians, with unintended consequences for their health and well-being. This remains a central area to target for AI in health. The investigations in this domain will evaluate the use of AI to enable a health care team and its members to function to the highest and best use of their expertise.

Global solutions: To advance health care beyond well-resourced countries, critical technologies will need to adapt to the infrastructural, technological, and health care milieu across the globe. We invite investigators to submit science that demonstrates and evaluates AI applications that enhance care within the limitations of low-resource settings. AI-driven method development that makes low-cost tools even more effective at diagnosis and treatment, and that guides the fair and appropriate allocation of limited resources, may help bridge health disparities across societies.

JAMA is one of the most widely circulated general medicine journals in the world and the flagship journal of the JAMA Network, which includes 11 specialty journals and JAMA Network Open. Submissions are welcome to all the JAMA Network journals. The Network also offers the advantage of coordinated publications, as well as amplification of findings to specific audiences of interest. With a mission to reach clinicians, scientists, patients, policymakers, and the general public globally, the value of JAMA and the JAMA Network for authors and readers interested in AI in medicine is clear.

We seek to engage scientists and other thought leaders advancing AI and medicine across clinical, computational, health policy, and public health domains. We invite authors to communicate directly with the editors about topics they believe can impact health care delivery and to connect with the editors to discuss the development of their science and our approach to its evaluation; such engagement is critical in this rapidly evolving field. We are committed to including diverse opinions and voices in the journal and urge experts from across the career spectrum and the globe to participate in the discourse. The editors are committed to communicating science effectively to a broad range of stakeholders across our digital, multimedia, and social media avenues. As AI promises to enable major health care transformation, JAMA and the JAMA Network are positioned to serve as a platform for the publication of this transformative work.

Corresponding Author: Kirsten Bibbins-Domingo, PhD, MD, MAS, JAMA ([email protected]).

Published Online: August 11, 2023. doi:10.1001/jama.2023.15481

Conflict of Interest Disclosures: Dr Khera reported receiving grants from NHLBI, Doris Duke Charitable Foundation, Bristol Myers Squibb, and Novo Nordisk, and serving as cofounder of Evidence2Health, outside the submitted work. Dr Butte reported being a cofounder and consultant to Personalis and NuMedii; consultant to Mango Tree Corporation, Samsung, 10x Genomics, Helix, Pathway Genomics, and Verinata (Illumina); has served on paid advisory panels or boards for Geisinger Health, Regenstrief Institute, Gerson Lehman Group, AlphaSights, Covance, Novartis, Genentech, Merck, and Roche; is a shareholder in Personalis and NuMedii; is a minor shareholder in Apple, Meta (Facebook), Alphabet (Google), Microsoft, Amazon, Snap, 10x Genomics, Illumina, Regeneron, Sanofi, Pfizer, Royalty Pharma, Moderna, Sutro, Doximity, BioNtech, Invitae, Pacific Biosciences, Editas Medicine, Nuna Health, Assay Depot, and Vet24seven, and several other nonhealth-related companies and mutual funds; and has received honoraria and travel reimbursement for invited talks from Johnson & Johnson, Roche, Genentech, Pfizer, Merck, Lilly, Takeda, Varian, Mars, Siemens, Optum, Abbott, Celgene, AstraZeneca, AbbVie, Westat, and many academic institutions, medical, or disease-specific foundations and associations, and health systems; receives royalty payments through Stanford University for several patents and other disclosures licensed to NuMedii and Personalis; and has had research funded by NIH, Peraton, Genentech, Johnson & Johnson, FDA, Robert Wood Johnson Foundation, Leon Lowenstein Foundation, Intervalien Foundation, Chan Zuckerberg Initiative, the Barbara and Gerson Bakar Foundation, and in the recent past, the March of Dimes, Juvenile Diabetes Research Foundation, California Governor’s Office of Planning and Research, California Institute for Regenerative Medicine, L’Oreal, and Progenity. No other disclosures were reported.


Khera R, Butte AJ, Berkwits M, et al. AI in Medicine—JAMA’s Focus on Clinical Outcomes, Patient-Centered Care, Quality, and Equity. JAMA. 2023;330(9):818-820. doi:10.1001/jama.2023.15481


Center for Artificial Intelligence in Medicine & Imaging

The AIMI Center

Stanford has established the AIMI Center to develop, evaluate, and disseminate artificial intelligence systems to benefit patients.  We conduct research that solves clinically important problems using machine learning and other AI techniques.

Curtis Langlotz

Director's Welcome

Back in 2017, I tweeted “radiologists who use AI will replace radiologists who don’t.”  The tweet has taken on a life of its own, perhaps because it has a double meaning.



AIMI Dataset Index

AIMI has launched a community-driven index of health AI datasets for machine learning in healthcare, part of our vision to catalyze the sharing of well-curated, de-identified clinical datasets.

AIMI Summer Research Internship & AI Bootcamp

We invite high school students to join us for a two-week virtual program exploring the intersection of AI and healthcare. Applications are due March 31, 2024.

AIMI Datasets for Research & Commercial Use

The AIMI Center is helping to catalyze outstanding open science by publicly releasing 20+ AI-ready clinical data sets (many with code and AI models) for research and commercial use.


Upcoming Events

  • AIMI Symposium 2024
  • IBIIS-AIMI Seminar: Mildred Cho, PhD
  • IBIIS-AIMI Seminar: Bo Wang, PhD


AI revolution in medicine

Alvin Powell

Harvard Staff Writer

It may lift personalized treatment, fill gaps in access to care, and cut red tape, but risks abound

Third in a series that taps the expertise of the Harvard community to examine the promise and potential pitfalls of the coming age of artificial intelligence and machine learning.

The news is bad: “I’m sorry, but you have cancer.”

Those unwelcome words sink in for a few minutes, and then your doctor begins describing recent advances in artificial intelligence, advances that let her compare your case to the cases of every other patient who’s ever had the same kind of cancer. She says she’s found the most effective treatment, one best suited for the specific genetic subtype of the disease in someone with your genetic background — truly personalized medicine.

And the prognosis is good.


It has taken time — some say far too long — but medicine stands on the brink of an AI revolution. In a recent article in the New England Journal of Medicine, Isaac Kohane, head of Harvard Medical School’s Department of Biomedical Informatics, and his co-authors say that AI will indeed make it possible to bring all medical knowledge to bear in service of any case. Properly designed AI also has the potential to make our health care system more efficient and less expensive, ease the paperwork burden that has more and more doctors considering new careers, fill the gaping holes in access to quality care in the world’s poorest places, and, among many other things, serve as an unblinking watchdog on the lookout for the medical errors that kill an estimated 200,000 people and cost $1.9 billion annually.

“I’m convinced that the implementation of AI in medicine will be one of the things that change the way care is delivered going forward,” said David Bates, chief of internal medicine at Harvard-affiliated Brigham and Women’s Hospital, professor of medicine at Harvard Medical School and of health policy and management at the Harvard T.H. Chan School of Public Health. “It’s clear that clinicians don’t make as good decisions as they could. If they had support to make better decisions, they could do a better job.”

Years after AI permeated other aspects of society, powering everything from creepily sticky online ads to financial trading systems to kids’ social media apps to our increasingly autonomous cars, the proliferation of studies showing the technology’s algorithms matching the skill of human doctors at a number of tasks signals its imminent arrival.

“I think it’s an unstoppable train in a specific area of medicine — showing true expert-level performance — and that’s in image recognition,” said Kohane, who is also the Marion V. Nelson Professor of Biomedical Informatics. “Once again medicine is slow to the mark. I’m no longer irritated but bemused that my kids, in their social sphere, are using more advanced AI than I use in my practice.”

But even those who see AI’s potential value recognize its potential risks. Poorly designed systems can misdiagnose. Software trained on data sets that reflect cultural biases will incorporate those blind spots. AI designed to both heal and make a buck might increase — rather than cut — costs, and programs that learn as they go can produce a raft of unintended consequences once they start interacting with unpredictable humans.

“I think the potential of AI and the challenges of AI are equally big,” said Ashish Jha, former director of the Harvard Global Health Institute and now dean of Brown University’s School of Public Health. “There are some very large problems in health care and medicine, both in the U.S. and globally, where AI can be extremely helpful. But the costs of doing it wrong are every bit as important as its potential benefits. … The question is: Will we be better off?”

Many believe we will, but caution that implementation has to be done thoughtfully, with recognition of not just AI’s strengths but also its weaknesses, and taking advantage of a range of viewpoints brought by experts in fields outside of medicine and computer science, including ethics and philosophy, sociology, psychology, behavioral economics, and, one day, those trained in the budding field of machine behavior, which seeks to understand the complex and evolving interaction of humans and machines that learn as they go.


“The challenge with machine behavior is that you’re not deploying an algorithm in a vacuum. You’re deploying it into an environment where people will respond to it, will adapt to it. If I design a scoring system to rank hospitals, hospitals will change,” said David Parkes, George F. Colony Professor of Computer Science, co-director of the Harvard Data Science Initiative, and one of the co-authors of a recent article in the journal Nature calling for the establishment of machine behavior as a new field. “Just as it would be challenging to understand how a new employee will do in a new work environment, it’s challenging to understand how machines will do in any kind of environment, because people will adapt to them, will change their behavior.”

Machine learning on the doorstep

Though excitement has been building about the latest wave of AI, the technology has been in medicine for decades in some form, Parkes said. As early as the 1970s, “expert systems” were developed that encoded knowledge in a variety of fields in order to make recommendations on appropriate actions in particular circumstances. Among them was Mycin, developed by Stanford University researchers to help doctors better diagnose and treat bacterial infections. Though Mycin was as good as human experts at this narrow chore, rule-based systems proved brittle, hard to maintain, and too costly, Parkes said.

The excitement over AI these days isn’t because the concept is new. It’s owing to rapid progress in a branch called machine learning, which takes advantage of recent advances in computer processing power and in big data that have made compiling and handling massive data sets routine. Machine learning algorithms — sets of instructions for how a program operates — have become sophisticated enough that they can learn as they go, improving performance without human intervention.

“The superpower of these AI systems is that they can look at all of these large amounts of data and hopefully surface the right information or the right predictions at the right time,” said Finale Doshi-Velez, John L. Loeb Associate Professor of Engineering and Applied Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). “Clinicians regularly miss various bits of information that might be relevant in the patient’s history. So that’s an example of a relatively low-hanging fruit that could potentially be very useful.”

Before being used, however, the algorithm has to be trained using a known data set. In medical imaging, a field where experts say AI holds the most promise soonest, the process begins with a review of thousands of images — of potential lung cancer, for example — that have been viewed and coded by experts. Using that feedback, the algorithm analyzes an image, checks the answer, and moves on, developing its own expertise.
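A minimal version of that train-on-labeled-images loop looks like the sketch below, here with a synthetic stand-in dataset. Real pipelines add expert-labeled corpora, augmentation, validation splits, and far larger networks; every shape and hyperparameter here is illustrative.

```python
# Minimal sketch of supervised training on labeled medical images:
# the model analyzes an image, checks the answer against the expert
# label, and updates itself. Data here is synthetic.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for expert-labeled scans: 1-channel 64x64 images, binary labels.
images = torch.rand(256, 1, 64, 64)
labels = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)   # compare prediction to expert label
        loss.backward()               # learn from the feedback
        optimizer.step()
```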

In recent years, increasing numbers of studies show machine-learning algorithms equal and, in some cases, surpass human experts in performance. In 2016, for example, researchers at Beth Israel Deaconess Medical Center reported that an AI-powered diagnostic program correctly identified cancer in pathology slides 92 percent of the time, just shy of trained pathologists’ 96 percent. Combining the two methods led to 99.5 percent accuracy.

More recently, in December 2018, researchers at Massachusetts General Hospital (MGH) and Harvard’s SEAS reported a system that was as accurate as trained radiologists at diagnosing intracranial hemorrhages, which lead to strokes. And in May 2019, researchers at Google and several academic medical centers reported an AI designed to detect lung cancer that was 94 percent accurate, beating six radiologists and recording both fewer false positives and false negatives.


One recent area where AI’s promise has remained largely unrealized is the global response to COVID-19, according to Kohane and Bates. Bates, who delivered a talk in August at the Riyadh Global Digital Health Summit titled “Use of AI in Weathering the COVID Storm,” said though there were successes, much of the response has relied on traditional epidemiological and medical tools.

One striking exception, he said, was the early detection of unusual pneumonia cases around a market in Wuhan, China, in late December by an AI system developed by Canada-based BlueDot. The detection, which would turn out to be SARS-CoV-2, came more than a week before the World Health Organization issued a public notice of the new virus.

“We did some things with artificial intelligence in this pandemic, but there is much more that we could do,” Bates told the online audience.

In comments in July at the online conference FutureMed, Kohane was more succinct: “It was a very, very unimpressive performance. … We in health care were shooting for the moon, but we actually had not gotten out of our own backyard.”

The two agree that the biggest impediment to greater use of AI in formulating COVID response has been a lack of reliable, real-time data. Data collection and sharing have been slowed by older infrastructure — some U.S. reports are still faxed to public health centers, Bates said — by lags in data collection, and by privacy concerns that short-circuit data sharing.

“COVID has shown us that we have a data-access problem at the national and international level that prevents us from addressing burning problems in national health emergencies,” Kohane said.  

A key success, Kohane said, may yet turn out to be the use of machine learning in vaccine development. We won’t likely know for some months which candidates proved most successful, but Kohane pointed out that the technology was used to screen large databases and select which viral proteins offered the greatest chance of success if blocked by a vaccine.

“It will play a much more important role going forward,” Bates said, expressing confidence that the current hurdles would be overcome. “It will be a key enabler of better management in the next pandemic.”

Corporations agree about that future promise and in recent years have been scrambling to join in. In February 2019, IBM Watson Health began a 10-year, $50 million partnership with Brigham and Women’s Hospital and Vanderbilt University Medical Center whose aim is to use AI on electronic health records and claims data to improve patient safety, precision medicine, and health equity. And in March 2019, Amazon awarded a $2 million AI research grant to Beth Israel in an effort to improve hospital efficiency, including patient care and clinical workflows.

A force multiplier?

A properly developed and deployed AI, experts say, will be akin to the cavalry riding in to help beleaguered physicians struggling with unrelenting workloads, high administrative burdens, and a tsunami of new clinical data.

Robert Truog, head of the HMS Center for Bioethics, the Frances Glessner Lee Professor of Legal Medicine, and a pediatric anesthesiologist at Boston Children’s Hospital, said the defining characteristic of his last decade in practice has been a rapid increase in information. While more data about patients and their conditions might be viewed as a good thing, it’s only good if it can be usefully managed.


“Over the last 10 years of my career the volume of data has absolutely gone exponential,” Truog said. “I would have one image on a patient per day: their morning X-ray. Now, if you get an MRI, it generates literally hundreds of images, using different kinds of filters, different techniques, all of which convey slightly different variations of information. It’s just impossible to even look at all of the images.

“Psychologists say that humans can handle four independent variables and when we get to five, we’re lost,” he said. “So AI is coming at the perfect time. It has the potential to rescue us from data overload.”

Given the technology’s facility with medical imaging analysis, Truog, Kohane, and others say AI’s most immediate impact will be in radiology and pathology, fields where those skills are paramount. And, though some see a future with fewer radiologists and pathologists, others disagree. The best way to think about the technology’s future in medicine, they say, is not as a replacement for physicians, but rather as a force-multiplier and a technological backstop that not only eases the burden on personnel at all levels, but makes them better.

“You’re not expecting this AI doctor that’s going to cure all ills but rather AI that provides support so better decisions can be made,” Doshi-Velez said. “Health is a very holistic space, and I don’t see AIs being anywhere near able to manage a patient’s health. It’s too complicated. There are too many factors, and there are too many factors that aren’t really recorded.”

In a September 2019 issue of the Annals of Surgery, Ozanan Meireles, director of MGH’s Surgical Artificial Intelligence and Innovation Laboratory, and general surgery resident Daniel Hashimoto offered a view of what such a backstop might look like. They described a system that they’re training to assist surgeons during stomach surgery by having it view thousands of videos of the procedure. Their goal is to produce a system that one day could virtually peer over a surgeon’s shoulder and offer advice in real time.

At the Harvard Chan School, meanwhile, a group of faculty members, including James Robins, Miguel Hernán, Sonia Hernández-Díaz, and Andrew Beam, are harnessing machine learning to identify new interventions that can improve health outcomes.

Their work, in the field of “causal inference,” seeks to identify different sources of the statistical associations that are routinely found in the observational studies common in public health. Those studies are good at identifying factors that are linked to each other but less able to identify cause and effect. Hernández-Díaz, a professor of epidemiology and co-director of the Chan School’s pharmacoepidemiology program, said causal inference can help interpret associations and recommend interventions.

For example, elevated enzyme levels in the blood can predict a heart attack, but lowering them will neither prevent nor treat the attack. A better understanding of causal relationships — and devising algorithms to sift through reams of data to find them — will let researchers obtain valid evidence that could lead to new treatments for a host of conditions.
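A tiny simulation makes the enzyme example concrete: a common cause (disease severity) drives both the enzyme level and the outcome, producing a strong association even though the enzyme has no causal effect on the outcome. The numbers are invented for illustration.

```python
# Toy illustration of why association is not causation: severity drives
# both the enzyme level and the heart attack outcome, so the two are
# strongly correlated even though the enzyme causes nothing.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
severity = rng.normal(size=n)                              # common cause
enzyme = severity + rng.normal(scale=0.5, size=n)
outcome = severity + rng.normal(scale=0.5, size=n)         # no enzyme effect

# Strong association despite zero causal effect of enzyme on outcome:
print(np.corrcoef(enzyme, outcome)[0, 1])                  # roughly 0.8

# Adjusting for the common cause removes the association entirely:
print(np.corrcoef(enzyme - severity, outcome - severity)[0, 1])  # near 0
```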

“We will make mistakes, but the momentum won’t go back the other way,” Hernández-Díaz said of AI’s increasing presence in medicine. “We will learn from them.”

Finding new interventions is one thing; designing them so health professionals can use them is another. Doshi-Velez’s work centers on “interpretable AI” and optimizing how doctors and patients can put it to work to improve health.

AI’s strong suit is what Doshi-Velez describes as “large, shallow data” while doctors’ expertise is the deep sense they may have of the actual patient. Together, the two make a potentially powerful combination, but one whose promise will go unrealized if the physician ignores AI’s input because it is rendered in hard-to-use or unintelligible form.

“I’m very excited about this team aspect and really thinking about the things that AI and machine-learning tools can provide an ultimate decision-maker — we’ve focused on doctors so far, but it could also be the patient — to empower them to make better decisions,” Doshi-Velez said.


While many point to AI’s potential to make the health care system work better, some say its potential to fill gaps in medical resources is also considerable. In regions far from major urban medical centers, local physicians could get assistance diagnosing and treating unfamiliar conditions, with an AI-driven consultant available that allows them to offer patients a specialist’s insight as they decide whether a particular procedure — or additional expertise — is needed.

Outside the developed world that capability has the potential to be transformative, according to Jha. AI-powered applications have the potential to vastly improve care in places where doctors are absent, and informal medical systems have risen to fill the need. Recent studies in India and China serve as powerful examples. In India’s Bihar state, for example, 86 percent of cases resulted in unneeded or harmful medicine being prescribed. Even in urban Delhi, 54 percent of cases resulted in unneeded or harmful medicine.

“If you are sick, is it better to go to the doctor or not? In 2019, in large parts of the world, it’s a wash. It’s unclear. And that is scary,” Jha said. “So it’s a low bar. People ask, ‘Will AI be helpful?’ I say we’d really have to screw up AI for it not to be helpful. Net-net, the opportunity for improvement over the status quo is massive.”

A double-edged sword?

Though the promise is great, the road ahead isn’t necessarily smooth. Even AI’s most ardent supporters acknowledge that the likely bumps and potholes, both seen and unseen, should be taken seriously.

One challenge is ensuring that high-quality data is used to train AI. If it is biased or otherwise flawed, that will be reflected in the performance. A second challenge is ensuring that the prejudices rife in society aren’t reflected in the algorithms, added by programmers unaware of those they may unconsciously hold.

That potential was a central point in a 2016 Wisconsin legal case, when an AI-driven, risk-assessment system for criminal recidivism was used in sentencing a man to six years in prison. The judge remarked that the “risk-assessment tools that have been utilized suggest that you’re extremely high risk to reoffend.”

The defendant challenged the sentence, arguing that the AI’s proprietary software — which he couldn’t examine — may have violated his right to be sentenced based on accurate information. The sentence was upheld by the state supreme court, but that case, and the spread of similar systems to assess pretrial risk, has generated national debate over the potential for injustices due to our increasing reliance on systems that have power over freedom or, in the health care arena, life and death, and that may be unfairly tilted or outright wrong.

“We have to recognize that getting diversity in the training of these algorithms is going to be incredibly important, otherwise we will be in some sense pouring concrete over whatever current distortions exist in practice, such as those due to socioeconomic status, ethnicity, and so on,” Kohane said.

Also highlighted by the case is the “black box” problem. Since the algorithms are designed to learn and improve their performance over time, sometimes even their designers can’t be sure how they arrive at a recommendation or diagnosis, a feature that leaves some uncomfortable.


“If you start applying it, and it’s wrong, and we have no ability to see that it’s wrong and to fix it, you can cause more harm than good,” Jha said. “The more confident we get in technology, the more important it is to understand when humans can override these things. I think the Boeing 737 Max example is a classic example. The system said the plane is going up, and the pilots saw it was going down but couldn’t override it.”

Jha said a similar scenario could play out in the developing world should, for example, a community health worker see something that makes him or her disagree with a recommendation made by a big-name company’s AI-driven app. In such a situation, being able to understand how the app’s decision was made and how to override it is essential.

“If you see a frontline community health worker in India disagree with a tool developed by a big company in Silicon Valley, Silicon Valley is going to win,” Jha said. “And that’s potentially a dangerous thing.”

Researchers at SEAS and MGH’s Radiology Laboratory of Medical Imaging and Computation are at work on the two problems. The AI-based diagnostic system to detect intracranial hemorrhages unveiled in December 2019 was designed to be trained on hundreds, rather than thousands, of CT scans. The more manageable number makes it easier to ensure the data is of high quality, according to Hyunkwang Lee, a SEAS doctoral student who worked on the project with colleagues including Sehyo Yune, a former postdoctoral research fellow at MGH Radiology and co-first author of a paper on the work, and Synho Do, senior author, HMS assistant professor of radiology, and director of the lab.

“We ensured the data set is of high quality, enabling the AI system to achieve a performance similar to that of radiologists,” Lee said.

Second, Lee and colleagues figured out a way to provide a window into an AI’s decision-making, cracking open the black box. The system was designed to show a set of reference images most similar to the CT scan it analyzed, allowing a human doctor to review and check the reasoning.
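The retrieval step can be as simple as nearest-neighbor search in an embedding space: encode each reference scan once, encode the new scan, and return the most similar labeled cases for the clinician to inspect. The sketch below uses random vectors as stand-ins for a real encoder’s output; it illustrates the retrieval idea, not the team’s actual system.

```python
# Sketch of the "show similar reference cases" idea: embed each scan,
# then retrieve the nearest labeled neighbors for a human to review.
# The embeddings here are random stand-ins for a real encoder's output.
import numpy as np

rng = np.random.default_rng(0)
reference_embeddings = rng.normal(size=(1000, 128))   # labeled case library
query = rng.normal(size=128)                          # the scan under review

def top_k_similar(query, library, k=5):
    """Cosine similarity between the query and every reference case."""
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    return np.argsort(lib @ q)[::-1][:k]

print(top_k_similar(query, reference_embeddings))  # indices of cases to show
```

Surfacing the neighbors alongside the prediction gives the clinician something checkable: if the retrieved cases look nothing like the scan at hand, that is a visible signal the model’s reasoning may be off.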

Jonathan Zittrain, Harvard’s George Bemis Professor of Law and director of the Berkman Klein Center for Internet and Society, said that, done wrong, AI in health care could be analogous to the cancer-causing asbestos that was used for decades in buildings across the U.S., with widespread harmful effects not immediately apparent. Zittrain pointed out that image analysis software, while potentially useful in medicine, is also easily fooled. By changing a few pixels of an image of a cat — still clearly a cat to human eyes — MIT students prompted Google image software to identify it, with 100 percent certainty, as guacamole. Further, a well-known study by researchers at MIT and Stanford showed that three commercial facial-recognition programs had both gender and skin-type biases.

Ezekiel Emanuel, a professor of medical ethics and health policy at the University of Pennsylvania’s Perelman School of Medicine and author of a recent Viewpoint article in the Journal of the American Medical Association, argued that those anticipating an AI-driven health care transformation are likely to be disappointed. Though he acknowledged that AI will likely be a useful tool, he said it won’t address the biggest problem: human behavior. Though they know better, people fail to exercise and eat right, and continue to smoke and drink too much. Behavior issues also apply to those working within the health care system, where mistakes are routine.

“We need fundamental behavior change on the part of these people. That’s why everyone is frustrated: Behavior change is hard,” Emanuel said.

Susan Murphy, professor of statistics and of computer science, agrees and is trying to do something about it. She’s focusing her efforts on AI-driven mobile apps with the aim of reinforcing healthy behaviors for people who are recovering from addiction or dealing with weight issues, diabetes, smoking, or high blood pressure, conditions for which the personal challenge persists day by day, hour by hour.

The sensors included in ordinary smartphones, augmented by data from personal fitness devices such as the ubiquitous Fitbit, have the potential to give a well-designed algorithm ample information to take on the role of a health care angel on your shoulder.

The tricky part, Murphy said, is to truly personalize the reminders. A big part of that, she said, is understanding how and when to nudge — not during a meeting, for example, or when you’re driving a car, or even when you’re already exercising, so as to best support adopting healthy behaviors.

“How can we provide support for you in a way that doesn’t bother you so much that you’re not open to help in the future?” Murphy said. “What our algorithms do is they watch how responsive you are to a suggestion. If there’s a reduction in responsivity, they back off and come back later.”

The apps can use sensors on your smartphone to figure out what’s going on around you. An app may know you’re in a meeting from your calendar, or talking more informally from ambient noise its microphone detects. It can tell from the phone’s GPS how far you are from a gym or an AA meeting or whether you are driving and so should be left alone.
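Murphy's algorithms aren't published here, but the behavior she describes — context gating plus responsiveness-based backoff — can be sketched in a few lines. The class below is a toy illustration; the context fields, window size, and thresholds are all assumptions, not her method:

```python
import random

class NudgeScheduler:
    """Toy sketch: track how often recent nudges get a response and
    suppress prompts when responsiveness drops or the sensed context
    says 'leave me alone'."""

    def __init__(self, backoff_threshold=0.2, window=10):
        self.history = []                    # 1 = user responded, 0 = ignored
        self.backoff_threshold = backoff_threshold
        self.window = window

    def responsiveness(self):
        recent = self.history[-self.window:]
        return sum(recent) / len(recent) if recent else 1.0

    def should_nudge(self, context):
        # Context gating: no prompts while driving, in a meeting, or exercising.
        if context.get("driving") or context.get("in_meeting") or context.get("exercising"):
            return False
        # Back off when the user has stopped responding; retry occasionally.
        if self.responsiveness() < self.backoff_threshold:
            return random.random() < 0.1     # come back later, gently
        return True

    def record(self, responded):
        self.history.append(1 if responded else 0)
```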

Trickier still, Murphy said, is how to handle moments when the AI knows more about you than you do. Heart rate sensors and a phone’s microphone might tell an AI that you’re stressed out when your goal is to live more calmly. You, however, are focused on an argument you’re having, not its physiological effects and your long-term goals. Does the app send a nudge, given that it’s equally possible that you would take a calming breath or angrily toss your phone across the room?

Working out such details is difficult but key, Murphy said, to designing algorithms that are truly helpful, that know you well yet are only as intrusive as is welcome, and that, in the end, help you achieve your goals.


Enlisting allies

For AI to achieve its promise in health care, algorithms and their designers have to understand the potential pitfalls. To avoid them, Kohane said it’s critical that AIs are tested under real-world circumstances before wide release.

Similarly, Jha said it’s important that such systems aren’t just released and forgotten. They should be reevaluated periodically to ensure they’re functioning as expected, which would allow for faulty AIs to be fixed or halted altogether.

Several experts said that drawing from other disciplines — in particular ethics and philosophy — may also help.

Programs like Embedded EthiCS at SEAS and the Harvard Philosophy Department, which provides ethics training to the University’s computer science students, seek to provide those who will write tomorrow’s algorithms with an ethical and philosophical foundation that will help them recognize bias — in society and themselves — and teach them how to avoid it in their work.

Disciplines dealing with human behavior — sociology, psychology, behavioral economics — not to mention experts on policy, government regulation, and computer security, may also offer important insights.

“The place we’re likely to fall down is the way in which recommendations are delivered,” Bates said. “If they’re not delivered in a robust way, providers will ignore them. It’s very important to work with human factors specialists and systems engineers about the way that suggestions are made to patients.”

Bringing these fields together to better understand how AIs work once they’re “in the wild” is the mission of what Parkes sees as a new discipline of machine behavior. Computer scientists and health care experts should seek lessons from sociologists, psychologists, and cognitive behaviorists in answering questions about whether an AI-driven system is working as planned, he said.

“How useful was it that the AI system proposed that this medical expert should talk to this other medical expert?” Parkes said. “Was that intervention followed? Was it a productive conversation? Would they have talked anyway? Is there any way to tell?”


  • Open access
  • Published: 18 April 2024

Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research

  • James Shaw 1,13
  • Joseph Ali 2,3
  • Caesar A. Atuire 4,5
  • Phaik Yeong Cheah 6
  • Armando Guio Español 7
  • Judy Wawira Gichoya 8
  • Adrienne Hunt 9
  • Daudi Jjingo 10
  • Katherine Littler 9
  • Daniela Paolotti 11
  • Effy Vayena 12

BMC Medical Ethics volume 25, Article number: 46 (2024)


Abstract

Background

The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice. In this paper we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022.

Methods

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, research ethics committee members and other actors to engage with challenges and opportunities specifically related to research ethics. In 2022 the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations, 16 governance presentations, and a series of small group and large group discussions. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. In this paper, we highlight central insights arising from GFBR 2022.

Results

We describe the significance of four thematic insights arising from the forum: (1) Appropriateness of building AI, (2) Transferability of AI systems, (3) Accountability for AI decision-making and outcomes, and (4) Individual consent. We then describe eight recommendations for governance leaders to enhance the ethical governance of AI in global health research, addressing issues such as AI impact assessments, environmental values, and fair partnerships.

Conclusions

The 2022 Global Forum on Bioethics in Research illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.


Introduction

The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice [ 1 , 2 , 3 ]. Beyond the growing number of AI applications being implemented in health care, capabilities of AI models such as Large Language Models (LLMs) expand the potential reach and significance of AI technologies across health-related fields [ 4 , 5 ]. Discussion about effective, ethical governance of AI technologies has spanned a range of governance approaches, including government regulation, organizational decision-making, professional self-regulation, and research ethics review [ 6 , 7 , 8 ]. In this paper, we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health research, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022. Although applications of AI for research, health care, and public health are diverse and advancing rapidly, the insights generated at the forum remain highly relevant from a global health perspective. After summarizing important context for work in this domain, we highlight categories of ethical issues emphasized at the forum for attention from a research ethics perspective internationally. We then outline strategies proposed for research, innovation, and governance to support more ethical AI for global health.

In this paper, we adopt the definition of AI systems provided by the Organization for Economic Cooperation and Development (OECD) as our starting point. Their definition states that an AI system is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” [ 9 ]. The conceptualization of an algorithm as helping to constitute an AI system, along with hardware, other elements of software, and a particular context of use, illustrates the wide variety of ways in which AI can be applied. We have found it useful to differentiate applications of AI in research as those classified as “AI systems for discovery” and “AI systems for intervention”. An AI system for discovery is one that is intended to generate new knowledge, for example in drug discovery or public health research in which researchers are seeking potential targets for intervention, innovation, or further research. An AI system for intervention is one that directly contributes to enacting an intervention in a particular context, for example informing decision-making at the point of care or assisting with accuracy in a surgical procedure.

The mandate of the GFBR is to take a broad view of what constitutes research and its regulation in global health, with special attention to bioethics in Low- and Middle-Income Countries. AI as a group of technologies demands such a broad view. AI development for health occurs in a variety of environments, including universities and academic health sciences centers where research ethics review remains an important element of the governance of science and innovation internationally [ 10 , 11 ]. In these settings, research ethics committees (RECs; also known by different names such as Institutional Review Boards or IRBs) make decisions about the ethical appropriateness of projects proposed by researchers and other institutional members, ultimately determining whether a given project is allowed to proceed on ethical grounds [ 12 ].

However, research involving AI for health also takes place in large corporations and smaller scale start-ups, which in some jurisdictions fall outside the scope of research ethics regulation. In the domain of AI, the question of what constitutes research also becomes blurred. For example, is the development of an algorithm itself considered a part of the research process? Or only when that algorithm is tested under the formal constraints of a systematic research methodology? In this paper we take an inclusive view, in which AI development is included in the definition of research activity and within scope for our inquiry, regardless of the setting in which it takes place. This broad perspective characterizes the approach to “research ethics” we take in this paper, extending beyond the work of RECs to include the ethical analysis of the wide range of activities that constitute research as the generation of new knowledge and intervention in the world.

Ethical governance of AI in global health

The ethical governance of AI for global health has been widely discussed in recent years. The World Health Organization (WHO) released its guidelines on ethics and governance of AI for health in 2021, endorsing a set of six ethical principles and exploring the relevance of those principles through a variety of use cases. The WHO guidelines also provided an overview of AI governance, defining governance as covering “a range of steering and rule-making functions of governments and other decision-makers, including international health agencies, for the achievement of national health policy objectives conducive to universal health coverage.” (p. 81) The report usefully provided a series of recommendations related to governance of seven domains pertaining to AI for health: data, benefit sharing, the private sector, the public sector, regulation, policy observatories/model legislation, and global governance. The report acknowledges that much work is yet to be done to advance international cooperation on AI governance, especially related to prioritizing voices from Low- and Middle-Income Countries (LMICs) in global dialogue.

One important point emphasized in the WHO report that reinforces the broader literature on global governance of AI is the distribution of responsibility across a wide range of actors in the AI ecosystem. This is especially important to highlight when focused on research for global health, which is specifically about work that transcends national borders. Alami et al. (2020) discussed the unique risks raised by AI research in global health, ranging from the unavailability of data in many LMICs required to train locally relevant AI models to the capacity of health systems to absorb new AI technologies that demand the use of resources from elsewhere in the system. These observations illustrate the need to identify the unique issues posed by AI research for global health specifically, and the strategies that can be employed by all those implicated in AI governance to promote ethically responsible use of AI in global health research.

RECs and the regulation of research involving AI

RECs represent an important element of the governance of AI for global health research, and thus warrant further commentary as background to our paper. Despite the importance of RECs, foundational questions have been raised about their capabilities to accurately understand and address ethical issues raised by studies involving AI. Rahimzadeh et al. (2023) outlined how RECs in the United States are under-prepared to align with recent federal policy requiring that RECs review data sharing and management plans with attention to the unique ethical issues raised in AI research for health [ 13 ]. Similar research in South Africa identified variability in understanding of existing regulations and ethical issues associated with health-related big data sharing and management among research ethics committee members [ 14 , 15 ]. The effort to address harms accruing to groups or communities as opposed to individuals whose data are included in AI research has also been identified as a unique challenge for RECs [ 16 , 17 ]. Doerr and Meeder (2022) suggested that current regulatory frameworks for research ethics might actually prevent RECs from adequately addressing such issues, as they are deemed out of scope of REC review [ 16 ]. Furthermore, research in the United Kingdom and Canada has suggested that researchers using AI methods for health tend to distinguish between ethical issues and social impact of their research, adopting an overly narrow view of what constitutes ethical issues in their work [ 18 ].

The challenges for RECs in adequately addressing ethical issues in AI research for health care and public health exceed a straightforward survey of ethical considerations. As Ferretti et al. (2021) contend, some capabilities of RECs adequately cover certain issues in AI-based health research, such as the common occurrence of conflicts of interest where researchers who accept funds from commercial technology providers are implicitly incentivized to produce results that align with commercial interests [ 12 ]. However, some features of REC review require reform to adequately meet ethical needs. Ferretti et al. outlined weaknesses of RECs that are longstanding and those that are novel to AI-related projects, proposing a series of directions for development that are regulatory, procedural, and complementary to REC functionality. The work required on a global scale to update the REC function in response to the demands of research involving AI is substantial.

These issues take greater urgency in the context of global health [ 19 ]. Teixeira da Silva (2022) described the global practice of “ethics dumping”, where researchers from high income countries bring ethically contentious practices to RECs in low-income countries as a strategy to gain approval and move projects forward [ 20 ]. Although not yet systematically documented in AI research for health, risk of ethics dumping in AI research is high. Evidence is already emerging of practices of “health data colonialism”, in which AI researchers and developers from large organizations in high-income countries acquire data to build algorithms in LMICs to avoid stricter regulations [ 21 ]. This specific practice is part of a larger collection of practices that characterize health data colonialism, involving the broader exploitation of data and the populations they represent primarily for commercial gain [ 21 , 22 ]. As an additional complication, AI algorithms trained on data from high-income contexts are unlikely to apply in straightforward ways to LMIC settings [ 21 , 23 ]. In the context of global health, there is widespread acknowledgement about the need to not only enhance the knowledge base of REC members about AI-based methods internationally, but to acknowledge the broader shifts required to encourage their capabilities to more fully address these and other ethical issues associated with AI research for health [ 8 ].

Although RECs are an important part of the story of the ethical governance of AI for global health research, they are not the only part. The responsibilities of supra-national entities such as the World Health Organization, national governments, organizational leaders, commercial AI technology providers, health care professionals, and other groups continue to be worked out internationally. In this context of ongoing work, examining issues that demand attention and strategies to address them remains an urgent and valuable task.

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, REC members and other actors to engage with challenges and opportunities specifically related to research ethics. Each year the GFBR meeting includes a series of case studies and keynotes presented in plenary format to an audience of approximately 100 people who have applied and been competitively selected to attend, along with small-group breakout discussions to advance thinking on related issues. The specific topic of the forum changes each year, with past topics including ethical issues in research with people living with mental health conditions (2021), genome editing (2019), and biobanking/data sharing (2018). The forum is intended to remain grounded in the practical challenges of engaging in research ethics, with special interest in low resource settings from a global health perspective. A post-meeting fellowship scheme is open to all LMIC participants, providing a unique opportunity to apply for funding to further explore and address the ethical challenges that are identified during the meeting.

In 2022, the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations (both short and long form) reporting on specific initiatives related to research ethics and AI for health, and 16 governance presentations (both short and long form) reporting on actual approaches to governing AI in different country settings. A keynote presentation from Professor Effy Vayena addressed the topic of the broader context for AI ethics in a rapidly evolving field. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. The 2-day forum addressed a wide range of themes. The conference report provides a detailed overview of each of the specific topics addressed while a policy paper outlines the cross-cutting themes (both documents are available at the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ ). As opposed to providing a detailed summary in this paper, we aim to briefly highlight central issues raised, solutions proposed, and the challenges facing the research ethics community in the years to come.

In this way, our primary aim in this paper is to present a synthesis of the challenges and opportunities raised at the GFBR meeting and in the planning process, followed by our reflections as a group of authors on their significance for governance leaders in the coming years. We acknowledge that the views represented at the meeting and in our results are a partial representation of the universe of views on this topic; however, the GFBR leadership invested a great deal of resources in convening a deeply diverse and thoughtful group of researchers and practitioners working on themes of bioethics related to AI for global health including those based in LMICs. We contend that it remains rare to convene such a strong group for an extended time and believe that many of the challenges and opportunities raised demand attention for more ethical futures of AI for health. Nonetheless, our results are primarily descriptive and are thus not explicitly grounded in a normative argument. We make an effort in the Discussion section to contextualize our results by describing their significance and connecting them to broader efforts to reform global health research and practice.

Uniquely important ethical issues for AI in global health research

Presentations and group dialogue over the course of the forum raised several issues for consideration, and here we describe four overarching themes for the ethical governance of AI in global health research. Brief descriptions of each issue can be found in Table  1 . Reports referred to throughout the paper are available at the GFBR website provided above.

The first overarching thematic issue relates to the appropriateness of building AI technologies in response to health-related challenges in the first place. Case study presentations referred to initiatives where AI technologies were highly appropriate, such as in ear shape biometric identification to more accurately link electronic health care records to individual patients in Zambia (Alinani Simukanga). Although important ethical issues were raised with respect to privacy, trust, and community engagement in this initiative, the AI-based solution was appropriately matched to the challenge of accurately linking electronic records to specific patient identities. In contrast, forum participants raised questions about the appropriateness of an initiative using AI to improve the quality of handwashing practices in an acute care hospital in India (Niyoshi Shah), which led to staff gaming the algorithm. Overall, participants acknowledged the dangers of techno-solutionism, in which AI researchers and developers treat AI technologies as the most obvious solutions to problems that in actuality demand much more complex strategies to address [ 24 ]. However, forum participants agreed that RECs in different contexts have differing degrees of power to raise issues of the appropriateness of an AI-based intervention.

The second overarching thematic issue related to whether and how AI-based systems transfer from one national health context to another. One central issue raised by a number of case study presentations related to the challenges of validating an algorithm with data collected in a local environment. For example, one case study presentation described a project that would involve the collection of personally identifiable data for sensitive group identities, such as tribe, clan, or religion, in the jurisdictions involved (South Africa, Nigeria, Tanzania, Uganda and the US; Gakii Masunga). Doing so would enable the team to ensure that those groups were adequately represented in the dataset to ensure the resulting algorithm was not biased against specific community groups when deployed in that context. However, some members of these communities might desire to be represented in the dataset, whereas others might not, illustrating the need to balance autonomy and inclusivity. It was also widely recognized that collecting these data is an immense challenge, particularly when historically oppressive practices have led to a low-trust environment for international organizations and the technologies they produce. It is important to note that in some countries such as South Africa and Rwanda, it is illegal to collect information such as race and tribal identities, re-emphasizing the importance of cultural awareness and of avoiding “one size fits all” solutions.

The third overarching thematic issue is related to understanding accountabilities for both the impacts of AI technologies and governance decision-making regarding their use. Where global health research involving AI leads to longer-term harms that might fall outside the usual scope of issues considered by a REC, who is to be held accountable, and how? This question was raised as one that requires much further attention, with law being mixed internationally regarding the mechanisms available to hold researchers, innovators, and their institutions accountable over the longer term. However, it was recognized in breakout group discussion that many jurisdictions are developing strong data protection regimes related specifically to international collaboration for research involving health data. For example, Kenya’s Data Protection Act requires that any internationally funded projects have a local principal investigator who will hold accountability for how data are shared and used [ 25 ]. The issue of research partnerships with commercial entities was raised by many participants in the context of accountability, pointing toward the urgent need for clear principles related to strategies for engagement with commercial technology companies in global health research.

The fourth and final overarching thematic issue raised here is that of consent. The issue of consent was framed by the widely shared recognition that models of individual, explicit consent might not produce a supportive environment for AI innovation that relies on the secondary uses of health-related datasets to build AI algorithms. Given this recognition, approaches such as community oversight of health data uses were suggested as a potential solution. However, the details of implementing such community oversight mechanisms require much further attention, particularly given the unique perspectives on health data in different country settings in global health research. Furthermore, some uses of health data do continue to require consent. One case study of South Africa, Nigeria, Kenya, Ethiopia and Uganda suggested that when health data are shared across borders, individual consent remains necessary when data is transferred from certain countries (Nezerith Cengiz). Broader clarity is necessary to support the ethical governance of health data uses for AI in global health research.

Recommendations for ethical governance of AI in global health research

Dialogue at the forum led to a range of suggestions for promoting ethical conduct of AI research for global health, related to the various roles of actors involved in the governance of AI research broadly defined. The strategies are written for actors we refer to as “governance leaders”, those people distributed throughout the AI for global health research ecosystem who are responsible for ensuring the ethical and socially responsible conduct of global health research involving AI (including researchers themselves). These include RECs, government regulators, health care leaders, health professionals, corporate social accountability officers, and others. Enacting these strategies would bolster the ethical governance of AI for global health more generally, enabling multiple actors to fulfill their roles related to governing research and development activities carried out across multiple organizations, including universities, academic health sciences centers, start-ups, and technology corporations. Specific suggestions are summarized in Table  2 .

First, forum participants suggested that governance leaders, including RECs, should remain up to date on recent advances in the regulation of AI for health. Regulation of AI for health advances rapidly and takes on different forms in jurisdictions around the world. RECs play an important role in governance, but only a partial role; it was deemed important for RECs to acknowledge how they fit within a broader governance ecosystem in order to more effectively address the issues within their scope. Not only RECs but organizational leaders responsible for procurement, researchers, and commercial actors should all commit to efforts to remain up to date about the relevant approaches to regulating AI for health care and public health in jurisdictions internationally. In this way, governance practice can keep pace with advances in regulation.

Second, forum participants suggested that governance leaders should focus on ethical governance of health data as a basis for ethical global health AI research. Health data are considered the foundation of AI development, being used to train AI algorithms for various uses [ 26 ]. By focusing on ethical governance of health data generation, sharing, and use, multiple actors will help to build an ethical foundation for AI development among global health researchers.

Third, forum participants believed that governance processes should incorporate AI impact assessments where appropriate. An AI impact assessment is the process of evaluating the potential effects, both positive and negative, of implementing an AI algorithm on individuals, society, and various stakeholders, generally over time frames specified in advance of implementation [ 27 ]. Although not all types of AI research in global health would warrant an AI impact assessment, this is especially relevant for those studies aiming to implement an AI system for intervention into health care or public health. Organizations such as RECs can use AI impact assessments to boost understanding of potential harms at the outset of a research project, encouraging researchers to more deeply consider potential harms in the development of their study.

Fourth, forum participants suggested that governance decisions should incorporate the use of environmental impact assessments, or at least the incorporation of environment values when assessing the potential impact of an AI system. An environmental impact assessment involves evaluating and anticipating the potential environmental effects of a proposed project to inform ethical decision-making that supports sustainability [ 28 ]. Although a relatively new consideration in research ethics conversations [ 29 ], the environmental impact of building technologies is a crucial consideration for the public health commitment to environmental sustainability. Governance leaders can use environmental impact assessments to boost understanding of potential environmental harms linked to AI research projects in global health over both the shorter and longer terms.

Fifth, forum participants suggested that governance leaders should require stronger transparency in the development of AI algorithms in global health research. Transparency was considered essential in the design and development of AI algorithms for global health to ensure ethical and accountable decision-making throughout the process. Furthermore, whether and how researchers have considered the unique contexts into which such algorithms may be deployed can be surfaced through stronger transparency, for example in describing what primary considerations were made at the outset of the project and which stakeholders were consulted along the way. Sharing information about data provenance and methods used in AI development will also enhance the trustworthiness of the AI-based research process.

Sixth, forum participants suggested that governance leaders can encourage or require community engagement at various points throughout an AI project. It was considered that engaging patients and communities is crucial in AI algorithm development to ensure that the technology aligns with community needs and values. However, participants acknowledged that this is not a straightforward process. Effective community engagement requires lengthy commitments to meeting with and hearing from diverse communities in a given setting, and demands a particular set of skills in communication and dialogue that are not possessed by all researchers. Encouraging AI researchers to begin this process early and build long-term partnerships with community members is a promising strategy to deepen community engagement in AI research for global health. One notable recommendation was that research funders have an opportunity to incentivize and enable community engagement with funds dedicated to these activities in AI research in global health.

Seventh, forum participants suggested that governance leaders can encourage researchers to build strong, fair partnerships between institutions and individuals across country settings. In a context of longstanding imbalances in geopolitical and economic power, fair partnerships in global health demand a priori commitments to share benefits related to advances in medical technologies, knowledge, and financial gains. Although enforcement of this point might be beyond the remit of RECs, commentary will encourage researchers to consider stronger, fairer partnerships in global health in the longer term.

Eighth, it became evident that it is necessary to explore new forms of regulatory experimentation given the complexity of regulating a technology of this nature. In addition, the health sector has a series of particularities that make it especially complicated to generate rules that have not been previously tested. Several participants highlighted the desire to promote spaces for experimentation such as regulatory sandboxes or innovation hubs in health. These spaces can have several benefits for addressing issues surrounding the regulation of AI in the health sector, such as: (i) increasing the capacities and knowledge of health authorities about this technology; (ii) identifying the major problems surrounding AI regulation in the health sector; (iii) establishing possibilities for exchange and learning with other authorities; (iv) promoting innovation and entrepreneurship in AI in health; and (v) identifying the need to regulate AI in this sector and updating other existing regulations.

Ninth and finally, forum participants believed that the capabilities of governance leaders need to evolve to better incorporate expertise related to AI in ways that make sense within a given jurisdiction. With respect to RECs, for example, it might not make sense for every REC to recruit a member with expertise in AI methods. Rather, it will make more sense in some jurisdictions to consult with members of the scientific community with expertise in AI when research protocols are submitted that demand such expertise. Furthermore, RECs and other approaches to research governance in jurisdictions around the world will need to evolve in order to adopt the suggestions outlined above, developing processes that apply specifically to the ethical governance of research using AI methods in global health.

Research involving the development and implementation of AI technologies continues to grow in global health, posing important challenges for ethical governance of AI in global health research around the world. In this paper we have summarized insights from the 2022 GFBR, focused specifically on issues in research ethics related to AI for global health research. We summarized four thematic challenges for governance related to AI in global health research and nine suggestions arising from presentations and dialogue at the forum. In this brief discussion section, we present an overarching observation about power imbalances that frames efforts to evolve the role of governance in global health research, and then outline two important opportunity areas as the field develops to meet the challenges of AI in global health research.

Dialogue about power is not unfamiliar in global health, especially given recent contributions exploring what it would mean to de-colonize global health research, funding, and practice [ 30 , 31 ]. Discussions of research ethics applied to AI research in global health contexts are deeply infused with power imbalances. The existing context of global health is one in which high-income countries primarily located in the “Global North” charitably invest in projects taking place primarily in the “Global South” while recouping knowledge, financial, and reputational benefits [ 32 ]. With respect to AI development in particular, recent examples of digital colonialism frame dialogue about global partnerships, raising attention to the role of large commercial entities and global financial capitalism in global health research [ 21 , 22 ]. Furthermore, the power of governance organizations such as RECs to intervene in the process of AI research in global health varies widely around the world, depending on the authorities assigned to them by domestic research governance policies. These observations frame the challenges outlined in our paper, highlighting the difficulties associated with making meaningful change in this field.

Despite these overarching challenges of the global health research context, there are clear strategies for progress in this domain. Firstly, AI innovation is rapidly evolving, which means approaches to the governance of AI for health are rapidly evolving too. Such rapid evolution presents an important opportunity for governance leaders to clarify their vision and influence over AI innovation in global health research, boosting the expertise, structure, and functionality required to meet the demands of research involving AI. Secondly, the research ethics community has strong international ties, linked to a global scholarly community that is committed to sharing insights and best practices around the world. This global community can be leveraged to coordinate efforts to produce advances in the capabilities and authorities of governance leaders to meaningfully govern AI research for global health given the challenges summarized in our paper.

Limitations

Our paper includes two specific limitations that we address explicitly here. First, it is still early in the development of applications of AI for use in global health, and as such, the global community has had limited opportunity to learn from experience. For example, far fewer case studies, which detail experiences with the actual implementation of an AI technology, were submitted to GFBR 2022 than expected. In contrast, many more governance reports were submitted, which detail the processes and outputs of governance processes that anticipate the development and dissemination of AI technologies. This observation represents both a success and a challenge. It is a success that so many groups are engaging in anticipatory governance of AI technologies, exploring evidence of their likely impacts and governing technologies in novel and well-designed ways. It is a challenge that there is little experience to build upon of the successful implementation of AI technologies in ways that limit harms while promoting innovation. Further experience with AI technologies in global health will contribute to revising and enhancing the challenges and recommendations we have outlined in our paper.

Second, global trends in the politics and economics of AI technologies are evolving rapidly. Although some nations are advancing detailed policy approaches to regulating AI more generally, including for uses in health care and public health, the impacts of corporate investments in AI and political responses related to governance remain to be seen. The excitement around large language models (LLMs) and large multimodal models (LMMs) has drawn deeper attention to the challenges of regulating AI in any general sense, opening dialogue about health sector-specific regulations. The direction of this global dialogue, strongly linked to high-profile corporate actors and multi-national governance institutions, will strongly influence the development of boundaries around what is possible for the ethical governance of AI for global health. We have written this paper at a point when these developments are proceeding rapidly, and as such, we acknowledge that our recommendations will need updating as the broader field evolves.

Ultimately, coordination and collaboration between many stakeholders in the research ethics ecosystem will be necessary to strengthen the ethical governance of AI in global health research. The 2022 GFBR illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.

Data availability

All data and materials analyzed to produce this paper are available on the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ .

References

1. Clark P, Kim J, Aphinyanaphongs Y. Marketing and US Food and Drug Administration clearance of artificial intelligence and machine learning enabled software in and as medical devices: a systematic review. JAMA Netw Open. 2023;6(7):e2321792.

2. Potnis KC, Ross JS, Aneja S, Gross CP, Richman IB. Artificial intelligence in breast cancer screening: evaluation of FDA device regulation and future recommendations. JAMA Intern Med. 2022;182(12):1306–12.

3. Siala H, Wang Y. SHIFTing artificial intelligence to be responsible in healthcare: a systematic review. Soc Sci Med. 2022;296:114782.

4. Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, et al. A large language model for electronic health records. NPJ Digit Med. 2022;5(1):194.

5. Meskó B, Topol EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med. 2023;6(1):120.

6. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–99.

7. Minssen T, Vayena E, Cohen IG. The challenges for regulating medical use of ChatGPT and other large language models. JAMA. 2023.

8. Ho CWL, Malpani R. Scaling up the research ethics framework for healthcare machine learning as global health ethics and governance. Am J Bioeth. 2022;22(5):36–8.

9. Yeung K. Recommendation of the Council on Artificial Intelligence (OECD). Int Leg Mater. 2020;59(1):27–34.

10. Maddox TM, Rumsfeld JS, Payne PR. Questions for artificial intelligence in health care. JAMA. 2019;321(1):31–2.

11. Dzau VJ, Balatbat CA, Ellaissi WF. Revisiting academic health sciences systems a decade later: discovery to health to population to society. Lancet. 2021;398(10318):2300–4.

12. Ferretti A, Ienca M, Sheehan M, Blasimme A, Dove ES, Farsides B, et al. Ethics review of big data research: what should stay and what should be reformed? BMC Med Ethics. 2021;22(1):1–13.

13. Rahimzadeh V, Serpico K, Gelinas L. Institutional review boards need new skills to review data sharing and management plans. Nat Med. 2023;1–3.

14. Kling S, Singh S, Burgess TL, Nair G. The role of an ethics advisory committee in data science research in sub-Saharan Africa. South Afr J Sci. 2023;119(5–6):1–3.

15. Cengiz N, Kabanda SM, Esterhuizen TM, Moodley K. Exploring perspectives of research ethics committee members on the governance of big data in sub-Saharan Africa. South Afr J Sci. 2023;119(5–6):1–9.

16. Doerr M, Meeder S. Big health data research and group harm: the scope of IRB review. Ethics Hum Res. 2022;44(4):34–8.

17. Ballantyne A, Stewart C. Big data and public-private partnerships in healthcare and research: the application of an ethics framework for big data in health and research. Asian Bioeth Rev. 2019;11(3):315–26.

18. Samuel G, Chubb J, Derrick G. Boundaries between research ethics and ethical research use in artificial intelligence health research. J Empir Res Hum Res Ethics. 2021;16(3):325–37.

19. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. 2021;22(1):1–17.

20. Teixeira da Silva JA. Handling ethics dumping and neo-colonial research: from the laboratory to the academic literature. J Bioethical Inq. 2022;19(3):433–43.

21. Ferryman K. The dangers of data colonialism in precision public health. Glob Policy. 2021;12:90–2.

22. Couldry N, Mejias UA. Data colonialism: rethinking big data’s relation to the contemporary subject. Telev New Media. 2019;20(4):336–49.

23. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021.

24. Metcalf J, Moss E. Owning ethics: corporate logics, Silicon Valley, and the institutionalization of ethics. Soc Res Int Q. 2019;86(2):449–76.

25. Office of the Data Protection Commissioner, Kenya. Data Protection Act [Internet]. 2021 [cited 2023 Sep 30]. https://www.odpc.go.ke/dpa-act/.

26. Sharon T, Lucivero F. Introduction to the special theme: the expansion of the health data ecosystem – rethinking data ethics and governance. Big Data Soc. 2019;6(2):2053951719852969.

27. Reisman D, Schultz J, Crawford K, Whittaker M. Algorithmic impact assessments: a practical framework for public agency accountability. AI Now Institute. 2018.

28. Morgan RK. Environmental impact assessment: the state of the art. Impact Assess Proj Apprais. 2012;30(1):5–14.

29. Samuel G, Richie C. Reimagining research ethics to include environmental sustainability: a principled approach, including a case study of data-driven health research. J Med Ethics. 2023;49(6):428–33.

30. Kwete X, Tang K, Chen L, Ren R, Chen Q, Wu Z, et al. Decolonizing global health: what should be the target of this movement and where does it lead us? Glob Health Res Policy. 2022;7(1):3.

31. Abimbola S, Asthana S, Montenegro C, Guinto RR, Jumbam DT, Louskieter L, et al. Addressing power asymmetries in global health: imperatives in the wake of the COVID-19 pandemic. PLoS Med. 2021;18(4):e1003604.

32. Benatar S. Politics, power, poverty and global health: systems and frames. Int J Health Policy Manag. 2016;5(10):599.


Acknowledgements

We would like to acknowledge the outstanding contributions of the attendees of GFBR 2022 in Cape Town, South Africa. This paper is authored by members of the GFBR 2022 Planning Committee. We would like to acknowledge additional members Tamra Lysaght, National University of Singapore, and Niresh Bhagwandin, South African Medical Research Council, for their input during the planning stages and as reviewers of the applications to attend the Forum.

Funding

This work was supported by Wellcome [222525/Z/21/Z], the US National Institutes of Health, the UK Medical Research Council (part of UK Research and Innovation), and the South African Medical Research Council through funding to the Global Forum on Bioethics in Research.

Author information

Authors and Affiliations

1. Department of Physical Therapy, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada

2. Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD, USA

3. Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA

4. Department of Philosophy and Classics, University of Ghana, Legon-Accra, Ghana

5. Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford, UK

6. Mahidol Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand

7. Berkman Klein Center, Harvard University, Bogotá, Colombia

8. Department of Radiology and Informatics, Emory University School of Medicine, Atlanta, GA, USA

9. Health Ethics & Governance Unit, Research for Health Department, Science Division, World Health Organization, Geneva, Switzerland

10. African Center of Excellence in Bioinformatics and Data Intensive Science, Infectious Diseases Institute, Makerere University, Kampala, Uganda

11. ISI Foundation, Turin, Italy

12. Department of Health Sciences and Technology, ETH Zurich, Zürich, Switzerland

13. Joint Centre for Bioethics, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada


Contributions

JS led the writing and contributed to conceptualization and analysis. All authors (JS, JA, CA, PYC, AE, JWG, AH, DJ, KL, DP, and EV) contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper.

Corresponding author

Correspondence to James Shaw.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Shaw, J., Ali, J., Atuire, C.A. et al. Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research. BMC Med Ethics 25, 46 (2024). https://doi.org/10.1186/s12910-024-01044-w


Received: 31 October 2023

Accepted: 01 April 2024

Published: 18 April 2024

DOI: https://doi.org/10.1186/s12910-024-01044-w


Keywords

  • Artificial intelligence
  • Machine learning
  • Research ethics
  • Global health



Title: Capabilities of Gemini Models in Medicine

Abstract: Excellence in a wide variety of medical applications poses considerable challenges for AI, requiring advanced reasoning, access to up-to-date medical knowledge and understanding of complex multimodal data. Gemini models, with strong general capabilities in multimodal and long-context reasoning, offer exciting possibilities in medicine. Building on these core strengths of Gemini, we introduce Med-Gemini, a family of highly capable multimodal models that are specialized in medicine with the ability to seamlessly use web search, and that can be efficiently tailored to novel modalities using custom encoders. We evaluate Med-Gemini on 14 medical benchmarks, establishing new state-of-the-art (SoTA) performance on 10 of them, and surpassing the GPT-4 model family on every benchmark where a direct comparison is viable, often by a wide margin. On the popular MedQA (USMLE) benchmark, our best-performing Med-Gemini model achieves SoTA performance of 91.1% accuracy, using a novel uncertainty-guided search strategy. On 7 multimodal benchmarks including NEJM Image Challenges and MMMU (health & medicine), Med-Gemini improves over GPT-4V by an average relative margin of 44.5%. We demonstrate the effectiveness of Med-Gemini's long-context capabilities through SoTA performance on a needle-in-a-haystack retrieval task from long de-identified health records and medical video question answering, surpassing prior bespoke methods using only in-context learning. Finally, Med-Gemini's performance suggests real-world utility by surpassing human experts on tasks such as medical text summarization, alongside demonstrations of promising potential for multimodal medical dialogue, medical research and education. Taken together, our results offer compelling evidence for Med-Gemini's potential, although further rigorous evaluation will be crucial before real-world deployment in this safety-critical domain.
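The abstract names an "uncertainty-guided search strategy" without detailing it; one plausible reading is a self-consistency loop in which disagreement among sampled answers triggers retrieval. Below is a hedged sketch of that reading, where `generate` and `web_search` are hypothetical callables rather than any real Med-Gemini API:

```python
from collections import Counter

def answer_with_uncertainty_guided_search(question, generate, web_search,
                                          n_samples=5, agree_threshold=0.8):
    """Sample several answers, treat disagreement as uncertainty, and only
    invoke retrieval when the model is unsure. All names, thresholds, and
    the loop structure are illustrative assumptions, not the paper's method."""
    answers = [generate(question) for _ in range(n_samples)]
    top_answer, top_count = Counter(answers).most_common(1)[0]
    if top_count / n_samples >= agree_threshold:
        return top_answer                    # confident: answer directly
    # Uncertain: retrieve evidence and re-answer conditioned on it.
    evidence = web_search(question)
    regenerated = [generate(question, context=evidence) for _ in range(n_samples)]
    return Counter(regenerated).most_common(1)[0][0]
```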


Latest Issue

Psychiatry’s new frontiers

Hope amid crisis


This new issue of Stanford Medicine magazine reports on emerging research and innovative treatments to improve mental health.

Cover Illustration by Jules Julien


Mental health


Reasons for hope

Solutions for the mental health crisis emerge through innovative research, diagnostics and treatments


Neuropsychiatry and sandwiches

How a silo-busting program to probe neuropsychiatric disease was hatched over lunch


Going beyond ‘How often do you feel blue?’

AI emotional assessments are aimed at diagnosing mental illness more accurately and quickly


The early days of a psychedelic resurgence?

Research with illicit drugs to treat anxiety, depression and PTSD inches forward


Organoid brain models yield insights into resilience

Genes influence our ability to bounce back from stress


Beyond the psychiatrist’s office

Empowering community-based mental health for young people


‘We could be changing lives’

The importance of getting precise with mental health, treating it as health


Let’s talk about it

After losing a loved one to depression, a mental health expert finds the courage to tell her story


New wave psychiatry

Rolling back mental illness with electromagnetism


How moms and dads can provide mental health care

Center helps parents guide their children through psychological challenges


Toward a psychiatry of resilience

As long as we don’t have too much of it, stress makes us stronger, more competent and able to make better decisions


Culture in care

Stanford Medicine mental health professionals speak to inequities

Letter from the Dean

Advancing mental health crisis solutions

Stanford Medicine researchers, clinicians and medical students are well-positioned to lead in psychiatric discovery and treatments.

Exploring the realms of medicine and healing


An unusual school celebrates its first century

K-12 hospital school gives kids a chance for normalcy and connections to other kids during long recoveries


The power of humility and optimism in health equity advocacy

A conversation with Chelsea Clinton


The bigger the bucks, the bigger the (dopamine) bang

One reason we make bad decisions


How young is your heart?

Progress toward sussing out the biological ages of our various organs

Upfront

Upfront is a quick look at the latest developments from Stanford Medicine.

Reviving cognition

Device restores brain function lost to injury

Lab-grown heart tissue yields insights

Stem cell-derived heart tissue used to study tachycardia

Alexa, manage my diabetes

Voice-activated AI app runs on smart speaker

Autoimmunity’s XX factor

Molecule can set off immune response in women

Teen eating disorders

Hospitalizations climb with broader diagnostic criteria

Telomeres’ obesity connection

Longer telomeres in children linked to exercise and healthy diet

Equalizing cancer screening

Alternative approach for lung cancer screening outperforms national guidelines




May 2, 2024


Four state-of-the-art AI search engines for histopathology images may not be ready for clinical use

by University of California, Los Angeles


Four proposed state-of-the-art image search engines for automating the search and retrieval of digital histopathology slides performed inadequately for routine clinical care, new research suggests.

The artificial intelligence algorithms powering the histopathology image databases performed worse than expected, with some achieving less than 50% accuracy, which is not suitable for clinical practice, said Dr. Helen Shang, a third-year internal medicine resident and incoming hematology-oncology fellow at the David Geffen School of Medicine at UCLA.

"Currently, there are many AI algorithms being developed for medical tasks but there are fewer efforts directed on rigorous, external validations," said Shang, who co-led the study with Dr. Mohammad Sadegh Nasr of the University of Texas at Arlington. "The field has also yet to standardize how AI algorithms should be best tested prior to clinical adoption."

The paper is published in the journal NEJM AI .

As it now stands, pathologists manually search for and retrieve histopathology images, which is very time-consuming. As a result, there has been growing interest in developing automated search and retrieval systems for digitized cancer images.

The researchers designed a series of experiments to evaluate the accuracy of search engine results on tissue and subtype retrieval tasks, using real-world UCLA cases and larger, unseen datasets. The four engines examined were Yottixel, SISH, RetCCL, and HSHR. Each takes a different approach to indexing, database generation, ranking, and retrieval of images.
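As a rough illustration of what a tissue-retrieval evaluation like this involves, here is a minimal harness; the `engine.query` interface and the labeled `slides` collection are assumptions for the sketch, not the study's actual code or any engine's real API.

```python
from collections import Counter

def top_k_retrieval_accuracy(engine, slides, k=5):
    """Majority-vote retrieval accuracy: a query slide counts as correct
    when the most common tissue label among its top-k retrieved slides
    matches its true label. engine.query(image, k) is an assumed
    interface returning (slide_id, label) pairs."""
    correct = 0
    for slide in slides:
        retrieved = engine.query(slide.image, k=k)  # [(slide_id, label), ...]
        labels = [label for _, label in retrieved]
        majority_label = Counter(labels).most_common(1)[0][0]
        correct += (majority_label == slide.label)
    return correct / len(slides)
```

On a harness of this kind, an accuracy below 0.5 corresponds to the "less than 50% accuracy" the researchers describe above.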

Overall, the researchers found inconsistent results across the four algorithms: for instance, Yottixel performed best on breast tissue, while RetCCL had the highest performance on brain tissue. A panel of pathologists also rated the search results as low to average in quality, with several visible errors.

The researchers are devising new guidelines to standardize the clinical validation of AI tools, Shang said. They are also developing new algorithms that draw on a variety of data types to produce more reliable and accurate predictions.

"Our studies show that despite amazing progress in artificial intelligence over the past decade, significant improvements are still needed prior to widespread uptake in medicine," Shang said. "These improvements are essential in order to avoid doing patients harm while maximizing the benefits of artificial intelligence to society."



Published: 22 February 2024

To do no harm — and the most good — with AI in health care

  • Carey Beth Goldberg
  • Laura Adams
  • David Blumenthal
  • Patricia Flatley Brennan
  • Noah Brown (ORCID: orcid.org/0000-0002-8490-1775)
  • Atul J. Butte (ORCID: orcid.org/0000-0002-7433-2740)
  • Morgan Cheatham
  • Dave deBronkart
  • Jennifer Dixon
  • Jeffrey Drazen
  • Barbara J. Evans
  • Sara M. Hoffman
  • Chris Holmes (ORCID: orcid.org/0000-0002-6667-4943)
  • Peter Lee
  • Arjun Kumar Manrai
  • Gilbert S. Omenn (ORCID: orcid.org/0000-0002-8976-6074)
  • Jonathan B. Perlin
  • Rachel Ramoni
  • Guillermo Sapiro
  • Rupa Sarkar
  • Harpreet Sood
  • Effy Vayena
  • Isaac S. Kohane (ORCID: orcid.org/0000-0003-2192-5160)
  • the RAISE Consortium

Nature Medicine volume 30, pages 623–627 (2024)


  • Health care

Drawing from real-life scenarios and insights shared at the RAISE (Responsible AI for Social and Ethical Healthcare) conference, we highlight the critical need for AI in health care (AIH) to primarily benefit patients and address current shortcomings in health care systems such as medical errors and access disparities.



Acknowledgements

The RAISE symposium was supported by grants from the Harvard Medical School Center for Bioethics, the Harvard Medical School Office of the Dean, the Gordon and Betty Moore Foundation, The Health Foundation, Microsoft Corporation, and Apple.

Author information

Authors and Affiliations

Massachusetts Institute of Technology, Cambridge, MA, USA

  • Carey Beth Goldberg

National Academy of Medicine, Washington, DC, USA

Laura Adams

Department of Health Policy and Management, Harvard University, T.H. Chan School of Public Health, Boston, MA, USA

David Blumenthal

National Library of Medicine, Bethesda, MD, USA

Patricia Flatley Brennan

University of Wisconsin–Madison, Madison, WI, USA

Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA

Noah Brown, Sara M. Hoffman, Arjun Kumar Manrai, Isaac S. Kohane, William Gordon & Amelia Li Min Tan

Bakar Computational Health Sciences Institute, University of California, San Francisco, San Francisco, CA, USA

Atul J. Butte

Warren Alpert Medical School of Brown University, Providence, RI, USA

Morgan Cheatham

e-Patient Dave, LLC, Nashua, NH, USA

Dave deBronkart

Society for Participatory Medicine, Pembroke, MA, USA

The Health Foundation, London, UK

Jennifer Dixon

NEJM Group, Waltham, MA, USA

Jeffrey Drazen & Arjun Kumar Manrai

Levin College of Law and Wertheim College of Engineering, University of Florida, Gainesville, FL, USA

Barbara J. Evans

Department of Statistics and Nuffield Department of Medicine, University of Oxford, Oxford, UK

Chris Holmes

The Alan Turing Institute, London, UK

Microsoft Corporation, Redmond, WA, USA

University of Michigan Health System, University of Michigan, Ann Arbor, MI, USA

Gilbert S. Omenn

The Joint Commission, Oakbrook Terrace, IL, USA

Jonathan B. Perlin

U.S. Department of Veterans Affairs, Washington, DC, USA

Rachel Ramoni

Duke University, Durham, NC, USA

Guillermo Sapiro

Apple, New York, NY, USA

The Lancet Ltd., Lancet Digital Health, London, UK

Rupa Sarkar

National Health Service England, Hurley Group, Redditch, UK

Harpreet Sood

Huma, London, UK

ETH Zurich, Zurich, Switzerland

Effy Vayena

Brigham and Women’s Hospital, Boston, MA, USA

Emily Alsentzer & Lisa Soleymani Lehmann

Harvard Medical School, Boston, MA, USA

Emily Alsentzer, Michael Chernew, Alexander Hoffmann & Lisa Soleymani Lehmann

MITRE Corporation, Bedford, MA, USA

Brian Anderson

Coalition for Health AI, Boston, MA, USA

The Ivan and Francesca Berkowitz Family Living Laboratory Collaboration at Harvard Medical School, Boston, USA

Ran D. Balicer

Clalit Research Institute, Innovation Division, Clalit Health Services, Tel Aviv, Israel

Ran D. Balicer & Reut Ohana

Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA, USA

Andrew L. Beam

Wyss Center for Bio and Neuroengineering, Geneva, Switzerland

Erwin Bottinger

Harvard Medical School Center for Bioethics, Boston, MA, USA

Rebecca W. Brendel & Edward M. Hundert

Harvard-MIT Program in Health Sciences and Technology, Cambridge, MA, USA

Payal Chandak & Elizabeth Healey

TriNetX, Cambridge, MA, USA

Arnaub Chatterjee

Healthcare Performance Lab INSERM U1290 Lyon 1 University, Lyon, France

Antoine Duclos

Health Data Department Lyon University Hospital, Lyon, France

Center for Surgery and Public Health, Brigham and Women’s Hospital, Boston, MA, USA

Perelman School of Medicine of the University of Pennsylvania, Philadelphia, PA, USA

Lee A. Fleisher

Department of Medicine, Brigham and Women’s Hospital, Boston, MA, USA

William Gordon

University of Antioquia, Alma Máter Hospital, Medellín, Colombia

Alejandro Hernández-Arango

Undiagnosed Diseases Network Foundation, Washington, DC, USA

Michele Kathleen Herndon

BMJ Leader, London, UK

Indra Joshi

Palantir, London, UK

Institute for Implementation Science in Health Care, University of Zurich, Zurich, Switzerland

Tobias Kowatsch

School of Medicine, University of St. Gallen, St. Gallen, Switzerland

Centre for Digital Health Interventions, Department of Management, Technology, and Economics at ETH Zurich, Zurich, Switzerland

Bessemer Venture Partners, New York, NY, USA

Stephen Kraus

Harvard T.H. Chan School of Public Health, Boston, MA, USA

Lisa Soleymani Lehmann

Universitat de Barcelona, Artificial Intelligence in Medicine Lab, Department of Mathematics and Computer Science, Barcelona, Spain

Karim Lekadir

Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain

Kaiser Permanente Division of Research, Oakland, CA, USA

Vincent X. Liu

MaineHealth, Portland, ME, USA

Daniel J. Nigrin

Tufts School of Medicine, Boston, MA, USA

Stanford Health Care, Palo Alto, CA, USA

Nigam H. Shah

Guy’s & St Thomas’ NHS Foundation Trust, London, UK

Haris Shuaib

Chan Zuckerberg Initiative, Menlo Park, CA, USA

Tania Simoncelli

School of Law, University of KwaZulu-Natal, Durban, South Africa

Donrich Thaldar

Institute for Experiential AI and Bouve College of Health Sciences, Northeastern University, Boston, MA, USA

Eugene Tunik

Gordon and Betty Moore Foundation, San Francisco, USA

Milken Institute, Washington, DC, USA

John Wilbanks

Broad Institute, Cambridge, MA, USA

Shanghai Digital Medicine Innovation Center, Shanghai, China

Ruijin Hospital, Shanghai, China

Shanghai Jiao Tong University School of Medicine, Shanghai, China

NEJM AI, Boston, MA, USA

Jianfei Zhao

Jiahui Medical Research and Education, Shanghai, China

Full author list, including RAISE Consortium members:

  • Carey Beth Goldberg
  • Laura Adams
  • David Blumenthal
  • Patricia Flatley Brennan
  • Noah Brown
  • Atul J. Butte
  • Morgan Cheatham
  • Dave deBronkart
  • Jennifer Dixon
  • Jeffrey Drazen
  • Barbara J. Evans
  • Sara M. Hoffman
  • Chris Holmes
  • Peter Lee
  • Arjun Kumar Manrai
  • Gilbert S. Omenn
  • Jonathan B. Perlin
  • Rachel Ramoni
  • Guillermo Sapiro
  • Rupa Sarkar
  • Harpreet Sood
  • Effy Vayena
  • Isaac S. Kohane
  • Emily Alsentzer
  • Brian Anderson
  • Ran D. Balicer
  • Andrew L. Beam
  • Erwin Bottinger
  • Rebecca W. Brendel
  • Payal Chandak
  • Arnaub Chatterjee
  • Michael Chernew
  • Antoine Duclos
  • Lee A. Fleisher
  • William Gordon
  • Elizabeth Healey
  • Alejandro Hernández-Arango
  • Michele Kathleen Herndon
  • Alexander Hoffmann
  • Edward M. Hundert
  • Indra Joshi
  • Tobias Kowatsch
  • Stephen Kraus
  • Lisa Soleymani Lehmann
  • Karim Lekadir
  • Vincent X. Liu
  • Daniel J. Nigrin
  • Reut Ohana
  • Nigam H. Shah
  • Haris Shuaib
  • Tania Simoncelli
  • Amelia Li Min Tan
  • Donrich Thaldar
  • Eugene Tunik
  • Tommy Wang
  • John Wilbanks
  • Yuchen Xu
  • Jianfei Zhao

Corresponding author

Correspondence to Isaac S. Kohane.

Ethics declarations

Competing interests

The authors declare the following competing interests: L.A. reports a senior advisor contractor role with the National Academy of Medicine; consulting fees from the National Hospice & Palliative Care Organization, X4 Health, David Nagel, MD and Nagel Pain Community; speakers’ fees from Gotham Artists, Executive Speakers Bureau, St. Luke’s Health System and Salem Health; corporate board member or advisor or stock options at T2 Biosystems and TMA Precision Health. D.B. reports grants from the Commonwealth Fund; advisory role at Aledade, Carol Emmott Foundation, New England Journal of Medicine, New England Journal of Medicine AI and Josiah Macy Foundation; stock options at Aledade, Nova Cor and Xhale. A.J.B. reports grants from the National Institutes of Health, Merck, Genentech, Peraton, Priscilla Chan and Mark Zuckerberg and Bakar Family Foundation; royalties, licenses or consulting fees from NuMedii, Personalis, Progeny, Samsung, Gerson Lehman Group, Dartmouth, Gladstone Institute, Boston Children’s Hospital and Mango Tree Corporation; honoraria or speakers or expert testimony fees from Boston Children’s Hospital, Johns Hopkins University, Endocrine Society, Alliance for Academic Internal Medicine, Roche, Children’s Hospital of Philadelphia, University of Pittsburgh Medical Center, Cleveland Clinic, University of Utah, Society of Toxicology, Mayo Clinic, Pfizer, Cerner, Johnson and Johnson and The Transplantation Society; Foresight; patents issued or pending with Personalis, NuMedii, Carmenta, Progenity, Stanford, University of California, San Francisco; participation on a Data Safety Monitoring Board or Advisory Board at Washington University in Saint Louis, Regenstrief Institute, Geisinger and University of Michigan; stock or stock options with Sophia Genetics, Allbirds, Coursera, Digital Ocean, Rivian, Invitae, Editas Medicine, Pacific Biosciences, Snowflake, Meta, Alphabet, 10x Genomics, Snap, Regeneron, Doximity, Netflix, Illumina, Royalty Pharma, Starbucks, Sutro Biopharma, Pfizer, Biontech, Advanced Micro Devices, Amazon, Microsoft, Moderna, Tesla, Apple, Personalis and Lilly. D. deB. reports a speaker honorarium from the IHI Leadership Summit. J. Drazen reports an unpaid role as Member of the Board of Trustees of Nantucket Cottage Hospital. B.J.E. reports grants from the National Institutes of Health Common Fund’s Bridge2AI Patient-Focused Collaborative Hospital Repository Uniting Standards (CHoRUS) for Equitable AI, NIH OT2OD0327 (9/1/2022–8/31/2026); speakers’ honoraria from the American College of Legal Medicine, University of Minnesota and Columbia University School of Medicine; minor holdings of Amazon stock and Pfizer stock. S.M.H. reports support from Harvard Medical School and stock or stock options from PathAI, Inc. P.L. reports employment and stock options with Microsoft Corporation. G.S. reports support from Duke University and Apple; grants from the Simons Foundation, National Science Foundation and Office of Naval Research; stock or stock options with Apple. E.V. reports grants from the Swiss National Science Foundation and Botnar Foundation; consulting fees or other honoraria from Johns Hopkins University (consultant for its Bioethics Academy) and Roche Diagnostics; participation on a Data Safety Monitoring Board or Advisory Board for Merck (Digital Ethics Advisory Panel) and IQVIA (Ethics Advisory Panel); role as co-chair of the WHO expert group on ethics and governance of AI in health. M.C. reports a leadership or advisory role with the Coalition for Health AI. C.H. reports grants or contracts with Novo Nordisk; participation on a Data Safety Monitoring Board or Advisory Board at the Novo Nordisk Foundation, UK Biobank and MRC Advisory Board; leadership or advisory role at the CRUK Data Science Advisory Board. A.K.M. is a paid deputy editor at NEJM AI, a publication of the Massachusetts Medical Society. I.S.K. is Editor-in-Chief at NEJM AI, a publication of the Massachusetts Medical Society; he reports honoraria for lectures or other educational activities at the University of Massachusetts (Amherst), Morehouse, Simons Foundation, Cincinnati Children’s Hospital and University of Pennsylvania; board membership with Canary Medical, Pulse Data and Inovalon. Authors may have received financial support from their institutions or NEJM AI/MMS to attend the RAISE conference in person. C.B.G., P.F.B., J. Dixon, G.S.O., R.R., N.B., J.B.P., R.S. and H.S. report no conflicts of interest.


About this article

Cite this article

Goldberg, C.B., Adams, L., Blumenthal, D. et al. To do no harm — and the most good — with AI in health care. Nat Med 30, 623–627 (2024). https://doi.org/10.1038/s41591-024-02853-7


Published: 22 February 2024

Issue Date: March 2024

DOI: https://doi.org/10.1038/s41591-024-02853-7


This article is cited by

How to support the transition to AI-powered healthcare

Nature Medicine (2024)


ChatGPT fails at heart risk assessment


SPOKANE, Wash. – Despite ChatGPT’s reported ability to pass medical exams, new research indicates it would be unwise to rely on it for some health assessments, such as whether a patient with chest pain needs to be hospitalized.  

In a study involving thousands of simulated cases of patients with chest pain, ChatGPT provided inconsistent conclusions, returning different heart risk assessment levels for the exact same patient data. The generative AI system also failed to match the traditional methods physicians use to judge a patient’s cardiac risk. The findings were published in the journal PLOS ONE.

“ChatGPT was not acting in a consistent manner,” said lead author Dr. Thomas Heston, a researcher with Washington State University’s Elson S. Floyd College of Medicine. “Given the exact same data, ChatGPT would give a score of low risk, then next time an intermediate risk, and occasionally, it would go as far as giving a high risk.”

The authors believe the problem is likely due to the level of randomness built into the current version of the software, ChatGPT4, which helps it vary its responses to simulate natural language. This same randomness, however, does not work well for healthcare uses that require a single, consistent answer, Heston said.

“We found there was a lot of variation, and that variation in approach can be dangerous,” he said. “It can be a useful tool, but I think the technology is going a lot faster than our understanding of it, so it’s critically important that we do a lot of research, especially in these high-stakes clinical situations.”

Chest pain is a common complaint in emergency rooms, requiring doctors to rapidly assess the urgency of a patient’s condition. Some very serious cases are easy to identify by their symptoms, but lower-risk ones can be trickier, Heston said, especially when determining whether someone should be hospitalized for observation or sent home to receive outpatient care.

Currently, medical professionals often use one of two measures that go by the acronyms TIMI and HEART to assess heart risk. Heston likened these scales to calculators, each using a handful of variables including symptoms, health history and age. In contrast, an AI neural network like ChatGPT can weigh billions of variables quickly, meaning it could potentially analyze a complex situation faster and more thoroughly.
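To make the "calculator" analogy concrete, here is a minimal sketch of a HEART-style point calculator. The thresholds are paraphrased from the published score; this is illustrative only and not a clinical tool.

```python
def heart_score(history_pts, ecg_pts, age, risk_factor_count, troponin_ratio):
    """Illustrative HEART-style calculator (not for clinical use).
    history_pts and ecg_pts are clinician-assigned 0-2 points;
    troponin_ratio is the measured troponin divided by the upper
    normal limit. Each component adds 0-2 points, totaling 0-10."""
    age_pts = 0 if age < 45 else 1 if age < 65 else 2
    risk_pts = 0 if risk_factor_count == 0 else 1 if risk_factor_count <= 2 else 2
    trop_pts = 0 if troponin_ratio <= 1 else 1 if troponin_ratio <= 3 else 2

    total = history_pts + ecg_pts + age_pts + risk_pts + trop_pts
    category = "low" if total <= 3 else "intermediate" if total <= 6 else "high"
    return total, category
```

Given identical inputs, a deterministic calculator like this returns the same score every time, which is exactly the property the study found ChatGPT lacked.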

For this study, Heston and colleague Dr. Lawrence Lewis of Washington University in St. Louis first generated three datasets of 10,000 randomized, simulated cases each. One dataset had the seven variables of the TIMI scale, the second included the five HEART scale variables, and the third had 44 randomized health variables. On the first two datasets, ChatGPT’s risk assessment disagreed with the fixed TIMI or HEART score on 45% to 48% of individual cases. For the third dataset, the researchers ran the cases four times each and found that ChatGPT often did not agree with itself, returning different assessment levels for the same cases 44% of the time.
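The self-consistency check in that third experiment can be sketched as follows; the `ask_model` wrapper around a chat-model call is a hypothetical stand-in, not the authors' code.

```python
def self_agreement_rate(ask_model, cases, runs=4):
    """Re-run each simulated case several times and measure how often
    the model returns a single consistent risk level. ask_model(case)
    is assumed to return one of 'low', 'intermediate', or 'high'."""
    consistent = 0
    for case in cases:
        answers = {ask_model(case) for _ in range(runs)}  # distinct labels seen
        consistent += (len(answers) == 1)
    return consistent / len(cases)
```

By the study's numbers, a measure like this would have come out at roughly 56%: ChatGPT returned a single consistent answer on only about 56% of cases, contradicting itself on the other 44%.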

Despite the negative findings of this study, Heston sees great potential for generative AI in health care – with further development. For instance, assuming privacy standards could be met, entire medical records could be loaded into the program, and in an emergency setting, a doctor could ask ChatGPT to give the most pertinent facts about a patient quickly. Also, for difficult, complex cases, doctors could ask the program to generate several possible diagnoses. “ChatGPT could be excellent at creating a differential diagnosis and that’s probably one of its greatest strengths,” said Heston. “If you don’t quite know what’s going on with a patient, you could ask it to give the top five diagnoses and the reasoning behind each one. So it could be good at helping you think through a problem, but it’s not good at giving the answer.”

