AI could exacerbate inequalities if misapplied, healthcare sector warned

The growing use of artificial intelligence and machine learning across the healthcare industry could lead to greater disparities if applied incorrectly, experts have warned.

These technologies can now be found at all levels of the healthcare ecosystem, from helping clinicians sort through electronic medical records and interpret data from connected medical devices to aiding life science companies in their clinical trials and modeling the effectiveness of vaccines.

"AI and machine learning are based on algorithms, which are based on data — and the data can be biased," said Michael Petersen, chief clinical innovation officer at IT consulting firm NTT DATA Inc.

Pharmaceutical companies are using AI to interpret clinical trial data.
Source: Novartis

While these technologies have the potential to mitigate certain health inequities, the data used for these algorithms is key to either achieving this goal or exacerbating the problem, Petersen told S&P Global Market Intelligence ahead of his appearance at HLTH 2021, a four-day conference in Boston focused on the future of health.

Suchi Saria, CEO and co-founder of Bayesian Health Inc. — an AI platform to monitor patients — pointed to research published in December 2020 in The New England Journal of Medicine to show how easily healthcare data can be skewed against certain groups. The study of pulse oximeters, which measure patients' blood oxygen levels, found that low blood oxygen levels were almost three times more likely to go undetected in Black patients than in their white counterparts.

"Part of the issue has been too much focus on the technology and not enough focus on the use cases, the processes that we're changing," Saria said on an Oct. 17 HLTH panel. "Intimate knowledge of the actual use case ... or the problems we're looking to solve is I think the most important thing."

A major reason why healthcare data can be biased is that the initial data typically comes from electronic medical records and other traditional healthcare systems, according to Tom Lawry, Microsoft Corp.'s national director for AI, health and life sciences. People who are poorly insured or cannot afford hospital care may not be recorded in these systems and therefore can be left out of large healthcare datasets, Lawry told an HLTH audience.

"When certain populations are [under-represented] or not represented in the data, naturally those predictive algorithms are going to reflect the data which in turn reflects the position of public policy in this country or any country," Lawry said.

Human bias can also creep into algorithms, either consciously or unconsciously, via the people who are creating them, Lawry added.

One of the best ways to mitigate this bias is to use large, diverse datasets and to set up human oversight of the algorithms, said NTT DATA managing director and health insight lead Milissa Campbell.

"It's very important how you set up the human oversight around AI," Campbell said. "Are you incorporating a diverse set of physicians when you're building AI around imaging or anything to do with medical? Are you leveraging diverse oversight of the output and making sure that the human oversight is trained in looking for bias?"

However, Campbell noted that some healthcare organizations lack an understanding of the fundamentals needed for AI models, such as basic data governance principles.

"You are creating more than bias, you are creating chaos, and it's irresponsible to build AI models on data that is known to not be of high quality," Campbell added.

Machine learning is going to be a business imperative for healthcare in the future, said NTT Data's Petersen, and having ethical, unbiased data at the beginning is going to save healthcare organizations time trying to clean it up later.

Addressing social determinants

NTT DATA Inc. chief clinical innovation officer Michael Petersen
Source: NTT Data

Looking to the future capabilities of machine learning, Petersen said AI has the potential to help clinicians improve health equity by taking into account social determinants, which can range from poverty and food insecurity to transportation and housing.

Instead of simply prescribing a certain medication, an electronic medical record service that takes into account these social factors could also let a clinician know of other ways they can improve a patient's outcome — such as having the medication shipped directly to them or referring them to the nearest food pantry.

When it comes to creating an algorithm that can take into account social determinants of health data and clinical data together, Petersen said that work is still in the early stages.

"There are so many AI companies out there ... that have different ways to process the data and filter it out, and some are for mental health and some are for specific acute care settings or at-home care settings," Petersen added. "What hasn't happened is the ability to stitch it together yet, and they're on their way to doing that."