November 19, 2020 — Posted in Healthcare

Amit Kaushal, Russ Altman, and Curt Langlotz write:

Thanks to advances in artificial intelligence (AI) and machine learning, computer systems can now diagnose skin cancer like a dermatologist would, pick out a stroke on a CT scan like a radiologist, and even detect potential cancers on a colonoscopy like a gastroenterologist. These new expert digital diagnosticians promise to put our caregivers on technology’s curve of bigger, better, faster, cheaper. But what if they make medicine more biased too?

At a time when the country is grappling with systemic bias in core societal institutions, we need technology to reduce health disparities, not exacerbate them. We’ve long known that AI algorithms that were trained with data that do not represent the whole population often perform worse for underrepresented groups. For example, algorithms trained with gender-imbalanced data do worse at reading chest x-rays for an underrepresented gender, and researchers are already concerned that skin-cancer detection algorithms, many of which are trained primarily on light-skinned individuals, do worse at detecting skin cancer affecting darker skin.

Given the consequences of an incorrect decision, high-stakes medical AI algorithms need to be trained on data sets drawn from diverse populations. Yet this diverse training is not happening.

Read more from these academics on Scientific American.
