Beyond Bias Audits: Bringing Equity to the Machine Learning Pipeline
Irene Y. Chen
Wednesday, January 25, 2023, 3:00–4:00 PM EST
Abstract
Advances in machine learning and the explosion of clinical data have demonstrated immense potential to fundamentally improve clinical care and deepen our understanding of human health. However, algorithms for medical interventions and scientific discovery in heterogeneous patient populations are particularly challenged by the complexities of healthcare data. Not only are clinical data noisy, incomplete, and irregularly sampled, but questions of equity and fairness also raise grave concerns and create additional computational challenges.

In this talk, I present two approaches for leveraging machine learning toward equitable healthcare. First, I examine how to address algorithmic bias in supervised learning when discrimination is measured with cost-based metrics. By decomposing discrimination into bias, variance, and noise components, I propose tailored actions for estimating and reducing each term of the total discrimination. Second, I demonstrate how to address one specific health disparity through the early detection of intimate partner violence from clinical indicators. Using a time-based model with noisy labels, we can correct for biases in data measurement to learn more clinically useful subtypes and improve prediction. The talk concludes with a discussion of how to rethink the entire machine learning pipeline through an ethical lens in order to build algorithms that serve the entire patient population.
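As a rough sketch of the first approach (the notation below is assumed for illustration and not taken verbatim from the talk): with a cost-based loss \ell, outcomes Y, predictions \hat{Y}, and a protected attribute A, the discrimination gap between groups a and b can be written as

\[
  \Gamma = \bigl|\, \mathbb{E}[\ell(Y, \hat{Y}) \mid A = a] - \mathbb{E}[\ell(Y, \hat{Y}) \mid A = b] \,\bigr|,
\]

and each group's expected loss admits a standard bias-variance-noise decomposition,

\[
  \mathbb{E}[\ell(Y, \hat{Y}) \mid A = g] = \bar{B}_g + \bar{V}_g + \bar{N}_g, \qquad g \in \{a, b\},
\]

so the gap can be attributed to differences in bias, variance, or noise across groups, with each term suggesting a different remedy (for example, a richer model class, targeted data collection, or additional features).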
Bio
Starting summer 2023, Irene Chen will be an Assistant Professor in the new Computational Precision Health program at UC Berkeley and UCSF, with a joint appointment in Berkeley EECS. She is currently an ML/Stats postdoc at Microsoft Research New England. She recently completed her PhD in MIT EECS as a member of the Clinical Machine Learning group. Before MIT, she received a joint AB/SM degree from Harvard University and worked at Dropbox.

She studies machine learning for equitable healthcare. Her research focuses on two main areas: 1) developing machine learning methods for equitable clinical care, and 2) auditing and addressing algorithmic bias.