Towards Reliable and Trustworthy Machine Learning Methods for Medical Imaging
Alan Wang
February 9, 2024, Friday, 2:00 PM - 3:00 PM EST
Machine learning (ML) algorithms powering the AI revolution are leading to breakthroughs in medical image analysis. These algorithms enable fast and scalable automation of labor-intensive tasks like image registration and image reconstruction, while also showing promise on more complex, higher-level tasks like diagnosis and prognosis. At the same time, the healthcare arena that AI seeks to disrupt is formidable: care is delivered by domain experts (e.g., doctors and radiologists) who undergo years of training, and it is a high-stakes setting where safety and trust are critical. There is thus a need for reliable and trustworthy ML methods that can interface with humans, perform well under varying conditions, and accept user input and feedback. In this talk, I will present several ML methods we've developed across various tasks in medical imaging, organized around three directions toward reliability and trustworthiness: interpretability, robustness, and controllability.

In the first part of the talk, I will describe an interpretable, robust, and controllable image registration method, called KeyMorph, which uses a deep neural network to extract corresponding keypoints in a pair of images and then solves, in closed form, for the transformation that aligns the images. In the second part of the talk, I will present an inherently interpretable image classification method, called the Nadaraya-Watson Head, which can be seen as a "soft" version of a nearest-neighbors classifier: it makes a prediction via comparisons with examples in the training dataset. Furthermore, I will show how this model can be leveraged to learn "invariant" representations of images that come from multiple environments (e.g., hospitals) for the purposes of robust domain generalization, starting from rigorous, causally informed assumptions about the data-generating process.
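To make the closed-form registration step concrete, here is a minimal sketch of how corresponding keypoints can determine an affine alignment by least squares. This is an illustrative reconstruction, not KeyMorph's actual implementation: in KeyMorph the keypoints come from a trained network, whereas here they are simply given as inputs, and the function names are my own.

```python
import numpy as np

def closed_form_affine(moving_kp, fixed_kp):
    """Hypothetical sketch: least-squares affine transform mapping moving
    keypoints onto fixed keypoints. moving_kp, fixed_kp: (N, D) arrays of
    corresponding keypoints (N >= D + 1 for a unique solution)."""
    n = moving_kp.shape[0]
    # homogeneous coordinates: append a column of ones to absorb translation
    ph = np.hstack([moving_kp, np.ones((n, 1))])
    # solve min ||ph @ theta - fixed_kp||^2 in closed form
    theta, *_ = np.linalg.lstsq(ph, fixed_kp, rcond=None)
    return theta  # (D+1, D): linear part stacked over the translation row

def apply_affine(theta, pts):
    """Apply the fitted transform to (N, D) points."""
    ph = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return ph @ theta
```

Because the transformation is solved rather than predicted, the alignment is interpretable (it is fully determined by the keypoints) and controllable (editing a keypoint directly changes the result).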
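The "soft" nearest-neighbors idea behind the Nadaraya-Watson Head can be sketched as follows: weight each training label by a softmax over (negative) feature distances to the query, so the prediction is an interpretable weighted average over training examples. This is a generic Nadaraya-Watson estimator under assumed inputs (precomputed feature vectors, one-hot labels, a Gaussian kernel with a temperature parameter), not the authors' exact architecture.

```python
import numpy as np

def nw_head(query_feat, train_feats, train_labels, temperature=1.0):
    """Hypothetical sketch of a Nadaraya-Watson prediction head.
    query_feat: (D,) feature vector; train_feats: (N, D) support features;
    train_labels: (N, C) one-hot labels. Returns (C,) class probabilities."""
    # squared Euclidean distance from the query to every training feature
    d2 = np.sum((train_feats - query_feat) ** 2, axis=1)
    # softmax weights: closer training examples get larger weight
    w = np.exp(-d2 / temperature)
    w /= w.sum()
    # prediction = kernel-weighted average of the support labels
    return w @ train_labels
```

The weights themselves explain each prediction: one can inspect which training images the model compared the query against, which is the source of the method's inherent interpretability.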
I am a PhD candidate at Cornell University in the School of Electrical and Computer Engineering, where I am advised by Professor Mert Sabuncu. Currently, I am based in New York City where I am affiliated with Cornell Tech and the Department of Radiology at Weill Cornell Medical School. Previously, I studied Computer Engineering at the University of Illinois at Urbana-Champaign (UIUC).

My research interests lie at the intersection of machine learning and healthcare, especially medical imaging. Specifically, I am interested in building reliable and trustworthy deep learning models that are effective in the high-stakes, safety-critical domain of healthcare and that work well with human experts such as doctors and clinicians. This leads me to research in the interpretability, robustness, and controllability of deep learning models.

During my studies, I've also had the chance to design deep learning models as an intern at Google and MIT Lincoln Laboratory.