Towards Safer AI in Medical Imaging
Ben Glocker
June 23, 2021, Wednesday, 3:00 PM - 4:00 PM EDT
Abstract
In this talk, I will present an overview of some of our recent attempts to improve the safety of machine learning models in medical imaging. We focus on model robustness and reliability in the context of dataset mismatch between the development and deployment stages. Our aim is to provide safeguards such as automatic quality control, failure detection, and uncertainty estimation, which are necessary for safe clinical use. We also discuss the use of causal reasoning to identify potential biases already at the design stage of model development. We will conclude with an outlook on how counterfactual image generation could help explain the decisions of predictive models.
Bio
Ben Glocker is Reader in Machine Learning for Imaging at the Department of Computing at Imperial College London, where he co-leads the Biomedical Image Analysis Group with more than 45 research staff. He also leads the HeartFlow-Imperial Research Team and is a scientific advisor for Kheiron Medical Technologies. He holds a PhD from TU Munich and was a postdoc at Microsoft and a Research Fellow at the University of Cambridge. His research sits at the intersection of medical imaging and artificial intelligence, aiming to build computational tools that improve image-based detection and diagnosis of disease. He has received several awards, including a Philips Impact Award, a Medical Image Analysis - MICCAI Best Paper Award, and the François Erbsmann Prize. He is a member of the Young Scientists Community of the World Economic Forum and of the AI Task Group of the UK National Screening Committee, advising the Government on questions around clinical deployment of AI for screening programmes. He was awarded an ERC Starting Grant in 2017.