Learning Movement Representations of Small Humans with Small Data
Sarah Ostadabbas
October 19, 2022, Wednesday, 3:00 PM - 4:00 PM EST
Closely tracking the development of motor functioning in infants provides prodromal risk markers of many developmental disruptions, such as autism spectrum disorder (ASD), cerebral palsy (CP), and developmental coordination disorder (DCD), among others. Screening for motor delays allows for earlier and more targeted interventions that have a cascading effect on multiple domains of infant development, including communication, social, cognitive, and memory skills. However, only about 29% of US children under 5 years of age receive developmental screening, due to expense and a shortage of testing resources, which contributes negatively to lifelong outcomes for infants at risk for developmental delays. My research aims to learn and quantify visual representations of motor function in infants, toward designing an accessible and affordable video-based technology for screening their motor skills, by developing novel data- and label-efficient AI techniques, including biomechanically-constrained synthetic data augmentation, semantic-aware domain adaptation, and human-AI co-labeling algorithms.

While several powerful human behavior recognition and tracking algorithms exist, models trained on large-scale adult activity datasets have limited success in estimating infant movements due to significant differences in body ratios, the complexity of infant poses, and the types of activities infants perform. Privacy and security considerations hinder the availability of the infant images and videos needed to train a robust deep model from scratch, making this a particularly constrained "small data problem". To address this gap, in this talk I will cover: (i) the introduction of biomechanically-constrained models to synthesize labeled pose data in the form of domain-adjacent data augmentation; (ii) the design and analysis of a semantic-aware unsupervised domain adaptation technique to close the gap between the domain-adjacent and domain-specific pose data distributions; and (iii) the development and analysis of an AI-human co-labeling technique that provides high-quality labels to refine and adapt the domain-adapted inference models into robust pose estimation algorithms in the target application. These contributions enable the use of advanced AI in the small data domain.
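To make the first idea concrete, here is a minimal toy sketch of biomechanically-constrained pose augmentation: new labeled poses are synthesized by jittering limb rotations while bone lengths are preserved, so the augmented samples stay anatomically plausible. This is an illustrative simplification, not the speaker's actual method; the 2D five-joint kinematic chain, the `PARENTS` table, and the `augment_pose` function are all hypothetical names introduced for this example.

```python
import numpy as np

# Hypothetical 5-joint kinematic chain; PARENTS[j] is the parent of joint j
# (-1 marks the root). Descendants of joint j are the joints after it.
PARENTS = [-1, 0, 1, 2, 3]

def augment_pose(joints, max_angle=0.3, rng=None):
    """Synthesize a new 2D pose by rotating each bone a small random
    angle about its parent joint, keeping every bone length fixed
    (a toy stand-in for biomechanically-constrained augmentation)."""
    rng = np.random.default_rng(rng)
    out = joints.astype(float).copy()
    for j, p in enumerate(PARENTS):
        if p < 0:
            continue  # root joint has no bone to rotate
        bone = out[j] - out[p]
        theta = rng.uniform(-max_angle, max_angle)
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        new_bone = rot @ bone
        # Shift this joint and all its descendants rigidly, so
        # downstream bone lengths are unchanged by this rotation.
        delta = (out[p] + new_bone) - out[j]
        out[j:] += delta
    return out
```

Each call yields a distinct plausible pose from one labeled seed pose, which is exactly the kind of cheap label multiplication that small-data regimes need; the biomechanical constraint (fixed bone lengths, bounded joint rotations) keeps the synthetic distribution close to the real one.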
Professor Ostadabbas is an assistant professor in the Electrical and Computer Engineering Department of Northeastern University (NEU), Boston, Massachusetts, USA. She joined NEU in 2016 from Georgia Tech, where she was a post-doctoral researcher following completion of her PhD at the University of Texas at Dallas in 2014. At NEU, Professor Ostadabbas directs the Augmented Cognition Laboratory (ACLab), whose goal is to enhance human information-processing capabilities through the design of adaptive interfaces based on physical, physiological, and cognitive state estimation. These interfaces are built on rigorous models adaptively parameterized using machine learning and computer vision algorithms. For many of these interfaces, Professor Ostadabbas has developed augmented reality (AR) and virtual reality (VR) tools for both the assessment and enhancement portions of the project. Her work also extends to the Small Data Domain (e.g., medical or military applications), where data collection and/or labeling is expensive, individualized, and protected by strong privacy or classification laws. Her solutions include deep learning frameworks that work with limited labeled training samples, integrate domain knowledge into the model for both prior learning and synthetic data augmentation, and maximize generalization across domains by learning invariant representations. Professor Ostadabbas is the co-author of more than 70 peer-reviewed journal and conference articles, and her research has been funded by the National Science Foundation (NSF), including Pre-CAREER and CAREER awards, as well as by the Department of Defense (DoD), MathWorks, Amazon AWS, Verizon, Biogen, and NVIDIA.
She co-organized the Multimodal Data Fusion (MMDF2018) workshop, an NSF PI mini-workshop on Deep Learning in Small Data, and the CVPR workshop on Analysis and Modeling of Faces and Gestures since 2019, and she was the program chair of the 2019 IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2019). Prof. Ostadabbas is an associate editor of the IEEE Transactions on Biomedical Circuits and Systems, serves on the editorial boards of IEEE Sensors Letters and the Digital Biomarkers Journal, and has served as a technical chair or session chair at several signal processing and machine learning conferences. She is a member of IEEE, the IEEE Computer Society, IEEE Women in Engineering, the IEEE Signal Processing Society, IEEE EMBS, IEEE Young Professionals, the International Society for Virtual Rehabilitation (ISVR), and ACM SIGCHI.