We have developed the first principled probabilistic framework, dubbed Bayesian Deep Learning (BDL), to unify perception in deep learning and reasoning in probabilistic graphical models (arXiv’14, KDD’15, AAAI’15, TKDE’16, ACM Computing Surveys’20, ICLR’23). We pioneered some of its applications in healthcare (ICML’23a, Nature Medicine’22, Nature Medicine’21, AAAI’19a), speech recognition (ICML’21a, ICML’20b, AAAI’19b), recommender systems (NeurIPS’16a, KDD’15, AAAI’15), network analysis (AAAI’17), and computer vision (CVPR’21, ICCV’21a, ICML’23c).
My group works on fundamental methodology and theory in statistical machine learning and deep learning, as well as their high-impact applications in areas such as healthcare, network analysis, recommender systems, forecasting, computer vision, speech recognition, and natural language processing.
Below are some active (and interconnected) research areas (see here for more, and Publication for details):
We went beyond the conventional categorical domain adaptation regime and proposed the first approaches to adapt across continuously indexed domains (ICML’20a, ICML’21c), graph-relational domains (ICLR’22), and taxonomy-structured domains (ICML’23b), as well as the first approach to adapt when the domain index is unavailable (ICLR’23).
We developed the first unified framework for existing domain incremental learning algorithms (NeurIPS’23).
We significantly simplified deep learning models for zero-shot learning (CVPR’19).
We have also developed the first sleep posture estimation model that generalizes across subjects in the wild (Ubicomp’20).
We have developed new ML algorithms for various healthcare applications. Our algorithms have led to (1) the first contactless medication self-administration monitoring system (Nature Medicine’21), (2) the first contactless Parkinson’s disease detection and assessment system (Nature Medicine’22), (3) the first general ML model that adapts across patients of different ages (ICML’20a), and (4) the first contactless, multi-person breathing (Ubicomp’18) and sleep posture (Ubicomp’20) monitoring systems that work in the wild.
We have developed the first hierarchical Bayesian model for deep hybrid recommender systems (KDD’15, AAAI’15, NeurIPS’16a, AAAI’22), bringing the accuracy of recommender systems to a new level and leading to a paradigm shift in recommender system research. Our pioneering DL-based recommender systems have inspired hundreds of follow-up works and sped up the shift to deep learning in the field.
Bias and imbalance are major obstacles to real-world ML deployment.
We have developed novel theories and methodologies to correct for exposure bias in ML algorithms (ICML’21b) and to address the imbalance issue in DL models, including the first deep imbalanced regression benchmark (ICML’21c), the first imbalanced domain generalization algorithm (ECCV’22a), and the first imbalanced/fair uncertainty quantification algorithm (NeurIPS’23a).