Our goal is to advance robust deep learning under distribution shifts, enabling the reliable deployment of foundation models in real-world applications. We are currently working on the following research topics:
Learning
  - Confidence Calibration
  - Representation Learning
Generalization and Adaptation
  - Out-of-Distribution Generalization
  - Continual Learning
  - Test-Time Adaptation
Model Evaluation and Selection
  - AutoEval (Automated Model Evaluation, Autonomous Evaluation, ...)
  - Model Selection
We are looking for highly motivated students with a strong interest in machine/deep learning and its applications to various domains, such as computer vision, natural language processing, time series, and tabular data. If you are interested, please read the "How to Join?" tab and contact me. For more information about our lab, please refer to the materials in the "Links" tab.