Our goal is to advance robust deep learning under distribution shift, enabling the reliable deployment of foundation models in real-world applications. We are currently working on the following research topics:
Learning:
Representation Learning
Confidence Calibration
Generalization and Adaptation:
Out-of-Distribution Generalization
Continual Learning
Test-Time Adaptation
Evaluation:
AutoEval (Automated Model Evaluation, Autonomous Evaluation, ...)
We are looking for highly motivated students with a strong interest in machine/deep learning and its applications to domains such as computer vision, natural language processing, time series, and tabular data. If you are interested, please read these instructions and contact me.