In my research group, we're really excited about the work we're doing in distributionally robust optimization (DRO), distributionally robust control (DRC), and distributionally robust reinforcement learning (DRRL). Across all three, we're tackling the same big challenge: designing models and algorithms that are robust to distributional uncertainty, so they keep performing well even when the underlying data or system dynamics aren't exactly what we expected.

In DRO, for example, we're developing methods for optimizing when we don't have a perfect understanding of the probability distributions involved: maybe we only know some basic statistics, or only have a rough sense of the worst case. One of the interesting challenges here is making these approaches computationally efficient, especially for large-scale problems. We use the theory of optimal transport to model DRO problems and to compute their solutions.
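To make that concrete, here's the standard optimal-transport DRO template (generic textbook notation, not tied to any one of our papers): $\hat{P}_n$ is the empirical distribution of the data, $W$ is the Wasserstein (optimal transport) distance, $\varepsilon$ is the radius of the ambiguity set, and $\ell$ is the loss we care about:

$$\min_{\theta} \; \sup_{Q \,:\, W(Q, \hat{P}_n) \le \varepsilon} \; \mathbb{E}_{\xi \sim Q}\big[\ell(\theta, \xi)\big].$$

Rather than minimizing the average loss on the data we happened to see, we minimize the worst-case expected loss over every distribution within transport distance $\varepsilon$ of that data.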

In DRC, the focus shifts to controlling systems when there's uncertainty in the dynamics. Think of a robot navigating an environment where conditions can change in unexpected ways. We're working on control strategies that keep the system safe and effective across a whole range of possible situations.
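One standard way to write this down (again, a generic template rather than a specific result of ours): the controller $\pi$ is judged under the worst disturbance distribution $Q$ in an ambiguity set $\mathcal{Q}$, with $c$ a stage cost and $f$ the system dynamics:

$$\min_{\pi} \; \sup_{Q \in \mathcal{Q}} \; \mathbb{E}_{w_{0:T} \sim Q}\Bigg[\sum_{t=0}^{T} c(x_t, u_t)\Bigg], \qquad x_{t+1} = f(x_t, u_t, w_t), \quad u_t = \pi(x_t).$$

A policy that solves this problem comes with a performance guarantee that holds for the whole family $\mathcal{Q}$, not just for a single nominal model.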

Then there’s DRRL, where we’re exploring how reinforcement learning can adapt to uncertainty in the environment, like how a self-driving car can learn to handle unexpected situations. We’re particularly interested in how to make these systems more sample-efficient and robust without sacrificing too much performance.
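For readers who want to see the math, the textbook robust Bellman equation captures the core idea (standard notation, not specific to our work): $\mathcal{P}(s,a)$ is an ambiguity set of transition models, $r$ is the reward, and $\gamma$ is the discount factor:

$$V^{\star}(s) = \max_{a} \; \min_{P \in \mathcal{P}(s,a)} \Big( r(s,a) + \gamma \, \mathbb{E}_{s' \sim P}\big[V^{\star}(s')\big] \Big).$$

The agent plans against the worst transition model consistent with what it knows, and estimating that worst case from limited interaction data is precisely where the sample-efficiency question comes in.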

The overarching theme across all these areas is finding the sweet spot between robustness and performance, and we're making a lot of exciting progress in that direction!