Research

Risk-aware linear bandits with convex loss (EWRL 2022, AISTATS 2023)

Linear bandits in which the learner targets a risk measure defined through a convex loss, rather than the expected reward alone.
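
As a hedged illustration of what a risk measure "defined through a convex loss" can look like (a standard textbook example, not necessarily the exact setting of the paper): the \(\tau\)-expectile of a reward \(Y\) is the minimiser of an asymmetrically weighted squared loss,
\[
\rho_\tau(Y) \;=\; \operatorname*{arg\,min}_{z \in \mathbb{R}} \; \mathbb{E}\big[\,|\tau - \mathbf{1}\{Y \le z\}|\,(Y - z)^2\,\big],
\]
which recovers the mean for \(\tau = 1/2\), while \(\tau < 1/2\) puts more weight on the lower tail and yields a risk-averse target that can still be estimated by convex optimisation.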

Bregman Deviations of Generic Exponential Families

Time-uniform concentration inequalities for a vast class of parametric distributions (generic exponential families).
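
For context, a hedged sketch of what "time-uniform" means, using a classical sub-Gaussian bound obtained by the method of mixtures rather than the Bregman-divergence machinery of the paper: the deviation of the empirical mean \(\hat\mu_t\) is controlled simultaneously over all sample sizes \(t\), so the bound remains valid at data-dependent stopping times,
\[
\mathbb{P}\left(\exists\, t \ge 1:\ \hat\mu_t - \mu \ge \sigma\sqrt{\frac{2}{t}\Big(1 + \frac{1}{t}\Big)\log\frac{\sqrt{t+1}}{\delta}}\right) \le \delta,
\]
in contrast with a fixed-sample Hoeffding bound, which only controls the deviation at a single pre-specified \(t\).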

From Optimality to Robustness: Dirichlet Sampling Strategies in Stochastic Bandits (NeurIPS 2021)

A state-of-the-art randomised bandit algorithm, based on Dirichlet sampling, with regret guarantees under weak assumptions on the reward distributions.
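
A minimal, hedged sketch of the Dirichlet-sampling idea for bounded rewards (only the generic principle of re-weighting an arm's observed rewards with Dirichlet weights to form a randomised index; the duel-based algorithms and analysis of the paper are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_index(rewards, upper_bound=1.0):
    """Randomised mean: Dirichlet re-weighting of an arm's observed rewards.

    The extra optimistic point at `upper_bound` keeps under-sampled arms explored.
    """
    augmented = np.append(rewards, upper_bound)
    weights = rng.dirichlet(np.ones(len(augmented)))  # uniform Dirichlet weights
    return float(weights @ augmented)

def play(arms, horizon=1000):
    """Run the sketch on `arms`, a list of callables returning rewards in [0, 1]."""
    histories = [[arm()] for arm in arms]              # one initial pull per arm
    for _ in range(horizon - len(arms)):
        indices = [dirichlet_index(h) for h in histories]
        k = int(np.argmax(indices))                    # pull the arm with the best randomised index
        histories[k].append(arms[k]())
    return histories

# Toy example: two Bernoulli arms with means 0.4 and 0.6.
hist = play([lambda: float(rng.random() < 0.4), lambda: float(rng.random() < 0.6)])
print([len(h) for h in hist])  # the better arm should end up pulled far more often
```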