Chengchun Shi (LSE): Doubly Robust Alignment for Large Language Models
Centre for Probability, Statistics and Data Science
Date: 26 February 2026
Time: 14:00 - 15:00
Location: Hybrid: MB503, SMS, QMUL, or via the Teams link below
This talk focuses on reinforcement learning from human feedback (RLHF) for aligning large language models with human preferences. While RLHF has demonstrated promising results, many algorithms are highly sensitive to misspecification of the underlying preference model (e.g., the Bradley-Terry model), the reference policy, or the reward function, resulting in undesirable fine-tuning. To address model misspecification, we propose a doubly robust preference optimization algorithm that remains consistent when either the preference model or the reference policy is correctly specified (without requiring both). Our proposal demonstrates superior and more robust performance than state-of-the-art algorithms, both in theory and in practice. The code is available at github.com/DRPO4LLM/DRPO4LLM.
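The key property the abstract invokes is double robustness: an estimator built from an outcome (preference) model plus an importance-weighted residual correction stays consistent if either component is correct. The sketch below illustrates that general principle in a toy two-action bandit, not the talk's DRPO algorithm; all quantities (`true_q`, `p_ref`, `p_tgt`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy setup: two actions with true mean rewards.
true_q = np.array([0.3, 0.7])

# Reference (logging) policy and the target policy we want to evaluate.
p_ref = np.array([0.8, 0.2])
p_tgt = np.array([0.1, 0.9])

# Logged data: actions drawn from the reference policy, noisy rewards.
a = rng.choice(2, size=n, p=p_ref)
r = rng.normal(true_q[a], 1.0)

true_value = p_tgt @ true_q  # ground truth: 0.66


def dr_estimate(q_hat, prop):
    """Doubly robust value estimate: plug-in model term
    plus an importance-weighted residual correction."""
    direct = p_tgt @ q_hat                    # model-based term
    w = p_tgt[a] / prop[a]                    # importance weights
    correction = np.mean(w * (r - q_hat[a]))  # residual correction
    return direct + correction


good_q, bad_q = true_q, np.array([0.5, 0.5])  # correct vs. misspecified model
good_p, bad_p = p_ref, np.array([0.5, 0.5])   # correct vs. misspecified propensities

print(f"truth:              {true_value:.3f}")
print(f"both correct:       {dr_estimate(good_q, good_p):.3f}")
print(f"model wrong only:   {dr_estimate(bad_q,  good_p):.3f}")  # still consistent
print(f"weights wrong only: {dr_estimate(good_q, bad_p):.3f}")   # still consistent
print(f"both wrong:         {dr_estimate(bad_q,  bad_p):.3f}")   # biased
```

Running this shows the estimate matching the truth in the first three cases and drifting off only when both the outcome model and the propensities are misspecified, mirroring the consistency guarantee described in the abstract.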
Contact: Nicolás Hernández
Email: n.hernandez@qmul.ac.uk