Joe Stacey: How to improve the robustness and interpretability of Natural Language Inference models
Centre for Human-Centred Computing
Date: 12 November 2025
Time: 11:00 - 12:00
Location: Queen's building QB-215
Abstract
Joe's talk will have two parts: 1) discussing how to improve the robustness of fine-tuned Natural Language Inference (NLI) models, and 2) introducing a method for creating inherently interpretable NLI models.
In the first part, Joe will talk about different strategies to improve robustness, including training with natural language explanations, and using LLMs to generate out-of-distribution data for fine-tuning. Joe will also discuss why debiasing models is often not an effective solution, and how model robustness methods can also be applied to large-scale closed-source LLMs.
The second part of the talk will introduce atomic inference, an approach that involves decomposing a task into discrete atoms, before making predictions for each atom and combining these atom-level predictions using interpretable rules.
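To make the atomic inference idea concrete, here is a minimal illustrative sketch of a pipeline of that general shape: the hypothesis is split into atoms, each atom is classified against the premise, and the atom-level labels are combined with a transparent rule. The atom splitter, per-atom classifier, and combination rule below are assumptions chosen for illustration, not Joe's actual method.

```python
# Illustrative sketch of an atomic-inference-style NLI pipeline.
# The combination rule and interfaces here are hypothetical choices,
# not the specific method presented in the talk.

from typing import Callable, List

def atomic_nli(premise: str,
               hypothesis_atoms: List[str],
               atom_classifier: Callable[[str, str], str]) -> str:
    """Predict an NLI label by classifying each hypothesis atom separately,
    then combining the atom-level labels with an interpretable rule."""
    atom_labels = [atom_classifier(premise, atom) for atom in hypothesis_atoms]

    # Transparent combination rule (one common choice):
    # any contradicted atom -> contradiction; all atoms entailed -> entailment;
    # otherwise -> neutral. Every prediction can be traced to its atoms.
    if "contradiction" in atom_labels:
        return "contradiction"
    if all(label == "entailment" for label in atom_labels):
        return "entailment"
    return "neutral"
```

For example, the hypothesis "A man is playing guitar outside" might be decomposed into the atoms "A man is playing guitar" and "The man is outside"; the model's final label then follows from the per-atom decisions, which is what makes the prediction inherently interpretable.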
Bio
Joe Stacey has recently started his postdoc at Sheffield, working on uncertainty quantification under the supervision of Nafise Moosavi. Joe was formerly an Apple AI/ML Scholar, completing his PhD at Imperial College London, where he worked on creating more robust and interpretable NLP models. Prior to his PhD, Joe worked as a strategy consultant and as a maths teacher at a challenging school in Birmingham.
Contact: Haim Dubossarsky
Email: h.dubossarsky@qmul.ac.uk