SS25: Disagreement in NLP

Seminar Description

Traditional NLP approaches resolve label disagreements into a single “gold standard,” treating disagreement as noise in the data arising from annotator inattention or mistakes, subjective bias, or insufficient annotation guidelines.
However, recent research highlights that a single gold label may fail to capture the ambiguity and diversity of language. For subjective tasks such as abuse detection and quality estimation, multi-perspective modeling is especially needed to include different viewpoints and to improve the robustness and fairness of NLP models.

This seminar explores disagreement in linguistic annotation and perspectivist approaches in NLP, focusing on learning from non-aggregated datasets and multi-perspective evaluation. We will explore the causes of annotation disagreements and strategies to address them. We will discuss current research on modeling diverse viewpoints and the broader implications for AI fairness and inclusion.
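One perspectivist strategy for learning from non-aggregated datasets is to train on soft labels, i.e., the distribution of annotator votes per item, rather than a single majority label. A minimal sketch (the function name and example data are illustrative, not from any specific dataset):

```python
from collections import Counter

def soft_labels(annotations, labels):
    """Turn per-item annotator votes into a probability distribution
    over the label set, instead of collapsing to one 'gold' label."""
    dists = []
    for votes in annotations:
        counts = Counter(votes)
        total = sum(counts.values())
        dists.append([counts.get(label, 0) / total for label in labels])
    return dists

# Three annotators judge two comments as abusive (1) or not (0).
items = [[1, 1, 0],   # disagreement: two of three say abusive
         [0, 0, 0]]   # full agreement
print(soft_labels(items, labels=[0, 1]))
```

The resulting distributions can serve directly as training targets (e.g., with a cross-entropy loss against the soft distribution), preserving the disagreement signal that majority voting discards.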

Seminar
taught by: Dr. Frances Yung
date and time: Monday, 12:15 - 13:45; first seminar on 14.04.2025
location: Building C7 2, Seminar room -1.05
sign-up: If you are interested, please join our MS Team “[155279] Disagreement in NLP”
credits: 4 CP (R), 7 CP (R+H)
suited for: see LSF