Han Zhou
Human Alignment
Fairer Preferences Elicit Improved Human-Aligned Large Language Model Judgments
Han Zhou, Xingchen Wan, Yinhong Liu, Nigel Collier, Ivan Vulić, Anna Korhonen
Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators
Yinhong Liu, Han Zhou, Zhijiang Guo, Ehsan Shareghi, Ivan Vulić, Anna Korhonen, Nigel Collier