Biography

I am a third-year PhD student in Computation, Cognition, and Language (NLP) at the Language Technology Lab, University of Cambridge. I am supervised by Prof. Anna Korhonen and Dr. Ivan Vulić. I am also a Student Researcher at Google Cloud AI Research and Google DeepMind. Previously, I interned with the Gemini Responsible AI Team at Google Research.

I am broadly interested in modular, efficient, and reward-driven intelligence.

Recent News

Academic Services

Reviewer/program committee member for ACL (2023-24), EMNLP (2022-24), ICML (2024), NeurIPS (2023-24), and ICLR (2025).

Interests
  • Large Language Models
  • Parameter-Efficient Fine-Tuning
  • Prompt Optimization
  • Modularity
Education
  • PhD in Computation, Cognition, and Language, Oct 2022 - present
    University of Cambridge
  • MSc in Machine Learning (ranked 1st), 2020 - 2021
    University College London
  • BA, MEng in Engineering Science, 2015 - 2019
    University of Oxford

Publications

(2024). Fairer Preferences Elicit Improved Human-Aligned Large Language Model Judgments. The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP).

PDF Cite Code Abstract ACL Anthology

(2024). TopViewRS: Vision-Language Models as Top-View Spatial Reasoners. The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP).

PDF Cite Code Abstract Project Page ACL Anthology

(2024). Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators. The First Conference on Language Modeling (COLM).

PDF Cite Code Abstract OpenReview

(2024). Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering. International Conference on Learning Representations (ICLR).

PDF Cite Abstract (Google Research) OpenReview Blog (Google AI) Talk (NeurIPS Spotlight)

(2024). AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning. Transactions of the Association for Computational Linguistics (TACL).

PDF Cite Code Abstract MIT Press ACL Anthology

(2023). Can Large Language Models Achieve Calibration with In-Context Learning? ICLR 2024 Workshop on Reliable and Responsible Foundation Models.

PDF Cite Code Abstract OpenReview

(2023). Survival of the Most Influential Prompts: Efficient Black-Box Prompt Search via Clustering and Pruning. Findings of the Association for Computational Linguistics (EMNLP).

PDF Cite Code Abstract OpenReview ACL Anthology

(2023). A Systematic Study of Performance Disparities in Multilingual Task-Oriented Dialogue Systems. The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP).

PDF Cite Code Abstract OpenReview ACL Anthology

(2023). Multi3WOZ: A Multilingual, Multi-Domain, Multi-Parallel Dataset for Training and Evaluating Culturally Adapted Task-Oriented Dialog Systems. Transactions of the Association for Computational Linguistics (TACL).

PDF Cite Dataset Abstract MIT Press ACL Anthology

(2023). GreenPLM: Cross-Lingual Transfer of Monolingual Large Language Models at Almost No Cost. The 32nd International Joint Conference on Artificial Intelligence (IJCAI).

PDF Cite Code Abstract IJCAI 2023

(2023). XQA-DST: Multi-Domain and Multi-Lingual Dialogue State Tracking. Findings of the Association for Computational Linguistics (EACL).

PDF Cite Code Abstract ACL Anthology

Experience

Google DeepMind
Student Researcher
Nov 2024 – Apr 2025, London, UK

Google Research, Cloud AI Team
Student Researcher
Jul 2024 – Nov 2024, San Francisco, US

Google Research, Responsible AI Team
Student Researcher
Jun 2023 – Dec 2023, London, UK

Language Technology Lab
Research Intern
Mar 2022 – Sep 2022, Cambridge, UK

UCL NLP Lab
Research Student
May 2021 – Oct 2021, London, UK

Tencent
Game Designer
Jul 2019 – Aug 2020, Shenzhen, China

RACE
Research Intern
Jul 2018 – Sep 2018, Culham, UK

Battery Intelligence Lab
Research Intern
Jul 2017 – Sep 2017, Oxford, UK

Accomplishments

  • Awarded to advance research with Google Gemma models.
  • Awarded to present at the International Conference on Learning Representations (ICLR 2024).
  • Fully funded PhD studentship for research towards globally equitable language technologies.
  • Awarded for ranking 1st in the MSc Machine Learning programme at UCL for the academic year.
  • Awarded to a distinguished Engineering Science student at Oxford.

Contact

  • hz416 [at] cam.ac.uk (Cambridge)
  • LTL, University of Cambridge, UK