
Event News

Talks on "AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages" and "Showing Your Prompt Doesn't Always Work" by Jiayi Wang and Yao Lu

We are pleased to inform you about the upcoming seminar by Jiayi Wang and Yao Lu, featuring two talks: "AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages" and "Showing Your Prompt Doesn't Always Work". Everyone interested is cordially invited to attend!

Title 1:

AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages (Jiayi Wang)

Abstract:

Despite the recent progress on scaling multilingual machine translation (MT) to several under-resourced African languages, accurately measuring this progress remains challenging, since evaluation is often performed on n-gram matching metrics such as BLEU, which typically show a weaker correlation with human judgments. Learned metrics such as COMET have higher correlation; however, the lack of evaluation data with human ratings for under-resourced languages, the complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and the limited language coverage of multilingual encoders have hampered their applicability to African languages. In this study, we address these challenges by creating high-quality human evaluation data with simplified MQM guidelines for error detection and direct assessment (DA) scoring for 13 typologically diverse African languages. Furthermore, we develop AfriCOMET, a COMET evaluation metric for African languages, by leveraging DA data from well-resourced languages and an African-centric multilingual encoder (AfroXLM-R), achieving state-of-the-art MT evaluation performance for African languages with respect to Spearman-rank correlation with human judgments (0.441).
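The Spearman-rank correlation mentioned above measures how well a metric's ranking of translations agrees with the human ranking. A minimal sketch of the tie-free formula, using made-up metric and human scores (not data from the talk):

```python
def rank(values):
    # Assign ranks 1..n by ascending value; assumes no ties for simplicity.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    # Tie-free Spearman rank correlation: 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical metric scores vs. human DA scores for five translations.
metric = [0.71, 0.42, 0.88, 0.55, 0.30]
human = [80, 45, 90, 60, 40]
print(round(spearman(metric, human), 3))  # rankings agree exactly here -> 1.0
```

A correlation of 0.441, as reported for AfriCOMET, means the metric's ranking agrees with human judgments substantially more often than chance, though far from perfectly.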

Speaker Bio:

Jiayi Wang is currently pursuing her PhD in Computer Science at University College London, where she is supervised by Prof. Pontus Stenetorp and Prof. Sebastian Riedel. Her research concentrates on translation evaluation and the development of multilingual large language models. Jiayi brings experience from the tech industry, having worked for five years as a Senior Algorithm Engineer in the machine translation group at Alibaba DAMO Academy. She also worked in Computational Genetics as a Statistician at the Social Science Research Institute of Duke University. She holds a Master of Science in Engineering in Applied Mathematics and Statistics from Johns Hopkins University, obtained in 2015, and dual bachelor's degrees in Mathematics and Statistics from the University of Minnesota, Duluth, obtained in 2013.

Title 2:

Showing Your Prompt Doesn't Always Work (Yao Lu)

Abstract:

In this talk, we will discuss the order sensitivity of in-context learning. We demonstrate that the order in which samples are provided can make the difference between near state-of-the-art and random-guess performance; essentially, some permutations are "fantastic," while others are not. We analyze this phenomenon in detail, establishing that it is present across model sizes, is not related to a specific subset of samples, and that a good permutation for one model does not transfer to another.
We will also discuss how randomly sampling tokens from the model's vocabulary to use as "separators" can be as effective as language models for prompt-style text classification. Our experiments show that random separators serve as competitive baselines, revealing that the language space is abundant with potentially good separators and that there is a high chance a randomly drawn separator will outperform human-curated ones.
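The order sensitivity above arises because each permutation of the in-context examples yields a distinct prompt, which a model may score differently. A minimal sketch, with hypothetical few-shot examples that are not from the talk:

```python
from itertools import permutations

# Hypothetical few-shot sentiment demonstrations.
examples = [
    ("great movie", "positive"),
    ("terrible plot", "negative"),
    ("loved it", "positive"),
]

def build_prompt(demos, query):
    # Concatenate demonstrations, then the unanswered query, prompt-style.
    lines = [f"Review: {x}\nSentiment: {y}" for x, y in demos]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompts = [build_prompt(p, "awful acting") for p in permutations(examples)]
print(len(prompts))       # 3! = 6 orderings
print(len(set(prompts)))  # every ordering gives a distinct prompt string
```

With k examples there are k! such prompts; the talk's finding is that a model's accuracy can vary between near state-of-the-art and chance across these orderings.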

Speaker Bio:

Yao Lu is a final year PhD student in Computer Science at University College London, where he is supervised by Prof. Pontus Stenetorp and Prof. Sebastian Riedel. His research focuses on behaviour analysis of in-context learning with large language models.

Time/Date:

13:00-15:00 / Wednesday, March 27th, 2024

Place:

Room 1509 at NII

Contact:

If you would like to join, please contact us by email.
Email: saku[at]nii.ac.jp

