Second AmericasNLP Competition: Speech-to-Text Translation for Indigenous Languages of the Americas


The Second AmericasNLP Competition on Speech-to-Text Translation for Indigenous Languages of the Americas is an official NeurIPS 2022 competition aimed at encouraging the development of machine translation (MT) systems for Indigenous languages of the Americas. The overall goal is to develop new speech-to-text translation technology for Indigenous languages, and participants will build systems for three tasks: (1) automatic speech recognition (ASR) for an Indigenous language (Task 1), (2) text-to-text translation between an Indigenous language and a high-resource language (Task 2), and (3) speech-to-text translation between an Indigenous language and a high-resource language (Task 3, our main task).


Many Indigenous languages of the Americas are so-called low-resource languages: the parallel data with other languages needed to train speech-to-text MT systems is limited. This means that many approaches designed for translating between high-resource languages – such as English, Spanish, or Portuguese – are not directly applicable or perform poorly. Additionally, many Indigenous languages exhibit linguistic properties uncommon among languages frequently studied in natural language processing (NLP); for example, many are polysynthetic or tonal. This constitutes an additional difficulty. We want to motivate researchers to take on the challenge of developing speech-to-text MT systems for Indigenous languages.


We invite submissions of speech-to-text MT results (as well as results for the subtasks of ASR and text-to-text translation) obtained by systems built for Indigenous languages. We will provide training and evaluation data to the participants, but there are no limits on what outside resources – such as additional data or pretrained systems – participants can use, with the exception of the datasets listed here. This should go without saying, but we ask that participants do not translate (or transcribe, in the case of ASR) the test input by hand. The main metrics of this competition are ChrF (Popović, 2015) for Tasks 2 and 3 and character error rate for Task 1. Participants can submit results for as many language pairs as they like, but only teams that participate in all language pairs for a task enter the official ranking. We provide an evaluation script and a baseline MT system to help participants get started quickly. If you are interested in this competition, please register here.
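For intuition about the Task 1 metric, character error rate is the character-level edit (Levenshtein) distance between a reference transcript and a system output, normalized by the reference length. The sketch below is a minimal, self-contained illustration of that definition (the official evaluation script should be used for actual scoring, and the example strings are invented):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein distance divided by reference length."""
    r, h = list(reference), list(hypothesis)
    # Dynamic-programming edit distance over characters, one row at a time.
    prev = list(range(len(h) + 1))
    for i, rc in enumerate(r, 1):
        curr = [i]
        for j, hc in enumerate(h, 1):
            cost = 0 if rc == hc else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / len(r)

# One substituted character out of ten reference characters -> CER of 0.1.
print(cer("qhapaq ñan", "qhapag ñan"))  # → 0.1
```

ChrF, the Tasks 2 and 3 metric, is instead an F-score over character n-gram matches; widely used toolkits such as sacrebleu provide an implementation.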

System Submission

Please send your system outputs to abteen[dot]ebrahimi[at]colorado[dot]edu and katharina[dot]kann[at]colorado[dot]edu. The subject of your email should be "NeurIPS–AmericasNLP 2022; Competition Submission; Task(s) ‹TASK NUMBER(S)›; ‹TEAM NAME›". The content of your submission email should be as follows: Please attach all output files to your email as a single zip file, named after your team, e.g., "‹TEAM NAME›.zip". Within that zip file, the individual files should be named "‹LANGUAGE_CODE›.Task_‹TASK_NUMBER›.‹VERSION›". The language code should be the same as used in the corresponding training set names. The version number is in case you want to submit the outputs of multiple systems; it should be a single digit (please don't submit more than 9 options per language!). Each output file should contain one sentence per line. Sentences should not be tokenized.
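To make the naming scheme concrete, here is a small sketch that builds a submission archive following the rules above. The team name ("colibri"), language code ("quy"), and output sentences are all invented placeholders:

```python
import zipfile

# Hypothetical team name and language code, purely for illustration.
team = "colibri"
outputs = {
    # ‹LANGUAGE_CODE›.Task_‹TASK_NUMBER›.‹VERSION›, one sentence per line,
    # untokenized.
    "quy.Task_1.1": "transcribed sentence one\ntranscribed sentence two\n",
    "quy.Task_3.1": "translated sentence one\ntranslated sentence two\n",
}
# A single zip file named after the team, holding every output file.
with zipfile.ZipFile(f"{team}.zip", "w") as zf:
    for name, text in outputs.items():
        zf.writestr(name, text)

print(zipfile.ZipFile(f"{team}.zip").namelist())  # → ['quy.Task_1.1', 'quy.Task_3.1']
```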


The following language pairs are featured in the NeurIPS–AmericasNLP 2022 competition. For all pairs, the Indigenous language is the source language, and the high-resource language is the target language.


Data and Baseline System

A script to download the datasets for the competition, an evaluation script, and the official baselines can be found in our GitHub repository.


As long as the best-performing systems beat our baselines, the corresponding teams will be awarded prizes:

Important Dates

All deadlines will be 11:59 pm UTC -12h ("anywhere on Earth").


Organizers

Manuel Mager, Katharina Kann, Abteen Ebrahimi, Arturo Oncevay, Rodolfo Zevallos, Adam Wiemerslage, Pavel Denisov, John E. Ortega, Kristine Stenzel, Aldo Alvarez, Luis Chiruzzo, Rolando Coto-Solano, Hilaria Cruz, Sofía Flores-Solórzano, Ivan Vladimir Meza Ruiz, Alexis Palmer, Ngoc Thang Vu


Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation.
Design: Rebeca Guerrero and Manuel Mager