Tackling the Low-resource Challenge for Canonical Segmentation

Manuel Mager, Özlem Çetinoğlu and Katharina Kann

Abstract

Canonical morphological segmentation consists of dividing words into their standardized morphemes. Here, we are interested in approaches for the task when training data is limited. We compare model performance in a simulated low-resource setting for the high-resource languages German, English, and Indonesian to experiments on new datasets for the truly low-resource languages Popoluca and Tepehua. We explore two new models for the task, borrowing from the closely related area of morphological generation: an LSTM pointer-generator and a sequence-to-sequence model with hard monotonic attention trained with imitation learning. We find that, in the low-resource setting, the novel approaches outperform existing ones on all languages by up to 11.4% accuracy. However, while accuracy in emulated low-resource scenarios is over 50% for all languages, for the truly low-resource languages Popoluca and Tepehua, our best model only obtains 37.4% and 28.4% accuracy, respectively. Thus, we conclude that canonical segmentation is still a challenging task for low-resource languages.
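As a rough illustration of the pointer-generator idea mentioned above, the sketch below mixes a "generate" distribution over output characters with a "copy" distribution over the characters of the input word, which is the core mechanism that lets such a model reuse input characters when producing canonical morphemes. This is not the paper's implementation; the function name, the example word, and all probability values are made up for illustration.

# Minimal sketch of a pointer-generator output distribution for
# character-level canonical segmentation (illustrative only).
from collections import defaultdict

def pointer_generator_distribution(vocab_probs, attention, source_chars, p_gen):
    """Mix a generation distribution with a copy distribution.

    vocab_probs  : dict mapping output character -> probability (decoder softmax)
    attention    : list of attention weights, one per source character
    source_chars : characters of the input word (copy candidates)
    p_gen        : probability of generating rather than copying
    """
    mixed = defaultdict(float)
    # Probability mass from generating a character out of the vocabulary.
    for ch, p in vocab_probs.items():
        mixed[ch] += p_gen * p
    # Probability mass from copying a character of the input word.
    for weight, ch in zip(attention, source_chars):
        mixed[ch] += (1.0 - p_gen) * weight
    return dict(mixed)

# Toy example: segmenting "untestably" into "un test able ly" one character
# at a time; the decoder is about to emit the next output character.
source = list("untestably")
vocab_probs = {"a": 0.1, "b": 0.1, "l": 0.2, "e": 0.5, " ": 0.1}    # made-up softmax
attention = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.5, 0.1, 0.1]      # made-up weights
print(pointer_generator_distribution(vocab_probs, attention, source, p_gen=0.7))

If the generation and attention distributions each sum to one, the mixed distribution also sums to one, so it can be used directly for decoding or for computing the training loss.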

Resources

Cite

You are welcome to use the code and datasets; however, please acknowledge their use with a citation:
@inproceedings{mager2020tackling,
    title = "Tackling the Low-resource Challenge for Canonical Segmentation",
    author = {Mager, Manuel and {\c{C}}etino{\u{g}}lu, {\"O}zlem and Kann, Katharina},
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}