CMU & Google Extend Pretrained Models to Thousands of Underrepresented Languages Without Using Monolingual Data


A research team from Carnegie Mellon University and Google systematically explores strategies for leveraging bilingual lexicons, a relatively under-studied resource, to adapt pretrained multilingual models to low-resource languages. Their resulting lexicon-based adaptation approach yields consistent performance improvements without requiring any additional monolingual text in the target language.
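
The core intuition is that even when no monolingual corpus exists for a language, a bilingual lexicon can be used to synthesize pseudo text in it, for example by substituting words in high-resource text with their lexicon translations. Below is a minimal sketch of that substitution step in Python; the `lexicon` dictionary, its toy entries, and the fallback of keeping untranslated words are illustrative assumptions for this sketch, not the authors' exact pipeline.

```python
import random

def synthesize_pseudo_text(sentences, lexicon):
    """Word-for-word translate high-resource sentences into a target
    language using a bilingual lexicon, producing pseudo text that can
    stand in for missing monolingual data.

    lexicon: dict mapping source words to lists of target-language
    candidates (a hypothetical format assumed for this sketch).
    """
    pseudo = []
    for sent in sentences:
        out = []
        for word in sent.split():
            candidates = lexicon.get(word.lower())
            if candidates:
                # Pick one translation at random when several exist.
                out.append(random.choice(candidates))
            else:
                # No lexicon entry: keep the source word as a fallback.
                out.append(word)
        pseudo.append(" ".join(out))
    return pseudo

# Toy example with a hypothetical English-to-target lexicon.
lexicon = {"the": ["la"], "cat": ["gato"], "sleeps": ["duerme"]}
print(synthesize_pseudo_text(["The cat sleeps"], lexicon))
# ['la gato duerme']
```

Pseudo text generated this way is noisy, but it gives the pretrained model some exposure to the target language's vocabulary, which is what enables adaptation without collecting real monolingual corpora.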