Google open-sources its first 'dialect' data set: making machine translation more authentic

Although people all over China speak Chinese, the specifics vary slightly from place to place. For example, both terms mean "alley": when someone says "hutong", you know they are from old Beijing, while in the south the same kind of lane is called a "longtang".

When these subtle regional differences show up in a machine translation task, the output can fail to sound "authentic". Yet almost no current machine translation system takes regional language varieties (i.e., dialects) into account.

This phenomenon exists around the world. For example, the official language of Brazil is Portuguese, which differs regionally from the Portuguese spoken in Europe.

Recently, Google released FRMT, a brand-new dataset and evaluation benchmark for few-shot region-aware machine translation, aimed squarely at the dialect-translation problem. The paper was published in TACL (Transactions of the Association for Computational Linguistics).

Paper link: https://arxiv.org/pdf/2210.00193.pdf

Open source link: https://github.com/google-research/google-research/tree/master/frmt

The dataset includes professional translations from English into two regional variants each of Portuguese and Mandarin Chinese. The source documents are designed to enable detailed analysis of the phenomena of interest, including lexically distinct terms and distractor terms.

The researchers explored automatic evaluation metrics for FRMT and validated their correlation with expert human ratings under both region-matched and region-mismatched scoring scenarios.

Finally, several baseline models are presented for the task, along with guidance on how researchers can train, evaluate, and compare models of their own. The dataset and evaluation code are open source.

Few-Shot Generalization

Most modern machine translation systems are trained on millions or billions of translation examples, each consisting of, say, an English input sentence and its corresponding Portuguese translation.

However, the vast majority of available training data does not account for regional differences in translation.

Given this data scarcity, the researchers position FRMT as a benchmark for few-shot translation, measuring a machine translation model's ability to match a given regional variant when shown no more than 100 labeled examples per language variant.

Based on the language patterns displayed in a small number of labeled samples (i.e., exemplars), the machine translation model must identify similar patterns in its unlabeled training data. The model must generalize in this way to produce "idiomatic" translations for regions not explicitly specified during training.

For example, given the input sentence "The bus arrived" and a few exemplars in Brazilian Portuguese, the model should produce "O ônibus chegou"; given European Portuguese exemplars instead, it should produce "O autocarro chegou".
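To make the few-shot setup concrete, here is a minimal sketch of how region-specific exemplars might steer a prompted translation model. The prompt format and exemplar pairs are illustrative assumptions, not the exact prompts used in the FRMT paper.

```python
# A minimal sketch of few-shot region-aware prompting. The prompt format
# and exemplar pairs are illustrative assumptions, not the paper's setup.

EXEMPLARS = {
    "Brazilian": [("I'm riding the bus.", "Estou andando de ônibus.")],
    "European": [("I'm riding the bus.", "Estou a andar de autocarro.")],
}

def build_prompt(source: str, region: str, k: int = 1) -> str:
    """Assemble a few-shot prompt that steers the model toward `region`."""
    lines = [f"Translate the following sentences into {region} Portuguese."]
    for en, pt in EXEMPLARS[region][:k]:
        lines.append(f"English: {en}")
        lines.append(f"Portuguese: {pt}")
    lines.append(f"English: {source}")
    lines.append("Portuguese:")  # the model completes this final line
    return "\n".join(lines)

print(build_prompt("The bus arrived.", "Brazilian"))
# With Brazilian exemplars the model should complete "O ônibus chegou.";
# with European exemplars, "O autocarro chegou."
```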

The few-shot approach to machine translation is of real research value: it makes it possible to add support for additional regional varieties to existing systems in a very lightweight way.

While the current work covers regional variants of only two languages, the researchers expect that good methods will transfer readily to other languages and regional variants.

In principle, the same methods should also apply to other kinds of linguistic variation, such as formality and style.

Data Collection

The FRMT dataset consists of English Wikipedia articles, drawn from the Wiki40B dataset, that paid professional translators rendered into different regional varieties of Portuguese and Chinese.

To highlight the key region-aware translation challenges, the researchers designed the dataset around three content buckets:

1. Lexical

The lexical bucket focuses on regional differences in word choice. For example, when a sentence containing the word "bus" is translated into Brazilian and European Portuguese respectively, the model must distinguish between "ônibus" and "autocarro".

The researchers hand-collected 20-30 region-specific translation terms from blogs and educational websites, then filtered and vetted the translations using feedback from volunteer native speakers of each region.

Given the resulting list of English terms, 100 sentences were extracted from relevant English Wikipedia articles (for example, the article on bus). The same collection process was repeated for Mandarin.
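As a rough illustration of this extraction step, the sketch below pulls sentences mentioning a target term out of article text. The naive sentence splitter and the example text are stand-ins, not the paper's actual pipeline.

```python
import re

# Hypothetical sketch of the lexical-bucket extraction step: given an English
# term whose translation differs by region ("bus" -> "ônibus"/"autocarro"),
# collect sentences from article text that mention it.

def sentences_with_term(text: str, term: str, limit: int = 100) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text)  # naive sentence split
    pattern = re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)][:limit]

article = "The bus arrived late. She took a taxi instead. Another bus came."
print(sentences_with_term(article, "bus"))
# ['The bus arrived late.', 'Another bus came.']
```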

2. Entity

The entity bucket is populated in a similar way with people, places, and other entities strongly associated with one of the two regions of a given language.

For example, given a sentence such as "In Lisbon, I often took the bus.", a model translating it correctly into Brazilian Portuguese must avoid two potential pitfalls:

1) The close geographic association between Lisbon and Portugal may bias the model toward European Portuguese even though Brazilian Portuguese was requested, i.e., toward choosing "autocarro" instead of "ônibus".

2) The model may take the shortcut of "localizing" the output to Brazilian Portuguese by substituting "Brasília" for "Lisbon"; the result can still read fluently but is semantically inaccurate.

3. Random

The random bucket checks that a model correctly handles other diverse phenomena; it consists of 100 articles randomly sampled from Wikipedia's "featured" and "good" collections.

Evaluation

To verify that the translations collected for the FRMT dataset capture region-specific phenomena, the researchers conducted a human evaluation of data quality.

Expert annotators from each region identified and categorized translation errors using the Multidimensional Quality Metrics (MQM) framework. MQM includes a category-wise weighting scheme that converts the identified errors into a single score roughly representing the number of major errors per sentence; smaller numbers indicate better translations.
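As a rough illustration of how such a weighting scheme collapses error annotations into one number (the severity categories and weights below are assumed for demonstration, not taken from the paper):

```python
# Illustrative MQM-style scoring: each identified error carries a severity
# weight, and weighted counts are averaged per sentence so the score roughly
# reads as "major errors per sentence" (lower is better). The weights here
# are assumptions, not the paper's scheme.

SEVERITY_WEIGHTS = {"major": 1.0, "minor": 0.1}

def mqm_score(errors_per_sentence: list[list[str]]) -> float:
    """Average weighted error count per sentence (lower is better)."""
    total = sum(
        SEVERITY_WEIGHTS[severity]
        for sentence_errors in errors_per_sentence
        for severity in sentence_errors
    )
    return total / len(errors_per_sentence)

# Three sentences: one clean, one with a minor error, one with a major error.
print(mqm_score([[], ["minor"], ["major"]]))  # (0 + 0.1 + 1.0) / 3 ≈ 0.37
```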

For each region, the researchers asked MQM raters to score both translations from their own region and translations from their language's other region.

For example, raters of Brazilian Portuguese scored both Brazilian and European Portuguese translations. The difference between the two scores indicates the prevalence of linguistic phenomena that are acceptable in one variant but not in the other.

The experiments found that, for both Portuguese and Chinese, raters flagged on average roughly two more major errors per sentence in mismatched translations than in matched ones, showing that the FRMT dataset does capture region-specific linguistic phenomena.

While manual evaluation is the best way to ensure model quality, it is often slow and expensive.

The researchers therefore sought an off-the-shelf automatic metric for evaluating models on the benchmark, considering chrF, BLEU, and BLEURT.

Using the MQM raters' scores on translations from several baseline models, BLEURT was found to correlate best with human judgments, and the strength of that correlation (Pearson ρ = 0.65) is comparable to the inter-annotator agreement (intraclass correlation of 0.70).
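The sketch below shows what this kind of metric validation looks like in practice, correlating per-sentence chrF scores with human ratings. The hypotheses, references, and MQM ratings are made-up toy values; chrF is used here only because, unlike BLEURT, it requires no model checkpoint.

```python
from sacrebleu.metrics import CHRF   # pip install sacrebleu
from scipy.stats import pearsonr     # pip install scipy

# Toy data: per-sentence hypotheses, references, and human MQM ratings.
hypotheses = ["O ônibus chegou.", "O autocarro chegou.", "Chegou o carro."]
references = ["O ônibus chegou.", "O ônibus chegou.", "O ônibus chegou."]
mqm_ratings = [0.0, 1.0, 2.0]  # made-up "major errors per sentence" values

chrf = CHRF()
metric_scores = [
    chrf.sentence_score(hyp, [ref]).score
    for hyp, ref in zip(hypotheses, references)
]

# chrF rises with quality while MQM falls, so a useful metric should show
# a strongly negative correlation here.
r, p_value = pearsonr(metric_scores, mqm_ratings)
print(f"Pearson r = {r:.2f} (p = {p_value:.2f})")
```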

System Performance

The paper evaluates several recently released models that offer few-shot control.

Based on MQM human evaluation, all of the baseline methods show some ability to localize their Portuguese output, but for Mandarin most fail to exploit knowledge of the target region to produce better local translations.

Among the evaluated baselines, Google's PaLM language model performed best. To produce a region-specific translation with PaLM, an instructive prompt is first fed into the model, which then generates text to fill in the gap (in the spirit of the prompt sketch shown earlier).

PaLM achieved strong results from just one example, and for Portuguese the quality improved slightly when the number of examples was increased to ten, which is impressive given that PaLM is trained without supervision.

The findings also suggest that language models like PaLM may be particularly good at memorizing region-specific lexical choices needed for smooth translation.

However, there is still a significant performance gap between PaLM and humans.

Reference materials:

https://ai.googleblog.com/2023/02/frmt-benchmark-for-few-shot-region.html
