
Although people all over China speak Chinese, the specific dialects spoken in different places differ slightly. Take the word for a small alley: when someone says "hutong", you immediately know it is old Beijing, while in the south the same kind of lane goes by a different name.

When these subtle regional differences show up in machine translation, the output can feel less than "authentic". Yet almost all current machine translation systems ignore the influence of regional language varieties (i.e., dialects).

The same phenomenon exists around the world. For example, the official language of Brazil is Portuguese, which differs in regional ways from the Portuguese spoken in Europe.

Recently, Google released FRMT, a brand-new dataset and evaluation benchmark for few-shot region-aware machine translation that targets exactly this dialect-translation problem. The accompanying paper was published in TACL (Transactions of the Association for Computational Linguistics).

Paper link: https://arxiv.org/pdf/2210.00193.pdf

Open source link: https://github.com/google-research/google-research/tree/master/frmt

The dataset includes professional translations from English into two regional variants each of Portuguese and Mandarin Chinese. The source documents are chosen to enable detailed analysis of the phenomena of interest, including lexically distinct terms and distractor (interference) terms.

The researchers also explored automatic evaluation metrics for FRMT and verified their correlation with expert human ratings under both region-matched and region-mismatched scoring scenarios.

Finally, several baseline models are proposed for the task, along with guidance for researchers on how to train, evaluate, and compare their own models. The dataset and evaluation code have been open-sourced.

Few-Shot Generalization

Most modern machine translation systems are trained on millions or billions of translation examples, each consisting of, for instance, an English input sentence and its corresponding Portuguese translation.

However, the vast majority of available training data does not account for regional differences in translation.

Given this data scarcity, the researchers position FRMT as a benchmark for few-shot translation, measuring how well a machine translation model can translate into a given regional variety when shown no more than 100 labeled examples of that language variant.

The model must use the linguistic patterns displayed in the small number of labeled samples (the exemplars) to identify similar patterns in its unlabeled training data. Generalizing in this way lets it produce idiomatic translations of phenomena that are not explicitly covered by the exemplars.

For example, given the input sentence "The bus arrived" and a few Brazilian Portuguese exemplars, the model should produce "O ônibus chegou"; if the exemplars are instead in European Portuguese, it should produce "O autocarro chegou".
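
To make the few-shot setup concrete, below is a minimal sketch in Python of how region-specific exemplars might be packed into a prompt. The exemplar sentences and the `call_model` placeholder are illustrative assumptions, not part of the FRMT release, which only supplies the exemplar and test splits.

```python
# Minimal sketch of few-shot, region-aware prompting.
# The exemplars and call_model() are hypothetical placeholders.

EXEMPLARS_PT_BR = [
    ("The bus arrived", "O ônibus chegou"),
    # ... up to 100 labeled examples for the target variety
]

def build_prompt(exemplars, source_sentence, region="Brazilian Portuguese"):
    """Pack region-specific exemplars plus the new source sentence into one prompt."""
    lines = [f"Translate the following texts from English to {region}."]
    for src, tgt in exemplars:
        lines.append(f"English: {src}\nPortuguese: {tgt}")
    lines.append(f"English: {source_sentence}\nPortuguese:")
    return "\n\n".join(lines)

def call_model(prompt):
    """Placeholder for an actual machine translation or language model call."""
    raise NotImplementedError

prompt = build_prompt(EXEMPLARS_PT_BR, "The bus arrived late today.")
print(prompt)               # inspect the assembled prompt
# translation = call_model(prompt)
```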

This few-shot approach to machine translation is valuable because it offers a simple way to add support for additional regional varieties to existing systems.

While the current work covers regional variants of only two languages, the researchers expect that a good approach will readily extend to other languages and regional varieties.

In principle, the same methods should also apply to other kinds of linguistic variation, such as formality and style.

Data collection

The FRMT dataset consists of English Wikipedia articles, drawn from the Wiki40B dataset, that paid professional translators have translated into different regional varieties of Portuguese and Chinese.

To highlight the key challenges of region-aware translation, the researchers built the dataset around three content buckets:

1. Lexical

The lexical bucket focuses on differences in word choice across regions. For example, when a sentence containing the word "bus" is translated into Brazilian and European Portuguese respectively, the model must choose between "ônibus" and "autocarro".

The researchers manually collected 20-30 terms with region-distinctive translations from blogs and educational websites, then filtered and vetted the candidate translations based on feedback from volunteer native speakers of each region.

Given the resulting list of English terms, texts of up to 100 sentences each were extracted from the associated English Wikipedia articles (for example, the article on "bus"). The same collection process was repeated independently for Mandarin.
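
As a rough illustration of that extraction step, here is a small sketch that assumes the article text is already available as a string; the naive sentence splitter and the 100-sentence cap are simplifications, not the authors' actual pipeline.

```python
import re

def extract_sentences(article_text, term, max_sentences=100):
    """Return up to max_sentences sentences from the article that mention the term.

    A naive punctuation-based split stands in for a real sentence segmenter.
    """
    sentences = re.split(r"(?<=[.!?])\s+", article_text)
    pattern = re.compile(rf"\b{re.escape(term)}", re.IGNORECASE)
    hits = [s for s in sentences if pattern.search(s)]
    return hits[:max_sentences]

article = (
    "A bus is a road vehicle that carries more passengers than a car. "
    "The first horse-drawn services appeared in the 1820s. "
    "Buses remain central to public transport in many cities."
)
print(extract_sentences(article, "bus"))
```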

2. Entity

The entity bucket is populated in a similar way, with people, locations, and other entities strongly associated with one of the two regions in question for a given language.

For example, given a sentence such as "In Lisbon, I often took the bus.", translating it correctly into Brazilian Portuguese requires the model to navigate two potential pitfalls:

1) The close geographical association between Lisbon and Portugal may sway the model toward a European Portuguese translation, i.e., toward choosing "autocarro" instead of "ônibus", even though Brazilian Portuguese was requested.

2) A naive model might simply replace "Lisbon" with "Brasília" as a crude way of localizing its output to Brazilian Portuguese; the result may still read fluently, but it is semantically inaccurate.

3. Random

The random bucket is used to check that a model correctly handles other diverse phenomena; it consists of 100 articles randomly sampled from Wikipedia's "featured" and "good" collections.

Evaluation

To verify that the translations collected for the FRMT dataset capture region-specific phenomena, the researchers conducted a human evaluation of data quality.

Expert annotators from each region identify and categorize translation errors using the Multidimensional Quality Metrics (MQM) framework. The framework includes a category-wise weighting scheme that converts the identified errors into a single score roughly representing the number of major errors per sentence, so a smaller number indicates a better translation.
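
The sketch below shows how such a weighted scheme could turn categorized error annotations into a single per-sentence score; the categories and weights are illustrative placeholders, not the actual MQM weighting used for FRMT.

```python
# Illustrative only: categories and weights are placeholders,
# not the actual MQM weighting used for FRMT.
MQM_WEIGHTS = {
    ("accuracy", "major"): 1.0,
    ("accuracy", "minor"): 0.2,
    ("fluency", "major"): 1.0,
    ("fluency", "minor"): 0.2,
}

def mqm_score(errors, num_sentences):
    """Convert (category, severity) annotations into weighted major errors per sentence."""
    total = sum(MQM_WEIGHTS.get(e, 0.0) for e in errors)
    return total / num_sentences

annotations = [("accuracy", "major"), ("fluency", "minor"), ("fluency", "minor")]
print(mqm_score(annotations, num_sentences=2))  # 0.7 weighted errors per sentence
```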

For each region, the researchers asked MQM raters to rate translations from their region and translations from other regions in their language.

For example, Brazilian Portuguese raters scored both the Brazilian and the European Portuguese translations. The difference between the two scores indicates the prevalence of linguistic phenomena that are acceptable in one variety but not in the other.

For both Portuguese and Chinese, raters found on average approximately two more major errors per sentence in the mismatched translations than in the matched ones, indicating that the FRMT dataset does capture region-specific linguistic phenomena.

While manual evaluation is the best way to ensure model quality, it is often slow and expensive.

The researchers therefore looked for an off-the-shelf automatic metric that could be used to evaluate model performance on the benchmark, considering chrF, BLEU, and BLEURT.
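
For chrF and BLEU, off-the-shelf implementations such as the sacrebleu package make this easy to try out; the hypothesis and reference strings below are made up for illustration, and BLEURT is omitted because it additionally requires downloading a learned checkpoint.

```python
import sacrebleu  # pip install sacrebleu

# Made-up system outputs and region-specific references, for illustration only.
hypotheses = ["O ônibus chegou.", "Eu peguei o ônibus em Brasília."]
references = ["O ônibus chegou.", "Eu peguei o ônibus em Brasília ontem."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU: {bleu.score:.1f}  chrF: {chrf.score:.1f}")
```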

Based on the MQM raters' scores for translations from several baseline models, BLEURT showed the best correlation with human judgments, and the strength of that correlation (Pearson's ρ = 0.65) is comparable to the inter-annotator agreement (intraclass correlation of 0.70).
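
Checking this kind of agreement between an automatic metric and human ratings is straightforward; the sketch below uses made-up per-segment numbers, and negates the MQM scores (which count errors, so lower is better) so that both scales point the same way.

```python
from scipy.stats import pearsonr  # pip install scipy

# Made-up per-segment scores, for illustration only.
metric_scores = [0.71, 0.64, 0.82, 0.55, 0.90]  # e.g. BLEURT-style (higher is better)
mqm_scores = [0.4, 0.9, 0.1, 1.3, 0.0]          # major errors per sentence (lower is better)

# Negate MQM so both scales increase with quality before correlating.
rho, p_value = pearsonr(metric_scores, [-m for m in mqm_scores])
print(f"Pearson rho = {rho:.2f} (p = {p_value:.3f})")
```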

System Performance

The paper evaluates several recently released models that support few-shot control.

Based on human evaluation with MQM, the baseline methods all show some ability to localize their Portuguese output, but for Mandarin Chinese most of them fail to use knowledge of the target region to produce better localized translations.

Among the baselines evaluated, Google's language model PaLM performed best. To produce region-targeted translations with PaLM, an instructive prompt is fed into the model, which then generates text to fill in the blank.
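
The exact wording of the prompt Google used is not reproduced here, but a region-targeted instructive prompt of this kind might plausibly look like the template below; the single exemplar is illustrative, and `palm_generate` is a placeholder for an actual model call.

```python
# A plausible shape for an instructive, region-targeted prompt.
# The template, exemplar, and palm_generate() are illustrative assumptions.
PROMPT_TEMPLATE = """\
Translate the following texts from English to European Portuguese.

English: The bus arrived.
European Portuguese: O autocarro chegou.

English: {source}
European Portuguese:"""

prompt = PROMPT_TEMPLATE.format(source="I often took the bus in Lisbon.")
print(prompt)
# completion = palm_generate(prompt)   # placeholder for an actual model call
```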

PaLM achieved strong results with just one exemplar, and for Portuguese the quality improved slightly when the number of exemplars was increased to ten, which is impressive considering that PaLM was trained in an unsupervised manner.

The findings also suggest that language models like PaLM may be particularly good at memorizing region-specific lexical choices needed for smooth translation.

However, there is still a significant performance gap between PaLM and humans.

Reference materials:

https://ai.googleblog.com/2023/02/frmt-benchmark-for-few-shot-region.html
