
Efficient Chinese Search with Elasticsearch


Elasticsearch Chinese search: Analyzers and best practices

Analysis and tokenization are crucial when Elasticsearch indexes content, especially for non-English languages. For Chinese, the process is even harder because of the nature of Chinese characters and the absence of spaces between words and sentences.

This article looks at several options for analyzing Chinese content in Elasticsearch, including the default chinese analyzer, the paoding plugin, the cjk analyzer, the smartcn analyzer and the ICU plugin, and weighs their advantages, disadvantages and typical use cases.

Challenges of Chinese Search

Chinese characters are logograms: each one represents a word or a morpheme (the smallest meaningful unit of language). Put together, their meaning can change and they form an entirely new word. A further difficulty is that there are no spaces between words or sentences, which makes it hard for a computer to tell where a word starts and ends.

Even if you only consider Mandarin (the official language of China and the most widely spoken form of Chinese in the world), there are tens of thousands of Chinese characters, although in practice you only need to know three to four thousand of them to read and write. For example, 火山 ("volcano") is a combination of the following two characters:

  • 火: fire
  • 山: mountain

Our tokenizer must be clever enough to avoid splitting these two characters, because their combined meaning differs from what each character means on its own.

Another difficulty is the different writing variants in use:

  • Simplified Chinese: 书法
  • Traditional Chinese, more complex and richer: 書法
  • Pinyin, the romanized form of Mandarin: shū fǎ

Chinese Analyzers in Elasticsearch

At present, Elasticsearch provides the following Chinese analyzers:

  • the default chinese analyzer, based on deprecated classes from Lucene 4;
  • the paoding plugin, no longer maintained but built on a very good dictionary;
  • the cjk analyzer, which turns the content into bigrams;
  • the smartcn analyzer, an officially supported plugin;
  • the ICU plugin and its tokenizer.

These analyzers differ considerably, and we will compare them using the simple test word 手机, which means "mobile phone" and consists of two characters meaning "hand" and "machine". The character 机 also forms many other words:

  • 机票: plane ticket
  • 机器人: robot
  • 机枪: machine gun
  • 机遇: opportunity

Our tokenizer must not split these characters apart, because if I search for "手机" (mobile phone), I do not want any documents about Rambo owning a machine gun.

We will test these solutions using the powerful _analyze API:

```bash
curl -XGET 'http://localhost:9200/chinese_test/_analyze?analyzer=paoding_analyzer1' -d '手机'
```


  • Default chinese analyzer: it simply splits every Chinese character into its own token, so we get two tokens: 手 and 机. Elasticsearch's standard analyzer produces exactly the same output. The chinese analyzer is therefore deprecated, will soon be replaced by standard, and should be avoided. (A combined sketch of _analyze calls for the analyzers in this list is given after the list.)

  • paoding plugin: paoding is close to an industry standard and is considered an elegant solution. Unfortunately, the Elasticsearch plugin is no longer maintained, and I only managed to run it on version 1.0.1 after some modifications. (Installation steps are omitted here; they are provided in the original article.) Once installed, we get a new paoding tokenizer and two collectors, max_word_len and most_word. No analyzer is exposed by default, so we have to declare a new one. (The original configuration steps are likewise omitted; a hypothetical configuration sketch is given after this list.) Both configurations give good results, with clean and unique tokens, and the analyzer also behaves very well on more complex sentences.

  • cjk analyzer: a very simple analyzer that just turns any text into bigrams. 手机 is indexed as the single bigram 手机, which is fine, but with longer words such as 元宵节 (the Lantern Festival) it produces two bigrams, 元宵 and 宵节, meaning roughly "lantern" and "Xiao festival" respectively.

  • smartcn plugin: very easy to install. (Installation steps are omitted here; they are provided in the original article.) It exposes a new smartcn analyzer, as well as a smartcn_tokenizer tokenizer, both based on Lucene's SmartChineseAnalyzer. That analyzer uses a probabilistic approach, with a hidden Markov model and a large amount of training text, to find an optimal segmentation of words. A fairly good training dictionary is already embedded, so our examples are tokenized correctly.

  • ICU plugin: another official plugin. (Installation steps are omitted here; they are provided in the original article.) If you deal with any non-English language, using this plugin is recommended. It exposes an icu_tokenizer tokenizer, as well as many powerful analysis tools such as icu_normalizer, icu_folding and icu_collation (a configuration sketch combining some of them is given after this list). It uses Chinese and Japanese dictionaries containing word-frequency information to infer groups of characters. On 手机 everything works as expected, but on 元宵节 it produces two tokens, 元宵 and 节, because 元宵 and 节 are more common than 元宵节.
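
To make the differences concrete, here is a minimal sketch of the _analyze calls behind the observations above, assuming the smartcn and ICU plugins are installed and using the same old-style query-string syntax as the earlier example (recent Elasticsearch versions expect a JSON body with "analyzer" and "text" fields instead):

```bash
# Sketch: comparing the analyzers on the two test words. Query-string syntax
# matches the ES 1.x-era call used earlier in this article.

# Default/standard behaviour: one token per character (手 and 机).
curl -XGET 'http://localhost:9200/chinese_test/_analyze?analyzer=standard' -d '手机'

# cjk: bigrams. 手机 stays whole, but 元宵节 becomes 元宵 and 宵节.
curl -XGET 'http://localhost:9200/chinese_test/_analyze?analyzer=cjk' -d '元宵节'

# smartcn (requires the plugin): 手机 is kept as a single token.
curl -XGET 'http://localhost:9200/chinese_test/_analyze?analyzer=smartcn' -d '手机'

# icu_tokenizer (requires the ICU plugin): 元宵节 becomes 元宵 and 节.
curl -XGET 'http://localhost:9200/chinese_test/_analyze?tokenizer=icu_tokenizer' -d '元宵节'
```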
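
Since the paoding configuration snippet is omitted above, the following is a hypothetical sketch of what declaring two custom analyzers on top of the paoding tokenizer could look like; the tokenizer type and collector names are assumptions based on the plugin's documentation, not the article's original settings:

```bash
# Hypothetical sketch: an index with two custom analyzers built on the paoding
# tokenizer, one per collector. The "paoding" type and "collector" option are
# assumptions; check them against the plugin's README for your version.
curl -XPUT 'http://localhost:9200/chinese_test' -d '{
  "settings": {
    "analysis": {
      "tokenizer": {
        "paoding1": { "type": "paoding", "collector": "most_word" },
        "paoding2": { "type": "paoding", "collector": "max_word_len" }
      },
      "analyzer": {
        "paoding_analyzer1": { "type": "custom", "tokenizer": "paoding1" },
        "paoding_analyzer2": { "type": "custom", "tokenizer": "paoding2" }
      }
    }
  }
}'
```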
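
And as a sketch of how the ICU tools can be combined into a reusable analyzer (the index and analyzer names here are illustrative, not from the original article):

```bash
# Sketch: a custom analyzer wiring icu_normalizer (as a char filter),
# icu_tokenizer and icu_folding together. Requires the analysis-icu plugin.
curl -XPUT 'http://localhost:9200/chinese_icu_test' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_icu_analyzer": {
          "type": "custom",
          "char_filter": ["icu_normalizer"],
          "tokenizer": "icu_tokenizer",
          "filter": ["icu_folding"]
        }
      }
    }
  }
}'
```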

Comparison of results (the comparison table is omitted here; it is provided in the original article)

From my point of view, paoding and smartcn give the best results. The chinese tokenizer performs very poorly, and icu_tokenizer is a bit disappointing on 元宵节, but it handles traditional Chinese very well.

Traditional Chinese support

You may need to handle traditional Chinese coming from your documents or from users' search requests. You need a normalization step to convert this traditional input into simplified Chinese, because plugins such as smartcn or paoding do not handle it correctly.

You can handle this in your application, or try the elasticsearch-analysis-stconvert plugin to deal with it directly inside Elasticsearch. It can convert characters between traditional and simplified in both directions. (Installation steps are omitted here; they are provided in the original article.)
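
If you go the plugin route, a rough sketch of a traditional-to-simplified normalization step might look like the following; the char_filter type and the convert_type/t2s option names reflect my reading of the plugin's README and should be verified against the version you install:

```bash
# Hypothetical sketch: normalize traditional characters to simplified with a
# stconvert char_filter before tokenizing with smartcn. Option names
# (convert_type, "t2s") are assumptions to verify against the plugin docs.
curl -XPUT 'http://localhost:9200/chinese_st_test' -d '{
  "settings": {
    "analysis": {
      "char_filter": {
        "t2s_filter": { "type": "stconvert", "convert_type": "t2s" }
      },
      "analyzer": {
        "smartcn_t2s": {
          "type": "custom",
          "char_filter": ["t2s_filter"],
          "tokenizer": "smartcn_tokenizer"
        }
      }
    }
  }
}'
```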

A last-resort solution is to use cjk: if the input is not tokenized correctly, you still stand a good chance of catching the documents you need, and you can then improve relevance with icu_tokenizer (which is also quite good).

Further improvements

There is no perfect one-size-fits-all solution for Elasticsearch analysis, and Chinese is no exception. You have to combine the available tools and build your own analyzers based on the content you are dealing with. For example, I use both the cjk and smartcn tokenizers on my search fields, with a multi-field mapping and a multi_match query, as sketched below.
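
Here is a minimal sketch of that setup, assuming the smartcn plugin is installed; the index and field names (chinese_docs, content, content.smart) are illustrative, and on pre-5.x clusters you would use the string type and include a document type in the mapping:

```bash
# Sketch: index the same text with two analyzers via a multi-field, then query
# both sub-fields with multi_match. Mapping syntax shown is for recent
# (typeless) Elasticsearch releases.
curl -XPUT 'http://localhost:9200/chinese_docs' -H 'Content-Type: application/json' -d '{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "cjk",
        "fields": {
          "smart": { "type": "text", "analyzer": "smartcn" }
        }
      }
    }
  }
}'

# Query both variants at once; the best-matching sub-field drives the score.
curl -XGET 'http://localhost:9200/chinese_docs/_search' -H 'Content-Type: application/json' -d '{
  "query": {
    "multi_match": {
      "query": "手机",
      "fields": ["content", "content.smart"]
    }
  }
}'
```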

(The FAQ section is omitted here; it is provided in the original article.)

