Elasticsearch Chinese search: Analyzers and best practices
Analysis and tokenization are crucial when indexing content in Elasticsearch, especially for non-English languages. For Chinese, the process is even harder because of the nature of Chinese characters and the absence of spaces between words and sentences.
This article discusses several solutions for analyzing Chinese content in Elasticsearch, including the default chinese analyzer, the paoding plugin, the cjk analyzer, the smartcn analyzer, and the ICU plugin, and compares their advantages, disadvantages, and applicable scenarios.
Challenges of Chinese Search
Chinese characters are ideograms that represent a word or a morpheme (the smallest meaningful unit of language). When characters are combined, their meaning changes and they form a completely new word. Another difficulty is that there are no spaces between words or sentences, which makes it hard for a computer to know where a word starts and ends.
Even if you consider only Mandarin (the official language of China and the most widely spoken Chinese in the world), there are tens of thousands of Chinese characters, although to actually write Chinese you only need to know three to four thousand of them. For example, "火山" (volcano) is a combination of the following two characters:
- 火: fire
- 山: mountain
Our tokenizer must be smart enough to avoid separating these two characters, because the meaning of the combination is different from that of the characters taken separately.
Another difficulty is the spelling variants in use, illustrated here with the word for calligraphy:
- Simplified Chinese: 书法
- Traditional Chinese, more complex and richer: 書法
- Pinyin, the romanized form of Mandarin: shū fǎ
Chinese Analyzer in Elasticsearch
At present, Elasticsearch provides the following Chinese analyzers:
- The default chinese analyzer, based on deprecated classes in Lucene 4;
- The paoding plugin, no longer maintained but based on a very good dictionary;
- The cjk analyzer, which turns the content into bigrams;
- The smartcn analyzer, an officially supported plugin;
- The ICU plugin and its tokenizer.
These analyzers differ greatly, and we will compare their behavior on a simple test word, "手机", which means "mobile phone" and consists of two characters meaning "hand" and "machine". The character "机" also forms many other words:
- 机票: plane ticket
- 机器人: robot
- 机枪: machine gun
- 机会: opportunity
Our tokenizer must not split these characters apart, because if I search for "手机" (mobile phone), I do not want any documents about Rambo owning a 机枪 (machine gun).
We will test these solutions using the powerful _analyze API:

curl -XGET 'http://localhost:9200/chinese_test/_analyze?analyzer=paoding_analyzer1' -d '手机'
- The default chinese analyzer: it merely splits all Chinese characters into individual tokens, so we get two tokens: 手 and 机. Elasticsearch's standard analyzer produces exactly the same output. Therefore chinese is deprecated, will soon be replaced by standard, and should be avoided.
- The paoding plugin: paoding is almost an industry standard and is considered an elegant solution. Unfortunately, the plugin for Elasticsearch is unmaintained, and I could only get it running on version 1.0.1 after some modifications. (Installation steps omitted from the original.) After installation, we get a new paoding tokenizer and two collectors: max_word_len and most_word. By default no analyzer is exposed, so we have to declare a new one. (Configuration steps omitted from the original.) Both configurations give good results, with clean and unique tokens. The behavior is also very good on more complex sentences.
- The cjk analyzer: a very simple analyzer that just transforms any text into bigrams. "手机" is indexed as the single token 手机, which looks good, but with a longer word such as "元宵节" (Lantern Festival), two tokens are generated: 元宵 and 宵节, meaning respectively "Lantern Festival" and "Xiao Festival".
- The smartcn plugin: very easy to install. (Installation steps omitted from the original.) It exposes a new smartcn analyzer, as well as a smartcn_tokenizer, based on Lucene's SmartChineseAnalyzer. It uses a probabilistic suite to find the optimal segmentation of words, relying on hidden Markov models and a large amount of training text. A fairly good training dictionary is therefore already embedded, and our examples are tokenized correctly.
- The ICU plugin: another official plugin. (Installation steps omitted from the original.) If you deal with any non-English language, using this plugin is recommended. It exposes an icu_tokenizer, as well as many powerful analysis tools such as icu_normalizer, icu_folding, icu_collation, etc. It works with Chinese and Japanese dictionaries containing word-frequency information to infer groupings of characters. On "手机" everything is fine and works as expected, but on "元宵节" two tokens are produced: 元宵 and 节, because "元宵" and "节" are more common than "元宵节".
Comparison of results (table omitted from the original)
From my point of view, paoding and smartcn give the best results. The chinese tokenizer is very bad, and icu_tokenizer is a bit disappointing on "元宵节", but it handles traditional Chinese very well.
Traditional Chinese support
You may need to handle traditional Chinese coming from documents or from user search requests. A normalization step is needed to convert this traditional input into simplified characters, because plugins like smartcn or paoding do not handle it correctly.
You can handle this in your application, or try the elasticsearch-analysis-stconvert plugin to deal with it directly inside Elasticsearch. It can convert characters between traditional and simplified in both directions. (Installation steps omitted from the original.)
A last solution is to use cjk: if the input cannot be tokenized correctly, you still have a good chance of catching the documents you need, and you can then use icu_tokenizer (which is also quite good) to improve relevance.
Further improvements
There is no perfect one-size-fits-all solution for Elasticsearch analysis, and Chinese is no exception. You have to combine the available analyzers and build your own based on the data you process. For example, I use the cjk and smartcn tokenizers on my search fields, with a multi-field mapping and multi-match queries.
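That combination can be sketched as a multi-field mapping plus a multi_match query; the index, type, and field names below are illustrative, not the ones from the original setup:

```shell
# Hypothetical mapping: index the same text with both analyzers.
curl -XPUT 'http://localhost:9200/chinese_test/_mapping/article' -d '{
  "article": {
    "properties": {
      "content": {
        "type": "string",
        "analyzer": "cjk",
        "fields": {
          "smart": { "type": "string", "analyzer": "smartcn" }
        }
      }
    }
  }
}'

# Query both sub-fields at once: the cjk bigrams catch more documents,
# while smartcn boosts well-segmented matches.
curl -XGET 'http://localhost:9200/chinese_test/_search' -d '{
  "query": {
    "multi_match": {
      "query": "手机",
      "fields": ["content", "content.smart"]
    }
  }
}'
```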
(FAQ section omitted from the original)
The above is the detailed content of Efficient Chinese Search with Elasticsearch. For more information, please follow other related articles on the PHP Chinese website!
