
Detecting rare characters in Python

WBOY
Original
2016-12-05 13:27:14

Solution ideas

My first thought was to use Python's regular expressions to match illegal characters and thereby find the invalid records. The idea was simple, but the implementation was not: along the way I discovered gaps in my knowledge of character encodings and of Python's internal string representation. I stumbled into quite a few pitfalls, and although a few points remain fuzzy, I ended up with a reasonably clear overall picture. I am recording the experience here so I don't fall into the same holes again.

The tests below were run in the Python 2.7.8 environment bundled with ArcGIS 10.3; I cannot guarantee that other Python environments behave the same way.

Python regular expressions

Regular expressions in Python are provided by the built-in re module, and three functions cover most uses. re.compile() produces a reusable compiled pattern, and the match() and search() methods return match results. The difference between the two: match() only matches at the specified position, while search() scans forward from that position until it finds a matching substring. In the code below, match_result tries to match at the first character, f, and returns None because the match fails; search_result scans forward from f until it finds the first matching character, a, and group() then returns the matched text, the character a.

import re

pattern = re.compile('[abc]')
match_result = pattern.match('fabc')
if match_result:
    print match_result.group()

search_result = pattern.search('fabc')
if search_result:
    print search_result.group()

The example above compiles a pattern first and then matches against it. We could instead call re.match(pattern, string) directly and get the same result, but the direct form is less flexible. First, the regular expression cannot be reused: if a large amount of data is matched against the same pattern, the expression has to be compiled internally every time, which costs performance. Second, re.match() is less capable than pattern.match(), which can additionally specify the position at which matching starts, as the sketch below shows.
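To illustrate that last point, here is a minimal sketch (my own addition, not from the original article) of the difference on Python 2.7: the module-level re.match() always starts at the beginning of the string, while a compiled pattern's match() accepts a starting position.

import re

pattern = re.compile('[abc]')

print re.match('[abc]', 'fabc')         # None: matching starts at 'f' and fails
print pattern.match('fabc', 1).group()  # 'a': matching starts at index 1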

Encoding problem

Once the basics of Python regular expressions are clear, all that remains is to find a suitable regular expression for matching rare and illegal characters. Illegal characters are easy and can be matched with the following pattern:

pattern = re.compile(r'[~!@#$%^&* ]')

Matching rare characters, however, really stumped me. The first question is the definition: what counts as a rare character? After discussing it with the project manager, we settled on treating any character outside GB2312 as rare. The next question: how do you match GB2312 characters?

Some research turned up the byte ranges: a GB2312 character occupies [\xA1-\xF7][\xA1-\xFE], and within that the Chinese-character area is [\xB0-\xF7][\xA1-\xFE]. The expression, with rare-character matching added, becomes:

pattern = re.compile(r'[~!@#$%^&* ]|[^\xA1-\xF7][^\xA1-\xFE]')
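As a quick sanity check of those byte ranges (my own addition; the sample character u'中' is my choice, not from the original data), encoding a common Chinese character to GB2312 yields two bytes that fall inside the ranges above:

# -*- coding: utf-8 -*-
# Sanity check of the GB2312 byte ranges on Python 2.7.
sample = u'中'.encode('gb2312')       # two-byte GB2312 string '\xd6\xd0'
print repr(sample)
print '\xb0' <= sample[0] <= '\xf7'   # True: lead byte in the Chinese-character area
print '\xa1' <= sample[1] <= '\xfe'   # True: trail byte in \xA1-\xFE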

The problem seemed solved, but I was being naive. The strings to be checked are all read from layer files, and arcpy helpfully hands them back as unicode strings. So I would need to know which code-point ranges in unicode correspond to the GB2312 character set. Unfortunately, GB2312 characters are not contiguous in unicode, and a regular expression covering that range would be hopelessly complicated. The idea of matching rare characters with a regular expression had hit a dead end.

Solution

Since the strings arrive as unicode, could I convert them to GB2312 first and then match? Not reliably: the unicode character set is much larger than GB2312, so the conversion GB2312 => unicode always succeeds, but the reverse conversion unicode => GB2312 may fail.
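A small illustration of this asymmetry (my own example; the character u'喆' is commonly cited as a GBK character outside GB2312, which I assume here):

# -*- coding: utf-8 -*-
# Encoding to GB2312 succeeds for characters inside the set and raises
# UnicodeEncodeError otherwise (Python 2.7).
print repr(u'中'.encode('gb2312'))    # succeeds: '\xd6\xd0'
try:
    u'喆'.encode('gb2312')            # assumed to lie outside GB2312
except UnicodeEncodeError:
    print 'cannot be encoded as GB2312'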

That failure is itself useful: if converting a string from unicode to GB2312 fails, doesn't that mean it contains characters outside the GB2312 character set? So I call unicode_string.encode('GB2312') to attempt the conversion and catch UnicodeEncodeError to identify rare characters.

The final code is as follows:

import re

def is_rare_name(string):
    # Illegal characters are caught by a simple regular expression.
    pattern = re.compile(u"[~!@#$%^&* ]")
    match = pattern.search(string)
    if match:
        return True

    # Anything that cannot be encoded as GB2312 is treated as a rare character.
    try:
        string.encode("gb2312")
    except UnicodeEncodeError:
        return True

    return False
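A hypothetical usage sketch (the sample names are my own, not from the original layer data; it assumes the is_rare_name() function above is in scope and that u'喆' lies outside GB2312):

# -*- coding: utf-8 -*-
print is_rare_name(u'张三')     # False: an ordinary GB2312 name
print is_rare_name(u'王喆')     # True: u'喆' cannot be encoded as GB2312
print is_rare_name(u'李@四')    # True: contains an illegal character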

Summary

That is all for this article. I hope it is of some help with your study or work; if you have any questions, feel free to leave a comment.
