
Python black magic: encoding conversion

高洛峰 (Original) · 2017-03-13 18:15:33

This article introduces encoding conversion in Python and walks through a flexible way to handle it; readers who have to deal with messy encodings may find it useful.

When we do encoding conversion with the libraries of other languages, there are usually only two (or three) ways to deal with bytes that cannot be decoded:

  • throw an exception

  • replace them with a substitute character

  • skip them

But in the messy real world, thanks to all kinds of unreliability, the text we handle always contains some discordant elements, such as mixed encodings. When that happens, we fall back on the approaches above.

So the question is: does Python offer a better way?

The answer is, yes!

Encoding conversion in Python is actually a two-stage process:


source -> unicode -> dest

First convert the string from the original encoding to unicode. Then convert unicode to the target encoding.

The first step is normally done with decode() or unicode(); the second step with encode().
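For example, here is a minimal sketch (Python 2, matching the rest of the article) of the two steps, converting a GBK byte string to UTF-8; the sample bytes are the GBK encoding of 中文:

# -*- coding: utf-8 -*-
gbk_bytes = '\xd6\xd0\xce\xc4'     # GBK bytes for 中文

u = gbk_bytes.decode('gbk')        # step 1: source encoding -> unicode
                                   # (unicode(gbk_bytes, 'gbk') is equivalent)
utf8_bytes = u.encode('utf8')      # step 2: unicode -> target encoding

print repr(utf8_bytes)             # '\xe4\xb8\xad\xe6\x96\x87'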

The black magic we are talking about here happens in the first step.

Both decode() and unicode() accept an optional parameter called errors. Take a look at the official description:

  errors may be given to set a different error handling scheme. Default is 'strict' meaning that encoding errors raise a UnicodeDecodeError. Other possible values are 'ignore' and 'replace' as well as any other name registered with codecs.register_error that is able to handle UnicodeDecodeErrors.

This parameter usually takes one of three values (a quick illustration follows the list):

  • strict: the default. If a decoding error occurs, UnicodeDecodeError is raised.

  • ignore: skip the offending bytes.

  • replace: substitute the offending bytes with a replacement character (U+FFFD when decoding, ? when encoding).
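Here is a quick illustration (Python 2) of how these values behave; '\xe4\xbd\xa0' is the UTF-8 encoding of 你 and the trailing '\xff' is a junk byte:

data = '\xe4\xbd\xa0' + '\xff'

print repr(data.decode('utf8', 'ignore'))    # u'\u4f60'        (junk byte dropped)
print repr(data.decode('utf8', 'replace'))   # u'\u4f60\ufffd'  (junk byte replaced)
data.decode('utf8', 'strict')                # raises UnicodeDecodeError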

Did you notice the last sentence of that description? That is where the show begins!

The codecs module provides a function called register_error, which lets users register a custom error-handling scheme for dealing with UnicodeDecodeError.


Let’s take a look at the function prototype:

codecs.register_error(name, error_handler)

name: the name of the error-handling scheme; this is the string you later pass as the errors argument of decode().

error_handler: the handling function. It receives the exception as its only argument and must return a 2-tuple: the first element is the unicode string to substitute for the bad bytes, the second is the position at which decoding should resume.
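To make the contract concrete, here is a minimal, hypothetical handler (the name question_mark is purely for illustration) that substitutes u'?' for the offending bytes and resumes right after them, essentially a hand-rolled 'replace':

import codecs

def question_mark(e):
    # an error handler receives the exception and must return
    # (replacement unicode string, position at which to resume decoding)
    if not isinstance(e, UnicodeDecodeError):
        raise TypeError("don't know how to handle %r" % e)
    return (u'?', e.end)

codecs.register_error('question_mark', question_mark)

print repr('abc\xff'.decode('utf8', 'question_mark'))   # u'abc?'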

With these basic concepts in place, let's look at a concrete implementation:

import codecs

def cjk_error(exc):
    if not isinstance(exc, UnicodeDecodeError):
        raise TypeError("don't know how to handle %r" % exc)
    if exc.end + 1 > len(exc.object):
        raise TypeError('unknown codec, the object is too short!')
    # assumes the reported error covers a single byte; look at it and the next one
    ch1 = ord(exc.object[exc.start:exc.end])
    newpos = exc.end + 1
    ch2 = ord(exc.object[exc.start + 1:newpos])
    sk = exc.object[exc.start:newpos]
    if 0x81 <= ch1 <= 0xFE and (0x40 <= ch2 <= 0x7E or 0x80 <= ch2 <= 0xFE):   # GBK byte ranges
        return (unicode(sk, 'cp936'), newpos)
    if 0x81 <= ch1 <= 0xFE and (0x40 <= ch2 <= 0x7E or 0xA1 <= ch2 <= 0xFE):   # BIG5 byte ranges
        return (unicode(sk, 'big5'), newpos)
    raise TypeError('unknown codec!')

codecs.register_error("cjk_replace", cjk_error)

The above is something I copied from the Internet. I thought it was great at first, but later realized it is a rather naive algorithm: the lead bytes of UTF-8 and GBK overlap, so when a UTF-8 string is decoded as GBK, the error is only reported from the third byte onward, because the first two bytes can also happen to form a valid character in the GBK range. For example:

a = "你"              # utf8编码:&#39;\xe4\xbd\xa0&#39;
c = unicode(a[:2],&#39;gbk&#39;)  # 正常返回
c = unicode(a, &#39;gbk&#39;)    # UnicodeDecodeError 。错误发生在第三个字节

To handle this situation, the following improvement can be made:

import codecs

def cjk_replace(e):
    if not isinstance(e, UnicodeDecodeError):
        raise TypeError("invalid exception type %s" % e)

    # When decoding as GBK/GB18030/BIG5, the failing bytes may really be a
    # UTF-8 character whose first two bytes were already consumed as one GBK
    # character, so back up two bytes and try UTF-8 first.
    src = e.encoding
    if src in ('gbk', 'gb18030', 'big5'):
        beg = e.start - 2
        if beg >= 0:
            try:
                # resume right after the recovered UTF-8 sequence
                return unicode(e.object[beg:e.end], 'utf8'), e.end
            except UnicodeDecodeError:
                pass

    if e.end + 1 > len(e.object):
        raise TypeError('unknown codec, the object is too short!')
    ch1 = ord(e.object[e.start:e.end])
    newpos = e.end + 1
    ch2 = ord(e.object[e.start + 1:newpos])
    sk = e.object[e.start:newpos]

    if src != 'gbk' and 0x81 <= ch1 <= 0xFE and (0x40 <= ch2 <= 0x7E or 0x80 <= ch2 <= 0xFE):   # GBK
        return (unicode(sk, 'cp936'), newpos)
    if src != 'big5' and 0x81 <= ch1 <= 0xFE and (0x40 <= ch2 <= 0x7E or 0xA1 <= ch2 <= 0xFE):  # BIG5
        return (unicode(sk, 'big5'), newpos)
    raise TypeError('unknown codec!')

codecs.register_error("cjk_replace", cjk_replace)

Of course, this logic is still not rigorous enough, but it is at least a somewhat realistic way to cope with the messiness of mixed encodings.

Since Python provides this capability, let's discuss it together: how can we do better?
