Python: solving UnicodeDecodeError when querying with cx_Oracle
In a recent project, I had to query a table with more than 1 million records and compute some statistics on the results. During this process, however, a UnicodeDecodeError was raised after only a few rows had been fetched.
The query is made through the sqlalchemy library, which uses cx_Oracle internally to perform the actual operations. The Python version is 3.5.0, the host system is Windows Server 2008, and the code looks roughly like this:

from sqlalchemy import create_engine

engine = create_engine('oracle://demo:123456@192.168.1.202/TEST')
conn = engine.connect()

sql = 'select t.type from TS t'
result = conn.execute(sql)
for row in result:
    print(row['type'])
Here we first create a connection to the database and then run the query. Unfortunately, a UnicodeDecodeError was raised after fewer than 10 rows had been printed.
I originally thought it was a server-side encoding problem, so I added the encoding parameter to the create_engine call:
engine = create_engine('oracle://demo:123456@192.168.1.202/TEST',encoding="UTF-8")
Another option is to specify the encoding directly in the connection URL, like this:
engine = create_engine('oracle://demo:123456@192.168.1.202/TEST?charset=utf-8')
But neither change solved the problem, and searching the web turned up no suitable solution. Then I suddenly remembered that when using MySQL (personally I prefer PostgreSQL), whenever garbled characters appeared we often ran:
set names gbk;
This sets the client's encoding, not the server's, to fix garbled output in the terminal (since PostgreSQL defaults to UTF-8, garbled output is less likely there). Similarly, when installing the Oracle client under Linux, an NLS_LANG environment variable is set; for details see the article "Installing Oracle Instant Client in Ubuntu 14.04", whose specifics this article will not repeat.
Generally, we make the following setting on the command line (the csh-style setenv shown in some guides does not work in Windows cmd; there the command is set):

set NLS_LANG=SIMPLIFIED CHINESE_CHINA.ZHS16GBK
This tells Oracle to use Simplified Chinese for its messages and GBK as the client character set.
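If editing the system environment is inconvenient, the same effect can be achieved from within Python, provided the variable is set before cx_Oracle loads the Oracle client libraries. A minimal sketch (the connection string is the example one from above):

```python
import os

# NLS_LANG must be in the environment before cx_Oracle initializes the
# Oracle client, so set it at the very top of the script, before importing
# sqlalchemy or cx_Oracle.
os.environ['NLS_LANG'] = 'SIMPLIFIED CHINESE_CHINA.ZHS16GBK'

# Only import and connect afterwards:
# from sqlalchemy import create_engine
# engine = create_engine('oracle://demo:123456@192.168.1.202/TEST')
```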
In addition, we can also execute the following statement to ensure that the above operation is correct:
SELECT * FROM v$nls_parameters;
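The same check can be done from Python over an existing sqlalchemy connection. This is only a sketch: `conn` is assumed to be a connection created as in the earlier snippet, and the row whose parameter is NLS_CHARACTERSET tells you which codec the client must use.

```python
# Server-side NLS settings; look for the NLS_CHARACTERSET row
# (e.g. ZHS16GBK on the server described in this article).
NLS_QUERY = "SELECT parameter, value FROM v$nls_parameters"

def show_nls_parameters(conn):
    """Print every NLS parameter of the connected Oracle instance."""
    for parameter, value in conn.execute(NLS_QUERY):
        print(parameter, value)
```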
Since the database server above is deployed on Windows, its character set is naturally GBK, so if the client decodes the results as UTF-8, decoding errors will naturally occur.
Note that non-ASCII text can only be displayed correctly when the encodings of the database server and the client are consistent, and sqlalchemy by default forces query results to be converted to Unicode. In Python 3, the data therefore goes through a process like this:
>>> a = '中国'.encode('gbk')
>>> a
b'\xd6\xd0\xb9\xfa'
In sqlalchemy, because of this forced conversion, something like the following happens:

>>> a.decode('utf-8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd6 in position 0: invalid continuation byte
This is exactly the error we saw. Under normal circumstances, the bytes should be decoded as GBK:
>>> a.decode('gbk')
'中国'
Setting NLS_LANG as shown above is equivalent to switching the client-side decoding to GBK.
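The whole diagnosis can be condensed into a small, Oracle-free reproduction: bytes produced with the server's GBK character set fail to decode as UTF-8 but round-trip cleanly as GBK.

```python
# GBK-encoded payload, as a GBK-configured server would send it
payload = '中国'.encode('gbk')          # b'\xd6\xd0\xb9\xfa'

# What sqlalchemy effectively attempts when the client is assumed UTF-8:
try:
    payload.decode('utf-8')
except UnicodeDecodeError as exc:
    print('UTF-8 decode failed:', exc)

# With the matching codec (what setting NLS_LANG to ...ZHS16GBK achieves):
print(payload.decode('gbk'))            # prints 中国
```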