Python handles crawling Chinese encoding and judging encoding
While developing my own crawler, I found that some web pages are UTF-8, some are GB2312, and some are GBK. If they are not processed, the collected data will be garbled. The solution is to convert every page's HTML into a single unified encoding: UTF-8.
Version: Python 2.7
#coding:utf-8
import urllib2
import chardet

# Fetch the page HTML
line = "http://www.pythontab.com"
html_1 = urllib2.urlopen(line, timeout=120).read()

# Detect the page's encoding
encoding_dict = chardet.detect(html_1)
print encoding_dict
web_encoding = encoding_dict['encoding']

# Convert, so the whole HTML will not be garbled
if web_encoding == 'utf-8' or web_encoding == 'UTF-8':
    html = html_1
else:
    html = html_1.decode('gbk', 'ignore').encode('utf-8')
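Note that chardet's guess can be unreliable on short pages, and the snippet above falls back to GBK even when the detector reports some other encoding. A stdlib-only alternative is to try a list of likely codecs in order until one decodes cleanly. This is a minimal sketch, not the article's method; the helper name `to_utf8` and the codec list are my own assumptions:

```python
# Stdlib-only sketch (helper name and codec list are assumptions): try each
# candidate codec until one decodes without error, then re-encode as UTF-8.
# Works on both Python 2 and Python 3.

def to_utf8(raw_bytes, candidates=('utf-8', 'gb18030', 'gbk', 'gb2312')):
    """Return raw_bytes re-encoded as UTF-8, trying candidate codecs in order."""
    for enc in candidates:
        try:
            return raw_bytes.decode(enc).encode('utf-8')
        except UnicodeDecodeError:
            continue
    # Last resort: force UTF-8, dropping bytes that cannot be decoded
    return raw_bytes.decode('utf-8', 'ignore').encode('utf-8')

gbk_bytes = u'\u4e2d\u6587'.encode('gbk')   # the characters "Chinese" in GBK
utf8_bytes = to_utf8(gbk_bytes)             # valid UTF-8 output
```

Trying `utf-8` first is safe because GBK byte sequences are almost never valid UTF-8, so a GBK page falls through to the next candidate instead of being mis-decoded.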