
Save crawled data to MySQL


Python connects to MySQL through the pymysql library:

import pymysql

conn = pymysql.connect(host='127.0.0.1', user='root', password='123456',
                       db='company', charset="utf8")
cur = conn.cursor()

sql = '''
'''                               # the SQL statement goes here
employee = cur.execute(sql)

conn.commit()
cur.close()
conn.close()

That is the basic pattern for working with pymysql.
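For reference, here is a minimal runnable sketch of the same pattern with the SQL filled in. The employee table and its columns are only placeholders for illustration, not something defined in this article.

import pymysql

conn = pymysql.connect(host='127.0.0.1', user='root', password='123456',
                       db='company', charset='utf8')
cur = conn.cursor()

# Hypothetical query; replace 'employee' with a table that exists in your database
cur.execute("select id, name from employee")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()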
This time we crawl a Taobao product search page.

The step-by-step explanation of the crawl is omitted; here is the complete script.


import requests
import re
import pymysql


def getHTMLtext(url):
    try:
        r = requests.get(url, timeout=100)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return ""


def getpage(itl, html):
    # Pull the price and title fields out of the JSON embedded in the page
    try:
        plt = re.findall(r'"view_price":"[\d.]*"', html)
        nlt = re.findall(r'"raw_title":".*?"', html)
        for i in range(len(plt)):
            price = eval(plt[i].split(':')[1])
            title = eval(nlt[i].split(':')[1])
            itl.append([price, title])
    except:
        print("")


def printgoods(itl):
    tplt = "{:2}\t{:8}\t{:16}"
    print(tplt.format("序号", "价格", "商品名称"))   # index, price, product name

    count = 0
    conn = pymysql.connect(host='127.0.0.1', user='root', password='123456',
                           db='company', charset="utf8")
    cur = conn.cursor()

    sqlc = '''
                create table coffee(
                id int(11) not null auto_increment primary key,
                name varchar(255) not null,
                price float not null)DEFAULT CHARSET=utf8;
                '''
    try:
        A = cur.execute(sqlc)
        conn.commit()
        print('成功')    # table created
    except:
        print("错误")    # creation failed (e.g. the table already exists)

    for g in itl:
        count = count + 1
        b = tplt.format(count, g[0], g[1])

        sqla = '''
        insert into coffee(name,price)
        values(%s,%s);
        '''
        try:
            B = cur.execute(sqla, (g[1], g[0]))
            conn.commit()
            print('成功')    # row inserted
        except:
            print("错误")    # insert failed

        # save_path = 'D:/taobao.txt'
        # f = open(save_path, 'a')
        # f.write(b + '\n')
        # f.close()

    conn.commit()
    cur.close()
    conn.close()


def main():
    goods = "咖啡"        # search keyword (coffee)
    depth = 2             # number of result pages to crawl
    start_url = 'https://s.taobao.com/search?q=' + goods
    List = []
    for i in range(depth):
        try:
            url = start_url + "&s=" + str(i * 44)   # each Taobao result page holds 44 items
            html = getHTMLtext(url)
            getpage(List, html)
        except:
            continue

    print(printgoods(List))   # printgoods does the saving; it returns None
    # savefiles(data)


main()
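As a design note, the script above commits after every single insert, which is slow for large result lists. A hedged alternative sketch, assuming the same coffee table and the [price, title] list built by getpage, batches the rows with executemany and commits once:

def save_goods(itl):
    # Batch-insert the crawled [price, title] pairs in one round trip
    conn = pymysql.connect(host='127.0.0.1', user='root', password='123456',
                           db='company', charset='utf8')
    cur = conn.cursor()
    sqla = "insert into coffee(name, price) values (%s, %s);"
    cur.executemany(sqla, [(title, price) for price, title in itl])
    conn.commit()
    cur.close()
    conn.close()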


You can see that the required data has been saved to the database.
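If you want to confirm it from Python rather than the MySQL client, a small sketch (same connection settings as above) reads a few rows back:

import pymysql

conn = pymysql.connect(host='127.0.0.1', user='root', password='123456',
                       db='company', charset='utf8')
cur = conn.cursor()
cur.execute("select id, name, price from coffee limit 5")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()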
