
Scraping Ele.me with Python

步履不停
2019-07-01 13:31:49


I am learning data visualization and lack data to practice on, so I decided to scrape some takeaway shop information from Ele.me.

The goal is simply to obtain the data, so the code is fairly short:

import requests
import json
import csv

def crawler_ele(page=0):
    def get_page(page):
        # latitude/longitude removed -- fill in the coordinates of the area to crawl
        url = ('https://h5.ele.me/restapi/shopping/v3/restaurants'
               '?latitude=xxxx&longitude=xxxx'
               '&offset={page}&limit=8&terminal=h5').format(page=page * 8)
        headers = {
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.80 Safari/537.36',
            'cookie': r'xxxx',  # a logged-in cookie, copied from the browser
        }
        return json.loads(requests.get(url, headers=headers).text)

    re = get_page(page)
    if re.get('items'):
        with open('data.csv', 'a', newline='', encoding='utf-8') as f:
            writer = csv.DictWriter(f, fieldnames=['名称', '月销售量', '配送费', '起送价', '风味', '评分', '配送时长', '评分统计', '距离', '地址'])
            if page == 0:
                writer.writeheader()  # write the header only for the first page
            for item in re.get('items'):
                restaurant = item.get('restaurant')
                info = {
                    '名称': restaurant.get('name'),
                    '月销售量': restaurant.get('recent_order_num'),
                    '配送费': restaurant.get('float_delivery_fee'),
                    '起送价': restaurant.get('float_minimum_order_amount'),
                    '风味': restaurant.get('flavors')[0].get('name'),
                    '评分': restaurant.get('rating'),
                    '配送时长': restaurant.get('order_lead_time'),
                    '评分统计': restaurant.get('rating_count'),
                    '距离': restaurant.get('distance'),
                    '地址': restaurant.get('address'),
                }
                writer.writerow(info)
    if re.get('has_next'):
        crawler_ele(page + 1)  # recurse to the next page (was crawler_page, an undefined name)

crawler_ele(0)

A few brief notes:

The latitude and longitude in the URL have been replaced with xxxx placeholders; substitute the coordinates of the area you want to crawl, which you can look up yourself or obtain by calling a map API.
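For illustration, the page URL can be assembled from the coordinates with a small helper; the latitude and longitude below are made-up placeholder values, not real ones from the article:

```python
def build_url(latitude, longitude, page, page_size=8):
    """Build the restaurant-list URL for one page of results."""
    offset = page * page_size  # the API pages by offset, 8 shops per page
    return ('https://h5.ele.me/restapi/shopping/v3/restaurants'
            f'?latitude={latitude}&longitude={longitude}'
            f'&offset={offset}&limit={page_size}&terminal=h5')

# Page 2 starts at offset 16 when each page holds 8 shops.
print(build_url(31.23, 121.47, 2))
```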

A cookie from a logged-in session must be added to the headers; otherwise the site restricts how many pages you can crawl.
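A sketch of how the headers might be assembled; the cookie value here is a hypothetical placeholder, and in practice you would copy the real string from your browser's developer tools (Network tab) after logging in:

```python
def make_headers(cookie):
    """Request headers for the crawl; the cookie proves a logged-in session."""
    return {
        'user-agent': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                       'AppleWebKit/537.36 (KHTML, like Gecko) '
                       'Chrome/71.0.3578.80 Safari/537.36'),
        'cookie': cookie,
    }

headers = make_headers('SID=placeholder')  # hypothetical cookie string
```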

The final call is recursive rather than a loop. If writeheader() runs on every page, the CSV will end up with one duplicate header row per page; either write the header only for the first page, or open the finished file in Excel and delete the duplicate rows.
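If a CSV from an earlier run already contains the repeated header rows, they can also be stripped with a short script instead of by hand in Excel; a minimal sketch (the file names are assumptions):

```python
import csv

def dedupe_headers(src='data.csv', dst='data_clean.csv'):
    """Copy src to dst, keeping the first header row and dropping repeats."""
    with open(src, newline='', encoding='utf-8') as fin, \
         open(dst, 'w', newline='', encoding='utf-8') as fout:
        reader = csv.reader(fin)
        writer = csv.writer(fout)
        header = next(reader)        # the first header row is kept
        writer.writerow(header)
        for row in reader:
            if row != header:        # every later copy of the header is dropped
                writer.writerow(row)
```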


