How to implement a simple crawler program in Python
As the Internet has grown, data has become one of today's most valuable resources, and crawler programs are among the most important tools for collecting it. This article shows how to implement a simple crawler program in Python, with concrete code examples.
First, import the required libraries, fetch the target page, and hand the HTML to BeautifulSoup for parsing:

```python
import requests
from bs4 import BeautifulSoup

url = "URL of the target website"  # replace with the page you want to crawl
response = requests.get(url)
html = response.text
soup = BeautifulSoup(html, "html.parser")
```
```python
# Example: extract news titles and links
news_list = soup.find_all("a", class_="news-title")  # assumes titles use the CSS class "news-title"
for news in news_list:
    title = news.text
    link = news["href"]
    print(title, link)
```
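The same extraction logic can be tried offline on an inline HTML snippet, which is useful for testing the selectors before pointing the crawler at a live site. The markup below is hypothetical, standing in for the target page:

```python
from bs4 import BeautifulSoup

# A small inline snippet standing in for the target page (hypothetical markup)
html = """
<div>
  <a class="news-title" href="/news/1">First headline</a>
  <a class="news-title" href="/news/2">Second headline</a>
  <a class="other" href="/ads">Not news</a>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
news_list = soup.find_all("a", class_="news-title")  # only the two news links match
pairs = [(a.text, a["href"]) for a in news_list]
```

Here `pairs` contains only the two `news-title` anchors; the `other` link is filtered out by the class selector.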
```python
# Example: save the data to a file, one "title link" pair per line
with open("news.txt", "w", encoding="utf-8") as f:
    for news in news_list:
        title = news.text
        link = news["href"]
        f.write(f"{title} {link}\n")
```
```python
import time

# Example: throttle requests and cap how many items are crawled
interval = 2  # wait 2 seconds between items
count = 0     # counter for crawled items
for news in news_list:
    if count < 10:  # crawl at most 10 news items
        title = news.text
        link = news["href"]
        print(title, link)
        count += 1
        time.sleep(interval)  # throttle
    else:
        break
```
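The counting-and-sleeping pattern above can be factored into a small reusable helper. `crawl_limited` is a hypothetical name introduced here for illustration; it is shown with a zero delay so the example runs instantly:

```python
import time
from itertools import islice


def crawl_limited(items, limit=10, interval=0.0):
    """Yield at most `limit` items, sleeping `interval` seconds between them."""
    for item in islice(items, limit):
        yield item
        time.sleep(interval)


# With limit=10, only the first 10 of 100 items are yielded
results = list(crawl_limited(range(100), limit=10, interval=0))
```

Using a generator keeps the throttling logic separate from the extraction logic, so the same helper works for any iterable of links or pages.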
The above completes the implementation of a simple crawler program. Through this example you can learn how to write a basic crawler in Python that fetches data from a target website and stores it in a file. Of course, crawlers can do far more than this, and you can extend and improve the program to suit your own needs.
At the same time, note that when writing crawler programs you must abide by legal and ethical norms, respect the website's robots.txt file, and avoid placing unnecessary load on the target site.
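One way to honor robots.txt from Python's standard library is `urllib.robotparser`. The sketch below parses rules supplied inline so it runs offline; in a real crawler you would instead call `rp.set_url(...)` followed by `rp.read()` (the `example.com` URLs and paths here are placeholders):

```python
from urllib.robotparser import RobotFileParser

# Parse robots.txt rules supplied as a list of lines (offline example)
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# Check whether a given URL may be fetched before requesting it
allowed = rp.can_fetch("*", "https://example.com/news/1")
blocked = rp.can_fetch("*", "https://example.com/private/page")
```

Calling `can_fetch` before every request lets the crawler skip disallowed paths automatically instead of relying on manual checks.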
The above is the detailed content of How to implement a simple crawler program in Python. For more information, please follow other related articles on the PHP Chinese website!