In order to rent a house in Shanghai, I used Python to crawl more than 20,000 rental listings overnight

Recently, due to a sudden change at work, my new office is far from where I currently live, so I have to find a new place to rent.

I hopped on the agent's electric scooter and started exploring unfamiliar corners of the city.


Switching back and forth between the various rental apps, I grew frustrated because the whole process was so inefficient:

First, I live with my girlfriend, so we both need to consider the commute to work at the same time. But the commute-based search on these platforms is fairly useless: some do not support selecting multiple starting locations, and others can only mechanically compute the distance from each location rather than finding places reachable within the same travel time from both, which does not meet our needs.

Second, from a renter's perspective, there are too many rental platforms, and each one's filtering and sorting logic is different, which makes it hard to compare similar listings side by side.

But no matter. As a programmer, you naturally solve problems the programmer's way. So last night I wrote a Python script to collect all the rental listings from one platform for the Shanghai area, more than 20,000 in all.


Below, I share the whole process of crawling this data.

Analyze the page and find the entry point

First, open the platform's rental listing page. The listing cards on this page already contain most of the information we need, and it can all be read directly from the DOM, so we can consider collecting the data by simulating plain web requests. The entry URL is:

https://sh.lianjia.com/zufang/
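Before writing the crawler proper, it is worth a quick sanity check that the listing data really is present in the returned HTML. Here is a minimal sketch (assuming the requests and BeautifulSoup libraries, and the content__list--item class used later in this article) that fetches the page and counts the listing cards it finds:

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.9 Safari/537.36'}
# Fetch the rental landing page and parse the returned HTML
res = requests.get('https://sh.lianjia.com/zufang/', headers=headers)
soup = BeautifulSoup(res.text, 'html.parser')
# Count the listing cards present in the DOM
items = soup.find_all('div', {'class': 'content__list--item'})
print('listings found on this page:', len(items))

If this prints a non-zero count, the data can be collected with plain HTTP requests and no browser automation.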


So the next step is to work out how to construct the URLs. Observation shows that there are more than 20,000 listings in the city, but the site only lets you page through the first 100 pages, at a maximum of 30 listings per page, which works out to only 3,000 listings. It is impossible to get everything this way.


But we can work around this by adding filter conditions. Selecting "Jing'an" in the district filter leads to the following URL:

https://sh.lianjia.com/zufang/jingan/


You can see that this district has just over 2,000 listings spread across 75 pages; at 30 listings per page, all of them are theoretically reachable. So we can obtain all the data for the city by crawling each district separately.

https://sh.lianjia.com/zufang/jingan/pg2/

Clicking the button for the second page takes you to the URL above, and it turns out that simply changing the number after pg brings up the corresponding page.

However, there is a problem here: visiting the same page number twice can return different data, which would lead to duplicate records in the collected results. So we select "newly listed" among the sorting options and get the following link:

https://sh.lianjia.com/zufang/jingan/pg2rco11/

With this sort order, the results are stable across requests. So the plan is: first visit the first page of each district, read that district's maximum page count from the first page, and then issue simulated requests to every page to collect all the data.
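As an extra safety net against duplicates (this helper is not part of the original script), we can also remember every listing URL we have already kept and skip repeats at collection time:

# Remember every listing URL that has already been written
seen = set()

def is_new(house_url):
    # Return True only the first time a given listing URL is seen
    if house_url in seen:
        return False
    seen.add(house_url)
    return True

In the parsing loop further down, a row would then be written only when is_new(house_url) returns True.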

Crawling data

With the plan in place, we can start writing the code. First, we collect all the page links to crawl. The code is as follows:

import requests
from bs4 import BeautifulSoup

# Identifiers for every district
list = ['jingan','xuhui','huangpu','changning','putuo','pudong','baoshan','hongkou','yangpu','minhang','jinshan','jiading','chongming','fengxian','songjiang','qingpu']
# Request headers, to avoid getting the IP banned
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.9 Safari/537.36'}
# Container for all page links
urls = []
for a in list:
    # First page of the current district
    urls.append('https://sh.lianjia.com/zufang/{}/pg1rco11/'.format(a))
    # Fetch the DOM of the district's first page
    res = requests.get('https://sh.lianjia.com/zufang/{}/pg1rco11/'.format(a), headers=headers)
    content = res.text
    soup = BeautifulSoup(content, 'html.parser')
    # Read the maximum page count for this district
    page_num = int(soup.find('div', attrs={'class': 'content__pg'}).attrs['data-totalpage'])
    for i in range(2, page_num + 1):
        # Save the link for every remaining page
        urls.append('https://sh.lianjia.com/zufang/{}/pg{}rco11/'.format(a, i))

Next, we process the URLs obtained in the previous step one by one and extract the data from each page. The code is as follows:

num = 1
for url in urls:
    print("Processing page {}...".format(str(num)))
    res1 = requests.get(url, headers=headers)
    content1 = res1.text
    soup1 = BeautifulSoup(content1, 'html.parser')
    # All listing cards on the current page
    infos = soup1.find('div', {'class': 'content__list'}).find_all('div', {'class': 'content__list--item'})

Organize the data and export the file

By inspecting the page structure, we can see where each field is stored; once we locate the corresponding page elements, we can pull out the information we need.


The complete code is attached below. Interested readers can replace the city and district identifiers in the links to collect listings for their own area. The crawling approach for other rental platforms is largely the same, so I won't go into the details.

import time, re, csv, requests
import codecs
from bs4 import BeautifulSoup

print("**** Processing started ****")
# Write a UTF-8 BOM first so the CSV opens correctly in Excel
with open(r'..sh.csv', 'wb+') as fp:
    fp.write(codecs.BOM_UTF8)
f = open(r'..sh.csv', 'w+', newline='', encoding='utf-8')
writer = csv.writer(f)

# Identifiers for every district
list = ['jingan','xuhui','huangpu','changning','putuo','pudong','baoshan','hongkou','yangpu','minhang','jinshan','jiading','chongming','fengxian','songjiang','qingpu']
# Request headers, to avoid getting the IP banned
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.9 Safari/537.36'}
# Container for all page links
urls = []
for a in list:
    # First page of the current district
    urls.append('https://sh.lianjia.com/zufang/{}/pg1rco11/'.format(a))
    # Fetch the DOM of the district's first page
    res = requests.get('https://sh.lianjia.com/zufang/{}/pg1rco11/'.format(a), headers=headers)
    content = res.text
    soup = BeautifulSoup(content, 'html.parser')
    # Read the maximum page count for this district
    page_num = int(soup.find('div', attrs={'class': 'content__pg'}).attrs['data-totalpage'])
    for i in range(2, page_num + 1):
        # Save the link for every remaining page
        urls.append('https://sh.lianjia.com/zufang/{}/pg{}rco11/'.format(a, i))

num = 1
for url in urls:
    # Simulated request for the current page
    print("Processing page {}...".format(str(num)))
    res1 = requests.get(url, headers=headers)
    content1 = res1.text
    soup1 = BeautifulSoup(content1, 'html.parser')
    # Read the listing cards on the page
    infos = soup1.find('div', {'class': 'content__list'}).find_all('div', {'class': 'content__list--item'})

    # Extract the fields of each listing
    for info in infos:
        house_url = 'https://sh.lianjia.com' + info.a['href']
        title = info.find('p', {'class': 'content__list--item--title'}).find('a').get_text().strip()
        group = title.split()[0][3:]
        price = info.find('span', {'class': 'content__list--item-price'}).get_text()
        tag = info.find('p', {'class': 'content__list--item--bottom oneline'}).get_text()
        mixed = info.find('p', {'class': 'content__list--item--des'}).get_text()
        mix = re.split(r'/', mixed)
        address = mix[0].strip()
        area = mix[1].strip()
        door_orientation = mix[2].strip()
        style = mix[-1].strip()
        region = re.split(r'-', address)[0]
        writer.writerow((house_url, title, group, price, area, address, door_orientation, style, tag, region))
        time.sleep(0)  # increase this value to throttle requests
    print("Page {} done, {} listings collected.".format(str(num), len(infos)))
    num += 1

f.close()
print("**** All done ****")

After these steps, we have the complete set of rental listings for the whole city, and a few basic filtering operations are enough to surface the data we actually need.
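For example, here is a minimal sketch (not part of the original script) that uses pandas to explore the exported CSV. The column names follow the writer.writerow order above; the district labels 静安/徐汇 and the price format (a number followed by 元/月) are assumptions about how the site renders listings, so adjust them to whatever actually ends up in your file:

import pandas as pd

# Column order matches the writer.writerow call above; the CSV has no header row
cols = ['house_url', 'title', 'group', 'price', 'area', 'address',
        'door_orientation', 'style', 'tag', 'region']
df = pd.read_csv('..sh.csv', names=cols, encoding='utf-8-sig')

# Drop duplicate listings by URL
df = df.drop_duplicates(subset='house_url')

# Keep only listings in the districts of interest (labels assumed)
picked = df[df['region'].isin(['静安', '徐汇'])]

# Pull the leading number out of the price text and sort by it
picked = picked.assign(rent=picked['price'].str.extract(r'(\d+)', expand=False).astype(float))
print(picked.sort_values('rent').head(20))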
