Python crawler [1]: Download girl pictures in batches
The girl pictures board on jandan.net has many high-quality photos. This article shares a method for downloading those pictures in batches with Python.
Prerequisites: understand basic Python syntax. For this article you only need to know how to operate a list, use for...in, and define a function; the functions for fetching web pages, parsing them, and saving files are introduced as they are used.

You also need to install the third-party library BeautifulSoup4. Installing it with pip is the most convenient way, and recent versions of Python ship with pip. On Windows, press the Win+X shortcut, open Command Prompt (Admin), and enter:
pip install beautifulsoup4

then press Enter to run it.
No knowledge of HTML is required, but you still need a browser that can view source code and inspect elements, such as Chrome or Firefox. (If you don't have pip, search for how to install pip.)

We want to download all the images on more than two thousand pages, so first we must learn to download a single page. A page's source contains html, js, css, and so on; the image addresses are included in this source code, so the first step is to download the html.

1. Download the web page
import urllib.request
url = 'http://jandan.net/ooxx/page-2397#comments'
res = urllib.request.urlopen(url)

What does the function urllib.request.urlopen() do? As its name suggests, it opens a URL. It accepts either a str (what we passed) or a Request object. Its return value is always an object that can work like a context manager, with methods of its own such as geturl(), info(), and getcode(). In fact we don't need to worry about all of that: just remember that this function accepts a URL and returns an object containing all the information at that URL, and we then operate on that object.
Now read the html code out of the res object and assign it to the variable html, using the res.read() method. Try print(html):

html = res.read()
print(html)

(Part of the output is omitted here.) The output starts with b', meaning it is a bytes object. To get readable text, decode it as utf-8:

html = res.read().decode('utf-8')
print(html)

(Part of the output is omitted here.)
OK! This works because decode('utf-8') decodes the bytes returned by read() into utf-8 text. We can still use html = res.read(), though, because the bytes also contain all the information we need.

So far we have used only four lines of Python to download the html code of the page http://jandan.net/ooxx/page-2397#comments into the variable html:

import urllib.request
#Download webpage
url = 'http://jandan.net/ooxx/page-2397#comments'
res = urllib.request.urlopen(url)
html = res.read()

2. Parse the address

Next, use beautifulsoup4 to parse the html. How do we find the html code corresponding to a particular picture? Right-click the page and choose Inspect: the left half of the screen shows the original page, and the right half shows the html code along with a set of functional buttons.
In the code for a picture, the part src="//wx2.sinaimg.cn/mw600/66b3de17gy1fdrf0wcuscj20p60zktad.jpg" is the picture's address; src means source. The style attribute after src controls its appearance and can be ignored. You can try this now: add http: in front of the src value and visit http://wx2.sinaimg.cn/mw600/66b3de17gy1fdrf0wcuscj20p60zktad.jpg, and you should see the original picture.

Attributes such as src and max-width work as key-value pairs, which matters for the method we use later to extract the image addresses. Looking at the code for the other pictures, you can see they all share the same format: each one is contained in an img tag.
soup = BeautifulSoup(html,'html.parser')

This line parses the html into a soup object, which is easy to operate on. For example, to extract only the content of the img tags:

result = soup.find_all('img')

This uses the find_all() method. print(result) shows that result is a list in which each element carries a src/picture-address key-value pair, along with other content we don't need.

Use the get() method to extract the address inside the double quotes, then add http: at the front:

links = []
for content in result:
    links.append('http:' + content.get('src'))

content.get('src') fetches the value stored under the key src in content, that is, the address inside the double quotes. links.append() is the usual way of adding an element to a list.

print(links) shows that every element of the list is now a complete image address. Open any of them in a browser and you will see the corresponding picture. YO! That means only the final step remains: download them!
The address extraction part is completed. The code is also quite concise, as follows:
#Parse web pages
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'html.parser')
result = soup.find_all('img')
links = []
for content in result:
    links.append('http:' + content.get('src'))
The last step is to visit the addresses in links one by one and download the pictures!
At the beginning of the file, add:

import os

First create a photo folder to store the downloaded pictures. The following code creates it in the directory where the .py file is located:
if not os.path.exists('photo'):
os.makedirs('photo')
We know that links is a list, so it is best to use a loop to download, name, and store the pictures one by one.
i = 0
for link in links:
    i += 1
    filename = 'photo\\' + 'photo' + str(i) + '.png'
    urllib.request.urlretrieve(link, filename)

i is the loop counter, and i += 1 advances it on each pass. filename names each picture; as its assignment shows, 'photo\\' places the file inside the photo folder, 'photo' + str(i) keeps the files in order (photo1, photo2, photo3, ... once the download finishes), and '.png' is the file extension. Joining strings with the + sign is common practice in Python.

urllib.request.urlretrieve(link, filename) visits the address in link, retrieves a copy of the picture, and saves it into filename. Note that urlretrieve creates and writes the file by itself, so there is no need to open() the file first; and if you ever do write image data manually, it must be opened in binary mode 'wb', not text mode 'w'.
After writing part 3, click Run! You will find the photo folder in the directory where the .py file is located, full of the pictures we downloaded~
The complete code is as follows:
import urllib.request
from bs4 import BeautifulSoup
import os
#Download webpage
url = 'http://jandan.net/ooxx/page-2397#comments'
res = urllib.request.urlopen(url)
html = res.read()
#Parsing web pages
soup = BeautifulSoup(html,'html.parser')
result = soup.find_all('img')
links = []
for content in result:
    links.append('http:' + content.get('src'))
#Download and store pictures
if not os.path.exists('photo'):
os.makedirs('photo')
i = 0
for link in links:
    i += 1
    filename = 'photo\\' + 'photo' + str(i) + '.png'
    urllib.request.urlretrieve(link, filename)
This small program is written in a procedural style, top to bottom, with no functions defined, which may be easier for newcomers to understand.
Girl picture links look like http://jandan.net/ooxx/page-2397#comments; only the middle number changes, ranging from 1 to 2XXX.
url = 'http://jandan.net/ooxx/page-'+str(i)+'#comments'
Just change the value of i to download in batches. However, some comments say that visiting this site too often may get your IP blocked. I haven't looked into this, so please test it yourself!