Obtain and download super embarrassing picture links in batches
Back when I used Windows I often visited wallpaper sites, QQ photo albums of pretty girls, or the occasional racy picture site to download images, and it always meant right-clicking and choosing "Save as". Run into a classic photo set and that kind of repetitive operation will kill any motivation to download. Later I used a Firefox plug-in, DownThemAll if I remember correctly (in any case it batch-downloads the links on a web page and can filter by format so you only grab images). Used together with Thunder it greatly improved download efficiency, although if a page had a lot of images you still spent quite a while afterwards filtering out and deleting useless files. Now I use Ubuntu: no Windows, no Thunder, and I haven't touched Firefox in years. So how do we batch-download nice pictures from web pages? I looked for a similar plug-in for Chrome, but unfortunately all I found was the IMG inspector extension. It works by defining a base URL with a placeholder, then regenerating and previewing links from a given step size and loop range. I have to say that is far too weak; even the URLs within a photo set are not necessarily regular, so this approach is neither advisable nor practical.
So I later thought about learning Chrome extension development and writing one myself, but I never mustered the motivation. I also didn't know whether a Chrome extension could really solve the download problem: would it be able to call a client-side download tool, or only the browser's built-in downloader? On top of that I would have to learn the more advanced parts of the Chrome API, and the development cost suddenly looked much higher, so I gave up. Instead I changed my approach and kept thinking, breaking the task into a few tricky questions and analyzing them one by one (environment: Ubuntu + Chrome/Firefox):
1) How to get the image addresses of the current page?
The easiest way is to run a script in the Chrome console or Firebug. I also considered server-side crawling tools such as SimpleHtmlDom, a powerful open-source framework (if you are familiar with jQuery, it lets you select tags on the server side in much the same way), but that would be somewhat less efficient and a bit more complex.
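The article doesn't show its script, so here is only a minimal sketch along the lines it describes, assuming jQuery is already available in the page (or injected first) and that plain img/a selectors are enough for the site in question:

```javascript
// Minimal sketch: paste into the Chrome console or Firebug on the target page.
// Assumes jQuery is available; if the page doesn't ship it, inject it first, e.g.:
//   var s = document.createElement('script');
//   s.src = 'https://code.jquery.com/jquery-1.7.2.min.js';
//   document.head.appendChild(s);

var urls = [];

// Bare <img> tags (usually the originals): take the src directly.
$('img').not('a > img').each(function () {
  urls.push(this.src);
});

// Thumbnails wrapped in a link: take the href of the parent <a> instead of the img src.
$('a > img').each(function () {
  var href = $(this).parent().attr('href');
  if (href) { urls.push(href); }
});

console.log(urls.join('\n'));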
2) How to determine whether the image size of the current page meets my "appetite"?
There are two cases to consider. Web page images generally fall into thumbnails and originals. A thumbnail usually carries a link to the original image, that is, the img tag is wrapped in an a tag, so what we want is the href of the a tag rather than the src of the img; an original is usually a bare img tag, and those can be filtered by the width and height of the Image object. To check a thumbnail's size, assign its src to a new Image object and then test that object's width and height. That said, I don't usually filter the originals that thumbnails point to, since those tend to be large anyway; what most needs filtering is precisely the img inside an a tag, because many logo and button images are small pictures wrapped in links.
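Continuing the sketch from above (the 100-pixel threshold and the variable names are arbitrary illustrations, not values from the article), the size filter might look roughly like this:

```javascript
// Sketch of the size filter; 100x100 is an arbitrary example threshold.
var MIN_SIZE = 100;
var keep = [];

// Linked thumbnails: many logos/buttons are small images wrapped in <a>, so probe the
// thumbnail's dimensions with a fresh Image object and keep the href only if it is big enough.
$('a > img').each(function () {
  var href = $(this).parent().attr('href');
  var probe = new Image();
  probe.onload = function () {
    // width/height are only meaningful once the image has loaded.
    if (probe.width >= MIN_SIZE && probe.height >= MIN_SIZE) {
      keep.push(href);
    }
  };
  probe.src = this.src; // test the thumbnail itself, not the (usually large) original
});

// Bare <img> tags can be filtered directly on their own dimensions.
$('img').not('a > img').each(function () {
  if (this.width >= MIN_SIZE && this.height >= MIN_SIZE) {
    keep.push(this.src);
  }
});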
3) How to download pictures?
The two steps above are done with a console script; the whole thing is fewer than 10 lines (including the code that loads jQuery), and with almost no effort you end up with the filtered image addresses of the current page. Unfortunately, that is as far as it goes: having the addresses is useless by itself. The really hard part is how to download all of these images to the local machine in one go. If I knew how to develop Chrome extensions and how Chrome can invoke system programs (actually, I'm not sure it can; if the browser's security restrictions are strict enough, it definitely can't), then I could hand everything to wget, a powerful download command, and the problem would be solved easily. Unfortunately I'm familiar with neither, but it doesn't matter; all roads lead to Rome, and there must be another way.
4) Flying over the Console
At this point our train of thought is stuck at the Chrome console: we have a pile of image links (well, only the links from the current window) but no way to download them. I still had some illusions about Chrome extension development, but no motivation to learn it, and I kept doubting whether a browser as strict about security as Chrome would even let JS interact with the client.
So I started thinking about the problem backwards: instead of downloading the files right away, why not store the links somewhere local where I could read them back later? That train of thought went to HTML5 localStorage and local databases, and I also considered the local database in Google Gears, but these turned out to be either too complicated or not feasible. Slowly my thoughts drifted in a simpler direction: jQuery, specifically $.getJSON(). Wouldn't it be enough to send the image addresses cross-domain to a local website and let it handle the download in the background? So I immediately used Code Igniter to set up a site: I added a single PHP controller with just one method, and again the code was under 10 lines. All it does is collect the posted image links and write them into a text file (urls.txt).
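The article doesn't reproduce the client-side call, so this is only a hedged sketch of the hand-off: the localhost URL and the image/save controller route are made-up placeholders, and the only assumption about the PHP side is what the article states, namely that it appends the received links to urls.txt.

```javascript
// Sketch: send the collected links to a local Code Igniter controller.
// "http://localhost/index.php/image/save" is a placeholder route;
// substitute whatever controller/method you actually created.
$.getJSON('http://localhost/index.php/image/save?callback=?', {
  links: keep.join('\n')   // the addresses gathered by the console script above
}, function (resp) {
  console.log('saved:', resp);
});
```

The callback=? part makes jQuery use JSONP, which is what lets the request cross domains; the success handler only fires if the PHP side echoes a JSONP response, but the request reaches the controller (and urls.txt gets written) either way. Since JSONP is a GET request, a very long list of links may need to be sent in batches.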
5) Invincible Downloader
This is almost the end. Someone might ask: so you still haven't actually downloaded any pictures? Heh, here comes the heavyweight: wget. Go into the website directory in a terminal and run wget -b -i urls.txt, and every image address listed line by line in the text file is downloaded automatically in the background.
PS: A few days ago, before I was familiar with this command, I experimented on an adult site with the wrong parameters and it silently downloaded 1.1 GB of images in the background; when I finally noticed, I force-killed the process. In short, this command is extremely powerful for grabbing website content. On Linux it is wickedly effective, and you could do bad things with it if you wanted to!
6) Can it be less awkward (囧)?
The whole workflow really is pretty awkward.
囧1: a) Deploy the local website → b) right-click to open the Chrome console or Firebug → c) copy the script → d) paste → e) press Enter → f) open a terminal → g) run wget. For a single page with many images (thumbnails, photo sets) this is quite workable. But if I open ten pages, every one of them needs steps b, d, and e. If the script were packaged as a Chrome extension embedded in the browser, at least two steps could be saved: after opening a page you would just click the extension icon, or have the script run automatically. That would greatly improve usability.
囧2: How to skip the wget step and have PHP perform the download directly in the background. That means calling Ubuntu system commands from PHP, which I am not familiar with and still need to study.
囧3: How to do away with deploying a website altogether. The only reason I need the website is that I have no other way to merge and store the image addresses from multiple pages in one place. I considered cookies, but the size limit is a problem: it is normal to have more than 100 image addresses (especially when they contain Chinese characters). How easy the stored data is to access afterwards also has to be considered.
I haven't worked these out yet; for now I am only looking at how to implement 囧2. Honestly, this exercise did not improve my technical skills much, but looking back over the whole process, my way of thinking about and approaching problems clearly improved. Everyone is welcome to discuss!