Web crawler - Python image scraper fails with <urlopen error no host given>.

A Python scraper using urllib throws an error after fetching twenty-odd images.
Python 3.6, running on Windows.

urllib.error.URLError:<urlopen error no host given>
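For reference, urllib raises this exact error whenever the URL it is asked to open has a scheme but no host component. That is what 'http:' + src produces when a page's <img> src is path-relative (e.g. '/images/a.jpg') rather than protocol-relative (e.g. '//img.example.com/a.jpg'), so a single stray relative src is enough to trigger it. A minimal reproduction, with a made-up path:

import urllib.request
import urllib.error

try:
    # 'http:/images/a.jpg' parses with scheme 'http' but an empty host
    urllib.request.urlopen('http:/images/a.jpg')
except urllib.error.URLError as e:
    print(e)  # <urlopen error no host given>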

Code below:

#!/usr/bin/python
# -*- coding:utf-8 -*-
import urllib.request
import requests
from bs4 import BeautifulSoup
import socket
socket.setdefaulttimeout(300)
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"}

# URL pool
url1 = 'http://jandan.net/ooxx/page-'
urls = []
src_list = []
for i in range(2000, 2020):
    urls.append(url1 + str(i))
print(urls)
for url in urls:  # take each page URL from the pool
    r = requests.get(url, headers=headers)  # fetch the page
    html = BeautifulSoup(r.text, 'html.parser')  # parse the markup
    for link in html.find_all('img'):
        src_list.append('http:' + link.get('src'))

src_list.pop()  # drop the last collected src
print(src_list)

tmp = 1
for src in src_list:
    urllib.request.urlretrieve(src, r'C:\Users\姜梦天\Desktop\spider\img\%s.jpg' % tmp)
    tmp += 1

print('Done downloading')


.....tried to fix it myself with no luck...
Could anyone help me debug this?
黄舟 · 2872 days ago · 1214 views

All replies (2)

  • PHP中文网 2017-04-18 10:26:51

    雷雷
  • 伊谢尔伦 2017-04-18 10:26:51

    ......improved the code and wrapped the risky calls in try/except. It still throws errors......but they no longer stop the image downloads........
    Code below:

    #!/usr/bin/python
    # -*- coding:utf-8 -*-
    import urllib.request
    import requests
    from bs4 import BeautifulSoup
    import socket
    socket.setdefaulttimeout(300)
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"}

    # URL pool
    url1 = 'http://jandan.net/ooxx/page-'
    urls = []
    src_list = []
    for i in range(1500, 2380):
        urls.append(url1 + str(i))
    print(urls)
    for url in urls:  # take each page URL from the pool
        r = requests.get(url, headers=headers)  # fetch the page
        html = BeautifulSoup(r.text, 'html.parser')
        for link in html.find_all('img'):
            try:
                # raises TypeError when the <img> has no src attribute
                src_list.append('http:' + link.get('src'))
            except Exception:
                print('bad image URL')

    src_list.pop()  # drop the last collected src
    print(src_list)

    tmp = 0
    for src in src_list:
        try:
            urllib.request.urlretrieve(src, r'C:\Users\姜梦天\Desktop\spider\img\%s.jpg' % tmp)
            tmp = tmp + 1
        except Exception:
            print('failed:', src)

    print('Done downloading')
    

    Now it downloads plenty of the pictures.
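
    The failures can also be avoided outright instead of swallowed: skip any <img> tag that has no src, and let urllib.parse.urljoin build absolute URLs, so a protocol-relative or page-relative src never turns into a host-less 'http:/...' URL. A rough sketch of that approach (the 'img' save directory is an assumption; adjust to taste):

    #!/usr/bin/python
    # -*- coding:utf-8 -*-
    import os
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"}
    os.makedirs('img', exist_ok=True)  # assumed save directory

    tmp = 0
    for i in range(1500, 2380):
        page = 'http://jandan.net/ooxx/page-' + str(i)
        r = requests.get(page, headers=headers, timeout=30)
        html = BeautifulSoup(r.text, 'html.parser')
        for link in html.find_all('img'):
            src = link.get('src')
            if not src:  # <img> without a src: skip instead of crashing
                continue
            # urljoin resolves '//host/a.jpg' and '/a.jpg' against the page URL
            img_url = urljoin(page, src)
            try:
                resp = requests.get(img_url, headers=headers, timeout=30)
                resp.raise_for_status()
                with open(os.path.join('img', '%s.jpg' % tmp), 'wb') as f:
                    f.write(resp.content)
                tmp += 1
            except requests.RequestException as e:
                print('failed:', img_url, e)

    print('Done downloading')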
