Anyone who has done web-crawling work knows how convenient Python's urllib2 is — a few lines of code are enough to fetch a site's source:
#coding=utf-8
import urllib
import urllib2
import re

url = "http://wetest.qq.com"
request = urllib2.Request(url)
page = urllib2.urlopen(url)
html = page.read()
print html
Then, with a bit of regex matching against the response body, you can pull out whatever you're after.
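As a minimal sketch of that parsing step — using a made-up response body in place of a real page, and extracting the title rather than anything specific to the article:

```python
import re

# A made-up response body, standing in for html = page.read()
html = '<html><head><title>WeTest</title></head><body></body></html>'

# A non-greedy group pulls out just the title text
match = re.search(r'<title>(.*?)</title>', html)
title = match.group(1) if match else None
print(title)  # -> WeTest
```

Real pages are messier than this, of course, which is why the patterns later in this article are anchored to specific attributes in the page markup.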
But inside our office and development networks, this approach fails for some external sites. For example, fetching http://tieba.baidu.com/p/2460150866 kept failing with error code 10060 (connection failed).
#coding=utf-8
import urllib
import urllib2
import re

url = "http://tieba.baidu.com/p/2460150866"
request = urllib2.Request(url)
page = urllib2.urlopen(url)
html = page.read()
print html
A screenshot of the error from that run is shown below:
To track down the cause, I went through the following checks:

1. Opening the URL in a browser works fine, so the site itself is reachable.
2. The same script runs fine on the company's staging network, so the script itself is not the problem.
These two checks pointed to the company's outbound-access policy as the culprit. So I looked up how to set a ProxyHandler for urllib2 and changed the code to:
#coding=utf-8
import urllib
import urllib2
import re

# The proxy address and port:
proxy_info = {'host': 'web-proxy.oa.com', 'port': 8080}

# Create a handler for the proxy
proxy_support = urllib2.ProxyHandler({"http": "http://%(host)s:%(port)d" % proxy_info})

# Create an opener which uses this handler:
opener = urllib2.build_opener(proxy_support)

# Install this opener as the default opener for urllib2:
urllib2.install_opener(opener)

url = "http://tieba.baidu.com/p/2460150866"
request = urllib2.Request(url)
page = urllib2.urlopen(url)
html = page.read()
print html
Run it again, and the HTML page comes back as expected. Done? Not yet! I wanted to grab all the nice pictures in the thread and save them locally:
#coding=utf-8
import urllib
import urllib2
import re

# The proxy address and port:
proxy_info = {'host': 'web-proxy.oa.com', 'port': 8080}

# Create a handler for the proxy
proxy_support = urllib2.ProxyHandler({"http": "http://%(host)s:%(port)d" % proxy_info})

# Create an opener which uses this handler:
opener = urllib2.build_opener(proxy_support)

# Install this opener as the default opener for urllib2:
urllib2.install_opener(opener)

url = "http://tieba.baidu.com/p/2460150866"
request = urllib2.Request(url)
page = urllib2.urlopen(url)
html = page.read()

# Regex-match the image URLs
reg = r'src="(.+?\.jpg)" pic_ext'
imgre = re.compile(reg)
imglist = re.findall(imgre, html)
print 'start download pic'
x = 0
for imgurl in imglist:
    # Assumes a local 'pic' directory already exists
    urllib.urlretrieve(imgurl, 'pic\\%s.jpg' % x)
    x = x + 1
Run it again and... still the 10060 error! I had set up urllib2's proxy, so why? The catch is that urllib2.install_opener only configures urllib2.urlopen; urllib.urlretrieve lives in the separate urllib module, which builds its own opener and never sees the proxy.
So I kept at it — those pictures had to be fetched somehow. Since the regex already yields each image's URL, why not call urllib2.urlopen on each URL directly (which now goes through the proxy), read the image's binary data from the response, and write it to a local file? That led to the following code:
#coding=utf-8
import urllib
import urllib2
import re

# The proxy address and port:
proxy_info = {'host': 'web-proxy.oa.com', 'port': 8080}

# Create a handler for the proxy
proxy_support = urllib2.ProxyHandler({"http": "http://%(host)s:%(port)d" % proxy_info})

# Create an opener which uses this handler:
opener = urllib2.build_opener(proxy_support)

# Install this opener as the default opener for urllib2:
urllib2.install_opener(opener)

url = "http://tieba.baidu.com/p/2460150866"
request = urllib2.Request(url)
page = urllib2.urlopen(url)
html = page.read()

# Regex-match the image URLs
reg = r'src="(.+?\.jpg)" pic_ext'
imgre = re.compile(reg)
imglist = re.findall(imgre, html)
x = 0
print 'start'
for imgurl in imglist:
    print imgurl
    # Fetch through the proxied urllib2 opener, then write the bytes to disk
    resp = urllib2.urlopen(imgurl)
    respHtml = resp.read()
    picFile = open('%s.jpg' % x, "wb")
    picFile.write(respHtml)
    picFile.close()
    x = x + 1
print 'done'
Run it once more: the image URLs print as expected, and the pictures are saved to disk:

That completes what I originally set out to do. Hopefully this write-up is useful to other folks as well.
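For readers on Python 3, where urllib and urllib2 were merged into urllib.request, a rough equivalent of the proxy setup above might look like this (the proxy host is the same internal example used throughout this article and would need replacing in your own environment):

```python
import urllib.request

# Same internal proxy example as above -- replace with your own
proxy_info = {'host': 'web-proxy.oa.com', 'port': 8080}

# Build and install an opener that routes HTTP through the proxy
proxy_support = urllib.request.ProxyHandler(
    {'http': 'http://%(host)s:%(port)d' % proxy_info})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)

# After install_opener, urllib.request.urlopen() goes through the proxy:
# html = urllib.request.urlopen('http://tieba.baidu.com/p/2460150866').read()
```

Note that in Python 3 there is no separate urlretrieve-style pitfall to dodge: urllib.request.urlretrieve and urllib.request.urlopen share the same installed opener, so one install_opener call covers both.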
