
python - Scrapy simulated login keeps hitting a 404

I'm using Python with Scrapy to simulate logging in to a website, but I keep getting a 404. Any pointers would be appreciated!

Code

# -*- coding: utf-8 -*-

import scrapy
from scrapy.http import Request, FormRequest
from scrapy.selector import Selector


class StackSpiderSpider(scrapy.Spider):
    name = "stack_spider"
    start_urls = ['https://stackoverflow.com/']

    headers = {
        "host": "cdn.sstatic.net",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Encoding": "gzip, deflate, br",
        "Accept-Language": "en-US,en;q=0.5",
        "Connection": "keep-alive",
        "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:54.0) Gecko/20100101 Firefox/54.0"
    }

    # Override the spider's start_requests to issue a custom request;
    # when it succeeds, the callback is invoked.
    def start_requests(self):
        return [Request("https://stackoverflow.com/users/login",
                        meta={
                            # 'dont_redirect': True,
                            # 'handle_httpstatus_list': [302],
                            'cookiejar': 1},  # meta added here
                        callback=self.post_login)]

    # FormRequest
    def post_login(self, response):
        # Extract the hidden fkey/ssrc fields from the login page,
        # needed for the form submission to be accepted.
        fkey = Selector(response).xpath('//input[@name="fkey"]/@value').extract()[0]
        ssrc = Selector(response).xpath('//input[@name="ssrc"]/@value').extract()[0]
        print(fkey)
        print(ssrc)
        # FormRequest.from_response is a Scrapy helper for POSTing a form.
        # After a successful login, the after_login callback is invoked.
        return [FormRequest.from_response(response,
                        meta={
                            # 'dont_redirect': True,
                            # 'handle_httpstatus_list': [302],
                            'cookiejar': response.meta['cookiejar']},  # note how the cookiejar is carried over
                        headers=self.headers,
                        formdata={
                            "fkey": fkey,
                            "ssrc": ssrc,
                            "email": "1045608243@qq.com",
                            "password": "12345",
                            "oauth_version": "",
                            "oauth_server": "",
                            "openid_username": "",
                            "openid_identifier": ""
                        },
                        callback=self.after_login,
                        dont_filter=True)]

    def after_login(self, response):
        filename = "1.html"
        with open(filename, 'wb') as fp:
            fp.write(response.body)
        # print(response.body)

Debug output
2017-04-18 11:19:23 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: text5)
2017-04-18 11:19:23 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'text5.spiders', 'SPIDER_MODULES': ['text5.spiders'], 'BOT_NAME': 'text5'}
2017-04-18 11:19:23 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-04-18 11:19:24 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-04-18 11:19:24 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-04-18 11:19:24 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-04-18 11:19:24 [scrapy.core.engine] INFO: Spider opened
2017-04-18 11:19:24 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-18 11:19:24 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-04-18 11:19:24 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/users/login> (referer: None)
1145f3f2e28e56c298bc28a1a735254b
2017-04-18 11:19:25 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://stackoverflow.com/search?q=&ssrc=&openid_username=&oauth_server=&oauth_version=&fkey=1145f3f2e28e56c298bc28a1a735254b&password=wanglihong1993&email=1067863906@qq.com&openid_identifier=> (referer: https://stackoverflow.com/use...
2017-04-18 11:19:25 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response
<404 https://stackoverflow.com/sea...auth_version=&fkey=1145f3f2e28e56c298bc28a1a735254b&password=wanglihong1993&email=1067863906@qq.com&open...>
2017-04-18 11:19:25 [scrapy.core.engine] INFO: Closing spider (finished)
2017-04-18 11:19:25 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 881,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 12631,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 4, 18, 3, 19, 25, 143000),
 'log_count/DEBUG': 3,
 'log_count/INFO': 8,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2017, 4, 18, 3, 19, 24, 146000)}
2017-04-18 11:19:25 [scrapy.core.engine] INFO: Spider closed (finished)
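Reading the crawl log: the 404 response is a GET to /search whose query string carries all of the login fields, which suggests that `FormRequest.from_response` picked the page's first `<form>` (the GET search box) rather than the login form. A minimal stdlib sketch of that form-selection logic follows; the HTML snippet, form ids, and credentials below are made-up stand-ins for illustration, not Stack Overflow's real markup:

```python
# Sketch: why submitting "the form on the page" can send login fields to /search.
# FormRequest.from_response defaults to the first form unless told otherwise.
from html.parser import HTMLParser
from urllib.parse import urlencode

PAGE = """
<form id="search" method="get" action="/search">
  <input name="q" value="">
</form>
<form id="login-form" method="post" action="/users/login">
  <input type="hidden" name="fkey" value="abc123">
  <input type="hidden" name="ssrc" value="head">
</form>
"""

class FormCollector(HTMLParser):
    """Collect each form's id, method, action, and its input name/value pairs."""
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.forms.append({"id": a.get("id"),
                               "method": a.get("method", "get").upper(),
                               "action": a.get("action"),
                               "data": {}})
        elif tag == "input" and self.forms and "name" in a:
            self.forms[-1]["data"][a["name"]] = a.get("value", "")

parser = FormCollector()
parser.feed(PAGE)

first = parser.forms[0]  # what a default from_response call would pick
login = next(f for f in parser.forms if f["id"] == "login-form")
login["data"].update({"email": "user@example.com", "password": "secret"})

print(first["method"], first["action"])   # -> GET /search  (the request that 404s)
print(login["method"], login["action"])   # -> POST /users/login
print(urlencode(login["data"]))
```

In the spider itself the equivalent fix would be to point `from_response` at the login form explicitly, e.g. with `formxpath='//form[@id="login-form"]'` (or the `formnumber` argument), assuming the login form carries such an id.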

黄舟 · 2751 days ago · 1049

All replies (1)

  • PHPz · 2017-05-18 11:03:22

    Mate, you just leaked your password.
