


How to use Python's Requests package to implement simulated login
This article walks through using Python's Requests package to simulate a login. It should be a useful reference for anyone automating authenticated requests.
A while back I enjoyed using Python to scrape pages for fun, but those scripts mostly just issued GET requests and filtered the results with regular expressions.
Today I tried simulating a login to my personal website and found it fairly straightforward. Reading this article requires a basic understanding of the HTTP protocol and HTTP sessions.
Note: since the login target is my personal website, the code below uses my site's URLs and my own account credentials.
Website Analysis
The essential first step for any crawler is analyzing the target website. Here we use Google Chrome's developer tools.
Watching the network panel while logging in, we catch a request like this.
The upper part is the request header, and the lower part holds the parameters the request passes. As the capture shows, the page submits three parameters through the form: _csrf, username, and password.
The _csrf token guards against cross-site request forgery (CSRF). The principle is simple: for every page it serves, the server generates an encrypted string and places it in a hidden input field of the form. When the form is submitted, that string is sent back along with the other fields, so the server can verify the request came from the page it served to the same user.
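As an illustration, such a token can be pulled out of the page HTML with a regular expression, much as the article's code does later against a meta tag. The tag layout below mirrors that meta tag, but the token value is a made-up example:

```python
import re

# Illustrative HTML; the token value "abc123XYZ" is invented for demonstration.
html = '<html><head><meta name="csrf-token" content="abc123XYZ"></head></html>'

# Non-greedy match so the capture stops at the first closing quote.
match = re.search(r'<meta name="csrf-token" content="(.+?)">', html)
csrf = match.group(1) if match else None
print(csrf)  # abc123XYZ
```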
So the code logic is clear: first request the login page, then parse it to extract the csrf string, and finally post that string together with the username and password to the server.
The first version of the code
#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-
import requests
import re

# Request headers
headers = {
    'Host': "localhost",
    'Accept-Language': "zh-CN,zh;q=0.8",
    'Accept-Encoding': "gzip, deflate",
    'Content-Type': "application/x-www-form-urlencoded",
    'Connection': "keep-alive",
    'Referer': "http://localhost/login",
    'User-Agent': "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36"
}

# Login method
def login(url, csrf):
    data = {
        "_csrf": csrf,
        "username": "xiedj",
        "password": "***"
    }
    response = requests.post(url, data=data, headers=headers)
    return response.content

# First request: fetch the csrf value
def get_login_web(url):
    page = requests.get('http://localhost/login')
    reg = r'<meta name="csrf-token" content="(.+)">'
    csrf = re.findall(reg, page.content)[0]
    login_page = login(url, csrf)
    print login_page

if __name__ == "__main__":
    url = "http://localhost/login/checklogin"
    get_login_web(url)
The code looks fine, yet running it produced an error. On inspection, the cause was that the csrf verification failed!
After repeatedly confirming that the csrf value obtained and the one posted at login matched, I realized what the problem was.
If you don't yet see the cause of the error, pause and consider this question: "How does the server know that the first request (which fetched the csrf) and the second POST login request come from the same user?"
At this point it should be clear: to log in successfully, we need to make the server believe both requests come from the same user. That is what HTTP sessions are for (if you are unfamiliar with them, you can look them up; a brief introduction follows).
HTTP is a stateless protocol; sessions were introduced to layer state on top of it. Briefly: when a user first requests a web service, the server generates a session to hold the user's information and returns the session ID to the client in a cookie. On every subsequent request, the browser sends that cookie back, so the server can tell that multiple requests belong to the same user.
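To make that cookie round-trip concrete, here is a small sketch (no network needed) showing that a Requests session attaches its stored cookies to every request it prepares. The cookie name PHPSESSID and its value are invented for the example:

```python
import requests

s = requests.Session()

# Pretend the server's first response set a session-ID cookie.
s.cookies.set('PHPSESSID', 'deadbeef123')

# Any request prepared on this session carries that cookie automatically.
req = requests.Request('GET', 'http://localhost/login')
prepared = s.prepare_request(req)
print(prepared.headers['Cookie'])  # PHPSESSID=deadbeef123
```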
So our code needs to capture this session ID on the first request and send it along with the second. The great thing about Requests is that a simple requests.Session() gives you a session object that does this for you.
The second version of the code
#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-
import requests
import re

# Request headers
headers = {
    'Host': "localhost",
    'Accept-Language': "zh-CN,zh;q=0.8",
    'Accept-Encoding': "gzip, deflate",
    'Content-Type': "application/x-www-form-urlencoded",
    'Connection': "keep-alive",
    'Referer': "http://localhost/login",
    'User-Agent': "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36"
}

# Login method
def login(url, csrf, r_session):
    data = {
        "_csrf": csrf,
        "username": "xiedj",
        "password": "***"
    }
    response = r_session.post(url, data=data, headers=headers)
    return response.content

# First request: fetch the csrf value
def get_login_web(url):
    r_session = requests.Session()
    page = r_session.get('http://localhost/login')
    reg = r'<meta name="csrf-token" content="(.+)">'
    csrf = re.findall(reg, page.content)[0]
    login_page = login(url, csrf, r_session)
    print login_page

if __name__ == "__main__":
    url = "http://localhost/login/checklogin"
    get_login_web(url)
The page after successful login
As the code shows, once the session object is created with requests.Session(), the second request automatically carries the session ID from the first.
The above is the detailed content of How to use Python's Requests package to implement simulated login. For more information, please follow other related articles on the PHP Chinese website!
