


Python crawler simulates logging into the Academic Affairs Office and saves data locally
I have only just started learning Python, and after seeing so many people writing crawlers I wanted to try one myself. After searching around, I found that a common first crawler project is to simulate a login, and, to raise the difficulty, to fetch data after logging in. However, there are very few Python 3.x demos of simulated login online to refer to, and I don't know much about HTML, so writing this first Python crawler was quite a struggle. The final result, though, was satisfying. Here is a write-up of the learning process.
Tools
System: Windows 7, 64-bit
Browser: Chrome
Python version: Python 3.5, 64-bit
IDE: JetBrains PyCharm (many people seem to use this)
My target is our school's Academic Affairs Office. The goal of this crawler is to fetch grades from the Academic Affairs Office and save them into an Excel file. The office's address is http://jwc.ecjtu.jx.cn/. Normally, every time we check grades we first open the Academic Affairs Office site, then click on grade query, enter the public account and password, and finally fill in the relevant information to get the grade table. The login here does not require a CAPTCHA, which saved me a lot of effort. So first open the login page of the grade query system and see how to simulate the login process; press F12 in Chrome to open the developer panel:
Developer Panel
The account and password for our school's grade query system are the public jwc, the pinyin abbreviation. Enter the user name and password, click Log in, and watch for the POST request:
Simulated login to the Academic Affairs Office. Straight to the code:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import requests

url = 'http://jwc.ecjtu.jx.cn/mis_o/login.php'
datas = {'user': 'jwc',
         'pass': 'jwc',
         'Submit': '%CC%E1%BD%BB'}
headers = {'Referer': 'http://jwc.ecjtu.jx.cn/mis_o/login.htm',
           'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36',
           'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
           'Accept-Language': 'zh-CN,zh;q=0.8'}

sessions = requests.session()
response = sessions.post(url, headers=headers, data=datas)
print(response.status_code)

Code output:
200

This means our simulated login succeeded. The Requests module is used here; if you don't know how to use it, you can check its Chinese documentation. Its tagline is "HTTP for Humans".
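Note that a 200 status code alone is a weak success signal: a wrong password can also return 200 along with an error page. A more robust check, sketched below, is to look for text that only appears after a successful login; the default marker 'main.php' is my assumption, not taken from the real site's response.

```python
def login_ok(response, marker='main.php'):
    """Heuristic login check: a 200 status plus a marker string that
    only the post-login page is expected to contain.
    The default marker 'main.php' is an assumption, not verified
    against the real site."""
    return response.status_code == 200 and marker in response.text
```

With this, `login_ok(response)` can replace the bare status-code print.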
Next comes fetching the grade table:

score_headers = {'Connection': 'keep-alive',
                 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) '
                               'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36',
                 'Content-Type': 'application/x-www-form-urlencoded',
                 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                 'Content-Length': '69',
                 'Host': 'jwc.ecjtu.jx.cn',
                 'Referer': 'http://jwc.ecjtu.jx.cn/mis_o/main.php',
                 'Upgrade-Insecure-Requests': '1',
                 'Accept-Language': 'zh-CN,zh;q=0.8'}
score_url = 'http://jwc.ecjtu.jx.cn/mis_o/query.php?start=' + str(pagenum) + \
            '&job=see&=&Name=&Course=&ClassID=&Term=&StuID=' + num
score_data = {'Name': '',
              'StuID': num,
              'Course': '',
              'Term': '',
              'ClassID': '',
              'Submit': '%B2%E9%D1%AF'}
score_response = sessions.post(score_url, data=score_data, headers=score_headers)
content = score_response.content
A note on the code above: score_url is not the address shown in the browser. To get the real address, right-click in Chrome, choose View page source, and find this line:

a href=query.php?start=1&job=see&=&Name=&Course=&ClassID=&Term=&StuID=xxxxxxx

This is the real address; clicking it is what leads to the real page. Because there is a lot of grade data, the results are paginated. start=1 means the first page; this parameter varies and must be passed in. The value after StuID= is the student ID we enter. With these we can splice together the URL:
score_url = 'http://jwc.ecjtu.jx.cn/mis_o/query.php?start=' + str(pagenum) + '&job=see&=&Name=&Course=&ClassID=&Term=&StuID=' + num
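As an aside, the same URL can be assembled with the standard library's urlencode, which handles parameter encoding. This is just a sketch; it drops the stray empty `&=&` pair seen in the page source, which the server may or may not require.

```python
from urllib.parse import urlencode

def build_score_url(num, pagenum):
    # Same endpoint and parameters as the hand-spliced score_url above
    base = 'http://jwc.ecjtu.jx.cn/mis_o/query.php'
    params = {'start': pagenum, 'job': 'see', 'Name': '', 'Course': '',
              'ClassID': '', 'Term': '', 'StuID': num}
    return base + '?' + urlencode(params)
```

For example, `build_score_url('20130001010203', 0)` yields the first page's URL for that student ID.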
Again use a POST to send the data and get the response content:
score_response = sessions.post(score_url, data=score_data, headers=score_headers)
content = score_response.content
Beautiful Soup 4.2.0 is used here to parse the response content. Since what we want are the grades, go to the grade query page of the Academic Affairs Office; the grades appear in the page as a table:
Page source
Look at the table's source:
Term | Student ID | Name | Course | Course requirement | Credits | Score | Retake 1 | Retake 2 |
Take the first row as an example. Although I don't know HTML well, it is clear that <tr> represents a row, and <td> should represent each column within that row. That makes things easy: take each row, split out each column, and print the result:

from bs4 import BeautifulSoup

soup = BeautifulSoup(content, 'html.parser')
# Find every row
target = soup.findAll('tr')
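For what it's worth, the same row-and-column walk can also be done with only the standard library's html.parser, in case BeautifulSoup is not available. This is a sketch with a tiny inline table standing in for the real grade page:

```python
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect the text of every <td> cell, grouped by <tr> row."""
    def __init__(self):
        super().__init__()
        self.rows = []        # finished rows
        self._row = None      # cells of the row currently being read
        self._in_td = False

    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self._row = []
        elif tag == 'td':
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == 'tr' and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag == 'td':
            self._in_td = False

    def handle_data(self, data):
        if self._in_td and self._row is not None:
            self._row.append(data.strip())

p = TableParser()
p.feed('<table><tr><td>Term</td><td>Score</td></tr>'
       '<tr><td>2013.1</td><td>90</td></tr></table>')
print(p.rows)  # [['Term', 'Score'], ['2013.1', '90']]
```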
Be careful when splitting out the columns, because the table is spread over three pages with at most 30 records per page. Since only the grades of students who have already graduated are being collected, students with less data are not handled; by default the data collected is for fourth-year graduates. Two variables, i and j, represent the row and the column respectively:
# Note: the print calls here are just for verifying the result in PyCharm's console
i = 0
j = 0
for tag in target[1:]:
    tds = tag.findAll('td')
    # Start from the first column on every row
    j = 0
    # Term
    semester = str(tds[0].string)
    if semester == 'None':
        break
    else:
        print(semester.ljust(6) + '\t\t\t', end='')
    # Student ID
    studentid = tds[1].string
    print(studentid.ljust(14) + '\t\t\t', end='')
    j += 1
    # Name
    name = tds[2].string
    print(name.ljust(3) + '\t\t\t', end='')
    j += 1
    # Course
    course = tds[3].string
    print(course.ljust(20, ' ') + '\t\t\t', end='')
    j += 1
    # Course requirement
    requirments = tds[4].string
    print(requirments.ljust(10, ' ') + '\t\t', end='')
    j += 1
    # Credits
    scredit = tds[5].string
    print(scredit.ljust(2, ' ') + '\t\t', end='')
    j += 1
    # Score
    achievement = tds[6].string
    print(achievement.ljust(2) + '\t\t', end='')
    j += 1
    # Retake 1
    reexaminef = tds[7].string
    print(reexaminef.ljust(2) + '\t\t', end='')
    j += 1
    # Retake 2
    reexamines = tds[8].string
    print(reexamines.ljust(2) + '\t\t')
    j += 1
    i += 1
I checked many other people's blogs and they all use regular expressions to split the data. My own regex skills are not great; I tried and failed, so I had no choice but to use this approach. If anyone has a regex they have tested successfully, please let me know so I can learn from it.
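For completeness, here is one regular expression that does split out simple cells. It is only a sketch: it assumes each <td> holds plain text with no nested tags, which matches the table structure shown above.

```python
import re

# A sample row in the same shape as the grade table's rows
row_html = '<tr><td>2013.1</td><td>20130001</td><td>90</td></tr>'
# Non-greedy match of each cell's inner text; re.S lets cells span newlines
cells = re.findall(r'<td[^>]*>(.*?)</td>', row_html, re.S)
print(cells)  # ['2013.1', '20130001', '90']
```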
Saving the data to Excel
Since the structure of the page holding the grades is now clear, the data can simply be saved as each loop iteration parses it. xlwt is used here to write the data to Excel. Because the column width xlwt produces by default is too narrow and hurts readability, a helper method is added to control the style written to the Excel sheet:
import xlwt

file = xlwt.Workbook(encoding='utf-8')
table = file.add_sheet('achieve')

# Set the Excel style
def set_style(name, height, bold=False):
    style = xlwt.XFStyle()  # initialise the style
    font = xlwt.Font()      # create a font for the style
    font.name = name        # e.g. 'Times New Roman'
    font.bold = bold
    font.color_index = 4
    font.height = height
    style.font = font
    return style
Applied in the code:
for tag in target[1:]:
    tds = tag.findAll('td')
    j = 0
    # Term
    semester = str(tds[0].string)
    if semester == 'None':
        break
    else:
        print(semester.ljust(6) + '\t\t\t', end='')
        table.write(i, j, semester, set_style('Arial', 220))
    # Student ID
    studentid = tds[1].string
    print(studentid.ljust(14) + '\t\t\t', end='')
    j += 1
    table.write(i, j, studentid, set_style('Arial', 220))
    table.col(j).width = 256 * 16
    # Name
    name = tds[2].string
    print(name.ljust(3) + '\t\t\t', end='')
    j += 1
    table.write(i, j, name, set_style('Arial', 220))
    # Course
    course = tds[3].string
    print(course.ljust(20, ' ') + '\t\t\t', end='')
    j += 1
    table.write(i, j, course, set_style('Arial', 220))
    # Course requirement
    requirments = tds[4].string
    print(requirments.ljust(10, ' ') + '\t\t', end='')
    j += 1
    table.write(i, j, requirments, set_style('Arial', 220))
    # Credits
    scredit = tds[5].string
    print(scredit.ljust(2, ' ') + '\t\t', end='')
    j += 1
    table.write(i, j, scredit, set_style('Arial', 220))
    # Score
    achievement = tds[6].string
    print(achievement.ljust(2) + '\t\t', end='')
    j += 1
    table.write(i, j, achievement, set_style('Arial', 220))
    # Retake 1
    reexaminef = tds[7].string
    print(reexaminef.ljust(2) + '\t\t', end='')
    j += 1
    table.write(i, j, reexaminef, set_style('Arial', 220))
    # Retake 2
    reexamines = tds[8].string
    print(reexamines.ljust(2) + '\t\t')
    j += 1
    table.write(i, j, reexamines, set_style('Arial', 220))
    i += 1
file.save('demo.xls')
Finally, pull it all together into a single method:
# Fetch the grades
# num is the student ID entered; pagenum is the page index. There are 76
# records in total and 30 per page, so there are three pages.
def getScore(num, pagenum, i, j):
    score_headers = {'Connection': 'keep-alive',
                     'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) '
                                   'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36',
                     'Content-Type': 'application/x-www-form-urlencoded',
                     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                     'Content-Length': '69',
                     'Host': 'jwc.ecjtu.jx.cn',
                     'Referer': 'http://jwc.ecjtu.jx.cn/mis_o/main.php',
                     'Upgrade-Insecure-Requests': '1',
                     'Accept-Language': 'zh-CN,zh;q=0.8'}
    score_url = 'http://jwc.ecjtu.jx.cn/mis_o/query.php?start=' + str(pagenum) + \
                '&job=see&=&Name=&Course=&ClassID=&Term=&StuID=' + num
    score_data = {'Name': '',
                  'StuID': num,
                  'Course': '',
                  'Term': '',
                  'ClassID': '',
                  'Submit': '%B2%E9%D1%AF'}
    score_response = sessions.post(score_url, data=score_data, headers=score_headers)
    # Dump the raw response to a text file
    with open('text.txt', 'wb') as f:
        f.write(score_response.content)
    content = score_response.content
    soup = BeautifulSoup(content, 'html.parser')
    target = soup.findAll('tr')
    try:
        for tag in target[1:]:
            tds = tag.findAll('td')
            j = 0
            # Term
            semester = str(tds[0].string)
            if semester == 'None':
                break
            else:
                print(semester.ljust(6) + '\t\t\t', end='')
                table.write(i, j, semester, set_style('Arial', 220))
            # Student ID
            studentid = tds[1].string
            print(studentid.ljust(14) + '\t\t\t', end='')
            j += 1
            table.write(i, j, studentid, set_style('Arial', 220))
            table.col(j).width = 256 * 16
            # Name
            name = tds[2].string
            print(name.ljust(3) + '\t\t\t', end='')
            j += 1
            table.write(i, j, name, set_style('Arial', 220))
            # Course
            course = tds[3].string
            print(course.ljust(20, ' ') + '\t\t\t', end='')
            j += 1
            table.write(i, j, course, set_style('Arial', 220))
            # Course requirement
            requirments = tds[4].string
            print(requirments.ljust(10, ' ') + '\t\t', end='')
            j += 1
            table.write(i, j, requirments, set_style('Arial', 220))
            # Credits
            scredit = tds[5].string
            print(scredit.ljust(2, ' ') + '\t\t', end='')
            j += 1
            table.write(i, j, scredit, set_style('Arial', 220))
            # Score
            achievement = tds[6].string
            print(achievement.ljust(2) + '\t\t', end='')
            j += 1
            table.write(i, j, achievement, set_style('Arial', 220))
            # Retake 1
            reexaminef = tds[7].string
            print(reexaminef.ljust(2) + '\t\t', end='')
            j += 1
            table.write(i, j, reexaminef, set_style('Arial', 220))
            # Retake 2
            reexamines = tds[8].string
            print(reexamines.ljust(2) + '\t\t')
            j += 1
            table.write(i, j, reexamines, set_style('Arial', 220))
            i += 1
    except:
        print('Hit a small bug')
    file.save('demo.xls')
Add a check after the simulated login:
import re

# Check that we are logged in and that the student ID looks valid
def isLogin(num):
    return_code = response.status_code
    if return_code == 200:
        if re.match(r"^\d{14}$", num):
            print('Please wait')
            return True
        else:
            print('Please enter a valid student ID')
            return False
    else:
        return False
Finally, call it in main like this:

if __name__ == '__main__':
    num = input('Please enter your student ID: ')
    if isLogin(num):
        getScore(num, pagenum=0, i=0, j=0)
        getScore(num, pagenum=1, i=31, j=0)
        getScore(num, pagenum=2, i=62, j=0)
Run the program in PyCharm with the Alt+Shift+X shortcut:
Console output
The console prints output like the following (only part is shown here; don't tease me about the misalignment, I did use formatted output and it still isn't quite right, but at least the results came out, and our goal is to write to Excel anyway, isn't it?):
Console output
Then go to the program's root directory and check whether a file named demo.xls has been generated. My program lives on the desktop, so I look there:
Desktop icon
Open it to see whether the data was fetched successfully:
Final result
And with that, the job is done.
The above is the detailed content of Python crawler simulates logging into the Academic Affairs Office and saves data locally. For more information, please follow other related articles on the PHP Chinese website!
