
1. XPath

1.1 Using XPath

  • Install the XPath plug-in in Chrome in advance; press Ctrl+Shift+X and a small black query box will appear on the page

  • Install the lxml library: pip install lxml -i https://pypi.douban.com/simple

  • Import lxml.etree: from lxml import etree

  • etree.parse() parses a local file: html_tree = etree.parse('XX.html')

  • etree.HTML() parses a server response: html_tree = etree.HTML(response.read().decode('utf-8'))

  • Query nodes with html_tree.xpath('xpath expression') (see the sketch below)
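
Putting the steps together, here is a minimal sketch of both parsing modes (the file name demo.html and the Baidu URL are placeholders chosen for illustration):

import urllib.request
from lxml import etree

# Mode 1: parse a local file; passing an HTMLParser keeps lxml tolerant of real-world HTML
local_tree = etree.parse('demo.html', etree.HTMLParser())

# Mode 2: parse the decoded body of a server response
response = urllib.request.urlopen('http://www.baidu.com')
remote_tree = etree.HTML(response.read().decode('utf-8'))

# Both trees are queried the same way
print(local_tree.xpath('//title/text()'))
print(remote_tree.xpath('//title/text()'))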

1.2 Basic XPath syntax

1. Path query

  • //: finds all descendant nodes, regardless of hierarchy

  • /: finds direct child nodes
2. Predicate query

//div[@id] 
//div[@id="maincontent"]

3. Attribute query

//@class

4. Fuzzy query

//div[contains(@id, "he")] 
//div[starts-with(@id, "he")]

5. Content query

//div/h2/text()

6. Logical operation

//div[@id="head" and @class="s_down"] 
//title | //price
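
These expressions are easy to try against an inline snippet; a quick sketch (the HTML string below is invented for illustration):

from lxml import etree

# A made-up fragment that exercises the expressions above
tree = etree.HTML('<div id="head" class="s_down"><h2>hello</h2></div>')

print(tree.xpath('//div[@id="head"]'))                      # predicate query
print(tree.xpath('//@class'))                               # attribute query -> ['s_down']
print(tree.xpath('//div[starts-with(@id, "he")]'))          # fuzzy query
print(tree.xpath('//div/h2/text()'))                        # content query -> ['hello']
print(tree.xpath('//div[@id="head" and @class="s_down"]'))  # logical operation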

1.3 Example

xpath.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8"/>
    <title>Title</title>
</head>
<body>
    <ul>
        <li id="l1" class="class1">北京</li>
        <li id="l2" class="class2">上海</li>
        <li id="d1">广州</li>
        <li>深圳</li>
    </ul>
</body>
</html>
from lxml import etree

# XPath parsing
# Local file:                    etree.parse()
# Data from a server response:   response.read().decode('utf-8')  ->  etree.HTML()


tree = etree.parse('xpath.html')

# Find the li elements under body/ul
li_list = tree.xpath('//body/ul/li')
print(len(li_list))  # 4

# Get the text content of the tags
li_list = tree.xpath('//body/ul/li/text()')
print(li_list)  # ['北京', '上海', '广州', '深圳']

# Get the li tags that have an id attribute
li_list = tree.xpath('//ul/li[@id]')
print(len(li_list))  # 3

# Get the content of the tag whose id is l1
li_list = tree.xpath('//ul/li[@id="l1"]/text()')
print(li_list)  # ['北京']

# Get the class attribute of the tag whose id is l1
c1 = tree.xpath('//ul/li[@id="l1"]/@class')
print(c1)  # ['class1']

# Get the tags whose id contains "l"
li_list = tree.xpath('//ul/li[contains(@id, "l")]/text()')
print(li_list)  # ['北京', '上海']
# Get the tags whose id starts with "d"
li_list = tree.xpath('//ul/li[starts-with(@id,"d")]/text()')
print(li_list)  # ['广州']
# Get the tag whose id is l2 and whose class is class2
li_list = tree.xpath('//ul/li[@id="l2" and @class="class2"]/text()')
print(li_list)  # ['上海']
# Get the tags whose id is l2 or d1
li_list = tree.xpath('//ul/li[@id="l2"]/text() | //ul/li[@id="d1"]/text()')
print(li_list)  # ['上海', '广州']

1.4 Crawling the value of the Baidu search button

import urllib.request
from lxml import etree

url = 'http://www.baidu.com'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36'
}
request = urllib.request.Request(url=url, headers=headers)
response = urllib.request.urlopen(request)
content = response.read().decode('utf-8')
tree = etree.HTML(content)
# The search button is the input whose id is "su"; its value attribute holds the button text
value = tree.xpath('//input[@id="su"]/@value')
print(value)


1.5 Crawling images from the webmaster materials site (sc.chinaz.com)

# Requirement: download the images from the first ten pages
# Page 1:  https://sc.chinaz.com/tupian/qinglvtupian.html
# Page n:  https://sc.chinaz.com/tupian/qinglvtupian_n.html
import urllib.request
from lxml import etree

def create_request(page):
    if (page == 1):
        url = 'https://sc.chinaz.com/tupian/qinglvtupian.html'
    else:
        url = 'https://sc.chinaz.com/tupian/qinglvtupian_' + str(page) + '.html'
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36',
    }
    request = urllib.request.Request(url=url, headers=headers)
    return request

def get_content(request):
    response = urllib.request.urlopen(request)
    content = response.read().decode('utf-8')
    return content

def down_load(content):
    # Download the images
    # urllib.request.urlretrieve('image URL', 'file name')
    tree = etree.HTML(content)
    name_list = tree.xpath('//div[@id="container"]//a/img/@alt')
    # Image sites usually lazy-load pictures, so the real URL sits in src2 rather than src
    src_list = tree.xpath('//div[@id="container"]//a/img/@src2')
    print(src_list)
    for i in range(len(name_list)):
        name = name_list[i]
        src = src_list[i]
        url = 'https:' + src
        # The ./loveImg/ directory must already exist
        urllib.request.urlretrieve(url=url, filename='./loveImg/' + name + '.jpg')

if __name__ == '__main__':
    start_page = int(input('Enter the start page: '))
    end_page = int(input('Enter the end page: '))

    for page in range(start_page, end_page + 1):
        # (1) Build the request object
        request = create_request(page)
        # (2) Fetch the page source
        content = get_content(request)
        # (3) Download the images
        down_load(content)

2. JsonPath

2.1 pip installation

pip install jsonpath

2.2 Use of jsonpath

obj = json.load(open('file.json', 'r', encoding='utf-8'))
ret = jsonpath.jsonpath(obj, 'JSONPath expression')
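
One behavior worth knowing (this describes the jsonpath package from PyPI, as installed above): when nothing matches, jsonpath.jsonpath() returns False rather than an empty list, so guard the result before iterating:

import json
import jsonpath

obj = json.load(open('jsonpath.json', 'r', encoding='utf-8'))
ret = jsonpath.jsonpath(obj, '$..no_such_key')  # a hypothetical key that matches nothing
if ret:
    print(ret)
else:
    print('no match')  # ret is False here, not []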

Comparison of JSONPath syntax elements and corresponding XPath elements:

XPath   JSONPath            Description
/       $                   root object/element
.       @                   current object/element
/       . or []             child operator
..      n/a                 parent operator
//      ..                  recursive descent
*       *                   wildcard (all elements)
@       n/a                 attribute access
[]      []                  subscript operator
|       [,]                 union operator
n/a     [start:end:step]    array slice
[]      ?()                 filter expression
n/a     ()                  script expression

Example:

jsonpath.json

{ "store": {
    "book": [
      { "category": "修真",
        "author": "六道",
        "title": "坏蛋是怎样练成的",
        "price": 8.95
      },
      { "category": "修真",
        "author": "天蚕土豆",
        "title": "斗破苍穹",
        "price": 12.99
      },
      { "category": "修真",
        "author": "唐家三少",
        "title": "斗罗大陆",
        "isbn": "0-553-21311-3",
        "price": 8.99
      },
      { "category": "修真",
        "author": "南派三叔",
        "title": "星辰变",
        "isbn": "0-395-19395-8",
        "price": 22.99
      }
    ],
    "bicycle": {
      "author": "老马",
      "color": "黑色",
      "price": 19.95
    }
  }
}
import json
import jsonpath

obj = json.load(open('jsonpath.json', 'r', encoding='utf-8'))

# The authors of all books in the store
author_list = jsonpath.jsonpath(obj, '$.store.book[*].author')
print(author_list)  # ['六道', '天蚕土豆', '唐家三少', '南派三叔']

# All authors
author_list = jsonpath.jsonpath(obj, '$..author')
print(author_list)  # ['六道', '天蚕土豆', '唐家三少', '南派三叔', '老马']

# All elements under store
tag_list = jsonpath.jsonpath(obj, '$.store.*')
print(tag_list)  # [[...the four book dicts...], {'author': '老马', 'color': '黑色', 'price': 19.95}]

# The price of everything in the store
price_list = jsonpath.jsonpath(obj, '$.store..price')
print(price_list)  # [8.95, 12.99, 8.99, 22.99, 19.95]

# The third book
book = jsonpath.jsonpath(obj, '$..book[2]')
print(book)  # [{'category': '修真', 'author': '唐家三少', 'title': '斗罗大陆', 'isbn': '0-553-21311-3', 'price': 8.99}]

# The last book
book = jsonpath.jsonpath(obj, '$..book[(@.length-1)]')
print(book)  # [{'category': '修真', 'author': '南派三叔', 'title': '星辰变', 'isbn': '0-395-19395-8', 'price': 22.99}]
# The first two books
book_list = jsonpath.jsonpath(obj, '$..book[0,1]')
# book_list = jsonpath.jsonpath(obj, '$..book[:2]')
print(book_list)  # [{'category': '修真', 'author': '六道', 'title': '坏蛋是怎样练成的', 'price': 8.95}, {'category': '修真', 'author': '天蚕土豆', 'title': '斗破苍穹', 'price': 12.99}]

# Conditional filters need a ? in front of the parentheses
# Filter out all books that contain an isbn
book_list = jsonpath.jsonpath(obj, '$..book[?(@.isbn)]')
print(book_list)  # [{'category': '修真', 'author': '唐家三少', 'title': '斗罗大陆', 'isbn': '0-553-21311-3', 'price': 8.99}, {'category': '修真', 'author': '南派三叔', 'title': '星辰变', 'isbn': '0-395-19395-8', 'price': 22.99}]
# Which books cost more than 10
book_list = jsonpath.jsonpath(obj, '$..book[?(@.price>10)]')
print(book_list)  # [{'category': '修真', 'author': '天蚕土豆', 'title': '斗破苍穹', 'price': 12.99}, {'category': '修真', 'author': '南派三叔', 'title': '星辰变', 'isbn': '0-395-19395-8', 'price': 22.99}]

3. BeautifulSoup

3.1 Basic introduction

1. Installation

pip install bs4

2. Import

from bs4 import BeautifulSoup

3. Create an object

  • From a server response: soup = BeautifulSoup(response.read().decode(), 'lxml')

  • From a local file: soup = BeautifulSoup(open('1.html', encoding='utf-8'), 'lxml')

Note: the default encoding when opening files (on Chinese-locale Windows) is gbk, so specify encoding='utf-8' explicitly, as above.
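
A minimal end-to-end sketch of the response mode (the URL is a placeholder chosen for illustration):

import urllib.request
from bs4 import BeautifulSoup

# Fetch a page and hand the decoded body to BeautifulSoup
response = urllib.request.urlopen('http://www.baidu.com')
soup = BeautifulSoup(response.read().decode('utf-8'), 'lxml')
print(soup.title.get_text())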

3.2 Node positioning

1. Find a node by tag name
	soup.a  [Note] only finds the first a tag
		soup.a.name
		soup.a.attrs
2. Functions
	(1) .find (returns a single object)
		find('a'): finds only the first a tag
		find('a', title='name')
		find('a', class_='name')
	(2) .find_all (returns a list)
		find_all('a')  finds all a tags
		find_all(['a', 'span'])  returns all a and span tags
		find_all('a', limit=2)  finds only the first two a tags
	(3) .select (returns node objects matching a CSS selector) [recommended]
		1. element
			e.g. p
		2. .class
			e.g. .firstname
		3. #id
			e.g. #firstname
		4. Attribute selectors
			[attribute]
				e.g. li = soup.select('li[class]')
			[attribute=value]
				e.g. li = soup.select('li[class="hengheng1"]')
		5. Hierarchy selectors
			element element
				div p
			element>element
				div>p
			element,element
				div,p
					e.g. tags = soup.select('a,span')

3.3 Node information

(1) Getting node content: useful when tags nest other tags
	obj.string
	obj.get_text()  [recommended]
(2) Node attributes
	tag.name  gets the tag name
		e.g. tag = find('li')
			print(tag.name)
	tag.attrs  returns the attribute values as a dictionary
(3) Getting a node's attributes
	obj.attrs.get('title')  [most common]
	obj.get('title')
	obj['title']

3.4 Usage example

bs4.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
</head>
<body>

    <div>
        <ul>
            <li id="l1">张三</li>
            <li id="l2">李四</li>
            <li>王五</li>
            <a href="" id=" rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow" " class="a1">google</a>
            <span>嘿嘿嘿</span>
        </ul>
    </div>


    <a href="" title=" rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow" a2">百度</a>

    <div id="d1">
        <span>
            哈哈哈
        </span>
    </div>

    <p id="p1" class="p1">呵呵呵</p>
</body>
</html>
from bs4 import BeautifulSoup

# Demonstrate the basics of bs4 by parsing a local file
# The default file encoding (on Windows) is gbk, so specify the encoding when opening
soup = BeautifulSoup(open('bs4.html', encoding='utf-8'), 'lxml')

# Find a node by tag name
# Returns the first matching element
print(soup.a)  # <a class="a1" href="" id="">google</a>
# Get the tag's attributes and their values
print(soup.a.attrs)  # {'href': '', 'id': '', 'class': ['a1']}

# Some bs4 functions
# (1) find
# Returns the first matching element
print(soup.find('a'))  # <a class="a1" href="" id="">google</a>
# Find the tag whose title attribute matches
print(soup.find('a', title="a2"))  # <a href="" title="a2">百度</a>

# Find the tag whose class matches; note that class needs a trailing underscore
print(soup.find('a', class_="a1"))  # <a class="a1" href="" id="">google</a>

# (2) find_all: returns a list of all matching a tags
print(soup.find_all('a'))  # [<a class="a1" href="" id="">google</a>, <a href="" title="a2">百度</a>]

# To match several tag names at once, pass find_all a list
print(soup.find_all(['a', 'span']))  # [<a class="a1" href="" id="">google</a>, <span>嘿嘿嘿</span>, <a href="" title="a2">百度</a>, <span>哈哈哈</span>]

# limit restricts the result to the first n matches
print(soup.find_all('li', limit=2))  # [<li id="l1">张三</li>, <li id="l2">李四</li>]

# (3) select (recommended)
# select returns a list of every match
print(soup.select('a'))  # [<a class="a1" href="" id="">google</a>, <a href="" title="a2">百度</a>]

# A leading dot selects by class: the class selector
print(soup.select('.a1'))  # [<a class="a1" href="" id="">google</a>]

print(soup.select('#l1'))  # [<li id="l1">张三</li>]

# Attribute selectors: find tags by their attributes
# Find the li tags that have an id
print(soup.select('li[id]'))  # [<li id="l1">张三</li>, <li id="l2">李四</li>]

# Find the li tag whose id is l2
print(soup.select('li[id="l2"]'))  # [<li id="l2">李四</li>]

# Hierarchy selectors
# Descendant selector:
# finds the li elements anywhere under a div
print(soup.select('div li'))  # [<li id="l1">张三</li>, <li id="l2">李四</li>, <li>王五</li>]

# Child selector:
# direct children of a tag
# Note: bs4 accepts the > with or without surrounding spaces
print(soup.select('div > ul > li'))  # [<li id="l1">张三</li>, <li id="l2">李四</li>, <li>王五</li>]

# Find all a and li tags
print(soup.select('a,li'))  # [<li id="l1">张三</li>, <li id="l2">李四</li>, <li>王五</li>, <a class="a1" href="" id="">google</a>, <a href="" title="a2">百度</a>]

# Node information
# Getting node content
obj = soup.select('#d1')[0]
# If the tag contains only text, both string and get_text() work
# If it also contains nested tags, string returns None while get_text() still returns the text
# get_text() is generally the one to use
print(obj.string)  # None
print(obj.get_text())  # 哈哈哈

# Node attributes
obj = soup.select('#p1')[0]
# name is the tag name
print(obj.name)  # p
# attrs returns the attribute values as a dictionary
print(obj.attrs)  # {'id': 'p1', 'class': ['p1']}

# Getting a node's attributes
obj = soup.select('#p1')[0]
print(obj.attrs.get('class'))  # ['p1']
print(obj.get('class'))  # ['p1']
print(obj['class'])  # ['p1']

3.5 Parsing Starbucks product names

import urllib.request
from bs4 import BeautifulSoup

url = 'https://www.starbucks.com.cn/menu/'
response = urllib.request.urlopen(url)
content = response.read().decode('utf-8')

soup = BeautifulSoup(content, 'lxml')
# Equivalent XPath: //ul[@class="grid padded-3 product"]//strong/text()
# It usually helps to work out the expression first as XPath with the Chrome plug-in
name_list = soup.select('ul[class="grid padded-3 product"] strong')
for name in name_list:
    print(name.get_text())
