This script fetches board game data from the BoardGameGeek (BGG) API and stores it in a CSV file. The API responds in XML, and since there is no endpoint that returns data for multiple board games at once, the script requests one board game at a time by its ID, incrementing the ID after each request over a given range of IDs.
The full repository is available on my GitHub profile.
The following information is fetched and stored for each board game:
name, game ID, rating, weight, year published, minimum players, maximum players, minimum play time, maximum play time, minimum age, number of owners, categories, mechanics, designers, artists, and publishers.
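As a quick, hedged illustration of this one-request-per-ID approach, the sketch below fetches a single game from the same xmlapi2/thing endpoint used later in the script and prints the beginning of the raw XML response (the ID, headers, and print length are arbitrary choices for the example, not part of the script itself):

# Minimal sketch: fetch one board game by id and inspect the raw XML response.
# The endpoint and &stats=1 flag are the same ones used in the main script below.
import requests

single_id = 264882  # any valid board game id works here
url = f"https://boardgamegeek.com/xmlapi2/thing?id={single_id}&stats=1"
response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
print(response.status_code)   # 200 on success
print(response.text[:500])    # first part of the XML document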
We start by importing the libraries needed for this script:
# Import libraries
from bs4 import BeautifulSoup
from csv import DictWriter
import pandas as pd
import requests
import time
We also need to define the request headers and the pause (in seconds) between requests. The BGG API documentation says nothing about request rate limits, but unofficial information on its forums suggests a limit of around 2 requests per second. If the script starts getting rate limited, the pause between requests may need to be increased.
# Define request url headers
headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.16; rv:85.0) Gecko/20100101 Firefox/85.0",
    "Accept-Language": "en-GB, en-US, q=0.9, en"
}

# Define sleep timer value between requests
SLEEP_BETWEEN_REQUEST = 0.5
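There is no official documentation of what BGG returns when it throttles a client, so the helper below is only a hedged sketch: it assumes that HTTP 429 (too many requests) or 202 (request queued) means "try again later" and retries a few times with a growing pause. The function name fetch_with_retry and the retry values are hypothetical additions, not part of the original script.

# Hedged sketch: retry a request when BGG appears to be rate limiting.
# Status codes 429/202 and the retry/backoff values are assumptions, not documented limits.
def fetch_with_retry(url, headers, retries=3, backoff=5):
    response = requests.get(url, headers=headers)
    for attempt in range(retries):
        if response.status_code not in (429, 202):
            break
        time.sleep(backoff * (attempt + 1))   # wait longer on each retry
        response = requests.get(url, headers=headers)
    return response

If rate limiting becomes a problem, the plain requests.get call in the main loop further down could be swapped for response = fetch_with_retry(url, headers).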
Next, we define the range of board game IDs to fetch from BGG and process. At the time this script was written, the upper bound of existing board game data was around 402000 IDs, and that number will most likely keep growing.
# Define game ids range
game_id = 264882        # initial game id
last_game_id = 264983   # max game id (currently, it's around 402000)
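Because fetching the full range can take many runs, it may be convenient (an optional addition, not part of the original script) to resume from the highest game_id already written to BGGdata.csv:

# Optional sketch: resume from the last id already stored in BGGdata.csv, if the file exists.
# The column name 'game_id' and the file name match what save_to_csv() writes further down.
import os

if os.path.exists('BGGdata.csv'):
    existing = pd.read_csv('BGGdata.csv')
    if not existing.empty:
        game_id = int(existing['game_id'].max()) + 1  # continue after the last stored game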
The function below is called once the script has finished iterating over the ID range. It is also called if an error occurs while making a request, so that all data appended to the games list up to the point of the exception is stored.
# CSV file saving function
def save_to_csv(games):
    csv_header = [
        'name', 'game_id', 'rating', 'weight', 'year_published', 'min_players',
        'max_players', 'min_play_time', 'max_play_time', 'min_age', 'owned_by',
        'categories', 'mechanics', 'designers', 'artists', 'publishers'
    ]
    with open('BGGdata.csv', 'a', encoding='UTF8') as f:
        dictwriter_object = DictWriter(f, fieldnames=csv_header)
        if f.tell() == 0:
            dictwriter_object.writeheader()
        dictwriter_object.writerows(games)
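As a quick usage sketch, the call below appends a single made-up record (every value is a placeholder, only the keys matter) and relies on the f.tell() == 0 check above to write the header only when the file is still empty. Running it would of course add a dummy row to BGGdata.csv, so it is meant purely as an illustration of the expected dictionary shape:

# Hypothetical example record with the same keys as csv_header above (values are placeholders)
sample_games = [{
    'name': 'Example Game', 'game_id': 0, 'rating': '7.5', 'weight': '2.5',
    'year_published': '2020', 'min_players': '2', 'max_players': '4',
    'min_play_time': '30', 'max_play_time': '60', 'min_age': '10',
    'owned_by': '1000', 'categories': 'Example Category', 'mechanics': 'Example Mechanic',
    'designers': 'Jane Doe', 'artists': 'John Doe', 'publishers': 'Example Publisher'
}]
save_to_csv(sample_games)  # appends one row; the header is written only if the file was empty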
Below is the main logic of the script. For each ID in the range it makes a request to the BGG API, parses the response with BeautifulSoup, checks that the returned item is actually a board game (the API also returns items of other types; see the BGG API documentation for more information), processes the data, and appends it to the games list, which is finally stored in the CSV file.
# Create an empty 'games' list where each game will be appended
games = []

while game_id <= last_game_id:
    url = "https://boardgamegeek.com/xmlapi2/thing?id=" + str(game_id) + "&stats=1"

    try:
        response = requests.get(url, headers=headers)
    except Exception as err:
        # In case of exception, store to CSV the fetched items up to this point,
        # then re-raise so the loop does not continue with a missing or stale response.
        save_to_csv(games)
        print(">>> ERROR:")
        print(err)
        raise

    soup = BeautifulSoup(response.text, features="html.parser")
    item = soup.find("item")

    # Check if the request returned an item. If not, skip to the next id.
    if item:
        # If the item is not a board game - skip it
        if item['type'] != 'boardgame':
            game_id += 1
            continue

        # Set values for each field in the item
        name = item.find("name")['value']
        year_published = item.find("yearpublished")['value']
        min_players = item.find("minplayers")['value']
        max_players = item.find("maxplayers")['value']
        min_play_time = item.find("minplaytime")['value']
        max_play_time = item.find("maxplaytime")['value']
        min_age = item.find("minage")['value']
        rating = item.find("average")['value']
        weight = item.find("averageweight")['value']
        owned = item.find("owned")['value']

        categories = []
        mechanics = []
        designers = []
        artists = []
        publishers = []

        links = item.find_all("link")
        for link in links:
            if link['type'] == "boardgamecategory":
                categories.append(link['value'])
            if link['type'] == "boardgamemechanic":
                mechanics.append(link['value'])
            if link['type'] == "boardgamedesigner":
                designers.append(link['value'])
            if link['type'] == "boardgameartist":
                artists.append(link['value'])
            if link['type'] == "boardgamepublisher":
                publishers.append(link['value'])

        game = {
            "name": name,
            "game_id": game_id,
            "rating": rating,
            "weight": weight,
            "year_published": year_published,
            "min_players": min_players,
            "max_players": max_players,
            "min_play_time": min_play_time,
            "max_play_time": max_play_time,
            "min_age": min_age,
            "owned_by": owned,
            "categories": ', '.join(categories),
            "mechanics": ', '.join(mechanics),
            "designers": ', '.join(designers),
            "artists": ', '.join(artists),
            "publishers": ', '.join(publishers),
        }

        # Append the game (item) to the 'games' list
        games.append(game)

    else:
        # If there is no data for the request - skip to the next one
        print(f">>> Empty item. Skipped item with id ({game_id}).")
        game_id += 1
        continue

    # Increment game id and sleep between requests
    game_id += 1
    time.sleep(SLEEP_BETWEEN_REQUEST)

save_to_csv(games)
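Since the response is XML rather than HTML, Python's standard-library xml.etree.ElementTree is a possible alternative to BeautifulSoup for the parsing step. The fragment below is a hedged sketch of that variant for a few of the fields, assuming the element and attribute names match the ones parsed in the loop above (the rating statistics sit inside nested elements, hence the .// search); it is not part of the original script:

# Hedged alternative: parse the same XML with the standard library instead of BeautifulSoup.
# Element/attribute names follow the fields already used in the main loop above.
import xml.etree.ElementTree as ET

def parse_item_with_elementtree(xml_text):
    root = ET.fromstring(xml_text)
    item = root.find("item")
    if item is None or item.get("type") != "boardgame":
        return None
    return {
        "name": item.find("name").get("value"),
        "year_published": item.find("yearpublished").get("value"),
        "rating": item.find(".//average").get("value"),       # nested under the statistics element
        "weight": item.find(".//averageweight").get("value"),
    }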
Below is a preview of the first few rows of the CSV file as a pandas DataFrame.
# Preview the CSV as pandas DataFrame
df = pd.read_csv('./BGGdata.csv')
print(df.head(5))
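As an optional follow-up (not part of the original script), the numeric columns can be coerced and the highest-rated games in the fetched range listed; the column names are the ones written by save_to_csv above:

# Optional sketch: coerce numeric columns and list the top-rated games in the fetched range
for col in ['rating', 'weight', 'owned_by', 'year_published']:
    df[col] = pd.to_numeric(df[col], errors='coerce')
top_rated = df.sort_values('rating', ascending=False)
print(top_rated[['name', 'rating', 'weight', 'owned_by']].head(10))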