My Python crawler needs to scrape 65 pages of data, and the number of columns on each page is not fixed. I can already grab the data from every column, but because the column count is unknown, the output file names cannot be fixed in advance. The problem is how to write the x-th column into the x-th file, i.e. how to choose the filename for file= dynamically. Code below:
import random
import requests
from bs4 import BeautifulSoup

f_1 = open('fitment/1.txt', 'a')
f_2 = open('fitment/2.txt', 'a')
f_3 = open('fitment/3.txt', 'a')
for i in range(66):
    pr = random.choice(proxy)                      # proxy and head are defined elsewhere
    url = 'https://*****' + str(i) + '****'
    page_url = requests.get(url, headers=head, proxies=pr)
    page_get = page_url.text
    page_text = BeautifulSoup(page_get, 'lxml')
    fitment_1 = page_text.find_all('tr', {'class': 'fitment listRowEven'})
    for each_tag_1 in fitment_1:
        td_text_1 = each_tag_1.find_all('td')
        for x in range(len(td_text_1)):            # range(len(td_text_1) + 1) would run past the last <td>
            print(td_text_1[x].string, file=)      # <-- how to pick this file for column x?
The page structure looks like the following; each <tr> tag corresponds to one column of data, and the values to scrape sit inside the <td> tags:
<tr>
    <td>...</td>
    <td>...</td>
    <td>...</td>
    <td>...</td>
</tr>
<tr>
    <td>...</td>
    <td>...</td>
    <td>...</td>
    <td>...</td>
</tr>
ringa_lee 2017-06-12 09:25:40
Don't define the open() file objects up front; instead, open the matching file on the fly based on the column index when you write:
with open('fitment/{}.txt'.format(x + 1), 'a') as f:   # build the filename from the column index at write time
    f.write(content)                                    # content: the cell text to write
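A minimal sketch of how that could fit into the loop from the question, assuming the per-column files are named fitment/1.txt, fitment/2.txt, ..., and that proxy and head are set up as in the original code:

import random
import requests
from bs4 import BeautifulSoup

for i in range(66):
    pr = random.choice(proxy)                    # proxy and head come from the question's setup
    url = 'https://*****' + str(i) + '****'
    page = requests.get(url, headers=head, proxies=pr)
    soup = BeautifulSoup(page.text, 'lxml')
    for row in soup.find_all('tr', {'class': 'fitment listRowEven'}):
        cells = row.find_all('td')
        for x in range(len(cells)):
            # open the file that matches column x, however many columns this page has
            with open('fitment/{}.txt'.format(x + 1), 'a') as f:
                print(cells[x].string, file=f)

Opening in 'a' (append) mode on every write keeps the code simple; if the repeated open/close ever becomes a bottleneck, the file objects could instead be cached in a dict keyed by the column index and closed once at the end.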