Batch-crawling XML files with Python
1. Site link: https://www.cnvd.org.cn/shareData/list
2. The page with the files to download:
3. This page requires a login before the shared vulnerability files can be batch-downloaded, so we authenticate by reusing the browser's cookie.
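Before pasting the copied Cookie header into the crawler, it can help to sanity-check that it parses into the expected name/value pairs. A minimal sketch using the standard library's `http.cookies` module; the cookie values below are placeholders in the same shape as the real header, not actual session data:

```python
from http.cookies import SimpleCookie


def parse_cookie_header(raw: str) -> dict:
    """Parse a raw Cookie request header string into a name -> value dict."""
    cookie = SimpleCookie()
    cookie.load(raw)
    return {name: morsel.value for name, morsel in cookie.items()}


# Placeholder values mimicking the header copied from the browser's dev tools.
raw = "__jsluid_s=abc123; JSESSIONID=DEADBEEF; __jsl_clearance=1566003116.655|0|xyz"
parsed = parse_cookie_header(raw)
print(parsed["JSESSIONID"])  # -> DEADBEEF
```

If a cookie is silently missing from the parsed dict, its value likely contains characters that need quoting, which is worth knowing before blaming the site for a failed login.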
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Date: 2019-08-17
Author: Bob
Description: Crawl XML files with Python
"""
import requests
from bs4 import BeautifulSoup


def cnvd_spider():
    url = 'https://www.cnvd.org.cn/shareData/list?max=240&offset=0'
    headers = {
        "Cookie": "__jsluid_s=65d5e7902f04498e89b16e93fb010b3c; __jsluid_h=1ab428e655aee36ac3c9835db29b6714; JSESSIONID=91BB91B37543D365AA64895EDFCD828F; __jsl_clearance=1566003116.655|0|CYPFsKirGYBG12qtoOrS5Kq1rM0%3D",
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36",
    }
    # Fetch the listing page and collect every download link (anchors titled '下载xml').
    html = requests.get(url=url, headers=headers).text
    soup = BeautifulSoup(html, 'lxml')
    links = soup.find_all('a', attrs={'title': '下载xml'})
    for link in links:
        file_url = 'https://www.cnvd.org.cn' + link.get('href')
        file_name = link.get_text()
        file_data = requests.get(url=file_url, headers=headers)
        # response.content is bytes, so the file must be opened in binary mode.
        with open(file_name, 'wb') as f:
            f.write(file_data.content)


if __name__ == '__main__':
    cnvd_spider()
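The listing URL already exposes `max` and `offset` query parameters, so if the share list ever grows beyond one page, the crawler can walk pages by stepping the offset. A hedged sketch of that URL construction; the total of 480 entries below is a made-up example, not a figure from the site:

```python
def page_urls(base: str, total: int, page_size: int = 240) -> list:
    """Build one listing URL per page by stepping the offset parameter."""
    return [
        f"{base}?max={page_size}&offset={offset}"
        for offset in range(0, total, page_size)
    ]


base = "https://www.cnvd.org.cn/shareData/list"
# Hypothetical total of 480 entries -> two pages of 240.
urls = page_urls(base, total=480)
for u in urls:
    print(u)
# -> https://www.cnvd.org.cn/shareData/list?max=240&offset=0
# -> https://www.cnvd.org.cn/shareData/list?max=240&offset=240
```

Each generated URL could then be fed through the same fetch-and-parse loop as the single-page version above.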