Crawler application: finding 10 URLs related to 乔丹 (Jordan)

User contribution · 644 · 2022-08-27


Searching Baidu for 乔丹 directly gives a URL like this: root-url:

So I used a clumsy workaround: I searched Baidu for 科比 (Kobe) instead, found the link to 乔丹 on that page, right-clicked to inspect the element and edit it as HTML, and there it was: the entry itself.

Going by my earlier study notes, we need at least a scheduler (spider_man), a URL manager (url_manager), an HTML parser (html_parser), and an HTML downloader (html_downloader). We also plan to save the results to a file, so we need an outputer. The file structure is:

- spider_man.py
- url_manager.py
- html_downloader.py
- html_parser.py
- html_outputer.py

To see the format of the other links on the page: right-click → Inspect Element → Edit as HTML:

芝加哥公牛队 (Chicago Bulls)

After studying the filtering syntax in the BeautifulSoup documentation, we can pinpoint the information we want like this:

```python
new_urls = set()
# entry links such as the one for 芝加哥公牛队
links = soup.find_all('a', href=re.compile(r"/view/\d+\.htm"))  # regex match
links = soup.find_all(target="_blank")  # alternative filter; note this overrides the line above
```
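The href regex is doing the heavy lifting here. A quick Python 3 sanity check of the pattern (the post's code is Python 2, but the regex behaves identically; the sample hrefs are hypothetical):

```python
import re

# Baidu Baike entry links at the time looked like /view/<digits>.htm
pattern = re.compile(r"/view/\d+\.htm")

assert pattern.search("/view/2129.htm")             # an entry page: matches
assert pattern.search("/view/19096.htm")            # digits of any length match
assert pattern.search("/picture/2129.htm") is None  # other paths do not
assert pattern.search("/view/abc.htm") is None      # \d+ requires digits
```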

Now for the code. html_downloader.py:

```python
# coding: utf8
import urllib2


class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return None
        response = urllib2.urlopen(url)
        if response.getcode() != 200:
            return None
        return response.read()
```
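The file above is Python 2 (`urllib2`). For readers on Python 3, a roughly equivalent downloader, as a sketch, would use `urllib.request`; only the import changes, the guard clauses stay the same:

```python
# coding: utf-8
import urllib.request


class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return None
        response = urllib.request.urlopen(url)
        if response.getcode() != 200:
            return None
        return response.read()
```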

html_outputer.py:

```python
# coding: utf8


class HtmlOutputer(object):
    def __init__(self):
        self.datas = []

    def collect_data(self, data):
        if data is None:
            return
        self.datas.append(data)

    def output_html(self):
        fout = open('output.html', 'w')
        fout.write('<html>')
        fout.write('<body>')
        fout.write('<table>')
        for data in self.datas:
            fout.write('<tr>')
            fout.write('<td>%s</td>' % data['url'])
            fout.write('<td>%s</td>' % data['title'].encode('utf-8'))
            fout.write('</tr>')
        fout.write('</table>')
        fout.write('</body>')
        fout.write('</html>')
        fout.close()
```
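The output file is just an HTML table built with % formatting. A small Python 3 check of the row template (in Python 3 the `.encode('utf-8')` call is unnecessary, since strings are already Unicode; the entry below is made up):

```python
# hypothetical entry, mirroring what collect_data receives
data = {'url': 'http://example.com/view/1.htm', 'title': '迈克尔·乔丹'}
row = '<tr><td>%s</td><td>%s</td></tr>' % (data['url'], data['title'])
assert row == '<tr><td>http://example.com/view/1.htm</td><td>迈克尔·乔丹</td></tr>'
```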

html_parser.py:

```python
# coding: utf8
from bs4 import BeautifulSoup
import re
import urlparse


class HtmlParser(object):
    def _get_new_urls(self, page_url, soup):
        new_urls = set()
        # entry links such as the one for 芝加哥公牛队
        links = soup.find_all('a', href=re.compile(r"/view/\d+\.htm"))  # regex match
        links = soup.find_all(target="_blank")
        for link in links:
            new_url = link['href']
            new_full_url = urlparse.urljoin(page_url, new_url)
            new_urls.add(new_full_url)
        return new_urls

    def _get_new_data(self, page_url, soup):
        res_data = {}
        res_data['url'] = page_url
        # title node, e.g.:
        # <dd class="lemmaWgt-lemmaTitle-title"><h1>迈克尔·乔丹</h1></dd>
        title_node = soup.find('dd', class_="lemmaWgt-lemmaTitle-title").find("h1")
        res_data['title'] = title_node.get_text()
        return res_data

    def parse(self, page_url, html_cont):
        if page_url is None or html_cont is None:
            return
        soup = BeautifulSoup(html_cont, 'html.parser', from_encoding='utf-8')
        new_urls = self._get_new_urls(page_url, soup)
        new_data = self._get_new_data(page_url, soup)
        return new_urls, new_data
```
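The key step in `_get_new_urls` is `urlparse.urljoin`, which turns relative hrefs like `/view/123.htm` into absolute URLs. In Python 3 the same function lives in `urllib.parse`; with a made-up page URL:

```python
from urllib.parse import urljoin

page_url = "http://example.com/view/1.htm"  # hypothetical current page

# a relative href is resolved against the current page
assert urljoin(page_url, "/view/2.htm") == "http://example.com/view/2.htm"
# an already-absolute href is left alone
assert urljoin(page_url, "http://other.com/x.htm") == "http://other.com/x.htm"
```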

url_manager.py:

```python
# coding: utf8


class UrlManager(object):
    def __init__(self):
        self.new_urls = set()
        self.old_urls = set()

    def add_new_url(self, url):
        if url is None:
            return
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            self.add_new_url(url)

    def has_new_url(self):
        return len(self.new_urls) != 0

    def get_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url
```
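The manager's two sets guarantee each URL is crawled at most once: `new_urls` is the frontier, `old_urls` the visited set. A Python 3 rendering of the same logic, to show the dedup behavior (the example URL is hypothetical):

```python
class UrlManager(object):
    def __init__(self):
        self.new_urls = set()   # waiting to be crawled
        self.old_urls = set()   # already crawled

    def add_new_url(self, url):
        if url is None:
            return
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def has_new_url(self):
        return len(self.new_urls) != 0

    def get_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url


m = UrlManager()
m.add_new_url("http://example.com/view/1.htm")
m.add_new_url("http://example.com/view/1.htm")  # duplicate: ignored
url = m.get_new_url()
m.add_new_url(url)                              # already crawled: ignored
assert not m.has_new_url()
```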

spider_man.py:

```python
# coding: utf8
import url_manager
import html_downloader
import html_outputer
import html_parser


class SpiderMain(object):
    """docstring for SpiderMain"""

    def __init__(self):
        self.urls = url_manager.UrlManager()
        self.downloader = html_downloader.HtmlDownloader()
        self.parser = html_parser.HtmlParser()
        self.outputer = html_outputer.HtmlOutputer()

    def craw(self, root_url):
        count = 1  # which URL we are crawling
        self.urls.add_new_url(root_url)
        while self.urls.has_new_url():
            try:
                new_url = self.urls.get_new_url()
                print 'NO.%d: %s' % (count, new_url)
                html_cont = self.downloader.download(new_url)
                # parse out new URLs and the page data
                new_urls, new_data = self.parser.parse(new_url, html_cont)
                self.urls.add_new_urls(new_urls)
                self.outputer.collect_data(new_data)
                if count == 20:
                    break
                count = count + 1
            except Exception, e:
                print e
        self.outputer.output_html()


if __name__ == "__main__":
    root_url = "..."  # the Baidu Baike URL for 迈克尔·乔丹, found via the 科比 trick above
    obj_spider = SpiderMain()
    obj_spider.craw(root_url)
```
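To see the scheduler's control flow without touching the network, here is a Python 3 dry run with stub components (all names and URLs are hypothetical): a fake downloader and parser feed the same frontier/visited-set loop, capped at 5 pages instead of 20.

```python
class StubDownloader:
    def download(self, url):
        return "<html>%s</html>" % url   # fake page content

class StubParser:
    def parse(self, url, html):
        # pretend each page links to one new page and yields its own title
        return {url + "/next"}, {'url': url, 'title': 'page'}

new_urls, old_urls = {"http://example.com/root"}, set()
collected, count = [], 1
while new_urls:
    url = new_urls.pop()
    old_urls.add(url)
    html = StubDownloader().download(url)
    found, data = StubParser().parse(url, html)
    collected.append(data)
    new_urls |= {u for u in found if u not in old_urls}
    if count == 5:   # stop after 5 pages, like the count == 20 check in craw()
        break
    count += 1

assert len(collected) == 5
```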

The entries found (recovered from the post's output table; titles only):

1. 迈克尔·乔丹 (Michael Jordan)
2. 杰里·斯隆 (Jerry Sloan)
3. 卡尔·马龙 (Karl Malone)
4. 公国 (principality)
5. 赛季 (season)
6. NBA
7. 篮圈 (basketball rim)
8. 迈克尔·里德 (Michael Redd)
9. 蒂姆·邓肯 (Tim Duncan)
10. NBA季前赛 (NBA preseason)

