2022-08-27
Crawler application: finding 10 Jordan-related URLs
Baidu-searching 乔丹 (Jordan) directly gives a URL like this: root-url:
So I used a clumsy workaround: search Baidu for 科比 (Kobe) instead, find the link to Jordan on that page, right-click it, Inspect Element, Edit as HTML, and there it is:
From the earlier study notes, we need at least a scheduler spider_man, a URL manager url_manager, an HTML parser html_parser, and an HTML downloader html_downloader. Since the plan is to save the results to a file, we also need an outputer. The file structure is therefore:
spider_man.py
url_manager.py
html_downloader.py
html_parser.py
html_outputer.py
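To preview how these five pieces fit together (the full scheduler is listed at the end of this post), spider_man.py simply holds one instance of each module and drives the crawl loop:

#coding: utf8
# Preview of the scheduler's wiring, taken from spider_man.py further below
import url_manager, html_downloader, html_outputer, html_parser

class SpiderMain(object):
    def __init__(self):
        self.urls = url_manager.UrlManager()                  # URL manager
        self.downloader = html_downloader.HtmlDownloader()    # HTML downloader
        self.parser = html_parser.HtmlParser()                # HTML parser
        self.outputer = html_outputer.HtmlOutputer()          # result outputer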
To see the format of other links on the page: right-click, Inspect Element, Edit as HTML:
After studying the filtering syntax in the BeautifulSoup documentation, we can pinpoint the information we want like this:
new_urls = set()
# e.g. the 芝加哥公牛队 (Chicago Bulls) entry link
links = soup.find_all('a', href=re.compile(r"/view/\d+\.htm"))   # regex match on the href
links = soup.find_all(target="_blank")   # broader match: every link opened in a new tab (overrides the line above)
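Before wiring this into the parser, here is a tiny self-contained check of the regex filter. The HTML fragment and the /view/ ids in it are made up purely for illustration:

#coding:utf8
import re
from bs4 import BeautifulSoup

# A made-up fragment imitating Baidu Baike's entry-link format
html_doc = '''
<a target="_blank" href="/view/2722.htm">芝加哥公牛队</a>
<a target="_blank" href="/view/32594.htm">NBA</a>
<a href="/help/about.html">about</a>
'''

soup = BeautifulSoup(html_doc, 'html.parser')
links = soup.find_all('a', href=re.compile(r"/view/\d+\.htm"))
for link in links:
    print link['href'], link.get_text()
# prints only the two /view/<id>.htm links; the /help/about.html one is filtered out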
Now for the code. html_downloader.py:
#coding:utf8
import urllib2

class HtmlDownloader(object):

    def download(self, url):
        if url is None:
            return None
        response = urllib2.urlopen(url)
        if response.getcode() != 200:   # anything other than HTTP 200 means the fetch failed
            return None
        #print response.read()
        return response.read()
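One caveat: urllib2 only exists on Python 2, which is what this post targets. If you happen to be on Python 3, a roughly equivalent downloader (my sketch, not part of the original post) would use urllib.request instead:

#coding: utf8
# Sketch only: Python 3 equivalent of the downloader above, since urllib2 is Python 2 only
from urllib import request

class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return None
        response = request.urlopen(url)
        if response.getcode() != 200:
            return None
        return response.read()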
html_outputer.py:
#coding:utf8
class HtmlOutputer(object):

    def __init__(self):
        self.datas = []

    def collect_data(self, data):
        if data is None:
            return
        self.datas.append(data)

    def output_html(self):
        # NOTE: the literal tag strings were stripped when this post was scraped;
        # a plain <html><body><table> layout is assumed here.
        fout = open('output.html', 'w')
        fout.write('<html>')
        fout.write('<body>')
        fout.write('<table>')
        for data in self.datas:
            fout.write('<tr>')
            fout.write('<td>%s</td>' % data['url'])
            fout.write('<td>%s</td>' % data['title'].encode('utf-8'))
            fout.write('</tr>')
        fout.write('</table>')
        fout.write('</body>')
        fout.write('</html>')
        fout.close()
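To sanity-check the outputer on its own, something like this works (the data is hypothetical, just to see the generated table):

#coding:utf8
# Hypothetical standalone test; the dict shape matches what the parser collects
from html_outputer import HtmlOutputer

outputer = HtmlOutputer()
outputer.collect_data({'url': 'http://example.com/view/1.htm', 'title': u'测试词条'})
outputer.output_html()    # writes output.html containing a one-row table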
html_parser.py:
#coding:utf8
from bs4 import BeautifulSoup
import re
import urlparse

class HtmlParser(object):

    def _get_new_urls(self, page_url, soup):
        new_urls = set()
        # 芝加哥公牛队-style entry links
        links = soup.find_all('a', href=re.compile(r"/view/\d+\.htm"))   # regex match
        links = soup.find_all(target="_blank")
        for link in links:
            new_url = link['href']
            new_full_url = urlparse.urljoin(page_url, new_url)   # build an absolute URL
            new_urls.add(new_full_url)
        #print new_urls
        return new_urls

    def _get_new_data(self, page_url, soup):
        res_data = {}
        res_data['url'] = page_url
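        # The original post is cut off at this point. Judging from how spider_man.py
        # calls self.parser.parse(new_url, html_cont), the file presumably went on to
        # pull out the entry title and expose a parse() entry point, roughly as below.
        # The 'lemmaWgt-lemmaTitle-title' selector is my assumption about Baidu Baike's
        # markup at the time, not something stated in the post.
        title_node = soup.find('dd', class_="lemmaWgt-lemmaTitle-title").find("h1")
        res_data['title'] = title_node.get_text()
        return res_data

    def parse(self, page_url, html_cont):
        if page_url is None or html_cont is None:
            return
        soup = BeautifulSoup(html_cont, 'html.parser', from_encoding='utf-8')
        new_urls = self._get_new_urls(page_url, soup)
        new_data = self._get_new_data(page_url, soup)
        return new_urls, new_data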
url_manager.py:
#coding:utf8
class UrlManager(object):

    def __init__(self):
        self.new_urls = set()    # URLs waiting to be crawled
        self.old_urls = set()    # URLs already crawled

    def add_new_url(self, url):
        if url is None:
            return
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            self.add_new_url(url)

    def has_new_url(self):
        return len(self.new_urls) != 0

    def get_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url
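A quick interactive check of the de-duplication behaviour (the URLs are placeholders):

#coding:utf8
# Placeholder URLs, just to exercise UrlManager's de-duplication
from url_manager import UrlManager

urls = UrlManager()
urls.add_new_url('http://example.com/view/1.htm')
urls.add_new_url('http://example.com/view/1.htm')    # duplicate of a waiting URL: ignored
print urls.has_new_url()                             # True
print urls.get_new_url()                             # http://example.com/view/1.htm, now moved to old_urls
urls.add_new_url('http://example.com/view/1.htm')    # already crawled: ignored
print urls.has_new_url()                             # False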
spider_man.py:
#coding: utf8
import url_manager, html_downloader, html_outputer, html_parser

class SpiderMain(object):
    """docstring for SpiderMain"""

    def __init__(self):
        self.urls = url_manager.UrlManager()
        self.downloader = html_downloader.HtmlDownloader()
        self.parser = html_parser.HtmlParser()
        self.outputer = html_outputer.HtmlOutputer()

    def craw(self, root_url):
        count = 1                        # which URL we are crawling
        self.urls.add_new_url(root_url)
        while self.urls.has_new_url():
            try:
                new_url = self.urls.get_new_url()
                print 'NO.%d: %s' % (count, new_url)
                html_cont = self.downloader.download(new_url)
                # parse out the new URLs and the data
                new_urls, new_data = self.parser.parse(new_url, html_cont)
                print new_urls
                #print new_data
                self.urls.add_new_urls(new_urls)
                self.outputer.collect_data(new_data)
                if count == 20:
                    break
                count = count + 1
            except Exception, e:
                print e
            #print count
        self.outputer.output_html()

if __name__ == "__main__":
    root_url = "..."   # the Baidu Baike URL for the 乔丹 entry; its value is garbled in the original post
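    # The tail of the post is garbled, so the real root_url value is lost.
    # Presumably the main block simply created the spider and kicked off the crawl:
    obj_spider = SpiderMain()
    obj_spider.craw(root_url)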