2022-10-30
crawley - a Pythonic crawler framework built on non-blocking I/O
Pythonic Crawling / Scraping Framework Built on Eventlet
Features
- High speed web crawler built on Eventlet.
- Supports relational database engines such as PostgreSQL, MySQL, Oracle, and SQLite.
- Supports NoSQL databases such as MongoDB and CouchDB. New!
- Export your data to JSON, XML, or CSV formats. New!
- Command line tools.
- Extract data using your favourite tool: XPath or PyQuery (a jQuery-like library for Python).
- Cookie handlers.
- Very easy to use (see the example below).
Documentation
http://packages.python.org/crawley/
Project WebSite
http://project.crawley-cloud.com/
To install crawley, run:
~$ python setup.py install
or install it with pip:
~$ pip install crawley
To start a new project, run:
~$ crawley startproject [project_name]
~$ cd [project_name]
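As a rough orientation (the exact scaffold may vary by crawley version), startproject leaves you with a layout along these lines, which the settings.py shown further below assumes: a top-level settings.py next to a package directory holding models.py and crawlers.py. Using "pypi" as the project name, as in this tutorial:

pypi/
    settings.py
    pypi/
        models.py
        crawlers.py

The files edited in the following steps live in that scaffold.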
Write your Models
""" models.py """from crawley.persistance import Entity, UrlEntity, Field, Unicodeclass Package(Entity): #add your table fields here updated = Field(Unicode(255)) package = Field(Unicode(255)) description = Field(Unicode(255))
Write your Scrapers
""" crawlers.py """from crawley.crawlers import BaseCrawlerfrom crawley.scrapers import BaseScraperfrom crawley.extractors import XPathExtractorfrom models import *class pypiScraper(BaseScraper): #specify the urls that can be scraped by this class matching_urls = ["%"] def scrape(self, response): #getting the current document's url. current_url = response.url #getting the html table. table = response.html.xpath("/html/body/div[5]/div/div/div[3]/table")[0] #for rows 1 to n-1 for tr in table[1:-1]: #obtaining the searched html inside the rows td_updated = tr[0] td_package = tr[1] package_link = td_package[0] td_description = tr[2] #storing data in Packages table Package(updated=td_updated.text, package=package_link.text, description=td_description.text)class pypiCrawler(BaseCrawler): #add your starting urls here start_urls = ["http://pypi.python.org/pypi"] #add your scraper classes here scrapers = [pypiScraper] #specify you maximum crawling depth level max_depth = 0 #select your favourite HTML parsing tool extractor = XPathExtractor
Configure your settings
""" settings.py """import os PATH = os.path.dirname(os.path.abspath(__file__))#Don't change this if you don't have renamed the projectPROJECT_NAME = "pypi"PROJECT_ROOT = os.path.join(PATH, PROJECT_NAME)DATABASE_ENGINE = 'sqlite' DATABASE_NAME = 'pypi' DATABASE_USER = '' DATABASE_PASSWORD = '' DATABASE_HOST = '' DATABASE_PORT = '' SHOW_DEBUG_INFO = True
Finally, just run the crawler:
~$ crawley run
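Once the run finishes, the scraped rows end up in the SQLite database configured above. A minimal inspection sketch using Python's standard sqlite3 module; the database filename and table name here are assumptions (crawley derives them from DATABASE_NAME and the Package entity), so check the actual names on disk first:

import sqlite3

#assumed filename derived from DATABASE_NAME = 'pypi'; adjust if your
#crawley version names the file differently
conn = sqlite3.connect("pypi.sqlite")
#assumed table name derived from the Package entity
for updated, package, description in conn.execute(
        "SELECT updated, package, description FROM package LIMIT 5"):
    print(package, "|", updated, "|", description)
conn.close()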