75 skills found · Page 1 of 3
Gerapy / Gerapy: Distributed crawler management framework based on Scrapy, Scrapyd, Django and Vue.js
my8100 / Scrapydweb: Web app for Scrapyd cluster management, Scrapy log analysis & visualization, auto packaging, timer tasks, monitoring & alerts, and a mobile UI
scrapy / Scrapyd: A service daemon to run Scrapy spiders
DormyMo / SpiderKeeper: Admin UI for Scrapy; an open-source Scrapinghub alternative
kezhenxu94 / House Renting: Possibly the best practice of Scrapy 🕷 and renting a house 🏡
baabaaox / ScrapyDouban: Scrapy crawlers for Douban Movies and Douban Books (豆瓣电影/豆瓣读书)
scrapy / Scrapyd Client: Command-line client for the Scrapyd server
casual-silva / NewsCrawl: Open-source, enterprise-grade public-opinion news crawler project. Supports one-click running of any number of spiders, scheduled spider tasks, batch spider deletion, one-click spider deployment, spider monitoring with visualization, and configurable cluster spider allocation strategies. A ready-made Docker one-click deployment guide is included.
mouday / Spider Admin Pro: spider-admin-pro, a visual management tool for browsing Scrapy + Scrapyd crawler projects and scheduling crawler tasks; an upgraded version of SpiderAdmin
my8100 / Files: Docs and files for ScrapydWeb, Scrapyd, Scrapy, and other projects
djm / Python Scrapyd Api: A Python wrapper for working with Scrapyd's API
crawlab-team / Crawlab Lite: Lightweight version of the Crawlab crawler management platform
bitmakerla / Estela: estela, an elastic web-scraping cluster 🕸
adriancast / Scrapyd Django Template: Basic setup to run Scrapyd + Django and save scraped items in Django models. You can be up and running in just a few minutes.
my8100 / Scrapyd Cluster On Heroku: Set up a free, scalable Scrapyd cluster for distributed web crawling in just a few clicks
dequinns / ScrapydArt: Extends Scrapyd with authentication, spider run statistics, a redesigned UI, and several new APIs for sorting and filtering
aaldaber / Distributed Multi User Scrapy System With A Web UI: Django-based application for creating, deploying and running Scrapy spiders in a distributed manner
mouday / SpiderAdmin: SpiderAdmin, a visual management tool for browsing Scrapy + Scrapyd crawler projects and scheduling crawler tasks
my8100 / Logparser: A tool for parsing Scrapy log files periodically and incrementally, extending Scrapyd's HTTP JSON API
datawizard1337 / ARGUS: An easy-to-use web-scraping tool built on the Scrapy Python framework. ARGUS can crawl a broad range of websites and perform tasks such as scraping text or collecting hyperlinks between sites. See: https://link.springer.com/article/10.1007/s11192-020-03726-9
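Many of the tools listed above (Scrapyd Client, Python Scrapyd Api, ScrapydWeb, Gerapy) are built around Scrapyd's HTTP JSON API, which exposes endpoints such as schedule.json and listjobs.json. As a rough illustration, here is a minimal Python sketch that constructs a request against the schedule.json endpoint using only the standard library; the host, port, project name, and spider name are placeholder assumptions, not values from any project above.

```python
# Minimal sketch of talking to Scrapyd's HTTP JSON API.
# Host/port and project/spider names below are illustrative assumptions.
from urllib import parse, request

SCRAPYD_URL = "http://localhost:6800"  # Scrapyd's default bind address


def build_schedule_request(project: str, spider: str, **settings) -> request.Request:
    """Build (but do not send) a POST request to Scrapyd's schedule.json endpoint."""
    data = {"project": project, "spider": spider, **settings}
    return request.Request(
        f"{SCRAPYD_URL}/schedule.json",
        data=parse.urlencode(data).encode(),
        method="POST",
    )


req = build_schedule_request("myproject", "myspider")
# Actually submitting requires a running Scrapyd instance:
#   with request.urlopen(req) as resp:
#       print(resp.read().decode())  # JSON body with a status and job id
```

Most of the management UIs in this list (SpiderKeeper, ScrapydWeb, Spider Admin Pro) are essentially dashboards layered over these same endpoints.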