
SpiderAdmin

PyPI PyPI - Downloads PyPI - Python Version PyPI - License

Release note for the new version

The SpiderAdmin project will not be maintained or upgraded in the short term. To try the new version, please move to:

spider-admin-pro

spider-admin-pro rewrites the existing functionality of SpiderAdmin, with a Vue front end and a Flask back end, split into modules, which makes secondary development and code maintenance easier.

Features

  1. Wraps the Scrapyd API in a web UI, so Scrapy projects can be viewed and deleted (the first sketch after this list shows the kind of Scrapyd calls involved).

  2. Editing and adding projects are not implemented; for deployment it is recommended to use

$ scrapyd-deploy -a

  3. Schedules spiders as timed tasks, supporting APScheduler's three trigger types plus a random delay, four modes in total (see the second sketch after this list):
  • one-off run: date
  • periodic run: cron
  • fixed-interval run: interval
  • random run: random
  4. Simple access control based on Flask-BasicAuth
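
For orientation, the first sketch below shows the Scrapyd HTTP endpoints that the "view" and "delete" actions boil down to. It is an illustration only, not SpiderAdmin's actual code; the use of requests, the Scrapyd address, and the project name are assumptions.

# Sketch of the Scrapyd API calls behind "view" and "delete";
# the Scrapyd address and project name are placeholders.
import requests

SCRAPYD_URL = "http://127.0.0.1:6800"

# List the deployed projects
projects = requests.get(SCRAPYD_URL + "/listprojects.json").json()["projects"]

# List the spiders inside each project
for project in projects:
    resp = requests.get(SCRAPYD_URL + "/listspiders.json", params={"project": project})
    print(project, resp.json()["spiders"])

# Delete a project (what the "delete" action boils down to)
requests.post(SCRAPYD_URL + "/delproject.json", data={"project": "example_project"})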
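
The second sketch illustrates the four scheduling modes using APScheduler directly; run_spider() and the trigger arguments are placeholders, not SpiderAdmin's internal scheduler setup.

# Minimal sketch of the four scheduling modes, assuming APScheduler;
# run_spider() stands in for whatever triggers a Scrapyd job.
import random
from datetime import datetime, timedelta
from apscheduler.schedulers.blocking import BlockingScheduler

def run_spider():
    print("a call to Scrapyd's schedule.json would go here")

scheduler = BlockingScheduler()

# 1. one-off run at a fixed time (date trigger)
scheduler.add_job(run_spider, "date", run_date=datetime(2020, 1, 1, 8, 0, 0))

# 2. periodic run on cron-style fields (cron trigger)
scheduler.add_job(run_spider, "cron", hour=8, minute=30)

# 3. fixed-interval run (interval trigger)
scheduler.add_job(run_spider, "interval", minutes=30)

# 4. "random" run: re-register a date job with a random delay after each run
def run_and_reschedule():
    run_spider()
    delay = timedelta(seconds=random.randint(60, 600))
    scheduler.add_job(run_and_reschedule, "date", run_date=datetime.now() + delay)

scheduler.add_job(run_and_reschedule, "date", run_date=datetime.now() + timedelta(seconds=5))

scheduler.start()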

Install and run

$ pip3 install -U spideradmin

$ spideradmin init  # initialize; optional, the default configuration also works

$ spideradmin       # start the service

Visit: http://127.0.0.1:5000/

Enable site authentication

Edit config.py in the directory the service is started from:

# If True, the whole site requires authentication; the default is no authentication
BASIC_AUTH_FORCE = True
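
Since access control is built on Flask-BasicAuth, the credentials presumably come from the same configuration. A minimal config.py might look like the sketch below; whether SpiderAdmin forwards these values to Flask-BasicAuth is an assumption here, and the username and password are placeholders.

# BASIC_AUTH_USERNAME / BASIC_AUTH_PASSWORD are standard Flask-BasicAuth settings;
# the values are placeholders.
BASIC_AUTH_USERNAME = "admin"
BASIC_AUTH_PASSWORD = "change-me"

# Require authentication for the whole site
BASIC_AUTH_FORCE = True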

Screenshots

TODO

  1. Add a login page for access control
  2. Add more flexible scheduling options
  3. Add randomly timed runs
  4. Support Nginx authentication in front of Scrapyd and distributed multi-host deployment

Mind the versions when deploying Scrapyd

  • Scrapyd==1.2.0
  • Scrapy==1.6.0
  • Twisted==18.9.0

Enable the execution-result extension

Install the dependency

pip install PureMySQL

Create the data table

CREATE TABLE `log_spider` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `spider_name` varchar(50) DEFAULT NULL,
  `item_count` int(11) DEFAULT NULL,
  `duration` int(11) DEFAULT NULL,
  `log_error` int(11) DEFAULT NULL,
  `create_time` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='spider execution result statistics';

1. Add a data-collection extension file to the Scrapy project

item_count_extension.py

# -*- coding: utf-8 -*-

import logging

from scrapy import signals
from puremysql import PureMysql
from datetime import datetime


class SpiderItemCountExtension(object):
    
    # Database connection URL, e.g. mysql://root:[email protected]:3306/mydata
    ITEM_LOG_DATABASE_URL = None

    # Table that the run statistics are written to
    ITEM_LOG_TABLE = "log_spider"

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_closed(self, spider, reason):
        stats = spider.crawler.stats.get_stats()
        scraped_count = stats.get("item_scraped_count", 0)
        dropped_count = stats.get("item_dropped_count", 0)

        log_error = stats.get("log_count/ERROR", 0)
        start_time = stats.get("start_time")
        finish_time = stats.get("finish_time")
        # total_seconds() also covers runs longer than a day (.seconds would wrap)
        duration = int((finish_time - start_time).total_seconds())

        count = scraped_count + dropped_count

        logging.debug("*" * 50)
        logging.debug("* {}".format(spider.name))
        logging.debug("* item count: {}".format(count))
        logging.debug("*" * 50)

        item = {
            "spider_name": spider.name,
            "item_count": count,
            "duration": duration,
            "log_error": log_error,
            "create_time": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        }

        mysql = PureMysql(db_url=self.ITEM_LOG_DATABASE_URL)
        table = mysql.table(self.ITEM_LOG_TABLE)
        table.insert(item)
        mysql.close()

2. Enable the extension in the Scrapy project's settings.py

EXTENSIONS = {
    # 'scrapy.extensions.telnet.TelnetConsole': None,
    "item_count_extension.SpiderItemCountExtension": 100
}

3. Configure the same db_url in default_config.py

ITEM_LOG_DATABASE_URL = "mysql://root:[email protected]:3306/mydata"
ITEM_LOG_TABLE = "log_spider"
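
With all three pieces in place, each finished spider run should insert one row into log_spider. As a quick sanity check, the collected statistics can be read back directly; the sketch below uses pymysql, which is not a SpiderAdmin dependency, and the connection values are placeholders.

# Sketch: inspect the last few collected run statistics (pymysql is an
# assumption for this example; connection values are placeholders).
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=3306, user="root",
                       password="change-me", database="mydata", charset="utf8mb4")
try:
    with conn.cursor() as cursor:
        cursor.execute(
            "SELECT spider_name, item_count, duration, log_error, create_time "
            "FROM log_spider ORDER BY create_time DESC LIMIT 10"
        )
        for row in cursor.fetchall():
            print(row)
finally:
    conn.close()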

Changelog

Version  Date        Description
0.0.17   2019-07-02  Cleaned up files, improved random scheduling, added scheduling-history statistics and visualization
0.0.20   2019-10-08  Added execution-result statistics
0.0.29   2020-03-25  The view now stays on the current page after a refresh

Contact

More and more people are following this project; for better communication you can join the chat group.

Group question: invite code  Answer: SpiderAdmin

Project notes

To keep existing deployments running stably, no new versions will be released in the short term, so as to avoid incompatibilities.
