
website-scraper / node-website-scraper-phantom


Plugin for website-scraper which returns html for dynamic websites using PhantomJS.

Home Page: https://www.npmjs.com/package/website-scraper-phantom

License: MIT License

Languages: JavaScript 92.24%, HTML 7.76%
Topics: javascript, nodejs, website-scraper, phantomjs, scraper, hacktoberfest

node-website-scraper-phantom's Introduction

⚠️ This plugin is deprecated and no longer maintained. Please consider using website-scraper-puppeteer instead.


website-scraper-phantom

Plugin for website-scraper which returns html for dynamic websites using PhantomJS.

This module is Open Source Software maintained by one developer in their free time. If you want to thank the author of this module, you can use GitHub Sponsors or Patreon.

Requirements

  • nodejs version >= 8
  • website-scraper version >= 4

If you need a plugin for website-scraper version < 4, you can find it here (version 0.1.0).

Installation

npm install website-scraper website-scraper-phantom

Usage

const scrape = require('website-scraper');
const PhantomPlugin = require('website-scraper-phantom');

scrape({
    urls: ['https://www.instagram.com/gopro/'],
    directory: '/path/to/save',
    plugins: [ new PhantomPlugin() ] // render each page with PhantomJS before it is saved
});
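
As the issue report further down also shows, scrape() returns a promise, so you can chain .then and .catch onto the same example to log the result or any error once scraping finishes:

scrape({
    urls: ['https://www.instagram.com/gopro/'],
    directory: '/path/to/save',
    plugins: [ new PhantomPlugin() ]
}).then(result => console.log('Finished:', result))
  .catch(err => console.log('Scraping failed:', err));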

How it works

The plugin starts PhantomJS, which simply opens the page and waits until it has finished loading. This is far from ideal, because you often need to wait for a particular resource to load, click a button, or log in before capturing the page; this module currently does not support such functionality.
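
For orientation, here is a rough sketch of how a plugin like this can hook into website-scraper. It assumes the documented afterResponse plugin action and the phantom npm package; names such as response.url, response.headers and page.property are assumptions about those APIs and depend on the website-scraper version, so treat this as an illustration rather than the actual implementation:

const phantom = require('phantom');

class RenderWithPhantomPlugin {
    apply (registerAction) {
        // afterResponse lets a plugin replace the downloaded body
        // before website-scraper parses it for links and child resources.
        registerAction('afterResponse', async ({ response }) => {
            const contentType = response.headers['content-type'] || '';
            if (!contentType.includes('text/html')) {
                return response.body; // leave non-HTML resources untouched
            }
            const instance = await phantom.create();
            try {
                const page = await instance.createPage();
                await page.open(response.url);          // PhantomJS loads the page and runs its scripts
                return await page.property('content');  // hand back the rendered HTML instead of the raw body
            } finally {
                await instance.exit();                  // always shut PhantomJS down
            }
        });
    }
}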

node-website-scraper-phantom's People

Contributors

aivus, s0ph1e


node-website-scraper-phantom's Issues

Timing Question

I installed this and got it running (apparently).

The first time I ran it, I didn't get any feedback/logging/console messages for so long that I quit the process via Control+C.

I then added onResourceSaved and onResourceError options to my code (which I will post below).

It seems to be running, but the last console.log message was printed to standard output over 30 minutes ago, and the first message was printed more than 30 minutes after I first executed the command.

const scrape = require('website-scraper');
const phantomHtml = require('website-scraper-phantom');

scrape({
    urls: ['http://example.com/'],
    directory: '/Users/a/Sites/example/website-scraper/site',
    recursive: true,
    request: {
        headers: {
          'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36'
        }
    },
    prettifyUrls: true,
    urlFilter: function(url){
      return url.indexOf('http://example.com') === 0;
    },
    maxDepth: 20,
    filenameGenerator: 'bySiteStructure',
    onResourceSaved: (resource) => {
        console.log(`Resource ${resource} was saved to fs`);
    },
    onResourceError: (resource, err) => {
        console.log(`Resource ${resource} was not saved because of ${err}`);
    },
    httpResponseHandler: phantomHtml
}).then(console.log).catch(console.log);

Here's the output from my console after more than an hour running:

$ node with-phantom.js

Resource { url: "http://example.com/", filename: "index.html", depth: 0 } was saved to fs
Resource { url: "http://example.com/content/images/global/logo--white.svg", filename: "content/images/global/logo--white.svg", depth: 1 } was saved to fs
Resource { url: "http://example.com/images/default-source/new/erica-neher.jpg", filename: "images/default-source/new/erica-neher.jpg", depth: 1 } was saved to fs
Resource { url: "http://example.com/images/default-source/new/mgelman.jpg?sfvrsn=2", filename: "images/default-source/new/mgelman.jpg", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/v-621522020/header-holistic.js", filename: "content/scripts/v-621522020/header-holistic.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/resource.axd?07IkLtzIKOUyqM8H2sHUPv81&t=636178367520", filename: "resource.axd", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/libs/modernizr-2.6.2.js", filename: "content/scripts/libs/modernizr-2.6.2.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/hotjar_include.js", filename: "content/scripts/hotjar_include.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/resource.axd?iinqIDoUO_VbuPa8B9csLfpVDCXSHdCKAoN-OP_pDFQ_0&t=636178367740", filename: "resource.axd", depth: 1 } was saved to fs
Resource { url: "http://example.com/resource.axd?PCLcor6BQmkX8d4Ln6gfp6zwZkEXw_F40hkBMSazY5i0&t=636178367740", filename: "resource.axd", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/hubspot.js", filename: "content/scripts/hubspot.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/v-603870980/ga.js", filename: "content/scripts/v-603870980/ga.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/v-625081030/holistic-header-footer.js", filename: "content/scripts/v-625081030/holistic-header-footer.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/v-603870980/global.js", filename: "content/scripts/v-603870980/global.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/vimeo.ga.min.js", filename: "content/scripts/vimeo.ga.min.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/v-6216491424941450/main.js", filename: "content/scripts/v-6216491424941450/main.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/v-621522020/form-validations.js", filename: "content/scripts/v-621522020/form-validations.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/v-6283065698575657/holistic.js", filename: "content/scripts/v-6283065698575657/holistic.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/resource.axd?zdee0I2COZBAoCu6mO5ApQ6PJv2uymlLGla6EwEYYVYUOCk4hiLpCNnwd89UU1&t=636178367740", filename: "resource.axd", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/sharethis.js", filename: "content/scripts/sharethis.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/scripts/v-6225564540/plugins.js", filename: "content/scripts/v-6225564540/plugins.js", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/styles/icon-fonts.css", filename: "content/styles/icon-fonts.css", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/styles/v-6250897590/holistic-header-footer.min.css", filename: "content/styles/v-6250897590/holistic-header-footer.min.css", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/styles/v-6256985850/holistic.min.css", filename: "content/styles/v-6256985850/holistic.min.css", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/styles/example.min.css", filename: "content/styles/example.min.css", depth: 1 } was saved to fs
Resource { url: "http://example.com/content/images/apple-touch-icon-precomposed.png", filename: "content/images/apple-touch-icon-precomposed.png", depth: 1 } was saved to fs
Resource { url: "http://example.com/favicon.ico", filename: "favicon.ico", depth: 1 } was saved to fs

Why is it taking so long to crawl/scrape/download the webpages and assets of the site?
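
One way to narrow this down (a sketch, not part of the original report) is to timestamp the callbacks the script above already defines, so each saved resource is logged together with the elapsed time. The resource.url property is assumed from the Resource { url, filename, depth } shape visible in the output:

const startedAt = Date.now();
const elapsed = () => ((Date.now() - startedAt) / 1000).toFixed(1);

// Swap these in for the two callbacks in the options object above.
const timingCallbacks = {
    onResourceSaved: (resource) => {
        console.log(`[+${elapsed()}s] saved ${resource.url}`);
    },
    onResourceError: (resource, err) => {
        console.log(`[+${elapsed()}s] failed ${resource.url}: ${err}`);
    }
};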

write files as they are done, rather than "don't write until everything is done"?

It looks like right now every single page is kept in memory until the entire site has been mirrored, and only then is everything written to disk. This means you can easily need 30 GB of RAM to fit everything in memory, and the machine locks up completely once scraping finishes and file writing starts.

Can this be changed so that files are simply written to disk as they finish, before all links are resolved?
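
As a hedged sketch of the behaviour being asked for (not a description of how the library currently works), a plugin could register website-scraper's saveResource action and write each resource out as soon as it is handled. The resource.getFilename() and resource.getText() accessors are assumptions based on the plugin documentation, and binary resources would need extra care:

const fs = require('fs/promises');
const path = require('path');

class SaveImmediatelyPlugin {
    constructor (directory) {
        this.directory = directory;
    }

    apply (registerAction) {
        // saveResource is invoked per handled resource; writing here persists
        // it right away instead of keeping the whole site in memory.
        registerAction('saveResource', async ({ resource }) => {
            const filename = path.join(this.directory, resource.getFilename());
            await fs.mkdir(path.dirname(filename), { recursive: true });
            await fs.writeFile(filename, resource.getText());
        });
    }
}

// Usage sketch: scrape({ urls, directory: '/path/to/save', plugins: [ new SaveImmediatelyPlugin('/path/to/save') ] })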
