Horseman Article Parser

A web page article parser which returns an object containing the article's formatted text and other attributes including sentiment, keyphrases, people, places, organisations, spelling suggestions, in-article links, metadata & Lighthouse audit results.

Prerequisites

Node.js & NPM

Install

npm install horseman-article-parser --save

Usage

parseArticle(options, socket) ⇒ Object

Param   | Type   | Description
------- | ------ | --------------------
options | Object | the options object
socket  | Object | the optional socket

Returns: Object - article parser results object

Usage Example

var parser = require('horseman-article-parser');

var options = {
  url: "https://www.theguardian.com/politics/2018/sep/24/theresa-may-calls-for-immigration-based-on-skills-and-wealth",
  enabled: ['lighthouse', 'screenshot', 'links', 'sentiment', 'entities', 'spelling', 'keywords']
}

parser.parseArticle(options)
  .then(function (article) {

    var response = {
      title: article.title.text,
      excerpt: article.excerpt,
      metadescription: article.meta.description.text,
      url: article.url,
      sentiment: { score: article.sentiment.score, comparative: article.sentiment.comparative },
      keyphrases: article.processed.keyphrases,
      keywords: article.processed.keywords,
      people: article.people,
      orgs: article.orgs,
      places: article.places,
      text: {
        raw: article.processed.text.raw,
        formatted: article.processed.text.formatted,
        html: article.processed.text.html
      },
      spelling: article.spelling,
      meta: article.meta,
      links: article.links,
      lighthouse: article.lighthouse
    }

    console.log(response);
  })
  .catch(function (error) {
    console.log(error.message)
    console.log(error.stack);
  })

parseArticle(options, <socket>) accepts an optional socket for piping the response object, status messages and errors to a front-end UI.

See horseman-article-parser-ui as an example.
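
Below is a minimal sketch of wiring a socket through. It assumes an Express server with socket.io (neither is part of this package), and the 'article' and 'parse:error' event names are invented for the example; the parser simply receives the connected socket as its second argument.

const express = require('express')
const socketio = require('socket.io')
const parser = require('horseman-article-parser')

const app = express()
const server = app.listen(3000)
const io = socketio(server)

io.on('connection', function (socket) {
  var options = {
    url: 'https://www.theguardian.com/politics/2018/sep/24/theresa-may-calls-for-immigration-based-on-skills-and-wealth'
  }

  // pass the connected socket straight through as the second argument
  parser.parseArticle(options, socket)
    .then(function (article) {
      // 'article' is a hypothetical event name for this sketch
      socket.emit('article', { title: article.title.text, excerpt: article.excerpt })
    })
    .catch(function (error) {
      // 'parse:error' is also hypothetical
      socket.emit('parse:error', error.message)
    })
})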

Options

The options below are set by default

var options = {
  // puppeteer options (https://github.com/GoogleChrome/puppeteer)
  puppeteer: {
    // puppeteer launch options (https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#puppeteerlaunchoptions)
    launch: {
      headless: true,
      defaultViewport: null
    },
    // puppeteer goto options (https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#pagegotourl-options)
    goto: {
      waitUntil: 'domcontentloaded'
    },
    // Ignore content security policy
    setBypassCSP: true 
  },
  // clean-html options (https://ghub.io/clean-html)
  cleanhtml: {
    'add-remove-tags': ['blockquote', 'span'],
    'remove-empty-tags': ['span'],
    'replace-nbsp': true
  },
  // html-to-text options (https://ghub.io/html-to-text)
  htmltotext: {
    wordwrap: 100,
    noLinkBrackets: true,
    ignoreHref: true,
    tables: true,
    uppercaseHeadings: true
  },
  // retext-keywords options (https://ghub.io/retext-keywords)
  retextkeywords: { maximum: 10 }
}
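
Assuming the parser merges anything you pass over these defaults, a sketch of overriding a few of them looks like this (the Chromium flags, the networkidle2 wait and the narrower word wrap are only illustrative, not required by the parser):

var options = {
  url: "https://www.theguardian.com/politics/2018/sep/24/theresa-may-calls-for-immigration-based-on-skills-and-wealth",
  puppeteer: {
    launch: {
      headless: true,
      // standard Chromium flags, often needed in containerised environments
      args: ['--no-sandbox', '--disable-setuid-sandbox']
    },
    goto: {
      // wait until the network is mostly idle instead of domcontentloaded
      waitUntil: 'networkidle2'
    }
  },
  htmltotext: {
    wordwrap: 80
  }
}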

At a minimum, you should pass a url

var options = {
  url: "https://www.theguardian.com/politics/2018/sep/24/theresa-may-calls-for-immigration-based-on-skills-and-wealth"
}

If you want to enable the advanced features, you should pass the following

var options = {
  url: "https://www.theguardian.com/politics/2018/sep/24/theresa-may-calls-for-immigration-based-on-skills-and-wealth",
  enabled: ['lighthouse', 'screenshot', 'links', 'sentiment', 'entities', 'spelling', 'keywords']
}

You may pass rules for returning an article's title & contents. This is useful in cases where the parser is unable to return the desired title or content, e.g.

rules: [
  {
    host: 'www.bbc.co.uk',
    content: () => {
      var j = window.$
      j('article section, article figure, article header').remove()
      return j('article').html()
    }
  },
  {
    host: 'www.youtube.com',
    title: () => {
      return window.ytInitialData.contents.twoColumnWatchNextResults.results.results.contents[0].videoPrimaryInfoRenderer.title.runs[0].text
    },
    content: () => {
      return window.ytInitialData.contents.twoColumnWatchNextResults.results.results.contents[1].videoSecondaryInfoRenderer.description.runs[0].text
    }
  }
]
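
The rules array goes in the same options object as the url. A sketch (the URL below is only a placeholder):

var options = {
  // placeholder URL for illustration only
  url: "https://www.bbc.co.uk/news/some-article",
  rules: [
    {
      host: 'www.bbc.co.uk',
      content: () => {
        var j = window.$
        j('article section, article figure, article header').remove()
        return j('article').html()
      }
    }
  ]
}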

If you want to pass cookies to Puppeteer, use the following

var options = {
  puppeteer: {
    cookies: [{ name: 'cookie1', value: 'val1', domain: '.domain1' },{ name: 'cookie2', value: 'val2', domain: '.domain2' }]
  }
}

To strip tags before processing, use the following

var options = {
  striptags: ['.something', '#somethingelse']
}

If you need to dismiss any popups, e.g. a privacy popup, use the following

var options = {
  clickelements: ['#button1', '#button2']
}

There are some additional "complex" options available

var options = {

  // array of html elements to strip before analysis
  striptags: [],

  // array of resource types to block e.g. ['image' ]
  blockedResourceTypes: [],

  // array of resource source names (all resources from 
  // these sources are skipped) e.g. [ 'google', 'facebook' ]
  skippedResources: [],

  // readability options (https://ghub.io/node-readability)
  readability: {},

  // retext spell options (https://ghub.io/retext-spell)
  retextspell: {},

  // compromise nlp options
  nlp: { plugins: [ myPlugin, anotherPlugin ] }

}
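
For example, a sketch that blocks images and fonts and skips anything served from the sources mentioned above (assuming the type names follow Puppeteer's request.resourceType() values, which is this author's reading rather than something stated here):

var options = {
  url: "https://www.theguardian.com/politics/2018/sep/24/theresa-may-calls-for-immigration-based-on-skills-and-wealth",
  // resource types as reported by Puppeteer's request.resourceType()
  blockedResourceTypes: ['image', 'media', 'font'],
  // skip any resource whose source name matches these strings
  skippedResources: ['google', 'facebook']
}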

Using Compromise plugins to improve results

Compromise is the natural language processor that allows horseman-article-parser to return topics, e.g. people, places & organisations. You can now pass custom plugins to compromise to modify or add to the word lists like so:

/** add some names */
let testPlugin = function(Doc, world) {
  world.addWords({
    'rishi': 'FirstName',
    'sunak': 'LastName',
  })
}

const options = {
  url: 'https://www.theguardian.com/commentisfree/2020/jul/08/the-guardian-view-on-rishi-sunak-right-words-right-focus-wrong-policies',
  enabled: ['lighthouse', 'screenshot', 'links', 'sentiment', 'entities', 'spelling', 'keywords'],
  nlp: {
    plugins: [testPlugin]
  }
}

This allows us to match - for example - names which are not in the base compromise word lists.
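
For example, a quick sketch that reuses the options above and logs the detected people; with the plugin registered, the names added via addWords should appear in article.people (output shape as in the usage example earlier):

parser.parseArticle(options)
  .then(function (article) {
    // names added by the plugin should now be recognised as people
    console.log(article.people)
  })
  .catch(function (error) {
    console.log(error.message)
  })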

Check out the compromise plugin docs for more info.

Development

Please feel free to fork the repo or open pull requests to the development branch. I've used eslint for linting.

Module API Docs

Install the dependencies with:

npm install

Lint the project files with:

npm run lint

Test the package with:

npm run test

Update API docs with:

npm run docs

Dependencies

Dev Dependencies

License

This project is licensed under the GNU GENERAL PUBLIC LICENSE Version 3 - see the LICENSE file for details

Notes

Due to node-readability being stale, I have imported the relevant functions into this project and refactored them so they don't use request and therefore have no vulnerabilities.

horseman-article-parser's People

Contributors

edanedison, fmacpro, frmacdonald


horseman-article-parser's Issues

Option to return original page HTML from Puppeteer

The Horseman article parser is fantastic! However, I would like to perform additional analysis on the raw HTML downloaded through Puppeteer without having to crawl the page again with a separate tool and risk getting flagged. The original HTML does not seem to be returned with the article object, and I can't find any included option that would allow it. Would it be possible to include an option to allow return of the original HTML, or is there another way to accomplish this goal?

Maybe you should contact Postlight?

Hi.
Maybe you should contact @ginatrapani from @postlight?
Postlight is currently maintaining Mercury Web Parser, which is all that's left of the Readability API.
They have no time to support and develop it … maybe you could cooperate somehow?
I would be happy to see an open-source, actively maintained project that does this kind of thing.

Regards. Anton.
