teamhg-memex / autologin

A project to attempt to automatically login to a website given a single seed

License: Apache License 2.0

Python 84.90% JavaScript 0.77% HTML 8.31% Lua 4.63% Dockerfile 1.40%

autologin's Introduction

Autologin: Automatic login for web spiders


Autologin is a library that makes it easier for web spiders to crawl websites that require login. Provide it with credentials and a URL or the HTML source of a page (normally the homepage), and it will attempt to log in for you. Cookies are returned to be used by your spider.

The goal of Autologin is to make it easier for web spiders to crawl websites that require authentication, without having to rewrite login code for each website.

Autologin can be used as a library, on the command line, or as a service. You can make use of Autologin without generating HTTP requests, so you can drop it right into your spider without worrying about impacting rate limits.

If you are using Scrapy for crawling, check out autologin-middleware, a Scrapy middleware that uses the autologin HTTP API to maintain a logged-in state for a Scrapy spider.

Autologin works on Python 2.7 and 3.3+.

Note

The library is in the alpha stage. The API can still change, especially around the keychain UI.

Features:

  • Automatically find login forms and fields
  • Obtain authenticated cookies
  • Obtain form requests to submit from your own spider
  • Extract links to login pages
  • Use as a library with or without making http requests
  • Command line client
  • Web service
  • UI for managing login credentials
  • Captcha support

Don't like reading documentation?

from autologin import AutoLogin

url = 'https://reddit.com'
username = 'foo'
password = 'bar'
al = AutoLogin()
cookies = al.auth_cookies_from_url(url, username, password)

You now have a cookiejar that you can use in your spider. Don't want a cookiejar?

cookies.__dict__

You now have a dictionary.
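
For illustration, the cookiejar can also be reused in plain Python code outside of a spider. A minimal sketch using the requests library (not a dependency of autologin), assuming the returned object iterates like a standard http.cookiejar.CookieJar:

import requests

# `cookies` is the object returned by auth_cookies_from_url above.
session = requests.Session()
for cookie in cookies:  # assumes iteration yields cookie objects with name/value/domain
    session.cookies.set(cookie.name, cookie.value, domain=cookie.domain)

# Subsequent requests carry the authenticated cookies.
response = session.get(url)
print(response.status_code)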

Install the latest release from PyPI:

$ pip install -U autologin

or the version with the latest changes from GitHub:

$ pip install git+https://github.com/TeamHG-Memex/autologin.git

Autologin depends on Formasaurus for field and form classification, which has quite a lot of dependencies. These packages may require extra steps to install, so the command above may fail. In this case, install the dependencies manually, one by one, following their install instructions.

A recent pip is recommended (update it with pip install pip -U). On Ubuntu, the following packages are required:

$ apt-get install build-essential libssl-dev libffi-dev \
                  libxml2-dev libxslt1-dev \
                  python-dev  # or python3-dev for python 3

If you just want to use the HTTP API, another option is to use the Docker image:

docker pull hyperiongray/autologin
docker run -p 8088:8088 -p 8089:8089 hyperiongray/autologin

The keychain UI stores credentials in an SQLite database kept in the /var/autologin volume. One way of preserving the credentials is to mount a host directory (the current directory in this example) as the data volume:

docker run -p 8088:8088 -p 8089:8089 -v `pwd`:/var/autologin/ \
    hyperiongray/autologin

A db.sqlite file will be created in the specified directory. The keychain UI will be accessible at http://127.0.0.1:8088 on Linux, and at the IP address of the Docker machine on OS X and Windows (for example, http://192.168.99.100:8088).

The auth_cookies_from_url method makes an HTTP request to the URL, extracts the login form (if there is one), fills in the fields, and submits the form. It then returns any cookies it has picked up:

cookies = al.auth_cookies_from_url(url, username, password)

Note that it returns all cookies; they may be session cookies rather than authenticated cookies.

This call is blocking, and uses Crochet to run the Twisted reactor and a Scrapy spider in a separate thread. If you already have a Scrapy spider (or use Twisted in some other way), use the HTTP API or the non-blocking API (it is not documented; see http_api.AutologinAPI._login).

There are also optional arguments for AutoLogin.auth_cookies_from_url:

  • settings is a dictionary with Scrapy settings to override. Useful settings to pass include:

    • HTTP_PROXY, HTTPS_PROXY set proxies to use for all requests.
    • SPLASH_URL if set, Splash will be used to make all requests. Use it if your crawler also uses Splash and the session is tied to IP and User-Agent, or for Tor sites.
    • USER_AGENT overrides default User-Agent.
  • extra_js (experimental) is a string with an extra JS script that should be executed on the login page before making a POST request. For example, it can be used to accept cookie use. It is supported only when SPLASH_URL is also given in settings.

An example of using these options:

cookies = al.auth_cookies_from_url(
    url, username, password,
    extra_js='document.getElementById("accept-cookies").click();',
    settings={
        'SPLASH_URL': 'http://127.0.0.1:8050',
        'USER_AGENT': 'Mozilla/2.02 [fr] (WinNT; I)',
    })

The login_request method extracts the login form (if there is one), fills in the fields, and returns a dictionary with the form URL and args for your spider to submit. No HTTP requests are made:

>>> al.login_request(html_source, username, password, base_url=None)
{'body': 'login=admin&password=secret',
 'headers': {b'Content-Type': b'application/x-www-form-urlencoded'},
 'method': 'POST',
 'url': '/login'}

A relative form action will be resolved against base_url.
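
If you want to replay the form submission yourself instead of handing it to a spider, the returned dictionary maps directly onto an HTTP request. A minimal sketch using the requests library (an illustration only; the keys follow the example output above):

import requests
from urllib.parse import urljoin

params = al.login_request(html_source, username, password, base_url=url)

# Header names and values may be bytes, as in the example output above.
headers = {
    (k.decode() if isinstance(k, bytes) else k):
    (v.decode() if isinstance(v, bytes) else v)
    for k, v in params['headers'].items()
}

# The form action may be relative; resolve it against the page URL.
response = requests.request(
    method=params['method'],
    url=urljoin(url, params['url']),
    data=params['body'],
    headers=headers,
)
login_cookies = response.cookies  # cookies set by the login response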

A command line client is also available:

$ autologin
usage: autologin [-h] [--splash-url SPLASH_URL] [--http-proxy HTTP_PROXY]
                 [--https-proxy HTTPS_PROXY] [--extra-js EXTRA_JS]
                 [--show-in-browser]
                 username password url

You can start the autologin HTTP API with:

$ autologin-http-api

and use the /login-cookies endpoint. Make a POST request with a JSON body (a request sketch follows the argument list below). The following arguments are supported:

  • url (required): the URL of the site we would like to log in to
  • username (optional): if not provided, it will be fetched from the login keychain
  • password (optional): if not provided, it will be fetched from the login keychain
  • extra_js (optional, experimental): a string with an extra JS script that should be executed on the login page before making a POST request. For example, it can be used to accept cookie use. It is supported only when SPLASH_URL is also given in settings.
  • settings (optional): a dictionary with Scrapy settings to override; useful values are described above.
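
A minimal request sketch using the requests library, assuming the HTTP API is listening on port 8089 as in the Docker example above (adjust host and port to your deployment):

import requests

response = requests.post(
    'http://127.0.0.1:8089/login-cookies',  # assumed host/port, see above
    json={
        'url': 'http://example.com/login',
        'username': 'foo',
        'password': 'bar',
        'settings': {'USER_AGENT': 'Mozilla/5.0'},
    })
result = response.json()

if result['status'] == 'solved':
    cookies = result['cookies']  # list of dicts in Cookie.__dict__ format
elif result['status'] == 'error':
    print('login failed:', result['error'])
else:
    print('status:', result['status'])  # 'pending' or 'skipped'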

If username and password are not provided, autologin tries to find them in the login keychain. If no matching credentials are found (they are matched by domain, not by precise URL), then a human is expected to eventually provide them in the keychain UI, or to mark the domain as "skipped".

The response is JSON with the following fields:

  • status, which can take the following values:

    • error means an error occurred; the error field has more info
    • skipped means the domain is marked as "skipped" in the keychain UI
    • pending means there is an item in the keychain UI (or it was just created), but no credentials have been entered yet
    • solved means that login was successful and cookies were obtained
  • error - a human-readable explanation of the error.

  • response - the last response received by autologin (can be None in some cases). This is a dict with cookies, headers, and either a text or a body_b64 field (depending on the response content type).

  • cookies - a list of dictionaries in Cookie.__dict__ format. Present only if status is solved.

  • start_url - the URL that was reached after successful login.

Proxies can be specified via the HTTP_PROXY and HTTPS_PROXY keys in the settings argument. Username and password can be specified as part of the proxy URL (the format is protocol://username:password@url).

If you are using a proxy with Splash, it is assumed that you want Splash to make requests via the given proxy, not that the request to Splash itself goes via the proxy. HTTP_PROXY is always used for Splash.
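
For example, a settings dictionary for an authenticated proxy might look like the following sketch (host, port, and credentials are placeholders):

cookies = al.auth_cookies_from_url(
    url, username, password,
    settings={
        # format: protocol://username:password@url
        'HTTP_PROXY': 'http://proxyuser:proxypass@10.0.0.1:8080',
        'HTTPS_PROXY': 'http://proxyuser:proxypass@10.0.0.1:8080',
    })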

There is experimental captcha support: if the login form contains a captcha, we will try to solve it using an external service (DeathByCaptcha) and submit it as part of the login request. This does not affect the API in any way; you only have to provide environment variables with your DeathByCaptcha account details: DEATHBYCAPTCHA_USERNAME and DEATHBYCAPTCHA_PASSWORD. This applies to all APIs: autologin-http-api, autologin, and the Python API.

You also need to install the decaptcha library:

pip install git+https://github.com/TeamHG-Memex/decaptcha.git

Support is still experimental; the newer Google ReCaptcha/NoCaptcha is not supported. Also, it currently works only with Splash (when SPLASH_URL is passed in settings).

Start the keychain UI with:

$ autologin-server

Note that neither autologin-server nor autologin-http-api is protected by any authentication.

The keychain UI stores credentials in an SQLite database. By default it is located next to the library itself, which is not always desirable, especially if you want to persist the data between updates or do not have write permissions for that folder. You can configure the database location and the secret_key used by the Flask app by creating an /etc/autologin.cfg or ~/.autologin.cfg file (owned by the same user under which the autologin services run). Here is an example config that changes the default secret_key and specifies a different database path (both items are optional):

[autologin]
secret_key = 8a0b923820dcc509a6f75849b
db = /var/autologin/db.sqlite

Source code and bug tracker are on GitHub: https://github.com/TeamHG-Memex/autologin.

Run tests with tox:

$ tox

Splash support is not tested directly here, but there are indirect tests for it in the autologin-middleware test suite.

License is MIT.



autologin's Issues

Sites that do not change cookies during login

There are some sites that set a session cookie on the first visit and do not change it after a successful login; they just mark it as "logged in" internally. Autologin currently does not work for such sites (it works, but does not detect that it has logged in).
One way to fix this could be to first submit invalid credentials, then submit the valid credentials, and check whether the page changed significantly (see the sketch below).
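
A purely illustrative sketch of that heuristic (not part of autologin): fetch the page once with bogus credentials and once with the real ones, and treat a large difference between the two as evidence of a successful login.

from difflib import SequenceMatcher

def looks_logged_in(page_with_bad_creds, page_with_real_creds, threshold=0.9):
    # If the two pages differ a lot, the real credentials probably changed
    # the logged-in state even though the cookies did not change.
    similarity = SequenceMatcher(None, page_with_bad_creds, page_with_real_creds).ratio()
    return similarity < threshold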

error: legacy-install-failure

"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -Icrfsuite/include/ -Icrfsuite/lib/cqdb/include -Iliblbfgs/include -Ipycrfsuite -IC:\Users\tonia\AppData\Local\Programs\Python\Python310\include -IC:\Users\tonia\AppData\Local\Programs\Python\Python310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tppycrfsuite/_pycrfsuite.cpp /Fobuild\temp.win-amd64-3.10\Release\pycrfsuite/_pycrfsuite.obj
_pycrfsuite.cpp
pycrfsuite/_pycrfsuite.cpp(13546): warning C4996: '_PyUnicode_get_wstr_length': deprecated in 3.3
pycrfsuite/_pycrfsuite.cpp(13562): warning C4996: '_PyUnicode_get_wstr_length': deprecated in 3.3
pycrfsuite/_pycrfsuite.cpp(13906): warning C4996: 'PyUnicode_FromUnicode': deprecated in 3.3
pycrfsuite/_pycrfsuite.cpp(17069): error C3861: '_PyGen_Send': identifier not found
pycrfsuite/_pycrfsuite.cpp(17074): error C3861: '_PyGen_Send': identifier not found
pycrfsuite/_pycrfsuite.cpp(17158): error C3861: '_PyGen_Send': identifier not found
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe' failed with exit code 2
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> python-crfsuite

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.

C:\Windows\system32>

v0.1.4: ImportError: cannot import name 'joblib' from 'sklearn.externals'

$ python3 -m pip install autologin
[...]
Successfully installed Automat-22.10.0 Flask-0.10.1 Flask-Admin-1.4.2 Flask-SQLAlchemy-2.1 Flask-WTF-0.12 MarkupSafe-2.1.1 PyDispatcher-2.0.6 Twisted-22.10.0 WTForms-2.1 Werkzeug-2.2.2 autologin-0.1.4 constantly-15.1.0 crochet-2.0.0 cssselect-1.2.0 docopt-0.6.2 formasaurus-0.8.1 hyperlink-21.0.0 incremental-22.10.0 itemadapter-0.7.0 itemloaders-1.0.6 itsdangerous-2.1.2 jmespath-1.0.1 joblib-1.2.0 parsel-1.7.0 protego-0.2.1 python-crfsuite-0.9.8 queuelib-1.6.2 requests-file-1.5.1 scikit-learn-1.1.3 scrapy-2.7.0 scrapy-splash-0.8.0 service-identity-21.1.0 sklearn-crfsuite-0.3.6 tabulate-0.9.0 threadpoolctl-3.1.0 tldextract-3.4.0 w3lib-2.0.1 zope.interface-5.5.0
$ autologin -h
Traceback (most recent call last):
  File "/home/dbosk/.local/bin/autologin", line 5, in <module>
    from autologin.autologin import main
  File "/home/dbosk/.local/lib/python3.10/site-packages/autologin/__init__.py", line 3, in <module>
    from .autologin import *
  File "/home/dbosk/.local/lib/python3.10/site-packages/autologin/autologin.py", line 9, in <module>
    from .spiders import LoginSpider, get_login_form, login_params, crawl_runner, \
  File "/home/dbosk/.local/lib/python3.10/site-packages/autologin/spiders.py", line 10, in <module>
    import formasaurus
  File "/home/dbosk/.local/lib/python3.10/site-packages/formasaurus/__init__.py", line 5, in <module>
    from .classifiers import (
  File "/home/dbosk/.local/lib/python3.10/site-packages/formasaurus/classifiers.py", line 6, in <module>
    from sklearn.externals import joblib
ImportError: cannot import name 'joblib' from 'sklearn.externals' (/home/dbosk/.local/lib/python3.10/site-packages/sklearn/externals/__init__.py)

No idea, but it seems to me that joblib moved out of sklearn.externals into its own joblib package.

Merge crawler-integration

This branch brings some improvements:

  • search for login and registration forms
  • UI for managing login credentials
  • HTTP API with the ability to get login credentials by URL
  • optional Splash support, which adds support for login on Tor sites and simplifies using autologin from a Splash-enabled crawler

Still, there are some unresolved issues:

  • Usage as a library (see discussion in #6)
  • Update documentation, check installation
  • Add tests, add to Travis
  • Cleanup, remove dead code

Very high memory usage

Memory usage was about 7-8 GB and growing rapidly (presumably due to FormSpider). It should be possible to reproduce this and find the cause.

Tor support

To be able to log in (and find login/register forms) on Tor sites.

Update flask-wtf

A newer flask-wtf release fixes an issue that prevented running autologin on Python 3.5.

Splash support

This will solve #5 and will also allow supporting phpBB3-style sessions that are tied to User-Agent and IP.

I see two ways to implement it:

  • call Splash directly via requests, perhaps with a simple Splash script (see the sketch after this list).
  • use a simple Scrapy spider with the hh_splash middleware.

I'm still not sure which is better... The first option is more self-contained.
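
A rough sketch of the first option, assuming a Splash instance on the default port 8050 (render.html is part of Splash's standard HTTP API; the .onion URL is a placeholder):

import requests

# Ask Splash to render the login page, so the request is made by Splash itself
# (which can be configured to go through Tor) rather than by this process.
response = requests.get(
    'http://127.0.0.1:8050/render.html',
    params={'url': 'http://example.onion/login', 'wait': 2})
html = response.text  # rendered HTML to feed into login form detection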

If the login form redirects to the same URL, nologinform is returned

At least that's what it looks like:

DEBUG: Crawled (200) <GET http://localhost:8781> (referer: None) [<scrapy.http.cookies.CookieJar object at 0x10cfdbb10>]
INFO: found login form: {'fields': {'password': 'password', 'login': 'username'}, 'form': u'login'}
DEBUG: submit parameters: {'method': 'POST', 'headers': {'Content-Type': 'application/x-www-form-urlencoded'}, 'url': 'http://localhost:8781/', 'body': 'login=admin&password=secret'}
DEBUG: Redirecting (302) to <GET http://localhost:8781/> from <POST http://localhost:8781/>
DEBUG: Filtered duplicate request: <GET http://localhost:8781/> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
INFO: Closing spider (finished)
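
One possible workaround sketch on the spider side (an illustration of the Scrapy API, not a committed fix): re-request the URL after the login POST with dont_filter=True so the dupefilter does not drop it.

import scrapy

class ExampleLoginSpider(scrapy.Spider):
    # Hypothetical spider, not autologin's own LoginSpider.
    name = 'example-login'

    def parse_login_response(self, response):
        # The POST redirected back to the same URL; without dont_filter=True
        # the dupefilter drops the follow-up GET and the logged-in page is never seen.
        yield scrapy.Request(response.url, callback=self.parse_after_login,
                             dont_filter=True)

    def parse_after_login(self, response):
        self.logger.info('page after login: %s', response.url)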
