Shadowblocker - A metadata crawler for Twitter Blocklists.
Shadowblocker is one component of a collaborative project. It was initially built with Tweepy to crawl Twitter for high volumes of tweets with similar or identical content, in order to identify spammers and copypasta enthusiasts. The user IDs of accounts found to be inauthentic or engaged in harmful activity were then passed to the Brexit Blocklist Project, where a blocktogether instance allowed Twitter users to subscribe to an automatically generated and curated public blocklist that blocks harmful or inauthentic accounts on their behalf.
Since then, it has evolved into a metadata analytics tool that continuously scrapes Twitter to build a large and reliable dataset for analytical tasks.
The BrexitBlocklist project comprises a dedicated analytics server using Elasticsearch and Kibana to store, process, and analyze data. Shadowblocker is treated as a separate appliance: one or many collectors can be deployed to ingest Twitter data into Elasticsearch.
With recently proposed changes to Twitter's API policies, including the threat of free access being revoked, the Shadowblocker engine is in the process of being ported from the official API to one or more API-independent solutions.
As the documentation behind some of these solutions varies widely in its level of detail, the BrexitBlocklist project has chosen to publish the revised Shadowblocker engine here.
Usage
Shadowblocker makes use of input list files. These are simple .txt files which can be edited by anyone with a basic level of technical skill. The crawler scripts iterate through each line of a list file in turn; they can also be invoked manually and run directly from bash. An example list file is shown after the file-type list below.
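As an illustration of that per-line iteration, the sketch below reads a list file and yields one entry per non-blank line. The helper name read_list_file and the blank-line handling are assumptions for illustration, not Shadowblocker's actual code.

```python
# Minimal sketch of how a crawler script might read an input list file.
# read_list_file is a hypothetical helper, not part of the Shadowblocker codebase.
from pathlib import Path
from typing import Iterator


def read_list_file(path: str) -> Iterator[str]:
    """Yield one entry per non-blank line of a line-separated list file."""
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        entry = line.strip()
        if entry:  # the trailing blank line (and any other blanks) is skipped
            yield entry


if __name__ == "__main__":
    # e.g. iterate the usernames in a USERS list file
    for username in read_list_file("USERS.txt"):
        print(f"would crawl @{username}")
```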
USERIDS
A text file with line-separated Twitter account user IDs (a blank line should be left at the end). Makes use of the userid.py crawler-script.
USERS
A text file with line-separated Twitter account usernames (a blank line should be left at the end). Makes use of the username.py crawler-script.
SEARCH
A text file with line-separated search terms, equivalent to a keyword search (a blank line should be left at the end). Makes use of the search.py crawler-script.
HASHTAGS
A text file with line-separated Twitter hashtags (a blank line should be left at the end). Makes use of the hashtag.py crawler-script.
ADVANCEDSEARCH
A text file with line-separated advanced Twitter search parameters (a blank line should be left at the end). Makes use of the advsearch.py crawler-script.
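For example, a USERS file is just one username per line, with a blank line left at the end (the usernames here are illustrative placeholders):

```
jack
TwitterSupport
exampleuser123

```

The other list files follow the same layout, with user IDs, search terms, hashtags, or advanced search parameters in place of usernames.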
As the scripts run, they index the harvested information into an Elasticsearch instance.
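A minimal sketch of that ingest step is shown below, assuming snscrape's (unofficial) Python module and the elasticsearch 8.x client. The index name, document fields, and Elasticsearch URL are illustrative assumptions, not Shadowblocker's actual schema.

```python
# Minimal sketch of the scrape-and-ingest step, assuming snscrape and the
# elasticsearch 8.x Python client. Index name and document fields are
# illustrative, not Shadowblocker's actual schema.
import snscrape.modules.twitter as sntwitter
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local instance


def ingest_search_term(term: str, limit: int = 100) -> None:
    """Scrape tweets matching a search term and index them into Elasticsearch."""
    for i, tweet in enumerate(sntwitter.TwitterSearchScraper(term).get_items()):
        if i >= limit:
            break
        doc = {
            "date": tweet.date.isoformat(),
            "username": tweet.user.username,
            # snscrape renamed `content` to `rawContent` in newer releases
            "content": getattr(tweet, "rawContent", None) or getattr(tweet, "content", ""),
        }
        es.index(index="tweets", id=str(tweet.id), document=doc)


if __name__ == "__main__":
    ingest_search_term("brexit")
```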
Dependencies
Hashivault
Elasticsearch
Kibana
snscrape
twint-zero
Configuration