It's a web crawler based on GNU Wget. It reads its input from a tab-separated file and writes its output to a given Amazon S3 bucket. It runs inside a Docker container.
It contains three programs: `crawl`, which does the actual crawling; `seed`, which sends requests to the crawler; and `report`, which generates a report.
The input file shall be in a Tab-Separated-Values-like format, where:

- one column is the seed URL (field name: `homepage`),
- one column is some identifier (field name: `place_id`).
An example could be:

```
homepage              place_id
http://example.com/   1234
http://slashdot.org/  2345
```
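For illustration, here is a minimal Python sketch that parses such an input file, assuming a header row and tab delimiters as in the example above (the crawler itself may read it differently):

```python
import csv

# Read the tab-separated input file; field names come from the header row.
with open("input.tsv", newline="") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        print(row["homepage"], row["place_id"])
```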
The output will be a gzipped WARC file uploaded to the S3 bucket. The filename is derived from the given URL using the UUID (version 3) method and carries the extension `.warc.gz`.
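As a sketch of how such a name can be derived in Python (the use of the standard URL namespace is an assumption; the crawler may seed the UUID differently):

```python
import uuid

def warc_filename(url: str) -> str:
    # Version-3 UUIDs are MD5-based and deterministic: the same URL always
    # maps to the same filename. NAMESPACE_URL is an assumption here.
    return str(uuid.uuid3(uuid.NAMESPACE_URL, url)) + ".warc.gz"

print(warc_filename("http://example.com/"))
```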
The output generated by the `report` command prints the results location and the input values. Each line is a JSON document, and the input lines are grouped by the seed URL:
```json
{
  "seed": "http://example.com/",
  "results": "s3://bucket/c07ce0c5-005e-3838-81a9-c97d34719ac8.tag.warc.gz",
  "ppids": [
    "840dnq82-9cbfe6ab1b3442afa04afcdb428d9521",
    "840dnq82-9e8362d27a9446bda65958fea3873798"
  ]
}
```
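Since each line is a self-contained JSON document, the report can be consumed line by line; a minimal sketch (the file name `report.jsonl` is a hypothetical placeholder):

```python
import json

# Each line of the report is one JSON document keyed by seed URL.
with open("report.jsonl") as f:
    for line in f:
        doc = json.loads(line)
        print(doc["seed"], "->", doc["results"])
```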
!!! This has not been tested recently !!!
To build the Docker image with the tag `crawler`:

```
$ docker build -t crawler .
```
To run the container with the same tag as before:

```
$ docker run --env AWS_ACCESS_KEY_ID=<your_aws_access_key> \
    --env AWS_SECRET_ACCESS_KEY=<your_aws_secret_key> \
    -v <directory_from_host>:/data \
    -i -t crawler
```
Although there are ideas for improving it, this project was a one-shot effort on my side. It was rather interesting to see how to solve the basic problems, but I don't really need the functionality anymore. Still, if somebody finds it useful and runs into problems or bugs, I would be more than glad to hear about them.