luiswirth / crawler
An asynchronous web crawler.
ChatGPT (GPT-4) says
When building a web scraper, it's essential to respect a few rules to maintain good behavior, avoid being blocked, and ensure your crawler conforms to web standards. Here are some key principles to follow:
Respect the Robots.txt: The robots.txt file at the root of a website specifies which parts of the site crawlers are allowed to access. It's very important to parse this file and respect its rules before you start scraping.
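As an illustration only (not code from this repository), here is a minimal Python sketch using the standard-library urllib.robotparser to check a URL against robots.txt; the site URL and user-agent string are placeholders.

```python
from urllib.robotparser import RobotFileParser

# Placeholder site and user-agent; substitute your own crawler's values.
parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # download and parse the robots.txt file

if parser.can_fetch("MyCrawler/1.0", "https://example.com/some/page"):
    print("allowed to crawl this page")
else:
    print("disallowed by robots.txt")
```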
Respect the Crawl-Delay: The robots.txt file may also contain a Crawl-Delay directive, which specifies how many seconds a bot should wait between successive requests. Respecting this delay can prevent your IP from getting blocked.
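Continuing the same hedged sketch, the standard-library parser also exposes the Crawl-Delay value; the one-second fallback below is an assumption, not something specified by this repository.

```python
import asyncio
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()

# crawl_delay() returns the Crawl-delay (in seconds) for the given user-agent,
# or None if the directive is absent; fall back to a polite default.
delay = parser.crawl_delay("MyCrawler/1.0") or 1.0

async def crawl(urls):
    for url in urls:
        print("would fetch", url)   # the actual request goes here
        await asyncio.sleep(delay)  # wait between successive requests
```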
Implement Retries and Handle Failures: If a request fails or times out, you should have a retry mechanism in place, possibly with an incremental backoff strategy (each subsequent retry waits longer). However, if requests continue to fail for a specific website, it's better to log the error and move on, as the server might be intentionally blocking the requests.
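One possible shape for such a retry loop, written here with asyncio and aiohttp purely as an illustration (the function name, retry count, and timeout are assumptions, not this project's values):

```python
import asyncio
import aiohttp

async def fetch_with_retries(session: aiohttp.ClientSession, url: str,
                             max_retries: int = 3, base_delay: float = 1.0):
    for attempt in range(max_retries):
        try:
            async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
                resp.raise_for_status()
                return await resp.text()
        except (aiohttp.ClientError, asyncio.TimeoutError):
            # Each retry waits twice as long as the previous one.
            await asyncio.sleep(base_delay * 2 ** attempt)
    # Persistent failure: give up on this URL rather than hammering the server.
    return None
```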
Spoof the User-Agent: Some websites don't like being scraped and will block requests from common web scraping and crawling user-agents. By setting your user-agent to mimic a web browser, you can sometimes bypass this restriction. However, be aware that this is a gray area and might be against the terms of service of some websites.
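For example, a browser-like User-Agent header can be attached to every request; the header string below is an arbitrary example, and the snippet is only a sketch, not this repository's code.

```python
import aiohttp

BROWSER_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36"
}

async def fetch(url):
    # Every request made through this session carries the browser-like header.
    async with aiohttp.ClientSession(headers=BROWSER_HEADERS) as session:
        async with session.get(url) as resp:
            return await resp.text()
```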
Limit the Rate of Crawling: Crawling at a high rate can overwhelm the server and may lead to your IP getting blocked. Always try to limit the rate of your requests to a reasonable speed.
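One common way to cap the request rate is a semaphore plus a short sleep per request; the limits below are illustrative assumptions, not tuned values from this project.

```python
import asyncio
import aiohttp

MAX_CONCURRENT = 5      # at most 5 requests in flight at once (assumed value)
REQUEST_INTERVAL = 1.0  # seconds each worker waits after a request (assumed value)

semaphore = asyncio.Semaphore(MAX_CONCURRENT)

async def fetch_rate_limited(session: aiohttp.ClientSession, url: str):
    async with semaphore:
        async with session.get(url) as resp:
            body = await resp.text()
        await asyncio.sleep(REQUEST_INTERVAL)
        return body
```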
Use a Pool of Proxies: If you're crawling at a large scale, you might want to use a pool of proxies to avoid IP-based blocking. This, however, could be against the terms of service of the website you're scraping, so use this strategy judiciously.
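If you do go this route, a round-robin pool can look roughly like the following; the proxy addresses are placeholders and the aiohttp usage is only one possible approach.

```python
import itertools
import aiohttp

# Placeholder proxy addresses; rotate through them round-robin.
PROXIES = itertools.cycle([
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
])

async def fetch_via_proxy(session: aiohttp.ClientSession, url: str):
    proxy = next(PROXIES)
    async with session.get(url, proxy=proxy) as resp:
        return await resp.text()
```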
Respect Website's Terms of Service: Always make sure you're familiar with a website's Terms of Service before you start scraping. Some websites explicitly disallow web scraping and not respecting this can lead to your IP being blocked, or potentially more serious legal issues.
Consider Using APIs: If the website provides an API, consider using it instead of scraping the site. APIs are designed for programmatic access and provide data in a much cleaner and more structured format, making it easier to work with.
Finally, note that being polite with your web scraping bot is not only about avoiding blocks; it's also a matter of ethics and legal considerations. Some websites rely on their servers for their business, and overwhelming those servers with a poorly designed scraper can have serious consequences for the people who depend on them.