Crawl4AI v0.2.74 πŸ•·οΈπŸ€–


Crawl4AI simplifies web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. πŸ†“πŸŒ

Try it Now!

  • Use as REST API: Open In Colab
  • Use as Python library: This collab is a bit outdated. I'm updating it with the newest versions, so please refer to the website for the latest documentation. This will be updated in a few days, and you'll have the latest version here. Thank you so much. Open In Colab

✨ Visit our Documentation Website

Features ✨

  • πŸ†“ Completely free and open-source
  • πŸ€– LLM-friendly output formats (JSON, cleaned HTML, markdown)
  • 🌍 Supports crawling multiple URLs simultaneously
  • 🎨 Extracts and returns all media tags (Images, Audio, and Video)
  • πŸ”— Extracts all external and internal links
  • πŸ“š Extracts metadata from the page
  • πŸ”„ Custom hooks for authentication, headers, and page modifications before crawling
  • πŸ•΅οΈ User-agent customization
  • πŸ–ΌοΈ Takes screenshots of the page
  • πŸ“œ Executes multiple custom JavaScripts before crawling
  • πŸ“š Various chunking strategies: topic-based, regex, sentence, and more
  • 🧠 Advanced extraction strategies: cosine clustering, LLM, and more
  • 🎯 CSS selector support
  • πŸ“ Passes instructions/keywords to refine extraction

Cool Examples πŸš€

Quick Start

from crawl4ai import WebCrawler

# Create an instance of WebCrawler
crawler = WebCrawler()

# Warm up the crawler (load necessary models)
crawler.warmup()

# Run the crawler on a URL
result = crawler.run(url="https://www.nbcnews.com/business")

# Print the extracted content
print(result.markdown)
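Besides markdown, the result object also exposes the other outputs listed under Features. The field names below (cleaned_html, media, links, metadata) reflect the library's crawl result as documented; treat this as a sketch and verify against the Documentation Website.

# Continuing from the Quick Start above: inspect the other result fields.
print(result.cleaned_html[:500])      # sanitized HTML
print(result.media["images"][:3])     # extracted image entries
print(result.links["external"][:3])   # external links found on the page
print(result.metadata)                # page metadata (title, description, ...)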

How to install πŸ› 

virtualenv venv
source venv/bin/activate
pip install "crawl4ai @ git+https://github.com/unclecode/crawl4ai.git"

Speed-First Design πŸš€

Perhaps the most important design principle for this library is speed. We need to ensure it can handle many links and resources in parallel as quickly as possible. By combining this speed with fast LLMs like Groq, the results will be truly amazing.

import time
from crawl4ai.web_crawler import WebCrawler
crawler = WebCrawler()
crawler.warmup()

start = time.time()
url = "https://www.nbcnews.com/business"
result = crawler.run(url, word_count_threshold=10, bypass_cache=True)
end = time.time()
print(f"Time taken: {end - start}")

Let's take a look at the timing output for the code snippet above:

[LOG] πŸš€ Crawling done, success: True, time taken: 1.3623387813568115 seconds
[LOG] πŸš€ Content extracted, success: True, time taken: 0.05715131759643555 seconds
[LOG] πŸš€ Extraction, time taken: 0.05750393867492676 seconds.
Time taken: 1.439958095550537

Fetching the content from the page took 1.3623 seconds, and extracting the content took 0.0575 seconds. πŸš€
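Note that bypass_cache=True forces a fresh fetch. By default crawl4ai caches crawl results locally, so repeating the same URL without that flag should return much faster. A minimal sketch of that assumption, reusing the crawler and url from above:

# Second call should hit the local cache (assuming default caching behavior)
# and return far faster than the uncached run above.
start = time.time()
cached_result = crawler.run(url, word_count_threshold=10)  # bypass_cache defaults to False
print(f"Cached run took: {time.time() - start:.4f} seconds")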

Extract Structured Data from Web Pages πŸ“Š

Crawl all OpenAI models and their fees from the official page.

import os
from crawl4ai import WebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from pydantic import BaseModel, Field

class OpenAIModelFee(BaseModel):
    model_name: str = Field(..., description="Name of the OpenAI model.")
    input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
    output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")

url = 'https://openai.com/api/pricing/'
crawler = WebCrawler()
crawler.warmup()

result = crawler.run(
    url=url,
    word_count_threshold=1,
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4o",
        api_token=os.getenv('OPENAI_API_KEY'),
        schema=OpenAIModelFee.schema(),
        extraction_type="schema",
        instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
        Do not miss any models in the entire content. One extracted model JSON format should look like this:
        {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}."""
    ),
    bypass_cache=True,
)

print(result.extracted_content)
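result.extracted_content comes back as a JSON string, so it can be loaded and validated against the same Pydantic model. A minimal sketch, assuming the extraction returns a JSON list of objects matching the schema above:

import json

# Parse the JSON string produced by the LLM extraction and validate each entry
# against the OpenAIModelFee schema defined earlier.
models = [OpenAIModelFee(**item) for item in json.loads(result.extracted_content)]
for model in models:
    print(model.model_name, model.input_fee, model.output_fee)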

Execute JS, Filter Data with CSS Selector, and Clustering

from crawl4ai import WebCrawler
from crawl4ai.extraction_strategy import CosineStrategy

js_code = ["const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"]

crawler = WebCrawler()
crawler.warmup()

result = crawler.run(
    url="https://www.nbcnews.com/business",
    js=js_code,
    css_selector="p",
    extraction_strategy=CosineStrategy(semantic_filter="technology")
)

print(result.extracted_content)

Documentation πŸ“š

For detailed documentation, including installation instructions, advanced features, and API reference, visit our Documentation Website.

Contributing 🀝

We welcome contributions from the open-source community. Check out our contribution guidelines for more information.

License πŸ“„

Crawl4AI is released under the Apache 2.0 License.

Contact πŸ“§

For questions, suggestions, or feedback, feel free to reach out:

Happy Crawling! πŸ•ΈοΈπŸš€

