
assafelovic / gpt-researcher


GPT based autonomous agent that does online comprehensive research on any given topic

Home Page: https://gptr.dev

License: MIT License

Python 90.18% JavaScript 3.30% HTML 3.27% CSS 2.63% Dockerfile 0.61%
ai autonomous-agent gpt-4 gpt-researcher openai python research-assistant web-search

gpt-researcher's Introduction

GPT Researcher is an autonomous agent designed for comprehensive online research on a variety of tasks.

The agent can produce detailed, factual and unbiased research reports, with customization options for focusing on relevant resources, outlines, and lessons. Inspired by the recent Plan-and-Solve and RAG papers, GPT Researcher addresses issues of speed, determinism and reliability, offering more stable performance and increased speed through parallelized agent work, as opposed to synchronous operations.

Our mission is to empower individuals and organizations with accurate, unbiased, and factual information by leveraging the power of AI.

Why GPT Researcher?

  • Forming objective conclusions through manual research takes time; it can take weeks to find the right resources and information.
  • Current LLMs are trained on past and outdated information, with heavy risks of hallucination, making them almost irrelevant for research tasks.
  • Current LLMs are limited to short token outputs, which are not sufficient for long, detailed research reports (2k+ words).
  • Services that enable web search (such as ChatGPT + Web Plugin) consider only a limited set of sources and content, which in some cases results in superficial and biased answers.
  • Using only a selection of web sources can create bias in determining the right conclusions for research tasks.

Demo

demo.mp4

Architecture

The main idea is to run "planner" and "execution" agents, where the planner generates questions to research, and the execution agents seek the most relevant information for each generated research question. Finally, the planner filters and aggregates all related information and creates a research report.

The agents leverage both gpt-3.5-turbo and gpt-4o (128K context) to complete a research task. We optimize for cost by using each only when necessary. The average research task takes around 3 minutes to complete and costs ~$0.1.

More specifically:

  • Create a domain-specific agent based on the research query or task.
  • Generate a set of research questions that together form an objective opinion on the given task.
  • For each research question, trigger a crawler agent that scrapes online resources for information relevant to the task.
  • For each scraped resource, summarize the relevant information and keep track of its sources.
  • Finally, filter and aggregate all summarized sources and generate a final research report.
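The steps above can be sketched as a minimal asyncio pipeline. This is an illustrative sketch only: the stub functions stand in for the real LLM and scraper calls, and the names here are not the project's actual API. The key point it shows is the parallelized execution agents mentioned in the architecture description:

```python
import asyncio

# Hypothetical stand-ins for the real agents; the actual project uses
# LLM calls and web scrapers here.
async def plan_questions(task: str) -> list[str]:
    # The planner would ask an LLM to break the task into sub-questions.
    return [f"{task}: background", f"{task}: recent developments"]

async def research_question(question: str) -> str:
    # An execution agent would scrape and summarize sources for one question.
    return f"summary for '{question}'"

async def run_research(task: str) -> str:
    questions = await plan_questions(task)
    # Execution agents run in parallel, which is the speed optimization
    # the README contrasts with synchronous operation.
    summaries = await asyncio.gather(*(research_question(q) for q in questions))
    # The planner then aggregates all summaries into a final report.
    return "\n".join(summaries)

report = asyncio.run(run_research("quantum computing"))
```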

Tutorials

Features

  • πŸ“ Generate research, outlines, resources and lessons reports with local documents and web sources
  • πŸ“œ Can generate long and detailed research reports (over 2K words)
  • 🌐 Aggregates over 20 web sources per research to form objective and factual conclusions
  • πŸ–₯️ Includes an easy-to-use web interface (HTML/CSS/JS)
  • πŸ” Scrapes web sources with javascript support
  • πŸ“‚ Keeps track and context of visited and used web sources
  • πŸ“„ Export research reports to PDF, Word and more...

📖 Documentation

Please see here for full documentation on:

  • Getting started (installation, setting up the environment, simple examples)
  • Customization and configuration
  • How-To examples (demos, integrations, docker support)
  • Reference (full API docs)

βš™οΈ Getting Started

Installation

Step 0 - Install Python 3.11 or later. See here for a step-by-step guide.

Step 1 - Download the project and navigate to its directory

git clone https://github.com/assafelovic/gpt-researcher.git
cd gpt-researcher

Step 2 - Set up API keys using one of two methods: exporting them directly or storing them in a .env file.

For a temporary setup on Linux/macOS, use the export method (on Windows cmd, use set instead):

export OPENAI_API_KEY={Your OpenAI API Key here}

For a more permanent setup, create a .env file in the current gpt-researcher directory and add the environment variables there (without export).

  • For the LLM, we recommend OpenAI GPT, but you can use any other LLM (including open-source models). To learn how to change the LLM, please refer to the documentation page.
  • For the web search API, the default is duckduckgo (no signup required), but you can also use other web search APIs of your choice by setting the RETRIEVER env variable to google, bing, tavily, googleSerp, serpapi, searx and more.
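Putting the two settings together, a minimal .env file might look like this (the API key is a placeholder copied from the export example above, and RETRIEVER is optional since duckduckgo is the default):

```
OPENAI_API_KEY={Your OpenAI API Key here}
RETRIEVER=duckduckgo
```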

Quickstart

Step 1 - Install dependencies

pip install -r requirements.txt

Step 2 - Run the agent with FastAPI

python -m uvicorn main:app --reload

Step 3 - Go to http://localhost:8000 on any browser and enjoy researching!


To learn how to get started with Docker, Poetry or a virtual environment check out the documentation page.

Run as PIP package

pip install gpt-researcher

import asyncio
from gpt_researcher import GPTResearcher

async def main() -> str:
    query = "why is Nvidia stock going up?"
    researcher = GPTResearcher(query=query, report_type="research_report")
    # Conduct research on the given query
    research_result = await researcher.conduct_research()
    # Write the report
    report = await researcher.write_report()
    return report

report = asyncio.run(main())

For more examples and configurations, please refer to the PIP documentation page.

📄 Research on Local Documents

You can instruct the GPT Researcher to run research tasks based on your local documents. Currently supported file formats are: PDF, plain text, CSV, Excel, Markdown, PowerPoint, and Word documents.

Step 1: Add the env variable DOC_PATH pointing to the folder where your documents are located.

export DOC_PATH="./my-docs"

Step 2:

  • If you're running the frontend app on localhost:8000, simply select "My Documents" from the "Report Source" dropdown options.
  • If you're running GPT Researcher with the PIP package, pass the report_source argument as "documents" when you instantiate the GPTResearcher class (code sample here).
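As an illustration of what document discovery over DOC_PATH might look like, here is a hedged sketch (not the library's actual implementation; the extension set is inferred from the supported formats listed above, and the real library may accept more or fewer):

```python
from pathlib import Path

# Extensions inferred from the formats the README lists as supported:
# PDF, plain text, CSV, Excel, Markdown, PowerPoint, and Word documents.
SUPPORTED = {".pdf", ".txt", ".csv", ".xlsx", ".md", ".pptx", ".docx"}

def find_local_documents(doc_path: str) -> list[Path]:
    """Collect files under a DOC_PATH-style folder with supported extensions."""
    return sorted(
        p for p in Path(doc_path).rglob("*")
        if p.is_file() and p.suffix.lower() in SUPPORTED
    )
```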

One-Click Deployment

Deploy to RepoCloud

👪 Multi-Agent Assistant

As AI evolves from prompt engineering and RAG to multi-agent systems, we're excited to introduce our new multi-agent assistant built with LangGraph.

By using LangGraph, the research process can be significantly improved in depth and quality by leveraging multiple agents with specialized skills. Inspired by the recent STORM paper, this project showcases how a team of AI agents can work together to conduct research on a given topic, from planning to publication.

An average run generates a 5-6 page research report in multiple formats such as PDF, Docx and Markdown.

Check it out here or head over to our documentation for more information.

🚀 Contributing

We highly welcome contributions! Please check out contributing if you're interested.

Please check out our roadmap page and reach out to us via our Discord community if you're interested in joining our mission.

βœ‰οΈ Support / Contact us

🛑 Disclaimer

This project, GPT Researcher, is an experimental application and is provided "as-is" without any warranty, express or implied. We are sharing code for academic purposes under the MIT license. Nothing herein is academic advice, nor a recommendation to use it in academic or research papers.

Our view on unbiased research claims:

  1. The main goal of GPT Researcher is to reduce incorrect and biased facts. How? We assume that the more sites we scrape, the lower the chance of incorrect data. By scraping over 20 sites per research task and choosing the most frequent information, the chance that they are all wrong is extremely low.
  2. We do not aim to eliminate bias; we aim to reduce it as much as possible. We are here as a community to figure out the most effective human/LLM interactions.
  3. In research, people also tend toward bias, as most already have opinions on the topics they research. This tool scrapes many opinions and will evenly present diverse views that a biased person might never have read.
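The frequency argument in point 1 can be made concrete with toy arithmetic. Assuming, purely for illustration, that each source is independently wrong with probability 0.3, the chance that all 20 scraped sources are wrong is tiny. Note that real sources are not independent (they may share a common bias), so this is a sketch of the argument, not a guarantee:

```python
# Toy calculation: probability that every one of 20 independent sources
# is wrong, given an assumed 0.3 per-source error rate.
p_single_wrong = 0.3
n_sources = 20
p_all_wrong = p_single_wrong ** n_sources
print(p_all_wrong)  # roughly 3.5e-11
```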

Please note that the use of the GPT-4 language model can be expensive due to its token usage. By utilizing this project, you acknowledge that you are responsible for monitoring and managing your own token usage and the associated costs. It is highly recommended to check your OpenAI API usage regularly and set up any necessary limits or alerts to prevent unexpected charges.


Star History Chart

gpt-researcher's People

Contributors

aaaastark, arsaboo, artempt239, arun279, assafelovic, aurelian-shuttleworth, baskaryan, danhumassmed, dependabot[bot], devon-ye, dphiggs01, elishakay, eltociear, gkhngyk, gregdrizz, gschmutz, hwchase17, jacobsingh, jortegac, kpcofgs, mowkalim, proy9714, ray-ruisun, reasonmethis, rotemweiss57, saunakghosh10, scottmoney, sebaxzero, sockthedev, will-holley


gpt-researcher's Issues

Goes on forever?

I fed in my research topic as "AI for drug design", and it's been running for over 2 hours.

At some instances, the model seems to be scraping irrelevant websites. Moreover, the model also scrapes YouTube links and concludes that it has "no text to summarise". I suggest that the YouTube scraping feature be removed. This may be the reason why I find gpt-researcher extremely time-consuming.

May I know the average time this model takes to generate the report?

DevToolsActivePort file doesn't exist

Running on Windows. Getting this error:

An error occurred while processing the url https://en.wikipedia.org/wiki/John_Calvin: Message: unknown error: Chrome failed to start: was killed.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location C:\Program Files\Google\Chrome\Application\chrome.exe is no longer running, so ChromeDriver is assuming that Chrome has crashed.)

json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 34)

Here is my error details:

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/uvicorn/protocols/websockets/wsproto_impl.py", line 249, in run_asgi
    result = await self.app(self.scope, self.receive, self.send)
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/fastapi/applications.py", line 289, in __call__
    await super().__call__(scope, receive, send)
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/starlette/middleware/errors.py", line 149, in __call__
    await self.app(scope, receive, send)
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/starlette/routing.py", line 341, in handle
    await self.app(scope, receive, send)
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/starlette/routing.py", line 82, in app
    await func(session)
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/site-packages/fastapi/routing.py", line 324, in app
    await dependant.call(**values)
  File "/home/lhs/文档/typ/gpt-researcher/main.py", line 50, in websocket_endpoint
    await manager.start_streaming(task, report_type, agent, websocket)
  File "/home/lhs/文档/typ/gpt-researcher/agent/run.py", line 38, in start_streaming
    report, path = await run_agent(task, report_type, agent, websocket)
  File "/home/lhs/文档/typ/gpt-researcher/agent/run.py", line 50, in run_agent
    await assistant.conduct_research()
  File "/home/lhs/文档/typ/gpt-researcher/agent/research_agent.py", line 137, in conduct_research
    search_queries = await self.create_search_queries()
  File "/home/lhs/文档/typ/gpt-researcher/agent/research_agent.py", line 92, in create_search_queries
    return json.loads(result)
    return json.loads(result)
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/home/lhs/anaconda3/envs/gptresearcher/lib/python3.10/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 34)
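For context, json.loads raises "Extra data" when the string contains anything after the first complete JSON value, which commonly happens when an LLM wraps its JSON answer in extra prose. Below is a hedged sketch of a more tolerant parse using the standard library's raw_decode; this is not the project's actual fix, just one way to handle trailing text:

```python
import json

def parse_first_json(text: str):
    """Return the first JSON value in text, ignoring any trailing content."""
    decoder = json.JSONDecoder()
    # raw_decode parses one value and returns (value, end_index),
    # so trailing prose after the JSON no longer raises "Extra data".
    value, _end = decoder.raw_decode(text.strip())
    return value

print(parse_first_json('["query one", "query two"]\nHope this helps!'))
```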

Method not allowed

Hi, I'm able to start the research, but it immediately returns 405 Method Not Allowed.

ERROR: Exception in ASGI application

Due to the latest changes I now get the above error on my MacBook Pro M1 with all the correct dependencies installed. It was working a week ago for me with the FastAPI installation.

ERROR: Cannot find Chrome binary

This error kept repeating with every url.

"An error occurred while processing the url https://xxxxx/: Message: unknown error: cannot find Chrome binary"

Later:

streaming response...
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 254, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
etc. etc. etc.

Running in python virtual environment on MacOS 10.15 (Catalina/Intel), python 3.11.

----Versions installed:
Requirement already satisfied: asyncio==3.4.3 in ./lib/python3.11/site-packages (from -r requirements.txt (line 2)) (3.4.3)
Requirement already satisfied: beautifulsoup4==4.9.3 in ./lib/python3.11/site-packages (from -r requirements.txt (line 3)) (4.9.3)
Requirement already satisfied: colorama==0.4.6 in ./lib/python3.11/site-packages (from -r requirements.txt (line 4)) (0.4.6)
Requirement already satisfied: duckduckgo_search==3.0.2 in ./lib/python3.11/site-packages (from -r requirements.txt (line 5)) (3.0.2)
Requirement already satisfied: md2pdf==1.0.1 in ./lib/python3.11/site-packages (from -r requirements.txt (line 6)) (1.0.1)
Requirement already satisfied: openai~=0.27.0 in ./lib/python3.11/site-packages (from -r requirements.txt (line 7)) (0.27.8)
Requirement already satisfied: playwright==1.35.0 in ./lib/python3.11/site-packages (from -r requirements.txt (line 8)) (1.35.0)
Requirement already satisfied: python-dotenv~=0.21.0 in ./lib/python3.11/site-packages (from -r requirements.txt (line 9)) (0.21.1)
Requirement already satisfied: pyyaml==6.0 in ./lib/python3.11/site-packages (from -r requirements.txt (line 10)) (6.0)
Requirement already satisfied: selenium==4.10.0 in ./lib/python3.11/site-packages (from -r requirements.txt (line 11)) (4.10.0)
Requirement already satisfied: webdriver-manager==3.8.6 in ./lib/python3.11/site-packages (from -r requirements.txt (line 12)) (3.8.6)
Requirement already satisfied: flask in ./lib/python3.11/site-packages (from -r requirements.txt (line 13)) (2.3.2)
Requirement already satisfied: uvicorn in ./lib/python3.11/site-packages (from -r requirements.txt (line 14)) (0.22.0)
Requirement already satisfied: pydantic in ./lib/python3.11/site-packages (from -r requirements.txt (line 15)) (2.0.2)
Requirement already satisfied: fastapi in ./lib/python3.11/site-packages (from -r requirements.txt (line 16)) (0.100.0)
Requirement already satisfied: python-multipart in ./lib/python3.11/site-packages (from -r requirements.txt (line 17)) (0.0.6)
Requirement already satisfied: markdown in ./lib/python3.11/site-packages (from -r requirements.txt (line 18)) (3.4.3)
Requirement already satisfied: soupsieve>1.2 in ./lib/python3.11/site-packages (from beautifulsoup4==4.9.3->-r requirements.txt (line 3)) (2.4.1)
Requirement already satisfied: click>=8.1.3 in ./lib/python3.11/site-packages (from duckduckgo_search==3.0.2->-r requirements.txt (line 5)) (8.1.4)
Requirement already satisfied: requests>=2.30.0 in ./lib/python3.11/site-packages (from duckduckgo_search==3.0.2->-r requirements.txt (line 5)) (2.31.0)
Requirement already satisfied: markdown2 in ./lib/python3.11/site-packages (from md2pdf==1.0.1->-r requirements.txt (line 6)) (2.4.9)
Requirement already satisfied: docopt in ./lib/python3.11/site-packages (from md2pdf==1.0.1->-r requirements.txt (line 6)) (0.6.2)
Requirement already satisfied: WeasyPrint in ./lib/python3.11/site-packages (from md2pdf==1.0.1->-r requirements.txt (line 6)) (59.0)
Requirement already satisfied: greenlet==2.0.2 in ./lib/python3.11/site-packages (from playwright==1.35.0->-r requirements.txt (line 8)) (2.0.2)
Requirement already satisfied: pyee==9.0.4 in ./lib/python3.11/site-packages (from playwright==1.35.0->-r requirements.txt (line 8)) (9.0.4)
Requirement already satisfied: urllib3[socks]<3,>=1.26 in ./lib/python3.11/site-packages (from selenium==4.10.0->-r requirements.txt (line 11)) (2.0.3)
Requirement already satisfied: trio~=0.17 in ./lib/python3.11/site-packages (from selenium==4.10.0->-r requirements.txt (line 11)) (0.22.1)
Requirement already satisfied: trio-websocket~=0.9 in ./lib/python3.11/site-packages (from selenium==4.10.0->-r requirements.txt (line 11)) (0.10.3)
Requirement already satisfied: certifi>=2021.10.8 in ./lib/python3.11/site-packages (from selenium==4.10.0->-r requirements.txt (line 11)) (2023.5.7)
Requirement already satisfied: tqdm in ./lib/python3.11/site-packages (from webdriver-manager==3.8.6->-r requirements.txt (line 12)) (4.65.0)
Requirement already satisfied: packaging in ./lib/python3.11/site-packages (from webdriver-manager==3.8.6->-r requirements.txt (line 12)) (23.1)
Requirement already satisfied: typing-extensions in ./lib/python3.11/site-packages (from pyee==9.0.4->playwright==1.35.0->-r requirements.txt (line 8)) (4.7.1)
Requirement already satisfied: aiohttp in ./lib/python3.11/site-packages (from openai~=0.27.0->-r requirements.txt (line 7)) (3.8.4)
Requirement already satisfied: Werkzeug>=2.3.3 in ./lib/python3.11/site-packages (from flask->-r requirements.txt (line 13)) (2.3.6)
Requirement already satisfied: Jinja2>=3.1.2 in ./lib/python3.11/site-packages (from flask->-r requirements.txt (line 13)) (3.1.2)
Requirement already satisfied: itsdangerous>=2.1.2 in ./lib/python3.11/site-packages (from flask->-r requirements.txt (line 13)) (2.1.2)
Requirement already satisfied: blinker>=1.6.2 in ./lib/python3.11/site-packages (from flask->-r requirements.txt (line 13)) (1.6.2)
Requirement already satisfied: h11>=0.8 in ./lib/python3.11/site-packages (from uvicorn->-r requirements.txt (line 14)) (0.14.0)
Requirement already satisfied: annotated-types>=0.4.0 in ./lib/python3.11/site-packages (from pydantic->-r requirements.txt (line 15)) (0.5.0)
Requirement already satisfied: pydantic-core==2.1.2 in ./lib/python3.11/site-packages (from pydantic->-r requirements.txt (line 15)) (2.1.2)
Requirement already satisfied: starlette<0.28.0,>=0.27.0 in ./lib/python3.11/site-packages (from fastapi->-r requirements.txt (line 16)) (0.27.0)
Requirement already satisfied: MarkupSafe>=2.0 in ./lib/python3.11/site-packages (from Jinja2>=3.1.2->flask->-r requirements.txt (line 13)) (2.1.3)
Requirement already satisfied: charset-normalizer<4,>=2 in ./lib/python3.11/site-packages (from requests>=2.30.0->duckduckgo_search==3.0.2->-r requirements.txt (line 5)) (3.2.0)
Requirement already satisfied: idna<4,>=2.5 in ./lib/python3.11/site-packages (from requests>=2.30.0->duckduckgo_search==3.0.2->-r requirements.txt (line 5)) (3.4)
Requirement already satisfied: anyio<5,>=3.4.0 in ./lib/python3.11/site-packages (from starlette<0.28.0,>=0.27.0->fastapi->-r requirements.txt (line 16)) (3.7.1)
Requirement already satisfied: attrs>=20.1.0 in ./lib/python3.11/site-packages (from trio~=0.17->selenium==4.10.0->-r requirements.txt (line 11)) (23.1.0)
Requirement already satisfied: sortedcontainers in ./lib/python3.11/site-packages (from trio~=0.17->selenium==4.10.0->-r requirements.txt (line 11)) (2.4.0)
Requirement already satisfied: outcome in ./lib/python3.11/site-packages (from trio~=0.17->selenium==4.10.0->-r requirements.txt (line 11)) (1.2.0)
Requirement already satisfied: sniffio in ./lib/python3.11/site-packages (from trio~=0.17->selenium==4.10.0->-r requirements.txt (line 11)) (1.3.0)
Requirement already satisfied: exceptiongroup in ./lib/python3.11/site-packages (from trio-websocket~=0.9->selenium==4.10.0->-r requirements.txt (line 11)) (1.1.2)
Requirement already satisfied: wsproto>=0.14 in ./lib/python3.11/site-packages (from trio-websocket~=0.9->selenium==4.10.0->-r requirements.txt (line 11)) (1.2.0)
Requirement already satisfied: pysocks!=1.5.7,<2.0,>=1.5.6 in ./lib/python3.11/site-packages (from urllib3[socks]<3,>=1.26->selenium==4.10.0->-r requirements.txt (line 11)) (1.7.1)
Requirement already satisfied: multidict<7.0,>=4.5 in ./lib/python3.11/site-packages (from aiohttp->openai~=0.27.0->-r requirements.txt (line 7)) (6.0.4)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in ./lib/python3.11/site-packages (from aiohttp->openai~=0.27.0->-r requirements.txt (line 7)) (4.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in ./lib/python3.11/site-packages (from aiohttp->openai~=0.27.0->-r requirements.txt (line 7)) (1.9.2)
Requirement already satisfied: frozenlist>=1.1.1 in ./lib/python3.11/site-packages (from aiohttp->openai~=0.27.0->-r requirements.txt (line 7)) (1.3.3)
Requirement already satisfied: aiosignal>=1.1.2 in ./lib/python3.11/site-packages (from aiohttp->openai~=0.27.0->-r requirements.txt (line 7)) (1.3.1)
Requirement already satisfied: pydyf>=0.6.0 in ./lib/python3.11/site-packages (from WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (0.7.0)
Requirement already satisfied: cffi>=0.6 in ./lib/python3.11/site-packages (from WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (1.15.1)
Requirement already satisfied: html5lib>=1.1 in ./lib/python3.11/site-packages (from WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (1.1)
Requirement already satisfied: tinycss2>=1.0.0 in ./lib/python3.11/site-packages (from WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (1.2.1)
Requirement already satisfied: cssselect2>=0.1 in ./lib/python3.11/site-packages (from WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (0.7.0)
Requirement already satisfied: Pyphen>=0.9.1 in ./lib/python3.11/site-packages (from WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (0.14.0)
Requirement already satisfied: Pillow>=9.1.0 in ./lib/python3.11/site-packages (from WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (10.0.0)
Requirement already satisfied: fonttools[woff]>=4.0.0 in ./lib/python3.11/site-packages (from WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (4.40.0)
Requirement already satisfied: pycparser in ./lib/python3.11/site-packages (from cffi>=0.6->WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (2.21)
Requirement already satisfied: webencodings in ./lib/python3.11/site-packages (from cssselect2>=0.1->WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (0.5.1)
Requirement already satisfied: zopfli>=0.1.4 in ./lib/python3.11/site-packages (from fonttools[woff]>=4.0.0->WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (0.2.2)
Requirement already satisfied: brotli>=1.0.1 in ./lib/python3.11/site-packages (from fonttools[woff]>=4.0.0->WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (1.0.9)
Requirement already satisfied: six>=1.9 in ./lib/python3.11/site-packages (from html5lib>=1.1->WeasyPrint->md2pdf==1.0.1->-r requirements.txt (line 6)) (1.16.0)

ζ— ζ³•ε‚δΈŽ

discord 无法ζŽ₯ε—ι‚€θ―·οΌŒζœ‰ζ²‘ζœ‰ε…Άδ»–ηš„εΉ³ε°δΎ›δΊ€ζ΅

Exception in ASGI application

I successfully install the project, I get the web page but when launching a request I get the following error:

INFO: connection open
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 254, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)

ERROR: Exception in ASGI application

Research appears to run but then throws this error:
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/home/simonc/.local/lib/python3.8/site-packages/uvicorn/protocols/websockets/wsproto_impl.py", line 249, in run_asgi
    result = await self.app(self.scope, self.receive, self.send)
  File "/home/simonc/.local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/home/simonc/.local/lib/python3.8/site-packages/fastapi/applications.py", line 289, in __call__
    await super().__call__(scope, receive, send)
  File "/home/simonc/.local/lib/python3.8/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/simonc/.local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 149, in __call__
    await self.app(scope, receive, send)
  File "/home/simonc/.local/lib/python3.8/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/home/simonc/.local/lib/python3.8/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/home/simonc/.local/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/home/simonc/.local/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/home/simonc/.local/lib/python3.8/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/home/simonc/.local/lib/python3.8/site-packages/starlette/routing.py", line 341, in handle
    await self.app(scope, receive, send)
  File "/home/simonc/.local/lib/python3.8/site-packages/starlette/routing.py", line 82, in app
    await func(session)
  File "/home/simonc/.local/lib/python3.8/site-packages/fastapi/routing.py", line 324, in app
    await dependant.call(**values)
  File "/home/simonc/gpt-researcher/main.py", line 50, in websocket_endpoint
    await manager.start_streaming(task, report_type, agent, websocket)
  File "/home/simonc/gpt-researcher/agent/run.py", line 38, in start_streaming
    report, path = await run_agent(task, report_type, agent, websocket)
  File "/home/simonc/gpt-researcher/agent/run.py", line 50, in run_agent
    await assistant.conduct_research()
  File "/home/simonc/gpt-researcher/agent/research_agent.py", line 137, in conduct_research
    search_queries = await self.create_search_queries()
  File "/home/simonc/gpt-researcher/agent/research_agent.py", line 92, in create_search_queries
    return json.loads(result)
  File "/usr/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.8/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 42)

Web scraping not working on Windows

With Chrome. I also tried running chrome --headless.

I've no idea what's wrong.

Scraping url https://github.com/topics/frame-interpolation?l=python&o=desc&s=updated with question recent frame interpolation projects python github
An error occurred while processing the url https://github.com/topics/frame-interpolation: Message: unknown error: Chrome failed to start: was killed.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location C:\Program Files\Google\Chrome\Application\chrome.exe is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Stacktrace:
Backtrace:
GetHandleVerifier [0x0068A813+48355]
(No symbol) [0x0061C4B1]
(No symbol) [0x00525358]
(No symbol) [0x00543621]
(No symbol) [0x00540579]
(No symbol) [0x00570C55]
(No symbol) [0x0057093C]
(No symbol) [0x0056A536]
(No symbol) [0x005482DC]
(No symbol) [0x005493DD]
GetHandleVerifier [0x008EAABD+2539405]
GetHandleVerifier [0x0092A78F+2800735]
GetHandleVerifier [0x0092456C+2775612]
GetHandleVerifier [0x007151E0+616112]
(No symbol) [0x00625F8C]
(No symbol) [0x00622328]
(No symbol) [0x0062240B]
(No symbol) [0x00614FF7]
BaseThreadInitThunk [0x758000C9+25]
RtlGetAppContainerNamedObjectPath [0x770F7B4E+286]
RtlGetAppContainerNamedObjectPath [0x770F7B1E+238]

Module missing?

File "/opt/anaconda3/lib/python3.9/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 19, in
from websockets.datastructures import Headers
ModuleNotFoundError: No module named 'websockets.datastructures'

ERROR on launch: No module named 'duckduckgo_search'

After installing requirements in a fresh environment and running uvicorn main:app --reload I run into the error:

ModuleNotFoundError: No module named 'duckduckgo_search'

Full output:

gpt-researcher % uvicorn main:app --reload
INFO:     Will watch for changes in these directories: ['/Users/christophersettles/code/gpt-researcher']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [70521] using WatchFiles
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/Users/christophersettles/anaconda3/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/Users/christophersettles/anaconda3/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/christophersettles/anaconda3/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/Users/christophersettles/anaconda3/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/Users/christophersettles/anaconda3/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/Users/christophersettles/anaconda3/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/Users/christophersettles/anaconda3/lib/python3.10/site-packages/uvicorn/config.py", line 473, in load
    self.loaded_app = import_from_string(self.app)
  File "/Users/christophersettles/anaconda3/lib/python3.10/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/Users/christophersettles/anaconda3/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/Users/christophersettles/anaconda3/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/Users/christophersettles/code/gpt-researcher/main.py", line 8, in <module>
    from agent.run import WebSocketManager
  File "/Users/christophersettles/code/gpt-researcher/agent/run.py", line 7, in <module>
    from agent.research_agent import ResearchAgent
  File "/Users/christophersettles/code/gpt-researcher/agent/research_agent.py", line 6, in <module>
    from actions.web_search import web_search
  File "/Users/christophersettles/code/gpt-researcher/actions/web_search.py", line 3, in <module>
    from duckduckgo_search import DDGS
ModuleNotFoundError: No module named 'duckduckgo_search'
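When the install "succeeds" but uvicorn still raises ModuleNotFoundError, pip and uvicorn are often resolving different Python environments (a common hazard when conda and pyenv coexist, as in the tracebacks above). A small diagnostic sketch — not project code — that shows which interpreter is running and where, if anywhere, it finds the package:

```python
# Diagnostic sketch (not project code): print the active interpreter and
# where it resolves the missing package. If it reports "not found", run
# `python -m pip install duckduckgo_search` with this same interpreter
# instead of a bare `pip`, which may target another environment.
import importlib.util
import sys

def where_is(module_name: str) -> str:
    """Return the file a module resolves to, or a 'not found' message."""
    spec = importlib.util.find_spec(module_name)
    if spec and spec.origin:
        return spec.origin
    return f"{module_name!r} not found by {sys.executable}"

print(sys.executable)
print(where_is("duckduckgo_search"))
```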

Error: Incorrect API key provided

I am using Ubuntu 20.04.

Installation went smoothly and the webpage started as expected, but when I press the research button it shows an error saying the given API key is not correct.


I have changed the API key many times, but it's always the same.

model: `gpt-4` does not exist

Hi

Thx for the great work. I am getting the following error in the terminal when pressing the blue "Research" button:
openai.error.InvalidRequestError: The model: gpt-4 does not exist

My provided API key indeed does not have access to the gpt-4 model:
curl https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"

I am on a Plus plan in ChatGPT -- does that have any influence on which models I can access with the API key?
How can I solve the issue?

BR, Christoph

OSError: cannot load library 'pango-1.0-0'

Hi, thanks for creating this project, I'm very interested in trying it. I followed the instructions to clone the repo and install the Python packages, but when I try to run the application I get an OSError: cannot load library 'pango-1.0-0' error. I searched the web for answers and found one suggestion that, since I'm on a Mac, I should try brew install pango, but that didn't fix it. I also tried different Python versions; 3.10 and 3.11 failed when installing the packages, and I only got through the install with Python 3.8.5.

Any ideas on what would fix it on my M1 Mac? Thanks!

here are some relevant sections from the output:

macbook:gpt-researcher $ uvicorn main:app --reload
INFO: Will watch for changes in these directories: ['...']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [16032] using StatReload


WeasyPrint could not import some external libraries. Please carefully follow the installation steps before reporting an issue:
https://doc.courtbouillon.org/weasyprint/stable/first_steps.html#installation
https://doc.courtbouillon.org/weasyprint/stable/first_steps.html#troubleshooting


Process SpawnProcess-1:
Traceback (most recent call last):
  File "python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)

...

  File "/.pyenv/versions/3.8.5/lib/python3.8/site-packages/uvicorn/config.py", line 473, in load
    self.loaded_app = import_from_string(self.app)
  File "/.pyenv/versions/3.8.5/lib/python3.8/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/.pyenv/versions/3.8.5/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import

...

  File "/.pyenv/versions/3.8.5/lib/python3.8/site-packages/cffi/api.py", line 832, in _make_ffi_library
    backendlib = _load_backend_lib(backend, libname, flags)
  File "/.pyenv/versions/3.8.5/lib/python3.8/site-packages/cffi/api.py", line 827, in _load_backend_lib
    raise OSError(msg)
OSError: cannot load library 'pango-1.0-0': dlopen(pango-1.0-0, 0x0002): tried: 'pango-1.0-0' (no such file), '/System/Volumes/Preboot/Cryptexes/OSpango-1.0-0' (no such file), '/usr/lib/pango-1.0-0' (no such file, not in dyld cache), 'pango-1.0-0' (no such file), '/usr/local/lib/pango-1.0-0' (no such file), '/usr/lib/pango-1.0-0' (no such file, not in dyld cache). Additionally, ctypes.util.find_library() did not manage to locate a library called 'pango-1.0-0'

WeasyPrint could not import some external libraries.

Win11x64 Python 3.10 Conda

WeasyPrint could not import some external libraries. Please carefully follow the installation steps before reporting an issue:
https://doc.courtbouillon.org/weasyprint/stable/first_steps.html#installation
https://doc.courtbouillon.org/weasyprint/stable/first_steps.html#troubleshooting


Process SpawnProcess-1:
Traceback (most recent call last):
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\uvicorn\_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\uvicorn\server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\uvicorn\server.py", line 68, in serve
    config.load()
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\uvicorn\config.py", line 473, in load
    self.loaded_app = import_from_string(self.app)
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\uvicorn\importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\xxxx\Deep\gpt-researcher\main.py", line 8, in <module>
    from agent.run import WebSocketManager
  File "C:\Users\xxxx\Deep\gpt-researcher\agent\run.py", line 7, in <module>
    from agent.research_agent import ResearchAgent
  File "C:\Users\xxxx\Deep\gpt-researcher\agent\research_agent.py", line 7, in <module>
    from actions.web_scrape import async_browse
  File "C:\Users\xxxx\Deep\gpt-researcher\actions\web_scrape.py", line 23, in <module>
    import processing.text as summary
  File "C:\Users\xxxx\Deep\gpt-researcher\processing\text.py", line 11, in <module>
    from md2pdf.core import md2pdf
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\md2pdf\__init__.py", line 7, in <module>
    from .core import md2pdf  # noqa
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\md2pdf\core.py", line 5, in <module>
    from weasyprint import HTML, CSS
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\weasyprint\__init__.py", line 387, in <module>
    from .css import preprocess_stylesheet  # noqa isort:skip
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\weasyprint\css\__init__.py", line 25, in <module>
    from . import computed_values, counters, media_queries
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\weasyprint\css\computed_values.py", line 11, in <module>
    from ..text.ffi import ffi, pango, units_to_double
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\weasyprint\text\ffi.py", line 428, in <module>
    gobject = _dlopen(
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\weasyprint\text\ffi.py", line 417, in _dlopen
    return ffi.dlopen(names[0])  # pragma: no cover
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\cffi\api.py", line 150, in dlopen
    lib, function_cache = _make_ffi_library(self, name, flags)
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\cffi\api.py", line 832, in _make_ffi_library
    backendlib = _load_backend_lib(backend, libname, flags)
  File "C:\Users\xxxx\anaconda3\envs\gpt-researcher\lib\site-packages\cffi\api.py", line 827, in _load_backend_lib
    raise OSError(msg)
OSError: cannot load library 'gobject-2.0-0': error 0x7e. Additionally, ctypes.util.find_library() did not manage to locate a library called 'gobject-2.0-0'

The model: `gpt-4` does not exist

From a fresh install. It seems everyone will have access to gpt-4 after the end of the month. After clicking research I get:

Exception in ASGI application
Traceback (most recent call last):
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/uvicorn/protocols/websockets/wsproto_impl.py", line 249, in run_asgi
    result = await self.app(self.scope, self.receive, self.send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/fastapi/applications.py", line 289, in __call__
    await super().__call__(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 149, in __call__
    await self.app(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/routing.py", line 341, in handle
    await self.app(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/routing.py", line 82, in app
    await func(session)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/fastapi/routing.py", line 324, in app
    await dependant.call(**values)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/main.py", line 50, in websocket_endpoint
    await manager.start_streaming(task, report_type, agent, websocket)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/run.py", line 38, in start_streaming
    report, path = await run_agent(task, report_type, agent, websocket)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/run.py", line 50, in run_agent
    await assistant.conduct_research()
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/research_agent.py", line 133, in conduct_research
    search_queries = await self.create_search_queries()
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/research_agent.py", line 89, in create_search_queries
    result = await self.call_agent(prompts.generate_search_queries_prompt(self.question))
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/research_agent.py", line 76, in call_agent
    answer = create_chat_completion(
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/llm_utils.py", line 48, in create_chat_completion
    response = send_chat_completion_request(
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/llm_utils.py", line 69, in send_chat_completion_request
    result = openai.ChatCompletion.create(
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model: gpt-4 does not exist

Benchmark quality of research using elo ranking

Following the conversation on reddit.

Maybe you can try to set up something like an elo ranking, people could use 20 minutes to search using your project then 20 min using google and voting what software brought the best results.

This could be useful to validate this methodology is an improvement, but it might be interesting to see how other "searching systems" work, for example if duckduckgo avoidance of a "filter bubble" is beneficial.
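The standard Elo update the suggestion refers to is simple to implement; a minimal sketch (K-factor and ratings here are illustrative, not a proposed configuration):

```python
def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """One Elo update; score_a is 1.0 if A wins, 0.5 for a draw, 0.0 if A loses."""
    # Expected score of A under the logistic Elo model.
    expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b
```

Starting the two "searching systems" at equal ratings and treating each user vote as a game would yield comparable rankings after enough votes.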

openai.error.InvalidRequestError: The model: `gpt-4` does not exist

I thought this would work with GPT-3.5, and I also thought OpenAI had made the GPT-4 API available to everyone, but after several hours of tinkering I am unable to get it to work. I can see the webpage, but it keeps throwing one error after another (I fixed them all), and this error is the one I cannot get past.

Not working properly - {"detail":"Method Not Allowed"}

Hello, I'm getting an error:

{"detail":"Method Not Allowed"}

The output throws an HTTP 405 error. Sometimes it performs the request without an internet search; other times it just stops working.

This is after a successful install, with no errors during execution in either Python or Docker. Is there something I'm missing?

Thanks in advance.

PS. Working on macOS Ventura (Intel), Python 3.11.1 and the latest Docker.

Remote (non localhost) address support

If one hosts it on a public IP/address, it throws the error: WebSocket connection to 'ws://localhost:8000/ws' failed:
Changing the hardcoded localhost value in client/scripts.js to my host's IP worked, but obviously that change will be lost on the next update.
I started the app with uvicorn main:app --reload --host 0.0.0.0
Can we support remote addresses, please?

.env file for API key

Sorry if this is a very basic question, but where do I need to save the .env file with my API key? I have tried creating a .env file in the config folder as follows:
OPENAI_API_KEY={my API Key}
but this doesn't seem to work.
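The project reads OPENAI_API_KEY from the process environment, so the usual place for a .env file is the directory you launch uvicorn from (typically the repository root, next to main.py), not the config folder; exporting the variable in the shell also works. As a rough illustration of what a dotenv loader does — this is a sketch, not the project's actual loading code:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Parse KEY=VALUE lines from a dotenv-style file into os.environ.

    Rough illustration of what python-dotenv does: blank lines and
    '#' comments are skipped, and existing variables are not overwritten.
    """
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # a missing .env file is not an error

load_env_file()  # looks for .env in the current working directory
print(os.getenv("OPENAI_API_KEY") is not None)
```

This is why launching the server from a different directory than the one holding .env makes the key "disappear".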

Enhancing the WebPage for the GPT Researcher application.

Remove the inline styles and create a separate CSS file to keep the code more organized.
Add appropriate alt attributes to the <img> tags for accessibility.
Consider adding a <label> element for the research question input field to improve usability and accessibility.
Add form validation and error handling to provide user-friendly feedback when required fields are missing or invalid.

Error Windows 11

I think maybe it should use a venv, but I am not sure what is causing the error (I haven't diagnosed it properly yet).


Error: cannot load library 'gobject-2.0-0'

Is there a known fix for this error?

OSError: cannot load library 'gobject-2.0-0': dlopen(gobject-2.0-0, 0x0002): tried: 'gobject-2.0-0' (no such file), '/usr/lib/gobject-2.0-0' (no such file), '/Users/mac/Downloads/gpt-researcher-master/gobject-2.0-0' (no such file), '/usr/lib/gobject-2.0-0' (no such file). Additionally, ctypes.util.find_library() did not manage to locate a library called 'gobject-2.0-0'

[Errno 22] Invalid argument

Following your instructions in #40,
I updated to the latest code, but this error still exists:

Exception in ASGI application
Traceback (most recent call last):
  File "C:\Python311\Lib\site-packages\uvicorn\protocols\websockets\wsproto_impl.py", line 249, in run_asgi
    result = await self.app(self.scope, self.receive, self.send)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\fastapi\applications.py", line 289, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Python311\Lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Python311\Lib\site-packages\starlette\middleware\errors.py", line 149, in __call__
    await self.app(scope, receive, send)
  File "C:\Python311\Lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "C:\Python311\Lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "C:\Python311\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
    raise e
  File "C:\Python311\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "C:\Python311\Lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "C:\Python311\Lib\site-packages\starlette\routing.py", line 341, in handle
    await self.app(scope, receive, send)
  File "C:\Python311\Lib\site-packages\starlette\routing.py", line 82, in app
    await func(session)
  File "C:\Python311\Lib\site-packages\fastapi\routing.py", line 324, in app
    await dependant.call(**values)
  File "C:\test\gpt-researcher\main.py", line 50, in websocket_endpoint
    await manager.start_streaming(task, report_type, agent, websocket)
  File "C:\test\gpt-researcher\agent\run.py", line 38, in start_streaming
    report, path = await run_agent(task, report_type, agent, websocket)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\test\gpt-researcher\agent\run.py", line 50, in run_agent
    await assistant.conduct_research()
  File "C:\test\gpt-researcher\agent\research_agent.py", line 139, in conduct_research
    research_result = await self.run_search_summary(query)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\test\gpt-researcher\agent\research_agent.py", line 125, in run_search_summary
    write_to_file(f"./outputs/{self.directory_name}/research-{query}.txt", result)
  File "C:\test\gpt-researcher\processing\text.py", line 136, in write_to_file
    with open(filename, "w") as file:
         ^^^^^^^^^^^^^^^^^^^
OSError: [Errno 22] Invalid argument: './outputs/ai trend within 3months/research-".txt'
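The failing path ends in research-".txt: the search query contained a double quote, which is not a legal filename character on Windows. A hedged sketch of the kind of sanitization that avoids this (names here are illustrative, not the project's API):

```python
import re

# Characters Windows rejects in file names (plus control characters).
_FORBIDDEN = r'[<>:"/\\|?*\x00-\x1f]'

def sanitize_filename(name: str, max_len: int = 100) -> str:
    """Replace forbidden characters so a search query can safely name a file.

    Illustrative helper, not the project's code: the crash above happened
    because the query's double quote survived into the open() call.
    """
    cleaned = re.sub(_FORBIDDEN, "_", name).strip()
    return cleaned[:max_len] or "untitled"
```

Applying such a filter to {query} before building the output path would make the write succeed regardless of what the user typed.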

Weasyprint

For Windows 10 users, WeasyPrint is highly problematic. I eventually gave up on the install and ended up using Gregdrizz's md2pdf pull-request version to get it running.

"update: if you are having issues with weasyprint, please visit their website and follow the installation instructions: https://doc.courtbouillon.org/weasyprint/stable/first_steps.html" From the installation instructions there, "Only Windows 11 64-bit is supported."

Output Folder Generated Without Names

Hello,

I am currently experiencing an issue with the generation of output folders in the program. The folder creation appears to be based on the search term that is used. However, when the search term is in a language other than English (e.g., Korean), the output folder is created without a name.

This appears to occur due to the following line of code from research_agent.py:

self.directory_name = ''.join(c for c in question if c.isascii() and c not in string.punctuation)[:100]

This code seems to delete the folder name if it includes characters that are non-ASCII or punctuations. Consequently, when the search term is in Korean, the output folder is created without a name because all the characters are deleted by this line of code.
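A possible fix, sketched here under the assumption that non-ASCII directory names are acceptable on the target filesystem (this is a suggestion, not the project's code), is to drop only punctuation and fall back to a hash when nothing survives:

```python
import hashlib
import string

def make_directory_name(question: str, max_len: int = 100) -> str:
    """Strip punctuation but keep non-ASCII letters; hash as a last resort.

    Unlike the original isascii() filter quoted above, Korean or other
    non-Latin queries keep their text here.
    """
    cleaned = "".join(
        c for c in question if c not in string.punctuation
    )[:max_len].strip()
    if cleaned:
        return cleaned
    # Nothing printable survived: fall back to a stable short hash.
    return hashlib.md5(question.encode("utf-8")).hexdigest()[:12]
```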

I believe this is a bug, as folder names should be created regardless of the language of the search term.

Could you review this issue and consider a fix or an update to address it?

Thank you in advance for your attention to this matter.

Best regards,

need an option for more in depth output

Currently the research report seems limited to a 2-page PDF output. There should definitely be an option to increase the output length so that more information is provided, especially for the academic research agent.

Getting gpt-4 does not exist despite having access

When I run 'research' I get
openai.error.InvalidRequestError: The model: gpt-4 does not exist
I do have access to the GPT-4 API as far as I'm aware. I've used it in other projects and I'm using premium. I also replaced the GPT-4 variable in config.py > self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-3.5-turbo-16k")

I'm still getting the same message. I'm using docker, so perhaps it's just rebuilding the codebase from somewhere I haven't changed? I'm no developer.
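For what it's worth, the config line quoted above checks the environment before falling back to a hardcoded default, so exporting SMART_LLM_MODEL at container run time (e.g. with Docker's -e flag) should override the model without rebuilding the image. A minimal sketch of that lookup pattern:

```python
import os

def pick_model(default: str = "gpt-3.5-turbo-16k") -> str:
    # Mirrors the quoted config line: os.getenv("SMART_LLM_MODEL", default),
    # i.e. an exported SMART_LLM_MODEL wins over the hardcoded default.
    return os.getenv("SMART_LLM_MODEL", default)

print(pick_model())
```

If the env variable is not reaching the container, the Docker build will keep using whatever was baked in, which would explain the unchanged behavior after editing config.py.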

Rate limit

Hi, I'm using the 'gpt-3.5-turbo-16k' model. So far I've adopted all the existing fixes for the issues I've encountered, and thank you very much for your amazing support.

However I've encountered a new issue regarding the rate limit, here's the output of the error I've encountered:

ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\uvicorn\protocols\websockets\wsproto_impl.py", line 249, in run_asgi
    result = await self.app(self.scope, self.receive, self.send)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\fastapi\applications.py", line 289, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\middleware\errors.py", line 149, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
    raise e
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\routing.py", line 341, in handle
    await self.app(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\routing.py", line 82, in app
    await func(session)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\fastapi\routing.py", line 324, in app
    await dependant.call(**values)
  File "C:\Users\tiago\gpt-researcher\main.py", line 50, in websocket_endpoint
    await manager.start_streaming(task, report_type, agent, websocket)
  File "C:\Users\tiago\gpt-researcher\agent\run.py", line 38, in start_streaming
    report, path = await run_agent(task, report_type, agent, websocket)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tiago\gpt-researcher\agent\run.py", line 52, in run_agent
    report, path = await assistant.write_report(report_type, websocket)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tiago\gpt-researcher\agent\research_agent.py", line 169, in write_report
    path = await write_md_to_pdf(report_type, self.directory_name, await answer)
                                                                   ^^^^^^^^^^^^
  File "C:\Users\tiago\gpt-researcher\agent\llm_utils.py", line 85, in stream_response
    for chunk in openai.ChatCompletion.create(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: Rate limit reached for default-gpt-3.5-turbo-16k in organization org-JsGGDnWMEgbr9x3ZmlIZva3l on requests per min. Limit: 3 / min. Please try again in 20s. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method.
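The error itself says the account's limit is 3 requests/min and suggests retrying in 20s. A common workaround, sketched here as an illustrative helper (not project code), is to wrap the OpenAI call in exponential-backoff retries:

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=20.0, sleep=time.sleep):
    """Call `call()` until it succeeds, backing off exponentially.

    Illustrative helper, not project code: in practice you would catch
    openai.error.RateLimitError specifically rather than bare Exception.
    The 20s base delay mirrors the "try again in 20s" hint in the error.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # 20s, 40s, 80s, ... plus jitter to avoid synchronized retries
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))
```

Adding a payment method (as the message suggests) raises the limit and makes retries far less necessary.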

There is no such driver by url https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790

Recent modifications to the chromedriver hosting paths cause this issue. Check the related PR in webdriver_manager.

If anyone has the same issue and doesn't want to wait for the forthcoming fix, just uninstall the existing webdriver_manager and install the PR branch:

pip uninstall webdriver_manager
pip install git+https://github.com/david-engelmann/webdriver_manager@de-upgrade-chrome-version

Local Models

Why is everyone forgetting about local models? Many users can't even get a GPT-4 API key.
Please add support for local models.

ERR_EMPTY_RESPONSE

root@jk-prome:/home/gpt-researcher# uvicorn main:app --reload
INFO: Will watch for changes in these directories: ['/home/gpt-researcher']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [1706093] using StatReload
INFO: Started server process [1706095]
INFO: Waiting for application startup.
INFO: Application startup complete.

telnet to 127.0.0.1 on port 8000 connects, but there is no response when accessing pages on port 8000.

agent.run issue

Hi,

When trying to run the app, I get the following message in my command prompt:

line 8, in <module>
    from agent.run import WebSocketManager
ModuleNotFoundError: No module named 'agent.run'; 'agent' is not a package

Thanks!
