
FreeAskInternet++ API

This is a fork of the original FreeAskInternet containing only the backend, re-engineered to be more useful for building different apps (and translated into English).

Now with Ollama support!

What is FreeAskInternet

FreeAskInternet is a completely free, private, locally running search aggregator and LLM answer generator, with no GPU required. The user asks a question, the system performs a multi-engine search through SearXNG, and the combined search results are passed to the GPT-3.5 LLM, which generates an answer based on them. Everything runs locally; no GPU and no OpenAI or Google API keys are needed.
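To illustrate that flow (search first, then answer from the results), here is a minimal sketch that queries a SearXNG instance's JSON API and builds a prompt from the results. The host name searxng:8080, the search_context helper, and the result fields are assumptions for illustration, not the project's actual code; SearXNG's JSON output format also has to be enabled in its settings.

import requests

# Sketch only: assumes a SearXNG container reachable at http://searxng:8080
# with its JSON output format enabled in settings.yml.
def search_context(question: str, max_results: int = 5) -> str:
    resp = requests.get(
        "http://searxng:8080/search",
        params={"q": question, "format": "json"},
    )
    results = resp.json().get("results", [])[:max_results]
    # Combine title, URL and snippet of each hit into a context block for the LLM prompt.
    return "\n".join(
        f"[{i + 1}] {r.get('title', '')} ({r.get('url', '')}): {r.get('content', '')}"
        for i, r in enumerate(results)
    )

question = "Why does the moon create the tides"
prompt = f"Answer using these search results:\n{search_context(question)}\n\nQuestion: {question}"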

Deployment Guide

git clone https://github.com/jaypyles/FreeAskInternet-API.git
cd FreeAskInternet-API
make build up

This deployment relies on a Docker network called search, which can be created with docker network create search.
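For reference, a compose file attaches to that pre-created network by declaring it as external (a sketch of standard Compose syntax; the repo's own docker-compose.yml may already contain an equivalent block):

networks:
  search:
    external: true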

Example Usage

import requests

# Request body follows the OpenAI-style chat completions format
body = {
  "model": "gpt3.5",
  "messages": [{"role": "user", "content": "Why does the moon create the tides"}],
}

# "searchbackend" is the API container's name on the shared "search" network
url = "http://searchbackend:8000/v1/chat/completions"
response = requests.post(url, json=body)

This is a STREAMED response: the data is sent as it is generated and needs to be parsed before use.

Response:

The Moon creates tides through its gravitational pull on Earth's oceans, causing bulges on the side facing the Moon and the side opposite it, resulting in high tides.
The low points correspond to low tides [citation:1][citation:2]. These tides occur due to the gravitational forces between the Moon and Earth,
leading to the ocean's surface rising and falling regularly [citation:1]. While the Moon and Earth's gravitational interaction primarily drives tides, other factors,
such as the shape of Earth, its geography, and the depth of the ocean, also influence tidal patterns [citation:1]. Additionally, the timing of high and low tide events
varies by location due to factors like the continental shelf's slope, leading to lag times of hours or even close to a day [citation:1].
NOAA utilizes advanced equipment to monitor tides in over 3,000 locations in the U.S. [citation:1].

---
Citations:
1. https://oceanservice.noaa.gov/facts/moon-tide.html
2. https://science.nasa.gov/resource/tides/
3. https://scijinks.gov/tides/
4. https://www.timeanddate.com/astronomy/moon/tides.html
5. https://oceanservice.noaa.gov/education/tutorial_tides/tides06_variations.html

Example of how I parse the data being received:

import json

def parse_stream(response) -> str:
    """Collect the streamed completion chunks into a single answer string."""
    response_text = response.text
    data_chunks = response_text.split("\n")
    total_content = ""
    for chunk in data_chunks:
        if chunk:
            # Each streamed line is prefixed with "data: "; strip it before parsing
            clean_json = chunk.replace("data: ", "")
            try:
                if clean_json:
                    dict_data = json.loads(clean_json)
                    token = dict_data["choices"][0]["delta"].get("content", "")
                    if token:
                        total_content += token
            except json.JSONDecodeError as e:
                print(f"Failed to decode JSON: {e} - Chunk: {clean_json}")
    return total_content
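For example, combined with the response object from the usage section above:

answer = parse_stream(response)
print(answer)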

Ollama Guide

In the docker-compose file, provide the environment variable OLLAMA_HOST:

backend:
  image: jpyles0524/freeaskinternetapi:latest
  build:
    context: .
  container_name: searchbackend
  environment:
    - OLLAMA_HOST=http://ollama:11434
  depends_on:
    - llm-freegpt35
  restart: on-failure
  networks:
    - search
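If Ollama itself runs as a container, a companion service on the same search network could look like the sketch below. This service is not part of the repo's compose file; the image, volume path, and container name are assumptions (11434 is Ollama's default port, matching the OLLAMA_HOST value above).

ollama:
  image: ollama/ollama:latest
  container_name: ollama
  volumes:
    - ./ollama:/root/.ollama
  networks:
    - search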

In the request you send to the API, change the model from gpt3.5 to ollama, and provide the Ollama model name:

body = {
  "model": "ollama",
  "messages": [{"role": "user", "content": "Why does the moon create the tides"}],
  "ollama_model": "llama2"
}

Credits

This project is a fork of nashsu/FreeAskInternet.

License

Apache-2.0 license


freeaskinternet-api's Issues

Completions endpoint doesn't show the content, it does show the links though

Completions endpoint doesn't show the content; it does show the links though.
Do you see any issue? Thank you.
What tests/changes do you suggest? Thanks.

import requests

url = "http://localhost:8000/v1/chat/completions"

body = {
    "model": "gpt3.5",
    "messages": [{"role": "user", "content": "Why does the moon create the tides"}]
}

response = requests.post(url, json=body, stream=True)

if response.status_code == 200:
    try:
        for line in response.iter_lines():
            if line:
                decoded_line = line.decode('utf-8')
                print(decoded_line)  # or process your line here
    except Exception as e:
        print(f"An error occurred: {e}")
else:
    print(f"Failed to get response, status code: {response.status_code}")



data: {"model":"gpt3.5","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}
data: {"model":"gpt3.5","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":"\n\n"},"finish_reason":null}]}
data: {"model":"gpt3.5","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":"---\n"},"finish_reason":null}]}
data: {"model":"gpt3.5","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":"Citations:\n"},"finish_reason":null}]}
data: {"model":"gpt3.5","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":"*[1. https://www.timeanddate.com/astronomy/moon/tides.html](https://www.timeanddate.com/astronomy/moon/tides.html)*"},"finish_reason":null}]}
data: {"model":"gpt3.5","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":"\n"},"finish_reason":null}]}
data: {"model":"gpt3.5","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":"*[2. [https://scijinks.gov/tides/](htt](https://scijinks.gov/tides/](https://scijinks.gov/tides/)*)

docker ps

CONTAINER ID   IMAGE                                  COMMAND                  CREATED         STATUS              PORTS                    NAMES
a6134c8abf07   jpyles0524/freeaskinternetapi:latest   "pdm run python -m u…"   6 minutes ago   Up 45 seconds       0.0.0.0:8000->8000/tcp   searchbackend
edd2e63a5a76   searxng/searxng:latest                 "/sbin/tini -- /usr/…"   6 minutes ago   Up 45 seconds       8080/tcp                 freeaskinternet-api-searxng-1
e33c6697f75d   missuo/freegpt35:latest                "docker-entrypoint.s…"   6 minutes ago   Up 45 seconds       3040/tcp                 llm-freegpt35
8506bd3ed524   moby/buildkit:buildx-stable-1          "buildkitd"              12 hours ago    Up About a minute                            buildx_buildkit_mybuilder0

ollama config: Does it still require the port settings?

services:
  backend:
    image: jpyles0524/freeaskinternetapi:latest
    build:
      context: .
    container_name: searchbackend
    environment:
      - OLLAMA_HOST=http://ollama:11434  # << should this be replaced with localhost?
    ports:
      - 8000:8000
    depends_on:
      - llm-freegpt35
    restart: on-failure
    networks:
      - search

With this, I see an error:

curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ollama",
    "messages": [{"role": "user", "content": "Why does the moon create the tides"}],
    "ollama_model": "llama3"
  }'

data: {"model":"ollama","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}

curl: (18) transfer closed with outstanding read data remaining
