
tomdyson / microllama


The smallest possible LLM API

License: MIT License

fastapi gpt langchain llm openai prompts python

microllama's Introduction


MicroLlama

The smallest possible LLM API. Build a question and answer interface to your own content in a few minutes. Uses OpenAI embeddings, gpt-3.5 and Faiss, via Langchain.

Usage

  1. Combine your source documents into a single JSON file called source.json. It should look like this:
[
    {
        "source": "Reference to the source of your content. Typically a title.",
        "url": "URL for your source. This key is optional.",
        "content": "Your content as a single string. If there's a title or summary, put these first, separated by new lines."
    }, 
    ...
]

See example.source.json for an example.
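If your content lives in separate files, a short script can assemble them into this format. A minimal sketch — the `build_source_json` helper and the `.txt`-files-in-a-folder layout are assumptions for illustration, not part of MicroLlama:

```python
# Hypothetical helper (not part of MicroLlama): combine a folder of
# .txt files into the source.json format shown above.
import json
from pathlib import Path

def build_source_json(docs_dir, out_path="source.json"):
    entries = []
    for path in sorted(Path(docs_dir).glob("*.txt")):
        entries.append({
            "source": path.stem,          # filename used as the title
            "content": path.read_text(),  # put a title/summary first if you have one
        })
    Path(out_path).write_text(json.dumps(entries, indent=4))
    return entries
```

The optional "url" key can be added per entry where you have one.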

  2. Install MicroLlama into a virtual environment:
pip install microllama
  3. Get an OpenAI API key and add it to the environment, e.g. export OPENAI_API_KEY=sk-etc. Note that indexing and querying require OpenAI credits, which aren't free.

  4. Run your server with microllama. If a vector search index doesn't exist, it'll be created from your source.json and stored.

  5. Query your documents at /api/ask?your question.

  6. Microllama includes an optional web front-end, generated with microllama make-front-end. This command creates a single index.html file which you can edit. It's served at /.
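Since the question is passed in the URL's query string, it should be URL-encoded. A small sketch of building the request URL — the `ask_url` helper is illustrative, not part of MicroLlama, and the host/port assume the defaults listed under Configuration:

```python
# Hypothetical helper: build the /api/ask URL for a question.
# Host and port assume MicroLlama's defaults (serving on port 8080).
from urllib.parse import quote

def ask_url(question, host="localhost", port=8080):
    return f"http://{host}:{port}/api/ask?{quote(question)}"
```

You can then fetch the resulting URL with curl, urllib, or any HTTP client.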

Configuration

Microllama is configured through environment variables, with the following defaults:

  • OPENAI_API_KEY: required
  • FAISS_INDEX_PATH: "faiss_index"
  • SOURCE_JSON: "source.json"
  • MAX_RELATED_DOCUMENTS: "5"
  • EXTRA_CONTEXT: "Answer in no more than three sentences. If the answer is not included in the context, say 'Sorry, there is no answer for this in my sources.'."
  • UVICORN_HOST: "0.0.0.0"
  • UVICORN_PORT: "8080"
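For example, to override a few of these before starting the server (the EXTRA_CONTEXT value below is just an illustration):

```shell
export OPENAI_API_KEY=sk-etc           # required
export MAX_RELATED_DOCUMENTS=3         # retrieve fewer context documents
export EXTRA_CONTEXT="Answer in one sentence."
microllama
```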

Deploying your API

Create a Dockerfile with microllama make-dockerfile, then deploy it with one of the options below.

On Fly.io

Sign up for a Fly.io account and install flyctl. Then:

fly launch # answer no to Postgres, Redis and deploying now 
fly secrets set OPENAI_API_KEY=sk-etc 
fly deploy

On Google Cloud Run

gcloud run deploy --source . --set-env-vars="OPENAI_API_KEY=sk-etc"

For Cloud Run and other serverless platforms you should generate the FAISS index at container build time, to reduce startup time. See the two commented lines in Dockerfile.

You can also generate these commands with microllama deploy.

Based on

TODO

  • Use splitting which generates more meaningful fragments, e.g. text_splitter = SpacyTextSplitter(chunk_size=700, chunk_overlap=200, separator=" ")
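SpacyTextSplitter splits on sentence boundaries via spaCy; the chunk_size/chunk_overlap mechanics it shares with character-based splitting can be sketched in plain Python (illustrative only — no langchain or spaCy dependency, and no sentence awareness):

```python
# Illustrative sketch of chunk_size/chunk_overlap splitting: each chunk
# starts (chunk_size - chunk_overlap) characters after the previous one,
# so consecutive chunks share chunk_overlap characters of context.
def split_with_overlap(text, chunk_size=700, chunk_overlap=200):
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]
```

The overlap keeps a sentence that straddles a chunk boundary fully inside at least one chunk, which is why larger overlaps tend to help similarity search at the cost of a bigger index.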

microllama's People

Contributors

tomdyson


microllama's Issues

Evaluate similarity search performance with NLP-based splitting

e.g.

pip install spacy
python -m spacy download en_core_web_sm

and in Dockerfile:

RUN pip install spacy
RUN python -m spacy download en_core_web_sm

In microllama.py:

from langchain.text_splitter import SpacyTextSplitter

and

replace splitter = CharacterTextSplitter(... with splitter = SpacyTextSplitter(...

Index creation takes roughly twice as long.
