llama.clj

Run LLMs locally. A Clojure wrapper for llama.cpp.

Quick Start

If you're just looking for a model to try things out, try the 3.6GB Llama 2 7B chat model from TheBloke. Make sure to check the link for important info like the license and use policy.

mkdir -p models
# Download 3.6GB model to models/ directory
(cd models && curl -L -O 'https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_0.bin')
# mvn-llama alias pulls precompiled llama.cpp libs from maven
clojure -M:mvn-llama -m com.phronemophobic.llama "models/llama-2-7b-chat.ggmlv3.q4_0.bin" "what is 2+2?"
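
The model can also be driven from the REPL. A minimal sketch, assuming the native deps are on the classpath (e.g. via the mvn-llama alias); create-context and generate-string come from the API reference docs, and the :n-ctx option is shown for illustration:

(require '[com.phronemophobic.llama :as llama])

;; load the downloaded model into a llama.cpp context
(def ctx (llama/create-context "models/llama-2-7b-chat.ggmlv3.q4_0.bin"
                               {:n-ctx 2048}))

;; generate a complete response as a string
(llama/generate-string ctx "what is 2+2?")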

Documentation

Getting Started
Intro to Running LLMs Locally
API Reference Docs

Dependency

For llama.clj with required native dependencies:

com.phronemophobic/llama-clj-combined {:mvn/version "0.8-alpha1"}

For llama.clj only (see below for various alternatives for specifying native dependencies):

com.phronemophobic/llama-clj {:mvn/version "0.8-alpha1"}
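
As a sketch, a minimal deps.edn pulling in the combined artifact would look like:

;; deps.edn
{:deps {com.phronemophobic/llama-clj-combined {:mvn/version "0.8-alpha1"}}}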

Native Dependency

llama.clj relies on the excellent llama.cpp library.

The llama.cpp shared library can either be compiled locally or included as a standalone maven dependency.

Precompiled native deps on clojars

The easiest method is to include the corresponding native dependency for your platform (including multiple is fine, but will increase the size of your dependencies). See the mvn-llama alias for an example.

;; gguf dependencies
com.phronemophobic.cljonda/llama-cpp-gguf-linux-x86-64 {:mvn/version "c3f197912f1ce858ac114d70c40db512de02e2e0"}
com.phronemophobic.cljonda/llama-cpp-gguf-darwin-aarch64 {:mvn/version "c3f197912f1ce858ac114d70c40db512de02e2e0"}
com.phronemophobic.cljonda/llama-cpp-gguf-darwin-x86-64 {:mvn/version "c3f197912f1ce858ac114d70c40db512de02e2e0"}

;; ggml dependencies
com.phronemophobic.cljonda/llama-cpp-darwin-aarch64 {:mvn/version "6e88a462d7d2d281e33f35c3c41df785ef633bc1"}
com.phronemophobic.cljonda/llama-cpp-darwin-x86-64 {:mvn/version "6e88a462d7d2d281e33f35c3c41df785ef633bc1"}
com.phronemophobic.cljonda/llama-cpp-linux-x86-64 {:mvn/version "6e88a462d7d2d281e33f35c3c41df785ef633bc1"}
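
For illustration, an alias in the style of mvn-llama could bundle these (a sketch; the repo's actual alias may differ):

;; in aliases (sketch)
:mvn-llama {:extra-deps
            {com.phronemophobic.cljonda/llama-cpp-gguf-linux-x86-64
             {:mvn/version "c3f197912f1ce858ac114d70c40db512de02e2e0"}
             com.phronemophobic.cljonda/llama-cpp-gguf-darwin-aarch64
             {:mvn/version "c3f197912f1ce858ac114d70c40db512de02e2e0"}
             com.phronemophobic.cljonda/llama-cpp-gguf-darwin-x86-64
             {:mvn/version "c3f197912f1ce858ac114d70c40db512de02e2e0"}}}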

Locally compiled

Clone https://github.com/ggerganov/llama.cpp and follow the instructions for building. Make sure to build with shared libraries enabled (-DBUILD_SHARED_LIBS=ON).

Note: The llama.cpp FFI bindings are based on git commit 4329d1acb01c353803a54733b8eef9d93d0b84b2 for ggml models and 40e07a60f9ce06e79f3ccd4c903eba300fb31b5e for gguf models. Future versions of llama.cpp might not be compatible if breaking changes are made. TODO: include instructions for updating the FFI bindings.

Note: Dual wielding ggml and gguf llama.cpp versions is possible, but not currently supported for locally compiled builds. Please file an issue if you need this.

For example:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 4329d1acb01c353803a54733b8eef9d93d0b84b2
mkdir build
cd build
cmake -DBUILD_SHARED_LIBS=ON ..
cmake --build . --config Release

Next, include an alias that includes the path to the directory where the shared library is located:

;; in aliases
;; add jvm opt for local llama build.
:local-llama {:jvm-opts ["-Djna.library.path=/path/to/llama.cpp/build/"]}
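
With that alias in place, the quick start command can be run against the local build:

clojure -M:local-llama -m com.phronemophobic.llama "models/llama-2-7b-chat.ggmlv3.q4_0.bin" "what is 2+2?"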

Obtaining models

For more complete information about the models that llama.clj can work with, refer to the llama.cpp readme.

Another good resource for models is TheBloke on huggingface.

CLI Usage

clojure -M -m com.phronemophobic.llama <path-to-model> <prompt>

Example:

mkdir -p models
# Download 3.6GB model to models/ directory
(cd models && curl -L -O 'https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_0.bin')
clojure -M:mvn-llama -m com.phronemophobic.llama "models/llama-2-7b-chat.ggmlv3.q4_0.bin" "what is 2+2?"

cuBLAS support

For GPU support on Linux, CUDA must be installed. Instructions for CUDA installation can be found in NVIDIA's documentation.

Make sure to restart and follow the post-installation instructions so that the CUDA development tools like nvcc are available on the path.
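
A quick way to verify the toolchain is on the path:

# should print the installed CUDA compiler version
nvcc --version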

Currently, pre-compiled binaries of llama.cpp with cuBLAS support are not available, so the llama.cpp native dependencies must be compiled locally with the -DLLAMA_CUBLAS=ON flag. Something like:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 4329d1acb01c353803a54733b8eef9d93d0b84b2
mkdir build
cd build
cmake -DBUILD_SHARED_LIBS=ON -DLLAMA_CUBLAS=ON ..
cmake --build . --config Release
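
As with the plain local build above, point jna.library.path at the resulting build directory via an alias (the :cuda-llama name here is illustrative):

;; in aliases (sketch)
:cuda-llama {:jvm-opts ["-Djna.library.path=/path/to/llama.cpp/build/"]}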

More cuBLAS Resources

"Roadmap"

  • Pure Clojure implementation for mirostatv2 and other useful samplers.
  • Provide reasonable default implementations for generating responses larger than the context size.
  • Update llama.cpp to support gguf format
  • More docs!
    • Reference docs
    • Intro Guide to LLMs.

License

The MIT License (MIT)

Copyright © 2023 Adrian Smith

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
