
Comments (6)

martindevans commented on June 26, 2024

Are you sure you need KV cache access? If you're just wanting to pre-process a prompt and re-use it then it sounds to me more like you want to save/load states. The high level executors (Instruct/Interact Executor) expose methods to save a file which can be used to load that conversation back up later, I believe that should contain the saved KV cache for that sequence.
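
Very roughly, that save/restore flow could look something like this (a sketch only; exact method names and exactly which state each call persists can differ between LLamaSharp versions, so treat the details as assumptions):

```csharp
// Sketch only: persisting and restoring a conversation with the high-level executors.
// The model path and file names are placeholders; whether the KV cache lives in the
// executor file or the context file is an assumption worth double-checking.
using System;
using LLama;
using LLama.Common;

var parameters = new ModelParams("model.gguf") { ContextSize = 4096 };
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

// Run the conversation as normal.
await foreach (var token in executor.InferAsync("You are a helpful assistant.\nUser: hi",
                                                new InferenceParams { MaxTokens = 64 }))
    Console.Write(token);

// Snapshot the executor's bookkeeping and the context (which owns the KV cache).
executor.SaveState("conversation.exec");
context.SaveState("conversation.ctx");

// Later, possibly in a new process: restore both and carry on from the same point.
executor.LoadState("conversation.exec");
context.LoadState("conversation.ctx");
```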

There are various ways to access the KV cache exposed in a few different places.

The "raw" low level API is in NativeApi starting at line 285:

  • llama_get_kv_cache_token_count
  • llama_get_kv_cache_used_cells
  • llama_kv_cache_clear
  • llama_kv_cache_seq_rm
  • llama_kv_cache_seq_cp
  • llama_kv_cache_seq_keep
  • llama_kv_cache_seq_add
  • llama_kv_cache_seq_div
  • llama_kv_cache_seq_pos_max

The SafeLLamaContextHandle exposes wrappers around these starting at line 566. You should prefer these wrappers over the raw APIs; they're intended to expose all of the power of the lower level with extra safety where possible (e.g. a pointer and a length parameter would be replaced with a Span in these wrappers).
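
To make that concrete, trimming and copying part of a sequence through the raw bindings looks roughly like this (the function names are the ones listed above; the parameter types and conversions are my assumption of how the bindings are declared, so check NativeApi/SafeLLamaContextHandle for the real signatures):

```csharp
// Rough sketch: keep the first 128 cached tokens of sequence 0 (e.g. a shared
// prompt prefix), drop the rest, and copy the prefix into sequence 1.
// The LLamaSeqId/LLamaPos types and the int conversions below are assumptions.
using LLama.Native;

void ReusePrefix(SafeLLamaContextHandle ctx)
{
    var src = (LLamaSeqId)0;
    var dst = (LLamaSeqId)1;

    // Remove cells for sequence 0 from position 128 to the end (p1 < 0 means "to the end").
    NativeApi.llama_kv_cache_seq_rm(ctx, src, 128, -1);

    // Copy positions [0, 128) of sequence 0 into sequence 1 so a second
    // conversation can start from the same evaluated prefix.
    NativeApi.llama_kv_cache_seq_cp(ctx, src, dst, 0, 128);
}
```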

If you're using the BatchedExecutor (which is an in-development "low level" executor, more difficult to use than the other executors but more powerful) then each Conversation object exposes a KV accessor, which can be used to manipulate the KV cache for that sequence. You can see that in use here.


abhiaagarwal commented on June 26, 2024

Yep, I've thought about using those APIs, but I believe my use case is a bit more specific. The prompts themselves aren't static, but parts of the prompt are. For example, with RAG, the prompt might be:

You are a helpful assistant. Answer the following question using the following pieces of information:

# Context

{c1}

{c2}

{c3}

{c1}, {c2}, {c3}, etc. aren't static but I'm confident the rest of the prompt is. I've benchmarked and the initial prompt evaluation is a big fixed cost.

I've also thought about using the APIs exposed in SafeLLamaContextHandle, but based on the documentation in the llama.cpp header file, those are only for debugging and only provide a view on the underlying cache.

I haven't tested this yet, but I'm not sure the SaveState method is portable? In addition, I think it probably includes stuff I'm not interested in, and I'd like to minimize the size of the file. In general, I'm interested in an equivalent of llama.cpp's --prompt-cache CLI argument.


abhiaagarwal commented on June 26, 2024

Actually, I dug a little bit through the llama.cpp code base, and it seems that all the --prompt-cache option does is call llama_state_save_file, which is a function you've already exposed. So that's good; I was mistaken. That said, it would be nice to have a higher-level API for manipulating the KV cache outside of SafeLLamaContextHandle.


abhiaagarwal commented on June 26, 2024

For an overall API design, llama-cpp-python exposes LLamaState and LLamaCache constructs: the former maps onto the llama.cpp internals, while the latter is a high-level construct with no direct analogue in llama.cpp. Interestingly, it seems LLamaCache and LlamaState are actually present in the docs here, but based on a code search, they don't actually exist?


martindevans commented on June 26, 2024

> {c1}, {c2}, {c3}, etc. aren't static but I'm confident the rest of the prompt is. I've benchmarked and the initial prompt evaluation is a big fixed cost.

Yep so my suggestion was to evaluate everything before {c1}, save that, and then you can resume from there later on.
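
Roughly, the flow would be (sketch only, with the same caveats as above about exact method names, and bearing in mind that the saved files are tied to the model and version that produced them):

```csharp
// Sketch: pay the prompt-evaluation cost for the fixed part of the prompt once,
// snapshot it, then restore that snapshot for every query and only evaluate the
// retrieved chunks + question. MaxTokens = 1 is just a blunt way to force the
// prefix through evaluation here; a real implementation would want to decode
// without sampling.
using System;
using LLama;
using LLama.Common;

const string FixedPrefix =
    "You are a helpful assistant. Answer the following question using the following pieces of information:\n\n# Context\n\n";

var parameters = new ModelParams("model.gguf");
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

// One-off: evaluate the static prefix and snapshot the resulting state.
await foreach (var _ in executor.InferAsync(FixedPrefix, new InferenceParams { MaxTokens = 1 })) { }
context.SaveState("prefix.ctx");
executor.SaveState("prefix.exec");

// Per query: restore the snapshot, so only {c1}/{c2}/{c3} and the question get evaluated.
context.LoadState("prefix.ctx");
executor.LoadState("prefix.exec");

var c1 = "retrieved chunk 1"; var c2 = "retrieved chunk 2"; var c3 = "retrieved chunk 3";
await foreach (var token in executor.InferAsync($"{c1}\n\n{c2}\n\n{c3}\n\nQuestion: ...?",
                                                new InferenceParams { MaxTokens = 256 }))
    Console.Write(token);
```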

> but based on the documentation in the llama.cpp header file, those are only for debugging and only provide a view on the underlying cache.

Some of the functions there are specifically for debugging only, but not all of them (the header comments say which). The debugging ones are exposed in a higher level wrapper through LLamaKvCacheView.

e.g. llama_get_kv_cache_token_count is for debugging but llama_kv_cache_seq_rm is definitely not!

> For an overall API design, Llama-cpp-python exposes a LLamaState and LLamaCache

I can't find any docs on LLamaCache, do you have a link to any? From a look at the implementation here, it looks like it automatically loads and saves states (using llama_state_save_file presumably) so you can resume a sequence later using the same cache?


abhiaagarwal commented on June 26, 2024

Yeah, I just read through the code; it's not really documented. Here's where it's actually used though: https://github.com/abetlen/llama-cpp-python/blob/165b4dc6c188f8fda2fc616154e111f710484eba/llama_cpp/llama.py#L1073C1-L1089C1. It seems LLamaCache is basically just a thin wrapper over LlamaState, where it handles continuously exporting the state?
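
If LLamaSharp grew something similar, it could probably be quite thin, e.g. a helper keyed on the prompt prefix that just drives the existing save/load calls. Purely hypothetical sketch, nothing like this class exists today:

```csharp
// Hypothetical LlamaCache-style helper for LLamaSharp, keyed on the prompt prefix.
// Only LLamaContext.SaveState/LoadState are real API; everything else (class name,
// keying scheme) is made up for illustration. A real version would want a stable
// hash rather than string.GetHashCode, which differs across processes.
using System.IO;
using LLama;

public sealed class PromptStateCache
{
    private readonly string _directory;

    public PromptStateCache(string directory)
    {
        _directory = directory;
        Directory.CreateDirectory(directory);
    }

    private string PathFor(string promptPrefix) =>
        Path.Combine(_directory, $"{promptPrefix.GetHashCode():x8}.state");

    // Call after the shared prefix has been evaluated into the context.
    public void Store(string promptPrefix, LLamaContext context) =>
        context.SaveState(PathFor(promptPrefix));

    // Restore the saved state for this prefix; returns false on a cache miss.
    public bool TryRestore(string promptPrefix, LLamaContext context)
    {
        var path = PathFor(promptPrefix);
        if (!File.Exists(path))
            return false;

        context.LoadState(path);
        return true;
    }
}
```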

