Comments (6)
Are you sure you need KV cache access? If you just want to pre-process a prompt and re-use it, then it sounds to me more like you want to save/load states. The high-level executors (Instruct/Interactive Executor) expose methods to save a file which can be used to load that conversation back up later; I believe that should contain the saved KV cache for that sequence.
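For reference, a minimal sketch of that save/load flow, assuming the current LLamaSharp surface (ModelParams, LLamaWeights.LoadFromFile, InteractiveExecutor, and SaveState/LoadState on both the executor and the context); the exact names and signatures may differ between versions:

```csharp
using LLama;
using LLama.Common;

// Load a model and build a high-level executor as usual.
var parameters = new ModelParams("model.gguf") { ContextSize = 4096 };
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

// ... run the conversation up to the point you want to checkpoint ...

// Persist the executor bookkeeping and the context state (which should include the KV cache).
executor.SaveState("conversation.executor.json");
context.SaveState("conversation.context.bin");

// Later (potentially in a new process, after recreating the context and executor):
context.LoadState("conversation.context.bin");
executor.LoadState("conversation.executor.json");
// The conversation can now continue without re-evaluating the earlier prompt.
```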
There are various ways to access the KV cache, exposed in a few different places.
The "raw" low-level API is in NativeApi, starting at line 285:
- llama_get_kv_cache_token_count
- llama_get_kv_cache_used_cells
- llama_kv_cache_clear
- llama_kv_cache_seq_rm
- llama_kv_cache_seq_cp
- llama_kv_cache_seq_keep
- llama_kv_cache_seq_add
- llama_kv_cache_seq_div
- llama_kv_cache_seq_pos_max
The SafeLLamaContextHandle exposes wrappers around these starting at line 566. You should prefer these wrappers over the raw APIs; they're intended to expose all of the power of the lower level, but with extra safety where possible (e.g. a pointer and a length parameter would be replaced with a Span in these wrappers).
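As a rough sketch of what that wrapper layer looks like in use; the method names (KvCacheRemove, KvCacheSequenceKeep) and the LLamaSeqId/LLamaPos conversions are my reading of SafeLLamaContextHandle and may not match the current signatures exactly:

```csharp
using LLama;
using LLama.Native;

// Assumes an existing LLamaContext; NativeHandle is its SafeLLamaContextHandle.
static void TrimSequence(LLamaContext context)
{
    SafeLLamaContextHandle handle = context.NativeHandle;

    // Wrapper over llama_kv_cache_seq_rm: drop sequence 0 from position 100 onwards
    // (-1 is llama.cpp's "to the end of the sequence" sentinel).
    handle.KvCacheRemove((LLamaSeqId)0, (LLamaPos)100, (LLamaPos)(-1));

    // Wrapper over llama_kv_cache_seq_keep: evict every sequence except sequence 0.
    handle.KvCacheSequenceKeep((LLamaSeqId)0);
}
```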
If you're using the BatchedExecutor (which is an in-development "low level" executor, more difficult to use than the other executors but more powerful) then each Conversation object exposes a KV accessor, which can be used to manipulate the KV cache for that sequence. You can see that in use here.
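Very roughly, that route looks like the sketch below. The Modify/KvAccessor shape is from memory of the BatchedExecutor examples, and the exact signatures (the Prompt overload, the accessor's Remove method, the position types) are assumptions to verify against the current code:

```csharp
using LLama;
using LLama.Batched;
using LLama.Common;

var parameters = new ModelParams("model.gguf");
using var weights = LLamaWeights.LoadFromFile(parameters);
using var executor = new BatchedExecutor(weights, parameters);

// Each Conversation owns one sequence in the shared KV cache.
using var conversation = executor.Create();
conversation.Prompt(executor.Context.Tokenize("You are a helpful assistant."));
await executor.Infer();

// Edit that sequence's cache through the KV accessor exposed by Modify:
// here, rewind the conversation by dropping the last 8 cached positions.
conversation.Modify((end, kv) =>
{
    kv.Remove(end.Value - 8, 8);   // assumed (start, count) signature
    return end.Value - 8;          // report the new end position back to the conversation
});
```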
Yep, I've thought about using those APIs, but I believe my use case is a bit more specific. The prompts themselves aren't static, but parts of the prompt are. For example, with RAG, the prompt might be:
You are a helpful assistant. Answer the following question using the following pieces of information:
# Context
{c1}
{c2}
{c3}
{c1}, {c2}, {c3}, etc. aren't static, but I'm confident the rest of the prompt is. I've benchmarked this, and the initial prompt evaluation is a big fixed cost.
I've also thought about using the APIs exposed in SafeLLamaContextHandle, but based on the documentation in the llama.cpp header file, those are only for debugging and only provide a view on the underlying cache.
I haven't tested this yet, but I'm not sure the SaveState method is portable? In addition, I think it probably includes stuff I'm not interested in, and I would like to minimize the size of the file. In general, I'm interested in an equivalent of the llama.cpp --prompt-cache CLI argument.
Actually, I dug a little bit through the llama.cpp code base, and it seems that all the prompt-cache option does is call llama_state_save_file, which is already the function you've exposed. So that's good, I was mistaken. That being said, it would be nice to have a higher-level API for manipulating the KV cache outside SafeLLamaContextHandle.
For an overall API design, llama-cpp-python exposes LLamaState and LLamaCache constructs; the former maps onto the llama.cpp internals, while the latter is a high-level construct without a direct analogue. Interestingly, it seems LLamaCache and LlamaState are actually present in the docs here, but based on a code search, they don't actually exist?
{c1}, {c2}, {c3}, etc. aren't static but I'm confident the rest of the prompt is. I've benchmarked and the initial prompt evaluation is a big fixed cost.
Yep, so my suggestion was to evaluate everything before {c1}, save that, and then you can resume from there later on.
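In code, and building on the same setup as the earlier sketch, that workflow could look roughly like this (again assuming LLamaContext.SaveState/LoadState behave as expected; the executor's own bookkeeping, e.g. its past-token count, would also need to be reset or reloaded, which is glossed over here):

```csharp
using System;
using System.Threading.Tasks;
using LLama;
using LLama.Common;

// One-off: evaluate only the static part of the prompt, then snapshot the context.
// Generating a single token is just a cheap way to force the prefix through the model;
// that extra token may need to be discarded depending on the executor.
static async Task CachePrefixAsync(InteractiveExecutor executor)
{
    const string staticPrefix =
        "You are a helpful assistant. Answer the following question using the following pieces of information:\n# Context\n";
    await foreach (var _ in executor.InferAsync(staticPrefix, new InferenceParams { MaxTokens = 1 })) { }
    executor.Context.SaveState("prefix.state");
}

// Per request: restore the snapshot and pay only for the dynamic chunks and the question.
static async Task AnswerAsync(InteractiveExecutor executor, string c1, string c2, string c3, string question)
{
    executor.Context.LoadState("prefix.state");
    var dynamicPart = $"{c1}\n{c2}\n{c3}\n\nQuestion: {question}";
    await foreach (var token in executor.InferAsync(dynamicPart, new InferenceParams { MaxTokens = 256 }))
        Console.Write(token);
}
```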
but based on the documentation in the llama.cpp header file, those are only for debugging and only provide a view on the underlying cache.
Some of the functions here are specifically for debugging only, but not all of them (it's noted in the comments). The debugging ones are exposed in a higher-level wrapper through LLamaKvCacheView.
For example, llama_get_kv_cache_token_count is for debugging, but llama_kv_cache_seq_rm is definitely not!
For an overall API design, Llama-cpp-python exposes a LLamaState and LLamaCache
I can't find any docs on LLamaCache; do you have a link to any? From a look at the implementation here, it looks like it automatically loads and saves states (presumably using llama_state_save_file) so you can resume a sequence later using the same cache?
Yeah, I just read through the code; it's not really documented. Here's where it's actually used though: https://github.com/abetlen/llama-cpp-python/blob/165b4dc6c188f8fda2fc616154e111f710484eba/llama_cpp/llama.py#L1073C1-L1089C1. It seems LLamaCache is basically just a thin wrapper over LlamaState, where it handles continuous exporting of the state?
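If that's right, a LLamaSharp analogue could be a similarly thin helper that keys saved context states by the evaluated prompt prefix. To be clear, everything below is a hypothetical design sketch, not existing LLamaSharp API; the only calls assumed to exist are LLamaContext.SaveState/LoadState:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;
using System.Text;
using LLama;

// Hypothetical LLamaCache-style helper: maps a prompt prefix to a saved context state on
// disk, so a previously evaluated prefix can be restored instead of re-evaluated.
public sealed class PromptStateCache
{
    private readonly string _directory;
    private readonly Dictionary<string, string> _files = new();

    public PromptStateCache(string directory)
    {
        _directory = directory;
        Directory.CreateDirectory(directory);
    }

    private static string Key(string prefix) =>
        Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(prefix)));

    // Call after the prefix has been evaluated on this context.
    public void Store(string prefix, LLamaContext context)
    {
        var path = Path.Combine(_directory, Key(prefix) + ".state");
        context.SaveState(path);            // assumed existing LLamaSharp API
        _files[prefix] = path;
    }

    // Restores the saved state if this exact prefix was cached earlier.
    public bool TryRestore(string prefix, LLamaContext context)
    {
        if (!_files.TryGetValue(prefix, out var path) || !File.Exists(path))
            return false;
        context.LoadState(path);            // assumed existing LLamaSharp API
        return true;
    }
}
```

From my reading of llama.py, the real llama-cpp-python cache goes a step further and matches on the longest cached token prefix rather than an exact string, which would be the natural next refinement here.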