Comments (3)
It would be great if you'd like to make it!
Hi, from a rough look at your code, I think this is the correct approach.
Do you think it's worth implementing it in some interface at the ChatSession level (similar to how it's done at the Executor level)?
From my perspective, I'd like to see such an API in ChatSession. As you can see, some extra processing of the output is performed when switching contexts, which could be wrapped in an API; a rough sketch is below.
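A minimal sketch of what that wrapper might look like; the interface and method below are hypothetical, not part of LLamaSharp:

```csharp
using System.Threading;
using System.Threading.Tasks;
using LLama;

// Hypothetical sketch, not an existing LLamaSharp API: a ChatSession-level
// hook that swaps the underlying context and re-applies the output
// post-processing that currently has to be done by hand.
public interface IContextSwitchableSession
{
    Task SwitchContextAsync(LLamaContext newContext, CancellationToken token = default);
}
```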
Also, one thing I struggled to find is something like AddMessage but in a non-lazy form, with context.Decode happening at the same time, to precompute the KV cache before generating the first message.
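As a minimal sketch of the idea (the extension method below is hypothetical and not part of LLamaSharp; the LLamaBatch usage is an assumption about recent library versions):

```csharp
using LLama;
using LLama.Native;

// Hypothetical sketch, not an existing LLamaSharp API: an eager counterpart
// to AddMessage that decodes the history text immediately, so the KV cache
// is already populated before the first reply is generated.
public static class ChatSessionPrefillExtensions
{
    public static void Prefill(this LLamaContext context, string historyText)
    {
        // Tokenize the accumulated prompt/history.
        var tokens = context.Tokenize(historyText);

        // Build a single batch and decode it now. The batch.Add overload and
        // the (LLamaSeqId)0 cast are assumptions about recent LLamaSharp
        // versions; older versions exposed context.Eval instead of Decode.
        var batch = new LLamaBatch();
        for (var i = 0; i < tokens.Length; i++)
            batch.Add(tokens[i], i, (LLamaSeqId)0, logits: i == tokens.Length - 1);

        context.Decode(batch);
    }
}
```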
It's a good idea for a performance improvement. More generally, generation and prefill could be separated and exposed to users in some way. Currently there is also a tricky workaround: when you want to prefill without generating, just set the last several words of your prompt as the antiprompt (but make sure that text occurs nowhere else).
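Concretely, the trick might look like this (a sketch assuming the standard InteractiveExecutor and InferenceParams API; the model path and prompt are placeholders):

```csharp
using System.Collections.Generic;
using LLama;
using LLama.Common;

// Sketch of the antiprompt trick described above: the tail of the prompt is
// registered as the stop sequence, so inference evaluates (prefills) the
// prompt into the KV cache and then stops before generating anything.
var parameters = new ModelParams("model.gguf"); // placeholder path
using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

var prompt = "Transcript of a dialog.\nUser: Hello!\nAssistant:";
var prefillOnly = new InferenceParams
{
    // The last few words of the prompt, which must not occur anywhere else.
    AntiPrompts = new List<string> { "User: Hello!\nAssistant:" }
};

// Consumes the prompt, then hits the antiprompt immediately.
await foreach (var _ in executor.InferAsync(prompt, prefillOnly)) { }
```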
FYI @martindevans
Great, thanks for taking the time to look at the code.
I think I could make a PR for these changes if you want.