Comments (5)
LLamaSharp doesn't support negative prompts in an easy-to-use way.
All the necessary low-level components are there, but they haven't been assembled into a high-level system that lets you use the feature by just supplying a string as a negative prompt.
from llamasharp.
I see. This is an important feature, since it makes it easy to steer model behavior. It is probably not a lot of work for someone who already knows the code well.
from llamasharp.
Applying the guidance logits is fairly simple (I've actually just put together the code to do that).
The tricky part is that you need an entire second sequence running in parallel with the first to generate those guidance logits. To be fast, this really needs batched execution, which LLamaSharp only added very recently and which doesn't work with any of the high-level executors yet.
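To make the "applying the guidance logits" step concrete, here is a minimal sketch of the logit arithmetic in classifier-free guidance, written in Python with NumPy rather than against the actual LLamaSharp API. The two input arrays stand for the per-token logits produced by the two parallel sequences (main prompt and negative prompt); the combination formula follows the form used by llama.cpp's guidance sampler, and the function name and example values are illustrative assumptions.

```python
import numpy as np

def apply_cfg(main_logits, guidance_logits, scale):
    """Combine logits from the main sequence and the negative-prompt
    (guidance) sequence.

    guided = guidance + scale * (main - guidance)

    With scale > 1, the result is pushed away from what the
    negative-prompt sequence predicts.
    """
    main_logits = np.asarray(main_logits, dtype=np.float64)
    guidance_logits = np.asarray(guidance_logits, dtype=np.float64)
    return guidance_logits + scale * (main_logits - guidance_logits)

# Toy vocabulary of 3 tokens: the guidance sequence favours token 1,
# so guidance pushes probability mass away from it.
main = [1.0, 2.0, 0.5]
guidance = [0.5, 2.5, 0.5]
print(apply_cfg(main, guidance, scale=2.0))  # [1.5 1.5 0.5]
```

Note that with `scale = 1.0` the formula reduces to the main sequence's logits unchanged, which is a useful sanity check; the real cost in LLamaSharp is not this arithmetic but evaluating the second sequence every step, which is why batched execution matters.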
from llamasharp.
@zsogitbe I've implemented a demo of how to use classifier-free guidance with the new batched executor. As I mentioned, this is not as high-level as the existing executors, but for now it should at least demonstrate how things work at the lower level. If you could pull #536, test it out, and leave any feedback on that PR, it would be very much appreciated!
from llamasharp.
Added a possible bug issue: ggerganov/llama.cpp#5709
from llamasharp.