Comments (15)
I have already started working on it! 😄
from llamasharp.
Now that the CLBLAST binaries are included in the project (or at least, will be once #479 is merged) the next step is to create a nuspec file and make sure it's covered by the deployment automation script. Are you up for working on that @jasoncouture?
Indeed. I'm having some trouble getting it working, which is why there's no PR yet.
According to this comment, clblast.dll may need to be included in the distributed package.
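If clblast.dll does need to ship, the nuspec for the OpenCL backend would have to list it explicitly next to the llama binary. A hypothetical fragment — the paths and target folders here are illustrative guesses, not the repo's actual layout:

```xml
<!-- Hypothetical nuspec fragment: ship clblast.dll alongside llama.dll
     for the OpenCL (CLBlast) runtime. Paths are illustrative only. -->
<files>
  <file src="runtimes/win-x64/native/clblast/llama.dll"
        target="runtimes/win-x64/native/clblast/llama.dll" />
  <file src="runtimes/win-x64/native/clblast/clblast.dll"
        target="runtimes/win-x64/native/clblast/clblast.dll" />
</files>
```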
Alright.
I got llama.cpp Vulkan backend working.
I just rebuilt LlamaSharp after adding a Vulkan folder and updating and including all the relevant dlls from the latest premade llama.cpp release.
The current NuGet release, LLamaSharp 0.9.1, is built from a llama.cpp version that predates Vulkan support.
Yes... but I'm not running the unit tests, just one sample program. I just switched the DLLs to the CLBlast ones.
If 0.9.1 is too old you might be able to use 0.10.0; that's not released to NuGet yet, but it's what's currently on the master branch, and it's only 2 weeks old.
I don't know that much about making NuGet packages and testing them locally.
The nuget packages are the easy bit! There are a few stages before that:
- Update the compile action to build Vulkan binaries (https://github.com/SciSharp/LLamaSharp/blob/master/.github/workflows/compile.yml)
- Work out if there are any other files that need distributing, and if so grab those as part of the compile action too.
- Update runtime loading to decide when to load vulkan backend (https://github.com/SciSharp/LLamaSharp/blob/master/LLama/Native/NativeApi.Load.cs)
- Create nuspec files (https://github.com/SciSharp/LLamaSharp/tree/master/LLama/runtimes/build)
Even if you don't want to go all the way through this process, just doing a couple of the steps as PRs brings us closer to eventual vulkan support :)
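The runtime-loading step above essentially boils down to probing for the most capable native library that is actually usable on the machine and falling back otherwise. A rough Python sketch of that idea — the backend names, the ordering, and the function are illustrative assumptions, not LLamaSharp's actual NativeApi.Load.cs logic:

```python
# Hypothetical sketch of backend selection: try the most capable
# backend first, fall back to the next. LLamaSharp's real logic
# lives in NativeApi.Load.cs; names and ordering here are made up.
def pick_backend(available,
                 preference=("cuda12", "cuda11", "vulkan",
                             "clblast", "avx2", "noavx")):
    """Return the first preferred backend whose native library is present."""
    for backend in preference:
        if backend in available:
            return backend
    raise RuntimeError("no usable llama.cpp backend found")

# e.g. a machine with Vulkan drivers but no CUDA:
print(pick_backend({"vulkan", "avx2", "noavx"}))  # -> vulkan
```

The point is only that Vulkan needs a slot in that priority list, plus a cheap way to detect whether the Vulkan loader is present before committing to it.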
Definitely a good idea, are you willing to work on the modifications to the Github actions pipeline to compile the necessary binaries? That's the first step to getting this supported.
I was trying to deploy LLM Unity to Android, and thought about:
- AVX2 with NEON intrinsics (I don't know how)
- Vulkan backend with compute shaders (this is partially implemented by others)
- ONNX Runtime with custom model quantization (Unity doesn't support the .NET Core BCL/CLR yet)
Please consider LLamaSharp.Backend.Vulkan.
I tried using the llama.cpp prebuilt Vulkan DLLs.
The "new ModelParams(modelPath){....}" works and produces:
ggml_vulkan: Found 1 Vulkan devices:
Vulkan0: AMD Radeon VII | uma: 0 | fp16: 1 | warp size: 64
But "using var model = LLamaWeights.LoadFromFile(parameters);" gives:
System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
--------------------------------
at LLama.Native.NativeApi.llama_load_model_from_file(System.String, LLama.Native.LLamaModelParams)
--------------------------------
at LLama.Native.SafeLlamaModelHandle.LoadFromFile(System.String, LLama.Native.LLamaModelParams)
at LLama.LLamaWeights.LoadFromFile(LLama.Abstractions.IModelParams)
"I got llama.cpp Vulkan backend working. I just rebuilt LlamaSharp..."
Is this what I would need to do to get OpenCL (from the clblast runtime folder) working? I'm having a hard time building from source, so I installed the 0.9.1 NuGet package and copied those binaries over. But now I get a similar error to the one you were having before:
ggml_opencl: selecting platform: 'AMD Accelerated Parallel Processing'
ggml_opencl: selecting device: 'gfx1031'
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Repeat 2 times:
--------------------------------
at LLama.Native.NativeApi.llama_load_model_from_file(System.String, LLama.Native.LLamaModelParams)
--------------------------------
at LLama.Native.SafeLlamaModelHandle.LoadFromFile(System.String, LLama.Native.LLamaModelParams)
at LLama.LLamaWeights.LoadFromFile(LLama.Abstractions.IModelParams)
at Program+<<Main>$>d__0.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]](System.__Canon ByRef)
at Program.<Main>$(System.String[])
at Program.<Main>(System.String[])
Thanks for any advice, and glad to see you have Vulkan working.
@soyfrien @moozoo64 In general there is zero compatibility between llama.cpp versions - i.e. you can't just swap in a new binary and expect it to work. You'll probably have better luck if you consult this table and use llama.cpp binaries from that exact commit hash.
If either of you are interested in making the changes necessary to create a backend.Vulkan package that'd be a great addition!
@martindevans
The new llama.cpp backends were added not long ago and are undergoing rapid updates and fixes.
So I'd rather take the latest llama.cpp DLLs and build LLamaSharp around them.
With the latest NuGet LLamaSharp (0.9.1) you need llama.cpp 9fb13f9.
I did build 9fb13f9, but then realized it has no Vulkan support - doh, it's from 2 months ago.
"If either of you are interested in making the changes necessary to create a backend.Vulkan package that'd be a great addition!"
Yeah, thinking about that. I don't know that much about making NuGet packages and testing them locally.
But from what I can tell, someone who knows this code inside out would find adding the new backend NuGet packages pretty straightforward and quick.
I'm guessing LLamaSharp is actually binding against one of the llama.dll builds (noavx?), and since the rest have the same entry points, they just work.
So all that is needed is to bundle the DLLs for the new backends (Vulkan, Kompute, SYCL, and OpenBLAS) as NuGet packages; the .nuspec files are trivial to create.
I included ggml_shared.dll and llava_shared.dll because I wasn't sure if llama.dll linked against them. Probably not.
Thank you for detailing how to do that, though I still don't get how the backends are created.
I just want to leave an update for anyone else using OpenCL: when using the current source instead of NuGet, you no longer need to rename llama.dll to libllama.dll.
- Update the compile action to build Vulkan binaries (https://github.com/SciSharp/LLamaSharp/blob/master/.github/workflows/compile.yml)
That's mostly just copy and paste from around line 410 of
https://github.com/ggerganov/llama.cpp/blob/master/.github/workflows/build.yml
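As a sketch of what that copied job might look like, adapted by eye from llama.cpp's own Windows build matrix - the Vulkan SDK install step, CMake flags, and artifact names here are assumptions that would need checking against the real workflows:

```yaml
# Hypothetical compile.yml job for Vulkan binaries, modeled loosely on
# llama.cpp's build.yml. SDK install URL and flags may need adjusting.
windows-vulkan:
  runs-on: windows-latest
  steps:
    - uses: actions/checkout@v4
      with:
        repository: ggerganov/llama.cpp
    - name: Install Vulkan SDK
      run: |
        curl.exe -o vulkan-sdk.exe -L "https://sdk.lunarg.com/sdk/download/latest/windows/vulkan-sdk.exe"
        .\vulkan-sdk.exe --accept-licenses --default-answer --confirm-command install
    - name: Build
      run: |
        cmake -B build -DLLAMA_VULKAN=ON -DBUILD_SHARED_LIBS=ON
        cmake --build build --config Release
    - uses: actions/upload-artifact@v4
      with:
        name: llama-bin-win-vulkan-x64
        path: build/bin/Release/llama.dll
```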
- Work out if there are any other files that need distributing, and if so grab those as part of the compile action too.
Just need to download and expand the latest release versions of llama.cpp and compare the DLLs:
- For Vulkan: none
- For OpenBLAS: openblas.dll
- For Kompute: I think fmt.dll
- For CLBlast: clblast.dll
I don't think ggml_shared.dll and llava_shared.dll are needed, but "Dependency Walker" could check that.
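That comparison can also be scripted rather than done by eye. A small Python sketch - the directory names in the usage comment are hypothetical - that reports which DLLs a backend build ships beyond the baseline build:

```python
# Hypothetical helper: diff a backend's binary folder against the
# baseline build to see which extra DLLs that backend must ship.
from pathlib import Path


def extra_dlls(baseline_dir: str, backend_dir: str) -> set[str]:
    """Names of .dll files present in the backend build but not the baseline."""
    base = {p.name.lower() for p in Path(baseline_dir).glob("*.dll")}
    back = {p.name.lower() for p in Path(backend_dir).glob("*.dll")}
    return back - base


# Usage against two expanded release zips (names are illustrative):
#   extra_dlls("llama-bin-win-avx2-x64", "llama-bin-win-clblast-x64")
# would be expected to report {"clblast.dll"}.
```

This only answers "which files are extra", not "which files llama.dll actually links against" - for that, something like Dependency Walker (or dumpbin) is still needed.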
- Update runtime loading to decide when to load vulkan backend
(https://github.com/SciSharp/LLamaSharp/blob/master/LLama/Native/NativeApi.Load.cs)
ok, now I know how it's finding the dlls :)
- Create nuspec files (https://github.com/SciSharp/LLamaSharp/tree/master/LLama/runtimes/build)
Take a copy of the OpenCL one, replace the word "OpenCL" with "Vulkan" everywhere, and update the files section.
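For illustration, the result of that find-and-replace might look roughly like this - the version token, description, author field, and file paths are guesses modeled on typical backend nuspecs, not the repo's actual files:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical LLamaSharp.Backend.Vulkan nuspec, modeled on the
     OpenCL one; version token and paths are illustrative only. -->
<package>
  <metadata>
    <id>LLamaSharp.Backend.Vulkan</id>
    <version>$version$</version>
    <description>LLamaSharp.Backend.Vulkan is a backend for LLamaSharp to use with Vulkan.</description>
    <authors>llama.cpp Authors</authors>
  </metadata>
  <files>
    <file src="runtimes/win-x64/native/vulkan/llama.dll"
          target="runtimes/win-x64/native/vulkan/llama.dll" />
    <file src="runtimes/linux-x64/native/vulkan/libllama.so"
          target="runtimes/linux-x64/native/vulkan/libllama.so" />
  </files>
</package>
```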