Enable OpenCL/ROCm (llamasharp issue, open, 15 comments)

jasoncouture commented on September 26, 2024
Enable OpenCL/ROCm

Comments (15)

jasoncouture commented on September 26, 2024

I have already started working on it! 😄

jasoncouture commented on September 26, 2024

> Now that the CLBLAST binaries are included in the project (or at least, will be once #479 is merged) the next step is to create a nuspec file and make sure it's covered by the deployment automation script. Are you up for working on that @jasoncouture?

Indeed. I'm having some trouble getting it working, which is why there's no PR yet.

> According to this comment clblast.dll may need to be included in the distributed package.

Alright.

moozoo64 commented on September 26, 2024

I got the llama.cpp Vulkan backend working.
I just rebuilt LLamaSharp after adding a Vulkan folder and including all the relevant DLLs from the latest prebuilt llama.cpp release.
The current NuGet release, LLamaSharp 0.9.1, is built from a llama.cpp version that predates Vulkan support.

moozoo64 commented on September 26, 2024

Yes... but I'm not running the unit tests, just one sample program. I just switched the DLLs to the CLBlast ones.

martindevans commented on September 26, 2024

If 0.9.1 is too old you might be able to use 0.10.0; it's not released to NuGet yet, but it's what's currently on the master branch, and that's only two weeks old.

> I don't know that much about making nuget packages and testing them locally

The nuget packages are the easy bit! There are a few stages before that:

  1. Update the compile action to build Vulkan binaries (https://github.com/SciSharp/LLamaSharp/blob/master/.github/workflows/compile.yml)
  2. Work out if there are any other files that need distributing, and if so grab those as part of the compile action too.
  3. Update runtime loading to decide when to load vulkan backend (https://github.com/SciSharp/LLamaSharp/blob/master/LLama/Native/NativeApi.Load.cs)
  4. Create nuspec files (https://github.com/SciSharp/LLamaSharp/tree/master/LLama/runtimes/build)

Even if you don't want to go all the way through this process, just doing a couple of the steps as PRs brings us closer to eventual vulkan support :)
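
For step 3, the change is small; here is a rough sketch of the kind of branch the loader would need (the folder names and flags are illustrative assumptions, not LLamaSharp's actual code):

```csharp
// Illustrative sketch only, not the real NativeApi.Load.cs: the loader
// needs one extra case mapping "use Vulkan" to the folder the Vulkan
// backend package would install its native library into.
static string PickNativeLibraryFolder(bool useCuda, bool useVulkan, string avxLevel)
{
    // Folders mirror the NuGet layout: runtimes/<rid>/native/<folder>/llama.dll
    if (useCuda) return "cuda12";   // existing GPU case (assumed folder name)
    if (useVulkan) return "vulkan"; // the new case a Vulkan backend adds
    return avxLevel;                // CPU fallback: "noavx", "avx", "avx2", ...
}
```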

martindevans commented on September 26, 2024

Definitely a good idea. Are you willing to work on the modifications to the GitHub Actions pipeline to compile the necessary binaries? That's the first step to getting this supported.

PrestigeDevop commented on September 26, 2024

I was trying to deploy an LLM in Unity to Android, and thought about:

  • AVX2-style optimizations via NEON intrinsics (I don't know how)
  • a Vulkan backend with compute shaders (this is partially implemented by others)
  • ONNX Runtime with custom model quantization (Unity doesn't support the .NET Core BCL/CLR yet)

martindevans commented on September 26, 2024

Now that the CLBLAST binaries are included in the project (or at least, will be once #479 is merged) the next step is to create a nuspec file and make sure it's covered by the deployment automation script. Are you up for working on that @jasoncouture?

martindevans commented on September 26, 2024

According to this comment clblast.dll may need to be included in the distributed package.

moozoo64 commented on September 26, 2024

Please consider LLamaSharp.Backend.Vulkan

I tried using the prebuilt llama.cpp Vulkan DLLs.
The "new ModelParams(modelPath) { ... }" call works and produces:

ggml_vulkan: Found 1 Vulkan devices:
Vulkan0: AMD Radeon VII | uma: 0 | fp16: 1 | warp size: 64

But "using var model = LLamaWeights.LoadFromFile(parameters);" gives:

 System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
--------------------------------
   at LLama.Native.NativeApi.llama_load_model_from_file(System.String, LLama.Native.LLamaModelParams)
--------------------------------
   at LLama.Native.SafeLlamaModelHandle.LoadFromFile(System.String, LLama.Native.LLamaModelParams)
   at LLama.LLamaWeights.LoadFromFile(LLama.Abstractions.IModelParams)
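
For reference, the load sequence being described is roughly the following (a sketch using the API from the snippets above; the model path and layer count are placeholders):

```csharp
using LLama;
using LLama.Common;

// Sketch of the failing sequence: constructing ModelParams succeeds and the
// backend prints its device info, but the native llama_load_model_from_file
// call is where the AccessViolationException surfaces when the DLL mismatches.
var parameters = new ModelParams("path/to/model.gguf") // placeholder path
{
    GpuLayerCount = 32 // offload layers to the Vulkan device
};
using var model = LLamaWeights.LoadFromFile(parameters);
```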

soyfrien commented on September 26, 2024

> I got llama.cpp Vulkan backend working. I just rebuilt LlamaSharp...

Is this what I would need to do to get OpenCL (from the clblast runtime folder) working? I am having a hard time building from source, so I installed the 0.9.1 NuGet package and copied those binaries. But I now get a similar error to what you were having before:

ggml_opencl: selecting platform: 'AMD Accelerated Parallel Processing'
ggml_opencl: selecting device: 'gfx1031'
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Repeat 2 times:
--------------------------------
   at LLama.Native.NativeApi.llama_load_model_from_file(System.String, LLama.Native.LLamaModelParams)
--------------------------------
   at LLama.Native.SafeLlamaModelHandle.LoadFromFile(System.String, LLama.Native.LLamaModelParams)
   at LLama.LLamaWeights.LoadFromFile(LLama.Abstractions.IModelParams)
   at Program+<<Main>$>d__0.MoveNext()
   at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]](System.__Canon ByRef)
   at Program.<Main>$(System.String[])
   at Program.<Main>(System.String[])

Thanks for any advice, and glad to see you have Vulkan working.

martindevans commented on September 26, 2024

@soyfrien @moozoo64 In general there is zero compatibility between llama.cpp versions - i.e. you can't just swap in a new binary and expect it to work. You'll probably have better luck if you consult this table and use llama.cpp binaries built from that exact commit hash.

If either of you are interested in making the changes necessary to create a backend.Vulkan package that'd be a great addition!

moozoo64 commented on September 26, 2024

@martindevans
The new llama.cpp backends were added not long ago and are undergoing rapid updates and fixes.
So I'd rather take the latest llama.cpp DLLs and build LLamaSharp around them.

With the latest NuGet LLamaSharp (0.9.1) you need llama.cpp 9fb13f9.
I did build 9fb13f9, but then realized it has no Vulkan; doh, it's from two months ago.

> If either of you are interested in making the changes necessary to create a backend.Vulkan package that'd be a great addition!

Yeah, thinking about that. I don't know that much about making NuGet packages and testing them locally.
But from what I can tell, someone who knows this code inside out would find adding the new backend NuGet packages pretty straightforward and quick.

I'm guessing LLamaSharp is actually binding against one of the llama.dlls (noavx?), and since the rest have the same entry points, they just work.
So all that is needed is to bundle the DLLs for the new backends (Vulkan, Kompute, SYCL & OpenBLAS) as NuGet packages.
The .nuspec files should be trivial to create.
I included ggml_shared.dll and llava_shared.dll because I wasn't sure if llama.dll linked against them. Probably not.
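
That guess matches how P/Invoke binding generally works; here is a minimal sketch (not LLamaSharp's actual code) of why any backend build with the same exports satisfies the managed side:

```csharp
using System;
using System.Runtime.InteropServices;

// Minimal sketch, not LLamaSharp's actual binding: the managed side binds to
// the export table, not to a particular build. Whichever "llama" library the
// OS search path resolves first (noavx, CLBlast, Vulkan, ...) gets loaded,
// and as long as it exports the same C symbols the binding just works.
internal static class LlamaNative
{
    // llama_print_system_info is a real llama.cpp export returning a C string.
    [DllImport("llama", CallingConvention = CallingConvention.Cdecl)]
    private static extern IntPtr llama_print_system_info();

    public static string SystemInfo() =>
        Marshal.PtrToStringAnsi(llama_print_system_info()) ?? string.Empty;
}
```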

soyfrien commented on September 26, 2024

Thank you for detailing how to do that, though I still don't get how the backends are created.

I just want to leave an update for anyone else using OpenCL: when using the current source instead of NuGet, you no longer need to rename llama.dll to libllama.dll.

moozoo64 commented on September 26, 2024

That's mostly just copy and paste from around line 410 of
https://github.com/ggerganov/llama.cpp/blob/master/.github/workflows/build.yml
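
For illustration, the copied job might look roughly like this in LLamaSharp's compile.yml; the CMake flag matches llama.cpp's Vulkan option from that era (-DLLAMA_VULKAN=ON), but the step names, SDK install and artifact layout are assumptions:

```yaml
# Sketch of a Vulkan build job adapted from llama.cpp's build.yml;
# details are assumptions, not the actual workflow.
compile-vulkan:
  runs-on: windows-latest
  steps:
    - uses: actions/checkout@v4
      with:
        repository: ggerganov/llama.cpp
    - name: Install Vulkan SDK
      run: choco install vulkan-sdk --yes # assumed install method
    - name: Build
      run: |
        cmake -B build -DBUILD_SHARED_LIBS=ON -DLLAMA_VULKAN=ON
        cmake --build build --config Release
    - name: Upload artifacts
      uses: actions/upload-artifact@v4
      with:
        name: llama-bin-win-vulkan-x64
        path: build/bin/Release/llama.dll
```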

> Work out if there are any other files that need distributing, and if so grab those as part of the compile action too.

Just download and expand the latest release versions of llama.cpp and compare the DLLs:
For Vulkan: none
For OpenBLAS: openblas.dll
For Kompute: I think fmt.dll
For CLBlast: clblast.dll

I don't think ggml_shared.dll and llava_shared.dll are needed but "Dependency Walker" would check that.

ok, now I know how it's finding the dlls :)

Take a copy of the OpenCL one, replace the word "OpenCL" with "Vulkan" everywhere, and update the files section.
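
Following that recipe, here is a sketch of what an LLamaSharp.Backend.Vulkan nuspec might look like; the metadata and file paths are modeled on the existing backend packages and are assumptions, not the actual file:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hedged sketch: the OpenCL nuspec copied and renamed as described above,
     with the files section updated. Paths are assumptions. -->
<package>
  <metadata>
    <id>LLamaSharp.Backend.Vulkan</id>
    <version>$version$</version>
    <authors>SciSharp</authors>
    <description>The Vulkan backend for LLamaSharp.</description>
  </metadata>
  <files>
    <file src="runtimes/deps/vulkan/llama.dll"
          target="runtimes/win-x64/native/vulkan/llama.dll" />
    <file src="runtimes/deps/vulkan/libllama.so"
          target="runtimes/linux-x64/native/vulkan/libllama.so" />
  </files>
</package>
```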
