
Comments (3)

edesalve commented on June 22, 2024

Hi @evgenyigumnov, upgrading the CUDA driver to >= 545 should work: #1761.


evgenyigumnov commented on June 22, 2024
[dependencies]

candle-nn = "0.4.1"
candle-core = "0.4.1"
candle-datasets = "0.4.1"
candle-transformers = "0.4.1"
candle-examples = "0.4.1"
hf-hub = "0.3.2"
tokenizers = "0.15.2"


dommyrock commented on June 22, 2024

Got the same issue today while running the gemma ("codegemma-7b-it") example on my legacy RTX 2070 GPU 😞

Error: DriverError(CUDA_ERROR_NOT_FOUND, "named symbol not found") when loading cast_u32_bf16

nvidia-smi --query-gpu=name,compute_cap,driver_version --format=csv
~ NVIDIA GeForce RTX 2070 with Max-Q Design, 7.5, 552.12
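For reference, here is a minimal sketch of what I believe the failing path looks like (my assumption, using the candle-core 0.4.x API from the Cargo.toml above): casting a u32 tensor to BF16 on a CUDA device should request the same cast_u32_bf16 kernel symbol that fails to load.

use candle_core::{DType, Device, Tensor};

fn main() -> candle_core::Result<()> {
    // First CUDA device (the RTX 2070 in my case).
    let device = Device::new_cuda(0)?;
    // Some u32 data, e.g. token ids coming out of a tokenizer.
    let ids = Tensor::new(&[1u32, 2, 3, 4], &device)?;
    // This cast looks up the cast_u32_bf16 kernel symbol; on GPUs with
    // compute capability < 8.0 that symbol is never compiled, so the
    // driver reports CUDA_ERROR_NOT_FOUND ("named symbol not found").
    let bf16 = ids.to_dtype(DType::BF16)?;
    println!("{bf16}");
    Ok(())
}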

The issue is that our GPUs don't support the required compute capability.

As seen in the kernel code here:
https://github.com/huggingface/candle/blob/main/candle-kernels/src/cast.cu#L73

The BF16 casts are only compiled for architectures with compute capability >= 800 (8.0), while my GPU (7.5) and @evgenyigumnov's 2080 (also 7.5) fall under the unsupported architectures.

And it's set to >= 800 for a reason. From the CUDA docs:
"The 16-bit __nv_bfloat16 floating-point version of atomicAdd() is only supported by devices of compute capability 8.x and higher."

Docs: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capabilities

Search for "The 16-bit __nv_bfloat16"

GPU capabilities:
https://developer.nvidia.com/cuda-gpus (RTX GPUs are listed under "CUDA-Enabled GeForce and TITAN Products")

Related issue: #1911

So unfortunately neither @evgenyigumnov nor I can run code that depends on that cast kernel on our current GPU architectures :( .
Compute capability 8.0 mostly means the 3000-series generation of chips after ours (as seen on the "GPU capabilities" page).
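One possible workaround sketch (untested on my side, and assuming the rest of the pipeline tolerates F16): on pre-Ampere GPUs, avoid the BF16 dtype entirely and fall back to F16, which Turing does support. The helper below is hypothetical, not part of candle.

use candle_core::{DType, Tensor};

// Hypothetical helper: pick a half-precision dtype the GPU can handle.
// The BF16 kernels are only built for compute capability >= 8.0, so on
// older CUDA GPUs fall back to F16 instead.
fn to_supported_half(t: &Tensor, bf16_supported: bool) -> candle_core::Result<Tensor> {
    let dtype = if bf16_supported { DType::BF16 } else { DType::F16 };
    t.to_dtype(dtype)
}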


Some relevant docs if you want to know more:

Compute capabilities:
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capabilities

Bfloat16
https://en.wikipedia.org/wiki/Bfloat16_floating-point_format

Virtual architecture macros:
https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#virtual-architecture-macros

Using __CUDA_ARCH__:
https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/#using-cuda-arch

__nv_bfloat16 samples
https://github.com/NVIDIA/cuda-samples/tree/master/Samples/3_CUDA_Features/bf16TensorCoreGemm

Candle bfloat16 added:
ec79fc4

Hope it helps : )

