
Comments (11)

chenchunhui97 commented on June 12, 2024

When I try to compile the repo from source, I encounter the following error:

Using MAX_JOBS=6 as the number of jobs.
-- Build type: RelWithDebInfo
-- Target device: cuda
-- Found python matching: /data/personal/***/anaconda3/envs/vllm2/bin/python3.
CMake Error at /data/personal/***/anaconda3/envs/vllm2/lib/python3.10/site-packages/cmake/data/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:814 (message):
  Compiling the CUDA compiler identification source file
  "CMakeCUDACompilerId.cu" failed.

  Compiler: /usr/bin/nvcc

  Build flags:

  Id flags: --keep;--keep-dir;tmp -v



  The output was:

  255

  #$ _SPACE_=

  #$ _CUDART_=cudart

  #$ _HERE_=/usr/lib/nvidia-cuda-toolkit/bin

  #$ _THERE_=/usr/lib/nvidia-cuda-toolkit/bin

  #$ _TARGET_SIZE_=

  #$ _TARGET_DIR_=

  #$ _TARGET_SIZE_=64

  #$ NVVMIR_LIBRARY_DIR=/usr/lib/nvidia-cuda-toolkit/libdevice

  #$
  PATH=/usr/lib/nvidia-cuda-toolkit/bin:/usr/local/cuda-12.2/bin:/usr/local/cuda-12.2/bin:/data/personal/***/anaconda3/envs/vllm2/bin:/data/personal/***/anaconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin


  #$ LIBRARIES= -L/usr/lib/x86_64-linux-gnu/stubs -L/usr/lib/x86_64-linux-gnu

  #$ rm tmp/a_dlink.reg.c

  #$ gcc -D__CUDA_ARCH__=300 -E -x c++ -DCUDA_DOUBLE_MATH_FUNCTIONS
  -D__CUDACC__ -D__NVCC__ -D__CUDACC_VER_MAJOR__=10 -D__CUDACC_VER_MINOR__=1
  -D__CUDACC_VER_BUILD__=243 -include "cuda_runtime.h" -m64
  "CMakeCUDACompilerId.cu" > "tmp/CMakeCUDACompilerId.cpp1.ii"

  #$ cicc --c++14 --gnu_version=90400 --allow_managed -arch compute_30 -m64
  -ftz=0 -prec_div=1 -prec_sqrt=1 -fmad=1 --include_file_name
  "CMakeCUDACompilerId.fatbin.c" -tused -nvvmir-library
  "/usr/lib/nvidia-cuda-toolkit/libdevice/libdevice.10.bc"
  --gen_module_id_file --module_id_file_name
  "tmp/CMakeCUDACompilerId.module_id" --orig_src_file_name
  "CMakeCUDACompilerId.cu" --gen_c_file_name
  "tmp/CMakeCUDACompilerId.cudafe1.c" --stub_file_name
  "tmp/CMakeCUDACompilerId.cudafe1.stub.c" --gen_device_file_name
  "tmp/CMakeCUDACompilerId.cudafe1.gpu" "tmp/CMakeCUDACompilerId.cpp1.ii" -o
  "tmp/CMakeCUDACompilerId.ptx"

  #$ ptxas -arch=sm_30 -m64 "tmp/CMakeCUDACompilerId.ptx" -o
  "tmp/CMakeCUDACompilerId.sm_30.cubin"

  ptxas fatal : Value 'sm_30' is not defined for option 'gpu-name'

  # --error 0xff --





Call Stack (most recent call first):
  /data/personal/***/anaconda3/envs/vllm2/lib/python3.10/site-packages/cmake/data/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:8 (CMAKE_DETERMINE_COMPILER_ID_BUILD)
  /data/personal/***/anaconda3/envs/vllm2/lib/python3.10/site-packages/cmake/data/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:53 (__determine_compiler_id_test)
  /data/personal/***/anaconda3/envs/vllm2/lib/python3.10/site-packages/cmake/data/share/cmake-3.29/Modules/CMakeDetermineCUDACompiler.cmake:131 (CMAKE_DETERMINE_COMPILER_ID)
  /data/personal/***/anaconda3/envs/vllm2/lib/python3.10/site-packages/torch/share/cmake/Caffe2/public/cuda.cmake:47 (enable_language)
  /data/personal/***/anaconda3/envs/vllm2/lib/python3.10/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:87 (include)
  /data/personal/***/anaconda3/envs/vllm2/lib/python3.10/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
  CMakeLists.txt:67 (find_package)

I am wondering about the compiler /usr/bin/nvcc: it is version 10.1. I also have a default nvcc of version 12.2, but it is not used during compilation. Is there something wrong with my setup, or am I missing a package?

/usr/bin/nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

which nvcc
/usr/local/cuda-12.2/bin/nvcc

nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jun_13_19:16:58_PDT_2023
Cuda compilation tools, release 12.2, V12.2.91
Build cuda_12.2.r12.2/compiler.32965470_0

youkaichao commented on June 12, 2024

Seems like CMake found the wrong CUDA nvcc and toolkit. It uses /usr/lib/nvidia-cuda-toolkit and /usr/bin/nvcc.

Try cleaning up your environment first, maybe by deleting the old installations.
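
A few commands that may help pinpoint the stray toolkit before removing it (a minimal sketch, assuming a Debian/Ubuntu system as the /usr/lib/nvidia-cuda-toolkit path suggests; the paths are taken from the log above):

which -a nvcc                        # every nvcc on PATH, in lookup order
/usr/bin/nvcc --version              # the CUDA 10.1 compiler CMake picked up
dpkg -S /usr/bin/nvcc                # which apt package owns that binary
dpkg -l | grep nvidia-cuda-toolkit   # is the distro toolkit package installed?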

chenchunhui97 commented on June 12, 2024

Seems like CMake found the wrong CUDA nvcc and toolkit. It uses /usr/lib/nvidia-cuda-toolkit and /usr/bin/nvcc.

Try cleaning up your environment first, maybe by deleting the old installations.

Thanks. Actually, I have set PATH & LD_LIBRARY_PATH in my bash scripts, but the build still uses the older version. Do you know an easier way to point the build at a specific version of nvcc and the CUDA toolkit? I cannot delete or move these files freely.

youkaichao commented on June 12, 2024

I don't know what happened in this case. You will need to dive into the CMake files to find the root cause.

chenchunhui97 commented on June 12, 2024

I tried to compile vLLM on another machine (RTX 3090) with the following environment:

PyTorch version: 2.2.1
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.1
Libc version: glibc-2.35

Python version: 3.10.14 (main, Mar 21 2024, 16:24:04) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      46 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             24
On-line CPU(s) list:                0-23
Vendor ID:                          GenuineIntel
Model name:                         13th Gen Intel(R) Core(TM) i7-13700
CPU family:                         6
Model:                              183
Thread(s) per core:                 2
Core(s) per socket:                 16
Socket(s):                          1
Stepping:                           1
CPU max MHz:                        5200.0000
CPU min MHz:                        800.0000
BogoMIPS:                           4224.00
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization:                     VT-x
L1d cache:                          640 KiB (16 instances)
L1i cache:                          768 KiB (16 instances)
L2 cache:                           24 MiB (10 instances)
L3 cache:                           30 MiB (1 instance)
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] mypy==0.991
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.2.1
[pip3] torchaudio==2.2.1
[pip3] torchvision==0.17.1
[pip3] triton==2.2.0
[conda] blas                      1.0                         mkl  
[conda] ffmpeg                    4.3                  hf484d3e_0    pytorch
[conda] libjpeg-turbo             2.0.0                h9bf148f_0    pytorch
[conda] mkl                       2023.1.0         h213fc3f_46344  
[conda] mkl-service               2.4.0           py310h5eee18b_1  
[conda] mkl_fft                   1.3.8           py310h5eee18b_0  
[conda] mkl_random                1.2.4           py310hdb19cb5_0  
[conda] numpy                     1.26.4          py310h5f9d8c6_0  
[conda] numpy-base                1.26.4          py310hb5e798b_0  
[conda] pytorch                   2.2.1           py3.10_cuda12.1_cudnn8.9.2_0    pytorch
[conda] pytorch-cuda              12.1                 ha16c6d3_5    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torch                     2.2.1                    pypi_0    pypi
[conda] torchaudio                2.2.1               py310_cu121    pytorch
[conda] torchtriton               2.2.0                     py310    pytorch
[conda] torchvision               0.17.1              py310_cu121    pytorch
[conda] triton                    2.2.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.0.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0	CPU Affinity	NUMA Affinity	GPU NUMA ID
GPU0	 X 	0-23	0		N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

and encountered another error:

-- Caffe2: CUDA detected: 12.4
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 12.4
-- /usr/local/cuda/lib64/libnvrtc.so shorthash is 6d168ef8
-- USE_CUDNN is set to 0. Compiling without cuDNN support
-- USE_CUSPARSELT is set to 0. Compiling without cuSPARSELt support
-- Autodetected CUDA architecture(s):  8.6
-- Added CUDA NVCC flags for: -gencode;arch=compute_86,code=sm_86
CMake Warning at /home/chenchunhui/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:22 (message):
  static library kineto_LIBRARY-NOTFOUND not found.
Call Stack (most recent call first):
  /home/chenchunhui/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:127 (append_torchlib_if_found)
  CMakeLists.txt:67 (find_package)


-- CUDA supported arches: 7.0;7.5;8.0;8.6;8.9;9.0
-- CUDA target arches: 86-real
-- Punica target arches: 86-real
-- Enabling C extension.
-- Enabling moe extension.
-- Configuring done (2.1s)
-- Generating done (0.0s)
-- Build files have been written to: /mnt/HD1/workspace/vllm/build/temp.linux-x86_64-cpython-310
Using MAX_JOBS=6 as the number of jobs.
ninja: no work to do.
Using MAX_JOBS=6 as the number of jobs.
[1/5] Building CUDA object CMakeFiles/_C.dir/csrc/cache_kernels.cu.o
FAILED: CMakeFiles/_C.dir/csrc/cache_kernels.cu.o 
/usr/bin/nvcc -forward-unknown-to-host-compiler -DTORCH_EXTENSION_NAME=_C -DUSE_C10D_GLOO -DUSE_C10D_NCCL -DUSE_DISTRIBUTED -DUSE_RPC -DUSE_TENSORPIPE -D_C_EXPORTS -I/mnt/HD1/workspace/vllm/csrc -isystem /home/chenchunhui/anaconda3/envs/vllm/include/python3.10 -isystem /home/chenchunhui/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/include -isystem /home/chenchunhui/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -DONNX_NAMESPACE=onnx_c2 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -O2 -g -DNDEBUG -std=c++17 "--generate-code=arch=compute_86,code=[sm_86]" -Xcompiler=-fPIC --expt-relaxed-constexpr -DENABLE_FP8_E5M2 --threads=1 -D_GLIBCXX_USE_CXX11_ABI=0 -MD -MT CMakeFiles/_C.dir/csrc/cache_kernels.cu.o -MF CMakeFiles/_C.dir/csrc/cache_kernels.cu.o.d -x cu -c /mnt/HD1/workspace/vllm/csrc/cache_kernels.cu -o CMakeFiles/_C.dir/csrc/cache_kernels.cu.o
In file included from /mnt/HD1/workspace/vllm/csrc/quantization/fp8_e5m2_kvcache/../../attention/attention_dtypes.h:7,
                 from /mnt/HD1/workspace/vllm/csrc/quantization/fp8_e5m2_kvcache/quant_utils.cuh:7,
                 from /mnt/HD1/workspace/vllm/csrc/cache_kernels.cu:8:
/mnt/HD1/workspace/vllm/csrc/quantization/fp8_e5m2_kvcache/../../attention/dtype_fp8.cuh:7:10: fatal error: cuda_fp8.h: No such file or directory
    7 | #include <cuda_fp8.h>
      |          ^~~~~~~~~~~~
compilation terminated.
[2/5] Building CUDA object CMakeFiles/_C.dir/csrc/attention/attention_kernels.cu.o
FAILED: CMakeFiles/_C.dir/csrc/attention/attention_kernels.cu.o 
/usr/bin/nvcc -forward-unknown-to-host-compiler -DTORCH_EXTENSION_NAME=_C -DUSE_C10D_GLOO -DUSE_C10D_NCCL -DUSE_DISTRIBUTED -DUSE_RPC -DUSE_TENSORPIPE -D_C_EXPORTS -I/mnt/HD1/workspace/vllm/csrc -isystem /home/chenchunhui/anaconda3/envs/vllm/include/python3.10 -isystem /home/chenchunhui/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/include -isystem /home/chenchunhui/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -DONNX_NAMESPACE=onnx_c2 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -O2 -g -DNDEBUG -std=c++17 "--generate-code=arch=compute_86,code=[sm_86]" -Xcompiler=-fPIC --expt-relaxed-constexpr -DENABLE_FP8_E5M2 --threads=1 -D_GLIBCXX_USE_CXX11_ABI=0 -MD -MT CMakeFiles/_C.dir/csrc/attention/attention_kernels.cu.o -MF CMakeFiles/_C.dir/csrc/attention/attention_kernels.cu.o.d -x cu -c /mnt/HD1/workspace/vllm/csrc/attention/attention_kernels.cu -o CMakeFiles/_C.dir/csrc/attention/attention_kernels.cu.o
In file included from /mnt/HD1/workspace/vllm/csrc/attention/attention_dtypes.h:7,
                 from /mnt/HD1/workspace/vllm/csrc/attention/attention_kernels.cu:23:
/mnt/HD1/workspace/vllm/csrc/attention/dtype_fp8.cuh:7:10: fatal error: cuda_fp8.h: No such file or directory
    7 | #include <cuda_fp8.h>
      |          ^~~~~~~~~~~~
compilation terminated.
[4/5] Building CUDA object CMakeFiles/_C.dir/csrc/quantization/marlin/marlin_cuda_kernel.cu.o
/mnt/HD1/workspace/vllm/csrc/quantization/marlin/marlin_cuda_kernel.cu(455): warning #179-D: right operand of "%" is zero
          detected during instantiation of "void marlin::Marlin<threads,thread_m_blocks,thread_n_blocks,thread_k_blocks,stages,group_blocks>(const int4 *, const int4 *, int4 *, const int4 *, int, int, int, int *) [with threads=256, thread_m_blocks=1, thread_n_blocks=8, thread_k_blocks=8, stages=4, group_blocks=-1]" 
(1033): here

/mnt/HD1/workspace/vllm/csrc/quantization/marlin/marlin_cuda_kernel.cu(487): warning #39-D: division by zero
          detected during instantiation of "void marlin::Marlin<threads,thread_m_blocks,thread_n_blocks,thread_k_blocks,stages,group_blocks>(const int4 *, const int4 *, int4 *, const int4 *, int, int, int, int *) [with threads=256, thread_m_blocks=1, thread_n_blocks=8, thread_k_blocks=8, stages=4, group_blocks=-1]" 
(1033): here

[... the same two warnings (#179-D "right operand of \"%\" is zero" and #39-D "division by zero") repeat for each remaining Marlin<...> template instantiation at lines 1033-1036 ...]

ninja: build stopped: subcommand failed.
Traceback (most recent call last):

According to the error message, the build cannot find the <cuda_fp8.h> header, and I cannot find it in the repo (2024-04-10, commit id: caada5e50aa16cd5f59bd7889128a83588ca1f99). How can I get this file?
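
For context, cuda_fp8.h is not part of the vLLM repo: it ships with the CUDA toolkit headers starting with CUDA 11.8. A quick sanity check, assuming the intended toolkit is the one under /usr/local/cuda:

ls /usr/local/cuda/include/cuda_fp8.h   # present in CUDA 11.8+ toolkits
/usr/bin/nvcc --version                 # the compiler the failing ninja step actually invoked

If /usr/bin/nvcc belongs to an older distro toolkit, its include path will not contain this header even though /usr/local/cuda does.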

youkaichao commented on June 12, 2024

CUDA runtime version: 11.8.89
-- Caffe2: CUDA detected: 12.4
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda

You still have a messy CUDA environment.

If you are using CUDA 12, you can install directly without any compilation. Just follow https://docs.vllm.ai/en/latest/getting_started/installation.html.
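
For reference, the pre-built wheel route from the linked installation guide comes down to a single command and skips local compilation entirely (assuming a CUDA 12 environment, which is what the wheels at the time of this thread were built against):

pip install vllm   # pre-built CUDA 12 wheel, no local nvcc needed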

chenchunhui97 commented on June 12, 2024

CUDA runtime version: 11.8.89
-- Caffe2: CUDA detected: 12.4
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda

You still have a messy CUDA environment.

If you are using CUDA 12, you can install directly without any compilation. Just follow https://docs.vllm.ai/en/latest/getting_started/installation.html.

Thanks. I made some modifications to the code and want to compile and install it from source.

chenchunhui97 commented on June 12, 2024

CUDA runtime version: 11.8.89
-- Caffe2: CUDA detected: 12.4
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda

You still have a messy CUDA environment.

If you are using CUDA 12, you can install directly without any compilation. Just follow https://docs.vllm.ai/en/latest/getting_started/installation.html.

What is the suggested environment configuration, including PyTorch, the CUDA driver, and the NVIDIA CUDA toolkit?

HarryWu99 commented on June 12, 2024

Seems like CMake found the wrong CUDA nvcc and toolkit. It uses /usr/lib/nvidia-cuda-toolkit and /usr/bin/nvcc.
Try cleaning up your environment first, maybe by deleting the old installations.

Thanks. Actually, I have set PATH & LD_LIBRARY_PATH in my bash scripts, but the build still uses the older version. Do you know an easier way to point the build at a specific version of nvcc and the CUDA toolkit? I cannot delete or move these files freely.

@chenchunhui97 @youkaichao I encountered the same problem. It automatically uses /usr/bin/nvcc instead of the nvcc from the PATH & LD_LIBRARY_PATH environment variables.

Is there any way to use a specific version of nvcc?

Oh!! I tried this, and it worked:

CUDACXX=/usr/local/cuda-12/bin/nvcc pip install -e .

#4011 (comment)
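
In the same spirit, a slightly fuller sketch for pointing the whole build at one toolkit (assuming the CUDA 12 toolkit lives under /usr/local/cuda-12; adjust the path to your install):

export CUDACXX=/usr/local/cuda-12/bin/nvcc   # CUDA compiler CMake should use
export CUDA_HOME=/usr/local/cuda-12          # toolkit root that some build scripts consult
export PATH=/usr/local/cuda-12/bin:$PATH     # so a bare nvcc resolves to the same toolkit
pip install -e .                             # editable build from the vllm repo root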

zpkhor commented on June 12, 2024

For those who are still facing this issue, my solution was to install torch with conda instead of pip:

conda install pytorch==2.2.1 pytorch-cuda=12.1 -c pytorch -c nvidia

gesanqiu commented on June 12, 2024

I assume the issue is that you are compiling vLLM with the wrong nvcc.
Please use which nvcc to check the nvcc in your environment; you should see something like /usr/local/cuda/bin or /usr/local/cuda-12.1/bin.
Then use dpkg -L nvidia-cuda-toolkit to check whether the distro-packaged nvcc matches the one from which nvcc. In this case you will see /usr/bin/nvcc, which is the wrong one. You need to uninstall nvidia-cuda-toolkit (see the commands below).
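
A minimal sketch of those checks on Debian/Ubuntu (assuming apt manages the stray toolkit):

which nvcc                                     # should point into /usr/local/cuda*/bin
dpkg -L nvidia-cuda-toolkit | grep bin/nvcc    # lists the packaged /usr/bin/nvcc, if installed
sudo apt remove nvidia-cuda-toolkit            # drop the distro toolkit so only /usr/local/cuda* remains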
