
jittor / jittorllms

Jittor large-model inference library: high performance, low hardware requirements, good Chinese-language support, and high portability.

License: Apache License 2.0

Python 94.08% Cuda 0.39% C++ 3.90% Shell 1.44% Makefile 0.06% TeX 0.13%

jittorllms's Introduction

Jittor Large Model Inference Library - run large models even on a laptop without a GPU

The JittorLLMs inference library has the following features:

  1. Low cost: compared with similar frameworks, this library drastically lowers hardware requirements (by about 80%). Large models run without a GPU on as little as 2 GB of RAM, so anyone can deploy a large model locally on an ordinary machine; to our knowledge, it is the large-model library with the lowest deployment cost.
  2. Broad support: currently supported models include ChatGLM, PengCheng PanGu, BlinkDL's ChatRWKV, Meta's LLaMA/LLaMA2, MOSS, and Atom7B. More high-quality Chinese models will be supported later, with a unified runtime configuration to lower the barrier to entry for large-model users.
  3. Portability: users do not need to modify any code; installing the Jittor version of torch (JTorch) is enough to migrate a model, which eases adaptation to all kinds of heterogeneous compute devices and environments.
  4. Speed: large models are slow to load; through zero-copy techniques, Jittor reduces model-loading overhead by 40%, and automatic compilation and optimization of meta-operators improve compute performance by more than 20% over comparable frameworks.

The architecture of the Jittor large-model library is shown below.

Requirements

  • RAM: at least 2 GB, 32 GB recommended
  • GPU memory: optional, 16 GB recommended
  • Operating system: Windows, macOS, and Linux are all supported
  • Disk space: at least 40 GB free, for downloading weights and storing swap files
  • Python: at least 3.8 (at least 3.7 on Linux)

If disk space is insufficient, you can point the cache at another location via the JITTOR_HOME environment variable. If the process gets killed due to insufficient RAM or GPU memory, see the section below on limiting memory consumption.
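For example, to keep downloaded weights, compiled operators, and swap files off the default location, JITTOR_HOME can be exported before launching a demo (the path below is purely an illustrative placeholder):

```shell
# redirect Jittor's cache to a disk with more free space
# (the path is an illustrative placeholder, not a required location)
export JITTOR_HOME=/tmp/jittor_cache
mkdir -p "$JITTOR_HOME"
```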

Deployment

Install the dependencies with the commands below. (Note: this script installs the Jittor version of torch; running it in a freshly created environment is recommended.)

# inside mainland China, clone from GitLink
git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1
# GitHub: git clone https://github.com/Jittor/JittorLLMs.git --depth 1
cd JittorLLMs
# -i selects the Jittor package index, -I force-reinstalls the Jittor version of torch
pip install -r requirements.txt -i https://pypi.jittor.org/simple -I

If you get an error that the jittor version cannot be found, your mirror may not have been updated yet; install the latest version with: pip install jittor -U -i https://pypi.org/simple

Deployment then takes a single command:

python cli_demo.py [chatglm|pangualpha|llama|chatrwkv|llama2|atom7b]

On first run, the model weights are downloaded from the server automatically, which takes up a certain amount of disk space under the root directory; for PanGu-α, for example, this is about 15 GB. The first run also compiles some CUDA operators, which adds some loading time.

Below are screenshots of real-time conversations with ChatGLM, PanGu-α, ChatRWKV, LLaMA, LLaMA2, and Atom7B.

Chinese conversation is currently supported for ChatGLM, Atom7B, and PanGu-α, while ChatRWKV, LLaMA, and LLaMA2 support English conversation; the latest model weights and fine-tuned results will be added over time. For how to use the MOSS model, please refer to the official MOSS repository.

If the process gets killed due to insufficient RAM or GPU memory, see the section below on limiting memory consumption.

WebDemo

Through the gradio library, JittorLLMs lets users chat with a large model directly in the browser.

python web_demo.py chatglm

This produces the result shown in the screenshot below.


Deploying a backend service

JittorLLMs provides, in api.py, an example of setting up a backend service.

python api.py chatglm

It can then be accessed directly with code like the following:

import json
import requests

post_data = json.dumps({'prompt': 'Hello, solve 5x=13'})
print(json.loads(requests.post("http://0.0.0.0:8000", post_data).text)['response'])

Low hardware requirements

To address pain points such as the large GPU-memory consumption of large models, the Jittor team developed a dynamic swap technique. According to our survey, Jittor is the first framework to support automatic swapping of dynamic-graph variables. Unlike earlier static-graph swapping techniques, users do not need to modify any code: native dynamic-graph code supports tensor swapping directly, and tensor data moves automatically between GPU memory, RAM, and disk, reducing the development burden.

Also according to our survey, the Jittor large-model inference library currently has the lowest hardware requirements of any framework: with just the disk space for the weights and 2 GB of RAM, and no GPU, a large model can still be deployed. Below is a comparison of resource consumption and speed under different hardware configurations. With sufficient GPU memory, JittorLLMs outperforms comparable frameworks; with insufficient GPU memory, or even no GPU at all, JittorLLMs still runs at a reasonable speed.
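As a toy illustration of the swap idea (a simplified sketch only, not Jittor's actual implementation; the SwappingStore class, names, and sizes are invented for this example), data beyond a memory budget can be spilled to disk transparently and faulted back in on access:

```python
import os
import pickle
import tempfile

# Toy sketch of dynamic swapping (NOT Jittor's real implementation):
# arrays past a memory budget are spilled to disk and reloaded on access.
class SwappingStore:
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.in_mem = {}    # name -> list of floats (stand-in for a tensor)
        self.on_disk = {}   # name -> path of the spill file
        self.order = []     # least-recently-used first

    def _size(self, data):
        return len(data) * 8  # assume 8 bytes per element

    def put(self, name, data):
        self.in_mem[name] = data
        self.order.append(name)
        self._evict()

    def _evict(self):
        used = sum(self._size(d) for d in self.in_mem.values())
        while used > self.budget and len(self.order) > 1:
            victim = self.order.pop(0)           # spill the oldest tensor
            data = self.in_mem.pop(victim)
            fd, path = tempfile.mkstemp()
            with os.fdopen(fd, "wb") as f:
                pickle.dump(data, f)
            self.on_disk[victim] = path
            used -= self._size(data)

    def get(self, name):
        if name in self.on_disk:                 # fault the tensor back in
            path = self.on_disk.pop(name)
            with open(path, "rb") as f:
                self.in_mem[name] = pickle.load(f)
            os.remove(path)
            self.order.append(name)
            self._evict()
        return self.in_mem[name]

store = SwappingStore(budget_bytes=64)   # room for one 8-float tensor
store.put("w1", [1.0] * 8)
store.put("w2", [2.0] * 8)               # "w1" is spilled to disk here
print("w1" in store.on_disk)             # True
print(store.get("w1"))                   # reloaded transparently
```

Jittor applies the same principle at the framework level, between GPU memory, RAM, and disk, with no user-visible bookkeeping.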

To save memory, install Jittor version 1.3.7.8 or later and add the following environment variables:

export JT_SAVE_MEM=1
# cap CPU memory usage at 16 GB
export cpu_mem_limit=16000000000
# cap device memory (GPU, TPU, etc.) at 8 GB
export device_mem_limit=8000000000
# Windows users: use PowerShell
# $env:JT_SAVE_MEM="1"
# $env:cpu_mem_limit="16000000000"
# $env:device_mem_limit="8000000000"

You can set the CPU and device memory limits freely; to leave memory unlimited, set the value to -1:

# no limit on CPU memory usage
export cpu_mem_limit=-1
# no limit on device memory (GPU, TPU, etc.)
export device_mem_limit=-1
# Windows users: use PowerShell
# $env:JT_SAVE_MEM="1"
# $env:cpu_mem_limit="-1"
# $env:device_mem_limit="-1"

To clean up the disk swap files, run the following command:

python -m jittor_utils.clean_cache swap

Higher speed

During inference, large models often run into oversized parameter files and slow model loading. Jittor reads parameters directly into memory, reducing the number of memory copies and greatly improving loading efficiency: model loading is 40% faster than with the PyTorch framework.
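The general idea behind direct, copy-free reads can be sketched with Python's mmap (this illustrates the technique only, not Jittor's actual loader; the toy parameter file is invented for the example): memory-mapping the file lets the consumer read pages in place, instead of read()-ing into an intermediate buffer and copying again into the destination tensor.

```python
import mmap
import os
import struct
import tempfile

# Write a toy "parameter file" of four float32 values.
fd, path = tempfile.mkstemp(suffix=".bin")
with os.fdopen(fd, "wb") as f:
    f.write(struct.pack("<4f", 1.0, 2.0, 3.0, 4.0))

# Memory-map the file: pages are faulted in on demand, and the
# memoryview reads them in place with no intermediate copy.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    view = memoryview(mm).cast("f")   # zero-copy float32 view
    params = list(view)               # materialize only when needed
    view.release()
    mm.close()

os.remove(path)
print(params)  # [1.0, 2.0, 3.0, 4.0]
```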

High portability

The Jittor team has released JTorch, a Jittor implementation of the PyTorch interface. Without modifying any code, users can install it as follows and benefit from Jittor's advantages in memory savings and efficiency.

pip install torch -i https://pypi.jittor.org/simple

Through JTorch, all kinds of heterogeneous large-model code can be adapted; common codebases such as Megatron and Hugging Face Transformers port directly. And thanks to the hardware-adaptation capability of Jittor's underlying meta-operators, models can be migrated conveniently to a wide range of domestic and international compute devices.

We welcome all large-model users to try the library and give us feedback. Going forward, Fitten Tech (非十科技) and the Tsinghua University Visual Media Research Center will continue to focus on supporting large models, serving large-model users, and providing lower-cost, higher-efficiency solutions. We also welcome code contributions to JittorLLMs to broaden the Jittor model library's support.

Roadmap

  • Model training and fine-tuning
  • Porting the MOSS model
  • Dynamic swap performance optimization
  • CPU performance optimization
  • Support for more high-quality models, domestic and international
  • ......

Model support TODO list

Feel free to submit requests to us.

We welcome your feedback; you can also join the Jittor developer chat group for real-time discussion.

About us

This Jittor large-model inference library is led by Fitten Tech (非十科技) and developed jointly with the Tsinghua University Visual Media Research Center, in the hope of providing software and hardware support for large-model research in China.

Beijing Fitten Technology Co., Ltd. (北京非十科技有限公司) is a Chinese technology company specializing in artificial-intelligence services, with leading technical strengths in 3D AIGC, deep-learning frameworks, and large models. Technically, it focuses on accelerating the end-to-end deployment of AI algorithms from hardware to software, providing adaptation for various compute-acceleration hardware, customizing deep-learning frameworks, and optimizing the performance of AI applications. Its core technical staff graduated from Tsinghua University and have extensive experience developing systems software, computer graphics, compilers, and deep-learning frameworks. The company has built an independently controllable AI system based on the Jittor deep-learning framework, completed adaptations for nearly ten domestic accelerator vendors, and is actively promoting the domestic AI ecosystem. It has open-sourced JNeRF, a high-performance neural radiance field rendering library that can generate high-quality 3D AIGC models, and JittorLLMs, currently the large-model inference library with the lowest hardware requirements.

jittorllms's People

Contributors

cjld, exusial, jittor, li-xl, lzhengning, zjp-shadow


jittorllms's Issues

Error

RuntimeError: [f 0404 16:50:47.372000 60 cache_compile.cc:266] Check failed: src.size() Something wrong... Could you please report this issue?

The command python cli_demo.py chatglm fails

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "D:\Projects\JittorLLMs\JittorLLMs\cli_demo.py", line 8, in <module>
model = models.get_model(args)
^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\JittorLLMs\JittorLLMs\models\__init__.py", line 38, in get_model
globals()[f"get_{model_name}"]()
File "D:\Projects\JittorLLMs\JittorLLMs\models\util.py", line 51, in get_chatglm
new_path.append(download_fromhub(f"jittorhub://{f}", tdir="chat-glm"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\JittorLLMs\JittorLLMs\models\util.py", line 5, in download_fromhub
import jittor as jt
File "C:\Python311\Lib\site-packages\jittor\__init__.py", line 18, in <module>
from . import compiler
File "C:\Python311\Lib\site-packages\jittor\compiler.py", line 1356, in <module>
compile(cc_path, cc_flags+opt_flags, files, 'jittor_core'+extension_suffix)
File "C:\Python311\Lib\site-packages\jittor\compiler.py", line 151, in compile
jit_utils.run_cmds(cmds, cache_path, jittor_path, "Compiling "+base_output)
File "C:\Python311\Lib\site-packages\jittor_utils\__init__.py", line 251, in run_cmds
for i,_ in enumerate(p.imap_unordered(do_compile, cmds)):
File "C:\Python311\Lib\multiprocessing\pool.py", line 873, in next
raise value
RuntimeError: [f 0404 16:41:33.781000 36 log.cc:608] Check failed ret(2) == 0(0) Run cmd failed: "C:\Users\Yanjing\.cache\jittor\msvc\VC_____\bin\cl.exe" "c:\python311\lib\site-packages\jittor\src\pybind\py_var_tracer.cc" -std:c++17 -EHa -MD -utf-8 -nologo -I"C:\Users\Yanjing\.cache\jittor\msvc\VC\include" -I"C:\Users\Yanjing\.cache\jittor\msvc\win10_kits\include\ucrt" -I"C:\Users\Yanjing\.cache\jittor\msvc\win10_kits\include\shared" -I"C:\Users\Yanjing\.cache\jittor\msvc\win10_kits\include\um" -DNOMINMAX -I"c:\python311\lib\site-packages\jittor\src" -I"c:\python311\include" -I"C:\Users\Yanjing\.cache\jittor\jt1.3.7\cl\py3.11.0\Windows-10-10.x4a\AMDRyzen75800Hxb1\default" -O2 -c -Fo: "C:\Users\Yanjing\.cache\jittor\jt1.3.7\cl\py3.11.0\Windows-10-10.x4a\AMDRyzen75800Hxb1\default\obj_files\py_var_tracer.cc.obj"

could not load the checkpoint

The file model_optim_rng.pth is not generated in the folder.

(aichat) PS E:\AI\pytorch\JittorLLMs> python cli_demo.py pangualpha
WARNING: APEX is not installed, multi_tensor_applier will not be available.
WARNING: APEX is not installed, using torch.nn.LayerNorm instead of apex.normalization.FusedLayerNorm!
E:\AI\pytorch\JittorLLMs\models\pangualpha
using world size: 1 and model-parallel size: 1
using torch.float32 for parameters ...
WARNING: overriding default arguments for tokenizer_type:GPT2BPETokenizer with tokenizer_type:GPT2BPETokenizer
-------------------- arguments --------------------
  adlr_autoresume ................. False
  adlr_autoresume_interval ........ 1000
  apply_query_key_layer_scaling ... False
  apply_residual_connection_post_layernorm  False
  attention_dropout ............... 0.1
  attention_softmax_in_fp32 ....... False
  batch_size ...................... 1
  bert_load ....................... None
  bias_dropout_fusion ............. False
  bias_gelu_fusion ................ False
  block_data_path ................. None
  checkpoint_activations .......... False
  checkpoint_num_layers ........... 1
  clip_grad ....................... 1.0
  data_impl ....................... infer
  data_path ....................... None
  DDP_impl ........................ local
  distribute_checkpointed_activations  False
  distributed_backend ............. nccl
  dynamic_loss_scale .............. True
  eod_mask_loss ................... False
  eval_interval ................... 1000
  eval_iters ...................... 100
  exit_interval ................... None
  faiss_use_gpu ................... False
  finetune ........................ True
  fp16 ............................ False
  fp16_lm_cross_entropy ........... False
  fp32_allreduce .................. False
  genfile ......................... None
  greedy .......................... False
  hidden_dropout .................. 0.1
  hidden_size ..................... 2560
  hysteresis ...................... 2
  ict_head_size ................... None
  ict_load ........................ None
  indexer_batch_size .............. 128
  indexer_log_interval ............ 1000
  init_method_std ................. 0.02
  layernorm_epsilon ............... 1e-05
  lazy_mpu_init ................... None
  load ............................ C:\Users\Administrator\.cache\jittor\jt1.3.7\cl\py3.9.16\Windows-10-10.x34\IntelRCoreTMi5x00\default\cu11.2.67\checkpoints\pangu\Pangu-alpha_2.6B_fp16_mgt
  local_rank ...................... None
  log_interval .................... 100
  loss_scale ...................... None
  loss_scale_window ............... 1000
  lr .............................. None
  lr_decay_iters .................. None
  lr_decay_style .................. linear
  make_vocab_size_divisible_by .... 1
  mask_prob ....................... 0.15
  max_position_embeddings ......... 1024
  merge_file ...................... None
  min_lr .......................... 0.0
  min_scale ....................... 1
  mmap_warmup ..................... False
  model_parallel_size ............. 1
  no_load_optim ................... False
  no_load_rng ..................... False
  no_save_optim ................... False
  no_save_rng ..................... False
  num_attention_heads ............. 32
  num_layers ...................... 31
  num_samples ..................... 0
  num_unique_layers ............... None
  num_workers ..................... 2
  onnx_safe ....................... None
  openai_gelu ..................... False
  out_seq_length .................. 50
  override_lr_scheduler ........... False
  param_sharing_style ............. grouped
  params_dtype .................... torch.float32
  query_in_block_prob ............. 0.1
  rank ............................ 0
  recompute ....................... False
  report_topk_accuracies .......... []
  reset_attention_mask ............ False
  reset_position_ids .............. False
  sample_input_file ............... None
  sample_output_file .............. None
  save ............................ None
  save_interval ................... None
  scaled_upper_triang_masked_softmax_fusion  False
  seed ............................ 1234
  seq_length ...................... 1024
  short_seq_prob .................. 0.1
  split ........................... 969, 30, 1
  temperature ..................... 1.0
  tensorboard_dir ................. None
  titles_data_path ................ None
  tokenizer_type .................. GPT2BPETokenizer
  top_k ........................... 2
  top_p ........................... 0.0
  train_iters ..................... None
  use_checkpoint_lr_scheduler ..... False
  use_cpu_initialization .......... False
  use_one_sent_docs ............... False
  vocab_file ...................... models/pangualpha/megatron/tokenizer/bpe_4w_pcl/vocab
  warmup .......................... 0.01
  weight_decay .................... 0.01
  world_size ...................... 1
---------------- end of arguments ----------------
> building GPT2BPETokenizer tokenizer ...
 > padded vocab (size: 40000) with 0 dummy tokens (new size: 40000)
torch distributed is already initialized, skipping initialization ...
> initializing model parallel with size 1
> setting random seeds to 1234 ...
> initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
building GPT2 model ...
 > number of parameters on model parallel rank 0: 2625295360
global rank 0 is loading checkpoint C:\Users\Administrator\.cache\jittor\jt1.3.7\cl\py3.9.16\Windows-10-10.x34\IntelRCoreTMi5x00\default\cu11.2.67\checkpoints\pangu\Pangu-alpha_2.6B_fp16_mgt\iter_0001000\mp_rank_00\model_optim_rng.pth
could not load the checkpoint

Running JittorLLMs with Docker

Run JittorLLMs with a single command:

docker run -it --rm --name jittorllms -e cpu_mem_limit=8000000000 -e JT_SAVE_MEM=1 -v /tmp/data:/data registry.cn-beijing.aliyuncs.com/starlink-network/jittorllms:latest python3 cli_demo.py chatglm 

This runs the chatglm model on CPU with memory capped at 8 GB; cpu_mem_limit limits memory usage.

Mac m1 run cmd failed "/usr/bin/clang++"

OS: Mac ventura 13.4 beta on Mac mini m1
Command: python3 cli_demo.py chatglm

[i 0405 21:30:32.653299 04 compiler.py:955] Jittor(1.3.7.10) src: /Library/Python/3.9/site-packages/jittor
[i 0405 21:30:32.693324 04 compiler.py:956] clang at /usr/bin/clang++(14.0.3)
[i 0405 21:30:32.693533 04 compiler.py:957] cache_path: /Users/charlie/.cache/jittor/jt1.3.7/clang14.0.3/py3.9.6/macOS-13.4-armxff/AppleM1/default
In file included from /Library/Python/3.9/site-packages/jittor/src/utils/cache_compile.cc:12:
In file included from /Library/Python/3.9/site-packages/jittor/src/misc/hash.h:8:
In file included from /Library/Python/3.9/site-packages/jittor/src/common.h:10:
/Library/Python/3.9/site-packages/jittor/src/utils/log.h:138:18: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
        send_log(move(out), level, verbose);
                 ^
                 std::
1 warning generated.
In file included from /Library/Python/3.9/site-packages/jittor/src/utils/log.cc:14:
/Library/Python/3.9/site-packages/jittor/src/utils/log.h:138:18: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
        send_log(move(out), level, verbose);
                 ^
                 std::
/Library/Python/3.9/site-packages/jittor/src/utils/log.cc:195:12: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
    return move(logs);
           ^
           std::
/Library/Python/3.9/site-packages/jittor/src/utils/log.cc:376:19: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
    vprefix_map = move(new_map);
                  ^
                  std::
3 warnings generated.
In file included from /Library/Python/3.9/site-packages/jittor/src/utils/tracer.cc:11:
In file included from /Library/Python/3.9/site-packages/jittor/src/utils/tracer.h:8:
In file included from /Library/Python/3.9/site-packages/jittor/src/common.h:10:
/Library/Python/3.9/site-packages/jittor/src/utils/log.h:138:18: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
        send_log(move(out), level, verbose);
                 ^
                 std::
/Library/Python/3.9/site-packages/jittor/src/utils/tracer.cc:49:9: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only.  Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]
        sprintf(pid_buf, "%d", getpid());
        ^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:188:1: note: 'sprintf' has been explicitly marked deprecated here
__deprecated_msg("This function is provided for compatibility reasons only.  Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead.")
^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:215:48: note: expanded from macro '__deprecated_msg'
        #define __deprecated_msg(_msg) __attribute__((__deprecated__(_msg)))
                                                      ^
/Library/Python/3.9/site-packages/jittor/src/utils/tracer.cc:145:9: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only.  Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]
        sprintf(pid_buf, "%d", getpid());
        ^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:188:1: note: 'sprintf' has been explicitly marked deprecated here
__deprecated_msg("This function is provided for compatibility reasons only.  Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead.")
^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:215:48: note: expanded from macro '__deprecated_msg'
        #define __deprecated_msg(_msg) __attribute__((__deprecated__(_msg)))
                                                      ^
/Library/Python/3.9/site-packages/jittor/src/utils/tracer.cc:147:9: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only.  Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]
        sprintf(st_buf, "set backtrace limit %d", trace_depth);
        ^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:188:1: note: 'sprintf' has been explicitly marked deprecated here
__deprecated_msg("This function is provided for compatibility reasons only.  Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead.")
^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:215:48: note: expanded from macro '__deprecated_msg'
        #define __deprecated_msg(_msg) __attribute__((__deprecated__(_msg)))
                                                      ^
/Library/Python/3.9/site-packages/jittor/src/utils/tracer.cc:213:13: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only.  Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]
            sprintf(syscom,"%s %p -f -p -i -e %.*s", addr2line_path.c_str(), trace[i], p, messages[i]);
            ^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:188:1: note: 'sprintf' has been explicitly marked deprecated here
__deprecated_msg("This function is provided for compatibility reasons only.  Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead.")
^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:215:48: note: expanded from macro '__deprecated_msg'
        #define __deprecated_msg(_msg) __attribute__((__deprecated__(_msg)))
                                                      ^
5 warnings generated.
In file included from /Library/Python/3.9/site-packages/jittor/src/utils/jit_utils.cc:7:
In file included from /Library/Python/3.9/site-packages/jittor/src/utils/cache_compile.h:8:
In file included from /Library/Python/3.9/site-packages/jittor/src/common.h:10:
/Library/Python/3.9/site-packages/jittor/src/utils/log.h:138:18: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
        send_log(move(out), level, verbose);
                 ^
                 std::
In file included from /Library/Python/3.9/site-packages/jittor/src/utils/jit_utils.cc:8:
In file included from /Library/Python/3.9/site-packages/jittor/src/pyjt/py_converter.h:17:
/Library/Python/3.9/site-packages/jittor/src/profiler/simple_profiler.h:48:48: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
    inline SimpleProfiler(string&& name): name(move(name)), cnt(0), total_ns(0), sum(0) {}
                                               ^
                                               std::
In file included from /Library/Python/3.9/site-packages/jittor/src/utils/jit_utils.cc:8:
/Library/Python/3.9/site-packages/jittor/src/pyjt/py_converter.h:358:16: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
        return move(fetch_sync({ptr}).at(0));
               ^
               std::
/Library/Python/3.9/site-packages/jittor/src/utils/jit_utils.cc:505:37: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
                return to_py_object(move(ret));
                                    ^
                                    std::
4 warnings generated.
In file included from /Library/Python/3.9/site-packages/jittor/src/utils/str_utils.cc:8:
In file included from /Library/Python/3.9/site-packages/jittor/src/utils/str_utils.h:8:
In file included from /Library/Python/3.9/site-packages/jittor/src/common.h:10:
/Library/Python/3.9/site-packages/jittor/src/utils/log.h:138:18: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
        send_log(move(out), level, verbose);
                 ^
                 std::
1 warning generated.
ld: library not found for -lomp
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Traceback (most recent call last):
  File "/Users/charlie/dev/JittorLLMs/cli_demo.py", line 8, in <module>
    model = models.get_model(args)
  File "/Users/charlie/dev/JittorLLMs/models/__init__.py", line 38, in get_model
    globals()[f"get_{model_name}"]()
  File "/Users/charlie/dev/JittorLLMs/models/util.py", line 51, in get_chatglm
    new_path.append(download_fromhub(f"jittorhub://{f}", tdir="chat-glm"))
  File "/Users/charlie/dev/JittorLLMs/models/util.py", line 5, in download_fromhub
    import jittor as jt
  File "/Library/Python/3.9/site-packages/jittor/__init__.py", line 18, in <module>
    from . import compiler
  File "/Library/Python/3.9/site-packages/jittor/compiler.py", line 1189, in <module>
    check_cache_compile()
  File "/Library/Python/3.9/site-packages/jittor/compiler.py", line 884, in check_cache_compile
    recompile = compile(cc_path, cc_flags+f" {opt_flags} ", files, jit_utils.cache_path+'/jit_utils_core'+extension_suffix, True)
  File "/Library/Python/3.9/site-packages/jittor/compiler.py", line 126, in compile
    return do_compile(fix_cl_flags(cmd))
  File "/Library/Python/3.9/site-packages/jittor/compiler.py", line 91, in do_compile
    run_cmd(cmd)
  File "/Library/Python/3.9/site-packages/jittor_utils/__init__.py", line 188, in run_cmd
    raise Exception(err_msg)
Exception: Run cmd failed: "/usr/bin/clang++" "/Library/Python/3.9/site-packages/jittor/src/utils/cache_compile.cc" "/Library/Python/3.9/site-packages/jittor/src/utils/log.cc" "/Library/Python/3.9/site-packages/jittor/src/utils/tracer.cc" "/Library/Python/3.9/site-packages/jittor/src/utils/jit_utils.cc" "/Library/Python/3.9/site-packages/jittor/src/utils/str_utils.cc"   -Wall -Wno-unknown-pragmas -std=c++14 -fPIC  -mcpu=apple-m2  -fdiagnostics-color=always  -undefined dynamic_lookup -lomp  -lstdc++ -ldl -shared  -I"/Library/Python/3.9/site-packages/jittor/src" -I/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/Headers  -O2   -o "/Users/charlie/.cache/jittor/jt1.3.7/clang14.0.3/py3.9.6/macOS-13.4-armxff/AppleM1/default/jit_utils_core.cpython-39-darwin.so"

Running on a machine without a GPU fails with an error requiring CUDA

PS F:\chatGPT\JittorLLMs> python cli_demo.py pangualpha
WARNING: APEX is not installed, multi_tensor_applier will not be available.
WARNING: APEX is not installed, using torch.nn.LayerNorm instead of apex.normalization.FusedLayerNorm!
F:\chatGPT\JittorLLMs\models\pangualpha
Traceback (most recent call last):
  File "F:\chatGPT\JittorLLMs\cli_demo.py", line 8, in <module>
    model = models.get_model(args)
  File "F:\chatGPT\JittorLLMs\models\__init__.py", line 46, in get_model
    return module.get_model(args)
  File "F:\chatGPT\JittorLLMs\models\pangualpha\__init__.py", line 173, in get_model
    return PanGuAlphaModel()
  File "F:\chatGPT\JittorLLMs\models\pangualpha\__init__.py", line 134, in __init__
    initialize_megatron(extra_args_provider=add_text_generate_args,
  File "F:\chatGPT\JittorLLMs\models\pangualpha\megatron\initialize.py", line 44, in initialize_megatron
    assert torch.cuda.is_available(), 'Megatron requires CUDA.'
AssertionError: Megatron requires CUDA.

Why does it still require CUDA?

System environment

root@JittorLLMs:~/JittorLLMs# cat /etc/issue
Ubuntu 22.04.2 LTS \n \l

root@JittorLLMs:~/JittorLLMs# uname -a
Linux JittorLLMs 5.15.0-1025-oracle #31-Ubuntu SMP Fri Nov 25 17:03:15 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
root@JittorLLMs:~/JittorLLMs# free -m
               total        used        free      shared  buff/cache   available
Mem:           23988        4654         656         247       18677       18771
Swap:              0           0           0
root@JittorLLMs:~/JittorLLMs# 

Result

root@JittorLLMs:~/JittorLLMs# python cli_demo.py pangualpha
WARNING: APEX is not installed, multi_tensor_applier will not be available.
WARNING: APEX is not installed, using torch.nn.LayerNorm instead of apex.normalization.FusedLayerNorm!
/root/JittorLLMs/models/pangualpha
Traceback (most recent call last):
  File "/root/JittorLLMs/cli_demo.py", line 8, in <module>
    model = models.get_model(args)
  File "/root/JittorLLMs/models/__init__.py", line 42, in get_model
    return module.get_model(args)
  File "/root/JittorLLMs/models/pangualpha/__init__.py", line 173, in get_model
    return PanGuAlphaModel()
  File "/root/JittorLLMs/models/pangualpha/__init__.py", line 134, in __init__
    initialize_megatron(extra_args_provider=add_text_generate_args,
  File "/root/JittorLLMs/models/pangualpha/megatron/initialize.py", line 44, in initialize_megatron
    assert torch.cuda.is_available(), 'Megatron requires CUDA.'
AssertionError: Megatron requires CUDA.
root@JittorLLMs:~/JittorLLMs# 

A problem encountered on Mac M1

ImportError: dlopen(/Users/program_machine/.cache/jittor/jt1.3.7/clang14.0.0/py3.9.16/macOS-10.16-x8xb8/AppleM1Pro/default/jittor_core.cpython-39-darwin.so, 0x000A): symbol not found in flat namespace '_omp_get_max_threads'

cli_demo.py chatglm still fails after upgrading to 1.3.7.4

❯ python cli_demo.py chatglm
[i 0404 17:43:26.955718 72 compiler.py:955] Jittor(1.3.7.4) src: /home/final/miniforge3/envs/py310/lib/python3.10/site-packages/jittor
[i 0404 17:43:26.957501 72 compiler.py:956] g++ at /usr/bin/g++(7.5.0)
[i 0404 17:43:26.957551 72 compiler.py:957] cache_path: /home/final/.cache/jittor/jt1.3.7/g++7.5.0/py3.10.10/Linux-5.14.21-x35/IntelRCoreTMi7x56/default
[i 0404 17:43:26.959628 72 __init__.py:411] Found /usr/local/cuda/bin/nvcc(11.8.89) at /usr/local/cuda/bin/nvcc.
[i 0404 17:43:27.012448 72 __init__.py:411] Found gdb(12.1) at /usr/bin/gdb.
[i 0404 17:43:27.014705 72 __init__.py:411] Found addr2line(150100.7.40) at /usr/bin/addr2line.
[i 0404 17:43:27.150051 72 compiler.py:1010] cuda key:cu11.8.89_sm_
[i 0404 17:43:27.322970 72 __init__.py:227] Total mem: 15.33GB, using 5 procs for compiling.
[i 0404 17:43:27.393922 72 jit_compiler.cc:28] Load cc_path: /usr/bin/g++
[i 0404 17:43:27.471475 72 init.cc:62] Found cuda archs: []
[i 0404 17:43:27.569246 72 compile_extern.py:522] mpicc not found, distribution disabled.
Traceback (most recent call last):
  File "/home/final/miniforge3/envs/py310/lib/python3.10/site-packages/jittor/compile_extern.py", line 235, in setup_cuda_extern
    setup_cuda_lib(lib_name, extra_flags=link_cuda_extern)
  File "/home/final/miniforge3/envs/py310/lib/python3.10/site-packages/jittor/compile_extern.py", line 266, in setup_cuda_lib
    cuda_include_name = search_file([cuda_include, extra_include_path, "/usr/include"], lib_name+".h")
  File "/home/final/miniforge3/envs/py310/lib/python3.10/site-packages/jittor/compile_extern.py", line 32, in search_file
    LOG.f(f"file {name} not found in {dirs}")
  File "/home/final/miniforge3/envs/py310/lib/python3.10/site-packages/jittor_utils/__init__.py", line 104, in f
    def f(self, *msg): self._log('f', 0, *msg)
  File "/home/final/miniforge3/envs/py310/lib/python3.10/site-packages/jittor_utils/__init__.py", line 89, in _log
    cc.log(fileline, level, verbose, msg)
RuntimeError: [f 0404 17:43:27.615175 72 compile_extern.py:32] file cudnn.h not found in ['/usr/local/cuda/include', '/usr/local/cuda/targets/x86_64-linux/include', '/usr/include']

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/final/workspace/JittorLLMs/cli_demo.py", line 8, in <module>
    model = models.get_model(args)
  File "/home/final/workspace/JittorLLMs/models/__init__.py", line 38, in get_model
    globals()[f"get_{model_name}"]()
  File "/home/final/workspace/JittorLLMs/models/util.py", line 51, in get_chatglm
    new_path.append(download_fromhub(f"jittorhub://{f}", tdir="chat-glm"))
  File "/home/final/workspace/JittorLLMs/models/util.py", line 5, in download_fromhub
    import jittor as jt
  File "/home/final/miniforge3/envs/py310/lib/python3.10/site-packages/jittor/__init__.py", line 25, in <module>
    from . import compile_extern
  File "/home/final/miniforge3/envs/py310/lib/python3.10/site-packages/jittor/compile_extern.py", line 596, in <module>
    setup_cuda_extern()
  File "/home/final/miniforge3/envs/py310/lib/python3.10/site-packages/jittor/compile_extern.py", line 247, in setup_cuda_extern
    LOG.f(msg)
  File "/home/final/miniforge3/envs/py310/lib/python3.10/site-packages/jittor_utils/__init__.py", line 104, in f
    def f(self, *msg): self._log('f', 0, *msg)
  File "/home/final/miniforge3/envs/py310/lib/python3.10/site-packages/jittor_utils/__init__.py", line 89, in _log
    cc.log(fileline, level, verbose, msg)
RuntimeError: [f 0404 17:43:27.615244 72 compile_extern.py:247] CUDA found but cudnn is not loaded:
Develop version of CUDNN not found,
please refer to CUDA offical tar file installation:
https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#installlinux-tar
or you can let jittor install cuda and cudnn for you:
>>> python3.10 -m jittor_utils.install_cuda

Current versions:

Python3.10
jittor == 1.3.7.4
jtorch == 0.1.3

RuntimeError when running

(jittor) andy@ai:/code/source/ai/JittorLLMs$ python cli_demo.py chatglm
[i 0409 19:39:54.648478 12 compiler.py:955] Jittor(1.3.7.12) src: /code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor
[i 0409 19:39:54.652741 12 compiler.py:956] g++ at /usr/bin/g++(11.3.0)
[i 0409 19:39:54.652926 12 compiler.py:957] cache_path: /home/andy/.cache/jittor/jt1.3.7/g++11.3.0/py3.7.16/Linux-5.19.0-3x69/IntelRCoreTMi5xda/default
[i 0409 19:39:54.669861 12 install_cuda.py:93] cuda_driver_version: [11, 8]
[i 0409 19:39:54.670362 12 install_cuda.py:81] restart /code/linux/anaconda3/envs/jittor/bin/python ['cli_demo.py', 'chatglm']
[i 0409 19:39:54.814949 88 compiler.py:955] Jittor(1.3.7.12) src: /code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor
[i 0409 19:39:54.818949 88 compiler.py:956] g++ at /usr/bin/g++(11.3.0)
[i 0409 19:39:54.819099 88 compiler.py:957] cache_path: /home/andy/.cache/jittor/jt1.3.7/g++11.3.0/py3.7.16/Linux-5.19.0-3x69/IntelRCoreTMi5xda/default
[i 0409 19:39:54.835834 88 install_cuda.py:93] cuda_driver_version: [11, 8]
[i 0409 19:39:54.840973 88 init.py:411] Found /home/andy/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/bin/nvcc(11.2.152) at /home/andy/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/bin/nvcc.
[i 0409 19:39:54.900603 88 init.py:411] Found gdb(12.1) at /usr/bin/gdb.
[i 0409 19:39:54.905797 88 init.py:411] Found addr2line(2.38) at /usr/bin/addr2line.
[i 0409 19:39:55.047044 88 compiler.py:1010] cuda key:cu11.2.152_sm_75
[i 0409 19:39:55.281029 88 init.py:227] Total mem: 31.29GB, using 10 procs for compiling.
/usr/include/stdio.h(189): error: attribute "malloc" does not take arguments

/usr/include/stdio.h(201): error: attribute "malloc" does not take arguments

/usr/include/stdio.h(223): error: attribute "malloc" does not take arguments

/usr/include/stdio.h(260): error: attribute "malloc" does not take arguments

/usr/include/stdio.h(285): error: attribute "malloc" does not take arguments

/usr/include/stdio.h(294): error: attribute "malloc" does not take arguments

/usr/include/stdio.h(303): error: attribute "malloc" does not take arguments

/usr/include/stdio.h(309): error: attribute "malloc" does not take arguments

/usr/include/stdio.h(315): error: attribute "malloc" does not take arguments

/usr/include/stdio.h(830): error: attribute "malloc" does not take arguments

/usr/include/stdlib.h(566): error: attribute "malloc" does not take arguments

/usr/include/stdlib.h(570): error: attribute "malloc" does not take arguments

/usr/include/stdlib.h(799): error: attribute "malloc" does not take arguments

/usr/include/c++/11/type_traits(1406): error: type name is not allowed

/usr/include/c++/11/type_traits(1406): error: type name is not allowed

/usr/include/c++/11/type_traits(1406): error: identifier "__is_same" is undefined

/usr/include/wchar.h(155): error: attribute "malloc" does not take arguments

/usr/include/wchar.h(582): error: attribute "malloc" does not take arguments

/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/src/misc/cstr.h(19): error: no instance of overloaded function "std::unique_ptr<_Tp [], _Dp>::reset [with _Tp=char, _Dp=std::default_delete<char []>]" matches the argument list
argument types are: (char *)
object type is: jittor::unique_ptr<char []>

/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/src/misc/cstr.h(25): error: no instance of overloaded function "std::unique_ptr<_Tp [], _Dp>::reset [with _Tp=char, _Dp=std::default_delete<char []>]" matches the argument list
argument types are: (char *)
object type is: jittor::unique_ptr<char []>

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const long, std::is_same<int, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long, _Ret=int, _CharT=char, _Base=]"
/usr/include/c++/11/bits/basic_string.h(6620): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const long, std::is_same<long, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long, _Ret=long, _CharT=char, _Base=]"
/usr/include/c++/11/bits/basic_string.h(6625): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const unsigned long, std::is_same<unsigned long, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=unsigned long, _Ret=unsigned long, _CharT=char, _Base=]"
/usr/include/c++/11/bits/basic_string.h(6630): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const long long, std::is_same<long long, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long long, _Ret=long long, _CharT=char, _Base=]"
/usr/include/c++/11/bits/basic_string.h(6635): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const unsigned long long, std::is_same<unsigned long long, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=unsigned long long, _Ret=unsigned long long, _CharT=char, _Base=]"
/usr/include/c++/11/bits/basic_string.h(6640): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const float, std::is_same<float, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=float, _Ret=float, _CharT=char, _Base=<>]"
/usr/include/c++/11/bits/basic_string.h(6646): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const double, std::is_same<double, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=double, _Ret=double, _CharT=char, _Base=<>]"
/usr/include/c++/11/bits/basic_string.h(6650): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const long double, std::is_same<long double, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long double, _Ret=long double, _CharT=char, _Base=<>]"
/usr/include/c++/11/bits/basic_string.h(6654): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const long, std::is_same<int, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long, _Ret=int, _CharT=wchar_t, _Base=]"
/usr/include/c++/11/bits/basic_string.h(6751): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const long, std::is_same<long, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long, _Ret=long, _CharT=wchar_t, _Base=]"
/usr/include/c++/11/bits/basic_string.h(6756): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const unsigned long, std::is_same<unsigned long, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=unsigned long, _Ret=unsigned long, _CharT=wchar_t, _Base=]"
/usr/include/c++/11/bits/basic_string.h(6761): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const long long, std::is_same<long long, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long long, _Ret=long long, _CharT=wchar_t, _Base=]"
/usr/include/c++/11/bits/basic_string.h(6766): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const unsigned long long, std::is_same<unsigned long long, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=unsigned long long, _Ret=unsigned long long, _CharT=wchar_t, _Base=]"
/usr/include/c++/11/bits/basic_string.h(6771): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const float, std::is_same<float, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=float, _Ret=float, _CharT=wchar_t, _Base=<>]"
/usr/include/c++/11/bits/basic_string.h(6777): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const double, std::is_same<double, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=double, _Ret=double, _CharT=wchar_t, _Base=<>]"
/usr/include/c++/11/bits/basic_string.h(6781): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
argument types are: (const long double, std::is_same<long double, int>)
detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long double, _Ret=long double, _CharT=wchar_t, _Base=<>]"
/usr/include/c++/11/bits/basic_string.h(6785): here

36 errors detected in the compilation of "/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/src/misc/nan_checker.cu".
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/code/linux/anaconda3/envs/jittor/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor_utils/__init__.py", line 197, in do_compile
return cc.cache_compile(cmd, cache_path, jittor_path)
RuntimeError: [f 0409 19:39:57.996372 88 log.cc:608] Check failed ret(256) == 0(0) Run cmd failed: "/home/andy/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/bin/nvcc" "/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/src/misc/nan_checker.cu" -std=c++14 -Xcompiler -fPIC -Xcompiler -march=native -Xcompiler -fdiagnostics-color=always -I"/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/src" -I/code/linux/anaconda3/envs/jittor/include/python3.7m -I/code/linux/anaconda3/envs/jittor/include/python3.7m -DHAS_CUDA -DIS_CUDA -I"/home/andy/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/include" -I"/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/extern/cuda/inc" -I"/home/andy/.cache/jittor/jt1.3.7/g++11.3.0/py3.7.16/Linux-5.19.0-3x69/IntelRCoreTMi5xda/default/cu11.2.152_sm_75" -O2 -c -o "/home/andy/.cache/jittor/jt1.3.7/g++11.3.0/py3.7.16/Linux-5.19.0-3x69/IntelRCoreTMi5xda/default/cu11.2.152_sm_75/obj_files/nan_checker.cu.o" -x cu --cudart=shared -ccbin="/usr/bin/g++" -w -I"/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/extern/cuda/inc"
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "cli_demo.py", line 8, in <module>
model = models.get_model(args)
File "/code/source/ai/JittorLLMs/models/__init__.py", line 38, in get_model
globals()[f"get_{model_name}"](args)
File "/code/source/ai/JittorLLMs/models/util.py", line 51, in get_chatglm
new_path.append(download_fromhub(f"jittorhub://{f}", tdir="chat-glm"))
File "/code/source/ai/JittorLLMs/models/util.py", line 5, in download_fromhub
import jittor as jt
File "/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/__init__.py", line 18, in <module>
from . import compiler
File "/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/compiler.py", line 1353, in <module>
compile(cc_path, cc_flags+opt_flags, files, 'jittor_core'+extension_suffix)
File "/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/compiler.py", line 151, in compile
jit_utils.run_cmds(cmds, cache_path, jittor_path, "Compiling "+base_output)
File "/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor_utils/__init__.py", line 251, in run_cmds
for i,_ in enumerate(p.imap_unordered(do_compile, cmds)):
File "/code/linux/anaconda3/envs/jittor/lib/python3.7/multiprocessing/pool.py", line 748, in next
raise value
RuntimeError: [f 0409 19:39:57.996372 88 log.cc:608] Check failed ret(256) == 0(0) Run cmd failed: "/home/andy/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/bin/nvcc" "/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/src/misc/nan_checker.cu" -std=c++14 -Xcompiler -fPIC -Xcompiler -march=native -Xcompiler -fdiagnostics-color=always -I"/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/src" -I/code/linux/anaconda3/envs/jittor/include/python3.7m -I/code/linux/anaconda3/envs/jittor/include/python3.7m -DHAS_CUDA -DIS_CUDA -I"/home/andy/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/include" -I"/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/extern/cuda/inc" -I"/home/andy/.cache/jittor/jt1.3.7/g++11.3.0/py3.7.16/Linux-5.19.0-3x69/IntelRCoreTMi5xda/default/cu11.2.152_sm_75" -O2 -c -o "/home/andy/.cache/jittor/jt1.3.7/g++11.3.0/py3.7.16/Linux-5.19.0-3x69/IntelRCoreTMi5xda/default/cu11.2.152_sm_75/obj_files/nan_checker.cu.o" -x cu --cudart=shared -ccbin="/usr/bin/g++" -w -I"/code/linux/anaconda3/envs/jittor/lib/python3.7/site-packages/jittor/extern/cuda/inc"

compile operators failed
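
The repeated `attribute "malloc" does not take arguments` errors in the log above are a known incompatibility: nvcc from CUDA 11.2 cannot parse the glibc/libstdc++ headers shipped alongside g++ 11 (GCC 11 host-compiler support only arrived in CUDA 11.4.x). Under that assumption, the sketch below is a simple version check one could run before letting Jittor compile CUDA operators; the function names are illustrative, not part of Jittor's API.

```python
import re

# Heuristic sketch (assumption): CUDA 11.x before 11.4 cannot use
# g++ >= 11 as the host compiler, which matches the failing setup
# in the log above (g++ 11.3.0 with CUDA 11.2).

def gxx_major(version_output: str) -> int:
    """Return the major version parsed from `g++ --version` output."""
    match = re.search(r"(\d+)\.\d+\.\d+", version_output)
    return int(match.group(1)) if match else 0

def nvcc_host_compiler_ok(gxx_version_output: str, cuda_minor: int) -> bool:
    """True if this CUDA 11.x / g++ pairing is expected to work."""
    return not (gxx_major(gxx_version_output) >= 11 and cuda_minor < 4)

# The failing machine trips the heuristic:
print(nvcc_host_compiler_ok("g++ (Ubuntu 11.3.0) 11.3.0", 2))  # → False
```

If the pairing is bad, reported workarounds are to install an older compiler and point Jittor at it (e.g. `export cc_path=/usr/bin/g++-9`, assuming Jittor honors that variable on this version) or to upgrade the CUDA toolchain, e.g. via `python -m jittor_utils.install_cuda`.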

The web UI and the CLI both start, but once text is entered and inference begins, Jittor compiles its operators and fails.
GPU: RTX 3060 laptop 6G / CPU: Ryzen 7 5800H / Windows 11 / 16G RAM
Here is the output:

Compiling Operators(14/14) used: 4.21s eta: 0s
[e 0407 01:20:46.873000 72 log.cc:565] cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
code__IN_SIZE_1__in0_dim_4__in0_type_float32__OUT_SIZE_1__out0_dim_4__out0_type_float32__H___hash_3febe3994cb3e308_op.cc
E:\program_files\anaconda\include\cuda\std\detail/libcxx/include/type_traits(4842): error: identifier "__builtin_is_constant_evaluated" is undefined

E:\program_files\anaconda\include\cuda\std\detail/libcxx/include/type_traits(4847): error: identifier "__builtin_is_constant_evaluated" is undefined

2 errors detected in the compilation of "C:/Users/username/.cache/jittor/jt1.3.7/cl/py3.10.9/Windows-10-10.x13/AMDRyzen75800Hxb1/main/cu11.2.67/jit/code__IN_SIZE_1__in0_dim_4__in0_type_float32__OUT_SIZE_1__out0_dim_4__out0_type_float32__H___hash_3febe3994cb3e308_op.cc".
code__IN_SIZE_1__in0_dim_4__in0_type_float32__OUT_SIZE_1__out0_dim_4__out0_type_float32__H___hash_3febe3994cb3e308_op.cc

Traceback (most recent call last):
File "E:\SAM_model\jittor\JittorLLMs\cli_demo.py", line 9, in <module>
model.chat()
File "E:\SAM_model\jittor\JittorLLMs\models\chatglm\__init__.py", line 36, in chat
for response, history in self.model.stream_chat(self.tokenizer, text, history=history):
File "C:\Users\username/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1259, in stream_chat
for outputs in self.stream_generate(**input_ids, **gen_kwargs):
File "C:\Users\username/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1336, in stream_generate
outputs = self(
File "E:\SAM_model\jittor\JittorLLMs\venv\lib\site-packages\jtorch\nn\__init__.py", line 16, in __call__
return self.forward(*args, **kw)
File "C:\Users\username/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1138, in forward
transformer_outputs = self.transformer(
File "E:\SAM_model\jittor\JittorLLMs\venv\lib\site-packages\jtorch\nn\__init__.py", line 16, in __call__
return self.forward(*args, **kw)
File "C:\Users\username/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 973, in forward
layer_ret = layer(
File "E:\SAM_model\jittor\JittorLLMs\venv\lib\site-packages\jtorch\nn\__init__.py", line 16, in __call__
return self.forward(*args, **kw)
File "C:\Users\username/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 614, in forward
attention_outputs = self.attention(
File "E:\SAM_model\jittor\JittorLLMs\venv\lib\site-packages\jtorch\nn\__init__.py", line 16, in __call__
return self.forward(*args, **kw)
File "C:\Users\username/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 454, in forward
cos, sin = self.rotary_emb(q1, seq_len=position_ids.max() + 1)
File "E:\SAM_model\jittor\JittorLLMs\venv\lib\site-packages\jtorch\nn\__init__.py", line 16, in __call__
return self.forward(*args, **kw)
File "C:\Users\username/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 202, in forward
t = torch.arange(seq_len, device=x.device, dtype=self.inv_freq.dtype)
File "E:\SAM_model\jittor\JittorLLMs\venv\lib\site-packages\jtorch\__init__.py", line 31, in inner
ret = func(*args, **kw)
File "E:\SAM_model\jittor\JittorLLMs\venv\lib\site-packages\jittor\misc.py", line 809, in arange
if isinstance(start, jt.Var): start = start.item()
RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.item)).

Types of your inputs are:
self = Var,
args = (),

The function declarations are:
ItemData item()

Failed reason:[f 0407 01:20:46.875000 72 parallel_compiler.cc:330] Error happend during compilation:
[Error] source file location:C:\Users\username\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x13\AMDRyzen75800Hxb1\main\cu11.2.67\jit\code__IN_SIZE_1__in0_dim_4__in0_type_float32__OUT_SIZE_1__out0_dim_4__out0_type_float32__H___hash_3febe3994cb3e308_op.cc
Compile operator(1/7)failed:Op(12536:0:1:1:i1:o1:s0,code->12537)

Reason: [f 0407 01:20:46.873000 72 log.cc:608] Check failed ret(1) == 0(0) Run cmd failed: "C:\Users\username\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin\nvcc.exe" "C:\Users\username\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x13\AMDRyzen75800Hxb1\main\cu11.2.67\jit\code__IN_SIZE_1__in0_dim_4__in0_type_float32__OUT_SIZE_1__out0_dim_4__out0_type_float32__H___hash_3febe3994cb3e308_op.cc" -shared -L"E:\program_files\anaconda\libs" -lpython310 -Xcompiler -EHa -Xcompiler -MD -Xcompiler -utf-8 -I"C:\Users\username\.cache\jittor\msvc\VC\include" -I"C:\Users\username\.cache\jittor\msvc\win10_kits\include\ucrt" -I"C:\Users\username\.cache\jittor\msvc\win10_kits\include\shared" -I"C:\Users\username\.cache\jittor\msvc\win10_kits\include\um" -DNOMINMAX -L"C:\Users\username\.cache\jittor\msvc\VC\lib" -L"C:\Users\username\.cache\jittor\msvc\win10_kits\lib\um\x64" -L"C:\Users\username\.cache\jittor\msvc\win10_kits\lib\ucrt\x64" -I"e:\sam_model\jittor\jittorllms\venv\lib\site-packages\jittor\src" -I"E:\program_files\anaconda\include" -DHAS_CUDA -DIS_CUDA -I"C:\Users\username\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\include" -I"e:\sam_model\jittor\jittorllms\venv\lib\site-packages\jittor\extern\cuda\inc" -lcudart -L"C:\Users\username\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\lib\x64" -L"C:\Users\username\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin" -I"C:\Users\username\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x13\AMDRyzen75800Hxb1\main\cu11.2.67" -L"C:\Users\username\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x13\AMDRyzen75800Hxb1\main\cu11.2.67" -L"C:\Users\username\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x13\AMDRyzen75800Hxb1\main" -l"jit_utils_core.cp310-win_amd64" -l"jittor_core.cp310-win_amd64" -x cu --cudart=shared -ccbin="C:\Users\username\.cache\jittor\msvc\VC_____\bin\cl.exe" --use_fast_math -w -I"e:\sam_model\jittor\jittorllms\venv\lib\site-packages\jittor\extern/cuda/inc" -arch=compute_86 -code=sm_86 -o 
"C:\Users\username\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x13\AMDRyzen75800Hxb1\main\cu11.2.67\jit\code__IN_SIZE_1__in0_dim_4__in0_type_float32__OUT_SIZE_1__out0_dim_4__out0_type_float32__H___hash_3febe3994cb3e308_op.dll" -Xlinker -EXPORT:"?jit_run@CodeOp@jittor@@QEAAXXZ"

Could an option be exposed to change cache_dir?

As a Windows user with only 5 GB left on the C: drive, I'm in tears; the disk fills up before the CUDA download even finishes.
I looked at the source, and setting the cache_dir environment variable still has no effect.
Could a parameter for this be provided? 😢
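
The project README notes that the cache location can be redirected with the `JITTOR_HOME` environment variable. A minimal sketch of using it is below; it must be set *before* `import jittor`, since the path is resolved at import time, and the helper name `set_jittor_cache` is mine, not part of any API.

```python
import os

# Minimal sketch: redirect Jittor's cache (downloads, compiled operators)
# to a drive with free space by setting JITTOR_HOME before importing
# jittor. The helper name is illustrative, not a Jittor API.

def set_jittor_cache(path: str) -> str:
    os.makedirs(path, exist_ok=True)   # make sure the target directory exists
    os.environ["JITTOR_HOME"] = path   # must happen before `import jittor`
    return os.environ["JITTOR_HOME"]

# Usage (before any `import jittor`):
# set_jittor_cache(r"D:\jittor_cache")
# import jittor  # caches and compiles under D:\jittor_cache
```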

[i 0403 20:52:58.337000 36 compile_extern.py:522] mpicc not found, distribution disabled.

Traceback (most recent call last):
File "E:\JittorLLMs\cli_demo.py", line 8, in <module>
model = models.get_model(args)
File "E:\JittorLLMs\models\__init__.py", line 42, in get_model
return module.get_model(args)
File "E:\JittorLLMs\models\chatglm\__init__.py", line 40, in get_model
return ChatGLMMdoel(args)
File "E:\JittorLLMs\models\chatglm\__init__.py", line 19, in __init__
self.tokenizer = AutoTokenizer.from_pretrained(os.path.dirname(__file__), trust_remote_code=True)
File "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 663, in from_pretrained
tokenizer_class = get_class_from_dynamic_module(
File "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 388, in get_class_from_dynamic_module
final_module = get_cached_module_file(
File "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 269, in get_cached_module_file
modules_needed = check_imports(resolved_module_file)
File "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 134, in check_imports
importlib.import_module(imp)
File "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\icetk\__init__.py", line 1, in <module>
from .ice_tokenizer import IceTokenizer
File "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\icetk\ice_tokenizer.py", line 9, in <module>
import torch
File "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\__init__.py", line 5, in <module>
import jtorch
File "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\jtorch\__init__.py", line 10, in <module>
import jtorch.compiler
File "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\jtorch\compiler.py", line 25, in <module>
jt.compiler.compile(
File "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\jittor\compiler.py", line 156, in compile
do_compile(fix_cl_flags(cmd))
File "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\jittor\compiler.py", line 89, in do_compile
return jit_utils.cc.cache_compile(cmd, cache_path, jittor_path)
RuntimeError: [f 0403 20:52:59.986000 36 cache_compile.cc:266] Check failed: src.size() Something wrong... Could you please report this issue?
Source read failed: E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\jtorch/src\data.obj cmd: "e:\users\administrator\appdata\local\programs\python\python310\python.exe" "e:\users\administrator\appdata\local\programs\python\python310\lib\site-packages\jittor\utils\dumpdef.py" "C:\Users\Administrator\.cache\jittor\jt1.3.7\cl\py3.10.10\Windows-10-10.x91\IntelRXeonRCPUx6b\default\jtorch_objs\pyjt_jtorch_core.cc.obj" "C:\Users\Administrator\.cache\jittor\jt1.3.7\cl\py3.10.10\Windows-10-10.x91\IntelRXeonRCPUx6b\default\jtorch_objs\pyjt_all.cc.obj" "E:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\jtorch/src\data.obj" -Fo: "C:\Users\Administrator\.cache\jittor\jt1.3.7\cl\py3.10.10\Windows-10-10.x91\IntelRXeonRCPUx6b\default\jtorch_core.cp310-win_amd64.pyd.def"

Speed comparison data promised in the README is nowhere to be found

The README on the project homepage contains the following:

**Below is a comparison of resource consumption and speed under different hardware configurations.** It shows that with sufficient GPU memory, JittorLLMs outperforms comparable frameworks, and that even with insufficient GPU memory, or no GPU at all, JittorLLMs still runs at a reasonable speed.

But the comparison data itself never actually appears; it would be useful to have as a reference.

Cannot run on macOS M1

(jittor) ➜ JittorLLMs git:(main) python web_demo.py chatglm
[i 0415 15:22:37.083849 20 compiler.py:955] Jittor(1.3.7.13) src: /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor
[i 0415 15:22:37.099079 20 compiler.py:956] clang at /usr/bin/clang++(14.0.3)
[i 0415 15:22:37.099178 20 compiler.py:957] cache_path: /Users/wilson/.cache/jittor/jt1.3.7/clang14.0.3/py3.8.16/macOS-13.3.1-ax07/AppleM1Max/default
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/cache_compile.cc:12:
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/misc/hash.h:8:
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/common.h:10:
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/log.h:138:18: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
send_log(move(out), level, verbose);
^
std::
1 warning generated.
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/log.cc:14:
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/log.h:138:18: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
send_log(move(out), level, verbose);
^
std::
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/log.cc:195:12: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
return move(logs);
^
std::
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/log.cc:376:19: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
vprefix_map = move(new_map);
^
std::
3 warnings generated.
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/tracer.cc:11:
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/tracer.h:8:
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/common.h:10:
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/log.h:138:18: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
send_log(move(out), level, verbose);
^
std::
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/tracer.cc:49:9: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]
sprintf(pid_buf, "%d", getpid());
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:188:1: note: 'sprintf' has been explicitly marked deprecated here
__deprecated_msg("This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead.")
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:215:48: note: expanded from macro '__deprecated_msg'
#define __deprecated_msg(_msg) __attribute__((deprecated(_msg)))
^
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/tracer.cc:145:9: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]
sprintf(pid_buf, "%d", getpid());
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:188:1: note: 'sprintf' has been explicitly marked deprecated here
__deprecated_msg("This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead.")
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:215:48: note: expanded from macro '__deprecated_msg'
#define __deprecated_msg(_msg) __attribute__((deprecated(_msg)))
^
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/tracer.cc:147:9: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]
sprintf(st_buf, "set backtrace limit %d", trace_depth);
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:188:1: note: 'sprintf' has been explicitly marked deprecated here
__deprecated_msg("This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead.")
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:215:48: note: expanded from macro '__deprecated_msg'
#define __deprecated_msg(_msg) __attribute__((deprecated(_msg)))
^
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/tracer.cc:213:13: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]
sprintf(syscom,"%s %p -f -p -i -e %.*s", addr2line_path.c_str(), trace[i], p, messages[i]);
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:188:1: note: 'sprintf' has been explicitly marked deprecated here
__deprecated_msg("This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead.")
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:215:48: note: expanded from macro '__deprecated_msg'
#define __deprecated_msg(_msg) __attribute__((deprecated(_msg)))
^
5 warnings generated.
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/jit_utils.cc:7:
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/cache_compile.h:8:
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/common.h:10:
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/log.h:138:18: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
send_log(move(out), level, verbose);
^
std::
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/jit_utils.cc:8:
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/pyjt/py_converter.h:17:
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/profiler/simple_profiler.h:48:48: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
inline SimpleProfiler(string&& name): name(move(name)), cnt(0), total_ns(0), sum(0) {}
^
std::
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/jit_utils.cc:8:
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/pyjt/py_converter.h:358:16: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
return move(fetch_sync({ptr}).at(0));
^
std::
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/jit_utils.cc:505:37: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
return to_py_object(move(ret));
^
std::
4 warnings generated.
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/str_utils.cc:8:
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/str_utils.h:8:
In file included from /Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/common.h:10:
/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/log.h:138:18: warning: unqualified call to 'std::move' [-Wunqualified-std-cast-call]
send_log(move(out), level, verbose);
^
std::
1 warning generated.
ld: library not found for -lomp
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Traceback (most recent call last):
  File "web_demo.py", line 26, in <module>
    model = models.get_model(args)
  File "/Users/wilson/pythonProject/ai/JittorLLMs/models/__init__.py", line 38, in get_model
    globals()[f"get_{model_name}"]()
  File "/Users/wilson/pythonProject/ai/JittorLLMs/models/util.py", line 51, in get_chatglm
    new_path.append(download_fromhub(f"jittorhub://{f}", tdir="chat-glm"))
  File "/Users/wilson/pythonProject/ai/JittorLLMs/models/util.py", line 5, in download_fromhub
    import jittor as jt
  File "/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/__init__.py", line 18, in <module>
    from . import compiler
  File "/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/compiler.py", line 1189, in <module>
    check_cache_compile()
  File "/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/compiler.py", line 884, in check_cache_compile
    recompile = compile(cc_path, cc_flags+f" {opt_flags} ", files, jit_utils.cache_path+'/jit_utils_core'+extension_suffix, True)
  File "/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/compiler.py", line 126, in compile
    return do_compile(fix_cl_flags(cmd))
  File "/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/compiler.py", line 91, in do_compile
    run_cmd(cmd)
  File "/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor_utils/__init__.py", line 188, in run_cmd
    raise Exception(err_msg)
Exception: Run cmd failed: "/usr/bin/clang++" "/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/cache_compile.cc" "/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/log.cc" "/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/tracer.cc" "/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/jit_utils.cc" "/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src/utils/str_utils.cc" -Wall -Wno-unknown-pragmas -std=c++14 -fPIC -mcpu=apple-m2 -fdiagnostics-color=always -undefined dynamic_lookup -lomp -L/Users/wilson/opt/anaconda3/envs/jittor/lib -Wl,-rpath,/Users/wilson/opt/anaconda3/envs/jittor/lib -lstdc++ -ldl -shared -I"/Users/wilson/opt/anaconda3/envs/jittor/lib/python3.8/site-packages/jittor/src" -I/Users/wilson/opt/anaconda3/envs/jittor/include/python3.8 -I/Users/wilson/opt/anaconda3/envs/jittor/include/python3.8 -O2 -o "/Users/wilson/.cache/jittor/jt1.3.7/clang14.0.3/py3.8.16/macOS-13.3.1-ax07/AppleM1Max/default/jit_utils_core.cpython-38-darwin.so"
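
The `ld: library not found for -lomp` line above is the actual failure: clang on macOS cannot find the OpenMP runtime that Jittor links against. A common workaround — a sketch only, assuming Homebrew is installed, and noting that the prefix printed by `brew --prefix libomp` varies by machine — is:

```shell
# Install the OpenMP runtime that the -lomp flag refers to
brew install libomp
# Expose the Homebrew copy to the linker before re-running the demo
export LIBRARY_PATH="$(brew --prefix libomp)/lib:$LIBRARY_PATH"
python web_demo.py chatglm
```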

RuntimeError

RuntimeError: [f 0406 15:14:28.485000 56 cache_compile.cc:266] Check failed: src.size() Something wrong... Could you please report this issue?
Source read failed: C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\jtorch/src\data.obj cmd: "c:\users\administrator\appdata\local\programs\python\python310\python.exe" "c:\users\administrator\appdata\local\programs\python\python310\lib\site-packages\jittor\utils\dumpdef.py" "C:\Users\Administrator\.cache\jittor\jt1.3.7\cl\py3.10.10\Windows-10-10.x64\IntelRXeonRCPUx84\default\cu11.2.67\jtorch_objs\pyjt_jtorch_core.cc.obj" "C:\Users\Administrator\.cache\jittor\jt1.3.7\cl\py3.10.10\Windows-10-10.x64\IntelRXeonRCPUx84\default\cu11.2.67\jtorch_objs\pyjt_all.cc.obj" "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\jtorch/src\data.obj" -Fo: "C:\Users\Administrator\.cache\jittor\jt1.3.7\cl\py3.10.10\Windows-10-10.x64\IntelRXeonRCPUx84\default\cu11.2.67\jtorch_core.cp310-win_amd64.pyd.def"

CLI/API fail to run on Windows

CLI error:

python .\cli_demo.py chatglm
Traceback (most recent call last):
  File "E:\PycharmProjects\JittorLLMs\.venv\lib\site-packages\jittor_utils\lock.py", line 2, in <module>
    import fcntl
ModuleNotFoundError: No module named 'fcntl'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\PycharmProjects\JittorLLMs\.venv\lib\site-packages\jittor_utils\lock.py", line 6, in <module>
    import win32file
ImportError: DLL load failed while importing win32file: The specified procedure could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\PycharmProjects\JittorLLMs\cli_demo.py", line 8, in <module>
    model = models.get_model(args)
  File "E:\PycharmProjects\JittorLLMs\models\__init__.py", line 38, in get_model
    globals()[f"get_{model_name}"]()
  File "E:\PycharmProjects\JittorLLMs\models\util.py", line 51, in get_chatglm
    new_path.append(download_fromhub(f"jittorhub://{f}", tdir="chat-glm"))
  File "E:\PycharmProjects\JittorLLMs\models\util.py", line 5, in download_fromhub
    import jittor as jt
  File "E:\PycharmProjects\JittorLLMs\.venv\lib\site-packages\jittor\__init__.py", line 13, in <module>
    from jittor_utils import lock
  File "E:\PycharmProjects\JittorLLMs\.venv\lib\site-packages\jittor_utils\lock.py", line 10, in <module>
    raise Exception("""pywin32 package not found, please install it.
Exception: pywin32 package not found, please install it.
>>> python3.x -m pip install pywin32
If conda is used, please install with command:
>>> conda install pywin32

API error:

 python .\api.py chatglm
Traceback (most recent call last):
  File "E:\PycharmProjects\JittorLLMs\.venv\lib\site-packages\jittor_utils\lock.py", line 2, in <module>
    import fcntl
ModuleNotFoundError: No module named 'fcntl'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\PycharmProjects\JittorLLMs\.venv\lib\site-packages\jittor_utils\lock.py", line 6, in <module>
    import win32file
ImportError: DLL load failed while importing win32file: The specified procedure could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\PycharmProjects\JittorLLMs\api.py", line 5, in <module>
    import torch
  File "E:\PycharmProjects\JittorLLMs\.venv\lib\site-packages\torch\__init__.py", line 4, in <module>
    import jittor as jt
  File "E:\PycharmProjects\JittorLLMs\.venv\lib\site-packages\jittor\__init__.py", line 13, in <module>
    from jittor_utils import lock
  File "E:\PycharmProjects\JittorLLMs\.venv\lib\site-packages\jittor_utils\lock.py", line 10, in <module>
    raise Exception("""pywin32 package not found, please install it.
Exception: pywin32 package not found, please install it.
>>> python3.x -m pip install pywin32
If conda is used, please install with command:
>>> conda install pywin32

pywin32 is already installed, though:

python -m pip install pywin32
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: pywin32 in e:\pycharmprojects\jittorllms\.venv\lib\site-packages (306)
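
`DLL load failed while importing win32file` with pywin32 already present usually means the pywin32 system DLLs were never registered for this interpreter. A hedged sketch of a fix (the `Scripts` path assumes the `.venv` layout shown in the traceback above):

```shell
# Reinstall into the exact interpreter that runs cli_demo.py
python -m pip install --force-reinstall pywin32
# pywin32 ships a post-install script that copies its DLLs into place
python .venv\Scripts\pywin32_postinstall.py -install
```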

Question: how do I restore my environment to its state before installing the Jittor framework?

Hello!
I created and activated a new environment for installing Jittor:

conda create -n jittor
conda activate jittor
and ran the following inside that environment:
git clone https://github.com/Jittor/JittorLLMs.git --depth 1
cd JittorLLMs
pip install -r requirements.txt -i https://pypi.jittor.org/simple -I

Now, when I switch back to the base environment, other projects I run are still based on the Jittor framework.
How can I resolve this?
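
A likely cause: `conda create -n jittor` with no `python=` spec creates an environment without its own interpreter, so `pip install` inside it falls through to the base environment, where JTorch's `torch` shim then shadows the real one. A sketch of an isolated setup and cleanup (the package names to uninstall are assumptions — check `pip list` in base first):

```shell
# Pin a Python version so the env gets its own interpreter and pip
conda create -n jittor python=3.8 -y
conda activate jittor
# `python -m pip` guarantees packages land in this env, not in base
python -m pip install -r requirements.txt -i https://pypi.jittor.org/simple -I

# Undo the accidental install in base
conda activate base
python -m pip uninstall -y jtorch jittor
```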

RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.sync)).

Why does launching web_demo.py chatglm keep failing with this error?

(jittor) D:\Jianan\Develop\github\JittorLLMs>python web_demo.py chatglm
[i 0407 16:09:30.146000 88 compiler.py:955] Jittor(1.3.7.12) src: d:\jianan\win\anaconda3\envs\jittor\lib\site-packages\jittor
[i 0407 16:09:30.220000 88 compiler.py:956] cl at D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\msvc\VC_____\bin\cl.exe(19.29.30133)
[i 0407 16:09:30.221000 88 compiler.py:957] cache_path: D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jt1.3.7\cl\py3.8.16\Windows-10-10.x04\IntelRCoreTMi7x3b\default
[i 0407 16:09:30.273000 88 install_cuda.py:93] cuda_driver_version: [12, 1, 0]
[i 0407 16:09:30.401000 88 init.py:411] Found D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin\nvcc.exe(11.2.67) at D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin\nvcc.exe.
[i 0407 16:09:30.560000 88 compiler.py:1010] cuda key:cu11.2.67
[i 0407 16:09:30.563000 88 init.py:227] Total mem: 15.92GB, using 5 procs for compiling.
[i 0407 16:09:37.958000 88 jit_compiler.cc:28] Load cc_path: D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\msvc\VC_____\bin\cl.exe
[i 0407 16:09:38.058000 88 init.cc:62] Found cuda archs: [50,]
[i 0407 16:09:38.174000 88 compile_extern.py:522] mpicc not found, distribution disabled.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 0%| | 0/8 [00:00<?, ?it/s][e 0407 16:10:24.366000 88 log.cc:565] cl : Command line warning D9025 : overriding '/Os' with '/Ot'
cl : Command line warning D9002 : ignoring unknown option '-Of'
cl : Command line warning D9002 : ignoring unknown option '-Oa'
getitem__Ti_float16__IDIM_1__ODIM_1__IV0_0__IO0_0__VS0_1__JIT_1__JIT_cpu_1__index_t_int32_hash_eaf0bc6f85e36896_op.cc
D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jt1.3.7\cl\py3.8.16\Windows-10-10.x04\IntelRCoreTMi7x3b\default\cu11.2.67\jit\getitem__Ti_float16__IDIM_1__ODIM_1__IV0_0__IO0_0__VS0_1__JIT_1__JIT_cpu_1__index_t_int32_hash_eaf0bc6f85e36896_op.cc(28): warning C4068: unknown pragma 'GCC'
D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jt1.3.7\cl\py3.8.16\Windows-10-10.x04\IntelRCoreTMi7x3b\default\cu11.2.67\jit\getitem__Ti_float16__IDIM_1__ODIM_1__IV0_0__IO0_0__VS0_1__JIT_1__JIT_cpu_1__index_t_int32_hash_eaf0bc6f85e36896_op.cc : fatal error C1083: Cannot open compiler generated file: '': Invalid argument

Loading checkpoint shards: 0%| | 0/8 [00:26<?, ?it/s]
Traceback (most recent call last):
  File "web_demo.py", line 26, in <module>
    model = models.get_model(args)
  File "D:\Jianan\Develop\github\JittorLLMs\models\__init__.py", line 42, in get_model
    return module.get_model(args)
  File "D:\Jianan\Develop\github\JittorLLMs\models\chatglm\__init__.py", line 48, in get_model
    return ChatGLMMdoel(args)
  File "D:\Jianan\Develop\github\JittorLLMs\models\chatglm\__init__.py", line 22, in __init__
    self.model = AutoModel.from_pretrained(os.path.dirname(__file__), trust_remote_code=True)
  File "D:\Jianan\Win\Anaconda3\envs\jittor\lib\site-packages\transformers\models\auto\auto_factory.py", line 459, in from_pretrained
    return model_class.from_pretrained(
  File "D:\Jianan\Win\Anaconda3\envs\jittor\lib\site-packages\transformers\modeling_utils.py", line 2478, in from_pretrained
    ) = cls._load_pretrained_model(
  File "D:\Jianan\Win\Anaconda3\envs\jittor\lib\site-packages\transformers\modeling_utils.py", line 2812, in _load_pretrained_model
    error_msgs += _load_state_dict_into_model(model_to_load, state_dict, start_prefix)
  File "D:\Jianan\Win\Anaconda3\envs\jittor\lib\site-packages\transformers\modeling_utils.py", line 491, in _load_state_dict_into_model
    load(model_to_load, state_dict, prefix=start_prefix)
  File "D:\Jianan\Win\Anaconda3\envs\jittor\lib\site-packages\transformers\modeling_utils.py", line 485, in load
    module._load_from_state_dict(*args)
  File "D:\Jianan\Win\Anaconda3\envs\jittor\lib\site-packages\jittor\__init__.py", line 1342, in _load_from_state_dict
    self.load_state_dict(state)
  File "D:\Jianan\Win\Anaconda3\envs\jittor\lib\site-packages\jtorch\__init__.py", line 104, in load_state_dict
    return super().load_state_dict(state_dict)
  File "D:\Jianan\Win\Anaconda3\envs\jittor\lib\site-packages\jittor\__init__.py", line 1333, in load_state_dict
    self.load_parameters(params)
  File "D:\Jianan\Win\Anaconda3\envs\jittor\lib\site-packages\jittor\__init__.py", line 1594, in load_parameters
    v.sync(False, False)
RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.sync)).

Types of your inputs are:
self = Var,
args = (bool, bool, ),

The function declarations are:
VarHolder* sync(bool device_sync = false, bool weak_sync = true)

Failed reason:[f 0407 16:10:24.462000 88 parallel_compiler.cc:330] Error happend during compilation:
[Error] source file location:D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jt1.3.7\cl\py3.8.16\Windows-10-10.x04\IntelRCoreTMi7x3b\default\cu11.2.67\jit\getitem__Ti_float16__IDIM_1__ODIM_1__IV0_0__IO0_0__VS0_1__JIT_1__JIT_cpu_1__index_t_int32_hash_eaf0bc6f85e36896_op.cc
Compile operator(0/1)failed:Op(6386:0:1:1:i1:o1:s0,getitem.bool->6387)

Reason: [f 0407 16:10:24.367000 88 log.cc:608] Check failed ret(1) == 0(0) Run cmd failed: "D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\msvc\VC_____\bin\cl.exe" "D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jt1.3.7\cl\py3.8.16\Windows-10-10.x04\IntelRCoreTMi7x3b\default\cu11.2.67\jit\getitem__Ti_float16__IDIM_1__ODIM_1__IV0_0__IO0_0__VS0_1__JIT_1__JIT_cpu_1__index_t_int32_hash_eaf0bc6f85e36896_op.cc" -Fe: "D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jt1.3.7\cl\py3.8.16\Windows-10-10.x04\IntelRCoreTMi7x3b\default\cu11.2.67\jit\getitem__Ti_float16__IDIM_1__ODIM_1__IV0_0__IO0_0__VS0_1__JIT_1__JIT_cpu_1__index_t_int32_hash_eaf0bc6f85e36896_op.dll" -std:c++17 -LD -EHa -MD -utf-8 -nologo -I"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\msvc\VC\include" -I"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\msvc\win10_kits\include\ucrt" -I"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\msvc\win10_kits\include\shared" -I"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\msvc\win10_kits\include\um" -DNOMINMAX -I"d:\jianan\win\anaconda3\envs\jittor\lib\site-packages\jittor\src" -I"d:\jianan\win\anaconda3\envs\jittor\include" -DHAS_CUDA -DIS_CUDA -I"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jtcuda\cuda11.2_cudnn8_win\include" -I"d:\jianan\win\anaconda3\envs\jittor\lib\site-packages\jittor\extern\cuda\inc" -I"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jt1.3.7\cl\py3.8.16\Windows-10-10.x04\IntelRCoreTMi7x3b\default\cu11.2.67" -Ofast -link -LIBPATH:"d:\jianan\win\anaconda3\envs\jittor\libs" python38.lib -LIBPATH:"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\msvc\VC\lib" -LIBPATH:"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\msvc\win10_kits\lib\um\x64" -LIBPATH:"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\msvc\win10_kits\lib\ucrt\x64" cudart.lib 
-LIBPATH:"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jtcuda\cuda11.2_cudnn8_win\lib\x64" -LIBPATH:"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin" -LIBPATH:"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jt1.3.7\cl\py3.8.16\Windows-10-10.x04\IntelRCoreTMi7x3b\default\cu11.2.67" -LIBPATH:"D:\Jianan\Win\Anaconda3\envs\jittor\JittorHome.cache\jittor\jt1.3.7\cl\py3.8.16\Windows-10-10.x04\IntelRCoreTMi7x3b\default" "jit_utils_core.cp38-win_amd64".lib "jittor_core.cp38-win_amd64".lib -EXPORT:"?jit_run@GetitemOp@jittor@@QEAAXXZ"
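
The root error in the log above is `fatal error C1083: Cannot open compiler generated file: '': Invalid argument` while compiling a JIT operator — an environment problem (a corrupted compile cache, a locked file, or antivirus interference) rather than a model bug. A first step worth trying is clearing Jittor's compile cache with the `clean_cache` helper mentioned in Jittor's docs (a suggestion, not a guaranteed fix):

```shell
# Drop all cached JIT artifacts so every operator recompiles cleanly
python -m jittor_utils.clean_cache all
python web_demo.py chatglm
```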

Error when running on GPU

Command executed:

 python3 cli_demo.py pangualpha

Error output:

/usr/include/stdio.h(189): error: attribute "__malloc__" does not take arguments

/usr/include/stdio.h(201): error: attribute "__malloc__" does not take arguments

/usr/include/stdio.h(223): error: attribute "__malloc__" does not take arguments

/usr/include/stdio.h(260): error: attribute "__malloc__" does not take arguments

/usr/include/stdio.h(285): error: attribute "__malloc__" does not take arguments

/usr/include/stdio.h(294): error: attribute "__malloc__" does not take arguments

/usr/include/stdio.h(303): error: attribute "__malloc__" does not take arguments

/usr/include/stdio.h(309): error: attribute "__malloc__" does not take arguments

/usr/include/stdio.h(315): error: attribute "__malloc__" does not take arguments

/usr/include/stdio.h(830): error: attribute "__malloc__" does not take arguments

/usr/include/stdlib.h(566): error: attribute "__malloc__" does not take arguments

/usr/include/stdlib.h(570): error: attribute "__malloc__" does not take arguments

/usr/include/stdlib.h(799): error: attribute "__malloc__" does not take arguments

/usr/include/c++/11/type_traits(1406): error: type name is not allowed

/usr/include/c++/11/type_traits(1406): error: type name is not allowed

/usr/include/c++/11/type_traits(1406): error: identifier "__is_same" is undefined

/usr/include/wchar.h(155): error: attribute "__malloc__" does not take arguments

/usr/include/wchar.h(582): error: attribute "__malloc__" does not take arguments

/usr/local/lib/python3.10/dist-packages/jittor/src/misc/cstr.h(19): error: no instance of overloaded function "std::unique_ptr<_Tp [], _Dp>::reset [with _Tp=char, _Dp=std::default_delete<char []>]" matches the argument list
            argument types are: (char *)
            object type is: jittor::unique_ptr<char []>

/usr/local/lib/python3.10/dist-packages/jittor/src/misc/cstr.h(25): error: no instance of overloaded function "std::unique_ptr<_Tp [], _Dp>::reset [with _Tp=char, _Dp=std::default_delete<char []>]" matches the argument list
            argument types are: (char *)
            object type is: jittor::unique_ptr<char []>

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const long, std::is_same<int, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long, _Ret=int, _CharT=char, _Base=<int>]" 
/usr/include/c++/11/bits/basic_string.h(6620): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const long, std::is_same<long, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long, _Ret=long, _CharT=char, _Base=<int>]" 
/usr/include/c++/11/bits/basic_string.h(6625): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const unsigned long, std::is_same<unsigned long, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=unsigned long, _Ret=unsigned long, _CharT=char, _Base=<int>]" 
/usr/include/c++/11/bits/basic_string.h(6630): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const long long, std::is_same<long long, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long long, _Ret=long long, _CharT=char, _Base=<int>]" 
/usr/include/c++/11/bits/basic_string.h(6635): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const unsigned long long, std::is_same<unsigned long long, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=unsigned long long, _Ret=unsigned long long, _CharT=char, _Base=<int>]" 
/usr/include/c++/11/bits/basic_string.h(6640): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const float, std::is_same<float, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=float, _Ret=float, _CharT=char, _Base=<>]" 
/usr/include/c++/11/bits/basic_string.h(6646): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const double, std::is_same<double, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=double, _Ret=double, _CharT=char, _Base=<>]" 
/usr/include/c++/11/bits/basic_string.h(6650): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const long double, std::is_same<long double, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long double, _Ret=long double, _CharT=char, _Base=<>]" 
/usr/include/c++/11/bits/basic_string.h(6654): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const long, std::is_same<int, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long, _Ret=int, _CharT=wchar_t, _Base=<int>]" 
/usr/include/c++/11/bits/basic_string.h(6751): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const long, std::is_same<long, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long, _Ret=long, _CharT=wchar_t, _Base=<int>]" 
/usr/include/c++/11/bits/basic_string.h(6756): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const unsigned long, std::is_same<unsigned long, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=unsigned long, _Ret=unsigned long, _CharT=wchar_t, _Base=<int>]" 
/usr/include/c++/11/bits/basic_string.h(6761): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const long long, std::is_same<long long, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long long, _Ret=long long, _CharT=wchar_t, _Base=<int>]" 
/usr/include/c++/11/bits/basic_string.h(6766): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const unsigned long long, std::is_same<unsigned long long, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=unsigned long long, _Ret=unsigned long long, _CharT=wchar_t, _Base=<int>]" 
/usr/include/c++/11/bits/basic_string.h(6771): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const float, std::is_same<float, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=float, _Ret=float, _CharT=wchar_t, _Base=<>]" 
/usr/include/c++/11/bits/basic_string.h(6777): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const double, std::is_same<double, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=double, _Ret=double, _CharT=wchar_t, _Base=<>]" 
/usr/include/c++/11/bits/basic_string.h(6781): here

/usr/include/c++/11/ext/string_conversions.h(85): error: no instance of overloaded function "_Range_chk::_S_chk" matches the argument list
            argument types are: (const long double, std::is_same<long double, int>)
          detected during instantiation of "_Ret __gnu_cxx::__stoa(_TRet (*)(const _CharT *, _CharT **, _Base...), const char *, const _CharT *, std::size_t *, _Base...) [with _TRet=long double, _Ret=long double, _CharT=wchar_t, _Base=<>]" 
/usr/include/c++/11/bits/basic_string.h(6785): here

36 errors detected in the compilation of "/usr/local/lib/python3.10/dist-packages/jittor/src/misc/nan_checker.cu".
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/usr/local/lib/python3.10/dist-packages/jittor_utils/__init__.py", line 197, in do_compile
    return cc.cache_compile(cmd, cache_path, jittor_path)
RuntimeError: [f 0407 02:31:50.880568 12 log.cc:608] Check failed ret(256) == 0(0) Run cmd failed: "/root/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/bin/nvcc"  "/usr/local/lib/python3.10/dist-packages/jittor/src/misc/nan_checker.cu"     -std=c++14 -Xcompiler -fPIC  -Xcompiler -march=native  -Xcompiler -fdiagnostics-color=always   -I"/usr/local/lib/python3.10/dist-packages/jittor/src" -I/usr/include/python3.10 -I/usr/include/python3.10 -DHAS_CUDA -DIS_CUDA -I"/root/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/include" -I"/usr/local/lib/python3.10/dist-packages/jittor/extern/cuda/inc"   -I"/root/.cache/jittor/jt1.3.7/g++11.3.0/py3.10.6/Linux-5.15.0-6x72/IntelRXeonRCPUxc3/default/cu11.2.152_sm_60"   -O2   -c -o "/root/.cache/jittor/jt1.3.7/g++11.3.0/py3.10.6/Linux-5.15.0-6x72/IntelRXeonRCPUxc3/default/cu11.2.152_sm_60/obj_files/nan_checker.cu.o" -x cu --cudart=shared -ccbin="/usr/bin/g++"   -w  -I"/usr/local/lib/python3.10/dist-packages/jittor/extern/cuda/inc" 
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/build/cli_demo.py", line 8, in <module>
    model = models.get_model(args)
  File "/build/models/__init__.py", line 38, in get_model
    globals()[f"get_{model_name}"]()
  File "/build/models/util.py", line 59, in get_pangualpha
    path = download_fromhub(f"jittorhub://model_optim_rng.pth", tdir="pangu")
  File "/build/models/util.py", line 5, in download_fromhub
    import jittor as jt
  File "/usr/local/lib/python3.10/dist-packages/jittor/__init__.py", line 18, in <module>
    from . import compiler
  File "/usr/local/lib/python3.10/dist-packages/jittor/compiler.py", line 1353, in <module>
    compile(cc_path, cc_flags+opt_flags, files, 'jittor_core'+extension_suffix)
  File "/usr/local/lib/python3.10/dist-packages/jittor/compiler.py", line 151, in compile
    jit_utils.run_cmds(cmds, cache_path, jittor_path, "Compiling "+base_output)
  File "/usr/local/lib/python3.10/dist-packages/jittor_utils/__init__.py", line 251, in run_cmds
    for i,_ in enumerate(p.imap_unordered(do_compile, cmds)):
  File "/usr/lib/python3.10/multiprocessing/pool.py", line 873, in next
    raise value
RuntimeError: [f 0407 02:31:50.880568 12 log.cc:608] Check failed ret(256) == 0(0) Run cmd failed: "/root/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/bin/nvcc"  "/usr/local/lib/python3.10/dist-packages/jittor/src/misc/nan_checker.cu"     -std=c++14 -Xcompiler -fPIC  -Xcompiler -march=native  -Xcompiler -fdiagnostics-color=always   -I"/usr/local/lib/python3.10/dist-packages/jittor/src" -I/usr/include/python3.10 -I/usr/include/python3.10 -DHAS_CUDA -DIS_CUDA -I"/root/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/include" -I"/usr/local/lib/python3.10/dist-packages/jittor/extern/cuda/inc"   -I"/root/.cache/jittor/jt1.3.7/g++11.3.0/py3.10.6/Linux-5.15.0-6x72/IntelRXeonRCPUxc3/default/cu11.2.152_sm_60"   -O2   -c -o "/root/.cache/jittor/jt1.3.7/g++11.3.0/py3.10.6/Linux-5.15.0-6x72/IntelRXeonRCPUxc3/default/cu11.2.152_sm_60/obj_files/nan_checker.cu.o" -x cu --cudart=shared -ccbin="/usr/bin/g++"   -w  -I"/usr/local/lib/python3.10/dist-packages/jittor/extern/cuda/inc" 

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 530.30.02              Driver Version: 530.30.02    CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                  Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla P100-PCIE-16GB            On | 00000000:04:00.0 Off |                    0 |
| N/A   42C    P0               26W / 250W|      0MiB / 16384MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  Tesla P100-PCIE-16GB            On | 00000000:42:00.0 Off |                    0 |
| N/A   40C    P0               28W / 250W|      0MiB / 16384MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

RuntimeError: [f 0403 18:34:55.057000 36 cache_compile.cc:266] Check failed: src.size()

Hello, when trying to run ChatGLM with `python.exe cli_demo.py chatglm`, the following error occurs:

[i 0403 18:45:22.605000 40 compiler.py:955] Jittor(1.3.7.1) src: c:\users\parsl\appdata\local\programs\python\python38\lib\site-packages\jittor
[i 0403 18:45:22.647000 40 compiler.py:956] cl at C:\Users\parsl\.cache\jittor\msvc\VC\_\_\_\_\_\bin\cl.exe(19.29.30133)
[i 0403 18:45:22.648000 40 compiler.py:957] cache_path: C:\Users\parsl\.cache\jittor\jt1.3.7\cl\py3.8.10\Windows-10-10.x0f\IntelRCoreTMi5xbf\default
[i 0403 18:45:22.651000 40 install_cuda.py:93] cuda_driver_version: [11, 6, 0]
[i 0403 18:45:22.687000 40 __init__.py:411] Found C:\Users\parsl\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin\nvcc.exe(11.2.67) at C:\Users\parsl\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin\nvcc.exe.
[i 0403 18:45:22.765000 40 compiler.py:1010] cuda key:cu11.2.67
[i 0403 18:45:22.767000 40 __init__.py:227] Total mem: 39.85GB, using 13 procs for compiling.
[i 0403 18:45:25.582000 40 jit_compiler.cc:28] Load cc_path: C:\Users\parsl\.cache\jittor\msvc\VC\_\_\_\_\_\bin\cl.exe
[i 0403 18:45:25.583000 40 init.cc:62] Found cuda archs: [61,]
[i 0403 18:45:26.753000 40 compile_extern.py:522] mpicc not found, distribution disabled.
[w 0403 18:45:26.845000 40 compile_extern.py:203] CUDA related path found in LD_LIBRARY_PATH or PATH(['', 'C', '\\Users\\parsl\\.cache\\jittor\\jtcuda\\cuda11.2_cudnn8_win\\lib64', '', 'C', '\\Users\\parsl\\.cache\\jittor\\mkl\\dnnl_win_2.2.0_cpu_vcomp\\bin', '', 'C', '\\Users\\parsl\\.cache\\jittor\\mkl\\dnnl_win_2.2.0_cpu_vcomp\\lib', '', 'C', '\\Users\\parsl\\.cache\\jittor\\jt1.3.7\\cl\\py3.8.10\\Windows-10-10.x0f\\IntelRCoreTMi5xbf\\default', '', 'C', '\\Users\\parsl\\.cache\\jittor\\jt1.3.7\\cl\\py3.8.10\\Windows-10-10.x0f\\IntelRCoreTMi5xbf\\default\\cu11.2.67', '', 'C', '\\Users\\parsl\\.cache\\jittor\\jtcuda\\cuda11.2_cudnn8_win\\bin', '', 'C', '\\Users\\parsl\\.cache\\jittor\\jtcuda\\cuda11.2_cudnn8_win\\lib\\x64', '', 'C', '\\Users\\parsl\\.cache\\jittor\\msvc\\win10_kits\\lib\\ucrt\\x64', '', 'C', '\\Users\\parsl\\.cache\\jittor\\msvc\\win10_kits\\lib\\um\\x64', '', 'C', '\\Users\\parsl\\.cache\\jittor\\msvc\\VC\\lib', '', 'c', '\\users\\parsl\\appdata\\local\\programs\\python\\python38\\libs', 'C', '\\Users\\parsl\\.cache\\jittor\\msvc\\VC\\_\\_\\_\\_\\_\\bin', 'C', '\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6\\bin', 'C', '\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6\\libnvvp', 'C', '\\Program Files\\Eclipse Adoptium\\jdk-8.0.362.9-hotspot\\bin', 'C', '\\Windows\\system32', 'C', '\\Windows', 'C', '\\Windows\\System32\\Wbem', 'C', '\\Windows\\System32\\WindowsPowerShell\\v1.0\\', 'C', '\\Windows\\System32\\OpenSSH\\', 'C', '\\Program Files\\dotnet\\', 'C', '\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common', 'C', '\\Program Files (x86)\\NetSarang\\Xshell 7\\', 'C', '\\Program Files (x86)\\NetSarang\\Xftp 7\\', 'C', '\\Program Files\\nodejs\\', 'C', '\\Program Files\\Git\\cmd', 'C', '\\Program Files (x86)\\HP\\Common\\HPDestPlgIn\\', 'C', '\\Program Files\\IBM\\SPSS\\Statistics\\25\\JRE\\bin', 'C', '\\Program Files\\NVIDIA Corporation\\Nsight Compute 2022.1.0\\', 'C', 
'\\Users\\parsl\\AppData\\Local\\Programs\\Python\\Python37\\Scripts\\', 'C', '\\Users\\parsl\\AppData\\Local\\Programs\\Python\\Python37\\', 'C', '\\Users\\parsl\\AppData\\Local\\Microsoft\\WindowsApps', 'C', '\\Users\\parsl\\AppData\\Local\\Programs\\Microsoft VS Code\\bin', 'C', '\\Users\\parsl\\AppData\\Roaming\\npm', 'C', '\\Users\\parsl\\.dotnet\\tools', 'C', '\\Users\\parsl\\AppData\\Local\\Programs\\Fiddler']), This path may cause jittor 
found the wrong libs, please unset LD_LIBRARY_PATH and remove cuda lib path in Path. 
Or you can let jittor install cuda for you: `python3.x -m jittor_utils.install_cuda`
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
  File "cli_demo.py", line 8, in <module>
    model = models.get_model(args)
  File "D:\Code\JittorLLMs\models\__init__.py", line 42, in get_model
    return module.get_model(args)
  File "D:\Code\JittorLLMs\models\chatglm\__init__.py", line 40, in get_model
    return ChatGLMMdoel(args)
  File "D:\Code\JittorLLMs\models\chatglm\__init__.py", line 19, in __init__
    self.tokenizer = AutoTokenizer.from_pretrained(os.path.dirname(__file__), trust_remote_code=True)
  File "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 642, in from_pretrained
    tokenizer_class = get_class_from_dynamic_module(
  File "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\dynamic_module_utils.py", line 363, in get_class_from_dynamic_module
    final_module = get_cached_module_file(
  File "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\dynamic_module_utils.py", line 237, in get_cached_module_file
    modules_needed = check_imports(resolved_module_file)
  File "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\dynamic_module_utils.py", line 129, in check_imports
    importlib.import_module(imp)
  File "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 848, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\icetk\__init__.py", line 1, in <module>
    from .ice_tokenizer import IceTokenizer
  File "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\icetk\ice_tokenizer.py", line 9, in <module>
    import torch
  File "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\__init__.py", line 5, in <module>
    import jtorch
  File "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\jtorch\__init__.py", line 10, in <module>
    import jtorch.compiler
  File "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\jtorch\compiler.py", line 25, in <module>
    jt.compiler.compile(
  File "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\jittor\compiler.py", line 156, in compile
    do_compile(fix_cl_flags(cmd))
  File "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\jittor\compiler.py", line 89, in do_compile
    return jit_utils.cc.cache_compile(cmd, cache_path, jittor_path)
RuntimeError: [f 0403 18:45:32.592000 40 cache_compile.cc:266] Check failed: src.size()  Something wrong... Could you please report this issue?
 Source read failed: C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\jtorch/src\data.obj cmd: "c:\users\parsl\appdata\local\programs\python\python38\python.exe" "c:\users\parsl\appdata\local\programs\python\python38\lib\site-packages\jittor\utils\dumpdef.py" "C:\Users\parsl\.cache\jittor\jt1.3.7\cl\py3.8.10\Windows-10-10.x0f\IntelRCoreTMi5xbf\default\cu11.2.67\jtorch_objs\pyjt_jtorch_core.cc.obj" "C:\Users\parsl\.cache\jittor\jt1.3.7\cl\py3.8.10\Windows-10-10.x0f\IntelRCoreTMi5xbf\default\cu11.2.67\jtorch_objs\pyjt_all.cc.obj" "C:\Users\parsl\AppData\Local\Programs\Python\Python38\lib\site-packages\jtorch/src\data.obj" -Fo: "C:\Users\parsl\.cache\jittor\jt1.3.7\cl\py3.8.10\Windows-10-10.x0f\IntelRCoreTMi5xbf\default\cu11.2.67\jtorch_core.cp38-win_amd64.pyd.def"
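The `CUDA related path found in LD_LIBRARY_PATH or PATH` warning earlier in this log points at one likely culprit: a system-wide CUDA 11.6 install shadowing Jittor's bundled 11.2 toolkit. A minimal, hypothetical sketch of pruning such entries before launching (the filter predicates are assumptions based on the paths in the log, not an official fix):

```python
def strip_external_cuda(path_value, sep=";"):
    """Drop system-wide CUDA toolkit entries from a PATH-like string while
    keeping Jittor's own bundled toolkit under the .cache\\jittor tree."""
    kept = []
    for entry in path_value.split(sep):
        is_jittor_cache = ".cache" in entry and "jittor" in entry
        looks_like_cuda = "NVIDIA GPU Computing Toolkit" in entry
        if is_jittor_cache or not looks_like_cuda:
            kept.append(entry)
    return sep.join(kept)

# Demo on a fragment of the PATH from the log above:
demo = ";".join([
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin",
    r"C:\Windows\system32",
    r"C:\Users\parsl\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin",
])
print(strip_external_cuda(demo))
# The external v11.6 entry is dropped; system32 and Jittor's jtcuda remain.
```

In practice one would apply this to `os.environ["PATH"]` (with `os.pathsep` as the separator) before importing jittor, or simply edit PATH by hand as the warning suggests.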

Output of `python -m jittor_utils.install_cuda`:

[i 0403 18:54:55.710155 88 install_cuda.py:93] cuda_driver_version: [11, 6, 0]
[i 0403 18:54:55.711154 88 install_cuda.py:162] nvcc is installed at C:\Users\parsl\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin\nvcc.exe

Simply running `import jtorch` on its own produces the same error.

Environment:

Windows 10 21H2
Python 3.8.10
CUDA 11.6

Installed dependencies:

icetk                       0.0.7
jittor                      1.3.7.1
jtorch                      0.1.0
torch                       2.0.0    (from pypi.jittor.org)
torchvision                 0.15     (from pypi.jittor.org)

Recompiling several times did not resolve the issue.
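When the `Source read failed: ... data.obj` check keeps failing across rebuilds, a corrupted compile cache is a plausible cause, and wiping the versioned Jittor cache forces a clean rebuild. A hedged sketch; `JITTOR_HOME` is the documented override for the cache location, while the `jt<version>` directory pattern is an assumption based on the paths in the logs above:

```python
import os
import shutil
import tempfile

def clear_jittor_build_cache(root=None, dry_run=True):
    """List (and optionally delete) versioned Jittor build directories such
    as jt1.3.7, so operators are recompiled on the next import. Leaves
    sibling downloads like jtcuda/ and msvc/ untouched."""
    root = root or os.environ.get("JITTOR_HOME",
                                  os.path.expanduser("~/.cache/jittor"))
    removed = []
    if os.path.isdir(root):
        for name in sorted(os.listdir(root)):
            # Assumption: build dirs look like "jt<digit>...", e.g. jt1.3.7,
            # which excludes jtcuda (the downloaded toolkit).
            if name.startswith("jt") and name[2:3].isdigit():
                target = os.path.join(root, name)
                if not dry_run:
                    shutil.rmtree(target)
                removed.append(target)
    return removed

# Demo on a throwaway tree (dry run only):
demo_root = tempfile.mkdtemp()
os.mkdir(os.path.join(demo_root, "jt1.3.7"))
os.mkdir(os.path.join(demo_root, "jtcuda"))
print(clear_jittor_build_cache(root=demo_root))   # only the jt1.3.7 path
```

Run with `dry_run=False` only after checking the listed paths.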

GPTQ support

Could GPTQ be supported for multi-GPU parallelism, e.g. something along the lines of GPTQ-for-LLaMa?
Right now only a single GPU's compute can be used.

Can ChatGLM be quantized to 8-bit?

I am on a 3060M (6 GB) with 16 GB RAM, and running ChatGLM exhausts both memory and VRAM. Is there a way to apply 8-bit quantization?
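For context, 8-bit quantization stores each weight as an int8 value plus a floating-point scale, roughly quartering memory versus fp32. Whether JittorLLMs exposes this for ChatGLM is up to the maintainers; below is only a generic sketch of symmetric per-tensor int8 quantization, not the ChatGLM implementation:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= q * scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # guard all-zero input
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.25, 1.27]
q, scale = quantize_int8(w)
# Each weight now costs 1 byte instead of 4 (fp32); the round-trip error is
# bounded by half a quantization step.
assert all(abs(a - b) <= scale / 2 + 1e-12
           for a, b in zip(dequantize(q, scale), w))
```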

No module named 'torch.optim'

The exact environment can be seen from the paths below. The project is https://github.com/l15y/wenda, with the latest Jittor version. The code loads two models and hits two different errors:

First traceback (model loading thread):

Traceback (most recent call last):
  File "D:\WPy64-31090\python-3.10.9.amd64\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "D:\WPy64-31090\python-3.10.9.amd64\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "D:\wenda\GLM6BAPI.py", line 144, in load_model
    model = AutoModel.from_pretrained(glm_path, local_files_only=True, trust_remote_code=True)
  File "D:\WPy64-31090\python-3.10.9.amd64\lib\site-packages\transformers\models\auto\auto_factory.py", line 459, in from_pretrained
    return model_class.from_pretrained(
  File "D:\WPy64-31090\python-3.10.9.amd64\lib\site-packages\transformers\modeling_utils.py", line 2362, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "C:\Users\cly/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 940, in __init__
    self.quantize(self.config.quantization_bit, self.config.quantization_embeddings, use_quantization_cache=True, empty_init=True)
  File "C:\Users\cly/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1277, in quantize
    self.transformer = quantize(self.transformer, bits, use_quantization_cache=use_quantization_cache, empty_init=empty_init, **kwargs)
  File "C:\Users\cly/.cache\huggingface\modules\transformers_modules\local\quantization.py", line 437, in quantize
    layer.attention.query_key_value = QuantizedLinearWithPara(
  File "C:\Users\cly/.cache\huggingface\modules\transformers_modules\local\quantization.py", line 293, in __init__
    super(QuantizedLinear, self).__init__(*args, **kwargs)
  File "D:\WPy64-31090\python-3.10.9.amd64\lib\site-packages\jtorch\__init__.py", line 118, in __init__
    super().__init__(*args, **kw)
TypeError: Linear.__init__() got an unexpected keyword argument 'device'

Second traceback (embeddings loading):

Traceback (most recent call last):
  File "D:\WPy64-31090\python-3.10.9.amd64\lib\site-packages\langchain\embeddings\huggingface.py", line 37, in __init__
    import sentence_transformers
  File "D:\WPy64-31090\python-3.10.9.amd64\lib\site-packages\sentence_transformers\__init__.py", line 3, in <module>
    from .datasets import SentencesDataset, ParallelSentencesDataset
  File "D:\WPy64-31090\python-3.10.9.amd64\lib\site-packages\sentence_transformers\datasets\__init__.py", line 3, in <module>
    from .ParallelSentencesDataset import ParallelSentencesDataset
  File "D:\WPy64-31090\python-3.10.9.amd64\lib\site-packages\sentence_transformers\datasets\ParallelSentencesDataset.py", line 4, in <module>
    from .. import SentenceTransformer
  File "D:\WPy64-31090\python-3.10.9.amd64\lib\site-packages\sentence_transformers\SentenceTransformer.py", line 15, in <module>
    from torch.optim import Optimizer
ModuleNotFoundError: No module named 'torch.optim'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\wenda\GLM6BAPI.py", line 152, in <module>
    embeddings = HuggingFaceEmbeddings(model_name=embeddings_path)
  File "D:\WPy64-31090\python-3.10.9.amd64\lib\site-packages\langchain\embeddings\huggingface.py", line 41, in __init__
    raise ValueError(
ValueError: Could not import sentence_transformers python package. Please install it with `pip install sentence_transformers`.
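The `TypeError: Linear.__init__() got an unexpected keyword argument 'device'` above shows a newer `transformers` passing `device=` into a `Linear` shim that predates that argument. A generic, hypothetical workaround is to filter keyword arguments against the callee's signature before delegating (`legacy_linear_init` below is a stand-in, not JTorch code):

```python
import inspect

def filter_kwargs(fn, kwargs):
    """Keep only the keyword arguments that `fn` actually accepts."""
    params = inspect.signature(fn).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)                  # fn takes **kwargs: pass all
    return {k: v for k, v in kwargs.items() if k in params}

def legacy_linear_init(in_features, out_features, bias=True):
    # Stand-in for an older Linear.__init__ without a `device=` parameter.
    return (in_features, out_features, bias)

kwargs = {"bias": False, "device": "cpu"}    # `device` would raise TypeError
print(legacy_linear_init(8, 4, **filter_kwargs(legacy_linear_init, kwargs)))
# (8, 4, False)
```

The proper fix is for the shim to accept (and possibly ignore) the newer keyword, but a filter like this can unblock local experiments.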

cuDNN check script errors after installing JTorch

Script:

import torch

if torch.cuda.is_available():
    print('CUDA is available')
else:
    print('CUDA is not available')

torch.backends.cudnn.enabled = True

if torch.backends.cudnn.enabled:
    print('cuDNN is available')
else:
    print('cuDNN is not available')
    print('CUDA version:', torch.version.cuda)
    print('cuDNN version:', torch.backends.cudnn.version())

Error output:

CUDA is available
Traceback (most recent call last):
  File "G:\pys\cuda_cudnn.py", line 10, in <module>
    torch.backends.cudnn.enabled = True
AttributeError: module 'torch' has no attribute 'backends'
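JTorch is a compatibility layer and, per the error, does not implement `torch.backends`; probing attributes defensively avoids the `AttributeError`. A sketch using `getattr`, demonstrated on a stub module so it runs even without any torch installed:

```python
import types

def probe_torch(torch_mod):
    """Report CUDA/cuDNN availability without assuming every torch
    attribute exists (shims such as JTorch implement only a subset)."""
    cuda_mod = getattr(torch_mod, "cuda", None)
    cuda_ok = bool(cuda_mod) and getattr(cuda_mod, "is_available", lambda: False)()
    backends = getattr(torch_mod, "backends", None)
    cudnn = getattr(backends, "cudnn", None) if backends else None
    cudnn_ok = bool(getattr(cudnn, "enabled", False)) if cudnn else False
    return {"cuda": cuda_ok, "cudnn": cudnn_ok}

# Stub standing in for a partial shim: has torch.cuda but no torch.backends.
shim = types.SimpleNamespace(cuda=types.SimpleNamespace(is_available=lambda: True))
print(probe_torch(shim))   # {'cuda': True, 'cudnn': False}
```

Replace `shim` with the real `torch` module to run the same probe against JTorch.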

Compiling Problems

Thanks for releasing the source code of this amazing work.
I was able to run chatglm successfully, but when I run pangualpha, it shows the errors below.

How can this be fixed? Many thanks!

python3 cli_demo.py pangualpha

WARNING: APEX is not installed, multi_tensor_applier will not be available.
WARNING: APEX is not installed, using torch.nn.LayerNorm instead of apex.normalization.FusedLayerNorm!
Traceback (most recent call last):
  File "/home/janice/Documents/JittorLLMs/cli_demo.py", line 8, in <module>
    model = models.get_model(args)
  File "/home/janice/Documents/JittorLLMs/models/__init__.py", line 41, in get_model
    module = importlib.import_module(f"models.{model_name}")
  File "/home/janice/anaconda3/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/janice/Documents/JittorLLMs/models/pangualpha/__init__.py", line 6, in <module>
    from megatron.text_generation_utils import pad_batch, get_batch
  File "/home/janice/Documents/JittorLLMs/models/pangualpha/megatron/text_generation_utils.py", line 29, in <module>
    from megatron.utils import get_ltor_masks_and_position_ids
  File "/home/janice/Documents/JittorLLMs/models/pangualpha/megatron/utils.py", line 27, in <module>
    from megatron.data.samplers import DistributedBatchSampler
  File "/home/janice/Documents/JittorLLMs/models/pangualpha/megatron/data/samplers.py", line 22, in <module>
    class RandomSampler(data.sampler.Sampler):
AttributeError: module 'jtorch.utils.data' has no attribute 'sampler'

I have updated jittor to 1.3.7.3, jtorch to 0.1.3
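The crash comes from `data.sampler.Sampler` being absent in that JTorch build. A hypothetical guard pattern: import the real base class when available and fall back to a minimal local stand-in otherwise (the stand-in mirrors only the interface the subclass needs):

```python
class _FallbackSampler:
    """Minimal stand-in mirroring torch.utils.data.Sampler's interface."""
    def __init__(self, data_source=None):
        self.data_source = data_source
    def __iter__(self):
        raise NotImplementedError

try:
    from torch.utils.data.sampler import Sampler as SamplerBase
except (ImportError, AttributeError):
    SamplerBase = _FallbackSampler

class SequentialIds(SamplerBase):
    """Yields indices 0..len-1, whichever base class was resolved."""
    def __init__(self, data_source):
        super().__init__(data_source)
        self.data_source = data_source
    def __iter__(self):
        return iter(range(len(self.data_source)))

print(list(SequentialIds([10, 20, 30])))   # [0, 1, 2]
```

The durable fix is for JTorch to expose the `sampler` submodule; this guard only keeps dependent code importable in the meantime.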

could not load the checkpoint; moving the folder did not help either

(jittor) PS F:\test-code\JittorLLMs> python cli_demo.py pangualpha
WARNING: APEX is not installed, multi_tensor_applier will not be available.
WARNING: APEX is not installed, using torch.nn.LayerNorm instead of apex.normalization.FusedLayerNorm!
F:\test-code\JittorLLMs\models\pangualpha
using world size: 1 and model-parallel size: 1
using torch.float32 for parameters ...
WARNING: overriding default arguments for tokenizer_type:GPT2BPETokenizer with tokenizer_type:GPT2BPETokenizer
-------------------- arguments --------------------
  adlr_autoresume ................. False
  adlr_autoresume_interval ........ 1000
  apply_query_key_layer_scaling ... False
  apply_residual_connection_post_layernorm  False
  attention_dropout ............... 0.1
  attention_softmax_in_fp32 ....... False
  batch_size ...................... 1
  bert_load ....................... None
  bias_dropout_fusion ............. False
  bias_gelu_fusion ................ False
  block_data_path ................. None
  checkpoint_activations .......... False
  checkpoint_num_layers ........... 1
  clip_grad ....................... 1.0
  data_impl ....................... infer
  data_path ....................... None
  DDP_impl ........................ local
  distribute_checkpointed_activations  False
  distributed_backend ............. nccl
  dynamic_loss_scale .............. True
  eod_mask_loss ................... False
  eval_interval ................... 1000
  eval_iters ...................... 100
  exit_interval ................... None
  faiss_use_gpu ................... False
  finetune ........................ True
  fp16 ............................ False
  fp16_lm_cross_entropy ........... False
  fp32_allreduce .................. False
  genfile ......................... None
  greedy .......................... False
  hidden_dropout .................. 0.1
  hidden_size ..................... 2560
  hysteresis ...................... 2
  ict_head_size ................... None
  ict_load ........................ None
  indexer_batch_size .............. 128
  indexer_log_interval ............ 1000
  init_method_std ................. 0.02
  layernorm_epsilon ............... 1e-05
  lazy_mpu_init ................... None
  load ............................ C:\Users\xgp\.cache\jittor\jt1.3.7\cl\py3.8.16\Windows-10-10.x52\AMDRyzen75800Xxc8\default\cu11.2.67\checkpoints\pangu\Pangu-alpha_2.6B_fp16_mgt
  local_rank ...................... None
  log_interval .................... 100
  loss_scale ...................... None
  loss_scale_window ............... 1000
  lr .............................. None
  lr_decay_iters .................. None
  lr_decay_style .................. linear
  make_vocab_size_divisible_by .... 1
  mask_prob ....................... 0.15
  max_position_embeddings ......... 1024
  merge_file ...................... None
  min_lr .......................... 0.0
  min_scale ....................... 1
  mmap_warmup ..................... False
  model_parallel_size ............. 1
  no_load_optim ................... False
  no_load_rng ..................... False
  no_save_optim ................... False
  no_save_rng ..................... False
  num_attention_heads ............. 32
  num_layers ...................... 31
  num_samples ..................... 0
  num_unique_layers ............... None
  num_workers ..................... 2
  onnx_safe ....................... None
  openai_gelu ..................... False
  out_seq_length .................. 50
  override_lr_scheduler ........... False
  param_sharing_style ............. grouped
  params_dtype .................... torch.float32
  query_in_block_prob ............. 0.1
  rank ............................ 0
  recompute ....................... False
  report_topk_accuracies .......... []
  reset_attention_mask ............ False
  reset_position_ids .............. False
  sample_input_file ............... None
  sample_output_file .............. None
  save ............................ None
  save_interval ................... None
  scaled_upper_triang_masked_softmax_fusion  False
  seed ............................ 1234
  seq_length ...................... 1024
  short_seq_prob .................. 0.1
  split ........................... 969, 30, 1
  temperature ..................... 1.0
  tensorboard_dir ................. None
  titles_data_path ................ None
  tokenizer_type .................. GPT2BPETokenizer
  top_k ........................... 2
  top_p ........................... 0.0
  train_iters ..................... None
  use_checkpoint_lr_scheduler ..... False
  use_cpu_initialization .......... False
  use_one_sent_docs ............... False
  vocab_file ...................... models/pangualpha/megatron/tokenizer/bpe_4w_pcl/vocab
  warmup .......................... 0.01
  weight_decay .................... 0.01
  world_size ...................... 1
---------------- end of arguments ----------------
> building GPT2BPETokenizer tokenizer ...
 > padded vocab (size: 40000) with 0 dummy tokens (new size: 40000)
torch distributed is already initialized, skipping initialization ...
> initializing model parallel with size 1
> setting random seeds to 1234 ...
> initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
building GPT2 model ...
 > number of parameters on model parallel rank 0: 2625295360
global rank 0 is loading checkpoint C:\Users\xgp\.cache\jittor\jt1.3.7\cl\py3.8.16\Windows-10-10.x52\AMDRyzen75800Xxc8\default\cu11.2.67\checkpoints\pangu\Pangu-alpha_2.6B_fp16_mgt\iter_0001000\mp_rank_00\model_optim_rng.pth
could not load the checkpoint

(screenshot)
I moved the files over from outside the folder, but it still does not work.
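The loader expects the checkpoint at the exact path shown in the log (`<load>\iter_0001000\mp_rank_00\model_optim_rng.pth`). A small hypothetical helper for verifying that layout before launching, with the subdirectory names taken from the log above:

```python
import os
import tempfile

def checkpoint_path_ok(load_dir, iteration="iter_0001000", rank="mp_rank_00"):
    """Check for the Megatron-style checkpoint layout the loader expects."""
    path = os.path.join(load_dir, iteration, rank, "model_optim_rng.pth")
    return os.path.isfile(path), path

# Demo against a throwaway tree with the expected layout:
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "iter_0001000", "mp_rank_00"))
open(os.path.join(root, "iter_0001000", "mp_rank_00",
                  "model_optim_rng.pth"), "w").close()
print(checkpoint_path_ok(root)[0])   # True
```

Pointing this at the `load` directory from the argument dump would confirm whether the moved files landed in the right subfolders.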

cli_demo.py chatglm demo error

Environment:
Windows 10
Anaconda Python 3.10
CUDA 11.7.1
JTorch installed


Error output:

PS G:\pys\JittorLLMs> python cli_demo.py chatglm
[i 0417 02:47:56.967000 52 compiler.py:955] Jittor(1.3.7.13) src: e:\anaconda3\lib\site-packages\jittor
[i 0417 02:47:57.020000 52 compiler.py:956] cl at C:\Users\linke\.cache\jittor\msvc\VC\_\_\_\_\_\bin\cl.exe(19.29.30133)
[i 0417 02:47:57.021000 52 compiler.py:957] cache_path: C:\Users\linke\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x9e\IntelRCoreTMi5x25\default
[i 0417 02:47:57.026000 52 install_cuda.py:93] cuda_driver_version: [12, 0, 0]
[i 0417 02:47:57.079000 52 __init__.py:411] Found C:\Users\linke\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin\nvcc.exe(11.2.67) at C:\Users\linke\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin\nvcc.exe.
[i 0417 02:47:57.155000 52 compiler.py:1010] cuda key:cu11.2.67
[i 0417 02:47:57.156000 52 __init__.py:227] Total mem: 31.88GB, using 10 procs for compiling.
[i 0417 02:47:58.942000 52 jit_compiler.cc:28] Load cc_path: C:\Users\linke\.cache\jittor\msvc\VC\_\_\_\_\_\bin\cl.exe
[i 0417 02:47:58.944000 52 init.cc:62] Found cuda archs: [75,]
[i 0417 02:47:59.243000 52 compile_extern.py:522] mpicc not found, distribution disabled.
[w 0417 02:47:59.352000 52 compile_extern.py:203] CUDA related path found in LD_LIBRARY_PATH or PATH(['', 'C', '\\Users\\linke\\.cache\\jittor\\jtcuda\\cuda11.2_cudnn8_win\\lib64', '', 'C', '\\Users\\linke\\.cache\\jittor\\mkl\\dnnl_win_2.2.0_cpu_vcomp\\bin', '', 'C', '\\Users\\linke\\.cache\\jittor\\mkl\\dnnl_win_2.2.0_cpu_vcomp\\lib', '', 'C', '\\Users\\linke\\.cache\\jittor\\jt1.3.7\\cl\\py3.10.9\\Windows-10-10.x9e\\IntelRCoreTMi5x25\\default', '', 'C', '\\Users\\linke\\.cache\\jittor\\jt1.3.7\\cl\\py3.10.9\\Windows-10-10.x9e\\IntelRCoreTMi5x25\\default\\cu11.2.67', '', 'C', '\\Users\\linke\\.cache\\jittor\\jtcuda\\cuda11.2_cudnn8_win\\bin', '', 'C', '\\Users\\linke\\.cache\\jittor\\jtcuda\\cuda11.2_cudnn8_win\\lib\\x64', '', 'C', '\\Users\\linke\\.cache\\jittor\\msvc\\win10_kits\\lib\\ucrt\\x64', '', 'C', '\\Users\\linke\\.cache\\jittor\\msvc\\win10_kits\\lib\\um\\x64', '', 'C', '\\Users\\linke\\.cache\\jittor\\msvc\\VC\\lib', '', 'e', '\\anaconda3\\libs', 'C', '\\Users\\linke\\.cache\\jittor\\msvc\\VC\\_\\_\\_\\_\\_\\bin', 'C', '\\Users\\linke\\AppData\\Local\\Programs\\Python\\Python38\\Scripts\\', 'C', '\\Users\\linke\\AppData\\Local\\Programs\\Python\\Python38\\', 'C', '\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.7\\bin', 'C', '\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.7\\libnvvp', '', 'C', '\\Program Files\\Eclipse Foundation\\jdk-8.0.302.8-hotspot\\bin', 'C', '\\Program Files\\Common Files\\Oracle\\Java\\javapath', 'C', '\\Program Files\\ImageMagick-7.0.8-Q16', 'C', '\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.1\\bin', 'C', '\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.1\\libnvvp', 'C', '\\windows\\system32', 'C', '\\windows', 'C', '\\windows\\System32\\Wbem', 'C', '\\windows\\System32\\WindowsPowerShell\\v1.0\\', 'C', '\\windows\\System32\\OpenSSH\\', 'C', '\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common', 'C', '\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR', 'C', '\\Program 
Files\\dotnet\\', 'C', '\\Program Files (x86)\\dotnet\\', 'C', '\\ProgramData\\chocolatey\\bin', 'C', '\\Program Files (x86)\\vim\\vim80', 'C', '\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.22.27905\\bin\\Hostx64\\x64', 'C', '\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.22.27905\\include', 'E', '\\Program Files\\CMake\\bin', 'C', '\\WINDOWS\\system32', 'C', '\\WINDOWS', 'C', '\\WINDOWS\\System32\\Wbem', 'C', '\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\', 'C', '\\WINDOWS\\System32\\OpenSSH\\', 'C', '\\Program Files (x86)\\IncrediBuild', 'C', '\\Program Files\\nodejs\\', 'C', '\\Program Files\\Redis\\', 'C', '\\Program Files\\PuTTY\\', 'C', '\\Program Files (x86)\\ZeroTier\\One\\', 'D', '\\Go\\bin', 'C', '\\Program Files\\GitHub CLI\\', 'C', '\\Program Files (x86)\\Tailscale IPN', 'C', '\\Program Files\\NVIDIA Corporation\\Nsight Compute 2022.2.1\\', 'C', '\\Program Files\\Process Lasso\\', 'C', '\\Program Files\\Git\\cmd', 'E', '\\anaconda3', 'E', '\\anaconda3\\Scripts', 'E', '\\anaconda3\\condabin', 'E', '\\anaconda3\\DLLs', 'C', '\\Python310', 'C', '\\Python310\\Scripts', 'C', '\\Python310\\Lib\\site-packages', 'G', '\\cuda117_bin', 'D', '\\msys64\\ucrt64\\bin', 'D', '\\msys64\\usr\\bin', 'C', '\\Python310\\lib\\site-packages\\torch\\lib', 'C', '\\Python39', 'c', '\\python39\\Scripts', 'C', '\\ProgramData\\chocolatey\\bin', 'G', '\\pypy3.9-v7.3.11-win64', 'C', '\\Users\\linke\\go\\bin', 'C', '\\Users\\linke\\scoop\\shims', 'C', '\\Users\\linke\\AppData\\Local\\Microsoft\\WindowsApps', 'C', '\\Users\\linke\\AppData\\Local\\Programs\\Microsoft VS Code\\bin', 'C', '\\Users\\linke\\AppData\\Roaming\\npm', 'C', '\\Users\\linke\\AppData\\Local\\ComposerSetup\\bin', 'C', '\\Users\\linke\\AppData\\Roaming\\Composer\\vendor\\bin', 'C', '\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\Hostx64\\x64', '']), This path may cause jittor found 
the wrong libs, please unset LD_LIBRARY_PATH and remove cuda lib path in Path.
Or you can let jittor install cuda for you: `python3.x -m jittor_utils.install_cuda`
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 8/8 [02:24<00:00, 18.07s/it]
[i 0417 02:50:28.604000 52 cuda_flags.cc:39] CUDA enabled.
用户输入:你是谁?

Compiling Operators(5/5) used: 8.74s eta:    0s
[e 0417 08:15:20.745000 52 log.cc:565] cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
code__IN_SIZE_1__in0_dim_4__in0_type_float32__OUT_SIZE_1__out0_dim_4__out0_type_float32__H___hash_3febe3994cb3e308_op.cc
e:\anaconda3\include\cuda\std\detail/libcxx/include/type_traits(4842): error: identifier "__builtin_is_constant_evaluated" is undefined

e:\anaconda3\include\cuda\std\detail/libcxx/include/type_traits(4847): error: identifier "__builtin_is_constant_evaluated" is undefined

2 errors detected in the compilation of "C:/Users/linke/.cache/jittor/jt1.3.7/cl/py3.10.9/Windows-10-10.x9e/IntelRCoreTMi5x25/default/cu11.2.67/jit/code__IN_SIZE_1__in0_dim_4__in0_type_float32__OUT_SIZE_1__out0_dim_4__out0_type_float32__H___hash_3febe3994cb3e308_op.cc".
code__IN_SIZE_1__in0_dim_4__in0_type_float32__OUT_SIZE_1__out0_dim_4__out0_type_float32__H___hash_3febe3994cb3e308_op.cc
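The two `__builtin_is_constant_evaluated` errors come from libcu++ headers under `e:\anaconda3\include\cuda\std\...`: nvcc picks them up via the `-I"e:\anaconda3\include"` flag in the failing command, and they do not match the host compiler jittor invokes. A small diagnostic sketch (hypothetical helper, not part of JittorLLMs) can check whether the active Python environment ships such a header tree before launching:

```python
import os
import sys

def conflicting_cuda_headers(prefix=sys.prefix):
    """Return <prefix>/include/cuda/std if the environment ships its own
    libcu++ headers (a likely source of the type_traits errors above),
    or None when no such directory exists. Assumed helper for diagnosis."""
    cand = os.path.join(prefix, "include", "cuda", "std")
    return cand if os.path.isdir(cand) else None

if __name__ == "__main__":
    hit = conflicting_cuda_headers()
    if hit:
        print("potential libcu++/compiler conflict:", hit)
```

If the directory exists, running JittorLLMs in a clean environment without a bundled CUDA toolkit is one thing to try, or let jittor install its own CUDA as the earlier warning suggests (`python3.x -m jittor_utils.install_cuda`).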

Traceback (most recent call last):
  File "G:\pys\JittorLLMs\cli_demo.py", line 9, in <module>
    model.chat()
  File "G:\pys\JittorLLMs\models\chatglm\__init__.py", line 36, in chat
    for response, history in self.model.stream_chat(self.tokenizer, text, history=history):
  File "C:\Users\linke/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1259, in stream_chat
    for outputs in self.stream_generate(**input_ids, **gen_kwargs):
  File "C:\Users\linke/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1336, in stream_generate
    outputs = self(
  File "E:\anaconda3\lib\site-packages\jtorch\nn\__init__.py", line 16, in __call__
    return self.forward(*args, **kw)
  File "C:\Users\linke/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1138, in forward
    transformer_outputs = self.transformer(
  File "E:\anaconda3\lib\site-packages\jtorch\nn\__init__.py", line 16, in __call__
    return self.forward(*args, **kw)
  File "C:\Users\linke/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 973, in forward
    layer_ret = layer(
  File "E:\anaconda3\lib\site-packages\jtorch\nn\__init__.py", line 16, in __call__
    return self.forward(*args, **kw)
  File "C:\Users\linke/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 614, in forward
    attention_outputs = self.attention(
  File "E:\anaconda3\lib\site-packages\jtorch\nn\__init__.py", line 16, in __call__
    return self.forward(*args, **kw)
  File "C:\Users\linke/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 454, in forward
    cos, sin = self.rotary_emb(q1, seq_len=position_ids.max() + 1)
  File "E:\anaconda3\lib\site-packages\jtorch\nn\__init__.py", line 16, in __call__
    return self.forward(*args, **kw)
  File "C:\Users\linke/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 202, in forward
    t = torch.arange(seq_len, device=x.device, dtype=self.inv_freq.dtype)
  File "E:\anaconda3\lib\site-packages\jtorch\__init__.py", line 31, in inner
    ret = func(*args, **kw)
  File "E:\anaconda3\lib\site-packages\jittor\misc.py", line 809, in arange
    if isinstance(start, jt.Var): start = start.item()
RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.item)).

Types of your inputs are:
 self   = Var,
 args   = (),

The function declarations are:
 ItemData item()

Failed reason:[f 0417 08:15:20.757000 52 parallel_compiler.cc:330] Error happend during compilation:
 [Error] source file location:C:\Users\linke\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x9e\IntelRCoreTMi5x25\default\cu11.2.67\jit\code__IN_SIZE_1__in0_dim_4__in0_type_float32__OUT_SIZE_1__out0_dim_4__out0_type_float32__H___hash_3febe3994cb3e308_op.cc
Compile operator(1/7)failed:Op(12536:0:1:1:i1:o1:s0,code->12537)

Reason: [f 0417 08:15:20.745000 52 log.cc:608] Check failed ret(1) == 0(0) Run cmd failed: "C:\Users\linke\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin\nvcc.exe" "C:\Users\linke\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x9e\IntelRCoreTMi5x25\default\cu11.2.67\jit\code__IN_SIZE_1__in0_dim_4__in0_type_float32__OUT_SIZE_1__out0_dim_4__out0_type_float32__H___hash_3febe3994cb3e308_op.cc"            -shared  -L"e:\anaconda3\libs" -lpython310  -Xcompiler -EHa -Xcompiler -MD -Xcompiler -utf-8   -I"C:\Users\linke\.cache\jittor\msvc\VC\include" -I"C:\Users\linke\.cache\jittor\msvc\win10_kits\include\ucrt" -I"C:\Users\linke\.cache\jittor\msvc\win10_kits\include\shared" -I"C:\Users\linke\.cache\jittor\msvc\win10_kits\include\um" -DNOMINMAX  -L"C:\Users\linke\.cache\jittor\msvc\VC\lib" -L"C:\Users\linke\.cache\jittor\msvc\win10_kits\lib\um\x64" -L"C:\Users\linke\.cache\jittor\msvc\win10_kits\lib\ucrt\x64"  -I"e:\anaconda3\lib\site-packages\jittor\src" -I"e:\anaconda3\include" -DHAS_CUDA -DIS_CUDA -I"C:\Users\linke\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\include" -I"e:\anaconda3\lib\site-packages\jittor\extern\cuda\inc"  -lcudart -L"C:\Users\linke\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\lib\x64" -L"C:\Users\linke\.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin"  -I"C:\Users\linke\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x9e\IntelRCoreTMi5x25\default\cu11.2.67" -L"C:\Users\linke\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x9e\IntelRCoreTMi5x25\default\cu11.2.67" -L"C:\Users\linke\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x9e\IntelRCoreTMi5x25\default"  -l"jit_utils_core.cp310-win_amd64"  -l"jittor_core.cp310-win_amd64"  -x cu --cudart=shared -ccbin="C:\Users\linke\.cache\jittor\msvc\VC\_\_\_\_\_\bin\cl.exe" --use_fast_math  -w  -I"e:\anaconda3\lib\site-packages\jittor\extern/cuda/inc"  -arch=compute_75  -code=sm_75  -o 
"C:\Users\linke\.cache\jittor\jt1.3.7\cl\py3.10.9\Windows-10-10.x9e\IntelRCoreTMi5x25\default\cu11.2.67\jit\code__IN_SIZE_1__in0_dim_4__in0_type_float32__OUT_SIZE_1__out0_dim_4__out0_type_float32__H___hash_3febe3994cb3e308_op.dll" -Xlinker -EXPORT:"?jit_run@CodeOp@jittor@@QEAAXXZ"

OSError: [WinError 1314] 客户端没有所需的特权 (a required privilege is not held by the client)

D:\JittorLLMs>python cli_demo.py chatglm
[i 0406 21:04:11.694000 84 compiler.py:955] Jittor(1.3.7.12) src: d:\python\python311\lib\site-packages\jittor
[i 0406 21:04:11.728000 84 compiler.py:956] cl at C:\Users\XXXX\.cache\jittor\msvc\VC\_\_\_\_\_\bin\cl.exe(19.29.30133)
[i 0406 21:04:11.728000 84 compiler.py:957] cache_path: C:\Users\XXXX\.cache\jittor\jt1.3.7\cl\py3.11.3\Windows-10-10.x06\AMD\default
[i 0406 21:04:11.761000 84 __init__.py:227] Total mem: 31.96GB, using 10 procs for compiling.
[i 0406 21:04:13.000000 84 jit_compiler.cc:28] Load cc_path: C:\Users\XXXX\.cache\jittor\msvc\VC\_\_\_\_\_\bin\cl.exe
[i 0406 21:04:13.144000 84 compile_extern.py:522] mpicc not found, distribution disabled.
Traceback (most recent call last):
File "D:\JittorLLMs\cli_demo.py", line 8, in <module>
model = models.get_model(args)
^^^^^^^^^^^^^^^^^^^^^^
File "D:\JittorLLMs\models\__init__.py", line 38, in get_model
globals()[f"get_{model_name}"](args)
File "D:\JittorLLMs\models\util.py", line 54, in get_chatglm
os.symlink(new_path[-1], os.path.join(ln_dir, f))
OSError: [WinError 1314] 客户端没有所需的特权。: 'C:\Users\XXXX\.cache\jittor\jt1.3.7\cl\py3.11.3\Windows-10-10.x06\AMD\default\checkpoints\chat-glm/pytorch_model-00005-of-00008.bin' -> 'D:\JittorLLMs\models\chatglm\pytorch_model-00005-of-00008.bin'

Manually copying each downloaded bin file from C: to D: lets the run continue.
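The `WinError 1314` above is raised by the `os.symlink` call in `models\util.py`: Windows only allows unprivileged symlink creation with Developer Mode enabled or from an administrator shell. The manual-copy workaround can be sketched as a fallback wrapper (a hypothetical patch, not part of JittorLLMs):

```python
import os
import shutil

def link_or_copy(src, dst):
    """Try a symlink first; on Windows os.symlink raises OSError
    (WinError 1314) without admin rights or Developer Mode, so fall
    back to copying the checkpoint shard, as done by hand above."""
    try:
        os.symlink(src, dst)
        return "symlink"
    except OSError:
        shutil.copyfile(src, dst)
        return "copy"
```

Enabling Windows Developer Mode (Settings → Update & Security → For developers) or running the terminal as administrator lets `os.symlink` succeed directly and avoids duplicating roughly 15 GB of shards on disk.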
