Comments (6)
Hi @khayamgondal, I am also trying to install DeepSpeed on the Jetson Nano (aarch64) architecture, but I am encountering some different issues. Can you let me know if you encountered the same error during installation?
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [17 lines of output]
fatal: not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-2ye8_7hh/deepspeed_e72ce29d13824bf99346091714d95736/setup.py", line 247, in <module>
if isinstance(torch.cuda.nccl.version(), int):
File "/home/ies/miniconda3/envs/gdrnpp/lib/python3.10/site-packages/torch/cuda/nccl.py", line 35, in version
ver = torch._C._nccl_version()
AttributeError: module 'torch._C' has no attribute '_nccl_version'
DS_BUILD_OPS=0
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] please install triton==1.0.0 if you want to use sparse attention
Install Ops={'async_io': False, 'fused_adam': False, 'cpu_adam': False, 'cpu_adagrad': False, 'cpu_lion': False, 'evoformer_attn': False, 'fp_quantizer': False, 'fused_lamb': False, 'fused_lion': False, 'inference_core_ops': False, 'cutlass_ops': False, 'transformer_inference': False, 'quantizer': False, 'ragged_device_ops': False, 'ragged_ops': False, 'random_ltd': False, 'sparse_attn': False, 'spatial_inference': False, 'transformer': False, 'stochastic_transformer': False}
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
I think this might be an issue with the NCCL installation, but on the documentation page no NCCL version satisfies the aarch64 requirements.
from deepspeed.
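The traceback above fails inside DeepSpeed's setup.py at `torch.cuda.nccl.version()`, which assumes the installed torch build exposes NCCL. A small pre-flight probe (a sketch, not part of DeepSpeed itself) can confirm this before attempting the install:

```python
# Sketch: DeepSpeed's setup.py calls torch.cuda.nccl.version(); torch
# builds without NCCL (common on aarch64) lack torch._C._nccl_version
# and raise AttributeError there. This probes for the symbol first.
def torch_has_nccl():
    try:
        import torch
    except ImportError:
        # No torch at all; the install would fail earlier anyway.
        return False
    return hasattr(torch._C, "_nccl_version")

print("torch exposes NCCL:", torch_has_nccl())
```

If this prints `False`, the `AttributeError` above is expected and the fix is a torch build with NCCL support (or an NCCL install), not a DeepSpeed-side change.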
@khayamgondal and @Sar-thak-3, unfortunately we have not tested DeepSpeed in those environments due to lack of access.
anaconda3/envs/deepspeed/lib/python3.10/site-packages/deepspeed/ops/csrc/cpu/comm/shm.cpp:10:10: fatal error: immintrin.h: No such file or directory
10 | #include <immintrin.h>
@khayamgondal, immintrin.h is not available on arm64. However, I think you should not be compiling this extension if you are using CUDA. Can you share your build/installation command?
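Since `<immintrin.h>` provides x86 SIMD intrinsics, any build that reaches that `#include` on an ARM machine will fail. A quick architecture check (a sketch, using only the standard library) makes it obvious whether the x86-only CPU extensions can be compiled at all on the current host:

```python
# Sketch: shm.cpp includes <immintrin.h> (x86 SSE/AVX intrinsics),
# which does not exist on arm64/aarch64. Check the host architecture
# before expecting those CPU extensions to build.
import platform

arch = platform.machine()
is_x86 = arch in ("x86_64", "AMD64", "i686")
print(f"machine={arch}, x86 intrinsics available: {is_x86}")
```

On a Jetson this prints an aarch64 machine string, confirming the compile error is an architecture mismatch rather than a missing package.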
Thanks for getting back - I installed DeepSpeed using pip install deepspeed. For CUDA I am using the NVIDIA NGC PyTorch container.
@khayamgondal, it is worthwhile building from source from GitHub to make it easier to see and fix the issues. As I said earlier, we have not previously tested on ARM.
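A from-source install for debugging might look like the sketch below. The `DS_BUILD_OPS=0` setting appears in the log output above; with it, ops are JIT-compiled at first use instead of ahead of time, so each op's build error surfaces individually. The commands are printed rather than executed here, and the exact repo URL and flags are assumptions to adapt as needed:

```python
# Sketch: drive a from-source DeepSpeed install with ahead-of-time op
# builds disabled (DS_BUILD_OPS=0, as seen in the log above).
import os

env = dict(os.environ, DS_BUILD_OPS="0")
commands = [
    "git clone https://github.com/microsoft/DeepSpeed.git",
    "pip install ./DeepSpeed",
]
for cmd in commands:
    # Printed only; pass each command to subprocess.run with env=env
    # (and check=True) to actually execute it.
    print("would run:", cmd)
```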
@Sar-thak-3, it looks like the error you're seeing is unrelated to the original issue. We have not tested DeepSpeed on ARM, but the error you are seeing seems related to not having NCCL installed. Please install NCCL and then try installing DeepSpeed again. Alternatively, you can also try building from source from GitHub.
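Before retrying the install, it may help to confirm whether an NCCL shared library is visible to the dynamic linker at all. One way (a sketch using only the standard library) is:

```python
# Sketch: ask the dynamic linker whether a libnccl shared library is
# discoverable on the default search path.
import ctypes.util

lib = ctypes.util.find_library("nccl")
print("libnccl:", lib if lib else "not found on the default linker path")
```

If nothing is found, installing NCCL (or pointing `LD_LIBRARY_PATH` at an existing install) is the prerequisite step before reinstalling DeepSpeed.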
Related Issues (20)
- Different seeds are giving the exact same loss on Zero 1,2 and 3 during multi gpu training [BUG]
- [BUG] fp16 not supported for CPU? HOT 1
- Issue with LoRA Tuning on llama3-70b using PEFT and TRL's SFTTrainer
- [REQUEST] Asynchronous Checkpointing HOT 1
- [BUG] ImportError: /home/nlp/.cache/torch_extensions/py310_cu121/cpu_adam/cpu_adam.so: cannot open shared object file: No such file or directory HOT 1
- CUDA error: no kernel image is available for execution on the device [BUG]
- lr scheduler defined in config cannot be overwritten by lr scheduler defined in code and pass to `deepspeed.initialize` [BUG]
- [BUG] PipelineEngine calculates loss with outputs and labels from different batches. HOT 1
- [BUG] Learning rate scheduler and optimizer logical issue
- In distributed training, in order to continue training, an error occurred when loading model checkpoints after saving them.
- DS communication issue when using NCCL backend: All_reduce instead of reduce_scatter (or several reduce ops) HOT 5
- [BUG] I can't run fp8 with pipeline parallel HOT 2
- [BUG] Multi-gpu stuck when the computation graph is not complete for each process.
- [BUG] Multi-node fine-tuning with thunderbolt HOT 1
- Multi-node multi-GPUs training is slower than single-node multi-GPUs training [BUG] HOT 2
- Default libcurand path fails HOT 1
- [BUG] Universal checkpoint conversion - "Cannot find layer_01* files in there"
- test
- how to set "training_step" during training?
- does deepspeed support pure bf16 training?