dptech-corp / Uni-Core
An efficient distributed PyTorch framework.
License: MIT License
Hello,
I am working on a CPU-only project and need to install Uni-Core, but when I run setup.py on my CPU machine I get a metadata-generation-failed error. Can Uni-Core be installed on a CPU-only machine? If yes, could you please tell me how? If not, will you release a version that can be installed on CPU?
Thank you very much.
Hi, I tried installing Uni-Core using pip install https://github.com/dptech-corp/Uni-Core/releases/download/0.0.3/unicore-0.0.1+cu118torch2.0.0-cp310-cp310-linux_x86_64.whl, but I get the following error:
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
ERROR: unicore-0.0.1+cu118torch2.0.0-cp310-cp310-linux_x86_64.whl is not a supported wheel on this platform.
My Python version is 3.10.12. I also installed PyTorch (2.0.0), cuda-toolkit (11.8.0), and nvidia-pyindex.
Please let me know if you have a solution for this. Thank you!
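For context, "not a supported wheel on this platform" means none of the tags pip computes for your interpreter match the wheel's tag (cp310-cp310-linux_x86_64). A minimal diagnostic sketch, using the third-party packaging library (this is not an official Uni-Core tool):

from packaging import tags

# Print the first few wheel tags this interpreter accepts; the wheel can
# install only if cp310-cp310-linux_x86_64 appears somewhere in this list.
for tag in list(tags.sys_tags())[:10]:
    print(tag)

pip debug --verbose prints the same list; a mismatch typically points at a different Python minor version or a non-Linux/non-x86_64 platform.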
Does Uni-Core work on Windows?
I think this is an issue for a lot of people:
pip install git+https://github.com/dptech-corp/Uni-Core.git
builds wheels forever and does not work. My torch version is above what you require.
I was training Uni-Mol with Uni-Core on multiple GPUs (one node), but I got the following error:
diff = self.param - new_param
~~~~~~~~~~~^~~~~~~~~~~
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
(the same traceback is printed by each of the eight workers, for cuda:0 through cuda:7, with their outputs interleaved)
The direct cause is clear. Line 47 in ec396a7 assumes self.param and new_param are on the same device, but they are not. A workaround is to manually move them onto the same device in the update() function; however, that might hide the root cause, which is worth digging into.
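For reference, a minimal sketch of that workaround, assuming an EMA-style update (the class and decay logic here are assumptions; only self.param, new_param, and update() come from the traceback):

import torch

class EMA:
    # Hypothetical EMA holder; the device-alignment guard is the point.
    def __init__(self, param: torch.Tensor, decay: float = 0.999):
        self.param = param.detach().clone()
        self.decay = decay

    def update(self, new_param: torch.Tensor):
        # Workaround: keep the stored copy on new_param's device so the
        # subtraction never mixes cuda:N and cpu tensors.
        if self.param.device != new_param.device:
            self.param = self.param.to(new_param.device)
        diff = self.param - new_param
        self.param = new_param + self.decay * diff

The guard only masks the symptom; the real question is why the EMA copy stayed on the CPU while the model parameters were moved to the GPUs.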
I used pip install . and got these errors:
Building wheel for unicore (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [80 lines of output]
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
Warning: Torch did not find available GPUs on this system.
If your intention is to cross-compile, this is not an error.
By default, it will cross-compile for Volta (compute capability 7.0), Turing (compute capability 7.5),
and, if the CUDA version is >= 11.0, Ampere (compute capability 8.0).
If you wish to cross-compile for a single specific architecture,
export TORCH_CUDA_ARCH_LIST="compute capability" before running setup.py.
torch.__version__ = 2.0.1
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for unicore
Running setup.py clean for unicore
Building wheel for ml_collections (setup.py) ... done
Created wheel for ml_collections: filename=ml_collections-0.1.1-py3-none-any.whl size=94506 sha256=aadee6f43895d8e7e348aca3c5cab4b2583e285e175cd288f34299c39a48dbfa
Stored in directory: /root/.cache/pip/wheels/28/82/ef/a6971b09a96519d55ce6efef66f0cbcdef2ae9cc1e6b41daf7
Successfully built ml_collections
Failed to build unicore
ERROR: Could not build wheels for unicore, which is required to install pyproject.toml-based projects
Hi. I am using CUDA 10.1, but installing Uni-Core requires exactly CUDA 10.2. Is that because the CUDA kernels you have written are tied to that specific CUDA version? Is there any way to install Uni-Core with CUDA 10.1? I think installing it without those CUDA kernels would work.
The original download URL is invalid now.
Can you provide official documentation to help us get started quickly with your great work?
Hello, I am trying to install Uni-Core with pip, but got these errors:
ERROR: Could not find a version that satisfies the requirement uni-core==0.0.1 (from versions: none)
ERROR: No matching distribution found for uni-core==0.0.1
My CUDA version is 12.1, and I have installed torch==2.1.0.
Do you have any idea about these errors?
Thanks!
I tried to run the fine-tuning script provided in Uni-Mol (pasted here for easy reference).
data_path="./molecular_property_prediction" # replace to your data path
save_dir="./save_finetune" # replace to your save path
n_gpu=4
MASTER_PORT=10086
dict_name="dict.txt"
weight_path="./weights/checkpoint.pt" # replace to your ckpt path
task_name="qm9dft" # molecular property prediction task name
task_num=3
loss_func="finetune_smooth_mae"
lr=1e-4
batch_size=32
epoch=40
dropout=0
warmup=0.06
local_batch_size=32
only_polar=0
conf_size=11
seed=0
if [ "$task_name" == "qm7dft" ] || [ "$task_name" == "qm8dft" ] || [ "$task_name" == "qm9dft" ]; then
metric="valid_agg_mae"
elif [ "$task_name" == "esol" ] || [ "$task_name" == "freesolv" ] || [ "$task_name" == "lipo" ]; then
metric="valid_agg_rmse"
else
metric="valid_agg_auc"
fi
export NCCL_ASYNC_ERROR_HANDLING=1
export OMP_NUM_THREADS=1
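# update_freq = gradient accumulation steps per optimizer update, so the
# effective per-GPU batch is local_batch_size * update_freq (= batch_size)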
update_freq=$((batch_size / local_batch_size))
python -m torch.distributed.launch --nproc_per_node=$n_gpu --master_port=$MASTER_PORT $(which unicore-train) $data_path --task-name $task_name --user-dir ./unimol --train-subset train --valid-subset valid \
--conf-size $conf_size \
--num-workers 8 --ddp-backend=c10d \
--dict-name $dict_name \
--task mol_finetune --loss $loss_func --arch unimol_base \
--classification-head-name $task_name --num-classes $task_num \
--optimizer adam --adam-betas "(0.9, 0.99)" --adam-eps 1e-6 --clip-norm 1.0 \
--lr-scheduler polynomial_decay --lr $lr --warmup-ratio $warmup --max-epoch $epoch --batch-size $local_batch_size --pooler-dropout $dropout \
--update-freq $update_freq --seed $seed \
--fp16 --fp16-init-scale 4 --fp16-scale-window 256 \
--log-interval 100 --log-format simple \
--validate-interval 1 \
--finetune-from-model $weight_path \
--best-checkpoint-metric $metric --patience 20 \
--save-dir $save_dir --only-polar $only_polar \
--reg
# --reg, for regression task
# --maximize-best-checkpoint-metric, for classification task
However, I encountered the following error:
unicore-train: error: unrecognized arguments: --local-rank=0
and the argument --local-rank does not even appear in Uni-Core. I am using PyTorch 2.0, and the log also warns me that:
If your script expects `--local-rank` argument to be set, please change it to read from `os.environ['LOCAL_RANK']` instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions
This confuses me: does it mean Uni-Core does not support PyTorch 2.0 (which seems unlikely), or is there another problem?
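For what it's worth, a minimal sketch of the shim that warning suggests (the argparse wiring here is an assumption, not Uni-Core's actual argument parser):

import argparse
import os

parser = argparse.ArgumentParser()
# Accept both spellings of the legacy flag, but default to the LOCAL_RANK
# environment variable, which the PyTorch 2.x launchers set for each worker.
parser.add_argument("--local-rank", "--local_rank", dest="local_rank", type=int,
                    default=int(os.environ.get("LOCAL_RANK", 0)))
args, _unknown = parser.parse_known_args()
print(f"local rank: {args.local_rank}")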
This is absolutely a good framework, but it is very similar to fairseq. I am wondering whether anything differentiates Uni-Core from fairseq and makes it stand out: is it easier to use? Faster?
Best,
Zhangzhi
Hi,
Thanks for developing this powerful package. When I train a program with a batch_size higher than 1, Uni-Core asserts that the batch_size equals 1 and raises an AssertionError. Is this normal, or will it affect downstream processes?
Hello, when I install unicore using pip install unicore-0.0.1+cu113torch1.12.1-cp310-cp310-linux_x86_64.whl, I get this error:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
unicore 0.0.1 requires wandb, which is not installed.
unicore 0.0.1 requires torch>=2.0.0, but you have torch 1.12.1+cu113 which is incompatible.
And my CUDA version is 11.5.
Please tell me how to fix it.
Hi, does Uni-Core currently support CUDA 12? Thanks!
Dear developers,
I am trying to install a CPU version of Uni-Core on WSL from source using the command python setup.py install --disable-cuda-ext, but I always get the following message:
error: urllib3 2.2.1 is installed but urllib3<1.27,>=1.21.1 is required by {'requests'}
After I install urllib3 1.26, it still fails, as shown below:
Processing dependencies for unicore==0.0.1
error: urllib3 2.2.1 is installed but urllib3<1.27,>=1.21.1 is required by {'requests'}
(venv_torch) (base) jingheng@Bai-Group:~/Uni-Core$ pip uninstall urllib3==2.2.1
Found existing installation: urllib3 1.26.0
Uninstalling urllib3-1.26.0:
Would remove:
/home/jingheng/venv_torch/lib/python3.9/site-packages/urllib3-1.26.0.dist-info/*
/home/jingheng/venv_torch/lib/python3.9/site-packages/urllib3/*
Proceed (Y/n)?
I am not sure what the problem is; please let me know how to fix it. Thank you.
When I set --task to unimol_plus, it reports: unicore-train: error: argument --task: invalid choice: 'unimol_plus' (choose from ).
The choice list is empty, and I don't know how to fix this.
I noticed the example: https://github.com/dptech-corp/Uni-Core/blob/main/examples/bert/train_bert_test.sh
How can I use fp32?
Just remove --fp16 --fp16-init-scale 4 --fp16-scale-window 256.
Thanks!