
GPU implementation (spectra) [OPEN]

ccruizm commented on July 26, 2024

Comments (16)

russellkune commented on July 26, 2024

ccruizm commented on July 26, 2024

Nice that you made it work, @kvshams! Regarding your second comment, I also experienced something similar (running on CPU, though). I raised an issue (#22) but closed it because I could not reproduce it on another HPC infrastructure. If you are having the same problem, it might be worthwhile to look into why this is happening.

kvshams commented on July 26, 2024

@ccruizm it seems you can invoke the GPU utility with

from spectra import spectra_gpu as spc_gpu

and all the commands are then available via spc_gpu.'...'. It still seems experimental and prints the following info when called:

Spectra GPU support is still under development. Raise any issues on github 
 
 Changes from v1: 
 (1) GPU support [see tutorial] 
 (2) minibatching for local parameters and data 
 Note that minibatching may affect optimization results 
 Code will eventually be merged into spectra.py

I have tried it on a V100 GPU and it takes forever, even for 2 epochs on my data. I am not sure whether any further preprocessing is required. I think I should try the spectra test data and see what happens.

ccruizm commented on July 26, 2024

Thanks for your comments, @kvshams, and I am looking forward to testing the new GPU implementation, @russellkune 🤓

ccruizm commented on July 26, 2024

I am now running a parallel comparison between CPU and GPU. My dataset is 120K cells, 460 gene sets, and five main cell types. On the CPU, the estimated time to complete 10,000 epochs is just under 32 h (~12 s/it). Importing spectra_gpu on a cluster with an NVIDIA A100-SXM4-40GB now shows an estimated time of 10 h (3.7 s/it) 😀 However, I am puzzled that when I check the GPU's resource usage (nvidia-smi, torch.cuda.memory_allocated() or torch.cuda.memory_cached()), I see no memory allocation or running processes.

I assume it is using the GPU, since it reduces the running time by 3x, but I am not sure why it does not show up 😅

[Screenshot, 2023-06-15 at 09:55:46]
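
One way to check whether tensors are actually landing on the GPU is a minimal sketch like the following, assuming a standard PyTorch setup rather than anything spectra-specific:

import torch

# Minimal check: put a tensor on the GPU and ask the caching allocator
# how much memory it holds.
if torch.cuda.is_available():
    x = torch.randn(1000, 1000, device="cuda")  # roughly 4 MB of float32
    torch.cuda.synchronize()
    print(f"Allocated: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB")
    # If this prints a nonzero value but nvidia-smi still lists no process,
    # the monitoring side (for example, querying a different node, or a
    # container PID namespace) is a more likely culprit than the computation.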

kvshams commented on July 26, 2024

That's a great improvement, but I am not sure why it is not showing in the resource usage. Is there a branch where I can try the new implementation?

ccruizm commented on July 26, 2024

It is implemented in the master branch. I only changed the import to from spectra import spectra_gpu as spc and kept everything else the same.

kvshams commented on July 26, 2024

ccruizm commented on July 26, 2024

Not at all. I am running exactly the same pipeline, only with the change I mentioned before. Maybe there is an issue with torch recognizing your GPU (I have seen that with other tools/environment setups)?
If you do:

import torch

# Check if CUDA is available
if torch.cuda.is_available():
    print("CUDA is available")
    
    # Get the number of available GPUs
    num_gpus = torch.cuda.device_count()
    print(f"Number of available GPUs: {num_gpus}")
    
    # Get the name of each available GPU
    for i in range(num_gpus):
        gpu_name = torch.cuda.get_device_name(i)
        print(f"GPU {i}: {gpu_name}")
else:
    print("CUDA is not available")

Do you see your GPU?

kvshams commented on July 26, 2024

Yes, the GPU is detected in the session. I just modified the snippet above:

import torch

# Check if CUDA is available
if torch.cuda.is_available():
    print("CUDA is available")

    # Get the number of available GPUs
    num_gpus = torch.cuda.device_count()
    print(f"Number of available GPUs: {num_gpus}")

    # Get the name and memory status of each available GPU
    for i in range(num_gpus):
        gpu_name = torch.cuda.get_device_name(i)
        print(f"GPU {i}: {gpu_name}")

        # Get the memory information
        # (memory_cached is a deprecated alias of memory_reserved)
        gpu_memory = torch.cuda.get_device_properties(i).total_memory
        gpu_memory_allocated = torch.cuda.memory_allocated(i)
        gpu_memory_cached = torch.cuda.memory_reserved(i)
        # Reserved (cached) memory already includes allocated memory,
        # so free = total - reserved
        gpu_memory_free = gpu_memory - gpu_memory_cached

        print(f"\tTotal Memory: {gpu_memory / 1024**3:.2f} GB")
        print(f"\tAllocated Memory: {gpu_memory_allocated / 1024**3:.2f} GB")
        print(f"\tCached Memory: {gpu_memory_cached / 1024**3:.2f} GB")
        print(f"\tFree Memory: {gpu_memory_free / 1024**3:.2f} GB")
else:
    print("CUDA is not available")

that gives,

CUDA is available
Number of available GPUs: 1
GPU 0: Tesla V100-SXM2-32GB
	Total Memory: 31.75 GB
	Allocated Memory: 0.00 GB
	Cached Memory: 0.00 GB
	Free Memory: 31.75 GB

My session info is

Session information
-----
numpy               1.23.5
pandas              2.0.0
scanpy              1.9.3
scipy               1.10.1
session_info        1.0.0
spectra             NA
torch               2.0.0.post200
-----
Modules imported as dependencies
PIL                 9.5.0
anndata             0.9.1
asttokens           NA
awkward             2.2.0
awkward_cpp         NA
backcall            0.2.0
cairo               1.23.0
cffi                1.15.1
comm                0.1.3
cycler              0.10.0
cython_runtime      NA
dateutil            2.8.2
debugpy             1.6.7
decorator           5.1.1
defusedxml          0.7.1
dot_parser          NA
executing           1.2.0
gmpy2               2.1.2
google              NA
h5py                3.8.0
igraph              0.10.4
importlib_metadata  NA
importlib_resources NA
ipykernel           6.23.1
ipython_genutils    0.2.0
jedi                0.18.2
jinja2              3.1.2
joblib              1.2.0
jsonpickle          3.0.1
kiwisolver          1.4.4
leidenalg           0.9.1
llvmlite            0.39.1
markupsafe          2.1.2
matplotlib          3.7.1
mpl_toolkits        NA
mpmath              1.3.0
natsort             8.3.1
networkx            3.1
numba               0.56.4
numexpr             2.8.4
nvfuser             NA
opt_einsum          v3.3.0
packaging           23.1
parso               0.8.3
pexpect             4.8.0
pickleshare         0.7.5
pkg_resources       NA
platformdirs        3.5.1
plotly              5.14.1
prompt_toolkit      3.0.38
psutil              5.9.5
ptyprocess          0.7.0
pure_eval           0.2.2
pydev_ipython       NA
pydevconsole        NA
pydevd              2.9.5
pydevd_file_utils   NA
pydevd_plugins      NA
pydevd_tracing      NA
pydot               1.4.2
pygments            2.15.1
pyparsing           3.0.9
pytz                2023.3
pyvis               0.3.2
regex               2.5.129
setuptools          67.7.2
six                 1.16.0
sklearn             1.2.2
stack_data          0.6.2
sympy               1.12
texttable           1.6.7
threadpoolctl       3.1.0
tornado             6.3.2
tqdm                4.65.0
traitlets           5.9.0
typing_extensions   NA
wcwidth             0.2.6
yaml                6.0
zipp                NA
zmq                 25.0.2
zoneinfo            NA
-----
IPython             8.13.2
jupyter_client      8.2.0
jupyter_core        5.3.0
notebook            6.5.4
-----
Python 3.9.16 | packaged by conda-forge | (main, Feb  1 2023, 21:39:03) [GCC 11.3.0]
Linux-3.10.0-1160.11.1.el7.x86_64-x86_64-with-glibc2.17
-----
Session information updated at 2023-06-15 10:39

kvshams commented on July 26, 2024

@russellkune It seems there are two independent problems.

  1. The delay I was observing was due to running different GPU jobs on the same GPU. I was running it on a GPU shared with other jobs (enough memory was available, as seen above). The process stalls at CUDA Available: True.
  2. If I run on a different GPU (with no other processes running on it), it runs but dies during memory allocation (this seems to be insufficient system RAM rather than a GPU issue). I am waiting for a larger-memory node to see whether the error persists.

My dataset is 114k cells, 14 cell types, and 100 gene sets, with use_highly_variable=False and about 5k genes.

CUDA Available:  True
Initializing model...
Building parameter set...

Then the kernel dies.
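
Since the kernel dies without any CUDA error, host RAM is a plausible suspect. A minimal sketch using psutil (already present in the session info above) to snapshot system memory right before the model-building step:

import psutil

# Snapshot of host RAM; a kernel that dies during "Building parameter set..."
# with no CUDA OutOfMemoryError often ran out of system memory instead.
vm = psutil.virtual_memory()
print(f"Host RAM: {vm.total / 1024**3:.1f} GB total, "
      f"{vm.available / 1024**3:.1f} GB available ({vm.percent}% used)")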

kvshams commented on July 26, 2024

Finally it works (problem #1 did resolve after stalling for about an hour. I am not sure what caused it, but once the process started it was much faster, yay!). Is there any reason the label_factors and overlap_threshold parameters are not available in the GPU est_spectra module?

Update on the stalling:

It seems to be associated with the memory allocation process. I got a GPU memory error while running multiple scripts on the same GPU:

OutOfMemoryError: CUDA out of memory. Tried to allocate 1.77 GiB (GPU 0; 31.75 GiB total capacity; 27.05 GiB already allocated; 185.94 MiB free; 28.71 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
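
As the error message itself suggests, max_split_size_mb can be set through the PYTORCH_CUDA_ALLOC_CONF environment variable. A minimal sketch; the value 128 is illustrative, not a recommendation from this thread:

import os

# Must be set before the first CUDA allocation, so ideally before importing
# anything that initializes CUDA. 128 MB is an illustrative value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after setting the allocator config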

kvshams commented on July 26, 2024

kvshams commented on July 26, 2024

I think that if I import both the GPU and CPU versions in the same notebook (i.e. from spectra import spectra as spc and from spectra import spectra_gpu as spc_gpu), the error persists; if I use only the GPU version, it works.

For speed, it is ~3x faster in my case too (on a 16-core CPU, 240 GB VM it is ~21 h; on a Tesla V100-SXM2-32GB GPU VM with a 16-core CPU it is ~7 h).

What is the timeline for merging the new GPU file? Are there any big differences I need to worry about when using results from the current version?
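
A minimal sketch of the single-import workaround described above; the observation here is simply to keep spectra_gpu as the only spectra import in a fresh kernel, not an officially documented requirement:

# Use exactly one implementation per kernel; importing both in one notebook
# coincided with the OOM error described above.
from spectra import spectra_gpu as spc  # GPU-only session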

kvshams commented on July 26, 2024

I am also seeing the same issue. Yesterday my script was running on the GPU but did not finish due to insufficient memory. On a fresh node today it is running 3x faster, but the PID is not listed in the nvidia-smi output. Where is the process happening 😅 It seems to be a PID mapping issue.
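
One way to probe the PID-mapping hypothesis (a sketch assuming standard nvidia-smi query flags, nothing spectra-specific) is to ask nvidia-smi for compute processes directly; inside a container or restricted PID namespace, nvidia-smi may be unable to list processes even though the GPU is genuinely busy:

import subprocess

# Query compute processes on the GPU; an empty listing while the device is
# busy usually points at PID-namespace isolation rather than an idle GPU.
out = subprocess.run(
    ["nvidia-smi", "--query-compute-apps=pid,used_memory", "--format=csv"],
    capture_output=True, text=True,
)
print(out.stdout)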

ccruizm commented on July 26, 2024

Well, I have some results to share πŸ€“

  • The CPU implementation ran all 10,000 epochs, took 32.4 h, and reached an LR of 0.001.
  • The GPU run reached a lower LR (0.0001) at 7,250 epochs and stopped there; no further epochs were run. This took 7.5 h.

I was waiting for the CPU job to finish so I could compare the results, because I found some 'weird' behavior in the GPU run, but it also happened in the CPU implementation. I will raise a new issue to ask about it.
