Comments (12)

Sunnydreamrain commented on September 26, 2024

Hi,
It is the same. The weight processing is separated from the recurrent part as shown in the following line.

https://github.com/Sunnydreamrain/IndRNN_pytorch/blob/master/action_recognition/Indrnn_action_network.py#L108

Note `from cuda_IndRNN_onlyrecurrent import IndRNN_onlyrecurrent as IndRNN`. Here IndRNN refers only to the recurrent part. Adding the weight processing (with DI), it is the same as the whole IndRNN.
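For reference, here is a minimal sketch (class and parameter names are illustrative, not the repository's exact code) of how the recurrent-only part composes with a separate input-weight transform to give a full IndRNN layer:

```python
import torch
import torch.nn as nn

class IndRNNOnlyRecurrent(nn.Module):
    """Recurrent part only: h_t = relu(x_t + u * h_{t-1}) with an elementwise u."""
    def __init__(self, hidden_size):
        super().__init__()
        self.recurrent_weight = nn.Parameter(torch.ones(hidden_size))

    def forward(self, x):  # x: (seq_len, batch, hidden_size)
        h = torch.zeros(x.size(1), x.size(2), device=x.device)
        outputs = []
        for x_t in x:  # step through time
            h = torch.relu(x_t + self.recurrent_weight * h)
            outputs.append(h)
        return torch.stack(outputs)

class FullIndRNNLayer(nn.Module):
    """Weight processing (Linear + BatchNorm) followed by the recurrent-only part."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.input_weight = nn.Linear(input_size, hidden_size, bias=False)
        self.bn = nn.BatchNorm1d(hidden_size)
        self.recurrent = IndRNNOnlyRecurrent(hidden_size)

    def forward(self, x):  # x: (seq_len, batch, input_size)
        seq_len, batch, _ = x.shape
        x = self.input_weight(x)
        x = self.bn(x.reshape(seq_len * batch, -1)).reshape(seq_len, batch, -1)
        return self.recurrent(x)

# Illustrative usage: 25 joints * 3 coordinates = 75 input features per frame.
layer = FullIndRNNLayer(input_size=75, hidden_size=512)
out = layer(torch.randn(20, 4, 75))  # (seq_len=20, batch=4, 75) -> (20, 4, 512)
```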

Thanks.

hongge831 commented on September 26, 2024

got it

felixfuu commented on September 26, 2024

@Sunnydreamrain Thanks for your excellent work. But how can I create the .npy files?

Sunnydreamrain commented on September 26, 2024

Hi,

Generate the data ndarray: download the NTU RGB+D dataset, save the skeletons into an ndarray, and keep the length and label of each data entry.
You can read the data_reader and check which file and which dimension holds what information.
Another way is to use your own data reader; it only needs to read the skeletons into the network for processing.
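For illustration, a hedged sketch of one possible way to build such .npy files; the file names, padding length, and array layout below are assumptions, not the exact format expected by the repository's data_reader:

```python
import numpy as np

num_joints, coords, max_len = 25, 3, 300  # NTU RGB+D skeletons: 25 joints, (x, y, z)
samples = []   # each entry: padded array of shape (max_len, num_joints, coords)
lengths = []   # true (unpadded) number of frames per sample
labels = []    # action class index per sample

def add_sample(skeleton, label):
    """Pad/truncate one skeleton sequence and record its length and label."""
    length = min(len(skeleton), max_len)
    padded = np.zeros((max_len, num_joints, coords), dtype=np.float32)
    padded[:length] = skeleton[:length]
    samples.append(padded)
    lengths.append(length)
    labels.append(label)

# ... loop over the raw skeleton files and call add_sample(...) for each ...

np.save('train_data.npy', np.asarray(samples, dtype=np.float32))
np.save('train_length.npy', np.asarray(lengths, dtype=np.int64))
np.save('train_label.npy', np.asarray(labels, dtype=np.int64))
```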

Thanks.

LiyangLiu123 commented on September 26, 2024

@Sunnydreamrain Hi, thanks for your work. I was redoing the experiment on Google Colab and ran into many errors with the loaded numpy array, and the memory exploded when the file is large. When I tried a smaller file, the errors all went away. Can you tell me how much GPU memory you used when you trained on the full NTU data?

Sunnydreamrain commented on September 26, 2024

Hi, it is not very large. As I recall, it only takes around 2GB. The memory may grow if the network is large.
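If the explosion is in host memory rather than GPU memory, one general workaround (independent of this repository) is to memory-map the .npy file so only the slices you index are read into RAM; the file name below is illustrative:

```python
import numpy as np

# mmap_mode='r' keeps the array on disk and reads requested slices on demand.
train_data = np.load('train_data.npy', mmap_mode='r')
batch = np.asarray(train_data[0:128], dtype=np.float32)  # materialize one batch only
```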

LiyangLiu123 commented on September 26, 2024

@Sunnydreamrain I think something went wrong during multi-threading, because this is what I get after running:

Exception in thread Thread-4:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "", line 121, in call
self.result['data']=np.asarray(batch_data,dtype=np.float32)
File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", line 538, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: could not broadcast input array from shape (20,50,3) into shape (20)

Exception in thread Thread-5:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "", line 219, in call
self.result['data']=np.asarray(batch_data,dtype=np.float32)
File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", line 538, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: could not broadcast input array from shape (20,50,3) into shape (20)

After the exceptions, a KeyError on the dict occurs. I think this error is caused by the exceptions above.

KeyError: 'data'

The memory of each thread cannot be released, and then the program crashes. Do you have any suggestions for this situation?

Sunnydreamrain commented on September 26, 2024

The code is based on the SRU shown in the following link. Multiple GPUs are not supported yet. If you want to use multiple GPUs, please use the PyTorch version instead of the CUDA version.

https://github.com/taolei87/sru/issues/4

LiyangLiu123 commented on September 26, 2024

@Sunnydreamrain Another question is about the ndarray data. After I converted the data to an ndarray, it is actually np.array([list(), list(), ...]) because each file has a different number of frames. I was hoping to get a proper multidimensional np.array().

What format should we use to make the program run? For example, should every frame be an np.array(), or is a plain list OK?

LiyangLiu123 commented on September 26, 2024

@Sunnydreamrain Ah, I found the cause of the thread exception. It was silly of me to have accidentally included some empty frames in the ndarray while processing the raw data, so some samples end up with shape (20,) instead of (20,50,3). That's why np.asarray() couldn't convert the type.
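For anyone hitting the same error, a small hypothetical check (the expected shape is taken from the traceback above and is only illustrative) that fails early if any sample is misshapen before np.asarray() stacks the batch:

```python
import numpy as np

def stack_batch(batch_data, expected_shape=(20, 50, 3)):
    """Raise a clear error if any sample deviates from the expected per-sample shape."""
    bad = [i for i, sample in enumerate(batch_data)
           if np.asarray(sample).shape != expected_shape]
    if bad:
        raise ValueError('samples %s do not have shape %s' % (bad, expected_shape))
    return np.asarray(batch_data, dtype=np.float32)
```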

hiroshiperera commented on September 26, 2024

Hi, thanks a lot for your great implementation. I'm still in the process of understanding it. Can you kindly let me know the input dimension of the dataset and what the length should be? I would really appreciate some more details about the dataset.

Sunnydreamrain commented on September 26, 2024

Answered in #3.
