Comments (12)
Hi,
It is the same. The weight processing is separated from the recurrent part, as shown in the following line:
from cuda_IndRNN_onlyrecurrent import IndRNN_onlyrecurrent as IndRNN
Here the IndRNN only refers to the recurrent part. Adding the weight processing of the input (with DI), it is the same as the whole IndRNN.
Thanks.
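The split described above can be sketched as follows. This is a minimal illustration, not the repo's actual code: the class names, the per-neuron recurrent weight `u`, and the ReLU activation are assumptions based on the IndRNN paper.

```python
import torch
import torch.nn as nn


class IndRNNOnlyRecurrent(nn.Module):
    """Recurrent part only: h_t = act(x_t + u * h_{t-1}), with x_t already weight-processed."""

    def __init__(self, hidden_size):
        super().__init__()
        # One independent recurrent weight per neuron (element-wise, not a matrix).
        self.u = nn.Parameter(torch.rand(hidden_size))
        self.act = nn.ReLU()

    def forward(self, x, h0=None):
        # x: (seq_len, batch, hidden)
        seq_len, batch, hidden = x.shape
        h = x.new_zeros(batch, hidden) if h0 is None else h0
        outs = []
        for t in range(seq_len):
            h = self.act(x[t] + self.u * h)
            outs.append(h)
        return torch.stack(outs)


class IndRNNLayer(nn.Module):
    """Full IndRNN layer: input weight processing (W x_t + b) plus the recurrent part."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.input_proj = nn.Linear(input_size, hidden_size)
        self.recurrent = IndRNNOnlyRecurrent(hidden_size)

    def forward(self, x):
        return self.recurrent(self.input_proj(x))
```

In this layout, `IndRNNOnlyRecurrent` corresponds to the imported `IndRNN_onlyrecurrent`, and the `nn.Linear` applied before it is the weight processing that the CUDA kernel leaves outside.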
from indrnn_pytorch.
Got it.
@Sunnydreamrain Thanks for your excellent work. But how can I create the .npy files?
Hi,
Generate the data ndarray: download the NTU RGB+D dataset, save the skeletons into an ndarray, and keep the length and label of each data entry.
You can read the data_reader and check which file and which dimension keeps what information.
Another way is to use your own data reader; it only needs to read the skeletons into the network for processing.
Thanks.
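As a standalone sketch of the preparation step, the following pads each skeleton sequence to a fixed length and saves the data, lengths, and labels. The file names, the (frames, joints, 3) layout, and the zero-padding scheme are assumptions for illustration; check the repo's data_reader for the exact layout it expects.

```python
import numpy as np

# Assumed layout: each sample is (max_len, num_joints, 3), zero-padded in time.
num_samples, max_len, num_joints = 4, 20, 25
data = np.zeros((num_samples, max_len, num_joints, 3), dtype=np.float32)
lengths = np.zeros(num_samples, dtype=np.int64)
labels = np.zeros(num_samples, dtype=np.int64)

for i in range(num_samples):
    # Stand-in for a parsed .skeleton file with a variable number of frames.
    skel = np.random.randn(np.random.randint(5, max_len + 1), num_joints, 3)
    lengths[i] = skel.shape[0]
    data[i, : skel.shape[0]] = skel       # zero-pad the remainder
    labels[i] = i % 60                    # NTU RGB+D has 60 action classes

np.save("data.npy", data)                 # file names are illustrative
np.save("lengths.npy", lengths)
np.save("labels.npy", labels)
```

Keeping the true lengths alongside the padded array lets the reader mask out the padding later.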
@Sunnydreamrain Hi, thanks for your work. I was redoing the experiment on Google Colab and ran into many errors about the loaded numpy array, and the memory exploded when the file is large. When I tried a smaller file, the errors were all gone. Can you tell me how much GPU memory you used when you trained on all the NTU data?
Hi, it is not very large. As I recall, it only takes around 2GB. The memory may grow if the network is large.
@Sunnydreamrain I think something went wrong during multi-threading, because this is what I get after running:
Exception in thread Thread-4:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "", line 121, in call
    self.result['data']=np.asarray(batch_data,dtype=np.float32)
  File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", line 538, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: could not broadcast input array from shape (20,50,3) into shape (20)
Exception in thread Thread-5:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "", line 219, in call
    self.result['data']=np.asarray(batch_data,dtype=np.float32)
  File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", line 538, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: could not broadcast input array from shape (20,50,3) into shape (20)
After the exceptions, a KeyError for the dict occurs; I think it is caused by the exceptions above:
KeyError: 'data'
The memory of each thread cannot be released, and then the program crashes. Do you have any suggestions for this situation?
The code is based on the SRU shown in the following link. Multiple GPUs are not supported yet. If you want to use multiple GPUs, please use the PyTorch version instead of the CUDA version.
https://github.com/taolei87/sru/issues/4
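With the pure-PyTorch version, the batch could in principle be split across GPUs with the standard `nn.DataParallel` wrapper. A minimal sketch, using a plain `nn.Sequential` as a hypothetical stand-in for the repo's actual PyTorch IndRNN model:

```python
import torch
import torch.nn as nn

# Stand-in model: 50 joints * 3 coordinates flattened to 150 inputs,
# 60 output classes (NTU RGB+D). The real model class name differs.
model = nn.Sequential(nn.Linear(50 * 3, 128), nn.ReLU(), nn.Linear(128, 60))

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # replicates the model and splits the batch dim

out = model(torch.randn(20, 150))   # batch of 20 samples
```

This only applies to the PyTorch implementation; the custom CUDA kernel version would need explicit multi-GPU support in the kernel code.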
@Sunnydreamrain Another question is about the ndarray data. After I converted the data to an ndarray, it is actually np.array(list(), list(), ...) of object dtype, because each file has a different number of frames. I was hoping to get a proper multidimensional np.array().
What format should we use to make the program run? Should we make every frame an np.array(), or is a plain list OK?
@Sunnydreamrain Ah, I found the cause of the thread exception. It was silly of me to have accidentally put some empty frames into the ndarray while processing the raw data, so some samples end up with shape (20,) instead of (20, 50, 3). That is why np.asarray() could not convert the type.
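The failure mode is easy to reproduce in a few lines. This is a standalone sketch; the (50, 3) per-frame shape is taken from the traceback, and the shape check at the end is just one possible defensive fix.

```python
import numpy as np

# 20 frames of 50 joints x 3 coordinates -> a clean (20, 50, 3) batch.
good = [np.zeros((50, 3)) for _ in range(20)]
arr = np.asarray(good, dtype=np.float32)

# One accidentally empty frame makes the list ragged, and np.asarray
# can no longer build a rectangular float array.
bad = good[:19] + [np.zeros((0,))]
try:
    np.asarray(bad, dtype=np.float32)
    ragged_failed = False
except ValueError:
    ragged_failed = True

# A cheap check before batching catches the bad sample early:
all_frames_ok = all(frame.shape == (50, 3) for frame in good)
```

Validating every frame's shape while building the ndarray would have surfaced the empty frames before they reached the data-loading threads.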
Hi, thanks a lot for your great implementation. I'm still in the process of understanding it. Can you kindly let me know the input dimension of the dataset and what the length should be? I would really appreciate it if you could mention some more details about the dataset.
Answered in #3.