Comments (7)
Hello,
Indeed, setting the leaking rate to 1.0 shouldn't change anything, as it is already the default in ReservoirPy.
The difference in performance comes from the spectral radius:
- if not specified, a random sparse matrix W is generated, with its non-zero values drawn from a normal distribution (you can also change the distribution);
- if specified, the same matrix is created and then scaled so that its spectral radius equals sr.
In your case, the spectral radius you get when you don't specify it is around ~10. You can compute it with the reservoirpy.observables.spectral_radius function:
from reservoirpy.nodes import Reservoir
from reservoirpy.observables import spectral_radius
import numpy as np

my_reservoir = Reservoir(
    units=1000,
    # sr=1.0
)
my_reservoir.initialize(np.random.normal(size=(12, 1)))
spectral_radius(my_reservoir.W)
The spectral radius of the recurrent connection weights has a significant impact on task performance, so it's no surprise that you get poor results when you don't specify it. You can read more about its impact in the documentation.
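To make the scaling step concrete, here is a numpy-only sketch of what happens when sr is specified. This is an illustration under simplifying assumptions (dense matrix, no sparsity), not ReservoirPy's actual initializer:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
units = 100

# Dense stand-in for the recurrent matrix W, with normally
# distributed values (ReservoirPy also applies sparsity).
W = rng.normal(size=(units, units))

# Spectral radius = largest absolute eigenvalue.
rho = np.max(np.abs(np.linalg.eigvals(W)))

# Scaling W by target_sr / rho yields a matrix whose
# spectral radius is exactly target_sr.
target_sr = 0.9
W_scaled = W * (target_sr / rho)

new_rho = np.max(np.abs(np.linalg.eigvals(W_scaled)))  # ~0.9
```

Since eigenvalues scale linearly with the matrix, dividing by the current radius and multiplying by the target gives exactly the requested spectral radius.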
from reservoirpy.
Hello,
Thank you for testing new tasks with ReservoirPy.
Reservoir Computing rarely works out-of-the-box: as a machine learning tool with few trained parameters (only the output layer), one needs to find optimal hyperparameters (the parameters that are not trained) for each kind of task.
Here, several factors could influence your performance:
- Normalize your data if it is not already done: between -1 and 1 by default, or between 0 and 1 if you only have positive values.
- Start with the simplest model, i.e. an ESN without feedback from the read-out layer to the reservoir. Feedback makes training more complicated and more unstable, in particular if you use offline learning (ridge) rather than online learning (RLS, FORCE, ...).
- Make an extensive hyperparameter search, including the ridge (regularization) parameter. It is important to keep the number of units in the reservoir fixed during a given search, otherwise the results will be less interpretable. It is better to run several searches, each with a fixed number of units, and to increase the number of units between searches.
- Look at the results of the hyperparameter search to understand which sets of hyperparameters are the most robust, instead of just taking the "best" result. If you do take the best, be careful to also use the same seed, to be sure to obtain the same results. You can find several examples of the influence of hyperparameters and of how to read hyperparameter search results here:
https://github.com/reservoirpy/reservoirpy/blob/master/tutorials/4-Understand_and_optimize_hyperparameters.ipynb
This kind of plot will help you understand which hyperparameters give the most robust results.
I hope this helps. If you show us this kind of plot for all the hyperparameters, we can help you interpret them.
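As a concrete illustration of the first point above (data normalization), a minimal min-max scaler in plain numpy could look like this. The function name and default range are just for illustration, not a ReservoirPy API:

```python
import numpy as np

def minmax_scale(x, low=-1.0, high=1.0):
    """Rescale each feature of x (timesteps, features) to [low, high]."""
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    return low + (high - low) * (x - x_min) / (x_max - x_min)

data = np.array([[0.0], [5.0], [10.0]])
minmax_scale(data)            # -> [[-1.], [0.], [1.]]
minmax_scale(data, 0.0, 1.0)  # -> [[0.], [0.5], [1.]]
```

Note that the minima and maxima should be computed on the training set only and reused on the test set, to avoid leaking information.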
Additionally, we give some hints on how to optimise hyperparameters in this paper:
Which Hype for My New Task? Hints and Random Search for Echo State Networks Hyperparameters. ICANN 2021.
Hello Paul,
Thank you for your response.
I actually achieved my best results when I left the spectral radius unspecified. I even attempted hyperparameter optimization using Hyperopt, but my ESN with only the number of units specified outperformed the ESN with the optimized hyperparameters. I've attached some figures to illustrate what I mean.
Despite the better result, the unspecified predictions appear very noisy. I am guessing this has to do with the chaotic nature of the reservoir due to the high spectral radius. I was wondering if there was a way to smooth this out. I thought about using a filter, but my initial attempt proved unsuccessful.
*All the "Zoom In" figures are from the ESN with unspecified leaking rate and spectral radius.
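A simple centered moving average is one common filter for this kind of post-processing of noisy predictions. A minimal numpy sketch (the window size here is an arbitrary choice, not a recommendation):

```python
import numpy as np

def moving_average(y, window=5):
    """Centered moving average that preserves the series length
    by edge-padding (window should be odd)."""
    pad = window // 2
    y_padded = np.pad(y, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(y_padded, kernel, mode="valid")

# Example: smooth a noisy sine wave.
rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0, 2 * np.pi, 200)) + 0.1 * rng.normal(size=200)
smooth = moving_average(noisy, window=9)  # same shape as noisy
```

Keep in mind that filtering only hides the symptom; taming the spectral radius addresses the cause of the noisy dynamics.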
Hello,
It's difficult to tell why you don't get better results with a hyperparameter exploration. Can you provide more details about it? What kind of performance does the hyperparameter exploration give you around the default parameters?
It seems you have more units in the default reservoir (500) than in the optimized version (150). That could explain the performance decrease.
As for the smoothness of your output, many parameters come into play, with strong inter-dependencies.
Hello Paul,
I attached the hyperparameter optimization script, as well as the data I used to train the ESN. When I ran it, I got the values stated in the rctest_hyperopt figure that I shared previously.
As for the number of units in my reservoir, I started with 150 and found that performance improved as I increased the number of units.
Hello,
To illustrate my question further: in a previous experiment, I was able to create an ESN model (rpy_PK4GA7P1_ESN.py) that performed well in predicting the future dynamics of a piezoelectric actuator.
When I reran the same experiment with a new set of data from an electric linear actuator (rpy_RCP.py), keeping the structure of the model the same, I got very different performance (RMSE: 14.22177, R^2: -16.21228). Furthermore, when I reran the experiment on the electric linear actuator data with the leaking rate and spectral radius left unspecified, performance improved (RMSE: 0.75309, R^2: 0.95170), but still not to the level achieved with the piezoelectric data. What adjustments should I make to achieve regression results on the electric linear data similar to those on the piezoelectric data?
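For reference, metrics like these can be reproduced in a few lines of plain numpy (reservoirpy.observables also provides rmse and rsquare helpers); this is a generic sketch, not the script used above:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def rsquare(y_true, y_pred):
    """Coefficient of determination (R^2)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
rmse(y_true, y_pred)     # ~0.158
rsquare(y_true, y_pred)  # 0.98
```

A negative R^2, as in the first run above, means the model predicts worse than a constant equal to the target's mean.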