
Comments (11)

JustGlowing commented on July 23, 2024

hi, try using the StandardScaler instead of the MinMaxScaler before anything else.

Other things that you can try:

  • Use a small sigma, e.g. sigma=1 if you stick with a gaussian neighborhood function.
  • If your learning curve is still unexpected, use an inverse decay for both decay functions and play with the learning rate.
  • Changing the neighborhood function might also lead to good results, but you'll have to tweak sigma and the learning rate accordingly.

from minisom.

vwgeiser commented on July 23, 2024

@JustGlowing The strange shape of the learning curve turned out to be an oversight in my own code: I was running the visualization process from the BasicUsage.ipynb example on an already-trained SOM instead of a fresh instance. However, the large QE question still remains. I noticed that the MovieCovers.ipynb example also shows quite a large QE, so this could be the real error value for the problem I'm working with.

(The only aspect that has changed about the problem from the last post is I now have a few more samples to work with.)

Using StandardScaler increases QE substantially. Is there an interpretive reason why StandardScaler might be preferred? Here is a side-by-side comparison with equal hyperparameters:

[Image: SSvsMM — StandardScaler vs. MinMaxScaler side-by-side comparison]

Learning curve visualization [with linear_decay_to_(zero/one)]
[Images: MSLP33LC, MSLP33TE — learning curve and topographic error plots]


JustGlowing commented on July 23, 2024

[comment body not captured]

vwgeiser commented on July 23, 2024

Gotcha, thanks for the explanation! So if I understand correctly, the high quantization error might have to do with my input size (combined with the size of the SOM)? If I start from a 420x444 lat/long grid of pixel values that I flatten into input_len=186480 for the SOM, then even small per-feature distances between samples accumulate across that many features, leading to a higher QE:

# Suppose the average squared difference per feature is around 0.01 (a small residual after scaling)

Total squared difference = 0.01 × 186480 = 1864.8
Quantization error = sqrt(1864.8) ≈ 43.2
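
That back-of-the-envelope arithmetic can be sanity-checked in a few lines:

```python
import math

n_features = 420 * 444        # flattened 420x444 grid -> 186480 features
per_feature_sq_diff = 0.01    # assumed average squared difference per feature

total_sq_diff = per_feature_sq_diff * n_features   # ~1864.8
qe = math.sqrt(total_sq_diff)                      # Euclidean distance to the BMU

print(f"QE ≈ {qe:.1f}")  # QE ≈ 43.2
```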

Here is a quick example of the SOM output when I don't scale the data:
[Images: PMSL33, PMSL33LC — SOM output and learning curve without scaling]

[ 200 / 200 ] 100% - 0:00:00 left
quantization error: 157049.84784462157
SOM training took 4.58 seconds!
Begin Learning Curve Visualization
End Learning Curve Visualization
Q-error: 157887.617
T-error: 0.013
SOM LC visualization took 201.26 seconds!

An averaged composite of all samples included within each node (a sanity check):
[Image: PMSL33AVE — averaged composite per node]


JustGlowing commented on July 23, 2024

[comment body not captured]

vwgeiser commented on July 23, 2024

@JustGlowing Right now the input length is 186480, as I have one variable that is flattened into a NumPy array; with 149 samples this becomes an array of shape (149, 186480). What would be the best way to add another variable on top of this? The documentation for "train" states that the input data can be a np.array or list data matrix. How does this work with the input_len used to initialize the SOM when the data spans multiple rows? Would I incorporate another variable by flattening it and appending it onto the first, making the input length 186480 + 186480 = 372960? It would seem more logical to add it as another column of the input, but then my question is again how that interacts with input_len, since one row would no longer correspond to one sample; 186480 rows (and 2, 3, or more columns) would together correspond to one input into the SOM.

I am looking for functionality similar to the R "supersom" package. Is that something MiniSOM could support naturally?


JustGlowing commented on July 23, 2024

Hi, from what I understand you have an input matrix with 149 rows and 186480 columns. This means that you have 149 samples and 186480 variables. Even if you are reshaping objects that in other domains are considered variables, for a SOM the columns are considered variables and the rows samples.

From what I understand, you want to add more variables to your input and it's an easy task. You just need to add more columns to your matrix and set input_len equal to the number of columns that you have.
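
A small sketch of this layout, using toy grid sizes in place of the real 149 samples on a 420x444 grid:

```python
import numpy as np

n_samples, ny, nx = 5, 4, 6                 # stand-ins for 149 samples on 420x444
pressure = np.random.rand(n_samples, ny, nx)

# Rows = samples, columns = variables: flatten each sample's grid into one row.
X = pressure.reshape(n_samples, -1)         # shape (5, 24); really (149, 186480)
input_len = X.shape[1]                      # pass this to MiniSom(x, y, input_len, ...)
```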


vwgeiser commented on July 23, 2024

@JustGlowing I worded this in a weird way, I apologize. Each variable of the data is spatial and has an X and a Z dimension. If I were to put it in the format from your previous comment it would have 3 columns (so an input length of 3?). I've tried to implement this in MiniSom given my understanding of the problem and ran into the following:

# | Pressure | Temperature | Humidity  | 
| [420x444] | [420x444] | [420x444] |  (sample 1)
| [420x444] | [420x444] | [420x444] |  (sample 2)
...
| [420x444] | [420x444] | [420x444] |  (sample 149)

This organization has a shape of (149, 3, 420, 444).

However, when this is input into MiniSom:

ValueError: could not broadcast input array from shape (3,420,444) into shape (3,)
# | Pressure | Temperature | Humidity  |
| [186480] | [186480] | [186480] |  (sample 1)
| [186480] | [186480] | [186480] |  (sample 2)
...
| [186480] | [186480] | [186480] |  (sample 149)

This layout, with each variable flattened, has a shape of (149, 3, 186480), and yields the same error when put into MiniSom:

ValueError: could not broadcast input array from shape (3,186480) into shape (3,)

In both cases I run into errors relating to the shape of the input data, hence why I was looking for a way to represent this in MiniSom. From the Readme it doesn't seem this is currently supported, but I was wondering whether others have encountered this in the past?

If input_len could instead accept an input_shape, this could represent this sort of multivariable spatial data in a structure compatible with MiniSom; however, this wouldn't be such a simple change for the rest of the package.


vwgeiser commented on July 23, 2024

The only way I can think of to represent this in MiniSom would be to append the pressure, temperature, and humidity values into one vector of length 186480 + 186480 + 186480 = 559440. That way all three variables are considered, and I could reshape portions of this vector to produce a final visualization similar to the one above?

I.e., the first 186480 values within the weights of a SOM node would correspond to pressure, the next 186480 to temperature, and the final 186480 to humidity?
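
A sketch of that construction with toy grid sizes (the real vectors would be of length 3 × 186480 = 559440):

```python
import numpy as np

n_samples, ny, nx = 5, 4, 6   # toy stand-ins for 149 samples on a 420x444 grid
pressure    = np.random.rand(n_samples, ny, nx)
temperature = np.random.rand(n_samples, ny, nx)
humidity    = np.random.rand(n_samples, ny, nx)

# Flatten each variable to (n_samples, ny*nx) and append the blocks column-wise,
# giving one row per sample; input_len becomes 3 * ny * nx.
X = np.hstack([v.reshape(n_samples, -1)
               for v in (pressure, temperature, humidity)])
```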


JustGlowing commented on July 23, 2024

Hi again @vwgeiser, MiniSom only accepts two-dimensional matrices (samples × features) as input, and your last intuition makes sense.

If the dimensionality of the problem becomes an issue, you can train a separate SOM for each type of input and then find a way to aggregate the results.


vwgeiser commented on July 23, 2024

For anyone finding this thread later, vectorization is the process I was looking for and is what I currently am working with for a solution!
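
For completeness, recovering per-variable grids from a node's weight vector might look like this (toy sizes; `weights` here is a random stand-in for `som.get_weights()`):

```python
import numpy as np

ny, nx = 4, 6                  # toy grid; really 420x444
seg = ny * nx                  # length of one variable's flattened block
weights = np.random.rand(3, 3, 3 * seg)   # stand-in for som.get_weights()

# Split one node's weight vector back into per-variable grids for plotting.
node = weights[0, 0]
pressure_grid    = node[:seg].reshape(ny, nx)
temperature_grid = node[seg:2 * seg].reshape(ny, nx)
humidity_grid    = node[2 * seg:].reshape(ny, nx)
```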

