Comments (4)
Hmm, this is a bit worrying and definitely looks like something strange that we need to investigate. I'm getting the same results. Perhaps it has something to do with how we handle the learning rate now, where we incorporate the exaggeration factor into the automatic learning rate. The collapse seems to stem from the early exaggeration phase.
For instance,

```python
e = TSNE(verbose=True, early_exaggeration=1, learning_rate="auto").fit(x)
```

works fine, but

```python
e = TSNE(verbose=True, early_exaggeration=12, learning_rate="auto").fit(x)
```

produces what you reported.
I thought it may have something to do with the learning rate, which, for ee=12, turns out to be around 8, but

```python
e = TSNE(verbose=True, early_exaggeration=1, learning_rate=8).fit(x)
```

works fine. It seems that the combination of a low learning rate and higher exaggeration values produces this collapse.
Perhaps the simplest fix would be to enforce a minimum learning rate, e.g. around 100. That fixes the problem in this case. However, it feels more like a band-aid than a real fix. There must be something in the BH implementation, perhaps a rounding issue. Definitely worth investigating, as this doesn't seem like such an unusual use case.
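To make the suspected interaction concrete, here is a rough sketch of an auto learning rate with the proposed floor. The `n_samples / exaggeration` formula is an assumption, not necessarily the library's exact rule, but it is consistent with the reported lr of around 8 if n is around 100 and ee=12:

```python
def auto_learning_rate(n_samples, exaggeration, min_lr=100):
    # Suspected heuristic: the learning rate scales as n / exaggeration,
    # which would give 100 / 12 ~ 8.3 for the example discussed above.
    # The min_lr floor is the proposed band-aid, not current behaviour.
    return max(n_samples / exaggeration, min_lr)
```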
---
Yeah, that's interesting. I have a suspicion that the reason it works fine with learning rate 100 is almost accidental. My feeling is that the embedding does not converge! The learning rate is too big, so the points jump around zero without converging anywhere, but also without collapsing.
This kinda works as a starting point for subsequent optimization, but it's not actually what one wants!
When we set the learning rate to 8, the embedding does not fluctuate anymore but steadily collapses to machine-precision zero. I suspect the true size of this embedding should be really small, and maybe BH goes haywire at some point and makes everything collapse? Or perhaps the true size of the embedding really is 0, in the sense that this somehow minimizes the loss function? I'm not sure.
But the main issue may be that early exaggeration factor 12 is too large for n=1000, and we just did not see it before because it was masked by the learning rate being too large, so everything was jumping around.
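For intuition on the "true size is 0" idea, a sketch using the standard t-SNE gradient with exaggeration factor $\rho$:

$$
\frac{\partial C}{\partial \mathbf{y}_i}
= 4 \sum_{j \ne i} \bigl(\rho\, p_{ij} - q_{ij}\bigr)\, w_{ij}\, (\mathbf{y}_i - \mathbf{y}_j),
\qquad
w_{ij} = \frac{1}{1 + \lVert \mathbf{y}_i - \mathbf{y}_j \rVert^2}.
$$

Both $p$ and $q$ sum to one over all pairs, so with $\rho = 12$ the attractive terms outweigh the repulsive ones by an order of magnitude on average, and every pair with $\rho\, p_{ij} > q_{ij}$ feels a purely attractive force. For small $n$ it is then quite plausible that the loss-minimizing embedding during the exaggeration phase has near-zero diameter; this contractive regime is essentially the one analyzed by Linderman & Steinerberger (2019).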
I want to explore this a bit more.
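One way to explore it is to track the embedding's spread during optimization, to tell "jumping around" apart from genuine collapse. A minimal sketch using openTSNE's callback mechanism (callbacks receive the iteration number, the current KL divergence, and the embedding):

```python
import numpy as np
from openTSNE import TSNE

spreads = []

def track_spread(iteration, error, embedding):
    # Record the overall spread of the embedding: a steady decay toward
    # machine precision indicates collapse, while large fluctuations
    # indicate a learning rate that is too high.
    spread = np.std(np.asarray(embedding))
    spreads.append((iteration, error, spread))
    print(f"iter {iteration:4d}  KL={error:.4f}  spread={spread:.3e}")

x = np.random.randn(100, 10)
TSNE(
    early_exaggeration=12,
    learning_rate=8,
    callbacks=track_spread,
    callbacks_every_iters=25,
).fit(x)
```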
---
Also, it only happens if the data are unstructured. If you generate the data like this:
```python
import numpy as np

np.random.seed(42)
x = np.random.randn(100, 10)
x[:50, 0] += 10  # shift half the points: two well-separated clusters
```
then using the defaults works fine and does not collapse to a single point.
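For reference, "the defaults" here means just:

```python
from openTSNE import TSNE

# With the cluster structure above, the default settings (including
# early_exaggeration=12) produce a sensible embedding rather than a collapse.
e = TSNE(verbose=True).fit(x)
```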
---
I think there may be two distinct issues here.
The first issue is some intricate problem with the BH implementation that leads to negative KL values. We saw this before in #180 when the points were overlapping in the initialization. Here the same thing happens whenever the points get very close to each other during optimization.
The second issue is that early exaggeration 12 is "too strong" for some datasets, in particular small ones. This results in the embedding collapsing either to a single point or sometimes to a 1-dimensional line. This may or may not lead to problems in subsequent optimization, and also seems highly dependent on the dataset and on the perplexity. I played around a bit with my simulation, and this happens only for some particular values of the sample size and dimensionality (and perplexity).
Note that the original 2008 t-SNE paper used early exaggeration factor 4. It was set to 12 in the Barnes-Hut t-SNE paper from 2012. I think this agrees with the idea that smaller datasets need smaller early exaggeration. Also, I recall that in https://www.nature.com/articles/s41586-020-2907-3 I wrote the following:
> We used the t-SNE implementation from scikit-learn Python library with the default perplexity (30), early exaggeration 4 (the default value 12 can be too large for small data sets), and scaled PCA initialization[23].
(the sample size was ~1000), but I never really explored this systematically and can't remember what exact issues I had with factor 12 back then.
My thinking now is that it may actually make sense to use `early_exaggeration="auto"`, defaulting to 4 for small enough datasets and to 12 otherwise. I'm not sure where to set the threshold, but maybe it could be the same as what we use for the BH/FFT choice: `"auto"` means BH for n < 10,000.
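As a sketch of what that could look like (the 10,000 cutoff mirrors the existing BH/FFT switch; the exact threshold is up for discussion):

```python
def auto_early_exaggeration(n_samples):
    # Small datasets get the milder factor 4 from the original 2008 t-SNE
    # paper; larger ones keep the Barnes-Hut t-SNE default of 12. The
    # threshold reuses the same cutoff as the BH/FFT method choice.
    return 4 if n_samples < 10_000 else 12
```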