Comments (7)

lmcinnes avatar lmcinnes commented on May 1, 2024 15

I would certainly not apply K-Means to the results of UMAP (or t-SNE) output, since they rarely produce nice spherical clusters. On the other hand, I feel that the linked answer is perhaps too cautious -- it is not that you can't apply a density-based clustering algorithm to the results of t-SNE so much as that one needs to be careful in interpreting the results. t-SNE can certainly "create" sub-clusters that aren't really there (by separating parts of a cluster), and it certainly discards some density information, so again, care is needed. In that sense I believe it is perfectly acceptable to perform clustering on the result, provided you are going to submit the clusters to further analysis and verification. As long as you are not simply taking the results of clustering at face value (and you shouldn't really ever do that anyway), the results can provide useful information about your data.

Now, having said all of that: UMAP does offer some improvements over t-SNE on this front. It is significantly less likely to create sub-clusters in the way t-SNE does, and it will do a better job of preserving density (though far from perfectly, and it requires small min_dist values). Thus you can have more confidence in the results of clustering UMAP output than t-SNE output, but I would still strongly encourage actual analysis of the clusters.

If you want evidence that this can work, using HDBSCAN on a UMAP embedding of the MNIST digits dataset (with suitable parameter choices for each algorithm) gave me an ARI of 0.92, which is remarkably good for a purely unsupervised approach, and is clearly capturing real information about the data.

My biggest caveat is with regard to noise in the data: UMAP and t-SNE will both tend to contract noise into clusters. If you have noisy data then UMAP and t-SNE will hide that from you, so it pays to have some awareness of what your data is like before just trusting a clustering (again, as is true of all clustering).

from umap.

lmcinnes avatar lmcinnes commented on May 1, 2024 4

It is certainly true that small n_neighbors values will tend to break up clusters, so larger values are probably better if you want to do clustering. Of course, too large and you homogenize everything, so this is where one wants to do some exploratory work before clustering (and on the resulting clusters) to provide some confidence that there aren't any significant pitfalls.

A low min_dist also tends to be better for clustering, since concentrating points together, while potentially bad for visualisation, is exactly what you want for clustering.

With regard to clustering parameters, I would suggest using a low min_samples parameter and quite a large min_cluster_size. Once again, this is something you want to verify with some exploratory work on the clusters you get out.

In fun news, I think I can now describe HDBSCAN in the same primitives as UMAP, so the two may be more connected than one might think.

arita37 avatar arita37 commented on May 1, 2024 2

Another question: what about the dimension of the embedding for clustering?

Can we use higher than 2, like 3, 4, ...?
Any impact on the clustering?

gabritaglia avatar gabritaglia commented on May 1, 2024 1

Thank you so much for the deep answer. Very useful!

lmcinnes avatar lmcinnes commented on May 1, 2024 1

Since the order of samples is preserved under UMAP and then clustering, you can assign cluster labels directly to the original source data and interpret the clusters there -- this would really be the recommended approach.

kgullikson88 avatar kgullikson88 commented on May 1, 2024

I have a related question: My intuition suggests using large n_neighbors makes sense if using UMAP prior to clustering, because it will better preserve the global structure. Do you agree? Do you have any other preliminary thoughts on parameter choices for combining UMAP with HDBSCAN?

arnaud-nt2i avatar arnaud-nt2i commented on May 1, 2024

edit: dumb question:

For the sake of interpreting the results, if we use UMAP to reduce dimensionality before clustering, is it possible to retrieve the original labels of the points after clustering?
Put another way: what's the point of clustering data in the UMAP subspace, since the subspace vectors cannot be interpreted?
