I think about dimensionality reduction a lot, but currently have no need or urge to publish in the stick-a-PDF-on-arXiv sense. As a result, I have accumulated various markdown documents and notebooks scattered across many repositories. I am losing track of it all myself, so here I collect in one place some of the discussions, experiments and other musings.
A Fisher Information-based approach to estimate the dimensionality of datasets, using only quantities you already calculate during the perplexity calibration stage of t-SNE: https://jlmelville.github.io/smallvis/idp-theory.html.
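To give a flavor of the idea (this is a rough sketch, not necessarily the exact estimator derived in the linked doc): during perplexity calibration you already have, for each point, the squared distances to its neighbors and the calibrated Gaussian precision `beta`. If the data is locally uniform on a d-dimensional manifold, the Shannon entropy of the resulting affinities changes with log-precision roughly as `dH/d(log beta) ≈ -d/2`, so a finite difference around the calibrated `beta` gives a per-point dimensionality estimate:

```python
import numpy as np

def soft_entropy(d2, beta):
    """Shannon entropy of Gaussian affinities p_j ~ exp(-beta * d2_j)."""
    w = np.exp(-beta * (d2 - d2.min()))  # shift for numerical stability
    p = w / w.sum()
    return -np.sum(p * np.log(p + 1e-12))

def local_dim_estimate(d2, beta, eps=1e-3):
    """Estimate local intrinsic dimensionality from the slope of the
    entropy with respect to log(beta) at the calibrated precision,
    using d ~= -2 * dH / d(log beta)."""
    h1 = soft_entropy(d2, beta)
    h2 = soft_entropy(d2, beta * (1.0 + eps))
    return -2.0 * (h2 - h1) / np.log(1.0 + eps)
```

Here `d2` is the vector of squared distances from a point to its neighbors (self excluded) and `beta` is the precision found by the usual perplexity binary search; averaging the per-point estimates gives a dataset-level figure.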
The smallvis documentation page collects some of the above and discusses many other aspects of dimensionality reduction, especially t-SNE and its variants: https://jlmelville.github.io/smallvis/.
The different forms of Nesterov momentum as used in stochastic gradient descent methods: https://jlmelville.github.io/mize/nesterov.html. Nothing much to do with dimensionality reduction, but you have to optimize those embeddings somehow.
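As a minimal sketch of the distinction (my own illustration, not code from the linked page), here are the classical momentum update and the common look-ahead ("Sutskever-style") formulation of Nesterov momentum; the only difference is where the gradient is evaluated:

```python
import numpy as np

def classical_momentum_step(theta, v, grad_fn, lr=0.1, mu=0.9):
    """Classical (heavy-ball) momentum: gradient evaluated at theta."""
    v_new = mu * v - lr * grad_fn(theta)
    return theta + v_new, v_new

def nesterov_momentum_step(theta, v, grad_fn, lr=0.1, mu=0.9):
    """Nesterov momentum: gradient evaluated at the look-ahead
    point theta + mu * v before the update is applied."""
    v_new = mu * v - lr * grad_fn(theta + mu * v)
    return theta + v_new, v_new
```

Other formulations (e.g. keeping track of the look-ahead parameters directly) rearrange the same quantities; the linked document goes through how they relate.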