Source code for the *Geometry Regularized Autoencoders* (GRAE) paper (Duque et al., IEEE TPAMI, 2022; see the reference below). The traditional autoencoder objective is augmented to regularize the latent space towards a manifold learning embedding, e.g., PHATE.
A more detailed explanation of the method can be found in GRAE_poster.pdf.
## Reference
If you find this work useful, please cite:
```bibtex
@article{duque2022geometry,
  title={Geometry Regularized Autoencoders},
  author={Duque, Andres F and Morin, Sacha and Wolf, Guy and Moon, Kevin R},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  publisher={IEEE}
}
```
## Install
You can install this repo directly with pip, preferably in a virtual environment:
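The exact install command is not shown in this copy of the README. As a minimal sketch, assuming you have cloned the repository locally, installation from the project root would look like:

```bash
# Sketch: create a virtual environment, then install from a local clone
python -m venv venv
source venv/bin/activate
pip install .
```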
## Usage

The code largely follows the scikit-learn API to implement different autoencoders and dimensionality reduction tools. You can change basic autoencoder hyperparameters and manifold learning hyperparameters through the model interface. For example, to reproduce some Rotated Digits results:
```python
from grae.models import GRAE
from grae.data import RotatedDigits

# Various autoencoder parameters can be changed
# t and knn are PHATE parameters, which are used to compute a target embedding
m = GRAE(epochs=100, n_components=2, lr=.0001, batch_size=128, t=50, knn=10)

# Input data should be an instance of grae.data.BaseDataset
# We already have subclasses for datasets in the paper
data = RotatedDigits(data_path='data', split='train')

# Fit model
m.fit(data)

# Get 2D latent coordinates
z = m.transform(data)

# Compute some image reconstructions
imgs = m.inverse_transform(z)
```
Some utility functions are available for visualization:
```python
# Fit, transform and plot data
m.fit_plot(data)

# Transform and plot data
m.plot(data)

# Transform, inverse transform and visualize reconstructions
m.view_img_rec(data)
```
Most of our benchmarks are implemented with similar estimators. Implemented models include:

- GRAE: autoencoder with a PHATE latent target;
- GRAE (UMAP): autoencoder with a UMAP latent target.
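Because the estimators share the same scikit-learn-style interface, a benchmark reduces to a loop over model instances. A minimal sketch using only the `GRAE` class and `RotatedDigits` dataset shown above (the UMAP variant's class name is not shown in this README, so it is omitted here):

```python
from grae.models import GRAE
from grae.data import RotatedDigits

data = RotatedDigits(data_path='data', split='train')

# Sketch: compare configurations through the shared fit/transform interface
models = {
    'GRAE (t=50)': GRAE(epochs=100, n_components=2, t=50, knn=10),
    'GRAE (t=10)': GRAE(epochs=100, n_components=2, t=10, knn=10),
}

for name, model in models.items():
    model.fit(data)
    z = model.transform(data)
    print(name, z.shape)
```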
New models should subclass grae.models.BaseModel, or grae.models.AE if autoencoder-based. New datasets should follow the grae.data.BaseDataset interface.
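As a rough sketch of the intended extension point (method names mirror the calls used elsewhere in this README; the exact abstract methods are defined in grae.models.BaseModel and may differ):

```python
from grae.models import BaseModel

class MyEmbedding(BaseModel):
    """Hypothetical skeleton of a new estimator. Check BaseModel for the
    actual abstract methods before relying on this sketch."""

    def fit(self, x):
        # x is expected to be a grae.data.BaseDataset instance
        ...

    def transform(self, x):
        # Return low-dimensional coordinates for x
        ...

    def inverse_transform(self, z):
        # Map latent coordinates z back to input space
        ...
```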