Abstract
This study addresses the challenge of statistically extracting generative factors from complex, high-dimensional datasets in unsupervised or semi-supervised settings. We investigate encoder-decoder-based generative models for nonlinear dimensionality reduction, focusing on disentangling low-dimensional latent variables that correspond to independent physical factors. We introduce Aux-VAE, a novel architecture within the classical Variational Autoencoder framework that achieves disentanglement with minimal modifications to the standard VAE loss function by leveraging prior statistical knowledge through auxiliary variables. These auxiliary variables shape the latent space by encouraging individual latent factors to align with them. We validate the efficacy of Aux-VAE through comparative assessments on multiple datasets, including astronomical simulations.
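The abstract describes the core idea at a high level: keep the standard VAE objective (reconstruction plus KL regularization) and add a term that aligns selected latent coordinates with auxiliary variables. The sketch below illustrates one plausible form of such an objective; the function names, the choice of mean-squared alignment, and the weight `gamma` are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Hypothetical sketch of an Aux-VAE-style objective: the standard VAE loss
# (reconstruction + KL) plus an auxiliary alignment term that pulls a subset
# of latent coordinates toward known or learned auxiliary variables.

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def aux_vae_loss(x, x_recon, mu, logvar, aux_targets, aux_dims, gamma=1.0):
    """Standard ELBO terms plus an alignment penalty on selected latent dims.

    aux_targets : auxiliary-variable values (e.g. known physical factors)
    aux_dims    : indices of the latent coordinates meant to track them
    gamma       : weight of the alignment term (assumed hyperparameter)
    """
    recon = mse(x, x_recon)                    # reconstruction term
    kl = kl_diag_gaussian(mu, logvar)          # posterior regularization
    align = mse([mu[i] for i in aux_dims],     # latent-auxiliary alignment
                aux_targets)
    return recon + kl + gamma * align
```

With a perfect reconstruction, a standard-normal posterior, and latents matching the auxiliary targets, all three terms vanish and the loss is zero; the alignment term then only penalizes deviations of the designated latent dimensions from the auxiliary variables.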
| Original language | American English |
|---|---|
| Number of pages | 17 |
| Journal | Machine Learning |
| Volume | 114 |
| Issue number | 9 |
| DOIs | |
| State | Published - 2025 |
NREL Publication Number
- NREL/JA-2C00-93144
Keywords
- disentangled generative factors
- posterior regularization
- representation learning
- variational autoencoder
Article: Enhancing Interpretability in Generative Modeling: Statistically Disentangled Latent Spaces Guided by Generative Factors in Scientific Datasets (Article No. 197)