
Enhancing Interpretability in Generative Modeling: Statistically Disentangled Latent Spaces Guided by Generative Factors in Scientific Datasets: Article No. 197

  • Arkaprabha Ganguli
  • Nesar Ramachandra
  • Julie Bessac
  • Emil Constantinescu

Affiliations:
  • Argonne National Laboratory
  • Virginia Tech

Research output: Contribution to journal › Article › peer-review

Abstract

This study addresses the challenge of statistically extracting generative factors from complex, high-dimensional datasets in unsupervised or semi-supervised settings. We investigate encoder-decoder-based generative models for nonlinear dimensionality reduction, focusing on disentangling low-dimensional latent variables corresponding to independent physical factors. Introducing Aux-VAE, a novel architecture within the classical Variational Autoencoder framework, we achieve disentanglement with minimal modifications to the standard VAE loss function by leveraging prior statistical knowledge through auxiliary variables. These variables guide the shaping of the latent space by aligning latent factors with learned auxiliary variables. We validate the efficacy of Aux-VAE through comparative assessments on multiple datasets, including astronomical simulations.
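The abstract describes shaping a VAE latent space so that some latent dimensions align with known auxiliary variables. The published Aux-VAE objective is not reproduced here; the following is only an illustrative sketch of the general idea, assuming a standard VAE loss (reconstruction plus KL divergence to a unit Gaussian prior) extended with a hypothetical squared-error penalty that nudges the first few latent dimensions toward the auxiliary variables.

```python
import numpy as np

def aux_vae_loss(x, x_recon, mu, log_var, z, aux, beta=1.0, gamma=1.0):
    """Illustrative VAE-style loss with an auxiliary-alignment penalty.

    NOTE: a hypothetical sketch, not the published Aux-VAE objective.
    Terms: mean squared reconstruction error
         + beta  * KL( N(mu, exp(log_var)) || N(0, I) )
         + gamma * squared distance between the first aux.shape[1]
                   latent dimensions and the auxiliary variables
                   (e.g., known or learned physical factors).
    """
    # Per-sample reconstruction error, averaged over the batch.
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    # Closed-form KL divergence between a diagonal Gaussian and N(0, I).
    kl = np.mean(0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))
    # Align the leading latent dimensions with the auxiliary variables.
    k = aux.shape[1]
    align = np.mean(np.sum((z[:, :k] - aux) ** 2, axis=1))
    return recon + beta * kl + gamma * align
```

When the decoder reconstructs the input exactly, the posterior matches the prior, and the leading latents equal the auxiliaries, every term vanishes; any mismatch in one of the three components raises the loss independently of the others.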
Original language: American English
Number of pages: 17
Journal: Machine Learning
Volume: 114
Issue number: 9
DOIs
State: Published - 2025

NREL Publication Number

  • NREL/JA-2C00-93144

Keywords

  • disentangled generative factors
  • posterior regularization
  • representation learning
  • variational autoencoder
