Conference Paper 2018

Intrinsic disentanglement: an invariance view for deep generative models

{Deep generative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) are important tools to capture and investigate the properties of complex empirical data. However, the complexity of their inner elements makes their functioning challenging to interpret and modify. In this respect, these architectures behave as black-box models. In order to better understand the function of such networks, we analyze the modularity of these systems by quantifying the disentanglement of their intrinsic parameters. This concept relates to a notion of invariance to transformations of internal variables of the generative model, recently introduced in the field of causality. Our experiments on the generation of human faces with VAEs support that modularity between weights distributed over layers of the generator architecture is achieved to some degree, and can be used to better understand the functioning of these architectures. Finally, we show that modularity can be enhanced during optimization.}

Author(s): Besserve, M and Sun, R and Schölkopf, B
Book Title: ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models
Year: 2018
Bibtex Type: Conference Paper (inproceedings)
Address: Stockholm, Sweden
URL: https://sites.google.com/view/tadgm/accepted-papers
Electronic Archiving: grant_archive
Language: eng

BibTeX

@inproceedings{item_3270705,
  title = {{Intrinsic disentanglement: an invariance view for deep generative models}},
  booktitle = {{ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models}},
  abstract = {{Deep generative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) are important tools to capture and investigate the properties of complex empirical data. However, the complexity of their inner elements makes their functioning challenging to interpret and modify. In this respect, these architectures behave as black-box models. In order to better understand the function of such networks, we analyze the modularity of these systems by quantifying the disentanglement of their intrinsic parameters. This concept relates to a notion of invariance to transformations of internal variables of the generative model, recently introduced in the field of causality. Our experiments on the generation of human faces with VAEs support that modularity between weights distributed over layers of the generator architecture is achieved to some degree, and can be used to better understand the functioning of these architectures. Finally, we show that modularity can be enhanced during optimization.}},
  address = {Stockholm, Sweden},
  year = {2018},
  slug = {item_3270705},
  author = {Besserve, M and Sun, R and Sch\"olkopf, B},
  url = {https://sites.google.com/view/tadgm/accepted-papers}
}