Learning disentangled representations via product manifold projection
Authors: Marco Fumero, Luca Cosmo, Simone Melzi, Emanuele Rodolà
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We widely test our approach on synthetic datasets and more challenging real-world scenarios, outperforming the state of the art in several cases. |
| Researcher Affiliation | Academia | ¹Sapienza, University of Rome, Rome, Italy; ²Università della Svizzera italiana, Lugano, Switzerland. |
| Pseudocode | No | The paper describes the model and losses textually and with diagrams, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | We adopted 4 widely used synthetic datasets in order to evaluate the effectiveness of our method, namely DSprites (Higgins et al., 2017), Shapes3D (Kim & Mnih, 2018), Cars3D (Reed et al., 2015), Small NORB (LeCun et al., 2004). For the experiments on FAUST we used a PointNet (Qi et al., 2017) architecture and a simple MLP as a decoder. |
| Dataset Splits | No | The paper discusses training, testing, and evaluation metrics, but does not explicitly provide training/validation/test dataset splits (e.g., percentages, sample counts, or specific split files). |
| Hardware Specification | Yes | We performed our experimental evaluation on a machine equipped with an NVIDIA RTX 2080 Ti, within the PyTorch framework. |
| Software Dependencies | No | The paper mentions using the "PyTorch framework" but does not specify its version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | In all our experiments we use a latent space of dimension d = 10, unless otherwise specified, and k = 10 latent subspaces. (See the illustrative sketch after this table.) |
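Since the paper does not link a code repository, the quoted configuration cannot be checked against a reference implementation. Purely as an illustrative sketch (not the authors' method), a PyTorch setup matching the quoted hyperparameters — a d = 10 latent space split into k = 10 subspaces, decoded by a simple MLP — could look like the following; the encoder architecture, layer widths, and the 64×64 single-channel input are assumptions made for the example, and the paper's actual product-manifold projection and losses are not reproduced here.

```python
# Illustrative sketch only: the paper releases no code, so this is NOT the
# authors' implementation. It mirrors the quoted setup of a latent space of
# dimension d = 10 partitioned into k = 10 subspaces, with a simple MLP decoder.
import torch
import torch.nn as nn

d, k = 10, 10                  # latent dimension and number of latent subspaces (quoted values)
chunk = d // k                 # size of each subspace (1 dimension each here)

# Assumed encoder/decoder widths and a 64x64 single-channel input (DSprites images are 64x64).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU(), nn.Linear(256, d))
decoder = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, 64 * 64))

x = torch.randn(8, 1, 64, 64)                 # hypothetical mini-batch
z = encoder(x)                                # latent codes, shape (8, d)
subspaces = torch.split(z, chunk, dim=1)      # k views of the code, one per latent subspace
x_hat = decoder(z).view_as(x)                 # reconstruction
```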