Isometric Quotient Variational Auto-Encoders for Structure-Preserving Representation Learning
Authors: In Huh, Changwook Jeong, Jae Myung Choe, Young-Gu Kim, Dae Sin Kim
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical proof-of-concept experiments reveal that the proposed method can find a meaningful representation of the learned data and outperform other competitors for downstream tasks. [...] We evaluate the model's performance using three datasets: rotated MNIST [44], mixed-type wafer defect maps (MixedWM38) [42] and cervical cancer cell images (SIPaKMeD) [32]. |
| Researcher Affiliation | Collaboration | In Huh¹, Changwook Jeong², Jae Myung Choe¹, Young-Gu Kim¹, Dae Sin Kim¹ (¹CSE Team, Innovation Center, Samsung Electronics; ²Graduate School of Semiconductor Materials and Devices Engineering, UNIST) |
| Pseudocode | Yes | Algorithm 1 (IQVAEs). Input: data {x_i}_{i=1}^N, hyper-parameters (β, λ), group G, G-invariant encoders (μ_θ^G, σ_θ^G), decoder μ_φ. Initialize θ, φ, C_n. While training: sample {α_i ∈ [0, 1]}_{i=1}^N and {ε_i ~ N(0, I)}_{i=1}^N; compute {(μ_θ^i, σ_θ^i)}_{i=1}^N = {(μ_θ^G(x_i), σ_θ^G(x_i))}_{i=1}^N; sample {z_i}_{i=1}^N = {μ_θ^i + σ_θ^i ⊙ ε_i}_{i=1}^N; shuffle {z_j}_{j=1}^N = shuffle({z_i}_{i=1}^N); augment {z̃_i}_{i=1}^N = {(1 − α_i) z_i + α_i z_j}_{i=1}^N; compute L_QAE = Σ_{i=1}^N min_{g∈G} ‖g · x_i − μ_φ(z_i)‖²₂; compute L_KL = Σ_{i=1}^N D_KL(N(μ_θ^i, diag[σ_θ^i]²) ‖ N(0, I_n)); compute {J_μφ^i}_{i=1}^N = {J_μφ(z̃_i)}_{i=1}^N; compute L_ISO = Σ_{i=1}^N ‖(J_μφ^i)ᵀ J_μφ^i − C_n‖_F; optimize (L_QAE + β L_KL + λ L_ISO)/N w.r.t. (θ, φ). End while. |
| Open Source Code | No | The paper does not contain any explicit statement about making the source code publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We evaluate the model's performance using three datasets: rotated MNIST [44], mixed-type wafer defect maps (MixedWM38) [42] and cervical cancer cell images (SIPaKMeD) [32]. |
| Dataset Splits | No | The paper specifies training and test sample sizes but does not explicitly provide information about a validation dataset split. |
| Hardware Specification | Yes | We used a single V100 32GB GPU. |
| Software Dependencies | No | The paper mentions using 'optimizer [21]' (referencing Adam) but does not provide specific version numbers for programming languages, machine learning frameworks, or other key software dependencies. |
| Experiment Setup | No | We used the same convolutional architecture, hyper-parameters, optimizer [21], and training scheme for all models. Implementation details can be found in Section D of SM. |
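Since no open-source code accompanies the paper, the extracted Algorithm 1 can be sketched as a single loss-computation step in PyTorch. This is a minimal illustrative sketch under stated assumptions, not the paper's implementation: the tiny linear encoder/decoder, latent size, the cyclic-shift stand-in for the group G, and taking C_n to be the identity are all hypothetical choices for readability.

```python
import torch
import torch.nn as nn

LATENT = 4  # illustrative latent dimension (not from the paper)

class Encoder(nn.Module):
    """Stand-in for the G-invariant encoders (mu_theta^G, sigma_theta^G)."""
    def __init__(self, d_in=16):
        super().__init__()
        self.mu = nn.Linear(d_in, LATENT)
        self.log_sigma = nn.Linear(d_in, LATENT)

    def forward(self, x):
        return self.mu(x), self.log_sigma(x).exp()

def group_orbit(x):
    # Toy group G: all cyclic shifts of the flattened input
    # (a stand-in for the rotation group used in the paper).
    return torch.stack([torch.roll(x, s, dims=-1) for s in range(x.shape[-1])])

def iqvae_loss(x, enc, dec, beta=1.0, lam=0.1):
    """One evaluation of (L_QAE + beta*L_KL + lam*L_ISO)/N from Algorithm 1."""
    N = x.shape[0]
    mu, sigma = enc(x)
    z = mu + sigma * torch.randn_like(sigma)        # reparameterization trick
    alpha = torch.rand(N, 1)
    z_shuf = z[torch.randperm(N)]
    z_tilde = (1 - alpha) * z + alpha * z_shuf      # latent mixup augmentation

    # Quotient reconstruction loss: minimum over the orbit of x_i under G.
    recon = dec(z)
    l_qae = torch.tensor(0.0)
    for i in range(N):
        orbit = group_orbit(x[i])
        l_qae = l_qae + ((orbit - recon[i]) ** 2).sum(-1).min()

    # KL divergence of N(mu, diag[sigma]^2) against the standard normal prior.
    l_kl = 0.5 * (sigma**2 + mu**2 - 1 - (sigma**2).log()).sum()

    # Isometry regularizer: push J^T J toward a constant matrix C_n
    # (identity here, as an assumption).
    C = torch.eye(LATENT)
    l_iso = torch.tensor(0.0)
    for i in range(N):
        J = torch.autograd.functional.jacobian(dec, z_tilde[i], create_graph=True)
        l_iso = l_iso + torch.linalg.norm(J.T @ J - C, ord="fro")

    return (l_qae + beta * l_kl + lam * l_iso) / N
```

The per-sample loops over the orbit and the Jacobian are written for clarity; a practical implementation would batch both (e.g. with `torch.func.vmap`/`jacrev`) and use the paper's convolutional architecture.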