Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Manifold Learning by Mixture Models of VAEs for Inverse Problems

Authors: Giovanni S. Alberti, Johannes Hertrich, Matteo Santacesaria, Silvia Sciutto

JMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We demonstrate the performance of our method for low-dimensional toy examples as well as for deblurring and electrical impedance tomography on certain image manifolds.
Researcher Affiliation Academia Giovanni S. Alberti, MaLGa Center, Department of Mathematics, Department of Excellence 2023-2027, University of Genoa, Italy; Johannes Hertrich, Department of Computer Science, University College London, London, United Kingdom; Matteo Santacesaria, MaLGa Center, Department of Mathematics, Department of Excellence 2023-2027, University of Genoa, Italy; Silvia Sciutto, MaLGa Center, Department of Mathematics, Department of Excellence 2023-2027, University of Genoa, Italy
Pseudocode Yes Algorithm 1 (Training procedure for mixtures of VAEs): 1. Run the Adam optimizer on L(Θ) + λR(Θ) for M1 epochs. 2. Run the Adam optimizer on L(Θ) for M2 epochs. 3. Compute the values γ_ik, i = 1, ..., N, k = 1, ..., K, from (4). 4. Compute the mixing weights α_k = Σ_{i=1}^{N} γ_ik. 5. Run the Adam optimizer on L_overlap(Θ) from (5) for M3 epochs.
Open Source Code Yes The code of the numerical examples is available online at https://github.com/johertrich/Manifold_Mixture_VAEs
Open Datasets No Here, we consider the data set of 128 × 128 images showing a bright bar with a gray background that is centered and rotated. The intensity of fore- and background as well as the size of the bar are fixed. Some example images from the data set are given in Figure 7a. We consider the manifold consisting of 128 × 128 images showing two bright non-overlapping balls with a gray background, representing conductivities with special inclusions.
Dataset Splits No We train all the models for 200 epochs with the Adam optimizer. Afterwards we apply the overlapping procedure for 50 epochs. See Algorithm 1 for the details of the training algorithm. For the Deblurring example, the latent dimension is set to d = 1. For the EIT example, the latent dimension is set to the manifold dimension, i.e., d = 6.
Hardware Specification No The paper does not provide specific details about the hardware used for running experiments.
Software Dependencies No We train all the models for 200 epochs with the Adam optimizer. For solving the PDEs (13) and (16), we use a finite element solver from the DOLFIN library (Logg and Wells, 2010).
Experiment Setup Yes In our numerical examples, we choose M1 = 50, M2 = 150 and M3 = 50. We train the mixture of VAEs for 200 epochs with the Adam optimizer. Afterwards we apply the overlapping procedure for 50 epochs, as in Algorithm 1. We use the retraction from Lemma 6 with a step size of 0.01.
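Steps 3-4 of the quoted Algorithm 1 (computing the responsibilities γ_ik and the mixing weights α_k = Σ_{i=1}^{N} γ_ik) can be sketched as follows. This is a minimal sketch, not the authors' implementation: the exact form of equation (4) is not reproduced on this page, so the responsibilities are assumed here to be a row-wise softmax over per-component scores.

```python
import numpy as np

def responsibilities(scores):
    # Row-wise softmax over per-component scores for each data point.
    # gamma_ik is *assumed* to take this form; Eq. (4) of the paper is
    # not reproduced on this page.
    z = scores - scores.max(axis=1, keepdims=True)  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mixing_weights(gamma):
    # Step 4 of Algorithm 1: alpha_k = sum_{i=1}^{N} gamma_ik.
    return gamma.sum(axis=0)

rng = np.random.default_rng(0)
gamma = responsibilities(rng.normal(size=(100, 3)))  # N = 100 points, K = 3 components
alpha = mixing_weights(gamma)
# Each row of gamma sums to 1, so the alpha_k sum to N = 100.
```

Since every row of γ sums to one, the mixing weights add up to N; normalizing by N would turn them into a probability vector over the K components.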
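The quoted setup (M1 = 50, M2 = 150, M3 = 50, i.e. 200 training epochs followed by 50 overlapping epochs) can be laid out as a three-phase schedule. The sketch below only enumerates the phases of Algorithm 1; the loss names are stand-in labels for L(Θ) + λR(Θ), L(Θ), and L_overlap(Θ), not executable losses.

```python
def training_schedule(M1=50, M2=150, M3=50):
    # Three optimization phases of Algorithm 1, as (loss label, epoch) pairs.
    return (
        [("L + lambda*R", e) for e in range(M1)]   # step 1: regularized loss
        + [("L", e) for e in range(M2)]            # step 2: plain loss
        # steps 3-4 (responsibilities and mixing weights) run once here
        + [("L_overlap", e) for e in range(M3)]    # step 5: overlap loss
    )

sched = training_schedule()
# 250 entries in total: 200 training epochs plus 50 overlapping epochs.
```

In an actual implementation each entry would correspond to one Adam epoch on the named objective, with the responsibilities recomputed between the second and third phases.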