Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Unsupervised Co-Learning on $G$-Manifolds Across Irreducible Representations
Authors: Yifeng Fan, Tingran Gao, Zhizhen Jane Zhao
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our paradigm through three examples: (1) nearest neighbor (NN, for brevity) search on the 2-sphere S2 with G = SO(2); (2) nearest viewing angle search for cryo-EM images; (3) spectral clustering with G = SO(2) or G = SO(3) transformation. We compare with the baseline vector diffusion maps (VDM) [57]. In particular, since the greatest advantage of our paradigm is its robustness to noise, we demonstrate this on datasets contaminated by extremely high levels of noise. The settings of the hyperparameters, e.g., kmax and mk, are given in the captions; we note that our algorithm is not sensitive to the choice of parameters. The experiments are conducted in MATLAB on a computer with a 7th-generation Intel i7 quad-core CPU. |
| Researcher Affiliation | Academia | Yifeng Fan (1), Tingran Gao (2), Zhizhen Zhao (1); (1) University of Illinois at Urbana-Champaign, (2) University of Chicago |
| Pseudocode | Yes | Algorithm 1: Weight Matrices Filtering |
| Open Source Code | Yes | Code is available at https://github.com/frankfyf/G-manifold-learning. |
| Open Datasets | No | The paper describes simulated datasets (e.g., "We simulate n = 10^4 points uniformly distributed over M = SO(3)" and "simulate n = 10^4 projection images from a 3D electron density map of the 70S ribosome") but does not provide concrete access information (links, DOIs, formal citations) to publicly available or open datasets. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning into training, validation, and test sets. |
| Hardware Specification | Yes | The experiments are conducted in MATLAB on a computer with a 7th-generation Intel i7 quad-core CPU. |
| Software Dependencies | No | The paper mentions "MATLAB" but does not provide a specific version number for MATLAB or any other software dependencies with version numbers. |
| Experiment Setup | Yes | In Fig. 2, the paper states: "We set kmax = 6, mk = 10 for all k, and t = 1." In Fig. 3, it states: "We set kmax = 20, mk = 20 for all k, and t = 1." In Table 1, it states: "We set mk = K, kmax = 10 and t = 1 for all cases." |