Neural Implicit Dictionary Learning via Mixture-of-Expert Training
Authors: Peihao Wang, Zhiwen Fan, Tianlong Chen, Zhangyang Wang
ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that NID can reconstruct 2D images or 3D scenes 2 orders of magnitude faster with up to 98% less input data. |
| Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, University of Texas at Austin. Correspondence to: Zhangyang Wang <atlaswang@utexas.edu>. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/VITA-Group/Neural-Implicit-Dict. |
| Open Datasets | Yes | We choose to train our NID on the CelebA face dataset (Liu et al., 2015)... We test the above algorithm on the BMC-Real dataset (Vacavant et al., 2012)... We conduct experiments on the Shepp-Logan phantoms dataset (Shepp & Logan, 1974)... Our experiments about SDF are conducted on the ShapeNet (Chang et al., 2015) dataset... |
| Dataset Splits | No | The paper mentions using a 'test set' but does not provide specific details on train/validation/test dataset splits, such as percentages or sample counts for each split, nor does it reference predefined splits with citations for reproducibility. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments, such as GPU models, CPU models, or cloud computing instance types. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers (e.g., Python 3.8, PyTorch 1.9), which would be needed to replicate the experiment. |
| Experiment Setup | Yes | Our NID contains 4096 experts, each of which shares a 4-layer backbone with 256 hidden dimensions and owns a separate 32-dimensional output layer... NID is warmed up for 10 epochs and then keeps only the top 128 experts for each input for 5000 epochs. (A hedged sketch of this configuration follows the table.) |
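
The Experiment Setup row describes a concrete mixture-of-experts configuration. Below is a minimal PyTorch sketch of that configuration, written only to illustrate the stated hyperparameters (4096 experts, shared 4-layer backbone with 256 hidden units, per-expert 32-dimensional output heads, top-128 expert selection after warm-up). It is not the authors' released implementation; names such as `NIDSketch` and `coeffs`, and the placement of the final output mapping, are assumptions. See the linked repository for the actual code.

```python
import torch
import torch.nn as nn


class NIDSketch(nn.Module):
    """Hedged sketch of the described NID configuration (not the authors' code).

    Assumptions: all experts share a 4-layer MLP backbone (256 hidden units)
    over input coordinates, each expert owns a separate 32-dimensional output
    head, and a per-instance coefficient vector mixes the experts, with only
    the top-k experts kept after the warm-up phase.
    """

    def __init__(self, in_dim=2, num_experts=4096, hidden_dim=256,
                 expert_out_dim=32, top_k=128):
        super().__init__()
        self.top_k = top_k
        # Shared 4-layer backbone with 256 hidden units per layer.
        layers, dim = [], in_dim
        for _ in range(4):
            layers += [nn.Linear(dim, hidden_dim), nn.ReLU()]
            dim = hidden_dim
        self.backbone = nn.Sequential(*layers)
        # One separate 32-dimensional output layer per expert, stored as a
        # single (num_experts, hidden_dim, expert_out_dim) weight tensor.
        self.expert_heads = nn.Parameter(
            torch.randn(num_experts, hidden_dim, expert_out_dim) * hidden_dim ** -0.5)

    def forward(self, coords, coeffs, warmup=False):
        """coords: (N, in_dim) query coordinates.
        coeffs: (num_experts,) per-instance mixing coefficients (hypothetical name).
        Returns (N, expert_out_dim) features; the final mapping to pixel/SDF
        values is omitted in this sketch.
        """
        feats = self.backbone(coords)  # (N, hidden_dim)
        if not warmup:
            # After the 10-epoch warm-up, keep only the top-k experts per instance.
            topk = torch.topk(coeffs.abs(), self.top_k).indices
            mask = torch.zeros_like(coeffs)
            mask[topk] = 1.0
            coeffs = coeffs * mask
        # Mix the per-expert heads by the coefficients, then apply the mixed
        # head once to the shared backbone features.
        mixed_head = torch.einsum('e,eho->ho', coeffs, self.expert_heads)
        return feats @ mixed_head


# Example usage under the assumed shapes: 2D pixel coordinates, one coefficient
# vector per training image.
if __name__ == "__main__":
    model = NIDSketch()
    coords = torch.rand(1024, 2)            # sampled pixel coordinates
    coeffs = torch.randn(4096)              # per-image expert coefficients
    out = model(coords, coeffs, warmup=False)
    print(out.shape)                         # torch.Size([1024, 32])
```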