Self-Supervised Interpretable End-to-End Learning via Latent Functional Modularity

Authors: Hyunki Seong, Hyunchul Shim

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In real-world indoor environments, MoNet demonstrates effective visual autonomous navigation, outperforming baseline models by 7% to 28% in task specificity analysis.
Researcher Affiliation | Academia | School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea. Correspondence to: Hyunki Seong <hynkis@kaist.ac.kr>.
Pseudocode | No | The paper includes equations and diagrams but does not feature any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | A demo video, codes, and dataset for quantitative results and interpretation are available at https://sites.google.com/view/monet-lgc.
Open Datasets | Yes | A demo video, codes, and dataset for quantitative results and interpretation are available at https://sites.google.com/view/monet-lgc.
Dataset Splits | Yes | The data is split into training and validation sets at a ratio of 80:20 (sketched below the table).
Hardware Specification | Yes | Our platform consists of a 1/10-scale racing car chassis (TT-02) equipped with an embedded computer (Jetson Xavier NX) and a controller (Arduino).
Software Dependencies | No | The paper mentions using the "Adam optimizer" but does not specify a version number for the optimizer or any other software libraries/frameworks.
Experiment Setup | Yes | Batch size: 512; Total training iterations: 650k; Optimizer: Adam; Similarity factor κ: 0.5; Weight for the LGC loss term λ_LGC: 5e-4; Learning rate: 3e-4; Learning rate scheduler: LambdaLR; Scheduler factor: 3e-4.
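
For the Dataset Splits row, a minimal sketch of the reported 80:20 training/validation split, assuming a PyTorch-style dataset; the tensor shapes, dataset object, and seed below are illustrative placeholders, not taken from the paper.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Placeholder tensors standing in for the navigation dataset (shapes are illustrative only).
images = torch.randn(1000, 3, 64, 64)   # camera observations
actions = torch.randn(1000, 2)          # control targets (e.g., steering, throttle)
dataset = TensorDataset(images, actions)

# 80:20 split between training and validation sets, as reported in the paper.
n_train = int(0.8 * len(dataset))
n_val = len(dataset) - n_train
train_set, val_set = random_split(
    dataset, [n_train, n_val], generator=torch.Generator().manual_seed(0)
)
print(len(train_set), len(val_set))  # 800 200
```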
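
For the Experiment Setup row, a hedged sketch of how those hyperparameters might be wired together in PyTorch, reusing train_set from the split sketch above. The model, task loss, LGC regularizer, and the exact LambdaLR decay rule are assumptions for illustration, not the authors' implementation.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

BATCH_SIZE = 512
TOTAL_ITERS = 650_000
LR = 3e-4
LAMBDA_LGC = 5e-4      # weight for the LGC loss term
SCHED_FACTOR = 3e-4    # "Scheduler factor" as listed in the experiment setup

# Placeholder policy network standing in for MoNet.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256),
    nn.ReLU(),
    nn.Linear(256, 2),
)

optimizer = torch.optim.Adam(model.parameters(), lr=LR)
# Lambda LR scheduler; the paper does not spell out the decay rule, so this
# inverse decay using SCHED_FACTOR is an assumption.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda it: 1.0 / (1.0 + SCHED_FACTOR * it)
)

loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)

def lgc_regularizer(net):
    # Stand-in for the paper's LGC loss term; the real term depends on MoNet's
    # modular latent structure, which is not reproduced here.
    return sum(p.pow(2).mean() for p in net.parameters())

it = 0
while it < TOTAL_ITERS:
    for obs, act in loader:
        pred = model(obs)
        # Task loss is assumed here to be a regression loss on the control targets.
        loss = nn.functional.mse_loss(pred, act) + LAMBDA_LGC * lgc_regularizer(model)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()
        it += 1
        if it >= TOTAL_ITERS:
            break
```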