360-MLC: Multi-view Layout Consistency for Self-training and Hyper-parameter Tuning
Authors: Bolivar Solarte, Chin-Hsuan Wu, Yueh-Cheng Liu, Yi-Hsuan Tsai, Min Sun
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our solution achieves favorable performance against state-of-the-art methods when self-training from three publicly available source datasets to a unique, newly labeled dataset consisting of multi-view images of the same scenes. In experiments, we leverage the MP3D-FPE [29] multi-view dataset as our unlabeled new domain. |
| Researcher Affiliation | Collaboration | Bolivar Solarte¹, Chin-Hsuan Wu¹, Yueh-Cheng Liu¹, Yi-Hsuan Tsai², Min Sun¹ (¹National Tsing Hua University, ²Phiar Technologies) |
| Pseudocode | No | The paper does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | We will make our models, codes, and dataset available to the public. (This statement indicates future availability, not current concrete access.) |
| Open Datasets | Yes | We conduct extensive experiments using publicly available 360-image layout datasets: Matterport3D Floor Plan Estimation (MP3D-FPE) [29] as the target dataset, and three real-world datasets as the pre-training datasets, including MatterportLayout [36, 48], the Zillow Indoor Dataset (ZInD) [9], and the dataset used in LayoutNet [47]. |
| Dataset Splits | No | The paper specifies training and testing sets, but does not explicitly define a 'validation set' or 'validation split' with specific percentages or sample counts for hyperparameter tuning or model selection. |
| Hardware Specification | Yes | All models are trained on a single NVIDIA TITAN X GPU with 12 GB of memory. |
| Software Dependencies | No | The paper mentions 'HorizonNet [30] as our layout estimation backbone' but does not specify version numbers for any software dependencies such as programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We use the Adam optimizer to train the model for 300 epochs by setting the learning rate as 0.0001 and the batch size as 4. |
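
The "Experiment Setup" row above quotes the paper's reported training configuration (Adam optimizer, learning rate 0.0001, batch size 4, 300 epochs, a single 12 GB TITAN X GPU). The sketch below is a minimal PyTorch illustration of that configuration only: the backbone, loss function, and data are hypothetical placeholders, since the paper itself uses HorizonNet [30] on MP3D-FPE and does not release these details here. It is not the authors' implementation.

```python
# Hedged sketch of the reported training configuration:
# Adam, lr = 1e-4, batch size = 4, 300 epochs, single GPU.
# PlaceholderBackbone, the MSE loss, and the dummy tensors are
# assumptions standing in for HorizonNet and the MP3D-FPE data.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class PlaceholderBackbone(nn.Module):
    """Hypothetical stand-in for the HorizonNet layout estimator."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, 1024),  # e.g., per-column boundary predictions
        )

    def forward(self, x):
        return self.net(x)


def main():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = PlaceholderBackbone().to(device)

    # Hyper-parameters as reported in the paper.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.MSELoss()  # loss choice is an assumption, not from the paper

    # Dummy tensors standing in for 360-degree panoramas and pseudo-labels.
    images = torch.randn(16, 3, 512, 1024)
    targets = torch.randn(16, 1024)
    loader = DataLoader(TensorDataset(images, targets), batch_size=4, shuffle=True)

    for epoch in range(300):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()


if __name__ == "__main__":
    main()
```

The dummy dataset and loss are placeholders chosen only so the snippet runs end to end; swapping in a real backbone and the MP3D-FPE loader would require the authors' released code and data.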