Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Self-supervised End-to-end ToF Imaging Based on RGB-D Cross-modal Dependency

Authors: Weihang Wang, Jun Wang, Fei Wen

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on both synthetic and real-world data demonstrate that our approach achieves performance comparable to supervised methods, without requiring paired noisy-clean data for training. Furthermore, our method consistently delivers strong performance across all evaluated cameras, highlighting its generalization capabilities."
Researcher Affiliation | Academia | "Weihang Wang¹, Jun Wang², Fei Wen²; ¹Soochow University, ²Shanghai Jiao Tong University; EMAIL, EMAIL, EMAIL"
Pseudocode | No | The paper describes the methodology using mathematical formulations and text, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The code is available at https://github.com/WeihangWANG/RGBDimaging."
Open Datasets | Yes | "In this experiment, we evaluate the proposed method on a synthetic dataset [Zheng et al., 2021] under the scenario with combined corruptions. ... In this experiment, we evaluate the proposed method on real-world data captured by four off-the-shelf ToF depth cameras, respectively LUCID, TI, TCS, and TCE. The first two datasets are captured by [Zheng et al., 2021] and [Su et al., 2018]."
Dataset Splits | No | The paper mentions evaluating on a synthetic dataset and real-world data, including "a small dataset which contains only a quarter of the full synthetic dataset [Zheng et al., 2021]". However, it does not specify training, validation, or test splits by percentage, sample count, or reference to a standard split, so dataset split information is not provided.
Hardware Specification | Yes | "The code is implemented in PyTorch and run on Nvidia 3090Ti."
Software Dependencies | No | "The code is implemented in PyTorch and run on Nvidia 3090Ti." While PyTorch is mentioned, no version number for PyTorch or any other software dependency is given, making it impossible to reproduce the software environment exactly.
Experiment Setup | Yes | "Table 1 shows the hyperparameter tuning procedure for the loss function. Using PSNR as the evaluation metric, we set λ1 = 10 and λ3 = 20. λ2 is empirically set to 5. The initial learning rate is 0.1, and the optimizer is RMSprop. The maximum number of epochs is set to 200."
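The reported setup (RMSprop, initial learning rate 0.1, 200 epochs, loss weights λ1 = 10, λ2 = 5, λ3 = 20) can be collected into a minimal configuration sketch. Note this is an illustration of the reported hyperparameters only: the loss-term names (`term_a` through `term_d`) are hypothetical placeholders, since the paper's actual loss components are not reproduced in this report.

```python
# Sketch of the training configuration as reported in the paper.
# Only the numeric values come from the paper; all names are placeholders.

CONFIG = {
    "optimizer": "RMSprop",   # optimizer named in the paper
    "initial_lr": 0.1,        # initial learning rate
    "max_epochs": 200,        # maximum number of epochs
    "lambda1": 10.0,          # tuned with PSNR as the metric (Table 1)
    "lambda2": 5.0,           # set empirically
    "lambda3": 20.0,          # tuned with PSNR as the metric (Table 1)
}


def total_loss(term_a: float, term_b: float, term_c: float, term_d: float,
               cfg: dict = CONFIG) -> float:
    """Weighted loss L = term_a + λ1·term_b + λ2·term_c + λ3·term_d.

    The four terms are hypothetical stand-ins for the paper's loss
    components; only the weighting scheme is taken from the report above.
    """
    return (term_a
            + cfg["lambda1"] * term_b
            + cfg["lambda2"] * term_c
            + cfg["lambda3"] * term_d)
```

For example, with all four (placeholder) terms equal to 1.0 the weighted sum is 1 + 10 + 5 + 20 = 36.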