Task-Specific Scene Structure Representations
Authors: Jisu Shin, Seunghyun Shin, Hae-Gon Jeon
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct a variety of experiments on low-level vision tasks, including self-supervised joint depth upsampling (Sec. 4.1) and unsupervised single image denoising (Sec. 4.2), to demonstrate the effectiveness of our SSGNet. |
| Researcher Affiliation | Academia | Jisu Shin*, Seunghyun Shin* and Hae-Gon Jeon, AI Graduate School, GIST, South Korea. {jsshin98, seunghyuns98}@gm.gist.ac.kr, haegonj@gist.ac.kr |
| Pseudocode | No | The paper describes the network architecture (Fig. 2) and loss functions, but it does not provide any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source codes are available at https://github.com/jsshin98/SSGNet. |
| Open Datasets | Yes | Prior to the evaluations, we train our SSGNet on a well-known NYUv2 dataset (Silberman and Fergus 2011), consisting of 1,000 training images and 449 test images. (A split-loading sketch based on these counts appears below the table.) |
| Dataset Splits | No | The paper mentions training and test sets for NYUv2 but does not specify a separate validation set or describe how validation was performed for any dataset used. |
| Hardware Specification | Yes | The training on SSGNet took about 40 hours on two NVIDIA Tesla V100 GPUs. |
| Software Dependencies | No | The paper mentions 'public Pytorch' but does not specify a version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | The learning rate and the batch size are set to 0.0001 and 4 on SSGNet, respectively. We train the proposed framework on images with a 256 × 256 resolution. ... The hyperparameter γ is set to 0.9 in our implementation, and the hyperparameter λ is empirically set to 40. (A hedged configuration sketch using these values follows the table.) |
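The Open Datasets and Dataset Splits rows pin down the NYUv2 protocol used here: 1,000 training images, 449 test images, and no validation split described. The sketch below is a minimal loader that sanity-checks those counts; the directory layout and file extension are assumptions for illustration, not the authors' actual data structure.

```python
from pathlib import Path

# Hedged sketch of the NYUv2 split reported in the paper: 1,000 training
# images and 449 test images, with no validation split described.
# The train/ and test/ subdirectories and the .png extension are
# assumptions, not the authors' actual layout.

def load_split(root: str) -> dict[str, list[Path]]:
    root_path = Path(root)
    split = {
        "train": sorted((root_path / "train").glob("*.png")),
        "test": sorted((root_path / "test").glob("*.png")),
    }
    # Sanity-check against the counts reported in the paper.
    assert len(split["train"]) == 1000, "expected 1,000 NYUv2 training images"
    assert len(split["test"]) == 449, "expected 449 NYUv2 test images"
    return split
```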
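The Experiment Setup row collects the reported training scalars. Below is a minimal PyTorch training-loop sketch wired with those values; the model, the losses, and the way λ enters the objective are placeholders (the official implementation is at https://github.com/jsshin98/SSGNet), and only the numeric values come from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hedged sketch of the reported training configuration, NOT the authors'
# code. Only the numbers come from the paper: lr = 0.0001, batch size = 4,
# 256 x 256 inputs, gamma = 0.9, lambda = 40.

GAMMA = 0.9    # hyperparameter γ; kept as a named constant only, since its
               # exact role is defined in the paper, not in the excerpt above
LAMBDA = 40.0  # loss weight λ, "empirically set to 40"

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # stand-in for SSGNet
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)  # reported lr

# Dummy tensors at the reported 256 x 256 training resolution.
images = torch.randn(8, 3, 256, 256)
targets = torch.randn(8, 1, 256, 256)
loader = DataLoader(TensorDataset(images, targets), batch_size=4)

for x, y in loader:
    pred = model(x)
    data_term = nn.functional.l1_loss(pred, y)  # placeholder task loss
    reg_term = pred.abs().mean()                # placeholder structural term
    loss = data_term + LAMBDA * reg_term        # λ-weighted combination
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The λ-weighted sum is a generic two-term objective used only to show where a weight of 40 would enter; the paper's actual loss composition should be taken from the released code.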