Restorable Image Operators with Quasi-Invertible Networks

Authors: Hao Ouyang, Tengfei Wang, Qifeng Chen

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that our approach outperforms relevant baselines in restoration quality, and the learned restorable operator is fast at inference and robust to compression. We conduct extensive experiments to evaluate the proposed model on ten image processing operators from various categories, including detail enhancement, style transfer, and image abstraction.
Researcher Affiliation | Academia | Hao Ouyang*, Tengfei Wang*, Qifeng Chen (The Hong Kong University of Science and Technology)
Pseudocode | No | The paper describes the method using mathematical equations and textual explanations, but it does not include an explicitly labeled 'Pseudocode' or 'Algorithm' block (an illustrative sketch of one invertible block appears after this table).
Open Source Code | No | The paper does not state that the source code for the method is publicly available, nor does it provide a link to a code repository.
Open Datasets | Yes | Table 1 shows quantitative results on MIT-Adobe 5K. We apply the model trained on MIT-Adobe 5K to RAISE (Dang-Nguyen et al. 2015) and also the model trained on RAISE to MIT-Adobe 5K.
Dataset Splits | No | The paper refers to a 'test set' for evaluation (e.g., the 'MIT-Adobe 5K test set' in Table 1), but it does not specify training, validation, and test split percentages or sample counts for any dataset.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models or CPU types) used to run the experiments.
Software Dependencies | No | The paper does not provide software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or other library versions) used for the implementation or experiments.
Experiment Setup | No | The paper describes the loss functions (L2 for approximation and restoration, adversarial for Type III operators) and the general network architecture (8 invertible blocks with an auxiliary branch), and it states that hyper-parameters of the *operator* can be adjusted via an extra input channel. However, it does not provide specific training hyperparameters such as learning rate, batch size, number of epochs, or optimizer details (a sketch of a training step using the stated losses appears after this table).
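
The pseudocode row notes that the paper describes its network only through equations and text. Below is a minimal, hypothetical sketch of one invertible block, assuming a standard affine coupling design as commonly used in invertible networks; the class name, layer widths, and activations are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class AffineCouplingBlock(nn.Module):
    """Illustrative affine coupling block (an assumption, not the paper's exact design).

    The input channels are split in half; one half is transformed with a
    scale and shift predicted from the other half, so the inverse is exact.
    """

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        half = channels // 2
        # Small conv nets predicting per-pixel log-scale and shift (assumed sizes).
        self.scale_net = nn.Sequential(
            nn.Conv2d(half, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, half, 3, padding=1), nn.Tanh(),
        )
        self.shift_net = nn.Sequential(
            nn.Conv2d(half, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, half, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=1)
        y2 = x2 * torch.exp(self.scale_net(x1)) + self.shift_net(x1)
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=1)
        x2 = (y2 - self.shift_net(y1)) * torch.exp(-self.scale_net(y1))
        return torch.cat([y1, x2], dim=1)
```

Stacking eight such blocks would match the block count the paper reports, though the paper's blocks may differ internally.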
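The experiment-setup row states that training uses an L2 approximation loss and an L2 restoration loss (plus an adversarial loss for Type III operators, omitted here). The following hypothetical training step illustrates how those two L2 terms could be combined; the loss weight `lam`, the 8-bit quantization simulated between the forward and inverse passes, and the `model.inverse` interface are all assumptions, since the paper does not report its training hyperparameters.

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, x, target_op, lam: float = 1.0):
    """One gradient step; `model` is assumed to expose forward and inverse passes."""
    optimizer.zero_grad()
    y = model(x)  # restorable processed image
    # Simulate saving to an 8-bit image with a straight-through estimator so
    # gradients still flow (this quantization step is our assumption).
    y_q = y + (torch.round(y.clamp(0, 1) * 255) / 255 - y).detach()
    x_rec = model.inverse(y_q)                 # restoration via the inverse pass
    loss_approx = F.mse_loss(y, target_op(x))  # approximate the target operator
    loss_restore = F.mse_loss(x_rec, x)        # recover the original input
    loss = loss_approx + lam * loss_restore
    loss.backward()
    optimizer.step()
    return loss.item()
```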