Low-Latency Space-Time Supersampling for Real-Time Rendering

Authors: Ruian He, Shili Zhou, Yuqi Sun, Ri Cheng, Weimin Tan, Bo Yan

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our approach achieves superior visual fidelity compared to state-of-the-art (SOTA) methods. Notably, the performance is achieved within only 4 ms, saving up to 75% of the time required by the conventional two-stage pipeline, which necessitates 17 ms. (The timing arithmetic is worked out below the table.)
Researcher Affiliation | Academia | Ruian He*, Shili Zhou*, Yuqi Sun, Ri Cheng, Weimin Tan, Bo Yan; School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University. rahe16@fudan.edu.cn, slzhou19@fudan.edu.cn, yqsun20@fudan.edu.cn, rcheng22@m.fudan.edu.cn, wmtan@fudan.edu.cn, byan@fudan.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | No | The paper mentions comparing with 'two open-sourced baselines' but does not provide concrete access to source code (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the method described in this paper.
Open Datasets | No | The paper describes the creation of a 'high-quality rendering dataset with LR-LFR and HR-HFR pairs' but does not provide concrete access information (a specific link, DOI, repository name, formal citation with authors/year, or reference to an established benchmark) showing that the dataset is publicly available.
Dataset Splits | No | The paper mentions '6000 frames for training and 1000 for testing' for its dataset but does not specify a distinct validation set (by split percentage, absolute sample count, or reference to a predefined split), which is needed to fully reproduce the data partitioning.
Hardware Specification | Yes | We then tested them with an RTX 3090 GPU.
Software Dependencies | No | The paper states 'We use Pytorch to implement our model.' but does not give version numbers for the ancillary software (e.g., PyTorch 1.x or Python 3.x) needed to replicate the experiment.
Experiment Setup | Yes | We trained our model for 100 epochs on the training set with the Adam optimizer, and the learning rate was set to 1e-4. The learning-rate scheduler is StepLR, with a step size of 50 and a gamma of 0.9. We use random cropping for augmentation: each time, we slice the input image into patches of size 256x256, repeated four times, and feed them into the network. (A configuration sketch follows the table.)
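
For reference, the 'up to 75%' saving quoted in the Research Type row follows directly from the two reported timings:

\[
\frac{17\,\text{ms} - 4\,\text{ms}}{17\,\text{ms}} \approx 76.5\%,
\]

which the paper states conservatively as 'up to 75%'.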
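
The Experiment Setup row pins down the full optimization recipe, so it can be expressed as a short PyTorch sketch. Everything below beyond that recipe is an assumption: the tiny convolutional model, the L1 loss, and the random tensors standing in for LR-LFR / HR-HFR pairs are placeholders, since the authors' network and dataset are not released.

```python
# Minimal sketch of the reported training recipe: Adam (lr = 1e-4),
# StepLR(step_size=50, gamma=0.9), 100 epochs, random 256x256 crops
# taken four times per input. Model, loss, and data are placeholders.
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

model = nn.Sequential(                       # placeholder network, NOT the paper's
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
optimizer = Adam(model.parameters(), lr=1e-4)            # lr set to 1e-4
scheduler = StepLR(optimizer, step_size=50, gamma=0.9)   # step size 50, gamma 0.9
criterion = nn.L1Loss()                                  # assumed; loss not quoted here

def random_crop_pair(x, y, size=256):
    """Random 256x256 crop applied at the same location to input and target."""
    _, _, h, w = x.shape
    top = torch.randint(0, h - size + 1, (1,)).item()
    left = torch.randint(0, w - size + 1, (1,)).item()
    return (x[..., top:top + size, left:left + size],
            y[..., top:top + size, left:left + size])

for epoch in range(100):                                 # 100 epochs
    # Random tensors stand in for one LR-LFR / HR-HFR training pair.
    lr_frame = torch.rand(1, 3, 540, 960)
    hr_frame = torch.rand(1, 3, 540, 960)
    for _ in range(4):                                   # slice each image 4 times
        x, y = random_crop_pair(lr_frame, hr_frame)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()                                     # decay lr every 50 epochs
```

In the actual task the target frames are at a higher resolution and frame rate than the inputs, so the crop coordinates on the HR-HFR side would be scaled by the supersampling factor; the sketch keeps both sides at one resolution for brevity.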