Reschedule Diffusion-based Bokeh Rendering

Authors: Shiyue Yan, Xiaoshi Qiu, Qingmin Liao, Jing-Hao Xue, Shaojun Liu

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments show that our method can effectively alleviate the fluctuation problem of sampling results while ensuring similar color styles to the input image."
Researcher Affiliation | Academia | 1. Shenzhen International Graduate School, Tsinghua University; 2. Department of Statistical Science, University College London; 3. College of Health Science and Environmental Engineering, Shenzhen Technology University
Pseudocode | Yes | Algorithm 1: Prior-aware Sampling
    Input: noise schedule ᾱ, denoising network ϵθ(·), small-aperture image y
    Sample ϵ ~ N(0, I)
    1: A = 1/6 · (3n · max_{c ∈ {R,G,B}} σ_{y;c})²
    2: Find an index i such that ᾱ_{i-1} > A and ᾱ_i < A
    3: Let T = i and x_T = √(ᾱ_T) · y + √(1 - ᾱ_T) · ϵ
    4: for t = T, ..., 1 do
    5:     Sample ϵ ~ N(0, I) if t > 1, else ϵ = 0
    6:     x_{t-1} = (1/√(1 - β_t)) · (x_t - (β_t/√(1 - ᾱ_t)) · ϵθ(x_t, t; y)) + √(β_t) · ϵ
    7: end for
    8: return x_0
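
To make the sampling procedure concrete, the following is a minimal PyTorch sketch of Algorithm 1. It is an illustrative reconstruction, not the authors' released code: the conditional denoiser eps_model(x_t, t, y), the scale factor n, and the exact expression for the threshold A (taken here as transcribed above) are assumptions; the official repository should be consulted for the reference implementation.

import torch

def prior_aware_sample(eps_model, y, n=1, num_steps=1000,
                       beta_start=1e-5, beta_end=1e-2):
    """Start DDPM sampling from an intermediate step chosen from the
    statistics of the small-aperture image y (shape [B, 3, H, W])."""
    betas = torch.linspace(beta_start, beta_end, num_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    # Line 1: threshold A from the per-channel std of y
    # (formula as transcribed above; see the paper for the exact expression).
    sigma_max = y.std(dim=(0, 2, 3)).max()
    A = (3 * n * sigma_max) ** 2 / 6.0

    # Line 2: first index whose cumulative alpha falls below A.
    below = (alpha_bars < A).nonzero()
    T_start = int(below[0]) if below.numel() > 0 else num_steps - 1

    # Line 3: noise y to level T_start instead of starting from pure noise.
    eps = torch.randn_like(y)
    x = alpha_bars[T_start].sqrt() * y + (1 - alpha_bars[T_start]).sqrt() * eps

    # Lines 4-7: standard DDPM reverse updates conditioned on y.
    for t in range(T_start, -1, -1):
        eps = torch.randn_like(y) if t > 0 else torch.zeros_like(y)
        t_batch = torch.full((y.shape[0],), t, dtype=torch.long)
        pred = eps_model(x, t_batch, y)
        x = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * pred) / alphas[t].sqrt()
        x = x + betas[t].sqrt() * eps
    return x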
Open Source Code | Yes | Our code is available at https://github.com/Loeiii/Reschedule-Diffusionbased-Bokeh-Rendering.
Open Datasets | Yes | One such dataset is the FPBNet dataset [Liu et al., 2022], comprising 941 sets of highly aligned data with a resolution of 2232 × 1488. To mitigate the effects of data misalignment on the model training and maximize its potential in generating more natural results, we train and test our approach on the FPBNet dataset.
Dataset Splits | No | The paper mentions training and testing on the FPBNet dataset but does not explicitly provide specific dataset split information (percentages, sample counts, or detailed splitting methodology) for training, validation, and testing.
Hardware Specification | Yes | Training is conducted over 1.2M iterations on an A100 GPU.
Software Dependencies | No | The paper mentions the use of the Adam optimizer but does not specify software names with version numbers for libraries, frameworks, or other ancillary tools required to replicate the experiment.
Experiment Setup | Yes | During the training phase, we set T to 1000 and linearly increase the value of β from 0.00001 to 0.01. The Adam optimizer is employed, with an initial learning rate of 1 × 10^-4 without decay, and the optimizer's weight decay is set at 0.01. Training is conducted over 1.2M iterations... Setting γ = 0.1; sampling is then performed separately using N = 2 and N = 5.
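
For quick reference, the quoted training hyperparameters can be collected into a short PyTorch setup sketch. The placeholder model and the variable names are illustrative assumptions, not taken from the paper or its repository; the sampling-time settings (γ = 0.1, N = 2 or N = 5) are recorded only as a comment since their implementation is not detailed here.

import torch

T = 1000
betas = torch.linspace(1e-5, 1e-2, T)            # "linearly increase beta from 0.00001 to 0.01"
alpha_bars = torch.cumprod(1.0 - betas, dim=0)   # cumulative products used by the sampler

model = torch.nn.Conv2d(6, 3, kernel_size=3, padding=1)   # placeholder for the denoising network
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-4,             # initial learning rate 1e-4, no decay
                             weight_decay=0.01)   # optimizer weight decay 0.01
num_iterations = 1_200_000                        # "1.2M iterations" on a single A100 GPU
# Sampling-time settings quoted above: gamma = 0.1, with N = 2 and N = 5 evaluated separately.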