PromptFix: You Prompt and We Fix the Photo
Authors: Yongsheng Yu, Ziyun Zeng, Hang Hua, Jianlong Fu, Jiebo Luo
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that PromptFix outperforms previous methods in various image-processing tasks. |
| Researcher Affiliation | Collaboration | University of Rochester; Microsoft Research |
| Pseudocode | Yes | Algorithm 1 High-frequency Guidance Sampling (a hedged sampling sketch follows the table). |
| Open Source Code | Yes | The dataset and code are available at https://www.yongshengyu.com/PromptFix-Page. |
| Open Datasets | Yes | The dataset and code are available at https://www.yongshengyu.com/PromptFix-Page. The dataset is available at https://huggingface.co/datasets/yeates/PromptfixData. |
| Dataset Splits | Yes | For the test set, we randomly select 300 image pairs for each task. We construct the validation dataset with 200 images, and each image contains 3 restoration tasks... |
| Hardware Specification | Yes | We train PromptFix for 46 epochs on 32 NVIDIA V100 GPUs. |
| Software Dependencies | No | The paper mentions specific backbone models like Stable Diffusion 1.5 and InternVL2 [15], but does not provide version numbers for general software dependencies such as Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | We train PromptFix for 46 epochs on 32 NVIDIA V100 GPUs, employing a learning rate of 1e-4 with the Adam optimizer. The training input resolution is set to 512×512... we randomly drop the input image latent, instruction, and auxiliary prompt with a probability of 0.075 during training. The hyperparameter λ for the time-scale weight in Algorithm 1 is empirically set to 0.001 (a configuration sketch follows the table). |
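To make the "Algorithm 1 High-frequency Guidance Sampling" entry concrete, here is a minimal sketch of what one guided denoising estimate could look like. It assumes a simple blur-subtraction high-pass filter, a `denoiser(z_t, t)` callable that returns the model's clean-latent estimate, and a time-scaled guidance weight λ·t with λ = 0.001 (the value reported above); the filter, the guidance target, and the update rule are assumptions, and the authoritative procedure is Algorithm 1 in the paper.

```python
import torch
import torch.nn.functional as F


def high_pass(z: torch.Tensor, kernel_size: int = 5) -> torch.Tensor:
    """Crude high-pass filter: subtract a box-blurred (low-frequency) copy.

    The paper's exact filter may differ; this choice is an assumption.
    """
    pad = kernel_size // 2
    low = F.avg_pool2d(
        F.pad(z, (pad, pad, pad, pad), mode="reflect"),
        kernel_size,
        stride=1,
    )
    return z - low


@torch.no_grad()
def hf_guided_estimate(denoiser, z_t, t, z_input, lam=1e-3):
    """Denoised estimate with an added high-frequency guidance term.

    `denoiser(z_t, t)` is assumed to return the model's clean-latent
    estimate; the guidance pulls it toward the high-frequency content of
    the degraded input latent `z_input`, scaled by the time weight lam * t.
    """
    z_hat = denoiser(z_t, t)
    guidance = high_pass(z_input) - high_pass(z_hat)
    return z_hat + lam * t * guidance


if __name__ == "__main__":
    # Dummy denoiser and random latents, just to show the call pattern.
    dummy = lambda z, t: z
    z_t = torch.randn(1, 4, 64, 64)
    z_in = torch.randn(1, 4, 64, 64)
    out = hf_guided_estimate(dummy, z_t, t=500, z_input=z_in)
    print(out.shape)  # torch.Size([1, 4, 64, 64])
```

The dummy denoiser and latent shapes are placeholders; in practice the estimate would come from the Stable Diffusion 1.5 backbone the paper fine-tunes.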
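For the "Experiment Setup" entry, the reported hyperparameters can be collected into a short configuration sketch. The optimizer, learning rate, resolution, conditioning-drop probability, and λ are taken from the table; the toy backbone and the `maybe_drop` helper are hypothetical stand-ins showing how conditioning dropout is typically wired in.

```python
import torch
import torch.nn as nn

# Toy stand-in for the diffusion backbone (Stable Diffusion 1.5 in the
# paper) so the snippet runs; the real model is far larger.
model = nn.Conv2d(4, 4, kernel_size=3, padding=1)

# Reported settings: Adam optimizer with learning rate 1e-4, 512x512
# training inputs, conditioning-drop probability 0.075, and lambda = 0.001
# for the time-scale weight in Algorithm 1.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
RESOLUTION = 512
DROP_PROB = 0.075
LAMBDA_TIME_SCALE = 1e-3


def maybe_drop(cond: torch.Tensor, null_cond: torch.Tensor) -> torch.Tensor:
    """Classifier-free-guidance-style dropout, applied independently to the
    input-image latent, the instruction, and the auxiliary prompt."""
    return null_cond if torch.rand(()).item() < DROP_PROB else cond
```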