Stochastic Actor-Executor-Critic for Image-to-Image Translation
Authors: Ziwei Luo, Jing Hu, Xin Wang, Siwei Lyu, Bin Kong, Youbing Yin, Qi Song, Xi Wu
IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on several image-to-image translation tasks have demonstrated the effectiveness and robustness of the proposed SAEC when facing high-dimensional continuous space problems. |
| Researcher Affiliation | Collaboration | Ziwei Luo¹, Jing Hu¹*, Xin Wang², Siwei Lyu³, Bin Kong², Youbing Yin², Qi Song² and Xi Wu¹. ¹Chengdu University of Information Technology, China; ²Keya Medical, Seattle, USA; ³University at Buffalo, SUNY, USA |
| Pseudocode | Yes | Algorithm 1 (Stochastic Actor-Executor-Critic). Input: environment Env and initial parameters θ1, θ2 for the critics, φ, ψ for the actor and executor. Initialize target networks θ̄1 ← θ1, θ̄2 ← θ2 and replay buffer D. For each iteration: x, y ← Env.reset(); for each environment step: zt ∼ πφ(zt\|xt), ŷt ← ηψ(xt, zt), (xt+1, rt) ← Env.step(ŷt), D ← D ∪ {(y, xt, zt, rt, xt+1)}. For each gradient step: update actor and executor (DL guided): φ ← φ − λDL ∇φ L_DL(φ), ψ ← ψ − λDL ∇ψ L_DL(ψ); update actor and critic (RL guided): θi ← θi − λQ ∇θi J_Q(θi) for i ∈ {1, 2}, φ ← φ − λπ ∇φ J_π(φ), α ← α − λ ∇α J(α), θ̄i ← τθi + (1 − τ)θ̄i for i ∈ {1, 2}. Output: θ1, θ2, φ, ψ. (A hedged Python sketch of this training loop is given after the table.) |
| Open Source Code | No | The paper does not provide any links to source code or explicitly state that the code for the described methodology is publicly available. |
| Open Datasets | Yes | Celeba-HQ dataset is used in this study, of which 28,000 images are used for training and 2,000 images are used for testing. Three datasets of realistic photo translation are also selected: the CMP Facades dataset for segmentation labels → images [Tyleček and Šára, 2013], the Cityscapes dataset for segmentation labels → images and images → labels [Cordts et al., 2016], and the edges and shoes dataset for edges → shoes [Yu and Grauman, 2014]. |
| Dataset Splits | No | The paper mentions 'Celeba-HQ dataset... of which 28,000 images are used for training and 2,000 images are used for testing,' but it does not explicitly provide details for a separate validation split. |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions various models and algorithms (e.g., VAE, U-Net, GANs, SAC, SNGAN, PPO) but does not provide specific version numbers for any software dependencies or libraries used in the implementation. |
| Experiment Setup | No | The paper mentions network structures and loss functions (e.g., 'actor-executor for our methods uses the same network structure as the encoder-decoder for CE', 'different SNGAN loss', 'PSNR reward and SNGAN loss'), but it does not provide specific hyperparameter values such as learning rate, batch size, or number of epochs, nor detailed optimizer settings for the experimental setup. (A hedged sketch of the PSNR reward is given after this table.) |
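
To make the extracted pseudocode in the Pseudocode row concrete, the following is a minimal Python sketch of the training loop in Algorithm 1. It is not the authors' implementation (no code was released): the module interfaces, the loss callables (`dl_loss_fn`, `critic_loss_fn`, `actor_loss_fn`, `alpha_loss_fn`), the `env.reset()`/`env.step()` API, and all hyperparameter values are illustrative assumptions.

```python
# Sketch of the SAEC training loop from Algorithm 1 (PyTorch-style).
# All names, signatures and hyperparameter values are assumptions;
# the paper does not release code or specify these details.
import copy
import torch


def train_saec(env, actor, executor, critic1, critic2, replay_buffer,
               dl_loss_fn, critic_loss_fn, actor_loss_fn, alpha_loss_fn,
               iterations=1000, env_steps=8, grad_steps=1,
               lr_dl=1e-4, lr_q=3e-4, lr_pi=3e-4, lr_alpha=3e-4, tau=0.005):
    """One plausible realisation of Algorithm 1; hyperparameters are guesses."""
    # Target critics: theta_bar_i <- theta_i
    target1, target2 = copy.deepcopy(critic1), copy.deepcopy(critic2)
    log_alpha = torch.zeros(1, requires_grad=True)  # entropy temperature alpha

    dl_opt = torch.optim.Adam(
        list(actor.parameters()) + list(executor.parameters()), lr=lr_dl)
    q_opt = torch.optim.Adam(
        list(critic1.parameters()) + list(critic2.parameters()), lr=lr_q)
    pi_opt = torch.optim.Adam(actor.parameters(), lr=lr_pi)
    alpha_opt = torch.optim.Adam([log_alpha], lr=lr_alpha)

    for _ in range(iterations):
        x, y = env.reset()                      # input image and ground truth
        for _ in range(env_steps):
            z = actor.sample(x)                 # z_t ~ pi_phi(z_t | x_t)
            y_hat = executor(x, z)              # y_hat_t = eta_psi(x_t, z_t)
            x_next, r = env.step(y_hat)         # environment transition and reward
            replay_buffer.add((y, x, z, r, x_next))
            x = x_next

        for _ in range(grad_steps):
            batch = replay_buffer.sample()
            # DL-guided update of actor and executor (supervised image loss)
            dl_loss = dl_loss_fn(actor, executor, batch)
            dl_opt.zero_grad(); dl_loss.backward(); dl_opt.step()
            # RL-guided updates (SAC-style): critics, actor, temperature
            q_loss = critic_loss_fn(critic1, critic2, target1, target2,
                                    actor, executor, log_alpha, batch)
            q_opt.zero_grad(); q_loss.backward(); q_opt.step()
            pi_loss = actor_loss_fn(critic1, critic2, actor, log_alpha, batch)
            pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()
            a_loss = alpha_loss_fn(actor, log_alpha, batch)
            alpha_opt.zero_grad(); a_loss.backward(); alpha_opt.step()
            # Polyak averaging: theta_bar_i <- tau*theta_i + (1 - tau)*theta_bar_i
            for tgt, src in ((target1, critic1), (target2, critic2)):
                for p_t, p in zip(tgt.parameters(), src.parameters()):
                    p_t.data.mul_(1 - tau).add_(p.data, alpha=tau)

    return critic1, critic2, actor, executor
```

The DL-guided and RL-guided updates are kept as separate optimizer steps to mirror the two update blocks in Algorithm 1; how the paper weights or interleaves them beyond this structure is not specified.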
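
The Experiment Setup row quotes a 'PSNR reward and SNGAN loss' without further detail. Below is a minimal sketch of a PSNR-based reward, assuming images scaled to [0, 1]; the exact reward shaping used by the authors is not specified, so this is only one plausible formulation.

```python
# Hypothetical PSNR-based reward; the paper names a "PSNR reward" but does
# not give its exact form, so this is an assumed formulation.
import torch


def psnr_reward(y_hat: torch.Tensor, y: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio between a generated image y_hat and target y."""
    mse = torch.mean((y_hat - y) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / (mse + 1e-12))
```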