Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
RLMiniStyler: Light-weight RL Style Agent for Arbitrary Sequential Neural Style Generation
Authors: Jing Hu, Chengming Feng, Shu Hu, Ming-Ching Chang, Xin Li, Xi Wu, Xin Wang
IJCAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through a series of experiments across various image resolutions, we have validated the advantages of RLMiniStyler over other state-of-the-art methods in generating high-quality, diverse artistic image sequences at a lower cost. ... 4 Experiments; 4.1 Experimental Setup; 4.2 Comparisons with Prior Arts; 4.3 Ablation Study; 4.4 User Study |
| Researcher Affiliation | Academia | Jing Hu1, Chengming Feng1, Shu Hu2, Ming-Ching Chang3, Xin Li3, Xi Wu1 and Xin Wang3; 1Chengdu University of Information Technology, China; 2Purdue University, USA; 3University at Albany, SUNY, USA; jing EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | The algorithm in the appendix describes the RLMiniStyler algorithm. |
| Open Source Code | Yes | Codes are available at https://github.com/fengxiaoming520/RLMiniStyler. |
| Open Datasets | Yes | Like most AST methods [Deng et al., 2022; Huang and Belongie, 2017; Liu et al., 2021; Park and Lee, 2019; Wang et al., 2023], we utilize the MS-COCO dataset [Lin et al., 2014] for content and the WikiArt dataset [Phillips and Mackintosh, 2011] for style. |
| Dataset Splits | No | The paper mentions scaling and cropping images for training and testing, but it does not provide explicit details about training, validation, and test splits (e.g., percentages or sample counts), nor does it explicitly state the use of standard splits for the mentioned datasets. |
| Hardware Specification | Yes | All experiments are conducted on a single NVIDIA Tesla P100 (16GB) GPU. |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer' but does not specify a version number for it or any other software libraries or frameworks used in the implementation. |
| Experiment Setup | Yes | We use the Adam optimizer [Kingma and Ba, 2014] with a learning rate 2e-4, the batch size in the environment set to 1, and the batch size sampled from the replay buffer set to 8. |
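The reported setup (Adam with learning rate 2e-4, environment batch size 1, replay-buffer sample batch size 8) can be sketched as a minimal uniform replay buffer in plain Python. This is an illustrative assumption, not the paper's implementation: the buffer capacity, transition format, and sampling scheme are not reported, and the `ReplayBuffer` class and its methods are hypothetical names.

```python
import random
from collections import deque

# Hyperparameters reported in the paper's experiment setup
LEARNING_RATE = 2e-4   # Adam optimizer learning rate [Kingma and Ba, 2014]
ENV_BATCH_SIZE = 1     # batch size in the environment
REPLAY_BATCH_SIZE = 8  # batch size sampled from the replay buffer


class ReplayBuffer:
    """Minimal FIFO replay buffer; the capacity is an assumption
    (the paper does not report one)."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        # Store one environment transition (oldest entries are evicted first)
        self.buffer.append(transition)

    def sample(self, batch_size=REPLAY_BATCH_SIZE):
        # Uniform sampling without replacement, as in standard off-policy RL
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))


# Collect a few dummy transitions, then draw one training batch of size 8
buf = ReplayBuffer()
for step in range(32):
    buf.push({"state": step, "action": step % 4, "reward": 0.0})
batch = buf.sample()
print(len(batch))  # 8
```

In a full training loop, each environment step would push one transition (matching the environment batch size of 1) and each update would draw a batch of 8 from the buffer for the Adam step.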