Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Bridging Compressed Image Latents and Multimodal Large Language Models
Authors: Chia-Hao Kao, Cheng Chien, Yu-Jen Tseng, Yi-Hsin Chen, Alessandro Gnutti, Shao-Yuan Lo, Wen-Hsiao Peng, Riccardo Leonardi
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on different neural image codecs and various MLLMs show that our method achieves great rate-accuracy performance with much less complexity. (Section 4, Experimental Results) |
| Researcher Affiliation | Collaboration | 1University of Brescia, Italy 2National Yang Ming Chiao Tung University, Taiwan 3Honda Research Institute USA |
| Pseudocode | No | The paper describes methods in prose and figures, but does not contain a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not explicitly state that the authors' own code for the described methodology is released. While a third-party tool's code link is provided, it does not pertain to the authors' implementation of their proposed framework. |
| Open Datasets | Yes | ImageNet (Deng et al., 2009), custom license, available at https://image-net.org/download.php; COCO (Lin et al., 2014), CC BY 4.0; SEED-Bench (Li et al., 2023a), Apache 2.0; Eastman Kodak, Kodak lossless true color image suite (Photo CD PCD0992), http://r0k.us/graphics/kodak |
| Dataset Splits | No | The paper mentions training on the ImageNet dataset and describes a '5-way 1-shot classification evaluation scenario' for a specific task, but it does not provide explicit training, validation, or test splits or percentages for the main experiments or for the transform-neck training beyond naming the dataset itself. |
| Hardware Specification | Yes | Our system can be successfully trained under various application scenarios on one RTX 4090 with 24GB of memory. |
| Software Dependencies | No | The paper mentions the use of an 'Adam optimizer' and general models like 'LLMs' and 'MLLMs', but does not provide specific software dependencies or library version numbers required to replicate the experiments. |
| Experiment Setup | Yes | We use the Adam optimizer, configured with β1 at 0.9, β2 at 0.999, ϵ at 10⁻⁸. Weight decay is disabled. The weighting factors α and β are set with a ratio of 1:100 for the two loss terms, and E1, E2 are empirically set to 20 and 40, respectively, in our experiments. For the scenario (d2) specifically, we find empirically that fixing the ratio γ : δ = 60 : 1 leads to a good trade-off between human and machine perception. Four models are trained for four different rate points, corresponding to λ = [0.004, 0.008, 0.016, 0.032]. |
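The experiment-setup cell above can be summarized as a small, framework-agnostic configuration sketch. This is illustrative only: the variable names (`adam_config`, `total_loss`, `alpha`, `beta`) are assumptions, not identifiers from the paper, and only the numeric values quoted in the table are grounded.

```python
# Sketch of the reported training configuration. Names are illustrative;
# only the numeric values come from the paper's quoted setup.
adam_config = {
    "beta1": 0.9,
    "beta2": 0.999,
    "eps": 1e-8,
    "weight_decay": 0.0,  # weight decay is disabled
}

# The two loss terms are weighted with a 1:100 ratio (alpha : beta).
alpha, beta = 1.0, 100.0

def total_loss(loss_a: float, loss_b: float) -> float:
    """Combine the two loss terms with the reported 1:100 weighting."""
    return alpha * loss_a + beta * loss_b

# Four models are trained, one per rate point lambda.
lambdas = [0.004, 0.008, 0.016, 0.032]
```

In a PyTorch-style setup, `adam_config` would map directly onto the optimizer's `betas`, `eps`, and `weight_decay` arguments; the sketch keeps it as plain data so it stays independent of any one framework.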