ICAR: Image-Based Complementary Auto Reasoning
Authors: Xijun Wang, Anqi Liang, Junbang Liang, Ming Lin, Yu Lou, Shan Yang
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Compared with the SOTA methods, this approach achieves up to 5.3% and 9.6% improvement in FITB score, and 22.3% and 31.8% improvement in SFID, on fashion and furniture respectively. |
| Researcher Affiliation | Collaboration | Xijun Wang (1,2), Anqi Liang (2), Junbang Liang (2), Ming Lin (1,2), Yu Lou (2), Shan Yang (2); (1) University of Maryland, College Park, USA; (2) Amazon, USA |
| Pseudocode | No | The paper does not contain structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | Benchmark Datasets: In the following experiments, we evaluate our proposed ICAR using four datasets. Deep Rooms (Gadde, Feng, and Martinez 2021)... STL-Home (Kang et al. 2019)... STL-Fashion (Kang et al. 2019)... And Exact Street2Shop (Hadi Kiapour et al. 2015)... |
| Dataset Splits | No | The paper mentions models are 'trained for 500 epochs' but does not specify the size, percentage, or specific creation method for training, validation, or test splits. It mentions 'test-split' for some datasets but no details for validation. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'AdamW' and 'CNN-based (ResNet50)' but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Models are trained for 500 epochs with a batch size of 256. We choose 1 negative sample in the triplet loss. And we use 1.0, 1.0, and 0.05 as the weights for cross-entropy loss, triplet loss, and regularizer loss respectively. |
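
Since no code is released, the following is a minimal PyTorch sketch of how the training objective quoted in the "Experiment Setup" row could be wired together. Only the loss weights (1.0 / 1.0 / 0.05), the single negative sample in the triplet loss, the 500-epoch / batch-size-256 schedule, and the AdamW optimizer come from the paper; the model, regularizer term, triplet margin, and learning rate are placeholder assumptions.

```python
# Hypothetical sketch of the reported training objective (not the authors' code).
# Values taken from the paper: loss weights 1.0 / 1.0 / 0.05, 1 negative sample,
# batch size 256, 500 epochs, AdamW. Everything else is an assumption.
import torch
import torch.nn as nn

W_CE, W_TRIPLET, W_REG = 1.0, 1.0, 0.05            # weights reported in the paper

cross_entropy = nn.CrossEntropyLoss()
triplet = nn.TripletMarginLoss(margin=1.0)          # margin not reported; assumed

def total_loss(logits, labels, anchor, positive, negative, reg_term):
    """Weighted sum of cross-entropy, triplet (1 negative per anchor), and regularizer losses."""
    return (W_CE * cross_entropy(logits, labels)
            + W_TRIPLET * triplet(anchor, positive, negative)
            + W_REG * reg_term)

model = nn.Linear(2048, 512)                        # stand-in for the actual ICAR model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # lr not reported; assumed
EPOCHS, BATCH_SIZE = 500, 256                       # as reported in the paper
```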