Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Mixture-of-Queries Transformer: Camouflaged Instance Segmentation via Queries Cooperation and Frequency Enhancement
Authors: Weiwei Feng, Nanqing Xu, Tengfei Liu, Weiqiang Wang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate MoQT outperforms 19 state-of-the-art CIS approaches on both the COD10K and NC4K datasets. Section 4 is titled 'Experiments' and details the experimental setups, comparisons with state-of-the-art methods, and ablation studies. |
| Researcher Affiliation | Collaboration | The authors' affiliations include "1Zhejiang University" (academic) and "2Ant Group" (industry), indicating a collaboration between academia and industry. |
| Pseudocode | No | The paper describes its methodology through architectural diagrams, mathematical formulations, and textual descriptions (e.g., in Section 3.3 'Mixture-of-Queries Mechanism' and Figure 4), but it does not include a clearly labeled pseudocode block or algorithm. |
| Open Source Code | No | The paper does not contain any explicit statement about making its source code publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | The paper states: "Following the mainstream works of CIS [Dong et al., 2023; Luo et al., 2023], we evaluate our method in two datasets: COD10K and NC4K." These are widely recognized and publicly available benchmark datasets in the field. |
| Dataset Splits | Yes | The paper specifies dataset splits: "COD10K includes 3040 training images and 2026 testing images, while NC4K contains 4121 test images for evaluating the generalization of proposed models." It also notes that the standard setting is followed for training and testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU specifications, or memory. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or specific library versions) used in the experiments. |
| Experiment Setup | Yes | The paper provides specific hyperparameter values for its objective function: "By default, we set α = 20 and β = 1." and in ablation studies confirms these are the chosen values for best performance. |