Abstract Reasoning with Distracting Features
Authors: Kecheng Zheng, Zheng-Jun Zha, Wei Wei
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrated strong improvements over baseline algorithms, beating the state-of-the-art models by 18.7% on the RAVEN dataset and 13.3% on the PGM dataset. |
| Researcher Affiliation | Collaboration | Kecheng Zheng University of Science and Technology of China zkcys001@mail.ustc.edu.cn Zheng-jun Zha University of Science and Technology of China zhazj@ustc.edu.cn Wei Wei Google Research wewei@google.com |
| Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Full code is available at https://github.com/zkcys001/distracting_feature. |
| Open Datasets | Yes | The PGM [34] dataset consists of 8 different subdatasets, each of which contains 119,552,000 images and 1,222,000 questions. ... The RAVEN [39] dataset consists of 1,120,000 images and 70,000 RPM questions, equally distributed across 7 distinct figure configurations. |
| Dataset Splits | No | The paper mentions evaluating the student model on a “held-out validation set” in section 3.1 and using a “neutral train/test split” for PGM in section 4.1, but it does not specify the size or percentage for a validation split. |
| Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., specific GPU/CPU models, memory) used to run its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or other libraries). |
| Experiment Setup | No | The paper describes the architecture of its models (LEN, teacher model using DDPG) but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings used during training. |
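Because the paper reports only a "held-out validation set" without sizes, a reproducer must fix their own split. The sketch below is one hypothetical way to do so deterministically; the 60/20/20 fractions, the seed, and the `make_split` helper are assumptions for illustration, not values taken from the paper.

```python
import random

def make_split(num_items, train_frac=0.6, val_frac=0.2, seed=0):
    """Deterministically partition item indices into train/val/test.

    NOTE: the 60/20/20 fractions and seed are illustrative assumptions;
    the paper does not report its validation-split sizes.
    """
    idx = list(range(num_items))
    random.Random(seed).shuffle(idx)  # fixed seed keeps the split reproducible
    n_train = int(num_items * train_frac)
    n_val = int(num_items * val_frac)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Example: splitting the 70,000 RAVEN questions
train, val, test = make_split(70000)
```

Recording such a split (fractions plus seed) alongside released code is a small step that would resolve the "Dataset Splits: No" finding above.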