A Partially-Supervised Reinforcement Learning Framework for Visual Active Search

Authors: Anindya Sarkar, Nathan Jacobs, Yevgeniy Vorobeychik

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive experiments demonstrate that the proposed representation and meta-learning frameworks significantly outperform the state of the art in visual active search on several problem domains.
Researcher Affiliation | Academia | Anindya Sarkar, Nathan Jacobs, Yevgeniy Vorobeychik, {anindya, jacobsn, yvorobeychik}@wustl.edu, Department of Computer Science and Engineering, Washington University in St. Louis
Pseudocode | Yes | Algorithm 1: The PSVAS algorithm.
Open Source Code | Yes | Our code is publicly available at this link.
Open Datasets | Yes | We evaluate the proposed approach using two datasets: xView [16] and DOTA [17].
Dataset Splits | Yes | We use 67% and 33% of the large satellite images to train and test the policy network, respectively.
Hardware Specification | Yes | We use 1 NVIDIA A100 and 3 GeForce GTX 1080Ti GPU servers for all our experiments.
Software Dependencies | No | The paper mentions software components like the Adam optimizer and ResNet-34 but does not provide specific version numbers for programming languages or libraries such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | We use a learning rate of 10^-4, batch size of 16, number of training epochs 200, and the Adam optimizer to train the policy network in all experimental settings.
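The Dataset Splits row reports a 67%/33% train/test division of the large satellite images. The following is a minimal sketch of such a split, assuming images are referenced by file path; the function name, seed, and helper variable are illustrative and not taken from the paper's released code.

```python
import random
from typing import Iterable, List, Tuple

def split_satellite_images(
    image_paths: Iterable[str],
    train_frac: float = 0.67,
    seed: int = 0,
) -> Tuple[List[str], List[str]]:
    """Shuffle the large satellite images and split them ~67%/33% into train/test sets."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(round(len(paths) * train_frac))
    return paths[:cut], paths[cut:]

# Example (all_image_paths is a hypothetical list of satellite image files):
# train_imgs, test_imgs = split_satellite_images(all_image_paths)
```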
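The Experiment Setup row gives the reported hyperparameters (Adam optimizer, learning rate 10^-4, batch size 16, 200 training epochs) for the policy network. The sketch below shows how those values map onto a PyTorch optimizer configuration; a plain torchvision ResNet-34 stands in for the actual policy network, whose search-specific head is not reproduced here, and the loss helper is hypothetical.

```python
import torch
from torchvision.models import resnet34

# Placeholder for the policy network: the paper uses a ResNet-34 backbone,
# but its task-specific head is not reproduced in this sketch.
policy_net = resnet34(weights=None)

# Hyperparameters as reported: Adam, learning rate 1e-4, batch size 16, 200 epochs.
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-4)
batch_size = 16
num_epochs = 200

# Skeleton of the training loop; the data loader and loss function are
# assumptions, not the paper's actual PSVAS objective.
# for epoch in range(num_epochs):
#     for batch in train_loader:                  # train_loader assumed, batch_size=16
#         loss = policy_loss(policy_net, batch)   # hypothetical loss helper
#         optimizer.zero_grad()
#         loss.backward()
#         optimizer.step()
```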