One-Shot Segmentation in Clutter

Authors: Claudio Michaelis, Matthias Bethge, Alexander Ecker

ICML 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We then present our experimental results (Sec. 6)" |
| Researcher Affiliation | Academia | (1) Centre for Integrative Neuroscience and Institute for Theoretical Physics, University of Tübingen, Germany; (2) Bernstein Centre for Computational Neuroscience, Tübingen, Germany; (3) Max Planck Institute for Biological Cybernetics, Tübingen, Germany; (4) Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA |
| Pseudocode | No | The paper describes methods and architectures in text and diagrams (e.g., Figure 3), but does not contain a formal pseudocode block or an algorithm labeled as such. |
| Open Source Code | Yes | "We publish the dataset, the code and our models" (https://github.com/michaelisc/cluttered-omniglot) |
| Open Datasets | Yes | "We propose a new benchmark dataset: cluttered Omniglot (Fig. 1A). ... We publish the dataset, the code and our models" (https://github.com/michaelisc/cluttered-omniglot) |
| Dataset Splits | Yes | "We split the dataset into three splits: training, validation and one-shot." |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper mentions the Adam optimizer but does not specify any software libraries or dependencies with version numbers (e.g., "PyTorch 1.9" or "TensorFlow 2.x"). |
| Experiment Setup | Yes | "We train the network for 20 epochs using Adam (Kingma & Ba, 2014) with a batch size of 250 and an initial learning rate of 5 × 10⁻⁴. After 10, 15 and 17 epochs, we divide the learning rate by 2. ... The initial learning rate is set to 5 × 10⁻⁵ and the batch size is 50. ... The initial learning rate is set to 2.5 × 10⁻⁴ and the batch size is 250." |
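The step-decay schedule quoted in the Experiment Setup row (initial rate 5 × 10⁻⁴, halved after epochs 10, 15 and 17 over a 20-epoch run) can be sketched in plain Python. This is a minimal illustration of the reported schedule, not the authors' released code; the function name and milestone handling are assumptions for the sketch:

```python
def learning_rate(epoch, initial_lr=5e-4, milestones=(10, 15, 17)):
    """Return the learning rate in effect at a given (0-indexed) epoch.

    The rate starts at `initial_lr` and is divided by 2 after each
    milestone epoch, matching the step schedule reported in the paper.
    """
    lr = initial_lr
    for m in milestones:
        if epoch >= m:
            lr /= 2.0
    return lr

# Epochs 0-9 train at 5e-4, epochs 10-14 at 2.5e-4,
# epochs 15-16 at 1.25e-4, and epochs 17-19 at 6.25e-5.
schedule = [learning_rate(e) for e in range(20)]
```

The same helper covers the other two reported configurations by swapping `initial_lr` to 5e-5 (batch size 50) or 2.5e-4 (batch size 250).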