Label-Free Supervision of Neural Networks with Physics and Domain Knowledge

Authors: Russell Stewart, Stefano Ermon

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the effectiveness of this approach on real world and simulated computer vision tasks. In our first two experiments, we construct a mapping from an image to the location of an object it contains."
Researcher Affiliation | Academia | Russell Stewart, Stefano Ermon, Department of Computer Science, Stanford University. {stewartr, ermon}@cs.stanford.edu
Pseudocode | No | The paper does not contain explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper provides a link for "Our data set" (footnote 1: https://github.com/russell91/labelfree) but does not explicitly state that the code for the methodology is open-source or available at this link.
Open Datasets | Yes | "Our data set [1] is collected on a laptop webcam running at 10 frames per second (Δt = 0.1 s)." [footnote 1: https://github.com/russell91/labelfree]
Dataset Splits | No | The paper holds out 25 trajectories for evaluation (later referred to as the "test images") but specifies no separate validation set and gives no complete train/validation/test split in counts or percentages.
Hardware Specification | No | The paper mentions collecting data on a "laptop webcam" but gives no specifics about the hardware (e.g., GPU/CPU models, memory) used for training or running the experiments.
Software Dependencies | No | The paper mentions using "TensorFlow" but does not specify a version number or any other software dependencies with their versions.
Experiment Setup | Yes | "Images are resized to 56×56 pixels... We use 3 Conv/ReLU/MaxPool blocks followed by 2 Fully Connected/ReLU layers with dropout probability 0.5 and a single regression output. We group trajectories into batches of size 16... We use the Adam optimizer... with a learning rate of 0.0001 and train for 4,000 iterations."
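The paper's central idea in the free-fall experiment is to supervise the height regressor not with labels, but with the constraint that predicted heights across a trajectory must fit a parabola with a fixed gravitational acceleration. The sketch below illustrates that structural loss in plain Python: it finds the closest trajectory of the form y(t) = y0 + v0·t + ½·a·t² (with a fixed, y0 and v0 free) to the network's per-frame predictions and penalizes the residual. The function name, the squared-error penalty, and the acceleration value in pixel units are illustrative assumptions, not the authors' exact implementation; Δt = 0.1 s matches the paper's webcam frame rate.

```python
# Minimal sketch of a label-free "free fall" supervision signal, assuming
# per-frame height predictions y_pred from a network. We project the
# predictions onto the nearest physically valid parabola with a fixed
# quadratic coefficient `a` and use the residual as the training loss.

def free_fall_loss(y_pred, dt=0.1, a=-9.8):
    """Mean squared distance from predictions to the best-fit parabola
    y(t) = y0 + v0*t + 0.5*a*t^2, with `a` fixed and (y0, v0) free."""
    n = len(y_pred)
    t = [i * dt for i in range(n)]
    # Subtract the known quadratic term; the remainder should be linear in t.
    z = [y - 0.5 * a * ti * ti for y, ti in zip(y_pred, t)]
    # Closed-form linear least squares for z ~ y0 + v0 * t.
    st = sum(t)
    stt = sum(ti * ti for ti in t)
    sz = sum(z)
    stz = sum(ti * zi for ti, zi in zip(t, z))
    v0 = (n * stz - st * sz) / (n * stt - st * st)
    y0 = (sz - v0 * st) / n
    residuals = [zi - (y0 + v0 * ti) for zi, ti in zip(z, t)]
    return sum(r * r for r in residuals) / n

# A trajectory that obeys the assumed physics incurs (near-)zero loss,
# while arbitrary predictions are penalized:
perfect = [3.0 + 2.0 * (i * 0.1) + 0.5 * (-9.8) * (i * 0.1) ** 2
           for i in range(16)]
print(free_fall_loss(perfect))          # ~0
print(free_fall_loss([0, 5, 1, 7, 2]))  # non-parabolic -> positive loss
```

In the paper this residual replaces a labeled regression target entirely: gradients flow from the physics constraint back through the convolutional network described in the Experiment Setup row, trained with Adam at learning rate 0.0001 over trajectory batches of size 16.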