Generalizing GANs: A Turing Perspective

Authors: Roderich Groß, Yue Gu, Wei Li, Melvin Gauci

NeurIPS 2017

Reproducibility Variable Result LLM Response
Research Type: Experimental
We validate this idea using two case studies. In the first case study, a computer infers the behavior of an agent while controlling its environment. In the second case study, a robot infers its own sensor configuration while controlling its movements. The results confirm that allowing discriminators to interrogate improves the accuracy of the models.
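Both case studies hinge on the discriminator actively choosing stimuli rather than passively classifying samples. A minimal sketch of that interrogation idea, assuming a toy two-state PFSM with illustrative transition probabilities (none of the names or values below are from the paper):

```python
import random

def pfsm_step(state, stimulus, transition_probs):
    """Advance a toy two-state PFSM; the transition probability
    depends on the stimulus chosen by the discriminator."""
    p = transition_probs[(state, stimulus)]
    return 1 - state if random.random() < p else state

def interrogate(agent_probs, policy, steps=100):
    """Let the discriminator's stimulus policy drive the agent and
    collect the resulting state trajectory for judgement."""
    state, trajectory = 0, []
    for t in range(steps):
        stimulus = policy(state, t)  # active choice: the interrogation
        state = pfsm_step(state, stimulus, agent_probs)
        trajectory.append(state)
    return trajectory
```

The point of the sketch is that `policy` gives the discriminator control over the data-generating process, which is what distinguishes this setup from a passively observing GAN discriminator.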
Researcher Affiliation: Academia
Roderich Groß and Yue Gu, Department of Automatic Control and Systems Engineering, The University of Sheffield, {r.gross,ygu16}@sheffield.ac.uk; Wei Li, Department of Electronics, The University of York, wei.li@york.ac.uk; Melvin Gauci, Wyss Institute for Biologically Inspired Engineering, Harvard University, mgauci@g.harvard.edu
Pseudocode: No
The paper refers to an algorithmic description in a cited work ("For an algorithmic description of Turing Learning, see [8].") but does not itself provide pseudocode or an algorithm block.
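For orientation, the Turing Learning scheme the paper builds on can be sketched as two coevolving populations, models rewarded for fooling discriminators and discriminators rewarded for separating models from the genuine agent. This is a hedged sketch only; the fitness assignment, selection, and variation operators below are placeholders, not the algorithm of [8]:

```python
def select_and_mutate(pop, fitness, mutate):
    """Placeholder truncation selection: keep the better half
    (by fitness) and refill the population with mutated copies."""
    ranked = [x for _, x in sorted(zip(fitness, pop), key=lambda p: -p[0])]
    parents = ranked[: len(pop) // 2]
    return parents + [mutate(p) for p in parents]

def turing_learning(models, discriminators, judge, mutate, generations=10):
    """Coevolve models and discriminators. `judge(d, m, genuine)` is a
    user-supplied trial returning True if discriminator d labels the
    data source as genuine."""
    for _ in range(generations):
        m_fit = [0.0] * len(models)
        d_fit = [0.0] * len(discriminators)
        # Pair every model with every discriminator.
        for i, m in enumerate(models):
            for j, d in enumerate(discriminators):
                if judge(d, m, genuine=False):
                    m_fit[i] += 1  # model fooled the discriminator
                else:
                    d_fit[j] += 1  # discriminator caught the model
        # Discriminators are also scored against the genuine agent.
        for j, d in enumerate(discriminators):
            if judge(d, None, genuine=True):
                d_fit[j] += 1
        models = select_and_mutate(models, m_fit, mutate)
        discriminators = select_and_mutate(discriminators, d_fit, mutate)
    return models, discriminators
```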
Open Source Code: No
The paper does not provide concrete access to its own source code. It references a third-party simulator: "S. Magnenat, M. Waibel, and A. Beyeler. Enki: The fast 2D robot simulator, 2011. https://github.com/enki-community/enki."
Open Datasets: No
The paper does not provide concrete access information for a publicly available or open dataset. For Case Study 1, training data is generated through interaction with a PFSM: "To obtain the training data, the discriminator interacts with the PFSM, shown in Figure 2." For Case Study 2, the data comes from a real robot: "The training data comes from the eight proximity sensors of a real e-puck robot".
Dataset Splits: No
The paper does not provide specific dataset split information (e.g., percentages, sample counts, or citations to predefined splits) for training, validation, or test sets. It describes only how evaluation is performed during the evolutionary process: "Each of the 100 candidate discriminators is evaluated once with each of the 100 models, as well as an additional 100 times with the training agent."
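The quoted scheme implies a fixed per-generation evaluation budget. As a quick sanity check (our arithmetic, not a figure stated in the paper):

```python
# Per-generation discriminator trials implied by the quoted setup:
# 100 discriminators, each paired once with each of 100 models,
# plus 100 extra trials against the training agent.
discriminators = 100
models = 100
training_agent_trials = 100

per_discriminator = models + training_agent_trials  # 200 trials each
total = discriminators * per_discriminator
print(total)  # 20000 trials per generation
```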
Hardware Specification: No
The paper mentions the e-puck robot as an experimental platform but does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or computer specifications) used for running its experiments or simulations.
Software Dependencies: No
The paper mentions software components such as an "Elman neural network" and "Enki: The fast 2D robot simulator", but does not provide specific version numbers for these or any other ancillary software dependencies.
Experiment Setup: Yes
Case Study 1: "The number of states are set to four (n = 4). The parameters used to generate the (genuine) data samples are given by: q = (p1, p2, v2, v3, v4) = (0.1, 1.0, 0.2, 0.4, 0.6)." "The discriminator is implemented as an Elman neural network [25] with 1 input neuron, 5 hidden neurons, and 2 output neurons." "We use µ = λ = 50 in both cases. The optimization process is stopped after 1000 generations."
Case Study 2: "The network has 8 inputs that receive values from the robot's proximity sensors (s1, s2, . . . , s8)." "The evaluation lasts for 10 seconds. As the robot's sensors and actuators are updated 10 times per second, this results in 100 time steps."
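The quoted discriminator architecture (an Elman network with 1 input, 5 hidden, and 2 output neurons) can be sketched as below. The weight initialization and the logistic output activation are illustrative assumptions; the excerpt does not specify them:

```python
import math
import random

N_IN, N_HID, N_OUT = 1, 5, 2  # sizes quoted in the paper

def make_weights(rng):
    """Random weights: each hidden neuron sees the input plus its own
    previous activations (the Elman context units), plus a bias."""
    w_hid = [[rng.uniform(-1, 1) for _ in range(N_IN + N_HID + 1)]
             for _ in range(N_HID)]
    w_out = [[rng.uniform(-1, 1) for _ in range(N_HID + 1)]
             for _ in range(N_OUT)]
    return w_hid, w_out

def step(weights, x, context):
    """One time step: returns the two outputs and the new context."""
    w_hid, w_out = weights
    inp = [x] + context + [1.0]  # input, context units, bias
    hidden = [math.tanh(sum(w * v for w, v in zip(row, inp)))
              for row in w_hid]
    out_in = hidden + [1.0]
    outputs = [1 / (1 + math.exp(-sum(w * v for w, v in zip(row, out_in))))
               for row in w_out]
    return outputs, hidden

def run(weights, sequence):
    """Feed an observation sequence through the recurrent network; in the
    paper's setup one output drives the interrogation and the other casts
    the genuine/counterfeit judgement."""
    context = [0.0] * N_HID
    outputs = None
    for x in sequence:
        outputs, context = step(weights, x, context)
    return outputs
```

Feeding back the previous hidden activations as context is what makes this an Elman (simple recurrent) network rather than a feedforward one.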