Reconstruct & Crush Network
Authors: Erinc Merdivan, Mohammad Reza Loghmani, Matthieu Geist
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show the flexibility of the proposed approach in dealing with different types of data in different settings: images with CIFAR-10 and CIFAR-100 (not-in-training setting), text with Amazon reviews (PU learning) and dialogues with Facebook bAbI (next response classification and dialogue completion). |
| Researcher Affiliation | Academia | 1 AIT Austrian Institute of Technology GmbH, Vienna, Austria 2 LORIA (Univ. Lorraine & CNRS), CentraleSupélec, Univ. Paris-Saclay, 57070 Metz, France 3 Vision4Robotics lab, ACIN, TU Wien, Vienna, Austria 4 Université de Lorraine & CNRS, LIEC, UMR 7360, Metz, F-57070 France |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating the availability of its source code. |
| Open Datasets | Yes | CIFAR-10 consists of 60k 32x32 color images in 10 classes, with 6k images per class. There are 50k training images and 10k test images [14]. We converted the images to gray-scale and used 5k images per class. |
| Dataset Splits | Yes | CIFAR-10 consists of 60k 32x32 color images in 10 classes, with 6k images per class. There are 50k training images and 10k test images [14]. We converted the images to gray-scale and used 5k images per class. All the training samples are used for training, except for those belonging to the ship class. Test samples of automobile and ship are used for testing. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU model, CPU type) used to run the experiments. |
| Software Dependencies | No | The paper mentions that models are implemented in "Tensorflow" and trained with the "adam optimizer", but it does not provide specific version numbers for TensorFlow or any other software libraries. |
| Experiment Setup | Yes | For our autoencoder, we used a convolutional network defined as: (32)3c1s-(32)3c1s-(64)3c2s-(64)3c2-(32)3c1s-512f-1024f... These models are implemented in Tensorflow and trained with the adam optimizer [12] (learning rate of 0.0004) and a mini-batch size of 100 samples. The margin m was set to 1.0 and the threshold T to 0.5. |
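The layer notation quoted in the Experiment Setup row is compact: `(N)KcSs` denotes a convolutional layer with N filters of kernel size K×K and stride S, and `Nf` a fully-connected layer with N units. A minimal sketch of a parser for this notation (the function name and tuple format are our own, not from the paper; the trailing `s` is treated as optional since one token in the quoted string omits it):

```python
import re

def parse_layers(spec):
    """Parse layer tokens like '(32)3c1s' (conv: filters, kernel, stride)
    or '512f' (fully connected: units) into descriptive tuples."""
    layers = []
    for token in spec.split("-"):
        conv = re.fullmatch(r"\((\d+)\)(\d+)c(\d+)s?", token)
        fc = re.fullmatch(r"(\d+)f", token)
        if conv:
            filters, kernel, stride = map(int, conv.groups())
            layers.append(("conv", filters, kernel, stride))
        elif fc:
            layers.append(("fc", int(fc.group(1))))
        else:
            raise ValueError(f"unrecognized layer token: {token}")
    return layers

# The encoder portion quoted above (the paper's spec continues past 1024f)
print(parse_layers("(32)3c1s-(32)3c1s-(64)3c2s-(64)3c2-(32)3c1s-512f-1024f"))
```

This reads the quoted prefix as five convolutional layers followed by two fully-connected layers; the rest of the architecture is elided in the quote.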
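The not-in-training split described in the Dataset Splits row (train on all CIFAR-10 classes except ship, test on automobile and ship) can be sketched as follows. This is an illustrative helper, not the authors' code, and it assumes the standard CIFAR-10 label ordering in which automobile = 1 and ship = 8:

```python
import numpy as np

# Assumed standard CIFAR-10 label indices
AUTOMOBILE, SHIP = 1, 8

def not_in_training_split(x_train, y_train, x_test, y_test):
    """Keep every training class except 'ship'; keep only
    'automobile' and 'ship' samples in the test set."""
    train_mask = y_train != SHIP
    test_mask = np.isin(y_test, [AUTOMOBILE, SHIP])
    return (x_train[train_mask], y_train[train_mask],
            x_test[test_mask], y_test[test_mask])

# Tiny synthetic check with fake labels
y_tr = np.array([0, 1, 8, 8, 5])
x_tr = np.arange(5)
y_te = np.array([1, 8, 3])
x_te = np.arange(3)
x1, y1, x2, y2 = not_in_training_split(x_tr, y_tr, x_te, y_te)
```

Under this split, the ship class is never seen during training, so at test time the model must separate a known class (automobile) from an entirely unseen one (ship).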