Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences
Authors: Ikko Yamane, Junya Honda, Florian Yger, Masashi Sugiyama
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We prove statistical consistency and error bounds of our method and experimentally confirm its practical usefulness. |
| Researcher Affiliation | Academia | 1LAMSADE, CNRS, Université Paris-Dauphine, PSL Research University, 75016 PARIS, FRANCE 2RIKEN AIP, Tokyo, Japan 3Kyoto University, Kyoto, Japan 4The University of Tokyo, Tokyo, Japan. Correspondence to: Ikko Yamane <ikko.yamane@dauphine.psl.eu>. |
| Pseudocode | Yes | Algorithm 1: Two-Step Regressed Regression (2Step-RR); Algorithm 2: Joint Regressed Regression (Joint-RR) (a hedged sketch of the two-step idea follows the table) |
| Open Source Code | Yes | The code will be available on https://github.com/i-yamane/mediated_uncoupled_learning. |
| Open Datasets | Yes | MNIST (LeCun et al., 1994), Fashion-MNIST (Xiao et al., 2017), CIFAR-10, and CIFAR-100 (Krizhevsky et al., 2009) (see the dataset-loading sketch below the table) |
| Dataset Splits | Yes | We use 1,000 mediated uncoupled data for training and 10,000 coupled (X, Y)-data for test evaluation. (...) We use randomly sampled 10,000 mediated uncoupled data for training and 10,000 coupled (X, Y)-data for test evaluation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2019)' and 'Adam (Kingma & Ba, 2017)' but does not provide specific version numbers for PyTorch or any other libraries or dependencies used. |
| Experiment Setup | Yes | We train all models with Adam (Kingma & Ba, 2017) for 200 epochs. (...) We use the default values of the implementation provided by PyTorch (Paszke et al., 2019) for all the parameters of Adam: the learning rate is 0.001, and β is (0.9, 0.999). (...) We turn off the weight decay and set the other tuning parameters of Adam as in PyTorch (Paszke et al., 2019): the learning rate is 0.001 and β is (0.9, 0.999). (A training-loop sketch with these settings follows the table.) |
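
The Pseudocode row names Two-Step Regressed Regression (2Step-RR). The snippet below is a minimal sketch of the two-step idea suggested by that name: first regress Y on the mediating variable U using the (U, Y)-paired data, then regress those pseudo-labels on X using the (X, U)-paired data. The synthetic data, Ridge models, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-step "regressed regression" idea (2Step-RR).
# Step 1: learn h(u) ~ E[Y | U = u] from the (U, Y)-paired data.
# Step 2: regress the pseudo-labels h(u) onto X using the (X, U)-paired data,
#         so f(x) approximates E[Y | X = x] via the mediating variable U.
# All data below are synthetic placeholders; the real experiments use image
# datasets and neural networks trained with Adam.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Mediated uncoupled data: (X, U)-pairs and (U, Y)-pairs,
# with no direct (X, Y) correspondences.
n_xu, n_uy, d = 1000, 1000, 5
x_xu = rng.normal(size=(n_xu, d))
u_xu = x_xu @ rng.normal(size=(d, 3))                    # U observed jointly with X
u_uy = rng.normal(size=(n_uy, 3))
y_uy = u_uy.sum(axis=1) + 0.1 * rng.normal(size=n_uy)    # Y observed jointly with U

# Step 1: h(u) approximates E[Y | U = u].
h = Ridge(alpha=1.0).fit(u_uy, y_uy)

# Step 2: f(x) regresses the pseudo-labels h(u) from the (X, U)-data.
f = Ridge(alpha=1.0).fit(x_xu, h.predict(u_xu))

# f can now be evaluated on coupled (X, Y)-test data if available.
x_test = rng.normal(size=(10, d))
print(f.predict(x_test))
```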
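
The Open Datasets row lists MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100. One hedged way to obtain them is through torchvision; whether the authors used torchvision's loaders is an assumption, and the root path and transform below are placeholders.

```python
# Hypothetical torchvision-based loading of the open datasets named in the paper.
# The root directory and transform are placeholders; the paper does not state
# how the datasets were downloaded or preprocessed.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
mnist = datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)
fmnist = datasets.FashionMNIST(root="./data", train=True, download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100(root="./data", train=True, download=True, transform=to_tensor)
print(len(mnist), len(fmnist), len(cifar10), len(cifar100))
```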
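
The Experiment Setup row reports Adam with the PyTorch defaults (learning rate 0.001, β = (0.9, 0.999)), weight decay turned off, and 200 training epochs. The sketch below wires those settings into a minimal PyTorch training loop; the model architecture and data are illustrative placeholders, not the networks or datasets used in the paper.

```python
# Minimal PyTorch training-loop sketch matching the reported optimizer settings:
# Adam with lr=0.001 and betas=(0.9, 0.999), weight decay off, 200 epochs.
# The model and data below are placeholders, not the paper's architectures.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-3, betas=(0.9, 0.999), weight_decay=0.0
)
loss_fn = nn.MSELoss()

# Placeholder dataset standing in for the mediated uncoupled training data.
loader = DataLoader(
    TensorDataset(torch.randn(1000, 784), torch.randn(1000, 10)),
    batch_size=128, shuffle=True,
)

for epoch in range(200):
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
```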