Photonic Differential Privacy with Direct Feedback Alignment
Authors: Ruben Ohana, Hamlet Medina, Julien Launay, Alessandro Cappelli, Iacopo Poli, Liva Ralaivola, Alain Rakotomamonjy
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct experiments demonstrating the ability of our learning procedure to achieve solid end-task performance. |
| Researcher Affiliation | Collaboration | Ruben Ohana¹ ³, Hamlet J. Medina Ruiz², Julien Launay¹ ³, Alessandro Cappelli¹, Iacopo Poli¹, Liva Ralaivola², Alain Rakotomamonjy² — ¹LightOn, Paris, France; ²Criteo AI Lab, Paris, France; ³LPENS, École Normale Supérieure, Paris, France |
| Pseudocode | Yes | Algorithm 1 Photonic DFA training |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the methodology described in this paper. |
| Open Datasets | Yes | We perform our experiments on Fashion MNIST dataset [32] |
| Dataset Splits | Yes | We perform our experiments on Fashion MNIST dataset [32], reserving 10% of the data as validation, and reporting test accuracy on a held-out set. |
| Hardware Specification | Yes | We run our simulations on cloud servers with a single NVIDIA V100 GPU and an OPU, for a total estimate of 75 GPU-hours. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., a PyTorch or Python version). |
| Experiment Setup | Yes | Optimization is done over 15 epochs with SGD, using a batch size of 256, learning rate of 0.01 and 0.9 momentum. |
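The table's Pseudocode and Experiment Setup rows describe Direct Feedback Alignment (DFA) training with SGD (batch size 256, learning rate 0.01, momentum 0.9). As a hedged illustration of what that training loop looks like, the sketch below implements plain (non-photonic) DFA in NumPy: the output error is projected to the hidden layer through a fixed random matrix `B` instead of being backpropagated through the transposed weights. The synthetic two-class data, network sizes, and epoch count are placeholders, not the paper's Fashion-MNIST setup; only the optimizer hyperparameters are taken from the table.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the dataset: 256 samples (one "batch"), 20 features,
# a linearly separable binary target.
X = rng.normal(size=(256, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Two-layer network: 20 -> 32 -> 1.
W1 = rng.normal(scale=0.1, size=(20, 32))
W2 = rng.normal(scale=0.1, size=(32, 1))
# Fixed random feedback matrix: the defining ingredient of DFA. It replaces
# W2.T in the backward pass and is never updated.
B = rng.normal(scale=0.1, size=(1, 32))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, momentum = 0.01, 0.9  # hyperparameters quoted in the table
v1, v2 = np.zeros_like(W1), np.zeros_like(W2)

losses = []
for epoch in range(200):  # epoch count is illustrative, not the paper's 15
    # Forward pass
    a1 = X @ W1
    h1 = np.tanh(a1)
    y_hat = sigmoid(h1 @ W2)

    # Output error and a simple squared-error metric for monitoring
    e = y_hat - y
    losses.append(float(np.mean(e ** 2)))

    # DFA backward pass: project the output error through the fixed random
    # matrix B (backprop would use e @ W2.T here), then apply tanh's derivative.
    dh1 = (e @ B) * (1.0 - h1 ** 2)

    g2 = h1.T @ e / len(X)
    g1 = X.T @ dh1 / len(X)

    # SGD with momentum
    v2 = momentum * v2 - lr * g2
    v1 = momentum * v1 - lr * g1
    W2 += v2
    W1 += v1
```

In the paper's photonic variant, the random projection `e @ B` is the step performed optically by the OPU; everything else in the loop stays on the GPU.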