Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Layer Collaboration in the Forward-Forward Algorithm
Authors: Guy Lorberbom, Itai Gat, Yossi Adi, Alexander Schwing, Tamir Hazan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate the efficacy of the proposed version when considering both information flow and objective metrics. |
| Researcher Affiliation | Collaboration | ¹Technion; ²FAIR Team, Meta AI Research; ³The Hebrew University of Jerusalem; ⁴University of Illinois at Urbana-Champaign |
| Pseudocode | Yes | Algorithm 1: Forward-Forward; Algorithm 2: Collaborative Forward-Forward (a hedged sketch of the per-layer objective follows the table). |
| Open Source Code | No | The paper does not contain any explicit statement about making the source code available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We compare those methods using MNIST, Fashion-MNIST, and CIFAR-10. |
| Dataset Splits | No | The paper mentions training on MNIST, Fashion-MNIST, and CIFAR-10 and shows performance over epochs (Figures 2 and 4), but it does not specify training/validation/test split percentages, sample counts, or a splitting methodology. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU model, CPU type, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using SGD for optimization but does not list any specific software libraries or their version numbers (e.g., Python, PyTorch, TensorFlow versions) that would be needed for reproducibility. |
| Experiment Setup | No | The paper mentions hyperparameters such as θ and training concepts such as SGD and epochs, and it states "We detail the experimental setup in the appendix." Since the appendix text is not provided, the main body does not contain concrete hyperparameter values (e.g., learning rate, batch size) or detailed system-level training configurations needed for reproducibility. |
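
For context on the θ hyperparameter and the per-layer objective referenced above, here is a minimal sketch of a single Forward-Forward layer in the style of Hinton's original formulation, on which the paper's Algorithm 1 builds. It assumes PyTorch; the class name `FFLayer`, the softplus form of the goodness loss, and the default values of `theta` and `lr` are illustrative assumptions, not the authors' released code (none is available).

```python
# Hedged sketch of a per-layer Forward-Forward objective. Names, defaults,
# and the exact loss form are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One fully connected layer trained with a local goodness objective."""

    def __init__(self, in_dim: int, out_dim: int, theta: float = 2.0, lr: float = 0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.theta = theta  # goodness threshold (the paper's θ hyperparameter)
        self.opt = torch.optim.SGD(self.parameters(), lr=lr)  # paper reports SGD

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize the input so only its direction, not its magnitude,
        # carries information into this layer.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos: torch.Tensor, x_neg: torch.Tensor) -> float:
        # Goodness = sum of squared activations; push it above theta for
        # positive samples and below theta for negative samples.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        # softplus(θ - g_pos) penalizes low positive goodness;
        # softplus(g_neg - θ) penalizes high negative goodness.
        loss = F.softplus(torch.cat([self.theta - g_pos, g_neg - self.theta])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return loss.item()
```

In greedy layer-wise training, each layer's input is detached from the previous layer's graph so gradients stay local; the paper's collaborative variant (Algorithm 2) revisits exactly this isolation between layers, and the sketch above does not attempt to reproduce it.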