A graph-theoretic approach to multitasking

Authors: Noga Alon, Daniel Reichman, Igor Shinkar, Tal Wagner, Sebastian Musslick, Jonathan D. Cohen, Tom Griffiths, Biswadip Dey, Kayhan Ozcimder

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper we use a graph-theoretic analysis of network architecture to address this question, where tasks are represented as edges in a bipartite graph G = (A ∪ B, E). We define a new measure of the multitasking capacity of such networks... Our main result is an inherent tradeoff between the multitasking capacity and the average degree of the network that holds regardless of the network architecture. These results are also extended to networks of depth greater than 2.
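The paper's notion of multitasking capacity is built on sets of tasks (edges) that can run concurrently without interference. A minimal sketch of that condition, assuming the induced-matching formulation used in this line of work (the graph instance and node names below are illustrative, not taken from the paper):

```python
from itertools import combinations

def is_induced_matching(edges, subset):
    """Check whether `subset` forms an induced matching in the bipartite
    graph with edge set `edges`: no two chosen edges share an endpoint,
    and no edge of the graph joins endpoints of two distinct chosen edges."""
    for (a1, b1), (a2, b2) in combinations(subset, 2):
        # Matching condition: chosen edges are pairwise vertex-disjoint.
        if a1 == a2 or b1 == b2:
            return False
        # Induced condition: no "cross" edge of G links two chosen edges.
        if (a1, b2) in edges or (a2, b1) in edges:
            return False
    return True

# Tasks as edges in a bipartite graph G = (A ∪ B, E).
E = {("a1", "b1"), ("a1", "b2"), ("a2", "b2"), ("a3", "b3")}

print(is_induced_matching(E, [("a1", "b1"), ("a3", "b3")]))  # True
print(is_induced_matching(E, [("a1", "b1"), ("a2", "b2")]))  # False: cross edge (a1, b2)
```

The second set fails because the graph edge ("a1", "b2") connects endpoints of the two chosen tasks, the kind of interference the paper's capacity measure rules out.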
Researcher Affiliation | Academia | Noga Alon (Tel-Aviv University); Daniel Reichman (UC Berkeley); Igor Shinkar (UC Berkeley); Tal Wagner (MIT); Sebastian Musslick (Princeton University); Jonathan D. Cohen (Princeton University); Thomas L. Griffiths (UC Berkeley); Biswadip Dey (Princeton University); Kayhan Ozcimder (Princeton University)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methodology, nor links to a code repository.
Open Datasets | No | This is a theoretical paper that does not conduct experiments on datasets; therefore no dataset information for training is provided.
Dataset Splits | No | This is a theoretical paper that does not conduct experiments with dataset splits; therefore no validation split information is provided.
Hardware Specification | No | The paper is theoretical and does not report on experiments requiring specific hardware, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not report on experiments requiring specific software dependencies, so no software versions are mentioned.
Experiment Setup | No | The paper is theoretical and does not report on empirical experiments; therefore no experimental setup details, such as hyperparameters or training configurations, are provided.