Gradient Information for Representation and Modeling

Authors: Jie Ding, Robert Calderbank, Vahid Tarokh

NeurIPS 2019

Reproducibility assessment: each variable below is listed with its result and the LLM response quoting or summarizing the paper's supporting evidence.
Research Type: Experimental
LLM Response: 'As an example, we apply these measures to the Chow-Liu tree algorithm, and demonstrate remarkable performance and significant computational reduction using both synthetic and real data.' Experimental evidence also includes 'Table 2: Classification accuracy of three methods for data with different levels of correlation'.
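For context on the evidence quoted above: the paper uses its information measures as edge weights in a Chow-Liu-style tree construction. The sketch below shows the classical Chow-Liu procedure with a plug-in histogram estimate of mutual information as the pairwise score. The function names (`pairwise_score`, `chow_liu_tree`) and the binned estimator are illustrative assumptions, not the paper's gradient-information measures, which would be substituted as the scoring function.

```python
import numpy as np
from itertools import combinations
from scipy.sparse.csgraph import minimum_spanning_tree

def pairwise_score(x, y, bins=8):
    """Plug-in mutual information from a 2D histogram (illustrative assumption,
    not the paper's gradient-information measure)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)  # marginal of x, shape (bins, 1)
    py = joint.sum(axis=0, keepdims=True)  # marginal of y, shape (1, bins)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())

def chow_liu_tree(data):
    """Edge list of the maximum-weight spanning tree over pairwise scores.

    data: (n, p) array of n samples of p variables.
    """
    p = data.shape[1]
    weights = np.zeros((p, p))
    for i, j in combinations(range(p), 2):
        weights[i, j] = pairwise_score(data[:, i], data[:, j])
    # SciPy provides only a *minimum* spanning tree, so negate the scores;
    # exact-zero scores would drop out, which is acceptable for a sketch.
    mst = minimum_spanning_tree(-weights)
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))
```

Swapping `pairwise_score` for a gradient-information score would recover the high-level shape of the paper's tree approximation, though the paper's exact procedure is not reproduced here.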
Researcher Affiliation: Academia
LLM Response: 'Jie Ding, School of Statistics, University of Minnesota, Minneapolis, MN 55455, dingj@umn.edu; Robert Calderbank, Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, robert.calderbank@duke.edu; Vahid Tarokh, Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, vahid.tarokh@duke.edu'
Pseudocode: Yes
LLM Response: The paper presents 'Algorithm 1: Generic tree approximation based on gradient information' and 'Algorithm 2: Community discovery based on mutual information'.
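Algorithm 2 is only named above; its details are not reproduced in this report. As a rough illustration of community discovery from pairwise information scores, one common pattern is to threshold the score matrix and take connected components. The helper name `discover_communities` and the threshold parameter are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def discover_communities(scores, threshold):
    """Label variables by connected components of the thresholded score graph.

    scores: symmetric (p, p) array of pairwise scores (e.g., mutual information).
    threshold: scores above this value create an edge (assumed, not from the paper).
    """
    adjacency = csr_matrix(scores > threshold)
    n_communities, labels = connected_components(adjacency, directed=False)
    return labels
```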
Open Source Code: No
LLM Response: The paper does not provide any specific links to source code repositories, nor does it explicitly state that the code for its methodology is being made open-source or publicly available.
Open Datasets: Yes
LLM Response: 'We apply our algorithm to a protein signaling flow cytometry dataset. The dataset encodes the presence of p = 11 proteins in n = 7466 cells. It was first analyzed using Bayesian networks in [22] who fit a directed acyclic graph to the data, later studied in [23] using different methods.' 'In a data study, we considered a dataset constructed in [28]. The data was also studied in [29] using an algorithm that recovers the communities using the eigenvectors of the sample covariance matrix.'
Dataset Splits: No
LLM Response: The paper mentions 'cross validation accuracy (with 30% test data)' for the synthetic data, which implies a 70/30 train/test split, but it does not specify a distinct validation split with percentages or counts.
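Since only '30% test data' is stated, any replication would have to assume the split mechanics. A minimal sketch of one plausible 70/30 holdout follows, assuming a uniform random split and an arbitrary seed; neither is specified in the paper.

```python
import numpy as np

def holdout_split(X, y, test_fraction=0.3, seed=0):
    """Random holdout split; the seed and shuffling are assumptions,
    since the paper does not specify how the 30% test set was drawn."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(round(test_fraction * len(X)))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]
```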
Hardware Specification: No
LLM Response: The paper does not specify any hardware details, such as the GPU or CPU models used to run the experiments.
Software Dependencies: No
LLM Response: The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.x', 'PyTorch 1.x', or specific library versions).
Experiment Setup: No
LLM Response: The paper describes aspects of the experiments, such as data-generation parameters and tree-traversal logic, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or the system-level training settings needed for replication.