Unitary Branching Programs: Learnability and Lower Bounds

Authors: Fidel Ernesto Diaz Andino, Maria Kokkou, Mateus De Oliveira Oliveira, Farhad Vadiee

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we describe experiments designed to evaluate the performance of our algorithm in the task of learning unitary branching programs consistent with a given dataset. For the purpose of these experiments, we focus on read-once branching programs, where the bits of the input are read from left to right."
Researcher Affiliation | Academia | "1University of São Paulo, São Paulo, Brazil; 2Chalmers University of Technology, Gothenburg, Sweden; 3University of Bergen, Bergen, Norway."
Pseudocode | Yes | "Algorithm 1: Sliding Window Unitary Learning."
Open Source Code | Yes | "We have implemented our algorithm in a tool called LUBP (Learning Unitary Branching Programs), which can be downloaded at https://github.com/AutoProving/LUBP."
Open Datasets | No | "For the purpose of these experiments, we focus on read-once branching programs, where the bits of the input are read from left to right. More precisely, for each n in the set {16, 32, 64, 128, 256, 512, 1024}, we generated 10 datasets at random, each containing two classes, and each class containing n strings from {0,1}^n." The paper describes how the datasets were generated, but it does not provide concrete access (link, DOI, repository, or formal citation for a public dataset) to these generated datasets.
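The dataset generation quoted above is straightforward to reproduce in outline. The sketch below, written in Python rather than the paper's C++, samples datasets of the described shape (10 datasets per n, two classes each, each class holding n uniformly random strings from {0,1}^n); the function name, signature, and seeding are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def generate_datasets(n, num_datasets=10, num_classes=2, seed=0):
    # Illustrative sketch, not the paper's code: each dataset contains
    # `num_classes` classes, each class holding n uniformly random
    # binary strings of length n, stored as rows of a {0,1}-matrix.
    rng = np.random.default_rng(seed)
    return [
        [rng.integers(0, 2, size=(n, n), dtype=np.int8)
         for _ in range(num_classes)]
        for _ in range(num_datasets)
    ]

datasets = generate_datasets(16)
```

Such a generator makes the experimental inputs reproducible from a seed even though the original generated datasets were not published.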
Dataset Splits | No | The paper describes generating n-datasets and measuring DD(B, D) (a discrete distance), which appears to be computed over the entire dataset. It does not specify how these datasets were split into training, validation, or test sets for reproducibility.
Hardware Specification | Yes | "Each instance was executed on a machine with a CPU of type Intel Xeon Gold 6138 @ 2.0 GHz."
Software Dependencies | No | The paper mentions that the tool was implemented in C++ using ROPTLIB and that Manopt can be used, but it does not specify version numbers for these software dependencies or for the programming languages.
Experiment Setup | Yes | "In Algorithm 1, we depict pseudo-code of our algorithm. In the pseudo-code, for given positive integers k and r, we let Sample(U(k)^r) be the function that samples a tuple of r k×k unitary matrices at random. We let OptWindow(B, D, L, R, p, w) be the function that applies the Riemannian gradient descent method to optimize the matrices corresponding to instructions I_p, ..., I_{p+w-1}. ... We start by sampling all matrices of the branching program, including the instruction matrices and class-representative matrices, at random. Subsequently, we make a round of instruction optimizations. ... We iterate this process until the target accuracy has been achieved, or until time is up. Data: an n-input dataset D of arity s and class-size c; an index sequence J; a dimension parameter k; a window-size parameter w; an accuracy parameter α; a maximum number of iterations Max."
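The excerpt above outlines the algorithm's control flow: sample all matrices at random, then repeatedly optimize a sliding window of instructions until the target accuracy α is reached or the iteration budget Max runs out. A minimal Python sketch of that skeleton, under stated assumptions, is given below: `sample_unitaries` plays the role of Sample(U(k)^r) via QR-based Haar sampling, while the Riemannian window optimization (the paper's OptWindow, implemented in C++ with ROPTLIB) and the accuracy check are left as caller-supplied stand-ins. All names and signatures here are illustrative, not from the paper.

```python
import numpy as np

def sample_unitaries(r, k, rng):
    # Stand-in for Sample(U(k)^r): draw r Haar-random k x k unitaries by
    # QR-decomposing a complex Gaussian matrix and fixing the phases of
    # R's diagonal so that Q is Haar-distributed.
    out = []
    for _ in range(r):
        z = (rng.standard_normal((k, k))
             + 1j * rng.standard_normal((k, k))) / np.sqrt(2)
        q, rr = np.linalg.qr(z)
        d = np.diagonal(rr)
        out.append(q * (d / np.abs(d)))  # column-wise phase correction
    return out

def sliding_window_learning(dataset, n, k, w, alpha, max_iters,
                            opt_window, accuracy, seed=0):
    # Skeleton of the loop described above. `opt_window` stands in for
    # OptWindow (Riemannian gradient descent over instructions
    # I_p, ..., I_{p+w-1}) and `accuracy` for the dataset accuracy check;
    # both are caller-supplied because the paper's tool implements them
    # in C++ on unitary manifolds.
    rng = np.random.default_rng(seed)
    instructions = sample_unitaries(n, k, rng)  # one instruction per input bit
    for _ in range(max_iters):
        for p in range(n - w + 1):  # slide the window left to right
            opt_window(instructions, dataset, p, w)
        if accuracy(instructions, dataset) >= alpha:
            break
    return instructions
```

The window-at-a-time structure is what keeps each optimization subproblem small: only w of the n instruction matrices move at once, while the rest stay fixed.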