Statistical Learning and Inverse Problems: A Stochastic Gradient Approach

Authors: Yuri Fonseca, Yuri Saporito

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, in Section 5 we provide numerical examples and a real data application for a Functional Linear Regression problem (FLR). In this section, we provide two applications of the Functional Linear Regression problem. We first demonstrate the performance of both algorithms on simulated data, and next we provide an example for generalized linear models, applied to a classification problem using bitcoin transaction data."
Researcher Affiliation | Academia | Yuri R. Fonseca, Decision, Risk and Operations, Columbia University, New York, NY (yfonseca23@gsb.columbia.edu); Yuri F. Saporito, School of Applied Mathematics, Getulio Vargas Foundation, Rio de Janeiro, Brazil (yuri.saporito@fgv.br)
Pseudocode | Yes | Algorithm 1 (SGD-SIP): input: sample {(x_i, y_i)}_{i=1}^n, operator A, initial guess f_0; output: f̂_n. Algorithm 2 (ML-SGD): input: sample {(x_i, y_i)}_{i=1}^n, discretization {w_j}_{j=1}^{n_w} of W, operator A, initial guess f_0; output: f̂_n.
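For intuition, a stochastic gradient iteration for a linear inverse problem of this kind can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes a discretized operator A (a matrix), observations y_i ~ <x_i, A f>, a squared-error loss, and the O(1/i) step sizes mentioned in the paper; the function name and the 0.1 step constant are illustrative choices.

```python
import numpy as np

def sgd_sip(A, x_samples, y_samples, f0, step=lambda i: 0.1 / (i + 1)):
    """One-pass SGD sketch for a linear inverse problem where each
    observation satisfies y_i ~ <x_i, A f>.  A is a discretized
    operator (matrix); the loss is 0.5 * residual**2 per sample."""
    f = f0.astype(float).copy()
    for i, (x, y) in enumerate(zip(x_samples, y_samples)):
        residual = x @ (A @ f) - y   # prediction error on sample i
        grad = A.T @ x * residual    # gradient of the per-sample loss in f
        f -= step(i) * grad          # O(1/i) step size, as in the paper
    return f
```

With enough samples, the one-pass estimate moves from the initial guess toward the true parameter even though each step uses a single noisy observation.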
Open Source Code | No | The paper mentions making the dataset available online and using the 'refund' R package (a third-party tool), but does not provide an explicit statement or link for the source code of the methodology described in the paper.
Open Datasets | Yes | "The data set contains 3000 bitcoin addresses spanning from April 2011 to April 2017 and their respective cumulative credit... We refer the reader to Appendix A for more information about the data set used, which we make available online."
Dataset Splits | Yes | "In Table 2 we provide 3-fold cross-validation for the accuracy and kappa metrics."
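For reference, the two metrics reported in that table can be computed over a 3-fold split as sketched below. This is a generic illustration with a hand-rolled split and classifier, not the paper's pipeline; the helper names are made up for this example.

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement beyond chance, (p_o - p_e) / (1 - p_e)."""
    labels = np.unique(np.concatenate([y_true, y_pred]))
    p_o = np.mean(y_true == y_pred)  # observed accuracy
    p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)
    return (p_o - p_e) / (1 - p_e)

def three_fold_scores(X, y, fit_predict):
    """3-fold CV; returns a list of (accuracy, kappa) pairs, one per fold."""
    folds = np.array_split(np.arange(len(y)), 3)
    scores = []
    for k in range(3):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(3) if j != k])
        pred = fit_predict(X[train], y[train], X[test])
        scores.append((np.mean(pred == y[test]),
                       cohens_kappa(y[test], pred)))
    return scores
```

Kappa complements raw accuracy by discounting the agreement a classifier would reach by chance, which matters for imbalanced labels such as transaction classes.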
Hardware Specification | No | The paper does not provide hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper mentions using the 'refund' package available in R and cites Goldsmith et al. [2021], but it specifies version numbers neither for R nor for the 'refund' package, and it does not list other key software components.
Experiment Setup | Yes | "Specifically, we set W = [0, 1], f(z) = sin(4πz), and X simulated according to a Brownian motion on [0, 1]. We also consider a noise-signal ratio of 0.2. We generate 100 samples of X and Y, with the integral defining the operator A approximated by a finite sum of 1000 points in [0, 1]... For the ML-SGD algorithm, we used smoothing splines and regression trees as base learners... The step sizes are taken to be of the form O(1/i), where i = 1, ..., n is the current step of the algorithm and n is the total number of steps/samples."
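The simulated-data setup quoted above can be reproduced with a few lines of NumPy. This sketch follows the stated design (100 Brownian-motion covariates on a 1000-point grid over [0, 1], f(z) = sin(4πz), Riemann-sum approximation of the integral operator); the interpretation of the 0.2 noise-signal ratio as noise standard deviation equal to 0.2 times the signal standard deviation is an assumption, since the paper may define the ratio differently.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 1000                     # 100 samples, 1000-point grid on [0, 1]
grid = np.linspace(0.0, 1.0, m)
dz = grid[1] - grid[0]
f_true = np.sin(4 * np.pi * grid)    # f(z) = sin(4*pi*z)

# Brownian motion paths: cumulative sums of Gaussian increments
X = np.cumsum(rng.normal(0.0, np.sqrt(dz), size=(n, m)), axis=1)

# (Af)(X_i) = integral of X_i(z) f(z) dz, approximated by a Riemann sum
signal = X @ f_true * dz

# noise sd = 0.2 * signal sd (one reading of "noise-signal ratio of 0.2")
noise_sd = 0.2 * np.std(signal)
y = signal + rng.normal(0.0, noise_sd, size=n)
```

The resulting pairs (X_i, y_i) are the inputs an SGD-style estimator would consume one at a time.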