Estimating Stochastic Linear Combination of Non-Linear Regressions

Authors: Di Wang, Xiangyu Guo, Chaowen Guan, Shi Li, Jinhui Xu (pp. 6137-6144)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments for both cases support our theoretical results. Each of Figures 1-3 illustrates the result for a single type of link function. We can see that the relative error decreases steadily as the sample size n grows, which is due to the O(1/n) convergence rate, as our theorem states.
Researcher Affiliation | Academia | Di Wang, Xiangyu Guo, Chaowen Guan, Shi Li, Jinhui Xu; Department of Computer Science and Engineering, State University of New York at Buffalo, Buffalo, NY 14260
Pseudocode | Yes | Algorithm 1 (SLS: Scaled Least Squared Estimators). A hedged sketch of the generic scaled-least-squares recipe appears after this table.
Open Source Code | Yes | The source code of the experiments can be found at github.com/anonymizepaper/SLSE.
Open Datasets | No | The paper uses synthetic data generated by sampling coefficients, noise, and covariates from specified distributions (e.g., standard Gaussian, N(1, 16 I_d), uniform). It does not provide concrete access information (link, DOI, or formal citation) for a publicly available dataset.
Dataset Splits | No | The paper does not explicitly specify training, validation, or test dataset splits (e.g., percentages or sample counts); it only mentions varying the total sample size n and the sub-sample size |S|.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory, or cloud instances) used to run the experiments; it only discusses synthetic data and time complexity.
Software Dependencies | No | The paper does not mention specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) that would be needed to reproduce the experiments.
Experiment Setup | Yes | We sample all coefficients z_{i,j} and noise ε i.i.d. from the standard Gaussian distribution N(0, 1) across each experiment. Each β_j is generated by sampling from N(1, 16 I_d). We consider two distributions for generating x: Gaussian and uniform (the latter corresponds to the sub-Gaussian case). ... x ~ N(0, (1/p) I_p), while in the sub-Gaussian case x is sampled from a uniform distribution, i.e., x ~ U([-1/p, 1/p])^p. In the first part we vary n from 100,000 to 500,000 with fixed p = 20 and |S| = n, while in the second part we vary |S| from 0.01n to n, with fixed n = 500,000 and p = 20. For each experiment, ... we will use the (maximal) relative error as the measurement. A hedged data-generation sketch appears after this table.
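
The Experiment Setup row is concrete enough to sketch in code. The snippet below is a minimal NumPy reconstruction of that data-generation recipe and of one natural reading of the "(maximal) relative error" metric, namely max_j ||β̂_j - β_j|| / ||β_j||. Assumptions not stated in the excerpt above: the model form y_i = Σ_j z_{i,j} f_j(⟨β_j, x_i⟩) + ε_i (implied by the paper's title but not quoted here), the number of regressions m, the tanh link functions, and the exact range of the uniform design.

```python
import numpy as np

def generate_data(n, p, m, links, rng, design="gaussian"):
    """Synthetic data following the quoted setup, under the assumed model
    y_i = sum_j z_{i,j} * f_j(<beta_j, x_i>) + eps_i."""
    betas = rng.normal(loc=1.0, scale=4.0, size=(m, p))      # each beta_j ~ N(1, 16 I)
    z = rng.normal(size=(n, m))                              # coefficients z_{i,j} ~ N(0, 1)
    eps = rng.normal(size=n)                                 # noise eps_i ~ N(0, 1)
    if design == "gaussian":
        x = rng.normal(scale=np.sqrt(1.0 / p), size=(n, p))  # x ~ N(0, (1/p) I_p)
    else:
        x = rng.uniform(-1.0 / p, 1.0 / p, size=(n, p))      # sub-Gaussian case: x ~ U([-1/p, 1/p])^p
    inner = x @ betas.T                                      # (n, m) inner products <beta_j, x_i>
    fvals = np.column_stack([f(inner[:, j]) for j, f in enumerate(links)])
    y = np.sum(z * fvals, axis=1) + eps
    return x, z, y, betas

def max_relative_error(beta_hat, beta_true):
    """Maximal relative l2 error over the m regressions:
    max_j ||beta_hat_j - beta_j|| / ||beta_j||."""
    return np.max(np.linalg.norm(beta_hat - beta_true, axis=1)
                  / np.linalg.norm(beta_true, axis=1))

# Illustrative run (the paper varies n from 100,000 to 500,000 with p = 20).
rng = np.random.default_rng(0)
links = [np.tanh] * 3                                        # placeholder link functions f_j
x, z, y, betas = generate_data(n=100_000, p=20, m=3, links=links, rng=rng)
# Stand-in "estimates", only to show how the metric is evaluated.
print(max_relative_error(betas + 0.05 * rng.normal(size=betas.shape), betas))
```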
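
The Pseudocode row only names Algorithm 1; its steps are not quoted here. For orientation, the following is a minimal sketch of the classical scaled-least-squares recipe for a single non-linear regression y ≈ f(⟨β, x⟩) + ε: an ordinary least squares fit followed by a one-dimensional scale correction. It is not the paper's Algorithm 1, which handles several regressions mixed by stochastic coefficients z_{i,j}; the tanh example, the bisection bracket, and the way the sub-sample S enters are all illustrative assumptions.

```python
import numpy as np

def sls_sketch(x, y, f_prime, subsample=None, rng=None):
    """Generic scaled-least-squares recipe for y ~ f(<beta, x>) + eps:
      (1) ordinary least squares, then
      (2) rescale by the scalar c solving c * mean_i f'(c * <x_i, beta_ols>) = 1.
    Illustration only; the paper's Algorithm 1 may use the sub-sample differently."""
    n, _ = x.shape
    # Step 1: OLS fit on the full sample.
    beta_ols, *_ = np.linalg.lstsq(x, y, rcond=None)
    # Step 2: estimate the scale correction, optionally on a random
    # sub-sample S (an assumption about how |S| is used).
    if subsample is not None:
        rng = rng or np.random.default_rng()
        idx = rng.choice(n, size=subsample, replace=False)
        u = x[idx] @ beta_ols
    else:
        u = x @ beta_ols

    def g(c):
        return c * np.mean(f_prime(c * u)) - 1.0

    # Bisection for the root of g; assumes the root lies in (1e-3, 100)
    # and that g changes sign exactly once on this bracket.
    lo, hi = 1e-3, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * beta_ols

# Toy single-index check (independent of the paper's exact setup).
rng = np.random.default_rng(1)
n, p = 50_000, 20
x = rng.normal(scale=np.sqrt(1.0 / p), size=(n, p))
beta_true = rng.normal(size=p)                  # <beta, x> has roughly unit variance
y = np.tanh(x @ beta_true) + 0.1 * rng.normal(size=n)
beta_hat = sls_sketch(x, y, f_prime=lambda t: 1.0 - np.tanh(t) ** 2, subsample=10_000)
print(np.linalg.norm(beta_hat - beta_true) / np.linalg.norm(beta_true))
```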