Compact Autoregressive Network

Authors: Di Wang, Feiqing Huang, Jingyu Zhao, Guodong Li, Guangjian Tian

AAAI 2020, pp. 6145-6152 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on synthetic and real-world datasets demonstrate the promising performance of the proposed compact network. This section first performs analysis on two synthetic datasets to verify the sample complexity established in Theorems 1-3 and to demonstrate the capability of TAR nets in nonlinear functional approximation. Three real datasets are then analyzed by the TAR-2 and TAR nets, together with their linear counterparts.
Researcher Affiliation | Collaboration | Di Wang¹, Feiqing Huang¹, Jingyu Zhao¹, Guodong Li¹, Guangjian Tian²; ¹Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China; ²Huawei Noah's Ark Lab, Hong Kong, China
Pseudocode | No | The paper describes the network architecture and mathematical formulations but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about open-sourcing code or links to a code repository.
Open Datasets | Yes | We use three publicly available datasets. 1. USME dataset: It contains 40 US macroeconomic variables provided in Koop (2013)... 2. GHG dataset: We retrieve a partial greenhouse gas (GHG) concentration dataset (Lucas et al. 2015) from the UCI repository... 3. Traffic dataset: The data record the hourly road occupancy rate (between 0 and 1) on the San Francisco Bay freeway (Lai et al. 2018)...
Dataset Splits | No | The paper describes training and testing splits but does not explicitly define a separate validation split or explain how one was used in the reported experiments. (A chronological train/test split sketch is given below the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud resources) used for running the experiments.
Software Dependencies | No | The paper mentions PyTorch but does not specify a version number or other software dependencies with their versions.
Experiment Setup | Yes | The gradient descent method is employed for the optimization, with learning rate and momentum being 0.01 and 0.9, respectively. If the loss function drops by less than 10⁻⁸, the procedure is then deemed to have reached convergence. For the USME dataset, the parameters are chosen as (P, r1, r2, r3) = (4, 4, 3, 2). For the GHG dataset, the parameters are set as (P, r1, r2, r3) = (20, 2, 4, 5)... For the Traffic dataset, the parameters are set as (P, r1, r2, r3) = (30, 2, 6, 8)... (A minimal training-loop sketch follows the table.)
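
Since the paper's exact split protocol is not reported here, the following is a minimal sketch of the usual convention for autoregressive forecasting: build lagged samples from a multivariate series and split them chronologically. The 90/10 ratio, the helper name make_lagged, and all shapes are illustrative assumptions, not the paper's stated procedure.

    import numpy as np

    def make_lagged(series, P):
        """Build AR samples: predict series[t] from series[t-1], ..., series[t-P].

        series : (T, N) multivariate time series
        returns X : (T-P, N, P) lagged predictors, Y : (T-P, N) targets
        """
        T, N = series.shape
        X = np.stack([series[P - p - 1 : T - p - 1] for p in range(P)], axis=-1)
        Y = series[P:]
        return X, Y

    # Toy series standing in for a real dataset; P = 4 mirrors the USME setting.
    series = np.random.randn(500, 40)
    X, Y = make_lagged(series, P=4)

    # Chronological split: earlier samples train, later samples test.
    split = int(0.9 * len(Y))  # 90/10 is an assumed ratio
    X_train, Y_train = X[:split], Y[:split]
    X_test, Y_test = X[split:], Y[split:]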
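
The quoted experiment setup maps onto a short PyTorch training loop. The sketch below assumes a Tucker-style low-rank factorization of the N × N × P autoregressive coefficient tensor with ranks (r1, r2, r3), in the spirit of the paper's compact parameterization; the toy data, initialization scale, and dimensions are assumptions, while the SGD hyperparameters (learning rate 0.01, momentum 0.9) and the 10⁻⁸ convergence rule come from the quoted setup.

    import torch

    N, P = 40, 4            # assumed USME-like dimensions: 40 series, 4 lags
    r1, r2, r3 = 4, 3, 2    # Tucker ranks from the USME configuration

    # Tucker factors of the N x N x P coefficient tensor A.
    G  = torch.nn.Parameter(0.1 * torch.randn(r1, r2, r3))  # core tensor
    U1 = torch.nn.Parameter(0.1 * torch.randn(N, r1))       # response-mode factor
    U2 = torch.nn.Parameter(0.1 * torch.randn(N, r2))       # predictor-mode factor
    U3 = torch.nn.Parameter(0.1 * torch.randn(P, r3))       # lag-mode factor

    def forecast(X):
        # X: (batch, N, P) lagged observations -> (batch, N) one-step forecasts.
        A = torch.einsum('abc,ia,jb,pc->ijp', G, U1, U2, U3)  # reconstruct A
        return torch.einsum('ijp,bjp->bi', A, X)

    opt = torch.optim.SGD([G, U1, U2, U3], lr=0.01, momentum=0.9)

    X = torch.randn(256, N, P)  # toy data standing in for the real series
    Y = torch.randn(256, N)
    prev_loss = float('inf')
    for step in range(100_000):
        opt.zero_grad()
        loss = torch.mean((forecast(X) - Y) ** 2)
        loss.backward()
        opt.step()
        if prev_loss - loss.item() < 1e-8:  # convergence rule from the paper
            break
        prev_loss = loss.item()

With this parameterization the model stores r1·r2·r3 + N·r1 + N·r2 + P·r3 weights instead of the N²·P entries of a full coefficient tensor, which is the compactness the paper targets.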