Learning with Feature and Distribution Evolvable Streams
Authors: Zhen-Yu Zhang, Peng Zhao, Yuan Jiang, Zhi-Hua Zhou
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies on synthetic data verify the rationale of our proposed discrepancy measure, and extensive experiments on real-world tasks validate the effectiveness of our algorithm. |
| Researcher Affiliation | Academia | 1National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. Correspondence to: Yuan Jiang <jiangy@lamda.nju.edu.cn>. |
| Pseudocode | No | The paper describes the algorithm steps and framework but does not include a formal pseudocode block or algorithm listing. |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of its source code. |
| Open Datasets | Yes | RFID Dataset (Hou et al., 2017) consists of real-time data streams collected by the RFID technique. Amazon Dataset (McAuley et al., 2015) contains the product's quality (label) from 2006 to 2008 according to the ratings of its users (feature). Reuters multilingual dataset (Amini et al., 2009) contains about 11K articles from 6 classes in 5 languages, so that we can simulate the evolving stream by various languages. |
| Dataset Splits | No | The paper describes how evolving data is generated (e.g., '20% evolving data in each mini-batch') and how data is categorized or split for task generation, but it does not specify explicit train/validation/test dataset splits with percentages or sample counts. (See the mini-batch sketch after the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions general software components like 'SGD', 'cross-entropy loss', 'MLP', and 'ReLU', but it does not provide specific version numbers for any libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | For implementations of the EDM algorithm, we set the main classifiers (min-player) and auxiliary classifiers (max-player) in the adversarial network as two 5-layer MLPs with ReLU as activation functions. The model is trained by SGD with a learning rate of 0.004 and regularization weight decay 0.005. (A hedged sketch of this configuration appears below the table.) |
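
The Experiment Setup row quotes two 5-layer ReLU MLPs trained by SGD with learning rate 0.004, weight decay 0.005, and (per the Software Dependencies row) a cross-entropy loss. Below is a minimal PyTorch sketch of that configuration only: the hidden width, input/output dimensions, and the reading of "5-layer" as five linear layers are assumptions, and the EDM min-max training objective itself is not reproduced here.

```python
# Hypothetical sketch of the model/optimizer configuration quoted in the
# "Experiment Setup" row. Hidden width and input/output sizes are
# placeholders; the paper excerpt does not report them.
import torch
import torch.nn as nn


def make_mlp(in_dim: int, out_dim: int, hidden: int = 64) -> nn.Sequential:
    """A 5-layer MLP with ReLU activations (read here as 5 linear layers)."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


in_dim, n_classes = 100, 6                  # placeholder dimensions
main_clf = make_mlp(in_dim, n_classes)      # min-player in the adversarial network
aux_clf = make_mlp(in_dim, n_classes)       # max-player in the adversarial network

# Optimizer settings quoted from the paper: SGD, lr 0.004, weight decay 0.005.
optimizer = torch.optim.SGD(
    list(main_clf.parameters()) + list(aux_clf.parameters()),
    lr=0.004,
    weight_decay=0.005,
)
criterion = nn.CrossEntropyLoss()           # loss named in the paper

# Forward pass of the min-player on a dummy batch, just to show shapes.
x = torch.randn(32, in_dim)
logits = main_clf(x)
```

The Dataset Splits row quotes that each mini-batch contains 20% evolving data. The sketch below illustrates one plausible way to compose such mixed mini-batches; every name, pool size, and feature dimension is a placeholder, not the paper's actual protocol.

```python
# Hypothetical illustration of the "20% evolving data in each mini-batch"
# protocol quoted in the "Dataset Splits" row. All shapes and pool sizes
# are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)


def mixed_minibatch(old_pool, new_pool, batch_size=50, evolving_frac=0.2):
    """Draw a mini-batch with `evolving_frac` of the instances taken from
    the evolved (new feature space) pool and the rest from the old pool."""
    n_new = int(round(batch_size * evolving_frac))
    new_idx = rng.choice(len(new_pool), size=n_new, replace=False)
    old_idx = rng.choice(len(old_pool), size=batch_size - n_new, replace=False)
    return old_pool[old_idx], new_pool[new_idx]


# Example: 1000 instances in the old feature space, 1000 in the evolved one.
old_pool = rng.normal(size=(1000, 100))
new_pool = rng.normal(size=(1000, 120))     # evolved feature space (placeholder)
old_batch, new_batch = mixed_minibatch(old_pool, new_pool)
```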