Online Non-convex Learning in Dynamic Environments

Authors: Zhipan Xu, Lijun Zhang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we discuss the application to online constrained meta-learning and conduct experiments to verify the effectiveness of our methods."
Researcher Affiliation | Academia | "National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; School of Artificial Intelligence, Nanjing University, Nanjing, China"
Pseudocode | Yes | "Algorithm 1 Follow the Perturbed Leader (FTPL)" (a minimal FTPL sketch follows this table)
Open Source Code | Yes | "The code and data are included in supplemental material."
Open Datasets | Yes | "We use the demonstration data given by Huang et al. [2019] and set the total number of tasks T = 200."
Dataset Splits | Yes | "For each group, we randomly allocate 50% of the classes for training data, 25% for validation data, and 25% for test data." (see the class-split sketch after the table)
Hardware Specification | Yes | "All experiments are executed on a computer with a 2.50 GHz Intel Xeon Platinum 8255C CPU and an RTX 2080Ti GPU."
Software Dependencies | No | The paper mentions using the Adam optimizer and a neural network framework, but does not provide specific version numbers for software dependencies such as Python, PyTorch, or TensorFlow.
Experiment Setup | Yes | "We use the Adam optimizer [Kingma and Ba, 2014] with a learning rate of 0.001 for the optimization." (see the optimizer sketch after the table)
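
For context, the pseudocode row refers to Follow the Perturbed Leader. Below is a minimal FTPL sketch in Python; the finite decision set, exponential perturbations, and random placeholder losses are illustrative assumptions, not the paper's Algorithm 1.

    import numpy as np

    # Minimal Follow the Perturbed Leader (FTPL) sketch.
    # Assumptions (not from the paper): a finite decision set,
    # exponentially distributed perturbations, and full-information
    # losses revealed after each round.
    rng = np.random.default_rng(0)
    n_actions, T, eta = 10, 200, 1.0      # decision-set size, horizon, perturbation scale
    cum_loss = np.zeros(n_actions)        # cumulative loss of each action

    for t in range(T):
        # Perturb the cumulative losses and play the "perturbed leader".
        perturbation = rng.exponential(scale=eta, size=n_actions)
        action = int(np.argmin(cum_loss - perturbation))
        # Placeholder loss vector; in the paper this would come from
        # the (possibly non-convex) online loss at round t.
        loss_t = rng.uniform(0.0, 1.0, size=n_actions)
        cum_loss += loss_t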
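The dataset-splits row describes a class-level 50/25/25 allocation. The sketch below is a hypothetical rendering of that protocol, assuming classes are identified by integer IDs; the paper's actual grouping may differ.

    import numpy as np

    # Hypothetical class-level split: 50% of classes for training,
    # 25% for validation, and 25% for test, allocated at random.
    rng = np.random.default_rng(42)
    classes = np.arange(20)               # illustrative class IDs
    rng.shuffle(classes)

    n = len(classes)
    train_classes = classes[: n // 2]
    val_classes = classes[n // 2 : 3 * n // 4]
    test_classes = classes[3 * n // 4 :]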
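Finally, the experiment-setup row pins only the optimizer and learning rate. A minimal PyTorch sketch of that configuration follows; the model and batch are placeholders, and (per the software-dependencies row) the paper specifies no framework versions.

    import torch

    # Illustrative setup: Adam with learning rate 0.001, as quoted.
    model = torch.nn.Linear(10, 1)        # placeholder model
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

    # One dummy optimization step on random data.
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()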