Constrained Stochastic Nonconvex Optimization with State-dependent Markov Data

Authors: Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi

NeurIPS 2022

Each reproducibility variable below is listed with its assessed result and the LLM response (supporting excerpts from the paper, or an explanation where no evidence was found).

Research Type: Experimental
LLM Response: "We also empirically demonstrate the performance of our algorithm on the problem of strategic classification with neural networks." "We empirically show the performance of the stochastic conditional gradient algorithm on a strategic classification problem in Section 4.1." "In this section we illustrate our algorithm on the strategic classification problem as described in Section 1.1 with the Give Me Some Credit dataset."

Researcher Affiliation: Academia
LLM Response: Abhishek Roy (abroy@ucdavis.edu), Halıcıoğlu Data Science Institute, University of California, San Diego; work done while affiliated with the Department of Statistics, UC Davis. Krishnakumar Balasubramanian (kbala@ucdavis.edu), Department of Statistics, University of California, Davis. Saeed Ghadimi (sghadimi@uwaterloo.ca), Department of Management Sciences, University of Waterloo.

Pseudocode: Yes
LLM Response: "Algorithm 1 Inexact Averaged Stochastic Approximation (I-ASA)" ... "Algorithm 2 Inexact Conditional Gradient (ICG)"

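The paper states its two routines only as pseudocode. As a rough illustration of the kind of update Algorithm 2 performs, here is a minimal sketch of one conditional-gradient (Frank-Wolfe) step, assuming an l2-ball constraint set with a closed-form linear minimization oracle; the function name, the constraint choice, and the step-size handling are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def conditional_gradient_step(x, grad_est, radius, gamma):
    """One conditional-gradient (Frank-Wolfe) update over the l2 ball
    {v : ||v||_2 <= radius}. grad_est may be an inexact (noisy or
    biased) gradient estimate, as in the paper's Markov-data setting;
    gamma in (0, 1] is the step size."""
    norm = np.linalg.norm(grad_est)
    # Linear minimization oracle: argmin_{||v|| <= radius} <grad_est, v>.
    v = -radius * grad_est / norm if norm > 0 else np.zeros_like(x)
    # A convex combination keeps the iterate inside the constraint set.
    return (1.0 - gamma) * x + gamma * v
```

With a classic schedule such as gamma = 2 / (k + 2) at iteration k this reduces to the textbook Frank-Wolfe recursion; the paper's own parameter choices are given in its (19) and Theorem 3.1.
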
Open Source Code: Yes
LLM Response: From the NeurIPS checklist: "1. For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]" ... "3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]"

Open Datasets: Yes
LLM Response: "In this section we illustrate our algorithm on the strategic classification problem as described in Section 1.1 with the Give Me Some Credit dataset. The main task is a credit score classification problem where the bank (learner) has to decide whether a loan should be granted to a client." The dataset is available at https://www.kaggle.com/c/GiveMeSomeCredit/data.

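For reference, loading the Kaggle data and drawing the 2000-sample subset the paper mentions might look like the sketch below; the file name cs-training.csv and the label column SeriousDlqin2yrs follow the competition's standard download, while the seed, NaN handling, and sampling scheme are our assumptions, not the authors' procedure.

```python
import pandas as pd

# "Give Me Some Credit" training file from the Kaggle competition page;
# the first CSV column is a row index in the standard download.
df = pd.read_csv("cs-training.csv", index_col=0).dropna()

# The paper works with a subset of 2000 samples; this sampling scheme
# and seed are illustrative only.
subset = df.sample(n=2000, random_state=0)
y = subset["SeriousDlqin2yrs"].to_numpy()                 # binary default label
X = subset.drop(columns=["SeriousDlqin2yrs"]).to_numpy()  # numeric features
```
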
Dataset Splits: No
LLM Response: The paper mentions selecting a subset of 2000 samples but does not specify how the dataset was split into training, validation, and test sets (e.g., percentages, exact counts, or a cross-validation scheme).

Hardware Specification: No
LLM Response: "We did not calculate the exact timings. However, our experiments are fairly small-scale ones run on a personal laptop computer, and our main contributions are theoretical."

Software Dependencies: No
LLM Response: The paper does not list software dependencies or version numbers (e.g., the Python version, library versions for PyTorch or TensorFlow, or specific solvers and their versions).

Experiment Setup: Yes
LLM Response: "For this experiment we set n1 = 200. Similar to [LW22], we set α = 0.5λ, and λ = 0.01. For the classifier, the activation function is chosen as sigmoidal, and m = 400. We set N = 20000, and R = 4000. All the parameters of Algorithm 1 are chosen as described in (19)." "For this experiment we choose d1 = 10, d2 = 20, υ = 0.1, and N = 2000. Rest of the parameters of Algorithm 1 are chosen according to Theorem 3.1."

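Collected for convenience, the reported hyperparameters for the two experiments can be written as the following configuration; the symbols mirror the quotes above, while the dictionary names, grouping, and comments are ours.

```python
# Strategic classification on Give Me Some Credit (Section 4.1 of the paper).
strategic_classification = {
    "n1": 200,
    "lambda": 0.01,
    "alpha": 0.5 * 0.01,  # alpha = 0.5 * lambda, following [LW22]
    "m": 400,             # network width; sigmoidal activation
    "N": 20000,
    "R": 4000,
    # Remaining Algorithm 1 parameters: as described in the paper's (19).
}

# Second reported experiment.
second_experiment = {
    "d1": 10,
    "d2": 20,
    "upsilon": 0.1,  # υ in the paper
    "N": 2000,
    # Remaining Algorithm 1 parameters: chosen per Theorem 3.1.
}
```
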