Detection and Localization of Changes in Conditional Distributions

Authors: Lizhen Nie, Dan Nicolae

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section investigates performance on synthetic data. We report representative results for different forms of X and Y and different types of changes, with additional results included in the Appendix. Baselines: we consider three baselines, one existing (the fixed-design CP method [31]) and two adapted from existing abrupt CP methods for unpaired data (denoted DXY and DY). Localization comparisons are reported in Table 1a.
Researcher Affiliation | Academia | Lizhen Nie, Department of Statistics, The University of Chicago (lizhen@uchicago.edu); Dan Nicolae, Department of Statistics, The University of Chicago (nicolae@statistics.uchicago.edu)
Pseudocode | Yes | Algorithm 1 (KCE) solves Task II (conditional expectation change); Algorithm 2 (KCD) solves Task I (conditional distribution change).
Open Source Code | Yes | "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See supplemental material."
Open Datasets | Yes | All data are downloaded from https://www.marketwatch.com/. The data we use were collected by [10], where the yields of three-month T-bills are treated as market interest rates.
Dataset Splits | No | The paper uses the parameters n0 and n1 to define a search range for the change point, but does not explicitly report train/validation/test splits with percentages or sample counts.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., PyTorch, TensorFlow, or scikit-learn with specific versions).
Experiment Setup | Yes | All bandwidths for all methods are tuned over Sh = {0.001, 0.01, 0.1, 1, 10} on 10 independently generated data sets. The setup fixes FX = N(0, 1), F0 = N(0, 1), n = 1000, a change point at 0.7n = 700, and search-range fractions n0 = 0.05 and n1 = 0.95.
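The reported setup (n = 1000, change at 0.7n = 700, search range defined by n0 and n1, bandwidth grid Sh) can be sketched in code. This is not the paper's KCD/KCE procedure: the scan below uses a generic biased squared-MMD statistic on Y alone as a stand-in illustration, and the post-change mean shift, the noise model for Y, and the function name `localize` are assumptions made for the example.

```python
import numpy as np

# Setup as reported (the fraction/subscript symbols are reconstructed from the text).
n = 1000
tau = 700                                # true change point (0.7 * n)
n0, n1 = int(0.05 * n), int(0.95 * n)    # search range for candidate change points
S_h = [0.001, 0.01, 0.1, 1, 10]          # bandwidth grid Sh from the paper

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, n)              # F_X = N(0, 1)
y = x + rng.normal(0.0, 1.0, n)          # noise from N(0, 1)
y[tau:] += 1.0                           # illustrative post-change mean shift (assumption)

def localize(y, n0, n1, h):
    """Scan candidate splits t in [n0, n1] and return the t maximizing a
    biased squared-MMD statistic between y[:t] and y[t:] (RBF kernel,
    bandwidth h).  2-D prefix sums make each candidate O(1) after the
    one-off O(n^2) kernel computation."""
    n = len(y)
    K = np.exp(-((y[:, None] - y[None, :]) ** 2) / (2 * h ** 2))
    P = np.zeros((n + 1, n + 1))
    P[1:, 1:] = K.cumsum(0).cumsum(1)    # P[i, j] = sum of K[:i, :j]
    best_t, best_stat = n0, -np.inf
    for t in range(n0, n1 + 1):
        a = P[t, t]                      # sum over K[:t, :t]
        c = P[t, n] - P[t, t]            # sum over K[:t, t:]
        b = P[n, n] - 2 * P[t, n] + P[t, t]  # sum over K[t:, t:] (K symmetric)
        stat = a / t**2 + b / (n - t)**2 - 2 * c / (t * (n - t))
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t

t_hat = localize(y, n0, n1, h=1.0)       # in practice h would be tuned over S_h
```

In this sketch a single bandwidth is used; mirroring the paper's protocol would mean repeating the run on 10 independently generated data sets for each h in S_h and keeping the best-performing value.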