Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Adaptive Lasso and group-Lasso for functional Poisson regression
Authors: Stéphane Ivanoff, Franck Picard, Vincent Rivoirard
JMLR 2016 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Simulations are used to assess the empirical performance of our procedure, and an original application to the analysis of Next Generation Sequencing data is provided. |
| Researcher Affiliation | Academia | Stéphane Ivanoff EMAIL CEREMADE UMR CNRS 7534 Université Paris Dauphine F-75775 Paris, France; Franck Picard EMAIL Laboratoire de Biométrie et Biologie Évolutive, UMR CNRS 5558 Univ. Lyon 1 F-69622 Villeurbanne, France; Vincent Rivoirard EMAIL CEREMADE UMR CNRS 7534 Université Paris Dauphine F-75775 Paris, France |
| Pseudocode | No | The paper describes methods and mathematical derivations in prose and equations (e.g., Section 1: Penalized log-likelihood estimates for Poisson regression and dictionary approach, Section 2: Weights calibration using concentration inequalities). However, it does not include any explicitly labeled pseudocode blocks or algorithms. |
| Open Source Code | Yes | More details on this function can be found in our code that is freely available¹. In this setting, log(f0) can be decomposed on the dictionary and we can assess the accuracy of selection of different methods. ... Our method was implemented using the grplasso R package of Meier et al. (2008) to which we provide our concentration-based weights (the code is fully available¹). ¹ http://pbil.univ-lyon1.fr/members/fpicard/software.html |
| Open Datasets | Yes | We also propose an original application of functional Poisson regression to the analysis of Next Generation Sequencing data, with the denoising of a Poisson intensity applied to the detection of replication origins in the human genome (see Picard et al. (2014)). |
| Dataset Splits | Yes | We also implemented the half-fold cross-validation proposed by Nason (1996) in the Poisson case to set the weights in the penalty, with the proper scaling (2^(s/2)·λ, with s the scale of the wavelet coefficients) as proposed by Sardy et al. (2004). Briefly, this cross-validation procedure is used to calibrate λ by estimating f0 on a training set made of even positions (for different values of λ), and by measuring the reconstruction error on the test set made by odd positions (see Nason (1996)). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, or memory) used to conduct the simulations or application experiments. |
| Software Dependencies | No | Our method was implemented using the grplasso R package of Meier et al. (2008) to which we provide our concentration-based weights (the code is fully available¹). We compare our Lasso procedure (Lasso.exact in the sequel) with the Haar-Fisz transform (using the haarfisz package). |
| Experiment Setup | Yes | More precisely, we take γ = 1.01. Our theoretical results do not provide any information about values smaller than 1, so we conduct an empirical study on a very simple case, namely we estimate f0 such that log(f0) = 1_[0,1] on the Haar basis, so that only one parameter should be selected. ... The signal to noise ratio was increased by multiplying the intensity functions by a factor α taking values in {1, ..., 7}, α = 7 corresponding to the most favorable configuration. Observations Yi were generated such that Yi | Xi ∼ Poisson(f0(Xi)), with f0 = α exp(g0), and (X1, ..., Xn) was set as the regular grid of length n = 2^10. Each configuration was repeated 20 times. ... When grouping strategies are considered, we set all groups at the same size. |
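The simulation setup quoted in the Experiment Setup row (a regular grid of length n = 2^10, intensities scaled by α ∈ {1, ..., 7}, Poisson observations, 20 replicates per configuration) can be sketched as below. This is a minimal illustration, not the authors' code: `g0` is a placeholder log-intensity (a simple indicator, in the spirit of the paper's Haar-basis example), since the paper's actual test signals are not reproduced here.

```python
import numpy as np

def simulate_poisson_counts(alpha, n=2**10, seed=0):
    """Draw Y_i | X_i ~ Poisson(f0(X_i)) with f0 = alpha * exp(g0),
    on the regular grid of length n, as in the setup quoted above.
    g0 is a stand-in: the indicator of [0, 1/2], not the paper's signals."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 1, n, endpoint=False)  # regular grid of length n
    g0 = (x < 0.5).astype(float)              # placeholder log-intensity
    f0 = alpha * np.exp(g0)                   # signal-to-noise scaling by alpha
    y = rng.poisson(f0)                       # Poisson observations
    return x, y, f0

# 20 replicates for each alpha in {1, ..., 7}, as in the paper's protocol
runs = {alpha: [simulate_poisson_counts(alpha, seed=r)[1] for r in range(20)]
        for alpha in range(1, 8)}
```

The estimator itself (adaptive Lasso / group-Lasso with concentration-based weights, via the grplasso R package) is deliberately omitted; only the data-generating protocol is shown.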
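The half-fold cross-validation described in the Dataset Splits row (fit on even positions, score reconstruction error on odd positions, following Nason (1996)) can be sketched as follows. `estimate(y_half, lam)` is a hypothetical fitting routine standing in for the penalized estimator; the wavelet-coefficient rescaling and interpolation between half-grids are omitted.

```python
import numpy as np

def half_fold_cv(y, lambdas, estimate):
    """Calibrate lambda via half-fold cross-validation: estimate the
    intensity on the even-indexed positions (training set) and measure
    reconstruction error on the odd-indexed positions (test set).
    `estimate(y_half, lam)` is a hypothetical estimator returning a
    fitted intensity on the half-grid."""
    y_even, y_odd = y[0::2], y[1::2]
    errors = []
    for lam in lambdas:
        f_hat = estimate(y_even, lam)                  # train on even positions
        errors.append(np.mean((f_hat - y_odd) ** 2))   # test on odd positions
    return lambdas[int(np.argmin(errors))]
```

For example, with a toy estimator that predicts the constant λ everywhere, the procedure selects the λ closest to the signal level, which is the expected behavior of the calibration loop.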