Scalable and Efficient Non-adaptive Deterministic Group Testing

Authors: Dariusz Kowalski, Dominik Pajak

Venue: NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | To avoid all the abovementioned drawbacks, for Quantitative Group Testing (QGT), where the result of a query is the size of its intersection with the hidden set, we present the first efficient and scalable non-adaptive deterministic algorithms for constructing queries and decoding a hidden set K from the results of the queries; these solutions do not use any randomization, adaptiveness, or unlimited computational power. (A toy illustration of this query model follows the table.)
Researcher Affiliation | Collaboration | Dariusz R. Kowalski, School of Computer and Cyber Sciences, Augusta University, USA (dkowalski@augusta.edu); Dominik Pajak, Department of Pure Mathematics, Wroclaw University of Science and Technology, and Infermedica, Poland (dominik.pajak@pwr.edu.pl)
Pseudocode | Yes | Algorithm 1: Construction of a sequence of queries solving QGT with -capped feedback. Algorithm 2: Decoding of the elements for QGT with -capped feedback. (A brute-force stand-in for the decoding task follows the table.)
Open Source Code | No | The paper's checklist answers '[N/A]' to 'Did you include the license to the code and datasets?', with the justification 'The code and the data are proprietary.'
Open Datasets | No | The paper is theoretical and does not use datasets for empirical evaluation, so no publicly available training data is mentioned.
Dataset Splits | No | The paper is theoretical and involves no empirical validation on datasets, so no training, validation, or test splits are described.
Hardware Specification | No | The paper is theoretical and describes no experimental setup or hardware used to run experiments.
Software Dependencies | No | The paper focuses on algorithm design and proofs rather than implemented experiments; no software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is theoretical and describes no experimental setup, hyperparameters, or training configurations.
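To make the query model concrete, below is a minimal Python sketch of QGT feedback as described above: the answer to a query Q is the size of its intersection with the hidden set K. The capped variant, which truncates the answer at a threshold c, is only our reading of the paper's capped feedback; the paper's exact definition and its deterministic query construction (Algorithm 1) are not reproduced here.

def qgt_feedback(query: set, hidden: set) -> int:
    """Uncapped QGT feedback: the size of the intersection |Q ∩ K|."""
    return len(query & hidden)

def capped_feedback(query: set, hidden: set, cap: int) -> int:
    """Capped QGT feedback, assumed here to mean min(|Q ∩ K|, cap)."""
    return min(len(query & hidden), cap)

if __name__ == "__main__":
    hidden = {2, 5, 11}                       # the hidden set K, known only to the oracle
    query = {1, 2, 3, 4, 5}
    print(qgt_feedback(query, hidden))        # -> 2 (elements 2 and 5)
    print(capped_feedback(query, hidden, 1))  # -> 1 (answer truncated at the cap)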
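The decoding task of Algorithm 2 can likewise be illustrated with a brute-force stand-in: fix all queries in advance (this is what non-adaptive means), observe the feedback vector, and search for every hidden set consistent with it. The exhaustive search below is exponential in n and is emphatically not the paper's method; it only makes the problem statement, and the need for a carefully constructed query family, concrete.

from itertools import combinations

def consistent_sets(n, k, queries, feedbacks):
    """All k-subsets of {0, ..., n-1} matching the observed feedback vector."""
    return [
        set(cand)
        for cand in combinations(range(n), k)
        if all(len(q & set(cand)) == f for q, f in zip(queries, feedbacks))
    ]

if __name__ == "__main__":
    n, hidden = 8, {1, 6}
    # Non-adaptive: the queries are chosen before any feedback is observed.
    queries = [{0, 1, 2, 3}, {4, 5, 6, 7}, {0, 2, 4, 6}, {0, 1, 4, 5}]
    feedbacks = [len(q & hidden) for q in queries]
    print(consistent_sets(n, len(hidden), queries, feedbacks))
    # -> [{0, 7}, {1, 6}, {2, 5}, {3, 4}]

Note that these four ad-hoc queries leave four candidate sets, so decoding is ambiguous; ruling out such ambiguity with few, efficiently decodable queries is exactly what the paper's construction achieves.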