Information Acquisition Under Resource Limitations in a Noisy Environment

Authors: Matvey Soloviev, Joseph Halpern

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our analysis of this strategy also gives us the tools to examine two other problems of interest. The first is rational inattention, the notion that in the face of limited resources it is sometimes rational to ignore certain sources of information completely. There has been a great deal of interest recently in this topic in economics (Sims 2003; Wiederholt 2010). Here we show that optimal testing strategies in our framework exhibit what can reasonably be called rational inattention (which we typically denote RI from now on). Specifically, our experiments show that for a substantial fraction of formulae, an optimal strategy will hardly ever test variables that are clearly relevant to the outcome. (Roughly speaking, "hardly ever" means that as the total number of tests goes to infinity, the fraction of tests devoted to these relevant variables goes to 0.) [A formal restatement of this limiting-frequency notion is sketched after the table.]
Researcher Affiliation | Academia | Matvey Soloviev, Computer Science Department, Cornell University, msoloviev@cs.cornell.edu; Joseph Y. Halpern, Computer Science Department, Cornell University, halpern@cs.cornell.edu
Pseudocode | No | The paper describes strategies and algorithms conceptually but does not include any formal pseudocode blocks or algorithms labeled as such.
Open Source Code | No | The paper does not provide any links to or explicit statements about the availability of open-source code for the described methodology.
Open Datasets | No | The paper discusses evaluating properties of Boolean formulae and mentions 'truth assignments' and 'variables,' but it does not refer to specific, publicly available datasets (like CIFAR-10 or MNIST) used for training or evaluation in the machine learning sense, nor does it provide access information for any generated data.
Dataset Splits | No | The paper does not mention or specify any training, validation, or test dataset splits, as it does not rely on empirical training on traditional datasets.
Hardware Specification | No | The paper does not specify any hardware used for its analysis or 'experiments,' such as specific GPU or CPU models.
Software Dependencies | No | The paper does not mention any specific software dependencies with version numbers.
Experiment Setup | No | The paper defines parameters for its theoretical model (e.g., 'k', 'alpha', 'g', 'b') but does not describe experimental setup details like hyperparameters or training configurations for a machine learning model.
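The "hardly ever" claim quoted in the Research Type row can be stated as a vanishing limiting frequency. The following is a minimal formalization sketch; the notation T_i(N), denoting the number of tests an optimal strategy devotes to a relevant variable x_i among its first N tests, is introduced here for illustration and does not appear in the paper:

% Sketch (assumed notation): x_i is a relevant variable, T_i(N) counts the
% tests of x_i among the strategy's first N tests. "Hardly ever testing x_i"
% is then the statement that the fraction of the test budget spent on x_i
% vanishes as the budget grows:
\[
  \lim_{N \to \infty} \frac{T_i(N)}{N} = 0 .
\]

Read this way, rational inattention in the paper's sense is an asymptotic property of the optimal testing strategy rather than a claim that x_i is never tested at all.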