Count-Based Frequency Estimation with Bounded Memory

Authors: Marc G. Bellemare

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the usefulness of our algorithms with empirical results on two sequential, large-alphabet prediction problems." (Section 5, Empirical Evaluation): "The Budget SAD is especially motivated by the desire to minimize memory usage in the presence of infrequent symbols. Here we present two experiments showing the practical value of our approach."
Researcher Affiliation | Industry | Marc G. Bellemare, Google DeepMind, London, United Kingdom (bellemare@google.com)
Pseudocode | Yes |

Algorithm 1: K-Distinct Reservoir Sampling.
Initially: S = SAMPLE(K, x_{1:n})
for t = 1 ... n do
  Observe x_t
  Draw r ~ U({0, ..., t-1})
  if r > d_{K+1} then do nothing
  else if x_t ∉ Y(S) then INSERT(S, x_t, r)
  else if I(x_t) < J(r) then d_{J(r)-1} ← d_{J(r)-1} + 1
  else MERGE(S, x_t) and INSERT(S, x_t, r)
Output Y(S)

Algorithm 2: Budget SAD.
Initially: U = ∅, z = 0
for t = 1 ... n do
  Predict x_t according to Equation 2
  Observe x_t
  Update U as per Algorithm 1
  if x_t is new in U then c_{I(x_t)} ← 1
  else if x_t ∈ Y(U) then c_{I(x_t)} ← c_{I(x_t)} + 1
  else z ← z + 1  // discard x_t
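The count-update loop of Algorithm 2 (Budget SAD) can be illustrated with a minimal Python sketch. Note the hedge: the paper admits symbols into the tracked set U via the K-distinct reservoir of Algorithm 1, whereas this sketch substitutes a simple first-come size cap (`budget`), so it only demonstrates the count/discard bookkeeping, not the paper's admission rule. The function name and interface are illustrative, not from the paper.

```python
def budget_counts(stream, budget):
    """Track per-symbol counts for at most `budget` distinct symbols.

    Returns (counts, discarded), where `discarded` plays the role of z in
    Algorithm 2: the number of occurrences of symbols not tracked in U.
    A first-come cap stands in for the paper's K-distinct reservoir.
    """
    counts = {}     # c_i for symbols currently in U
    discarded = 0   # z: occurrences falling outside U
    for x in stream:
        if x in counts:
            counts[x] += 1        # symbol already tracked: increment
        elif len(counts) < budget:
            counts[x] = 1         # room left: start tracking the symbol
        else:
            discarded += 1        # over budget: discard this occurrence
    return counts, discarded
```

For example, `budget_counts("aababc", 2)` tracks `a` and `b` and discards the single occurrence of `c`.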
Open Source Code | No | The paper does not provide an explicit statement about releasing the authors' own code for the described methodology, nor does it provide a link to such code.
Open Datasets | Yes | "We consider the Wikipedia Clean Text data set [Mahoney, 2011], which we use as our sequential prediction task here."
Dataset Splits | No | The paper does not provide specific dataset split information, such as exact percentages, sample counts, or citations to predefined splits, for reproducibility.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions implementation and use of other tools but does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | "We set K_t = R log(t+1). We performed experiments with R ∈ [1.0, 2.0] (when p = 0.1) and R ∈ [0.5, 10.0] (when p = 0.9)... For each node u, its reservoir size was increased as R log(t_u + 1), with t_u the number of visits to this node. We set the depth parameter to D = 9 as larger values of D did not affect redundancy..."
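The quoted setup grows the reservoir logarithmically with the number of observations. A small sketch of that schedule (assuming natural log and integer truncation, which the paper does not spell out):

```python
import math

def reservoir_size(t, R):
    """Reservoir budget K_t = R * log(t + 1), as in the quoted setup.

    Assumes natural logarithm and floor-to-int, with a minimum size of 1;
    both choices are assumptions, not stated in the paper.
    """
    return max(1, int(R * math.log(t + 1)))
```

With R = 2.0, the budget grows from 1 at t = 0 to `reservoir_size(1000, 2.0)` → 13, illustrating how memory use stays small even over long streams.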