Improving Search with Supervised Learning in Trick-Based Card Games

Authors: Christopher Solinas, Douglas Rebstock, Michael Buro (pp. 1158-1165)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (4 experiments) | "We use two methods of measuring inference performance in this work. First, we measure the quality of our inference technique in isolation using a novel metric. Second, we show the effect of using inference in a card player by running tournaments against several baseline players."
Researcher Affiliation | Academia | Christopher Solinas, Douglas Rebstock, Michael Buro; Department of Computing Science, University of Alberta, Edmonton, Canada; {solinas,drebstoc,mburo}@ualberta.ca
Pseudocode | Yes | Algorithm 1: PIMC with state inference
Open Source Code | No | The paper neither contains an unambiguous statement that the authors are releasing the source code for the described methodology nor provides a direct link to a code repository.
Open Datasets | Yes | "The networks are trained using a total of 20 million games played by humans on a popular Skat server (DOSKV 2018)."
Dataset Splits | No | The paper mentions using a 'validation set' for early stopping but gives no dataset-split details (exact percentages, sample counts, or splitting methodology).
Hardware Specification | No | The paper does not state the specific hardware used for its experiments (GPU/CPU models, processor types, or memory amounts), only the vague term 'modern hardware'.
Software Dependencies | No | The paper mentions using 'Python Tensorflow (Abadi et al. 2016)' but does not give version numbers for these software components.
Experiment Setup | Yes | "Table 1 lists all hyperparameters used during training."

    Parameter             Value
    Dropout               0.8
    Batch Size            32
    Optimizer             ADAM
    Learning Rate (LR)    10^-4
    LR Exponential Decay  0.96 / 10,000,000 batches
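The paper's Algorithm 1 (PIMC with state inference) is only named above, not reproduced. For orientation, here is a minimal, hypothetical Python sketch of the general technique: sample determinized worlds weighted by an inferred posterior over hidden states (e.g., from a trained network) instead of uniformly, run a perfect-information evaluation of each legal move in every sampled world, and pick the move with the best aggregate score. The function names (`sample_worlds`, `infer_weight`, `pi_value`) are placeholders, not the paper's API.

```python
import random
from collections import Counter

def pimc_with_inference(state, legal_moves, sample_worlds, infer_weight,
                        pi_value, n_samples=50, rng=random):
    """Perfect Information Monte Carlo with state inference (sketch).

    Candidate hidden-state worlds are weighted by an inferred probability
    rather than sampled uniformly; perfect-information search then scores
    every legal move in each sampled world.
    """
    totals = Counter()
    worlds = sample_worlds(state)                  # candidate hidden states
    weights = [infer_weight(state, w) for w in worlds]
    for _ in range(n_samples):
        world = rng.choices(worlds, weights=weights, k=1)[0]
        for move in legal_moves:
            totals[move] += pi_value(world, move)  # perfect-info evaluation
    return max(legal_moves, key=lambda m: totals[m])
```

With a perfect inference model (all weight on the true world), this degenerates to PIMC over the true state; with uniform weights it is plain PIMC.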
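The Dataset Splits row notes that training used a validation set for early stopping. The paper gives no further detail, so as a generic illustration (not the authors' training loop), early stopping tracks the best validation loss and halts after a fixed patience without improvement:

```python
def train_with_early_stopping(train_step, val_loss, max_epochs=100, patience=5):
    """Generic early stopping on a held-out validation set (sketch).

    train_step and val_loss are caller-supplied callables; patience is the
    number of epochs without validation improvement before stopping.
    """
    best, best_epoch = float('inf'), 0
    for epoch in range(max_epochs):
        train_step(epoch)                 # one epoch of optimization
        loss = val_loss(epoch)            # evaluate on the validation set
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break                         # no improvement for `patience` epochs
    return best_epoch, best
```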
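The Table 1 schedule (base learning rate 10^-4, exponential decay of 0.96 per 10,000,000 batches) corresponds to lr(step) = 10^-4 * 0.96^(step / 10^7), assuming continuous rather than staircase decay; the paper does not say which variant was used. A small helper makes the schedule concrete:

```python
def decayed_lr(step, base_lr=1e-4, decay_rate=0.96, decay_steps=10_000_000):
    """Exponential learning-rate decay per Table 1 (continuous variant):
    lr(step) = base_lr * decay_rate ** (step / decay_steps)."""
    return base_lr * decay_rate ** (step / decay_steps)
```

At step 0 this returns the base rate 10^-4; after 10,000,000 batches it has decayed by one factor of 0.96 (to 9.6e-5).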