Don’t Predict Counterfactual Values, Predict Expected Values Instead
Authors: Jeremiasz Wołosiuk, Maciej Świechowski, Jacek Mańdziuk
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A direct comparison, in terms of CFVs prediction losses, shows a significant prediction accuracy improvement of the proposed approach (DEVN) over the original DCVN formulation (relatively by 9.18–15.70% when using card abstraction, and by 3.37–8.39% without card abstraction, depending on a particular setting). Furthermore, the application of DEVN improves the theoretical lower bound of the error by 29.05–31.83% compared to the DCVN pipeline when card abstraction is applied. |
| Researcher Affiliation | Collaboration | Jeremiasz Wołosiuk¹, Maciej Świechowski²,³, Jacek Mańdziuk³; ¹ Deepsolver, ² QED Software, ³ Warsaw University of Technology; jeremi@deepsolver.com, maciej.swiechowski@qed.pl, jacek.mandziuk@pw.edu.pl |
| Pseudocode | No | No structured pseudocode or algorithm blocks are provided. The paper describes processes in textual paragraphs, such as the 'DEVN Pipeline'. |
| Open Source Code | Yes | https://github.com/jwolosiuk/dont-predict-cfvs-predict-evsinstead (additional results and code) |
| Open Datasets | No | The paper states 'Two different datasets of sizes 10M and 8.5M, resp. were generated using the same approach as in Deep Stack'. However, it does not provide any specific link, DOI, repository name, or citation with author/year for public access to these generated datasets. |
| Dataset Splits | Yes | The dataset (see Section 6) is split 90/10 between the training and validation sets. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU or CPU models, memory amounts, or detailed computer specifications used for running its experiments. It mentions 'end-to-end GPU approach' when describing Supremus, but not for their own experimental setup. |
| Software Dependencies | No | The paper mentions 'Adam optimizer' but does not list specific software dependencies with version numbers (e.g., Python, library versions like TensorFlow, PyTorch, or CUDA versions) required to reproduce the experiments. |
| Experiment Setup | Yes | The training is performed for 400 epochs in batches of 24,000 samples, using the Adam optimizer with a learning rate of 3·10⁻⁴. |
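The reported setup (90/10 train/validation split; 400 epochs, batches of 24,000 samples, Adam with learning rate 3·10⁻⁴) can be sketched as below. This is a hypothetical illustration of the hyperparameters only: the function and constant names are our own, not taken from the paper's (unreleased) training code.

```python
import numpy as np

# Hyperparameters as reported in the paper's experiment setup.
EPOCHS = 400
BATCH_SIZE = 24_000
LEARNING_RATE = 3e-4  # Adam optimizer

def split_dataset(n_samples: int, train_frac: float = 0.9, seed: int = 0):
    """Shuffle sample indices and split them 90/10 into train/validation,
    matching the proportion reported in the paper (Section 6)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    cut = int(n_samples * train_frac)
    return idx[:cut], idx[cut:]

# Example with the larger (10M-sample) dataset size mentioned in the paper.
train_idx, val_idx = split_dataset(10_000_000)
print(len(train_idx), len(val_idx))  # 9000000 1000000
```

The split is done on indices rather than on the data itself, so the same partition can be reused across training runs by fixing the seed.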