Adaptive Stochastic Optimization: From Sets to Paths
Authors: Zhan Wei Lim, David Hsu, Wee Sun Lee
NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have evaluated RAC in simulation on two robot planning tasks under uncertainty and show that RAC performs well against several commonly used heuristic algorithms, including greedy algorithms that optimize information gain. |
| Researcher Affiliation | Academia | Zhan Wei Lim, David Hsu, Wee Sun Lee; Department of Computer Science, National University of Singapore; {limzhanw,dyhsu,leews}@comp.nus.edu.sg |
| Pseudocode | Yes | We give the pseudocode of RAC in Algorithm 1. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., repository link, explicit statement of code release, or mention of code in supplementary materials) for the described methodology. |
| Open Datasets | No | The paper describes using 'UAV search and rescue task' and 'grasping task' and mentions 'two noisy IPP tasks modified from [10]', but it does not provide concrete access information (link, DOI, repository, or explicit statement of public availability) for these datasets as used in their experiments. |
| Dataset Splits | No | The paper mentions running trials (e.g., 'We run 1000 trials... and 3000 trials...') but does not provide specific information about dataset splits (e.g., percentages, sample counts, or references to predefined splits) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or specific machine configurations) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with versions) needed to replicate the experiment. |
| Experiment Setup | Yes | We set all algorithms to terminate when the Gibbs error of the equivalence classes is less than 10⁻⁵. The Gibbs error corresponds to the exponentiated Rényi entropy (order 2) and also the prediction error of a Gibbs classifier that predicts by sampling a hypothesis from the prior. We run 1000 trials with the true hypothesis sampled randomly from the prior for the UAV search task and 3000 trials for the grasping task as its variance is higher. For Sampled-RAId, we set the number of samples to be three times the number of hypotheses. For performance comparison, we pick 15 different thresholds γ (starting from 1 × 10⁻⁵ and doubling γ each step) for the Gibbs error of the equivalence classes and compute the average cost incurred by each algorithm to reduce the Gibbs error to below each threshold level γ. |
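
The termination test and threshold sweep quoted in the Experiment Setup row can be made concrete with a minimal sketch. It assumes the definition of Gibbs error implied by the quote (the prediction error of a Gibbs classifier that samples a class from the distribution, i.e. 1 − Σᵢ pᵢ² over the equivalence-class probabilities); the helper names, the `cost_curves` input format, and the averaging routine are illustrative assumptions, not code from the paper.

```python
import numpy as np

def gibbs_error(class_probs):
    """Gibbs error of a distribution over equivalence classes.

    Equals the prediction error of a Gibbs classifier that predicts by
    sampling a class from the distribution: 1 - sum_i p_i^2.
    """
    p = np.asarray(class_probs, dtype=float)
    p = p / p.sum()                     # normalize unnormalized weights
    return 1.0 - np.sum(p ** 2)

# Termination criterion quoted in the setup: stop once Gibbs error < 1e-5.
EPSILON = 1e-5

def should_terminate(class_probs):
    return gibbs_error(class_probs) < EPSILON

# Threshold sweep for comparison: 15 levels, starting at 1e-5 and doubling.
gammas = [1e-5 * 2 ** k for k in range(15)]

def average_cost_to_threshold(cost_curves, gamma):
    """Average cost at which each trial's Gibbs error first drops below gamma.

    `cost_curves` is a list of (costs, gibbs_errors) array pairs, one per
    trial; this helper and its input format are hypothetical.
    """
    costs_at_gamma = []
    for costs, errors in cost_curves:
        below = np.flatnonzero(np.asarray(errors) < gamma)
        if below.size:                  # trial reached this threshold
            costs_at_gamma.append(costs[below[0]])
    return float(np.mean(costs_at_gamma)) if costs_at_gamma else float("nan")
```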