Asymptotic Risk of Bézier Simplex Fitting
Authors: Akinori Tanaka, Akiyoshi Sannai, Ken Kobayashi, Naoki Hamada
AAAI 2020, pp. 2416-2424 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Those results are verified numerically under small to moderate sample sizes. In this paper, we study the asymptotic risk of the two fitting methods of the Bézier simplex: the all-at-once fitting and the inductive skeleton fitting, and compare their performance with respect to the degree. We examine the empirical performances of the all-at-once fitting and the inductive skeleton fitting and verify the asymptotic risks derived in Section 3.1 over synthetic instances and multi-objective optimization instances. |
| Researcher Affiliation | Collaboration | Akinori Tanaka RIKEN AIP, Keio University akinori.tanaka@riken.jp Akiyoshi Sannai RIKEN AIP, Keio University akiyoshi.sannai@riken.jp Ken Kobayashi Fujitsu Laboratories LTD., RIKEN AIP, Tokyo Tech ken-kobayashi@fujitsu.com Naoki Hamada Fujitsu Laboratories LTD., RIKEN AIP hamada-naoki@fujitsu.com |
| Pseudocode | No | The paper does not contain any clearly labeled 'Pseudocode' or 'Algorithm' blocks or figures. |
| Open Source Code | Yes | The source code and library dependencies are provided in https://github.com/rafcc/aaai-20.1534. |
| Open Datasets | Yes | To investigate the relationship between the generalization performance and our theoretical risk, we provide two complementary instances of multi-objective optimization problems: a generalized location problem called MED (Harada, Sakuma, and Kobayashi 2006; Hamada et al. 2010) and a multi-objective hyper-parameter tuning of the group lasso (Yuan and Lin 2006) on the Birthwt dataset (Hosmer and Lemeshow 1989; Venables and Ripley 2002). |
| Dataset Splits | No | The paper mentions 'training points' and a 'test set', but it does not explicitly describe a 'validation set' or provide specific percentages/counts for a train/validation/test split. |
| Hardware Specification | Yes | Experiment programs were implemented in Python 3.7.1 and run on a Windows 7 PC with an Intel Core i7-4790 CPU (3.60 GHz) and 16 GB RAM. |
| Software Dependencies | No | Experiment programs were implemented in Python 3.7.1 and run on a Windows 7 PC with an Intel Core i7-4790 CPU (3.60 GHz) and 16 GB RAM. The source code and library dependencies are provided in https://github.com/rafcc/aaai-20.1534. While the Python version is specified, the paper does not list other ancillary software dependencies with specific version numbers directly in the main text. |
| Experiment Setup | Yes | In this experiment, we estimated the Bézier simplex with degree D = 2 or 3, and compared the following three fitting methods: all-at-once: the all-at-once fitting (Section 3.1); inductive skeleton (non-optimal): the inductive skeleton fitting (Section 3.2) with N^(0) = … = N^(M−1) = N/M, which does not provide the optimal value of the risk shown in Table 1; inductive skeleton (optimal): the inductive skeleton fitting (Section 3.2) where N^(0), …, N^(M−1) are determined by minimizing the risk shown in Table 1 under the constraints ∑_{m=0}^{M−1} N^(m) = N and N^(m) ≥ 0 (m = 0, …, M−1). We make them strongly convex by the following perturbation: f̃1 = f1 + ε‖x‖², f̃2 = f2 + ε‖x‖², f̃3 = f3 + ε‖x‖², where ε is an arbitrarily small positive number (we set ε = 10⁻⁴). We changed the size of the training set over N ∈ {250, 500, 1000, 2000}. |
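
To make the experiment-setup row above concrete, the following is a minimal sketch of an all-at-once fit, assuming a standard linear least-squares formulation over the Bernstein basis of a degree-D Bézier simplex. It is not the authors' implementation (see https://github.com/rafcc/aaai-20.1534 for that); the function names, the toy objective, and the ε = 10⁻⁴ perturbation applied to it are illustrative assumptions.

```python
# Illustrative sketch of all-at-once Bezier simplex fitting by linear least squares.
# Names (multi_indices, bernstein, fit_all_at_once) are hypothetical, not from the paper's code.
import itertools
import math
import numpy as np

def multi_indices(M, D):
    """All multi-indices d of length M with sum(d) == D."""
    return [d for d in itertools.product(range(D + 1), repeat=M) if sum(d) == D]

def bernstein(t, d, D):
    """Bernstein polynomial of degree D for multi-index d at barycentric point t."""
    coef = math.factorial(D)
    for di in d:
        coef //= math.factorial(di)
    return coef * np.prod(np.power(t, d))

def design_matrix(T, D):
    """Stack Bernstein basis values for each barycentric sample in T (shape N x M)."""
    M = T.shape[1]
    idx = multi_indices(M, D)
    return np.array([[bernstein(t, d, D) for d in idx] for t in T])

def fit_all_at_once(T, X, D):
    """Least-squares estimate of all control points at once.

    T: (N, M) barycentric parameters, X: (N, L) observed objective vectors.
    Returns control points P of shape (number of multi-indices, L).
    """
    Phi = design_matrix(T, D)                    # (N, K) design matrix
    P, *_ = np.linalg.lstsq(Phi, X, rcond=None)  # solve Phi @ P ~ X
    return P

if __name__ == "__main__":
    # Toy usage with the smallest training size from the paper (N = 250) and
    # a strongly convexified toy target (NOT the paper's MED or Birthwt instances).
    rng = np.random.default_rng(0)
    N, M, D, eps = 250, 3, 2, 1e-4
    T = rng.dirichlet(np.ones(M), size=N)        # barycentric samples on the simplex
    X = T ** 2 + eps * np.sum(T ** 2, axis=1, keepdims=True)
    P = fit_all_at_once(T, X, D)
    print("control points shape:", P.shape)
```

The inductive skeleton fitting compared in the paper instead fits the sub-simplices of increasing dimension with separate sample budgets N^(0), …, N^(M−1); the sketch above only covers the all-at-once variant, which reduces to a single linear least-squares problem once the Bernstein design matrix is formed.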