Polynomial Tensor Sketch for Element-wise Function of Low-Rank Matrix
Authors: Insu Han, Haim Avron, Jinwoo Shin
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we report the empirical results of POLY-TENSORSKETCH for the element-wise matrix functions under various machine learning applications. |
| Researcher Affiliation | Academia | ¹School of Electrical Engineering, KAIST, Daejeon, Korea; ²School of Mathematical Sciences, Tel Aviv University, Israel; ³Graduate School of AI, KAIST, Daejeon, Korea |
| Pseudocode | Yes | Algorithm 1 TENSORSKETCH (Pham & Pagh, 2013), Algorithm 2 POLY-TENSORSKETCH, Algorithm 3 Greedy k-center, Algorithm 4 Coefficient approximation via coreset (an illustrative sketch of Algorithms 1–2 follows the table) |
| Open Source Code | Yes | Our implementation and experiments are available at https://github.com/insuhan/polytensorsketch. |
| Open Datasets | Yes | For real-world kernels, we use the segment and usps datasets. The datasets used in Sections 4.1 and 4.2 are available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ and http://archive.ics.uci.edu/. Models are trained on the CIFAR100 dataset (Krizhevsky et al., 2009). |
| Dataset Splits | Yes | We run all experiments with 10 cross-validations and report the average of the classification error on the validation dataset. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU/CPU models or cloud instances used for the experiments. |
| Software Dependencies | No | We use the open-source SVM package (LIBSVM) (Chang & Lin, 2011) and ADAM optimizer (Kingma & Ba, 2015), but no specific version numbers for these software dependencies are provided. |
| Experiment Setup | Yes | We set m = 10, r = 10 and k = 10 as the default configuration. We set m = 20 for the dimension of sketches and r = 3 for the degree of the polynomial. We set m = 20, d = 3, r = 3 and γ = 1. We first train the model for 300 epochs using ADAM optimizer (Kingma & Ba, 2015) with 0.0005 learning rate. (The default m and r are reused in the usage sketch below.) |
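Algorithms 1 and 2 listed above are the core of the method. As a rough illustration, not the authors' released implementation, the following is a minimal NumPy sketch of the idea: count sketches are combined via FFT to approximate inner-product powers ⟨x, y⟩^j, and a coefficient-weighted sum over degrees approximates an element-wise function of XY^T. The function names, the `rng` argument, and the reuse of hash pairs across degrees are assumptions made for brevity; the paper additionally fits the coefficients with a coreset-based regression (Algorithms 3–4) rather than fixing them.

```python
import numpy as np

def count_sketch(X, h, s, m):
    """Count sketch of each row of X: maps (n, d) -> (n, m) using
    hash buckets h[i] in [0, m) and signs s[i] in {-1, +1}."""
    n, d = X.shape
    C = np.zeros((n, m))
    for i in range(d):
        C[:, h[i]] += s[i] * X[:, i]
    return C

def tensor_sketch(X, hashes, signs, m):
    """Degree-len(hashes) TensorSketch (Pham & Pagh, 2013): the sketch of
    a tensor power is a circular convolution of count sketches, computed
    as an element-wise product in the FFT domain."""
    P = np.ones((X.shape[0], m), dtype=complex)
    for h, s in zip(hashes, signs):
        P *= np.fft.fft(count_sketch(X, h, s, m), axis=1)
    return np.real(np.fft.ifft(P, axis=1))

def poly_tensor_sketch(X, Y, coeffs, m, rng):
    """Approximate f applied element-wise to X @ Y.T, where f is given by
    polynomial coefficients c_j: sum_j c_j * (X @ Y.T) ** j element-wise."""
    d = X.shape[1]
    A = coeffs[0] * np.ones((X.shape[0], Y.shape[0]))  # degree-0 term
    hashes, signs = [], []
    for c in coeffs[1:]:
        # one fresh hash/sign pair per degree; lower-degree pairs are reused
        hashes.append(rng.integers(0, m, size=d))
        signs.append(rng.choice([-1.0, 1.0], size=d))
        TX = tensor_sketch(X, hashes, signs, m)
        TY = tensor_sketch(Y, hashes, signs, m)
        A += c * (TX @ TY.T)  # <TS(x), TS(y)> ~= <x, y> ** degree
    return A
```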
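Under the table's default configuration (m = 10, r = 10), a hypothetical end-to-end check against the exact element-wise exponential might look like the snippet below; the random data, the Taylor coefficients c_j = 1/j!, and the relative-error metric are all illustrative choices, not the paper's experimental protocol.

```python
import math

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20)) / np.sqrt(20)
Y = rng.standard_normal((400, 20)) / np.sqrt(20)

# Taylor coefficients of exp up to the table's default degree r = 10
coeffs = [1.0 / math.factorial(j) for j in range(11)]

approx = poly_tensor_sketch(X, Y, coeffs, m=10, rng=rng)
exact = np.exp(X @ Y.T)
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```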