Multiresolution Tensor Learning for Efficient and Interpretable Spatial Analysis
Authors: Jung Yeon Park, Kenneth Carr, Stephan Zheng, Yisong Yue, Rose Yu
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide theoretical analysis and prove the convergence properties and computational complexity of MRTL. We demonstrate on two real-world datasets that this approach is significantly faster than fixed resolution methods. ... evaluate on two real-world datasets and show MRTL learns faster than fixed-resolution learning and can produce interpretable latent factors. |
| Researcher Affiliation | Collaboration | 1Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA. 2Salesforce AI Research, Palo Alto, CA, USA. Work done while at California Institute of Technology. 3Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA. 4Computer Science and Engineering, University of California, San Diego, San Diego, CA, USA. |
| Pseudocode | Yes | Algorithm 1 Multiresolution Tensor Learning: MRTL (a hedged sketch of this loop appears after the table) |
| Open Source Code | Yes | The code for our implementation is available at https://github.com/Rose-STL-Lab/mrtl |
| Open Datasets | Yes | We use a large NBA player tracking dataset from (Yue et al., 2014; Zheng et al., 2016)... We use precipitation data from the PRISM group (PRISM Climate Group, 2013) and SSS/SST data from the EN4 reanalysis (Good et al., 2013). |
| Dataset Splits | Yes | For both datasets, we discretize the spatial features and use a 60-20-20 train-validation-test set split. (A split sketch appears after the table.) |
| Hardware Specification | No | The paper mentions GPUs generally ('chosen to fit GPU memory or time constraints') but does not specify any exact models of GPUs, CPUs, or other hardware used for experiments. |
| Software Dependencies | No | The paper mentions using 'Adam' for optimization, but it does not specify versions for any software dependencies such as programming languages or libraries. |
| Experiment Setup | Yes | We use Adam (Kingma & Ba, 2014) for optimization as it was empirically faster than SGD in our experiments. We use both L2 and spatial regularization as described in Section 3. We selected optimal hyperparameters for all models via random search. We use a stepwise learning rate decay with step size of 1 and γ = 0.95. (An optimizer-setup sketch appears after the table.) |
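
The "Pseudocode" row refers to the paper's Algorithm 1, which trains at a coarse spatial resolution first and then repeatedly "finegrains" the learned weights to the next finer resolution. Below is a minimal PyTorch sketch of that loop; the toy data, the linear spatial model, and the nearest-neighbor `finegrain` helper are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch of a multiresolution training loop in the spirit of
# Algorithm 1 (MRTL). All data and model choices below are toy
# assumptions; the real implementation lives in the linked repository.
import torch
import torch.nn.functional as F

def finegrain(weights, new_res):
    """Upsample the coarse spatial weight grid so the finer-resolution
    model is initialized from the coarse solution."""
    w = weights.detach().unsqueeze(0).unsqueeze(0)            # (1, 1, H, W)
    w = F.interpolate(w, size=(new_res, new_res), mode="nearest")
    return w.squeeze(0).squeeze(0).clone().requires_grad_()

def train_at_resolution(weights, X, y, epochs=50, lr=1e-2):
    opt = torch.optim.Adam([weights], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        pred = (X * weights).sum(dim=(1, 2))                  # linear spatial model
        F.mse_loss(pred, y).backward()
        opt.step()
    return weights

resolutions = [4, 8, 16]                                      # coarse -> fine grids
weights = torch.zeros(resolutions[0], resolutions[0], requires_grad=True)
for i, res in enumerate(resolutions):
    X, y = torch.randn(256, res, res), torch.randn(256)       # toy data
    weights = train_at_resolution(weights, X, y)
    if i + 1 < len(resolutions):
        weights = finegrain(weights, resolutions[i + 1])
```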
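The 60-20-20 split quoted in the "Dataset Splits" row maps onto a simple index permutation. The sketch below is one way to implement it; the seeded generator and the exact rounding are assumptions, not the authors' procedure.

```python
# Hypothetical 60-20-20 train/validation/test split by shuffled indices.
import torch

def split_60_20_20(n_samples, seed=0):
    g = torch.Generator().manual_seed(seed)        # assumed seed for reproducibility
    perm = torch.randperm(n_samples, generator=g)
    n_train, n_val = int(0.6 * n_samples), int(0.2 * n_samples)
    return (perm[:n_train],                        # 60% train
            perm[n_train:n_train + n_val],         # 20% validation
            perm[n_train + n_val:])                # remaining 20% test

train_idx, val_idx, test_idx = split_60_20_20(10_000)
```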
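The quoted experiment setup (Adam, L2 regularization, stepwise learning rate decay with step size 1 and γ = 0.95) corresponds to standard PyTorch components. In the sketch below, the model, base learning rate, and weight decay value are placeholders, since the paper selected hyperparameters via random search; the spatial regularizer from the paper's Section 3 is omitted.

```python
# Sketch of the quoted optimizer configuration. The model and base
# learning rate are placeholders; L2 regularization is expressed here as
# Adam's weight_decay (the paper's spatial regularizer is omitted).
import torch

model = torch.nn.Linear(10, 1)                     # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.95)

for epoch in range(10):
    # ... one full training epoch would run here ...
    scheduler.step()                               # multiply lr by 0.95 each epoch
```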