Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

tntorch: Tensor Network Learning with PyTorch

Authors: Mikhail Usvyatsov, Rafael Ballester-Ripoll, Konrad Schindler

JMLR 2022 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We benchmark tntorch's running times across four representative operations: TT decomposition using the TT-SVD algorithm, cross-approximation, and two arithmetic operations that can be achieved by direct manipulation of TT cores (Oseledets, 2011). We test four modalities: CPU vs. GPU, and in both cases for-loop vs. vectorized batch processing. As a baseline, we also compare with the Python library ttpy (Oseledets, 2015), which is written in NumPy and FORTRAN and also implements these four operations. All experiments use randomly initialized tensors of TT-rank R = 20, physical dimension sizes I = 15, ..., 45, and number of dimensions N = 8 (except for the TT-SVD experiment, where N = 4). We used PyTorch 1.13.0a0+git87148f2 (compiled from source) and NumPy 1.22.4 on an Intel(R) Core(TM) i7-7700K CPU with 64 GB RAM and an NVIDIA GeForce RTX 3090 GPU. Results are reported in Fig. 2.
Researcher Affiliation Academia 1 ETH Zurich, Switzerland; 2 IE University, Madrid, Spain (joint first authors)
Pseudocode No The paper describes features and operations of the tntorch library but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes To this end, we introduce tntorch (github.com/rballester/tntorch), an open-source Python package that abstracts the choice of format, while providing a wide range of tools for tensor learning, manipulation, and analysis.
Open Datasets No All experiments use randomly initialized tensors of TT-rank R = 20, physical dimension sizes I = 15, ..., 45, and number of dimensions N = 8 (except for the TT-SVD experiment, where N = 4).
Dataset Splits No The paper uses randomly initialized tensors for benchmarking and does not mention any training/test/validation dataset splits.
Hardware Specification Yes We used PyTorch 1.13.0a0+git87148f2 (compiled from source) and NumPy 1.22.4 on an Intel(R) Core(TM) i7-7700K CPU with 64 GB RAM and an NVIDIA GeForce RTX 3090 GPU.
Software Dependencies Yes We used PyTorch 1.13.0a0+git87148f2 (compiled from source) and NumPy 1.22.4 on an Intel(R) Core(TM) i7-7700K CPU with 64 GB RAM and an NVIDIA GeForce RTX 3090 GPU.
Experiment Setup No The paper gives parameters for the benchmark tensors, such as 'TT-rank R = 20, physical dimension sizes I = 15, ..., 45, and number of dimensions N = 8 (except for the TT-SVD experiment, where N = 4)' and 'The batch size is set to B = 32' (Figure 2). However, it does not provide specific hyperparameters or system-level training settings for a learning process, such as learning rates, optimizer details, or epochs.
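For context on the benchmarked operations: the TT-SVD algorithm (Oseledets, 2011) quoted in the Research Type row decomposes a dense tensor into a train of 3-way cores via sequential truncated SVDs. The sketch below is an illustrative NumPy reimplementation, not tntorch's actual code; the function names `tt_svd` and `tt_dense` are ours.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a dense tensor into TT cores via sequential truncated SVDs
    (the TT-SVD algorithm). Every TT-rank is capped at `max_rank`."""
    dims = tensor.shape
    cores = []
    r_prev = 1
    mat = np.asarray(tensor)
    for n in dims[:-1]:
        # Unfold: (previous rank * current mode) x (remaining modes).
        mat = mat.reshape(r_prev * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(r_prev, n, r))
        mat = s[:r, None] * vt[:r]  # carry the truncated remainder forward
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_dense(cores):
    """Contract the TT cores back into a dense array (for checking)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

For a rank-1 input, truncation to any TT-rank >= 1 reconstructs the tensor exactly, which makes a convenient sanity check.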
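The quoted benchmark also mentions "arithmetic operations that can be achieved by direct manipulation of TT cores". One standard example is addition: two TT tensors can be summed without densification by stacking their cores block-diagonally, at the cost of adding the TT-ranks. The following is a minimal NumPy sketch of that construction (again, our own illustrative code, not tntorch's implementation).

```python
import numpy as np

def tt_add(cores_a, cores_b):
    """Sum two TT tensors by direct core manipulation: first cores are
    concatenated along the right rank axis, last cores along the left
    rank axis, and interior cores are stacked block-diagonally."""
    last = len(cores_a) - 1
    out = []
    for k, (g, h) in enumerate(zip(cores_a, cores_b)):
        if k == 0:
            out.append(np.concatenate([g, h], axis=2))
        elif k == last:
            out.append(np.concatenate([g, h], axis=0))
        else:
            ra0, n, ra1 = g.shape
            rb0, _, rb1 = h.shape
            core = np.zeros((ra0 + rb0, n, ra1 + rb1))
            core[:ra0, :, :ra1] = g   # top-left block
            core[ra0:, :, ra1:] = h   # bottom-right block
            out.append(core)
    return out

def tt_dense(cores):
    """Contract a TT train into a dense array (for checking only)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

Because no unfolding or SVD is involved, the cost is linear in the number of core entries, which is why such operations vectorize well in the batch-processing modality the paper benchmarks.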