Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation
Author: Alex Rogozhnikov
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To demonstrate that the overhead brought by einops on top of a DL framework is negligible, we measure the performance of several case studies. We compare original pytorch implementations and their einops versions in the following scenarios (see Table 1): CPU or CUDA backends, with JIT (just-in-time compilation) enabled or disabled, and different input tensor sizes. |
| Researcher Affiliation | Industry | Alex Rogozhnikov (alex.rogozhnikov@ya.ru), currently at Herophilus, Inc. |
| Pseudocode | No | The paper does not contain blocks explicitly labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | The einops package is available online at https://github.com/arogozhnikov/einops (a minimal usage sketch follows the table). |
| Open Datasets | No | The paper benchmarks on abstract 'Input Size' values and draws code examples from other papers, but does not explicitly use any publicly available or open dataset for its own experiments (e.g., for training or evaluation). |
| Dataset Splits | No | The paper does not provide specific information about training, validation, or test dataset splits, as its experiments focus on performance benchmarks of tensor operations rather than model training. |
| Hardware Specification | Yes | We use AWS EC2 p3.2xlarge instance for benchmarks. |
| Software Dependencies | Yes | In our performance benchmark we use einops 0.3.2, pytorch 1.7.1+cu110, CUDA 11.0. |
| Experiment Setup | Yes | To demonstrate that the overhead brought by einops on top of a DL framework is negligible, we measure the performance of several case studies. We compare original pytorch implementations and their einops versions in the following scenarios (see Table 1): CPU or CUDA backends, with JIT (just-in-time compilation) enabled or disabled, and different input tensor sizes. A hedged benchmark sketch follows the table. |
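
As evidence for the Open Source Code row, the einops repository is public. Below is a minimal sketch of the library's Einstein-like notation; the tensor shapes and patterns are illustrative choices, not examples taken from the paper.

```python
# Minimal sketch of einops' Einstein-like notation, assuming einops and torch
# are installed (pip install einops torch). Shapes below are illustrative.
import torch
from einops import rearrange, reduce

x = torch.randn(8, 3, 32, 32)                    # (batch, channels, height, width)

# Flatten each image into a vector: equivalent to x.view(8, -1) in plain pytorch.
flat = rearrange(x, 'b c h w -> b (c h w)')      # -> (8, 3072)

# Space-to-depth with 2x2 patches, written as a single readable pattern.
patches = rearrange(x, 'b c (h p1) (w p2) -> b (c p1 p2) h w', p1=2, p2=2)

# Global average pooling over the spatial axes.
pooled = reduce(x, 'b c h w -> b c', 'mean')     # -> (8, 3)

print(flat.shape, patches.shape, pooled.shape)
```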
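
The Experiment Setup row describes comparing original pytorch implementations against their einops versions on CPU or CUDA, with and without JIT. The sketch below shows what such a comparison could look like; the operation, input size, and timing loop are assumptions for illustration, not the paper's actual benchmark code.

```python
# Hedged sketch of a pytorch-vs-einops timing comparison, with JIT variants.
# The specific operation and input size are illustrative, not the paper's cases.
import time
import torch
from einops import rearrange

def pytorch_version(x):
    # Original-style implementation: flatten spatial axes via permute + reshape.
    b, c, h, w = x.shape
    return x.permute(0, 2, 3, 1).reshape(b, h * w, c)

def einops_version(x):
    # Equivalent operation written as a single einops pattern.
    return rearrange(x, 'b c h w -> b (h w) c')

def benchmark(fn, x, n_iter=100):
    # Warm up, then report mean wall-clock time per call;
    # synchronize when timing on a CUDA device.
    for _ in range(10):
        fn(x)
    if x.is_cuda:
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_iter):
        fn(x)
    if x.is_cuda:
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_iter

x = torch.randn(32, 64, 56, 56)  # one illustrative input size
variants = {
    'pytorch': pytorch_version,
    'einops': einops_version,
    # JIT variants via tracing (the paper also compares JIT-compiled versions;
    # tracing fixes the input shape seen here, which is fine for a benchmark).
    'pytorch+jit': torch.jit.trace(pytorch_version, x),
    'einops+jit': torch.jit.trace(einops_version, x),
}
for name, fn in variants.items():
    print(f'{name}: {benchmark(fn, x) * 1e6:.1f} us/call')
```

Moving the tensor to CUDA (`x = x.cuda()`) before building the variants would exercise the GPU path the paper also measures; the `torch.cuda.synchronize()` calls in `benchmark` keep those timings honest.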