Partial Trace Regression and Low-Rank Kraus Decomposition
Authors: Hachem Kadri, Stéphane Ayache, Riikka Huusari, Alain Rakotomamonjy, Liva Ralaivola
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show the relevance of our framework with synthetic and real-world experiments conducted for both i) matrix-to-matrix regression and ii) positive semidefinite matrix completion, two tasks which can be formulated as partial trace regression problems. |
| Researcher Affiliation | Collaboration | 1Aix-Marseille University, CNRS, LIS, Marseille, France 2Helsinki Institute for Information Technology HIIT, Department of Computer Science, Aalto University, Espoo, Finland 3Université Rouen Normandie, LITIS, Rouen, France 4Criteo AI Lab, Paris, France. |
| Pseudocode | No | The paper describes optimization methods and procedures in narrative text but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/Stef-hub/partial_trace_kraus. |
| Open Datasets | Yes | We consider the multiple features digits dataset, available online. |
| Dataset Splits | No | The paper mentions training data (e.g., "use the training examples (230 samples) for learning the covariance mapping") and testing data, but does not explicitly specify a validation dataset split. |
| Hardware Specification | No | The paper mentions leveraging "efficient computational hardware, like GPUs" but does not provide specific details such as GPU models, CPU models, or memory specifications. |
| Software Dependencies | No | The paper states that the PTR model is implemented in "keras/Tensorflow framework" but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | In all the experiments, the PTR model is implemented in a keras/Tensorflow framework and learned with Adam with default learning rate (0.001) and for 100 epochs. Batch size is typically 16. |
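
The experiment setup row above reports only the training hyperparameters (Adam with the default learning rate of 0.001, 100 epochs, batch size 16) in a Keras/TensorFlow framework. The sketch below shows what that configuration looks like in code; it is not the authors' partial trace regression (PTR) model. The architecture, data shapes, and variable names are placeholder assumptions, and only the optimizer, learning rate, epoch count, and batch size follow the quoted setup.

```python
# Minimal sketch of the reported training configuration, assuming a
# placeholder dense model in place of the authors' PTR / low-rank Kraus model.
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Hypothetical matrix-to-matrix regression data: inputs and targets are
# flattened d x d matrices; d and the sample count are illustrative only
# (the paper mentions 230 training samples for one of its tasks).
d = 8
n_samples = 230
X = np.random.randn(n_samples, d * d).astype("float32")
Y = np.random.randn(n_samples, d * d).astype("float32")

# Placeholder model standing in for the PTR map; the real model uses a
# low-rank Kraus decomposition, which is not reproduced here.
model = keras.Sequential([
    keras.layers.Input(shape=(d * d,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(d * d),
])

# Hyperparameters as reported in the paper: Adam with the default
# learning rate (0.001), 100 epochs, batch size 16.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="mse")
model.fit(X, Y, epochs=100, batch_size=16, verbose=0)
```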