Efficient Protocol for Collaborative Dictionary Learning in Decentralized Networks
Authors: Tsuyoshi Idé, Rudy Raymond, Dzung T. Phan
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our theoretical and experimental analysis shows that our method is several orders of magnitude faster than the alternative. |
| Researcher Affiliation | Collaboration | (1) IBM Research, Thomas J. Watson Research Center; (2) IBM Research Tokyo; (3) Quantum Computing Center, Keio University |
| Pseudocode | Yes | Algorithm 1 Collaborative dictionary learning |
| Open Source Code | No | The paper does not provide any explicit statements or links about the availability of the source code for the described methodology. |
| Open Datasets | No | For the density model, we used Gaussian with M = 4. The samples were generated from three distinct Gaussians as illustrated in Fig. 3. In practice, determining K can be a major issue. We initialized the model with K = 6, which is larger than the ground truth, and observed whether the algorithm could correctly capture the three modes. The results are shown in the figure, where the ground truth is successfully recovered. (A toy sketch of this synthetic setup follows the table.) |
| Dataset Splits | No | The paper uses a synthetic dataset but does not specify any explicit training, validation, or test splits by percentages or sample counts. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'homomorpheR [Narasimhan, 2019]' but does not provide a specific version number for this software or any other dependencies. |
| Experiment Setup | Yes | We initialized the model with K = 6, which is larger than the ground truth, and observed whether the algorithm could correctly capture the three modes. [...] We used ϵ = 1/dmax, where dmax is the maximum node degree. [...] The ξ's were initialized by the uniform distribution in [−10, 10]. [...] Convergence was declared when the root-mean-squared error (RMSE) per node is below 0.01. Since we set Nc = 5, the number of total iterations is about five times larger in the proposed model. (A sketch of these convergence settings follows the table.) |
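
The "Open Datasets" row describes a synthetic benchmark: samples drawn from three distinct Gaussians, with the mixture deliberately over-provisioned at K = 6 components to test whether the algorithm recovers the three true modes. The sketch below only mirrors that data description; the component means, covariances, sample counts, and data dimension are illustrative assumptions, not values reported in the paper.

```python
# Toy sketch of the synthetic data described in the "Open Datasets" row.
# Cluster locations, covariances, and sample counts are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Three well-separated Gaussian clusters (cf. Fig. 3 of the paper).
means = np.array([[-4.0, 0.0], [0.0, 4.0], [4.0, 0.0]])
X = np.vstack([rng.normal(mu, scale=1.0, size=(200, 2)) for mu in means])

# Parameters quoted in the table: the density model uses Gaussian components
# with M = 4 (exact role defined in the paper), and the mixture is initialized
# with K = 6 components, more than the three ground-truth modes.
M = 4
K = 6
print(X.shape, M, K)
```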
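The "Experiment Setup" row also fixes the step size ϵ = 1/dmax, the uniform initialization of the ξ's in [−10, 10], the per-node RMSE threshold of 0.01, and Nc = 5 inner rounds per outer iteration. The following is a minimal sketch of that bookkeeping only: the ring network, the toy averaging update, and the choice to measure RMSE against the network average are assumptions standing in for the paper's Algorithm 1 (collaborative dictionary learning), not the authors' protocol.

```python
# Hedged sketch of the convergence settings quoted in "Experiment Setup".
import numpy as np

rng = np.random.default_rng(1)

N_nodes = 10        # assumed network size (ring topology)
d_max = 2           # maximum node degree; every node in a ring has degree 2
eps = 1.0 / d_max   # step size eps = 1/d_max, as quoted from the paper
K = 6               # number of mixture components, as in the data sketch above
Nc = 5              # inner consensus rounds per outer iteration (per the paper)

# Auxiliary variables xi, one row per node, initialized uniformly in [-10, 10].
xi = rng.uniform(-10.0, 10.0, size=(N_nodes, K))

def per_node_rmse(delta):
    """Root-mean-squared error of the residual, computed separately per node."""
    return np.sqrt(np.mean(delta ** 2, axis=1))

# Placeholder update: pull every node toward the network average. A faithful
# implementation would run the paper's Algorithm 1 here instead.
for outer in range(1000):
    for _ in range(Nc):
        xi = xi + eps * (xi.mean(axis=0, keepdims=True) - xi)
    residual = xi - xi.mean(axis=0, keepdims=True)   # assumed RMSE reference
    if np.all(per_node_rmse(residual) < 0.01):
        print(f"converged after {outer + 1} outer iterations "
              f"({(outer + 1) * Nc} total inner rounds)")
        break
```

With Nc = 5 inner rounds per outer iteration, the total iteration count is about five times the outer count, which matches the paper's remark that the proposed model runs roughly five times more iterations in total.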