Context-Aware Online Collective Inference for Templated Graphical Models
Authors: Charles Dickens, Connor Pryor, Eriq Augustine, Alexander Miller, Lise Getoor
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we implement our approach in probabilistic soft logic, and test it on several online collective inference tasks. Through these experiments we verify the bounds on regret and stability, and show that our approximate online approach consistently runs two to five times faster than the offline alternative while, surprisingly, maintaining the quality of the predictions. |
| Researcher Affiliation | Academia | Department of Computer Science and Engineering, University of California Santa Cruz, California, United States. Correspondence to: Charles Dickens <cadicken@ucsc.edu>. |
| Pseudocode | No | The paper describes the Projected Stochastic Subgradient Descent algorithm using equations (16) and (17), but does not present it within a structured pseudocode or algorithm block (a hedged sketch of the update loop is given after the table). |
| Open Source Code | Yes | Data and code: https://github.com/linqs/dickens-icml21 |
| Open Datasets | Yes | **MovieLens**: a movie recommendation dataset containing approximately 1M timestamped ratings made by 6K users on 4K movies (Harper & Konstan, 2015). **Bike Share**: a dataset containing information for 650k trips between 70 stations by customers of the bicycle sharing service Bay Area Bike Share (Bay Area Bike Share, 2016). **Epinions**: a trust prediction dataset with 2k users and 8.5k directed links representing whether one user trusts another; the data is divided into 8 splits with the trust links partitioned into observed and unknown sets following the same procedure as Bach et al. (2017). |
| Dataset Splits | No | The paper describes how data is partitioned into time steps and how observations change over time, but it does not specify explicit training, validation, or test dataset splits in terms of percentages or counts for model evaluation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions implementing the approach in 'probabilistic soft logic (PSL)' but does not provide specific version numbers for PSL or any other software dependencies. |
| Experiment Setup | Yes | where g_y ∈ ∂_y(log(φ_i)) and η is a step size hyperparameter. i.e., we set ϵ_i = ϵ/m, where ϵ is a fixed scalar hyperparameter and m is the number of potentials. Then, the α estimate is the average of the rates for the optimistic bound, and the average between the regularization parameter of the model, 0.1, and the minimum observed rate for the pessimistic bound (see the sketches after the table). The regretful models are grounded using the example described in Section 3.2 with κ = 1. |
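
Since the paper presents Projected Stochastic Subgradient Descent only as equations (16) and (17) rather than as an algorithm block, the following is a minimal Python sketch of that style of update loop. The function name `projected_sgd`, its argument names, and the default values are illustrative assumptions, not the paper's implementation; the projection onto the [0, 1] box reflects the fact that PSL inference variables are continuous values in [0, 1].

```python
import numpy as np

def projected_sgd(potentials, y0, eta=0.1, epochs=100, lo=0.0, hi=1.0, seed=0):
    """Sketch of projected stochastic subgradient descent (PSGD) in the
    spirit of the paper's equations (16)-(17); names are illustrative.

    `potentials` is a list of callables, each returning (value, subgradient)
    of one potential at the current point y; PSL's hinge-loss potentials
    fit this shape.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y0, dtype=float)
    for _ in range(epochs):
        # Visit the potentials in a random order each epoch.
        for i in rng.permutation(len(potentials)):
            _, g = potentials[i](y)   # subgradient of one sampled potential
            y = y - eta * g           # stochastic subgradient step (step size eta)
            y = np.clip(y, lo, hi)    # project back onto the [0, 1]^n box
    return y
```

The hyperparameter conventions quoted in the Experiment Setup row can also be made concrete. In the snippet below, `per_potential_epsilon`, `alpha_optimistic`, and `alpha_pessimistic` are hypothetical helper names, and `rates` stands in for the observed rates referenced in the paper's bounds; this is a sketch of the stated arithmetic, not the authors' code.

```python
def per_potential_epsilon(eps, m):
    """Split a fixed scalar budget eps evenly across m potentials: eps_i = eps / m."""
    return eps / m

def alpha_optimistic(rates):
    """Optimistic bound: the average of the observed rates."""
    return sum(rates) / len(rates)

def alpha_pessimistic(rates, regularization=0.1):
    """Pessimistic bound: the average of the model's regularization
    parameter (0.1 in the paper) and the minimum observed rate."""
    return (regularization + min(rates)) / 2.0
```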