Identifying Key Observers to Find Popular Information in Advance
Authors: Takuya Konishi, Tomoharu Iwata, Kohei Hayashi, Ken-ichi Kawarabayashi
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In experiments, we test our approach using real social bookmark datasets. The results demonstrate that our approach can find popular items in advance more effectively than baseline methods. |
| Researcher Affiliation | Collaboration | National Institute of Informatics; JST ERATO Kawarabayashi Large Graph Project; NTT Communication Science Laboratories |
| Pseudocode | Yes | Algorithm 1 Ada-RDA (E, λ, ·, ·) |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | We used Delicious datasets [Wetzker et al., 2008], which comprise records of events where Delicious users bookmarked (adopted) web pages with time stamps. |
| Dataset Splits | Yes | Items were split into ten subsets, with 90 percent of the items used as training data and the other 10 percent as test data. ... We repeated the above procedure ten times while changing the training and test data (i.e. 10-fold cross validation) and took the average of the AUCs. |
| Hardware Specification | Yes | We used one server that has 16 processors and allows for computing 32 threads at a time by hyper-threading. |
| Software Dependencies | No | The proposed methods were implemented in Java, while the baselines were implemented in Python. The paper does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We set both parameters to 1.0 and C to 20. We also set λ to the set of 500 points in [0.00001, 0.01]. For the augmented method, we needed to select the pair of parameters (k, σ). We performed v-fold cross validation on k ∈ K = {0.5, 1.0, 2.0, 5.0, 10.0, 50.0}, σ ∈ {1.0, 5.0, 9.5, 15.0}, and v = 5 using training data. |
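
The Pseudocode row quotes only the header of Algorithm 1, and its parameter symbols did not survive text extraction. As a point of reference, the sketch below shows a generic ℓ1-regularized adaptive RDA (Ada-RDA) update in the style of Duchi et al. (2011); the parameter names `eta`, `delta`, and `lam` are assumptions, and the paper's exact Algorithm 1 may differ.

```python
# Hedged sketch of a generic l1-regularized adaptive RDA (Ada-RDA) update,
# NOT the paper's Algorithm 1; parameter names eta, delta, lam are assumed.
import numpy as np

class AdaRDA:
    def __init__(self, dim, eta=1.0, delta=1.0, lam=1e-4):
        self.eta, self.delta, self.lam = eta, delta, lam
        self.t = 0
        self.grad_sum = np.zeros(dim)     # running sum of (sub)gradients
        self.grad_sq_sum = np.zeros(dim)  # running sum of squared gradients
        self.w = np.zeros(dim)

    def step(self, grad):
        """Apply one dual-averaging update given the current (sub)gradient."""
        self.t += 1
        self.grad_sum += grad
        self.grad_sq_sum += grad ** 2
        g_bar = self.grad_sum / self.t
        h = self.delta + np.sqrt(self.grad_sq_sum)
        # Soft-thresholded dual-averaging step: coordinates whose average
        # gradient is below lam are set exactly to zero, giving sparsity.
        self.w = -np.sign(g_bar) * (self.eta * self.t / h) * np.maximum(
            0.0, np.abs(g_bar) - self.lam)
        return self.w
```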
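
The evaluation protocol quoted in the Dataset Splits row (ten item-level folds, 90/10 train/test, averaged AUC) can be sketched as follows. `fit_predictor` and `predict_scores` are hypothetical placeholders standing in for the paper's model, not functions from its implementation.

```python
# Hedged sketch of the 10-fold item-level evaluation described in the
# Dataset Splits row; fit_predictor and predict_scores are hypothetical.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score

def cross_validated_auc(items, labels, fit_predictor, predict_scores,
                        n_folds=10, seed=0):
    """Average AUC over n_folds item-level splits (items, labels: np.ndarray)."""
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    aucs = []
    for train_idx, test_idx in kf.split(items):
        model = fit_predictor(items[train_idx], labels[train_idx])
        scores = predict_scores(model, items[test_idx])
        aucs.append(roc_auc_score(labels[test_idx], scores))
    return float(np.mean(aucs))
```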
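
The Experiment Setup row describes choosing the pair (k, σ) by 5-fold cross-validation on the training data over fixed grids. A minimal sketch of that selection step follows, assuming a hypothetical `evaluate_pair` callback that returns a cross-validated AUC for one (k, σ) setting.

```python
# Hedged sketch of the (k, sigma) grid search from the Experiment Setup row;
# evaluate_pair is a hypothetical callback, not part of the paper's code.
from itertools import product

K_GRID = [0.5, 1.0, 2.0, 5.0, 10.0, 50.0]
SIGMA_GRID = [1.0, 5.0, 9.5, 15.0]

def select_k_sigma(train_items, train_labels, evaluate_pair, v=5):
    """Return the (k, sigma) pair with the best mean validation AUC."""
    best_pair, best_auc = None, -1.0
    for k, sigma in product(K_GRID, SIGMA_GRID):
        auc = evaluate_pair(train_items, train_labels, k=k, sigma=sigma,
                            n_folds=v)
        if auc > best_auc:
            best_pair, best_auc = (k, sigma), auc
    return best_pair
```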