Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
BANANA: when Behavior ANAlysis meets social Network Alignment
Authors: Fuxin Ren, Zhongbao Zhang, Jiawei Zhang, Sen Su, Li Sun, Guozhen Zhu, Congying Guo
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on real-world datasets, we demonstrate that our proposed approach outperforms the state-of-the-art methods in the social network alignment task and the user behavior analysis task, respectively. |
| Researcher Affiliation | Academia | ¹State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China; ²IFM Lab, Department of Computer Science, Florida State University, FL, USA |
| Pseudocode | No | The paper does not contain any sections explicitly labeled as 'Pseudocode' or 'Algorithm', nor are there any structured, code-like blocks detailing procedures. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the described methodology, nor does it provide links to a code repository. |
| Open Datasets | Yes | In order to validate our approaches, we employ two real-world datasets. One of the real-world datasets is a Twitter Foursquare (TF) dataset [Kong et al., 2013]... The other real-world dataset [Zhong et al., 2012] is collected from the Douban website... |
| Dataset Splits | No | The paper mentions 'training ratio' in the context of Figure 5(a) ('even with a small size of training ratio, the precision of BANANA achieves the best among all comparison methods'), but it does not specify the exact percentages or counts for training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., specific libraries or programming language versions) that would be needed to replicate the experiment. |
| Experiment Setup | No | The paper describes hyper-parameters such as λw, λd (in Eq. 5) and α (in Eq. 16), but it does not provide their specific values or other concrete experimental setup details like learning rate, batch size, or optimizer settings. |
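The notice above states that the LLM-based classifications were validated against a manually labeled dataset. A minimal sketch of that kind of validation is a per-variable agreement check between LLM and manual labels. The function name, label format, and toy data below are hypothetical illustrations, not the actual pipeline from [1]:

```python
from collections import defaultdict

def per_variable_accuracy(llm_labels, manual_labels):
    """Agreement between LLM and manual labels, per reproducibility variable.

    Both inputs map (paper_id, variable) -> label string.
    Returns {variable: fraction of papers where the labels match},
    computed over keys present in both mappings.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for key, manual in manual_labels.items():
        if key not in llm_labels:
            continue  # skip papers without an LLM label
        _, variable = key
        total[variable] += 1
        if llm_labels[key] == manual:
            correct[variable] += 1
    return {v: correct[v] / total[v] for v in total}

# Toy example with hypothetical papers p1, p2
manual = {("p1", "Open Source Code"): "No",
          ("p2", "Open Source Code"): "Yes",
          ("p1", "Open Datasets"): "Yes"}
llm = {("p1", "Open Source Code"): "No",
       ("p2", "Open Source Code"): "No",
       ("p1", "Open Datasets"): "Yes"}
print(per_variable_accuracy(llm, manual))
# {'Open Source Code': 0.5, 'Open Datasets': 1.0}
```

Reporting accuracy per variable rather than as one pooled number matters here, since some variables (e.g. Open Datasets) are typically easier to classify than others (e.g. Experiment Setup).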