Propagating Ranking Functions on a Graph: Algorithms and Applications

Authors: Buyue Qian, Xiang Wang, Ian Davidson

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we attempt to understand the strengths and relative performance of our distribution-based ranking function propagation method, which we refer to as RFP (Ranking Function Propagation). To the best of our knowledge, our work is the first on ranking function propagation on distribution data, so in our experiments we can only compare with non-distribution propagation methods. In particular, we wish to answer how well our method compares to the following state-of-the-art baseline methods: 1. GFHF (2003): a label propagation method based on the harmonic function and the graph Laplacian. 2. LGC (2003): a label propagation method based on graph smoothness and the normalized graph Laplacian. The results show that our propagation method significantly outperforms both baselines.
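For context on the LGC baseline mentioned above, a minimal sketch of its closed-form solution is shown below. It follows the standard formulation F* = (1 - α)(I - αS)^{-1} Y with S = D^{-1/2} W D^{-1/2}; the function name, toy graph, and parameter values are illustrative assumptions, not taken from the reviewed paper.

```python
import numpy as np

def lgc_propagate(W, Y, alpha=0.9):
    """Label propagation via Learning with Local and Global Consistency (LGC).

    W : (n, n) symmetric affinity matrix with zero diagonal.
    Y : (n, c) initial label matrix (one-hot rows for labeled nodes, zeros otherwise).
    Returns the closed-form solution F* = (1 - alpha) * (I - alpha * S)^{-1} Y,
    where S = D^{-1/2} W D^{-1/2} is the symmetrically normalized affinity.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))  # guard isolated nodes
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    n = W.shape[0]
    return (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)

# Toy graph: two 3-node triangles joined by a weak bridge edge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1  # weak bridge between the two clusters
Y = np.zeros((6, 2))
Y[0, 0] = 1.0  # node 0 labeled as class 0
Y[5, 1] = 1.0  # node 5 labeled as class 1
pred = lgc_propagate(W, Y).argmax(axis=1)  # -> [0, 0, 0, 1, 1, 1]
```

Each unlabeled node adopts the class of the cluster it is most strongly connected to, which is the graph-smoothness behavior the review attributes to LGC.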
Researcher Affiliation | Collaboration | Buyue Qian, IBM T. J. Watson Research, Yorktown Heights, NY 10598, bqian@us.ibm.com; Xiang Wang, IBM T. J. Watson Research, Yorktown Heights, NY 10598, wangxi@us.ibm.com; Ian Davidson, University of California, Davis, Davis, CA 95616, davidson@cs.ucdavis.edu
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper states: 'We will make the movie poster dataset publicly available soon', which refers to a dataset and is phrased in the future tense. There is no concrete statement or link for the source code of the described methodology.
Open Datasets | Yes | The birds dataset (Briggs et al. 2013) consists of 645 ten-second audio recordings...; the dataset used in this experiment is a collection of movie posters crawled from the web using the links provided in the HetRec2011-MovieLens-2K dataset (Cantador, Brusilovsky, and Kuflik 2011).
Dataset Splits | No | The paper describes how training data is selected and incrementally added ('additional 150 (randomly selected) labeled audio recordings are gradually added to the training set'), but it does not specify explicit train/validation/test splits with percentages or absolute sample counts for the overall dataset, nor does it define a separate validation set.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions 'We use Rank SVM (Chapelle 2007) to provide the initial ranking functions', but it does not give version numbers for Rank SVM or any other software dependency.
Experiment Setup | Yes | The Rank SVM parameters are set as follows: the penalty constant C is set to 1, and for Newton's method the maximum number of linear conjugate gradient iterations is 20 and the stopping criterion for conjugate gradients is 10^-3. In RFP, the vector µ is set with the following values: 10 at the locations corresponding to strong (trained with abundant data) ranking functions, 0 at the locations corresponding to unknown (no training data) ranking functions, and 1 at the locations corresponding to weak (trained with little data) functions.
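The µ assignment rule quoted above is mechanical enough to express in code. The sketch below is a hypothetical helper, assuming each ranking function has been tagged 'strong', 'weak', or 'unknown'; the function name `build_mu` and the status labels are illustrative, not from the paper.

```python
import numpy as np

# Weights per the paper's stated rule: 10 for strong ranking functions
# (abundant training data), 1 for weak ones (little training data),
# 0 for unknown ones (no training data).
MU_WEIGHTS = {"strong": 10.0, "weak": 1.0, "unknown": 0.0}

def build_mu(status):
    """Return the mu vector for a list of per-function status tags."""
    return np.array([MU_WEIGHTS[s] for s in status])

mu = build_mu(["strong", "weak", "unknown", "strong"])
# -> array([10.,  1.,  0., 10.])
```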