Aligning Mixed Manifolds

Authors: Thomas Boucher, CJ Carey, Sridhar Mahadevan, Melinda Dyar

AAAI 2015

Reproducibility Variable Result LLM Response
Research Type: Experimental. "To evaluate the effectiveness of LRA, experiments were performed on two very different real-world data sets. For comparison, we implemented three state-of-the-art alignment techniques: (instance-level/non-linear) manifold alignment (Wang and Mahadevan 2009), affine matching alignment (Lafon, Keller, and Coifman 2006), and Procrustes alignment (Wang and Mahadevan 2008)."
Researcher Affiliation: Academia. Thomas Boucher, CJ Carey, Sridhar Mahadevan (School of Computer Science, University of Massachusetts, Amherst, MA 01003; boucher@cs.umass.edu); M. Darby Dyar (Department of Astronomy, Mount Holyoke College, South Hadley, MA 01075).
Pseudocode: Yes.
Algorithm 1: Low Rank Alignment
  Input: data matrices X, Y, embedding dimension d, correspondence matrix C^(X,Y), and weight µ.
  Output: embedding matrix F.
  Step 0: Column-normalize X and Y (optional, but recommended if X and Y differ greatly in scale).
  Step 1: Compute the reconstruction coefficient matrices R^(X) and R^(Y):
    U S V^T = SVD(X);   R^(X) = V_1 (I - S_1^{-2}) V_1^T
    Û Ŝ V̂^T = SVD(Y);   R^(Y) = V̂_1 (I - Ŝ_1^{-2}) V̂_1^T
  Step 2: Set F equal to the d smallest eigenvectors of the matrix in equation (20).
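Step 1's closed form can be sketched in Python. This is a minimal illustration, not the authors' implementation: it assumes the columns of X are samples, assumes the λ = 1 form of the low-rank-representation closed form (so V_1 and S_1 collect the right singular vectors whose singular values exceed 1), and `low_rank_coefficients` is an illustrative name.

```python
import numpy as np

def low_rank_coefficients(X):
    """Sketch of the closed-form reconstruction coefficients in Step 1:
    R = V_1 (I - S_1^{-2}) V_1^T, where S_1 holds the singular values of X
    greater than 1 and V_1 the matching right singular vectors.
    Assumes X is (features x samples); R is (samples x samples)."""
    # Thin SVD of the data matrix.
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    keep = s > 1.0            # singular values above the threshold
    v1 = vt[keep].T           # corresponding right singular vectors
    s1 = s[keep]
    return v1 @ np.diag(1.0 - s1 ** -2.0) @ v1.T
```

R is symmetric by construction, and X R reproduces X up to a residual that shrinks as the retained singular values grow.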
Open Source Code: Yes. "An implementation of LRA is available for download on the authors' website." https://github.com/all-umass/low_rank_alignment
Open Datasets: Yes. "In this second set of experiments, we used the transcribed proceedings of the European Parliament (Koehn 2005) for a standard cross-language document retrieval task."
Dataset Splits: Yes. First experiment: "The 5-fold cross validation results are shown in Figure 4. In each iteration, correspondences are provided for 80 spectra while the other 20 spectra are used for evaluation. The experiment was repeated 30 times with a random partitioning of folds." Second experiment: "For accurate method comparison, we used 5-fold cross validation. In each fold, 80% of the sentence correspondences were provided and the remaining 20% of the sentences were used for evaluation."
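The split protocol quoted above (5 folds, 80 correspondences given, 20 held out, repeated with fresh random partitions) can be sketched with scikit-learn's `KFold`. This illustrates the protocol only; the sample count of 100 and the loop structure are assumptions matching the spectra experiment's 80/20 figures, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import KFold

indices = np.arange(100)  # e.g. 100 spectra: 80 get correspondences, 20 are held out

for repeat in range(30):  # the experiment was repeated 30 times
    kf = KFold(n_splits=5, shuffle=True, random_state=repeat)
    for train_idx, test_idx in kf.split(indices):
        # train_idx: correspondences provided to the aligner;
        # test_idx: held out for evaluation.
        assert len(train_idx) == 80 and len(test_idx) == 20
```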
Hardware Specification: No. The paper does not provide any specific details about the hardware used for the experiments, such as GPU/CPU models, memory specifications, or cloud computing resources.
Software Dependencies: No. The paper mentions Python and scikit-learn but does not provide specific version numbers for these software components.
Experiment Setup: Yes. First experiment: "For all models evaluated, the correspondence weight was set to µ = 0.8, based upon the ratio of train/test data. All competing models required an additional nearest-neighbor hyperparameter. This hyperparameter was optimized using grid search and cross validation. For affine matching and Procrustes alignment the number of neighbors used was k = 10, and for traditional manifold alignment k = 4." Second experiment: "All methods used the same default correspondence weight µ = 0.5. Grid search and cross validation were used to tune the number of nearest neighbors for all competing models. For affine matching and Procrustes alignment k = 125, and for manifold alignment k = 5."
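The tuning procedure quoted above (grid search with cross validation over the number of nearest neighbors) can be sketched with scikit-learn. This is an illustration only: a kNN classifier on toy data stands in for the alignment models, since the report does not say which objective was scored, and the candidate k values are simply those reported in the table.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.neighbors import KNeighborsClassifier

# Toy features standing in for aligned embeddings (assumption for illustration).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X[:, 0] > 0).astype(int)  # toy target

search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [4, 5, 10, 125]},  # k values reported above
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
)
search.fit(X, y)
best_k = search.best_params_["n_neighbors"]
```

The real search would score alignment quality (e.g. retrieval accuracy on held-out correspondences) rather than classification accuracy.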