Cold-Start Heterogeneous-Device Wireless Localization
Authors: Vincent W. Zheng, Hong Cao, Shenghua Gao, Aditi Adhikari, Miao Lin, Kevin Chang
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model on two public real-world data sets, and show that it significantly outperforms the best baseline by 23.1%–91.3% across four pairs of heterogeneous devices. |
| Researcher Affiliation | Collaboration | Advanced Digital Sciences Center, Singapore; McLaren Applied Technologies APAC, Singapore; ShanghaiTech University, China; Institute for Infocomm Research, A*STAR, Singapore; University of Illinois at Urbana-Champaign, USA |
| Pseudocode | No | The paper describes algorithms and formulations mathematically and textually but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statements about the availability of its source code, nor does it include links to a code repository. |
| Open Datasets | Yes | Data sets: we use two public real-world data sets: HKUST data set (Zheng et al. 2008a) and MIT data set (Park et al. 2011), as shown in Tables 1 and 2. |
| Dataset Splits | No | The paper describes its training and testing splits ("50% of its data at each location as the labeled training data" and "100% of its data as test data"; see the split sketch after this table), but it does not mention a separate validation set or split for hyperparameter tuning or model selection. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. It only lists devices used for data collection. |
| Software Dependencies | No | The paper mentions software such as "SVM (Chang and Lin 2011)", i.e., LIBSVM, but it does not specify version numbers for these or any other software dependencies, which would be necessary for reproducibility. |
| Experiment Setup | Yes | We study the impact of λ1, λ2 and d2. In Figure 3, we fix λ2 = 1, d2 = 100 and tune λ1. Our model tends to achieve higher accuracies when λ1 is bigger, which means we prefer the zero-sum constraint to hold. Then, we fix λ1 = 10, d2 = 100 and tune λ2. Our model is generally insensitive to λ2; when λ2 = 100, the accuracies tend to drop, possibly because an overly large λ2 makes the loss of f overwhelm the objective function. Finally, we fix λ1 = 10, λ2 = 1 and tune d2. Our model tends to achieve the best accuracies when d2 = 150. In practice, like other dimensionality reduction methods (Jolliffe 2005; Krizhevsky, Sutskever, and Hinton 2012), we suggest tuning d2 empirically. In the following, we fix λ1 = 10, λ2 = 1 and d2 = 150. (A tuning sketch follows this table.) |
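The split protocol quoted in the "Dataset Splits" row is straightforward to reproduce. Below is a minimal sketch of it, assuming a flat array layout for each device's fingerprints; all names here (`split_device_data`, `signals`, `locations`) are hypothetical, since the paper releases no code.

```python
import numpy as np

def split_device_data(signals, locations, labeled_frac=0.5, seed=0):
    """Per location, mark labeled_frac of a device's fingerprints as
    labeled training data; all fingerprints remain usable as test data."""
    rng = np.random.default_rng(seed)
    labeled_mask = np.zeros(len(signals), dtype=bool)
    for loc in np.unique(locations):
        idx = np.flatnonzero(locations == loc)
        rng.shuffle(idx)
        labeled_mask[idx[: int(labeled_frac * len(idx))]] = True
    train = (signals[labeled_mask], locations[labeled_mask])
    test = (signals, locations)  # "100% of its data as test data"
    return train, test
```

Splitting per location rather than globally keeps the labeled set balanced across locations, which matches the paper's phrasing "50% of its data at each location".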
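The "Experiment Setup" cell describes a one-at-a-time tuning procedure: fix two hyperparameters, sweep the third. A minimal sketch follows; `train_and_evaluate` is a hypothetical stand-in that trains the model with the given (λ1, λ2, d2) and returns localization accuracy, and the grids here are illustrative, not taken from the paper.

```python
LAMBDA1_GRID = [0.1, 1, 10, 100]
LAMBDA2_GRID = [0.1, 1, 10, 100]
D2_GRID = [50, 100, 150, 200]

def tune_one_at_a_time(train_and_evaluate):
    # Step 1: fix lambda2 = 1, d2 = 100, sweep lambda1.
    best_l1 = max(LAMBDA1_GRID, key=lambda l1: train_and_evaluate(l1, 1, 100))
    # Step 2: fix lambda1 = 10, d2 = 100, sweep lambda2
    # (the paper fixes lambda1 = 10 for this step).
    best_l2 = max(LAMBDA2_GRID, key=lambda l2: train_and_evaluate(10, l2, 100))
    # Step 3: fix lambda1 = 10, lambda2 = 1, sweep d2.
    best_d2 = max(D2_GRID, key=lambda d2: train_and_evaluate(10, 1, d2))
    return best_l1, best_l2, best_d2  # the paper settles on (10, 1, 150)
```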