Fair Kernel K-Means: from Single Kernel to Multiple Kernel
Authors: Peng Zhou, Rongwen Li, Liang Du
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | At last, we conduct extensive experiments on both the single kernel and multiple kernel settings to compare the proposed methods with state-of-the-art methods to demonstrate their effectiveness. |
| Researcher Affiliation | Academia | Peng Zhou School of Computer Science and Technology Anhui University Hefei, 230601 zhoupeng@ahu.edu.cn; Rongwen Li School of Computer Science and Technology Anhui University Hefei, 230601 e22301284@stu.ahu.edu.cn; Liang Du School of Computer and Information Technology Shanxi University Taiyuan, 237016 duliang@sxu.edu.cn |
| Pseudocode | Yes | Appendix A shows the pseudo-codes of FKKM and FMKKM, respectively. |
| Open Source Code | Yes | Our code is available at https://github.com/rongwenli/NeurIPS24-FMKKM. |
| Open Datasets | Yes | We conduct experiments on benchmark data sets which are widely used in fair clustering, including D&S [2], HAR [3], Jaffe [29], MNIST-USPS [20], Credit Card [52] and K1b [53]. |
| Dataset Splits | No | The paper does not specify exact percentages or sample counts for training, validation, and test splits. It mentions that "All experiments are repeated 10 times and the average results are reported," but does not detail the data partitioning. |
| Hardware Specification | Yes | All experiments are conducted on the 12th Gen Intel(R) Core(TM) i7-12700 with 32 GB RAM. |
| Software Dependencies | No | The paper mentions using a "Gaussian kernel" and various machine learning methods, but it does not specify any software libraries (e.g., TensorFlow, PyTorch) or their version numbers. |
| Experiment Setup | Yes | For the kernel methods (i.e., our FKKM and KKM), we use a Gaussian kernel with a bandwidth parameter fixing to 0.5 D, where D is the average distance between samples. ... we search λ as λ = 1, 2, . . . , by observing the corresponding MNCE. When the MNCE gets stable, i.e., the change of MNCE is smaller than 0.005, we stop the searching and use the current λ. |
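The kernel construction and λ-tuning rule quoted above can be sketched as follows. This is a minimal illustration, not the authors' released code: the function names are ours, the Gaussian kernel is assumed to use the common exp(-d²/(2σ²)) convention with σ = 0.5·D (D being the average pairwise distance), and `run_clustering` stands in for one full clustering run returning an MNCE score.

```python
import numpy as np

def gaussian_kernel(X, bandwidth_factor=0.5):
    """Gaussian (RBF) kernel with bandwidth fixed to 0.5*D, where D is
    the average pairwise distance between samples, as in the paper's
    experiment setup. Kernel convention exp(-d^2 / (2 sigma^2)) is an
    assumption; the paper does not state the exact formula."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    d2 = np.maximum(d2, 0.0)          # guard against tiny negatives
    dist = np.sqrt(d2)
    n = X.shape[0]
    D = dist.sum() / (n * (n - 1))    # average distance over distinct pairs
    sigma = bandwidth_factor * D
    return np.exp(-d2 / (2.0 * sigma ** 2))

def search_lambda(run_clustering, tol=0.005, max_lam=100):
    """Search lambda = 1, 2, ... and stop once MNCE stabilizes, i.e. the
    change between consecutive lambdas drops below `tol` (0.005 in the
    paper). `run_clustering(lam)` is a hypothetical callback that runs
    the clustering with that lambda and returns its MNCE."""
    prev = None
    for lam in range(1, max_lam + 1):
        mnce = run_clustering(lam)
        if prev is not None and abs(mnce - prev) < tol:
            return lam
        prev = mnce
    return max_lam                    # fallback if MNCE never stabilizes
```

The stopping rule mirrors the quoted setup: rather than grid-searching λ against held-out labels, tuning stops as soon as the fairness metric (MNCE) plateaus.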