TFWT: Tabular Feature Weighting with Transformer
Authors: Xinhao Zhang, Zaitian Wang, Lu Jiang, Wanfu Gao, Pengfei Wang, Kunpeng Liu
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experimental results across various real-world datasets and diverse downstream tasks show the effectiveness of TFWT and highlight the potential for enhancing feature weighting in tabular data analysis. |
| Researcher Affiliation | Academia | 1Portland State University 2Computer Network Information Center, Chinese Academy of Sciences 3University of Chinese Academy of Sciences, Chinese Academy of Sciences 4Dalian Maritime University 5Jilin University |
| Pseudocode | Yes | Algorithm 1: Training of TFWT (a hedged training-loop sketch follows the table) |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We evaluate the proposed method with four real-world datasets: Amazon Commerce Reviews Set (AM) [Liu, 2011] from UCI... Online Shoppers Purchasing Intention Dataset (OS) [Sakar and Kastro, 2018] from UCI... MAGIC Gamma Telescope Dataset (MA) [Bock, 2007] from UCI... Smoking and Drinking Dataset with body signal (SD) [Her, 2023] from Kaggle... |
| Dataset Splits | No | For each dataset, we randomly selected between 60% and 80% as training data. The paper reports the training proportion but does not specify the percentage or absolute number of samples in a dedicated validation set (see the split sketch below the table). |
| Hardware Specification | Yes | The models were trained on NVIDIA A100. |
| Software Dependencies | No | We implemented TFWT using PyTorch and Scikit-learn. The paper names the software used but does not provide version numbers for PyTorch or Scikit-learn. |
| Experiment Setup | Yes | The initial learning rate was set between 10^-3 and 10^-5. For model regularization, the dropout rate was fixed at 0.2 (these values appear in the configuration sketch below). |
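
Since Algorithm 1 is available only as pseudocode and no source code is released, the following is a minimal, hypothetical sketch of a transformer-based feature weighting training step in PyTorch. All class and variable names (`FeatureWeighter`, `n_features`, the two-layer encoder, the sigmoid weighting head) are our own assumptions, not details from the paper; only the dropout rate of 0.2 and a learning rate inside the reported 10^-3 to 10^-5 range come from the rows above.

```python
# Hypothetical sketch of a TFWT-style training step; the paper's Algorithm 1
# is not public, so every architectural choice here is an assumption.
import torch
import torch.nn as nn

class FeatureWeighter(nn.Module):
    """Transformer encoder that assigns a weight to each tabular feature."""
    def __init__(self, n_features: int, d_model: int = 32):
        super().__init__()
        # Embed each scalar feature into a d_model-dimensional token.
        self.embed = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4,
            dropout=0.2,  # dropout rate reported in the paper
            batch_first=True,
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.to_weight = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features) -> tokens: (batch, n_features, d_model)
        tokens = self.embed(x.unsqueeze(-1))
        weights = torch.sigmoid(self.to_weight(self.encoder(tokens)))
        return x * weights.squeeze(-1)  # reweighted features

n_features, n_classes = 20, 2
weighter = FeatureWeighter(n_features)
head = nn.Linear(n_features, n_classes)        # stand-in downstream model
params = list(weighter.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)  # within the reported 1e-3..1e-5 range
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, n_features)                # dummy batch for illustration
y = torch.randint(0, n_classes, (64,))
optimizer.zero_grad()
loss = loss_fn(head(weighter(x)), y)
loss.backward()
optimizer.step()
```

The sigmoid weighting head is one plausible way to turn encoder outputs into per-feature weights; the paper may use a different mechanism, so treat this only as a reading aid for the table rows, not as a reproduction of TFWT.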
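
The Dataset Splits row reports only a random 60-80% training fraction with no dedicated validation set. Below is a minimal scikit-learn sketch of that protocol on synthetic data; the 70% figure is an arbitrary point within the reported range, not a value from the paper, and the held-out remainder is assumed to serve as the test set.

```python
# Hypothetical split matching the reported protocol: a random 60-80%
# training fraction, with no dedicated validation set described.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

train_frac = 0.7  # assumption: any value in the paper's 0.6-0.8 range
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=train_frac, random_state=0
)
print(X_train.shape, X_test.shape)  # (700, 20) (300, 20)
```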