Personalized LoRA for Human-Centered Text Understanding
Authors: You Zhang, Jin Wang, Liang-Chih Yu, Dan Xu, Xuejie Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on four benchmark datasets show that the proposed method outperforms existing methods in full-, few-, and zero-shot learning scenarios for the HCTU task, despite having fewer trainable parameters (a hedged background sketch of a generic LoRA layer follows the table). |
| Researcher Affiliation | Academia | You Zhang¹, Jin Wang¹*, Liang-Chih Yu²*, Dan Xu¹, Xuejie Zhang¹; ¹School of Information Science and Engineering, Yunnan University, Yunnan, P.R. China; ²Department of Information Management, Yuan Ze University, Taiwan |
| Pseudocode | No | The paper describes its methods and formulations using natural language and mathematical equations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | For reproducibility, the code for this paper is available at https://github.com/yoyo-yun/PLoRA. |
| Open Datasets | Yes | The datasets used are IMDB, YELP, GDRD, and PPR, in each of which samples are associated with individual users. To simulate complex, practical situations in real-world applications, each dataset is divided into two parts, DA and DB, where DA contains far more samples than DB; DB is intended to simulate cold-start scenarios, as formulated in the Methodology section. |
| Dataset Splits | Yes | Both DA and DB are further split into train, dev, and test sets for the experiments (see the split sketch after the table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using pre-trained language models like BERT, RoBERTa, and Flan-T5, but it does not specify any software dependencies with version numbers (e.g., PyTorch version, specific library versions). |
| Experiment Setup | Yes | Implementation details and hyperparameter settings needed to reproduce the experiments are reported in the Appendices. |
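
For background on the method named in the title: the paper builds on low-rank adaptation (LoRA) for per-user personalization. The table above does not define LoRA, so here is a minimal, hedged sketch of a *generic* LoRA linear layer (in the style of Hu et al., 2021), not the paper's PLoRA variant; the class and parameter names are illustrative only.

```python
# Illustrative sketch of a generic LoRA layer; NOT the paper's PLoRA method.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Generic LoRA: y = x W^T + (alpha/r) * (x A^T) B^T, with W frozen."""

    def __init__(self, in_features: int, out_features: int,
                 r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pre-trained weight; only the low-rank factors A and B train.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02,
                                   requires_grad=False)
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.02)  # down-projection
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))        # up-projection
        # B is zero-initialized, so the adapter starts as a no-op.
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight.T + self.scaling * ((x @ self.lora_A.T) @ self.lora_B.T)
```

Only `lora_A` and `lora_B` receive gradients, which is why such methods have far fewer trainable parameters than full fine-tuning; for example, `LoRALinear(768, 768)(torch.randn(4, 768))` returns a `(4, 768)` tensor while training just `2 * 8 * 768` values per layer.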
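
The Open Datasets and Dataset Splits rows describe splitting each corpus by user into a large part DA and a small cold-start part DB, each further divided into train/dev/test. The following is a minimal sketch of that procedure under assumed details: the per-user sample threshold, the `user_id` field name, and the split ratios are hypothetical, since the table does not specify them.

```python
# Hypothetical sketch of the DA / DB split: users with many samples go to DA,
# the rest to DB (cold-start); each part is then split into train/dev/test.
import random
from collections import defaultdict

def split_da_db(samples, min_samples=20, ratios=(0.8, 0.1, 0.1), seed=42):
    # Group samples by their (assumed) user identifier field.
    by_user = defaultdict(list)
    for s in samples:
        by_user[s["user_id"]].append(s)

    # Users with at least `min_samples` examples form DA; the rest form DB.
    d_a, d_b = [], []
    for items in by_user.values():
        (d_a if len(items) >= min_samples else d_b).extend(items)

    def train_dev_test(data):
        rng = random.Random(seed)
        data = data[:]
        rng.shuffle(data)
        i = int(ratios[0] * len(data))
        j = i + int(ratios[1] * len(data))
        return data[:i], data[i:j], data[j:]

    # Each part gets its own train/dev/test split.
    return train_dev_test(d_a), train_dev_test(d_b)
```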