Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Multi-Modal Recommendation Unlearning for Legal, Licensing, and Modality Constraints
Authors: Yash Sinha, Murari Mandal, Mohan Kankanhalli
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments demonstrate that MMRecUN outperforms baseline methods across various unlearning requests when evaluated on benchmark MMRS datasets. MMRecUN achieves recall performance improvements of up to 49.85% compared to baseline methods and is up to 1.3× faster than the Gold model, which is trained on the retain set from scratch. MMRecUN offers significant advantages, including superiority in removing target interactions, preserving retained interactions, and zero overhead costs compared to previous methods. |
| Researcher Affiliation | Academia | Yash Sinha¹, Murari Mandal², Mohan Kankanhalli¹ — ¹School of Computing, National University of Singapore; ²RespAI Lab, School of Computer Engineering, KIIT, Bhubaneswar, India. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the method and objectives using mathematical equations and textual descriptions, but it does not include a clearly labeled pseudocode block or algorithm steps. |
| Open Source Code | Yes | Code: https://github.com/MachineUnlearn/MMRecUN |
| Open Datasets | Yes | We use three distinct categories within the Amazon dataset (Hou et al. 2024) that are used to benchmark MMRS. |
| Dataset Splits | Yes | The data interaction history of each user is split 8 : 1 : 1 for training, validation and testing. |
| Hardware Specification | Yes | All experiments are performed on 4x NVIDIA RTX2080 (32GB). |
| Software Dependencies | No | The paper mentions training the model and using standard setups but does not specify particular software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | We initialize the embedding with Xavier initialization of dimension 64, set the regularization coefficient λ_Φ = 10⁻⁴, and batch size B = 2048. For the self-supervised task, we set the temperature to 0.2. For convergence, the early stopping and total epochs are fixed at 20 and 1000, respectively. We use Recall@20 on the validation data as the training-stopping indicator following (Zhang et al. 2022). |
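The quoted dataset-split protocol (each user's interaction history divided 8:1:1 into train/validation/test) can be sketched as below. This is a minimal illustration, not the authors' code: the function name, the per-user shuffling, and the rounding of split sizes are our assumptions.

```python
import random
from collections import defaultdict

def split_interactions(interactions, seed=0):
    """Split each user's interaction history 8:1:1 into train/val/test.

    `interactions` is a list of (user, item) pairs. The 8:1:1 ratio follows
    the paper; shuffling and rounding behavior are assumptions.
    """
    rng = random.Random(seed)
    by_user = defaultdict(list)
    for user, item in interactions:
        by_user[user].append(item)

    train, val, test = [], [], []
    for user, items in by_user.items():
        rng.shuffle(items)
        n_train = int(len(items) * 0.8)
        n_val = int(len(items) * 0.1)
        train += [(user, i) for i in items[:n_train]]
        val += [(user, i) for i in items[n_train:n_train + n_val]]
        test += [(user, i) for i in items[n_train + n_val:]]
    return train, val, test
```

Splitting per user (rather than globally) keeps every user represented in all three sets, which is the usual convention for this kind of interaction-history split.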
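The quoted experiment setup (Xavier initialization, dimension 64, λ_Φ = 10⁻⁴, batch size 2048, temperature 0.2, early-stopping patience 20 over at most 1000 epochs, Recall@20 as the stopping indicator) can be sketched as a small dependency-free configuration. Variable names, the RNG seeding, and the exact patience semantics are our assumptions, not the paper's implementation.

```python
import math
import random

# Hyperparameters quoted from the paper's setup; names are ours.
EMBED_DIM = 64
REG_LAMBDA = 1e-4        # λ_Φ = 10⁻⁴
BATCH_SIZE = 2048
TEMPERATURE = 0.2        # self-supervised temperature
PATIENCE = 20            # early-stopping patience (epochs)
MAX_EPOCHS = 1000

def xavier_uniform(fan_in, fan_out, rng=None):
    """Xavier-uniform init: U(-a, a) with a = sqrt(6 / (fan_in + fan_out))."""
    rng = rng or random.Random(0)
    a = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-a, a) for _ in range(fan_out)] for _ in range(fan_in)]

def should_stop(recall_history, patience=PATIENCE):
    """Stop when validation Recall@20 has not improved for `patience` epochs."""
    if len(recall_history) <= patience:
        return False
    best_before = max(recall_history[:-patience])
    return max(recall_history[-patience:]) <= best_before
```

For example, a run whose best Recall@20 occurred more than 20 epochs ago would trigger `should_stop`, matching the patience-20 criterion quoted above.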