Joint Attribute and Model Generalization Learning for Privacy-Preserving Action Recognition
Authors: Duo Peng, Li Xu, Qiuhong Ke, Ping Hu, Jun Liu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness and generalization of the proposed framework compared to state-of-the-arts. |
| Researcher Affiliation | Academia | Duo Peng (SUTD, Singapore, duo_peng@mymail.sutd.edu.sg); Li Xu (SUTD, Singapore, li_xu@mymail.sutd.edu.sg); Qiuhong Ke (Monash University, Australia, Qiuhong.Ke@monash.edu); Ping Hu (UESTC, China, chinahuping@gmail.com); Jun Liu (SUTD, Singapore, jun_liu@sutd.edu.sg) |
| Pseudocode | Yes | Algorithm 1: Overall Training Scheme |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-sourcing of the described methodology's code. |
| Open Datasets | Yes | We conduct experiments using two benchmarks. The first benchmark, HMDB51-VISPR, is comprised of HMDB51 [31] dataset and VISPR [30] dataset. The second benchmark, UCF101-VISPR, consists of UCF101 [29] dataset and VISPR [30] dataset. |
| Dataset Splits | Yes | Specifically, we first construct a support set for virtual training, and a query set for virtual testing. ... On each benchmark, we construct the support set with the videos containing 60% of the privacy attributes in the training data Xtrain, and use the remaining training data to construct the query set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions various models used (e.g., Image Transformation model, C3D, MobileNet-V2, UNet, R3D-18, ResNet-50) but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We set γ (in Eq. 1) as 0.4, the learning rate α for virtual training (in Eq. 5) as 5e-4, and the learning rate β for meta-optimization (in Eq. 8) as 1e-4. |
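
The support/query construction quoted in the Dataset Splits row can be sketched as follows. This is a hedged illustration, not the authors' released code: videos containing any of a randomly chosen 60% of the privacy attributes form the support set (virtual training), and the remaining training videos form the query set (virtual testing). The video-record format (a dict with an `attrs` set) is a hypothetical stand-in.

```python
import random

def split_by_attributes(videos, attributes, support_frac=0.6, seed=0):
    """Partition training videos into support/query sets by privacy attributes.

    `videos` is a list of dicts, each with an "attrs" set of attribute names
    (hypothetical schema). 60% of the attributes are assigned to the support
    set; videos containing any of those attributes go to the support set,
    and all remaining videos form the query set.
    """
    rng = random.Random(seed)
    attrs = sorted(attributes)
    rng.shuffle(attrs)
    n_support = int(len(attrs) * support_frac)
    support_attrs = set(attrs[:n_support])
    support = [v for v in videos if v["attrs"] & support_attrs]
    query = [v for v in videos if not (v["attrs"] & support_attrs)]
    return support, query, support_attrs
```

Every training video lands in exactly one of the two sets, so the split is a clean partition of the training data, as the paper describes.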
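
The two learning rates in the Experiment Setup row (α = 5e-4 for virtual training in Eq. 5, β = 1e-4 for meta-optimization in Eq. 8) suggest a standard meta-learning update. Below is a minimal first-order sketch of such a virtual-train/virtual-test loop on a scalar parameter; the gradient functions are toy stand-ins, not the paper's actual objectives.

```python
def meta_step(theta, support_grad, query_grad_fn, alpha=5e-4, beta=1e-4):
    """One meta-learning step with virtual training and meta-optimization.

    - Virtual training (inner step): move theta along the support-set
      gradient with learning rate alpha.
    - Virtual testing: evaluate the query-set gradient at the virtually
      updated parameters (first-order approximation).
    - Meta-optimization (outer step): update the real parameters with the
      query gradient and learning rate beta.
    """
    theta_virtual = theta - alpha * support_grad
    g_query = query_grad_fn(theta_virtual)
    return theta - beta * g_query
```

With a simple quadratic loss on both sets, repeated calls to `meta_step` drive the parameter toward the shared minimum, which is the behavior the virtual-train/virtual-test scheme relies on.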