Taking Situation-Based Privacy Decisions: Privacy Assistants Working with Humans
Authors: Nadin Kökciyan, Pınar Yolum
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate various aspects of the model using a real-life data set and report adjustments that are needed to serve different types of users well. We implement the proposed agent and experimentally evaluate its workings over a case study that uses an anonymized IoT dataset [Naeini et al., 2017]. |
| Researcher Affiliation | Academia | Nadin Kökciyan (University of Edinburgh) and Pınar Yolum (Utrecht University); nadin.kokciyan@ed.ac.uk, p.yolum@uu.nl |
| Pseudocode | Yes | Algorithm 1: decide(p, ψ, θ, γ) |
| Open Source Code | Yes | This material together with our code base is available online: https://git.ecdf.ed.ac.uk/nkokciya/pas-privacy |
| Open Datasets | Yes | We focus on the application layer and study the workings of the model using an anonymized dataset [Naeini et al., 2017], which has been collected through surveys with users of IoT devices. |
| Dataset Splits | Yes | We use 366 scenarios from remaining surveys to: (i) generate contexts using clustering techniques, (ii) train a multi-label classifier to infer multiple contexts for unseen privacy scenarios. We have tried several classification models (SVM models with linear/rbf kernel, logistic regression models and so on), applied 5-fold cross-validation for model selection, and chose the model performing the best on average. |
| Hardware Specification | No | The paper does not specify any particular hardware components such as GPU models, CPU types, or memory details used for the experiments. |
| Software Dependencies | No | The paper mentions using "well-known Python libraries such as NLTK, Gensim and scikit-learn" but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | Table 1 shows results from our experiments with different conflict thresholds (0.1, 0.2, 0.3, 0.4) with a fixed set of 250 experiences. We report the accuracy results based on varying conflict ratios (0.1, 0.2, 0.3, 0.4). |
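The "Dataset Splits" row above describes the context-inference step: contexts are derived by clustering, and a multi-label classifier is selected by 5-fold cross-validation among candidates such as SVMs (linear/rbf kernels) and logistic regression. The sketch below illustrates that model-selection loop with scikit-learn; the toy scenarios, context labels, and TF-IDF features are assumptions for illustration only, not the authors' pipeline (their code, which also uses NLTK and Gensim, is in the linked repository).

```python
# Minimal sketch of the model-selection step: compare several multi-label
# classifiers with 5-fold cross-validation and keep the best on average.
# All data below is a toy stand-in for the 366 survey scenarios.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import SVC

scenarios = [
    "a camera in a department store tracks your position",
    "a smart thermostat at home logs when rooms are occupied",
    "a fitness tracker shares your heart rate with your employer",
    "a voice assistant in a friend's house records conversations",
    "a public library scans faces to count visitors",
    "a connected car reports your location to an insurer",
    "a smart doorbell uploads video of passers-by",
    "a hospital sensor monitors patients in the waiting room",
    "a workplace app shares your calendar with managers",
    "an office badge logs when employees enter meeting rooms",
]
labels = [
    ["public", "tracking"], ["home"], ["health", "work"], ["home"],
    ["public"], ["tracking"], ["public", "tracking"], ["health"],
    ["work"], ["work", "tracking"],
]

# Encode the (possibly multiple) context labels per scenario as an indicator matrix.
Y = MultiLabelBinarizer().fit_transform(labels)

# Candidate models mentioned in the quote: SVMs with linear/rbf kernels, logistic regression.
candidates = {
    "svm-linear": OneVsRestClassifier(SVC(kernel="linear")),
    "svm-rbf": OneVsRestClassifier(SVC(kernel="rbf")),
    "logreg": OneVsRestClassifier(LogisticRegression(max_iter=1000)),
}

# 5-fold cross-validation; keep the model with the best mean score.
scores = {}
for name, clf in candidates.items():
    pipeline = make_pipeline(TfidfVectorizer(), clf)
    scores[name] = cross_val_score(pipeline, scenarios, Y, cv=5, scoring="f1_micro").mean()

best = max(scores, key=scores.get)
print(f"best model: {best} (mean micro-F1 = {scores[best]:.2f})")
```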
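The "Experiment Setup" row reports accuracy under conflict thresholds (0.1, 0.2, 0.3, 0.4) over a fixed pool of 250 experiences. The sketch below shows what such a threshold sweep can look like when the assistant defers to the user once disagreement among relevant past experiences exceeds the threshold; the conflict measure, decision rule, and synthetic data are assumptions for illustration and do not reproduce the paper's Algorithm 1, decide(p, ψ, θ, γ).

```python
# Illustrative threshold sweep: a fixed pool of 250 (context, decision)
# experiences, evaluated under several conflict thresholds. "Conflict" here
# is the fraction of past experiences that dissent from the majority
# decision for the same context -- an assumption, not the paper's definition.
import random
from collections import Counter

random.seed(0)

# Synthetic ground-truth decision per context, plus label noise, so that
# accuracy can be measured as in the paper's Table 1 (values are synthetic).
TRUE_DECISION = {"home": "deny", "work": "deny", "public": "share",
                 "health": "deny", "tracking": "share"}
NOISE = 0.2  # fraction of experiences whose decision is flipped

def flip(d):
    return "share" if d == "deny" else "deny"

experiences = []
for _ in range(250):
    context = random.choice(list(TRUE_DECISION))
    decision = TRUE_DECISION[context]
    experiences.append((context, flip(decision) if random.random() < NOISE else decision))

def decide(context, experiences, threshold):
    """Return the majority decision for the context, unless the conflict
    ratio exceeds the threshold, in which case defer to the user."""
    relevant = [d for c, d in experiences if c == context]
    if not relevant:
        return "ask-user"
    counts = Counter(relevant)
    majority, majority_count = counts.most_common(1)[0]
    conflict_ratio = 1 - majority_count / len(relevant)
    return majority if conflict_ratio <= threshold else "ask-user"

# Sweep the conflict thresholds used in the experiments.
for theta in (0.1, 0.2, 0.3, 0.4):
    preds = {c: decide(c, experiences, theta) for c in TRUE_DECISION}
    decided = [c for c, p in preds.items() if p != "ask-user"]
    correct = [c for c in decided if preds[c] == TRUE_DECISION[c]]
    accuracy = len(correct) / len(decided) if decided else float("nan")
    print(f"theta={theta}: decided {len(decided)}/5 contexts, accuracy={accuracy:.2f}")
```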