Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
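To make the validation step concrete, the sketch below shows one standard way to score an automated classifier against a manually labeled set: raw accuracy plus Cohen's kappa for chance-corrected agreement. All names and data here are illustrative assumptions; the actual metrics and methodology are those described in [1].

```python
# Hypothetical sketch: scoring LLM-assigned labels against manual labels.
# The label values and helper names are illustrative, not from [1].
from collections import Counter

def accuracy(llm_labels, manual_labels):
    """Fraction of items where the LLM label matches the manual label."""
    assert len(llm_labels) == len(manual_labels)
    hits = sum(a == b for a, b in zip(llm_labels, manual_labels))
    return hits / len(llm_labels)

def cohens_kappa(llm_labels, manual_labels):
    """Chance-corrected agreement between the two annotation sources."""
    n = len(llm_labels)
    observed = accuracy(llm_labels, manual_labels)
    llm_counts = Counter(llm_labels)
    man_counts = Counter(manual_labels)
    # Expected agreement if both sources labeled independently at random
    # according to their marginal label frequencies.
    expected = sum(
        (llm_counts[c] / n) * (man_counts[c] / n)
        for c in set(llm_counts) | set(man_counts)
    )
    return (observed - expected) / (1 - expected)

llm = ["Yes", "No", "No", "Yes", "No", "No"]        # automated labels
manual = ["Yes", "No", "Yes", "Yes", "No", "No"]    # manual gold labels
print(accuracy(llm, manual))                        # 5 of 6 items agree
print(cohens_kappa(llm, manual))
```

Kappa is the more informative number here because a "No"-heavy variable (most papers lack, say, hardware specs) can yield high raw accuracy even for a weak classifier.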

The Right to Obscure: A Mechanism and Initial Evaluation

Authors: Eric Hsin-Chun Huang, Jaron Lanier, Yoav Shoham

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We implement a proof-of-concept search engine following the proposed mechanism, and conduct experiments to explore the influences it might have on users' impressions of different data subjects.
Researcher Affiliation | Collaboration | Eric H. Huang (Stanford University, EMAIL); Jaron Lanier (Microsoft Research, EMAIL); Yoav Shoham (Stanford University, EMAIL)
Pseudocode | No | The paper describes protocols in natural language with numbered steps but does not include structured pseudocode or an algorithm block.
Open Source Code | No | The paper states that a proof-of-concept search engine was implemented but does not provide access to its source code.
Open Datasets | No | The paper describes using human participants and manually selected data subjects, but does not refer to a publicly available dataset with concrete access information (e.g., a link, DOI, or formal citation to a published dataset).
Dataset Splits | No | The paper describes an experiment with human participants and different treatments, but it does not specify training, validation, or test splits of the kind used in machine learning or other computational experiments.
Hardware Specification | No | The paper mentions implementing a custom search engine and recruiting searchers but does not specify any hardware details (e.g., CPU, GPU, memory) used for the experiments.
Software Dependencies | No | The paper mentions using a "search service API provided by Microsoft Bing" but does not list any other software dependencies with specific version numbers needed for reproducibility (e.g., programming languages, libraries, frameworks).
Experiment Setup | Yes | We implemented the proposed mechanism in a custom search engine using a search service API provided by Microsoft Bing. We recruit searchers from an online crowdsourcing platform, and ask searchers about their impressions about a given data subject before conducting any searches. We then ask them to research the person with a specified agenda using our custom-built search engine. Finally, we ask them again about their impressions about the person after they've conducted searches on our search engine. ... Each treatment is completed by approximately 30 participants recruited on Amazon Mechanical Turk, and pays $0.15 USD. To ensure work quality, we imposed standard qualification requirements of at least 95% approval rate on over 100 tasks. We also ensured that no participant researched the same data subject more than once.
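The experiment-setup protocol combines two screening rules: a standard crowdworker qualification (at least 95% approval rate on over 100 tasks) and a no-repeat constraint (no participant researches the same data subject more than once, across treatments). A minimal sketch of that assignment logic, assuming hypothetical class and field names since the paper's code is not released:

```python
# Illustrative sketch of the assignment constraints described above.
# All identifiers (TaskAssigner, treatment names, worker ids) are
# hypothetical; the paper does not publish its implementation.
from collections import defaultdict

class TaskAssigner:
    def __init__(self):
        # participant id -> set of data subjects already researched
        self.seen = defaultdict(set)
        # (treatment, data subject) -> number of completed assignments
        self.completed = defaultdict(int)

    def can_assign(self, participant, subject, approval_rate, n_tasks):
        # Qualification: >= 95% approval rate on over 100 tasks.
        qualified = approval_rate >= 0.95 and n_tasks > 100
        # No-repeat rule: the same participant never sees a subject twice.
        return qualified and subject not in self.seen[participant]

    def assign(self, participant, treatment, subject, approval_rate, n_tasks):
        if not self.can_assign(participant, subject, approval_rate, n_tasks):
            return False
        self.seen[participant].add(subject)
        self.completed[(treatment, subject)] += 1
        return True

assigner = TaskAssigner()
assert assigner.assign("w1", "treatment_A", "subject_1", 0.97, 250)
# The same worker cannot research subject_1 again, even in another treatment.
assert not assigner.assign("w1", "treatment_B", "subject_1", 0.97, 250)
# An unqualified worker (too few completed tasks) is rejected.
assert not assigner.assign("w2", "treatment_A", "subject_1", 0.97, 50)
```

Keying the `seen` set on the data subject rather than on the (treatment, subject) pair is what enforces the paper's stated rule that repeats are blocked across all treatments, not just within one.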