Trustless Audits without Revealing Data or Models

Authors: Suppakit Waiwitlikhit, Ion Stoica, Yi Sun, Tatsunori Hashimoto, Daniel Kang

ICML 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We now evaluate ZKAUDIT-T, including the performance of performing SGD in ZK-SNARKs, the end-to-end accuracy and costs of ZKAUDIT-T, and the effect of our optimizations. [...] We benchmarked SGD and ZKAUDIT-T on image classification and a recommender system on MovieLens (Harper & Konstan, 2015)." |
| Researcher Affiliation | Academia | ¹Stanford University, ²UC Berkeley, ³University of Chicago, ⁴UIUC. |
| Pseudocode | No | The paper describes its methods and procedures in narrative text and equations, but does not include any clearly labeled "Pseudocode" or "Algorithm" blocks or figures. |
| Open Source Code | Yes | "Code. We have anonymized our code here: https://anonymous.4open.science/r/zkml72D8/README.md" |
| Open Datasets | Yes | "We used the following datasets: 1. dermnet (Shanthi et al., 2020): [...] 2. flowers-102 (Nilsback & Zisserman, 2008): [...] 3. cars (Krause et al., 2013): [...] 4. movielens (Harper & Konstan, 2015): [...] We further evaluated ZKAUDIT-T on CIFAR-10 and MNIST." |
| Dataset Splits | No | The paper mentions training and testing sets, but does not provide explicit details on validation splits (e.g., percentages, sample counts, or the method used to create them). |
| Hardware Specification | Yes | "Hardware. We use the Amazon Web Services (AWS) g4dn.8xlarge instance type for all experiments." |
| Software Dependencies | No | The paper mentions software such as halo2 and PyTorch, but does not specify version numbers or any other software dependencies with version details. |
| Experiment Setup | No | The paper mentions varying hyperparameters and using different MobileNet configurations, but does not provide specific values for hyperparameters such as learning rate, batch size, or number of epochs, nor does it detail the specific configuration used for each experiment in the main text. |