Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

PrivDNFIS: Privacy-preserving and Efficient Deep Neuro-Fuzzy Inference System

Authors: Hao Ren, Xiao Lan, Rui Tang, Xingshu Chen

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In comprehensive experimental results, PrivDNFIS demonstrates an approximately 1.9x to 4.4x reduction in end-to-end time cost compared to the benchmark." Performance Evaluation, implementation settings: "Experiments are conducted on a computing machine with an Intel(R) Xeon(R) CPU E5-2680 @ 2.70GHz processor with 8 cores, 4 GB RAM, and the Ubuntu 20.04 operating system."
Researcher Affiliation | Academia | Hao Ren 1,2,3, Xiao Lan 1,2,3, Rui Tang 1,2,3,*, Xingshu Chen 1,2,3. 1 School of Cyber Science and Engineering, Sichuan University, Chengdu 610065, China; 2 Key Laboratory of Data Protection and Intelligent Management (Sichuan University), Ministry of Education, China; 3 Cyber Science Research Institute, Sichuan University, Chengdu, China. hao.ren, lanxiao, tangrscu, EMAIL
Pseudocode | Yes | Algorithm 1: Privately compute the hidden layer on CSB
1: Input: the LWE ciphertexts LCT_{ω+,j}, the private input values CT_d̄, the weight matrix M, and the bias vector b.
2: Output: the LWE ciphertexts LCT_{ρj} of ρj for all j ∈ [l].
3: for j ∈ [l] do
4:   for i ∈ [n] do
5:     m[i] ← M[i][j]
6:   end for
7:   b_m ← π(m); CT_α ← CtPtMul(CT_d̄, b_m)
8:   LCT_α ← Extract(CT_α, n)
9:   LCT_β ← CtPtAdd(LCT_α, b[j])
10:  LCT_{ρj} ← CtCtAdd(LCT_β, LCT_{ω+,j})
11: end for
12: return the set of ciphertexts {LCT_{ρj}}, j ∈ [l]
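The control flow of Algorithm 1 can be sketched in plain Python. The integer "ciphertexts" and the helper bodies below are stand-ins of my own devising, not the paper's implementation: the real scheme runs these steps on SEAL RLWE/LWE ciphertexts, where Extract pulls the inner-product coefficient out of an RLWE product rather than summing a list.

```python
# Plain-Python mock of Algorithm 1's loop structure (stand-ins only; the
# actual PrivDNFIS operations act on SEAL ciphertexts, not plaintext lists).

def ct_pt_mul(ct_packed, pt_vec):
    # Ciphertext-plaintext multiply (stand-in: slot-wise product).
    return [c * p for c, p in zip(ct_packed, pt_vec)]

def extract(ct, n):
    # Stand-in for extracting the coefficient holding the inner product.
    return sum(ct[:n])

def ct_pt_add(lct, pt):
    # LWE ciphertext + plaintext scalar.
    return lct + pt

def ct_ct_add(lct_a, lct_b):
    # LWE ciphertext + LWE ciphertext.
    return lct_a + lct_b

def hidden_layer(ct_d, M, b, lct_omega):
    """rho_j = <d, M[:, j]> + b[j] + omega_j for every hidden unit j in [l]."""
    n, l = len(M), len(M[0])
    rhos = []
    for j in range(l):
        m = [M[i][j] for i in range(n)]            # lines 4-6: column j of M
        ct_alpha = ct_pt_mul(ct_d, m)              # line 7: CtPtMul
        lct_alpha = extract(ct_alpha, n)           # line 8: Extract
        lct_beta = ct_pt_add(lct_alpha, b[j])      # line 9: add bias b[j]
        rhos.append(ct_ct_add(lct_beta, lct_omega[j]))  # line 10: add omega_j
    return rhos
```

For example, with identity weights, `hidden_layer([1, 2], [[1, 0], [0, 1]], [10, 20], [100, 200])` returns `[111, 222]`, matching input + bias + omega per unit.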
Open Source Code | No | The paper mentions using and adapting open-source code for third-party tools such as SEAL and the ciphertext extraction functions, but it does not provide an explicit statement or link for an open-source release of the PrivDNFIS implementation described in the paper.
Open Datasets | Yes | "When processing 200 queries on CIFAR-100 (Krizhevsky, A.; Nair, V.; and Hinton, G. 2013), PrivDNFIS takes 1194.62s in total." For the two testing datasets CIFAR-10 and CIFAR-100, the accuracy of the non-private scheme DCNFIS (Yeganejou et al. 2023) and PrivDNFIS is the same.
Dataset Splits | Yes | "For two testing datasets CIFAR-10 and CIFAR-100..." CIFAR-10 and CIFAR-100 are well-known benchmarks that come with predefined standard training and testing splits, which are implicitly used in the experiments.
Hardware Specification | Yes | "Experiments are conducted on a computing machine with an Intel(R) Xeon(R) CPU E5-2680 @ 2.70GHz processor with 8 cores, 4 GB RAM, and the Ubuntu 20.04 operating system."
Software Dependencies | No | The paper mentions the "Ubuntu 20.04" operating system and "the RLWE/LWE FHE library SEAL (Laine, K.; Cruz, R.; Boemer, F.; Angelou, N.; et al. 2015)" without specifying a version number for SEAL, and it does not list other key software components with their versions.
Experiment Setup | No | The paper describes the cryptographic parameters used to ensure security (e.g., 128-bit security for SEAL) and how the different operations are performed in a privacy-preserving manner, but it does not provide experimental setup details for training the DNFIS model, such as learning rates, batch sizes, number of epochs, or optimizer settings.