Recovering AES Keys with a Deep Cold Boot Attack

Authors: Itamar Zimerman, Eliya Nachmani, Lior Wolf

ICML 2021

Reproducibility Variable Result LLM Response
Research Type Experimental We trained our proposed architecture with two types of DRAM memory corruption processes: (i) the theoretical model, where δ1 = 0, and (ii) a more realistic model, where δ1 = 0.001. For each model, we test with different corruption rates δ0 ∈ [0.40, 0.72] for the theoretical model and δ0 ∈ [0.50, 0.85] for the realistic model.
Researcher Affiliation Collaboration Tel Aviv University; Facebook AI Research.
Pseudocode No The paper describes algorithmic steps and methods within the text (e.g., in Section 4), but it does not include a dedicated figure, block, or section explicitly labeled 'Pseudocode' or 'Algorithm'.
Open Source Code No The paper does not include an unambiguous statement that the authors are releasing their own source code for the methodology described, nor does it provide a direct link to such a repository. It only refers to third-party tools used.
Open Datasets No The training set contains generated random AES-256 keys. No concrete access information (link, DOI, repository name, or formal citation to an established public dataset) is provided for the dataset used.
Dataset Splits No The paper mentions 'training set' and 'test set' and describes a random generation process for the training data's corruption rates. However, it does not explicitly state the use of a distinct 'validation' set or provide specific quantitative splits (percentages, sample counts) or references to predefined splits for training, validation, and test datasets.
Hardware Specification No The paper does not provide any specific hardware details such as GPU or CPU models, processor types, or memory amounts used for running the experiments. It does not mention any specific machines or cloud resources with their specifications.
Software Dependencies No The paper mentions the use of 'Adam optimizer' and 'WBO Solver' (with a link to LogicNG library), but it does not provide specific version numbers for these or any other key software libraries or dependencies (e.g., Python, PyTorch, TensorFlow) that would be needed for reproducibility.
Experiment Setup Yes During training, we use a batch size of 4, a learning rate of 1e-4, and an Adam optimizer (Kingma & Ba, 2014). The number of iterations for the neural belief propagation was L = 3. The parameter nl was 30 and nh was 0; we multiplied the input l by a constant scaling factor of 0.12. The S-box network Snn is trained with the Adam optimizer with a batch size of 32 and a learning rate of 1e-3.
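The asymmetric corruption model the paper trains against (δ0 = probability a 1-bit decays to 0, δ1 = probability a 0-bit flips to 1, with δ1 = 0 in the theoretical model) is simple to reproduce on generated random AES-256 keys. A minimal sketch, assuming a bitwise independent-flip model; the function and parameter names are illustrative, not taken from the paper's code:

```python
import secrets
import random

def corrupt_key(key_bytes, delta0, delta1, rng=random):
    """Apply an asymmetric cold-boot bit-decay model to a key.

    delta0: probability that a 1-bit decays to 0.
    delta1: probability that a 0-bit flips to 1 (0 in the theoretical model,
            0.001 in the realistic model described in the paper).
    """
    corrupted = bytearray()
    for byte in key_bytes:
        out = 0
        for bit_pos in range(8):
            bit = (byte >> bit_pos) & 1
            if bit == 1 and rng.random() < delta0:
                bit = 0  # decay toward ground state
            elif bit == 0 and rng.random() < delta1:
                bit = 1  # rare flip in the opposite direction
            out |= bit << bit_pos
        corrupted.append(out)
    return bytes(corrupted)

# Random AES-256 key (32 bytes), matching the paper's generated training data.
key = secrets.token_bytes(32)
# Realistic model: delta0 drawn from [0.50, 0.85], delta1 = 0.001.
noisy = corrupt_key(key, delta0=0.60, delta1=0.001)
```

Sampling δ0 uniformly from the reported ranges per training example would mimic the random generation process the audit describes for the training data's corruption rates.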
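The reported hyperparameters can be collected into a training configuration. A sketch assuming PyTorch; the model objects are placeholder stubs, since the paper's neural belief-propagation decoder and S-box network architectures are not reproduced here:

```python
import torch

# Placeholder for the neural belief-propagation decoder
# (L = 3 message-passing iterations in the paper); a real
# implementation would replace this stub.
model = torch.nn.Linear(256, 256)

# Main decoder training: Adam, batch size 4, learning rate 1e-4.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
BATCH_SIZE = 4
BP_ITERATIONS = 3    # L, number of belief-propagation iterations
INPUT_SCALE = 0.12   # constant scaling factor applied to the input l

# S-box network Snn: Adam, batch size 32, learning rate 1e-3.
sbox_net = torch.nn.Linear(8, 8)  # placeholder architecture
sbox_optimizer = torch.optim.Adam(sbox_net.parameters(), lr=1e-3)
SBOX_BATCH_SIZE = 32
```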