Deanonymization in the Bitcoin P2P Network
Authors: Giulia Fanti, Pramod Viswanath
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This analysis suggests that Bitcoin's networking protocols (both pre- and post-2015) offer poor anonymity properties on networks with a regular-tree topology (a toy illustration of such a simulation follows the table). We confirm this claim in simulation on a 2015 snapshot of the real Bitcoin P2P network topology. |
| Researcher Affiliation | Academia | Giulia Fanti (gfanti@andrew.cmu.edu) is in the ECE Department at Carnegie Mellon University. Pramod Viswanath (pramodv@illinois.edu) is in the ECE Department at the University of Illinois at Urbana-Champaign. |
| Pseudocode | Yes | Protocol 1 (Appendix A.2.1) |
| Open Source Code | Yes | Code for all simulations is available at https://github.com/gfanti/bitcoin-trickle-diffusion. |
| Open Datasets | Yes | We confirm this claim in simulation on a 2015 snapshot of the real Bitcoin P2P network topology. The snapshot is from [20]: Andrew Miller, James Litton, Andrew Pachulski, Neal Gupta, Dave Levin, Neil Spring, and Bobby Bhattacharjee. Discovering Bitcoin's public topology and influential nodes, 2015. |
| Dataset Splits | No | The paper mentions simulating on a '2015 snapshot of the real Bitcoin P2P network topology' but does not specify any training, validation, or testing splits for datasets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper provides a link to simulation code but does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | No | The paper describes simulation parameters such as the number of trials and the evaluation time t, but it does not specify hyperparameter values or detailed training configurations (e.g., learning rate, batch size, optimizer) typical of experiments involving model training. |
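
The sketch below is a minimal, self-contained illustration (not the authors' code, which lives in the linked repository) of the kind of simulation the table refers to: diffusion spreading on a finite d-regular tree, observed by an eavesdropper that applies a first-timestamp estimator. The tree degree and depth, the unit exponential delay rates, placing the source at the root, and all function names are illustrative assumptions rather than the paper's exact setup.

```python
import heapq
import random
from collections import defaultdict


def regular_tree(degree, depth):
    """Finite d-regular tree: the root has `degree` children and every other
    internal node has `degree - 1`, so each internal node has total degree
    `degree`. Returns an adjacency list keyed by integer node ids."""
    adj = defaultdict(list)
    next_id = 1
    frontier = [(0, 0)]  # (node id, level)
    while frontier:
        node, level = frontier.pop()
        if level == depth:
            continue
        n_children = degree if node == 0 else degree - 1
        for _ in range(n_children):
            child, next_id = next_id, next_id + 1
            adj[node].append(child)
            adj[child].append(node)
            frontier.append((child, level + 1))
    return adj


def diffusion_times(adj, source, rate=1.0):
    """Continuous-time diffusion: each node relays the message to each
    neighbor after an independent Exp(rate) delay. Returns each node's true
    reception time, computed as a first-passage time via lazy Dijkstra with
    randomly drawn edge delays."""
    times = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > times[node]:
            continue  # stale heap entry
        for nbr in adj[node]:
            t_nbr = t + random.expovariate(rate)
            if t_nbr < times.get(nbr, float("inf")):
                times[nbr] = t_nbr
                heapq.heappush(heap, (t_nbr, nbr))
    return times


def eavesdropper_first_timestamp(times, rate=1.0):
    """Eavesdropper adversary connected to every node: it hears the message
    from node v only after an extra independent Exp(rate) relay delay, and
    the first-timestamp estimator guesses the node it heard from first."""
    observed = {v: t + random.expovariate(rate) for v, t in times.items()}
    return min(observed, key=observed.get)


if __name__ == "__main__":
    random.seed(0)
    adj = regular_tree(degree=4, depth=6)  # illustrative tree, not the 2015 snapshot
    trials = 2000
    hits = sum(
        eavesdropper_first_timestamp(diffusion_times(adj, source=0)) == 0
        for _ in range(trials)
    )
    print(f"first-timestamp detection probability ~ {hits / trials:.3f}")
```

The printed fraction is an empirical probability of detection for this particular toy configuration; the paper's reported results come from its own protocols, estimators, and the measured 2015 topology, so the numbers here should not be read as a reproduction of them.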