Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

The AI Liability Puzzle and A Fund-Based Work-Around

Authors: Olivia J. Erdélyi, Gábor Erdélyi

JAIR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The paper also refrains from confronting the intricacies of the case law of specific jurisdictions for now and, recognizing the importance of this task, leaves this to further research in support of the legal system's incremental adaptation to the novel challenges of present and future AI technologies. In this paper, we restrict the focus of the above sketched AI liability debate by analyzing only the foreseeability concept's ability to serve as a means to limit and attribute legal liability.
Researcher Affiliation | Collaboration | Olivia J. Erdélyi EMAIL University of Canterbury, School of Law, Christchurch 8140, New Zealand; Soul Machines, Auckland 1010, New Zealand. Gábor Erdélyi EMAIL University of Canterbury, School of Mathematics and Statistics, Christchurch 8140, New Zealand.
Pseudocode | No | The paper describes legal concepts, comparative analyses, and proposed regulatory frameworks. It does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | This paper is a theoretical work discussing legal and regulatory frameworks for AI liability. It does not describe a methodology that would require source code, and therefore does not provide any concrete access to source code.
Open Datasets | No | The paper analyzes legal and policy concepts, referencing existing legal literature and real-world incidents as examples. It does not conduct empirical experiments using datasets, and therefore does not provide access information for any open datasets.
Dataset Splits | No | This paper is a theoretical and conceptual analysis of AI liability. It does not involve empirical experiments with datasets, and consequently there is no information regarding dataset splits.
Hardware Specification | No | The paper focuses on legal and policy analysis and does not involve experimental work that would require specific hardware. Therefore, no hardware specifications are provided.
Software Dependencies | No | As a theoretical paper on legal frameworks, it does not detail any software implementations or dependencies with version numbers.
Experiment Setup | No | This paper presents a conceptual and legal analysis. It does not involve conducting experiments, and thus no details on experimental setup or hyperparameters are provided.