Computing Preferences Based on Agents’ Beliefs
Authors: Jian Luo, Fuan Pu, Yulai Zhang, Guiming Luo
AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The paper is a formal, theoretical contribution consisting of definitions, propositions, and illustrative examples, e.g. the 'Satisfiability Degree' preliminaries: 'Let P = {p1, p2, . . . , pn} be a finite and non-empty set of atoms. We use LP to denote the set of all propositional formulae over P, formed from the logical connectives ¬, ∧, and ∨. Let p, q, r, . . . represent atoms, α, β, γ, . . . denote formulae, and Δ, Φ, Ψ, . . . denote finite sets of formulae. The tautology is denoted by ⊤ and ⊥ represents the contradiction (falsity). We start by introducing the concept of satisfiability degree from Luo et al. (Luo, Yin, and Hu 2009).' |
| Researcher Affiliation | Academia | Jian Luo, Fuan Pu, Yulai Zhang, Guiming Luo School of Software, Tsinghua University Tsinghua National Laboratory for Information Science and Technology Beijing 100084, China {j-luo10@mails, pfa12@mails, zhangyl08@mails, gluo@mail}.tsinghua.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any statement about releasing source code or provide a link to a code repository. |
| Open Datasets | No | The paper uses illustrative examples with logical formulas and atoms (e.g., 'Example 1. Let P = {p1, p2, p3}, and then ΩP contains the following 8 interpretations: ω1 = (1, 1, 1), ω2 = (1, 1, 0), ω3 = (1, 0, 1), ω4 = (1, 0, 0), ω5 = (0, 1, 1), ω6 = (0, 1, 0), ω7 = (0, 0, 1), ω8 = (0, 0, 0). For α = (p1 ∨ p2) ∧ ¬p3 and β = ¬p1 ∨ ¬p2 ∨ p3, we have Ωα = {ω2, ω4, ω6}, Ωβ = ΩP \ {ω2}, and Ωα∧β = {ω4, ω6}.'; a short verification sketch follows the table), but it does not specify or provide access information for any publicly available or open datasets used for training or evaluation. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments with datasets; therefore, it does not provide specific dataset split information (e.g., train/validation/test percentages or counts). |
| Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware, thus no hardware specifications are provided. |
| Software Dependencies | No | The paper describes theoretical concepts and does not mention any specific software components or their version numbers used for implementation or experimentation. |
| Experiment Setup | No | The paper is theoretical and does not include any experimental setup details such as hyperparameter values, training configurations, or system-level settings. |
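The quoted Example 1 can be checked mechanically. Below is a minimal Python sketch (not from the paper) that enumerates the eight interpretations over P = {p1, p2, p3}, uses α = (p1 ∨ p2) ∧ ¬p3 and β = ¬p1 ∨ ¬p2 ∨ p3 as reconstructed from the reported model sets, and verifies Ωα, Ωβ, and Ωα∧β. The `sat_degree` helper is an assumption: it treats the satisfiability degree of a formula as the fraction |Ωα| / |ΩP| of interpretations that satisfy it, the ratio-style definition attributed to Luo, Yin, and Hu (2009); all function and variable names are illustrative.

```python
from itertools import product

# Atoms P = {p1, p2, p3}; an interpretation assigns 0 or 1 to each atom.
# product([1, 0], repeat=3) yields them in the same order as Example 1:
# omega_1 = (1, 1, 1), omega_2 = (1, 1, 0), ..., omega_8 = (0, 0, 0).
interpretations = list(product([1, 0], repeat=3))

# Formulas as reconstructed from the reported model sets (an assumption):
# alpha = (p1 OR p2) AND NOT p3,  beta = NOT p1 OR NOT p2 OR p3.
alpha = lambda p1, p2, p3: bool((p1 or p2) and not p3)
beta = lambda p1, p2, p3: bool((not p1) or (not p2) or p3)

def models(formula):
    """Omega_formula: the set of interpretations satisfying the formula."""
    return {w for w in interpretations if formula(*w)}

omega_alpha = models(alpha)    # {(1,1,0), (1,0,0), (0,1,0)} = {w2, w4, w6}
omega_beta = models(beta)      # every interpretation except (1,1,0) = w2
omega_alpha_and_beta = omega_alpha & omega_beta  # {(1,0,0), (0,1,0)} = {w4, w6}

# Assumed ratio definition of satisfiability degree: |Omega_alpha| / |Omega_P|.
def sat_degree(formula):
    return len(models(formula)) / len(interpretations)

print(sat_degree(alpha))   # 0.375  (3 of 8 interpretations)
print(sat_degree(beta))    # 0.875  (7 of 8 interpretations)
print(sat_degree(lambda p1, p2, p3: alpha(p1, p2, p3) and beta(p1, p2, p3)))  # 0.25
```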