Automated Negotiating Agents Competition (ANAC)
Authors: Catholijn Jonker, Reyhan Aydogan, Tim Baarslag, Katsuhide Fujita, Takayuki Ito, Koen Hindriks
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The annual International Automated Negotiating Agents Competition (ANAC) is used by the automated negotiation research community to benchmark and evaluate its work and to challenge itself. The benchmark problems and evaluation results, as well as the protocols and strategies developed, are available to the wider research community. Negotiating agents designed using heuristic approaches need extensive evaluation, typically through simulations and empirical analysis, since it is usually impossible to predict precisely how the system and the constituent agents will behave in a wide variety of circumstances. |
| Researcher Affiliation | Academia | Catholijn M. Jonker, Reyhan Aydoğan, Tim Baarslag, Katsuhide Fujita, Takayuki Ito, Koen Hindriks. Affiliations: Interactive Intelligence Group, Delft University of Technology, The Netherlands; Computer Science, Özyeğin University, Istanbul, Turkey; Centrum Wiskunde & Informatica (CWI), The Netherlands; Department of Computer Science and Engineering, Nagoya Institute of Technology, Japan; Institute of Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan. |
| Pseudocode | No | The information is insufficient. The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The information is insufficient. The paper refers to platforms like GENIUS (http://ii.tudelft.nl/genius), BANDANA (http://www.iiia.csic.es/~davedejonge/bandana/), and IAGO (http://people.ict.usc.edu/~mell/IAGO/) with their URLs, which are tools used in the competition. However, it does not state that the authors are releasing source code for the overview and analysis presented in this specific paper. |
| Open Datasets | No | The information is insufficient. The paper describes the characteristics of negotiation scenarios and domains used in the ANAC competition, and it mentions collecting and developing a 'benchmark of negotiation scenarios', but it does not provide concrete access information (specific link, DOI, repository name, or formal citation with authors/year) for a publicly available or open dataset used for training or evaluation in the context of this paper's analysis. |
| Dataset Splits | No | The information is insufficient. The paper describes various aspects of the negotiation scenarios and competition setup but does not provide specific details on dataset splits (e.g., percentages, sample counts, or methodology) for training, validation, or testing. |
| Hardware Specification | No | The information is insufficient. The paper does not provide specific hardware details (exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running any experiments or simulations discussed. |
| Software Dependencies | No | The information is insufficient. The paper mentions frameworks like GENIUS, IAGO, and BANDANA, but it does not provide specific version numbers for these or any other ancillary software components needed to replicate any experimental setup. |
| Experiment Setup | Yes | In all competitions we use a deadline. ... In ANAC 2010 each agent had three minutes to deliberate. ... From ANAC 2011 onward, the agents share a time window of three minutes. As of 2011, discount factors are frequently part of the scenarios. |
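The deadline and discount-factor mechanics quoted in the Experiment Setup row can be sketched in a few lines. The sketch below assumes the standard GENIUS-style convention used in ANAC scenarios, where the discounted utility of an agreement with undiscounted utility `u`, reached at normalized time `t` in [0, 1] (with `t = 1` being the shared deadline), is `u * d**t` for discount factor `d`; this formula is an assumption based on that convention, not spelled out in the excerpt above.

```python
def discounted_utility(utility: float, discount_factor: float, t: float) -> float:
    """Time-discounted utility in a GENIUS-style ANAC scenario (assumed convention).

    utility:          undiscounted utility of the agreement, typically in [0, 1]
    discount_factor:  d in (0, 1]; d == 1.0 means no discounting at all
    t:                normalized negotiation time in [0, 1]; 1.0 is the deadline
    """
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must be normalized to [0, 1]")
    return utility * discount_factor ** t


# With d == 1.0 the outcome is worth the same whenever it is reached;
# with d < 1.0, reaching the same agreement later yields strictly less.
early = discounted_utility(0.8, 0.9, 0.1)   # agreement early in the window
late = discounted_utility(0.8, 0.9, 0.9)    # same agreement near the deadline
print(early, late)
```

Under this convention, failing to agree before the deadline typically yields the (discounted) reservation value, which is why agents in the shared three-minute window face pressure to concede as `t` approaches 1.
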