Surprise-Triggered Reformulation of Design Goals

Authors: Kazjon Grace, Mary Lou Maher

AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We implement our model in the domain of culinary creativity, and demonstrate how the cognitive behaviors of surprise and problem reformulation can be incorporated into design reasoning." "As a proof-of-concept evaluation of our model for surprise-triggered goal reformulation we have implemented it in the context of recipe design..." "We gathered approximately 130,000 recipes from http://ffts.com/recipes..." "We conducted a series of simulations with our prototype surprise-triggered reformulation system, focussing on demonstrating its capacity to generate goals that can drive the design process in interesting (and potentially creative) directions."
Researcher Affiliation | Academia | Kazjon Grace and Mary Lou Maher, University of North Carolina at Charlotte, {k.grace,m.maher}@uncc.edu
Pseudocode | No | The paper describes its processes textually but does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper describes the implementation of its model but does not provide any explicit statement about releasing open-source code or a link to a code repository.
Open Datasets | Yes | "We gathered approximately 130,000 recipes from http://ffts.com/recipes, an archive of public domain recipes designed to work with desktop recipe database software." (A hedged sketch of one possible encoding of such a corpus follows the table.)
Dataset Splits | No | The paper mentions training on a recipe corpus but does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning into train/validation/test sets.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types and speeds, memory amounts, or other machine specifications) used to run its experiments.
Software Dependencies | No | The paper mentions the use of a Variational Autoencoder (VAE) but does not provide the ancillary software details, such as library names with version numbers, needed to replicate the experiment. (A hedged VAE sketch under assumed tooling follows the table.)
Experiment Setup | Yes | "We found that the initialisation radius of the neurons in the encoder network needed to be increased significantly (to values around 0.1) to avoid the local minimum in which the latent variables z were always near 0 and the decoder network learnt only the individual probabilities of each feature." "We first applied a maximum context size in the range of 2-5 to limit search depth." "We applied beam search, finding a beam width equal to |d| multiplied by the depth limit to produce a good tradeoff between accuracy and speed." (A hedged sketch of how these search settings could fit together follows the table.)
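
The quotes above are everything the table draws on; the sketches that follow are reconstructions under stated assumptions, not the authors' released code. First, a minimal sketch of one plausible way to encode a recipe corpus such as the ffts.com archive as binary ingredient-presence vectors; the parsing of the archive, the ingredient vocabulary, and the one-feature-per-ingredient representation are all assumptions, since the paper does not describe them at this level of detail.

```python
# Hypothetical encoding of a parsed recipe corpus as a binary
# ingredient-presence matrix (one row per recipe, one column per ingredient).
# The vocabulary construction is an assumption, not taken from the paper.
import numpy as np

def recipes_to_matrix(recipes):
    """recipes: list of ingredient-name lists, one list per recipe."""
    vocab = sorted({ingredient for recipe in recipes for ingredient in recipe})
    index = {ingredient: i for i, ingredient in enumerate(vocab)}
    matrix = np.zeros((len(recipes), len(vocab)), dtype=np.float32)
    for row, recipe in enumerate(recipes):
        for ingredient in recipe:
            matrix[row, index[ingredient]] = 1.0
    return matrix, vocab

# Toy usage: three recipes over a five-ingredient vocabulary.
X, vocab = recipes_to_matrix([
    ["flour", "sugar", "egg"],
    ["flour", "butter"],
    ["egg", "chili"],
])
```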
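
The paper reports a Variational Autoencoder but no framework, library versions, or architecture details, so the following is a minimal sketch under an assumed stack (PyTorch) with a Bernoulli decoder over binary ingredient features. The uniform initialisation of radius 0.1 mirrors the quoted remark about widening the encoder initialisation; the exact initialisation scheme, layer sizes, and latent dimensionality are assumptions.

```python
# Minimal VAE sketch over binary ingredient-presence vectors.
# Assumptions: PyTorch, a single hidden layer, a Bernoulli decoder, and a
# uniform encoder initialisation of radius ~0.1 (the paper reports widening
# the initialisation to avoid a collapsed latent space, but not the scheme).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecipeVAE(nn.Module):
    def __init__(self, n_ingredients, latent_dim=20, hidden=256):
        super().__init__()
        self.enc = nn.Linear(n_ingredients, hidden)
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_ingredients))
        # Widen the encoder initialisation (assumed radius 0.1) so the latent
        # variables z do not sit near 0 while the decoder learns only the
        # marginal probability of each feature.
        for layer in (self.enc, self.mu, self.logvar):
            nn.init.uniform_(layer.weight, -0.1, 0.1)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.dec(z), mu, logvar

def vae_loss(logits, x, mu, logvar):
    # Bernoulli reconstruction term plus KL divergence to a unit Gaussian prior.
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

A training loop would minimise vae_loss over mini-batches of the ingredient matrix; the optimiser, learning rate, and epoch count are not covered by the quotes above and would also have to be assumed.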
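
The Experiment Setup row quotes two search settings: a maximum context size of 2-5 and a beam width equal to |d| times the depth limit. The sketch below shows one way those two numbers could govern a beam search over ingredient "contexts" (subsets of a candidate design d); the score() callable is a placeholder, since the paper's actual surprise measure is not reproduced here.

```python
# Hypothetical beam search over ingredient contexts, wired to the quoted
# settings: a depth limit (maximum context size) of 2-5 and a beam width of
# |d| * depth limit. score() stands in for the paper's surprise objective.
from typing import Callable, FrozenSet, List, Tuple

def beam_search_contexts(
    design: List[str],                           # ingredients of the candidate design d
    score: Callable[[FrozenSet[str]], float],    # placeholder scoring function
    max_context_size: int = 3,                   # quoted range: 2-5
) -> List[Tuple[FrozenSet[str], float]]:
    beam_width = len(design) * max_context_size  # quoted heuristic: |d| * depth limit
    # Start from single-ingredient contexts, keeping only the best beam_width.
    beam = sorted(
        ((frozenset([item]), score(frozenset([item]))) for item in design),
        key=lambda pair: pair[1], reverse=True,
    )[:beam_width]
    # Grow each surviving context by one ingredient per step, up to the depth limit.
    for _ in range(max_context_size - 1):
        grown = {}
        for context, _ in beam:
            for item in design:
                if item not in context:
                    candidate = context | {item}
                    if candidate not in grown:
                        grown[candidate] = score(candidate)
        if not grown:
            break
        beam = sorted(grown.items(), key=lambda pair: pair[1], reverse=True)[:beam_width]
    return beam

# Toy usage: rank 3-ingredient contexts of a 4-ingredient design by set size.
contexts = beam_search_contexts(["flour", "sugar", "egg", "chili"], score=len)
```

With a real surprise measure plugged in for score, the highest-ranked contexts would presumably be the candidates that trigger goal reformulation.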