AACP: Aesthetics Assessment of Children’s Paintings Based on Self-Supervised Learning

Authors: Shiqi Jiang, Ning Li, Chen Shi, Liping Guo, Changbo Wang, Chenhui Li

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct both qualitative and quantitative experiments to compare our model's performance with five other methods using the AACP dataset. Our experiments reveal that our method can accurately capture aesthetic features and achieve state-of-the-art performance.
Researcher Affiliation | Academia | ¹School of Computer Science and Technology, East China Normal University; ²Faculty of Education, East China Normal University; ³Shanghai Institute of AI for Education, East China Normal University
Pseudocode | No | The paper describes the model architecture and training process but does not provide any formal pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing the source code for the described methodology, nor a link to a code repository.
Open Datasets | No | The paper describes the creation and characteristics of a novel dataset but does not provide concrete access information (e.g., a URL, DOI, or a specific repository) for public access to it.
Dataset Splits | No | The paper mentions using data for training and testing but does not explicitly specify the training, validation, or test splits (e.g., percentages or sample counts for each split).
Hardware Specification | Yes | Our model is trained on an NVIDIA RTX 3090 using PyTorch.
Software Dependencies | No | The paper mentions PyTorch but does not specify a version number or list other key software components with their versions.
Experiment Setup | Yes | Our model is trained on an NVIDIA RTX 3090 using PyTorch, and takes 256×256 fixed-size images as the input. The masking ratio is set to 0.75. The Adam algorithm is used to optimize the model, and Mean Squared Error (MSE) is used as the loss function. The model converges after 400 epochs of training with a learning rate of 1×10⁻⁴ and a batch size of 64.
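The Experiment Setup row pins down the reported hyperparameters but not the architecture. A minimal PyTorch sketch of that configuration, with a trivial convolutional autoencoder and random tensors standing in for the paper's (unreleased) model and dataset, might look like:

```python
import torch
import torch.nn as nn

# Hyperparameters as reported in the paper's experiment setup
IMG_SIZE = 256      # fixed 256x256 input images
MASK_RATIO = 0.75   # masking ratio for self-supervised pretraining
LR = 1e-4           # learning rate
BATCH_SIZE = 64     # reported batch size
EPOCHS = 400        # reported number of training epochs

# Placeholder model: the paper's architecture is not public, so a
# trivial convolutional autoencoder stands in here for illustration.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=LR)
criterion = nn.MSELoss()  # reconstruction loss

# One illustrative training step on random data
# (small batch here to keep the example cheap; the paper uses 64).
images = torch.rand(8, 3, IMG_SIZE, IMG_SIZE)
# Zero out ~75% of the input, keeping pixels where rand > MASK_RATIO.
mask = (torch.rand_like(images) > MASK_RATIO).float()
recon = model(images * mask)       # reconstruct from the masked input
loss = criterion(recon, images)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

This is a sketch of the reported settings only; the actual masking scheme, optimizer schedule, and network are assumptions, since none of them are released with the paper.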