One-Shot Texture Retrieval with Global Context Metric

Authors: Kai Zhu, Wei Zhai, Zheng-Jun Zha, Yang Cao

IJCAI 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on benchmark texture datasets and real scenarios demonstrate the above-par segmentation performance and robust generalization across domains of our proposed method." |
| Researcher Affiliation | Academia | University of Science and Technology of China — {zkzy, wzhai056}@mail.ustc.edu.cn, {zhazj, forrest}@ustc.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | "We also introduce expanded experimental contents in supplementary materials [1]." [1] https://github.com/zhukaii/OS-TR |
| Open Datasets | Yes | "To validate the superiority of our model in one-shot texture segmentation task, we designed a series of experiments based on Describable Textures Dataset (DTD) [Cimpoi et al., 2014]." |
| Dataset Splits | No | The paper describes training and test set splits, but does not explicitly mention a dedicated validation set split or its size. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using PyTorch for reproduction, but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | "Our model uses the SGD optimizer during the training process. The initial learning rate is set to 0.001 and the attenuation rate is set to 0.0005. The model stops training after 1000 epochs, where each epoch synthesizes 240 query images. All images are resized to 256×256 size and the batch size is set to 16." |
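The reported hyperparameters can be sketched as a minimal PyTorch training configuration. This is an assumption-laden reconstruction, not the paper's actual code: the model is a placeholder (the real OS-TR network lives in the linked repository), the loss is a stand-in, and the paper's "attenuation rate" is interpreted here as SGD weight decay.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder network; the paper's OS-TR architecture is in the linked repo.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.001,             # initial learning rate reported in the paper
    weight_decay=0.0005,  # the paper's "attenuation rate", assumed to mean weight decay
)

EPOCHS = 1000             # training stops after 1000 epochs
QUERIES_PER_EPOCH = 240   # each epoch synthesizes 240 query images
BATCH_SIZE = 16
IMAGE_SIZE = (256, 256)   # all images resized to 256x256

# One illustrative optimization step on a random batch; a placeholder
# MSE loss stands in for the paper's segmentation objective.
x = torch.randn(BATCH_SIZE, 3, *IMAGE_SIZE)
target = torch.randn(BATCH_SIZE, 1, *IMAGE_SIZE)
loss = F.mse_loss(model(x), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a full run, the step above would repeat `EPOCHS * QUERIES_PER_EPOCH // BATCH_SIZE` times over synthesized support/query pairs.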