Meta Internal Learning

Authors: Raphael Bensadoun, Shir Gur, Tomer Galanti, Lior Wolf

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments are divided into two parts. In the first part, we study three different training regimes of our method. First, we experiment with single-image training, in order to produce a fair comparison with existing methods. Second, we present a mini-batch training scheme, where instead of a single image, the model is trained on a fixed set of images. Lastly, we experiment with training over a full dataset that cannot fit into a single batch. In the second part, we experiment with several applications of our method. Specifically, we study the ability of our method in the Harmonization, Editing and Animation tasks proposed by [31], as well as generating samples of arbitrary size and aspect ratio. (A schematic sketch of these three regimes appears after the table.)
Researcher Affiliation | Academia | Raphael Bensadoun (The School of Computer Science, Tel Aviv University); Shir Gur (The School of Computer Science, Tel Aviv University); Tomer Galanti (The School of Computer Science, Tel Aviv University); Lior Wolf (The School of Computer Science, Tel Aviv University)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at: https://github.com/RaphaelBensTAU/MetaInternalLearning.
Open Datasets | Yes | To enable comparison with previous work, we use the 50-image dataset of [31], denoted by Places-50, and the 50-image dataset of [12], denoted by LSUN-50.
Dataset Splits | No | The paper mentions "Train" and "Test" sets (e.g., in Table 3) and dataset sizes, but does not explicitly provide training, validation, and test dataset splits (e.g., percentages or sample counts for each).
Hardware Specification | No | The paper mentions training on "a single GPU" multiple times but does not specify the exact GPU model or other hardware components such as the CPU or memory.
Software Dependencies | No | The paper mentions the use of the "Adam [17] optimizer" and "standard Kaiming He initialization [11]" but does not provide specific software dependencies with version numbers, such as programming languages or libraries (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | We initialize the hypernetworks with the initialization suggested by [21]. In this initialization, the network f is initialized using the standard Kaiming He initialization [11]. We train the model progressively from scale 1 to scale k. Each scale is trained for a constant number of iterations and optimized using the Adam [17] optimizer. (A minimal sketch of this per-scale training recipe appears after the table.)
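The Research Type row describes three training regimes: single image, fixed mini-batch, and full dataset. The sketch below is an illustrative assumption rather than the authors' code: the toy convolutional model, reconstruction loss, image sizes, batch sizes, and iteration counts are placeholders standing in for the paper's hypernetwork and adversarial objectives; only the structure of the three loops mirrors the row above.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for the generator/hypernetwork; the real architecture differs.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 3, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumption

def training_step(batch):
    # Toy reconstruction objective in place of the paper's adversarial losses.
    loss = nn.functional.mse_loss(model(batch), batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Regime 1: single-image training (fair comparison with single-image GANs).
single_image = torch.rand(1, 3, 64, 64)
for _ in range(100):
    training_step(single_image)

# Regime 2: mini-batch training on a fixed set of images that fits in memory.
fixed_set = torch.rand(16, 3, 64, 64)
for _ in range(100):
    training_step(fixed_set)

# Regime 3: full-dataset training via a loader when the set cannot fit in one batch.
dataset = TensorDataset(torch.rand(50, 3, 64, 64))
loader = DataLoader(dataset, batch_size=8, shuffle=True)
for _ in range(10):
    for (batch,) in loader:
        training_step(batch)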
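The Experiment Setup row (Kaiming He initialization, progressive training from scale 1 to scale k, a constant iteration budget per scale, Adam) can be approximated with the minimal sketch below. The per-scale module, number of scales, iteration count, learning rate, scaling factor, and loss are all assumptions, and the hypernetwork-specific initialization of [21] is not reproduced; only the coarse-to-fine, per-scale Adam loop follows the description above.

import torch
import torch.nn as nn

NUM_SCALES = 8          # assumed pyramid depth k
ITERS_PER_SCALE = 200   # assumed constant iteration budget per scale

def kaiming_init(m):
    # Standard Kaiming He initialization [11] for convolutional layers.
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

def make_scale_model():
    # Toy stand-in for the per-scale network; the paper uses hypernetworks instead.
    return nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.LeakyReLU(0.2),
                         nn.Conv2d(32, 3, 3, padding=1))

image = torch.rand(1, 3, 256, 256)  # placeholder training image

for scale in range(NUM_SCALES):
    # Downsample the image to the current scale (coarse to fine); the factor is assumed.
    size = max(16, int(256 * (0.75 ** (NUM_SCALES - 1 - scale))))
    target = nn.functional.interpolate(image, size=(size, size),
                                       mode='bilinear', align_corners=False)
    model = make_scale_model()
    model.apply(kaiming_init)
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)  # lr is an assumption
    for _ in range(ITERS_PER_SCALE):
        noise = torch.randn_like(target)
        loss = nn.functional.mse_loss(model(noise), target)  # toy loss, not the paper's
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()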