Layout Representation Learning with Spatial and Structural Hierarchies
Authors: Yue Bai, Dipu Manandhar, Zhaowen Wang, John Collomosse, Yun Fu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that our proposed SSH-AE outperforms existing methods, achieving state-of-the-art performance on two benchmark datasets. Code is available at github.com/yueb17/SSH-AE. |
| Researcher Affiliation | Collaboration | Yue Bai1*, Dipu Manandhar3, Zhaowen Wang4, John Collomosse4, Yun Fu1,2 — 1Department of Electrical and Computer Engineering, Northeastern University; 2Khoury College of Computer Science, Northeastern University; 3Centre for Vision, Speech and Signal Processing, University of Surrey; 4Adobe Research |
| Pseudocode | No | The paper describes the model architecture and equations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at github.com/yueb17/SSH-AE. |
| Open Datasets | Yes | RICO (Deka et al. 2017) is the largest publicly available dataset of UI layouts. |
| Dataset Splits | Yes | We follow (Manandhar, Ruta, and Collomosse 2020) to assign 53K samples as the training set, 13K samples as the gallery set, and 50 samples as the query set. ... We split POSTER into a 28K training set, a 7K gallery set, and a 50-sample query set. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used for running experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper describes the model architecture and training approach but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings). |