An Intrinsic Vector Heat Network
Authors: Alexander Gao, Maurice Chu, Mubbasir Kapadia, Ming Lin, Hsueh-Ti Derek Liu
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our Vector Heat Network on triangle meshes, and empirically validate its invariant properties. We also demonstrate the effectiveness of our method on the useful industrial application of quadrilateral mesh generation. |
| Researcher Affiliation | Collaboration | ¹Roblox Research; ²Department of Computer Science, University of Maryland, College Park, USA; ³Roblox Core AI. |
| Pseudocode | No | The paper describes the architecture and processes mathematically and in prose, but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | We train our network on a dataset generated from the workflow described in (Dielen et al., 2021), with two modifications: (1) Instead of the DFAUST dataset used by (Dielen et al., 2021), we assemble a custom library of artist-created template avatar heads, around which we wrap the SMPL (Loper et al., 2023) head topology |
| Dataset Splits | No | The training dataset consists of 1100 triangle meshes with associated ground truth vector fields. The test dataset consists of 115 samples. A separate validation split is not explicitly mentioned. |
| Hardware Specification | Yes | We train on a single NVIDIA Tesla T4 GPU, for about 20 hours. |
| Software Dependencies | No | The paper mentions 'ReLU activation' but does not specify versions for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | For our experiments, we use N = 6 vector diffusion blocks with a hidden dimension of c_l = 256 channels (see Fig. 2). We train for 3,000 epochs, with initial learning rate of 1e-4, decayed by a factor of 0.85 every 150 epochs. In the Vector MLP layer, we use Dropout (Srivastava et al., 2014) set to 0.5, and L2 regularization (weight decay) with a value of 1e-3, which mitigates overfitting. (A hedged sketch of this configuration follows the table.) |
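
The Experiment Setup row pins down enough hyperparameters to reconstruct a plausible training configuration. Since no source code is released (see the Open Source Code row), the PyTorch sketch below is an assumption-laden illustration: the `VectorMLP` module, its output dimension, and the choice of Adam are hypothetical stand-ins; only the hidden width, dropout rate, weight decay, learning-rate schedule, and epoch count come from the paper.

```python
# Hedged sketch of the reported training setup. `VectorMLP` and its output
# head are placeholders (the paper does not release code); the numeric
# hyperparameters below are the ones quoted in the Experiment Setup row.
import torch
from torch import nn, optim


class VectorMLP(nn.Module):
    """Stand-in for the paper's Vector MLP layer.

    Dropout(0.5) and the L2 penalty (applied via optimizer weight decay)
    are reported in the paper; the layer shapes are assumptions.
    """

    def __init__(self, channels: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(channels, channels),
            nn.ReLU(),
            nn.Dropout(p=0.5),       # reported Dropout rate
            nn.Linear(channels, 2),  # 2D tangent-vector output (assumption)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = VectorMLP(channels=256)  # hidden dimension c_l = 256 (reported)

# Reported schedule: initial LR 1e-4, decayed by 0.85 every 150 epochs,
# weight decay 1e-3. Adam is an assumption; the optimizer is not named.
optimizer = optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-3)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=150, gamma=0.85)

for epoch in range(3000):  # reported: 3,000 epochs
    # ... one pass over the 1100-mesh training set would go here ...
    scheduler.step()
```

Note that the sketch omits the N = 6 vector diffusion blocks that precede the Vector MLP in the described architecture; without released code or pseudocode (see the Pseudocode row), only the quoted hyperparameters can be reproduced with confidence.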