Polyhedral Complex Extraction from ReLU Networks using Edge Subdivision

Authors: Arturs Berzins

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We implement the algorithm in PyTorch. Since only vertex positions and bounded edges with exactly two vertices are stored, edge subdivision can run efficiently and exclusively on the GPU. Steps 0-4 can be implemented using standard tensor operations. However, using standard operations, step 5 can only be implemented in sub-optimal log-linear time, using sorting to pair up identical rows of a tensor. This step can be implemented in linear time using hash tables, but since efficient hashing on the GPU with custom-length keys is non-trivial (Jünger et al., 2020; Awad et al., 2023), we hope to address this in future work. ... Lastly, in Figure 9 we compare to SplineCam (Humayun et al., 2023), which is a region subdivision method specifically for D = 2. Over the considered tests, our method is on average 20 times faster, since SplineCam uses graph structures on the CPU.
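The log-linear row-pairing the quote describes for step 5 can be sketched with standard tensor operations. The following is a minimal illustration (not the authors' implementation): `torch.unique` over the row dimension sorts the rows internally, which is exactly the O(n log n) behaviour the paper accepts in lieu of GPU hash tables.

```python
import torch

def pair_identical_rows(keys: torch.Tensor) -> torch.Tensor:
    """Group identical rows of `keys` via sorting (O(n log n)).

    Returns an inverse index: rows i and j are paired iff inv[i] == inv[j].
    `torch.unique(dim=0)` sorts rows internally, giving the log-linear cost
    the paper mentions; a GPU hash table would bring this down to linear time.
    """
    _, inv = torch.unique(keys, dim=0, return_inverse=True)
    return inv

# Toy example: rows 0 and 2 are identical, so they receive the same group id.
keys = torch.tensor([[1, 2], [3, 4], [1, 2]])
inv = pair_identical_rows(keys)
assert inv[0] == inv[2] and inv[0] != inv[1]
```

This runs entirely on the GPU if `keys` lives there, which matches the paper's goal of keeping subdivision off the CPU.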
Researcher Affiliation | Collaboration | 1 SINTEF, Oslo, Norway. 2 Department of Mathematics, University of Oslo, Oslo, Norway.
Pseudocode | No | The paper describes the steps of edge subdivision (steps 1-5) in Section 4.2, but does not present them as a formal pseudocode block or algorithm.
Open Source Code | Yes | The code is available on GitHub. ... An open source implementation using standard tensor operations in PyTorch and leveraging the GPU. github.com/arturs-berzins/relu_edge_subdivision
Open Datasets | No | The paper mentions using 'randomly initialized NNs' and a 'NN trained on the signed distance field of a D = 3 Stanford bunny' but does not provide concrete access information (link, DOI, or formal citation with authors/year) for these or any other datasets used in their experiments to confirm public availability.
Dataset Splits | No | The paper discusses evaluating its implementation's numerical error and performance on hypercube domains and NNs of various depths and widths, but it does not specify training, validation, or test dataset splits (e.g., percentages or sample counts) for any experiments conducted.
Hardware Specification | Yes | The tests are performed on an NVIDIA RTX 3090.
Software Dependencies | No | The paper states 'We implement the algorithm in PyTorch' but does not provide a specific version number for PyTorch or any other software libraries or dependencies used.
Experiment Setup | Yes | A fixed NN with 4 layer depth and 10 neuron width is used throughout. ... We consider NNs of four layers and widths of 10, 20, 40 for input dimensions D = 1..10. ... We perform edge subdivision on a [-100, 100]^D hypercube domain. ... The bunny shape converges to a circle in 100 iterations with a standard Adam optimizer.
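The fixed test network quoted above (four layers, width 10, ReLU activations) is easy to reproduce in PyTorch. The sketch below is an illustration under stated assumptions, not the authors' code: the input dimension D, the scalar output head, and the random initialization are choices made here for demonstration.

```python
import torch
import torch.nn as nn

# Assumed for illustration: D inputs, scalar output (e.g. a signed distance).
D = 3
width, depth = 10, 4

# Four hidden ReLU layers of width 10, matching the quoted setup.
layers = []
in_features = D
for _ in range(depth):
    layers += [nn.Linear(in_features, width), nn.ReLU()]
    in_features = width
layers.append(nn.Linear(in_features, 1))
net = nn.Sequential(*layers)

# Evaluate on random points from the [-100, 100]^D hypercube domain.
x = torch.rand(5, D) * 200 - 100
y = net(x)
assert y.shape == (5, 1)
```

For the bunny-to-circle experiment, the quoted "standard Adam optimizer" would correspond to `torch.optim.Adam(net.parameters())` run for 100 iterations; the learning rate is not specified in the quote.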