Tessellation-Filtering ReLU Neural Networks

Authors: Bernhard A. Moser, Michal Lewandowski, Somayeh Kargaran, Werner Zellinger, Battista Biggio, Christoph Koutschan

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. Section 5 (Algorithm and Examples): "Theorem 4 gives rise to the following algorithm to identify a locally approximating UΨ in a neighborhood B at x0. ... We outline an approximating algorithm that checks condition (17) at randomly selected points in a neighborhood B at x0, by means of a sample D0 = {x1, ..., x_K0} ⊂ N⁻¹(0) ∩ B for class 0 and a sample D1 = {y1, ..., y_K1} ⊂ N⁻¹((0, ∞)) ∩ B for class 1. ... See Figure 5 for the plotted S-curves of Example 1. The top curve, y = 1, shows the highest S values, triggered by the out-notched variant with δ = 0. The curves match our intuition that more rugged regions show higher S values."
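As a rough sketch of the sampling step summarized above (all names here are hypothetical; the paper's network N, neighborhood B, and condition (17) are only paraphrased, not implemented), the collection of the two samples D0 and D1 might look like:

```python
import random

def sample_classes(N, x0, radius, K0, K1, max_tries=100_000, seed=0):
    """Draw K0 points with N(x) == 0 (class 0) and K1 points with
    N(x) > 0 (class 1) from a box neighborhood B around x0.

    Hypothetical helper; the paper only describes random sampling in B,
    not a concrete sampler or neighborhood shape.
    """
    rng = random.Random(seed)
    D0, D1 = [], []
    for _ in range(max_tries):
        if len(D0) >= K0 and len(D1) >= K1:
            break
        # Uniform sample from the box of half-width `radius` around x0.
        x = [xi + rng.uniform(-radius, radius) for xi in x0]
        v = N(x)
        if v == 0 and len(D0) < K0:
            D0.append(x)
        elif v > 0 and len(D1) < K1:
            D1.append(x)
    return D0, D1

# Toy stand-in for the ReLU network: N(x) = relu(x1 + x2).
relu = lambda t: max(t, 0.0)
N = lambda x: relu(x[0] + x[1])
D0, D1 = sample_classes(N, x0=[0.0, 0.0], radius=1.0, K0=5, K1=5)
```

The two lists then play the roles of the class-0 and class-1 samples against which condition (17) would be checked.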
Researcher Affiliation: Academia. 1 Software Competence Center Hagenberg GmbH (SCCH), Austria; 2 Pattern Recognition and Applications Lab (PRA Lab), University of Cagliari, Italy; 3 Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences.
Pseudocode: Yes. Algorithm 1: Local UΨ Approximation.
Open Source Code: No. The paper makes no explicit statement about releasing source code for the described methodology and provides no links to code repositories.
Open Datasets: No. The paper uses an 'instructive example that resembles the construction of a Cantor set in fractal geometry' (Example 1) and a function that 'checks whether all elements of a finite list of non-negative values... are strictly positive or not' (Example 2). For demonstrating the algorithm, it mentions 'randomly selected points in a neighborhood B at x0 by means of a sample D0 = {x1, ..., x_K0} ⊂ N⁻¹(0) ∩ B for class 0 and sample D1 = {y1, ..., y_K1} ⊂ N⁻¹((0, ∞)) ∩ B for class 1.' These are conceptual or generated examples, not references to a publicly available dataset.
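The Example 2 function quoted above is simple to reproduce in code. A minimal sketch, assuming (as one natural ReLU realization, not stated in the paper) that the list minimum is built from the identity min(a, b) = b - relu(b - a); all function names are hypothetical:

```python
def relu(t):
    return max(t, 0.0)

def pair_min(a, b):
    # min(a, b) expressed with a single ReLU: min(a, b) = b - relu(b - a).
    return b - relu(b - a)

def all_strictly_positive(xs):
    """ReLU-expressible surrogate for Example 2: for non-negative inputs,
    the chained minimum is > 0 iff every element is strictly positive."""
    m = xs[0]
    for x in xs[1:]:
        m = pair_min(m, x)
    return m  # > 0 iff all elements are strictly positive
```

For instance, `all_strictly_positive([1.0, 2.0, 0.5])` is positive, while `all_strictly_positive([1.0, 0.0, 2.0])` evaluates to 0, so thresholding the output at 0 reproduces the described yes/no check.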
Dataset Splits: No. The paper uses mathematically defined examples and an algorithm that samples points for local approximation, but it does not specify training, validation, or test splits for any dataset. Such splits are not applicable here, as the paper focuses on theoretical analysis and a shape-complexity measure demonstrated on generated data for illustrative examples.
Hardware Specification: No. The paper does not provide any details about the hardware used to run the examples or the algorithm.
Software Dependencies: No. The paper mentions 'tropical algebra [Zhang et al., 2018; Alfarra et al., 2020; Trimmel et al., 2021]' and notes that [Trimmel et al., 2021] proposes TropEx, an algorithm for extracting linear terms in deep neural networks. However, it does not name any software packages with version numbers that would be needed for replication.
Experiment Setup: No. The paper describes an algorithm for local approximation using sampled points (D0, D1) for its examples. However, it does not provide experimental setup details such as hyperparameters for training neural networks (e.g., learning rates, batch sizes, optimizers, epochs), as it is not focused on training a machine learning model in the traditional sense.