On Unconstrained Quasi-Submodular Function Optimization

Authors: Jincheng Mei, Kang Zhao, Bao-Liang Lu

AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results verify the effectiveness and efficiency of the proposed algorithms on lattice reduction." "We implement our algorithms using the SFO toolbox (Krause 2010) and Matlab. All experiments are run on a single core Intel i5 2.8 GHz CPU with 4GB RAM." "We are concerned with the approximation ratio of an optimization algorithm. We compare the approximation ratio and running time of UQSFMax with MMax (Iyer, Jegelka, and Bilmes 2013)." Table 1: Average lattice reduction rates. Table 2: Approximation ratios of different algorithms and functions. Table 3: Running time (seconds) of different algorithms and functions.
Researcher Affiliation | Academia | Jincheng Mei, Kang Zhao and Bao-Liang Lu*, Center for Brain-Like Computing and Machine Intelligence, Department of Computer Science and Engineering, Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240, China. {jcmei,sjtuzk,bllu}@sjtu.edu.cn
Pseudocode | Yes | Algorithm 1: Unconstrained Quasi-Submodular Function Minimization (UQSFMin). Input: quasi-submodular function F, N = {1, 2, ..., n}, X0 ⊆ N, t ← 0. Output: Xt as a local optimum of min F. Algorithm 2: Unconstrained Quasi-Submodular Function Maximization (UQSFMax). Input: quasi-submodular function F, N = {1, 2, ..., n}, X0 = ∅, Y0 = N, t ← 0. Output: lattice [Xt, Yt].
Open Source Code | No | The paper states "We implement our algorithms using the SFO toolbox (Krause 2010) and Matlab.", but does not provide specific access to its own source code for the described methodology.
Open Datasets | No | The paper uses various functions (e.g., Iwata's function, the COM function) for its experiments, describing their mathematical forms or how instances are generated, but does not provide concrete access information (links, DOIs, or standard dataset citations with authors/year) for any publicly available datasets used in the experiments.
Dataset Splits | No | The paper evaluates algorithms on mathematically defined functions and synthetic problem instances, not on empirical datasets that would typically require explicit train/validation/test splits.
Hardware Specification | Yes | "All experiments are run on a single core Intel i5 2.8 GHz CPU with 4GB RAM."
Software Dependencies | No | The paper states "We implement our algorithms using the SFO toolbox (Krause 2010) and Matlab." but does not specify version numbers for Matlab or the SFO toolbox.
Experiment Setup | Yes | "We list several widely used quasi-submodular functions and the settings of our experiments as the following: Iwata's function... The ground set cardinality is set to be n = 5000. The COM (concave over modular) function... The ground set cardinality is set to be n = 5000. The half-products function... n is set to be 100. The linearly perturbed functions... We set n = 100, d = 400, and randomly generate M in [0.5, 1]^(n×d), the perturbing noise σ in [−0.01, 0.01]^n. The determinant function... We set n = 100. The multiplicatively separable function... We set n = 2000. For MMax, we consider the following variants: random permutation (RP), randomized local search (RLS), and randomized bi-directional greedy (RG). For UQSFMax, we use it as the preprocessing steps of RP, RLS and RG, and denote the corresponding combined methods as URP, URLS, and URG."
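The UQSFMin pseudocode reported above is a local search over the subset lattice of N. The minimal sketch below illustrates the general idea only: single-element flips are applied until no flip decreases F. The function name `uqsf_min_local_search` and the toy modular objective are our own illustrative choices, not the authors' implementation (which uses Matlab and the SFO toolbox):

```python
def uqsf_min_local_search(F, n, X0=frozenset()):
    """Hedged sketch of a UQSFMin-style local search.

    F maps a frozenset of indices from {0, ..., n-1} to a real value.
    Repeatedly flips single elements (add if absent, remove if present)
    whenever the flip strictly decreases F; stops at a local optimum.
    """
    X = frozenset(X0)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            # Candidate neighbor: flip element i in or out of X.
            Y = X - {i} if i in X else X | {i}
            if F(Y) < F(X):
                X, improved = Y, True
    return X  # no single-element flip decreases F any further

# Toy example: a modular (hence quasi-submodular) objective with
# weights w; the minimizer keeps exactly the negative-weight elements.
w = [3.0, -1.0, 2.0, -4.0]
F = lambda S: sum(w[i] for i in S)
print(sorted(uqsf_min_local_search(F, len(w))))  # → [1, 3]
```

For a modular objective the local optimum is also global; on general quasi-submodular functions the loop only guarantees a local optimum, matching the "local optimum" output claimed in the pseudocode row.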
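The experiment setup row above describes how the linearly perturbed instances are generated (M in [0.5, 1]^(n×d), noise σ in [−0.01, 0.01]^n). A minimal sketch of such instance generation, assuming a concave-over-modular style objective for illustration; the helper `perturbed_F` and the square-root score are our assumptions, not the paper's exact definition:

```python
import random

random.seed(0)  # fixed seed so the instance is reproducible
n, d = 100, 400

# M in [0.5, 1]^(n x d): row i holds the feature weights of element i.
M = [[random.uniform(0.5, 1.0) for _ in range(d)] for _ in range(n)]
# Perturbing noise sigma in [-0.01, 0.01]^n (one modular term per element).
sigma = [random.uniform(-0.01, 0.01) for _ in range(n)]

def perturbed_F(S):
    """Hypothetical linearly perturbed objective: a concave (sqrt)
    score of the column sums over the selected rows, plus the modular
    noise term sigma(S). Illustrative only."""
    if not S:
        return 0.0
    col_sums = [sum(M[i][j] for i in S) for j in range(d)]
    return sum(c ** 0.5 for c in col_sums) + sum(sigma[i] for i in S)

print(perturbed_F({0, 1, 2}))
```

The small noise range keeps the perturbation far below the concave score, which is the point of the "linearly perturbed" setting: the function stays close to a submodular one while no longer being exactly submodular.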