Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Dual-Level Precision Edges Guided Multi-View Stereo with Accurate Planarization

Authors: Kehua Chen, Zhenlong Yuan, Tianlu Mao, Zhaoqi Wang

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our method achieves state-of-the-art performance on the ETH3D and Tanks & Temples benchmarks. Notably, our method outperforms all published methods on the ETH3D benchmark. [...] Extensive experiments validate the effectiveness of our proposed method, demonstrating state-of-the-art performance on the ETH3D and Tanks & Temples benchmarks. [...] Ablation Studies: In the ETH3D training dataset, we conduct ablation experiments to verify the effectiveness of each component in our proposed method."
Researcher Affiliation | Academia | "Kehua Chen, Zhenlong Yuan, Tianlu Mao*, Zhaoqi Wang; Institute of Computing Technology, Chinese Academy of Sciences; EMAIL"
Pseudocode | Yes | "Algorithm 1: Adaptive Patch Size Adjustment"
Open Source Code | No | The paper states that "Further results are provided in the supplementary materials, including additional experimental details and comparative studies, extensive point cloud visualizations," but it neither says that source code is included in those materials nor links to a code repository.
Open Datasets | Yes | "We evaluate our method on the ETH3D (Schops et al. 2017) and Tanks & Temples (Knapitsch et al. 2017) benchmarks."
Dataset Splits | No | The paper evaluates on the ETH3D and Tanks & Temples benchmarks and runs ablations on the ETH3D training set. Although these benchmarks have predefined splits, the paper itself gives no explicit split information (percentages, sample counts, or a description of how the data were partitioned into training, validation, and test sets).
Hardware Specification | No | The paper gives no details of the hardware used for its experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper lists no software dependencies with version numbers, such as programming languages, libraries, or frameworks used in the implementation.
Experiment Setup | Yes | "The proposed parameter settings are: {σ, η, τ, α, β1, β2, ω} = {0.67, 4, 0.87, 0.5, 25, 0.35, 2.5}."
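The reported hyperparameters can be collected into a small config sketch. This is only an illustration of the values quoted above: the ASCII names (`sigma`, `eta`, etc.) are transliterations of the paper's symbols, and the paper does not describe what each symbol controls, so no roles are implied here.

```python
# Hyperparameters as reported in the paper:
# {σ, η, τ, α, β1, β2, ω} = {0.67, 4, 0.87, 0.5, 25, 0.35, 2.5}
# ASCII key names are assumptions; the paper only lists symbols and values.
CONFIG = {
    "sigma": 0.67,
    "eta": 4,
    "tau": 0.87,
    "alpha": 0.5,
    "beta1": 25,
    "beta2": 0.35,
    "omega": 2.5,
}

def get_param(name: str) -> float:
    """Look up a reported hyperparameter by its ASCII name."""
    return CONFIG[name]
```

Keeping the settings in one dict makes it easy to log the exact configuration alongside any reproduction attempt.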