Sequential Context Encoding for Duplicate Removal

Authors: Lu Qi, Shu Liu, Jianping Shi, Jiaya Jia

NeurIPS 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our extensive experiments, the proposed method outperforms other alternatives by a large margin. |
| Researcher Affiliation | Collaboration | Lu Qi¹, Shu Liu¹,³, Jianping Shi², Jiaya Jia¹,³; ¹The Chinese University of Hong Kong, ²SenseTime Research, ³YouTu Lab, Tencent |
| Pseudocode | No | The paper describes the network components and process but does not include structured pseudocode or an algorithm block. |
| Open Source Code | Yes | Our code and models are made publicly available. |
| Open Datasets | Yes | All experiments are performed on challenging COCO detection datasets with 80 object categories [25]. |
| Dataset Splits | Yes | 115k images are used for training [23, 22]. Ablation studies are conducted on the 5k validation images, following common practice. |
| Hardware Specification | Yes | For both stages in the framework, we adopt synchronized SGD as the optimizer and train our model on a Titan X Maxwell GPU, with weight decay 0.0001 and momentum 0.9 (see the optimizer sketch below). |
| Software Dependencies | No | The paper mentions components such as synchronized SGD and GRU, but does not name specific software packages or version numbers needed for reproducibility. |
| Experiment Setup | Yes | The learning rate is 0.01 in the first ten epochs and 0.001 in the last two. d_l and d_m are by default 128 and 256. The weight of positive samples in the BCE loss is set to 4. By default, η = 0.5 is used for most experiments (see the training-settings sketch below). |
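The optimizer settings quoted under Hardware Specification translate directly into code. The following is a minimal sketch assuming a PyTorch implementation (the paper does not state its framework); `model` is a hypothetical stand-in for the paper's GRU-based duplicate-removal network, and the multi-GPU synchronization implied by "synchronized SGD" is omitted.

```python
import torch

# Hypothetical stand-in for the paper's GRU-based encoder; the sizes
# echo the reported defaults d_l = 128 and d_m = 256, but the exact
# wiring of the real network is not reproduced here.
model = torch.nn.GRU(input_size=128, hidden_size=256)

# SGD with the hyperparameters reported in the paper.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,            # initial learning rate (first ten epochs)
    momentum=0.9,       # momentum as reported
    weight_decay=1e-4,  # weight decay 0.0001 as reported
)
```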
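The Experiment Setup row likewise maps onto standard components: a two-step learning-rate schedule (0.01 for ten epochs, then 0.001 for two more) and a BCE loss that weights positive samples by 4. The sketch below again assumes PyTorch; the parameters are placeholders, the logits-based loss variant is an implementation choice, and the threshold η = 0.5 quoted in the same row is not modeled here.

```python
import torch
import torch.nn as nn

# Placeholder parameters so the snippet is self-contained; in practice
# these would be the network's parameters from the previous sketch.
params = [nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.SGD(params, lr=0.01, momentum=0.9, weight_decay=1e-4)

# Drop the learning rate by 10x after epoch 10: 0.01 for epochs 0-9,
# then 0.001 for the final two epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[10], gamma=0.1
)

# BCE loss with positive samples weighted by 4, as reported.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([4.0]))

for epoch in range(12):  # ten epochs at 0.01, then two at 0.001
    # ... forward pass, loss = criterion(logits, targets),
    #     loss.backward(), optimizer.step() ...
    scheduler.step()
```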