Probabilistic Curve Learning: Coulomb Repulsion and the Electrostatic Gaussian Process
Authors: Ye Wang, David B. Dunson
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Focusing on the simple case of a one-dimensional manifold, we develop efficient inference algorithms, and illustrate substantially improved performance in a variety of experiments including filling in missing frames in video. |
| Researcher Affiliation | Academia | Ye Wang, Department of Statistics, Duke University, Durham, NC, USA, 27705, eric.ye.wang@duke.edu; David Dunson, Department of Statistics, Duke University, Durham, NC, USA, 27705, dunson@stat.duke.edu |
| Pseudocode | No | The paper provides a numbered list of steps for its algorithm in Section 3.1, but it is not formatted as a pseudocode block or explicitly labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | No | The paper does not provide concrete access to source code. It only links to a video of results: 'An online video of the super-resolution result using electro GP can be found in this link: https://youtu.be/N1BG220J1Js This online video contains no information regarding the authors.' |
| Open Datasets | No | The paper mentions collecting frames from videos (e.g., '200 consecutive frames (of size 76 × 101 with RGB color) [13] were collected from a video of a teapot rotating 180°.' and '100 consecutive frames (of size 100 × 100 with gray color) were collected from a video of a shrinking shockwave.'), but does not provide specific links, DOIs, repository names, or clear statements that these datasets are publicly available with proper citations for access. |
| Dataset Splits | No | The paper describes how parts of the data were used (e.g., '190 of the frames were assumed to be fully observed' for fitting and '10 frames were given without any ordering information' with missing pixels for reconstruction), but does not specify formal train/validation/test dataset splits with percentages, absolute counts, or references to predefined splits for general model training. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or other libraries with their versions). |
| Experiment Setup | Yes | Although the x's are not identifiable, since the target function (4) is invariant under rotation, a unique solution does exist conditionally on the specified order. [...] We find very similar results using LLE, Isomap and eigenmap, but focus on LLE in all our implementations. Our algorithm can be summarized as follows. 1. Learn the one-dimensional coordinate x⁰ by your favorite distance-preserving manifold learning algorithm and rescale x⁰ into (0, 1); 2. Solve Θ⁰ = arg max_Θ p(y_{1:n} | x⁰, Θ, r) using scaled conjugate gradient descent (SCG); 3. Using SCG, setting x⁰ and Θ⁰ to be the initial values, solve x̂ and Θ̂ w.r.t. (4). [...] Based on our experience, setting r = 1 always yields good results, and hence is used as a default across this paper. (A hedged sketch of these three steps appears below the table.) |
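The three-step procedure quoted in the Experiment Setup row maps naturally onto a short script. The sketch below is not the authors' implementation: it assumes scikit-learn's `LocallyLinearEmbedding` for step 1, substitutes scipy's generic conjugate-gradient optimizer for scaled conjugate gradient (SCG), uses a shared squared-exponential GP kernel across output dimensions, and stands in a placeholder pairwise log-sine repulsion penalty for the paper's exact objective (4).

```python
# Minimal sketch of the three-step procedure, not the authors' code.
# Assumptions: scikit-learn's LLE for step 1, scipy's conjugate-gradient
# optimizer in place of SCG, a shared squared-exponential kernel across
# output dimensions, and a placeholder pairwise log-sine repulsion term
# standing in for the paper's exact objective (4).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from sklearn.manifold import LocallyLinearEmbedding


def rescale_01(x, eps=1e-3):
    """Rescale the learned 1-D coordinate into (0, 1), as in step 1."""
    x = (x - x.min()) / (x.max() - x.min())
    return eps + (1.0 - 2.0 * eps) * x


def gp_neg_log_lik(x, Y, theta):
    """Negative GP marginal log-likelihood, squared-exponential kernel
    shared across the D output dimensions (an assumption of this sketch)."""
    sig2, ell, noise = np.exp(theta)                  # log-parameterised
    d = x[:, None] - x[None, :]
    K = sig2 * np.exp(-0.5 * (d / ell) ** 2) + (noise + 1e-6) * np.eye(len(x))
    try:
        L = np.linalg.cholesky(K)
    except np.linalg.LinAlgError:
        return np.inf                                 # reject non-PD proposals
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))
    return 0.5 * np.sum(Y * alpha) + Y.shape[1] * np.sum(np.log(np.diag(L)))


def repulsion_neg_log(x, r=1.0):
    """Pairwise repulsion penalty on the latent coordinates; a placeholder
    for the exact Coulomb repulsion potential used in the paper."""
    diffs = np.abs(x[:, None] - x[None, :])[np.triu_indices(len(x), k=1)]
    return -2.0 * r * np.sum(np.log(np.sin(np.pi * diffs) + 1e-12))


def fit_electro_gp(Y, r=1.0, n_neighbors=10):
    """Y: (n, D) array of observations, e.g. flattened video frames."""
    n = Y.shape[0]
    # Step 1: one-dimensional coordinate x0 from LLE, rescaled into (0, 1).
    x0 = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                n_components=1).fit_transform(Y).ravel()
    x0 = rescale_01(x0)
    # Step 2: Theta0 maximising the GP likelihood conditionally on x0.
    theta0 = minimize(lambda th: gp_neg_log_lik(x0, Y, th),
                      np.zeros(3), method="CG").x
    # Step 3: joint refinement of (x, Theta) warm-started at (x0, Theta0);
    # a sigmoid keeps x inside (0, 1) during unconstrained optimisation
    # (a convenience of this sketch; gradients left to finite differences).
    def objective(z):
        x, th = expit(z[:n]), z[n:]
        return gp_neg_log_lik(x, Y, th) + repulsion_neg_log(x, r)
    z0 = np.concatenate([np.log(x0 / (1.0 - x0)), theta0])
    z_hat = minimize(objective, z0, method="CG").x
    return expit(z_hat[:n]), z_hat[n:]
```

For the teapot example, one would flatten each of the 200 frames into a row of `Y` before calling `fit_electro_gp(Y)`; the number of neighbors and the choice of optimizer here are assumptions of this sketch, not values reported in the paper.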