Data-driven Estimation of Sinusoid Frequencies

Authors: Gautier Izacard, Sreyas Mohan, Carlos Fernandez-Granda

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To validate our approach we simulate data according to the signal model in equation 1 and the measurement model in equation 2 for N := 50. We evaluate the different methods on a test set where the clean signal samples follow the model in Section 3.1. For each noise level, we generate 10^3 signals, which are different from the ones in the training set. Figure 5 shows the results. Figure 6 shows the fraction of signals in the test set for which the number of components is not estimated correctly for different methodologies. Figure 7 shows the results. DeepFreq clearly outperforms the other methods over the whole range of noise levels.
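To make the simulation protocol in this excerpt concrete, here is a minimal sketch of how such a test set could be generated. The superposition-of-sinusoids model, N = 50, and the 10^3 signals per noise level come from the paper; the amplitude and frequency distributions, the maximum number of components, and the SNR convention are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np

def simulate_test_signals(num_signals=1000, N=50, max_components=10, snr=10.0, seed=0):
    """Sketch of the test-set simulation: each clean signal is a superposition
    of complex sinusoids (equation 1 in the paper), observed at N = 50 samples
    with additive Gaussian noise (equation 2)."""
    rng = np.random.default_rng(seed)
    t = np.arange(N)
    signals = np.empty((num_signals, N), dtype=np.complex128)
    for k in range(num_signals):
        m = rng.integers(1, max_components + 1)            # number of components
        freqs = rng.uniform(0.0, 1.0, size=m)              # normalized frequencies
        amps = rng.uniform(0.5, 1.5, size=m)               # assumed amplitude range
        clean = (amps[:, None] * np.exp(2j * np.pi * np.outer(freqs, t))).sum(axis=0)
        noise = rng.standard_normal(N) + 1j * rng.standard_normal(N)
        # Assumed SNR convention: ratio of signal power to noise power.
        noise *= np.linalg.norm(clean) / (np.linalg.norm(noise) * np.sqrt(snr))
        signals[k] = clean + noise
    return signals
```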
Researcher Affiliation | Academia | Gautier Izacard (École Polytechnique, gautier.izacard@polytechnique.edu); Sreyas Mohan (Center for Data Science, New York University, sm7582@nyu.edu); Carlos Fernandez-Granda (Courant Institute of Mathematical Sciences and Center for Data Science, New York University, cfgranda@cims.nyu.edu)
Pseudocode | No | The paper describes the model architecture textually and with diagrams but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | The code used to train and evaluate our models is available online at https://github.com/sreyas-mohan/DeepFreq.
Open Datasets | No | To validate our approach we simulate data according to the signal model in equation 1 and the measurement model in equation 2 for N := 50. We build the training set by generating 2 × 10^5 clean signals. During training, new noise realizations are added at each epoch.
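The per-epoch noise refresh described in this excerpt can be implemented by storing only the clean signals and sampling a new noise realization on every access. Below is a minimal sketch assuming a PyTorch-style Dataset (the paper does not name its framework; see the Software Dependencies entry) and a signal-to-noise power-ratio convention.

```python
import torch
from torch.utils.data import Dataset

class FreshNoiseDataset(Dataset):
    """Holds the pre-generated clean signals and adds a new noise realization
    every time an item is fetched, so each epoch sees different noise."""

    def __init__(self, clean_signals, snr):
        self.clean = clean_signals    # complex tensor, shape (num_signals, N)
        self.snr = snr                # assumed: signal power / noise power

    def __len__(self):
        return self.clean.shape[0]

    def __getitem__(self, idx):
        x = self.clean[idx]
        noise = torch.randn_like(x.real) + 1j * torch.randn_like(x.real)
        scale = torch.linalg.vector_norm(x) / (torch.linalg.vector_norm(noise) * self.snr ** 0.5)
        return x + scale * noise
```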
Dataset Splits | No | The paper describes the generation of training and test sets (Section 3.2) but does not explicitly describe a separate validation split for its own models; a 'validation dataset' is mentioned only in the context of the CBLasso baseline.
Hardware Specification | Yes | Running times are measured on an Intel Core i5-6300HQ CPU. Training takes 11 hours on an NVIDIA P40.
Software Dependencies | No | The paper mentions the Adam optimizer but does not specify any software libraries (e.g., PyTorch, TensorFlow) or version numbers used in the implementation.
Experiment Setup | Yes | We fix the standard deviation of the Gaussian filter in the representation to 0.3/N. The number of channels C in the encoder is set to 64. The output dimensionality M of the encoder is set to 125. The number of intermediate convolutional layers is set to 20. The width of the filter in the transposed convolution in the decoder is set to 25 with a stride of 8 in order to obtain a discretization of the representation on a grid of size 10^3. The training loss is minimized using the Adam optimizer [24] with a starting learning rate of 3 × 10^-4. The initial layer contains 16 filters of size 25 with a stride of 5, which downsample the input into feature vectors of length 200. We set the number of subsequent convolutional layers to 20, each containing 16 filters of size 3.
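The transposed-convolution settings quoted above can be sanity-checked numerically. In the sketch below, the kernel width (25), stride (8), channel count (C = 64), encoder output length (M = 125), and target grid size (10^3) come from the excerpt, while the padding values are assumptions chosen so the output length lands exactly on 1000; the authors' actual padding may differ.

```python
import torch
import torch.nn as nn

# Output length of a ConvTranspose1d (dilation = 1):
#   L_out = (L_in - 1) * stride - 2 * padding + kernel_size + output_padding
#         = (125 - 1) * 8 - 2 * 9 + 25 + 1 = 1000
decoder = nn.ConvTranspose1d(in_channels=64, out_channels=1,
                             kernel_size=25, stride=8,
                             padding=9, output_padding=1)

features = torch.randn(1, 64, 125)   # toy encoder output: C = 64 channels, length M = 125
print(decoder(features).shape)       # torch.Size([1, 1, 1000])
```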