Noether Embedding: Efficient Learning of Temporal Regularities

Authors: Chi Gao, Zidong Zhou, Luping Shi

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments demonstrate that NE consistently achieves about double the F1 scores for detecting valid TRs compared to classic embeddings, and it provides over ten times higher confidence scores for querying TR intervals.
Researcher Affiliation | Collaboration | Chi Gao, Zidong Zhou, Luping Shi: Center for Brain-Inspired Computing Research; Optical Memory National Engineering Research Center; Tsinghua University-China Electronics Technology HIK Group Co. Joint Research Center for Brain-Inspired Computing; IDG/McGovern Institute for Brain Research at Tsinghua University; Department of Precision Instrument, Tsinghua University, Beijing 100084, China.
Pseudocode | No | The paper provides a framework description and mathematical formulas but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The code is publicly available at: https://github.com/KevinGao7/Noether-Embedding.
Open Datasets | Yes | We assess embeddings on the classic ICEWS14, ICEWS18, and GDELT datasets. In our experiments, we use ICEWS14 and ICEWS18, the same as in (Han et al., 2020). [...] We also use the GDELT released by (Jin et al., 2019).
Dataset Splits | No | The paper does not explicitly define validation splits or use methods such as cross-validation within a single dataset. It does mention deriving a global threshold on ICEWS14 (D14) and then applying it unchanged to ICEWS18 (D18), which is a form of cross-dataset hyperparameter tuning, but no distinct validation set is defined (see the threshold sketch below the table).
Hardware Specification | Yes | The experiments are conducted on a single GPU (GeForce RTX 3090).
Software Dependencies | No | The paper mentions using the 'Adagrad optimizer' and 'torch.LongTensor' (implying PyTorch) but does not provide version numbers for any software dependencies such as PyTorch, Python, or CUDA.
Experiment Setup | Yes | For NE, d = 400, Cp = 1, Cn = 0, and the global time vector ω is set as ω_k = (2π·ω_max)^(k/d) · (1/T_a), k = 0, 1, ..., d−1, where T_a is the number of absolute time points and ω_max is a tunable hyperparameter set to 600. [...] All models are trained for 100 epochs on each dataset using the Adagrad optimizer (with a learning rate of 0.01) and the StepLR learning rate scheduler (with a step size of 10 and a gamma of 0.9). A configuration sketch follows the table.
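
The cross-dataset thresholding noted under Dataset Splits can be made concrete. The sketch below is an illustration only: the F1-maximizing selection rule, the array names, and the toy data are assumptions rather than the authors' code; only the fit-on-D14, apply-to-D18 pattern comes from the paper.

```python
# Hypothetical sketch: fit one global confidence threshold on ICEWS14 (D14)
# and reuse it unchanged on ICEWS18 (D18), as the paper describes.
import numpy as np

def best_f1_threshold(scores: np.ndarray, labels: np.ndarray) -> float:
    """Scan candidate thresholds and return the one maximizing F1 on D14."""
    best_thr, best_f1 = 0.0, -1.0
    for thr in np.unique(scores):
        pred = scores >= thr
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        f1 = 2 * tp / max(2 * tp + fp + fn, 1)
        if f1 > best_f1:
            best_thr, best_f1 = thr, f1
    return best_thr

# d14_scores / d14_labels / d18_scores are placeholder toy data.
d14_scores = np.random.rand(1000)
d14_labels = (d14_scores > 0.6).astype(int)   # toy validity labels
thr = best_f1_threshold(d14_scores, d14_labels)

d18_scores = np.random.rand(1000)
d18_valid = d18_scores >= thr                 # same threshold, no re-tuning
```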
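For the Experiment Setup row, here is a minimal PyTorch sketch of the reported configuration. It assumes T_a = 365 (daily timestamps, as in ICEWS14) and stands in a placeholder embedding table for the NE model, whose architecture the row does not reproduce; the frequency formula follows the reconstruction above.

```python
# Minimal sketch of the reported training configuration, under the
# assumptions stated in the lead-in; not the authors' implementation.
import torch

d, omega_max = 400, 600      # embedding dimension; tunable hyperparameter
T_a = 365                    # assumed number of absolute time points
k = torch.arange(d, dtype=torch.float32)
omega = (2 * torch.pi * omega_max) ** (k / d) / T_a   # global time vector ω_k

model = torch.nn.Embedding(10_000, d)   # placeholder for the NE parameters
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.9)

for epoch in range(100):     # all models are trained for 100 epochs
    # Placeholder objective: the real loss fits event scores toward the
    # Cp = 1 (positive) and Cn = 0 (negative) targets.
    idx = torch.randint(0, 10_000, (128,))
    loss = model(idx).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```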