Z-Transforms and its Inference on Partially Observable Point Processes
Authors: Young Lee, Thanh Vinh Vo, Kar Wai Lim, Harold Soh
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present experiments in a controlled setting and show that our inference algorithm is indeed able to recover the optimal parameters. 5 Experiments Our inference algorithm is tested on synthetic data generated from our model. |
| Researcher Affiliation | Academia | Young Lee, Thanh Vinh Vo, Kar Wai Lim, Harold Soh National University of Singapore |
| Pseudocode | Yes | Algorithm 1 Inference of parameters via Z transforms |
| Open Source Code | No | The paper does not provide any specific links to source code for its own methodology nor does it explicitly state that the code is publicly available. It only mentions 'using the scikit-learn implementation' for a baseline model (GPR), which is a third-party library. |
| Open Datasets | Yes | We compare the various methods on two real-world datasets: breaches data, henceforth known as Breaches, comprising times of security violations in the period 2005 to 2018 [Privacy Rights Clearinghouse, 2018]; and Calgary, comprising access logs to the University of Calgary from October 1994 to October 1995 [The Internet Traffic Archive, 2018]. http://ita.ee.lbl.gov/html/traces.html. |
| Dataset Splits | Yes | For each of these datasets, we split the recorded number of events into a train and test set. We then carry out predictions of event counts over periods with different granularities. The precise split methodology for each dataset is as follows: For the Breaches dataset, we select the first 3,926 data points as the training set to train the models and predict the next 1 to 4 data points (in days). For Calgary, the first 5,000 data points are selected as the training set and we predict for the next 20 minutes. |
| Hardware Specification | No | The paper mentions 'Core i7 machine with 8GB RAM' but this is in the context of describing memory issues encountered by a baseline model (GPR), not the hardware used for their own experiments. No specific hardware details for their own method's experimental setup are provided. |
| Software Dependencies | No | The paper mentions using 'Python by using the mpmath.bell() function' and the 'scikit-learn implementation' for GPR, but it does not specify version numbers for any key software dependencies or libraries used for their own model's implementation or experiments. |
| Experiment Setup | Yes | The parameters are obtained by minimizing the relative entropy between the Z transform and its empirical counterpart, i.e. D_KL(p(η) ∥ q(η)) = Σ_η p(η) log (p(η)/q(η)) ... Another way to estimate λσ is to minimize the mean squared loss between the theoretical Z transform and the empirical version as follows: argmin_{τ,c,d} Σ_{n=1}^{N} (φσ(ηn, s, t) − φm(ηn, s, t))² . ... Minimize the relative entropy or mean squared error between the theoretical and empirical Z transforms as in equation (15) or equation (21) using standard optimization packages whilst enforcing strict constraints such that τ > 0, δ > 0 and g(·), f(·) being non-negative activation functions. ... we match the entire structure of the Z transform by regulating the number of different values for η until there is a surfeit of moment conditions compared to the number of unknowns. |
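The mean-squared-loss variant of the setup above can be illustrated on a toy case. The paper's model is more general (partially observable point processes), but the following sketch uses a plain homogeneous Poisson process, for which the Z transform has the closed form E[η^N(s,t)] = exp(λT(η − 1)), as a stand-in: it matches the theoretical transform to the empirical transform (1/N) Σᵢ η^{nᵢ} at several probe values of η and minimizes the squared gap. All names, the probe values, and the grid search (standing in for the "standard optimization packages" the paper mentions) are illustrative assumptions, not the authors' code.

```python
import math
import random

random.seed(0)
T = 1.0            # length of the observation window (s, t)
TRUE_LAM = 3.0     # ground-truth intensity used to simulate counts

def sample_poisson(lam):
    """Draw one Poisson(lam) count via Knuth's multiplication method."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Simulated event counts over N independent windows of length T
counts = [sample_poisson(TRUE_LAM * T) for _ in range(5000)]

# Probe points for the Z transform (illustrative choice)
etas = [0.2, 0.4, 0.6, 0.8]

# Empirical Z transform at each probe: (1/N) * sum_i eta ** n_i
emp = {eta: sum(eta ** n for n in counts) / len(counts) for eta in etas}

def theoretical_Z(eta, lam):
    # Homogeneous Poisson: E[eta ** N(s, t)] = exp(lam * T * (eta - 1))
    return math.exp(lam * T * (eta - 1.0))

def mse_loss(lam):
    # Squared gap between theoretical and empirical transforms, summed
    # over the probe values of eta (the moment conditions).
    return sum((theoretical_Z(eta, lam) - emp[eta]) ** 2 for eta in etas)

# Coarse grid search over lam > 0; trivially enforces the positivity
# constraint that the paper imposes on its parameters.
lam_hat = min((step / 100.0 for step in range(1, 1001)), key=mse_loss)
print(f"estimated rate: {lam_hat:.2f}")
```

With four probe values of η and a single unknown λ, there is already the "surfeit of moment conditions compared to the number of unknowns" the setup calls for; the estimate lands near the true rate of 3.0.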