Dispatch Guided Allocation Optimization for Effective Emergency Response
Authors: Supriyo Ghosh, Pradeep Varakantham
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, using two real-world EMS data sets, we empirically demonstrate that our heuristic approaches provide significant improvement over the best known benchmark approach. |
| Researcher Affiliation | Academia | Supriyo Ghosh, School of Information Systems, Singapore Management University (supriyog.2013@phdis.smu.edu.sg); Pradeep Varakantham, School of Information Systems, Singapore Management University (pradeepv@smu.edu.sg) |
| Pseudocode | Yes | Algorithm (1) delineates the key functionalities of the event-driven simulator. Let ξ denote a set of events, where each event e ∈ ξ represents an emergency request, and the list is sorted by the arrival time of incidents. Suppose we need to evaluate the performance of allocation strategy A. The set of available ERVs, I, is initialized according to the allocation strategy A. Let a_r denote the ERV assigned to request r ∈ R, where each incident is initialized with a null assignment. In each iteration, the first element in the event list is popped. If the element is a new request r, we assign the nearest available ERV a_r to the request (a typical dispatch strategy followed by real-world EMS operators) and remove that ERV from the available set I. In addition, we insert a job-completion event into ξ at time t_r(a_r), which denotes the time when the ERV a_r will return to its base after serving request r. Conversely, if the popped event is a job-completion event for r, we add the ERV a_r back into the available set I. This iterative process continues until the event list is empty. Once the simulation is over, we have a valid assignment for each request and can therefore compute the utility of the given allocation strategy A as the percentage of requests served within the threshold response time, Δ. Algorithm 1: EDSimulator(R, B, A). (A runnable sketch of this simulator appears after the table.) |
| Open Source Code | No | No explicit statement about providing open-source code for the methodology described in this paper or a direct link to such code was found. |
| Open Datasets | Yes | We conduct experiments⁴ on two real-world data sets. We obtain dataset-1 from a real-world EMS in the form of an anonymous and modified sample of request logs. Dataset-2 is adopted from Yue, Marla, and Krishnan (2012)⁵. |
| Dataset Splits | Yes | We divide our 6 months of data into two parts: the first 3 months are used for training to generate the allocation strategies, and the performance of these strategies is tested on the other 3 months of data. We use Sample Average Approximation (SAA) (Verweij et al. 2003) for validation and performance estimation. We generate 10 policies for each of the weekdays, where each policy is generated using request logs of that particular weekday for 10 consecutive weeks (e.g., the second policy for Monday is generated using requests of all the Mondays from week 11 to week 20). Then we identify the policy with the best validation performance for each weekday separately over 500 weeks of request logs. (A sketch of this weekday-wise selection procedure follows the table.) |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments are provided. Footnote 4 only mentions software used. |
| Software Dependencies | Yes | All the optimization models are solved using IBM ILOG Optimization Studio V12.5, incorporated within Python code. (An illustrative solver call is sketched after the table.) |
| Experiment Setup | No | No specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or system-level training settings are provided in the main text. |
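
The simulator excerpt quoted in the Pseudocode row is complete enough to reconstruct. Below is a minimal, hedged Python sketch of EDSimulator: the event queue, nearest-available dispatch, and job-completion handling follow the quoted description, while the data layout (requests as `(arrival_time, x, y)` tuples), Euclidean travel times, the fixed service time, and the Δ value are all assumptions made for the sake of a runnable example, not details from the paper.

```python
import heapq
import math

# Assumed constants -- not taken from the paper.
SERVICE_TIME = 30.0  # on-scene/turnaround time per request (minutes)
DELTA = 15.0         # response-time threshold Delta (minutes)

def travel_time(a, b):
    """Euclidean travel time between two (x, y) points at unit speed."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def ed_simulator(requests, allocation, delta=DELTA):
    """Replay `requests` (list of (arrival_time, x, y)) against an
    `allocation` (dict mapping base (x, y) -> ERV count) and return the
    fraction of requests served within `delta`."""
    # Initialize the available-ERV set I from the allocation strategy A.
    available = []
    for base, count in allocation.items():
        available.extend([base] * count)

    # Event list xi, kept ordered by time: (time, kind, payload).
    events = [(t, "request", (x, y)) for (t, x, y) in requests]
    heapq.heapify(events)

    served = total = 0
    while events:
        now, kind, payload = heapq.heappop(events)
        if kind == "request":
            total += 1
            if not available:
                continue  # simplification: an unserved request is dropped
            # Dispatch the nearest available ERV (the quoted strategy).
            erv = min(available, key=lambda base: travel_time(base, payload))
            available.remove(erv)
            response = travel_time(erv, payload)
            if response <= delta:
                served += 1
            # Job-completion event at t_r(a_r): travel out, serve, return.
            heapq.heappush(events,
                           (now + 2 * response + SERVICE_TIME, "done", erv))
        else:
            # Job completion: the ERV rejoins the available set at its base.
            available.append(payload)

    return served / total if total else 0.0

if __name__ == "__main__":
    allocation = {(0.0, 0.0): 2, (10.0, 0.0): 1}
    requests = [(0.0, 2.0, 3.0), (1.0, 9.0, 1.0), (2.0, 5.0, 5.0)]
    print(ed_simulator(requests, allocation))
```

A binary heap keeps the event list ordered by time, which matches the "pop the first element" step in the quoted algorithm.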
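The Dataset Splits row describes a weekday-wise policy generation and SAA-style validation step. The sketch below illustrates that selection loop under stated assumptions: `generate_policy` is a hypothetical stand-in for the paper's allocation optimization, `weekday_logs` is a chronological list of one weekday's request logs (one entry per week), and validation reuses `ed_simulator` from the previous sketch.

```python
import statistics

def generate_policy(training_weeks):
    """Hypothetical stand-in for the paper's allocation optimization,
    which builds an allocation strategy from a block of weekly logs."""
    raise NotImplementedError

def select_best_policy(weekday_logs, validation_weeks,
                       num_policies=10, weeks_per_policy=10):
    """Build `num_policies` candidate allocations for one weekday, each
    from a consecutive block of training weeks (policy k uses weeks
    k*10+1 .. (k+1)*10, matching the quoted example), then keep the
    candidate with the best average simulated utility on the validation
    weeks -- a sample-average estimate in the SAA spirit."""
    candidates = []
    for k in range(num_policies):
        block = weekday_logs[k * weeks_per_policy:(k + 1) * weeks_per_policy]
        candidates.append(generate_policy(block))

    def score(policy):
        # ed_simulator is the event-driven simulator sketched above.
        return statistics.mean(ed_simulator(week, policy)
                               for week in validation_weeks)

    return max(candidates, key=score)
```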
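The Software Dependencies row notes that the optimization models are solved with IBM ILOG Optimization Studio (CPLEX) embedded in Python. For readers unfamiliar with that setup, here is a toy coverage-style allocation model written against the docplex API; the variables, data, and constraints are invented for illustration and are not the paper's formulation.

```python
# Illustrative only: a toy allocation model solved via the CPLEX Python
# API (docplex). All data below is invented for demonstration.
from docplex.mp.model import Model

bases = ["b1", "b2", "b3"]
regions = ["r1", "r2"]
demand = {"r1": 40, "r2": 60}                      # assumed request counts
covers = {("b1", "r1"), ("b2", "r1"), ("b2", "r2"), ("b3", "r2")}
num_ervs = 4

m = Model(name="toy_allocation")
x = m.integer_var_dict(bases, lb=0, name="x")      # ERVs placed at each base
y = m.binary_var_dict(regions, name="y")           # is the region covered?

m.add_constraint(m.sum(x[b] for b in bases) == num_ervs)
for r in regions:
    # A region counts as covered only if some ERV sits at a base in range.
    m.add_constraint(y[r] <= m.sum(x[b] for b in bases if (b, r) in covers))

m.maximize(m.sum(demand[r] * y[r] for r in regions))
sol = m.solve()
if sol:
    print({b: int(x[b].solution_value) for b in bases})
```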