AMELIA stands for Analyzable ModELs Inference from trAjectories, and accompanies the SEAMS19 paper “Inferring Analyzable Models from Trajectories of Spatially-Distributed Internet-of-Things”.

Artifact Instructions

The tarball below contains a VirtualBox image with the program, all its dependencies, and the sample dataset already installed: http://dsg.tuwien.ac.at/team/ctsigkanos/amelia/amelia-seams19.tar.gz

  1. After booting the VM image, log in with user: yves, password: klein.
  2. In a terminal, run ./run_amelia.sh
  3. This executes a sample run configuration (see below). Note that placing landmarks takes some time; a small model is then inferred from 10 trajectories of the Beijing dataset.

The program is also found in amelia.tar.gz. If you prefer a local install, follow the instructions in the README.

Usage

The program interacts with a geospatially-enabled MongoDB instance, which resolves spatial queries over an earth spheroid geometry. To use AMELIA in practice, one i) obtains a trajectory dataset, ii) obtains a set of geospatial landmarks, iii) configures parameters for the spatial model inference (such as the landmarks' search range), and iv) invokes the tool. Helper procedures are available, e.g. for calculating traces based on time windows or for simulating movement of entities within models.
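As an illustration of the kind of spatial query such a setup resolves, the sketch below builds a MongoDB $nearSphere filter for landmarks within a given search range of a trajectory point. The field name "loc" and the query shape are illustrative assumptions, not AMELIA's actual schema; a 2dsphere index on the landmark collection is what lets MongoDB interpret $maxDistance in meters over the spheroid.

```python
# Illustrative sketch (not AMELIA's code): build a $nearSphere query
# document selecting landmarks within search_range_m meters of a
# (lon, lat) trajectory point. Field names are assumptions.
def near_query(lon, lat, search_range_m):
    return {
        "loc": {
            "$nearSphere": {
                "$geometry": {"type": "Point", "coordinates": [lon, lat]},
                "$maxDistance": search_range_m,
            }
        }
    }

# Example: landmarks within 5 meters of a point in central Beijing.
q = near_query(116.397, 39.909, 5)
```

Such a filter would be passed to a find() call on a landmark collection that has a 2dsphere index on the location field.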

An increased search range produces a denser model and, as a consequence, longer presence traces. The model is output in GML format, as specified in the --help directive, while the traces are stored inside the MongoDB instance. The available options can be listed with python amelia.py --help.
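To make the search-range/trace relationship concrete, here is a hedged sketch of how a timestamped trajectory can be turned into a presence trace using time windows, in the spirit of the helper procedures mentioned above (this is not AMELIA's implementation; the `nearby` callback stands in for the landmark lookup whose results grow with the search range):

```python
# Illustrative sketch: bucket a time-sorted trajectory into windows and
# record which landmarks are present in each. A larger search range makes
# `nearby` return more landmarks, hence fewer empty windows and a longer
# presence trace.
from itertools import groupby

def presence_trace(fixes, window_s, nearby):
    """fixes: time-sorted list of (timestamp_s, lon, lat).
    nearby(lon, lat) -> set of landmark ids within the search range.
    Returns one set of landmarks per non-empty time window."""
    trace = []
    # groupby groups consecutive fixes, which suffices for sorted input.
    for _, group in groupby(fixes, key=lambda f: f[0] // window_s):
        present = set()
        for _, lon, lat in group:
            present |= nearby(lon, lat)
        if present:  # empty windows contribute nothing to the trace
            trace.append(present)
    return trace
```

With a search range so small that `nearby` never matches, the trace is empty, matching the "Presence trace length 0" lines visible in the sample run below.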

Resulting models can be readily verified with a variety of tools operating on the spatial models produced (e.g. STREL, see below), or transformed to other formalisms, taking advantage of the generality of the underlying graph structure.

Example Workflow

A typical workflow is the following. The first two steps are optional, as the datasets are already bundled with the artifact:

  1. (Optional) Download landmarks from OpenStreetMap and transform to CSV: http://wiki.openstreetmap.org/wiki/Planet.osm

  2. (Optional) Download trajectory data from: https://www.microsoft.com/en-us/research/publication/t-drive-trajectory-data-sample/

  3. Execute: python amelia.py Tdrive-dataset/taxi_log_2008_by_id -i china-latest.csv -n 50

This builds a spatial model from all landmarks in Beijing, but with only 50 taxi trajectories. To build the whole model, remove the '-n 50' option; be advised that this will take a long time.
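For step 1, a minimal sketch of turning OSM XML into a landmark CSV is shown below. The column layout (id, name, lon, lat) is an assumption for illustration; the actual format expected for china-latest.csv may differ, so adjust accordingly.

```python
# Hedged sketch of OSM-to-CSV extraction (not AMELIA's converter):
# keep only named OSM nodes, which are the ones usable as landmarks.
# The output columns (id, name, lon, lat) are an assumed layout.
import csv
import xml.etree.ElementTree as ET

def osm_nodes_to_csv(osm_xml, out):
    writer = csv.writer(out)
    writer.writerow(["id", "name", "lon", "lat"])
    for node in ET.fromstring(osm_xml).iter("node"):
        name = next((t.get("v") for t in node.iter("tag")
                     if t.get("k") == "name"), None)
        if name:
            writer.writerow([node.get("id"), name,
                             node.get("lon"), node.get("lat")])
```

For a full Planet.osm extract, a streaming parser (e.g. ET.iterparse) would be preferable to fromstring, which loads the whole document into memory.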

Example Run

python amelia.py Tdrive-dataset/taxi_log_2008_by_id -i china-latest.csv -n 10

AMELIA: Analyzable ModELs Inference from trAjectories.
Initializing POIs from china-latest.csv (this may take a long time)...
Loading POIs from china-latest.csv ... added 274638.
POIs in database 274638
# 10 Trajectory length 1781 -> Presence trace length 8
Tdrive-dataset/taxi_log_2008_by_id/3356.txt
# 9 Trajectory length 1727 -> Presence trace length 0
Tdrive-dataset/taxi_log_2008_by_id/9696.txt
# 8 Trajectory length 1875 -> Presence trace length 5
Tdrive-dataset/taxi_log_2008_by_id/4624.txt
# 7 Trajectory length 918 -> Presence trace length 6
Tdrive-dataset/taxi_log_2008_by_id/2556.txt
# 6 Trajectory length 1546 -> Presence trace length 10
Tdrive-dataset/taxi_log_2008_by_id/5128.txt
# 5 Trajectory length 1533 -> Presence trace length 5
Tdrive-dataset/taxi_log_2008_by_id/5384.txt
# 4 Trajectory length 2116 -> Presence trace length 3
Tdrive-dataset/taxi_log_2008_by_id/222.txt
# 3 Trajectory length 2167 -> Presence trace length 8
Tdrive-dataset/taxi_log_2008_by_id/5703.txt
# 2 Trajectory length 1507 -> Presence trace length 7
Tdrive-dataset/taxi_log_2008_by_id/7234.txt
# 1 Trajectory length 631 -> Presence trace length 3
Tdrive-dataset/taxi_log_2008_by_id/573.txt
Building spatial model... ElementPresences 55
Nodes 55 Edges 46 done.
Writing output files networks_generated/network-trajects10searchrange5
Model inference took 257.15 sec.