motis-project / motis

Intermodal Mobility Information System

Home Page: https://motis-project.de

License: MIT License

CMake 1.51% C++ 37.92% HTML 0.43% Shell 0.09% Java 22.72% JavaScript 7.20% CSS 3.97% Elm 21.14% Lua 0.28% Python 0.64% Dockerfile 0.04% TypeScript 4.06%
routing routing-engine routing-algorithm gtfs json-api intermodal real-time mobility gtfs-rt transit

motis's Introduction


MOTIS stands for Multi Objective Travel Information System.

The core features are:

  • Intermodal Routing: computing optimal journeys mixing public transit, sharing mobility, walking, etc. in sensible ways. Read more.
  • Real Time Support: considering delays, train cancellations, additional services, reroutings, track changes, etc. Read more.
  • Visualization: view vehicle movements in real-time. Try it out!
  • JSON API: the backend provides a JSON API via HTTP. Documentation

More detailed information can be found at motis-project.de.

To demonstrate the functionality, an Android app and a web-based information system are available. The source code for both front ends is available as open source software as well.

The system can consume schedule timetables in the GTFS or HAFAS format as well as real-time information in GTFS-RT (and RISML, a proprietary format used at Deutsche Bahn) as input data. OpenStreetMap data is used for pedestrian routing (handled by Per Pedes Routing) and car routing (handled by OSRM).

Documentation

Contribution

Feel free to contribute in any area you like (new features, small improvements, bug fixes, documentation, testing, etc.)! By making a pull request you agree to license your contribution under the MIT and Apache 2.0 licenses as described below.

Alternatives

License

MOTIS can be licensed under the terms of the MIT license or under the terms of the Apache License, Version 2.0.

motis's People

Contributors

1maxnet1, carstenhagelgans, ch33hau, felixguendling, jbruechert, julianharbarth, kalon33, mdobes, michaelkutzner, pablohoch, sfahnens, vkrause


motis's Issues

Newest buildcache incompatible with Docker build image

The newest release v0.23.0 of buildcache requires GLIBC 2.29, while the motisproject/cpp-build:latest Docker image provides only GLIBC 2.27.
As the MOTIS build process downloads the newest release of buildcache, this breaks the Linux development workflow described here.

Build Log: build_log.txt

I could provide a PR pinning the buildcache version to v0.22.3 (the latest working release), but a better solution might be to include a working buildcache version in the Docker image.

CSA SSE Backward Search Bug

Comparator Routing vs CSA SSE:

ERROR [id = 231]: 2018-06-15T13:10:00Z
#con1 = 1, #con2 = 0
    0:           6min [13:04 - 13:10]	    0		-                          	-
   1 :  -                          	-    		-                          	-


  -> connections in FIRST with no match in SECOND: 0
  -> connections in SECOND with no match in FIRST: -
  -> no total domination



Statistics:
  #matches = 0/1
  #mismatches  = 1/1
  #invalid = 0/1

Routing Response (CSA SSE = empty):

Response 231 contains 1 journey


Journey: duration=6   transfers=0  accessibility=0                15.06. 11:04          --> 15.06. 11:10          (UTC)

Stops:
 0: 0600723 Heerde Klick, Kirchdorf b Sulingen................ a:                        d: 15.06. 11:04               enter
 1: 0600734 Volksbank, Barenburg.............................. a: 15.06. 11:10           d:                       exit

Transports:
 0: 0  -> 1  TRAIN Bus 127                   duration=6

Trips:
 0: 0  -> 1  {0600731,    159, 2018-06-15 10:54} -> {0600047, 2018-06-15 11:23}, line_id=127
       #/trip/0600731/159/1529060040/0600047/1529061780/127  services.txt:1:20

Arrival Time Array in get_results CSA Simple CPU Version (correct result):

STATION  | EARLIEST ARRIVAL TIME WITH N TRANSFERS
000:    0    0    0    0    0    0    0    0 
001:    0    0    0    0    0    0    0    0 
002:    0    0    0    0    0    0    0    0 
003:    0    0    0    0    0    0    0    0 
004:    0    0    0    0    0    0    0    0 
005:    0    0    0    0    0    0    0    0 
006:    0    0    0    0    0    0    0    0 
007:    0    0    0    0    0    0    0    0 
008:    0    0    0    0    0    0    0    0 
009:    0    0    0    0    0    0    0    0 
010:    0    0    0    0    0    0    0    0 
011:    0    0    0    0    0    0    0    0 
012:    0 7862    0    0    0    0    0    0 
013: 7870    0    0    0    0    0    0    0 

Arrival Time Array in get_results CSA SSE Version (wrong result):

STATION  | EARLIEST ARRIVAL TIME WITH N TRANSFERS
000:    0    0    0    0    0    0    0    0 
001:    0    0    0    0    0    0    0    0 
002:    0    0    0    0    0    0    0    0 
003:    0    0    0    0    0    0    0    0 
004:    0    0    0    0    0    0    0    0 
005:    0    0    0    0    0    0    0    0 
006:    0    0    0    0    0    0    0    0 
007:    0    0    0    0    0    0    0    0 
008:    0    0    0    0    0    0    0    0 
009:    0    0    0    0    0    0    0    0 
010:    0    0    0    0    0    0    0    0 
011:    0    0    0    0    0    0    0    0 
012:    0    0    0    0    0    0    0    0 
013: 7868    0    0    0    0    0    0    0 

Schedule, Config, and Scripts to reproduce:
schedule.zip

Support additional edges in RAPTOR module

To enable intermodal routing, implement support for additional edges in the RAPTOR module.

For intermodal routing, the module receives additional MUMO edges, which are not time-dependent.

TODO

  • Initialize the search with the additional edges.
  • After the search has finished, use the additional edges to determine a Pareto set of journeys from start to finish.
  • Reconstruct the journeys with the correct MUMO edge IDs at the start and finish.
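The Pareto set mentioned above can be sketched with a simple dominance filter. This is an illustrative sketch only; `journey_candidate`, its fields, and `pareto_filter` are hypothetical names, and it assumes journeys are compared by arrival time and number of transfers alone:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical journey candidate: arrival time (minutes) and transfer count.
struct journey_candidate {
  std::uint32_t arrival_;
  std::uint8_t transfers_;
};

// True iff a dominates b: a is no worse in both criteria and better in one.
inline bool dominates(journey_candidate const& a, journey_candidate const& b) {
  return a.arrival_ <= b.arrival_ && a.transfers_ <= b.transfers_ &&
         (a.arrival_ < b.arrival_ || a.transfers_ < b.transfers_);
}

// Keep only the non-dominated candidates (the Pareto set).
inline std::vector<journey_candidate> pareto_filter(
    std::vector<journey_candidate> const& candidates) {
  std::vector<journey_candidate> result;
  for (auto const& c : candidates) {
    bool const dominated = std::any_of(
        begin(candidates), end(candidates),
        [&](journey_candidate const& other) { return dominates(other, c); });
    if (!dominated) {
      result.push_back(c);
    }
  }
  return result;
}
```

A real implementation would also have to carry the MUMO edge IDs through reconstruction; here only the dominance criterion is shown.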

Differences in documentation and code for `clasz` field

Looking at the source code for the mapping of the clasz field, I see a different mapping than in the documentation. The source code:

enum class service_class : service_class_t {
  AIR = 0,
  ICE = 1,
  IC = 2,
  COACH = 3,
  N = 4,
  RE = 5,
  RB = 6,
  S = 7,
  U = 8,
  STR = 9,
  BUS = 10,
  SHIP = 11,
  OTHER = 12,
  NUM_CLASSES
};

The documentation:

0: long distance high speed trains (e.g. TGV)
1: long distance inter city trains
2: long distance night trains
3: regional express trains
4: regional trains
5: metro trains
6: subway trains
7: trams
8: buses
9: other (flights, ferries, taxis, etc.)

Could you clarify what the correct mapping is?

Graph creation despite existing graphs

We decoupled the graph creation task from the actual running MOTIS instance.

The idea behind that was

  • that the service is immediately online once the container is started and
  • that the permanently running instance doesn't need as many resources as the graph creation one.

When I ran the MOTIS instance and linked it to the existing graphs, I realized that it starts creating the graphs again.
Is there a parameter to tell the running instance not to do that?

double adjustment of time offset

Schedule: 2018 (121)

[dataset]
begin=20180125
num_days=2

Loading the schedule fails with "ERROR: double adjustment of time offset [case 4]".

GTFS: Parse wheelchair_accessible and bikes_allowed + search parameters

Parse the wheelchair_accessible and bikes_allowed attributes from GTFS and make them enforceable in the search API / UI.

  • Parse wheelchair_accessible and bikes_allowed as attributes
  • Add an API parameter to restrict the search to edges that carry the required attributes
  • Add an option for these parameters in the UI
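At query time, enforcing such attributes could come down to a simple bitmask test. This is a hedged sketch, not the actual MOTIS edge representation; the enum values and `edge_usable` are hypothetical names:

```cpp
#include <cstdint>

// Hypothetical edge attribute bits (not the actual MOTIS representation).
enum edge_attribute : std::uint8_t {
  kWheelchairAccessible = 1U << 0U,
  kBikesAllowed = 1U << 1U,
};

// An edge is usable iff it carries every attribute the query requires.
inline bool edge_usable(std::uint8_t edge_attributes,
                        std::uint8_t required_attributes) {
  return (edge_attributes & required_attributes) == required_attributes;
}
```

With an empty requirement mask every edge passes, so unrestricted searches stay unaffected.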

initialization error: schedule not loaded - when trying to run ./motis

Hi,
great project, I love the demo.
I tried to recreate all the steps mentioned at https://motis-project.de/docs/install
using Switzerland data.
I grabbed the code and extracted it (directory structure same as suggested) and created a data folder.
Within the data folder, I created an hrd folder, where I added the 4_tu_ch_oev_sammlung_ch__2021_20211201_203455.zip file.
The data folder also contains the switzerland-latest.osm.pbf file and a log folder, where empty txt files have been created.

When trying to run motis from the root folder I get:

root@vpsxxxxx:/home/user/motis# ./motis
Used Options:

  server.host: 0.0.0.0
  server.port: 8080
  server.cert_path: ::dev::
  server.priv_key_path: ::dev::
  server.dh_path: ::dev::
  server.api_key:
  server.log_path:
  server.static_path:


  import.paths:
  import.data_dir: data
  import.require_successful: 1


  dataset.path: rohdaten
  dataset.prefix:
  dataset.graph_path: default
  dataset.write_serialized: 1
  dataset.write_graph: 0
  dataset.read_graph: 0
  dataset.read_graph_mmap: 0
  dataset.cache_graph: 0
  dataset.apply_rules: 1
  dataset.adjust_footpaths: 1
  dataset.expand_footpaths: 1
  dataset.begin: TODAY
  dataset.num_days: 2
  dataset.planned_transfer_delta: 30
  dataset.wzr_classes_path:
  dataset.wzr_matrix_path:
  dataset.no_local_transport: 0


  modules: address, cc, csa, guesser, intermodal, lookup, osrm, parking, path, paxforecast, paxmon, ppr, railviz, revise, ris, routing, rt, tiles, tripbased
  exclude_modules:


  remotes:


  mode: server
  batch_input_file: queries.txt
  batch_output_file: responses.txt
  init:
  num_threads: 2
  direct: 0


  csa.bridge: 0
  csa.expand_footpaths: 0


  intermodal.router: routing
  intermodal.revise: 0


  osrm.profiles:


  parking.db_max_size: 549755813888
  parking.max_walk_duration: 10
  parking.import.edge_rtree_max_size: 3221225472
  parking.import.area_rtree_max_size: 1073741824
  parking.import.lock_rtrees: 0


  path.use_cache:
  path.max_size: 34359738368


  paxforecast.forecast_results:
  paxforecast.behavior_stats:
  paxforecast.routing_cache:
  paxforecast.calc_load_forecast: 1
  paxforecast.publish_load_forecast: 0
  paxforecast.stats:
  paxforecast.deterministic_mode: 0
  paxforecast.min_delay_improvement: 5


  paxmon.generated_capacity_file:
  paxmon.stats:
  paxmon.capacity_match_log:
  paxmon.journey_match_log:
  paxmon.over_capacity_report:
  paxmon.broken_report:
  paxmon.reroute_unmatched: 0
  paxmon.reroute_file:
  paxmon.reroute_router: /tripbased
  paxmon.start_time: 1970-01-01T00:00:00Z+0000
  paxmon.end_time: 1970-01-01T00:00:00Z+0000
  paxmon.time_step: 60
  paxmon.match_tolerance: 0
  paxmon.arrival_delay_threshold: 20
  paxmon.preparation_time: 15
  paxmon.check_graph_times: 0
  paxmon.check_graph_integrity: 0
  paxmon.mcfp_scenario_dir:
  paxmon.mcfp_scenario_min_broken_groups: 500
  paxmon.mcfp_scenario_include_trip_info: 0
  paxmon.keep_group_history: 0
  paxmon.reuse_groups: 1


  ppr.import.use_dem: 0
  ppr.profile:
  ppr.edge-rtree-max-size: 3221225472
  ppr.area-rtree-max-size: 1073741824
  ppr.lock-rtrees: 0
  ppr.prefetch-rtrees: 1
  ppr.verify-graph: 0


  railviz.initial_permalink:
  railviz.tiles_redirect:


  ris.gtfs_trip_ids:
  ris.db: ris.mdb
  ris.input: ris
  ris.db_max_size: 549755813888
  ris.init_time: 1970-01-01T00:00:00Z+0000
  ris.clear_db: 0
  ris.instant_forward: 0
  ris.gtfsrt.is_addition_skip_allowed: 1


  rt.validate_graph: 0
  rt.validate_constant_graph: 0
  rt.print_stats: 1


  tiles.profile:
  tiles.import.use_coastline: 0
  tiles.import.flush_threshold: 10000000
  tiles.db_size: 1099511627776


  tripbased.use_data_file: 1


     address: WAITING: {"OSM"}
     parking: WAITING: {"OSM", "PPR", "SCHEDULE"}
        path: WAITING: {"OSM", "OSRM", "SCHEDULE"}
      paxmon: WAITING: {"SCHEDULE"}
         ppr: WAITING: {"OSM"}
    schedule: WAITING: {"SCHEDULE"}
       tiles: WAITING: {"OSM"}
   tripbased: WAITING: {"SCHEDULE"}
2021-12-14T13:06:39Z [VERIFY FAIL] schedule not loaded

initialization error: schedule not loaded

Running on Ubuntu 18.04.5 LTS

GTFS: Override Paths from Path Module with Shapes

Override paths from the path module (extracted from OSM) with those from shapes.txt where available. This would be especially interesting for ferries, since those are currently only direct lines in the path module.

MOTIS Import

Goal

The goal is to combine all import steps (path-prepare, ppr-prepare, parking-prepare, osrm-extract, osrm-contract, etc.) into one single command line call like motis --mode import [+some dataset paths] or motis --init "/import".

One idea would be to use the existing module system, especially the publish/subscribe functionality, to accomplish this task. Tasks activate themselves when all required input events have been triggered. Every task produces an output event specifying what was produced (e.g. the path of the schedule.raw file). A module waiting for multiple input events has to store the output (paths) of the events it has already received until it can start.

Module   | Required Input                              | Task            | Output
loader   | { GTFS }                                    | gtfs_parser     | schedule.raw
loader   | { HRD }                                     | hrd_parser      | schedule.raw
osrm     | { OSM, PROFILE }                            | osrm_extract    | PROFILE-graph
osrm     | { OSM, PROFILE }                            | osrm_contract   | PROFILE-contracted-graph
path     | { OSM, bus-contracted-graph, schedule.raw } | path-prepare    | path-database
ppr      | { OSM }                                     | ppr-prepare     | ppr-graph
parking  | { OSM, ppr-graph }                          | parking-prepare | parking-db
...      | ...                                         | ...             | ...

Configuration

Import settings could be configured within the modules using parameters whose names make it clear that they are import parameters (e.g. parking.import.variable). But since most modules only have parameters for their "Required Input" (see table column), there will not be many parameters.

Every "Required Input" as well as "Output" is an event publish()ed in the module system.

Initial Events

Initial events published are OSM, GTFS or HRD, and OSRM_PROFILE. There could be different solutions:

  • Hardcode those parameters and manually trigger the initial input events.
  • Make this generic, too, and just propagate FILE_PATH as the initial event (attached with e.g. ./data/switzerland.osm.pbf, car.lua, or ./data/gtfs). Every module can register an import operation that discovers whether the file path is an OSM dataset, GTFS dataset, HRD dataset, etc. If the discovery step identifies an OSM dataset, it triggers the OSM event with the corresponding path and thereby automatically triggers the import steps that require the OSM dataset. To debug why the import did not trigger, we also need a --import.debug=true flag that prints the missing files. Otherwise, it could be hard to find out what's wrong.

The latter solution is more generic.

Progress Documentation

Every process should document its progress using the following event message:

table ImportProgress {
  task_name: string;
  finished: bool;
  progress: int8;  // percent [0, 100]
}

The first update always has to be { finished=false, progress=0 } to start showing the progress bar. This should be done before starting the task, just to indicate (to the user) that the task is running. The progress bar will be removed once an update has finished set to true.

To give the user an overview of which modules are already loaded and ready, which modules are in progress, and which modules are waiting for dependencies, every module can update its state (i.e. using the ImportProgress message or, alternatively, a separate structure with enum ModuleImportState { LOADING, WAITING, FINISHED };). The output could look like this:

schedule: finished
osrm: loading
ppr: loading
parking-prepare: waiting for "ppr"

[============>                                ] 48%   osrm-contract
[===>                                         ] 14%   ppr

The console status is continuously updated as new progress and module state updates come in. One solution to display multiple progress bars is demonstrated in the p-ranav/indicators library.
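Rendering one such progress bar line is straightforward. The sketch below is illustrative only (the actual console code would differ, and a library like p-ranav/indicators would handle multiple simultaneous bars); `render_progress_bar` is a hypothetical helper:

```cpp
#include <string>

// Render a single console progress bar line in the style shown above,
// e.g. "[====>     ] 48%   osrm-contract". `width` is the number of
// cells between the brackets.
inline std::string render_progress_bar(int percent, std::string const& task,
                                       int width = 45) {
  int const filled = percent * width / 100;
  std::string bar = "[";
  for (int i = 0; i < width; ++i) {
    if (i < filled) {
      bar += '=';  // completed portion
    } else if (i == filled) {
      bar += '>';  // current head of the bar
    } else {
      bar += ' ';  // remaining portion
    }
  }
  bar += "] " + std::to_string(percent) + "%   " + task;
  return bar;
}
```

Redrawing would then mean moving the cursor up by the number of bars and re-emitting each line on every update.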

Missing Features of the Module System

Update: not needed because all tasks will be executed with one thread

Currently, the module system is not capable of handling RAM requirements. The maximal parallelism can be set using the num_threads parameter, but this does not prevent MOTIS from spawning two tasks that do not fit into RAM together. Maybe it makes sense to use the same technique that's also used in the ctx::access_schedule class to prevent concurrent (read and write) or (write and write) accesses:

  • When a task is ready, check if it can be started using a wrapping lambda.
  • If not ready, put it back into an "on hold" queue.
  • When a task finishes, check the "on hold" queue and re-schedule all operations that were on hold, waiting to be executed (this is also done in the wrapping lambda after the actual operation has finished).

Thus, the wrapping lambda would look like:

[&] {
  if (ram_usage() > current_op<Data>()->ram_requirement_) {
    queue_.emplace_back(current_op<Data>()->shared_from_this());
    current_op()->suspend();
    // retry...
  }
  fn();
  if (!queue_.empty()) {
    unqueue(queue_);
  }
}

This has to work together with the access scheduler. Maybe it needs to be integrated into the access scheduler?


Update from discussions with @sfahnens and @pablohoch below

Task Execution Conditions

It would be possible to detect that a task has finished its work by checking if the produced database file is there. However, the file might also be there if the task crashed. In this case, the task should be repeated. Unfortunately, it would not be repeated, because the (incomplete) file from the (incomplete/crashed) run prevents this.

To indicate that a task is finished it should put a separate "finished"-file into the filesystem after everything is done. This file should contain everything that is required to check whether the task needs to be repeated because either the settings or the input files (OSM/HRD/GTFS/...) have changed. Since this information comes with the required input events, the file contents may essentially be all input event messages converted to JSON.

So the task execution condition would be:

  • Either there is no finished file: then there is no produced output file (e.g. database), or if there is an output file, it is probably incomplete, or the user deleted the finished file on purpose to trigger the import task to run again.
  • Or there is a finished file:
    • If the contents of the finished file are the same as those in the input events that triggered this task to run, there is nothing to do and the output event should be published without further ado.
    • If the contents of the finished file do not match the input events that triggered this task to run, the task needs to be executed again.

The input events for OSM-based tasks should contain a hash of the first 100 MB of the OSM file plus the OSM file size to enable checking whether the OSM file has changed. Replacing the OSM file should force all OSM-based import tasks to be executed again.
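A sketch of such a change check, assuming the relevant file prefix is available as a byte buffer (the names `prefix_hash`, `input_fingerprint`, and `input_changed` are hypothetical; a real implementation would stream the first 100 MB from disk and pick a suitable hash function):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// FNV-1a hash over at most `limit` bytes -- a stand-in for hashing the
// first 100 MB of the OSM file.
inline std::uint64_t prefix_hash(std::vector<unsigned char> const& data,
                                 std::size_t limit) {
  std::uint64_t h = 14695981039346656037ULL;  // FNV offset basis
  std::size_t const n = data.size() < limit ? data.size() : limit;
  for (std::size_t i = 0; i < n; ++i) {
    h ^= data[i];
    h *= 1099511628211ULL;  // FNV prime
  }
  return h;
}

// Fingerprint stored in the "finished" file: prefix hash + total size.
struct input_fingerprint {
  std::uint64_t hash_;
  std::uint64_t size_;
};

// The task needs to re-run iff size or prefix hash differ.
inline bool input_changed(input_fingerprint const& stored,
                          std::vector<unsigned char> const& current,
                          std::size_t limit) {
  return stored.size_ != current.size() ||
         stored.hash_ != prefix_hash(current, limit);
}
```

Hashing only a prefix keeps the check cheap; combining it with the file size catches appends beyond the hashed range.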

Event Collector

Basically, every import task that has more than one required input needs to wait for several input events and remember all of the message contents. To prevent code duplication, an event_collector class should be introduced:

event_collector ec;
ec.listen("osm", MessageContent_ImportOSM);
ec.listen("schedule", MessageContent_ImportSchedule);
if (import_use_srtm_) {  // parameter configured in module settings
  ec.listen("srtm", MessageContent_ImportSRTM);
}
ec.task([](std::map<std::string, msg_ptr> const& dependencies) {
  std::cout << dependencies.at("osm")->to_json() << std::endl;
  std::cout << dependencies.at("schedule")->to_json() << std::endl;
  if (import_use_srtm_) {
    std::cout << dependencies.at("srtm")->to_json() << std::endl;
  }
  // TODO: Check finished file.
  // TODO: Start import if required.
});

// subscribes to all events listed above
// and executes the task after the last event has arrived
ec.subscribe(registry);

Integration into MOTIS

The most favorable solution is to include the import steps into MOTIS.

for (auto const& m : modules) m.init();
publish("/import", /* thread_count= */ 1);
publish("/init");

If everything is already preprocessed, the "/import" step should finish within <1s because it only checks all "finished" files (see above) and sees that everything is already imported. If anything has changed (e.g. the OSM file was replaced), all import tasks that depend on the changed input files (and, recursively, those that depend on steps that depend on changed input) will be executed again.

Detection of Bad Configurations

There can be bad configurations. One example would be PPR waiting for SRTM data and OSM data although the SRTM dataset was not given as an input file. Therefore, the PPR import task would never run. This should be easy to detect if the PPR import task is marked as "required" but did not run after the event queue has been cleared. Thus, we need a step that checks whether all "required" tasks were executed. If a required task was not executed, this could indicate a bad configuration.

Additionally, the output of the task state ("waiting/loading/finished") should print which events a task is waiting for, so the user can quickly see which input was not fulfilled.

Event Chain Example

This example of triggered events should illustrate the process described above:

  • FILE_EVENT("input.osm.pbf") (triggered because the MOTIS import configuration listed this file)
  • task: osm_detection (listening for FILE_EVENT)
  • OSM("input.osm.pbf", size, hash of first 100MB) (triggered from osm_detection because the file name matched the OSM file pattern)
  • all steps that require OSM

Initialization error during import for particular modules

Hi @felixguendling ,

I tried your example setup from over here, but unfortunately in my case errors occur during initialization of the container. In particular:

 initialization error: some imports were not successful: address, osrm, parking, path, ppr, tiles

I took your mentioned OSM data of Switzerland and the schedule data from opentransportdata.swiss as you described.

Here is the full list of details on the setup:

Initial data load

wget https://download.geofabrik.de/europe/switzerland-latest.osm.pbf -O /mnt/efs/fs1/motis/input/osm.pbf
SCHEDULE_URL=https://opentransportdata.swiss/dataset/55b617c1-7241-4254-b243-0352d5523f6f/resource/54262e6d-04df-4d7d-bdb2-b6d173a49d0a/download/4_tu_ch_oev_sammlung_ch__2021_20211208_201836.zip
wget $SCHEDULE_URL -O /mnt/efs/fs1/motis/4_tu_ch_oev_sammlung_ch__2021_20211208_201836.zip
unzip 4_tu_ch_oev_sammlung_ch__2021_20211208_201836.zip -d /mnt/efs/fs1/motis/input/schedule

docker-compose.yml

version: "3.7"
services:
  motis:
    ports:
      - 8080:8080
    image: ghcr.io/motis-project/motis:0.6.3
    volumes:
      - /mnt/efs/fs1/motis/data:/data
      - /mnt/efs/fs1/motis/input:/input

docker-compose up

user@host:/mnt/efs/fs1/motis# docker-compose up

motis_1  | Used Options:
motis_1  |
motis_1  |   server.host: 0.0.0.0
motis_1  |   server.port: 8080
motis_1  |   server.cert_path: ::dev::
motis_1  |   server.priv_key_path: ::dev::
motis_1  |   server.dh_path: ::dev::
motis_1  |   server.api_key:
motis_1  |   server.log_path:
motis_1  |   server.static_path: /motis/web
motis_1  |
motis_1  |
motis_1  |   import.paths: schedule:/input/schedule
motis_1  |   import.data_dir: data
motis_1  |   import.require_successful: 1
motis_1  |
motis_1  |
motis_1  |   dataset.path:
motis_1  |   dataset.prefix:
motis_1  |   dataset.graph_path: default
motis_1  |   dataset.write_serialized: 1
motis_1  |   dataset.write_graph: 0
motis_1  |   dataset.read_graph: 0
motis_1  |   dataset.read_graph_mmap: 0
motis_1  |   dataset.cache_graph: 0
motis_1  |   dataset.apply_rules: 1
motis_1  |   dataset.adjust_footpaths: 1
motis_1  |   dataset.expand_footpaths: 1
motis_1  |   dataset.use_platforms: 0
motis_1  |   dataset.begin: TODAY
motis_1  |   dataset.num_days: 2
motis_1  |   dataset.planned_transfer_delta: 30
motis_1  |   dataset.wzr_classes_path:
motis_1  |   dataset.wzr_matrix_path:
motis_1  |   dataset.no_local_transport: 0
motis_1  |
motis_1  |
motis_1  |   modules: address, cc, csa, guesser, intermodal, lookup, osrm, parking, path, paxforecast, paxmon, ppr, railviz, raptor, revise, ris, routing, rt, tiles, tripbased
motis_1  |   exclude_modules:
motis_1  |
motis_1  |
motis_1  |   remotes:
motis_1  |
motis_1  |
motis_1  |   mode: server
motis_1  |   batch_input_file: queries.txt
motis_1  |   batch_output_file: responses.txt
motis_1  |   init:
motis_1  |   num_threads: 4
motis_1  |   direct: 0
motis_1  |
motis_1  |
motis_1  |   csa.bridge: 0
motis_1  |   csa.expand_footpaths: 0
motis_1  |
motis_1  |
motis_1  |   intermodal.router: tripbased
motis_1  |   intermodal.revise: 0
motis_1  |
motis_1  |
motis_1  |   osrm.profiles: /motis/osrm-profiles/car.lua, /motis/osrm-profiles/bike.lua, /motis/osrm-profiles/bus.lua
motis_1  |
motis_1  |
motis_1  |   parking.db_max_size: 549755813888
motis_1  |   parking.max_walk_duration: 10
motis_1  |   parking.import.edge_rtree_max_size: 3221225472
motis_1  |   parking.import.area_rtree_max_size: 1073741824
motis_1  |   parking.import.lock_rtrees: 0
motis_1  |
motis_1  |
motis_1  |   path.use_cache:
motis_1  |   path.max_size: 34359738368
motis_1  |
motis_1  |
motis_1  |   paxforecast.forecast_results:
motis_1  |   paxforecast.behavior_stats:
motis_1  |   paxforecast.routing_cache:
motis_1  |   paxforecast.calc_load_forecast: 1
motis_1  |   paxforecast.publish_load_forecast: 0
motis_1  |   paxforecast.stats:
motis_1  |   paxforecast.deterministic_mode: 0
motis_1  |   paxforecast.min_delay_improvement: 5
motis_1  |
motis_1  |
motis_1  |   paxmon.generated_capacity_file:
motis_1  |   paxmon.stats:
motis_1  |   paxmon.capacity_match_log:
motis_1  |   paxmon.journey_match_log:
motis_1  |   paxmon.over_capacity_report:
motis_1  |   paxmon.broken_report:
motis_1  |   paxmon.reroute_unmatched: 0
motis_1  |   paxmon.reroute_file:
motis_1  |   paxmon.reroute_router: /tripbased
motis_1  |   paxmon.start_time: 1970-01-01T00:00:00Z+0000
motis_1  |   paxmon.end_time: 1970-01-01T00:00:00Z+0000
motis_1  |   paxmon.time_step: 60
motis_1  |   paxmon.match_tolerance: 0
motis_1  |   paxmon.arrival_delay_threshold: 20
motis_1  |   paxmon.preparation_time: 15
motis_1  |   paxmon.check_graph_times: 0
motis_1  |   paxmon.check_graph_integrity: 0
motis_1  |   paxmon.mcfp_scenario_dir:
motis_1  |   paxmon.mcfp_scenario_min_broken_groups: 500
motis_1  |   paxmon.mcfp_scenario_include_trip_info: 0
motis_1  |   paxmon.keep_group_history: 0
motis_1  |   paxmon.reuse_groups: 1
motis_1  |
motis_1  |
motis_1  |   ppr.import.use_dem: 0
motis_1  |   ppr.profile: /motis/ppr-profiles/default.json
motis_1  |   ppr.edge-rtree-max-size: 3221225472
motis_1  |   ppr.area-rtree-max-size: 1073741824
motis_1  |   ppr.lock-rtrees: 0
motis_1  |   ppr.prefetch-rtrees: 1
motis_1  |   ppr.verify-graph: 0
motis_1  |
motis_1  |
motis_1  |   railviz.initial_permalink:
motis_1  |   railviz.tiles_redirect:
motis_1  |
motis_1  |
motis_1  |   ris.db: /data/ris.mdb
motis_1  |   ris.input:
motis_1  |   ris.db_max_size: 549755813888
motis_1  |   ris.init_time: 1970-01-01T00:00:00Z+0000
motis_1  |   ris.clear_db: 0
motis_1  |   ris.instant_forward: 0
motis_1  |   ris.gtfsrt.is_addition_skip_allowed: 1
motis_1  |
motis_1  |
motis_1  |   rt.validate_graph: 0
motis_1  |   rt.validate_constant_graph: 0
motis_1  |   rt.print_stats: 1
motis_1  |
motis_1  |
motis_1  |   tiles.profile: /motis/tiles-profiles/background.lua
motis_1  |   tiles.import.use_coastline: 0
motis_1  |   tiles.import.flush_threshold: 10000000
motis_1  |   tiles.db_size: 1099511627776
motis_1  |
motis_1  |
motis_1  |   tripbased.use_data_file: 1
motis_1  |
motis_1  |
motis_1  |      address: WAITING: {"OSM"}
motis_1  |    osrm-bike: WAITING: {"OSM"}
motis_1  |      address: WAITING: {"OSM"}
motis_1  |    osrm-bike: WAITING: {"OSM"}
motis_1  |     osrm-bus: WAITING: {"OSM"}
motis_1  |     osrm-car: WAITING: {"OSM"}
motis_1  |      parking: WAITING: {"OSM", "PPR"}
motis_1  |         path: WAITING: {"OSM", "OSRM"}
motis_1  |       paxmon: FINISHED
motis_1  |          ppr: WAITING: {"OSM"}
motis_1  |     schedule: FINISHED
motis_1  |        tiles: WAITING: {"OSM"}
motis_1  |    tripbased: FINISHED
motis_1  |
motis_1  | initialization error: some imports were not successful: address, osrm, parking, path, ppr, tiles
motis_1 exited with code 1

directory structure

user@host:/mnt/efs/fs1/motis# tree .
.
├── 4_tu_ch_oev_sammlung_ch__2021_20211208_201836.zip
├── data
│   ├── address
│   │   ├── address_db.raw
│   │   └── import.ini
│   ├── log
│   │   ├── address.txt
│   │   ├── osrm-bike.txt
│   │   ├── osrm-bus.txt
│   │   ├── osrm-car.txt
│   │   ├── parking.txt
│   │   ├── path.txt
│   │   ├── paxmon.txt
│   │   ├── ppr.txt
│   │   ├── schedule.txt
│   │   ├── tiles.txt
│   │   └── tripbased.txt
│   ├── osrm
│   │   ├── bike
│   │   │   ├── import.ini
│   │   │   ├── osm.osrm
│   │   │   ├── osm.osrm.core
│   │   │   ├── osm.osrm.datasource_indexes
│   │   │   ├── osm.osrm.datasource_names
│   │   │   ├── osm.osrm.ebg
│   │   │   ├── osm.osrm.edges
│   │   │   ├── osm.osrm.enw
│   │   │   ├── osm.osrm.fileIndex
│   │   │   ├── osm.osrm.geometry
│   │   │   ├── osm.osrm.hsgr
│   │   │   ├── osm.osrm.icd
│   │   │   ├── osm.osrm.level
│   │   │   ├── osm.osrm.names
│   │   │   ├── osm.osrm.nodes
│   │   │   ├── osm.osrm.properties
│   │   │   ├── osm.osrm.ramIndex
│   │   │   ├── osm.osrm.restrictions
│   │   │   ├── osm.osrm.timestamp
│   │   │   ├── osm.osrm.tld
│   │   │   └── osm.osrm.tls
│   │   ├── bus
│   │   │   ├── import.ini
│   │   │   ├── osm.osrm
│   │   │   ├── osm.osrm.core
│   │   │   ├── osm.osrm.datasource_indexes
│   │   │   ├── osm.osrm.datasource_names
│   │   │   ├── osm.osrm.ebg
│   │   │   ├── osm.osrm.edges
│   │   │   ├── osm.osrm.enw
│   │   │   ├── osm.osrm.fileIndex
│   │   │   ├── osm.osrm.geometry
│   │   │   ├── osm.osrm.hsgr
│   │   │   ├── osm.osrm.icd
│   │   │   ├── osm.osrm.level
│   │   │   ├── osm.osrm.names
│   │   │   ├── osm.osrm.nodes
│   │   │   ├── osm.osrm.properties
│   │   │   ├── osm.osrm.ramIndex
│   │   │   ├── osm.osrm.restrictions
│   │   │   ├── osm.osrm.timestamp
│   │   │   ├── osm.osrm.tld
│   │   │   └── osm.osrm.tls
│   │   └── car
│   │       ├── import.ini
│   │       ├── osm.osrm
│   │       ├── osm.osrm.core
│   │       ├── osm.osrm.datasource_indexes
│   │       ├── osm.osrm.datasource_names
│   │       ├── osm.osrm.ebg
│   │       ├── osm.osrm.edges
│   │       ├── osm.osrm.enw
│   │       ├── osm.osrm.fileIndex
│   │       ├── osm.osrm.geometry
│   │       ├── osm.osrm.hsgr
│   │       ├── osm.osrm.icd
│   │       ├── osm.osrm.level
│   │       ├── osm.osrm.names
│   │       ├── osm.osrm.nodes
│   │       ├── osm.osrm.properties
│   │       ├── osm.osrm.ramIndex
│   │       ├── osm.osrm.restrictions
│   │       ├── osm.osrm.timestamp
│   │       ├── osm.osrm.tld
│   │       └── osm.osrm.tls
│   ├── ppr
│   │   ├── import.ini
│   │   ├── routing_graph.ppr
│   │   ├── routing_graph.ppr.art
│   │   ├── routing_graph.ppr.ert
│   │   └── routing_graph.stats.csv
│   └── tiles
│       ├── import.ini
│       ├── tiles.mdb
│       ├── tiles.mdb-lock
│       └── tiles.pck
└── input
    ├── config.ini
    ├── osm.pbf
    └── schedule
        ├── ATTRIBUT_DE
        ├── ATTRIBUT_EN
        ├── ATTRIBUT_FR
        ├── ATTRIBUT_IT
        ├── BAHNHOF
        ├── BETRIEB_DE
        ├── BETRIEB_EN
        ├── BETRIEB_FR
        ├── BETRIEB_IT
        ├── BFKOORD
        ├── BFKOORD_GEO
        ├── BFPRIOS
        ├── BHFART
        ├── BITFELD
        ├── DURCHBI
        ├── ECKDATEN
        ├── FEIERTAG
        ├── FPLAN
        ├── GLEIS
        ├── GRENZHLT
        ├── INFOTEXT_DE
        ├── INFOTEXT_EN
        ├── INFOTEXT_FR
        ├── INFOTEXT_IT
        ├── KMINFO
        ├── METABHF
        ├── RICHTUNG
        ├── UMSTEIGB
        ├── UMSTEIGL
        ├── UMSTEIGV
        ├── UMSTEIGZ
        ├── UMSTEIGZ_VERKEHRSTAGE
        ├── ZEITVS
        └── ZUGART

11 directories, 122 files

config.ini

[import]
paths=schedule:/input/schedule
data_dir=data

system-config.ini

It is the plain one from the Docker container itself: see the Dockerfile (0.6.3).

Do you see anything that explains why the module import is not working?

Thanks for your patience & help :)

Mario

Timestamp Verification / Search for Earlier & Later Connections

This seems to help.

Better: the routing response contains two flags "end reached" (earlier / later).

diff --git a/base/core/src/access/time_access.cc b/base/core/src/access/time_access.cc
index 9c61cea..73409b3 100644
--- a/base/core/src/access/time_access.cc
+++ b/base/core/src/access/time_access.cc
@@ -26,10 +26,12 @@ void verify_timestamp(schedule const& sched, time_t t) {
 }

 void verify_external_timestamp(schedule const& sched, time_t t) {
-  if (t < std::min(sched.first_event_schedule_time_,
+/*if (t < std::min(sched.first_event_schedule_time_,
                    external_schedule_begin(sched)) ||
       t >= std::max(sched.last_event_schedule_time_,
                     external_schedule_end(sched))) {
+*/
+  if (t < external_schedule_begin(sched) || t >= external_schedule_end(sched)) {
     auto const schedule_begin = external_schedule_begin(sched);
     auto const schedule_end = external_schedule_end(sched);
     LOG(logging::error) << "timestamp not in schedule: " << format_unix_time(t)
@@ -44,11 +46,15 @@ std::time_t motis_to_unixtime(schedule const& sched, time t) {
 }

 time unix_to_motistime(schedule const& sched, std::time_t t) {
+/*
   auto const mt = unix_to_motistime(sched.schedule_begin_, t);
   if (mt == INVALID_TIME) {
     throw std::system_error(motis::access::error::timestamp_not_in_schedule);
   }
   return mt;
+*/
+  verify_timestamp(sched, t);
+  return unix_to_motistime(sched.schedule_begin_, t);
 }

 time motis_time(int const hhmm, int const day_idx, int const timezone_offset) {

Handling of GTFS Time Values

GTFS Time values are not defined relative to midnight, but relative to "noon minus 12 hours" of the service day. While that makes writing GTFS feeds easier, it makes processing considerably harder. As an example, I expect

  • the departure_time of 00:30 of a trip running on 2021-03-28 to happen at 1616884200/2021-03-28T00:30+02:00, not at 1616887800/2021-03-28T00:30+01:00;
  • the departure_time of 06:30 of a trip running on 2021-03-28 to happen at 1616905800/2021-03-28T06:30+02:00, not at 1616909400/2021-03-28T06:30+01:00.

I'm currently checking well-known GTFS parsers/consumers for this bug, so I wondered whether MOTIS is affected by the described problem on the days on which the switch between DST and standard time occurs. I'm not familiar with this code base; I tried to dig into how GTFS is parsed, but I couldn't understand enough from a quick look at the code.


related: google/transit#15
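To illustrate the rule, here is a minimal sketch of a spec-conformant conversion in Python (the function name is mine, not MOTIS code; `zoneinfo` requires Python 3.9+). The anchor is local noon of the service day minus twelve *absolute* hours, which differs from local midnight only on DST switch days:

```python
from datetime import date, datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def gtfs_time_to_unix(service_date: date, gtfs_hms: str, tz_name: str) -> int:
    """Convert a GTFS time on a service day to a Unix timestamp.

    GTFS times are offsets from "noon minus 12 hours" of the service day,
    measured as an absolute duration - NOT offsets from local midnight.
    """
    h, m, s = (int(x) for x in gtfs_hms.split(":"))
    noon = datetime(service_date.year, service_date.month, service_date.day,
                    12, 0, 0, tzinfo=ZoneInfo(tz_name))
    anchor = noon.astimezone(timezone.utc) - timedelta(hours=12)
    return int((anchor + timedelta(hours=h, minutes=m, seconds=s)).timestamp())

# On the 2021-03-28 spring-forward day in Europe/Berlin this reproduces the
# expected values from above:
# gtfs_time_to_unix(date(2021, 3, 28), "00:30:00", "Europe/Berlin") -> 1616884200
# gtfs_time_to_unix(date(2021, 3, 28), "06:30:00", "Europe/Berlin") -> 1616905800
```

A naive implementation that subtracts 12 wall-clock hours (which is what aware-datetime arithmetic in Python does by default) lands on local midnight instead and produces exactly the one-hour error described above.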

Support for Bike Sharing (e.g. GBFS data) in MOTIS?

The MOTIS Project website lists, among the planned features, the integration of bike and car sharing providers in the core engine. Is there already any support for e.g. General Bikeshare Feed Specification (GBFS) data, as of version 0.3? If so, how can one set the feeds for this kind of data?

MOTIS 0.3 Linux Release - dead download link

Following the link to the binaries for MOTIS 0.3 for Linux from either the Releases page or the MOTIS installation documentation (https://github.com/motis-project/motis/releases/download/v0.3/motis-linux.tar.bz2) results in the following XML being delivered:

<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>255657224/02b79b80-ab42-11ea-87d7-9b3c825d91ca</Key>
<RequestId>52F77FA3615883A7</RequestId>
<HostId>nfBFH/+ShebyGr+bPGCunhIjuKRa16TazwIRYBFGrMUqM/osowe8jGIvVjuPE1ZFyzBS4ooaSB0=</HostId>
</Error>

The Windows and macOS builds, in contrast, download without issues.

JSON Logging

Example:

{
    "time": "2022-03-08T09:32:20.685+00:00",
    "severity": "WARN",
    "logger": "dispatcher.cc:121",
    "msg": "target not found: /metrics"
}

Understanding GTFS-RT support in MOTIS v0.3

I'm trying to understand where/how MOTIS downloads real-time GTFS-RT updates. Since the Swiss Open Data Platform requires an API key to access the real-time updates, I was surprised to see that MOTIS was able to keep track of bus delays and positions without me using one, even on my local copy.

Since I have a personal API key for the Swiss Open Data Platform Mobility, I wanted to understand where I can add it and how to edit the connection details for the updates. I have browsed the C++ source code, but I couldn't get a good idea on where to look.

As per #73, currently MOTIS uses external scripting to update GTFS-RT. Where are those scripts located? The target version for this question is MOTIS release v0.3.

Packaging motis for Linux Distributions

Just updated to the current release and wondered whether packaging motis for Linux distributions would be an improvement. Have you already thought about this? When I set up a web server, I usually install it (e.g. the Apache web server) from the distribution repository as well, so why not motis?

Pros I came up with:

  • easy updates without the need of manual intervention
  • easy accessible for users who want to give it a try
  • raise some awareness about motis

Don't think this is a primary issue, but wondered whether you already thought about it.

GTFS: Parse Platform

The attribute platform_code in stops.txt needs to be translated into platform rules in MOTIS' schedule.raw file.

Routing vs Tripbased Ontrip BWD

Query:
{
  "destination": { "target": "/routing" },
  "content_type": "RoutingRequest",
  "content": {
    "start_type": "OntripStationStart",
    "start": {
      "station": { "id": "0974791", "name": "" },
      "departure_time": 1529067600
    },
    "destination": { "id": "0788947", "name": "" },
    "search_dir": "Backward",
    "via": [],
    "additional_edges": []
  },
  "id": 1
}

Routing module result:

Journey: duration=330 transfers=6  accessibility=0     15.06. 06:45          --> 15.06. 12:15          (UTC)
Journey: duration=334 transfers=5  accessibility=0     15.06. 06:41          --> 15.06. 12:15          (UTC)
Journey: duration=458 transfers=4  accessibility=0     15.06. 04:37          --> 15.06. 12:15          (UTC)

Tripbased module result:

Journey: duration=330 transfers=6  accessibility=0     15.06. 06:45          --> 15.06. 12:15          (UTC)
Journey: duration=334 transfers=5  accessibility=0     15.06. 06:41          --> 15.06. 12:15          (UTC)
Journey: duration=394 transfers=4  accessibility=0     15.06. 05:41          --> 15.06. 12:15          (UTC)

The trip-based module yields the better result: for the 4-transfer journey it departs at 05:41 instead of 04:37 with the same arrival time.

Timetable from 2018
dataset.begin=20180615

Docker input schedule directory differences in documentation

Hi,

I have one question about running motis via docker. In the Dockerfile I see that by default it suggests putting the schedule sub-directory into the input directory:

[import]\n\
paths=schedule:/input/schedule\n\

In this comment from @felixguendling, it suggests downloading the schedule data into the /data directory:

SCHEDULE_URL=https://opentransportdata.swiss/dataset/55b617c1-7241-4254-b243-0352d5523f6f/resource/54262e6d-04df-4d7d-bdb2-b6d173a49d0a/download/4_tu_ch_oev_sammlung_ch__2021_20211208_201836.zip

#...
mkdir -p motis-swiss/data/hrd

#...

# Download and unpack timetable.
wget $SCHEDULE_URL
unzip 4_tu_ch_oev_sammlung_ch__2021_20211208_201836.zip -d data/hrd


# Write config.ini
echo "\
#...

[import]\n\
paths=schedule:data/hrd\n\
# ...

This is in alignment with the official installation documentation from:

Download the latest dataset HAFAS Rohdaten (version 5.20.39 - “Timetable 202x (HRDF)” not version 5.40) dataset from [opentransportdata.swiss](http://opentransportdata.swiss/en/dataset) and extract it into your data/hrd folder.

But the announcement from 09-2021 says that the input directory can now be used for read-only input data (like the schedule).

I think I just do not have the full understanding of what the schedule directory is and how it should be treated.

It would be great if you could help me out here on where to put the schedule data.

Thanks
Mario

Docker health check probe for indicating "readiness"

Hi,

in order to run the motis container in a container scheduler (like ECS in my case), it is required to know when the motis container is treated as healthy / started.

Because of the initial calculations performed at container startup, the startup time can exceed the scheduler's waiting limit.

What would you suggest to use for the Docker HEALTHCHECK command in order to identify if the container is ready? Is it enough to simply wait for curl localhost:8080 or even rely on the load balancers' built-in health check to ask for an HTTP 200 on a particular path?

Or would you instead suggest doing something else inside the container with the HEALTHCHECK command?

PS: perhaps this is related to my other question since it is only really needed if the container takes multiple minutes to start as it has to do the initial calculations at startup. If it is possible to precalculate this and load it via an external volume, I would guess that this topic would not be necessarily a problem.
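One possibility would be a HEALTHCHECK with a generous start period. This is only a sketch: it assumes curl is available in the image, that the server answers on port 8080 (the port mentioned above) only once the import is done, and the start period would need to be sized to your import time:

```dockerfile
# Sketch: mark the container healthy once the HTTP server responds.
# --start-period must cover the (potentially long) import phase.
HEALTHCHECK --interval=30s --timeout=5s --start-period=45m --retries=3 \
  CMD curl -fsS http://localhost:8080/ > /dev/null || exit 1
```

Whether a plain `GET /` is a sufficient readiness signal (as opposed to a dedicated status endpoint) is exactly the open question here.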

Thanks
Mario

New Claszes in MOTIS

  • for ferries
  • for air traffic
  • for long distance bus

Ferries are currently visible with clasz 9, which requires a high zoom level. With a separate clasz, they could show up earlier, at lower zoom levels.

Leanest configuration intermodal routing

Our setup:
Two tasks for

  • generating graphs via /motis/motis --mode test --system_config /system_config.ini -c/config.ini
  • running motis with the generated graphs

Map data from geofabrik and https://www.opendata-oepnv.de for whole Germany.

While generating the graphs with the highest possible memory settings for ECS

cpu = 4096
memory = 30720

the task is killed after 5.5 hours due to an OutOfMemory error.

(screenshot from 2022-03-16, 23:18)

As we only want to use motis as an intermodal routing API, could you give us a hint which modules we could exclude and how we could optimize the setup so that graph building uses less memory?

GTFS2020 does not seem to work

Using the "older" GTFS data from here: https://opentransportdata.swiss/de/dataset/timetable-2020-gtfs works without problems. I've also tried to use the new GTFS2020 format from here: https://opentransportdata.swiss/de/dataset/timetable-2020-gtfs2020, which results in the error message "Verify fail schedule not loaded". opentransportdata.swiss recommends, for example, switching to the new format by the end of July.

Are you aware of this behaviour? Are there plans to support the format in the future?

Illegal instruction (core dumped) on ubuntu 18.04.5 LTS

Hi,

I am trying to install the motis demo following the official documentation, but unfortunately, when executing motis, I am prompted with the following error: Illegal instruction (core dumped).
The system itself is an Ubuntu 18.04.5 LTS virtual machine (4 cores, 16 GB RAM, 64 GB storage) on a Proxmox host, KVM-virtualized.

Default time for transfers (walk)

I have noticed that if the coordinates "from" and "to" of a leg are identical, the duration of this leg is always 4 minutes.

For example: a leg (walk) from Pfäffikon SZ (47.203,8.77813) to Pfäffikon SZ (47.203,8.77813) with a duration of 4 minutes. I observe this for each leg where the coordinates of "from" and "to" are identical. So I guess this happens by default?

Furthermore, all footpaths of a connection always come in steps of 1, 2, 4, 6, 8, ... minutes.
I did not see any connection in my test data that required a walk of, for example, 5 or 7 minutes for a transfer. Is this due to MOTIS, or did I miss something?

Parking Module - Import speedup

The parking module is currently the slowest for import when starting MOTIS. Two options to speedup without big effort:

  • only recompute data for new stations as long as OSM is still the same: store a list of reachable station coordinates for each parking (maybe hashed)
  • save 50% preprocessing time by using one-way mode: mark profiles that are more or less symmetric (i.e. the default profile) because they do not distinguish the direction of e.g. stairs, escalators, etc.

Only Show Routes on Foot, by Bicycle or by Car

Is it possible to get a single-mode route (e.g. a route entirely on foot or by bicycle, without public transport)? If so, how can one do this via the GUI or the API? As far as I can see, the demo always includes public transport in the route, unless the route itself is short and far from any station.

Prefer Footpaths

case

from: 52.42865;10.786206
to: 52.535542;13.198407999999999

DELFI GTFS dataset from 2020-05-15

GTFS-RT: Automatic Update

Currently, GTFS-RT in MOTIS requires external scripts for download and updates. This can be automated in MOTIS.
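Until this is built in, the external script can be as small as a polling loop. A sketch in Python, under stated assumptions: the feed URL, output path, and polling interval are placeholders, not MOTIS defaults; how MOTIS picks up the downloaded file depends on your configuration.

```python
import time
import urllib.request

FEED_URL = "https://example.org/gtfs-rt/trip-updates"  # placeholder feed URL
OUT_PATH = "trip-updates.pb"                           # placeholder output path

def build_request(url: str, api_key: str) -> urllib.request.Request:
    # Some providers (e.g. opentransportdata.swiss) expect the API key
    # in the Authorization header.
    return urllib.request.Request(url, headers={"Authorization": api_key})

def fetch_once(url: str, api_key: str, out_path: str) -> None:
    # Download the protobuf feed and write it to disk.
    with urllib.request.urlopen(build_request(url, api_key), timeout=30) as resp:
        data = resp.read()
    with open(out_path, "wb") as f:
        f.write(data)  # writing to a temp file and renaming would be safer

def poll(url: str, api_key: str, out_path: str, interval_s: int = 60) -> None:
    # GTFS-RT feeds are typically refreshed every 30-60 seconds.
    while True:
        fetch_once(url, api_key, out_path)
        time.sleep(interval_s)
```

Building this into MOTIS itself would mainly replace this loop with a configurable fetch URL and interval.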

RAPTOR 1:N Query

Implement capabilities for the 1:N Query type (to be defined) in the RAPTOR module.

API Improvements

Idea:

  • for incoming messages: move union type from inside JSON object to outside (for Flatbuffers to parse)
  • for outgoing messages: move union type from outside JSON to inside the object (for OpenAPI tools, etc.)

This would enable OpenAPI and better tooling support for working with the MOTIS JSON via HTTP API.
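As a hypothetical illustration of the two encodings (the payload fields are made up; only `content_type`/`content` reflect the current MOTIS request shape):

```jsonc
// today: union type outside the object, as the Flatbuffers JSON parser needs it
{ "content_type": "RoutingRequest", "content": { "search_dir": "Forward" } }

// proposed for outgoing messages: discriminator inside the object,
// as OpenAPI's discriminator / oneOf tooling expects it
{ "content": { "type": "RoutingRequest", "search_dir": "Forward" } }
```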

Also: remove target (module / topic) or move it into the HTTP path.

More API ideas:

  • Generate OpenAPI from fbs (requires defining a service interface: request/response types per endpoint)
  • Generate Documentation from fbs
  • JWT for limiting access to certain API functions (like RT updates) - currently only possible to limit access via external proxies
  • generate API code in module from OpenAPI description, annotated by access control
  • versioning /v0/routing etc.
  • support for externally defined existing standardized stable APIs /SIRI/v2.0
