Logging

Functional Description


The Logging Service provides a consistent mechanism for monitoring the aspects of system activity that developers and producers consider important.

Activities to be logged should include:

  • User interactions with devices in the client environment.
  • Interactions between components in the production environment (such as video servers, metadata and graphics feeds).
  • Interactions between devices in the client environment to discover and launch DMApps, and to synchronise media objects between devices.
  • The request and delivery of media objects and streams.
  • The presentation of DMApp components as determined by the UX Engine (Timeline and Layout services).
  • The authentication of users, client devices and services via the Session service.
  • Communication sessions set up between DMApp components in different locations, mediated by the Lobby.

The goal of the Logging Service is to produce a consistent set of logs for each production session, which we define here as the up-time of the prototype 2-IMMERSE platform during an individual trial event, such as a theatre play, MotoGP race or football match. The service acts as a log aggregator to ingest, store and index log data. It will provide the ingested log data to the Analytics Service for presentation and analysis. The service must be started before all other services and should be the last service to be shut down. It may also be run independently of a production session to enable developers and producers to read and analyse log data.

2-IMMERSE intends to evaluate two logging solutions in parallel for its first trial, Theatre at Home:

1) Logstash and Elasticsearch, both components of the Elastic Stack (formerly ELK – see https://www.elastic.co/products), are proposed as the internal ‘platform’ logging solution for 2-IMMERSE. Logstash provides a flexible, open source data collection pipeline, while Elasticsearch provides storage, plus indexing and analytics functions. We will preferably use the ELK instance provided within the Mantl platform (http://docs.mantl.io/en/latest/components/elk.html). Log events will arrive from a number of different sources. Logstash offers a wide variety of input plugins, which can be combined with filters and output plugins, to handle different data sources and aggregate them into a common format within the Elasticsearch database.

2) Google Analytics is a very popular web analytics solution which provides data collection, consolidation and reporting capabilities (among others) for web applications (see https://www.google.co.uk/analytics/standard/features). It is available free of charge, and sophisticated event tracking can be integrated using Google’s analytics.js library. Google Analytics is proposed as a complementary solution for logging user interactions with 2-IMMERSE DMApp components. These events (button clicks, page scrolls/swipes) are potentially more frequent than interactions between the DMApp components and 2-IMMERSE services, and their close relationship with the user experience makes Google Analytics a more appropriate tool for capturing and processing them.
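analytics.js runs in the browser, but the structure of an event hit can be sketched server-side using Google’s Measurement Protocol, the HTTP API that analytics.js itself talks to. This is purely illustrative; the tracking ID and field values below are placeholders, not project values:

```python
from urllib.parse import urlencode

# Placeholder property ID -- a real DMApp would use the project's GA
# tracking ID and a per-device anonymous client ID.
TRACKING_ID = "UA-XXXXXXXX-1"

def build_ga_event(client_id, category, action, label=None):
    """Build a Measurement Protocol 'event' hit payload. POSTing this
    payload to https://www.google-analytics.com/collect records the
    event against the given property."""
    params = {
        "v": "1",            # protocol version
        "tid": TRACKING_ID,  # tracking/property ID
        "cid": client_id,    # anonymous client ID
        "t": "event",        # hit type
        "ec": category,      # event category, e.g. "dmapp-component"
        "ea": action,        # event action, e.g. "button-click"
    }
    if label is not None:
        params["el"] = label # optional event label
    return urlencode(params)
```

In the browser itself, the equivalent would be an `analytics.js` event tracker call as described in Google’s documentation.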

We anticipate that the following logging scenarios will be implemented:

  1. Proprietary 2-IMMERSE services (such as Layout, Timeline, Synchronisation, Lobby) will send individual events to Logstash using the Syslog protocol. Documentation for the syslog input plugin is provided here: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-syslog.html.
  2. 2-IMMERSE services based on standard components (such as Service Registry, Session, Call Server) will ideally also send individual events to Logstash using the Syslog protocol. However, if the services already implement a different logging mechanism, an appropriate plugin will be selected to import their output.
  3. The Client Web Application will record logs in two different ways:
    1. Platform-related events will be sent directly to Logstash as pre-defined JSON structures over HTTP or HTTPS, to prevent issues with firewall traversal. Documentation for the http input plugin is provided here: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http.html.
    2. Higher resolution events (such as user interactions – button clicks, page scrolls/swipes) will be sent to Google Analytics using one or more event trackers and the analytics.js library, in accordance with Google’s documentation at https://developers.google.com/analytics/devguides/collection/analyticsjs.

The Logstash http input plugin supports basic HTTP authentication or SSL, with client certificates validated via the Java Keystore format. I would suggest that we use basic authentication to avoid complexity, unless there is a need for personal data to be recorded within log messages.
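A minimal sketch of the Client Web Application posting a platform event to the http input with basic authentication. The endpoint URL, port and credentials are hypothetical; the real values depend on deployment:

```python
import base64
import json
import urllib.request

# Hypothetical endpoint and credentials -- placeholders only.
LOGSTASH_URL = "https://logstash.example.org:8080/"
USERNAME = "client"
PASSWORD = "secret"

def build_event(source_name, source_timestamp, context_id, message):
    """Serialise a platform event as the pre-defined JSON structure."""
    return json.dumps({
        "source_name": source_name,
        "source_timestamp": source_timestamp,
        "context_id": context_id,
        "message": message,
    })

def send_event(event_json):
    """POST the event to the Logstash http input using basic authentication."""
    credentials = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    request = urllib.request.Request(
        LOGSTASH_URL,
        data=event_json.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {credentials}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```

The Logstash http input accepts the POST body as the event payload, so no further wrapping is needed on the client side.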


As the Logstash input plugins present their own protocol-specific interfaces (Syslog or HTTP POST, for example), the use of a verb here is purely illustrative.


A platform event from a 2-IMMERSE service which is passed to the Logstash syslog input plugin must conform to the Syslog protocol (RFC 5424). An example message might use the following RFC 5424 structure:

<PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
…and look like the following:

<34>1 2016-06-16T18:00:00.000Z layoutservice.2immerse.eu layout_service - 0 - [layout_service] New context created, id=5730


  • APP-NAME defines the source of the log, from a controlled vocabulary.
  • PROCID is not used, hence ‘-’.
  • MSGID is used to carry the layout context id for this message. If the log message doesn’t apply to a layout context (e.g. it is from the Service Registry), a special value can be used (e.g. 0).
  • STRUCTURED-DATA is not used, hence ‘-’.
  • MSG contains the full log message, which should be consistently formatted to facilitate parsing and analysis. I suggest using square brackets to provide hierarchical information about the origin of the message.
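Putting these conventions together, the example message above can be assembled as follows. This is a sketch of the formatting only, not of delivery; the facility and severity values are illustrative:

```python
def format_rfc5424(facility, severity, timestamp, hostname, app_name,
                   context_id, msg):
    """Assemble an RFC 5424 syslog line. PROCID and STRUCTURED-DATA are
    unused, so both carry the NILVALUE '-'; MSGID carries the layout
    context id, per the convention above."""
    pri = facility * 8 + severity  # PRI = facility * 8 + severity
    return f"<{pri}>1 {timestamp} {hostname} {app_name} - {context_id} - {msg}"

line = format_rfc5424(
    facility=4, severity=2,  # yields PRI <34>, as in the example above
    timestamp="2016-06-16T18:00:00.000Z",
    hostname="layoutservice.2immerse.eu",
    app_name="layout_service",
    context_id="0",
    msg="[layout_service] New context created, id=5730",
)
```

Actual delivery to the syslog input would then send this line over TCP or UDP to the port the plugin is configured to listen on.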

A platform event from the Client Application may be passed to the Logstash http input plugin as follows:

 "source_name" : "tv_client"
 "source_timestamp" : "2016-06-16 18:00:30 +0000"
 "context_id" : "5730"
 "message" : "[tv_client id=222][dmapc id=123] media_player: playback started"


  • source_name defines the source of the log, from a controlled vocabulary.
  • source_timestamp is the time at which the event was logged by the reporting component (as opposed to the time the log was received, which will be appended by Logstash).
  • context_id is the layout context for this message. If the log message doesn’t apply to a layout context (e.g. it is from the Service Registry), a special value of context_id can be used (e.g. 0).
  • the message format should be consistent to facilitate parsing and analysis. I suggest using square brackets to provide hierarchical information about the origin of the message.
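On the analysis side, the bracketed origin hierarchy could be recovered with a simple regular expression. This sketch assumes each prefix has the form `[name id=value]`, as in the example above:

```python
import re

# Each "[name id=value]" prefix identifies one level of the message's origin.
ORIGIN = re.compile(r"\[(\w+)(?:\s+id=(\w+))?\]")

def parse_message(message):
    """Split a log message into its origin hierarchy and free-text body."""
    origins = ORIGIN.findall(message)      # [(name, id), ...]
    body = ORIGIN.sub("", message).strip() # remaining free-text message
    return origins, body

origins, body = parse_message(
    "[tv_client id=222][dmapc id=123] media_player: playback started")
# origins -> [("tv_client", "222"), ("dmapc", "123")]
# body    -> "media_player: playback started"
```

In practice this parsing would more likely live in a Logstash grok filter, but the structure being matched is the same.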


As a minimum, the following services should deliver logs to the Logging service:

  • Service Registry
  • Timeline
  • Layout
  • Synchronisation
  • Identity Management and Authentication
  • Session
  • Call Server
  • Lobby

In addition, the web applications on the TV and Companion clients should deliver logs.

In order to correctly preserve the order of logs, all services and web applications should ensure that the clock they use to generate source_timestamp is synchronised regularly to an external NTP source. Alternatively, there may be scope to use a shared Wall Clock managed by the Synchronisation service – further investigation is required here.
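For illustration, a source_timestamp in the format used in the example event above can be generated from the (assumed NTP-disciplined) system clock as follows; the function only formats the time, it does not perform any synchronisation:

```python
from datetime import datetime, timezone

def source_timestamp(now=None):
    """Format a UTC source_timestamp in the 'YYYY-MM-DD HH:MM:SS +0000'
    form used in the example event. The system clock is assumed to be
    kept in sync with an external NTP source."""
    now = now or datetime.now(timezone.utc)
    return now.strftime("%Y-%m-%d %H:%M:%S %z")
```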
