Beyond the video wall – responsive content projection

TIM PEARCE from BBC R&D writes:

In the 2-IMMERSE project we are aiming to create a platform that enables content creators to author multimedia experiences that adapt to the different ecosystems of devices surrounding users. In the Theatre At Home and MotoGP trials, we have focused on second-screen (companion) experiences, presenting content on a phone or tablet that synchronises with the main content on screen.

Our future prototypes, including Theatre In Schools, are designed to extend the capabilities of the platform so that experiences can be displayed on multiple communal screens surrounding a larger group of users. By building these features into the platform, we will enable content creators to craft highly immersive experiences, including more diverse applications such as digital signage and art installations, in addition to traditional broadcast content.

The BBC has been exploring ‘on-boarding’, the end-to-end user experience of 2-IMMERSE experiences. One of the challenges here is joining many different devices in a room to a single experience, especially as some devices lack a keyboard and mouse. Multiple communal screens pose an interesting use case for the layout service, which needs to be captured in the on-boarding process. If we want to project our object-based content onto multiple communal screens, how do we know the spatial relationship between those screens, and how can we use this to influence the layout of content? Can components span multiple screens, or animate from one screen to another?

A really elegant solution to this problem can be found in the Info Beamer project, an open-source digital signage solution for the Raspberry Pi (above). Each communal screen displays a unique QR code, which is scanned by a camera-equipped companion device to connect the screens to the experience and determine their relative position, size and orientation. This could be a potential solution for on-boarding a number of large-screen devices in future 2-IMMERSE scenarios.
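
As a thought experiment, the sketch below (TypeScript, with entirely hypothetical names; it is not part of the 2-IMMERSE platform) shows how the relative geometry captured during such an on-boarding step might be represented, and how a simple layout decision (can a component span two adjacent screens?) could be derived from it.

```typescript
// Hypothetical model of what a camera-based on-boarding step could capture
// for each communal screen: position and size in a shared, room-level
// coordinate space (millimetres), plus rotation about the viewing axis.
interface ScreenGeometry {
  deviceId: string;          // identifier read from the QR code
  x: number;                 // top-left corner in room coordinates (mm)
  y: number;
  width: number;             // physical width and height (mm)
  height: number;
  rotationDegrees: number;   // 0 = upright landscape
}

// A component can only span two screens if both are upright and roughly
// edge-adjacent, so the content does not have to jump a visible gap.
function canSpanHorizontally(
  a: ScreenGeometry,
  b: ScreenGeometry,
  maxGapMm = 100
): boolean {
  const upright = a.rotationDegrees === 0 && b.rotationDegrees === 0;
  const [left, right] = a.x <= b.x ? [a, b] : [b, a];
  const horizontalGap = right.x - (left.x + left.width);
  const verticallyAligned =
    Math.abs(left.y - right.y) < Math.min(left.height, right.height) / 4;
  return upright && horizontalGap >= 0 && horizontalGap <= maxGapMm && verticallyAligned;
}

// Example: two 1.2 m screens mounted side by side with a 5 cm bezel gap.
const screens: ScreenGeometry[] = [
  { deviceId: "tv-left",  x: 0,    y: 0, width: 1200, height: 700, rotationDegrees: 0 },
  { deviceId: "tv-right", x: 1250, y: 0, width: 1200, height: 700, rotationDegrees: 0 },
];
console.log(canSpanHorizontally(screens[0], screens[1])); // true
```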

We will discuss on-boarding and the challenges in designing an end-to-end multiscreen experience in greater detail in a future blog post.



More?!! A first look at MPEG-MORE

Sometimes standards make all the difference.  In this technical post, MARK LOMAS from BBC R&D looks at some standards that seem relevant to the delivery of multi-screen experiences.  The 2-IMMERSE team is not yet clear about the impact these standards will have.  If nothing else, the development of standards forces someone to diligently describe, and attempt to generalise, a problem – and that’s useful.  

Multiscreen experiences are unlike regular TV programmes. The components required to make multiscreen experiences are delivered, a bit like flat-pack furniture, ready to be assembled in different ways for playback across a group of devices.

Playing media at the right time on the right device is tricky; it is like conducting an orchestra. The conductor must interpret the musical score and direct the performers by metering out the musical pulse. 2-IMMERSE is investigating how to conduct ‘orchestras of devices’ in the home, in the cloud and within production environments. We have been following the evolution of MPEG-MORE, a proposed technical standard for media orchestration.

What is MPEG-MORE?

MPEG-MORE is an acronym for “Moving Picture Experts Group – Media ORchEstration”. As of 29 March 2017, the MPEG-MORE specification is at the committee draft stage. The latest documentation is available from the ISO content server.

MPEG-MORE is concerned with the orchestration of media capture, processing and presentation involving multiple devices. The standard describes a reference architecture comprising an object model and a set of control protocols for supporting orchestration scenarios in a network-independent way. It also describes many types of timed metadata, such as spatiotemporal ‘Regions of Interest’, for orchestrating media processing and playback. It specifies how timed orchestration data and timed metadata are delivered in transport formats such as ISOBMFF, MPEG-2 TS and MPEG-DASH.

MPEG-MORE has adopted the same DVB-CSS Inter-device Media Synchronisation standard as 2-IMMERSE, but describes how it can be generalised and used within production. Finally, MPEG-MORE suggests that multiscreen experiences are an Internet of Things (IoT) application. This has encouraged us to investigate IoT cloud platforms such as Amazon Web Services IoT and Google Cloud IoT.

How can 2-IMMERSE leverage MPEG-MORE?

2-IMMERSE and MPEG-MORE have a lot in common. The 2-IMMERSE Theatre At Home trial provides an early working demonstration of some of the MPEG-MORE concepts in action.

• 2-IMMERSE Design Patterns

When the 2-IMMERSE architecture is described in terms of the MPEG-MORE object model, it surfaces system design patterns that are otherwise obscured by implementation detail. This representation shows that 2-IMMERSE microservices can act as orchestrators and demonstrates the extensibility of the 2-IMMERSE platform in a new way.

• Orchestration in the Cloud

MPEG-MORE uses a formulation of DVB-CSS timing and synchronisation that allows it to be extended into the cloud and right back into production systems. It describes a timing architecture for production that mirrors the timing architecture for playback. We would like to leverage this to hoist compute-intensive operations such as video compositing into the cloud to support devices and homes with poorer bandwidth and/or compute capability. We would also like to use this to generate timeline correlations at various points throughout the system.
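
The essence of this timing model is the correlation: a pair of timestamps relating a position on a shared clock (for example a wall clock distributed via DVB-CSS) to a position on a media timeline, together with a playback speed. The TypeScript sketch below is a minimal illustration of that idea; the interface and names are ours, not taken from the MPEG-MORE or DVB-CSS specifications.

```typescript
// A correlation pins a point on a parent clock (e.g. a shared wall clock,
// in nanoseconds) to a point on a media timeline (in timeline ticks),
// and records the rate at which the timeline advances relative to it.
interface Correlation {
  parentTime: number;     // wall-clock time at the correlation point (ns)
  timelineTime: number;   // media timeline position at that instant (ticks)
  speed: number;          // 1 = normal playback, 0 = paused, 2 = double speed
  tickRate: number;       // timeline ticks per second (e.g. 90000 for PTS)
}

// Project the current wall-clock time onto the media timeline.
function toTimelineTime(wallClockNs: number, c: Correlation): number {
  const elapsedSeconds = (wallClockNs - c.parentTime) / 1e9;
  return c.timelineTime + elapsedSeconds * c.speed * c.tickRate;
}

// Example: the timeline was at 90 000 ticks (1 s of PTS) when the wall clock
// read 5 000 000 000 ns; two seconds of wall-clock time later, at normal
// speed, the expected timeline position is 270 000 ticks.
const correlation: Correlation = {
  parentTime: 5_000_000_000,
  timelineTime: 90_000,
  speed: 1,
  tickRate: 90_000,
};
console.log(toTimelineTime(7_000_000_000, correlation)); // 270000
```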

• Quality of Service

A finding from our early pilot of the Theatre At Home experience was that multiple devices in the home compete with each other for available network bandwidth. This is a result of the bit-rate adaptation algorithms used by video players and the absence of a coordination mechanism for managing bandwidth across multiple devices. We discovered that MPEG-MORE’s communication is modelled on MPEG-SAND control messages (see ‘Enhancing MPEG DASH performance via server and network assistance’). This makes MPEG-SAND of interest to 2-IMMERSE because it processes Quality of Service (QoS) information to arrange for the optimal delivery of content in multi-device ecosystems.

MPEG-SAND control messages are sent between sources, sinks and processing nodes in a network, and therefore MPEG-SAND fits the MPEG-MORE object model nicely. The MPEG-MORE specification even gives an example use case in which Mean Opinion Score (MOS) timed metadata is exchanged via control messages to orchestrate playback of different video feeds.

MPEG-SAND was designed to address a range of issues and use cases. Those that are relevant to 2-IMMERSE include:

  • ‘Multiple DASH clients compete for the same bandwidth, leading to unwanted mutual interactions and possibly oscillations’.
  • ‘Where a DASH client lets the delivery node know beforehand what it will request in the near future to prime the cache’.
  • ‘Network mobility, e.g., when the user physically moves, which makes the device switch from one network to another, but must maintain Quality of Experience (QoE).’
  • ‘Inter-device media synchronization, e.g., when one or more DASH clients playback content in a synchronised manner.’

Available network bandwidth is a constraint that must be processed by the 2-IMMERSE layout service when deciding what content to lay out. A future version of the service could collaborate with an MPEG-SAND element to gather, communicate and act on QoS/QoE measurements and to exchange parameters for enhanced reception and delivery.
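
To make this concrete, here is a hedged sketch (TypeScript, with hypothetical names; it is not the actual 2-IMMERSE layout service) of how a bandwidth budget reported by a SAND-like element could feed into layout decisions: candidate components are ranked by priority and admitted only while the aggregate bitrate stays within the measured budget.

```typescript
// Hypothetical description of a component the layout service could place,
// with the bitrate it would consume if selected.
interface CandidateComponent {
  componentId: string;
  priority: number;       // higher = more important to the experience
  bitrateKbps: number;    // estimated network cost of presenting it
}

// Greedily admit components in priority order until the bandwidth budget
// (e.g. a QoS measurement reported by a SAND-style element) is exhausted.
function selectComponents(
  candidates: CandidateComponent[],
  availableKbps: number
): CandidateComponent[] {
  const selected: CandidateComponent[] = [];
  let remaining = availableKbps;
  for (const c of [...candidates].sort((a, b) => b.priority - a.priority)) {
    if (c.bitrateKbps <= remaining) {
      selected.push(c);
      remaining -= c.bitrateKbps;
    }
  }
  return selected;
}

// Example: with 8 Mbit/s available, the main video and leaderboard fit,
// but a second camera feed is dropped rather than degrading everything.
const laidOut = selectComponents(
  [
    { componentId: "main-video",  priority: 10, bitrateKbps: 5000 },
    { componentId: "leaderboard", priority: 8,  bitrateKbps: 200 },
    { componentId: "onboard-cam", priority: 5,  bitrateKbps: 4000 },
  ],
  8000
);
console.log(laidOut.map(c => c.componentId)); // ["main-video", "leaderboard"]
```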

Next Steps

The architectures of 2-IMMERSE and MPEG-MORE are in excellent agreement. We are eagerly awaiting the next revision of the MPEG-MORE specification, but it has already given us plenty to think about. Right now, we are trying to understand how to blend MPEG-MORE, MPEG-SAND, IoT and 2-IMMERSE technologies together to deliver a revised architecture that can solve the many challenges of multiscreen experiences.

Image copyright: stockbroker / 123RF Stock Photo



Designing Production Tools for Interactive Multi-Platform Experiences

BRITTA MEIXNER, JEI LI and PABLO CESAR from CWI Amsterdam write about one of the key challenges for the 2-IMMERSE project:

Recent technical advances make the authoring and broadcasting of interactive multi-platform experiences possible. Most of the efforts to date, however, have been dedicated to delivery and transmission technology (such as HbbTV 2.0) rather than to the production process. Media producers face the following problem: there is a lack of tools for crafting interactive productions that can span several screens.

Currently, each broadcast service (media + application) is created in an ad-hoc manner, for specific requirements, and without offering the creative director sufficient control over the overall experience. Our intention, as a contribution to 2-IMMERSE, is to provide appropriate authoring tools for multi-screen experiences that can reshape the existing workflow to accommodate the new viewing reality.

We have been working to identify new requirements for multi-platform production tools. The requirements for traditional broadcast productions are clear and well established, and are fulfilled by conventional broadcast mixing galleries such as the one above. But it is far from clear how multi-platform experiences will be produced and authored, as so far only a few such experiences exist. Each of these has been treated as an independent project and was consequently implemented on demand for a specific setting. The next generation of production tools must be designed specifically for interactive multi-platform experiences. These new tools are intended for broadcasters and cover both pre-recorded and live selection of content.

To find out about specific requirements for these tools, we conducted semi-structured interviews with seven technical and five non-technical participants. The interview guidelines covered several sections. The first section aimed to identify state-of-the-art knowledge and current challenges in creating interactive multi-platform experiences, to learn how past experiences were authored, and to establish common ground between interviewer and interviewee(s). The second section aimed to find out who will use the system in the future and for what purpose, and included questions like:

  • Who will be users of the system?
  • What level of education or training do users have?
  • What technical platforms do they use today? What tools do they use to produce (immersive) experiences?
  • What other IT systems does the organization use today that the new system will need to link to?
  • What training needs and documentation do you expect for the future system?

Functional and non-functional requirements were then gathered. Example questions for functional requirements were:

  • What does the production process for live experiences look like?
  • Is spatial and temporal authoring desired?
  • Is the spatial design based on templates or can elements be arranged freely? How should layout support be realised, if at all?
  • Should the application be able to preview the presentation? If so, to what degree of detail?
  • Which data formats do you use for video/audio/images that have to be processed by the authoring environment?

Example questions for non-functional requirements were:

  • What are your expectations for system performance?
  • Are there any legal requirements or other regulatory requirements that need to be met?

After conducting the interviews, we analysed the transcripts and identified user characteristics, general and environmental constraints, assumptions and dependencies related to live broadcasts, and open questions and issues. We also differentiated between functional requirements and non-functional (i.e. technical and user) requirements.

Fig. 1

Figure 1 above shows a subset of the initial collection of requirements, open questions and issues. These were then rearranged according to the phases of the production process, as shown in Figure 2 below.

Fig. 2

Especially for the planning phase, a large number of open questions were identified. The production, distribution and consumption phases revealed some technical questions that still need to be resolved. We identified a set of requirements that were used as the basis for the first screen designs of the authoring tool. Based on the most relevant requirements, four concepts for the production tool interfaces were designed: a Chapter-based IDE (Integrated Design Environment), a Mixed IDE, a Workflow Wizard and a Premiere Plugin.

Fig. 3

Fig. 4

The Chapter-based IDE concept (Figure 3) divides a program into several chapters (e.g., for a sports event such as MotoGP: pre-race, main race, post-race). Each chapter contains (dozens of) components such as a leaderboard, a course map, etc. The authoring process starts with newly created or predefined templates, so that all components are assigned to specific regions on the screens. The time at which each component starts and ends is authored on a timeline.

The Mixed IDE concept (Figure 4) does not specify different phases or chapters of a program. Instead, it offers a collection of re-usable Distributed Media Application (DMApp) components, including components that play audio and video, present text and image content, and provide real-time video communication and text chat.

The limited collection of DMApp components (so far, 12 have been developed) reduces the diversity and complexity of the components. Dragging and dropping DMApp components into the defined regions on the screens allows program producers to author a multi-screen experience with a coherent look and feel. The sequence of the applied DMApp components is editable on a timeline.
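
To illustrate the kind of document such a tool might produce, the TypeScript sketch below models a chapter with timed DMApp components assigned to screen regions, together with a simple check an authoring tool could run. The field names are invented for illustration and do not reflect the actual 2-IMMERSE timeline or layout document formats.

```typescript
// Illustrative (not actual 2-IMMERSE) description of an authored chapter:
// which DMApp components appear, where, and for how long.
interface TimedComponent {
  componentType: "video" | "leaderboard" | "map" | "chat" | "image";
  region: "tv-main" | "tv-sidebar" | "companion-full";  // target screen region
  startSeconds: number;   // offset from the start of the chapter
  endSeconds: number;
}

interface Chapter {
  title: string;          // e.g. "pre-race", "main race", "post-race"
  components: TimedComponent[];
}

// A validity check an authoring tool could run: components sharing a region
// must not overlap in time.
function findRegionClashes(chapter: Chapter): [TimedComponent, TimedComponent][] {
  const clashes: [TimedComponent, TimedComponent][] = [];
  const cs = chapter.components;
  for (let i = 0; i < cs.length; i++) {
    for (let j = i + 1; j < cs.length; j++) {
      const a = cs[i], b = cs[j];
      if (a.region === b.region && a.startSeconds < b.endSeconds && b.startSeconds < a.endSeconds) {
        clashes.push([a, b]);
      }
    }
  }
  return clashes;
}

const mainRace: Chapter = {
  title: "main race",
  components: [
    { componentType: "video",       region: "tv-main",    startSeconds: 0,  endSeconds: 2700 },
    { componentType: "leaderboard", region: "tv-sidebar", startSeconds: 0,  endSeconds: 2700 },
    { componentType: "map",         region: "tv-sidebar", startSeconds: 60, endSeconds: 120 },
  ],
};
console.log(findRegionClashes(mainRace).length); // 1: leaderboard and map overlap
```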

Fig. 5

The Workflow Wizard concept (Figure 5) gives a program author an overview of the authoring process and guides them through it step by step. It allows work to be assigned to different collaborators and facilitates checking everyone’s progress.

Fig. 6

The Premiere Plugin (Figure 6) is very similar to the Mixed IDE concept, but is based on the interfaces of Adobe Premiere. Since it is assumed that program authors are expert users of Adobe Premiere, the idea behind this concept is to increase their feeling of familiarity and ease of use.

In the future, further evaluations of these four concepts will be conducted, and new concepts will be formulated based on the feedback.



HbbTV 2: a note on the state of play

MICHAEL PROBST from IRT writes:

Hybrid Broadcast Broadband TV (HbbTV), as Wikipedia details, is both an industry standard (ETSI TS 102 796) and a promotional initiative for hybrid digital TV, harmonising the broadcast, IPTV and broadband delivery of entertainment to the end consumer through connected TVs (smart TVs) and set-top boxes.

The latest version of the HbbTV specification was released about a year and a half ago. From a public perspective, not much has happened since then, as it is still not possible to purchase an HbbTV 2-enabled TV. But in fact, a great deal has been happening “behind the curtains” and HbbTV 2 is evolving steadily. In this post we highlight some of the latest developments in terms of implementations and services.

Let’s have a short look at the new elements in HbbTV 2:

  • Support for HTML5, including the media elements, replacing the “unloved” XHTML 1.0
  • Fancy UIs with CSS3 animations and transitions, as well as downloadable fonts, e.g. to support languages with exotic characters
  • Closed subtitles enabled for all broadband-delivered media
  • Companion screen: discovery of TVs and launching of HbbTV apps; discovery of special (manufacturer-specific) launcher apps and launching of mobile apps; communication between HbbTV apps and mobile apps
  • Media synchronisation: the most complex feature, which allows broadcast content on the TV to be played in sync with content from the Internet, either on the TV itself or on a companion device (see the sketch below)
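
For orientation, here is a hedged sketch of how an HbbTV 2 application might drive this kind of multi-stream synchronisation through the MediaSynchroniser embedded object. It is written as TypeScript against loosely declared terminal objects; the method names follow our reading of the specification, but the exact signatures, timeline selectors and error handling should be checked against ETSI TS 102 796 before relying on them.

```typescript
// Loosely-typed stand-ins for terminal-provided objects; in a real HbbTV 2
// application these are supplied by the terminal, not declared like this.
declare const oipfObjectFactory: { createMediaSynchroniser(): any };
declare const broadcastVideo: object; // the video/broadcast object showing the service
declare const dashPlayer: object;     // an HTML5 media element playing a DASH stream

// Initialise synchronisation against the broadcast service, identifying its
// timeline with a DVB-CSS timeline selector (a TEMI timeline is used here
// purely as an example value).
const sync = oipfObjectFactory.createMediaSynchroniser();
sync.initMediaSynchroniser(broadcastVideo, "urn:dvb:css:timeline:temi:1:1");

// Add a broadband-delivered stream (e.g. an alternative audio track or an
// extra camera feed) so the terminal keeps it in sync with the broadcast.
// The correlation relating the two timelines would normally come from
// production metadata; the zero values below are placeholders only.
sync.addMediaObject(dashPlayer, "urn:dvb:css:timeline:pts", {
  tlvMaster: 0, // position on the broadcast (master) timeline
  tlvOther: 0,  // corresponding position on the added stream's timeline
});

// Expose the master timeline to companion-screen devices so that apps on a
// phone or tablet can join the same synchronised experience.
sync.enableInterDeviceSync(() => {
  console.log("Inter-device synchronisation is active");
});
```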

HbbTV 1.5 was a relatively small update to the first release, including only features for which there was an urgent market need, and hence it was implemented and deployed rather quickly. In Germany we can assume that all new TVs sold in 2017 implement 1.5. The market share of these devices, relative to all HbbTV TVs, depends to some degree on who you ask, but as of summer 2017 it is quite likely around 50%.

Once HbbTV 2 is deployed, Germany’s broadcasters will have to deal with a large legacy of HbbTV 1.X devices. Other countries have the advantage of starting with HbbTV 2 but will have to migrate from former platforms like MHEG-5 (UK) and DVB-MHP (Italy). The UK industry has chosen to launch HbbTV with a sub-profile (Freeview Play) that does not include the companion screen and media synchronisation features, but Italy could be the first country where deployed TVs have a full HbbTV 2 implementation.

The HbbTV consortium is still very active and working on a number of different topics to allow HbbTV to find new markets:

  • The test suite already supports a first set of the HbbTV 2 test cases, and the HbbTV testing group is working hard to complete it.
  • ADB (Application discovery via broadband) is a specification that will make HbbTV services available in broadcast networks whose operators block or do not care about HbbTV. A second version of this spec will address scenarios where people have to use the network operator’s STBs, e.g. by employing watermarks.
  • IPTV: this spec addresses issues specific to using HbbTV in IPTV networks, i.e. where they differ from classical broadcast networks.
  • Operator apps: not yet released, but this specification will define a completely new class of applications suitable for broadcast network operators.

HbbTV 2 in action

Synchronised textbook using HbbTV 2; image shows the Royal Shakespeare Company production of Richard II (2013), © RSC.

Several companies have shown prototypes over the last few years, including the BBC and IRT, who are partners in the 2-IMMERSE project.

In 2015 IRT presented a first HbbTV 2 showcase – live streaming of MPEG-DASH with EBU-TT-D subtitles – with several partners, including manufacturers of streaming encoders and TVs as well as a CDN provider and a content provider. As a result of the success of this activity, we now see the first live streaming services with MPEG-DASH and subtitles offered by ARD broadcasters. More information on this can be found here.

Also in 2015, IRT cooperated with ARD Mediathek – a catch-up TV service – to enable its mobile application to cast videos to HbbTV 2 TVs. The application has been very useful for testing HbbTV 2.0 features for automatic discovery and application launch with a large number of TV manufacturers, and the next step will be to integrate the function into the end-user version of the application. Further information here.

The concept of HbbTV 2 media synchronisation is largely based on contributions from the BBC; TNO and IRT also supported it as part of their cooperation in the EU-funded hbb-next project. The BBC has built some early HbbTV 2 implementations using its own TV emulator and has released libraries and tools as open source on github.com. At TV Connect 2017 it showcased synchronised playback of broadband media on a companion device alongside a broadcast service on the TV, in cooperation with Opera TV.

If you have not had the chance to see any of these demos, there will be more at IFA Berlin and IBC Amsterdam, both taking place in September.

The HbbTV 2 demos from IRT this year will focus on media synchronisation for broadcast services, for example broadband-delivered foreign-language audio tracks. As a study of emerging device types, IRT has implemented a companion application for Microsoft’s HoloLens that shows additional synchronised video feeds alongside the real picture of an HbbTV 2 TV set.

At IFA, the IRT demos can be seen at the ARD Digital booth in hall 2.2/103 (look for the IRT table at the back of the booth). At IBC, IRT’s stand is 10.F51 (in a corner of hall 10). We look forward to seeing you there.
