2-IMMERSE at NAB 2018

The NAB trade fair, exhibition and conference is a fixture on the calendars of all the key influencers in the broadcasting industry. NAB, which took place 7-12 April in Las Vegas, and its sister conference IBC, held in Amsterdam 13-18 September, are great litmus tests of what matters to the industry. This year the idea of immersion featured strongly, and there was a great deal of interest in high-resolution VR technologies.

ChyronHego, a partner in 2-IMMERSE and a provider of sports graphics and data to the broadcasting industry, took to NAB our project’s interpretation of what immersion can mean. ChyronHego used the multi-screen, object-based broadcasting concept demonstrated by the MotoGP service prototype (which can be seen in this video) to showcase the way we believe we can immerse viewers in an experience across multiple screens.

With private demonstrations in meeting rooms on the ChyronHego stand (the main stand is above, our demo set-up to the right), the work took on a slightly mysterious air. Because the MotoGP demo remains a prototype, it could not be placed front-and-centre on the ChyronHego stand; that space is reserved for today’s products.

In private presentations, however, the work was introduced to many key influencers from more than 15 broadcasters based in Germany, Switzerland, Sweden, the USA, Norway and the UK. ChyronHego’s Director of Software Development Stefan Fjellsten, who led the sessions, was delighted with the response:

It exceeded my expectations. Everyone was really positive, with some asking for exclusive rights to the technology in their territory, and all of them trying to work out how the capability could be applied to the content rights they have.

The feedback and interest we garnered through this comparatively low-key display of our ideas at NAB is really encouraging. Alongside the feedback we are getting from users, it strongly suggests we are following a path that interests providers, rights holders and their viewing public. It also prompts us to be bold as we plan a more comprehensive display of the project’s results at this year’s IBC in the early autumn.

Thanks to Stefan Fjellsten for the photographs.



Open-source DVB CSS libraries available

MATT HAMMOND, Lead Research Engineer, BBC R&D writes:

2-IMMERSE is seeking to enable the easy creation of multi-screen experiences. In doing so, one of our aims is to leave a legacy of re-usable solutions to the technical challenges encountered along the way.

One such technical challenge is synchronising videos playing on different devices. Keeping those videos in sync helps create the impression of a single, unified experience. 2-IMMERSE has adopted a solution defined by DVB, known as DVB Companion Screens and Streams (DVB CSS). The same solution has been adopted by the HbbTV Association, which defines the interactive capabilities of TVs in many countries in Europe and beyond.

The BBC has previously developed an implementation of DVB CSS in the Python programming language (pydvbcss). Now, working as part of 2-IMMERSE, it has developed a new version in JavaScript, the language used across the 2-IMMERSE service prototypes.

This new version has now also been open-sourced, making it available for others to use. It takes the form of two software libraries that are available on GitHub and via the popular npm package manager:

  • The dvbcss-clocks library (on GitHub and via npm) is a powerful way to represent timing and timing relationships.
  • The dvbcss-protocols library (on GitHub and via npm) implements the synchronisation protocols defined by DVB CSS. It uses dvbcss-clocks to model the timing relationships between the devices being synchronised.
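
To give a flavour of how these libraries fit together, below is a minimal sketch of building a clock hierarchy with dvbcss-clocks. It reflects our reading of the library’s documentation (class names such as DateNowClock, CorrelatedClock and Correlation); treat it as an illustration rather than authoritative usage, and consult the GitHub READMEs for the definitive API.

    // Minimal sketch, assuming the dvbcss-clocks API as documented on GitHub.
    var clocks = require("dvbcss-clocks");

    // A root clock driven by Date.now(), ticking in milliseconds.
    var sysClock = new clocks.DateNowClock();

    // A clock modelling a video timeline at 1000 ticks per second. The
    // correlation pins down the relationship: when sysClock reads 50000,
    // the video timeline reads 0.
    var videoClock = new clocks.CorrelatedClock(sysClock, {
        tickRate: 1000,
        correlation: new clocks.Correlation(50000, 0)
    });

    console.log("Video timeline position:", videoClock.now());

    // When a DVB CSS protocol message reports a new timeline position,
    // re-timing is a single correlation change; every clock derived from
    // videoClock follows automatically.
    videoClock.correlation = new clocks.Correlation(sysClock.now(), 20000);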

2-IMMERSE hopes that these libraries will be of use to anyone wanting to understand and use DVB CSS technologies in the future.



Beyond the video wall – responsive content projection

TIM PEARCE from BBC R&D writes:

In the 2-IMMERSE project we are aiming to create a platform that enables content creators to author multimedia experiences which adapt to the different ecosystems of devices surrounding users. In the Theatre At Home and MotoGP trials, we have focused on second-screen (companion) experiences, presenting content on a phone or tablet that synchronises with the main content on the TV screen.

Our future prototypes, including Theatre In Schools, are designed to extend the platform’s capabilities to enable experiences that can be displayed on multiple communal screens surrounding a larger group of users. By building these features into the platform, we will enable content creators to craft highly immersive experiences, including more diverse applications such as digital signage and art installations in addition to traditional broadcast content.

The BBC has been exploring ‘on-boarding’: the end-to-end user experience of joining and setting up a 2-IMMERSE experience. One of the challenges is joining the many different devices in a room to a single experience, especially as some devices lack a keyboard and mouse. Multiple communal screens pose an interesting use case for the layout service, one which needs to be captured in the on-boarding process. If we want to project our object-based content onto multiple communal screens, how do we know the spatial relationship between those screens, and how can we use this to influence the layout of content? Can components span multiple screens, or animate from one screen to another?

A really elegant solution to this problem can be found in the Info Beamer project, an open-source digital signage solution for the Raspberry Pi (above). Each communal screen displays a unique QR code, which is scanned by a camera-equipped companion device to connect the screens to the experience and determine their relative positions, sizes and orientations. This could be a potential solution for on-boarding a number of large-screen devices in future 2-IMMERSE scenarios.
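
As a thought experiment, the sketch below shows what such QR-based on-boarding might look like in JavaScript. Everything in it is hypothetical: the endpoint URLs, payload shapes and helper functions are illustrative assumptions, not part of the 2-IMMERSE platform or the Info Beamer API.

    // Hypothetical on-boarding sketch; the endpoints and payloads are
    // invented for illustration, not real 2-IMMERSE or Info Beamer APIs.

    // Placeholder: a real client would draw the QR code on the TV screen,
    // e.g. using a QR code rendering library.
    function renderQrCode(token) {
        console.log("Displaying QR code for join token:", token);
    }

    // Each communal screen fetches a join token from a (hypothetical)
    // on-boarding service and displays it as a QR code.
    async function showJoinCode(screenId) {
        const res = await fetch("https://onboarding.example.org/token", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ screenId: screenId })
        });
        const body = await res.json();
        renderQrCode(body.token);
    }

    // The companion device scans the room with its camera. For each QR code
    // it detects, it reports the token plus the code's corner points in
    // camera space; from these, a layout service can estimate each screen's
    // relative position, size and orientation.
    function reportDetectedScreen(token, corners, frameSize) {
        return fetch("https://onboarding.example.org/register", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ token: token, corners: corners, frameSize: frameSize })
        });
    }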

We will discuss on-boarding and the challenges in designing an end-to-end multiscreen experience in greater detail in a future blog post.



Designing Production Tools for Interactive Multi-Platform Experiences

BRITTA MEIXNER, JEI LI and PABLO CESAR from CWI Amsterdam write about one of the key challenges for the 2-IMMERSE project:

Recent technical advances make authoring and broadcasting of interactive multi-platform experiences possible. Most of the efforts to date, however, have been dedicated to the delivery and transmission technology (such as HbbTV2.0), and not to the production process. Media producers face the following problem: there is a lack of tools for crafting interactive productions that can span across several screens.

Currently, each broadcast service (media + application) is created in an ad-hoc manner, for specific requirements, and without offering the creative director sufficient control over the overall experience. Our contribution to 2-IMMERSE is to provide appropriate authoring tools for multi-screen experiences, tools that can reshape the existing workflow to accommodate the new viewing reality.

We have been working to identify new requirements for multi-platform production tools. The requirements for traditional broadcast productions are clear and well-established, and are fulfilled by conventional broadcast mixing galleries such as the one above. But it is far from clear how multi-platform experiences will be produced and authored, as so far only a few such experiences exist. Each of them has been treated as an independent project and, as a consequence, was implemented on demand for a specific setting. The next generation of production tools must be designed specifically for interactive multi-platform experiences. These new tools are intended for broadcasters and must cover both pre-recorded and live selection of content.

To find out about specific requirements for these tools, we conducted semi-structured interviews with seven technical and five non-technical participants. The interview guidelines covered several sections. The first section sought to identify state-of-the-art knowledge and current challenges in creating interactive multi-platform experiences, to learn how past experiences were authored, and to establish common ground between interviewer and interviewees. The second section aimed to find out who will use the system in the future and for what purpose, and included questions like:

  • Who will be users of the system?
  • What level of education or training do users have?
  • What technical platforms do they use today? What tools do they use to produce (immersive) experiences?
  • What other IT systems does the organization use today that the new system will need to link to?
  • What training needs and documentation do you expect for the future system?

Functional and non-functional requirements were then gathered. Example questions for functional requirements were:

  • What does the production process for live experiences look like?
  • Is spatial and temporal authoring desired?
  • Is the spatial design based on templates or can elements be arranged freely? How should layout support be realised, if at all?
  • Should the application be able to preview the presentation? If so, to what degree of detail?
  • Which data formats do you use for video/audio/images that have to be processed by the authoring environment?

Example questions for non-functional requirements were:

  • What are your expectations for system performance?
  • Are there any legal requirements or other regulatory requirements that need to be met?

After the interviews were conducted, the transcripts were analysed, and user characteristics, general and environmental constraints, assumptions and dependencies related to live broadcasts, and open questions and issues were identified and noted. We also distinguished functional requirements from non-functional (i.e. technical and user) requirements.

Fig. 1

Figure 1 above shows a subset of the initial collection of requirements, open questions, and issues. These were then rearranged according to phases of the production process, for which see Figure 2 below.

Fig. 2

For the planning phase especially, a large number of open questions were identified. The production, distribution and consumption phases revealed some technical questions that still need to be solved. We identified a set of requirements that served as the basis for the first screen designs of the authoring tool. Based on the most relevant requirements, four production tool interface concepts were designed: a Chapter-based IDE (Integrated Design Environment), a Mixed IDE, a Workflow Wizard and a Premiere Plugin.

Fig. 3

Fig. 4

The Chapter-based IDE concept (Figure 3) divides a programme into several chapters (for a sports event such as MotoGP: pre-race, main race, post-race). Each chapter contains components, potentially dozens of them, such as a leaderboard or a course map. The authoring process starts from newly created or predefined templates, so that every component is assigned to a specific region on a screen. The time at which each component starts and ends is authored on a timeline.
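
To make the concept concrete, a chapter-based authoring document might take a shape like the sketch below. This structure is purely our illustration; it is not the project’s actual production format, and the template, region and component names are invented.

    // Illustrative data shape for a chapter-based programme; invented for
    // this post, not the actual 2-IMMERSE authoring format.
    const programme = {
        title: "MotoGP: British Grand Prix",
        chapters: [
            {
                id: "pre-race",
                layoutTemplate: "tv-with-companion-panels", // predefined template
                components: [
                    // Each component sits in a template region and has start
                    // and end times (in seconds) on the chapter's timeline.
                    { type: "video", region: "tv-main", start: 0, end: 1200 },
                    { type: "leaderboard", region: "companion-side", start: 60, end: 1200 },
                    { type: "course-map", region: "companion-main", start: 300, end: 900 }
                ]
            },
            { id: "main-race", layoutTemplate: "tv-with-companion-panels", components: [] },
            { id: "post-race", layoutTemplate: "tv-with-companion-panels", components: [] }
        ]
    };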

The Mixed IDE concept (Figure 4) does not divide a programme into phases or chapters. Instead, it offers a collection of re-usable Distributed Media Application (DMApp) components, including components that play audio and video, present text and image content, and provide real-time video communication and text chat.

The limited collection of DMApp components (twelve have been developed so far) keeps their diversity and complexity manageable. Dragging and dropping DMApp components into the defined regions on screens lets programme producers give the multi-screen experience a coherent look and feel, and the sequence of applied components can be edited on a timeline.
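
The fixed palette is what keeps authoring tractable, and the sketch below illustrates the idea. The palette entries echo component types mentioned above, but the registry and validation helper are our own invention, not the actual DMApp component set.

    // Illustrative DMApp component palette; the type names are examples,
    // not the real registry of twelve components.
    const componentPalette = {
        "video-player": "Plays a synchronised video stream",
        "audio-player": "Plays a synchronised audio stream",
        "text-panel":   "Presents text content",
        "image-panel":  "Presents image content",
        "video-chat":   "Real-time video communication",
        "text-chat":    "Real-time text chat"
    };

    // Authoring only permits known palette entries to be dropped into a
    // layout region, keeping every experience within the same vetted set.
    function placeComponent(typeName, region, start, end) {
        if (!(typeName in componentPalette)) {
            throw new Error("Unknown DMApp component type: " + typeName);
        }
        return { type: typeName, region: region, start: start, end: end };
    }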

Fig. 5

The Workflow Wizard concept (Figure 5) gives a programme author an overview of the authoring process and guides the authoring step-by-step. It allows the assignment of work to different collaborators and facilitates a check on everyone’s progress.

Fig. 6

The Premiere Plugin concept (Figure 6) is very similar to the Mixed IDE concept, but is based on the interface of Adobe Premiere. Since programme authors are assumed to be expert users of Adobe Premiere, the idea behind this concept is to increase their feeling of familiarity and ease of use.

In the future, further evaluations of these four concepts will be conducted, and new concepts will be formulated based on the feedback.



2-IMMERSE at half time

The 2-IMMERSE project has just passed its halfway point, with another 17 months to run until the end of November 2018. So this seems a good moment to take stock – and also to breathe life back into our blog, which needs to be much more active than it has been to date. Our intention now is to post at least once a week, on Mondays, and to offer a wider range of reflections on our research and progress and on related topics. You can also keep up to date via our Twitter feed @2Immerse.

The key outputs are our public deliverables, which can now be found here. Those papers that are probably of most interest beyond the project include (with links that take you to downloadable .pdf files):

D4.1 Prototype Service Descriptions – Initial Version

The four multi-screen service innovation prototypes that will be developed by 2-IMMERSE are described in this document. They are called “Watching Theatre at Home”, “Watching Theatre at School”, “MotoGP at Home” and “Watching Football in a Pub”. For each service innovation prototype the market context, the social context and the trial plans are described. Whilst the use cases are described very specifically, it seems clear that many aspects of the service innovation concepts will have much broader applicability.

D3.1 General Concepts, Designs and Interactions for Multi Screen Experiences

This document outlines the concepts at play in the design of the multi-screen experiences under development in 2-IMMERSE for the project’s four pilots.

D2.1 System Architecture version 1

This document describes the system architecture being developed by the 2-IMMERSE project, designed to enable the four multi-screen service prototypes that will be delivered through the project. The architecture is layered as a set of platform services, a client application architecture and a production architecture. It is a work in progress: it will evolve both as we refine and specify it in more detail, and as we deliver each of the multi-screen service prototypes.

D2.3/D5.1 Distributed Media Application Platform and Multi-Screen Experience Components: Description of First Release

This document describes the first release of the 2-IMMERSE Distributed Media Application Platform, Multi-Screen Experience Components and Production Tools that have been developed for the project’s first service prototype, “Watching Theatre at Home”. It provides an illustrated tour of the project’s technical achievements to date, along with details of the current status of the platform and components and key features developed beyond those described in deliverables D2.1 and D2.2.

Earlier this year we organised our first trial with the Theatre at Home prototype, which was a fascinating experiment even if the results were mixed. We learned an enormous amount from the trial, and it tested in numerous ways the system architecture that the 2-IMMERSE team has spent many months building.

The prototype service allows two households to share the experience of watching a theatre performance together, with the production presented on a TV screen. Each household has a second-screen device, a tablet, which provides access to synchronised information streams and communication resources directly from the broadcast provider. The experience is curated to mirror aspects of the ritualised nature of going to the theatre. It allows users to:

  • Chat to each other (using video chat) before and after the performance and during the interval.
  • Receive warnings, as they would at the theatre, that the performance is about to start.
  • Access additional material related to the production, much as they would in a theatre programme.
  • Send messages to each other discreetly during the performance using text chat.

A detailed exploration and evaluation of the Theatre at Home trial is now available in our deliverable D4.2 Theatre trial evaluation results (link to .pdf).

We are confident that our platform is now appropriately flexible and sufficiently robust to allow us to run our next set of trials for the MotoGP at Home prototype alongside the British Grand Prix event at Silverstone, 24-27 August 2017. More details of our work at Silverstone will be featured here.

One other welcome piece of news about the project is that 2-IMMERSE won the Best Paper Award at the recent ACM TVX2017 conference and trade show in Hilversum in the Netherlands. The project successfully demonstrated our Theatre at Home prototype at TVX2017 – the abstract for this presentation, 2-IMMERSE: A Platform for Orchestrated Multi-Screen Entertainment, is available here. And thrillingly, the Best Paper Award went to ‘On time or not on time: a user-study on delays in a synchronised companion-screen experience’, authored by IRT’s Christoph Ziegler (pictured above at TVX) and Christian Keimel, and the BBC’s Rajiv Ramdhany and Vinoba Vinayagamoorthy. A sliver of the award citation is our header image above.
