Success for 2-IMMERSE at TVX

Jie Li honoured with Best Paper Award

On 27 June, Jie Li of the Distributed and Interactive Systems (DIS) group at 2-IMMERSE partner CWI received the Best Paper Award at ACM TVX 2018, the ACM International Conference on Interactive Experiences for Television and Online Video. Li won the award for the paper ‘A New Production Platform for Authoring Object-based Multiscreen TV Viewing Experiences’. Outlining research done within 2-IMMERSE, the paper was co-authored by Pablo Cesar, Maxine Glancy (BBC), Jack Jansen and Thomas Röggla.

Multiscreen TV viewing refers to a spectrum of media productions that can be watched on TV screens and companion screens such as smartphones and tablets. TV production companies are now promoting an interactive and engaging way of viewing TV by offering tailored applications for TV programs.

However, viewers are reluctant to install dozens of applications and switch between them, which is one of the obstacles preventing companion-screen applications from reaching mass audiences. To address this, TV production companies need a standard process for producing multiscreen content, allowing viewers to follow all kinds of programs in a single application. The paper proposes a new object-based production platform for authoring and broadcasting multiscreen programs.
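
To make the idea of object-based production more concrete, here is a minimal sketch of how a multiscreen programme might be described as a set of media objects with layout and timing hints, so that a single client application can render it on a TV or a companion screen. This is our own TypeScript illustration; the type and field names are assumptions and do not reproduce the paper's actual data model.

    // Hypothetical object-based programme description (illustrative only).
    // Each media object declares what it is, when it is active, and which
    // screens it may appear on; the client decides the final layout.

    type ScreenClass = "tv" | "companion";

    interface MediaObject {
      id: string;
      kind: "video" | "audio" | "graphic" | "data";
      source: string;               // URL of the media asset or data feed
      start: number;                // seconds from programme start
      end?: number;                 // open-ended if omitted (e.g. a live feed)
      targetScreens: ScreenClass[]; // screens this object may be rendered on
      priority: number;             // used when screen space is limited
    }

    interface Programme {
      id: string;
      title: string;
      objects: MediaObject[];
    }

    // Example: a live match with the main feed on the TV and line-ups on a tablet.
    const programme: Programme = {
      id: "example-match",
      title: "Example Match",
      objects: [
        { id: "main-feed", kind: "video", source: "https://example.org/live/main.mpd",
          start: 0, targetScreens: ["tv"], priority: 1 },
        { id: "line-ups", kind: "graphic", source: "https://example.org/gfx/lineups.json",
          start: 0, end: 300, targetScreens: ["companion"], priority: 2 },
      ],
    };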

Besides the award-winning paper, the DIS group also presented a demo at the conference, showcasing object-based live broadcasting of a sports event to both a large TV screen and companion screens. The demo covered both the producer and the viewer side. The producer side takes place inside an outside broadcast truck, where the director/producer can insert on-demand snippets and interactive components into the live broadcast. The viewer side is a home, where a viewer with a TV screen and several companion devices enjoys a personalized and interactive experience.

CWI’s Distributed and Interactive Systems (DIS) research group focuses on facilitating and improving the way people access media and communicate with others and with the environment. They address key problems for society and science, resulting from the dense connectivity of content, people, and devices. The group uses recognized scientific methods, following a full-stack, experimental, and human-centered approach.

More at TVX from BBC and IRT

A work-in-progress paper submitted by BBC and IRT explores the potential of augmented reality technology as a novel way to allow users to view a sign language interpreter through an optical head-mounted display while watching a TV programme. The paper addresses the potential of augmented reality for personalising TV access services. Based on guidelines from regulatory authorities and research on traditional sign language services on TV, as well as feedback from experts, the authors justify two design proposals. They describe how the content for the AR prototype applications was produced and what was learned during the process, and finally develop questions for their upcoming user studies.

Also at TVX, BBC and IRT demonstrated results of our work stream targeting deployment of the 2-IMMERSE apps on HbbTV 2.0 devices. In cooperation with Samsung, we showed the 2-IMMERSE MotoGP experience from the 2017 Silverstone GP on a recent consumer device running HbbTV 2.0-ready firmware.



Back of the net!

The 2-IMMERSE Football production team report on the success of our major trial at Wembley.

2-IMMERSE passed another milestone at the recent 2018 FA Cup Final at Wembley Stadium between Chelsea and Manchester United as we proved our prototype end-to-end live production system for object-based broadcasting to multiple screens.

For those who may not know, the FA Cup is the world’s oldest football challenge cup, first contested in 1871. The high-profile final closes the UK domestic football season and has spawned and fuelled many of the intense inter-club rivalries that give the UK league its particular character. For decades the match has also been a key television moment and a showcase for experimentation and innovation in the presentation of football on television, so 2-IMMERSE is very proud to have been a small part of that great tradition this year. The FA Cup Final is a national treasure, but it also draws a global audience estimated in the hundreds of millions.

Our team for the FA Cup was drawn from BT, CWI, Cisco, and BBC R&D – and they all worked tirelessly over the preceding weeks and months to design and develop the end-to-end system. Early ‘sighting visits’ to Wembley (see our previous blog post) helped us identify and address the key issues that can affect the delivery of the service and helped the test on the day of the Final to go as smoothly as it did.

On the day, we arrived with our modest broadcast truck and once the feeds from all the individual cameras were in place we were set to go. After all the preparation we were able to conduct three key activities at Wembley:

  • Live orchestration of match graphics, mirroring the broadcast graphics production but using our own HTML5-focused production and delivery chain
  • Live monitoring and demonstration of the prototype Football DMApp (see blog post) on site at Wembley
  • ChyronHego data capture for Virtual Placement: camera parametric data and Tracab player tracking data.

In addition to the on-site achievements, and to further illustrate the end-to-end nature of the trial, we engaged 10 remote viewing participants in the UK, Sweden and the Netherlands to experience watching our interactive, customisable and multi-screen version of the 2018 FA Cup Final.

Among the key elements and features of our system are the following (which will be illustrated in a short video coming soon):

  • Live triggering tool with trigger launcher driven by an Elgato Stream Deck (illustrated by the sketch after this list)
  • Real-time preview in the OB van, driven directly from SDI
  • TV emulator and Samsung tablet as client devices
  • Live sessions integrated within client onboarding workflow
  • Match GFX components developed by ChyronHego and implemented using a new DMApp component that integrates with their Prime universal graphics platform.
  • Interactive experience on tablet included:
    • ScoreClock menu with match overview, team line-ups and replays
    • Broadcast menu with camera feed viewer and customised picture-in-picture layout on TV
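
As an illustration of the live triggering element above, the sketch below shows the kind of event a trigger launcher might emit when the director fires a graphics component during the live broadcast. It is a minimal TypeScript sketch under our own assumptions; the field names are hypothetical and do not reflect the actual 2-IMMERSE trigger format.

    // Hypothetical trigger event for launching a DMApp graphics component
    // into a live session (all field names are assumptions).

    interface TriggerEvent {
      sessionId: string;                  // live session the trigger applies to
      component: string;                  // DMApp component to activate
      action: "show" | "hide";
      parameters: Record<string, string>; // data the component needs to render
      mediaTime: number;                  // media time at which the trigger takes effect
    }

    // Example: the director presses a Stream Deck key to show the score clock.
    const showScoreClock: TriggerEvent = {
      sessionId: "fa-cup-final-2018",
      component: "score-clock",
      action: "show",
      parameters: { home: "Chelsea", away: "Manchester United", score: "1-0" },
      mediaTime: 2712.4,
    };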

Like most productions, this was a massive collaboration involving people and organisations across and beyond 2-IMMERSE. We are happy to acknowledge the help of: AWS Elemental, for providing real-time encoders; BT Technology Service and Operations, for providing some key contacts and helping with system design; BT Sport, for permission to access the stadium and allowing our small van to take its place alongside the big trucks from BT Sport and the BBC; and BT Sport and BT Media and Broadcast, for providing the guaranteed upstream bandwidth we needed.

The next step – or at least one of them – is to work with all the video and data captured at the Final to develop a stand-alone demo showing the capabilities of the system and illustrating the end-to-end process. We will present this on our stand in the Future Zone at IBC 2018 in Amsterdam in September. We look forward to seeing you there.



Designing Production Tools for Interactive Multi-Platform Experiences

BRITTA MEIXNER, JIE LI and PABLO CESAR from CWI Amsterdam write about one of the key challenges for the 2-IMMERSE project:

Recent technical advances make authoring and broadcasting of interactive multi-platform experiences possible. Most of the effort to date, however, has been dedicated to the delivery and transmission technology (such as HbbTV 2.0), not to the production process. Media producers face the following problem: there is a lack of tools for crafting interactive productions that can span several screens.

Currently, each broadcast service (media + application) is created in an ad-hoc manner, for specific requirements, and without offering the creative director sufficient control over the overall experience. Our contribution to 2-IMMERSE is to provide appropriate authoring tools for multi-screen experiences that can reshape the existing workflow to accommodate this new way of watching.

We have been working to identify new requirements for multi-platform production tools. The requirements for traditional broadcast productions are clear and well established, and are fulfilled by conventional broadcast mixing galleries such as the one pictured above. But it is far from clear how multi-platform experiences will be produced and authored, as only a few such experiences exist so far. Each of them has been treated as an independent project and was, as a consequence, implemented on demand for a specific setting. The next generation of production tools must be designed specifically for interactive multi-platform experiences. These new tools are intended for broadcasters and cover both pre-recorded and live selection of content.

To find out about specific requirements for the aforementioned tools, we conducted semi-structured interviews with seven technical and five non-technical participants. The interview guidelines covered several sections. The first section aimed to identify the state of the art and current challenges in creating interactive multi-platform experiences, to learn how past experiences were authored, and to establish a common ground between interviewer and interviewee(s). The second section aimed to find out who will use the system in the future and for which purpose, and included questions like:

  • Who will be users of the system?
  • What level of education or training do users have?
  • What technical platforms do they use today? What tools do they use to produce (immersive) experiences?
  • What other IT systems does the organization use today that the new system will need to link to?
  • What training needs and documentation do you expect for the future system?

Functional and non-functional requirements were then gathered. Exemplary questions for functional requirements were:

  • What does the production process for live experiences look like?
  • Is spatial and temporal authoring desired?
  • Is the spatial design based on templates or can elements be arranged freely? How should layout support be realised, if at all?
  • Should the application be able to preview the presentation? If so, to what degree of detail?
  • Which data formats do you use for video/audio/images that have to be processed by the authoring environment?

Exemplary questions for non-functional requirements were:

  • What are your expectations for system performance?
  • Are there any legal requirements or other regulatory requirements that need to be met?

After conducting the interviews, we analysed the transcripts and identified user characteristics, general and environmental constraints, assumptions and dependencies related to live broadcasts, and open questions and issues. We also differentiated between functional requirements and non-functional (i.e. technical and user) requirements.

Fig. 1

Figure 1 above shows a subset of the initial collection of requirements, open questions, and issues. These were then rearranged according to phases of the production process, for which see Figure 2 below.

Fig. 2

For the planning phase especially, a large number of open questions were identified. The production, distribution, and consumption phases revealed some technical questions that still need to be solved. We identified a set of requirements that served as the basis for the first screen designs of the authoring tool. Based on the most relevant requirements, four concepts for the production tool interface were designed: a Chapter-based IDE (Integrated Design Environment), a Mixed IDE, a Workflow Wizard and a Premiere Plugin.

Fig. 3

Fig. 4

The Chapter-based IDE concept (Figure 3) divides a program into several chapters (e.g., for a sports event such as MotoGP: pre-race, main race, post-race). Each chapter contains (dozens of) components such as a leaderboard, a course map, and so on. The authoring process starts with newly created or predefined templates, so that all components are assigned to specific regions on the screens. The start and end time of each component is authored on a timeline.
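
As a rough sketch of the data model this concept implies (our own TypeScript illustration; the names are assumptions, not the tool's actual schema), a programme could be represented as an ordered list of chapters, each placing components into template regions with start and end times:

    // Illustrative structure behind the Chapter-based IDE concept
    // (hypothetical types; the real authoring tool may differ).

    interface PlacedComponent {
      component: string; // e.g. "leaderboard", "course-map"
      region: string;    // region of the screen template the component occupies
      start: number;     // seconds from the start of the chapter
      end: number;
    }

    interface Chapter {
      name: string;                 // e.g. "pre-race", "main race", "post-race"
      template: string;             // newly created or predefined screen template
      components: PlacedComponent[];
    }

    interface ChapterBasedProgramme {
      title: string;
      chapters: Chapter[];
    }

    const motoGP: ChapterBasedProgramme = {
      title: "MotoGP",
      chapters: [
        { name: "pre-race", template: "build-up-layout",
          components: [{ component: "leaderboard", region: "companion-main", start: 0, end: 600 }] },
        { name: "main race", template: "race-layout",
          components: [{ component: "course-map", region: "tv-overlay", start: 0, end: 2700 }] },
      ],
    };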

The Mixed IDE concept (Figure 4) does not divide a program into different phases or chapters. Instead, it builds on a collection of re-usable Distributed Media Application (DMApp) components, which includes components that play audio and video, present text and image content, and provide real-time video communication and text chat.

Keeping the collection small (so far, 12 DMApp components have been developed) limits the diversity and complexity of the components. By dragging and dropping DMApp components into defined regions on the screens, program producers can author a multi-screen experience with a coherent look and feel. The sequence of applied DMApp components is editable on a timeline.
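
To make the Mixed IDE idea more tangible, here is a small TypeScript sketch of a fixed component collection and a drag-and-drop placement onto the timeline. The component kinds listed are only examples inferred from the description above; the twelve actual DMApp components are not enumerated here.

    // Illustrative sketch of a limited DMApp component collection and of
    // placing components on the timeline (names are assumptions).

    type DMAppComponentKind =
      | "video-player"
      | "audio-player"
      | "text-panel"
      | "image-panel"
      | "video-chat"
      | "text-chat";

    interface TimelinePlacement {
      kind: DMAppComponentKind;
      region: string; // screen region the component was dropped into
      start: number;  // seconds on the programme timeline
      end: number;
    }

    // The timeline the producer edits: an ordered sequence of placements.
    const timeline: TimelinePlacement[] = [];

    function dropComponent(kind: DMAppComponentKind, region: string,
                           start: number, end: number): void {
      timeline.push({ kind, region, start, end });
      timeline.sort((a, b) => a.start - b.start); // keep placements in time order
    }

    dropComponent("video-player", "tv-main", 0, 5400);
    dropComponent("text-chat", "companion-side", 0, 5400);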

Fig. 5

The Workflow Wizard concept (Figure 5) gives a program author an overview of the authoring process and guides the authoring step by step. It allows work to be assigned to different collaborators and facilitates checking everyone’s progress.

Fig. 6

The Premiere Plugin concept (Figure 6) is very similar to the Mixed IDE, but is based on the interface of Adobe Premiere. Since program authors are assumed to be expert users of Adobe Premiere, the idea behind this concept is to increase their feeling of familiarity and ease of use.

In the future, further evaluations of these four concepts will be conducted, and new concepts will be formulated based on the feedback.
