“BT Sport: Object-based delivery ‘next big thing’ ” – so reads the title of a recent article on the website Advanced Television. Doug Williams from BT argues that this illustrates that we are making progress on exploitation – although perhaps not in the way you might expect.
A key objective of an Innovation action like 2-IMMERSE is to be able to show that the project has an impact, and most particularly that it leads to commercial impact. Simplistically it would be nice to think 2-IMMERSE designed and proved the viability of a widget, and that following the project a new company was started that made, marketed and sold said widget. That would be an easy story to tell – but things don’t often work like that.
2-IMMERSE is a collaborative project involving, among others, BBC, BT, CISCO and IRT, and in this blog post I want to tell you a little about some of the progress we are making towards delivering our exploitation objectives. I am not telling the whole story, just relating something of what is happening in BT.
As in the suggested ‘easy to tell’ story, 2-IMMERSE is designing and proving the viability of something, though that something is not a widget. We are proving, through the work done on the MotoGP experience and on the football trials, for example, that using the object-based delivery approach, we can create multi-screen TV-based experiences and that those experiences can be appealing. We have worked closely with rights holders BT Sport and Dorna Sports as our experiments progressed and we have shared our progress with those rights holders along the way.
A fundamental aspect of this project is that the delivery of the TV experiences is based on objects – where those objects can be video streams, graphics, audio streams or interactive elements that are assembled, according to rules, across the devices that are available to present the experience. There might be an element of “so what?” to that description. It does not highlight any benefits and gives no obvious reason why such an approach is useful to a broadcaster. But there is a simple benefit addressing something that is distinctly missing from current TV services — and that is the ability to personalise or customise the experience to better suit the context in which the TV experience is being consumed.
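To make the idea of rule-based assembly concrete, here is a toy sketch in Python. It is not the actual 2-IMMERSE platform API – the class names, capability labels and matching rule are all invented for illustration – but it shows the core pattern: media objects declare what they need, devices advertise what they offer, and a simple rule decides which object is presented where.

```python
# Toy sketch of object-based assembly (hypothetical names, not the
# real 2-IMMERSE API): objects carry requirements, devices advertise
# capabilities, and a rule assigns each object to a suitable device.

from dataclasses import dataclass, field

@dataclass
class MediaObject:
    name: str            # e.g. "main_video", "stats_panel", "commentary"
    kind: str            # "video", "graphics", "audio" or "interactive"
    needs: set = field(default_factory=set)   # capabilities required

@dataclass
class Device:
    name: str
    capabilities: set    # e.g. {"large_screen", "touch", "audio_out"}

def assemble(objects, devices):
    """Assign each object to the first device that satisfies its needs."""
    layout = {d.name: [] for d in devices}
    for obj in objects:
        for dev in devices:
            if obj.needs <= dev.capabilities:
                layout[dev.name].append(obj.name)
                break
    return layout

objects = [
    MediaObject("main_video", "video", {"large_screen"}),
    MediaObject("stats_panel", "graphics", {"touch"}),
    MediaObject("commentary", "audio", {"audio_out"}),
]
devices = [
    Device("tv", {"large_screen", "audio_out"}),
    Device("tablet", {"touch", "audio_out"}),
]

print(assemble(objects, devices))
# {'tv': ['main_video', 'commentary'], 'tablet': ['stats_panel']}
```

The point of the pattern is that the same set of objects produces a different, sensible layout when a tablet joins or leaves – the broadcaster ships objects and rules, not a single fixed picture.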
TV is great. I am writing this the morning after nearly half the UK population were together watching England play, and disappointingly lose, in the semi-final of the World Cup. Television is the only medium that can achieve such moments where such a large fraction of the population are all sharing the same experience. And that’s magic.
• What if I was hard of hearing and wanted to quieten that crowd volume and increase the commentary volume so I could hear the commentary more clearly? I couldn’t do that last night, but I could with the object-based delivery approach.
• What if I was watching on a small TV and found the graphics a bit hard to read and wanted to increase the size of the graphics so I could read them more easily? I couldn’t do that last night, but I could do that with the object-based delivery approach.
• What if I was profoundly deaf and wanted to pull up a signed commentary on the TV screen? I couldn’t do that last night, but with the object-based delivery approach that is possible.
• What if I wanted to actively track the relative possession enjoyed by each of the teams throughout the game? I couldn’t do that last night, but I could do so with the object-based delivery approach.
• What if I was interested in seeing at all times, the expressions on each of the different managers’ faces? I couldn’t do that last night, but with the object-based delivery approach I could.
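Several of the scenarios above come down to the same mechanism: because the audio arrives as separate objects rather than one pre-mixed track, a viewer preference becomes a simple per-object gain change. The sketch below uses invented object names and a made-up default mix, purely to illustrate the idea.

```python
# Toy sketch (assumed object names, not the real broadcast chain): with
# audio delivered as separate objects, an accessibility preference is
# just a per-object gain adjustment, not a whole new broadcast mix.

DEFAULT_MIX = {"crowd": 1.0, "commentary": 1.0, "stadium_pa": 0.6}

def personalised_mix(base_mix, preferences):
    """Scale each audio object's gain by the viewer's preference (default 1.0)."""
    return {obj: gain * preferences.get(obj, 1.0) for obj, gain in base_mix.items()}

# A hard-of-hearing viewer quietens the crowd and boosts the commentary.
prefs = {"crowd": 0.3, "commentary": 1.5}
print(personalised_mix(DEFAULT_MIX, prefs))
# {'crowd': 0.3, 'commentary': 1.5, 'stadium_pa': 0.6}
```

With a single pre-mixed track none of this is possible; with objects, each of the ‘what if’ viewers above gets their own render of the same broadcast.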
The object-based delivery approach provides the ability to personalise the TV experience to better address the needs of a large number of niche populations, whether they are characterised by a need for more data, clearer graphics or a different audio mix. We’ve been discussing these options with our rights-holder partners for some time now, and together we are beginning to focus on these benefits and to work out how we could take them from the lab to the living room.
BT Sport have a fantastic innovation record. BT Sport were the first to deliver live UHD TV in Europe. BT Sport were among the first to provide 360 video coverage of live sporting events. BT Sport were the first to use Dolby Atmos to deliver an even more compelling soundscape to accompany the pictures. And now? Well now it’s good to see reports of Jamie Hindhaugh, COO of BT Sport, and the one who has driven innovation in the company, ruminating on what innovations are next in the pipeline.
According to an article penned by Colin Mann of the Advanced Television web site, reporting on the Love Broadcasting Summit,
[Jamie Hindhaugh] didn’t see VR and 3D as necessarily the future for sports viewing, as they lacked the social aspect, with BT preferring to offer such options as different camera angles, instead suggesting that object-based delivery would grow with the spread of IP. ‘We will start giving you objects, packages, templates, so you can create your own version of what you are watching.’
Hindhaugh said that hardcore fans of MotoGP could benefit from additional graphics and stats available to the broadcaster. BT would curate the data, and allow the viewer to choose their preferences from the programme feed. ‘That’s where we’re going. That’s where the next big initiative is coming through, combined with 5G mobile.’
We have always seen our greatest potential for innovation impact in large companies as being the ability to affect the way they choose to develop their services. We have set out to work with the parts of our business that run key services and to effect innovative change. We are pleased to see the ideas we are developing and demonstrating are appealing not only to users but also to decision makers and that innovations developed in this project could be the ‘next big thing’.
A report from our man on the terraces, DOUG WILLIAMS from BT R&D:
2-IMMERSE is developing a live service prototype based on football that we aim to test at the FA Cup final on 19 May this year. Several 2-IMMERSE project participants gave up this last weekend to take part in key technical tests to prepare for that event. The test took place at Wembley Stadium during the FA Cup semi-final between Chelsea and Southampton.
Gaining the experience required for the Wembley trial involves a number of fundamental and non-trivial milestones, including knowing what we have to do to get access to Wembley with our own outside broadcast truck, what we have to do to accept all the live feeds from the cameras in the stadium, and what we have to do to make the uplink work.
For most of the 73,000 in attendance the key question was, “Who would book their place in the final against Man United?” Our team members were much less concerned about the result on the field. They were worried by the result on the screens.
The truck we used, thanks to the invaluable assistance of BT Sport, allowed us to test in a live environment a range of tools and systems that have been developed by the project. We learned an enormous amount by being in the middle of a major outside broadcast and through meeting a number of suppliers and colleagues who were centrally involved in delivering it.
We were thrilled that the 2-IMMERSE project was recognised and supported – and importantly that we are now better known to some of the outside broadcast teams whose support we will rely on for our next two live tests. Inevitably, and as you would expect from such a test, we spotted a range of bugs and problems, and we are now busy examining the logs to see if we can identify their causes – and then mend them.
While our team went home with a bug list and deeper knowledge of how the outside broadcast process interacts with our system, Chelsea went home victorious. So the FA Cup final that we will be using in our live trials on Saturday 19 May will be between Manchester United and Chelsea. Prior to that, as the footballers get measured for their match-day suits, we have some bug fixing to attend to and, thankfully, another live test to complete before we conduct a pioneering object-based broadcast test of the 137th FA Cup final.
With thanks to Martin Trimby for the photographs.
STEFAN FJELLSTEN, Chief Architect at 2-IMMERSE partner ChyronHego, writes about the data visualisation possibilities of the project’s football trial:
The third trial in the 2-IMMERSE project will target football broadcasts to create an immersive multi-device experience for the viewer. One of the goals is to provide the audience with more control over the graphics elements presented, giving the viewer the opportunity to focus on what is relevant for them. This control allows presentation of more data than ever before, since the viewers themselves determine what, and how much, to visualise.
Traditionally, broadcasters of live football coverage have been conservative in their use of on-screen graphics, opting instead for a minimalist approach where graphics are shown sparingly and the focus stays on the video coverage. But with a growing demographic of young audiences accustomed to data-rich content, and increasingly expecting that content to be personalised to their individual viewing preferences, the need to control graphics at a far more granular level is now emerging.
But, what is there to show?
Of course, there is a wealth of static statistical information to visualise if desired. But things become more interesting, and more technically challenging, with dynamic in-game data. What is available depends on the level of data capture associated with a particular football game. There are several options for capturing football data, which can be combined or used in isolation. The most common are listed below:
- Manual scouting: one or more people, mostly on site, follow the game with digital tools for capturing the most basic events, such as goals, substitutions, free kicks, corner kicks, offsides, and red and yellow cards. This allows close to real-time distribution of these events; the only delay is the human interaction with the digital tool.
- Video-based scouting with manual input: similar to the above, but more events can be collected since the work is assisted by a video recorder. Events like passes can be captured, along with their type, such as a through-ball. This information is not distributed in real time, but is typically available less than a minute after the event.
- Automatic capture using optical computer vision or wearable technology: these technologies collect the positions of the players and the ball (optical systems only) in real time, far more quickly, objectively and consistently than a human can. The raw data can be used to derive many new metrics, the best known and most used being distance travelled and speed, with the capability to create many hundreds of other physical or spatio-temporal metrics and visualisations.
With ChyronHego a partner in the 2-IMMERSE project, the football trial will give the viewer access to data captured by the TRACAB™ player tracking system, which tracks all the players, the referees and the ball in real time, 25 times per second, throughout a football game. This provides the raw positional data, as well as a wealth of statistics that can be derived and visualised.
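To show how derived metrics fall out of positional samples, here is a minimal Python sketch. The data format and numbers are invented for illustration (this is not the TRACAB data format): it simply treats a player track as a list of (x, y) positions sampled 25 times per second and derives distance travelled and average speed from it.

```python
# Minimal sketch (hypothetical data, not the TRACAB format): deriving
# distance travelled and speed from 25 Hz positional samples.

import math

SAMPLE_RATE = 25  # position samples per second

def distance_travelled(positions):
    """Total path length in metres over a list of (x, y) samples."""
    return sum(
        math.dist(positions[i], positions[i + 1])
        for i in range(len(positions) - 1)
    )

def average_speed(positions):
    """Average speed in metres per second across the samples."""
    duration = (len(positions) - 1) / SAMPLE_RATE
    return distance_travelled(positions) / duration if duration else 0.0

# Four consecutive samples of a player moving 0.25 m per frame.
track = [(0.0, 0.0), (0.25, 0.0), (0.5, 0.0), (0.75, 0.0)]
print(distance_travelled(track))   # 0.75 metres over three frame intervals
print(average_speed(track))        # ~6.25 m/s (0.25 m every 1/25 s)
```

The same pattern – sum or aggregate over consecutive positional samples – underlies most of the spatio-temporal metrics mentioned above.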
Examples of visualising this data on a web page include: statistics and analysis derived from player positions; a pitch radar showing the players, referees and ball, together with Voronoi spatial analysis; and virtual graphics rendered on top of live or recorded video.
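The Voronoi spatial analysis mentioned above partitions the pitch into the regions each player can reach first. A pure-Python sketch of the idea (toy positions and grid size, not ChyronHego's implementation): assigning each grid cell of the pitch to its nearest player is exactly a discrete Voronoi partition, and counting cells gives a crude pitch-dominance measure.

```python
# Toy sketch of a Voronoi-style pitch-control measure (invented data,
# not ChyronHego's implementation): each grid cell is "owned" by the
# nearest player, i.e. a discrete Voronoi partition of the pitch.

import math

def pitch_control(players, pitch=(105.0, 68.0), cells=(10, 6)):
    """Count grid cells nearest to each player as a dominance measure."""
    (length, width), (nx, ny) = pitch, cells
    counts = {name: 0 for name in players}
    for i in range(nx):
        for j in range(ny):
            cell = ((i + 0.5) * length / nx, (j + 0.5) * width / ny)
            nearest = min(players, key=lambda p: math.dist(players[p], cell))
            counts[nearest] += 1
    return counts

# Two players mirrored about the halfway line split the pitch evenly.
players = {"home_9": (30.0, 34.0), "away_5": (75.0, 34.0)}
print(pitch_control(players))
# {'home_9': 30, 'away_5': 30}
```

A real-time version of this over 22 players, refreshed 25 times per second, is the kind of derived visualisation the positional feed makes possible.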
DOUG WILLIAMS writes:
In a previous post John Wyver reviewed the 2-IMMERSE project’s achievements at the halfway point. From that post you’ll see that we have made progress on the technical platform required for the delivery of multi-screen experiences and that we have thought hard about the nature of the particular multi-screen experiences we are developing. In this post I want to look at some of the rather intriguing findings that have emerged from our background research. We included these insights in our deliverable 4.1 describing the service prototypes.