Signing On

Today we had a really interesting debate within the project about sign-on and identity and about whether we needed to build ‘all that stuff’ into our prototype experiences.

It is, of course, a bit complicated. Ideally we’d have a great, simple-to-use single-sign-on process that was secure, complete and scalable. But we are a bit nervous of the overhead that might place on the development teams. This stuff is largely “done”; repeating it looks like an engineering job, not research.

As the debate continued and I listened and learned, I realised the question has layers. When signing on, am I a person or a household? Am I known or am I anonymous? How does signing on work if I have an experience that depends on me using both a shared device and a personal device? And what am I prepared to do, signing-on-wise, if I am accessing a service in a public space like a pub? What kind of sign-on makes sense if I am joining a video chat session with friends versus joining a session in a pub? What if I wanted to place a bet?
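To make those layers a little more concrete, here is a rough sketch in TypeScript of the dimensions a sign-on decision seems to involve. Every name and rule in it is invented for illustration; nothing here comes from the project’s actual software or designs.

    // Hypothetical sketch of the "layers" of a sign-on decision (illustrative only).
    type Subject = 'person' | 'household';
    type Visibility = 'known' | 'anonymous';
    type DeviceKind = 'personal' | 'shared' | 'public';   // phone vs living-room TV vs pub screen
    type Venue = 'home' | 'pub' | 'other-public-space';

    interface SignOnContext {
      subject: Subject;          // am I signing on as me, or as my household?
      visibility: Visibility;    // does the service need to know who I am at all?
      devices: DeviceKind[];     // an experience may span a shared screen and a personal one
      venue: Venue;              // what I am prepared to do differs in a pub vs at home
      needsPayment: boolean;     // placing a bet raises the bar considerably
    }

    // The open question: which procedure is 'right' for each combination? This rule is a guess.
    function chooseSignOn(ctx: SignOnContext): 'none' | 'device-pairing' | 'lightweight' | 'strong' {
      if (ctx.needsPayment) return 'strong';
      if (ctx.visibility === 'anonymous') return 'none';
      return ctx.devices.includes('personal') ? 'device-pairing' : 'lightweight';
    }

The point is not the particular rule at the end, which is almost certainly wrong, but that each of those fields is a question the debate raised.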

The correct solution is not always obvious, but we all recognise a stupid log-in process when we meet one and, from experience, I know it can make me abandon a purchase or a transaction.

Fortunately this is a research project and one of the work packages we have defined is intended to look into issues like this. So we were able to close our debate in violent agreement. We agreed that our platform should be prepared as if it is to be adopted as a real service beyond the project. This means that security and single-sign-on issues must be considered and that solutions for these issues must be defined. It does not mean all these capabilities will necessarily be built and instantiated.

We also agreed that, to define appropriate sign-on capabilities, we would carry out work in our User Interaction Design work package to explore what the ‘right’ sign on procedure might be for each of the different situations described in the storyboards for our prototype scenarios.

We didn’t resolve all the questions the debate created, but I was reassured that the project does have the structure to answer the questions.

Signing off.




FA CUP

Having installed a new sound driver, toggled a few selections and got Audacity to start recording (even though I couldn’t actually hear the commentary), I retired to actually watch the match. I found myself, for the first time in decades, taking more than a passing interest in the FA Cup final. This isn’t because of anything the FA Cup Final did to me; it had more to do with what we, in the project, are hoping to do to the FA Cup Final.

The way the TV coverage of the FA Cup has developed since it was first broadcast in 1938 is well documented by those who have had the pleasure and privilege of being closely involved with it. Brian Barwick’s book “Watching the Match – the inside story of football on television” is a good source of insight into the way the coverage has developed to date and recounts many of the key moments in the history of the coverage, including a rather unseemly tussle between rival film crews in 1969 seeking to create the best coverage of the event. Coverage of the FA Cup is a landmark production that creates indelible memories, particularly for young, football-obsessed boys, across the country.

So what will 2-Immerse be doing that is different? Well, as can be seen in the high-level storyboards, 2-Immerse has watching football in a pub as its focus, rather than watching at home. This means that the programming we wish to create is intended to work best in a social, participatory atmosphere rather than in a sitting room. The editorial impact of that difference will be key to making it work, but it is the approach to production that provides the editorial flexibility.

Traditionally the coverage is assembled for delivery to one ‘main’ screen. 2-Immerse believes that, with IP delivery and emergent standards (like HbbTV 2.0) that allow broadcast and broadband content to interwork, there is an opportunity to create a flexible presentation of the coverage that can adapt to best suit the available screens and provide a better in-pub experience than watching the version designed for broadcast to our homes.

What we want to do is to enable a system that can dynamically and responsively fetch content and arrange it in appealing ways using the resources (screens and loudspeakers) available.
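Purely as an illustration of the kind of behaviour we have in mind, here is a minimal TypeScript sketch that hands out content components to whatever screens a venue has, biggest screen first. The types and the function are my own invention for this post; the project’s actual components and layout logic will look different.

    // Illustrative only: distribute content components across the screens a venue has available.
    interface PubScreen { id: string; widthPx: number; heightPx: number; }
    interface ContentComponent { id: string; priority: number; }   // e.g. main match feed, stats, replays

    // Keep the highest-priority components and give the most important one the biggest screen.
    function arrange(components: ContentComponent[], screens: PubScreen[]): Map<string, string> {
      const byPriority = [...components].sort((a, b) => b.priority - a.priority);
      const bySize = [...screens].sort((a, b) => b.widthPx * b.heightPx - a.widthPx * a.heightPx);
      const layout = new Map<string, string>();   // screen id -> component id
      bySize.forEach((screen, i) => {
        if (byPriority[i]) layout.set(screen.id, byPriority[i].id);
      });
      return layout;
    }

A pub with one screen would get just the match feed; a pub with five could get stats, replays and more besides, without anyone authoring a separate version for each venue.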

If, for example, we consider the audio, we’d like to retain flexibility about what is heard so we can choose a neutral or a partisan commentary. Some pubs may well be partisan and would enjoy a commentary that was biased and focused on, let’s say, Crystal Palace rather than Man Utd. Some pubs may attract and encourage a viewership so engrossed in the event that they care little for the studio commentary, preferring instead to import just the stadium atmosphere by relaying the chants and angst of a partisan crowd. We’d like to explore these options and work with experts, the pub industry and viewers to establish which of these ideas might fly.
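Written down as a simple configuration, the audio choices above might look something like the sketch below. Again, the names and the two example mixes are mine, not a specification from the project.

    // Illustrative sketch of the audio options described above (names invented).
    type Commentary = 'neutral' | 'home-partisan' | 'away-partisan' | 'none';

    interface AudioMix {
      commentary: Commentary;
      stadiumAtmosphere: number;   // 0..1 — how much crowd sound to blend in
    }

    // A partisan pub might pick a biased commentary; an engrossed crowd might drop it entirely.
    const partisanPub: AudioMix = { commentary: 'home-partisan', stadiumAtmosphere: 0.5 };
    const atmosphereOnly: AudioMix = { commentary: 'none', stadiumAtmosphere: 1.0 };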

There are also many ways we could use multiple screens; we aim to build some options and test these with users and professionals, who will let us know what they think ‘works’. Our scenario describes a few ideas we want to explore, as sketched below. These include having a screen showing the output of a player cam; having screens displaying the actions of the respective managers; having screens showing team sheets; having screens showing performance stats; or having screens replaying key moments of the match-so-far to enable late-comers to understand what they’ve missed. All, or none, may really be attractive.
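One way to keep those ideas testable is to treat them as data, so each can be switched on or off for a trial. The sketch below simply enumerates the candidate components named in the scenario; the identifiers are invented for illustration and the “trial layout” is an arbitrary example, not a planned configuration.

    // The multi-screen ideas from the scenario, written down so each can be toggled during testing.
    type ScreenComponentKind =
      | 'main-match-feed'
      | 'player-cam'
      | 'manager-cam-home'
      | 'manager-cam-away'
      | 'team-sheets'
      | 'performance-stats'
      | 'catch-up-replays';   // key moments so far, for late-comers

    // A hypothetical trial configuration for a pub with four screens.
    const trialLayout: ScreenComponentKind[] = [
      'main-match-feed', 'player-cam', 'performance-stats', 'catch-up-replays',
    ];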

To test these ideas we plan to build a demonstrator, and that is why I took slightly more interest in the FA Cup than usual. Colleagues in BT Sport are making available to us recorded individual feeds from over 12 cameras, and we have captured commentaries from different sources, including various radio stations, and an array of in-stadium mics, including a sound field microphone to allow us to recreate and affect the sound presentation using Dolby Atmos. With these sources available to us we will be able to experiment with different layouts, trying different views on different screens and experimenting with different audio. Now we need to collect, log, align and prepare the terabytes of content and begin to play with the emerging set of tools that the software teams are building, so we can start to see what the experience is like when we present this content in different ways. But that is for later on.
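For a flavour of what the alignment step involves, here is a minimal sketch that puts every recording onto a shared timeline, assuming we can find one common reference event (kick-off, say) in each feed. The feed names and numbers are made up, and the real ingest tooling will be considerably more involved than this.

    // Minimal sketch of aligning recorded feeds to a common timeline (illustrative only).
    // Assumes one shared reference event (e.g. kick-off) can be identified in every recording.
    interface Feed { name: string; kickOffAtSeconds: number; }   // position of kick-off within the file

    // Offset each feed so that kick-off lands at t = 0 on the shared timeline.
    function alignmentOffsets(feeds: Feed[]): Map<string, number> {
      const offsets = new Map<string, number>();
      for (const feed of feeds) {
        offsets.set(feed.name, -feed.kickOffAtSeconds);
      }
      return offsets;
    }

    // Example: camera 3's kick-off is 42.5 s into its file, the radio commentary's is 18.0 s in.
    const offsets = alignmentOffsets([
      { name: 'camera-3', kickOffAtSeconds: 42.5 },
      { name: 'radio-commentary', kickOffAtSeconds: 18.0 },
    ]);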

But now I’d better uninstall that sound driver and see if I can get to hear the audio coming from my PC.

