Savvy broadcasters are using software to reap the benefits of emerging technologies whilst making the most of existing assets. Charlotte Hathway reports

It’s no secret that the way people watch TV has evolved rapidly over the last decade. Today’s viewers expect quality, convenience, and innovation, and broadcasters must rise to that challenge to compete in an increasingly complex ecosystem.

Success rests on balancing new investments with making the most of existing devices, services and infrastructure. In this climate, FPGAs are coming to the fore.

Rob Green, Senior Manager, Pro AV & Broadcast at Xilinx, explained, “You can use an FPGA or programmable devices for whatever application you want. That might be creating a different interface standard, for video processing or compression, as a wireless base station processor, or as a consumer TV video processor.

“The same device can be reprogrammed using software in different ways. Broadcasters really like this, because rather than building a specific device for that function, they can use these devices to innovate and to play with video quality and to make the device do exactly what they want.”

FPGAs are being used across every stage of production, from the digital video camera and pre- or post-processing of the captured video and audio content, through to video compression, editing and distribution.

There are a lot of options here, and broadcasters are looking for FPGAs that can deliver high performance, flexibility, protection against obsolescence and low development costs.

In broadcast, it’s common to see camera, sensor and display technology improvements trickling across the whole production workflow, pushing production to innovate and make the most of those improvements. This drives a requirement for higher bandwidth, better video processing capabilities, or the use of compression, as a higher frame rate or a deeper colour depth, for example, means more data being sent.
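To put rough numbers on that relationship, a raw video stream's bit rate is simply resolution × frame rate × bits per pixel. The sketch below is illustrative only: the function name is invented, and the 20 bits/pixel figure assumes common 4:2:2 10-bit studio sampling, ignoring packet and ancillary-data overheads.

```python
def uncompressed_bitrate_gbps(width, height, fps, bits_per_pixel=20):
    """Raw video payload in Gb/s; 20 bits/pixel corresponds to
    4:2:2 10-bit sampling (network overheads ignored)."""
    return width * height * fps * bits_per_pixel / 1e9

print(round(uncompressed_bitrate_gbps(1920, 1080, 60), 1))  # HD/60fps: ~2.5 Gb/s
print(round(uncompressed_bitrate_gbps(3840, 2160, 60), 1))  # 4K/60fps: ~10.0 Gb/s
print(round(uncompressed_bitrate_gbps(7680, 4320, 60), 1))  # 8K/60fps: ~39.8 Gb/s
```

Doubling the frame rate or moving to 12-bit colour scales these figures proportionally, which is why each step up the resolution ladder forces a choice between faster links and compression.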

This is where the ongoing move to IP networks for broadcast comes in. A new standard, SMPTE ST 2110, is being adopted across the industry as the way to send digital video over an IP network, largely due to it being open and interoperable.

In the UK, the BBC Research & Development IP Studio team provides regular updates on what the broadcaster is doing to prepare for future requirements. In a November blog post authored by Peter Brightwell, Lead Engineer at BBC R&D, the team discussed improving the networking and compute technologies used in broadcast facilities, the mechanisms to connect and control them, and how broadcasters can benefit from cloud computing.

In a follow-up conversation, Brightwell explained, “One area we’ve been working on is moving broadcast production facilities to IP working, and we’ve simultaneously moved to distributing content using IP. There’s a lot more flexibility and capacity for new content, and potentially scope for cost savings as you can use generic components more easily.”

This chimes with the view of Xilinx’s Green. He said, “Video over IP gives [broadcasters] a lot of benefits in terms of operational efficiency. You can use equipment anywhere on the network. It doesn’t have to be specific to a particular application. You can put lots of channels within the same ethernet link. To move from HD to 4K to 8K, you just need faster ethernet, you don’t need a new standard like you do with DisplayPort or HDMI or any of those other connectivity evolutions.

“I think we’ll all go to IP, eventually, so there won’t be lots of different interfaces into your PC, monitor or anything else. It’ll all be IP based, packetised, and probably internet-based.”

That, of course, won’t happen overnight. Green suggested that shift will be a gradual one. “I’d say within 10 years. It’s not going to be quick. There’s going to be lots of legacy standards.”

Understanding resolution options

Broadcasters are also experimenting with what will come after HD. Streaming services like Netflix, Hulu and Amazon are focusing on 4K, and there is some demand for 8K. This year’s Olympic Games, held in Tokyo, will deliver coverage with 8K at the start of the chain. It remains to be seen whether that quality will be delivered outside of Japan, but these projects will push the technology forward.

Xilinx’s Green explained why deeper colour and higher dynamic range requirements are being spotlighted. He said, “Arguably, HD HDR (high definition high dynamic range) is better than non-HDR 4K. It’s not just a fact of bigger frame sizes providing a higher quality picture. It’s a combination of all of these. HDR is probably the one that customers and consumers will see a much more impactful difference to their viewing habit.”

He explained that when we talk about “high dynamic range, there’s lots of different formats of that, this is [about] making the blacks blacker, and the whites whiter and everything in between much better in terms of video quality”.

This demand for resolution improvements is one reason video compression is rising in importance. Green explained, “If you want to go to 8K, you might need 100GbE (100 Gigabit Ethernet). If you don’t have that already installed, it will cost money to be put in. The alternative is that you compress the video and put it over your existing infrastructure, which is probably 1GbE or 10GbE.”
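The trade-off Green describes can be sketched as a simple ratio: how much must the stream shrink to fit the installed link? The function name and the 80% usable-link figure below are assumptions for illustration, not standard values; the raw rate assumes 4:2:2 10-bit sampling.

```python
def required_compression_ratio(raw_gbps, link_gbps, usable_fraction=0.8):
    """Minimum compression ratio to fit a raw stream onto a link,
    leaving headroom for packet overhead and other traffic
    (the 80% usable-link figure is an illustrative assumption)."""
    return raw_gbps / (link_gbps * usable_fraction)

raw_8k60 = 7680 * 4320 * 60 * 20 / 1e9           # ~39.8 Gb/s uncompressed
print(round(required_compression_ratio(raw_8k60, 10), 1))   # ~5.0 -> roughly 5:1 on 10GbE
print(round(required_compression_ratio(raw_8k60, 100), 2))  # ~0.5 -> fits 100GbE uncompressed
```

A ratio around 5:1 sits comfortably within the lightweight compression range Green goes on to describe, which is why reusing existing 10GbE plant with light compression can be cheaper than rewiring for 100GbE.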

Broadcasters need to consider what compression technique is suitable for the application. Permanent compression might be acceptable, or there could be a need to return the video to its initial format. This, Green says, depends on the codec used. “There are some codecs like the MPEG based ones, which do quite high compression. That’s more for your streaming from the internet to the home or from a camera out in the field to a broadcast truck or something like that.

“But there are other codecs, which we refer to as mezzanine compression, that do very lightweight compressions. They just squeeze it enough to get it into the network capability. Some of those are visually lossless and some of them mathematically lossless, so you get back the pristine data that you put in at the front end.”

Using cloud in the right way

Cloud computing is becoming vital to keeping pace with evolving requirements for production, storage and distribution techniques. Following a trial in Salford, the BBC is building a larger on-premises cloud in London. Brightwell explained that this “will be a bigger build, and it will be connected to the internet, where previously the little cloud in Salford was only accessible on our R&D intranet. We’ll be concentrating on what it means to make it more widely available and secure.”

The benefit of that initial pilot, Brightwell explained, was that the project team could “understand the technical issues that are involved in operating a cloud” so the larger cloud can be “specific to the types of work [needed] for broadcast production”.

So, why not just throw everything up to the cloud? Green explained, “The cloud has a perception that it’s cheap. It’s a cheaper service to use, because you’re not buying any equipment, but actually, if you’re using it a lot, it’s more expensive than buying your own piece of equipment. If you’re doing live news or sports 24/7, you would never put that in the cloud because it would be way too expensive. You would have your own piece of equipment to do that. It’s just a matter of balancing the costs.”

For the BBC, automation is another area that shows promise. Brightwell explained, “Consider a small music festival, where you have to take quite a lot of equipment out on the road to make a programme. We would like to look at what it might mean to have less infrastructure on site and be able to connect up to a broadcast-friendly automated cloud. Previous events could provide a starting point, so everything is automatically set up with the editing software and graphics that you need. A lot of manual set-up is required at the moment, so the idea that computers could handle the tedious stuff and people can get on with the more creative parts of the job is [worth exploring].”

As in other industry segments, machine learning and artificial intelligence are opening up new possibilities for broadcast technologies. Xilinx’s Green expects to see machine learning used to detect faces within video. Broadcasters can then retain video quality around the face while throwing away more information in the background. This would mean fewer bits being streamed over the network, which could deliver significant cost savings.
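The bit-saving logic Green describes can be illustrated with a toy region-of-interest scheme: blocks that overlap a detected face keep a high quality factor, everything else is compressed harder. All names, box coordinates and quality values here are invented for illustration; this is not a real codec or detector API.

```python
def block_qualities(blocks, face_boxes, face_q=0.9, bg_q=0.3):
    """Assign a higher quality factor to image blocks that overlap a
    detected face, and a lower one elsewhere. Boxes are (x0, y0, x1, y1);
    the 0.9/0.3 quality values are illustrative assumptions."""
    def overlaps(b, f):
        # Axis-aligned rectangles overlap unless separated on either axis
        return not (b[2] <= f[0] or f[2] <= b[0] or b[3] <= f[1] or f[3] <= b[1])
    return [face_q if any(overlaps(b, f) for f in face_boxes) else bg_q
            for b in blocks]

# Two 32x32 blocks; a face detector reported a box at (10, 10)-(30, 30)
print(block_qualities([(0, 0, 32, 32), (32, 0, 64, 32)], [(10, 10, 30, 30)]))
# -> [0.9, 0.3]: the face block keeps quality, the background is squeezed harder
```

In a real encoder this per-block decision would feed the quantiser, so bitrate falls where viewers are least likely to notice.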

Green sees these technologies delivering benefits elsewhere in the production chain, with similar techniques being used to control cameras. He gave the example of a football match, where players could be tracked in the same image and that data tagged during the broadcast. This could generate automated replays, with machine learning used to track events such as goals scored or penalty decisions. The director of the show would then have a highlight reel instantly available when required.

Technology moves from prototype to adoption much faster than in the past, so it could be a matter of just a few years before we’re starting to see these concepts being used widely by broadcasters. What is clear is that software holds the key to striking the balance between old and new investments.