By nxtedition Staff | nxtedition | Published 22nd May 2020
Blick TV is making Swiss media history and breaking the boundaries of traditional publishing and broadcast media. Its ‘web first’ live news platform, powered by nxtedition, makes Blick TV the first digital-only TV broadcaster in Switzerland. Blick TV chose nxtedition to provide journalistic planning tools, story scripting, media management, rundown creation, studio automation, graphics, ingest with metadata and final channel playout, all from within one nxtedition system.
Based in the newsroom of Ringier Pressehaus, Zurich, Blick TV is taking a radically different approach to live news coverage by leveraging nxtedition’s unique capabilities to collate, produce and output content to Blick.ch and the Blick app as both a live video stream and video on demand.
Blick TV isn’t predicated on the traditional linear television format but instead relies upon news which can be updated every hour and also broadcast live in the case of breaking news reports. Viewers can watch live or on-demand at the Blick.ch website and on the custom-designed Blick app. The editorial focus is on breaking news, politics, business, sport and entertainment.
‘Ringier has the courage to conquer television and reinvent it for the Internet. I am really looking forward to working with my competent, creative and powerful team’
-Jonas Projer, Editor-in-Chief of Blick TV.
The installation was a collaboration between Qvest Media and nxtedition. Qvest worked over the summer to build a two-studio installation complete with green screen, virtual set and respeaking-based subtitling, along with two control rooms and robotic camera systems.
Blick TV broadcasts 16 hours a day in 15-minute repeating segments between 6 am and 11 pm, before switching to a repeat playout stream to complete the 24-hour news cycle.
Rather than using a mix of software from multiple vendors, the entire team of 48 collaborates within nxtedition’s single unified system, from story concept right through to production delivery. The resulting efficiency and productivity gains vastly speed up the creation of live and pre-recorded content.
‘I was very impressed with the flexibility and adaptability of nxtedition. It was very easy to tailor it to our specific workflows! Also, working with minimal technical staff, every bit of automation counts, and especially reducing potential sources for human error in the control room during fast and complex live shows. nxtedition helps us with that with its many clever and well-designed features.’
-Beat Vontobel, Head of Technics
A good example of these gains can be measured in how nxtedition’s unique ‘salami-slicing’ feature is utilised by Blick TV. During each live segment, every individual news story is automatically sliced off the nxtedition ingest server as soon as the director steps into the next news story.
Each newly created video ‘slice’ is automatically added to the repeat rundown for the live stream channel, as well as available for publication for VOD. This is achieved without any human intervention at all and all the salami clips are packaged for playout with subtitles and graphics included as metadata.
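As an illustrative sketch of this behaviour (not nxtedition’s actual API; all class and event names here are hypothetical), the salami-slicing workflow can be modelled as an event-driven handler: each time the director steps to the next story, the previous story is closed as a clip and queued for both the repeat rundown and VOD, with subtitles and graphics carried as metadata.

```python
from dataclasses import dataclass, field

@dataclass
class Slice:
    """One 'salami slice': a story clip plus its playout metadata."""
    story_id: str
    start: float  # seconds into the ingest recording
    end: float
    subtitles: list = field(default_factory=list)
    graphics: list = field(default_factory=list)

class SalamiSlicer:
    """Cuts a clip each time the director advances to the next story."""
    def __init__(self):
        self.current_story = None
        self.story_start = 0.0
        self.repeat_rundown = []  # slices queued for the repeat channel
        self.vod_queue = []       # slices published for video on demand

    def on_story_advance(self, story_id, timecode, subtitles=(), graphics=()):
        # Close the previous story as a slice, then open the new one.
        if self.current_story is not None:
            clip = Slice(self.current_story, self.story_start, timecode,
                         list(subtitles), list(graphics))
            self.repeat_rundown.append(clip)
            self.vod_queue.append(clip)
        self.current_story = story_id
        self.story_start = timecode
```

The key design point matches the article: no operator ever triggers the slice; the director stepping through the rundown is itself the trigger.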
The mobile version of Blick TV can be viewed in both portrait and landscape format with no pause in the stream, and all videos have graphics and subtitles overlaid inside the app using metadata streamed with the video.
Directors can access each individual story segment to correct graphics or subtitles, add new developments or even drop in live over any segment, at any time of day or night. By fully exploiting these productivity gains, Blick TV can truly be ‘fast and first’ to air when breaking news happens.
“Blick TV is the first digital TV in our country and we are the first to be live within minutes if something happens,”
- Christian Dorer, Editor-in-Chief of the Blick Group.
“This has been a great project for all of the team at Blick, Qvest Media & nxtedition. The challenging vision that Blick set out to us at the beginning of this project was always a perfect fit for the nxtedition solution, it’s exciting to see it all up and running today. We worked hard alongside the Blick team and Qvest Media to create a truly 21st-century virtualised news solution for the Blick audience to enjoy”.
- Ola Malmgren, Co-Founder & CEO of nxtedition
By David Davies | IBC 365 | Published 25th June 2020
Implementing some degree of cloud-based playout has been a marked trend for a while now, but this year’s momentous events are certain to accelerate developments, writes David Davies.
In the context of the profound changes that have impacted most aspects of broadcast workflows during the past few years, it was surely only a matter of time before playout underwent a similar quiet revolution. That has now arrived in the form of cloud-based playout, which opens the way for broadcasters to enjoy greater flexibility and cost-efficiency, either as a sole platform or as part of a hybrid playout infrastructure that typically includes on-premise facilities.
By Mo-Sys Staff | Mo-Sys Engineering | Published 13th March 2020
On Friday 6th March Mo-Sys were thrilled to be named Business of the Year at the Best of Royal Greenwich Business Awards 2020. Hosted by TV presenter and journalist Steph McGovern at the InterContinental Hotel, the awards ceremony recognises the achievements of businesses across the borough. Mo-Sys also came out victorious in the Made in Greenwich category, an award which celebrates the talent and creativity of local innovators who are bringing exciting products, goods and services to market.
For those who don’t know who we are or what we do: we are based at Morden Wharf on the Greenwich Peninsula, and we manufacture sophisticated camera technology for television broadcasts and feature films.
If you have watched the BBC’s Match of the Day this season, or seen any of the recent General Election nights on the BBC or Sky, then you will have experienced our technology in action. Using our specialist equipment, which provides precise camera tracking data, broadcasters can add informational graphics and immersive virtual sets to their productions, giving audiences a more engaging and satisfying viewing experience.
Since relocating to Greenwich in 2015, we’ve been on a mission to deliver the best technology to transform the way films, TV series and broadcasts are made. Now, our innovations are making greenscreen shooting cheaper and simpler than ever before, in turn giving smaller companies the same opportunities as the big players that have long dominated both the film and broadcast industries.
Michael Geissler, CEO and Founder of Mo-Sys:
“After being shortlisted in the 2019 Awards but unfortunately missing out in the Micro to Small Business Award category to the well-deserved BeGenio, we were delighted to be named winners in the Made in Greenwich category and to receive the Business of the Year award.”
“It’s very easy to get swept up in day-to-day business activity, but entering the business awards was a great opportunity to take a step back and reflect on our achievements over the last year. It’s even more fantastic to be recognised for the work we do and to come out victorious. Thank you to all of the judges and the council for recognising our innovation and all of the hard work we do.”
Mo-Sys’ credits include: Aquaman, Stranger Things, The Walking Dead, Life of Pi, The Shape of Water, House of Cards and Westworld. Broadcast customers include the BBC, Sky, Fox, ESPN, CNN, Discovery Channel, The Weather Channel, Netflix and Sony.
Contributions by Michael Geissler | TVBEurope | Published in the April 2020 edition
TVBEurope asks eight experts for their views on the benefits, challenges and future of remote production
WHAT DO YOU VIEW AS THE BENEFITS OF REMOTE PRODUCTION?
WOLFGANG HUBER, PR MANAGER, LAWO
The primary aim of live remote production is to move as much equipment and as many staff as possible from the remote location back to the studio facility. Thus, remote production offers the possibility of both dramatically reducing production costs at the higher-end and improving quality at the lower-end. This is achieved by redesigning the production workflow such that the majority of tasks take place in the studio rather than at the remote site. Ideally, the only task taking place at the remote site would be signal acquisition - the capture and conversion of camera and microphone signals into a form transportable over wide-area networks. In this model, all signals are transported back to a studio facility, where the program production takes place. Contrast this to conventional remote broadcasts, where the production process happens on-site, and only the finished pictures and audio are backhauled to a broadcast facility for distribution.
MICHAEL GEISSLER, CEO, MO-SYS ENGINEERING
When you take travel and rest time into consideration, the top operators might only be doing the job at which they excel for maybe 40 days a year. With remote production, those people could be working for maybe 200 days a year. That is not just a massive boost in productivity, it represents a huge reduction in carbon footprint, by eliminating long-distance travel and specialist clothing. Production companies are already talking about saving half a million dollars a year.
PETER MAAG, CCO AND EVP STRATEGIC PARTNERSHIPS, HAIVISION
The beauty of employing a remote production model is that it allows broadcasters to do more, with less. Remote production offers broadcasters the opportunity to produce more high-quality content to meet the rapidly growing demand while simultaneously creating efficiencies. And the efficiencies gained by using a remote production model are dramatic. By eliminating the costly and complex logistics associated with deploying OB trucks full of expensive equipment and production teams to the field, broadcasters can instead focus on optimising the use of their resources to produce more high-quality content. For example, a replay operator on-site at a sporting event might be only utilised for three hours during a four-day period. If the replay operator is at home, however, they could be running replays around the world, all the time.
DAVE LETSON, VP OF SALES, CALREC
Like the rest of the industry, we see multiple benefits to remote production, though it’s important to highlight that while the principles are the same, not all remote productions are created equal in terms of scale. Firstly, OB trucks are no longer necessary at every single live event. This is beneficial in multiple ways: fewer staff need to travel, which means better employee welfare; far less equipment is required on-site; it is environmentally more sustainable; and there is far less equipment downtime at the central production location. There’s also the fact that multiple sports events (football matches being a prime example) can be covered in a day or over a weekend because the centralised production technologies aren’t committed to a single event. In the long run all this adds up to significant cost savings.
ON WHICH AREAS OF CONTENT DO YOU SEE IT HAVING THE BIGGEST IMPACT?
DR REMO ZIEGLER, HEAD OF PRODUCT MANAGEMENT, VIZRT SPORTS
Remote production will have its biggest impact on any live production that is typically faced with big travel and logistic costs. Sports and news production are prime examples. With regards to sports, various aspects will benefit from remote production. The bigger leagues and federations with dedicated connections to stadiums enable the transmission of high-quality, low-latency signals to a centralised production location. Only small production crews, comprised mostly of camera and audio technicians, are required on-site, while the rest of the production equipment and operations staff remain back at the production centre. The impact of remote production is amplified when one considers the opportunity of distribution through OTT. Many more signals, or specialised cuts, can be produced which are tailored to different customer segments. Producing all these outputs is heavily facilitated through remote production.
WH: The impact applies to all productions covering events that happen outside the studio, like in arenas or stadiums, for which many signals are required to cover the event - and particularly sports with long distances between the camera and microphone positions, like football, rugby, biathlon, motorsports, but also open-air concerts of a large scale. It also opens up the possibility of extending the depth and range of live event coverage into areas previously inaccessible through cost. For example, more specialised sports, lower league and regional coverage, even to college and university level. The industry sees an unquenchable public thirst for sports and other live events coverage and remote production provides the means to broadcast more of it.
MG: Anything that is somewhere for a short time will feel the impact – anything that today is covered by an OB truck or flyaway kit. Obviously, sport heads the list, but it also includes music and entertainment. It will also have a huge impact on corporate events. Product launches and business presentations will be raised in quality, not least through the ability to afford more cameras. Production-as-a-Service will have an impact on linked events: fashion weeks, for example, could see the same skilled production team covering every major event.
PM: Remote production has the most significant impact on events – and it’s not just limited to live events. Whether it’s for a sporting event, esports, a press conference, or a political panel, a remote production model reduces the number of people and resources required on-site, allowing production costs to be lowered. Even for events that aren’t broadcast live but where speed is critical, remote production can dramatically accelerate the production process. At-home/REMI workflows are particularly attractive options when it comes to tier two and tier three events such as college football, for example, where deploying resources on-site is simply not cost-effective. In this instance, remote production enables broadcasters to expand their coverage to meet demand while keeping production costs in check.
HOW SOON WILL IT BECOME THE INDUSTRY NORM?
RICHARD MCCLURG, VICE PRESIDENT MARKETING, DEJERO
We’re already there. Dejero has enabled remote production workflows for over a decade. Dejero enabled the first live coverage of the Vancouver 2010 Olympic Games torch relay, delivering unprecedented live coverage following the torch as it travelled 45,000km across Canada. In 2013, another first enabled Sky Sports to broadcast live from all 92 English Football Clubs in a single day. Using revolutionary wireless technology at the time, Dejero blended multiple cellular connections and provided enough bandwidth to deliver high-quality live broadcast content, at significantly less expense and complexity than traditional video transport technology.
NORBERT PAQUET, HEAD OF LIVE PRODUCTION SOLUTIONS, SONY PROFESSIONAL SOLUTIONS EUROPE
Consumers are demanding more content, available whenever and wherever they choose, without any drop in quality. With this escalating pressure on broadcasters, remote production will naturally become the norm and act as a silver bullet to help keep up with growing industry demands. We’ve already seen overall connectivity (mobile or fibre networks) become a major game-changer for our industry, particularly when it comes to live production. And, with remote production set-ups, resource sharing and a more collaborative, faster turnaround time, it’s becoming even more popular. At Sony, we’ve been at the forefront of this revolution and have, to date, worked with many customers around the world to develop remote production set-ups in news, magazine and live production.
LARISSA GOERNER, DIRECTOR OF ADVANCED LIVE SOLUTIONS, GRASS VALLEY
The biggest hurdle to widespread adoption of at-home models is the challenge of latency. As we look to the future, though, broadcasters and production companies will continue to drive toward more captivating experiences that draw in viewers, using higher resolutions and more camera angles that will put greater stress on the network and available bandwidth. More efficient encoding solutions – JPEG2000, JPEG-XS and MPEG – offer an attractive alternative. These options deliver ultra-low delay, comparable to transporting the signal over fibre, and come at a significantly reduced cost while ensuring there is no difference in the viewers’ experience.
RZ: When we look at some of our leading customers, remote production is already the norm, since they have adopted Vizrt solutions that facilitate a remote production workflow. However, our larger customers are not the only ones benefiting from remote production. Many smaller productions also take advantage of the same concepts to reduce their cost and the size of their footprint. The rise of IP and the availability of software-defined productions tools, which in turn can be virtualised in the Cloud, will make remote production the norm for the majority of media productions.
HOW BIG A PART DO YOU THINK 5G WILL PLAY IN REMOTE PRODUCTION?
NP: 5G will play a key role in powering remote production for many organisations. Firstly, the low-latency transmission offered by 5G is crucial for any productions such as sporting events or news broadcasts where delays are unforgivable. Secondly, the higher bandwidth 5G offers helps deliver less compressed content to the mobile viewer but also unlocks additional applications for remote production too. Finally, given 5G enables Cloud-based production models, it helps reduce the deployment of physical OB resources, which makes productions much more sustainable.
LG: As an emerging technology, 5G will play a significant role in the broadcast industry as a reliable way to deliver content to consumers. In terms of remote production, 5G can be utilised for its greater capacity benefits. However, bandwidth is not unlimited in 5G and, as we are still seeing an uptick in live UHD content, baseband cannot be transported over 5G and has to be encoded in order to handle demand for this format. This adds another step in the creation process and will slow down adoption for high-end live sports production. Currently, for tier two and three productions, 5G is a means to an end, providing easier contribution from the remote location, and is a good candidate to enable more creativity in content production.
PM: There’s nothing mystical about 5G: it’s a faster, wireless, mobile network. As 5G gathers momentum and begins to easily handle multiple video streams from a venue it will definitely act as a catalyst to accelerate the adoption of remote production. However, very fast, reliable, and affordable network pipes are already available from any venue today and it’s important to remember that a network is just a network and all networks are getting faster; both wired and wireless. What’s more impactful than the network are technologies like the open source Secure Reliable Transport protocol (SRT), pioneered by Haivision, which enables video transport over any network.
DL: From a Calrec perspective, little will change with 5G. RP1 was deliberately designed to be transport-agnostic. From our perspective, it does not matter whether we are piggy-backing audio on a camera feed via a JPEG2000 path or via a closed AES67 wide-area network. For our clients though, 5G offers vast potential. It’s not implausible to consider a camera at a field of play sending pictures directly back to base over 5G. Companies have already achieved this with multiple 4G links. 5G technology could offer a true paradigm shift in areas ranging from traditional SNG to Premier League football. However, where there are local commentators or reporters, some local IFB mixing will still be needed and RP1 becomes even more relevant.
HOW DO YOU INCORPORATE SUSTAINABLE PRACTICES IN REMOTE PRODUCTION?
RM: Remote production is undoubtedly reducing the industry’s carbon footprint. Far less kit and far fewer crew need to travel to live events than under traditional production workflows, and fewer OB and large SNG trucks are on the road. Centralising production staff at the broadcast facility means fewer people travel to the field, cutting air miles and transport. Initiatives such as ‘Find a Provider’, featured in Dejero’s Cloud-based management system, enable broadcasters to find freelancers across the globe, making it easier to use local resources to acquire content. Dejero’s MultiPoint Cloud service enables broadcasters to share field resources and contribute the pool feed simultaneously to many geographically dispersed broadcasters.
LG: In general, the decrease in travel brought about by remote production already significantly lowers the industry’s environmental impact. However, we believe more can be done. Enabling workflow consistency for a variety of content productions is a goal for us. Grass Valley cameras, switchers and replay products all enable the highest flexibility for any workflow, therefore allowing the creative talent to be where they are most needed to add better value. Recurring tasks can easily be centralised and produced with fewer operators, ultimately allowing more content to be created at a consistently high quality. Our DirectIP solution, for example, enables almost all production and technical staff to work from a centralised location. We also give customers the flexibility to locate creative talent either at the venue or the production hub. We continuously strive to innovate across the entire portfolio, providing the latest software and hardware technology to enable sustainable production in the market.
RZ: Remote production reduces the number of required people and equipment on-site. That means fewer people travelling, and fewer pieces of production equipment shipping, by plane, train, and automobile. Furthermore, a remote dedicated production centre, designed around software-defined production practices, reduces hardware usage, power consumption, and the need for active cooling versus inefficient mobile units.
MG: The headline benefit is that fewer people need to travel to the event, meaning a significant reduction in carbon footprint. As remote production becomes ever more sophisticated – with remote camera operation, for example – so the reductions become greater. This does depend upon complex technologies becoming mainstream and commoditised, to simplify the installation and the power consumption of rig and connectivity. The recent coronavirus outbreak is driving a very large reduction in business travel. The ability to control cameras from a central hub anywhere in the world will be extremely attractive to productions, not least because of the reduced environmental impact.
WHAT WILL BE THE CHALLENGES FOR REMOTE PRODUCTION AS IT GROWS?
WH: Growing demand for content and tighter event schedules are challenging on the administrative side, as equipment needs to be reliably available for a new production as soon as it is no longer in use for the previous one. Access to, and the reliability of, the fibre infrastructure must be guaranteed. This is where system monitoring and real-time telemetry for broadcast networks, such as Lawo’s SMART, come into play, allowing constant control and monitoring of the complete IP network installation from capture to playout. And the more concurrent productions there are, the more essential such a monitoring system becomes to ensure signal, sync and packet integrity, and thus flawless operation.
MG: The real issue will be the management of change, particularly for people. It is a different pitch for operators: taking them away from immersion in the action and giving them comfortable, familiar working environments in exchange for greater productivity. The people issues, and the shifts in budgeting, are cultural changes, which always see a natural resistance.
DL: Connectivity is a key issue. Another is getting staff to understand and adapt to it, though our customers tell us that once it’s been explained and tried, this stops being an issue! The other thing, of course, is reliability. For Calrec, this hasn’t proved an issue either. Lastly, for quick-turnaround projects, or where there are multiple events in a row or across a season, technical and workflow practices have to be set in stone. But we don’t see any reason that remote production use won’t grow significantly from here.
By iSIZE staff | iSIZE technologies | Published 29th April 2020
Enhanced video streaming start-up iSIZE Technologies has today announced the appointment of Paul Massara, Ex-CEO of Npower, and Maria Ingold, Ex-CTO of Disney-Sony joint venture FilmFlex Movies, to its Executive Board.
Massara brings more than two decades of experience in the energy sector to the role, having held executive positions at Centrica plc, Northstar Solar and Habitat Energy before becoming CEO at energy giant Npower. With a similarly impressive track record, Ingold joins the board with extensive technical experience: she helped pioneer early audio and video on PCs at IBM and early 3D PC gaming at Ocean, before becoming a senior technical executive in streaming, including a five-year tenure as CTO of Disney-Sony joint venture FilmFlex Movies, which produced one of the most successful on-demand film services in Europe.
Commenting on his appointment, Massara said: “I am very excited to join iSIZE Technologies, a start-up which is shaking up the streaming industry. Having dedicated much of my career to furthering the environmental cause, I am very much looking forward to taking iSIZE’s low-carbon, sustainable technology solutions to new heights in 2020.”
Speaking about the role, Ingold commented: “I’m thrilled to be part of iSIZE at such a relevant time. With nearly 30 years of technical expertise in the entertainment and media industry, I am well-placed to see the potential iSIZE has to revolutionise the sector and am delighted to be part of its future.”
Sergio Grce, CEO at iSIZE Technologies, added: “We are very pleased to have both Paul and Maria on board here at iSIZE. With years of commercial and technical experience under their belts, these appointments demonstrate our accelerated growth trajectory as we continue to expand our corporate horizons in 2020.
With Paul’s impressive business acumen and Maria’s technical credentials on side, we are confident that iSIZE will continue to flourish as we look to make our mark in the streaming sector in 2020.”
iSIZE Technologies, a London-based provider of intelligent video coding and delivery technology, launched its pioneering AI-powered encoding platform in 2016. Last year, the innovative start-up was awarded a special merit prize at The Digital Media World awards for its proprietary BitSave software, a cloud transcoding technology which, by compressing vast quantities of data, significantly reduces the energy input required to stream videos.
By Maria Ingold | iSize Technologies | Published 21st April 2020
I grew up off-grid in a cabin in the New Mexico mountains. That was isolation. By contrast, isolation in the time of coronavirus is incredibly connected. While working, socialising and relaxing from home have impacted that connectivity, new patterns are emerging as well as opportunities for the future.
What is the scope of the impact?
Akamai, a content delivery network (CDN), saw global internet traffic increase by 30% in March, an entire year’s growth, and without live sports streaming. Comcast saw a 32% increase in peak USA traffic over March, with plateaus in early lockdown markets.
Even before COVID-19, video accounted for 60% of downstream internet traffic. When Conviva analysed three weeks in mid-March, it discovered that video streaming viewing hours jumped more than 20% globally in the last of those weeks, up 27% in the USA. By the end of March, Comcast saw a 38% increase in the USA.
Although Internet service providers (ISPs) and CDNs are engineered to deal with peak changes, when usage spiked, the European Commissioner asked streamers to switch to Standard Definition (SD) when High Definition (HD) wasn’t necessary. The European Broadcasting Union (EBU) then issued recommendations for adapting streaming quality during times of crisis.
Assuaging one concern, Conviva discovered that daytime viewing jumped nearly 40%, spreading peak load, but that still leaves lots of bits flowing across the internet.
Netflix and Google’s YouTube agreed to reduce bitrates in Europe for 30 days, with Netflix dropping by 25% and YouTube moving to SD as a default globally. Both were crucial, because while Netflix usually has the largest percentage of video traffic, YouTube is currently generating almost twice the traffic of Netflix. Amazon Prime Video, Apple TV+ and Walt Disney’s Disney+ soon followed.
Consumers were concerned: they were paying for HD but would get SD. Netflix explained that customers would still receive the SD, HD and Ultra-High Definition (UHD) resolutions they paid for, just no longer the highest rungs of its “bitrate ladder” of low-to-high bitrates and resolutions.
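A bitrate ladder is simply an ordered list of resolution/bitrate rungs from which the player picks the highest rung its measured bandwidth can sustain; capping the ladder, as streamers did during COVID-19, removes only the top rungs. A minimal sketch (the rung values below are illustrative, not any service’s actual ladder):

```python
# Illustrative bitrate ladder: (label, bitrate in kbit/s), low to high.
LADDER = [
    ("SD 480p", 1750),
    ("HD 720p", 3000),
    ("HD 1080p", 5800),
    ("UHD 2160p", 16000),
]

def pick_rung(measured_kbps, ladder=LADDER):
    """Return the highest rung the measured bandwidth can sustain,
    falling back to the lowest rung when bandwidth is very poor."""
    best = ladder[0]
    for rung in ladder:
        if rung[1] <= measured_kbps:
            best = rung
    return best

def cap_ladder(ladder, max_kbps):
    """Drop rungs above a bitrate cap, leaving the lower rungs intact."""
    return [rung for rung in ladder if rung[1] <= max_kbps]
```

This is why Netflix could say subscribers kept the resolutions they paid for: a cap trims only the top of the ladder, while every rung beneath it still streams as before.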
What are the long-term opportunities?
Netflix’s total energy consumption for 2019 (451,000 megawatt-hours) could power 40,000 average American homes for a year, and represented an 84% increase over 2018, against just 20% user growth.
Netflix has 167m subscribers. Disney+ has 50m, with 226m subs predicted by 2024. Reducing bits creates a more sustainable energy-consumption to user-growth ratio and helps companies meet their environmental impact objectives.
During the 30 days of COVID-19-inspired bitrate reduction, streamers will have saved money on storage, distribution and energy costs. If one million people watch one hour per day at 1 GB of data per hour (somewhere between SD and 720p HD), and it costs $0.0025 to stream 1 GB to one person, that’s nearly $1 million per year ($912,500). YouTube viewers watch one billion hours per day; at that scale the same arithmetic gives nearly $1 billion per year, so a 25% reduction saves roughly $228 million.
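The arithmetic above can be reproduced directly (the 1 GB/hour and $0.0025/GB figures are the article’s own assumptions):

```python
def annual_streaming_cost(viewers, hours_per_day, gb_per_hour=1.0,
                          usd_per_gb=0.0025, days=365):
    """Annual delivery cost in USD for a fixed daily audience."""
    return viewers * hours_per_day * gb_per_hour * usd_per_gb * days

# One million people watching one hour per day:
million_case = annual_streaming_cost(1_000_000, 1)      # $912,500
# YouTube-scale: one billion viewing hours per day:
billion_case = annual_streaming_cost(1_000_000_000, 1)  # ~$912.5 million
savings = 0.25 * billion_case                           # ~$228 million
```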
While these short-term actions enabled quick bitrate reductions and increased margins, they don’t preserve quality. Consumers won’t tolerate that indefinitely.
How to cut costs and maintain customer satisfaction
A codec encodes the moving image source (usually in hardware) and decodes it on a device for display (usually in software), reducing the bitrate as much as possible while attempting to maintain fidelity. Codecs range from older ones like MPEG-4 AVC (H.264), which are widely supported but need higher bitrates, to newer ones that are less widely supported and more time- and power-hungry, but achieve lower bitrates.
Per-title encoding is a bitrate reduction tactic pioneered by Netflix in 2015. To measure fidelity, Netflix used the quality metric PSNR (Peak Signal-to-Noise Ratio), but PSNR doesn’t always reflect how a picture looks to a person. Neither does SSIM (Structural Similarity), which was designed to improve on PSNR. So Netflix co-created VMAF (Video Multi-Method Assessment Fusion), a perceptual quality metric.
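PSNR’s limitation is easy to see from its definition: it is a pure pixel-arithmetic formula, 10 · log10(MAX² / MSE), with no model of human vision, so two images with the same PSNR can look very different. A generic sketch for 8-bit pixel values (illustrative, not any encoder’s implementation):

```python
import math

def psnr(reference, distorted, max_value=255):
    """Peak Signal-to-Noise Ratio, in dB, between two equal-length
    sequences of 8-bit pixel values: 10 * log10(MAX^2 / MSE)."""
    assert len(reference) == len(distorted)
    mse = sum((a - b) ** 2
              for a, b in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)
```

Because the score depends only on averaged squared pixel error, blur spread evenly across a frame can score as well as a visually far more objectionable localised artefact, which is the gap SSIM and VMAF try to close.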
Machine learning (ML) can reverse engineer perceptual metrics to make encoding more effective. When this precedes encoding — precoding — it works with any codec, encoder and decoder. There are trade-offs between the sharpness of VMAF, which can look artificial, the naturalness of PSNR and SSIM, and the blurriness and lack of fidelity caused by reducing bitrate.
I advise iSIZE, whose machine learning precoder claims 20%-40% bitrate savings (up to 60%) without changing the resolution, while typically improving VMAF. Latency is one frame. I asked expert reviewer Jan Ozer to independently test iSIZE’s BitSave product; he tested using the MPEG-4 AVC (H.264) codec.
Jan confirmed that “BitSave is a legitimate processing technology and not a [VMAF] hack”. Ultimately, “[a]fter many hours of testing, [Jan] found that BitSave’s technology is valid and valuable” though he recommends subjective testing. I agree and recommend testing various bitrate savings and metric balances. Regardless of the solution you choose, remember to balance long-term sustainability and cost-cutting with perceived customer experience.
By Adrian Pennington | IBC 365 | Published 27th April 2020
The traditional means of optimising video streaming workflows have run their course. Future advances will be made in software automated by AI.
Online video providers have never been under so much pressure. Excess demand has caused Netflix, YouTube and Disney+ to tune down their bitrates and ease bandwidth consumption for everyone, in the process deliberately compromising the ultimate quality of their service.
Even once the crisis has subsided, operators will have to balance scaling growth against the cost of technical investment and bandwidth efficiency. Even in a world with universal 5G, bandwidth is not an infinite resource.
For example, how can an esports streaming operator grow from 100,000 to a million simultaneous live channels and simultaneously transition to UHD?
“Companies with planet scale streaming services like YouTube and Netflix have started to talk about hitting the tech walls,” says Sergio Grce, CEO at codec developer iSize Technologies. “Their content is generating millions and millions of views but they cannot adopt a new codec or build new data centres fast enough to cope with such an increase in streaming demand.”
By iSIZE | edie.net | Published 19th June 2020
Episode 87 of edie’s Sustainable Business Covered podcast focuses on the energy impact of streaming and digitization and features interviews with iSIZE CTO and UCL professor Yiannis Andreopoulos, along with former npower Chief Executive Paul Massara, who was recently appointed to iSIZE’s executive board.
Andreopoulos and Massara discuss the need to balance the environmental benefits of switching meetings to online platforms with the energy intensity of data centres and the online services we access.
By Maria Ingold | Isize Technologies | Published 1st July 2020
Fifty years ago, back when my father built our cabin off-grid by hand, sustainability was called environmentalism and considered hippy, not hip. In that time global energy consumption has increased 173%, in an ever-upward trend – until COVID-19. As of the 28th of April, 54% of the global population was in some form of lockdown. Global energy demand declined 3.8% in Q1 2020, with full-lockdown countries experiencing an average 25% decline in energy demand per week, and those in partial lockdown 18%.
Paul Massara, former CEO of npower and fellow Board Advisor to iSIZE, which delivers machine learning bitrate and energy reduction and perceptual quality enhancement for video, notes that, “At the same time, global carbon use has reduced around 5% as economies have slowed and airplanes have remained grounded. And yet if we are to hit our net zero targets and keep global temperature rises to less than 2°C, we require a 7% reduction in carbon, year in, year out. The challenge is to achieve such carbon reductions without crashing the world economy.”
IMPACT OF VIDEO STREAMING ON SUSTAINABILITY
Changes during lockdown could result in a new way of working. MIT discovered that 48.7% of US workers worked from home after COVID-19 hit (14.6% had already been working from home). Global Workplace Analytics noted that globally 77% of white-collar workers were now working from home full time (compared to 9% before). The percentage of those who would like to work from home at least one day a week increased from 31% to 77%. Indeed, companies like Twitter have already approved permanent remote working, which will result in a very different use of transport and need for office space, as well as changes to work-life balance and use of bandwidth.
With COVID-19, internet traffic that is usually spread across enterprise, education, consumer, public WiFi and, to a lesser extent, mobile and satellite networks was suddenly consolidated onto consumer networks, which Sandvine can monitor. Sandvine analysed 1 February to 19 April under these unique circumstances. They discovered that globally traffic grew by 38%, with upstream increasing by 121% and then plateauing, and downstream increasing by 23% and still rising. It makes sense that upstream grew to accommodate home workers, but the steady downstream growth suggests they are also consuming more. Video is still almost 60% of traffic, but Sandvine predicts it could have reached 70% if the major streamers hadn’t reduced bitrates as requested by the European Commissioner.
Some customers noticed the reduction in bitrates, especially for AppleTV+ which reduced resolutions to 670 pixels high and compressed heavily with blocky artefacts. Netflix and AppleTV have mostly returned to normal, and YouTube was supposed to stop defaulting to SD after a month.
For an idea of Netflix’s typical data usage, Standard Definition (SD) is 0.7 GB per hour (1.6 Mbps), High Definition (HD) is up to 3 GB per hour (6.7 Mbps) and Ultra High Definition (UHD) is 7 GB per hour (15.6 Mbps). Netflix currently accounts for 11.42% of global traffic, which, while a slight decline in share, still represents an increase in absolute traffic. To cope with the overall rise in traffic, Netflix added four times its normal capacity within Internet Service Providers (ISPs) in April.
In 2017 when Netflix had 117.58 million subscribers, its users streamed 140 million hours per day, or 1 hour 11 minutes per day per user. In 2019, 167 million subscribers watched an average of 2 hours per day. As of Q1 2020 Netflix has 182.86 million paid subscribers. 182.86 million people watching two hours per day at 3 GB per hour is 400 exabytes of data per year.
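That 400-exabyte figure checks out arithmetically:

```python
subscribers = 182.86e6   # Netflix paid subscribers, Q1 2020
hours_per_day = 2        # average viewing per subscriber
gb_per_hour = 3          # HD data usage per the Netflix figures above

gb_per_year = subscribers * hours_per_day * gb_per_hour * 365
exabytes = gb_per_year / 1e9  # 1 exabyte = 10^9 GB
print(round(exabytes))        # 400
```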
By the beginning of May, mobile video traffic for Disney+ across North America and Europe reached 7 exabytes per month, representing 1.2%-2.2% of all mobile video traffic. Netflix is at 7%-15%, which is in the region of 40 exabytes per month. That’s 480 exabytes per year just for mobile in North America and Europe.
Given that, the 400-exabyte-per-year estimate looks low. Perhaps we really are at 3.2 hours of viewing per day, as has been projected due to COVID-19. That’s 640 exabytes per year, which still seems low.
YouTube uses 1.1Mbps for 480p SD, 2.5Mbps for 720p HD, 5Mbps for 1080p HD and 20Mbps for UHD. YouTubers watch 1 billion hours per day, nearly twice the 585 million hours (at 3.2 hours per day) that Netflix users watch.
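The bitrate and data-per-hour figures in this section are related by a simple unit conversion; a small helper makes the correspondence explicit:

```python
def gb_per_hour(mbps: float) -> float:
    """Convert a stream's bitrate in megabits per second to GB per hour."""
    return mbps * 3600 / 8 / 1000  # seconds/hour, bits->bytes, MB->GB

print(gb_per_hour(1.6))   # Netflix SD:  ~0.7 GB/hour
print(gb_per_hour(6.7))   # Netflix HD:  ~3 GB/hour
print(gb_per_hour(15.6))  # Netflix UHD: ~7 GB/hour
print(gb_per_hour(5.0))   # YouTube 1080p HD: 2.25 GB/hour
```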
And UHD isn’t the end of the story. YouTube’s share of traffic, at 15.94%, now exceeds Netflix’s. YouTube will also be the first major streaming provider to offer 8K on 8K TVs that support the Alliance for Open Media’s AV1 hardware decoding. 8K, or UHDTV-2, is 7680 x 4320 pixels.
Netflix used 451,000 megawatt hours (MWh) in 2019, an 84% increase over 2018, compared to a 20% user growth. 94,000 MWh are direct energy use and composed of their offices, studios and telecoms that form their Content Delivery Network (CDN). 357,000 MWh are indirect energy use which includes partnerships such as Amazon Web Services, Google Cloud and the caching servers they put into ISPs. This will have increased with the quadrupling of capacity in ISPs in April.
The exact sustainability cost of streaming video is under debate, however. The Shift Project’s July 2019 report, “Climate Crisis: The Unsustainable Use of Online Video”, has been contested by George Kamiya, a digital/energy analyst at the International Energy Agency (IEA), whose article came out in March 2020, a month after Netflix’s impact report. He says The Shift Project’s figures imply that Netflix streaming consumes 370 terawatt hours (TWh) per year, 800 times higher than what Netflix confirmed above. He further notes that The Shift Project’s numbers show 1.6 kg CO2e per half-hour of Netflix content, which the IEA estimates to be closer to 0.028-0.057 kg CO2e. He says they overestimate bitrate, CDNs and data transmission networks, but underestimate the energy consumption of devices.
CONTRIBUTORS TO STREAMING VIDEO ENERGY CONSUMPTION
Source content is normally encoded into a high-resolution master format and transcoded into variants. Variants are tailored for the device and service level of the consumer, and typically use “bitrate ladders” of low to high bitrates and resolutions. This could range from SD for a mobile on 3G to UHD for a TV on WiFi.
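A bitrate ladder can be sketched as a simple lookup. The rung names, resolutions and bitrates below are illustrative placeholders, not any provider’s actual ladder:

```python
# Hypothetical bitrate ladder, lowest to highest rung.
LADDER = [
    {"name": "SD 480p",   "width": 854,  "height": 480,  "kbps": 1100},
    {"name": "HD 720p",   "width": 1280, "height": 720,  "kbps": 2500},
    {"name": "HD 1080p",  "width": 1920, "height": 1080, "kbps": 5000},
    {"name": "UHD 2160p", "width": 3840, "height": 2160, "kbps": 15600},
]

def pick_rung(available_kbps: int) -> dict:
    """Choose the highest rung whose bitrate fits the available bandwidth."""
    fitting = [rung for rung in LADDER if rung["kbps"] <= available_kbps]
    return fitting[-1] if fitting else LADDER[0]

print(pick_rung(3000)["name"])    # HD 720p  (e.g. a mobile connection)
print(pick_rung(20000)["name"])   # UHD 2160p (e.g. a TV on WiFi)
```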
Video is compressed by an encoder (usually hardware) and decompressed by a decoder (usually software, though hardware can be used to optimise). Codecs trade off between ubiquity, compression (or bits saved) and the time and power required to encode. To further reduce the bitrate and increase perceptual quality, a precoder, like iSIZE’s BitSave, may be inserted before the encoder.
Videos are usually stored, and archived, with technical metadata in a Media Asset Management (MAM) system. Metadata such as titles, synopses, trailers and images is added to a Content Management System (CMS). The MAM and CMS can be combined alongside workflow management and other tools as Software as a Service (SaaS), with content assets stored in the cloud.
Content is usually delivered to a CDN and pushed to its edge servers. Akamai, one of the leading CDNs, has 250,000 edge servers deployed in thousands of locations to cache content at one network hop from 90% of the world’s users. All of these servers sit in data centres. In 2018 data centres used about 200TWh, or 1% of global electricity.
Because of the volume of its traffic, Netflix created thousands of Open Connect caches to sit within Internet Service Providers (ISPs) to increase efficiency and reduce the overall demand on upstream network capacity.
Transmission networks, which transmit the bits of data, used about 260 TWh, or 1.1% of global electricity, in 2018. Two-thirds of that was mobile networks.
The Cisco Annual Internet Report (2018-2023) noted that there will be 29.3 billion networked devices by 2023, up from 18.4 billion in 2018, with 50% being Machine-to-Machine (M2M) connections, growing at 19% CAGR. Smartphones will grow second fastest at 7% CAGR, then connected TVs and related devices at just under 6% CAGR. PCs continue to decline at 2%.
While smartphones make up the biggest percentage of consumer devices, mobile isn’t where all video traffic is consumed. On Netflix, by month six, 70% of viewers watch on TV, 15% on laptops, 10% on mobile phones and 5% on tablets. With over 100 pay TV partnership deals and access to over 300 million global pay TV homes, TV is likely to continue to be important to Netflix.
SOLUTIONS TO STREAM VIDEO SUSTAINABLY
Encoding, transcoding, storing and distributing fewer bits helps reduce energy use and costs. When the Competition Commission asked for help reducing internet traffic, Netflix stopped using the top rung of each encoding bitrate ladder set up for SD, HD and UHD. YouTube changed its default from HD to SD. AppleTV+ used a highly compressed lower resolution. While these reduced bits, they also reduced quality and some viewers complained.
Keeping the perceptual quality the same or better while reducing bits can be achieved in a variety of ways. One is with better codecs. While an older codec like AVC (H.264) has the best availability across devices and platforms, HEVC (H.265) offers a 25-50% bitrate reduction over H.264. AV1 is even newer. It provides a 17% bitrate reduction over H.265 across entire bitrate ladders, but up to 30% for 1080p HD and 43% for UHDTV-1. It’s slated to be a gamechanger for 8K encoding. The problem is speed and power usage: AV1 is said to be 50-3,000 times slower than HEVC and requires more powerful hardware to encode. That’s being addressed, including with multi-dimensional parallelism (using many CPU cores simultaneously to process multiple parts of the encode). New codecs can take a while to gain adoption, so some improvement methods work with existing codecs. One way is during encoding, such as the per-title encoding created by Netflix in 2015.
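To see how those percentages compound, consider a hypothetical 5,000 kbps H.264 stream. The starting bitrate and the midpoint savings used here are illustrative assumptions, not measured results:

```python
avc_kbps = 5000                      # hypothetical H.264 1080p bitrate
hevc_kbps = avc_kbps * (1 - 0.35)    # HEVC: 25-50% below AVC, midpoint ~35%
av1_kbps = hevc_kbps * (1 - 0.30)    # AV1: up to ~30% below HEVC at 1080p

print(hevc_kbps)  # ~3250 kbps
print(av1_kbps)   # ~2275 kbps, less than half the original AVC bitrate
```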
Another way is by perceptually optimising the bits before they get to the encoder – precoding. I’m a Board Advisor for iSIZE, who have a codec-independent machine learning precoder called BitSave. iSIZE’s precoding produces fewer bits to be encoded, which look perceptually the same, or in some cases better. iSIZE’s BitSave combines two key things to reduce complexity, save bits and save energy: preprocessing and dynamic resolution selection. BitSave’s consumer version, available as a SaaS offering and an API, both at bitsave.tech, includes preprocessing, which saves about 30% bitrate on average. The enterprise version, available on a trial basis for B2B users, includes both, so also enables up to a 5-fold reduction in the energy required by a video encoder.
I explain how BitSave preprocessing works in a previous article on opportunities for video streaming, including an independent analysis by streaming expert Jan Ozer, who finds the technology ‘valid and valuable’. In summary, machine learning enhances areas in the frame that are important to the viewer and blurs areas that aren’t. This reduces bitrate and improves perceptual metrics like VMAF, while balancing with fidelity metrics like PSNR and SSIM. For Full HD and UHD across a range of encoders – AVC (H.264), HEVC (H.265) and Google’s VP9 – iSIZE BitSave preprocessing saves between 8% and 52% (an average of 30%) of the bitrate, and therefore an average of 30% of the energy to store and stream that piece of content.
Dynamic resolution scaling is used on top of preprocessing in the enterprise version. This intelligently downscales the pixel footprint going into any encoder. Some frames don’t lose significant information when they are downscaled and then upscaled by a client. While iSIZE do provide an optional upscaler, players already automatically upscale based on the resolution and bitrate information presented in the DASH or HLS manifest file, so existing upscaling can be used with no change to the client.
If the aspect ratio is kept the same but the video’s horizontal and vertical resolution is cut in half, then the new frame will only take up a quarter of the pixels. For instance, Full HD is a quarter the size of UHD. A quarter of the original size results in a significant reduction in the CPU cycles and energy consumption required, typically between a 2-fold and 3-fold reduction. And, at the same bitrate, more bits of data would represent each pixel, so quality loss could be mitigated.
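The quarter-pixel claim is plain arithmetic (the bitrate and frame rate below are example values, not a specific service’s settings):

```python
uhd_pixels = 3840 * 2160       # UHD (2160p) frame
full_hd_pixels = 1920 * 1080   # Full HD: half the width, half the height

print(uhd_pixels // full_hd_pixels)  # 4 -> Full HD is a quarter of UHD's pixels

# At an unchanged bitrate, each remaining pixel gets four times the bits:
bitrate_bps = 15_600_000  # example UHD-tier bitrate from earlier
fps = 25
print(bitrate_bps / (uhd_pixels * fps))      # bits per pixel at UHD
print(bitrate_bps / (full_hd_pixels * fps))  # four times as many at Full HD
```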
To choose the best resolution, each frame is scaled to several resolutions and analysed to determine which one provides the best quality result. Netflix does this, but with two key differences: one, they use a ‘brute force’ approach to select the resolution, and two, they use linear filters to produce the actual resolution. iSIZE uses neural net filters to produce the optimal resolution, and finds that optimal resolution via a process it calls ‘footprinting’. Footprinting performs a potentially real-time check on the rate and distortion of a set of resolutions, then selects the best using a mathematical optimisation approach. The energy required for footprinting is low and constant, and tests show iSIZE can achieve up to a 5-fold reduction in encoding time and its associated energy use.
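The selection step can be illustrated with a toy Lagrangian rate-distortion sketch. To be clear, this is not iSIZE’s actual footprinting algorithm, and the per-resolution rate and distortion numbers are invented for illustration:

```python
# Hypothetical (rate in kbps, distortion score) estimates for one frame
# at each candidate resolution, as a fast analysis pass might produce.
candidates = {
    "2160p": (15600, 0.5),
    "1080p": (5000, 1.2),
    "720p":  (2500, 3.0),
}

def select_resolution(candidates: dict, lam: float) -> str:
    """Pick the resolution minimising cost = distortion + lambda * rate."""
    return min(candidates, key=lambda r: candidates[r][1] + lam * candidates[r][0])

print(select_resolution(candidates, lam=1e-5))  # quality-weighted -> 2160p
print(select_resolution(candidates, lam=1e-3))  # rate-weighted    -> 720p
```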
iSIZE’s preprocessing and dynamic resolution scaling are explained in detail in iSIZE’s peer reviewed journal article, which will appear shortly in the IEEE Transactions on Circuits and Systems for Video Technology (arXiv preprint link).
Consumers also have a role to play in reducing their energy consumption. Consuming consciously means using energy-efficient devices and transmission paths, and reducing electronic waste.
A 50-inch LED television currently consumes five times as much electricity as a laptop and 100 times more than a smartphone. The type of display also affects efficiency. LED-backlit LCD TVs are more energy efficient than plasma TVs. OLED is more efficient than LCD, and microLED is more efficient than OLED. It also depends how old your TV is. The Consumer Technology Association (CTA) showed that LCD TVs in 2015 consumed 76% less energy (per screen area) than in 2003. Furthermore, the CTA’s sustainability work has reduced American set-top box (STB) consumption by 39% since 2012, saving 29 million metric tons of CO2 emissions.
How content gets to the device also has an impact. Wireless and mobile are expected to make up more than 70% of Internet Protocol (IP) traffic in 2022, up from 50% in 2018. Streaming through 4G mobile networks consumes about four times as much electricity as WiFi, but 4G can be more than 50 times as energy efficient as 2G.
Newer devices are generally more energy efficient. The CTA notes that even with a 21% increase in electronic devices in homes, there has been a 25% reduction in home energy consumption. Production, however, still has an impact. Apple has taken significant steps to make energy-efficient products with renewable or recycled materials and renewable energy, but production contributes 79% of an iPhone 11’s carbon emissions. Use contributes 17%, transport 3% and end-of-life processing less than 1%. Electronic waste is a growing problem too, with 50 million tonnes produced each year, amounting to $62.5 billion in valuable materials lost globally. Harvesting those materials would generate fewer CO2 emissions than mining new ones.
As a result of COVID-19, coal, oil, gas and electricity demand have all dropped, with electricity demand decreasing by 20% or more during full lockdown. Increases in residential demand are strongly outweighed by the reduction in commercial and industrial operations. The impact enabled Britain to shut down its four remaining coal-fired plants in April. Renewables are the most resilient energy source and the only one to see growth in demand – a 1.5% increase across all sectors (up 3% in electricity generation, to a nearly 28% share) year-on-year in Q1 2020.
Renewables are likely to remain the only growth area, up 1% across all sectors and 5% in electricity generation in 2020. More wind, hydropower and solar projects are underway; they have low operating costs, enjoy priority access to the grid, and don’t have to adjust output to match demand. So if electricity demand decreases, renewables end up with a higher share of the electricity generation mix. As a result, global CO2 emissions are expected to decline by 8% throughout 2020, back to 2010 levels. Unfortunately, recovery from every previous crisis has been followed by an immediate rebound in CO2 emissions, including the highest-ever year-on-year increase in 2010.
Luckily, many large digital companies are reducing their energy and carbon footprints. The CDN Limelight Networks announced in May 2020 that, even with a 50% traffic increase over the last year, it had increased the average amount of data delivered per watt by almost 80%. It did this by switching to next-generation server hardware and software that uses less energy. Streaming Media’s ‘Greening of Streaming’ initiative also notes that Limelight is proactively selecting data centres based on access to renewable energy.
Netflix ensured that 100% of its estimated direct and indirect non-renewable power use was matched with renewable energy certificates and carbon offsets in 2019. Furthermore, Google matched 100% of its electricity consumption with purchases of renewable energy and Microsoft intends to be carbon negative by 2030 and remove its entire impact by 2050. The good news is these aren’t the only large companies to reduce their energy and carbon footprint, but individual corporate effort is only part of global structural change.
Demand for data will only grow. Videos, games and social sharing already account for 80% of internet traffic. The challenge will be to find new technologies that can help us grow the economy and also reduce energy and carbon. Ultimately we need many more solutions such as iSIZE’s if we are to reduce carbon emissions and bring climate change under control.
By Sergio Grce | Data Economy | Published 26th June 2020
Around the world, the advice surrounding the Covid-19 pandemic is that people should, wherever possible, work from home. Thanks to the ubiquity of high-speed broadband, that seems on the face of it a reasonable request for most people.
It seems reasonable because our use of the internet has grown far beyond emails and messaging. Even before the current crisis, video conferencing was rapidly replacing face-to-face meetings as a way of saving time and reducing carbon footprint.
In today’s unusual circumstances, online meetings are very popular. But it is not just workers who are confined to the home. Schools are also closed, so children are turning to the internet for educational material. Online fitness classes are booming. And, of course, people of all ages confined to the home are turning to streaming video services and online gaming for entertainment.
Before the crisis erupted, Cisco predicted that global internet traffic would reach 4.8 zettabytes a year (that’s 48 followed by 20 zeros). The significant point, though, is that video – in all its forms – represents at least 80% of that total.
And if we are consuming more video over the internet, whether conferencing or binge watching, then the impact on the network will increase. Openreach, which provides most of the broadband infrastructure in the UK, has seen traffic increases of between 35 and 60% over equivalent times in “normal” weeks. Vodafone says mobile broadband demand has increased by 50%.
EU Commissioner Thierry Breton commented recently that he is concerned that the digital infrastructure could collapse at any time. “Streaming platforms, telecom operators and users, we all have a joint responsibility to take steps to ensure the smooth functioning of the internet during the battle against Coronavirus,” he said.
The nature of digital video is that it is fiercely demanding of bandwidth. The native data rate for HD television is 1.5 gigabits a second; Ultra HD is at least four times that. It is also deterministic: if your television display does not get a new picture every 40 milliseconds you see it all too clearly. Drops and freezes may be acceptable in complex video conferences, but not when you’re watching Narcos or Stranger Things.
Video streams are compressed heavily before passing through either the broadcast or broadband pipe to get to you: a premium HD channel might be four or five megabits a second to your home. That is extremely effective, but the sheer mass of traffic still makes it a challenge for the internet infrastructure.
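Those two numbers imply a compression ratio of roughly 300:1:

```python
native_bps = 1.5e9       # uncompressed HD, as quoted above (1.5 Gbit/s)
delivered_bps = 5e6      # a premium HD channel at ~5 Mbit/s

print(native_bps / delivered_bps)  # 300.0 -> roughly a 300:1 compression ratio
```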
The codecs used to encode the video signals are, of course, tightly standardized – they have to be for the whole thing to work. Any changes to these standards take years to develop and ratify. Updating the codecs to reduce bandwidth requirements is not an option.
There is the suggestion that users might accept standard definition video streams rather than HD or Ultra HD. Netflix chairman and CEO Reed Hastings tweeted “To secure internet access for all, let’s #SwitchtoStandard definition when HD is not necessary”.
That, though, must be seen as a huge commercial risk. First, consumers have got used to seeing HD quality on the large screens now found in every living room. SD will be seen as very inferior. Longer term, it deeply undermines the push for Ultra HD which the streaming businesses have advocated as giving them a clear advantage over broadcasters.
It also does nothing to limit the boom in video conferencing. Most users have no idea of the resolution of the camera built into their computers, let alone how to modify its parameters. Video conferencing will continue to grab all the bandwidth available because there is no practical means of throttling it.
With a need to maintain perceived quality and no significant reductions in bandwidth from codec developments in sight, the only solution is to pre-process the video before it reaches the encoder. Perceptually optimized video, when given to the encoder, should result in smaller streams out.
For a moment, let us look back 25 years. The challenge then was to stream audio, which required a sustained data rate of 1-2 Mb/s. The engineers and mathematicians developing the first MPEG standard applied psychoacoustics to the understanding of human perception of sound. Although purists remain critical, the layer 3 audio coding within MPEG-1 – known as MP3 for short – has become universally accepted and used.
What MP3 does is eliminate those parts of the audio signal which most listeners would not miss. It allows the data rate to be slashed down to 64kb/s.
Our work at iSIZE Technologies has found that the same principles can be applied to video. If you determine what people actually see, then you can remove from the video stream those pixels which are not important. This is a pre-processing stage, prior to encoding into one of the video delivery standards, but with less data in so less data out.
There is extensive academic work on measuring the effectiveness of such video pre-processing. The most widely used is VMAF, for video multi-method assessment fusion. Driven by Netflix – which has a big interest in efficient video streaming – VMAF was developed by the University of Southern California and the University of Texas.
Through VMAF we have reliable metrics for human visual perception, and therefore a solid foundation on which to develop machine-learning processes to identify the less important parts of the image and to reduce their significance in the video flow. We are already seeing bitrate reductions of between 20 and 40% and no compromise to the visual quality – in fact in some instances we even improve visual quality as measured by VMAF and other high-level perceptual metrics.
In the long term, saving 30% of the 80% of the internet that is video traffic could result in data savings close to 25%. In the short term, proven video pre-processing algorithms are ready to roll, and could keep the internet alive during a period of unprecedented threat.
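The closing estimate is straightforward to verify:

```python
video_share = 0.80      # video's share of internet traffic
bitrate_saving = 0.30   # pre-processing saving on that video

overall = round(video_share * bitrate_saving, 2)
print(overall)  # 0.24 -> close to the 25% figure quoted
```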