GB Labs and Archiware today announce integration between the GB Labs storage platforms and the Archiware P5 data management solution to deliver maximum security for ongoing and completed productions. Customers now have the flexibility to choose from different storage devices for backup and archive such as disk, LTO tape and cloud storage.
GB Labs storage platforms, such as SPACE, Echo and FastNAS, allow content creation teams to collaborate on files and projects for increased productivity and creative freedom. These products give Mac, Linux and Windows users simultaneous access to projects using 1 to 100Gb network connections. They are all designed from the ground up to be easy to install, maintain and upgrade, and the system allows easy configuration of access rights, storage quotas and permitted bandwidth for each user. Moving data between tiers of GB Labs units is optimised by the CORE OS intelligence to increase workflow efficiency.
With P5 now able to run natively on GB Labs devices, the latest CORE.4 OS allows users to configure the P5 client via the integrations tab to access files on GB Labs storage products for backup and archive. The integration offers an efficient way to protect production and ensure business continuity. The P5 platform offers enormous flexibility in configuration, setup, storage and policies. The synergy between the two systems also means production is protected in multiple ways.
P5 Backup protects ongoing production against accidental deletion, file corruption and other mishaps. Scheduled automatic backups are the best way to keep files safe. The optimised restore process hands any file back identically (including xattrs, ACLs, etc.) so production can continue. P5 Backup works with disk, tape and cloud storage to provide maximum flexibility and fulfil any requirement. Encryption is available for both transfer and storage.
P5 Archive migrates finished projects and their assets to disk, tape or cloud to preserve them for the long term. Finding files at a later date is easy with its MAM-like features, customisable metadata fields, thumbnails for still images and proxy clips for videos. Combined search and visual browsing functionality help to locate files when they are needed for re-use, reference and monetisation. Full LTFS integration (ISO/IEC) provides import, export and archiving on LTFS tapes, and the system makes it easy to catalogue and include existing third-party LTFS tapes in the P5 Archive.
Howard Twine, Chief Product Officer for GB Labs, said, “We are delighted to bring this latest CORE OS update to our users, allowing them to easily configure the Archiware P5 client so that it can access files for either backup or archive (or both). This gives our customers the flexibility to choose different storage devices for backup or archive, such as LTO tape from other vendors, offering a ‘best of breed’ approach.”
Dr. Marc Batschkus, Director of Marketing and Business Development at Archiware, confirmed, "We are proud to have GB Labs as an integration partner for P5. Our shared strengths, like the focus on customer experience, help us to offer solutions that are powerful, cost-effective and easy to use. Especially in media production, the accessibility of any solution is key to improving productivity."
Both P5 Backup and P5 Archive support LTO tape drives and tape libraries from all vendors, ensuring the highest durability, a shelf life of decades and the lowest TCO of all professional storage media, starting from 10€/USD/GBP per TB. When using multiple drives, throughput can grow (and almost multiply) with each drive added through P5’s drive parallelisation feature. For maximum security and offsite storage, P5 offers tape cloning to create two identical tape sets with two LTO drives.
Aldermaston, UK, 29 September 2020 - GB Labs, innovators of powerful and intelligent storage solutions for the media and entertainment industries, has launched Unify Hub, a platform designed to meet the challenges of today’s media production environment by combining on-premises and cloud content, empowering remote production while maintaining data integrity and security.
Unify Hub is a data management platform designed for today’s changed world. It manages storage – on site or in the cloud; from GB Labs or from other vendors – to provide a working environment which is simple and fast, providing the tools for maximum productivity from production and post production artists.
“For most of this year, collaborative production has been impacted,” said Dominic Harland, CEO and CTO at GB Labs. “Content is back to being stored in multiple locations, with all the problems of delays in moving material from place to place, the risk of creating multiple ‘master’ versions, and of course poor security.
“First and foremost, Unify Hub provides a unified approach to content and metadata,” Harland added. “Wherever your material is physically stored, the content you need appears as a single, secure and coherent source. That makes it ideal for high-efficiency collaborative and remote working.”
Unify Hub is built on five key pillars that advance productivity, connectivity and security:
Unify Hub Cloud Mounts: Align local user permissions with cloud accounts to facilitate single sign-on and simplify secure on-premises access to cloud accounts
Acceleration: Leverage GB Labs technology to speed up the user experience, as well as reducing costs and saving internet bandwidth
S3 Endpoint: Connect on-premises workspaces to cloud services or remote users
Remote Working: Provide remote workers with a seamless experience, regardless of their location
Virtual Workspaces: Simply select what is needed and securely make it available everywhere, instantly
Through its management structure, Unify Hub allows users and groups to be established for each project. All appropriate cloud accounts can then be accessed through the single sign-on, with pre-authenticated cloud endpoints appearing as SMB storage shares. For system administrators, Unify Hub File Manager provides a single pane of glass for overview and control; for users, their log-in brings all the material they need to their workstation, wherever they are working and wherever the content is stored.
Unify Hub won the Best of Show Award 2020 from TVBEurope magazine during IBC 2020.
Aldermaston, UK, 29 Sept - 1 Oct 2020 - GB Labs, innovators of powerful and intelligent storage solutions for the media and entertainment industries, is proud to be part of the inaugural BroadcastAsia (29 September – 1 October) immersive virtual convention. As well as a fully staffed booth, GB Labs is making two presentations on key products: CloakDR and the award-winning Unify Hub, which will be launched at the event.
Unify Hub was designed for today’s changed world, a storage appliance which has support for remote production and distanced implementation baked in. Unify Hub is a powerful control layer, tuned to the challenging needs of broadcast, which manages storage – from GB Labs or other vendors – whether that storage is on premises or in the cloud; local or remote. So it can leverage existing storage as well as providing a sure foundation for future plans.
“The pandemic has forced changes on the way we work, bringing remote collaborative working to the fore,” said Ben Pearce, CBO Asia and Co-Founder at GB Labs. “The temptation is to go back to the bad old days of material stored in multiple locations, with all the problems of delays in moving material from place to place, the risk of creating multiple ‘master’ versions, and of course poor security.
“As the name suggests, Unify Hub provides a simple, unified approach to content and metadata, so that wherever it is physically stored, it appears as a single, secure and coherent source, ideal for high-efficiency collaborative and remote working,” Pearce explained. Unify Hub is a winner of the Best of Show Award 2020 from TVBEurope magazine, presented during IBC 2020.
The other featured product at BroadcastAsia is CloakDR, a revolutionary approach to 'no single point of failure' content security from GB Labs. The intelligence in GB Labs servers uses CloakDR software to build redundant storage systems which are always synchronised from the moment of ingest. This allows CloakDR to dynamically allocate one appliance as primary, one as secondary, and to provide instant failover whatever the incident.
The inherent intelligence maintains a single IP address for storage, so the rest of the infrastructure can continue unaware of any issues. The result is perfect resilience: content is read from and written to the servers without interruption.
“We are excited to be involved in this year’s BroadcastAsia,” added Pearce. “The organisers of the parent event, ConnecTechAsia, have gone to extraordinary lengths to create a virtual experience that is as real as possible, with staff on booths ready to answer questions and presentations on key new technologies. The technology they have brought together to make this a satisfying and engaging experience for ‘visitors’ looks excellent, and we are looking forward to taking part.”
Aldermaston, UK, 21 September 2020 - GB Labs, the leader in intelligent storage solutions, has provided its FastNAS storage system to Fancy Film, Los Angeles, enabling the facility to complete colour correction and other finishing services on a major new documentary TV series.
Tony Shek, Fancy Film CTO, said, “We were rapidly getting into 4K finishing and HDR Dolby Vision, which meant that we had to start looking for shared storage that was fast enough to run in real-time but was also cost effective. We researched a lot of companies, but in terms of cost effectiveness, the quotes we received were anything but.”
Fancy Film Online Editor Jacob Fisher added, “When high volumes of ultra-high resolution content start pouring in, you need the ability to work on that content from multiple workstations at speed, which at the time was a capability we didn’t have.”
Rave reviews during NAB 2019 from existing GB Labs storage users convinced Shek that he may have located a solution. A subsequent referral from a user to GB Labs’ West Coast representative and installer, New Media Hollywood, meant that Fancy Film had found what it needed in FastNAS.
FastNAS shared storage combines the benefits of hard disk and solid state drives in a single device and has become the high performance storage system of choice worldwide, all at a highly affordable price point.
Shek said, “We took on a high-profile job, and even though we knew ahead of time that the majority of the footage was going to arrive in ultra-high resolution, it was a relief to discover soon into our trial with FastNAS - during which we pushed it to its limits for a solid week - that we didn’t have to worry. It just worked. The FastNAS system didn’t even break a sweat. All we had to do then was focus on the creative, and the results are marvellous.”
Adi Antariksa, GB Labs Chief Business Officer for the Americas, said, “It’s deeply gratifying to have our storage systems deliver on what they promise, and then some, especially when it comes to household brand projects like this. We look forward to continuing to support Fancy Film in current and future projects as we get almost as much satisfaction from their success as they do.”
By Ben Pearce | TVBEurope | Published 10 September 2020
I’ve been working with network-attached storage (NAS) for nearly 20 years and still see too many people buy what is clearly the wrong storage for what they need. Often, their decision is not based solely on perceived cost savings; it is usually the result of not fully understanding that the operative word is “shared” storage, meaning that the storage is about providing multiple files to multiple users simultaneously. Looking at storage as a single-purpose appliance has often proved to be short-sighted.
What I’m saying is that measuring peak performance and IOPS (input/output operations per second) as standalone criteria for purchasing a storage system is a mistake, because those figures are often misleading. A single peak-performance figure provided by a manufacturer is not indicative of the totality of what a storage system can provide to the specific facility at which it is deployed. A high peak-performance figure may sound impressive, but it doesn’t take into account the multiple file access requirements of a typical shared storage environment, so that figure is more applicable to DAS (direct-attached storage) systems. In short, IOPS figures are almost meaningless unless you know exactly what parameters were configured in the tests performed to arrive at them.
It’s too easy to arrive at misleading figures that promise false economies. The issue is that sales collateral emphasising peak performance and IOPS figures is so prevalent that it distorts the truth and leads, in many cases, to unhappy users when they subsequently find out that the claims on which they based their purchase decision bear little resemblance to real-world performance.
Back to basics
Hard drives come in many shapes and sizes, from many different manufacturers, and each manufacturer chooses what to adopt and promote from numerous storage model types and technologies.
What many end users don’t always grasp is that storage capacity alone is not a good measure of its ability to perform the tasks they need a storage system to do or to eliminate the bottlenecks they are buying it to fix.
It’s common for people to want, indeed expect, high, 24-hour duty cycle performance from a high-density RAID. But to achieve that, you need a very specific type of hard drive that comes at a higher cost than the consumer-grade drives that many assume will be “good enough”. And, like many things when you decide on a cheaper, “good enough” option, it soon costs even more to retroactively put right.
Thinking outside the capacity
There are many aspects other than capacity that impact storage system performance and reliability. For example, the communal backplanes that address RAIDs inside NAS storage, the storage interface, and the number of paths and the quality of the host bus adapter (HBA) all play key roles. However, the benefits of getting these areas right are often overlooked in favour of focussing solely on greater capacity or lower cost. Again, too many people consider price-per-terabyte to be the sole purchasing parameter rather than taking a more holistic view that encompasses the entire spectrum of what a system can do when it’s designed, configured and deployed to take advantage of its full range of capabilities. To get optimal performance, all of those aspects must work together.
Think of it this way: a race car fitted with a very powerful engine but, to save money, a too-light chassis, a standard gearbox and high-street tyres is likely to spend more time in the garage than out of it, let alone ever be competitive in any race.
The best way around this somewhat short-sighted decision-making is to fully understand the potential ramifications of choosing the least-cost option. Ask detailed questions.
For example: Is the RAID level achieved in hardware or software? There are advantages and disadvantages to achieving the RAID level with either approach, so it’s important to find out which will work best for what you want to do. In some cases, it might be that a hybrid hardware and software-based RAID level system is the most appropriate option, but too many find this out after they’ve already installed a relatively cheap storage system that has little or no chance of delivering what they need. And all because they didn’t ask anything other than, “How much per terabyte?”
The OS is everything
I’ve been discussing questions to be asked and choices to be made concerning purchasing NAS, but I want to identify the main differentiator of any NAS, and that’s the operating system (OS) on which it runs.
No, I’m not talking about Windows or Mac. With NAS, the limitations of those operating systems are quickly reached and exceeded by NAS systems running on powerful hardware. Off-the-shelf operating systems are not suitable platforms for any professional shared storage system.
Nevertheless, the vast majority of NAS storage systems on the market today use generic, OTS operating systems that purport to turn hardware servers into functional NAS. The problem with that approach is that they must cater for a wide range of hardware configurations, good and bad, which means they are specifically tuned for none, and even the hardware they can run on requires a great deal of compromise in many important areas.
Those faux NAS systems are “kind of” functional, but there are still major issues with them. For one, they’re unstable, and they also suffer from being designed for the lowest common denominator, which means they are not able to take full advantage of whatever hardware they run on, no matter how good that hardware is. NAS hardware performance that looks good in printed specifications from the marketing department tends to fall short of real-world performance once deployed. An additional problem is that, having spent the money on a new NAS, the buyer just can’t understand why there’s been little or no improvement.
And after that money’s been spent, the boss is going to want to see those improvements, too. That’s why it’s vital to seek out hardware that can reach its full potential by working seamlessly with specially developed OS software that is highly tuned to achieve peak performance and functionality. Every component of a system must be perfectly matched and finely tuned. Hardware, software, OS…everything.
Testing is key
It amazes me that most storage system suppliers do not test their systems in high bandwidth editing and content creation environments with multiple workstations. It’s true. Most don’t.
And that’s a problem, because it’s precisely those high-end editing and creation environments where many of these systems will be expected to perform. But it is too common for storage manufacturers to simply take the highest peak figure for bandwidth or IOPS that they can “in theory” achieve and publish that as their benchmark for network storage performance.
They then use that figure in the marketplace, claiming that you can just divide their figure by the number of workstations to calculate the performance that will be simultaneously delivered to each, which is patently absurd. Storage just doesn’t work like that.
I know I risk repeating myself, but it’s a fact worth reinforcing: Peak performance figures may look good on paper and sound compelling from a salesperson, but they usually only tell you about how that system is theorised to perform in a single scenario that probably hasn’t even been tested. What they don’t tell you is how a system will actually perform under the load of multiple machines, often around the clock, which is exactly what the real world requires.
And it’s critical to understand that differentiation. The very high bandwidths we’re talking about normally require at least a couple of workstations or servers to test and confirm performance figures, but most manufacturers use speed-testing software that reads only one file at a time. It also writes that same file, which is easily cached by the storage and therefore skews the results. This is why GB Labs always tests on real-world edit suites with real media streams; not just to generate the highest figure we can get away with for marketing purposes, but to ensure the honesty and integrity of our performance figures.
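The difference between a single-stream benchmark and a multi-stream one is easy to demonstrate. The sketch below (illustrative only, not GB Labs’ actual test methodology) measures aggregate read throughput across several distinct files read concurrently, which defeats the single-file caching trick the column describes:

```python
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

def read_stream(path, block=1024 * 1024):
    """Read one file sequentially, like a single edit-suite playback stream."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(block):
            total += len(chunk)
    return total

def aggregate_throughput(paths):
    """Read all files concurrently and return aggregate MB/s."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        total_bytes = sum(pool.map(read_stream, paths))
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e6

if __name__ == "__main__":
    # Distinct files per "workstation", so a cache can't serve one file to all readers.
    paths = []
    for _ in range(4):
        fd, p = tempfile.mkstemp()
        with os.fdopen(fd, "wb") as f:
            f.write(os.urandom(8 * 1024 * 1024))  # 8 MB of incompressible data each
        paths.append(p)
    print(f"aggregate read throughput: {aggregate_throughput(paths):.1f} MB/s")
    for p in paths:
        os.remove(p)
```

On real shared storage the figure this reports under four concurrent streams is usually far below four times the single-stream number, which is exactly the gap between marketing benchmarks and real-world performance.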
Delivering ‘real world performance’ to a network
It’s important to understand that powerful storage in a server room does not automatically equate to powerful network performance. Yes, eliminating bottlenecks by utilising the latest network protocols, connectivity, and distribution methods is important, but that’s not something most NAS systems enable you to do.
There are, however, a few exceptions. What a good NAS will do is control the delivery of data by automatically making intelligent decisions on who gets allocated what portion of the overall bandwidth. Sophisticated controls like this are rare, but they are increasingly necessary to ensure Quality of Service (QoS) to the many users on the network.
Moreover, finding a system with the ability to dynamically adapt to usage and deliver 100 percent of the available bandwidth narrows the field of potential NAS solutions even further.
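The kind of allocation decision described above can be pictured with a textbook weighted max-min fair-share calculation: users who need less than their share keep their demand, and the leftover is redistributed so that 100 per cent of the available bandwidth is handed out. This is a generic illustrative sketch, not GB Labs’ actual QoS algorithm:

```python
def allocate(total, demands, weights):
    """Weighted max-min fair share of `total` bandwidth.

    Users whose demand fits within their weighted share get exactly their
    demand; the unused remainder is redistributed among the rest, so all
    available bandwidth is allocated whenever it is demanded.
    """
    alloc = {u: 0.0 for u in demands}
    active = set(demands)
    remaining = total
    while active and remaining > 1e-9:
        wsum = sum(weights[u] for u in active)
        share = {u: remaining * weights[u] / wsum for u in active}
        satisfied = {u for u in active if demands[u] - alloc[u] <= share[u]}
        if not satisfied:
            # No one can be fully satisfied: split what's left by weight.
            for u in active:
                alloc[u] += share[u]
            break
        for u in satisfied:
            remaining -= demands[u] - alloc[u]
            alloc[u] = demands[u]
        active -= satisfied
    return alloc
```

For example, with 10Gb/s of capacity, equal weights and demands of 2, 8 and 8, the light user gets its 2 and the two heavy users split the remaining 8 evenly, so the full 10 is always in use.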
Therefore, choose wisely
All of the above are just some of the reasons to take time to carefully analyse the storage system investment you are about to make. The acronym ‘NAS’ is a broad term that is rather too loosely used to cover many different grades of technology offerings in the market, many of which, in truth, have little or nothing to do with true NAS. As I’ve said, limiting your research to how much it will cost per TB is short-sighted and will end in disappointment, not to mention wasted time and money.
So research your NAS options to determine all of what you need to deliver for your business, not just in terms of capacity to store additional assets, but how that storage can streamline your business whilst simultaneously providing the best and most efficient experience for multiple users, both now and in the future.
Most of all, make doubly sure that each and every component is highly tuned to the others. It’s the only way to get what you paid for.
Aldermaston, UK, 3 September 2020 - GB Labs, innovators of powerful and intelligent storage solutions for the media and entertainment industries, today announced that Side Street Post and VFX, Vancouver, supported by GDS Communications, has chosen GB Labs’ SPACE SSD NAS shared storage system to drive Side Street’s DaVinci Resolve Studio systems.
More than 300TB of GB Labs SPACE SSD shared storage, with its massive disk performance of 12GB/s, now enables all of Side Street’s DaVinci Resolve Studio workstations to simultaneously play back 4K DPX streams at full resolution, with no network slow-down or dropped frames.
Side Street Post and VFX’s President, Gary Shaw, said, “Our legacy SAN storage system was no longer meeting our needs, and limited our ability to fully utilise Resolve in higher resolutions. We needed a solution that would enable all our colour correction suites to operate simultaneously at 4K resolution or higher and more efficiently.”
Gary’s view, shared by many, was that current fibre channel storage systems could not affordably achieve the concurrent speeds needed, and that the technical development of such systems is generally thought to have been eclipsed by Ethernet solutions. For Side Street, NAS was the way to go.
Gary said, “Vancouver is a major market for episodic television and feature film production, a lot of it captured with high-end cameras that shoot at 4K, 6K or 8K, so a lot of raw camera footage arrives which in the grading process requires a very high data rate. Both technically and ethically, we don’t really want to downscale and work in HD. We want our clients to experience the true image quality especially on Dolby Vision projects.
"To deal with such high data rates and file sizes, you need a system that can handle them, and SPACE SSD from GB Labs provided both the bandwidth and file management capability that fits our needs."
SPACE SSD is the world's fastest and most scalable NAS platform, with performance up to 18GB/s and capacity up to 10PB. The Side Street system transfers data at 12GB/s and is linked to a 100GbE switch.
According to GB Labs CEO-CTO Dominic Harland, “Side Street Post is very forward-thinking and knew that a fibre channel system could not achieve what it needed. Speed was of the essence and they needed all colour workstations running at full capacity, simultaneously, in 4K, 6K, 8K and, eventually, beyond. SPACE SSD NAS copes with that easily, with plenty of headroom.”
A major differentiator with all GB Labs storage systems is that they do not require the user to replace or dispose of their existing storage. Like many other GB Labs users, Side Street was able to make use of its existing SAN by incorporating GB Labs ECHO Bridge as a way of accessing SAN data, or moving files to it for near-line storage.
“It’s a case of using the old storage for secondary purposes, essentially cold storage, and empowering SPACE SSD for the heavy lifting,” added Shaw. “After a few minutes of on-site tweaks to configure the system to our preferences, we were up and running – at astonishing speed - without the slightest disruption to our operations.”
CJP Broadcast Press Release: Ross-On-Wye, UK, 20 August 2020:
CJP Broadcast announces the successful completion of a video production and live streaming project for three of the highest profile games in the European sports calendar. A complete system centred on a CJP Live Sports Production System captured content to supplement terrestrial and satellite coverage of the events. CJP staff active at the matches included Managing Director Chris Phillips supervising technical setup, James Ruddock operating as Technical Manager, Kieron Sharpe and Kieran Phillips providing AV rigging support, Rob Dyton as Production Director and Chris Hollier as Remote Camera Operator.
“Covid-19 restrictions meant the venue was unable to host the capacity crowds normally present at semi-finals and finals,” Chris Phillips comments. “We were asked to augment the traditional broadcast coverage with behind-the-scenes content. This included manager reactions during the match, player interactions in the tunnel area and a focus on key international players. The resultant video would then be made available online for easy access by supporters during and after each match. Our role was to provide a complete production system plus an experienced installation and operations crew.”
“A key part of the challenge was capturing content from ‘red zone’ areas such as the substitutes’ bench and the technical control area. We provided four JVC cameras with motorised pan/tilt/zoom which were operated from our control base on the gantry in front of the press area. Two of the cameras were positioned in the tunnel. The other two were focused on the managers’ and subs’ benches. In addition to the robotic cameras, we had feeds from two Sony FS7 cameras provided by the host and operated by their own experienced freelancers, plus a JVC GY-HM660RE live streaming camcorder.”
“The Streamstar iPX allowed us to record ISO-style feeds from the six cameras and make these accessible in 15-minute segments to remotely located video editors so they could start producing final edited content while the game was still in progress. We expanded the 1 terabyte of onboard video storage in the iPX with a 16 terabyte GB Labs F-8 Studio recorder using a 512 gigabyte Nitro SSD Layer for ingest while editing. Content for editing was accessed from the F-8 by four edit suites. The system also generated an H.264 RTMP live stream for practically instant publication on social media. An HD/SDI clean programme feed was also provided from our system to the host broadcaster for use as an optional contribution feed within the terrestrial and satellite transmission.”
Available in several versions supporting up to eight camera inputs, the CJP Live Sports Production System provides a wide range of production and streaming capabilities in an easily transportable unit. Its facilities include ISO recording, four-layer graphics, transitions, real-time replay, slow-motion replay, on-the-fly highlights creation, advert insertion, clip insertion and audio mixing. Operation is via a touchscreen and keyboard with the option of an external joystick for pan/tilt/zoom camera control. Full multiscreen monitoring facilities are included with the option of a second screen for commentator positions. An H.264 live stream can be fed directly to a TV station or third-party OB control suite via a 10 megabits per second link or via 4G mobile, with the ability to simulcast to multiple platforms and in-stadium screens. Up to 96 terabytes of RAID5 storage can also be attached.
About CJP Broadcast
CJP Broadcast Service Solutions Limited (www.cjp-bss.co.uk) was established in 2011 to provide broadcast manufacturers and engineering companies with professional ITIL based service desk solutions. In 2016 the company expanded its portfolio to include digitisation of broadcast tape and film media to provide restoration of historical media archives into modern file-based formats. In 2018 CJP expanded its operation further, providing live production solutions, professional broadcast TV studio system integration and technical support services.
By Contributor | TVBEurope | Published 8th April 2020
Storage provider GB Labs and Ortana, the creator of Cubix, the asset orchestration, management and automation software, have come together to provide a unique customer experience.
Ortana Founder and CTO James Gibson said: “People talk about media asset management, but orchestration is what people are really interested in. MAM is just a by-product.
“But if an orchestrator can’t accurately understand the devices it’s talking to or what is taking place with a piece of technology at any moment in time, it’s not much use.”
To demonstrate interoperability, several years ago Ortana conducted a proactive Cubix integration project in conjunction with GB Labs storage as a best-of-breed exercise.
According to Gibson: “I worked with GB Labs for many years as a customer and have great respect for their expertise. One major benefit of working with them on various projects was their consistency in approach of establishing and ensuring a technical commonality across their product range.
“What that means is that although their products are designed to suit a wide range of needs, it is technically consistent. From a standalone LTO device to their high-end SSD storage and everything in between, all are driven with in-built intelligence anchored by their CORE.4 OS.”
GB Labs’ new CORE.4 is a high-performance custom OS specifically designed to serve media files, with an additional intelligence layer that delivers ultimate stability and quality of service for every user. Moreover, its power-saving intelligence means that CORE.4 ensures consistent, reliable performance whilst using the fewest disks possible. Its expanded range of demonstrably useful features is engineered to further enhance users’ ability to manage and enhance online workflows.
Gibson added: “When it comes to integrating those storage systems with our asset orchestration technology, CORE.4 OS enables it to be done simultaneously and seamlessly. Establishing interoperability with the most crucial component needed for orchestration, i.e., storage, is painless with GB Labs.
“That synergy is because the modular approach to product development that both companies take is very similar, which benefits customers of both. In applications for which they are deployed, Cubix and GB Labs can work independently, but to achieve optimum performance, they benefit from working together. The manner in which they are individually architected means that both systems know exactly what is expected of the other to work in tandem, whether it’s ingest, content discovery, archive, workflow orchestration, tape ingest or one of many other tasks. They just ‘get each other’.”
Another key parallel, and benefit, is reusability.
Gibson said: “Many products these days have a working life that can easily exceed the life of the project for which they were purchased. Cubix orchestration software and GB Labs storage products, on the other hand, can be easily redeployed to address changing business requirements without having to justify and endure another round of CAPEX.”
And it’s those differing needs that Ortana soon plans to address in conjunction with GB Labs by co-parenting “Kiosk”, an exciting new approach based on the concept of “bring your own storage”.
It has long been a tenet of both Ortana and GB Labs that there is no need to rip out existing infrastructure to use their respective technologies. Both are able to sit as a layer on top of, and make better use of, what is already there.
Ortana has designed Kiosk to make managing media simpler and more affordable by wrapping orchestration around existing storage until the time comes to upgrade or expand.
Gibson said: “The concept of Kiosk is that, if you have legacy storage, or storage you are contracted to, you can reinvigorate it with an orchestrator that includes a fast way to find and retrieve assets or anything else that you specifically need it to do. GB Labs and Cubix are respectively renowned for enabling users to make use of what they already have. We’ve taken a page from what GB Labs has done with its award-winning Mosaic software. In a sense, Kiosk reimagines Mosaic for its own purposes.”
For GB Labs storage systems, Mosaic combines AI and intelligent storage to provide an automatic, vastly enriched way to track and find media assets. Kiosk is a complementary technology designed to fully examine the movement of media through an active workflow. In cases that include GB Labs storage, Kiosk and Mosaic work in concert to exploit the intelligence of both.
Kiosk is initially targeted at what have traditionally been smaller clients, and Gibson anticipates that it will help people understand that Ortana can layer orchestration on top of their existing storage, if that’s what they prefer.
Gibson concludes: “We have a great relationship with GB Labs, but for those who are not quite ready to upgrade their storage speed and reliability, Kiosk can assume initial responsibility for an existing infrastructure and drive what they have, if that’s all they want for now.
“However, we work with more than 50 integration partners, and we all share a belief in each other’s products and a confidence that when we work together, we can deliver what we promise. The pairing of minds at Ortana and GB Labs is an ideal illustration of partners who know and trust one another.
“I have the greatest respect for the GB Labs team, our business model commonality, technical expertise and like-minded approach to thinking differently about how to further improve life for our customers. In my view, it’s a perfect pairing of orchestration and storage that enables users to thrive in a rapidly changing content creation, transmission and distribution market.”
Video Interview | KitPlus Daily | Published 7th May 2020
Chief Solutions Officer Duncan Beattie featured as a guest on the KitPlus Daily Show. Watch the video below to find out how our products and solutions are ideally suited to remote working, remain secure and are versatile enough for markets outside of the broadcast industry.
By Ben Pearce | TVBEurope | Published in the July/August 2020 Edition
Ben Pearce, CBO Asia and co-founder of GB Labs, talks to TVBEurope about how the industry is evolving and its knock-on effects on opex.
Until recently, operating expenses – opex to most of us – were defined as the expenses a company incurs through normal business such as rent, equipment, inventory, marketing, payroll, insurance, plus R&D.
It’s long been a central tenet of business, and broadcast in particular, to continually strike the right balance between keeping operating expenses in check, or reducing them, without significantly impacting a company’s ability to compete.
It’s obvious that the majority of capital expenditures have stalled for the time being, but opex carries on, although under increased scrutiny.
And that pressure has intensified in recent years for a wide range of reasons as the broadcast industry reinvents many aspects of itself; so much so that opex reduction has been forcibly recalibrated to include, “How do I sensibly mitigate my financial and operational risks but stay in business if disaster strikes?”
The industry was already heading that way, but has had a major fire lit under it that has accelerated the need to ensure operational security even if under unexpected pressure…and be able to ensure it from anywhere in the world.
An Asia-Pacific customer of GB Labs’ regional dealer realised late last year that its disaster recovery system, often thought of as a ‘nice to have’, was costing it more in maintenance and substandard performance than it was delivering. In what turned out to be a prescient move, the customer contacted GB Labs about installing CloakDR, the most complex system in our portfolio. Nevertheless, its installation is typically a straightforward process: working closely with the client to determine their specific requirements; configuring a system to suit those needs; and spending several days on-site with a small team of local engineers from the dealer and the customer to ensure the install goes smoothly.
But that’s impossible when you suddenly find, between the point of ordering and the installation date, that you’re not able to get within physical proximity of each other, let alone within thousands of miles. It’s one thing to reduce opex, but this was not how anyone foresaw achieving it.
GB Labs is quite used to doing remote installs. Installing a standard storage system is pretty easy whether the customer is in the middle of the Sahara or the Arctic. But a sophisticated CloakDR system is a different closet of cloaks and would normally require several days on-site. In this case, ancillary components had to be ordered that, again, would normally be sourced on-site and integrated in-situ, but on-site sources, or a visit, weren’t options and the customer needed the system as soon as possible.
To mitigate any obstacles, we safely assembled the core system at our Berkshire HQ, including the ancillary components we ordered in. We then shipped the complete system to our dealer with each section clearly demarcated for connection. CloakDR requires two units to work together over a highly advanced networking system to provide full resilience across switches, storage, client connections, and a great many other devices and connections, which doesn’t lend itself to a ‘quick start’ process. And, because it relies heavily on seamless networking, it’s not usually something you would try to establish from the other side of the world, but we had no choice.
Once the kit arrived, the local engineers followed our instructions, overcoming considerable language barriers, with support provided by us remotely. We all worked together in challenging circumstances to get a necessary job done fast. It not only worked, but was achieved far more cheaply than would otherwise have been the case.
I say that because it’s interesting to note that the new disaster recovery system was up and running in only three days, which coincidentally is roughly the same amount of time it would have taken had we been physically on-site. That we’re able to do so much on such a complex system, and do it all from halfway around the world, gives our local dealer and the end user comfort, and it saves us all a heck of a lot of, let’s face it, often unnecessary travel.
I’m not saying that remote installation will be right for every scenario. There’s still no substitute for hands-on, face-to-face deployment, but if you have no other choice, it’s satisfying to know that remote installation, even for complicated projects, is not only highly doable, but may increasingly be seen as preferable.
So, have the multiple challenges of 2020 so far accelerated the inevitable, i.e., fast-tracked the adoption of new ways of working that drive down opex by enforcing more financially and environmentally efficient operations? Or are they fundamentally redefining what opex should really be about?
It’s too soon to tell, but I would not be surprised if opex and capex were soon replaced by acronyms to be defined later (ATBDL). And we all love our acronyms, don’t we?