Quantum CEO and president Jamie Lerner has suggested that the Linear Tape Open (LTO) consortium was not ambitious enough for tape and its standards were insufficient for hyperscalers.
Jamie Lerner
The Scalar brand tape library line is part of Quantum's overall file life cycle management product portfolio. It has enjoyed success in the hyperscaler archival storage market, counting all of the top five hyperscalers and 17 of the top US intelligence agencies as customers, with more than 40EB of capacity deployed globally and 3 million-plus LTO tapes under management.
Lerner told B&F at an IT press tour meeting that “hyperscalers don’t care about standards” and that they treat tape as a “big fat disk drive… Tape is like a big slow hard drive.”
He said “the products we built for enterprise do not work in hyperscale clouds,” and products had to be redesigned for hyperscalers: “Basically we redesigned everything we know about tape for these hyperscalers.”
Quantum i6H hyperscaler-type tape library
What the hyperscalers want is online access to large-capacity tape drives so that cartridges have to be fetched from a tape library's shelves less often. That means they want higher-capacity tapes, and tape library mechanical components that can cope with high-intensity use.
Getting higher-capacity tape drives is a problem. Lerner told us the LTO roadmap is falling behind disk capacities. The industry is at the LTO-9 level, with 18TB raw tapes, yet HDD supplier Western Digital is already shipping 20TB HDDs, with 22TB and 26TB SMR drives announced. Seagate is sample shipping 20TB+ drives and Toshiba has 26TB drives coming.
The three suppliers' HDD roadmaps extend out to 30, 40, and 50TB drives, and beyond. WD is even suggesting it could create a 50TB archive disk drive. If it proceeds with this idea, the gen 2 and 3 versions would have even larger capacities.
Lerner told us that the hyperscalers are such large buyers that they can specify hyperscaler-specific designs which deliver the requisite speeds, feeds, capacities, reliability, and power consumption. Standards such as LTO don’t matter.
The LTO roadmap won't catch up with disk until gen 11, at up to 76TB, if that arrives within four or five years. We note that LTO stopped its capacity doubling with LTO-9, which was originally expected to have a 24TB capacity but was scaled back in September 2020. This does not inspire confidence in its ability to deliver on its future roadmap.
How could tape capacity be driven higher? One method suggested by Lerner is to increase the tape’s tension so that it can be wound tighter and take up less space inside its cartridge. This would enable the tape length to be increased and its capacity would jump in proportion.
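The arithmetic is simple enough. A rough sketch, using an approximate LTO-9 tape length and an assumed longer length (neither figure is a Quantum or LTO announcement):

```python
# Capacity scales roughly in proportion to tape length if areal density is unchanged.
lto9_capacity_tb = 18
lto9_length_m = 1035           # approximate LTO-9 tape length

tighter_wound_length_m = 1200  # assumed what-if figure, not an announced spec
new_capacity_tb = lto9_capacity_tb * tighter_wound_length_m / lto9_length_m
print(f"{new_capacity_tb:.1f} TB")  # roughly 20.9TB from the extra length alone
```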
Another option could be to move to disk drive head technology which reads and writes data in narrower tracks than tape drives. This could be accompanied by streaming the tape across a flat bed as it passes under the head so that its motion is steadier and smoother.
Quantum is constrained by there being a single tape drive supplier: IBM. Looking back, it is a pity that Quantum stopped making its own SDLT-format drives. The lack of competition in tape drive manufacturing is a bad thing, and it is certainly not helping Quantum.
Veritas is attempting to transform its core NetBackup product into an autonomous data management service operating across public clouds, on-premises datacenters, and edge sites. The company is spending a significant amount of engineering time and money doing this.
The key themes are autonomy and as-a-service. The whole project started two years ago, and the first public inkling of this came with the release of NetBackup 10 (NBU 10) and an IT press tour at Veritas’s Santa Clara HQ this month.
Veritas also promoted internal execs and recruited new ones to bring its revamped software to the market. For example, Lissa Hollinger was promoted to SVP and CMO in February 2022, from being a VP running product and solutions marketing.
The org recruited ex-Accenture consultant Lawrence Wong to be an SVP and its chief strategy officer in January. He said Veritas had looked around at other companies with potentially appropriate technology, but decided it was better to take its NetBackup product and customer base and re-engineer the product. Then the customers could use what they are familiar with and trust, but as-a-service and with wholly new functionality to autonomously protect and manage their data.
Wong works closely with Doug Matthews, SVP for product management. Matthews told us that the engineering organization had closed down many extraneous projects to concentrate resources on the core products and their transformation.
It was a conscious decision not to follow the Commvault route and set up a separate division like Metallic for the whole technology. It did buy HubStor in January 2021 to get a SaaS-based data protection development team – an “acqui-hire.”
The magnitude of the task can be seen by Veritas receiving additional engineering and market funding from private equity owner Carlyle. It bought Veritas for $7.4 billion in 2015 and now it has pumped in more money – an unrevealed amount.
Wong told event attendees that 87 percent of the Fortune 500 – 435 companies – use Veritas, and it has more than 80,000 customers in total. He said Veritas was number one in data protection according to Gartner calculations, with a 15 percent enterprise backup market share. This differs from IDC's calculations: Veeam says it is tied for number one in data replication and protection revenue market share according to IDC, with $647.17 million in revenues (11.7 percent) alongside Dell's $665.46 million (12 percent), while Veritas is in the number three slot with $541.47 million (9.8 percent).
He said Veritas had a four-year no data loss record in the event of ransomware attacks, which was news to us and suggests an under-marketed product strength. The NBU roadmap includes integration with early warning systems and automated clean recovery at scale.
Veritas’s strategy comprises developing autonomy features, adding virtual air-gapped protection in the cloud, and AI-based ransomware resilience. It wants to deliver protection, availability, and compliance within and across hybrid, private, and public clouds with subscription and as-a-service business models.
Automation and autonomy
Veritas believes that automating data protection procedures is necessary, but not enough. The data protection product has to gain autonomy as well. There’s a difference between automation and autonomy – a washing machine is automated, a driverless car is autonomous.
Veritas declares it will eliminate the burden of human intervention from data management, but not oversight. Data management and protection should just happen, invisibly and autonomously, but without sacrificing overarching human control.
Autonomy combines automation with AI and machine learning so that the data protection system can adapt to changed circumstances and respond instantly. Autonomous Data Management will provide ransom-free recoveries – at any scale. It can actively defend against threats.
Matthews said: “We’re going to build a data lake of metadata to understand how people are protecting their data.” Veritas is building its own analytic routines for such data lakes, and will provide this as a service for MSPs to sell.
Wong chipped in: “This autonomousness will spread outside data protection, to secondary data management and archiving.” The autonomy features will come in a future release of NetBackup, building on the foundation set by NBU 10.
Veritas declares that multicloud autonomous data management will independently find and protect data no matter where it lives. It will continuously determine where and how to store it in the most efficient and secure way possible – all without human involvement.
Veritas is aiming to achieve annual revenue growth of 8 to >10 percent by 2026. This is a big ask. It is building a dedicated cloud specialist enabling team to be embedded with CSPs, partner system integrators, and managed service providers, which will help it raise sales.
Carbon reduction
Veritas claims using NetBackup can lower a customer's carbon footprint. It quotes a US Grid Emission Factor and US Data Center Energy Usage report saying that storing 1PB of unoptimized data in the cloud for one year uses energy equivalent to 3.5 metric tonnes of CO2. NBU can reduce this with data reduction, leading to a lower network load and cloud footprint, plus elastic cloud compute resource utilization.
The data reduction process will deduplicate global metadata in the cloud across customers, but it will not dedupe data across customers; the customer data boundary is sovereign.
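As a rough illustration of the arithmetic behind that claim (the 3.5 tonnes per PB-year figure is the one Veritas quotes; the data reduction ratio below is an assumed example, not a Veritas number):

```python
# Rough CO2 arithmetic for reduced cloud storage.
CO2_TONNES_PER_PB_YEAR = 3.5   # figure quoted by Veritas for unoptimized data
logical_pb = 10                # how much data the business actually holds
reduction_ratio = 4.0          # assumed dedupe plus compression ratio (example only)

stored_pb = logical_pb / reduction_ratio
unoptimized_co2 = logical_pb * CO2_TONNES_PER_PB_YEAR
optimized_co2 = stored_pb * CO2_TONNES_PER_PB_YEAR
print(f"Unoptimized: {unoptimized_co2:.1f}t CO2e/yr, "
      f"after reduction: {optimized_co2:.1f}t CO2e/yr, "
      f"saving {unoptimized_co2 - optimized_co2:.1f}t")
```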
Comment
Whatever calculations Gartner and IDC make, Veritas is a major data protection player and it is competing against Veeam, Druva, HYCU, Cohesity, Commvault, and Rubrik. As the amount of unstructured data grows, and as new workloads come into being – such as Kubernetes-orchestrated and edge applications – Veritas must stay relevant and is convinced it can grow.
The ambitious retooling of NetBackup is a huge engineering effort – Veritas’s moonshot if you will. If it can produce the autonomy goods in a post-v10 NBU release, then its enterprise customers should be well pleased and see no need to try competing products.
Matthews said the autonomy vision was conceived two years ago. “We don’t think Rubrik and Cohesity have autonomy on their roadmaps.”
Wong is bullish about the competition. He suggested that data protection startups with unfortified balance sheets could face consolidation if economic times get hard.
WEKA has ported its filesystem code, which runs natively in AWS, to the Azure, Google, and Oracle clouds with a v4.0 release adding filesystem-wide data reduction, tiering to local and cloud object storage, better NFS and SMB support, an improved UI, and new packages and pricing.
The high-speed, scale-out, parallel filesystem supplier says that many enterprises do not run mission-critical applications in the cloud because of lock-in, costs, proprietary cloud formats, and performance. WEKA claims its v4 product delivers consistent high performance, robust data services, and a seamless, simplified data management experience with best-in-class economics in the public cloud, datacenters, and edge environments. The company runs the same binary in the clouds and on vendors' servers, but with different integrations underneath.
WEKA aims to be a global filesystem.
Liran Zvibel, WEKA co-founder and CEO, said in a statement: “The world’s largest enterprises and research organizations are now doubling down on using AI and ML at scale to support innovation, discovery, and business breakthroughs. Hyperscale public clouds like AWS, Azure, GCP and OCI can provide the requisite agility and economies of scale needed to fuel these critical transformation and innovation engines, but the WEKA Data Platform is the key to unlocking that value for AI/ML workloads in hybrid and now multi-cloud environments. WEKA can uniquely help enterprises to avoid cloud lock-in and run their businesses with unparalleled economics.”
WEKA supports NFS v3 and the v4 WEKA code adds support for NFS v4.1, with features natively integrated into WEKA’s code stack. It also supports SMB-W, a WEKA implementation of SMB providing improved latency and small file performance. This is, we were told, suitable for high-performance MS-Windows workloads.
New UI in WEKA v4 release. WEKA says the new UI improves ease of use, streamlines recurrent operations, simplifies day-to-day tasks, and supplies the foundation for management in multi-tenant environments, helping organizations better manage their data at scale.
Storage cost reduction
WEKA v4 provides replication between edge, on-premises, hybrid and multi-cloud environments, and three ways to reduce costs of storing data in these places. Firstly, ageing primary data can be moved onto less expensive local or cloud object storage within a global namespace. This uses a Snap-To-Object feature that enables the committing of a specific snapshot to an object store.
WEKA's chief product officer, Nilesh Patel, told an IT Press Tour that this incrementally and continuously promotes filesystem changes (deltas) non-disruptively to the cloud (or to the on-premises object store) with a very lightweight implementation. Unlike data life cycle management processes, he said, this feature involves the whole snapshot – data, metadata, and every file. This facility can also be used to keep two sites concurrent.
He said that WEKA provides instant (milliseconds) object retrieval from AWS Glacier and will provide the same fast retrieval from on-premises S3 stores soon.
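WEKA has not published code for this, so here is a purely conceptual sketch (not WEKA's API) of promoting only the changed objects of a snapshot to an S3-compatible store, using boto3 with hypothetical bucket and helper names:

```python
import boto3

# Target can be a cloud or on-premises S3-compatible endpoint (hypothetical URL).
s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

def promote_snapshot(delta_objects, bucket):
    """Push only the objects that changed since the last committed snapshot.

    delta_objects: iterable of (key, bytes) pairs produced by a hypothetical
    snapshot-diff step; an illustration of the idea, not WEKA's implementation.
    """
    for key, data in delta_objects:
        s3.put_object(Bucket=bucket, Key=key, Body=data)

# Example: commit two changed data blocks plus an updated metadata manifest.
promote_snapshot(
    [("fs/blocks/0001", b"..."), ("fs/blocks/0042", b"..."), ("fs/manifest.json", b"{}")],
    bucket="weka-snap-target",
)
```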
Secondly, WEKA can use capacity-optimized QLC or performance-optimized (TLC) NVMe drives, and, thirdly, there is a new, filesystem-wide data reduction to increase effective capacity.
The data reduction uses variable length blocks and a similarity hash (fingerprinting) with compression. Patel said: “We have a logical directory tree that contain fragments of different sizes of file that probably have the same similarity hash. This logical directory tree is invisible to customers and is used for global data reduction.”
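Neither the chunking nor the fingerprinting algorithm has been disclosed, but the grouping idea can be sketched generically. The following toy Python (with a deliberately crude stand-in for the similarity fingerprint) shows fragments being grouped by similarity hash and each group compressed together, which is roughly how alike data ends up sharing a compression context:

```python
import hashlib
import zlib
from collections import defaultdict

def similarity_hash(fragment: bytes, bits: int = 16) -> int:
    """Crude stand-in for a similarity fingerprint: hash a downsampled view of the
    fragment so near-identical fragments tend to collide. Real systems use far more
    sophisticated sketches; this only illustrates the grouping idea."""
    sample = fragment[::max(1, len(fragment) // 64)]
    return int.from_bytes(hashlib.sha1(sample).digest()[:2], "big") % (1 << bits)

def reduce_fragments(fragments):
    """Group variable-length fragments by similarity hash, then compress each group,
    so similar data is compressed together for a better overall ratio."""
    groups = defaultdict(list)
    for frag in fragments:
        groups[similarity_hash(frag)].append(frag)
    return {h: zlib.compress(b"".join(frags)) for h, frags in groups.items()}
```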
v4 will be available in four software packages said to be aligned to its customers’ primary use cases, with the option of subscription or consumption-based pricing. There were no details of these four packages available at publication time.
This is a comprehensive code update by WEKA and its consistent environment across the on-premises, AWS, Azure, GCP, and Oracle clouds will surely be welcomed by customers. The data reduction and storage tiering features will likewise be applauded as a way of keeping costs in check. WEKA very much wants to be a platform on which to run mission-critical apps, like databases, and says block storage is not necessary; there is no speed advantage. In this it shares the same viewpoint as VAST Data.
WEKA says it has 1EB+ under management, approximately 20 large enterprise customers, with eight of the Fortune 50 and 60 of the Fortune 100 in its customer roster. It has 40 Gartner peer review results with an average score of 4.9 out of 5 stars. Board member Dan Warmenhoven, ex-chairman of NetApp, thinks WEKA will be as influential as NetApp was in the 1990s.
WEKA’s view is that the public cloud will be a significant force in the years to come, and that means it has to use hardware-agnostic technology. We think WEKA may well support other CPU architectures apart from x86 in the future. It will also pay more attention to big data, data lakes, and warehouses (Cloudera-type stuff).
Backup provider Backblaze is partnering with US public sector supplier Carahsoft Technology Corp to sell its cloud storage to US state and local government, federal, healthcare, and higher education institutions.
…
Data migrator and manager Datadobi has announced a commissioned ESG report that says its StorageMAP product can manage organizations’ unstructured data at scale, and address the top four areas of concern for IT leaders: cost, carbon footprint, business risk, and maximizing the value provided by data. ESG claims organizations, using StorageMAP, can mitigate risks in terms of poor data security, regulatory non-compliance, and inefficient operations. High level graphs enable the C-suite to gain an immediate view of their cost and carbon footprint. The IT department or business units can then dig deeper to understand what is driving these costs and carbon emissions to make evidence-based decisions on how to reduce them. To download the report, click here.
…
HPC, AI, and datacenter systems supplier Exxact Corp is now offering NVMe-oF storage products featuring GRAID‘s GPU-accelerated SupremeRAID. Andrew Nelson, VP of technology at Exxact, said: “By partnering with GRAID, Exxact can help customers unlock the full potential of NVMe SSD performance by leveraging powerful computing and software capabilities to achieve significant performance gain over traditional RAID without requiring any memory caching.”
…
Hitachi Vantara and non-profit Rainforest Connection have partnered to utilize bioacoustics with AI and help stop deforestation, extinction, and illegal poaching. At the 2022 Collision Conference, Hitachi Vantara and Rainforest Connection will host a fireside chat on June 22 to discuss:
How Hitachi Vantara is helping Rainforest Connection create the world’s first recommendation engine that uses acoustics to suggest conservation outcomes.
How predictive analytics can provide live recommendations for those on the ground tasked with stopping illegal logging and poaching.
The Guardian, a self-contained computer that sits atop trees in the rainforest, to track species abundance/richness and even identify new species.
…
Distributor Tech Data has launched a Software Store giving HPE resellers and solution providers an easy way to get alerted when post-warranty opportunities are coming up for Pointnext Tech Care Services. By registering on the Software Store, HPE partners get access to advance information on upcoming warranties that are due to expire where there will be an opportunity to sell HPE Pointnext Tech Care Services. This is shown on a simple dashboard displaying customer details of all the warranties that are due to expire in the next 90 days. It gives partners plenty of time to contact the customer and provide a quote for a Pointnext service.
…
Datarewind, a Swedish IT service provider, has selected HPE Alletra 6000 (Nimble line) cloud-native data infrastructure and HPE ProLiant servers to host virtual machines on behalf of its clients. Datarewind offers services to clients across Sweden, allowing users to run VDI solutions during work hours and automate backup during the night.
…
IBM tells us there is a Spectrum Scale User Group UK meeting on June 30, 09:00-17:00 BST, at IBM’s new London location – IBM Client Center London, 20 York Road, London, SE1 7ND. Find out more here.
…
Malware-scanning Index Engines has added AWS and Commvault support to its CyberSense product. v7.11 can now run in the AWS cloud, and can also run full content (not just metadata) scans within Commvault VMware backups and on the InterSystems IRIS data platform. The aim is to detect data corruption due to a ransomware attack. CyberSense uses more than 200 full-content analytics and machine learning that has been trained on thousands of ransomware variants to provide users with a 99.5 percent level of confidence in accurately detecting corruption.
If a cyberattack is successful, CyberSense says its post-attack reports will provide the intelligence needed to recover quickly, including providing the location of the last good version of data. Version 7.11 is available immediately. However, features and timing of the delivery may be rolled out on individual partner schedules.
…
Kioxia has announced sampling of the industry's first XFM v1.0-compliant removable PCIe-attached NVMe NAND storage device in capacities of 256GB and 512GB: the XFMEXPRESS XT2. This JEDEC XFM device form factor measures 14mm x 18mm x 1.4mm with a 252mm² footprint. It is designed for ultra-mobile PCs, IoT devices, and various embedded applications. Kioxia says its minimized z-height is suitable for thin and light notebooks. The XFMEXPRESS XT2 implements a PCIe 4.0 x2 lane, NVMe 1.4b interface, and its speed and size make it an alternative to M.2 gumstick NAND drives, and possibly microSD cards. Kioxia will demonstrate its XT2 product live at Embedded World 2022 in Nuremberg, Germany, June 21-23, in Hall 3A booth #3A-117, as well as the Flash Memory Summit 2022, August 2-4, at the Santa Clara Convention Center, California.
…
Platform9, an open distributed cloud company, has closed a $26 million financing round led by Celesta Capital, with participation from Cota Capital, NGP Capital, and other investors. This financing round comes on the back of three consecutive years of 100 percent growth in annual recurring revenue (ARR) for SaaS Kubernetes, led by enterprise interest in cloud-native distributed clouds. Former Oracle and Nokia sales executive Emilia A'Bell has joined as CRO and former Intel Corporate VP Ravi Jacob as CFO.
…
Seagate has commissioned a Multicloud Maturity Report, which says businesses that adopt multicloud data management are much better placed for success:
Businesses that use multicloud solutions are twice as likely to beat their revenue goals as counterparts that don't
They are six times more likely to go to market months ahead of their competition
Forecasting suggests their valuation is more likely to increase fivefold over the next three years
88 percent of UK respondents believed that their organization's adoption of cloud technologies is being hindered by storage costs
The UK is wasting a lot of important data, with 69 percent of organisations having deleted or discarded, in the last year, unstructured data that could have been used to create business value
Seagate CEO Dave Mosley said: “It should no longer be acceptable that companies can afford to save and use only a fraction of their data.” So have a multicloud setup and the CSPs should then buy more Seagate disk drives. Download the report here.
…
Cloud-native database supplier SingleStore said a GigaOm study reports it has a 50 percent lower TCO than MySQL and Snowflake, and a 60 percent lower TCO than PostgreSQL and Redshift. GigaOm also measured SingleStore performance across three industry-standard benchmarks: TPC-C (operational), TPC-H (analytical), and TPC-DS (analytical). SingleStore said its combination of transactional and analytical workloads eliminates performance bottlenecks and unnecessary data movement, thus lowering TCO. You should be able to download the GigaOm study from SingleStore's website.
…
SNIA's Cloud Data Management Interface (CDMI) is an ISO/IEC standard, ISO/IEC 17826. CDMI enables cloud solution vendors to meet the growing need for interoperability for data stored in the cloud. The CDMI standard is applicable to all types of clouds – private, public, and hybrid. It provides end users with the ability to control the destiny of their data and ensures data access, data protection, and data migration from one cloud service to another.
Separately, SNIA Swordfish version 1.2.4 extends support for NVMe and NVMe-oF storage devices to the Swordfish ecosystem, and a new technical white paper details the standardized approach, infrastructure, and mechanisms available in Swordfish for metrics and telemetry.
…
Snowflake wants more data to analyze, and has unified transactional and analytical data in a new Unistore offering. This puts it in competition with SingleStore and Dremio, which say customers can forget about running complex and time-consuming extract, transform, and load (ETL) procedures and instead analyze transactional data directly.
Unistore features Hybrid Tables which offer fast single-row operations and enable customers to build transactional business applications directly on its data cloud. Customers can perform analytics on transactional data for immediate context then join their Hybrid Tables with existing Snowflake Tables for a holistic view across all their data. Read a Snowflake blog to find out more. The Hybrid Tables feature is currently in preview.
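Snowflake's documentation has the specifics; as a hedged sketch of the pattern (connection details and table names below are hypothetical, and the feature was in preview at the time of writing), creating a Hybrid Table and joining it with a standard table might look like this through the Python connector:

```python
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="myaccount", user="me", password="...",   # hypothetical credentials
    warehouse="mywh", database="mydb", schema="public",
)
cur = conn.cursor()

# Hybrid Tables (Unistore) handle fast single-row operations; a primary key is required.
cur.execute("""
    CREATE HYBRID TABLE orders (
        order_id INT PRIMARY KEY,
        customer_id INT,
        amount NUMBER(10,2)
    )
""")
cur.execute("INSERT INTO orders VALUES (1, 42, 99.50)")

# Join the transactional rows with an existing analytical Snowflake table
# (a hypothetical 'customers' table) for the holistic view described above.
cur.execute("""
    SELECT c.name, SUM(o.amount)
    FROM orders o JOIN customers c ON o.customer_id = c.customer_id
    GROUP BY c.name
""")
print(cur.fetchall())
```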
…
Replicator WANdisco has announced preliminary full calendar 2021 year results:
Revenue for the year of $7.3 million (2020: $10.5 million)
Cash overheads of $41.5 million (2020: $36.9 million)
Adjusted EBITDA loss of $29.5 million (2020: $22.2 million)
Statutory loss from operations of $37.6 million (2020: $34.3 million)
Cash at 31 December 2021 of $27.8 million (2020: $21.0 million)
CEO and chairman Dave Richards said: “In 2022, the business is focused on continuing the acceleration of business started in H2 2021. As we have announced post period end, we have made significant achievements with major contract wins in the IoT space, in addition to continuing to secure large contracts for replicating on premises Hadoop data to the cloud. With our recent contract wins, unique set of solutions and high visibility of near-term pipeline, we remain confident in our ability to significantly improve results in FY22.”
…
ReRAM developer Weebit Nano will participate at Leti Innovation Days, CEA-Leti’s annual three-day flagship event being held June 21-23 in Grenoble, France. Weebit’s VP of Marketing and Business Development, Eran Briman, will present “ReRAM Moves from Lab to Fab,” highlighting the range of applications that can benefit from the low power consumption, low cost, high temperature reliability, and process simplicity of Weebit ReRAM. Register for the event here.
…
Yellowbrick Data has announced availability of the latest release of its cloud data warehouse which it says:
Can be deployed on premises or in AWS, with Azure and Google Cloud support in Q3.
Runs entirely in the customer's own cloud account – eliminating security and compliance challenges and building on customers' existing commercial cloud commitments – maximizing infrastructure savings.
Delivers true separation of compute and storage, allowing for elastic scaling on demand.
Integrates with data lakes using Parquet as the data interchange format.
Is consumable on-demand or through a fixed capacity subscription.
Features a Unified Control Plane powered by SQL, supporting provisioning, managing, monitoring, and cost-control across any topology.
Yellowbrick cloud data warehouse
…
Zerowait CEO Mike Linnett tells us: “Like most OEMs, NetApp institutes artificial end of support/end of software dates and tells customers there is no option but to buy a new system if they want any kind of official support from NetApp. There will be no software or firmware upgrades or security patches for EOL/EOS systems. In this case, the end of 2022 is a hard deadline for FAS255X systems. As usual, these end dates are well in advance of the systems’ actual need to be replaced. They are working fine, and especially with small systems such as the FAS255X these customers do not need more storage, more speed, or more bells and whistles. We find that our customers are fine with what they have, and there are many other NetApp customers that are looking to extend the life of their hardware due to current economic conditions and forecasts.”
He added:
Old infrastructure works, it’s paid for, and Zerowait can keep it running reliably and affordably.
New NetApp equipment will be expensive and require a data migration.
MinIO has followed Dell and Pure Storage by making its data stores available to Snowflake’s cloud data warehouse processing algorithms.
Satish Ramakrishnan, executive at MinIO, says in a blog that Snowflake has added external table support so Snowflake SQL compute can come to the data, so to speak, instead of customers having to select and move their data from an on-premises store into Snowflake’s cloud.
Ramakrishnan says this “will not change the locations where Snowflake runs – it will still run exclusively in the three major public clouds (AWS, GCP, Azure). It will, however, remove the requirement that all the data be stored in Snowflake for Snowflake to operate on it.”
If, for example, you have an object bucket in your MinIO store, you can configure it so that any SnowSQL command can access it, just as if it were a local Snowflake table. Data is still actually copied and moved up into the Snowflake cloud. How long does that take?
“The time taken by the initial query will depend on the amount of data that is being transferred, but reads are cached and subsequent queries may be faster till the next refresh,” says Ramakrishnan. “Using the external table approach the data does not need to be copied and the bucket can be used as an external table for queries, joins, etc.”
The benefits, he claims, are:
It extends the capabilities of the warehouse without incurring the cost of the move
The ability to run analysis on real-time data is now available
Moving data only to run an ad hoc query can be completely avoided
Analysis is possible in instances when the data cannot be moved for compliance or other business reasons
You still get all the advantages of the Snowflake capabilities with the same resources who are already familiar with the Snowflake platform
His post contains MinIO CLI code examples to accomplish this:
MinIO Snowflake external table code
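The actual commands are in his post; purely as a hedged illustration of the general pattern (bucket, endpoint, and credential values are placeholders, and exact stage options vary by Snowflake release), an external stage and external table over an S3-compatible MinIO endpoint might be declared like this:

```python
import snowflake.connector  # the same statements can be run from SnowSQL

conn = snowflake.connector.connect(account="myaccount", user="me", password="...")  # hypothetical
cur = conn.cursor()

# Point a stage at an S3-compatible MinIO endpoint (names and keys are placeholders).
cur.execute("""
    CREATE STAGE minio_stage
      URL = 's3compat://analytics-bucket/events/'
      ENDPOINT = 'minio.example.com'
      CREDENTIALS = (AWS_KEY_ID = '<access-key>' AWS_SECRET_KEY = '<secret-key>')
""")

# Declare an external table over the bucket; the data itself stays in MinIO.
cur.execute("""
    CREATE EXTERNAL TABLE events_ext
      LOCATION = @minio_stage
      FILE_FORMAT = (TYPE = PARQUET)
      AUTO_REFRESH = FALSE
""")

# Query it like any other table; rows surface through a VARIANT 'value' column.
cur.execute("SELECT value FROM events_ext LIMIT 10")
print(cur.fetchall())
```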
Despite the Dell and Pure precursors, “MinIO can become the global datastore for Snowflake customers – wherever their data sits.” This is because “MinIO is a high-performance, cloud-native object store. It will work on all flavors of Kubernetes (upstream, EKS, AKS, GKE, etc.) as well as on virtual machines like the public cloud VMs, VMWare, etc. and on bare-metal hardware.”
The Dell and Pure alternatives work on those suppliers’ own hardware too.
Ramakrishnan also claims there are limitations in other object stores accessible through Snowflake’s S3 endpoint support: “While Snowflake will support S3 endpoints (which naturally includes other object stores), those object stores are not capable of running in all of the places that an enterprise keeps its data… To achieve a consistent, data anywhere strategy, enterprises will need to adopt MinIO.”
A Snowflake document explains how to download and install SnowSQL to accomplish MinIO support of external tables.
Data cyber security and backup supplier Acronis has launched a Data Loss Prevention (DLP) pack for Acronis Cyber Protect Cloud available in an Early Access Program. Acronis says it doesn’t require months for deployment or highly skilled teams to maintain it.
Protects sensitive data transferred via various user and system connections, including instant messaging and peripheral devices.
Uses Acronis Cyber Protect Cloud console and agent for data visibility and classification.
Offers out-of-box data classification templates for common regulatory frameworks including GDPR, HIPAA and PCI DSS.
Provides continuous monitoring for DLP incidents with multiple policy enforcement options, enabling ongoing automated policy adjustment to business-specifics.
Includes robust audit and logging capabilities, giving administrators the ability to respond effectively to DLP events and conduct post-breach forensic investigations.
…
AWS has announced three new EC2 instances, including:
EC2 R6id Instances with NVMe Local Instance Storage of up to 7.6 TB: Amazon EC2 R6id instances are equipped with NVMe SSD local instance storage, designed to power applications that require low storage latency or require temporary swap space. These sixth-generation instances offer generational improvement, including a 15% increase in price performance, 33% more vCPUs, up to 1 TB memory, 2x networking performance, 2x EBS performance, and global availability.
EC2 M6id instances: M6id instances are powered by gen 3 Xeon SPs and deliver up to 15% better price performance compared to previous generation M5d instances. M6id instances are ideal for workloads that require a balance of compute and memory resources along with high-speed, low-latency local block storage, including data logging and media processing.
EC2 C6id instances: Also powered by gen 3 Xeon SPs, C6id instances offer up to 138% higher TB storage per vCPU and 56% lower cost per TB when compared to previous generation instances. C6id instances are ideal for compute-intensive workloads, including those that need access to high-speed, low-latency local storage, such as video encoding, image manipulation, and other forms of media processing.
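As a hedged example of provisioning one of these with boto3 (the AMI and subnet IDs are placeholders), the local NVMe instance-store volumes show up automatically as block devices on the launched instance:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one R6id instance; its NVMe instance-store volumes appear as local block
# devices without any extra configuration. The IDs below are placeholders.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical Amazon Linux AMI
    InstanceType="r6id.4xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # hypothetical subnet
)
print(resp["Instances"][0]["InstanceId"])
```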
…
Dell has announced enhancements for its Unity XT arrays, including:
Online data-in-place controller upgrades when more performance, capacity, and higher system limits are required, ensuring the initial investment is protected
Support for new synchronous file replication configurations with fan-out and cascading topologies, enhancing disaster recovery operations for the business
Additional system efficiencies from dynamic pools and inline data reduction software for both hybrid and all-flash pools on Unity XT HFAs
…
HCI HW and Kubernetes lifecycle manager Diamanti has upgraded its Spektra Enterprise product, a cloud-native software stack for deploying and managing containerized applications. Centralized Application Catalogs provide more control over open-source components and versions deployed into production to mitigate security vulnerabilities. Users can set up one or more people to receive email alerts in the event of critical application failure events and/or application disaster recovery.
…
DPU hardware and composability SW startup Fungible has announced a new release of Fungible Storage Cluster (FSC) 4.1, providing support for VMware vSphere environments requiring high performance. Customers can plug Fungible's FSC storage into their ESXi servers with NVMe/TCP and get what appears to be local storage. The resulting performance is nearly identical to local storage even though it is a shared resource.
…
Micron has announced upcoming availability of two new consumer storage SSDs, the Crucial P3 Plus Gen4 NVMe and Crucial P3 NVMe.
The P3 Plus SSD product line will provide sequential read/write speeds up to 5000/4200MB/sec, while P3 SSDs will provide read/write speeds up to 3500/3000MB/sec. Both drives will be available in capacities up to 4TB. The P3 Plus Gen4 NVMe SSD, built with 176-layer 3D NAND, delivers load times and data transfers that are nearly nine times faster than SATA3 SSDs and up to 43% faster than the fastest Gen3 SSDs, Micron said. The P3 Gen3 NVMe SSD has load times six times faster than SATA SSDs and over twenty times faster than hard disk drives, while offering performance that is 45% faster than the previous generation, the vendor added. Both drives should be available this summer.
Veeam founders Ratmir Timashev and Andrei Baronov have launched Object First to provide on-premises object storage facilities for Veeam backup.
Timashev and Baronov founded backup supplier Veeam in 2006 and it grew rapidly on the back of server virtualization. There was no external funding until Insight Venture Partners injected $500 million in January 2019. Veeam was subsequently bought by Insight Venture Partners for around $5 billion a year later. Baronov left in February 2020 but Timashev stayed on as a contracted consultant and is still listed as such on LinkedIn.
Object First came out of stealth at VeeamON 2022 with a functioning appliance. In a VMblog, Anthony Cusimano, director of marketing for Object First, said it “provides primary target object storage for Veeam,” in the form of a hardware device that fits in a rack, and software.
Rear of Object First appliance with lid removed. It shows two CPUs. The entire device is shown below.
Video screen grab of Object First device.
This device is accessed using Amazon's S3 protocol and is owned and stored locally. It offers immutability, claimed instant recovery, and security. Object First will work solely with Veeam Backup & Replication v12 (VBR12), which can write to an S3 object storage target.
With VBR12, Cusimano claimed, “Object First can be up and running and storing Veeam Backups immutably in under 15 minutes.” It uses the equivalent of the AWS S3 Object Lock technology for immutability and this is built into the box. Immutability can be set to last for up to 999 days.
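Object First has not published API details, but since the box presents an S3 interface with Object Lock-style immutability, a generic S3 Object Lock write (shown here with boto3 against a hypothetical endpoint and bucket) illustrates the mechanism:

```python
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectfirst.local:9000",  # hypothetical on-premises S3 endpoint
    aws_access_key_id="<key>",
    aws_secret_access_key="<secret>",
)

# The bucket must have been created with Object Lock enabled. This write cannot be
# deleted or overwritten until the retain-until date passes.
s3.put_object(
    Bucket="veeam-backups",
    Key="job-42/backup-2022-06-20.vbk",
    Body=open("backup-2022-06-20.vbk", "rb"),
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```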
Why S3 object storage? In Cusimano’s view, “We see object storage as the untapped resource of the future. When properly utilized, it helps mitigate ransomware, brings universal simplicity into the storage space, and delivers ease of scale without the pain of breaking the budget.”
There is a YouTube video of VMBlog interviewing Cusimano at VeeamON 2022 in Las Vegas in May.
Cusimano video at VeeamON 2022.
In the video Cusimano says “We offer a target object appliance [which will] allow Veeam backup users to write direct to object storage on-prem.” The video shows a 3 or 4 RU rackmount box.
Cusimano says the key differentiators include “the affordability of the device, paired with security, paired with performance.” The device delivers more than 1GB/sec of throughput on a single node. Cusimano demonstrated setting up an Object First device cluster, thus implying that it is a scale-out system.
Object First screen shot of setup process.
Object First is part of the Veeam Technical Alliance program and has a website which is basically a placeholder, offering you the chance to be on its mailing list.
We are all watching a slow-motion collision as datacenter-oriented filers from NetApp and others meet cloud-centric filers from Nasuni and its peers.
The well-established network attached storage (NAS) concept is to have a filer in a datacenter providing external file storage I/O services to users and applications on local and remote servers. The two leading suppliers are NetApp (ONTAP) and Dell (Isilon/PowerScale) with Qumulo (CORE) and others such as VAST Data coming on strong.
For these suppliers the public cloud represents competition. The public cloud can replace on-premises filers but it can also enhance them in a hybrid and multi-public cloud environment. Thus NetApp has ONTAP present in the main public clouds and customers can move applications using ONTAP to, from, and between their datacenters and the public clouds, and find the familiar ONTAP environment in each.
Dell is moving in the same sort of direction with its APEX initiative, as is Qumulo.
The core of these companies is the on-premises filer. The public cloud represents a burst destination for their filer users’ applications and a place for some applications to run, while others – such as data sovereignty-limited, or perhaps mission-critical applications – stay on premises. The cloud can also be a place to offload older, staler, data into lower cost S3-type object stores.
Nasuni’s inversion
Nasuni, which comes from the sync'n'share file collaboration market, inverts this model. Its core is in the public cloud, with accessing applications in datacenters of all sizes – from central facilities to edge sites such as retail outlets – treated as remote access users equipped with edge caches, either physical boxes or virtual machines.
The company’s UniFS file system stores all its file data in S3-type online object storage vaults in the cloud, not offline archive tiers. It uses the edge (filer) caches and its algorithms to provide fast local access.
Western Digital is a Nasuni customer that synchronizes and shares large design and manufacturing files between its global sites. Such file sync happens in less than ten minutes, instead of hours and hours.
This cloud-as-core model is also used by CTERA.
Ransomware Protection as a Service
Nasuni is changing from just providing file storage and infrastructure to providing add-on cloud data services, such as file system analytics and Ransomware Protection as a Service (RPaaS). It detects incoming ransomware file patterns, such as specific file extensions, and activity anomalies.
It will move to stop a ransomware source acting on file data if a customer sets a policy. Chief product officer Russ Kennedy said automated recovery will be added next year. “Nasuni customers have billions of potential recovery points through every file system change being recorded in an immutable way.”
Its software captures all metadata changes – anything up to the root – and puts them in small manifest files.
Nasuni and NetApp
While not as big as NetApp, Nasuni is a substantial startup. We were told by Kennedy at a June IT Press Tour briefing that it has more than 680 customers and 13,600 edge locations worldwide. Over 10 customers are paying Nasuni $1 million a year and 187 customers are paying more than $300,000/year. Its strategy is to go public.
Nasuni sees itself having a $5 billion total addressable market (TAM) in cloud file services over 2021/2022, with NetApp having an equal TAM, along with the major cloud providers.
In a way Nasuni has parked its cloud object store-backed edge filer cache tanks on NetApp’s on-premises lawn. As has CTERA. How will the on-premises filer suppliers respond?
We may well see the adoption of cloud-based core file storage technology and access to/from remote sites by Dell, NetApp, Qumulo, and their peers as they respond to the market dynamics.
We asked Kennedy about NetApp, Dell, and Qumulo doing this. He said it would take years for them to build a similar cloud-based structure. For example, Nasuni has an orchestration center that handles global file locking. It is a cloud service unique to Nasuni, Kennedy said, that uses DynamoDB and elastic services.
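Nasuni has not published how that service works beyond naming DynamoDB, so here is a generic sketch (not Nasuni's code) of how a global file lock can be built on DynamoDB conditional writes, with hypothetical table and attribute names:

```python
import time
import boto3

table = boto3.resource("dynamodb").Table("global-file-locks")  # hypothetical table

def acquire_lock(path: str, owner: str, ttl_seconds: int = 60) -> bool:
    """Try to take the lock on a file path; the conditional write fails if another
    edge appliance already holds an unexpired lock on the same path."""
    now = int(time.time())
    try:
        table.put_item(
            Item={"path": path, "owner": owner, "expires": now + ttl_seconds},
            ConditionExpression="attribute_not_exists(#p) OR #exp < :now",
            ExpressionAttributeNames={"#p": "path", "#exp": "expires"},
            ExpressionAttributeValues={":now": now},
        )
        return True
    except table.meta.client.exceptions.ConditionalCheckFailedException:
        return False

def release_lock(path: str, owner: str) -> None:
    """Release the lock, but only if this owner still holds it."""
    table.delete_item(
        Key={"path": path},
        ConditionExpression="#o = :owner",
        ExpressionAttributeNames={"#o": "owner"},
        ExpressionAttributeValues={":owner": owner},
    )
```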
This may be a key difference between Nasuni and the on-premises/hybrid cloud filers. The difficulty inherent in building an equivalent cloud-based infrastructure from scratch is indicated by something Kennedy told us: Nasuni has had talks with on-premises providers about them using its UniFS cloud facilities. They could get cloud-based remote site sync and so forth via UniFS talking to their filers.
But the talks have not led to this actually happening, nor any other partnering activity. In a way, we have reached a sort of equilibrium. Cloud-based Nasuni has an on-premises filer presence with its edge caches, but these are not full-blown filers. The on-premises filer suppliers – Dell, NetApp, Qumulo, etc. – have and are building a cloud presence, but this is not as capable as the ones Nasuni and CTERA have built.
Unless customers show a significant preference for cloud-based file system technology, both the on-premises and cloud-based filers will grow. They’ll collide, but there will be no outright winner. Because unstructured data is growing and a rising tide lifts all boats.
The Tech Tackles Cancer non-profit initiative will bring vendors together in a "Battle of the Technology Rockstars" on June 21 at The Sinclair music venue in Cambridge, Massachusetts, to raise cash for fighting pediatric cancer.
The initiative was founded by AtScale CEO and technology veteran Christopher Lynch.
The event has a live band karaoke singing competition and Ken Steinhardt, field CTO at Infinidat, will be featuring. He has had close friends and direct family members affected by cancer, and is running a personal fundraising campaign that will directly benefit this charity.
From left: Chris Lynch, Ken Steinhardt, Nathan Hall, Steve Duplessie, and George Hope
On stage, he will be going up against Nathan Hall, VP of Engineering at Pure Storage, Steve Duplessie, founder and senior analyst of Enterprise Strategy Group/TechTarget, George Hope, worldwide head of partner sales at HPE, and a dozen other "rockstar" executives from the storage industry.
The live audience and spectators watching the performances via LinkedIn Live will be able to vote for their favorite performers.
Tech Tackles Cancer sponsors
Tech Tackles Cancer, which has been running for six years, is returning to an in-person event after a hiatus of more than two years due to the COVID pandemic. To date it has raised more than $2 million in donations for organizations focused on pediatric cancer treatment and research, including St Baldrick’s and One Mission. This year, the goal is to raise over $300,000 for pediatric cancer-related causes.
Steinhardt said: “Tech Tackles Cancer is a cause that I have strong affinity for. To do something for such a good cause and mix it with some good rock music; it doesn’t get any better. I am encouraged by how so many tech companies have responded to this cause, saying ‘I’m on board. How can I help?’ The attitude across the industry has been amazing. It’s a beautiful thing when we’re collaborating for the right reasons.”
Durham University’s DiRAC supercomputer is getting composable GPU resources to model the evolution of the universe, courtesy of Liqid and its Matrix composability software.
The UK’s Durham University DiRAC (Distributed Research utilising Advanced Computing) supercomputer department has both Spectra Logic tape libraries and Rockport switchless networks taking advantage of the university’s largesse.
Liqid has put out a case study about selling its Matrix composable system to Durham so researchers can get better utilization from their costly GPUs.
Durham University is part of the UK’s DiRAC infrastructure and houses COSMA8, the eighth iteration of the COSmology MAchine (COSMA) operated by the Institute for Computational Cosmology (ICC) as a UK-wide facility. Specifically, Durham provides researchers with large memory configurations needed for running simulations of the universe from the Big Bang onwards, 14 billion years ago. Such simulations of dark matter, dark energy, black holes, galaxies and other structures in the Universe can take months to run on COSMA8. Then there can be long periods of analysis of the results.
Durham University’s DiRAC supercomputer building.
More powerful supercomputers are needed – exascale ones. A UK government ExCALIBUR (Exascale Computing Algorithms and Infrastructures Benefitting UK Research) programme has provided £45.7 million ($55.8 million) of funding to investigate new hardware components with potential relevance for exascale systems. Enter Liqid.
The pattern of compute work at Durham needs GPUs, and if these are physically tied to particular host servers they can be dedicated to specific jobs and then stand idle while new jobs start on other servers with fewer GPUs.
The utilization level of the expensive GPUs can be low, and the problem of multiple, large, partially overlapping jobs will get worse at the exascale level. More GPUs will be needed and their utilization will remain low, driving up power and cooling costs. At least that is the fear.
Liqid’s idea is that GPUs and other server components, such as CPUs, FPGAs, accelerators, PCIe-connected storage, Optane memory, network switches and, in the future, DRAM with CXL, can all be virtually pooled. Then server configurations, optimized for specific applications, can be dynamically set up by software pulling out resources from the pools and returning them for re-use when a job is complete. This will drive up individual resource utilization.
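Liqid's Matrix software does this at the PCIe fabric level; purely as a toy illustration of the scheduling idea (not Liqid's API), a shared GPU pool that jobs borrow from and return to looks like this:

```python
from collections import deque

class GpuPool:
    """Toy model of a composable GPU pool: jobs borrow exactly the GPUs they need
    and return them when done, instead of GPUs being fixed to particular servers."""
    def __init__(self, gpu_ids):
        self.free = deque(gpu_ids)

    def compose(self, count):
        if count > len(self.free):
            raise RuntimeError("not enough free GPUs in the fabric")
        return [self.free.popleft() for _ in range(count)]

    def release(self, gpus):
        self.free.extend(gpus)

pool = GpuPool([f"A100-{i}" for i in range(10)])
job_gpus = pool.compose(3)  # attach three A100s to a fat compute node for one job
# ... run the simulation ...
pool.release(job_gpus)      # the GPUs return to the pool for the next workload
```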
Alastair Basden, technical manager for the DiRAC Memory Intensive Service at Durham University, was introduced to the concept of composable infrastructure by Dell representatives. Basden is composing NVIDIA A100 GPUs to servers with Liqid’s composable disaggregated infrastructure (CDI) software. This enables researchers to request and receive precise GPU quantities.
Basden said: “Most of our simulations don’t use GPUs. It would be wasteful for us to populate all our nodes with GPUs. Instead, we have some fat compute nodes and a login node, and we’re able to move GPUs between those systems. Composing our GPUs gives us flexibility with a smaller number of GPUs. We can individually populate the servers with one or more GPUs as required at the click of a button.”
Durham University’s Liqid composable system diagram
“Composing our GPUs can improve utilisation, because now we don’t have a bunch of GPUs sitting idle,” he added.
Basden noted an increase in the number of researchers exploring artificial intelligence and expects their workloads to need more GPUs, and Liqid’s system will support their importation: “When the demand for GPUs grows, we have the infrastructure in place to support more acceleration in a very flexible configuration.”
He is interested in the notion of DRAM pooling and the amounts of DRAM accessed over CXL (PCIe gen 5) links. Each of the main COSMA8 360 compute nodes at Durham is configured with 1TB of RAM. Basden said: “RAM is very expensive – about half the cost of our system. Some of our jobs don’t use RAM very effectively – one simulation might need a lot of memory for a short period of time, while another simulation doesn’t require much RAM. The idea of composing RAM is very attractive; our workloads could grab memory as needed.”
“When we’re doing these large simulations, some areas of the universe are more or less dense depending on how matter has collapsed. The very dense areas require more RAM to process. Composability lets resources be allocated to different workloads during these times and to share memory between the nodes. As we format the simulation and come to areas that require more RAM, we wouldn’t have to physically shift things around to process that portion of the simulation.”
Liqid thinks all multi-server data centres should employ composability. If you have one with low utilization rates (say 20 percent or so) for PCIe or Ethernet or InfiniBand-connected components like GPUs, it says you should consider giving it a try.
Backblaze says it is the first independent cloud provider to offer a cloud replication service which helps businesses apply the principles of 3-2-1 data protection within a cloud-first or cloud-dominant infrastructure. Data can be replicated to multiple regions and/or multiple buckets in the same region to protect against disasters and political instability, and for business continuity and compliance. Cloud Replication is generally available now and easy to use: Backblaze B2 Cloud Storage customers can click the Cloud Replication button within their account, set up a replication rule, and confirm it. Rules can be deleted at any time as needs evolve.
…
William Blair analyst Jason Ader told subscribers about a session with Backblaze’s CEO and co-founder, Gleb Budman, and CFO, Frank Patchel, at William Blair’s 42nd Annual Growth Stock Conference. Management sees a strong growth runway for its B2 Cloud business (roughly 36% of revenue today) given the immense size of the midmarket cloud storage opportunity (roughly $55 billion, expanding at a mid-20% CAGR). Despite coming off a quarter where it cited slower-than-expected customer data growth in B2 (attributed to SMB macro headwinds), management expects that a variety of post-IPO product enhancements (e.g., B2 Reserve purchasing option, Universal Data Migration service, cloud replication, and partner API) and go-to-market investments (e.g., outbound sales, digital marketing, partnerships and alliances) will begin to bear fruit in coming quarters.
…
DPaaS supplier Cobalt Iron has received U.S. patent 11308209 on its techniques for optimization of backup infrastructure and operations for health remediation by Cobalt Iron Compass. It will automatically restore the health of backup operations when they are affected by various failures and conditions. The techniques disclosed in this patent:
Determine the interdependencies between various hardware and software components of a backup environment.
Monitor for conditions in local or remote sites that could affect local backups. These conditions include:
Indications of a cyberattack
Security alert conditions
Environmental conditions including severe weather, fires, or floods
Automatically reprioritize backup operations to avoid or remediate impacts from the conditions (e.g., discontinue current backup operations or redirect backups to another site not impacted by the condition).
Dynamically reconfigure the backup architecture to direct backup data to a different target storage repository in a different remote site, or in a cloud target storage repository, that is unaffected by the condition.
Automatically extend retention periods for backup data and backup media based on the conditions.
Dynamically restrict or remove access to or disconnect from the target storage repository after backup operations complete.
…
Cohesity is playing the survey marketing game. The data protection and management company transitioning to as-a-service commissioned research that looked at UK-specific data from an April 2022 survey of more than 2,000 IT decision-makers and Security Operations (SecOps) professionals in the United States, the United Kingdom, and Australia. Nearly three-quarters of respondents (72%) in the UK believe the threat of ransomware in their industry has increased over the last year, with more than half of respondents (51%) saying their organisation has been the victim of a ransomware attack in the last six months. So … naturally … buy Cohesity products to defeat ransomware.
…
Commvault says its Metallic SaaS backup offering has grown from $1 million to $50 million annual recurring revenue (ARR) in six quarters. It has more than 2,000 customers, with availability in more than 30 countries around the globe. The Metallic Cloud Storage Service (MCSS), which is used for ransomware recovery, is getting a new name: Metallic Recovery Reserve. Following Commvault's acquisition of TrapX in February, it is launching an early access programme for ThreatWise this week. ThreatWise is a warning system to help companies spot cyberattacks and enable a response, helped by tools for recoverability.
…
The STAC benchmark council says Hitachi Vantara has joined it. Hitachi Vantara provides the financial services industry with high-performance parallel storage, including object, block, and distributed file systems, along with services that specialize in storage I/O, throughput, latency, data protection, and scalability.
…
Kioxia has certified Dell PowerEdge R6525 rack servers for use with its KumoScale software. Certified KumoScale software-ready system configurations are available through Dell's distributor, Arrow Electronics. Configurations offered include single and dual AMD EPYC processors and Kioxia CM6 NVMe SSDs, and have capacities of up to 153TB per node. KumoScale is Kioxia's software to virtualise and manage boxes of block-access, disaggregated NVMe SSDs. "Kumo" means "cloud" in Japanese.
…
A William Blair analyst reported to subscribers about a fireside chat with Nutanix CFO Rukmini Sivaraman and VP of Investor Relations Richard Valera at William Blair's 42nd Annual Growth Stock Conference. Management still has high conviction in the staying power of hybrid cloud infrastructure, and Nutanix's growing renewal base will provide it with a predictable, recurring base of growth as well as greater operating leverage over time. Management also noted that beyond Nutanix's core hyperconverged infrastructure (HCI) products, Nutanix Cloud Clusters is serving as a bridge for customers looking to efficiently migrate to the cloud.
…
Pavilion Data Systems, describing itself as the leading data analytics acceleration platform provider and a pioneer of NVMe-Over-Fabrics (NVMe-oF), said that EngineRoom is using Pavilion’s NVMe-oF tech for an Australian high-performance computing cloud. Pavilion replaced two full racks from a NAS supplier with its HyperOS and its 4 rack-unit HyperParallel Flash Array. It says EngineRoom saved significant data center space by using Pavilion storage to improve HPCaaS analytics. The data scientists at EngineRoom deployed Pavilion with the GPU clusters in order to push the boundaries of possibility to develop and simulate medical breakthroughs, increase crop yields, calculate re-entry points for spacecraft, identify dark matter, predict credit defaults, render blockbuster VFX, and give machine vision to robotics and UAVs.
Stefan Gillard, Founder and CEO of EngineRoom, said: “We need NVMe-oF, GPU acceleration, and the fastest HPC stack. Pavilion is a hidden gem in the data storage landscape.”
…
Phison Electronics Corp. unveiled two new PCIe Gen4x4 Enterprise-Class SSDs: the EPR3750 in M.2 2280 form factor and the EPR3760 in M.2 22110 form factor. Phison says they are ideal for use as boot drives in workstations, servers, and in Network Attached Storage (NAS) and RAID environments. The EPR3750 SSD is shipping to customers as of May 2022, and the EPR3760 SSD will be shipping in the second half of 2022.
…
Qumulo has introduced a petabyte-scale archive offering that enhances Cloud Q as a Service on Azure. It is new serverless storage technology, the only petabyte-scale multi-protocol file system on Azure, enabling Qumulo to bring out a new “Standard” offering with a fixed 1.7 GB/sec of throughput with good economics at scale. Qumulo’s patented serverless storage technology creates efficiency improvements resulting in cloud file storage that is 44% lower cost than competitive file storage solutions on Azure. Qumulo CEO Bill Richter said: “The future of petabyte-scale data is multi-cloud. … Qumulo offers a single, consistent experience across any environment from on-premises to multi-cloud, giving our customers the tools they need to manage petabyte-scale data anywhere they need to.” Qumulo recently unveiled its Cloud Now program, which provides a no-cost, low-risk way for customers to build proofs of concept up to one petabyte.
…
SK hynix is mass-producing its HBM3 memory and supplying it to Nvidia for use with its H100, the world’s largest and most powerful accelerator. Systems are expected to ship starting in the third quarter of this year. HBM3 DRAM is the 4th generation HBM product, succeeding HBM (1st generation), HBM2 (2nd generation) and HBM2E (3rd generation). SK hynix’s HBM3 is expected to enhance accelerated computing performance with up to 819GB/sec of memory bandwidth, equivalent to the transmission of 163 FHD (Full-HD) movies (5GB standard) every second.
…
Cloud data warehouser Snowflake announced a new cybersecurity workload that enables cybersecurity teams to better protect their enterprises with the Data Cloud. Customers gain access to Snowflake’s platform to natively handle structured, semi-structured, and unstructured logs. They can store years of high-volume data, search with scalable on-demand compute resources, and gain insights using universal languages like SQL and Python, currently in private preview. Customers already using the new workload include CSAA Insurance Group, DoorDash, Dropbox, Figma, and TripActions.
…
SoftIron says Earth Capital, its funding VC, made a mistake, quite a big one, when calculating how much carbon dioxide emissions were saved by using SoftIron HW and SW. The original Earth Capital report stated that for every 10PB of data storage shipped by SoftIron, an estimated 6,656 tonnes of CO2e is saved by reduced energy consumption alone. The actual saving for a 10PB cluster is 292 tonnes. This is 23 times less, and roughly the same weight as a Boeing 747 – still an impressive number when put into full context. Kudos to SoftIron for 'fessing up to this.
…
VAST Data CMO and co-founder Jeff Denworth has blogged about the new Pure Storage FlashBlade//S array and Evergreen services, taking the stance of an upstart competing with an established vendor. Read “Much ado about nothing” to find out more.
…
Distributed data cloud warehouser Yellowbrick Data has been accepted into Intel’s Disruptor Initiative. Intel and Yellowbrick will help organizations solve analytics challenges through large-scale data warehouses running in hybrid cloud environments. The two are testing Intel-based instances on Yellowbrick workloads across various cloud scenarios to deliver optimal performance. Yellowbrick customers and partners will benefit from current and future Intel Xeon Scalable processors and software leveraging built-in accelerators, optimized libraries, and software that boost complex analytics workloads. Yellowbrick uses Intel technology from the CPU through the network stack. Intel and Yellowbrick are expanding joint engineering efforts that will accelerate performance.
HYCU has pulled in a $53 million B-round, giving it yet more cash for its SaaS-based data protection offerings.
The round saw participation from all the A-round investors, including big names Bain Capital and Acrew Capital. Atlassian Ventures and Cisco Capital joined in as strategic investors. The cash will help to pay for several go-to-market initiatives including bringing to market a developer-led SaaS service and other offerings. There will be new positions in alliances, product marketing and customer success.
Simon Taylor.
CEO Simon Taylor said: “Adding strategic investment from Atlassian Ventures and Cisco Investments, along with the ongoing support from Bain Capital and Acrew, is a testament to what the team has developed and is delivering to customers worldwide.” He continued: “HYCU fundamentally believes there is a better way to solve data protection needs, and we are on track to deliver a profoundly simple and powerful solution before the end of the year.” Sounds interesting.
We hear that Taylor and his execs weren’t looking for new funding but, once Acrew Capital approached them this year, the interest from other investors became significant.
This B-round comes 15 months after HYCU raised $87.5 million in its A-round and takes total funding to $140.5 million.
HYCU’s roots go back a long way, having been spun out of Comtrade Software in 2018, and Comtrade has a history going back to 2016 and earlier. HYCU is growing fast. It enjoyed year-on-year bookings growth of 150 percent in 2021 and had a successful first 2022 quarter close, with projections to achieve the same growth rate in 2022.
HYCU tripled revenue in the past 12 months, maintained a 135 percent net-retention rate and a 91 net promoter score (NPS) – which it claims is the highest in the industry among data protection companies – and saw a 4x increase in valuation in the last year.
Matt Sonefeldt, head of investor relations at Atlassian Ventures, said “We’re excited to welcome HYCU to the Atlassian Ventures family and believe its approach to data protection as a service creates immense potential for our 200,000+ cloud customers.” That’s a nice potential selling opportunity for HYCU.
HYCU is now well-funded to weather any downturn and well-positioned with regard to the two great forces transforming the data protection industry: changing on-premises backup software to SaaS offerings, and the whole cyber resiliency/anti-ransomware movement. HYCU is on top of both trends and virtually every other vendor is doing the same. For example, Veritas is making NetBackup a SaaS-based product. Cohesity is moving its products set to services. Commvault has its Metallic SaaS offering. Veeam is on the SaaS train. This is a rising tide lifting all boats.
And all vendors now regard ransomware countermeasures as table stakes. Anyone not adopting them is going to get left behind.