
Supply chain woes dampen Dell storage revenues in Q4 of fiscal ’22

Dell’s full fiscal 2022 results include pockets of storage glory, but PC sales led the charge, with servers following behind, as the US business battled supply chain problems that restricted order fulfilment.

The Texan-headquartered tech giant grew revenues to $101.2bn for the year ended 28 January, a 17 per cent year-on-year rise driven by record PC shipments and some growth across the infrastructure division. Profit was $5.71bn, up 62.8 per cent year-on-year, and it could have been even higher, as we shall see by looking at the fourth-quarter numbers.

Jeff Clarke, vice chairman and co-COO at Dell Technologies, said in a statement: “Fiscal 2022 was the best year in Dell Technologies history. We reached more than $100 billion in revenue and grew 17 per cent – a huge achievement and ahead of our long-term growth targets.”

CFO Tom Sweet added: “We achieved a number of milestones that unleashed shareholder value. We generated cash flow of $10.3 billion, achieved investment grade rating, and spun-off VMware.”

Fourth-quarter revenues increased 16 per cent annually to $28bn but they were down sequentially from $28.4bn in the third quarter. The traditional seasonal pattern for Dell is for Q4 to be higher than Q3. That happened in fiscal 2021, 2020, 2019, and 2018. What’s worse, Q4’s profit was just $1m, according to Dell’s financial tables.

See the net income number in the bottom row of the table.

A look at Dell’s Q4 financial tables suggests this was due to VMware dividends and debt repayments.

Supply chain

Clarke said in prepared remarks: “The global supply chain shortage of semiconductors and global logistics challenges for goods and components continues to impact just about every industry. We are still experiencing shortages of integrated circuits across a wide range of devices, including network controllers and microcontrollers…. freight costs have continued to rise… we expect PC backlog to grow in Q1… Our higher margin ISG backlog increased again in Q4 to a record level due to a combination of very strong demand and a lack of component availability. We expect our ISG backlog to remain elevated through at least the first half of the year as part shortages continue.” 

The Kioxia/WD chemical contamination incident could make things worse. “We are awaiting information from the recent NAND contamination announcement from Kioxia/WD to evaluate the impact on Dell,” said Clarke.

Other suppliers such as Quantum and NetApp are also suffering supply chain woes, although NetApp just recorded its seventh growth quarter in a row.

A look at ISG and CSG 

Dell has two overarching business units: the Client Solutions Group (CSG) – PCs basically – and the Infrastructure Solutions Group (ISG), which sells servers, networking, and storage. CSG revenues in Q4 were $17.33bn, a rise of 26 per cent. PCs and laptops must have been flying off the warehouse shelves. ISG did not do so well: revenues were $9.2bn, a relatively anaemic 3 per cent rise, though it was the unit’s fourth consecutive quarter of year-on-year growth.

The chart above shows how CSG revenues are outstripping those of the ISG unit. We can see that ISG revenues have been flattish compared to those of CSG.

Dell’s Q4 servers and networking revenue was $4.7bn, up 7 per cent annually, while storage was $4.5bn, a 2.3 per cent rise. Storage contributed the least to Dell’s record quarter and year. However, Dell’s performance review slides pointed out that ISG saw the fastest growth in storage orders since Dell bought EMC in Q3 2017.

We can see a good sequential storage rise in Q4 fy2022, following the seasonal pattern, but storage has been flatlining in a $3.5bn to $4.5bn quarterly revenue band for five years or so.

Dell said mid-range storage orders were up double digits in fiscal 2022, and PowerStore remains the fastest-ramping storage product in Dell’s history. PowerStore was launched in May 2020 and unified prior mid-range storage products such as Unity/VNX, XtremIO, and the SC (Compellent) lines. The company also said that storage demand rose for the third consecutive quarter and was in the high single digits.

Storage revenues trail that and have risen slightly for two quarters in a row as the chart illustrates:

Flatlining storage is only too evident here

Co-COO Chuck Whitten said storage order growth was great. “We saw double-digit demand growth in the high-end driven by select enterprise customers, 25 per cent demand growth for our unstructured storage solutions and 8 per cent growth for HCI despite a tough Y/Y comparison. Within midrange, PowerStore demand continued to ramp in Q4, up +34 per cent sequentially and [is] now approximately 50 per cent of our midrange SAN mix. Encouragingly, 26 per cent of PowerStore buyers are new to Dell storage and 29 per cent were repeat buyers, important leading indicators of future growth.”

But he added: “Storage revenue was roughly flat Y/Y due to the aforementioned backlog build and storage software and services content that gets deferred and amortized over time.”

So it was the supply chain issues that limited Dell’s ability to ship its storage hardware. When that is resolved, hopefully over the next two quarters, then Dell’s storage revenues should bounce higher.

Dell instituted quarterly dividend payments of $0.33 per share and the company has also gained an investment grade rating.

The outlook for the next quarter is for revenues of $24.5bn-$25.7bn, $25.1bn at the mid-point and an 11 per cent increase on a year ago.

Storage news ticker – 25 February

Solar flare. Source: Massive X-Class Solar Flare, uploaded by PD Tillman; author: NASA Goddard Space Flight Center

The Backblaze S3 Compatible API implements the most commonly used S3 operations, allowing applications to integrate with Backblaze B2 in exactly the same way as with Amazon S3. Postman is a platform for building and using APIs. Developers can interact with Backblaze B2 Cloud Storage via a new Postman Collection for the Backblaze S3 Compatible API. The Backblaze S3 Compatible API Documentation page is the definitive reference for developers wishing to access Backblaze B2 directly via the S3 Compatible API. Backblaze has also said CrashPlan is sunsetting its On-Premises backup service as of 28 February. CrashPlan users can transition their backups to Backblaze. More info here.

Fujitsu has been selected for the “Green Innovation Fund Project/Construction of Next Generation Digital Infrastructure” project in the field of “Technology Development of the Next Generation Green Data Center” by Japan’s New Energy and Industrial Technology Development Organization (NEDO). Fujitsu has been selected along with NEC, AIO Core, Kioxia, Fujitsu Optical Components, and KYOCERA. Fujitsu will lead the development of low-power CPUs and photonics smartNICs optimised for next-generation green data centres. Additionally, Fujitsu Optical Components will work with Fujitsu to develop a photonics smartNIC that reduces network power consumption in data centres by applying optical transmission technology that achieves greater efficiency in size and energy consumption and greater data capacity. This will involve hardware and software.

HPE and chip-to-chip optical connectivity supplier Ayar Labs have announced a multi-year strategic collaboration to develop optical I/O technology using silicon photonics for high-performance computing (HPC) and artificial intelligence (AI). It will focus on Ayar Labs’ development of high-speed, high-density, low-power optical interconnects to target future generations of HPE Slingshot, the high-performance Ethernet fabric designed for HPC and AI. HPE and Ayar Labs will partner on photonics research and commercial development, building a joint ecosystem of solution providers and customer engagements. HPE’s venture arm, Hewlett Packard Pathfinder, has made a strategic investment in Ayar Labs. 

Kioxia Holdings Corporation has entered into a definitive agreement with Toshiba Digital Solutions Corporation to acquire all of the outstanding shares of its subsidiary, Chubu Toshiba Engineering (CTE) Corporation, to further strengthen Kioxia Group’s technology development capabilities. CTE specialises in semiconductor-related hardware and software design, prototyping, and evaluation. Acquisition of the shares will be completed in the first half of 2022, following completion of the necessary procedures, and CTE will become a wholly owned subsidiary of Kioxia Corporation.

Cloud file server and sharer Nasuni said it had a good 2021 with 54 per cent growth in enterprise logos [customers] choosing Nasuni, and the initial deal size of capacity under management for new customers expanded 196 per cent Y/Y. There was 81 per cent growth in new customer Annual Contract Value bookings, gross customer retention rates of over 98 per cent, and net customer retention rates of 118 per cent. Some 35 per cent of annual subscription contracts are now valued above $350,000 per year.

Server-offloading XDP storage processor developer Pliops has appointed Sudhanshu Jain as VP of product management and strategic alliances. Jain will lead product strategy and forge deeper relationships with key ecosystem partners. He comes to Pliops after nearly a decade at VMware, where he led product management for Software-Defined Data Center (SDDC) and converged infrastructure. Before that he was at Alcatel-Lucent and Aruba Networks.       

 

Pliops XDP storage processor

Redstor, a data management and SaaS protection supplier, has launched a service to improve the way MSPs and CSPs protect Kubernetes environments in AWS. It has added support for the Amazon EKS managed container service for handling applications in the cloud or on-premises, giving partners the ability to back up applications and configurations, and scale customer backups. MSPs and CSPs will be able to recover a Kubernetes environment by injecting data back into an existing cluster for fast resolution of ransomware, accidental or malicious deletion, or misconfiguration, while managing multiple accounts with a single solution purpose-built for cloud partners.

Research house TrendForce says that, in 4Q21, NAND Flash bit shipments grew by only 3.3 per cent quarter-on-quarter, a significant decrease from nearly 10 per cent growth in 3Q21. ASP fell by nearly 5 per cent and the overall industry posted revenue of $18.5bn, a quarter-on-quarter decrease of 2.1 per cent. This was primarily due to a decline in the purchase demand of various products and a market shift to oversupply causing a drop in contract prices. In 4Q21, with the exception of enterprise SSDs, the supply of which was limited by insufficient upstream components, the prices of other NAND Flash products such as eMMC, UFS, and client SSDs all fell.
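As a quick sanity check on those numbers, the 2.1 per cent quarter-on-quarter decline implies 3Q21 industry revenue of roughly $18.9bn. A back-of-the-envelope sketch (the 3Q21 figure is implied, not quoted by TrendForce):

```python
# Implied 3Q21 NAND flash revenue from TrendForce's 4Q21 figures.
q4_revenue_bn = 18.5      # 4Q21 industry revenue, $bn
qoq_change = -0.021       # 2.1 per cent quarter-on-quarter decrease

# q4 = q3 * (1 + qoq_change), so work backwards to the prior quarter:
q3_revenue_bn = q4_revenue_bn / (1 + qoq_change)
print(f"Implied 3Q21 NAND revenue: ${q3_revenue_bn:.1f}bn")  # ~$18.9bn
```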

Storage news ticker – 24 February

Datto, which supplies cloud backup storage for MSPs, reported revenues for the fourth 2021 quarter of $164.3m, up 18 per cent Y/Y with a $5.7m profit, better than the year-ago $7.32m loss. Full 2021 year revenues were $618.7m, a 19 per cent rise Y/Y, with a $51.5m profit, 129 per cent higher Y/Y. Tim Weller, Datto CEO, said: “We finished 2021 on a strong note with another quarter of accelerating growth. The fourth quarter capped an exceptional year for Datto and our MSP partners. We exceeded all of our targets in our first full year as a public company.” William Blair analyst Jason Ader commented: “The number of $100,000-plus ARR customers increased by 100 in the quarter (the highest increase since disclosures began).”

Couchbase, which provides a database for enterprise applications, has announced Couchbase Mobile 3, an edge-ready data platform introducing an industry-first embedded document database for mobile, desktop, and custom embedded hardware with built-in synchronisation capabilities. With Couchbase Mobile 3’s strengthened edge capabilities, development teams can focus on the core competency of their applications without worrying about speed and connectivity issues. It says that, with offline-first capabilities for applications, Mobile 3 ensures data integrity by automatically synchronising data across the entirety of an organisation’s edge and mobile infrastructure, with or without internet connectivity.

Hitachi Vantara has promoted Adrian Johnson to be VP of its Digital Infrastructure Business Unit (DIBU), reporting to Scott Worman, Chief Revenue Officer, Hitachi Vantara.

Data mobilising Kubernetes storage and management supplier Ionir is partnering with Zensar, an India-headquartered tech consulting and services company with 10,000 associates in 33 global locations including San Jose, Seattle, Princeton, Cape Town, London, Singapore, and Mexico City. In other words, Ionir has grabbed itself a reseller. Zensar is looking at multi-cloud environments for its global clients. Ionir has also been awarded a couple of US patents: 11,226,990, relating to storage system operation, and 11,132,141, for synchronising data containers.

iXsystems announced the general release of TrueNAS SCALE 22.02.0, offering existing TrueNAS features plus new Linux-specific capabilities, including Docker containers, Kubernetes, KVM, and scale-out ZFS through the Gluster file system. TrueNAS SCALE 22.02.0 is available for download at truenas.com/scale.

Kioxia UFS with MIPI M-PHY v5.0

Kioxia is sampling the industry’s first Universal Flash Storage (UFS) embedded flash memory devices supporting MIPI M-PHY v5.0. The chip comes in three capacities: 128, 256, and 512GB. MIPI M-PHY is a physical layer interface for flash memory. v5.0 adds High Speed Gear 5 (HS-G5) mode, enabling engineers to double the potential data rate per lane to 23.32Gbit/s, or 93.28Gbit/s over four lanes, compared with the previous specification. This means up to 90 per cent faster read and 70 per cent faster write than prior-generation devices. Smartphones and similar end-user devices should benefit.
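The headline rates are simple multiples; a quick sketch of the arithmetic (the prior HS-G4 per-lane rate is inferred from the “double the data rate” claim, not quoted by Kioxia):

```python
# MIPI M-PHY v5.0 HS-G5 lane-rate arithmetic from the figures quoted above.
hs_g5_per_lane_gbps = 23.32
lanes = 4

aggregate_gbps = hs_g5_per_lane_gbps * lanes   # four-lane aggregate
print(aggregate_gbps)  # 93.28, matching Kioxia's four-lane figure

# "Double the potential data rate per lane" implies the prior HS-G4 rate:
hs_g4_per_lane_gbps = hs_g5_per_lane_gbps / 2
print(hs_g4_per_lane_gbps)  # 11.66
```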

Ondat has launched its v2.6 software and a beta of a SaaS platform which works by utilising Google Cloud’s IoT Core. This is a managed Google service that allows the SaaS platform to securely connect, manage, and ingest data from a global fleet of Ondat clusters. Ondat has also announced its Kubectl Plugin v1.1.0. Running it with the dry-run flag produces output showing all the manifests that the plugin would normally apply to the cluster, without applying them. This will be welcomed by users who install Ondat with CI/CD pipelines and only want to source-control manifest files.

France-based OVHcloud has a new storage offering. Its High Performance Object Storage combines high performance and scalability with a transparent and ultra-competitive pricing model for Big Data, artificial intelligence (AI), and HPC apps, as well as being suitable for e-commerce and streaming platforms. It costs $39 per terabyte per month and $0.015 per outbound gigabyte, while API requests and private outbound traffic are included without any extra charge.
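For a sense of what that pricing means in practice, here is a minimal cost sketch using the quoted rates; the workload figures are invented for illustration:

```python
# OVHcloud High Performance Object Storage monthly cost, per the quoted
# pricing: $39/TB/month stored, $0.015 per outbound GB; API requests and
# private outbound traffic are free of charge.
def monthly_cost_usd(stored_tb: float, public_egress_gb: float) -> float:
    return stored_tb * 39 + public_egress_gb * 0.015

# Hypothetical workload: 100TB stored, 10,000GB of public egress per month
print(round(monthly_cost_usd(100, 10_000), 2))  # 4050.0
```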

Superfast NVMe storage array supplier Pavilion Data has signed a deal with Boston Limited for it to resell Pavilion’s storage to enterprise HPC customers in the US and Europe. Manoj Nayee, co-founder and managing director at Boston, said: “There is an increasing need for virtualized storage solutions that easily adapt to fast-changing analytics and processing environments. This partnership between Boston, VMware, and Pavilion, married with our cutting-edge applications, will satisfy those needs.”

Satori, a DataSecOps platform, has a  new Satori Data Access Control (DAC) Manager to let users secure access to sensitive data stored in customer-hosted VPCs, and automatically apply updates and upgrades with zero downtime. Satori claims this product development makes it the first company to offer secure data access across any deployment environment, including multi-region and multi-cloud.

SingleStore, which supplies a distributed, relational SQL database for both transaction and analytic workloads, has notched up a sale to Australia’s Directed Technologies. Directed Technologies has deployed millions of turnkey OEM-branded vehicle multimedia units (MMUs), telematics solutions, and accessories for leading global manufacturers. More than 1,800 fleets utilise telematics devices manufactured in house at Directed Technologies, and more than 300,000 heavy vehicles have its MMUs onboard. Directed Technologies employs SingleStore’s scalable database for all its data and manages high data volumes on a real-time basis. 

Synology announced the Beta release of DiskStation Manager 7.1, which adds SMB DFS capability to enable administrators to link together multiple Synology systems. This provides more convenient file access for end users by removing the need to remember separate addresses. It consolidates background tasks into an administrator-friendly overview. For Synology High Availability clusters, users can now view and manage drives on both systems from a single instance of Storage Manager for easier maintenance and management. V7.1 also economises SSD caching with the ability to speed up multiple storage volumes at the same time.

Up to $5m compensation if Rubrik Cloud Vault recovery busted

Rubrik has extended its ransomware recovery warranty, telling customers that if recovery from its Cloud Vault is screwed then it will give them up to $5 million.

Cloud Vault is a SaaS archival service built on Azure. It maintains immutable and instantly recoverable copies of critical data in a secured and isolated cloud location, using Azure Blob storage, fully managed by Rubrik. The pitch is that it enables customers to survive cyber-attacks and avoid making ransomware payments by safeguarding data.

The warranty extension is designed so that Rubrik Cloud Vault customers have the assurance that Rubrik will pay up to $5m in recovery-related costs, in the event Rubrik is unable to recover secured data following a cyber attack.

A statement from Bipul Sinha, CEO and co-founder of Rubrik, said: “Whether your data lives within your enterprise or in the cloud, you’re backed by a warranty and assurance that your data will be secure and available no matter what.”

Rubrik’s ransomware recovery warranty was announced in October last year for its Enterprise Edition SaaS offering.

A customer, Jack Higgins, network analyst at the Walton County Board of Education, said: “Rubrik’s Ransomware Warranty extension is another bold move from the company and adds an additional layer of reassurance that Rubrik is committed to securing and recovering our data.”  

This strengthening of the Rubrik-Azure relationship follows Microsoft making an equity investment in Rubrik in August last year. The development of Zero Trust anti-ransomware services was part of that deal.

As far as we know Rubrik’s ransomware recovery warranty is unique and so provides a powerfully differentiated marketing tool. It is a little surprising that no other vendors have followed Rubrik and started offering their own versions of a recovery warranty. An intriguing question is whether Rubrik will eventually extend its recovery warranty to equivalent services on AWS and GCP.

Hammerspace nails an EMEA office

Hammerspace

Hammerspace has strengthened its Europe, Middle East and Africa (EMEA) presence with a London-based office and a distribution deal with Spinnaker.

Surprisingly, more than 35 per cent of the Hammerspace customer base is in EMEA, with that enabled by the availability of Hammerspace software in local regions of AWS, Azure, and Google Cloud.  Having more feet on the street is expected to help grow Hammerspace sales in the territory.

David Flynn, Hammerspace CEO and co-founder, used the announcement to promote the Global Data Environment (GDE), its software-defined platform for unstructured data.

“The Hammerspace Global Data Environment is a game-changer for the global economy, solving the challenges of today’s data-driven businesses… We are excited to expand our European presence with a world-class team and distribution and channel partnerships.”  

Hammerspace believes data is increasingly being created and stored in a variety of locations – at the edge, in multiple data centres, and cloud regions. It needs to be a globally consolidated and consumable resource. The GDE has file metadata shared between local and remote sites on a peer-to-peer basis so that all the users everywhere in the GDE see the same files in the same file and folder structure. There is no need to replicate a full copy of the data at each site.

The sales pitch is that applications can access data stored in remote locations, while automated orchestration tools provide fast local access when needed for processing.

The EMEA sales infrastructure will be focused on selling into data-intensive industries and organizations such as autonomous vehicles, electronic design automation (EDA), high-performance computing (HPC), visual effects (VFX), animation, post-production, and life sciences – genomics, microscopy, and pharmaceutical research.

It is being headed up (remotely) by Chris Bowen, SVP of global sales, with London-based Iain Malins as sales director and Ian Marcroft as technical director. Through Spinnaker, Hammerspace gets access to a reseller network, and all Hammerspace sales go via the channel.

Marcroft said: “Most of my customers want to maintain on-premises storage for regular workflows and burst to the cloud when larger 8 and 16K projects are needed. They also have a multi-cloud strategy and need to quickly build systems using infrastructure as code. Hammerspace will give them the ability to quickly orchestrate workflows in different regions and across cloud vendors to enable cloud hopping.” 

Hammerspace intends to continue adding EMEA local sales, technical, and support heads throughout 2022.  

All this augurs well for an increase in Hammerspace revenues in 2022 and builds upon the sales and marketing recruitment that took place last year. Bowen is now setting up a regional sales presence in EMEA and we might expect a similar expansion in Hammerspace’s APAC region.

NetApp’s cloud ops looking strong as it reports seventh consecutive growth quarter

NetApp CloudJumper

NetApp reported 9.8 per cent revenue growth for its third fy2022 quarter, with a record all-flash array run rate and high public cloud revenue growth.

The quarter finished on January 28 with revenues of $1.61bn – they were $1.47bn a year ago – and a profit of $252m, up 38.5 per cent year-on-year.

The all-flash array (AFA) run rate grew 23 per cent to a record $3.2bn with 31 per cent of NetApp’s customer base buying into the AFA message, leaving plenty of headroom for growth.

The public cloud business grew its annualized revenue run rate (ARR) 98 per cent, with revenues of $110m in the quarter, double the year-ago number.
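A common way to derive an annualized run rate is simply to multiply the latest quarter's revenue by four. A sketch of that convention (NetApp's own ARR methodology may differ, for example by using the final month's run rate):

```python
# Annualizing NetApp's quarterly public cloud revenue (an illustrative
# convention, not necessarily NetApp's exact ARR definition).
quarterly_revenue_m = 110                    # Q3 fy2022 public cloud revenue, $m
annualized_run_rate_m = quarterly_revenue_m * 4
print(annualized_run_rate_m)  # 440, i.e. a $440m annualized run rate
```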

CEO George Kurian said in his prepared remarks: “We delivered another outstanding quarter, building on the momentum we’ve had in recent periods. Demand for our solutions is strong and powered by the alignment of our differentiated technology portfolio with customer priorities.”

He said NetApp delivered “record high gross margin dollars, operating income, and earnings per share.”

NetApp’s seventh revenue growth quarter in a row

Hybrid cloud revenues of $1.5bn, led by AFAs and object storage (StorageGRID), rose 6.3 per cent, with the $110m of public cloud revenues a long way behind but growing 100 per cent year-on-year.

Kurian noted: “Public cloud dollar-based net revenue retention rate remains healthy at 169 per cent, as customers increase their usage of our public cloud services and adopt new products.… which puts us well ahead of our plan to achieve $1 billion in ARR in FY ’25.”

CFO Mike Berry expects public cloud ARR to be between $525m and $545m at the end of NetApp’s fy2022. Kurian said: ”Azure NetApp Files was a really strong performer this quarter as it has been for several quarters now.”

In a hint of the demand for the newly announced AWS FSx for NetApp ONTAP, he said: “A global third-party logistics company chose FSx for NetApp ONTAP to host data migrated from Nutanix systems in its data centres.”

NetApp replacing Nutanix is a surprise.

Financial summary

  • Billings of $1.76bn, up 10 per cent annually
  • Product revenue – $846m, up 9 per cent and 4th quarter of growth
  • Service and support revenues – $658m and up 2.8 per cent
  • EPS – $1.10 vs $0.80 a year ago
  • Cash and investments – $4.2bn
  • Operating cash flow – $260m compared to $373m a year ago
  • Gross margin – 67.3 per cent and unchanged from a year ago
  • Deferred revenue – $4bn

Supply chain issues

CFO Mike Berry referred to supply chain problems in his remarks: “We navigate near-term component shortages, and expect revenue to continue to be constrained in Q4.” However: “We are increasing our full year guidance for revenue, EPS and Public Cloud ARR, driven by the outperformance in Q3 and a very healthy demand backdrop for Q4.”

The earnings call revealed more about the supply chain issues. Berry said: “In addition to a worsening freight and expedite environment, we also experienced component supplier decommits beginning in the second half of Q3, which required us to purchase components in the open market at significant premiums.”

The decommits were mostly in low volume analog semiconductors, used for things like voltage stabilisation across a lot of NetApp’s product families. The freight aspect was largely due to cargo airfreight issues. NetApp has multi-supplier and long-term agreements for CPUs, DRAM, NAND, and disk drives, and is not directly affected by the Kioxia/Western Digital chemical contamination problem.

“We were faced with the short-term decision of supporting the robust customer demand versus optimising near-term product margin. … we made this strategic decision to prioritise meeting customer demand with the trade-off being lower product margins in the short term. To be clear, the pricing and availability of our core HDD and SSD components are stable and are not a contributor to the near-term headwinds.”

“We believe these cost headwinds are temporary in nature; and… we expect that Q4 will be the trough for product margins… we do expect cost improvements in the coming quarters as the supply headwinds begin to ease throughout the first half of fiscal 2023.”

Berry sees $50m to $60m in extra Q4 costs from open market component purchases. NetApp has already raised its prices by about 10 per cent overall in response to the supply chain issues.

He said of the AFA market that NetApp was winning new customers as well as selling into its existing base, and growing sales faster than the overall market – so gaining market share over the competition.

Outlook

The Q4 expectation is for revenues of $1.685bn, plus or minus $50m, representing an 8 per cent rise year-on-year at the mid-point. That’s lower than it would otherwise be because of the supply chain issues. Generally demand was stronger than supply in Q3 and should be the same in Q4.

This would make full fy2022 revenues total $6.32bn, again at the Q4 mid-point, a 10.1 per cent increase from fy2021 and NetApp’s highest annual revenues since fy2014’s $6.33bn. It has been a long wait.
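The guidance arithmetic can be cross-checked quickly; a sketch, with figures rounded as in the article:

```python
# Cross-checking NetApp's Q4 and full-year fy2022 guidance figures.
q4_mid_bn = 1.685    # Q4 revenue guidance mid-point, $bn (range: plus/minus $50m)
fy2022_bn = 6.32     # implied full-year total at the Q4 mid-point

# A 10.1 per cent rise on fy2021 implies a fy2021 base of about $5.74bn:
fy2021_bn = fy2022_bn / 1.101
print(round(fy2021_bn, 2))  # 5.74

# And an 8 per cent year-on-year Q4 rise implies a year-ago Q4 of about $1.56bn:
print(round(q4_mid_bn / 1.08, 2))  # 1.56
```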

Comment

There is a sense that NetApp is very hopeful about growth in its public cloud revenues from its AWS FSx for NetApp ONTAP, Azure NetApp Files and the GCP equivalent. Kurian said: “Once the customer uses one of these cloud portfolios, they expand their use quite substantially.” Public cloud revenues could benefit from increased Spot portfolio new customer sales as well as from NetApp’s existing base moving workloads to the public cloud.

It does appear that NetApp is leading the storage market with its data fabric covering all three major public clouds and the cloud ops Spot portfolio driving NetApp product sales in a way that no other storage supplier, at this time, can emulate. NetApp’s total addressable market (TAM) has increased. Were Dell or HPE or Pure to compete here they’d face a build-vs-buy decision and NetApp is already busy buying the promising startups in this emerging field.

Its lead should increase over the next couple of quarters and there are opportunities to integrate the cloud ops products with the cloud products and so strengthen both. NetApp is looking well-placed for further growth.

NetApp splashes the cash on Fylamynt

NetApp is buying Fylamynt, a cloud ops automation startup, and adding it to the Spot by NetApp cloud operations product portfolio.

This marks the fourth Spot by NetApp acquisition in the last two years, as the company tries to build a $1 billion annual recurring revenue cloud business. In June last year it bought Data Mechanics for its API-based Apache Spark job interface software and it has acquired CloudHawk for security and compliance, and CloudCheckr for cloud cost management.

Anthony Lye, NetApp’s EVP and GM for Public Cloud Services, provided an announcement quote: “The native integration of Fylamynt with Spot by NetApp will allow organisations to rapidly and reliably deploy Spot by NetApp services within their existing cloud environments. … Combined with Fylamynt’s pre-built integrations and Spot’s full CloudOps portfolio, they will be able to accelerate, optimize and automate their cloud operations infrastructure. This strategic acquisition accelerates NetApp’s overall CloudOps leadership and empowers customers to continue to enjoy more cloud at less cost.”

Fylamynt was founded in 2019 by CTO David Lee, CEO Dr Pradeep Padala, and VP engineering Dr Xiaoyun Zhu to provide a cloud incident response facility for DevOps and Site Reliability Engineering (SRE). The company raised $6.5 million in 2020 seed round funding. Financial details of the acquisition transaction are not being disclosed.

Fylamynt white paper diagram

Padala said: “With our common vision to help teams deploy and run at cloud speed, we’re excited to integrate our modern cloud automation capabilities into the Spot by NetApp portfolio to ultimately democratize automation for every enterprise.”

Fylamynt’s technology provides end-to-end incident response with alerting, collaboration, and automated remediation – shades of ServiceNow perhaps. SRE teams can build automated workflows using Fylamynt’s automation facilities. It has a so-called no-code visual interface so engineers can build, run, and analyse workflows. The Fylamynt software has connectors for infrastructure-as-code tools such as Terraform, Ansible, and CloudFormation, and services such as Datadog, Splunk, PagerDuty, Slack, and ServiceNow.

Fylamynt says that, by augmenting workflows with AI, it is a connector that automates any cloud workflow with any service and all code. Padala has stated: “Automation is key to operating enterprise SaaS at scale with high availability, but the bottleneck in building automation is writing code. We built Fylamynt to help cloud engineers codify every aspect of their cloud workflow in minutes.”

NetApp says that, as companies increasingly move to the cloud, they’re seeking automation capabilities that help them build, integrate, and run multiple services together, easily and effectively. Fylamynt adds automated incident response services to this so that cloud-native app downtime can be reduced.

Download a Fylamynt white paper to find out more here.

The backstory here is that NetApp is building out an operations infrastructure product for enterprises deploying apps in the public cloud. It reckons this is a fast-growing market with no established players and lots of innovative startups. NetApp is buying such startups and integrating their products under Spot by NetApp. Potentially NetApp could build a commanding and early lead in this market place and emerge as the dominant player.

Zadara scores compute place in Seagate’s Lyve Cloud

Seagate’s Lyve Cloud will deploy Zadara zCompute (servers with VM images) for pay-as-you-go use in its datacentres.

Lyve Cloud is – or was – an S3-compatible object-storage-as-a-service product based on Seagate storage located in Equinix datacentres. The hardware is Lyve Drive Racks with Exos AP (5U84) enclosures and MinIO object storage software. Seagate is broadening Lyve Cloud’s service to include Zadara’s zCompute, which can be deployed along with Lyve Cloud Object Storage and provides, Seagate says, significant cost savings.

Ravi Naik, CIO and EVP storage services for Seagate, said: “Zadara shares our vision for infrastructure-as-a-service. Vendor lock-in is eliminated, simple, flexible deployment is enabled, and costs become predictable for the largest workloads. We designed Lyve Cloud to help enterprises overcome the cost and complexity of storing, moving and activating data, and collaborating with Zadara helps us to provide our customers with elastic compute resources that scale to their business needs.”

According to Seagate, the inclusion of zCompute gives Lyve Cloud a cloud storage and compute experience equivalent to that of other cloud providers. Taken at face value, that would mean AWS, Azure, and GCP, which offer a large number of different compute and storage instance types. Seagate is somewhat stretching the meaning of “equivalent”.

Nelson Nahum, CEO and co-founder of Zadara, said: “As workloads scale larger and larger, the costs to maintain them in public clouds become untenable. Enterprises with petabyte scale workloads have become disheartened by their public cloud experience and want to move to the edge.”

The “edge” in this case is an Equinix colo near the customer. Zadara makes the point that its elastic compute scales down to a single virtual machine “well below the starting price and scale of competing cloud solutions.” The actual x86 server hardware is not being revealed.

AWS is building out a set of Local Zone edge datacentres that may well compete with these Equinix colos and limit their ability to gain customers.

Zadara’s zCompute is now available on Seagate’s Lyve Cloud with support from Zadara.

Dell adds petabytes with update of PowerVault arrays

Dell has refreshed its PowerVault arrays, increasing power, connectivity, and capacity.

We’re told the ME5s offer twice the performance, throughput, capacity, and memory of their PowerVault ME4 predecessors. That’s not quite true on the capacity front, as we shall see.

PowerVault group

The ME4012 (2RU x 12-slot), ME4024 (2RU x 24-slot), and ME4084 (5RU x 84-slot) make way for the ME5012, ME5024, and ME5084, which use the same chassis but replace the Broadwell-DE-based controllers with newer Xeon CPUs. The ME5012 and ME5024 can have a single or dual controller setup, with the ME5084 restricted to a dual-controller configuration.

The ME4s supported SSDs and three kinds of disk drive at 15,000, 10,000, and 7,200rpm. The 15,000rpm option is no more but the other drive categories remain. However, only the entry-level ME5012 supports 3.5-inch disk drives in its base chassis; the others only have 2.5-inch base chassis bays, which restricts supported disk drive capacity levels.

There can be between 2 and 64 drives for the ME5012, 2 and 76 for the ME5024, and 28 and 336 for the ME5084. They can all have expansion trays, which support 18TB 3.5-inch drives.

Capacities, using expansion trays, have increased nicely:

  • 3.1PB ME4012 —> 4.7PB ME5012 – a 51.6 per cent increase
  • 3.0PB ME4024 —> 4.7PB ME5024 – a 56.7 per cent rise
  • 4.0PB ME4084 —> 6.0PB ME5084 – a 50 per cent uplift
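
The quoted uplifts are easy to verify (capacities in PB, from the list above):

```python
# Verify the quoted ME4 -> ME5 capacity uplifts; capacities are in PB.
pairs = {"ME5012": (3.1, 4.7), "ME5024": (3.0, 4.7), "ME5084": (4.0, 6.0)}
for model, (old, new) in pairs.items():
    uplift = (new - old) / old * 100
    print(f"{model}: {uplift:.1f} per cent")   # 51.6, 56.7, 50.0
```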

The newer systems still support 12Gbit/s SAS interconnects, but 16Gbit/s Fibre Channel is now 32Gbit/s FC (auto-negotiating down to 16Gbit/s) and the ME4’s 10Gbit/s iSCSI is now 10 or 25Gbit/s iSCSI.

The ME5s are managed through PowerVault Manager, an HTML5 GUI with intuitive navigation and support for scripting via Redfish/Swordfish REST or CLI APIs. They are also supported within Dell’s OpenManage Enterprise (OME) framework, which covers Dell networks, servers, and other datacentre infrastructure.
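
Redfish (and its Swordfish storage extensions) presents a standard REST tree rooted at /redfish/v1. A generic sketch follows – the management address and token are placeholders, and the exact resource paths a PowerVault ME5 exposes would need checking against Dell’s API documentation:

```python
# Generic Redfish REST access sketch. The array address and token are
# placeholders; actual ME5 resource paths need checking against Dell's docs.
import json
import urllib.request

BASE = "https://array.example.com"  # placeholder management address

def redfish_url(*segments):
    """Build a resource URL under the standard /redfish/v1 service root."""
    return "/".join([BASE, "redfish", "v1", *segments])

def get_resource(url, token):
    """Fetch a Redfish resource; the token comes from a prior session login."""
    req = urllib.request.Request(url, headers={"X-Auth-Token": token})
    with urllib.request.urlopen(req) as resp:  # network call, not run here
        return json.load(resp)

# The Swordfish storage model hangs off the same tree, e.g.:
print(redfish_url("Storage"))  # -> https://array.example.com/redfish/v1/Storage
```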

Customers with ProSupport Services for PowerVault ME5 can access CloudIQ, Dell’s cloud-based AIOps service that uses telemetry, machine learning, and other algorithms to provide users with notifications, predictive analytics, remediation advice, anomaly detection, capacity projections, reclaimable storage reports, and more.

Dell says it sold 53,000 ME4 PowerVaults so the ME5s have something to live up to. Entry-level configurations start at $12,000. More details here.

Hitachi Vantara refreshes mid-range storage hardware, software, and management

Hitachi Vantara has announced a new mid-range array, block storage software for x86 servers and cloud-native apps, placement of its arrays in Equinix colocation centres with high-speed links to public clouds, AI-driven management from the cloud, a hyperconverged storage offering, and a Commvault data protection facility. Phew!

Update. E1090 product details added to table and E1090 replaces the E990 – 24 February 2022.

Update. E1090 is an all-flash array. E1090H is the hybrid flash/disk version. VSS Block only supported in VMware environments with KVM and bare metal server support coming. E1090H product details added to table – 27 February 2022.

There’s a hybrid cloud and embrace-the-public-cloud angle to this and, with regard to the Equinix colo presence, Hitachi Vantara is following a lead set first by NetApp and then by Pure Storage.

Mark Ablett, president of Digital Infrastructure at Hitachi Vantara, set the context with a buzzword-laden statement: “We are accelerating the pace of delivering new hybrid cloud products and services to enable our customers to find innovative ways to enhance their data-driven outcomes today, and in the near future.”

Hitachi Vantara is announcing:

  • VSP E1090 with virtual storage scale out capabilities 
  • Virtual Storage Software Block (VSS Block), a software-defined data platform that extends Hitachi Vantara’s virtual storage platform to cloud-native applications 
  • Hitachi Ops Center Clear Sight, an AI-driven cloud management tool to support VSP storage with simplified cloud-based reporting and analytics
  • Hitachi Cloud Connect (Hitachi Vantara storage in Equinix colos) to deliver a near-cloud system integrating with the big three public clouds by extending a data fabric to cloud apps running on them 
  • Unified Compute Platform RS, a hyperconverged “cloud in a box” offering
  • A cloud data protection appliance using Commvault software to protect data in a hybrid cloud operating model 

VSP E1090 array

Hitachi Vantara’s rack-level storage arrays are organised into Virtual Storage Platform (VSP) products: the high-end VSP 5000 NVMe flash array, the mid-range E Series (NVMe SSD and disk), the F Series all-flash enterprise-to-mid-size array, and the entry-level G Series flash and disk array, all running the Storage Virtualisation Operating System (SVOS RF).

The VSP E1090 is the new high-end member of the E Series family and tops out the E590, E790, and E990 product line.

We have built an E Series product table listing controller, capacity and port parameters, with the E1090 highlights being that it has the lowest E Series latency at 41μs and the highest IOPS at 8.4 million:

Hitachi Vantara E Series product table. The E990 is replaced by the E1090.
Hitachi Vantara E1090 rack

Both figures suggest it has had a major controller CPU upgrade. A look at the E1090 rack cabinet shows three large (4RU) chassis, like the E990’s, and four small (2RU) chassis. That indicates the E1090 is an upgraded E990. Hitachi Vantara confirmed that the E990 has been discontinued and replaced by the E1090.

A spokesperson told us: “The E1090 comes with either NVMe or SAS (SSD and/or HDD) backend media. Hitachi Vantara is not offering the proprietary FMD on this platform; it is all commodity media. We are not doing drive level compression/dedupe — the hardware offload is the same Compression Accelerator Module that we introduced on the 5200/5600. It is an ASIC-based offload in the controller, so common offload for NVMe and SAS media. We do not offer a QLC drive at this point.”

Update: The E1090 is the all-flash version with an E1090H supporting disks and SAS SSDs as well, like the other models in the range. The O/S provides automated dynamic tiering. There is HW-assisted compression and also O/S software compression and deduplication.

Software and management

New E Series SVOS RF software enables clustering of up to 65 nodes and 130 controllers with Hitachi Ops Center presenting them as a single virtual system to hosts, with federated management, data mobility, data protection, and automation across all the cluster nodes, which can include virtualized third-party arrays – a long-standing SVOS capability.

It also provides a self-install capability and non-disruptive data-in-place migration. Hitachi Replication Plug-in for Containers automates storage replication between Kubernetes clusters and storage systems located at different sites, also enabling a self-service approach. Data reduction has been improved, with hardware assist providing up to 82 per cent more adaptive data reduction throughput than the existing VSP E Series. Selectable inline compression and deduplication can be set at the volume level.
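
Deduplication in general works by hashing incoming blocks and storing each unique block only once. The sketch below illustrates the principle only, and is not SVOS RF code:

```python
# How inline deduplication works in general: incoming blocks are hashed and
# only previously unseen blocks are stored. Illustrative only, not SVOS code.
import hashlib

def write_blocks(blocks, store):
    """Store each block once, keyed by content hash; return bytes written."""
    written = 0
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:       # duplicate blocks cost nothing extra
            store[digest] = block
            written += len(block)
    return written

store = {}
print(write_blocks([b"x" * 8, b"y" * 8, b"x" * 8], store))  # -> 16
```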

VSP Storage Management has analytics and automation across clusters. An embedded management feature provides rapid setup and provisioning of E Series arrays without any additional software installation. The E Series are now self-installable and can be up and running in 30 minutes or less.

Hitachi Vantara has also announced Ops Center Clear Sight, a cloud-based and AI-driven app for monitoring assets from anywhere. Admins can observe, analyse, and optimise VSP storage infrastructure remotely. AI-driven insights provide inventory asset views, health status, risk management, and capacity planning. There is a possible link here to AIOps facilities, with Hitachi Ops Center ready for Google Anthos. We think Hitachi Vantara’s capability is generally similar in scope to Dell’s CloudIQ.

Inside Ops Center we find Hitachi Remote Ops, with Hitachi Vantara claiming it can resolve up to 90 per cent of support issues using predictive analysis.

VSS Block and erasure coding

The VSS Block software is layered on top of SVOS and presents a single data plane across Hitachi Vantara’s mid-range, enterprise (VSP 5000), and software-defined storage portfolio. It incorporates Hitachi Vantara’s Polyphase Erasure Coding technology. This is based on two patents, US# 10,185,624 B2 and 10,496,479 B2. 

A July 2021 Hitachi Vantara blog stated: “Current implementations of erasure coding are fraught with high CPU resource consumption, network latency due to the distributed I/O, and subpar read and write performance due to the same distributed architecture. There is plenty of evidence that pins erasure coding technology to an archival solution, like object storage or write once, read many (WORM) architectures.

“Hitachi’s Polyphase Erasure Coding features a different approach in how it adapts software data protection. Hitachi carefully evaluated erasure coding through years of research and developed a unique patented approach (US# 10,185,624 B2 and 10,496,479 B2). This approach improved crucial elements of data and parity placement, which efficiently improved latency within Hitachi Polyphase Erasure Coding and lowered storage resource overhead. These necessary enhancements provide a place where your data can be synthesized so you can create more value.”
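
Hitachi has not published how Polyphase Erasure Coding places data and parity, but the underlying technique it refines is standard: split data into k fragments, compute m parity fragments, and survive the loss of any m. The simplest case (m = 1, XOR parity) can be sketched:

```python
# Generic erasure coding in its simplest form: k data fragments plus one
# XOR parity fragment; any single lost fragment can be rebuilt from the rest.
# This illustrates the general technique only, not Hitachi's patented scheme.

def make_parity(fragments):
    """XOR equal-length data fragments together into one parity fragment."""
    parity = bytes(len(fragments[0]))
    for frag in fragments:
        parity = bytes(a ^ b for a, b in zip(parity, frag))
    return parity

def rebuild(surviving, parity):
    """Recover the one missing fragment by XORing parity with survivors."""
    missing = parity
    for frag in surviving:
        missing = bytes(a ^ b for a, b in zip(missing, frag))
    return missing

data = [b"AAAA", b"BBBB", b"CCCC"]         # k = 3 data fragments
parity = make_parity(data)
print(rebuild([data[0], data[2]], parity))  # -> b'BBBB'
```

Real systems use Reed-Solomon-style codes to tolerate multiple losses; the CPU and placement costs the blog describes come from computing and distributing these parity fragments across nodes.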

VSS Block supports up to 3.8PB of capacity. It will, in the future, connect to the near-cloud offerings and, ultimately, extend into the public cloud. This means, we understand, that VSS Block will eventually run in AWS, Azure, and GCP.

Update. Also, at launch, VSS Block only supports VMware with support for KVM and bare metal servers expected later in 2022.

Other news

Hitachi Vantara’s Unified Compute Platform (UCP) family of converged and hyperconverged systems now provides so-called “cloud-in-a-box” functionality. A UCP RS system runs VMware Cloud Foundation with VMware Tanzu, which provides combined virtual machine and Kubernetes container app support.

The company’s converged system, Hitachi Adaptive Solution, built with Cisco switching and UCS servers, includes the new E Series storage and Cisco’s UCS X-Series Compute, managed through Cisco’s Intersight hybrid cloud operations platform. 

The EverFlex portfolio of purchase, lease, and consumption-as-a-service options has a new member: Hitachi Infrastructure Orchestration as a Service, a data storage management and automation facility. Hitachi says it enhances traditional operations with ML, AI, business process management, integration, and IT process automation software and services to provide continuous insights and intelligence about IT environments. The idea is to facilitate the development of adaptable and autonomous, “self-driving” IT operations.

A new Commvault HyperScale X for Hitachi Data Protection Suite delivers comprehensive data protection across applications, databases, public cloud environments, hypervisors, operating systems, and storage arrays from a single, extensible platform. HyperScale X is a Commvault appliance providing scale-out backup and recovery for container, virtual, and database workloads. There are more than 1,200 joint Hitachi Vantara – Commvault customers. 

All in all Hitachi Vantara has delivered a pretty comprehensive incremental and catch-up refresh of its mid-range storage hardware, software and management, with stronger adoption of the hybrid cloud and a path to providing VSP-class storage in and management from the public cloud. This is designed to keep competitors such as Dell, HPE, IBM, Infinidat, NetApp, and Pure Storage away from its customer base.  

Pure Storage Portworx Q&A: Why storage needs a data strategy

Murli Thirumale

Paid Feature Murli Thirumale is VP and general manager of the Cloud Native Business Unit at Portworx, now part of Pure Storage. He told Blocks & Files why organisations need a data strategy – not just a storage strategy.

Blocks & Files: There are lots of companies providing storage for containers. And it seems that on the storage front there are two approaches. One will hook up an external array with the CSI interface, as Dell EMC has recently done. Another is to actually have cloud-native storage facilities within Kubernetes and storage is just another type of container, a system container. Mixing of virtual machines and containers also seems to be a sensible thing to do, but it sometimes seems like the two things are fundamentally incompatible, and you can make a stab at putting an abstraction layer across the two of them. But really, you need to go all in on one or the other. What is your view on that?

Murli Thirumale: I’d like to take a stab at that question from the customer in, rather than the vendor out. Storage itself is evolving, moving on to data management, and on to a new way of thinking about how enterprises need to win with data.

Let’s step back 20 years or so, and we can see the start of cloudification. The big change was the cloud and moving from capex to opex. But technology always drives changes in the underlying hardware infrastructure. So, not only was the hardware infrastructure being cloudified, it was also being upgraded – you had Cisco Nexus top-of-rack switches, you had HCI happening. 

The next phase is when SAP and all of those guys came to the fore, and it was the world of apps. The value changed to apps, and Software-as-a-Service. Today the world of apps and data is about automating these apps, and this has led to containerisation.

Now, people are not just being responsive or competing by going fast. They are competing by being smart with their data – this is the “data is the new oil, data is the new currency” argument.

Being smart is not just a question of using your own data – I am not talking about data mining HDFS data lakes, here. This is about conflating your data with the world’s data. To take one example, one of our customers is a COVID vaccine company, and they were able to do fast data science models. They were comparing publicly available information versus their own private tests. So, it’s conflating the two that allows you to gain insight. Now, let’s think about Uber, which is nothing but conflating a lot of publicly available GIS information with its own private information about the driver and about where I want to go.

So in this enterprise journey, the world of storage and CIO has moved from thinking about cloudifying infrastructure to automating apps. That is where the puck is now. But mining data, both real time and batch wise, to gain insight – that’s where the puck is going.

How does the world of storage add value and not just turn into those old storage admins working away in the basement of enterprises and never seeing daylight? The reality is the world has moved on to apps, people and DevOps. So how does storage cope in this world? The answer lies in migrating to a data strategy.

So we can’t just be furniture guys concerned about wardrobes, drawers and boxes?

What does storage do? Storage is storing data, data management automation. And what we would do today is about freeing that data from one place and making it multi-cloud and multi-app and all of that. But in the end, you have to actually mine the data itself for insight.

So what is going to happen in the world of storage? At the bottom is, of course, the infrastructure. And nowadays, there is a software-defined storage overlay that has been overtaken by a Kubernetes storage overlay.

Now companies like Portworx or Robin.io – and there’s a host of other people, whether it’s the Cohesitys of the world or others in data management – this is about that automation layer. We have taken Kubernetes and we’ve taken that data, freed it from the array, and made it available across clouds, across containers, across different apps. But data management – this is the bulk of our business today.

Now when I say data, people think of data as one thing. But in my book, data is actually five things. Data is consumed as a service now, by these applications. So the app tier is at the top, but the first thing is databases, because databases are how data is stored. The second thing is data search – Elasticsearch, in particular. And I’m going to talk about these as services, because that’s really where the world is at. The third thing is analytics, and analytics can be an Excel spreadsheet, but it could be Tableau and come through the old-style analytics. And then there is AI and ML, which is unique. Why? Because this requires a different parsing of the data. It’s really GPU-based, TensorFlow, those types of things. Then finally, streaming, in which I include messaging. So I would merge these two boxes – streaming is really about having distributed data right out there, IoT, and stuff like that, or sensors of different kinds.

So these are the five data services. And this is actually the whole array of modern app solutions. It’s MongoDB, it’s Elastic, it’s Cassandra, it’s Kafka and Spark. This is not the old-style siloed world of Oracle and Sybase – which still exists. But this is the new world, where infrastructure is cloudified. Data is now all running on containerised apps. That’s the cloud-native world. But in addition, data is being consumed as a service in these five different sub-segments.

It looks like a stack. And it looks like the traditional place for suppliers like Pure is at the bottom, but that Pure is moving up the stack to provide services there. And the implication I’m drawing from what you’re saying, is that Pure Portworx will move even further up?

Pure is going to be in all these layers. These things are not mutually exclusive. And in fact, Portworx is an example of how we’re actually stitching these together. And in the future, there’s no reason why we couldn’t have a vertical slice that goes all the way, and even ties in the app as well.

But you would probably do that with partnerships, wouldn’t you? Because of the amount of code you have to write doing that?

Exactly. So this is what I think a CIO needs to do from an industry viewpoint. But we’re not doing this on our own. This is not such a secret anymore, but people think of Kubernetes as being the container orchestrator. And they’re right, that is the primary role of Kubernetes.

But now, I believe there’s the second coming of Kubernetes, and this is really as an infrastructure control plane. It’s a multi-cloud infrastructure control plane. Sometimes Kubernetes is orchestrating infrastructure with the help of CNI. That’s what [Tigera’s Project Calico] does. I’m also using CSI extensions of Kubernetes and orchestrating storage. That’s what Portworx or StorageOS [now Ondat] or Robin.io do. And then it will also be orchestrating VMs in the future, using KubeVirt, which is a new emerging technology that is gaining some currency. It’s still a technical concept, but I think more and more, you will see compute being orchestrated by Kubernetes.

That’s astounding. I’ve been with you up until now, but the idea of compute being organised by Kubernetes …

Well, there is this CNCF incubated technology called KubeVirt and it’s basically a way to orchestrate VMs using Kubernetes. You stand up VMs and then you can manage them just like you would containers, but now instead of containers being orchestrated, you’re instantiating and doing things like moving VMs and moving containers within VMs.
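
For readers unfamiliar with it, a KubeVirt VirtualMachine is declared like any other Kubernetes object. A pared-down sketch of the manifest, expressed here as the Python dict that would serialise to YAML – field values are illustrative, and the full spec is in the KubeVirt documentation:

```python
# Skeleton of a KubeVirt VirtualMachine object, as the dict that would be
# serialised to YAML and applied with kubectl. Field values are illustrative.
vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},
    "spec": {
        "running": True,                 # start the VM when it is created
        "template": {                    # pod-style template wrapping the VM
            "spec": {
                "domain": {
                    "resources": {"requests": {"memory": "1Gi"}},
                },
            },
        },
    },
}
print(vm["kind"])  # -> VirtualMachine
```

Once applied, the VM is managed through the same API machinery as containers, which is the point Thirumale is making.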

This is still in its infancy, but I think it’s going to happen. And this may sound a little bold, but I would say Kubernetes is really going to replace the vision of what OpenStack was intended to do. OpenStack was going to be this abstraction layer that allowed people to manage across any infrastructure, their storage, networking and compute. In storage it was Cinder and Swift, and so on.

My view is, it was so complicated and poorly done that it kind of crumbled. Of course, there are probably 150 companies using OpenStack that still swear by it. But these were mostly people who put a lot of effort into developing the standard. But in reality, the era of OpenStack is over. OpenStack was intended to be the universal way to manage infrastructure, and that’s what is happening now in a multi-cloud way with Kubernetes, with extensions to Kubernetes, called CNI, CSI, and then KubeVirt.

Do you see Kubernetes getting involved in composing datacentre IT resources?

Exactly. The old world is a machine-defined world. That’s how VMware was when the focus was on infrastructure. But now the focus is on as-a-service. Forget infrastructure – people want to consume services. So how do you shorten the path to something we consume as a service? You orchestrate it with containers and Kubernetes.

Look at Portworx, which is an amazing example of this. Our buyer is not the storage admin; our buyer is a DevOps person. And eventually, with Portworx Data Services, our buyer is going to be a line-of-business person.

Because you’re supplying services, not hardware boxes or software?

Thirumale: Yes, they’re consuming a service. Kubernetes was conceived as an app-organising framework, so it is naturally already set up to be app-oriented, but it is also about consumption. So this is data services as code. You had infrastructure as code, software as code; now you have as-a-service as code.

But you don’t have to go there. Pure could remember it is a storage company. What’s in it for Pure to move up the stack to this as-a-service control plane and provide service-level applications up there?

I’m not saying we’ve left our data management world behind. That’s the bulk of our revenue as Portworx and it’s growing. But this is a brand new thing we launched in September called Portworx Data Services. Basically it’s a one-click way to deploy data services. Think of this as a curated set of data services; over the next year there will be 12 to 14 of those.

Our analysis has revealed that these data services are probably about 75 to 80 per cent of what is being deployed out there in the modern app world. It’s not about siloed infrastructure stovepipes – this is the modern multi-cloud world. And what we will offer is essentially a one-click way to do it.

On day one, we’ll let you deploy them with a single click. We actually have curated operators that allow database sizing, so we’ll do basic database sizing. And it will start with a default that we’ve known over time with our experience. And you can just download it – it’s a containerised version of Couchbase, or a containerised version of Cassandra. And we will have an open source version initially, but in future we might also have partnership licences from the vendors.

You won’t be providing the equivalent of Couchbase or Redis, or Kafka yourself? What you’re providing is the facility for consuming them as a service?

Yes, this is a database-as-a-service platform. If I were to be grandiose, I would say it’s like an app store for databases. When I go on my phone, and I go click on the app store, Apple just provides me a way to get Facebook or to get Google Maps. So, remember the old walled garden phrase? This is kind of a walled garden for data services.

But we’re doing more. We’re not just providing you the ability to provision it. That’s the day one part. But we will now allow you to optimise deploying it on a multi-tenant infrastructure. One of the challenges we found is that people might understand how to run Redis, but they won’t know how to pick the instance size to get the IOPS optimised. And they sure as hell have no idea what to do when a container fails and they have to move to a different cloud or how to migrate it.

And then day three is backing it up and archiving it right through the lifecycle. So what Portworx Data Services is really doing is using Kubernetes in its new avatar as a service manager. Under the covers, a line-of-business person does not care that it’s Kubernetes. They may not even know – the point here is Kubernetes becomes invisible.

We’re not going to them and saying “Kubernetes, Kubernetes!” We’re just saying to them, you can consume a Postgres endpoint, consume a Redis endpoint, here’s Elasticsearch as a service. So our customer is really still a DevOps person, but one who is now going around offering these five data services to their line-of-business customers as a self-service model.

Sponsored by Pure Storage.

Veritas chasing hyper-automation with NetBackup 10

Veritas has launched v10.0 of its NetBackup data protection product along with an Autonomous Data Management strategy.

The company says it is planning for a future where its technology can provision, optimise, and repair data management services autonomously, with users self-servicing data protection and recovery. This will be based on so-called CloudScale Technology, which harnesses artificial intelligence (AI) and hyper-automation – Veritas’s term – to self-provision, self-optimise, and self-heal in web-scale multi-cloud environments. 

CEO Greg Hughes issued a statement: “Hackers are increasing the impact of their ransomware attacks by targeting cloud services and data. Veritas is laying out its strategy for how we solve that challenge for our customers, starting with tools, available today, that will help to reduce cloud footprint and costs, keep data safe from ransomware, and pave the way to Autonomous Data Management.”

We are told CloudScale Technology enables a containerised, programmable, and AI-powered microservices architecture that provides autonomous unified data management services across any cloud.

Coming a year after v9.0, NetBackup v10 features:

  • Enhanced multi-cloud storage and orchestrated tiering capabilities, including deep support for Amazon Web Services and expanded support for Microsoft Azure, to reduce the cost of backup storage by up to 95 per cent (no comparison base supplied). NetBackup 10 supports all major Kubernetes distributions and provides multi-cloud cross-platform recovery. 
  • Users can recover the data they want to any Kubernetes distribution. It also has new automated detection and protection for more platform-as-a-service workloads, including Apache Cassandra, the Kubernetes distributions and Microsoft Azure Managed SQL and Azure SQL.
  • Its deduplication capabilities have been upgraded (no details) and it uses elastic multi-cloud compute services to reduce costs.
  • V10 provides automatic malware scanning during backups and prior to restores to ensure infection-free recovery of data. Its AI-driven anomaly detection can automatically initiate malware scanning.
  • NetBackup SaaS Protection is now integrated with NetBackup 10 to provide a single-pane-of-glass view of a customer’s data protection estate for governance and compliance purposes.
  • V10 includes a new integrated no-cost base version of NetBackup IT Analytics, formerly known as Veritas APTARE, to provide AI-driven analytics and reporting.
  • V10 supports Azure Blob object storage as well as S3.
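
Veritas has not disclosed its anomaly detection model, but the general pattern – flag a backup whose size deviates sharply from its history, then trigger a malware scan – can be sketched. The threshold and statistics here are illustrative only:

```python
# Generic anomaly-triggered scanning: flag a backup whose size deviates
# sharply from the historical mean. Illustrative only, not Veritas's model.
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it sits more than `threshold` stdevs from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(latest - mean) > threshold * stdev

history = [100, 102, 98, 101, 99]  # recent backup sizes in GB
if is_anomalous(history, 180):     # a sudden jump, e.g. mass file encryption
    print("anomaly: queue malware scan before allowing restore")
```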

A Conquer Every Cloud micro-website contains more information.

Veritas’s strategy is to respond to customers’ growing use of more applications in multiple cloud environments and of more cloud-native applications and SaaS apps by extending NetBackup’s data protection services across these environments and so provide a single comprehensive backup capability. This will be easier to manage because it will have a degree of autonomy – with AI-triggered malware scanning, an IT estate-wide protection view, and self-scaling and -healing features.

Private equity-owned Veritas has to show that it can sustain itself and grow against competition from Commvault, Veeam, Cohesity, Druva, HYCU, and Rubrik. V10 should help it retain its existing customers and even gain some new ones, but it won’t enable Veritas to damage its competitors much. The name of the game is inroad prevention, not invade and conquer.