Business IT modernization supplier BMC bought Model9 in April last year, and has rebranded its software as the AMI Cloud.
BMC is a private equity-owned company with around $2 billion a year in revenue, supplying enterprise IT operations and service management software with a focus on automation, digital transformation and, latterly, the application of AI. Its customers range from mid-size businesses to Fortune 500 enterprises. AMI stands for Automated Mainframe Intelligence, and it is a suite of mainframe data management products encompassing AMI Storage Management, Storage Migration, and Storage Performance, focused on the z/OS environment.
John McKenny, SVP and GM of Intelligent Z Optimization and Transformation at BMC, said in a statement: “Organizations rely on their mainframes to run their always-on, digital world. By launching BMC AMI Cloud, we’re helping them modernize their mainframe by eliminating the need for specialized skills and unlock their mainframe data to gain the benefits of cloud computing, so they can secure the most value for their organizations.”
Model9 supplied mainframe VTL (Virtual Tape Library) and data export services so that mainframe customers could, on the one hand, avoid slow and costly mainframe tape and, on the other, move mainframe data to open systems servers and the public cloud for application processing that is not available in the mainframe world.
BMC has evolved the Model9 software assets into a trio of AMI Cloud offerings alongside the three AMI Storage products:
BMC AMI Cloud Data—Migrate mainframe backup and archive data to cloud object storage; based on the classic Model9 VTL technology.
BMC AMI Cloud Vault—Create an immutable data copy of the mainframe environment in the cloud with end-to-end compression and encryption for protection against cyber-threats and ransomware. This uses Model9’s data export capability.
BMC AMI Cloud Analytics—Transfer and transform mainframe data for use in cloud AI/ML analytics applications that simply may not be available in the mainframe environment.
Simon Youssef, head of Mainframe at Israel Postal Bank, was on message with his statement: “I am thrilled with the exceptional results we have achieved by implementing BMC’s data protection and storage management solutions for our mainframe data. With BMC AMI Cloud, we were able to modernize our legacy infrastructure and unlock new levels of efficiency and cost savings. BMC’s solution seamlessly integrated with our mainframe systems, eliminating the need for costly tape backups and streamlining our data management processes.”
BMC told an IT Press Tour in Silicon Valley that it is investing heavily in integrating AI throughout its portfolio. CTO Ram Chakravarti said: “Operationalising innovation is the key to competitive advantage … AI is data’s best friend.” He talked about BMC building a set of interconnected DigitalOps products.
After talking with BMC staffers, it seems likely that customers will have several AI bots (GenAI large language model assistants) and that these bots will interact. A CEO at an enterprise customer could, for example, ask Microsoft Copilot a question, and Copilot could then use sub-bots, such as the BMC bot, to fetch domain-specific information. There could be a contest between bot suppliers to get the CxO-facing, top-level bot in place and then expand their bot’s applicability in a land-and-expand strategy.
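To make the pattern concrete, here is a minimal, hypothetical sketch of that bot-of-bots idea: a top-level assistant routing a question to a domain sub-bot. The bot names and the keyword routing are illustrative assumptions, not any vendor’s actual API.

```python
from typing import Callable

# Hypothetical domain sub-bots; in practice each would wrap a vendor's
# domain-specific assistant (e.g. a mainframe-operations bot).
def mainframe_bot(question: str) -> str:
    return f"[mainframe-ops answer to: {question}]"

def finance_bot(question: str) -> str:
    return f"[finance answer to: {question}]"

# The top-level bot routes by topic keyword; real orchestrators use
# intent classification or function-calling rather than keywords.
ROUTES: dict[str, Callable[[str], str]] = {
    "mainframe": mainframe_bot,
    "finance": finance_bot,
}

def top_level_bot(question: str) -> str:
    for keyword, sub_bot in ROUTES.items():
        if keyword in question.lower():
            return sub_bot(question)  # delegate to the matching domain bot
    return "No domain bot matched; answering from the general model."

print(top_level_bot("What is our mainframe backup status?"))
```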
With Cohesity and Veritas having announced their impending merger, data protection and cyber resilience rivals Commvault and Rubrik are trying to promote themselves as safe havens for customers that they think may be overcome by fear, uncertainty and doubt.
Cohesity, the main company in this merger, has been putting out the message that the deal will benefit customers and that no Veritas NetBackup customer will be left behind. The software will not go end-of-life, there will be no forced migration to Cohesity’s backup, and Cohesity customers could well benefit from Veritas’ cloud capabilities. By bringing Cohesity’s Gaia GenAI offering to the more than 300 EB of Veritas backup data, NetBackup users will be able to turn this dead data into an asset and mine it for insights.
That’s not the view of Commvault VP of Portfolio Marketing Tim Zonca, as outlined in a blog entitled “Avoid Veritas + Cohesity Chaos.” He writes: “Uncertainty and chaos should not come from vendors who [purport] to help maintain resilience.”
“The integration process between any two companies is a complex and time-consuming endeavor, likely taking several years to complete. At a time when data has never been more vulnerable to attacks, the transition can be quite chaotic for impacted customers and partners. But it doesn’t have to be.” Of course not. Come to Commvault: “When it comes to cyber resilience, there is no time for chaos. Commvault Cloud is the cure for chaos.”
Rubrik’s Chief Business Officer Mike Tornincasa blogs: “while the merger may make sense for Veritas and Cohesity, it will have a negative impact on their customers who are facing exploding volumes of data across Data Centers, Cloud, and SaaS with unrelenting threats of cyber attack, ransomware, and data exfiltration.”
Tornincasa says both Cohesity and Veritas are coming from positions of weakness: “Veritas has been struggling for relevance for the last ten years,” while “Cohesity has not been growing fast enough to raise another round of capital at an acceptable valuation.”
He chisels away at the financial background: “This is not a deal done in happy times. Cohesity’s valuation in 2021 was $3.7 billion. It should have increased considerably in the past three years if they had been healthy. Veritas was acquired by Carlyle Group (a Private Equity firm) for $7.4 billion in 2016. This would be a minimum $11 billion+ business if these businesses were growing. Instead, the combined valuation is presented as $7 billion.”
In his view: “Cohesity is now legacy,” and “We can anticipate Carlyle to bring the same playbook to the combined entity – no improved quality, higher prices and less choice.” So come to Rubrik instead.
While both Commvault and Rubrik cast FUD on the Cohesity-Veritas merger, their FUD-elimination strategy is to suggest Cohesity and Veritas customers come to Commvault or Rubrik instead. This would mean those customers having to move to or adopt a separate backup regime – yet the upheaval of moving to a separate backup regime is precisely what they use to criticize Cohesity and Veritas. It’s nice to have your cake and eat it.
Sanjay Poonen
Cohesity CEO Sanjay Poonen told an IT Press Tour party in San Jose: “I don’t expect a single Veritas customer to defect. … Trust me. It’s not my first rodeo. … This is not Broadcom taking over VMware.”
He told us: “We’ll not talk about our roadmap until we meet regulatory approval,” and: “I want to keep our competitors guessing. … Let (them) spread the word it’s conflict and chaos. … I predict the FUD will fall flat.”
Cohesity and Veritas together will have around 10,000 customers and more than 300 EB of data under management; Rubrik has 6,000 customers. Poonen said the merged entity would count 96 percent of the Fortune 100 as customers; just four are missing.
Eric Brown
Cohesity CFO Eric Brown said Cohesity is working on an AI-enabled file translation facility to make Veritas backup files accessible to Cohesity’s Gaia AI. There was no mention at all of migrating Veritas backup files to Cohesity’s format.
Cohesity plus Veritas revenue in 2024 will be circa $1.6 billion, which is comparable to Veeam. Rubrik has about $600 million in revenue today. The merged Cohesity-Veritas business will be very profitable in a GAAP net income sense, Brown said. “We fully intend to go public in due course,” and “our value when we go public will likely be $10 billion.”
B&F thinks the merger will spark interest in other backup supplier combinations. For example, IBM has a legacy backup customer base, which has just received an indirect valuation. Rubrik may look to an acquisition to bolster its customer count and data under management. Veeam, too, may look to an acquisition to enlarge its business and regain a clear separation between itself and the other players. The backup business is a-changin’.
Azure Native Qumulo Cold (ANQ Cold) is the new cloud-native cold file data storage service from Qumulo.
Qumulo provides scalable, parallel-access file system software and services for datacenters, edge sites, and the cloud, with a scale-anywhere philosophy. The aim is to make its file services available anywhere in a customer’s IT environment – on-premises, in the public cloud, and from any business location – via clusters that are globally managed through a single namespace. Qumulo ported its Core file system software to run natively in the Azure public cloud in November last year and has now extended its Azure coverage.
Ryan Farris, VP of Product at Qumulo, said in a statement: “ANQ Cold is an industry game changer for economically storing and retrieving cold file data.”
ANQ Cold is positioned as an on-premises tape storage alternative and is fully POSIX-compliant. It can be used as a standalone file service, as a backup target for any file store, including on-premises scale-out NAS, and integrated into a hybrid storage infrastructure using Qumulo Global NameSpace – a way to access remote data as if it were local.
Kiran Bhageshpur
Qumulo CTO Kiran Bhageshpur said: “With ANQ Cold, we offer enterprises a compelling solution to protect against ransomware. In combination with our cryptographically signed snapshots, customers can create an instantly accessible ‘daily golden’ copy of their on-premises NAS data, Qumulo or legacy scale-out NAS storage.”
He claims there is no other system that is as affordable on an ongoing basis, while also allowing customers to recover to a known good state and resume operations as quickly as ANQ Cold.
In another use case, ANQ Cold provides picture archiving and communication system (PACS) customers with the ability to instantly retrieve cold images from a live file system, and has seamless file compatibility with PACS applications.
J D Whitlock, CIO of Dayton Children’s Hospital, said: “In pediatrics, we have to keep imaging files until the child turns 21. After a few years, it’s unlikely we will need to look at the image. But we have to keep it around for medical-legal purposes. Keeping this massive store of imaging data on secure, scalable, reliable, and cost-effective cloud storage is a perfect solution for us.”
Farris said: “Hospital IT administrators in charge of PACS archival data can use ANQ Cold for the long-term retention of DICOM images at a fraction of their current on-premises legacy NAS costs, while still being able to instantly retrieve over 200,000 DICOM images per month without extra data retrieval charges common to native cloud services.”
ANQ Cold has a pay-as-you-go pricing model, with customers paying for the capacity consumed. It costs $0.009/GB/month and is said to be up to 90 percent less expensive than other cloud file storage services. Its $/TB/month cost is $9.95 in the anchor Azure regions of westus2 and eastus2. Prices may be higher in other regions due to variability in regional Azure costs.
Customers have 5 TB of data retrieval included each month. After 5 TB, they’ll be charged at the rate of $0.03/GB until the first of the following month, when another 5 TB will be allowed. There is a minimum 120-day retention period. Data deleted before 120 days will be billed as if it were left on the system for 120 days.
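As a rough illustration of the pricing model described above, the sketch below computes a monthly bill from the quoted figures. It is a simplification: it uses the $9.95/TB/month anchor-region rate and decimal TB-to-GB conversion, and it ignores the 120-day minimum retention rule.

```python
# Illustrative ANQ Cold monthly bill, using the list prices quoted above.
STORAGE_PER_TB_MONTH = 9.95   # $/TB/month in the westus2/eastus2 anchor regions
FREE_RETRIEVAL_TB = 5         # TB of retrieval included each month
RETRIEVAL_PER_GB = 0.03       # $/GB beyond the included 5 TB

def monthly_cost(stored_tb: float, retrieved_tb: float) -> float:
    storage = stored_tb * STORAGE_PER_TB_MONTH
    # Only retrieval beyond the monthly allowance is billed.
    billable_gb = max(0.0, retrieved_tb - FREE_RETRIEVAL_TB) * 1000
    return storage + billable_gb * RETRIEVAL_PER_GB

# Example: 100 TB stored and 8 TB retrieved in one month.
# Storage: 100 x $9.95 = $995; retrieval: 3 TB over the allowance,
# i.e. 3,000 GB x $0.03 = $90; total $1,085.
print(f"${monthly_cost(100, 8):,.2f}")
```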
We can expect Qumulo’s public cloud coverage to extend further as it develops its ability to provide file services in hybrid, multi-cloud environments.
Solix, the enterprise information archiving survivor, says it’s working on privately trained AI assistants to help customers converse with their active and archived enterprise app data.
The company was founded 22 years ago by CEO Sai Gundavelli to connect with enterprise applications such as ERP, CRM, and mainframe systems. Its software is application-aware, and it ingests, creates, and stores metadata about these applications’ information documents in a Common Data Platform. A set of applications is built on this platform.
Sai Gundavelli
The Solix software ingests older documents and data from these applications’ primary storage and archives them to lower-cost stores such as public cloud object instances. Its hundreds of customers are typically global Fortune 2000 companies and include Wells Fargo Bank, Elevance Health, Sterling Pharmaceutical, LG Electronics, Helen of Troy, and Korea Telecom, with a preponderance in the banking, insurance, and pharmaceutical markets.
Solix customers
We met Solix on an IT Press Tour in Santa Clara, where Gundavelli said revenue had grown 50 percent from 2022 to 2023, to the $20-30 million region, and that the privately owned company is profitable. It is developing LLMs it says are capable of being trained on a customer’s private data to improve accuracy.
B&F has been writing about the overall storage market for 20 years or more and has not come across Solix before, which prompts us to take a look at the reasons for this.
We think Solix does its info archiving top-down, from the enterprise app viewpoint, not bottom-up from the storage array level. B&F looks at the storage market from the storage angle, which means we have had a blind spot for higher-level suppliers such as Solix. We categorize suppliers such as Komprise and Datadobi, and latterly Hammerspace and Arcitecta, as information lifecycle management operators. They are, we can say, storage array- and filer-aware, and ingest data and metadata from storage, both hardware-defined and software-defined.
They do not ingest data directly from JD Edwards or SAP HANA or enterprise applications like that. Solix does, and operates at a higher level, as it were, in the enterprise information and storage stack than the more storage-focused operators such as Komprise, Datadobi, Hammerspace, and Arcitecta.
We checked this point of view with Sai Gundavelli, and he said: “I agree on your assessment why B&F never encountered us. Take for example:
Komprise – Unstructured Data Only
Datadobi – Unstructured Data Only
Hammerspace – Unstructured Data Only
MediaFlux – Never encountered them, but they claim all types of data
“Yes, we are working with application folks, not the storage folks. We work across structured, unstructured, and semi-structured, we have 186 connectors, starting from mainframe, DB2, Sybase, Informix, VAX/VMS. The most complex thing we manage for enterprises is structured data, which is key for SAP HANA, Snowflake, Databricks, and data warehousing and AI.
“Let’s take Cohesity or Druva, they are clearly in a category of backup and recovery. You clearly have an argument for online backup and making the data available for AI. You are correct on that, but we haven’t seen that yet. Most enterprises operate with each division making their own decision for either traditional tape backup or online, they are disparate. First, globally all backups in an enterprise need to be online. Secondly, all data to be enriched with metadata. Thirdly, it also has to provide governance and compliance globally. Further, it has to connect to machine learning algorithms, and one needs to train the algorithms as well. All these things are possible, but it is all about execution.
“Take Citibank, AIG, Kaiser Permanente, Pepsi – not that they are not doing backup and not that they may have Cohesity. We have not seen them to improve application performance nor manage compliance from a data retention or data sovereignty perspective etc.
“Irrespective, enterprises want to embrace AI. Whoever can help them achieve faster are the winners. We believe having access and understanding of all enterprise data is key, provides competitive advantage to bring AI to enterprises. For example, just imagine an AI algorithm which can predict cancer. How do you knit the data, with compliance and governance considerations, an element of structured data querying from Epicor or Cerner or AllScripts, and also bringing in unstructured medical imaging provide as an API and ensure Canada data is processed in Canada only and US data in US only? See which of the companies can enable it. Solix can do that.”
NetApp is using AI/ML in ONTAP arrays to provide real-time file ransomware attack detection. Its AIPOD has also been certified by Nvidia so customers can use NetApp storage for AI processing with DGX H100 GPU servers.
The sales pitch is that NetApp is adding Autonomous Ransomware Protection with Artificial Intelligence (ARP/AI) to its ONTAP storage array software, with adaptive AI/ML models looking at file-level signals in real time to detect – we’re told – even the newest ransomware attacks with planned 99 percent-plus precision and recall.
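For context, precision and recall are the standard detection metrics: precision is the fraction of flagged events that are real attacks, and recall is the fraction of real attacks that get flagged. With TP, FP, and FN denoting true positives, false positives, and false negatives:

$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}$$

A 99 percent-plus target on both therefore means fewer than one false alarm per hundred detections, and fewer than one missed attack per hundred real ones.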
NetApp CSO Mignona Cote claimed in a statement: “We [were] the first storage vendor to explicitly and financially guarantee our data storage offerings against ransomware. Today, we are furthering that leadership with updates that make defending data comprehensive, continuous, and simple for our customers.”
There are four more allied anti-ransomware initiatives from NetApp:
BlueXP Ransomware Protection’s single control plane, now in public preview, coordinates and executes an end-to-end, workload-centric ransomware defense. Customers can identify and protect critical workload data with a single click, accurately and automatically detect and respond to a potential attack, and recover workloads within minutes.
Application-aware ransomware protection via SnapCenter 5.0 offers immutable ransomware protection, applying NetApp’s ransomware protection technologies, previously used with unstructured data, to application-consistent backups. It supports tamper-proof Snapshot copy locking, SnapLock-protected volumes, and SnapMirror Business Continuity to protect applications and virtual machines on-premises with NetApp AFF, ASA, and FAS, as well as in the cloud.
BlueXP Disaster Recovery, now generally available, offers seamless integration with VMware infrastructure and provides storage options for both on-premises and major public cloud environments, eliminating the need for separate standby disaster recovery (DR) infrastructure and reducing costs. It allows smooth transitions from on-premises VMware infrastructure to the public cloud or to an on-premises datacenter.
Keystone Ransomware Recovery Guarantee extends NetApp’s current Ransomware Recovery Guarantee to the Keystone storage-as-a-service offering. NetApp will warrant snapshot data recovery in the event of a ransomware attack and, if snapshot data copies can’t be recovered through NetApp, the customer will be offered compensation.
AIPOD
AIPOD is NetApp’s new name for its current ONTAP AI offering, which is based on Nvidia’s BasePOD. The company says it’s an AI-optimized converged infrastructure for organizations’ highest priority AI projects, including training and inferencing.
Arunkumar Gururajan, VP of Data Science & Research at NetApp, said: “Our unique approach to AI gives customers complete access and control over their data throughout the data pipeline, moving seamlessly between their public cloud and on-premises environments. By tiering object storage for each phase of the AI process, our customers can optimize both performance and costs exactly where they need them.”
NetApp announced the following:
AIPOD with Nvidia is now a certified Nvidia BasePOD system, using Nvidia’s DGX H100 platform attached to NetApp’s AFF C-Series capacity flash systems. It is claimed to drive a new level of cost/performance while optimizing rack space and sustainability. It continues to support Nvidia’s DGX A100 and SuperPOD architecture.
New FlexPod for AI reference architectures extend the NetApp-Cisco FlexPod converged infrastructure bundle to support Nvidia’s AI Enterprise software platform. FlexPod for AI can be extended to use Red Hat OpenShift and SUSE Rancher, and new scaling and benchmarking have been added to support increasingly GPU-intensive applications.
NetApp is the first enterprise storage vendor to formally partner with Nvidia on its OVX systems. These provide a validated architecture for selected server vendors to use Nvidia L40S GPUs along with ConnectX-7 network adapters and Bluefield-3 DPUs.
We think all primary file storage vendors will add integrated AI-based real-time ransomware detection and it will become a standard checkbox item. Learn more about the NetApp offerings to support Gen AI here, and find out more about its cyber-resiliency offerings here. NetApp will be offering the first technology preview of the ONTAP ARP/AI within the next quarter.
Myriad, the new all-flash storage software from Quantum, will support Nvidia’s GPUDirect protocol to feed data faster to GPU servers.
Myriad was initially devised by technical director Ben Jarvis, who has filed several Myriad-related patents and came up with the core transactional key-value metadata database idea. He’s a long-term Quantum staffer, saying that back in 2006 he “joined ADIC and found out that Quantum was buying it the next day.”
Ben Jarvis.
Quantum bought ADIC for its backup target deduplication technology, which is used in the present-day DXi backup target appliances.
Jarvis became involved with the StorNext hierarchical file manager, which was initially based on disk storage with tiering off to tape. As soon as the first all-flash arrays appeared, he thought Quantum and StorNext would need to support SSDs, and that a disk-oriented operating system was inappropriate for this, however much it was modified. A flash storage OS needed to be written from first principles so that it would be fast enough for NVMe drives and could scale up and out sufficiently.
Myriad was announced last year. Nick Elvester, Quantum’s VP for Product Operations, told an IT Press Tour in Denver that there has been high interest in Myriad, and CEO Jamie Lerner added: “Myriad has had million-dollar rollouts.” John Leonardini, a principal storage engineer for customer Eikon Therapeutics, is evaluating Myriad. “It’s one of the very fastest things on my floor. Myriad is the first Quantum product we cut a PO for,” he said.
Leonardini said Eikon currently uses Qumulo scale-out storage and indicated that Myriad outperformed it, with a five-node setup providing more data IO than racks of Qumulo gear. He also said Myriad’s features were more usable than aspects of Qumulo’s data services such as snapshots, and he likes Myriad’s automated load balancing and new node adoption. “We can click a button, come back in a day, and everything is rebalanced. That’s a win in my book.”
Jamie Lerner.
Elvester said the main Myriad customer interest lies in using its high performance for AI applications. It currently supports NFS v3/4 and SMB, and has a client providing parallel access. Myriad services include encryption, replication, distributed and dynamic erasure coding, data cataloging, and analytics. S3 access, a POSIX client, and GPUDirect support are roadmap items.
Jarvis said Myriad has a dedicated client access system, like Lustre’s, and doesn’t use parallel NFS (pNFS). One vendor does use pNFS, although the standard has not been generally adopted by the industry.
He said the suppliers that defined pNFS did so in a way that suited them and not others. For these others, adopting pNFS means bending their NFS software uncomfortably. “By using our client we avoid all that.”
Jarvis’s enthusiasm for writing from scratch extended to Myriad’s internal RoCE RDMA fabric. “It’s not NVMe-oF but our own protocol similar to NVMe-oF.”
Myriad is containerized, and a cloud-resident version is being produced but is not yet available.
Asked about GPUDirect, Jarvis said: “Storage for AI does not begin and end with GPUDirect. We’re going to check the box. We’re going to do innovation. Not all GPUDirect implementations are good.”
Lerner said: “GPUDirect and SuperPOD support in Myriad is coming out imminently.”
He was asked about Quantum’s tape business and said tape is declining faster than before. New architectures don’t use tape. Tape cartridge capacities are larger than before and tapes are filled up rather than being part-filled and shipped off to Iron Mountain. So fewer cartridges are needed and they tend to stay in the libraries.
We have heard from sources that the hyperscalers over-bought tape a year or so ago and are now continuing to digest the purchases, slowing down their buy rate.
Looking at Quantum overall, Lerner said: “Quantum is inventing again. We’re generating patents again. Our innovation engine is running again. In the future I imagine everything we have is flash and then there will be tape. We have to let the legacy go.”
As we wrote in a storage news ticker, Huawei has announced that it has an OceanStor Arctic device for storing archive data.
A presentation at MWC24 in Barcelona by Huawei’s Dr Peter Zhou, president of the data storage product lines, introduced this coming product. He told the audience that it can reduce total connection cost by 20 percent compared to tape, and power consumption by 90 percent compared to hard drives. We were intrigued and asked Huawei for more information.
Now Huawei’s China HQ has added some more meat to the bones. A spokesperson told us:
“Huawei’s MED (magneto-electric disk) brings brand-new innovation against magnetic media. The first generation of MED will be as a big capacity disk. The rack capacity will be more than 10 PB and power consumption less than 2 KW. For the first generation of MED, we will position it mainly for archival storage. It will be released overseas about 2025H1.”
There is no existing MED storage product from any supplier we are aware of; this is brand-new technology. The fact that it is a disk – Huawei did not say “drive” – means it most likely spins and has tracks and a read-write head. We do not know its size, so it would not necessarily employ the same 3.5-inch form factor as current nearline storage drives from Seagate, Toshiba, and Western Digital. Indeed, Western Digital once floated the idea of an archival disk drive in a larger-than-3.5-inch form factor.
The magneto-electric effect refers to a linkage between the magnetic and the electric properties of a material. A scientific paper entitled “Electrical control of magnetism by electric field and current-induced torques” reviews discoveries and approaches in the field, and the authors “present various families of devices harnessing the electrical control of magnetic properties for various application fields.”
These include the MESO (magneto-electric spin-orbit) field. “MESO is expected to strongly reduce power consumption for computation by harnessing ferroic materials that have embedded non-volatility and by relying on a voltage rather than a current to switch the ferroic order parameter.”
We know that MRAM (magnetic RAM) products exist and these are solid state devices with binary signals stored in cells, not in tracks on a disk. That implies MED does not use MRAM technology.
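For reference, in its simplest linear form the magneto-electric effect couples the two fields directly – an applied magnetic field induces an electric polarization, and an applied electric field induces a magnetization – via a material-specific magnetoelectric tensor $\alpha$:

$$P_i = \alpha_{ij} H_j, \qquad \mu_0 M_i = \alpha_{ji} E_j$$

Whether, and how, Huawei’s MED exploits this coupling at the device level is something the company has not disclosed.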
Airbyte has announced the availability of PyAirbyte, an open source Python library intended to make it easy to move data across API sources and destinations by enabling Airbyte connector resources to be created and managed in code rather than through the user interface (UI). Python users (most existing data pipelines are written in Python) can add one command to their code and gain access to Airbyte’s more than 250 open source data connectors for moving data. Airbyte and LangChain have also teamed up to offer a new document-loading package for LLM-driven applications, making those 250-plus data sources and more than 10,000 distinct datasets available for GenAI projects.
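As a minimal sketch of what that looks like in practice (API details may differ between PyAirbyte releases; “source-faker” is Airbyte’s demo connector):

```python
# pip install airbyte  <- the one command that adds PyAirbyte to a pipeline
import airbyte as ab

# Fetch a connector by name; PyAirbyte can install it on demand.
source = ab.get_source(
    "source-faker",
    config={"count": 1_000},
    install_if_missing=True,
)
source.check()               # validate the connector configuration
source.select_all_streams()  # sync every stream the source exposes
result = source.read()       # run the sync into a local cache

# Iterate over the synced streams and their records.
for name, records in result.streams.items():
    print(f"stream {name}: {len(list(records))} records")
```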
…
SaaS app data protector Alcion has unveiled a partner program, “Alcion for Partners,” tailored for Managed Service Providers (MSPs) and aiming to protect their Microsoft 365 customers’ environments. It says it has a specifically tailored engagement process and terms to fit the needs of the MSP market and enhance partners’ ability to grow their business. The Alcion partner portal provides, we’re told, an intuitive unified experience to monitor operations, manage configuration, and track licensing across all MSP customer tenants. More details here.
…
Cloud NoSQL database supplier Couchbase has added vector search to its Capella Database-as-a-Service (DBaaS) offering and to Couchbase Server. It says it’s the first database platform to announce vector search optimized for running onsite, across clouds, and on mobile and IoT devices at the edge. Vector search is a necessary part of GenAI large language model (LLM) processing, with use cases including chatbots, recommendation systems, and semantic search. Couchbase says its customers get similarity and hybrid search, combining text, vector, range, and geospatial search capabilities in one, plus retrieval-augmented generation (RAG) to make AI-powered applications more accurate, and better performance because all search patterns can be supported within a single index to lower response latency. Couchbase is extending its AI partner ecosystem with LangChain and LlamaIndex support to further boost developer productivity.
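For readers new to the idea, vector search ranks items by how close their embedding vectors sit to a query’s embedding. The sketch below is a generic, from-scratch illustration of that ranking, not Couchbase’s API; real systems use high-dimensional embeddings from a model and approximate-nearest-neighbor indexes rather than a linear scan.

```python
# Generic vector similarity search: rank documents by cosine similarity
# between their embeddings and a query embedding.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional embeddings; a real embedding model produces
# hundreds or thousands of dimensions.
docs = {
    "refund policy":  np.array([0.9, 0.1, 0.0, 0.2]),
    "shipping times": np.array([0.1, 0.8, 0.3, 0.0]),
    "return an item": np.array([0.8, 0.2, 0.1, 0.3]),
}
query = np.array([0.85, 0.15, 0.05, 0.25])  # e.g. "how do I get my money back?"

# A hybrid search would combine this score with text, range, or
# geospatial predicates before ranking.
for name, vec in sorted(docs.items(), key=lambda kv: -cosine_similarity(query, kv[1])):
    print(f"{cosine_similarity(query, vec):.3f}  {name}")
```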
…
A 2024-25 DCIG Enterprise Multi-site File Collaboration Solutions report names five top suppliers: CTERA Enterprise File Services Platform, Nasuni File Data Platform, NetApp Cloud Volumes Edge Cache, Panzura CloudFS, and Qumulo Scale Anywhere Platform. It evaluated a whole bunch of suppliers – see the image. Get a copy of the report by completing the signup form here.
…
Dell PowerScale storage is being used by Subaru Lab to hold data for its AI-based EyeSight Driver Assist Technology (EDAT) development. EDAT, built to monitor traffic movement, optimize cruise control, and warn drivers if they stray outside their lane, is used in more than 5.5 million Subaru vehicles. It is based on stereo camera images rather than lidar, and builds 3D images of the road around the moving vehicle. The aim is to reduce or eliminate vehicle collisions rather than develop fully automated driving technology.
…
Reuters reports that file collaborator Egnyte has hired bankers to facilitate an IPO with a potential $3 billion valuation. JPMorgan Chase is the lead banker, along with UBS. Egnyte is profitable and has raised a total of $138 million in funding; it was valued at $460 million when it raised $75 million in 2018. Egnyte operates in the cloud file services market, overlapping with Box and Dropbox on the consumer and endpoint side, and with CTERA, Nasuni, and Panzura on the filer replacement side.
…
GenAI large language model (LLM) language processing unit (LPU) developer Groq has acquired Definitive Intelligence and set up a GroqCloud business, led by Definitive Intelligence co-founder and CEO Sunny Madra. The aim is to significantly expand access to the LPU Inference Engine. GroqCloud is a developer playground with fully integrated documentation, code samples, and self-serve access, available today at https://console.groq.com/. There will be a separate Groq Systems business unit, developing the LPU hardware and software, and serving the public sector and customers that require Groq hardware for AI compute centers.
…
Huawei’s Dr. Peter Zhou, president of the data storage product lines, presented at MWC24 in Barcelona on storage for AI, mentioning several existing products: DME, OceanProtect E8000 and X9000, OceanStor A800, OceanFS, and OceanStor A310. He also introduced a new product, the OceanStor Arctic magneto-electric storage system for cold data. There is very little information available about it. We are told it can reduce total connection cost by 20 percent compared to tape, and power consumption by 90 percent compared to hard drives. By implication, then, it does not use tape media or hard disk drives or, we assume, SSDs. That could mean it’s an optical disk device. However, the magneto-electric effect denotes a coupling between the magnetic and the electric properties of a material, which doesn’t sound much like optical disk technology, which relies on light rather than magnetism. We have asked Huawei for more information about this OceanStor Arctic product.
Dr. Peter Zhou presenting at MWC24.
…
A TechNewsSpace article claims Micron is set to use nanoimprint technology from Canon in some layering stages of DRAM chip production. Nanoimprinting can achieve sub-nanometer resolutions, and a nanoimprinter could be five times less costly than EUV optical lithography equipment. A nanometer is one-billionth of a meter. Read a Canon article on nano-imprint lithography here.
…
ObjectiveFS is a distributed, log-structured, POSIX-compliant file system backed by an object store. The latest v7.2 release includes a new custom snapshot schedule for user-defined automatic snapshots, new mount options, performance improvements, and more. New features and performance improvements include:
Custom snapshot schedule for user-defined automatic snapshot schedules (learn more)
New writedelay mount option to allow the kernel to delay writes to the filesystem
New filehole mount option for user-defined maximum file hole size
Increased read ahead in hpc mode to improve large read performance
Improved Linux file attributes implementation to support newer FUSE versions
Improved filesystem readiness for mounts from command line
Added new regions for AWS and GCS
Improved read performance with up to 5 percent speedup
…
Samsung has started testing the industry’s first 256 gigabyte (GB) SD Express microSD card, with maximum speeds of 800 MBps. It is scheduled to be ready for purchase later this year. The company has also started mass production of its 1 TB UHS-I microSD card, set to launch in Q3 2024.
…
Western Digital is selling 80 percent of its SanDisk Semiconductor Shanghai business unit, based in Shanghai, China, to partner JCET, according to Reuters. JCET is a Chinese test and assembly firm and is paying $624 million in cash for the stake in the flash drive business. The SanDisk unit will now be a joint venture between JCET and minority partner Western Digital, which retains a 20 percent stake.
…
Zilliz, supplier of the open-source Milvus vector database, has announced Zilliz Cloud BYOC, with BYOC meaning Bring Your Own Cloud. Customers can host their vector embedding data within their private cloud networks. It uses a dual-VPC architecture. Private cloud hosting enables controlled data access permissions and can ensure compliance with regulatory standards. Deployment, upgrades, and maintenance are managed by Zilliz. The BYOC facility is available to customers at a scale of 128 CUs (Zilliz Computing Units) or more. Read a Zilliz blog to find out more. See Zilliz pricing plans here.
Veeam has announced a managed Data Cloud offering using acquired Cirrus Backup-as-a-Service (BaaS) software based on Azure to protect Microsoft services.
Veeam first entered the BaaS arena by buying Cirrus software from partner CT4 in October last year. It already offered managed backup and DR services delivered through Veeam Cloud and Service Provider (VCSP) partners. These covered BaaS for Microsoft 365, public cloud (AWS, Azure, and Google Cloud), and managed and off-site backup. There was also Veeam Backup for Salesforce launched in October 2022. Now it is pitching a cloud-native, Azure-based Veeam Data Cloud (VDC) covering M365 and Azure.
CEO Anand Eswaran said in a statement: “As the #1 global provider of data protection and ransomware recovery and the leader in backup for Microsoft 365, we’re bringing those trusted capabilities – for Microsoft 365 and Microsoft Azure – and delivering them as-a-service.”
The SaaS protection area is rapidly expanding. We could mention Asigra; Commvault Cloud powered by Metallic; Druva; HYCU; Keepit; OwnBackup, which partners with Cohesity; and many more.
There are basically two approaches. The first is where, like Veeam, the backup supplier writes its own connector software to link to the SaaS application, focusing on the most popular SaaS apps in its customer area, such as M365. The second is that either the SaaS application suppliers themselves or their customers are encouraged to build connectors (or have them built) to link their SaaS app to a protection supplier. Asigra and HYCU exemplify this approach and say it provides greater coverage of the plethora of SaaS apps available, with HYCU offering customers GenAI-based connector building facilities.
VDC uses Azure Blob storage to hold the backups. It’s an all-in-one service, including backup software, infrastructure, and storage. Backups are continuously versioned and maintained, and VDC is built using zero trust ideas.
Veeam has based VDC for Microsoft 365 on its existing Veeam Backup for Microsoft 365, which is now delivered as a service and provides backup and recovery for Exchange Online, SharePoint Online, OneDrive for Business, and Teams. VDC for Azure is a fully hosted and pre-configured backup service, providing backup and recovery for Azure VMs, Azure SQL, and Azure Files. It has customizable recovery point objectives (RPOs) and recovery time objectives (RTOs).
We asked Veeam where Veeam Backup for Salesforce fits in with VDC. Rick Vanover, senior director of product strategy at Veeam, said: “We have many opportunities to extend Veeam Data Cloud to other Veeam offerings. We have strong demand for the service on offer with more planned.”
B&F expects more Azure-based SaaS apps to be protected, and possibly the extension of VDC to the AWS and Google clouds. Customers have the choice of using separate best-of-breed SaaS app protection products, such as Keepit and OwnBackup, or extending their existing comprehensive backup supplier’s coverage to the SaaS app arena. The top SaaS app protection products have built up a lead, with the incumbents now trying to get up to speed and match them.
NetApp revenue rose 5 percent for Q3 FY24 on the back of record all-flash array sales.
It reported $1.6 billion in revenues with profits of $313 million, versus $65 million a year earlier. There was a record all-flash array (AFA) annual run rate of $3.4 billion, up 21 percent year-on-year. NetApp introduced its lower-cost AFF C-Series product in October last year and says sales exceeded expectations, as did those of the ASA series of all-flash SAN arrays.
CEO George Kurian said on the earnings call: “I’m pleased to report that we delivered exceptional performance across the board, despite an uncertain macro environment. Revenue was above the midpoint of our guidance, driven by the momentum of our expanded all-flash product portfolio. This strength coupled with continued operational discipline yielded company all-time highs for consolidated gross margin, operating margin, and EPS for the second consecutive quarter.”
Kurian is “confident in our ability to capitalize on this momentum, as we address new market opportunities, extend our leadership position in existing markets, and deliver increasing value for all our stakeholders.”
The revenue rise reverses four successive down quarters and contrasts with competitors’ results. Product sales of $747 million rose 9.5 percent, and services revenues went up to $859 million from $844 million.
Financial summary
Consolidated gross margin: 72 percent
Operating cash flow: $484 million
Free cash flow: $448 million
Cash, cash equivalents, and investments: $2.92 billion
Kurian said NetApp expects “a sustainable step-up in our baseline product gross margin going forward with the continued revenue shift to all-flash.” If Pure’s assertion that AFAs will replace disk and hybrid arrays by 2028 comes true, NetApp, with its installed base conversion prospects, could be a bigger beneficiary than Pure.
NetApp’s all-flash revenue of $850 million was 7 percent ahead of rival Pure’s latest quarterly revenue of $789.8 million, which was down 3 percent annually.
Why did NetApp do so well this quarter? Lower-cost all-flash arrays are the short answer. In contrast to Dell and HPE, it has no PC/notebook, server, or networking businesses. Like Pure, it is a dedicated storage player and is not exposed to those other, struggling markets. Kurian said the hybrid cloud segment strength was “driven by momentum from our newly introduced all-flash products and the go-to-market changes we made at the start of the year.”
He added: “Entering FY24, we laid out a plan to drive better performance in our Storage business and build a more focused approach to our Public Cloud business, while managing the elements within our control in an uncertain macroeconomy to further improve our profitability. These actions have delivered strong results to-date [and] support our raised outlook for the year.”
The results don’t yet extend to the public cloud business, which had a minor revenue lift of just 0.666 percent. Kurian said: “As I outlined last quarter, we are taking action to sharpen our approach to our public cloud business. As a part of this plan, we exited two small services in the quarter. We also began the work of refocusing Cloud Insights and InstaClustr to complement and extend our hybrid cloud storage offerings and integrating some standalone services into the core functionality of Cloud Volumes to widen our competitive moat.”
NetApp is focusing its public cloud efforts on first-party and hyperscaler marketplace storage services. These “are growing rapidly, with the ARR of these services up more than 35 percent year-over-year.”
Kurian says NetApp is increasing its AFA market share. “As customers modernize legacy 10k hard disk drives and hybrid flash environments, we are displacing competitors’ installed bases with our all-flash solutions, driving share gains.”
GenAI hype is helping NetApp, with Kurian saying: “We saw good momentum in AI, with dozens of customer wins in the quarter, including several large Nvidia SuperPOD and BasePOD deployments. We help organizations in use cases that range from unifying their data in modern data lakes to deploying large model training environments, and to operationalize those models into production environments.”
He said NetApp’s “Keystone, our Storage-as-a-Service offering, delivered another strong quarter, with revenue growing triple-digits from Q3 a year ago.”
Guidance for next quarter’s revenues is $1.585 billion to $1.735 billion, 5 percent up on the year-ago Q4 at the midpoint. NetApp is raising its full-year revenue outlook to $6.185 billion to $6.335 billion, a 1.6 percent decrease on last year at the midpoint.
Comment
NetApp is a stable profit-making machine that has not grown its revenues for 11 years, as a chart of revenues by quarter by fiscal year illustrates:
A direct chart of annual revenues shows the same thing:
George Kurian became the CEO in 2015. Since then he has weathered two revenue dips, in 2016-2017 and 2020, but he has not been able to drive NetApp’s revenues markedly higher than they were in 2013. It’s a company whose foray into the public CloudOps business has not yet proved worthwhile, despite around $1.49 billion of acquisitions between 2020 and 2022. NetApp is pulling in just $151 million of public cloud revenues presently, and much of that is due to ONTAP running in the public cloud rather than to the acquisitions.
NetApp is generating cash and it’s not directly threatened by any competitor in the near term.
Strong sequential growth in storage revenues at Dell was not enough to prevent an 11 percent revenue fall for the final fiscal 2024 quarter as the company waits in hope of an AI-driven recovery.
Dell reported $22.3 billion in revenues for the quarter ended February 2, with a $1.2 billion profit, up 91 percent year-on-year. Full year revenues were $88.4 billion, down 13.5 percent from the year before, with a profit of $3.2 billion, 32 percent higher year-on-year. Squeezing more profit from lower revenues is quite the achievement.
Jeff Clarke
Vice chairman and COO Jeff Clarke said in prepared remarks: “In a year where revenue declined, we maintained our focus on operational excellence delivering solid earnings per share and outstanding cash flow. FY24 was one of those years that didn’t go as planned, but I really like how we navigated it. We showed our grit and determination by quickly adapting to a dynamic market, focusing on what we can control, and extending our model into the high growth AI opportunity.”
CFO Yvonne McGill said: “We generated $8.7 billion in cash flow from operations this fiscal year, returning $7 billion to shareholders since Q1 FY23. We’re optimistic about FY25 and are increasing our annual dividend by 20 percent – a testament to our confidence in the business and ability to generate strong cash flow.”
Six successive down quarters with Dell’s Q4 FY24 revenues below the 2019 level
Quarterly financial summary
Gross margin: 23.8 percent vs 23 percent a year ago
Operating cash flow: $1.5 billion
Free cash flow: $1 billion vs $2.3 billion last year; 55 percent lower
Cash, cash equivalents, and restricted cash: $7.5 billion vs $8.9 billion last year
Diluted earnings per share: $1.59
Dell has two main business units – Infrastructure Solutions Group (ISG) and Client Solutions Group (CSG). The larger CSG, with its PCs and laptops, had revenues of $11.7 billion, 12 percent down year-on-year, while ISG, with its servers, storage, and networking, reported $9.3 billion, 6 percent lower annually.
Dell annual revenue history shows FY24 having the lowest revenues since 2018
Servers and networking brought in $4.9 billion, the same as a year ago, while storage was responsible for $4.5 billion, down 10 percent year-on-year but up 16 percent quarter-on-quarter.
Clarke said: “Our strong AI-optimized server momentum continues, with orders increasing nearly 40 percent sequentially and backlog nearly doubling, exiting our fiscal year at $2.9 billion,” implying that the next few quarters’ results should be better.
“We’ve just started to touch the AI opportunities ahead of us, and we believe Dell is uniquely positioned with our broad portfolio to help customers build GenAI solutions that meet performance, cost and security requirements.”
Why the quarterly storage revenue rise? Dell said there was demand strength across the portfolio, more than any expected seasonal improvement. Clarke said on the earnings call: “We had year-over-year demand growth in the unstructured space, ECS, as well as PowerScale. They grew quarter-over-quarter and year-over-year on a demand basis. Those are generally good indicators … around AI file and object, which are the data classes that generally feed the AI engines … Our progress in traditional storage was good too. We were ahead of our normal seasonality … It was down year-over-year, but better than we expected across mid-range, across our data protection products and our high-end storage products.”
The outlook is for growth. Clarke said: “We believe the long-term AI action is on-prem where customers can keep their data and intellectual property safe and secure. PCs will become even more essential as most day-to-day work with AI will be done on the PC. We remain excited about the long-term opportunity in our CSG business.”
He added: “Our storage business will benefit from the exponential growth expected in unstructured data … We think AI moves to the data. More data will be created outside of the data center going forward than inside the data center today. That’s going to happen at the edge of the network. A smart factory, an oil derrick or platform, a deep mine, all variations of this. We believe AI will ultimately get deployed next to where the data is created driven by latency.”
In his view, enterprises will “quickly find that they want to run AI on-prem because they want to control their data. They want to secure their data. It’s their IP and they want to run domain specific and process specific models to get the outcomes they’re looking for.”
Dell thinks an AI-fueled recovery in storage demand will lag stronger server demand by a couple of quarters.
Revenues for Q1 FY25 are expected to be between $21 billion and $22 billion, 3 percent higher annually at the midpoint. But then growth will accelerate. Clarke sees “modest growth in traditional [servers], stronger growth in AI-enabled servers, and an opportunity with storage as the year progresses.”
Full FY25 revenues should be between $91 billion and $95 billion, up 5 percent year-on-year at the midpoint. McGill said: “We expect ISG to grow in the mid-teens fueled by AI, with a return to growth in traditional servers and storage, and our CSG business to grow in the low single digits for the year.”
HPE has changed its quarterly financial reporting structure with storage results disappearing into a hybrid cloud category.
Revenues in its first fiscal 2024 quarter ended January 31 were $6.8 billion, 14 percent lower than the year-ago quarter and also below its guided estimate for $6.9 billion to $7.3 billion. There was a $387 million profit, 23 percent less than last year. The move to GreenLake is causing overall revenue declines as subscription revenue is recognized over time whereas straightforward product sales are recognized when the product ships.
Antonio Neri, HPE president and CEO, said in prepared remarks: “HPE exceeded our profitability expectations and drove near-record year-over-year growth in our recurring revenue in the face of market headwinds, demonstrating the relevance of our strategy.” HPE did not exceed its revenue expectations, however. The annual revenue run rate was $1.4 billion, 42 percent more than a year ago, primarily due to the GreenLake portfolio of services.
HPE revenues have basically flatlined since 2019
HPE’s new organizational structure consists of the following segments: Server; Hybrid Cloud; Intelligent Edge; Financial Services; and Corporate Investments and Other. It has amalgamated previously separate Compute and HPC & AI reporting lines into a single Server category. The new Hybrid Cloud reporting line includes three elements:
The historical Storage segment
HPE GreenLake Flex Solutions (which provides flexible as-a-service IT infrastructure through the HPE GreenLake edge-to-cloud platform and was previously reported under the Compute and the High Performance Computing & Artificial Intelligence (“HPC & AI”) segments)
Private Cloud and Software (previously reported under the Corporate Investments and Other segment)
This means we no longer have a direct insight into the business health of its storage portfolio. HPE says it’s doing this to better reflect the way it operates and measures its business units’ performance. This new hybrid cloud segment is meant to accelerate customer adoption of HPE’s GreenLake hybrid cloud platform.
Financial summary
Gross margin: 36.4 percent, up 2.4 percent year-over-year
Operating cash flow: $64 million
Free cash flow: -$482 million
Cash, cash equivalents, and restricted cash: $3.9 billion vs $2.8 billion last year
Diluted earnings per share: $0.29, 24 percent lower than last year
Capital returns to shareholders: $172 million in dividends and share buybacks
Server segment revenues were $3.4 billion, down 23 percent year-over-year. Hybrid Cloud pulled in $1.3 billion, down 10 percent, with Intelligent Edge reporting $1.2 billion, up 2 percent. Financial services did $873 million, down 2 percent, while Corporate Investments and Other was responsible for $238 million, up just 1 percent.
CFO Marie Myers said on the earnings call: “Demand in Intelligent Edge did soften due to customer digestion of strong product shipments in fiscal year ’23, which is lasting longer than we initially anticipated and is the primary reason Q1 revenue came in below our expectations.” Specifically, “campus switching and Wi-Fi products eased materially, particularly in Europe and Asia.”
Neri’s top-down view was that “overall, Q1 revenue performance did not meet our expectations.” It was “lower than expected in large part because networking demand softened industry-wide and because the timing of several large GPU acceptances shifted. Additionally, we did not have the GPU supply we wanted, curtailing our revenue upside.”
“This quarter is a moment in time and does not at all dampen our confidence in the future ahead of us.”
Myers commented: “Demand for our traditional server and storage products has stabilized,” although “our traditional storage business was down year-over-year on difficult compares, given backlog consumption in Q1 ’23. Total Alletra subscription revenue grew over 100 percent year-over-year and is an illustration of our long-term transition to an as-a-service model across our businesses. We are starting to see AI server demand pull through interest in our file storage portfolio. We are also already seeing some cross-selling benefits of integrating the majority of our HPE GreenLake offering into a single business unit.”
Neri said HPE was “capturing the explosion in demand for AI systems” with orders rising for GPU-enhanced servers, which it calls Accelerator Processing (AP) units. AP orders now represent nearly a quarter of HPE’s entire server orders since fiscal 2023’s Q1. Neri said: “Our pipeline is large and growing across the entire AI life cycle from training to tuning to inferencing.”
Server revenues should grow due to AI demand and better GPU availability, and “hybrid cloud will benefit from continued HPE GreenLake storage demand and the rising productivity of our specialized sales force.”
HPE’s next-quarter outlook is $6.8 billion in revenues, plus or minus $0.2 billion, a 2.5 percent annual decrease at the midpoint. Myers said: “For hybrid cloud, we expect sequential increases through the year as our traditional storage business improves and HPE GreenLake momentum continues. We expect meaningful progress through the year.”