
Nasuni vs NetApp: Low-speed architecture collision


We are all watching a slow-motion collision as datacenter-oriented filers from NetApp and others meet cloud-centric filers from Nasuni and its peers.

The well-established network attached storage (NAS) concept is to have a filer in a datacenter providing external file storage I/O services to users and applications on local and remote servers. The two leading suppliers are NetApp (ONTAP) and Dell (Isilon/PowerScale) with Qumulo (CORE) and others such as VAST Data coming on strong.

For these suppliers the public cloud represents competition. The public cloud can replace on-premises filers but it can also enhance them in a hybrid and multi-public cloud environment. Thus NetApp has ONTAP present in the main public clouds and customers can move applications using ONTAP to, from, and between their datacenters and the public clouds, and find the familiar ONTAP environment in each.

Dell is moving in the same sort of direction with its APEX initiative, as is Qumulo.

The core of these companies is the on-premises filer. The public cloud represents a burst destination for their filer users’ applications and a place for some applications to run, while others – such as data sovereignty-limited, or perhaps mission-critical applications – stay on premises. The cloud can also be a place to offload older, staler data into lower-cost S3-type object stores.

Nasuni’s inversion

Nasuni inverts this model, coming from the sync’n’share file collaboration market. Its core is in the public cloud, with applications in datacenters of all sizes – from central sites to edge locations such as retail outlets – treated as remote access users equipped with edge caches, either physical boxes or virtual machines.

The company’s UniFS file system stores all its file data in S3-type online object storage vaults in the cloud, not offline archive tiers. It uses the edge (filer) caches and its algorithms to provide fast local access.
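As a rough illustration of the model, here is a minimal, hypothetical sketch of an edge-cache read path – check the local cache first and fall back to the S3-type object store on a miss. The class and method names are ours for illustration, not Nasuni’s UniFS or edge appliance API.

```python
# Hypothetical edge-cache read path for a cloud-core filer (illustrative only).
import boto3

class EdgeCache:
    def __init__(self, bucket: str):
        self.s3 = boto3.client("s3")   # S3-type object store holding the authoritative copy
        self.bucket = bucket
        self.cache = {}                # stand-in for the local cache disk: path -> bytes

    def read(self, path: str) -> bytes:
        if path in self.cache:         # cache hit: serve at local LAN speed
            return self.cache[path]
        # cache miss: fetch the object from the cloud store, then keep it locally
        obj = self.s3.get_object(Bucket=self.bucket, Key=path)
        data = obj["Body"].read()
        self.cache[path] = data
        return data
```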

Western Digital is a Nasuni customer that synchronizes and shares large design and manufacturing files between its global sites. Such file sync happens in less than ten minutes, instead of hours and hours.

This cloud-as-core model is also used by CTERA.

Ransomware Protection as a Service

Nasuni is changing from just providing file storage and infrastructure to providing add-on cloud data services, such as file system analytics and Ransomware Protection as a Service (RPaaS). It detects incoming ransomware file patterns, such as specific file extensions, and activity anomalies. 

It will move to stop a ransomware source acting on file data if a customer sets a policy. Chief product officer Russ Kennedy said automated recovery will be added next year. “Nasuni customers have billions of potential recovery points through every file system change being recorded in an immutable way.”

Its software captures all metadata changes – anything up to the root – and puts them in small manifest files.

Nasuni and NetApp

While not as big as NetApp, Nasuni is a substantial startup. We were told by Kennedy at a June IOT Press Tour briefing that it has more than 680 customers and 13,600 edge locations worldwide. Over 10 customers are paying Nasuni $1 million a year and 187 customers are paying more than $300,000/year. Its strategy is to go public.

Nasuni sees itself as having a $5 billion total addressable market (TAM) in cloud file services over 2021/2022, with NetApp and the major cloud providers having equally large TAMs:

Nasuni graphic

In a way Nasuni has parked its cloud object store-backed edge filer cache tanks on NetApp’s on-premises lawn. As has CTERA. How will the on-premises filer suppliers respond?

We may well see the adoption of cloud-based core file storage technology and access to/from remote sites by Dell, NetApp, Qumulo, and their peers as they respond to the market dynamics.

We asked Kennedy about NetApp, Dell, and Qumulo doing this. He said it would take years for them to build a similar cloud-based structure. For example, Nasuni has an orchestration center that handles global file locking. It is a cloud service unique to Nasuni, Kennedy said, that uses DynamoDB and elastic services.

This may be a key difference between Nasuni and the on-premises/hybrid cloud filers. The difficulty inherent in building an equivalent cloud-based infrastructure from scratch is indicated by something Kennedy told us: Nasuni has had talks with on-premises providers about them using its UniFS cloud facilities. They could get cloud-based remote site sync and so forth via UniFS talking to their filers. 

But the talks have not led to this actually happening, nor any other partnering activity. In a way, we have reached a sort of equilibrium. Cloud-based Nasuni has an on-premises filer presence with its edge caches, but these are not full-blown filers. The on-premises filer suppliers – Dell, NetApp, Qumulo, etc. – have and are building a cloud presence, but this is not as capable as the ones Nasuni and CTERA have built.

Unless customers show a significant preference for cloud-based file system technology, both the on-premises and cloud-based filers will grow. They’ll collide, but there will be no outright winner. Because unstructured data is growing and a rising tide lifts all boats.

Storage comes together to fight cancer

Infinidat, Pure Storage, and several other major IT and storage vendors are raising funds for the Tech Tackles Cancer program.

The non-profit initiative will bring vendors together in a “Battle of the Technology Rockstars” on June 21 at The Sinclair music venue in Cambridge, Massachusetts, to raise cash for fighting pediatric cancer.

The initiative was founded by AtScale CEO and technology veteran Christopher Lynch.

The event has a live band karaoke singing competition, and Ken Steinhardt, field CTO at Infinidat, will be taking part. He has had close friends and direct family members affected by cancer, and is running a personal fundraising campaign that will directly benefit this charity.

From left: Chris Lynch, Ken Steinhardt, Nathan Hall, Steve Duplessie, and George Hope

On stage, he will be going up against Nathan Hall, VP of Engineering at Pure Storage, Steve Duplessie, founder and senior analyst of Enterprise Strategy Group/TechTarget, George Hope, worldwide head of partner sales at HPE, and a dozen other “rockstar” executives from the storage industry.

The live audience and spectators watching the performances via LinkedIn Live will be able to vote for their favorite performers. 

Tech Tackles Cancer sponsors

Tech Tackles Cancer, which has been running for six years, is returning to an in-person event after a hiatus of more than two years due to the COVID pandemic. To date it has raised more than $2 million in donations for organizations focused on pediatric cancer treatment and research, including St Baldrick’s and One Mission.  This year, the goal is to raise over $300,000 for pediatric cancer-related causes.

Steinhardt said: “Tech Tackles Cancer is a cause that I have strong affinity for. To do something for such a good cause and mix it with some good rock music; it doesn’t get any better. I am encouraged by how so many tech companies have responded to this cause, saying ‘I’m on board. How can I help?’ The attitude across the industry has been amazing. It’s a beautiful thing when we’re collaborating for the right reasons.”

DiRAC’s Liqid pours GPUs into servers for cosmological research simulation runs

Durham University’s DiRAC supercomputer is getting composable GPU resources to model the evolution of the universe, courtesy of Liqid and its Matrix composability software.

The UK’s Durham University DiRAC (Distributed Research utilising Advanced Computing) supercomputer facility already hosts Spectra Logic tape libraries and Rockport switchless networks, and is now adding composability to the list.

Liqid has put out a case study about selling its Matrix composable system to Durham so researchers can get better utilization from their costly GPUs.

Durham University is part of the UK’s DiRAC infrastructure and houses COSMA8, the eighth iteration of the COSmology MAchine (COSMA) operated by the Institute for Computational Cosmology (ICC) as a UK-wide facility. Specifically, Durham provides researchers with large memory configurations needed for running simulations of the universe from the Big Bang onwards, 14 billion years ago. Such simulations of dark matter, dark energy, black holes, galaxies and other structures in the Universe can take months to run on COSMA8. Then there can be long periods of analysis of the results.

Durham University’s DiRAC supercomputer building.

More powerful, exascale supercomputers are needed. The UK government’s ExCALIBUR (Exascale Computing Algorithms and Infrastructures Benefitting UK Research) programme has provided £45.7 million ($55.8 million) of funding to investigate new hardware components with potential relevance for exascale systems. Enter Liqid.

The pattern of compute work at Durham needs GPUs. If these are physically tied to specific client computers they can be dedicated to particular jobs, and then stand idle when new jobs are started on other servers with fewer GPUs.

The utilization level of the expensive GPUs can be low, and the problem of multiple, large, partially overlapping jobs will get worse at exascale. More GPUs will be needed and their utilization will remain low, driving up power and cooling costs. At least that is the fear.

Liqid’s idea is that GPUs and other server components, such as CPUs, FPGAs, accelerators, PCIe-connected storage, Optane memory, network switches and, in the future, DRAM with CXL, can all be virtually pooled. Then server configurations, optimized for specific applications, can be dynamically set up by software pulling out resources from the pools and returning them for re-use when a job is complete. This will drive up individual resource utilization.
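A minimal sketch of that compose-and-release lifecycle, assuming a simple hypothetical GPU pool object – this illustrates the pattern, not Liqid’s actual Matrix API:

```python
# Illustrative compose/release lifecycle for a pooled GPU resource (not Liqid's API).
from dataclasses import dataclass, field

@dataclass
class GPUPool:
    free: list = field(default_factory=lambda: [f"gpu{i}" for i in range(8)])

    def compose(self, server: str, count: int) -> list:
        """Pull GPUs from the pool and logically attach them to a server for a job."""
        if count > len(self.free):
            raise RuntimeError("not enough free GPUs in the pool")
        allocated, self.free = self.free[:count], self.free[count:]
        print(f"attached {allocated} to {server}")
        return allocated

    def release(self, gpus: list) -> None:
        """Return GPUs to the pool once the job completes, ready for re-use."""
        self.free.extend(gpus)

pool = GPUPool()
job_gpus = pool.compose("fat-compute-node-01", 2)   # compose a 2-GPU configuration
# ... run the GPU job ...
pool.release(job_gpus)                              # resources go back to the pool
```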

Alastair Basden, technical manager for the DiRAC Memory Intensive Service at Durham University, was introduced to the concept of composable infrastructure by Dell representatives. Basden is composing NVIDIA A100 GPUs to servers with Liqid’s composable disaggregated infrastructure (CDI) software. This enables researchers to request and receive precise GPU quantities. 

Basden said: “Most of our simulations don’t use GPUs. It would be wasteful for us to populate all our nodes with GPUs. Instead, we have some fat compute nodes and a login node, and we’re able to move GPUs between those systems. Composing our GPUs gives us flexibility with a smaller number of GPUs. We can individually populate the servers with one or more GPUs as required at the click of a button.” 

Durham University’s Liqid composable system diagram

“Composing our GPUs can improve utilisation, because now we don’t have a bunch of GPUs sitting idle,” he added.

Basden noted an increase in the number of researchers exploring artificial intelligence and expects their workloads to need more GPUs, which Liqid’s system will support adding: “When the demand for GPUs grows, we have the infrastructure in place to support more acceleration in a very flexible configuration.”

He is interested in the notion of DRAM pooling and the amounts of DRAM accessed over CXL (PCIe gen 5) links. Each of the main COSMA8 360 compute nodes at Durham is configured with 1TB of RAM. Basden said: “RAM is very expensive – about half the cost of our system. Some of our jobs don’t use RAM very effectively – one simulation might need a lot of memory for a short period of time, while another simulation doesn’t require much RAM. The idea of composing RAM is very attractive; our workloads could grab memory as needed.” 

“When we’re doing these large simulations, some areas of the universe are more or less dense depending on how matter has collapsed. The very dense areas require more RAM to process. Composability lets resources be allocated to different workloads during these times and to share memory between the nodes. As we format the simulation and come to areas that require more RAM, we wouldn’t have to physically shift things around to process that portion of the simulation.” 

Liqid thinks all multi-server data centres should employ composability. If you have one with low utilization rates (say 20 percent or so) for PCIe or Ethernet or InfiniBand-connected components like GPUs, it says you should consider giving it a try.

Storage news ticker – 10 June


Backblaze says it is the first independent cloud provider to offer a cloud replication service that helps businesses apply the principles of 3-2-1 data protection within a cloud-first or cloud-dominant infrastructure. Data can be replicated to multiple regions and/or multiple buckets in the same region, protecting it against disasters and political instability and supporting business continuity and compliance. Cloud Replication is generally available now and easy to use: Backblaze B2 Cloud Storage customers can click the Cloud Replication button within their account, set up a replication rule, and confirm it. Rules can be deleted at any time as needs evolve.

William Blair analyst Jason Ader told subscribers about a session with Backblaze’s CEO and co-founder, Gleb Budman, and CFO, Frank Patchel, at William Blair’s 42nd Annual Growth Stock Conference. Management sees a strong growth runway for its B2 Cloud business (roughly 36% of revenue today) given the immense size of the midmarket cloud storage opportunity (roughly $55 billion, expanding at a mid-20% CAGR). Despite coming off a quarter where it cited slower-than-expected customer data growth in B2 (attributed to SMB macro headwinds), management expects that a variety of post-IPO product enhancements (e.g., B2 Reserve purchasing option, Universal Data Migration service, cloud replication, and partner API) and go-to-market investments (e.g., outbound sales, digital marketing, partnerships and alliances) will begin to bear fruit in coming quarters.

DPaaS supplier Cobalt Iron has received U.S. patent 11308209 on its techniques for optimization of backup infrastructure and operations for health remediation by Cobalt Iron Compass. It will automatically restore the health of backup operations when they are affected by various failures and conditions. The techniques disclosed in this patent (see the sketch after this list):

  1. Determine the interdependencies between various hardware and software components of a backup environment.
  2. Monitor for conditions in local or remote sites that could affect local backups. These conditions include:
    • Indications of a cyberattack
    • Security alert conditions
    • Environmental conditions including severe weather, fires, or floods
  3. Automatically reprioritize backup operations to avoid or remediate impacts from the conditions (e.g., discontinue current backup operations or redirect backups to another site not impacted by the condition).
  4. Dynamically reconfigure the backup architecture to direct backup data to a different target storage repository in a different remote site, or in a cloud target storage repository, that is unaffected by the condition.
  5. Automatically extend retention periods for backup data and backup media based on the conditions.
  6. Dynamically restrict or remove access to or disconnect from the target storage repository after backup operations complete.
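A minimal sketch of the redirect-and-extend-retention behaviour described in points 2 to 5, with hypothetical site names, conditions, and thresholds – this is our illustration of the idea, not Cobalt Iron’s implementation:

```python
# Condition-driven backup redirection sketch (hypothetical names and policy).
CONDITIONS = {"cyberattack", "security_alert", "severe_weather", "fire", "flood"}

def plan_backup(job: dict, site_conditions: dict) -> dict:
    """Pick a target site for a backup job, avoiding sites affected by a condition."""
    primary = job["primary_target"]
    if not (CONDITIONS & site_conditions.get(primary, set())):
        return {**job, "target": primary}                     # primary site is healthy
    # Primary impacted: redirect to an unaffected site and extend retention.
    for site in job["alternate_targets"]:
        if not site_conditions.get(site):
            return {**job, "target": site,
                    "retention_days": job["retention_days"] * 2}
    return {**job, "target": None, "action": "pause"}         # nowhere safe: pause

job = {"name": "nightly-db", "primary_target": "site-a",
       "alternate_targets": ["site-b", "cloud-1"], "retention_days": 30}
print(plan_backup(job, {"site-a": {"severe_weather"}}))       # redirected to site-b
```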

Cohesity is playing the survey marketing game. The data protection and management company, which is transitioning to as-a-service delivery, commissioned research that looked at UK-specific data from an April 2022 survey of more than 2,000 IT decision-makers and Security Operations (SecOps) professionals in the United States, the United Kingdom and Australia. Nearly three-quarters of respondents (72%) in the UK believe the threat of ransomware in their industry has increased over the last year, with more than half of respondents (51%) saying their organisation has been the victim of a ransomware attack in the last six months. So … naturally … buy Cohesity products to defeat ransomware.

Commvault says its Metallic SaaS backup offering has grown from $1 million to $50 million annual recurring revenue (ARR) in six quarters. It has more than 2,000 customers, with availability in more than 30 countries around the globe. The Metallic Cloud Storage Service (MCSS), which is used for ransomware recovery, is getting a new name: Metallic Recovery Reserve. Following Commvault’s acquisition of TrapX in February, it’s launching an early access programme for ThreatWise this week. ThreatWise is a warning system to help companies spot cyberattacks and enable a response, helped by tools for recoverability.

The STAC benchmark council says Hitachi Vantara has joined it. Hitachi Vantara provides the financial services industry with high-performance parallel storage, including object, block, and distributed file systems, along with services that specialize in storage I/O, throughput, latency, data protection, and scalability.

Kioxia has certified Dell PowerEdge R6525 rack servers for use with its KumoScale software. Certified KumoScale software-ready system configurations are available through Dell’s distributor, Arrow Electronics. Configurations offered include single and dual AMD EPYC processors and Kioxia CM6 NVMe SSDs, and have capacities of up to 153TB per node. KumoScale is Kioxia’s software to virtualise and manage boxes of block-access, disaggregated NVMe SSDs. “Kumo” is a noun meaning ‘cloud’ in Japanese.

A William Blair analyst reported to subscribers about a fireside chat with Nutanix CFO Rukmini Sivaraman and VP of Investor Relations Richard Valera at William Blair’s 42nd Annual Growth Stock Conference. Management still has high conviction in the staying power of hybrid cloud infrastructure, and Nutanix’s growing renewal base will provide it with a predictable, recurring base of growth as well as greater operating leverage over time. Management also noted that beyond Nutanix’s core hyperconverged infrastructure (HCI) products, Nutanix Cloud Clusters is serving as a bridge for customers looking to efficiently migrate to the cloud.

Pavilion Data Systems, describing itself as the leading data analytics acceleration platform provider and a pioneer of NVMe-Over-Fabrics (NVMe-oF), said that EngineRoom is using Pavilion’s NVMe-oF tech for an Australian high-performance computing cloud. Pavilion replaced two full racks from a NAS supplier with its HyperOS and its 4 rack-unit HyperParallel Flash Array. It says EngineRoom saved significant data center space by using Pavilion storage to improve HPCaaS analytics. The data scientists at EngineRoom deployed Pavilion with the GPU clusters in order to push the boundaries of possibility to develop and simulate medical breakthroughs, increase crop yields, calculate re-entry points for spacecraft, identify dark matter, predict credit defaults, render blockbuster VFX, and give machine vision to robotics and UAVs.

Stefan Gillard, Founder and CEO of EngineRoom, said: “We need NVMe-oF, GPU acceleration, and the fastest HPC stack. Pavilion is a hidden gem in the data storage landscape.”

Phison Electronics Corp. unveiled two new PCIe Gen4x4 Enterprise-Class SSDs: the EPR3750 in M.2 2280 form factor and the EPR3760 in M.2 22110 form factor. Phison says they are ideal for use as boot drives in workstations, servers, and in Network Attached Storage (NAS) and RAID environments. The EPR3750 SSD is shipping to customers as of May 2022, and the EPR3760 SSD will be shipping in the second half of 2022.

Qumulo has introduced a petabyte-scale archive offering that enhances Cloud Q as a Service on Azure. The new serverless storage technology, which Qumulo calls the only petabyte-scale multi-protocol file system on Azure, enables a new “Standard” offering with a fixed 1.7GB/sec of throughput and good economics at scale. Qumulo says its patented serverless storage technology creates efficiency improvements resulting in cloud file storage that is 44% lower cost than competitive file storage offerings on Azure. Qumulo CEO Bill Richter said: “The future of petabyte-scale data is multi-cloud. … Qumulo offers a single, consistent experience across any environment from on-premises to multi-cloud, giving our customers the tools they need to manage petabyte-scale data anywhere they need to.” Qumulo recently unveiled its Cloud Now program, which provides a no-cost, low-risk way for customers to build proofs of concept up to one petabyte.

SK hynix is mass-producing its HBM3 memory and supplying it to Nvidia for use with its H100, the world’s largest and most powerful accelerator. Systems are expected to ship starting in the third quarter of this year. HBM3 DRAM is the 4th generation HBM product, succeeding HBM (1st generation), HBM2 (2nd generation) and HBM2E (3rd generation). SK hynix’s HBM3 is expected to enhance accelerated computing performance with up to 819GB/sec of memory bandwidth, equivalent to the transmission of 163 FHD (Full-HD) movies (5GB standard) every second.
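The movies-per-second comparison is simple arithmetic on the quoted bandwidth:

```python
# 819 GB/sec of HBM3 bandwidth divided by a 5GB Full-HD movie.
bandwidth_gb_per_sec = 819
movie_size_gb = 5
print(bandwidth_gb_per_sec / movie_size_gb)   # 163.8 movies per second
```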

Cloud data warehouser Snowflake announced a new cybersecurity workload that enables cybersecurity teams to better protect their enterprises with the Data Cloud. Customers gain access to Snowflake’s platform to natively handle structured, semi-structured, and unstructured logs. They can store years of high-volume data, search with scalable on-demand compute resources, and gain insights using universal languages like SQL and Python, currently in private preview. Customers already using the new workload include CSAA Insurance Group, DoorDash, Dropbox, Figma, and TripActions.

SoftIron says Earth Capital, its funding VC, made a quite big mistake when calculating how much carbon dioxide emission is saved by using SoftIron hardware and software. The original Earth Capital report stated that for every 10PB of data storage shipped by SoftIron, an estimated 6,656 tonnes of CO2e is saved through reduced energy consumption alone. The actual saving for a 10PB cluster is 292 tonnes – about 23 times less, and roughly the same weight as a Boeing 747 – still an impressive number when put into full context. Kudos to SoftIron for ‘fessing up to this.

VAST Data CMO and co-founder Jeff Denworth has blogged about the new Pure Storage FlashBlade//S array and Evergreen services, taking the stance of an upstart competing with an established vendor. Read “Much ado about nothing” to find out more.

Distributed data cloud warehouser Yellowbrick Data has been accepted into Intel’s Disruptor Initiative. Intel and Yellowbrick will help organizations solve analytics challenges through large-scale data warehouses running in hybrid cloud environments. The two are testing Intel-based instances on Yellowbrick workloads across various cloud scenarios to deliver optimal performance. Yellowbrick customers and partners will benefit from current and future Intel Xeon Scalable processors and software leveraging built-in accelerators, optimized libraries, and software that boost complex analytics workloads. Yellowbrick uses Intel technology from the CPU through the network stack. Intel and Yellowbrick are expanding joint engineering efforts that will accelerate performance.

Backup there – HYCU gets $53 million B-round funding

HYCU has pulled in a $53 million B-round, giving it yet more cash for its SaaS-based data protection offerings.

The round saw participation from all the A-round investors, including big names Bain Capital and Acrew Capital. Atlassian Ventures and Cisco Capital joined in as strategic investors. The cash will help to pay for several go-to-market initiatives including bringing to market a developer-led SaaS service and other offerings. There will be new positions in alliances, product marketing and customer success.

Simon Taylor.

CEO Simon Taylor said “Adding strategic investment from Atlassian Ventures and Cisco Investments, along with the ongoing support from Bain Capital and Acrew is a testament to what the team has developed and is delivering to customers worldwide.” He continued, “HYCU fundamentally believes there is a better way to solve data protection needs, and we are on track to deliver a profoundly simple and powerful solution before the end of the year.” Sounds interesting.

We hear that Taylor and his execs weren’t looking for new funding but, once Acrew Capital approached them this year, the interest from other investors became significant.

This B-round comes 15 months after HYCU raised $87.5 million in its A-round and takes total funding to $140.5 million.

HYCU’s roots go back a long way, having been spun out of Comtrade Software in 2018, and Comtrade has a history going back to 2016 and earlier. HYCU is growing fast. It enjoyed year-on-year bookings growth of 150 percent in 2021 and had a successful first 2022 quarter close, with projections to achieve the same growth rate in 2022.

HYCU tripled revenue in the past 12 months, held a 135 percent net-retention rate, maintained a 91 net promoter score (NPS) – which it claims is the highest among data protection companies – and saw a 4x increase in valuation in the last year.

Matt Sonefeldt, head of investor relations at Atlassian Ventures, said “We’re excited to welcome HYCU to the Atlassian Ventures family and believe its approach to data protection as a service creates immense potential for our 200,000+ cloud customers.” That’s a nice potential selling opportunity for HYCU.

HYCU is now well-funded to weather any downturn and well-positioned with regard to the two great forces transforming the data protection industry: the move of on-premises backup software to SaaS offerings, and the whole cyber resiliency/anti-ransomware movement. HYCU is on top of both trends, and virtually every other vendor is moving the same way. For example, Veritas is making NetBackup a SaaS-based product. Cohesity is moving its product set to services. Commvault has its Metallic SaaS offering. Veeam is on the SaaS train. This is a rising tide lifting all boats.

And all vendors now regard ransomware countermeasures as table stakes. Anyone not adopting them is going to get left behind.

Intel planning big Lightbits NVMe/TCP storage push

An IT Press Tour briefing by Lightbits Labs in San Jose told us good things about Lightbits’s technology, but a great unseen presence became more and more visible as the pitch progressed. Several clues, when combined, indicate Intel is going to make a strong push for its partners to sell Lightbits software-powered NVMe/TCP-accessed storage servers full of Intel hardware.

Let’s itemise the various clues and then spin up the coherent picture we see being made from these pointers.

First, Intel is funding Lightbits. Intel Capital was the sole contributor to a September 2020 funding round according to Crunchbase, with the amount kept secret. At the time we wrote: “Lightbits Labs is working with Intel to make its NVMe/TCP all-flash arrays almost as fast as RoCE and InfiniBand options, which require much more expensive cabling.”

Lightbits supplies software-defined multi-tenant block storage from clusters of x86 servers fitted with NVMe SSDs and accessed across standard Ethernet using the NVMe/TCP protocol. This has a longer latency than NVMe over RoCE which needs more costly lossless Ethernet. Its software can extend QLC SSD endurance up to 20x which helps make flash data storage more affordable compared to disk.
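Because NVMe/TCP is a standard Linux host protocol, attaching to an NVMe/TCP target generally uses the stock nvme-cli tooling over plain Ethernet. Here is a generic sketch driven from Python – the address, port, and NQN are placeholders, and this shows standard NVMe/TCP usage rather than Lightbits-specific configuration:

```python
# Generic NVMe/TCP host attach using standard nvme-cli, driven from Python.
# The address, port and NQN below are placeholders, not a real Lightbits target.
import subprocess

TARGET_ADDR = "192.0.2.10"                           # documentation-range example IP
TARGET_PORT = "4420"                                 # conventional NVMe/TCP port
SUBSYS_NQN = "nqn.2014-08.example:block-volume-1"    # placeholder subsystem NQN

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Discover the subsystems exported by the target, then connect to one of them.
run(["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT])
run(["nvme", "connect", "-t", "tcp", "-n", SUBSYS_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT])
# The remote volume then appears as a local block device, e.g. /dev/nvme1n1.
```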

Second, an Intel exec, Gary McCulley, presented at our briefing. He is the director of the Data Platform Group in Intel’s Data Storage Technology Business. It was nice to hear from him, but Lightbits execs could have told us about Intel and Lightbits working together.

McCulley showed a picture of a server with Lightbits’s software utilising various Intel componentry:

  • Gen 3 Xeon SP processors (Ice Lake);
  • Optane PMem 200 series memory modules;
  • 800 Series Ethernet NIC.

There was a slide for each of these followed by a summary slide saying Lightbits with Intel technology delivers “Hyperscale Storage for all.” It’s certainly good for Intel to be supporting a startup using its kit, but why go this extra mile?

Kam Eshghi, Lightbits’s chief strategy officer, told me that Intel has 50 employees, many of them engineers, working full time on the Lightbits partnership. That made me sit up. It’s a lot of people – working on what exactly? Making sure Intel components and Lightbits software work well together, we suppose. But again, why the extra mile for this particular Intel partner?

Third, Intel’s IPU (its Infrastructure Processing Unit, a DPU-class device that accelerates east-west traffic in a datacenter) can be used by clients accessing a Lightbits storage cluster. They use an extended CSI plug-in.

Fourth, Lightbits has announced TCO Calculator and Configurator tools developed in collaboration with Intel. They provide Cloud Service Providers (CSPs), Financial Services, and Telco organizations with an intuitive way of determining the value of the Lightbits Cloud Data Platform. The tools highlight the TCO savings that can be made by using Lightbits software with Intel hardware.

Fifth, Lightbits has recruited two channel sales execs. In April it appointed Charla Bunton-Johnson as VP of Global Alliances and Channel. We were then told a key focus would be collaboration with Intel with reference to Xeon scalable processors, Optane Persistent memory (PMem), and Ethernet 800 Series Network Adapters.  

Andrew Engledow joined Lightbits in May as EMEA sales director. He said Intel had been instrumental in helping UK telco BT buy a Lightbits storage system.

Sixth, we learnt that the Lightbits partnership with Intel includes Intel’s sales organisation.

These six pointers are enough to get us asking: Why is Intel making such a big deal of this Lightbits technology? Is it planning on building and selling (storage) servers to large scale buyers, like telcos such as BT – or enabling server and storage hardware vendors to do this?

We think it’s the latter. Intel will use its hardware partners in the server and storage area to sell Lightbits storage clusters using Intel Xeon, Optane and Ethernet NICs, with, possibly, Solidigm SSDs, and also Intel’s IPU, to large-scale buyers (but not necessarily hyperscale) in the CSP, financial services and telco markets.

That’s our prediction. It gives Intel an answer to Fungible (DPUs, NVMe/TCP, storage servers), and to other DPU vendors. And it could enable Intel’s partners to shift a lot of Intel-powered boxes, netting Lightbits substantial revenue growth.

Quantum’s slow growth needs rights issue to shore up balance sheet and pay off debt


Quantum’s final fiscal 2022 quarter and full year results, hit by supply chain issues, saw its cash balance drop, necessitating a rights issue to strengthen the balance sheet and pay off some debt.

In the quarter ended March 31, this supplier of tape, object, video surveillance, HPC secondary storage  and media workflow management software recorded revenues up 3 percent year on year at $92.5 million, with a loss of $7.8 million, better than the year-ago $17.5 million loss. Full year revenues were $372.8 million, up 6.7 percent annually, with a $32.3 million loss, slightly improved on the year-ago $35.5 million loss.

Jamie Lerner.

Quantum CEO and chairman Jamie Lerner said: “We made progressive improvements throughout the year to strengthen our business, highlighted by revenue for the fourth quarter exceeding our preliminary results. Additionally, we continue to gain increasing momentum transitioning new and existing customers to our software subscription model as demonstrated by $7.4 million of subscription software Annual Recurring Revenue (ARR) and $160.5 million in high-value recurring revenue exiting the year.”

The growth, restricted by supply chain problems, was not enough and was overshadowed by a money shortfall. Cash and cash equivalents including restricted cash was $5.5 million as of March 31, 2022, compared to $33.1 million a year previously. Quantum announced a $67.5 million rights offering on April 26, and said it had been over-subscribed.

There was broad participation for the 30 million shares on offer from outside investors with all eligible directors and officers participating. Lerner said at the time: “The proceeds enable us to strengthen our balance sheet, reducing our debt by $20 million and adding over $45 million to our cash position. Additionally, with the cooperation of our supporting banks, we have reset all debt covenants to more favorable levels while increasing our revolving line of credit by $10 million to $40 million.”

The cash was also needed for working capital and other general corporate purposes. CFO Mike Dodson said: “A stronger balance sheet allows us to better weather the ongoing supply chain headwinds broadly impacting companies across our industry.”

Quantum is looking at supply chain strengthening and a cost reduction program, with Lerner saying: “With backlog at near record levels, we have aggressively implemented supply chain strategies that will help us increase our revenues and margins. We are also implementing a series of cost reduction programs that are focused on reduced spending and continued integration efforts related to the recent acquisitions, which we expect will decrease our current operating expense run-rate by $1.5 to $2.0 million per quarter by the second half of fiscal 2023.”

Quantum set the revenue bar expectation for the next quarter at $94 million, plus or minus $3 million. That represents growth of 5.5 percent from the year-ago quarter’s $89.1 million. It’s a long and hard haul for Quantum.

Western Digital open to splitting disk and NAND/SSD businesses

Investors’ reactions to Elliott Management taking a stake in Western Digital and Elliott’s public letter advising a separation of WD’s Flash and disk drive businesses have prompted its board to examine this and other strategic options for WD.

WD has two divisions: one making hard disk drives (HDD) and one making NAND chips in a joint venture with Kioxia and then building SSDs using these chips. Activist investor Elliott says WD’s share price is lower than it should be because WD can’t effectively manage these two divisions. If they were divided into two separate publicly-quoted businesses then their combined share price would be much higher than WD’s current stock price.

David Goeckeler.

A WD announcement says the board is setting up an executive committee, chaired by CEO David Goeckeler, to examine and review strategic alternatives for WD to see which is best for long-term shareholder value. It will oversee an “assessment process and fully evaluate a comprehensive range of alternatives, including options for separating its … Flash and HDD franchises.”

WD’s release mentions having constructive dialogues with Elliott and other shareholders. A statement by Goeckeler explained: “The Board is aligned in the belief that maximizing value creation warrants a comprehensive assessment of strategic alternatives focused on structural options for the company’s Flash and HDD businesses. Through this process, we are actively engaging in a broad range of strategic and financial alternatives that will help further optimize the value of Western Digital, including Elliott’s offer to invest incremental equity capital in our Flash Business. We look forward to continuing our constructive dialogue with Elliott as this process unfolds.”

An Elliott statement, attributed to Managing Partner Jesse Cohn and Senior Portfolio Manager Jason Genrich, said: “We’re encouraged by the positive direction of our discussions so far, and by Western Digital’s openness to considering a full separation of its Flash business. We are pleased that Western Digital’s Board is conducting this review, and Elliott is prepared to provide strategic resources and additional capital to help the company realize the full value of both of its businesses.”

WD and Elliott have signed a nondisclosure agreement, which implies that Elliott is getting a closer look at WD’s financial situation. The two sides have named legal and financial advisors – a further sign that a separation between WD’s two businesses is actually on the table.

We think WD partners and customers will likely have little preference, from a product and technology perspective, over whether WD keeps the two divisions together or separates them. OptiNAND technology could still be used in HDDs through a partnering deal if the HDD business were separated from flash.

Many may well view Elliott as a pirate-like financial raider only concerned with making a large profit from its own holding in WD with no long-term interest in the firm’s fortunes. Such financial shenanigans can strike people as reprehensible. Actual WD stockholders may well take a different view, encouraged by Elliott-caused restructuring at Commvault. WD’s board thinks the number of them is enough to justify responding to Elliott as it has.

Pure extends FlashBlade with more capacity and faster compute

Pure Storage has launched an updated FlashBlade unified file and object array, the //S, with Purity//FB v4.0 software and double the density, performance and power-efficiency of the current FlashBlade.

FlashBlade//S, with disaggregated and modular compute and storage design features, and Purity//FB 4.0 software enable storage, compute, and networking elements to be upgraded flexibly and non-disruptively.

Pure’s Matt Burr, GM for FlashBlade, said FlashBlade//S “is not only the last scale-out platform organizations will ever need but also the right choice for meeting environment and sustainability ambitions, which are increasingly important to customers.”

FlashBlade//S has 105 percent more usable TB per rack unit than FlashBlade, requires 48 percent less power, and needs 28 percent less cooling.

Hardware

FlashBlade//S comes in a 25 percent larger chassis than FlashBlade – 5RU instead of 4 – and contains between 7 and 10 storage blades each populated with an IceLake SP-based controller board. One to four Direct Flash Modules (DFMs) – Pure’s in-house designed flash cards – plug into each storage blade, with a minimum of one per blade. Each DFM comes with 24 or 48TB of raw QLC (4bits/cell) capacity, the same flash as is used in the FlashArray//C. The current FlashBlade uses TLC (3bits/cell) NAND. 

A storage blade has DDR4 memory DIMMs, PCIe 4 connections to the DFMs, and a 100GbitE data plane.

Storage blade in display case, with two DFMs on the left and the serrated heat sink for the NICs on the right.

A FlashBlade//S chassis can hold up to 40 DFMs – a maximum of 1.92PB raw capacity, or 4PB at 2:1 compression. Put eight of these chassis in a rack and you have 16PB of raw capacity. Add in 2:1 data reduction and we’re looking at up to 32PB/rack. The Purity//FB OS supports this and more, and can handle billions of files and objects. Burr said FlashBlade//S is fast enough to enable object storage to be used for primary data storage as well as files.
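Those chassis and rack figures follow directly from the DFM sizes, with the headline numbers rounded up slightly:

```python
# Working through the quoted capacity figures.
dfm_tb = 48                                           # largest DFM capacity
dfms_per_chassis = 40                                 # 10 blades x 4 DFMs
raw_per_chassis_pb = dfm_tb * dfms_per_chassis / 1000 # 1.92 PB raw
effective_per_chassis_pb = raw_per_chassis_pb * 2     # ~4 PB at 2:1 data reduction
chassis_per_rack = 8
print(raw_per_chassis_pb * chassis_per_rack)          # 15.36 PB raw/rack ("16PB")
print(effective_per_chassis_pb * chassis_per_rack)    # 30.72 PB effective/rack ("32PB")
```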

The chassis has integrated 100GbitE networking and is 800GbitE-ready. It has a mid-plane, into which the storage blades plug, and that is 400GbitE-ready.

FlashBlade//S bezel.

The current FlashBlade can have from 7 to 15x 8TB or 52TB blades in its chassis – a max of 780TB raw, and 1.6PB effective at 2:1 compression. It scales out to 10 chassis, linked by external fabric modules (XFMs), 150 blades in total. The system supports up to 15GB/sec bandwidth with 15 blades and a chassis has up to 16x 100GbitE ports.

A Pure spokesperson said: “Today, the largest FlashBlade system is 10-chassis, of 15 blades each (150 blades), with 52TB blades.  With FlashBlade//S, we are targeting the same 10-chassis cluster size (now connected by upgraded external fabric modules, or XFMs).  Remember though that each FlashBlade//S chassis is now far denser and more performant than the current FlashBlade chassis (roughly 2.5-3x as performant and dense), while requiring fewer compute blades (10 vs. 15).  Additionally, with FlashBlade//S standardized on DirectFlash Modules (DFMs), the robust roadmap for DFMs will continue to add scale and capacity through density improvements.

“Net/Net, we expect the large 10-chassis clusters to reach roughly 2-3x the total capacity (~20-30PB depending on data reduction), and offer 2-3x the throughput, and grow from there.”

FlashBlade//S storage blade controllers are stateless, like VAST Data’s controllers (compute nodes). Unlike VAST Data’s hardware FlashBlade//S has no storage-class memory cache for metadata and incoming writes. “No SCM crutch” as a Pure briefing slide said, possibly in a poke at VAST Data.

A FlashBlade//S chassis scales up from 7x DFMs, one per blade, to 40 – four on each of the 10 blades. Then FlashBlade//S can scale out. If a blade fails or needs updating to the next generation its existing DFMs can be plugged into the new blade – a data-in-place upgrade effectively.

There are two FlashBlade//S models. The S200 is a capacity-optimized product, with DFM-loaded storage blades, while the S500 is a performance-optimised one, with lightly loaded storage blades. Both extend the performance and capacity envelope of the existing FlashBlade product.

FlashBlade//S use cases include AI and ML, high volume analytics, rapid restore for backups and ransomware recovery, manufacturing, high-performance simulation and EDA, medical and enterprise imaging, and genomics. Pure says its new FlashBlade has unmatched storage efficiency compared to any other scale-out storage system on the market.

The tide of required environmental, social, and governance (ESG) reporting is rising higher and higher. FlashBlade//S uses less power per terabyte – 1.3 watts/TB compared to the existing FlashBlade’s 2.3 watts/TB. At PB scale that’s a lot of saved power. It means the new FlashBlade can be part of a good ESG report for customers, helping them show that they are working to reduce their datacentre power consumption.
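As a rough, illustrative calculation of what that watts-per-terabyte difference means at petabyte scale:

```python
# Power saved per petabyte from the quoted watts-per-TB figures (illustrative).
old_w_per_tb, new_w_per_tb = 2.3, 1.3
tb_per_pb = 1000
watts_saved_per_pb = (old_w_per_tb - new_w_per_tb) * tb_per_pb    # 1,000 W per PB
kwh_saved_per_pb_per_year = watts_saved_per_pb * 24 * 365 / 1000  # ~8,760 kWh/PB/year
print(watts_saved_per_pb, kwh_saved_per_pb_per_year)              # before cooling savings
```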

Pure is building hardware commonality into its FlashBlade and FlashArray products with, for example, common power supplies and common DFMs. In the future we can envisage customers and Pure optimising fleets of FlashBlade and FlashArray systems by moving storage components – DFMs initially and compute (controller boards) eventually – between them to optimize performance and efficiency across the fleet. This will avoid stranded capacity and compute resources. System modelling based on fleet telemetry and AI will automate the production of fleet optimization configurations and specify which components need to be moved where.

FlashBlade//S is available to order now.

Pure Storage reworks Evergreen subscription service


Pure Storage has reworked its Evergreen subscription to provide a more coherent set of services and optimizations that can extend across a customer’s entire fleet of Pure arrays.

There are now three Evergreen offerings: Evergreen//Forever, the basic customer-owned arrays offering; Evergreen//One, which provides consumption-based use of Pure’s arrays; and Evergreen//Flex, in which customers own arrays and get added fleet-wide performance and efficiency services.

Prakash Darji, GM of Pure’s  Digital Experience Business Unit, said: “The growth of our Evergreen portfolio is a testament to Pure’s commitment to uncomplicating data storage … Further distinguished with the launch of Evergreen//Flex, Pure offers the broadest procurement flexibility and choice across the storage industry today.”

Evergreen shoots

Evergreen, launched in 2015, is a software subscription contract which includes hardware and software upgrades. Since launch, the program has delivered over 10,000 non-disruptive controller upgrades to more than 3,000 customers.

Evergreen Gold, the standard subscription, delivers the full set of capabilities across software and hardware, with so-called white-glove support and maintenance. Customers get the benefit of a SaaS-like model, but for on-premises storage.

Evergreen-as-a-Service was launched in 2018 as a hybrid cloud pay-per-use offering that supports storage-as-a-service for block, file, and object workloads. It delivers storage capacity on demand, backed by Pure’s hardware, software, and support, on a $/GB basis for terms starting from 12 months.

In September 2019, Pure renamed Evergreen-as-a-Service to Pure-as-a-Service and said it would make all Pure products available on subscription. Its popularity grew, and Pure’s subscription services made up 33 percent of total revenue, exceeding $738 million and representing 37 percent year-over-year growth in fiscal 2022. Pure-as-a-Service is now being renamed for branding consistency.

Three new Evergreens

Evergreen//Forever – the renamed Evergreen Gold, which offers traditional appliance ownership with a subscription to software and in-place hardware updates at regular intervals to modernize media and blades. There is no downtime and no media migration.

Evergreen//One – the renamed Pure-as-a-Service is a consumption-based service model for storage, with proactive monitoring and non-disruptive upgrades and performance and usage SLAs.

Evergreen//Flex – a new fleet-level hybrid Evergreen scheme which unlocks stranded storage capacity and moves it to where it is needed most. It applies to the full Pure portfolio and provides non-disruptive upgrades and capacity-on-demand availability. The customer owns the hardware and pays Pure for the capacity they use. There is a fleet-level subscription with site-level reserve capacity. Sets of Pure’s Direct Flash Modules (DFMs) in Data Packs can be moved across the fleet to optimize utilization. Evergreen//Flex takes Evergreen//Forever beyond the box, and Pure says it is the most efficient way to run a fleet of its storage, enabled by an asset utilization model.

Pure Evergreen briefing slide

Evergreen//Flex is somewhat similar to Zadara’s managed storage service, which now includes compute (servers) and has become a full stack offering. Zadara focuses on locations external to the customer, in a so-called edge cloud with more than 440 points of presence around the world, provided by more than 250 managed service providers (MSPs).

Evergreen//Flex effectively extends the managed service to cover a customer’s entire set of Pure arrays in the customer’s own and co-location datacenters with fleet-level optimization. Pure can do this because it has Pure1 cloud-based management using telemetry from the arrays.

Pure1 is a cloud-based AIOps service providing a single web or app interface for a customer to manage all their Pure storage arrays. It provides insights into the installed Pure technology stack, including a topology view to simplify VM troubleshooting. By using its capabilities Pure can monitor overall fleet operation, model it, and recommend moving less-used capacity (DFMs) to actual or potential hot spots. The potential for this kind of service extends to moving DFMs between FlashBlade and FlashArray systems, and eventually storage compute – meaning storage blades – as well. Evergreen//Flex could improve the efficiency and TCO of Pure fleets significantly.

Pure Storage upgrades AIRI with FlashBlade//S

Pure Storage has upgraded its combined AI infrastructure offering AIRI with FlashBlade//S storage, making it able to analyze more data faster for better machine learning models and AI insights.

When it was launched in 2018, AIRI (AI-Ready Infrastructure) was a half rack-sized system containing a Pure Storage FlashBlade all-flash array with four Nvidia DGX-1 GPU servers and a pair of 100GbitE switches from Arista. It went through various iterations, with DGX-2 support in 2019, DGX-A100 in 2020, and now FlashBlade//S with the AIRI//S system. En route Mellanox switches were adopted as part of Nvidia’s Spectrum networking.

Pure’s Amy Fowler, strategy and solutions VP for FlashBlade, said: “Traditional approaches to AI infrastructure often result in silos of servers and storage that are either over-spent on capacity or starve AI workloads. With a focus on simplicity and scalability, AIRI//S enables global enterprises to achieve better time to insights and make the most out of their data with AI.” 

AIRI system. That side panel is designed for visibility

Charlie Boyle, Nvidia’s VP for DGX systems, said that AIRI//S “built on Nvidia DGX systems provides customers with modular, high-performance enterprise infrastructure that scales easily” as demand for integrated, full-stack computing optimized for AI continues to grow.

Pure says AIRI//S systems can be set up, deployed, and managed quickly as an end-to-end AI pipeline solution – a dedicated AI box set. It can scale storage and AI compute capabilities. AIRI//S, compared to existing AIRI systems, is a more powerful, high-performance AI infrastructure in the same footprint. It delivers the FlashBlade//S advantages of higher density, reduced power consumption, and better datacenter efficiency. AIRI//S can also be operated and consumed on a subscription basis through the Evergreen//One and Evergreen//Forever services.

FlashBlade//S lit-up bezel

AIRI//S, meaning FlashBlade//S, does not yet support Nvidia’s MagnumIO GPUDirect Storage (GDS). Pure CTO Rob Lee said that the total work time for an AI job includes searching a source dataset for the desired subset, extracting it and then sending it to the GPUs. The search and extract process can take far longer than the data transfer to the GPUs. Lee said FlashBlade//S shortens that time so that the overall job time is lowered by more than just having a faster data transfer time, which is what GDS provides.
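To illustrate the point with invented numbers – the timings below are ours, not Pure’s – shrinking the search-and-extract stage cuts total job time far more than halving the transfer stage does:

```python
# Hypothetical AI data-preparation timings (seconds); all values are invented.
def total_time(search_extract_s, transfer_s, compute_s):
    return search_extract_s + transfer_s + compute_s

baseline        = total_time(600, 60, 300)   # 960s end to end
faster_transfer = total_time(600, 30, 300)   # 930s: transfer halved (GDS-style gain)
faster_search   = total_time(200, 60, 300)   # 560s: search/extract cut by two-thirds
print(baseline, faster_transfer, faster_search)
```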

AIRI is a modular system with the ability to swap in faster components such as network switches, GPUs, and storage hardware and software. FlashBlade//S and its Purity//FB OS will support GDS next year. AIRI//S will adopt it as a result, and then AIRI//S will be an even faster system.

StorONE adds NVMe-attached disk tier to its array

StorONE has enabled indirect NVMe-oF access to disk drives through an NVMe flash tier with its latest array software update.

This development comes in advance of native NVMe access to disk drives. NVMe (Non-Volatile Memory Express) is a highly parallel access protocol for flash (non-volatile) storage, whereas hard disk drives (HDDs) have traditionally been accessed by serial protocols such as SAS and SATA, which are much slower. The NVM Express organisation released the NVMe v2.0 spec in June 2021, extending NVMe’s scope to cover removable cards, compute accelerators, and HDDs as well as the existing enterprise and client SSDs. Seagate could ship demo NVMe disks by mid-2024. There’s no need to wait – StorONE is delivering indirect NVMe HDD access now.

Gal Naor.

CEO Gal Naor said: “To date, vendors have used NVMe-oF solely to advance their all-flash agendas, which only accelerates the cost of storage. We now give customers the best way to leverage NVMe-oF and continue to drive down the cost of storage.”

StorONE’s Enterprise Storage Platform array can connect to the network via NVMe-oF and manage NVMe SSDs as the primary tier while automatically moving less active data to the secondary HDD tier for longer term retention. It says users benefit from a single array that can deliver millions of IOPS from flash and also decades of affordable retention from disk. They also, we could say, reduce the cost of NVMe storage by up to 10x.
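In outline, tiering of this kind keeps hot data on flash and demotes cold data to disk. The sketch below is our illustration – the extent model, threshold, and promotion logic are hypothetical, not StorONE’s actual algorithm:

```python
# Hypothetical flash-to-disk tiering pass: demote extents untouched for a week.
import time

DEMOTE_AFTER_SECONDS = 7 * 24 * 3600     # arbitrary one-week cold threshold

def rebalance(extents: list, now: float = None) -> None:
    """extents: dicts with 'id', 'tier' ('nvme' or 'hdd') and 'last_access' timestamps."""
    now = now or time.time()
    for ext in extents:
        idle = now - ext["last_access"]
        if ext["tier"] == "nvme" and idle > DEMOTE_AFTER_SECONDS:
            ext["tier"] = "hdd"          # cold: move to the cheaper HDD tier
        elif ext["tier"] == "hdd" and idle < 60:
            ext["tier"] = "nvme"         # recently touched: promote back to flash

extents = [{"id": 1, "tier": "nvme", "last_access": time.time() - 30 * 24 * 3600},
           {"id": 2, "tier": "nvme", "last_access": time.time()}]
rebalance(extents)
print(extents)                           # extent 1 demoted to hdd, extent 2 stays on nvme
```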

Naor said: “Our platform approach to storage not only means supporting a wide variety of use cases but also new protocols, like NVMe-oF. At the same time, we make sure our customers can continue to leverage their existing investment, eliminating the need for costly storage migrations.” StorONE’s product supports multiple protocols simultaneously, enabling customers to adopt NVMe-oF at their own pace. They can carry on using existing protocols like iSCSI, Fibre Channel, NFS, SMB, or S3, while investing in NVMe-oF at the same time, without migrating data.

Companies that are already using NVMe-oF and are looking to archive to more affordable HDD technology can start using StorONE as an NVMe-attached HDD archive target. They can then add performance-sensitive workloads to the platform, using its flash tier as needs demand.

Existing StorONE customers can add the HDD-over-NVMe-oF capability through a software update at no additional charge. To find out more, you can register for StorONE’s “NVMe-oF Readiness Workshop” webinar on June 28.