
Cloud Titan OEM deals mean advantage NetApp

NetApp’s OEM deals with the AWS, Azure and Google public clouds set it ahead of all other file-focused storage providers.

Jason Ader, a William Blair financial analyst, gave subscribers the benefit of his interview with NetApp EVP and general manager for public cloud Antony Lye. “No other storage vendor’s technology sits behind the user consoles of three big cloud service providers and is treated as a first-party service, sold, supported, and billed by the CSPs themselves.”

He is referring to OEM-type deals in which NetApp’s ONTAP file and block data management software is sold as a service by the three big CSPs: AWS, Azure and Google.

NetApp provides two ways for customers to get ONTAP in the three public clouds. The first is as a self-managed Cloud Volumes ONTAP service with software available through the respective marketplaces, while the second is through services sold, supported and billed by the cloud providers themselves.

These are summarised in the diagram below.

Blocks & Files diagram.

NetApp receives a cut of the revenue from these services. Thus, with Azure NetApp Files (ANF), “NetApp is compensated monthly by Microsoft based on sold capacity.” Customers for ANF typically run SAP, VMware-based apps, VDI apps and legacy RDBMS apps. SAP has certified ANF so “NetApp is able to offer backups, snapshots and clones in the context of the application itself.”

Lye told Ader that Amazon FSx for NetApp ONTAP “took two and a half years to stand up.” And, because FSx sits behind the AWS console, NetApp “has been able to provide native integrations with popular Amazon services like S3, EKS, Lambda, Aurora, Redshift and SageMaker.”

AWS also offers FSx for Lustre and FSx for Windows File Server, but AWS execs “have publicly stated that FSx for NetApp ONTAP is one of AWS’s fastest-growing services right now.” According to Lye, 60 per cent of FSx for ONTAP customers are new to NetApp.

Because NetApp is seen as “the clear leader in file storage and file-based protocols” the three cloud titans “have not felt the need (at least not yet) to integrate as deeply with NetApp’s competitors.”

Ader’s NetApp public cloud offerings table.

NetApp also provides a set of CloudOps services (Spot by NetApp) covering DevOps, FinOps and SecOps (development, cost and security operations) based on five acquired products: Spot, CloudHawk, CloudJumper, Data Mechanics, and CloudCheckr.

They are aimed at customers building cloud-native applications and at their DevOps architects. Such customers are often new to NetApp and represent a storage cross-sell opportunity.

The company also offers its Cloud Insights facility, cloud instantiations of its OnCommand Insight software to help with “storage resource management, monitoring and security”. This is sold both direct to customers and through the three CSP marketplaces. Altogether the Insight offerings cover on-premises, hybrid and public cloud scenarios, supporting legacy and cloud-native applications.

From Ader’s point of view no other enterprise storage supplier is as well-placed as NetApp for providing hybrid cloud storage and data services. Until its competitors catch up — if they ever do — NetApp has a clear lead and its public cloud-related revenues should increase steadily.

Fewer disk units shipped in Q4 ’21 as nearline rises

There were fewer disk drives shipped in 2021’s last quarter than a year ago, but the nearline segment saw a 38 per cent unit ship rise.

Approximately 64 million drives were shipped in the quarter, according to preliminary numbers from research house TrendFocus — a 9 per cent year-on-year fall. Within that number about 18.5 million nearline (3.5-inch high-capacity) drives were shipped — up 38 per cent from a year ago (13.39 million). 

There was a quarter-on-quarter decline in nearline disk ship units, with 19.75 million being shipped in Q3 2021, but Wells Fargo analyst Aaron Rakers thinks this was due to supply chain issues affecting shipments to hyperscalers and some softening of demand in China, not an overall market slowdown.

Rakers estimates that nearline drive capacity shipped in the quarter was between 235 and 240EB, representing a 60 per cent year-on-year increase, due to the average capacity per drive increasing. We saw 16 and 18TB drives shipping in 2021.
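A quick back-of-envelope check of those figures gives the implied average capacity per nearline drive shipped. The 237.5EB midpoint is our assumption; Rakers quoted a 235–240EB range:

```python
# Back-of-envelope check of the implied average capacity per nearline drive.
# The 237.5EB midpoint is an assumption; Rakers quoted a 235-240EB range.
eb_shipped = 237.5         # exabytes shipped in the quarter (midpoint)
drives_shipped = 18.5e6    # nearline units shipped in the quarter

tb_per_drive = eb_shipped * 1e6 / drives_shipped   # 1EB = 1,000,000TB
print(f"Implied average nearline capacity: {tb_per_drive:.1f}TB per drive")
```

The result, around 12.8TB per drive, is consistent with a shipping mix that blends new 16TB and 18TB models with older, smaller drives.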

He says there was no significant change in vendor ship market share. Seagate shipped some 27.2 to 27.7 million drives in the quarter, giving it a near 43 per cent market share. Western Digital was next, with 23.5 to 24 million drives shipped and a near 37 per cent share. Toshiba had a 20 per cent share, having shipped 12.7 to 13 million drives.

The chart of disk drive segment revenue shares above shows a sawtooth line in nearline drives, and also an unexpected rising trend in mission-critical enterprise drives (2.5-inch 10,000rpm – blue line) over the year. In general 2.5-inch mobile and branded drives and 3.5-inch PC and branded drives continued their decline as SSDs take over their data storage role. That takeover seems to be slowing down as a rump of customers prefer disk’s greater capacity and lower price over SSD’s higher speed at a higher cost.

Storage news ticker – January 13

Data protector Catalogic Software announced general availability for its latest DPX software, with enhancements to agentless backups for virtual environments, cloud archiving, and improved capabilities for compliance and ransomware protection. Version 4.8 has single file recovery enabling restoration of specific files or directories from VMware and Microsoft Hyper-V agentless backups, support for protecting data attached to SATA and NVMe controllers for VMware, and the option to run only full backups of VDI VMs. The vStor component gets AWS Object Lock.

Danielle Sheer.

Commvault has appointed a new chief legal officer, Danielle Sheer, who will lead the company’s global legal and compliance teams and its governance, commercial, intellectual property, and privacy programs. Sheer previously served as general counsel at financial technology services company Bottomline and at cloud-backup SaaS solutions provider Carbonite. She currently serves on the board of directors of LinkSquares, the leadership board at Beth Israel Deaconess Medical Center, and on the steering committee for TechGC. 

MSP and cloud data protector Datto has hired Brooke Cunningham as its CMO. She joins from Splunk, where she was area VP for global partner marketing and experience, with earlier stints at Qlik, CA Technologies and SAP.

Brooke Cunningham.

Scale-out file system provider Qumulo has announced the availability of AWS Quick Start for its Cloud Q offering running on AWS. This fully automated deployment experience enables customers to build cloud-first AWS file systems ranging from 1TB to 6PB in minutes. Qumulo offers 1TB and 12TB AWS free trials in the AWS Marketplace as well. AWS Quick Start for Qumulo Cloud Q supports almost all AWS regions globally and also supports deployments on AWS Local Zones and AWS Outposts.

Seagate says international non-profit organisation CyArk — which digitally records, archives, and shares world cultural heritage sites — has moved its vast data stores to Seagate’s Lyve Cloud. CyArk’s team used Seagate’s Lyve Mobile data transfer services to move its datasets from multiple on-premises storage devices and servers to Lyve Cloud.

AIOps and one time app and infrastructure performance management business Virtana has raised $73 million in funding from Atalaya Capital Management, Elm Park Capital Management, HighBar Partners, and Benhamou Global Ventures. The company says that, with additional capital and resources, it will be able to accelerate its innovation and better meet the needs of customers through increased investment in product development, sales, and marketing. Virtana had a comprehensive exec makeover in November 2020 and the new team has convinced investors to stump up significantly more funding.

ReRAM developer Weebit Nano has joined the Global Semiconductor Alliance (GSA), described as the voice of the global semiconductor industry. Coby Hanoch, CEO of Weebit Nano, said “2022 will be a pivotal year for Weebit as we qualify our ReRAM IP, paving the path to customer volume production. Therefore, as we enter this new commercial phase of the business, it now makes sense for us to join the GSA, enabling us to more deeply engage and collaborate with partners, customers and peers.”

Open source distributed SQL database supplier Yugabyte has expanded the course offerings and certification opportunities of its free education program, Yugabyte University. It offers students free resources, including video downloads, hands-on labs, office hours, discussions, and proof of completion. Yugabyte University intends to support the growing demand for distributed SQL database professionals by offering free training courses to over 10,000 new students and awarding more than 4,000 professional certifications in 2022.

Ising on the cake: Sync Computing spots opportunity for cloud resource optimisation

Startup Sync Computing has devised a hardware answer to the problem that NetApp’s Spot solves with software: how to optimise large-scale public cloud compute and storage use.

Update. CEO Jeff Chou positions Sync vs NetApp’s Spot. 14 January 2022. SW focus. 17 January 2022.

It’s operating in near stealth, and what we describe here is not based on company announcements. Instead it relies on an article by one of its funders: The Engine, an MIT-based financial backer.

Enterprises are finding that using hundreds, if not thousands, of cloud compute instances and storage resources costs significant amounts of cash. It’s virtually impossible to navigate the complex compute and storage cloud infrastructure environments in real time or manage them effectively over time, meaning cloud customers spend more, much more than they actually need to in order to get their application jobs done in AWS, Azure and Google, etc.

The genius of the Spot.io company bought by NetApp lay in recognising that software could help solve the problem. Its Elastigroup product provisions applications with the lowest cost, discounted cloud compute instances, while maintaining service level agreements, and with a 70–90 per cent cost saving.

Now, two years later, a pair of MIT Lincoln Laboratory researchers argue the problem is getting so bad that navigating the maze of instance classes across time and clouds needs attacking with hardware as well as software. They say the problem, classed as combinatorial optimisation (CO), is analogous to physical world CO issues, such as the classic travelling salesman scenario. This is trying to find a route for the sales rep between a set of different destinations to minimise the time and distance travelled.
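To make the combinatorial explosion concrete, here is a minimal travelling-salesman sketch in Python; the four cities and their distances are invented for illustration. Exhaustive search is trivial at this size, but the number of possible routes grows factorially with the city count, which is exactly what defeats brute force at cloud scale:

```python
from itertools import permutations

# Brute-force travelling salesman over four cities with invented distances.
# Feasible here, but the route count grows factorially with city count.
dist = {
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
}

def leg(a, b):
    # Distances are symmetric, stored once per city pair.
    return dist[(a, b)] if (a, b) in dist else dist[(b, a)]

def route_length(route):
    # Length of the closed tour that returns to the starting city.
    return sum(leg(route[i], route[(i + 1) % len(route)]) for i in range(len(route)))

best = min(permutations(["A", "B", "C", "D"]), key=route_length)
print(best, route_length(best))   # shortest tour for these distances is 80
```

Finding the cheapest mix of cloud instances for a job is the same species of problem, with instance types standing in for cities.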

They have applied their CO algorithm expertise to designing hardware — a parallel processing device — to solve the specific cloud instance optimisation problem more effectively.

Suraj Bramhavar (left) and Jeff Chou (right). Image from The Engine.

Sync Computing was founded in 2019 by two people: CEO Jeff Chou and CTO Suraj Bramhavar. Chou was a high-speed optical interconnect researcher at UC Berkeley and a postdoctoral researcher running high-performance computing optical simulations at MIT. Bramhavar was a photonics researcher at Intel and then a technical staff member at MIT, developing photonic ICs and new electronic circuits for unconventional computing architectures.

Their company took in a $1.3 million seed round in November 2019 and more cash from an undisclosed venture round in October 2021. The company website provides a flavour of what they are doing, declaring: “Future performance will be defined not by individual processors but by careful orchestration over thousands of them. The Sync Optimization Engine is key to this transition, instantly unlocking new levels of performance and savings. … Our technology is poised to accelerate scientific simulations, data analytics, financial modeling, machine learning, and more. These workloads are scaling at an unprecedented rate.”

The OPU

Sync Computing’s Optimization Processing Unit (OPU) has a non-conventional circuit architecture designed to cope when the number of potential combinations (of instances and instance types for a job in the cloud) is too high for a current server to search through and find the best one. They say that as the number of combinations scales up, their OPU’s performance overtakes that of general-purpose CPUs and GPUs, taking orders of magnitude less time to find the best combination.

The OPU uses a design mentioned in a 2019 Nature article by the two founders and others, Analog Coupled Oscillator Based Weighted Ising Machine. This describes an “analog computing system with coupled non-linear oscillators which is capable of solving complex combinatorial optimisation problems using the weighted Ising model. The circuit is composed of a fully-connected four-node LC oscillator network with low-cost electronic components and compatible with traditional integrated circuit technologies.”

Diagram from Nature paper with rightmost image showing the OPU breadboard system.

The Ising model is a mathematical description of ferromagnetism in statistical mechanics and has become a generalised mathematical model for handling phase transitions in statistics. 

The paper showed that the OPU — an oscillator-based Ising machine instantiated as a breadboard — could solve random MAX-CUT problems with 98 per cent success. MAX-CUT is a CO benchmark problem where the solution is to produce a maximum cut (combination of options) no smaller than any other cut.
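For a sense of what a MAX-CUT instance looks like, here is a toy brute-force version in Python; the five-node graph is hypothetical, not taken from the paper. Each node is assigned to one of two sides and the cut value counts the edges crossing between sides. An Ising machine searches the same space physically rather than by enumeration:

```python
from itertools import product

# Brute-force MAX-CUT on a toy five-node graph (edges invented, not from
# the Nature paper). Each node goes to side 0 or 1; the cut value counts
# edges whose endpoints land on different sides. 2**5 assignments is easy,
# but the search space doubles with every extra node.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]

def cut_value(sides):
    return sum(1 for u, v in edges if sides[u] != sides[v])

best = max(product([0, 1], repeat=5), key=cut_value)
print(best, cut_value(best))   # the best cut crosses 5 of the 6 edges
```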

The paper argues: “Solutions are obtained within five oscillator cycles, and the time-to-solution has been demonstrated to scale directly with oscillator frequency. We present scaling analysis which suggests that large coupled oscillator networks may be used to solve computationally intensive problems faster and more efficiently than conventional algorithms. The proof-of-concept system presented here provides the foundation for realizing such larger scale systems using existing hardware technologies and could pave the way towards an entirely novel computing paradigm.”

Update. We now understand that Sync is focusing on software rather than hardware for its initial product with hardware becoming necessary as the problem scales.

Sync versus NetApp’s Spot

Chou sent us his views on how Sync’s technology relates to NetApp Spot, saying: “Our solution goes much deeper technically than theirs; in fact you can use us on top of Spot (Duolingo is already using Spot). The gains we got for them were on top of Spot instances.

“Fundamentally we deploy a level of optimisation that goes from the application down to the hardware, which is how we’re able to get even more gains. We are not just cost-based; we can accelerate jobs as well. We let companies choose if they want to go faster, cheaper or both.

“We are also cloud platform-agnostic, we work with AWS EMR, Databricks, etc. Whereas [NetApp’s] Data Mechanics is only Spark on Kubernetes within the NetApp ecosystem.

“Longer term our ‘Orchestrator’ product goes into cluster-level scheduling to perform a global optimisation of all resources and applications; something nobody else is doing.”

Comment

Sync Computing’s OPU could optimise large-scale public cloud resources better, meaning faster and at lower cost. Dynamically too — beyond the point where conventional server processors and even GPUs give up. It is very early days for this startup, but its area of focus is the core of NetApp’s CloudOps business unit.

Earlier this month data protector Cobalt Iron said it had been awarded a patent that covered technology for the optimal use of on-premises and public cloud resources. This technology is based on operational and infrastructure analytics and responds to changing conditions; it’s dynamic. 

We have two established companies highlighting software approaches to solving the public cloud CO problem. If they have identified a large enough and growing problem, then Sync Computing has a good shot at making it.

Samsung demos MRAM chip with embedded compute for AI

Samsung has demonstrated an in-memory computing MRAM chip processing stored data, and accurately working on AI problems such as face detection. A Nature paper is coming to explain how it was done.

This technology could provide low-power dedicated AI chips combining compute and storage to reduce data movement, speed processing and lower energy use. It could also help develop neuromorphic computing — analogous to how masses of neurons and synapses in the brain inter-operate.

Lead paper author Dr Seungchul Jung is quoted in Samsung’s announcement: “In-memory computing draws similarity to the brain in the sense that in the brain, computing also occurs within the network of biological memories, or synapses, the points where neurons touch one another.

“In fact, while the computing performed by our MRAM network for now has a different purpose from the computing performed by the brain, such solid-state memory network may in the future be used as a platform to mimic the brain by modelling the brain’s synapse connectivity.”

MRAM technology

MRAM or magneto-resistive RAM, also known as spin-transfer torque magneto-resistive RAM (STT-MRAM), relies on the different spin directions of electrons to signal a binary one or zero. 

It is a non-volatile memory technology with greater speed and endurance than NAND. While it has been actively investigated as a possibility for replacing SRAM for many years, progress has been limited as it is a difficult technology to develop, with high manufacturing costs.

As we wrote back in 2018, STT-MRAM cells or storage elements have two ferromagnetic plates, electrodes, separated by non-magnetic material. One high-coercivity electrode has its magnetism pinned because it needs a larger magnet field or spin-polarised current to change its magnetic orientation compared to the other electrode.

This second, lower coercivity electrode is called a free layer and its north-south orientation can be changed more easily. Binary values are stored by making the free electrode have the same, parallel, north-south orientation as the reference electrode or a different, anti-parallel one. The electrical resistance of the cell, due to the spin-polarised electron tunnelling effect, is different in each state, indicating a binary value.

The complexity of the technology can be seen in developer Spin Memory’s product, using a 3D two-level cell crossbar array — it involves more than 250 separate patents.

Spin Memory MRAM crossbar graphic.

Everspin is another MRAM developer and its STT-MRAM offers higher write and read speeds than DRAM and has been used by IBM in its FlashSystem 9100 and Storwize V7000 systems. In effect, MRAM is in commercial production already but it is a niche technology.

Compute this

Samsung’s neat idea is to use MRAM differently, by adding compute elements to an MRAM chip built as a 64×64 cell crossbar array, and use its data access speed and non-volatility along with in-chip parallel processing to speed AI tasks. Naturally the scope of the compute is limited — you can’t realistically add general-purpose CPU cores to blocks of cells in a memory chip, but you can add small processors with limited instruction sets.

Samsung has been working on processing-in-memory (PIM) ideas for some time. For example, it has an Aquabolt-XL HBM2 chip embedding a programmable computing unit (PCU) inside each memory bank, minimising the need for data movement.

Samsung MRAM tack

The Samsung MRAM researchers took a different tack. They thought that the low resistance of MRAM cells meant MRAM PIM chips would need more power than competing ReRAM and Phase Change Memory (PCM) technologies. So they changed the MRAM chip design or architecture from what is called “current-sum” to an alternative “resistance sum” for analogue multiply–accumulate operations, which they say addresses the problem of the small resistances of individual MRAM devices.

Analogue AI PIM technology promises to use much less electrical power than computing artificial neural networks (ANNs) in digital processors. 

We’d like to understand this much more but the details are hidden behind scientific paper paywalls. The paper’s available diagrams talk about artificial neural networks and performing analogue vector–matrix multiplication to transfer data from one layer to the next. The paper abstract mentions “multiply–accumulate operations prevalent in artificial neural networks.”

Samsung MRAM PIM paper diagram.

We have found a preliminary copy, and its first page compares in-memory processing and conventional von Neumann architecture. “The rate at which data can be transferred between the processing unit and the memory unit represents a fundamental limitation of modern computers, known as the memory wall. In-memory computing is an approach that attempts to address this issue by designing systems that compute within the memory, thus eliminating the energy-intensive and time-consuming data movement that plagues current designs.”

The researchers’ MRAM crossbar array design is explained like this: “With each memory storing a synaptic weight as its conductance value, the crossbar array executes the vector–matrix multiplication, the most prevalent ANN algebra. Each column yields a dot product between the input voltage vector fed to the rows and the column weight vector, by first multiplying the memory conductance and the input voltage at each row–column cross-point via Ohm’s law and subsequently summing the resulting cross-point currents along the column via Kirchhoff’s law.

“This physical matrix multiplication, or analogue multiply–accumulate (MAC) operation, consumes far less power than its digital counterpart.”
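In software terms, what the crossbar computes physically is an ordinary vector–matrix multiply. A sketch with invented voltage and conductance values (not figures from the paper):

```python
# Software analogue of the crossbar's physics, with invented values.
# Ohm's law: each cross-point current = row voltage x cell conductance.
# Kirchhoff's law: each column current = sum of its cross-point currents.
# Together the array performs one vector-matrix multiply in a single step.
voltages = [0.1, 0.2, 0.3]        # input vector, one voltage per row
conductances = [                  # synaptic weight matrix, one row per input
    [1.0, 2.0],
    [3.0, 4.0],
    [5.0, 6.0],
]

def column_currents(v, g):
    cols = range(len(g[0]))
    return [sum(vr * grow[c] for vr, grow in zip(v, g)) for c in cols]

print(column_currents(voltages, conductances))   # one dot product per column
```

The analogue version does this in one physical step per column, which is where the claimed power saving over digital MAC units comes from.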

Once the chip was fabricated, the researchers tested it on a couple of classic AI problems. It achieved 98 per cent accuracy in classifying hand-written digits and 93 per cent accuracy in detecting faces in scenes.

Comment

This compute-in-memory MRAM chip looks highly specialised for specific types of AI problems and is unlikely to appear in enterprise computing until it needs to deal with such problems in a routine way.

Bootnote: The paper, “A crossbar array of magnetoresistive memory devices for in-memory computing”, has been published online by Nature, with print publication coming shortly.

The research was led by Samsung Advanced Institute of Technology (SAIT) in collaboration with Samsung Electronics Foundry Business and Semiconductor R&D Center. The first author of the paper, Dr Seungchul Jung, staff researcher at SAIT, and the co-corresponding authors Dr Donhee Ham, fellow of SAIT and professor of Harvard University, and Dr Sang Joon Kim, vice president of technology at SAIT, led the research.

HSMR

HSMR – Hybrid Shingled Magnetic Recording disk drives. Hybrid SMR drives have both a conventional, non-shingled magnetic recording (conventional PMR, or CMR) region and an SMR region. HSMR drives were specified by Meta’s (then Facebook) Open Compute Project (OCP) back in the 2012 era with OCP18. Western Digital and Seagate participated in the HSMR project. The idea is to spread hot data across a set of disk drives by giving each drive a hot-data CMR region, rather than having hot data occupy the whole of one disk’s surface, so as to increase the data access rate. The rest of an HSMR drive would be filled with cold data in an SMR region. This is an example of zoned namespace media.
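The placement idea can be sketched as a simple routing decision: frequently accessed data goes to each drive's CMR region, bulk cold data to the denser SMR region. The Python function and its heat threshold below are our illustration, not part of the OCP specification:

```python
# Toy illustration of HSMR data placement (a sketch, not the OCP spec):
# blocks with high estimated access frequency go to the drive's fast CMR
# region; cold bulk data goes to the denser shingled SMR region.
def place(block_heat, hot_threshold=0.5):
    # block_heat: estimated access frequency, normalised to [0, 1]
    return "CMR" if block_heat >= hot_threshold else "SMR"

workload = {"db-index": 0.9, "vm-image": 0.6, "log-archive": 0.1, "backup": 0.05}
placement = {name: place(heat) for name, heat in workload.items()}
print(placement)   # hot items land in CMR, cold items in SMR
```

Spread across a fleet, this keeps each drive's fast region busy with hot data while its shingled capacity absorbs the cold bulk.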

OCP18 HSMR document diagram.

Download this OCP18 document to find out more.

Storage news ticker – January 12

IBM Cloud Pak for Data is a unified data and AI platform that runs on any cloud. It’s now available on the Azure Marketplace along with the IBM Cloud Pak for Data BYOL variant.

Nvidia has bought Bright Computing, which makes HPC management software, for an undisclosed sum. Bright Computing’s customer list includes Boeing, NASA, Tesla, Johns Hopkins University and Siemens. Companies in health care, financial services, manufacturing and other markets use Bright Cluster Manager software to set up and run clusters of servers linked by high-speed networks into a single unit. The company’s employees will join Nvidia.

DataCentre Dynamics reports “51 international companies who lost data when OVHcloud’s SBG2 datacenter in Strasbourg burnt down in March 2021 have joined a class action claiming up to €1.9 million in damages.” Read the story here.

MultiPay Group, a global payments technology company, has signed with Percona, which provides open source database software and services, to provide Managed Services for its MySQL open source database deployments. MultiPay provides an API that acts as a single point of integration between any payment method and any acquirer. It relies on Percona XtraDB Cluster and Percona Monitoring and Management for its operational database and management.

Kubernetes data management and 5G systems supplier Robin.io announced a strategic collaboration with STL, an integrator of digital networks, to offer XaaS (anything as-a-service). The XaaS offering will leverage the STL Enterprise Marketplace Platform with the Robin Cloud-Native Platform (CNP) to deliver enterprise applications and 5G services. Partha Seetala, founder and CEO of Robin.io, said: “Built on the foundation of cloud-native, zero-touch automation and open architectures, the integrated marketplace solution will enable CSPs to deliver new revenue models and accelerate customer onboarding while keeping service delivery costs in check. The marketplace solution built jointly by STL and Robin.io for service providers and enterprises will disrupt the way XaaS frameworks are built and delivered.”

ThreeFold says its new highly energy-efficient internet infrastructure offers cloud storage that consumes 90 per cent less energy than other storage systems, thanks to its lightweight operating system and peer-to-peer design. It claims to be the world’s first and largest peer-to-peer carbon-negative internet grid. Its grid, in contrast to the current centralised model of hyperscale and power-hungry datacentres, uses a blockchain-based decentralised model to spread internet access throughout the world by enabling anyone to participate in providing internet capacity (compute, storage and networking).

Colourful ExaGrid spills the competitive beans

We had a briefing from ExaGrid President and CEO Bill Andrews after hearing about its final 2021 quarter results. He provided us with a little more colour, as he put it.

Blocks & Files: Who else is primarily focused on making backup appliances?

Bill Andrews.

Bill Andrews: No vendor is focused right now on building or advancing a dedicated appliance for backup. We appear to be the only ones constantly adding to the product feature set.

On what do you base your view of the data protection target market?

We have over 160 in our sales org and soon will have over 200. We truly do see the market for what it is, because we are talking to a wide range of customers. We get in through resellers bringing us in but we also have 36 inside sales reps that cold call into named accounts.

What competing suppliers and products do you encounter?

Since we are in the upper mid-market to the enterprise, the backup applications we sit behind are Veritas NetBackup, Commvault, Oracle RMAN, IBM Spectrum Protect, Dell Networker and Dell Avamar.

Veeam continues to move upmarket and has a good base. We only see Rubrik and Cohesity if the customer is looking to switch one of the above and then ExaGrid teams up with either Veeam, Commvault or HYCU to compete.

You say you have a 75 per cent win rate and bid into customers with existing backup target systems. Who are you meeting and beating?

When we go into accounts we replace the following in the following order:

  1. Primary storage disk behind Veeam, Commvault and IBM Spectrum Protect;
  2. Dell Data Domain behind Veeam, Veritas NetBackup, Oracle RMAN;
  3. Veritas 5300/5400 appliances behind Veritas NetBackup;
  4. HPE StoreOnce behind Veeam;
  5. Dell Data Domain behind Commvault;
  6. Dell Data Domain behind IBM Spectrum Protect;
  7. We also replace the storage and deduplication appliances behind BackupExec, Acronis and many others. ExaGrid supports 25 backup applications and utilities.

Do you meet Cohesity or Rubrik in the deals ExaGrid bids for?

When the customer is looking to change both the backup storage and the backup application then we will see Rubrik and Cohesity, as they can only sell if the customer is replacing both at the same time. Most of the time it is Veeam–ExaGrid replacing Dell Networker with Dell Data Domain, or Dell Avamar with Dell Data Domain. Most Networker and Avamar customers are leaving at high speed. We will see Rubrik or Cohesity in these deals. We don’t see many Commvault or NetBackup accounts turning over, mostly Dell Networker and Dell Avamar.

In a Nutanix environment, ExaGrid will go in as HYCU–ExaGrid rather than Veeam–ExaGrid. In this case, HYCU–ExaGrid competes with Rubrik or Cohesity.

More on a case-by-case basis, we work with Commvault–ExaGrid in some deals.

The net is, when a customer is changing their backup application they will look at Veeam–ExaGrid, HYCU–ExaGrid, Commvault to disk, Commvault–ExaGrid, Rubrik, Cohesity [in that order].

Comment

Andrews does not mention Quantum’s DXi systems at all, suggesting that they just do not appear — or appear rarely — in what Andrews refers to as upper mid-market and enterprise markets. Quantum’s newest DXi V5000 offering is targeted at remote and branch offices.

Kastenated – Backblaze partners Veeam’s container protection BU

Storage pod

Kasten — a business unit of Veeam dedicated to protecting Kubernetes application data — has added Backblaze B2 Cloud Storage to its set of supported backup target systems.

Nilay Patel.

B2 Cloud Storage is an S3-compatible public cloud object store with Object Lock capabilities to prevent ransomware corrupting or deleting its stored backups.

Nilay Patel, VP of sales and partnerships at Backblaze, offered a statement: “Kubernetes containers are the standard for many organisations building, deploying, and scaling applications with portability and efficiency. Backblaze and Kasten together offer a compelling solution to support these organisations’ business continuity needs with set-it-and-forget-it ease and cost effectiveness.”

Kasten’s K10 product protects Kubernetes-orchestrated containerised applications. It supports backing up the HPE Ezmeral Container Platform, Nutanix Karbon, Red Hat OpenShift, Microsoft Azure Stack, and others, and sending backups to NFS targets (from EMC and NetApp, for example) and S3 targets. These include AWS, Azure, Google, MinIO and Scality object storage, meaning both on-premises and public cloud targets. B2 Cloud Storage has now been added to the list.

Backblaze charges customers by storage capacity used ($0.005/GB/month vs AWS S3’s $0.021/GB/month) and has low data egress fees ($0.01/GB vs AWS S3’s $0.05/GB) — differentiating it from the main public clouds. With B2 there are no data retention penalties for deleting past backups either.
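Using the per-GB rates quoted above, a rough monthly comparison for a hypothetical workload (10TB stored, 1TB egressed; request fees and volume-tiered pricing on both services ignored) can be sketched as:

```python
# Rough monthly bill comparison using the per-GB rates quoted above.
# The 10TB-stored / 1TB-egress workload is hypothetical, and request fees
# and volume-tiered pricing on both services are ignored.
def monthly_cost(stored_gb, egress_gb, storage_rate, egress_rate):
    return stored_gb * storage_rate + egress_gb * egress_rate

stored_gb, egress_gb = 10_000, 1_000
b2 = monthly_cost(stored_gb, egress_gb, 0.005, 0.01)
s3 = monthly_cost(stored_gb, egress_gb, 0.021, 0.05)
print(f"B2: ${b2:,.2f} vs S3: ${s3:,.2f}")
```

On those assumptions B2 comes to $60 a month against $260 for S3, which illustrates the price gap Backblaze is trading on.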

Faster data access coming – PCIe generation 6 spec unveiled

PCI-SIG has released the official PCIe gen 6 specification, which doubles PCIe 5 speed to 256GB/sec across 16 lanes.

PCIe system designers can use this to double existing bandwidth across PCIe lanes, or halve the number of lanes for the same bandwidth — thus freeing up PCIe lane slots.
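The doubling pattern is simple enough to express directly. In the bidirectional x16 convention used above (PCIe 6 at 256GB/sec), each generation doubles the previous one, so a half-width link at one generation matches a full-width link at the generation before:

```python
# Bandwidth doubles each PCIe generation, so in the bidirectional x16
# convention used above (PCIe 6 = 256GB/sec), a half-width link at one
# generation matches a full-width link at the generation before.
def x16_bandwidth_gb_per_sec(gen):
    # PCIe 1.x works out to ~8GB/sec bidirectional across 16 lanes
    return 8 * 2 ** (gen - 1)

print(x16_bandwidth_gb_per_sec(6))        # 256
print(x16_bandwidth_gb_per_sec(6) // 2)   # 128, same as a PCIe 5 x16 link
```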

Al Yanes, PCI-SIG chairperson and president, issued a quote: “PCI-SIG is pleased to announce the release of the PCIe 6.0 specification less than three years after the PCIe 5.0 specification. PCIe 6.0 technology is the cost-effective and scalable interconnect solution that will continue to impact data-intensive markets like datacenter, artificial intelligence/machine learning, HPC, automotive, IoT, and military/aerospace, while also protecting industry investments by maintaining backwards compatibility with all previous generations of PCIe technology.”

PCI-SIG chart showing generational speed increases.

Greg Wong, founder and principal analyst at Forward Insights, said: “The PCI Express SSD market [is] forecasted to grow at a CAGR of 40 per cent to over 800 exabytes by 2025. With the storage industry transitioning to PCIe 4.0 technology and on the cusp of introducing PCIe 5.0 technology, companies will begin adopting PCIe 6.0 technology in their roadmaps to future-proof their products and take advantage of the high bandwidth and low latency that PCI Express technology offers.”

PCIe generational transfer speed details table.

NVMe data access across a PCIe 6 bus should be faster, which will increase application execution speed. With the CXL (Compute Express Link) bus being developed on a PCIe 5.0 base, and capable of supporting disaggregated memory configurations, a future version based on PCIe 6.0 should reduce the latency of memory accesses across the link.

PCIe market growth estimate. Client systems take the most shipments, followed by cloud hyperscalers.

We expect the first SSDs supporting PCIe 6 to appear in the 2023–2024 timeframe, with gaming system designers leading the way.

Micron’s incredibly dense gumstick SSD

Micron has announced the 2400 — a 2TB gumstick-sized PCIe 4 SSD using 176-layer 3D NAND formatted with QLC (4bits/cell), doubling the performance of the prior 96-layer QLC 2210 product.

Both products have an NVMe interface and are for PCs and notebook systems, including small and thin laptop designs. Micron says the 2400 is the world’s first 176-layer QLC NAND SSD and it uses charge trap technology with a CMOS-under-array design.

Jeremy Werner, corporate VP and GM of Micron’s Storage Business Unit, said in the release that the company expects “the new 2400 PCIe 4 SSD will significantly accelerate the adoption of QLC in client devices as it enables broader design options and more affordable capacity”.

The 2400 is available in three single-sided M.2 formats — 2230, 2242 and 2280 — with three capacity options in each case: 512GB, 1TB and 2TB. There must be a lot of empty space in the 2280 product as this picture indicates:

Micron competitor SK hynix also has a 176-layer 2TB M.2 SSD — its 2280 Platinum P41 drive for gamers. It was announced earlier this month, though SK hynix did not reveal the cell format. We suspect it is TLC (3bits/cell) as it has much higher performance than the 2400.

The 2400 has no on-board DRAM, utilising a host memory buffer instead. Like the 2210 it has an SLC cache-based dynamic write acceleration feature. Other features include hardware-based AES 256-bit encryption, RAIN and SMART, TCG Opal 2.01, TCG Pyrite 2.01, the Micron Storage Executive management tool, sanitise erase and secure boot.

We have tabulated the details of Micron’s PCIe 4 M.2 format SSDs for comparison:

Micron says the 2400 has 33 per cent higher I/O speed (ONFI 4.x — 1600MT/s vs 1200MT/s) and 24 per cent lower read latency than the 96-layer QLC NAND used in the 2210. It does not provide an actual latency number. The table above shows that the 2400 is more than twice as fast as the 2210 in terms of random read/write IOPS and sequential read and write performance.

However, its endurance is lower. This is expressed in terabytes written (TBW) terms, and the 2210 provided 180TBW at the 512GB capacity point, 360TBW at 1TB and 720TBW at 2TB. The equivalent 2400 ratings are: 512GB — 150TBW; 1TB — 300TBW; and 2TB — 600TBW. This lower endurance may be a concern in high-write workload environments — think of the 2400 as a boot and mixed read/write device.
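Converting those TBW ratings into drive-writes-per-day (DWPD) makes the endurance gap easier to judge. A minimal sketch — the five-year warranty period is our assumption for the calculation, not a figure from Micron:

```python
# Convert TBW ratings to drive-writes-per-day (DWPD).
# The five-year warranty period is an assumption, not from Micron's spec.

def dwpd(tbw, capacity_tb, warranty_years=5):
    """How many full-drive writes per day the TBW rating allows."""
    return tbw / (capacity_tb * warranty_years * 365)

# (capacity TB, 2400 TBW, 2210 TBW) from the ratings quoted above.
for cap_tb, tbw_2400, tbw_2210 in [(0.512, 150, 180), (1, 300, 360), (2, 600, 720)]:
    print(f"{cap_tb}TB: 2400 = {dwpd(tbw_2400, cap_tb):.2f} DWPD, "
          f"2210 = {dwpd(tbw_2210, cap_tb):.2f} DWPD")
```

Both drives come out well under one drive-write per day at every capacity point, which is typical for client QLC and consistent with the boot/mixed-use positioning above.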

Its active idle power has been reduced by half from the 2210. Micron said it is designed to meet Intel Project Athena requirements for more than nine hours of real-world battery life on laptops when using high-definition displays. The power draw is listed in a 2400 product brief table:

According to its product brief, the 2210’s active idle power is <400mW — considerably higher. Its active read power is <4,000mW, but Micron doesn’t provide an equivalent number for the 2400.

The 2400 will be incorporated into some Micron Crucial consumer SSDs, and available as a component for system designers.

Sticking to its knitting – ExaGrid breaks records again

Backup target array supplier ExaGrid has notched up another record-breaking quarter with bookings and revenue highs set in the USA, Canada, Latin America, EMEA and APAC regions.

It brought on 174 new customers in the quarter ended December 31, 2021, with 49 six-figure new customer deals and two seven-figure ones. The total active customer count has passed 3,200 and the company, founded in 2002, has been cash-positive for the last five quarters. It says its competitive win rate is 75 per cent, which must make for some unhappy competitors. Also, 40 per cent of its revenue comes from annual recurring revenue (ARR).

Bill Andrews.

Bill Andrews, ExaGrid’s President and CEO, is clear about why the win rate is high: “ExaGrid reduces the cost of backup storage while maintaining performance and scalability. All of the first-generation deduplication appliances such as Dell EMC Data Domain, HPE StoreOnce, and Veritas 5340, are slow for backup, slow for restores and don’t scale well. 

“Low-cost primary storage disk is too expensive for long-term backup storage. You need to have an integrated approach that brings the performance of backing up to disk but also data deduplication for efficient long-term retention backup storage. ExaGrid’s Tiered Backup Storage provides the best of both worlds.”

Tiered Backup Storage refers to ExaGrid’s scale-out architecture: new backup datasets land in a non-deduplicated landing zone, which provides the fastest restores. The data in those sets is then deduplicated using a global index — not one restricted to the local array — and moved to a non-network-facing repository for longer-term storage.
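The repository tier’s space saving comes from content-defined deduplication against a single shared index. This is an illustrative sketch of that general idea — fixed-size chunking against a global hash index — and not ExaGrid’s actual implementation:

```python
# Illustrative sketch of deduplication with a global chunk index
# (the general technique, not ExaGrid's actual code).
import hashlib

class GlobalDedupStore:
    def __init__(self):
        # One index shared by every ingest, i.e. "global", not per-array.
        self.index = {}  # SHA-256 hex digest -> chunk bytes

    def ingest(self, data: bytes, chunk_size: int = 4096) -> list:
        """Split a backup stream into fixed-size chunks, store only chunks
        not already in the index, and return the recipe (list of hashes)
        needed to restore the stream."""
        recipe = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.index.setdefault(digest, chunk)  # store unseen chunks only
            recipe.append(digest)
        return recipe

    def restore(self, recipe: list) -> bytes:
        """Reassemble the original stream from its recipe."""
        return b"".join(self.index[digest] for digest in recipe)

store = GlobalDedupStore()
backup = b"A" * 10_000          # a highly redundant "backup stream"
recipe1 = store.ingest(backup)  # first full backup stores the chunks
recipe2 = store.ingest(backup)  # repeat backup adds no new chunks
assert store.restore(recipe1) == backup
```

A second backup of unchanged data adds recipe entries but no new chunks, which is why retention of many backup generations is cheap once the data sits behind a deduplicating index.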

We have tabulated ExaGrid’s growth numbers for 2021:

The 39 per cent full-year revenue growth rate from 2020 is a good number and we doubt if it will be matched by competitors Dell EMC (PowerProtect), HPE (StoreOnce), Quantum (DXi), and Veritas. ExaGrid is looking to hire another 60 inside and field sales heads because it is growing so quickly.

Comment

As SSDs and the public cloud come to dominate enterprise IT, we see potential opportunities for ExaGrid. For example, its disk-based landing zone could expand to include QLC (4bits/cell) SSDs, possibly deduplicated to lower the cost, enabling still faster restores — a way of responding to Pure Storage’s FlashBlade and also to VAST Data.

Bill Andrews is not a fan of using SSDs though. He tells us “It is very expensive and drives the cost of backup storage through the Moon. Very few organisations can afford SSD for backup.” Also “Customers are blown away that we are the same or faster than SSD for backup. SSD is really optimized for small writes such as database transactions. It is not optimized for extremely large backup jobs. We do side by side testing all the time.”

ExaGrid could port its software to the public cloud — meaning Amazon, Azure and GCP — and provide a multi-hybrid cloud backup environment. It all depends upon how ExaGrid sees itself: as an on-premises backup target array supplier, at which it is clearly doing very well, or as a potential hybrid and multi-cloud backup target supplier with a cloud-like business model.

Andrews said ExaGrid is adapting to the cloud. “We currently replicate from physical onsite appliances into AWS for disaster recovery. By summer of this year we’ll replicate from physical onsite appliances into Azure for disaster recovery. Longer term (no rush as we are getting no market pressure) we will have a version of ExaGrid that runs in the clouds for data that lives and is backed up in the clouds.”