
DPU wars: NVIDIA claims BlueField-2 faster than Fungible – and test details show it

NVIDIA says its BlueField-2 smartNIC/DPU can link a storage system to a server and run four times as fast as Fungible’s competing hardware and software. But it did not reveal details of its test, making for confusion and incomplete understanding – until detailed tables were supplied, showing an awesome 55 million IOPS number – possibly served from a DRAM cache.

Update 1: NVIDIA BlueField-2 NVMe/TCP and NVMe RoCE detailed numbers tables added. 21 Dec 2021. More Fungible test details added as well.

Update 2: NVIDIA served the data blocks from the storage target’s DRAM, making this a networking test and not a storage test. It is not directly comparable with Fungible at all. 21 Dec 2021.

NVIDIA’s headline claim, in a blog, by Ami Badani, VP Marketing and DevRel Ecosystem, is that its BlueField-2 data processing unit (DPU) more than quadruples the previous record holder by exceeding 41 million IOPS between server and storage.

But the blog does not reveal the data transport protocol, the data block size nor whether it was for read or write operations – meaning we cannot assess real world relevance at all until the missing details are revealed.

The blog cites these performance numbers for BlueField-2:

  • >5 million 4KB IOPS with NVMe/TCP
  • 7 million to 20 million 512B IOPS with NVMe/TCP with the variation unexplained
  • 41.5 million IOPS with unspecified block size and transport; presumably RoCE
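These IOPS figures are easier to assess once converted into raw throughput, since small-block results can look dramatic while moving modest amounts of data. A quick back-of-the-envelope conversion (our own arithmetic, using the block sizes quoted above):

```python
# Convert an IOPS figure into raw payload throughput, to separate
# message-rate bragging rights from actual bytes moved on the wire.

def iops_to_gbits(iops: float, block_bytes: int) -> float:
    """Payload throughput in Gbit/s for a given IOPS rate and block size."""
    return iops * block_bytes * 8 / 1e9

# 5 million 4KB IOPS over NVMe/TCP
print(round(iops_to_gbits(5e6, 4096), 1))    # 163.8 Gbit/s
# 41.5 million 512B IOPS
print(round(iops_to_gbits(41.5e6, 512), 1))  # 170.0 Gbit/s
```

Despite the eight-fold difference in headline IOPS, both runs move roughly the same ~170 Gbit/s of payload; the 512B result is as much a packet-rate achievement as a storage one.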

It says NVMe/RoCE (RDMA over Converged Ethernet) is faster than NVMe/TCP, but no numbers are cited to justify this.

B&F diagram of NVIDIA BlueField-2 – HPE Server setup

The blog describes the testing methodology and configuration (see diagram above). It says there were two HPE ProLiant DL380 Gen 10 Plus servers, one as the application server (storage initiator) and one as the storage system (storage target). Each server had two Intel “Ice Lake” Xeon Platinum 8380 CPUs clocked at 2.3GHz, giving 160 hyperthreaded cores per server, along with 512GB of DRAM, 220MB of L2/L3 cache (110MB per socket) and a PCIe Gen4 bus.

We are also told for example that the initiator and target systems were connected with “NVIDIA LinkX 100GbE Direct-Attach Copper (DAC) passive cables.” Wonderful.

We are informed that: “Three different storage initiators were benchmarked: SPDK, the standard kernel storage initiator, and the FIO plugin for SPDK. Workload generation and measurements were run with FIO and SPDK. I/O sizes were tested using 4KB and 512B, which are common medium and small storage I/O sizes, respectively.”

But the cited performance numbers are not connected to SPDK, the standard kernel storage initiator, or the FIO plugin. The storage initiator ran either a default Linux kernel 4.18 or the newer v5.15 Linux kernel – which performed better but, again, no comparison numbers are revealed.

Then we are told: “The NVMe-oF storage protocol was tested with both TCP and RoCE at the network transport layer. Each configuration was tested with 100 per cent read, 100 per cent write and 50/50 read/write workloads with full bidirectional network utilisation.”

Given this level of detail it is then absurd that the blog cites IOPS numbers without identifying which ones were with RoCE – it does say a couple were with NVMe/TCP – and which ones were all-read, all-write or a mixed read/write setup. It just gives us bald numbers instead.

However, when we reached out, NVIDIA kindly supplied two tables detailing the specific numbers for the NVMe/TCP and NVMe RoCE test runs:

NVIDIA BlueField-2 DPU tests using NVMe-oF over TCP. Each test result shows the combined performance of two BlueField-2 DPUs.

The 41.5 million IOPS number was achieved with NVMe/TCP using 512B data blocks in a 100 per cent read run. We were surprised, thinking it would have needed NVMe RoCE. But, surprise, guess what we found when we looked at the second table?

NVIDIA BlueField-2 DPU Tests using NVMe-oF RoCE. Each test result shows the combined performance of two BlueField-2 DPUs.

In fact, NVMe RoCE is faster than NVMe/TCP, as the fourth line of the table shows: a 46 million IOPS result with the SPDK initiator, 512B blocks and 100 per cent reads. The fifth and sixth lines show 54 million IOPS with 512B blocks and a 50-50 read/write mix, and 55 million IOPS with 100 per cent writes and 512B blocks. Why the blog highlights the 41.5 million IOPS number instead of the much higher 55 million IOPS number is a hard question to answer.

We understand from a source that the NVIDIA storage target served the data blocks from DRAM and not from NVMe SSDs, which makes comparison to real-world numbers hard. Here are two tweets confirming this:

And:

/dev/null means a virtual device and not a real device like an NVMe SSD. No data was sent to or fetched from any SSD here.
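The behaviour is easy to demonstrate: on a Linux system, /dev/null silently discards every write and returns end-of-file on every read, so nothing below the block layer — no flash media, no controller — is exercised. A trivial illustration:

```python
# /dev/null accepts and discards writes, and returns EOF on reads:
# no data ever reaches persistent media.
with open("/dev/null", "wb") as f:
    written = f.write(b"\x00" * 4096)  # the write "succeeds"...
with open("/dev/null", "rb") as f:
    data = f.read()                    # ...but nothing can be read back

print(written, len(data))  # 4096 0
```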

NVIDIA’s blog claims: “The 41.5 million IOPS reached by BlueField is more than 4x the previous world record of 10 million IOPS, set using proprietary storage offerings,” but provides no reference to this 10 million IOPS test, so we have no official idea of the system configuration used.

Fungible comparison

We believe it refers to a Fungible system which was rated at 10 million IOPS using its Storage Initiator cards. This test system featured a Fungible FS1600 24-slot NVMe SSD array as the storage target. The app server was a Gigabyte R282-Z93 box with dual 64-core AMD EPYC 7763 processors, 2TB of memory, and five PCIe 4 expansion slots.

These slots were filled with Fungible S1 cards and they linked to a single FS1600 equipped with two Fungible DPU chips. A 100Gbit switched Ethernet LAN linked the Gigabyte server and the FS1600, and 4K data blocks were used in a 100 per cent read scenario.

So a single Gigabyte 128-core server with 5 x Fungible DPUs linked over 100GbitE to a twin-DPU storage target reached 10 million 4K read IOPS. The NVIDIA test reached 55 million write IOPS with 512B blocks, using a 160-core server and dual BlueField DPUs talking over a 4 x 100GbitE link to a 160-core storage target front-ended by two BlueField-2 DPUs, with data served to/from /dev/null. It certainly appears that NVIDIA basically ran a network test and used a far more powerful storage server than Fungible, with possibly 4x the bandwidth.

We’d love to have the detailed performance numbers and link configuration/protocol details for both suppliers’ tests, and understand the price/performance numbers here, to be able to properly compare the NVIDIA and Fungible setups. But, as it is, all we can see are two suppliers pushing out hero numbers with NVIDIA’s being more detailed and an impressive 5.5 times higher than Fungible’s but artificially so. Let’s see what Fungible can do to match, exceed or rebut it.

Hot DRAM: Micron’s NAND/DRAM money machine delivers record Q1 revenues

Micron revenues rose 33 per cent to $7.69 billion in its first fiscal 2022 quarter, ended December 2, 2021, as DRAM and NAND sales boomed to a record first quarter level.

There was a profit of $2.31 billion, a 187.7 per cent rise year-on-year, representing 30 per cent of its revenues.

President and CEO Sanjay Mehrotra said in a results statement: “Micron delivered solid fiscal first quarter results led by strong product portfolio momentum. We are now shipping our industry-leading DRAM and NAND technologies across major end markets, and we delivered new solutions to data center, client, mobile, graphics and automotive customers.”

The market for its products looks good: “As powerful secular trends including 5G, AI, and EV adoption fuel demand growth, our technology leadership and world-class execution position us to create significant shareholder value in fiscal 2022 and beyond.”

A seasonal pattern in quarterly revenues is apparent from fy2019 onwards

His results presentation talked of “outstanding results and solid profitability”, with Micron “on track to deliver record revenue and robust profitability in fiscal 2022.” This will be helped by Micron rapidly ramping its 1-alpha DRAM and 176-layer NAND products, both said to be industry-leading, and achieving excellent yields; these products are now shipping across its major end markets.

DRAM represented 73 per cent of its revenues in the quarter, $5.6 billion – up 38 per cent annually, with NAND making up 24 per cent at $1.84 billion, a 19 per cent annual increase.

Revenues by business unit:

  • Compute and networking – $3.4 billion, up 34 per cent Y/Y
  • Mobile – $1.91 billion, up 27 per cent
  • Storage – $1.15 billion, up 26 per cent
  • Embedded – $1.22 billion – the second highest in Micron’s history and up 51 per cent – reflecting strength in auto and IoT markets.

The auto growth momentum was noted by Wells Fargo analyst Aaron Rakers, who said Micron currently sees Level 3 ADAS designs with 140GB of DRAM capacity, versus Gartner’s estimate of ~2.5GB per vehicle today. In NAND, Micron is seeing Level 3 ADAS vehicles with 1TB of capacity versus Gartner’s estimate of ~70GB per vehicle currently. These are huge rises.

Financial summary:

  • Gross margin – 46.4 per cent compared to 30.1 per cent a year ago
  • Cash from operations – $3.9 billion
  • Diluted EPS – $2.04 compared to $0.71 a year ago
  • Cash and cash equivalents – $8.68bn

Mehrotra said Micron was combatting supply chain issues with longer-term arrangements: “We are seeing greater commitment and collaboration on supply planning, including the use of long-term agreements. Today, over 75 per cent of our revenue comes from volume-based annual agreements, a significant increase from five years ago when they accounted for around 10 per cent of our revenue.”

Micron will deal with demand for denser chips by employing transitions to smaller nodes in DRAM and continued layer count increases in 3D NAND. This strategy could run out of steam for DRAM after 2025 and it may then have to add more DRAM foundry capacity on a greenfield site.

The outlook for next quarter is for revenues of $7.5 billion plus/minus $200 million, a 20.3 per cent increase year-on-year. It is planning to deliver record revenue with solid profitability in FY22 with stronger bit shipment growth in the second half of the fiscal year.

Christmas storage quiz

Think you know the storage industry? Here is a quiz to test your knowledge. See if you can name pictured CEOs, match press release descriptions to suppliers, explain what acronyms mean, name more CEOs, and match logo symbols to suppliers. Answers at the bottom of this article.

Nine CEOs

Name the nine storage CEOs in this set of pictures:

Can you recognise them from their profile?

Storage companies like to provide concise descriptions of what they are about in their press releases. See if you can match the descriptions to the suppliers in the categories below:

a) File sharing suppliers – CTERA, Nasuni and Egnyte

A leading provider of cloud file storage,

A leader in cloud content security and governance

The edge-to-cloud file services leader

b) Storage software suppliers  – Databricks, DataCore Software, Delphix, Diamanti, Komprise, Minio, Nutanix, SingleStore, Snowflake, and WekaIO:

The data platform for AI,

The data and AI company,

A leader in hybrid multicloud computing

The leader in analytics-driven data management as a service,

The largest independent vendor of Software-Defined Storage solutions, 

The Data Cloud company,

The industry leading data company for DevOps,

The company that streamlines Kubernetes applications and data management for global enterprises,

The single database for all data-intensive applications,

A pioneer in high performance, Kubernetes-native object storage.

c) SmartNIC and DPU suppliers – Fungible, Liqid, Nebulon and Pensando:

The world’s leading software company delivering data center composability,

A pioneer in data-centric computing,

The leader in distributed computing for the new edge,

The pioneer of smart infrastructure, server-embedded infrastructure software delivered as-a-service.

d) Array suppliers – DDN, Hitachi Vantara, Infinidat, NetApp, Pure Storage, Qumulo and VAST Data:

A global cloud-led, data-centric software company,

The global leader in artificial intelligence (AI) and multicloud data management solutions

The digital infrastructure, data management and analytics, and digital solutions subsidiary of …

The storage software company breaking decades-old tradeoffs

A leading provider of enterprise-class storage solutions,

The IT pioneer that delivers storage as-a-service in a multi-cloud world,

The breakthrough leader in radically simplifying enterprise file data management across hybrid-cloud environments.

e) Media suppliers; an easy one with just Kioxia, Seagate and Western Digital:

A data infrastructure leader,

A world leader in memory solutions,

A world leader in mass-data storage infrastructure solutions

f) Data Protection – Acronis, Clumio, ExaGrid, Rubrik and Veeam:

The industry’s only Tiered Backup Storage solution,

The leader in backup, recovery and data management solutions that deliver Modern Data Protection

An industry leader in simplifying cloud data protection,

The Zero Trust Data Security Company

The global leader in cyber protection

Acronyms

What is the meaning of these acronyms?

  • HAMR
  • MAS-MAMR
  • CXL
  • HBM2e
  • ETL
  • EAMR
  • ZNS
  • EMIB
  • iSCSI
  • RoCE

Nine more Storage Supplier CEOs

Match the names and faces of storage supplier CEOs:

Storage supplier logos

Which storage suppliers from Druva, Hitachi Vantara, Liqid, Rubrik, StorONE, WekaIO, Qumulo, Panzura and DDN do the logos belong to?

Spoiler alert! Answers below

Nine storage CEOs – from top to bottom by row left to right

  • Hock Tan of Broadcom
  • Antonio Neri of HPE
  • Liran Eshel of CTERA
  • Jill Stelfox of Panzura
  • Liran Zvibel of WekaIO
  • Phil Bullinger of Infinidat
  • Sumit Puri of Liqid
  • Dario Zamarian of Pavilion Data
  • Alex Bouzari of DDN

Press Release descriptions:

  • (a) Nasuni, Egnyte, CTERA in order
  • (b) WekaIO, Databricks, Nutanix, Komprise, DataCore Software, Snowflake, Delphix, Diamanti, SingleStore, MinIO in order.
  • (c) Liqid, Fungible, Pensando, Nebulon in order.
  • (d) NetApp, DDN, Hitachi Vantara, VAST Data, Infinidat, Pure Storage and Qumulo in order.
  • (e) Western Digital, Kioxia, and Seagate in order.
  • (f) ExaGrid, Veeam, Clumio, Rubrik and Acronis in order.

Acronyms

  • HAMR – Heat-Assisted Magnetic Recording
  • MAS-MAMR – Microwave Assisted Switching-Microwave Assisted Magnetic Recording
  • CXL – Compute Express Link
  • HBM2e – High Bandwidth Memory gen 2 Extended
  • ETL – Extract, Transform and Load
  • EAMR – Energy-Assisted Magnetic Recording
  • ZNS – Zoned Namespace
  • EMIB – Embedded Multi-die Interconnect Bridge
  • iSCSI – Internet Small Computer Systems Interface
  • RoCE – RDMA over Converged Ethernet with RDMA being Remote Direct Memory Access

Nine More CEOs

From top to bottom by row, left to right:

  • Herb Hunt – Nyriad,
  • Coby Hanoch – Weebit Nano
  • Evan Powell – MayaData – now ex-CEO too
  • Chris Gladwin – Ocient
  • Anand Eswaran – Veeam
  • Mohit Aron – Cohesity
  • Kumar Goswami – Komprise
  • Rajiv Ramaswami – Nutanix
  • Bill Andrews – ExaGrid.

Storage supplier logos

From top to bottom by row, left to right:

  • DDN, Panzura and Qumulo,
  • WekaIO, StorONE and Rubrik,
  • Liqid, Hitachi Vantara, and Druva.

Cohesity talking IPO at $3.7 billion valuation – report

Data protector and manager Cohesity is talking about an IPO.

Mohit Aron

The firm’s CEO, Mohit Aron, told newswire Bloomberg there will be a $145 million tender offer for employee shares. He said Cohesity, backed by a SoftBank fund, is valued at $3.7 billion.

That valuation and the tender offer were revealed back in March and, according to the newswire, the tender offer is being led by Steadfast Capital Ventures, with participation from existing investors, including SoftBank Vision Fund, DFJ Growth, Foundation Capital and Wing Venture Capital.

Bloomberg interviewed Aron, who claimed Cohesity’s annual revenues were in the hundreds of millions of dollars – $300 million was mentioned in September – and that it is nearing break-even on a cash flow basis. The CEO, who is also a co-founder, also said an IPO was quite near. That is Bloomberg’s big point.

In September, CFO Robert O’Donovan said of Cohesity’s fourth fiscal 2020 results: “In Q4, we had our biggest day, week, month, and quarter, all resulting in our biggest year. From rapidly increasing ARR, to an outstanding net expansion rate, to strong customer growth — including impressive gains in the Fortune 500, the company is firing on all cylinders and breaking records at every turn.”

That sounds like a good basis for an IPO.

Cohesity, founded in 2013, has raised a total of $660 million with the latest raise being an E-round for $250 million last year. Earlier this month we predicted Cohesity would file for an IPO. It looks like that prediction is coming true.

Storage news ticker – December 21

Clumio announced the general availability (GA) of Clumio Protect for Amazon S3, its backup as a service (BaaS) offering that provides protection against ransomware, simplified compliance reporting, and a low recovery time objective (RTO), while reducing the cost to protect data in Amazon S3. Since the announcement of its early access (EA) program in September, Clumio says it has successfully performed thousands of backups for EA customers.

CodeNotary has a useful website resource discussing the Apache Log4j vulnerability. It says CodeNotary Cloud gives you the tools needed to create, track and query your software including the SBOMs (Software Bill of Materials).

Data I/O Corp, a global provider of advanced data and security deployment systems for flash, flash-memory based intelligent devices and microcontrollers, has said that Dr Cheemin Bo-Linn will be joining its board. Dr Bo-Linn is the CEO of Peritus Partners, a valuation accelerator for industry sectors including automotive, electronics, consumer, and medical, with integrated security.

HPE said NTT Business Solutions, part of NTT WEST Group, a Japanese network and system integrator, has selected the GreenLake edge-to-cloud platform to deliver its “Regional Revitalization Cloud” and provide a hybrid cloud service to local governments, educational institutions and businesses across western Japan. 

IBM Spectrum Scale container native storage access v5.1.2.1 is now generally available.

IBM Spectrum Scale CSI Driver v2.4.0 is now generally available.

LucidLink, whose FileSpaces software presents public cloud object storage as a local filer, is partnering with AVIWEST, a provider of live video contribution systems. The two will offer an easy-to-use cloud video production and delivery system. By using this system, which includes LucidLink FileSpaces and AVIWEST’s bonded cellular setup, broadcasters can capture camera feeds, deliver data to the cloud, and gain global access to the same file, with real-time collaboration from anywhere.


Consultant Mark Webb of MKW Ventures contributed a cutting assessment of the info provided by Floadia about its 7-bit NAND cell, saying the data provided in the announcement wasn’t sufficient. He told us the idea was something “many people looked at 20-30 years ago. Multiple gate, SONOS, etc.. lots of papers back then…. Macronix did some real nice work in this area.”

After taking a critical look, he detailed some questions he had about the information the firm provided:

  1. The cross-section [graphic] is something lined up with 130nm or higher. [It should be something] more like 250nm.
  2. Did they test one bit on retention? A couple of bits? 10 years at 150°C is tough.
  3. Then someone hinted it is for OTP…. WORM. There are many technologies that are better at this: fuse/antifuse and ReRAM can work well as OTP.
  4. Why have 7 bits per cell at a cell size 10x bigger than planar technologies from five years ago?
  5. It’s not clear whether anyone made one working cell.

Finally, Webb added that he believed this would not be “useful for SSD or cellphone,” claiming he had “10 better ways to get similar results on embedded.” He went on to pronounce it: “A very vague announcement.”

Global IT services provider phoenixNAP announced a collaboration with MemVerge, which supplies Big Memory. The collaboration involves running MemVerge’s Memory Machine on phoenixNAP’s Bare Metal Cloud and so providing an infrastructure solution for Big Memory workloads. Bare Metal Cloud can be deployed in minutes and managed using its API, CLI, and Infrastructure as Code integrations. It comes with 15 TB of free bandwidth (5 TB in Singapore) and flexible bandwidth packages. The platform also provides access to S3-compatible object storage, phoenixNAP’s global DDoS-protected network, and strategic global locations. 

Oracle is buying electronic health records (EHR) vendor Cerner Corp for $28.3 billion in an all-cash transaction, and Oracle’s largest ever acquisition. The deal will strengthen Oracle’s presence in the large and strategic healthcare market. William Blair analyst Jason Ader points out that “Oracle is acquiring a slow-growth, lower-margin business at a time when management has been touting top- and bottom-line acceleration. In sum, given that the deal does not address our longer-term structural concerns for Oracle (e.g., steady share loss in database market, playing catch-up in public cloud) and, in fact, injects new risks to the investment thesis.” There’s more about the deal in The Register.

Pure Storage tells us that the IDC Worldwide Quarterly Enterprise Storage Systems Tracker came out this month and it shows that external storage is accelerating and continuing the turnaround that started in Q2. Pure experienced a good Q3 on a global basis; it claimed it had the highest YoY growth rate across every major region compared to the other major global storage providers.

  • Pure grew 26 per cent in a global external storage market that just grew 7 per cent. 
  • In EMEA Pure grew 10x faster than the market, 21 per cent in a market that grew just 2.1 per cent
  • In APJ Pure grew 3.5x the market, 53.5 per cent in a market that grew 15.3 per cent
  • In Latin America Pure grew 80.9 per cent in a market that grew just 0.92 per cent
  • In North America Pure grew 24.2 per cent in a market that grew just 3.6 per cent

RAIDIX has been granted patent number US 20200371942A1, covering a method for performing read-ahead operations in data storage systems. It says read-ahead is a caching technology that analyses the workload and predicts which fragment of data will be requested in the future. Then, for overall system acceleration, the data is cached in faster RAM or on SSD. RAIDIX engineers have developed a new approach to operating on data intervals, and it is implemented in RAIDIX products for software-defined storage solutions. RAIDIX says it brings better data accessibility to businesses depending on huge volumes of critical data (media production, video surveillance, data centres).
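The technique RAIDIX describes – watch the access pattern, predict the next fragment, stage it in a faster tier – is, in outline, classic sequential read-ahead. A minimal sketch of the general idea (our illustration, not RAIDIX’s patented method):

```python
# Minimal sequential read-ahead cache: when recent requests look
# sequential, prefetch the next few blocks into a fast tier before
# they are asked for.
class ReadAheadCache:
    def __init__(self, backing, window=4):
        self.backing = backing      # slow store: block number -> data
        self.cache = {}             # fast tier (RAM or SSD in RAIDIX terms)
        self.window = window        # how many blocks to prefetch
        self.last_block = None

    def read(self, block):
        hit = block in self.cache
        data = self.cache.pop(block, None) if hit else self.backing[block]
        if self.last_block is not None and block == self.last_block + 1:
            # Sequential pattern detected: stage the next window of blocks.
            for b in range(block + 1, block + 1 + self.window):
                if b in self.backing:
                    self.cache[b] = self.backing[b]
        self.last_block = block
        return data, hit

store = {i: f"block-{i}" for i in range(16)}
ra = ReadAheadCache(store)
hits = [ra.read(i)[1] for i in range(6)]
print(hits)  # [False, False, True, True, True, True]
```

The first two reads miss while the pattern is being detected; once the sequential run is recognised, every subsequent read is served from the fast tier.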

SingleStore announced it has been recognised (as a niche player) for the first time by Gartner in the 2021 Magic Quadrant and Critical Capabilities for Cloud Database Management Systems (CDBMS). Amazon led the rankings, closely followed by Microsoft, with Oracle in third place and Google fourth.

MariaDB also announced its SkySQL was named for the first time in this MQ, as a niche player.

Teradata announced that Vantage, its multi-cloud data platform, ranked highest in all analytical use cases with the top scores in the 2021 Gartner Critical Capabilities for Cloud Database Management Systems for Analytical Use Cases, issued December 14, 2021. Teradata was also named a Leader in the 2021 Gartner Magic Quadrant for Cloud Database Management Systems (DBMS), issued this month.

SANBlaze Technology announced the availability of the industry’s first platform to support NVMe over PCIe Gen5 validation and compliance testing. The SBExpress-RM5 platform is a 16-bay enterprise-class NVMe test appliance supporting hot-plug and PCIe speeds from Gen1 to Gen5.

The system features a unique modular “riser” design that enables user-configurable variable slot support, as well as field-upgradable support for all Gen5 connector form factors, including U.2, M.2, EDSFF, and the new E3/EDSFF form factor. It provides test capabilities for development, QA, validation, and manufacturing teams, and includes the company’s Certified by SANBlaze (SBCert) compliance test suite.

SANBlaze SBExpress-RM5

The Storage Networking Industry Association (SNIA) has moved its SM (Storage Management) Lab from an SNIA Tech Centre in Colorado Springs to a data centre co-lo elsewhere in Colorado. The program has provided an environment for over 15 years to support and coordinate vendors’ development efforts to deliver SMI-S compliant products to market, and is now tailored for SNIA Swordfish. The new facility is said to be highly secure and efficient and enables a focus on lab use rather than lab management. It is helping partner organisations DMTF Redfish Forum and SNIA’s CMSI by providing a shared space to accommodate their needs.

WANdisco, the live data replication supplier, announced a significant contract win through its IBM channel; IBM has secured a $3.3m three-year license contract with a large North American multinational investment bank for the use of LiveData Migrator. The initial use case will be for on-premises data replication with further use cases for cloud migration providing opportunities to expand the relationship. WANdisco’s revenue share will be 50 per cent of the license under its OEM agreement with IBM. WANdisco now expects FY21 revenues to be meaningfully ahead of current market estimates.

Wells Fargo analyst Aaron Rakers is telling his subscribers that “the HDD industry appears to be poised to achieve 10 per cent+ y/y revenue growth in 2021 — representing the first y/y growth in HDD industry revenue since 2012; vs. HDD industry revenue declining at a 4-5 per cent CAGR over the past 10- and 5-year periods. … we think overall nearline HDD capacity shipments can sustain a 30-40 per cent y/y growth trend in 2022.”

Airbyte: Open-source ETL startup goes from zero to unicorn in 2 years

Michel Tricot and John Lafleur must be patting themselves on the back. In just under two years, their Airbyte startup has progressed from nothing to a $1.5 billion valuation and $181.2 million in funding to develop its open-source Extract-Transform-Load (ETL) data analytics-feeding technology.

Airbyte started up in January 2020 with the aim of building open-source connectors from data sources to data lakes and warehouses. These would replace proprietary tools and also enable feeds from less popular data sources ignored by proprietary suppliers as being of too little value – long tail connectors. Progress was rapid; within 17 months, Airbyte claims it caught up with the ETL incumbents with 150 connectors running Docker containers and deployable in minutes on any platform.
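The connector model is standard extract-transform-load plumbing: pull records from a source, normalise them, and land them in a destination warehouse. A toy sketch of the pattern (not Airbyte’s actual connector interface – its real connectors are packaged as Docker containers – and all names and schema here are made up):

```python
# Toy extract-transform-load pipeline. Real connectors wrap the same
# three stages behind a per-source and per-destination interface.

def extract(source):
    """Pull raw records from a source (here, an in-memory list)."""
    yield from source

def transform(records):
    """Normalise field names and types for the destination schema."""
    for r in records:
        yield {"user_id": int(r["id"]), "email": r["Email"].lower()}

def load(records, destination):
    """Append rows to the destination 'table'."""
    destination.extend(records)

warehouse = []
crm_rows = [{"id": "1", "Email": "Ann@Example.com"},
            {"id": "2", "Email": "Bob@Example.com"}]
load(transform(extract(crm_rows)), warehouse)
print(warehouse[0])  # {'user_id': 1, 'email': 'ann@example.com'}
```

The value of a connector library is that the extract and load ends are pre-built for hundreds of sources and destinations, including the long-tail ones proprietary vendors skip.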

Co-founder and CEO Michel Tricot said: “With the rise of the modern data warehouses, our mission is to power all the organisations’ data movement and doesn’t end at ELT. By the end of 2022, we will cover more types of data movement, including reverse-ETL and streaming ingestion.”

Michel Tricot (left) and John Lafleur (right)

Tricot, a former director of engineering and head of integrations at LiveRamp and RideOS, founded Airbyte with John Lafleur, who is described as a serial entrepreneur in dev tools and B2B technology. A year after starting up, and within a 12-month period, they took in $6.2 million of seed funding, then a $25 million A-round and, such was their progress, have just raised $150 million in a B-round. The B-round was led by Altimeter Capital and Coatue Management, with participation from Thrive Capital, Salesforce Ventures, Benchmark, Accel and SV Angel.

San Francisco-based Airbyte launched a compute time-based cloud service for its connectors in October. Its software enables businesses to create data pipelines from sources such as PostgreSQL, MySQL, Facebook Ads, Salesforce, Stripe, and connect to destinations that include Redshift, Snowflake, and BigQuery.

It also announced a community-based participative model in which it plans to share revenues with connector contributors. Airbyte expects to have a roster of 500 connectors by the end of 2022.

Jamin Ball, a partner at Altimeter Capital, provided a statement: “Airbyte has already made a huge impact in a very short period of time and has more than 1,000 companies lined up to take advantage of its Airbyte Cloud data service that is starting to roll out. There is tremendous market momentum on top of Airbyte’s disruptive model to involve its users in building the ecosystem around its data integration platform.”

Blocks and Files has never before come across a startup which, in less than 24 months, has gone from founding to a $1.5 billion valuation, taking in $181.2 million across seed, A- and B-rounds in its second year.

In September, in the context of Fivetran raising $565 million in a single round, we talked about the notion of a funding frenzy for companies involved in sourcing and storing data for analytics. We calculate that 2021 saw a grand total of $6.3 billion in storage-related startup funding across 30 companies, with $5.0 billion of that going into data preparation and storage for analytics startups – quite the funding frenzy.

Here is yet more evidence, with Airbyte, that the investing community is seeing an astoundingly vast opportunity in this field. Soon, it appears, virtually every enterprise on Earth will be gathering data about its sales and operations for analysis. 

VAST gets even VASTer with 100 per cent capacity increase

VAST Data

VAST Data has doubled the storage density of its all-flash Universal Storage hardware to 1350TB of raw capacity, by supporting Intel 30TB QLC NVMe SSDs. That’s over a petabyte of effective capacity per rack unit at 5:1 data reduction.
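The per-rack-unit claim follows directly from the quoted figures – 1350TB raw in a 2U enclosure, reduced 5:1:

```python
raw_tb = 1350      # raw flash per 2U Universal Storage enclosure
reduction = 5      # claimed 5:1 data reduction ratio
rack_units = 2

effective_pb_per_ru = raw_tb * reduction / rack_units / 1000
print(effective_pb_per_ru)  # 3.375 PB effective per rack unit
```

That comfortably supports the “over a petabyte of effective capacity per rack unit” claim, provided customers actually achieve the 5:1 reduction ratio.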

This qualifies it to claim in a blog that, by providing twice the capacity in the same two rack units and the same power consumption, it cuts datacentre costs for space, power and cooling. It also has a Universal Power Control feature coming next year. This will enforce limits on system power consumption by intelligently scheduling CPUs to reduce peak power draw by 33 per cent as the power drawn by the system reaches a set limit.

VAST co-founder Jeff Denworth said: “With this announcement, we are eliminating all of the arguments for HDD-based infrastructure and making it even easier for customers to reach the all-silicon datacentre destination we first charted back in 2018.”

The blog explains: “The method behind Universal Power Control is pretty simple … hard drives draw more power when they’re moving their heads, SSDs draw much more power when accepting data writes than when processing reads. The write/erase cycle just uses more power than processing reads. With Universal Power Control, as the power drawn by the system reaches the power limit, the system starts reducing the number of active VAST Protocol Servers (aka CNodes), which reduces the rate at which the system writes data. Such an approach would be impossible with direct-attached shared-nothing storage architectures that tightly couple CPUs with storage devices.”
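The mechanism the quote describes – shed write-serving CNodes as measured power approaches the cap – is a simple feedback throttle. A schematic sketch of that idea (our illustration, not VAST’s implementation; all wattage numbers are made up):

```python
# Schematic power-cap throttle: as measured draw reaches the limit,
# deactivate write-serving nodes (CNodes) to cut write power.

def active_nodes(measured_watts, power_limit, total_nodes, watts_per_node=50):
    """How many CNodes may stay active under the power cap."""
    if measured_watts < power_limit:
        return total_nodes                  # under the cap: run everything
    excess = measured_watts - power_limit
    to_shed = -(-excess // watts_per_node)  # ceiling division
    return max(1, total_nodes - to_shed)    # keep at least one node serving

print(active_nodes(900, 1000, 8))   # 8 - comfortably under the cap
print(active_nodes(1100, 1000, 8))  # 6 - shed two nodes' worth of writes
```

Because VAST’s CNodes are stateless and disaggregated from the SSDs, deactivating some of them only slows writes; as the quote notes, this would not work in a shared-nothing design where CPUs and drives are tightly coupled.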

The blog compares a cluster of VAST 1350TB enclosures with Dell’s PowerScale A300 — an enterprise HDD-based archive system — and Pure’s all-flash scale-out FlashBlade systems. It says “A 4U FlashBlade chassis holds 15 blades, each with 52TB of flash delivering 535TB of usable capacity and consuming 1800W. … The PowerScale A300 nodes hold 15x 16TB hard drives per node, with four nodes in a 4U chassis that consumes 1070W. Assuming 20 per cent overhead for data protection, virtual hot spares, and the like, each chassis will deliver 768TB of useable space.”

It includes a graphic comparing VAST, Dell’s PowerScale A300 and the Pure FlashBlade systems:

The company claims that, by running at just 500 watts per petabyte (with the coming Universal Power Control), it is 11x more power efficient than Dell’s PowerScale A300 and up to 9x more power efficient than Pure’s FlashBlade offering. It says it has a 5x datacentre density advantage over the PowerScale and FlashBlade products as well.

The blog also points out that HDD rebuild times are much longer than SSD rebuild times.

VAST says an unnamed large global financial customer has bought a more than 200PB VAST system to consolidate its performance and archive tiers from mostly HDDs into a single flash-based store.

The new 30TB drives can be mixed and matched in VAST systems with the existing 15TB SSDs in a single cluster.

Veeam appoints new CEO as it looks ahead to an IPO

Data protector Veeam has appointed a new CEO, Anand Eswaran, to take over from Bill Largent who continues as board chair. The company is showing all signs of contemplating an IPO.

Anand Eswaran.

Eswaran brings, Veeam said, extensive experience in developing new business models, executing on market expansion, and driving growth with an inclusive purpose-led and people-first culture. He was previously the president and COO of RingCentral, which supplies cloud-based communications and collaboration offerings for business. In its most recent results it reported ARR of $1.6 billion, a 39 per cent increase year-over-year — just what Veeam would like to do.

A Largent statement said: “To have someone with Anand’s experience on board will lead us into a new era of success, as we further accelerate into the cloud and evaluate the opportunity for Veeam to be a publicly traded company in the future.”

This is Eswaran’s first CEO gig, but he has a big company background. At RingCentral, he led Product, Engineering, Sales, Marketing, Services, Customer Care, Operations, IT, and Human Resources. Before that he looked after Microsoft’s Enterprise Commercial and Public Sector business globally, after having led Microsoft Services, Industry Solutions, Digital, Customer Care, and Customer Success — a global team of 24,000 professionals. 

Prior to Microsoft he was a SAP EVP, head of its $5.4 billion Global Services business with 17,000 business process and technology professionals.

Veeam attained the number two position by revenue in IDC’s Data Replication and Protection Software Tracker for the first half of 2021, having overtaken Veritas; only Dell Technologies remains ahead of it. It wants the top slot.

Eswaran commented: “Data is exploding and has become one of the most important assets for all organisations. As such, data management, security and protection are pivotal to the way organisations operate today, and failure to have a robust strategy can be catastrophic. Veeam has a unique opportunity to break away as we sit in the middle of the data ecosystem, with the most robust ransomware protection and ability to protect data wherever it may reside.”

A Veeam breakaway, meaning a jump in revenues, particularly ARR, would be a great prelude to an IPO. And an IPO would hopefully enable Insight Partners to get a decent payback for its $5 billion acquisition of Veeam in January 2020.

Panzura sets up white glove migration service to its cloud

Cloud-based file collaborator Panzura has launched Managed Migrations — a service to move customers’ on-premises and hybrid NAS and object data to its cloud.

Managed Migrations includes dedicated engineering experts, start-to-finish implementation, and technical resources to move data, applications, and workloads to the cloud so customers can use its CloudFS global file system and Data Services — its SaaS data management offering. Panzura claims that lossless migration of data, applications, and workloads — even mega-projects — can be completed in weeks, or even days, instead of months.

James Seay, chief services officer at Panzura, offered a statement: “Panzura Managed Migrations expands our ability to help customers become future-ready as they tune up their IT infrastructure for the cloud revolution.”

The company says each customer gets a dedicated project manager, migration architect, and migration engineer. Migration starts with a roadmap including architecture, prototyping, and operational recommendations concerning the data sources, the destination, and network connectivity. Panzura then handles hands-on execution, performance analysis, and real-time optimisation. Specialised automation capabilities help migrate data and workloads with speed and precision.

Panzura provides ongoing support for customers until their data and applications are fully migrated and workloads are in production. It provides staff training assistance so customers can run their own environment, or the Panzura Global Services team can run the environment on behalf of customers.

The company says Managed Migrations provides support for any hybrid multi-cloud migration through Panzura partnerships with Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, IBM Cloud Object Store (iCOS), Wasabi, and Cloudian. The Managed Migrations service has high-speed file transfer capabilities, regional and cross-region offerings, with built-in data security and contingent compliance with HIPAA and other regulatory mandates.

Panzura claims its experts also provide recommendations to reduce a customer’s overall data footprint and incumbent cloud storage costs. The Panzura global file system consolidates unstructured data from all locations, after deduplicating and compressing, typically resulting in cost savings of up to 70 per cent.

This dedicated migrate-to-Panzura service will compete with generalised data migrators such as Datadobi and DIY tools like Windows Robocopy and the Linux/Unix Rsync facility. There is also Atempo’s Miria product, which can migrate data, and other dedicated destination migration services, such as Kioxia’s online data migration to its KumoScale flash box.

In effect Panzura is offering a white glove migration service to its own cloud and will even operate its cloud for customers. This may encourage Panzura competitors Egnyte and Nasuni to do likewise.

7bits/cell flash in Floadia’s AI Compute-in-Memory chip is not for SSDs

Japanese microcontroller embedded flash design company Floadia has developed a 7bits/cell — yes, an actual seven bits per cell — NAND technology that can retain data for ten years at 150°C, and that will be used for an AI Compute-in-Memory (CiM) operations chip. Its use in SSDs looks unlikely.

The company’s announcement specifies analog data, but also says it is a 7bits/cell structure — which means digital data. Perhaps there is a “Lost in Translation” effect here. It also says that without its semiconductor design tweaks, a cell would only retain data for 100 seconds. How long its tweaked-design cell could retain data at room temperatures is not revealed.

This 7bits/cell technology is based on Silicon-Oxide-Nitride-Oxide-Silicon or SONOS-type flash memory chips developed by Floadia for integration into microcontrollers and other devices. Floadia said it optimised the structure of charge-trapping layers — ONO (oxide-nitride-oxide) film — to extend the data retention time when storing seven bits of data. 

Floadia SONOS cell image

Its announcement says “the combination of two cells can store up to eight bits of neural network weights” which sounds odd. If one cell can store seven bits why shouldn’t two cells store 2 x 7 bits?
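For context on what seven bits per cell implies: storing n bits in one cell requires discriminating 2^n distinct charge levels, so a 7bits/cell device must resolve 128 levels, against 16 for QLC and eight for TLC. A small sketch of the arithmetic; the uniform quantisation scheme below is our illustrative assumption, not Floadia's design.

```python
# Storing n bits per cell means discriminating 2**n charge/voltage levels.
# 7 bits/cell therefore needs 128 levels, versus 16 for QLC and 8 for TLC.
# The uniform quantisation below is an illustrative assumption.

def levels_for_bits(bits):
    return 2 ** bits

def quantise(voltage, v_max, bits):
    """Map an analog cell voltage in [0, v_max] to an n-bit code."""
    levels = levels_for_bits(bits)
    step = v_max / levels
    return min(int(voltage / step), levels - 1)

print(levels_for_bits(7))                 # → 128 levels for 7 bits/cell
print(levels_for_bits(4))                 # → 16 levels for QLC
print(quantise(1.0, v_max=2.0, bits=7))   # mid-range voltage → code 64
```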

The CiM chip stores neural network weights in non-volatile memory and executes a large number of multiply-accumulate calculations in parallel by passing current through the memory array. Floadia says that makes it a good fit for edge computing environment AI accelerators because it can read a large amount of data from memory and consumes much less power than conventional AI accelerators that perform multiply-accumulate calculations using CPUs and GPUs.
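The multiply-accumulate trick described above rests on Ohm's and Kirchhoff's laws: each weight is stored as a cell conductance G, inputs arrive as voltages V, and the current summed on a bit line is I = Σ G·V — a dot product computed in a single analog step. A numerical sketch of the principle; all values are invented for illustration.

```python
# Compute-in-memory multiply-accumulate, modelled electrically:
# weights live as cell conductances (siemens), inputs as voltages,
# and the summed bit-line current is their dot product (Kirchhoff's
# current law). All values here are invented for illustration.

def bitline_current(conductances, voltages):
    """I = sum(G_i * V_i): one multiply-accumulate per bit line."""
    return sum(g * v for g, v in zip(conductances, voltages))

weights_as_G = [1e-6, 2e-6, 0.5e-6]   # neural weights encoded as conductances
inputs_as_V  = [0.8, 0.4, 1.0]        # activations encoded as voltages

# I = 0.8e-6 + 0.8e-6 + 0.5e-6 = 2.1e-6 A
print(bitline_current(weights_as_G, inputs_as_V))
```

Because every cell in a column contributes current simultaneously, an entire dot product costs one read cycle — which is where the claimed power advantage over CPU/GPU multiply-accumulate comes from.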

It claims that its intended chip, despite a small area whose exact size is not revealed, can achieve a multiply-accumulate calculation performance of 300 TOPS/W, far exceeding that of existing AI accelerators.

Floadia is not revealing the capacity of its CiM chip’s NAND. A 2020 International Symposium on VLSI Design, Automation and Test paper, “A Cost-Effective Embedded Nonvolatile Memory with Scalable LEE Flash-G2 SONOS for Secure IoT and Computing-in-Memory (CiM) Applications”, may reveal more, but it is behind a paywall.

SONOS is a charge-trap mechanism, trapping electrons in Silicon Nitride film. The retention life is controlled by optimising the thickness and film properties of oxide and/or Silicon Nitride films. The company’s website states: “SONOS is free from leakage of electric charge through defect or weak spots in the Bottom Oxide film caused by damage during Program and Erase operation, because trapped charges are tightly bonded with the trap site in Silicon Nitride Film.”

Standard MOS vs non-volatile MOS.

Also the “G2 cell consists of one SONOS transistor and two switching transistors placed adjacent to the SONOS transistor (see image above). This tri-gate transistor works as one Non-Volatile transistor operated by logic level voltage to switching transistors and high voltage only to the SONOS memory gate. Because of quite low programming current — pA order to each cell which is equivalent 1/1,000,000 of Floating Gate NVM — the wiring to the SONOS memory gate is treated like a signal line. And the power supply unit is able to be placed outside of the memory block — such as a corner area of the die. This unique feature of G2 provides LSI designers freelines of chip design and creation of new circuits combining logic circuits with Non-Volatile functionality.”

SONOS charge trap has much less leakage than a floating gate cell.

Floadia says the technology also uses Fowler Nordheim (FN) tunnelling technology to achieve extremely low power in program and erase operations, consuming 1/1,000,000 times current compared to conventional technologies using hot carrier injection for program/erase operation. It says: “G2 satisfactory supports operating temperature up to 125°C and 20 years of data retention life at 125°C.”

So we have 10 years at 150°C and 20 years at 125°C, which suggests retention periods at room temperature could be immense.
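Those two retention points are enough for a back-of-envelope Arrhenius extrapolation — the standard, if simplistic, model for how retention scales with temperature. The calculation below is ours, not Floadia's, and real devices deviate from the simple model.

```python
import math

# Back-of-envelope Arrhenius extrapolation from the two quoted retention
# points (10 years at 150°C, 20 years at 125°C). Retention in this model
# scales as exp(Ea / (kB * T)); this is our calculation, not Floadia's.

KB = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(t1_years, temp1_c, t2_years, temp2_c):
    """Fit Ea from two (retention, temperature) points."""
    T1, T2 = temp1_c + 273.15, temp2_c + 273.15
    return KB * math.log(t2_years / t1_years) / (1 / T2 - 1 / T1)

def retention_at(temp_c, ea, t_ref_years, temp_ref_c):
    """Extrapolate retention to another temperature."""
    T, Tref = temp_c + 273.15, temp_ref_c + 273.15
    return t_ref_years * math.exp((ea / KB) * (1 / T - 1 / Tref))

ea = activation_energy(10, 150, 20, 125)      # ≈ 0.4 eV
print(round(ea, 2))
print(round(retention_at(25, ea, 10, 150)))   # roughly a millennium at 25°C
```

On this crude model, room-temperature retention does indeed come out at "immense" — on the order of a thousand years.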

Whether this technology could be used in commercial SSDs is an interesting question and we’re asking a non-volatile memory expert, Jim Handy, about it.

Jim Handy’s view

Jim Handy of Objective Analysis told us “The first thing that I thought of when I saw your note was the company Information Storage Devices (ISD), which was acquired by Winbond in 1999. ISD made floating-gate chips that stored linear voltages to better than ±1 per cent accuracy (that would be around seven bits, if it were digitised).  They got designed into lots of measurement equipment and into record-your-own greeting cards. My favorite application was an instrument to measure and record the stresses on bridges.

“Move ahead a couple of decades and you have Floadia doing the same thing, but a charge trap version targeting a different application: AI.

“It’s a good use of these technologies, and there’s a lot to be said in favour of making neural calculations in the linear, rather than digital, domain. AI can overlook noise and variations that add cost and complexity to digital implementations, and it’s trivial to perform multiplication and sums (linear algebra) in linear electronics at a high speed using very little energy.

“Floating gates and charge traps store voltages, and the use of MLC, TLC, and QLC has given developers a very good understanding of how best to manage that.

“As for using this in SSDs, that’s a different matter. If a flash chip didn’t need to run fast and be cheap then we may already have seen 7-bit MLC.

  • Fast: There’s a lot of sensitivity to noise when you’re trying to digitise multiple voltage levels, but if you average over time then you can manage that, if you have a lot of time. Who wants a slow SSD?
  • Cheap: You can store more precise voltages with a big charge trap than with a small one (likewise for floating gates). The bigger the bit, the fewer you get onto a wafer, so the higher the cost.

“But the whole point in going from SLC to MLC to TLC to QLC is to reduce costs. You wouldn’t do that by increasing the size of the bits.”

That’s a “No” then.
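Handy's "average over time" point is the standard √N effect: averaging N independent reads of a noisy cell voltage shrinks the noise standard deviation by roughly √N — which is exactly why it costs time. A toy simulation; the voltage and noise figures are invented.

```python
import random
import statistics

# Averaging N independent noisy reads of a cell voltage reduces the noise
# standard deviation by roughly sqrt(N) -- which is why it takes time,
# Handy's point. The voltage and noise values below are invented.

random.seed(0)

def read_cell(true_voltage=1.0, noise_sigma=0.05):
    """One noisy read of a stored analog level."""
    return random.gauss(true_voltage, noise_sigma)

def averaged_read(n_samples):
    """Average n_samples reads of the same cell."""
    return sum(read_cell() for _ in range(n_samples)) / n_samples

single = [read_cell() for _ in range(2000)]
averaged = [averaged_read(16) for _ in range(2000)]

# stdev of 16-sample averages is ~4x (sqrt(16)) below single reads
print(round(statistics.stdev(single), 3))    # ≈ 0.05
print(round(statistics.stdev(averaged), 3))  # ≈ 0.0125
```

Sixteen reads per cell for a 4x noise reduction illustrates the trade-off: resolving 128 levels reliably could mean many reads where an SSD wants one.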

We have questions. Cohesity has answers

We had the opportunity to ask Cohesity VP of product marketing Chris Wiborg some questions about cloud data management, data storage growth, ransomware and Cohesity’s future. Enjoy reading his answers.

Cloud Data Management

Blocks & Files: Is it likely that the three main cloud suppliers move up stack and start offering their own data protection and then data management services on top of compute instances, storage classes and Serverless functions?

Chris Wiborg.

Chris Wiborg: One can never predict what areas CSPs may dive into next should they see the market opportunity as worthy of their attention. However, given the hybrid and multi-cloud nature of many customer deployments, most of the customers we talk to today prefer a third-party alternative. 

Any given CSP being good at just protecting their own offerings likely will not be sufficient — similar to why even though database vendors have offered a certain level of data protection themselves for years … most customers have moved towards a provider whose support spans across many different sources (unstructured data, containers, VMs, other DBs, etc.) in the interest of consolidating silos as opposed to managing multiple flavors and instances of data protection and management.

Could one or more of the three big CSPs buy data protection and data management suppliers so as to acquire needed technology? For example, Google acquired Elastifile to gain file system technology.

It’s always a possibility. There certainly have been independent vendors that at times have shopped themselves around as an alternative exit strategy. Effectively integrating acquisitions not architected in a cloud era into a commercially viable cloud offering is another discussion entirely.

Will external data protection and data management service suppliers — as opposed to in-house AWS, Azure and GCP supplied services — naturally gravitate towards multi-cloud vendor offerings?

Absolutely. The modern enterprise IT world is de-facto a hybrid, multi-cloud one — and likely to be so for the foreseeable future.  

What other competitive differences might they have from in-house CSP-supplied services, like functional superiority? Would these be sustainable?

Well, one example would be a conflict of business interests. Let me explain: Would you expect Microsoft to be incentivised to play nicely with AWS (or vice versa) in providing robust support for workloads offered by the other?  This is where more neutral third-party options likely will have room to navigate for some time to come: by providing the features customers need irrespective of the native hosting environment.

Data Storage/Management

We are facing a seemingly relentless rise in unstructured data. Petabyte-level backup and archive data estates are becoming commonplace. Will these develop into exabyte-level estates and then zettabyte-level ones? 

Given the continued exponential growth of data, yes. In fact, given this is the prediction season, this may be sooner than we imagine. I don’t want to put a date on it, but with 5G rollouts occurring globally now too, I would bet on it happening very soon. 

Surely the cost of storing zettabyte-class unstructured data, some of it or even most of it with very low access rates, will lead to mass deletion exercises and a cap on data growth?

This is one reason that capacity efficiency with capabilities such as global (as opposed to, for instance, per-volume) dedupe is critical even today. And, yet another reason why data governance has an important role to play in data management going forward. What’s really worth protecting or keeping — and for how long? Businesses are still not on top of this.

This will increasingly play into the data management strategies of large organisations as they wrestle with the balance of the costs/regulatory requirement/SLAs equation as appropriate within their organisation.
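The global-versus-per-volume dedupe distinction Wiborg draws is easy to see with content hashes: a per-volume scheme keeps one copy of each chunk per volume, while a global scheme keeps one copy across all volumes. A minimal sketch — this is a toy model, not Cohesity's implementation.

```python
import hashlib

# Content-addressed dedupe sketch: chunks are keyed by their hash.
# Per-volume dedupe keeps one copy of a chunk per volume; global dedupe
# keeps one copy across every volume. A toy model, not Cohesity's code.

def chunk_hash(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

volumes = {
    "vol1": [b"os-image", b"app-data", b"shared-lib"],
    "vol2": [b"os-image", b"user-docs", b"shared-lib"],
}

per_volume_stored = sum(len({chunk_hash(c) for c in chunks})
                        for chunks in volumes.values())
global_stored = len({chunk_hash(c) for chunks in volumes.values()
                     for c in chunks})

print(per_volume_stored)  # 6: cross-volume duplicates stored twice
print(global_stored)      # 4: os-image and shared-lib stored once
```

The gap widens with scale: the more volumes share common data (OS images, libraries), the more a global index saves over per-volume dedupe.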

Ransomware/Cybersecurity

Might we have a situation where we could view ransomware as an effectively solved problem? 

Sadly, not very likely as cybercriminals continue to evolve their tactics and techniques. We’ve witnessed the shift from attacking production data only, to going after the backup data first, to now the rise in exfiltration and double extortion schemes — a progression we’ve seen unfold over just the past couple of years. And, sadly, the next extension of ransomware’s blast radius is likely being cooked up right now. 

Are immutable and threat-cleaned backups along with fast recovery needed as normal business data protection practice for this to happen?

These should be table stakes for every organisation looking to harden their ransomware security posture with next generation solutions today.

How far off are we in your view of companies widely having this type of recovery as standard?

Within the past year or so, it’s been the first thing on customers’ lips when they meet with us — it’s what they want our help with. And perhaps unsurprisingly, it’s not just IT Operations but also SecOps now at the table helping set these requirements.

Cohesity focused

We have several large data protection suppliers, most of whom are engaged in developing SaaS offerings. For example, Cohesity, Commvault, Druva, Rubrik, Veeam and Veritas. They are also extending into cyber-security to fight ransomware. There appears to be a looming lack of greenfield customers.

Apart from the NewCorps out there, were there ever really any greenfield customers? Since we arrived with our initial offerings it has been about displacing more established legacy IT rivals. As our results show, we are adept at providing a next generation approach to age old IT problems. That is how we’ve been disrupting and gaining market share since the beginning, and we are very prepared for more competitive battles. When prospects see what we can do in a fair POC, our technology shines.

How will a company like Cohesity grow once it is competing for customers in a relatively static data protection and data management market against other large suppliers? 

Vendors differ in what data management really means to them.  Some are using those words while really just focusing on addressing the backup and recovery problem.  

The legacy generation of data protection tools — which IT teams still grapple with to perform and manage routine backup and recovery functions — poorly scale to do more data management tasks such as file and object services, secure data isolation, governance and audit support, dev/test copy management, and running analytics to obtain insights.

More importantly, these tools are not just failing IT — they are also failing the business in terms of its online reputation, its operating efficiency, and its ability to use data as a strategic asset. If you look at tackling ransomware — most of the companies we are up against predate modern malware attack vectors.

Our view on this has always been more expansive, taking on other workloads. Our perspective is that the long game is really about supplying a platform and set of services that our customers need across the lifecycle of their data in a distributed (core/edge/cloud), yet not decentralised (single point of administration) fashion. Data protection is just one phase in that broader lifecycle of data management, but when I look around there’s a lot of Frankenstein’s monsters and not much built for the next generation of requirements.

Could Cohesity move into the very fast-growing analytics space?

We already are there at a “primitives” level — we supply building blocks for others to take a step further. We also view third-party extensibility as a key platform design tenet for us, and therefore believe that there is room to both grow and aggressively partner in this space — guided as always by where our customers lead us.

Storage news ticker – December 16

Data migrator and manager Datadobi announced the appointment of Charlie Collins as the company’s new channel sales director for the Americas. He’ll be responsible for developing and managing strategic plans with focus partners in North America and will report directly to Paul Repice, VP of Americas sales.

The Miami Dolphins NFL franchise is using Dell’s unstructured data storage and hyperconverged infrastructure systems to help expand the use of video for fan engagement, safety, and security for all events at Hard Rock Stadium. The Miami Dolphins standardised on Dell products for all applications, media asset management, safety and security, disaster recovery, data backup and virtualization. The Dolphins estimate the organisation has generated more than $1.2 million in cost savings, which was used to help fund a data recovery site.

Roger Cox (Sourced from Crunchbase.)

Roger Cox, a highly-regarded enterprise storage research VP at Gartner, passed away in the last 48 hours and will be much missed within the storage supplier community. He spent 22 years at Gartner, joining from Adaptec where he was a director for RAID Marketing.

The UK’s Barclays Bank has decided to go all-in with HPE GreenLake for its global private cloud. The GreenLake platform will host thousands of applications, more than 100,000 workloads, and support the bank in delivering an enhanced personalised banking experience for its customers. The workloads include virtual desktop infrastructure (VDI), SQL databases, Windows server and Linux. The migration from the legacy infrastructure to the private cloud is being performed by HPE Pointnext Services in partnership with the Barclays team. More details can be found on our parent website, The Register.

Hyve Solutions, which provides hyperscale digital infrastructures, has qualified Micron’s datacenter and enterprise-optimised 7400 E1.S SSDs with NVMe for its configurable Polaris 9219 platform. “We designed the E1.S form factor into our systems portfolio because it addresses several pain points for customers, such as flexibility, heat dissipation, optimal performance/power, energy savings through improved thermal cooling capabilities, and rack consolidation through storage per node improvements,” said Jay Shenoy, Hyve’s VP of technology.

ioSafe, which provides disaster-proof data storage devices, announced the new ioSafe 1520+ 5-Bay Network Attached Storage (NAS) device, a fireproof and waterproof unit that can protect up to 210TB of data. The ioSafe 1520+ is suited to disaster-proofing data for the privacy-concerned, off-grid locations, small businesses, and departmental applications. It can also integrate with cloud applications and has optional disaster-proof expansion bays.

Kioxia America announced its CM6 and CD6 Series of PCIe 4.0 NVMe SSDs have earned VMware vSAN 7.0 certification, enabling them to be shared across connected hosts in a VMware vSphere cluster. 

Lightbits Labs announced a partnership with Define Tech — a global independent software vendor for cloud-native computing in finance, life sciences, broadcast and media, and high-performance computing — to enable data-intensive workloads like AI/ML at scale. Lightbits supplies all-flash storage accessed across NVMe/TCP.

OWC announced the release of SoftRAID 3.0 for Windows. This includes support for RAID 5 on Windows 10 and 11. Users can create RAID 0/1/5 with Windows and RAID 0/1/4/5/1+0 (10) with Macs. Built-in OWC MacDrive technology lets you seamlessly move SoftRAID volumes between OSes. SoftRAID Monitor constantly watches your disks and alerts you if problems are detected. Volume validation ensures sectors can be read and parity is correct. Error prediction helps protect against unexpected failure.
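The parity that SoftRAID's volume validation checks is, for RAID 5, a byte-wise XOR across the data blocks in each stripe; XOR-ing the surviving blocks with the parity block reconstructs any single lost block. A minimal sketch of the principle — not SoftRAID's code.

```python
# RAID 5-style parity sketch: the parity block is the byte-wise XOR of
# the data blocks in a stripe, so any single lost block can be rebuilt
# by XOR-ing the survivors with the parity. Not SoftRAID's code.

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equal-sized blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # three equal-sized data blocks
parity = xor_blocks(*stripe)           # what RAID 5 writes alongside them

# Lose one block, rebuild it from the rest plus parity:
rebuilt = xor_blocks(stripe[0], stripe[2], parity)
print(rebuilt == stripe[1])  # → True
```

Validation is the same operation run the other way: XOR every block in the stripe together and confirm the result is all zeroes.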

File system supplier Qumulo’s Core product has been certified by Veritas Technologies as a qualified software platform for Veritas Enterprise Vault.


Samsung Electronics unveiled a lineup of automotive memory products designed for next-gen autonomous electric vehicles. It includes a 256GB PCIe 3 NVMe ball grid array (BGA) SSD, 2GB GDDR6 DRAM and 2GB DDR4 DRAM for high-performance infotainment systems, as well as 2GB GDDR6 DRAM and 128GB Universal Flash Storage (UFS) for autonomous driving systems.

TerraMaster D5 Thunderbolt 3.

China’s TerraMaster announced its D5 Thunderbolt 3 professional-grade RAID storage, compatible with the latest Apple M1 chip-powered MacBook Pro — with M1 Pro and M1 Max — and the latest macOS Monterey. It is compatible with the Thunderbolt 4 protocol and has five bays that take 3.5-inch SATA disks and 2.5-inch SSDs, for a total storage capacity of up to 90TB. It delivers 40Gbit/sec through its interface and attains speeds of up to 1,035MB/sec (test conditions: five HDDs, RAID 0 mode). The device supports daisy chaining multiple Thunderbolt 3 devices through its Thunderbolt interface.

Lee Caswell.

Lee Caswell, VP Marketing at VMware, has announced he’s off on a new adventure, after five years at VMware where he marketed vSAN and the HCI concept. Caswell joined VMware after a brief stint at NetApp, and time at Fusion-io and Pivot3 (he was a co-founder) before that. Ironically, he was an EVP for marketing at VMware for a year before leaving to help start up Pivot3. He said that, at VMware, “We made it happen in HCI with 10x revenue growth to more than $1 billion, 30,000 new customers, and five consecutive years of MQ leadership. Now that was fun!” It looks like he may be joining a startup.

MSP360, a provider of backup and IT management products for MSPs and IT departments worldwide, has added Wasabi Object Lock immutable storage to the latest version of its MSP360 Managed Backup Service to help MSPs and internal IT teams protect cloud-based backups from ransomware, natural disasters, or accidental human error.