
A digest with lots of storage news shrimp on the barbie: Cohesity, DDN’s Tintri and TrendFocus disk ships

The focus this week is on Cisco and Cohesity partnering to help security ops people fight malware, TrendFocus’s disk ship data for the third quarter (cue a PC drive increase), and exec changes at DDN’s Tintri business unit.

Cohesity links Helios to Cisco’s SecureX 

Cohesity’s Helios management system has been integrated with Cisco’s SecureX security risk monitoring and response system. It means SecureX admins can see the Cohesity DataProtect product’s anomaly spotting, resource management, migration, and backup and recovery status info through their dashboard, alongside SecureX’s existing capabilities to automatically aggregate signals from networks, endpoints, clouds, and apps.

This aggregation and correlation of Cisco and Cohesity information should help an IT security team see the emergence of malicious activity more quickly and fully, view operational performance, and shorten threat response times. The scope of ransomware attacks can be better understood, and the security operations (SecOps) team can initiate a workflow to restore compromised data or workloads to the last clean snapshot.

Al Huger, Cisco’s VP and GM of Security Platform and Response, said: “Cisco SecureX’s comprehensive security platform offers customers a system-wide view of security threats and issues. Adding the Cohesity Helios data protection and … data management solution to Cisco SecureX provides businesses with superior ransomware detection and response capabilities.”

Cohesity is now a Cisco Secure Technical Alliance Partner and a member of Cisco’s security ecosystem. The Cohesity-Cisco relationship has enabled:

  • Cohesity Helios as a validated, S3-compatible backup, disaster recovery, and long-term retention solution for Cisco Secure Workload (formerly Cisco Tetration);
  • Cohesity ClamAV app on Cohesity Marketplace based on a Cisco open source antivirus solution;
  • Cohesity integrated secure, single sign-on (SSO) with Cisco Duo.

Every Cisco Secure product includes Cisco SecureX. The integrated solution and support are generally available from Cisco worldwide.

Tintri exec churn

DDN’s Tintri business unit has seen three senior executives leave and a new one appointed:

Phil Trickovic.

Phil Trickovic was appointed SVP of Revenue for Tintri in April, coming from two years at Diamanti. He had previously been Tintri’s VP of worldwide Sales and Marketing.

General manager and SVP Field Operations Tom Ellery resigned in June to join Kubernetes-focussed StormForge.

Paul Repice, Tintri’s VP Channel Sales for the Americas and Federal, left in March this year to join DataDobi as its VP Sales Americas.

Amy Mollat-Medeiros, SVP Corporate Marketing & Inside Sales for Tintri and DDN brands, resigned in June and joined Tom Ellery at StormForge to become SVP Marketing and SDR.

Graham Breeze was appointed Tintri’s VP of Products in March and came from 18 months at Diamanti. He’d also been at Tintri before, in the office of the CRO.

Christine Bachmayer was promoted to run Tintri’s EMEA marketing in April.

Recent Tintri Glassdoor reviews are uniformly pretty negative. We hear changes are coming.

TrendFocus disk ship data: PC drive shipments increase

Thank you, Wells Fargo analyst Aaron Rakers, for telling subscribers that TrendFocus’s disk ship data for 2021’s third quarter saw about 67.8 million units shipped, up seven per cent year-on-year. Seagate had the leading share at 42 per cent, Western Digital was ascribed around 37.5 per cent, and Toshiba the rest, some 21 per cent.

It’s estimated that 19.3 million nearline disk drives were shipped — more than 250EB of capacity. This compares to the year-ago quarter when 13.3 million nearline disks hit the streets. Rakers thinks Seagate and Western Digital have a near-equal nearline disk ship share at 43 to 44 per cent.
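As a back-of-the-envelope check on those nearline figures, the quoted unit and capacity estimates imply an average drive size (the inputs below are simply the TrendFocus/Rakers estimates cited above):

```python
# Rough check of the implied average nearline drive capacity,
# using the Q3 2021 estimates quoted in this article.
nearline_units = 19.3e6        # nearline drives shipped
nearline_capacity_eb = 250     # total exabytes shipped (stated as "more than 250EB")

# Implied average capacity per nearline drive, in terabytes
avg_tb = nearline_capacity_eb * 1e6 / nearline_units
print(round(avg_tb, 1))  # roughly 13TB per drive
```

That ~13TB average is consistent with nearline shipments being dominated by high-capacity drives.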

TrendFocus estimates there were ~21 million 2.5-inch mobile and consumer electronics disk drives shipped, lower than the ~26 million shipped a year ago.

There were ~23.5-24.0 million 3.5-inch desktop/CE disk units shipped in the third quarter, an unexpected increase on the year-ago 21.5 million drives.

Shorts

Civo, a pure-play cloud native service provider powered by Kubernetes, announced general availability of its production-ready managed Kubernetes platform. It claimed that, at launch, it is the fastest managed Kubernetes provider in the world — deploying a fully usable cluster in under 90 seconds.

Cohesity has joined the Dutch Cloud Community — an association of hosting, cloud and internet service providers — as a supporting member. So what? Mark Adams, Cohesity’s Regional Director NEUR, said: “We are keen to work with the cloud community to offer either a customer-managed solution, or our Cohesity-managed SaaS implementation, or as some organisations prefer, a mix of both offerings. Together with this community, we will help service providers to consolidate silos and unleash the power of data and drive profitable growth for cloud and managed services platforms.”

DataStax, which supplies the Astra DB serverless database built on Cassandra, has new capabilities in the open-source GraphQL API, enabling developers to develop applications with Apache Cassandra faster and manage multi-model data with Apollo. The API is available in Stargate, the open-source data gateway.

Cloud-based file collaborator Egnyte announced its Enterprise Plan ransomware protection is now available as part of its entry-level Business Plan (which starts at $20 per user per month). The offering can detect more than 2000 ransomware signatures, block attacks immediately, and automatically alert admins of the infected endpoint. New signatures are crowdsourced daily. It is also announcing a Ransomware Recovery solution as part of its Enterprise package. The recovery capability allows customers to “look back” at previous file snapshots to determine at which point ransomware infected a file and restore data to that point with a single click.

The latest version of FileCloud’s cloud-agnostic enterprise file sync, sharing and data governance product integrates with Microsoft Teams, so that organisations can share files and links from a single workspace. FileCloud can be self-hosted on-premises, operated as IaaS, or accessed in the cloud.

Iguazio, calling itself the MLOps (machine learning operations) company, today announced its software’s availability in the AWS Marketplace. This software automates machine learning (ML) pipelines end-to-end and accelerates deployment of artificial intelligence (AI) to production by 12x.

GigaOm Data Governance radar diagram, Oct 2021.

Immuta, a universal cloud data access control supplier, announced it was named a Leader in the GigaOm Radar Report for Data Governance Solutions. The company is positioned in the Leader category as a “Fast Mover,” the most innovative, and ahead of all other data access control providers.

iXsystems and Futurex have announced the integration of iXsystems’ TrueNAS Enterprise with Futurex’s Key Management Enterprise Server (KMES) Series 3 and Futurex’s VirtuCrypt Enterprise Key Management. This uses the Key Management Interoperability Protocol (KMIP) and enables centralised key management for TrueNAS.

Kingston Technology Europe announced its forthcoming DDR5 UDIMMs have received Intel Platform Validation, and claims this is the first and arguably most important milestone in validating compatibility between Kingston DDR5 memory and Intel platforms utilising DDR5.

Lenovo has joined Nvidia’s early access program in support of Project Monterey, with its use of the BlueField-2 SmartNIC to offload host server CPUs. It means Lenovo’s ThinkAgile VX and ThinkSystem ReadyNodes will support the BlueField-2 SmartNIC.

Scalable, high-performance file system supplier ObjectiveFS has announced its v6.9 release. This includes new features, performance improvements and efficiency improvements, such as integrated Azure blob storage support, Oracle Cloud support, macOS extended ACL, cache performance, memory usage improvements, compaction efficiency and more. For the full list of updates in the 6.9 release, see the release note.

Phison offers two grades of its Write Intensive SSD. The standard grade comes in a 2TB capacity and is capable of sustained writes of 1GB/sec. Its write endurance is 3,000TB — compared to a typical consumer-level SSD’s endurance of around 600TB. The pro grade is available in either 1 or 2TB capacities, both capable of sustained writes of 2.5GB/sec — more than three times a typical SSD’s speed of 0.8GB/sec. The 1TB pro grade SSD delivers write endurance of 10,000TB, and the 2TB 20,000TB.

In a heavy workload of ten drive writes a day, a standard grade endurance SSD will survive for 300 days of sustained work. And the pro grade 1TB and 2TB models survive 1000 and 2000 days respectively — possibly outlasting the machines they’re running in. The typical SSD running the equivalent workload will only survive 60 days. Phison is now offering write-intensive SSDs through OEMs serving professional users such as PNY.
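A quick sketch of the arithmetic behind those endurance figures. The article's day counts are consistent with a sustained workload of roughly 10TB written per day (ten full writes of a 1TB drive); that assumed daily write rate is the only input not taken directly from the text:

```python
# Endurance-days maths for the Phison drives above, assuming a
# sustained workload of ~10TB written per day ("ten drive writes a day").
DAILY_WRITE_TB = 10

# Rated write endurance (total TB written) per drive, as quoted above
endurance_tb = {
    "standard 2TB": 3_000,
    "pro 1TB": 10_000,
    "pro 2TB": 20_000,
    "typical consumer SSD": 600,
}

for name, tbw in endurance_tb.items():
    days = tbw / DAILY_WRITE_TB
    print(f"{name}: {days:.0f} days")
# standard 2TB: 300 days, pro 1TB: 1000, pro 2TB: 2000, typical SSD: 60
```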

Pure Storage announced the release of a new Pure Validated Design (PVD) in collaboration with VMware to provide mutual customers with a complete, full-stack solution for deploying mission-critical, data-rich workloads in production on VMware Tanzu. It provides an architecture, including design considerations and deployment best practices, that customers can use so their stateful applications (databases, search, streaming, and AI/machine learning apps) running on VMware Tanzu have access to the container-granular storage and data management provided by Portworx.

Rambus has developed a CXL 2.0 controller with zero-latency integrated Integrity and Data Encryption (IDE) modules. The built-in IDE modules employ a 256-bit AES-GCM (Advanced Encryption Standard, Galois/Counter Mode) symmetric-key cryptographic block cipher. Check out the technical details on the CXL 2.0 controller with IDE here and the CXL 2.0/PCI Express 5.0 PHY here.

Game Drive for Xbox SSD.

Seagate has launched a $169.99 Game Drive for Xbox SSD. It features a lightweight, slim design with an illuminating Xbox green LED bar, USB 3.2 Gen-1 universal compatibility, 1TB capacity, compatibility with Xbox Series X, Xbox Series S and any generation of Xbox One, and installation in under two minutes through Xbox OS. The drive comes with a three-year limited warranty and three-year Rescue Data Recovery Services.

Seagate has developed a Kubernetes CSI driver for its Exos disk drive. It’s available for download under Apache v2 license from Github and can be used by any customer running Seagate storage systems with 4xx5/5xx5 controllers.

Swordfish 1.2.3, having been approved by the SNIA Technical Committee as a working draft, is now available for public review. Swordfish defines a comprehensive RESTful API for managing storage and related data services. V1.2.3 adds enhanced support for advanced NVMe devices (such as arrays), with detailed requirements for front-end configuration specified in a new profile, plus enhancements to the NVMe Model Overview and Mapping Guide.

Cloud data warehouser Snowflake announced its next Global Startup Challenge for early-stage companies developing products for its Data Cloud. The Challenge invites entrepreneurs and early-stage organisations that have raised less than $5 million in funding to showcase a data application with Snowflake as a core part of the architecture. It offers the three competition finalists the opportunity to be considered for an investment (a total of up to $1 million across the three finalists), and global marketing exposure.

Storage array software supplier StorONE has signed a strategic distribution agreement with Spinnaker to distribute StorONE’s S1 Enterprise Storage Platform software in the EMEA market.

Data manager and archiver StrongBox Data Solutions (SBDS) has announced a partnership in the UK and Benelux with value-added distributor Titan Data Solutions. Titan will offer end-to-end data management solutions and cybersecurity services.

For the fourth consecutive time, data integration and data integrity supplier Talend announced it has been recognised by Gartner as a Leader in data quality solutions as described in the 2021 Magic Quadrant for Data Quality Solutions.

TimescaleDB, which supplies a relational database for time-series data, announced the new Timescale Cloud, an easy and scalable way for developers to collect and analyse their time-series data. This offering is built around a cloud architecture, with compute and storage fully decoupled. All storage is replicated, encrypted, and highly available; even if the physical compute hardware fails, the storage stays online and the platform immediately spins up new compute resources, reconnects them to storage, and quickly restores availability.

Veeam has announced an update to its Backup & Replication product, v11a, offering Red Hat Virtualization backup support, and native backup and recovery for Amazon Elastic File System and Microsoft SQL databases. There’s more support for archive storage backup, and security integrations with AWS Key Management Service and Azure Key Vault to safeguard encrypted backup data from ransomware. Kasten K10 v4.5 will be able to direct backups of Kubernetes clusters that leverage VMware persistent volumes to a Veeam Backup & Replication repository, where their lifecycle can be managed and additional Veeam features and capabilities leveraged.

Veritas has announced the Veritas Public Sector Advisory Board. This consists of “renowned public sector experts” who will advise Veritas, already the leading provider of data protection for the public sector, on ongoing developments such as the recent Executive Order on Improving the Nation’s Cybersecurity. It will work closely with Veritas executives to help prioritize the most important programs and initiatives in addition to recommending actions and direction on strategic business opportunities and go-to-market, route-to-market, customer and operational strategies for the public sector.

Hyperconverged infrastructure software provider Virtuozzo has acquired the technology and business of Jelastic, a multi-cloud Platform-as-a-Service (PaaS) software company, following a ten-year partnership. It says bringing Jelastic’s platform and application management capabilities in-house completes Virtuozzo’s core technology stack, delivering a fully integrated solution that supports all relevant anything-as-a-service (XaaS) use cases — from shared hosting to VPS to cloud infrastructure, software-defined storage and application management and modernisation.

VMware announced an upcoming update to VMware vSphere with Tanzu so that enterprises can run trials of their AI projects using vSphere with Tanzu in conjunction with the Nvidia AI Enterprise software suite. Nvidia AI Enterprise and VMware vSphere with Tanzu enable developers to run AI workloads on Kubernetes containers within their VMware environments. The software runs on mainstream, Nvidia-Certified Systems from leading server manufacturers, providing an integrated, complete stack of software and hardware optimized for AI.

Customer wins

The Hydroinformatics Institute in Singapore (H2i) uses Iguazio’s software on AWS to build and run a real-time Machine Learning pipeline that predicts rainfall by analysing videos of cloud formations and running CCTV-based rainfall measurements. Gerard Pijcke, Chief Consultancy Officer, H2i, said: “With Iguazio, we are able to analyze terabytes of video footage in real time, running complex deep learning models in production to predict rainfall. Repurposing CCTV-acquired video footage into rainfall intensity can be used to generate spatially distributed rainfall forecasts leading to better management of urban flooding risks in densely populated Singapore.”

StorMagic announced that Giant Eagle, Inc., a US food, fuel and pharmacy retailer with more than 470 locations across five states, has selected StorMagic SvSAN virtual SAN software and SvKMS encryption key management to store and protect data in its 200-plus supermarkets with in-store pharmacies. Today, SvSAN is running on three-node Lenovo clusters at each store, and SvKMS on three virtual machines at its primary datacentre.

People moves

Remember Milan Shetti? He was SVP and GM of HPE’s storage business unit, leaving in March last year, and before that CTO of the Datacenter Infrastructure and Storage divisions. He’s being promoted from President to CEO at Rocket Software, an IBM systems-focused business supplying software to run on legacy kit.

John Rollason.

John Rollason resigned from NetApp, where he was senior director for global revenue marketing after being senior director for EMEA marketing. An ex-SolidFire marketing director, he quit in August this year and has become a part-time marketing consultant at Nebulon. He is also MD at REMLIVE in the UK, which is an electrical safety warning indicator specialist.

Keith Parker, Product Marketing Director at Pavilion Data, is leaving for another opportunity.

Ceph hardware and software system builder SoftIron has appointed Kenneth Van Alstyne as its CTO, responsible for “building out SoftIron’s technology strategy and roadmap as the company advances its mission to re-engineer performance and efficiency in modern data infrastructure through its task-specific, open source-based solutions.” He comes from Peraton, the US Naval Research Laboratory, QinetiQ North America and Knight Point Systems.

Bold move: money and mouth co-located with Rubrik’s ransomware recovery warranty

Data protector Rubrik has announced a $5 million ransomware recovery warranty for Rubrik Enterprise Edition.

The warranty will cover expenses related to data recovery and restoration if Rubrik is unable to recover protected data after a ransomware attack.

Bipul Sinha.

Bipul Sinha, Rubrik’s CEO and co-founder, issued a statement: “With this new Ransomware Recovery Warranty, our customers have our commitment that we care as deeply about protecting their data as they do. With ransomware attacks increasing more than any time in history, having a recoverable copy of your data has become a top agenda item for CIOs and CISOs, and we understand how important data security is to ensuring the security of a business.”

This offer will be available for Rubrik customers running Rubrik Enterprise Edition and working with a Rubrik Customer Experience Manager (CEM) to ensure industry data security best practices are in place.

Rubrik’s SaaS-based Enterprise Edition includes zero-trust data protection, ransomware investigation, sensitive data discovery, incident containment (data quarantine) and orchestrated application recovery.

Matthew Day, CIO of Langs Building Supplies, said: “With this bold move, Rubrik’s Ransomware Recovery Warranty proves they’re putting their money where their mouth is.”

You can learn more about the ransomware recovery here by registering your interest.

Quantum’s exabyte-munching scale-out modular tape library

Oh, it turns out Quantum’s success in selling tape libraries to three of the top hyperscalers is due to specially developed scale-out and modular tape libraries and object software.

Eric Bassier.

In August it said it was engaged with six of the top ten hyperscalers, either in production or in product trials. During an IT Press Tour online briefing yesterday Eric Bassier, Quantum’s senior director for product marketing, said: “Three of the world’s biggest hyper scalers, three of the top five, use Quantum in production. Predominantly, they’re using Quantum tape. … And we also have an initial tape footprint deployed at the other two … in a proof of concept stage.”

It’s also expanding to a tier of customers one down from the hyperscalers: “This last quarter, we added three design wins and I would call them international web scale companies. So these are not in the top 10 but they’re in the top 100. … One of these customers is a popular social media video sharing application. Our initial footprint is a two exabyte archive that combined StorNext 7 software with Quantum tape in a RAIL configuration as this massive video archive for all of their content.”

RAIL is Quantum’s Redundant Array of Independent Libraries concept, a RAID-like scheme providing increased scale, protection and performance. A second webscale client has a 1.3EB initial deployment.

The company has a Scalar line of tape library products:

  • i3 with 25 to 400 tapes and 18PB compressed capacity with LTO-9 media;
  • i6 with 50 to 800 tapes and 136PB of maximum compressed capacity;
  • i6000 with 100 to 12,000 tapes and 540PB of compressed capacity in 21 racks.
Quantum’s Scalar tape library line.

These are scale-up, monolithic libraries. Bassier believes that the hyperscaler tape wins are unlikely to use these products, and he may be right.

He said: ”We’ve done a lot of custom engineering work to make our tape systems designed for archives at that scale. This is really hardware-based engineering. And there are capabilities and there are actually even models of tape systems that we sell into this market that we do not have on our web site today.”

Scalability

What scale is that? One international webscale customer was said to have a two-exabyte archive, which would require four Scalar i6000 libraries to reach that level. We can imagine that the hyperscaler customers have multi-exabyte configurations. A 10EB deployment would need almost twenty i6000s and around 400 racks. That sounds unwieldy, and it would be ridiculously complex to manage twenty separate monolithic libraries.
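The library-count arithmetic above can be sketched using only the Scalar i6000 figures quoted in this article:

```python
import math

# Scalar i6000 figures as quoted above
I6000_CAPACITY_PB = 540   # compressed capacity per fully built-out i6000
I6000_RACKS = 21          # racks per fully built-out i6000

def libraries_needed(archive_eb: float) -> int:
    """i6000 libraries needed to hold an archive of the given size in EB."""
    return math.ceil(archive_eb * 1000 / I6000_CAPACITY_PB)

print(libraries_needed(2))                         # 4 libraries for a 2EB archive
print(libraries_needed(10))                        # 19 libraries for 10EB
print(libraries_needed(10) * I6000_RACKS)          # ~399 racks in total
```

At 21 racks per library, a 10EB deployment lands at just under 400 racks, which is where the "unwieldy" judgement comes from.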

This image from a Quantum RAIL slide looks like racks filled with eight Scalar i6 library 6U chassis. That would be 48U, higher than a standard 42U rack. Having multiple small library chassis, each with its own drives, would certainly help performance.

During the briefing Bassier said: “We have a model of tape library today that we don’t have on our web site that has better densities than anything that we show on our web site. And  … that’s the model that we’re selling to some of these large hyperscalers.”

So Quantum has a Scalar iSomething that is denser, meaning more media in less space, than the i6000. He said more: “If someone were to purchase ActiveScale cold storage, we would deploy this tape system as part of delivering that.”

It’s also scalable out to … well, 10EB and beyond, we think. Well beyond, because the hyperscalers could keep cold data for five years, possibly ten, possibly even more, and they just keep on accumulating it. Are we heading for 100EB archives?

Front-end management

The ActiveScale scheme has an S3-accessed front-end tier of active data storage using disk for data and flash for metadata, and a back-end cold tier, S3 Glacier-class storage, using objects written to tape in 1TB chunks and using erasure coding. 

This cold tier has multiple libraries in the RAIL scheme. It also uses dynamic data placement, with the ActiveScale software mapping an object’s name to a particular tape in a particular library. This is not an object content-addressed hashing scheme with objects stored in a ring of systems, like the Scality RING. Instead think of it as a quasi-single-level file:folder scheme, with the folder containing mapping information to link object names to their addresses in the RAIL system.
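The file:folder analogy can be sketched as a flat catalogue that maps each object name directly to its tape location, rather than hashing the object across a ring of nodes. Everything below is illustrative, not Quantum's actual design; the names and structures are assumptions for the sake of the sketch:

```python
# Hypothetical sketch of the name-to-location mapping described above:
# a flat catalogue (the "folder") maps each object name to the library
# and tape holding it. All identifiers here are made up for illustration.

catalogue: dict[str, tuple[str, str]] = {}  # object name -> (library, tape)

def place_object(name: str, library: str, tape: str) -> None:
    """Record where an object was written during ingest."""
    catalogue[name] = (library, tape)

def locate_object(name: str) -> tuple[str, str]:
    """Look up an object's tape location by name, no hashing involved."""
    return catalogue[name]

place_object("video/clip-0001.mp4", "rail-lib-3", "tape-A17")
print(locate_object("video/clip-0001.mp4"))  # ('rail-lib-3', 'tape-A17')
```

The point of the design, as described, is that placement is an explicit lookup the software controls, so objects can be steered to particular tapes and libraries for erasure-coding and redundancy purposes.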

In effect, Quantum has invented a way of clustering tape libraries behind the ActiveScale front-end system, making its monolithic libraries kind of modular through clustering. The new tape library hardware it has developed will be natively modular, we expect.

The ActiveScale system provides a single pane of glass for managing a cluster of modular libraries — that would solve the problem of managing multiple separate libraries — and Quantum’s CatDV software is set to be developed into a content indexing scheme for them.

We can expect more information to be revealed fairly soon, as Bassier said: ”We’re going to have a lot more news around tape this quarter.”

Quantum said the hyperscale (and webscale) tape market is set for growth. It has more than 30EB of deployed capacity in these two markets already and reckons it is the runaway leader. Possibly other tape library vendors, such as IBM, HPE and Spectra Logic, will also follow the modular library route.

Who dat? Kubernetes wave rider StorageOS changes name to Ondat

StorageOS is rebranding to Ondat, signifying a Shift Left exercise.

The company supplies software-defined cloud-native storage for enterprise Kubernetes-orchestrated environments and has more than 5500 installed Kubernetes clusters worldwide. It has produced a blog explaining that the renaming addresses a Shift Left in storage.

What is that when it’s at home? “Shift Left” is DevOps jargon for automating cloud-native application test, management and operational processes and doing them early in an application’s life cycle. Imagine an application’s agile development life cycle flowing from left to right through define, plan, code, build, test, release, deploy, operate and monitor, and back to plan again in a loop. Shift Left means doing things earlier in the flow to get sight of problems faster and fix them quicker.

VMware blog diagram.

The Ondat blog explains: “Data, storage and storage management is ‘shifting left’ to the developer. Kube-native developers and platform engineers are becoming the most influential consumers of enterprise storage and storage-based data services.“

We should realise that: “Developers compose the entire application platform — including storage, as all applications store state somewhere. Organisations must make it easier for developers to get storage right, from the start, in order to avoid having to fix it later.”

In Ondat’s view, Kube-native developers expect push-button access to a persistent data store with scale, high availability, flexibility, security and performance, and no cloud services lock-in.

It says other CSPs, storage vendors and system suppliers view developers “as a new route to lock in customers to their storage.” Ondat does not, and provides customers with the freedom to choose, configure and control the platform, and the places where their applications are built and run.

Its eponymous software is a — in fact it says the — Kube-native platform for running stateful applications, anywhere, at scale.

The name

But the name: Ondat. Where does that come from? StorageOS, as it then was, announced on October 4 that it had joined The Data on Kubernetes Community (DoKC), an openly governed community for data on Kubernetes, as a Silver Sponsor. It looks like Ondat is a rearrangement of “Data on”: shift the “on” left, drop an “a”, and you have Ondat, a kind of visual Shift Left effect.

In StorageOS’s DoKC announcement, CEO Alex Chircop said: “As companies migrate more business-critical applications onto Kubernetes, DevOps teams and Platform Engineers are becoming the new controllers of enterprise data. This can open up enterprises to massive new risks, but offers equally large opportunities for storage innovation, freedom and cost savings. The DoKC is an open, collaborative community at the heart of this movement. These are exactly the people StorageOS is working to serve.”  

StorageOS wants to fuel open collaboration and knowledge-sharing in the way data is handled on Kubernetes. Chircop said this: “The more innovation we see in this space, the greater the demand will be for StorageOS technology. We offer Kube-native technology that delivers data freedom and control; we enable innovation and allow new workloads to be brought onto Kubernetes; and we give our users independence from storage vendors and cloud provider lock-in.”

The Ondat rebrand is entirely in keeping with this view.

Coldago pours cold water on Gartner Distributed Files and Object Storage MQ

An interview with Coldago research analyst Philippe Nicolas has revealed what he sees as vendor and product selection choices that in his view weaken Gartner’s Distributed File Systems and Object Storage MQ.

Read the Q&A below and see what you think.

Blocks and Files: Should distributed file systems and object storage be viewed as a single category?

Philippe Nicolas.

Philippe Nicolas: Hmm, it’s a good question. What is true is both address unstructured data but many applications can use one and not the other, even if the access method is standardised. At the same time, we see more and more vendors offering both interfaces. Clearly it creates a challenge when you need to analyse the segment.

If we consider these two access models as one category, Gartner has to select products that do both to avoid a penalty for only file or object vendors. But why should a vendor be penalised when it delivers only one interface, especially when that can be a very good one? 

Considering the two as one category invites us to make the same point we have made for years: Gartner considers one product for some vendors and multiple products for others, and therefore creates an unfair or unbalanced comparison. So the real question is, do we compare one product or do we compare vendors?

Some suppliers, such as Pure Storage and Scality, are combining file and object storage. Shouldn’t analysts do the same? And if not, why not?

And you can add Caringo (now DataCore), Qumulo, DDN, Dell, NetApp, VAST Data or Cloudian to extend the list; I have probably even forgotten a few. This is a general answer that demonstrates once again that differentiators across offerings are reduced year by year. It’s also a sign of maturity. Having check boxes ticked in RFPs does the job, but product behaviour is very different.

How vendors implement their access layers really differs. But it also confirms the merger between offerings — because it’s essentially two access methods to access the same unstructured content.

Also, you can merge the category, but what about pure object storage or pure file storage products and vendors? Does it mean we need a separate sub-MQ for each category, with the presence of players who deliver the individual access layers? I think this is where other analysts’ reports come into the game, and users must consider several of them to form their own ideas and opinions.

Purists would tell you that object storage is more than just an interface and they’re right, but nobody cares today about internal design, especially when products expose both interfaces. Many users ask their vendors: “Could you expose my content on a file server via S3?” and the reverse as well.

But all these products are far from equal when we look at access methods. Do you really compare native NFS access built on object layers and vice-versa? Of course, it can provide some flexibility but users’ experience shows very diverse capabilities and realities.

And lastly, the problem with grouping the two is that some pure file or object players are sanctioned. And this is a paradox — you can be a very good product in one category but badly positioned in the global quadrant. On the other side, having the two, let’s say with average capabilities, provides some artificial advantages.

With flash hardware and better-designed object software accelerating object storage to filer-level performance and so to satisfying primary storage roles, aren’t the two access protocols (file and object) merging?

Flash has been used in object storage for metadata for a long time, but it was too expensive for data in large configurations. The reality was also that some object storage products didn’t get any performance gain from using flash for data, and several of them had to adapt, change and update their software to maximise the gain. And then flash pricing went down, so it created some extra opportunities.

Your point is interesting. I remember a recent study by one vendor claiming that object storage with flash can do primary storage. In fact, primary storage is only determined by its role and not by a technology. Many people limit primary storage to block storage and it’s a very narrow view of the sector. Primary storage is where data is generated and thus it’s active and considered hot data. It supports production and sustains the business. With that in mind we understand that it can be block, file or object, whether HDD, flash, SCM or full DRAM lies underneath.

On the other hand, secondary storage is a protection level, needed to protect the business and support IT in its mission. Data is not generated there — it's copied from the primary level. This secondary level is full of inactive data — cold and even fixed or reference data. Here, too, we see block, file and object access systems.

Your remark confirms, once again, that object storage has become an interface in people's minds.

What is your view of the general relevance and usefulness of Gartner’s Magic Quadrant for Distributed File Systems and Object Storage?

I like it, I like that exercise. It's good that such tools exist, alongside several others, to invite users to read and analyse a number of them and understand the context and criteria so they can form their own opinion. We just regret that some visible players are not listed, and that Gartner didn't accept or consider points that many other people raise year after year.

Even if we understand the criteria chosen by Gartner, it is always a surprise to not see some players as they refuse to be listed or because Gartner eliminates them. Look at the trajectory of VAST Data in the market — not having it listed is pretty bizarre and makes this report a bit incomplete.

What about open-source? What about MinIO, clearly the number one object storage by the number of instances running on the planet?

And the reverse is also true in this MQ. I’m pretty sure that all readers were surprised to see some brands on it this year.

How should and could IT buyers find out MQ-type information about the distributed file systems and object storage suppliers if the Gartner MQ is rejected?

Hmm, there is no one source of information, and I invite buyers to do their own search for similar reports and analyses to build their own matrix, with their own criteria, as a mix or union of these documents. Honestly, they already do this for RFPs; it's just an extension. When they need to research the state of the art in a domain, they have to do it. A good source is a few key information sites like yours, StorageNewsletter, TechTarget, Speicherguide.de and a few others that go beyond posting press releases and actually analyse things. And lastly, if buyers can speak directly with users who have already deployed and adopted solutions, they'll get excellent input.

Gartner 2021 files and objects MQ gets Purified, Nutanixed and Wekanated

Pure Storage has become a leader in Gartner’s latest Distributed File Systems and Object Storage magic quadrant, and both Nutanix and Weka enter this MQ for the first time.

The files and object storage MQ is produced once a year and features the well-known Leaders, Challengers, Niche Players and Visionaries quadrants in a square chart with Ability to Execute and Completeness of Vision axes. Suppliers are scored on various attributes of these two concepts and then placed in the MQ diagram according to their summed and weighted scores, with a presence towards the top right being desirable. That area represents the highest ability to execute and completeness of vision.
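Gartner's actual criteria and weights are proprietary, but the mechanics of a weighted-score placement can be sketched. Everything below (the criteria names, the weights and the 2.5 midpoint) is invented for illustration, not Gartner's real model.

```python
# Illustrative sketch of MQ-style placement: score a vendor on weighted
# criteria for each axis, then map the two axis scores to a quadrant.
# Criteria, weights and the 2.5 midpoint are invented for illustration;
# Gartner's real scoring model is proprietary.

def axis_score(scores, weights):
    """Weighted average of criterion scores (each 0-5), result in 0-5."""
    return sum(scores[c] * w for c, w in weights.items()) / sum(weights.values())

def quadrant(execute, vision, midpoint=2.5):
    if execute >= midpoint and vision >= midpoint:
        return "Leader"
    if execute >= midpoint:
        return "Challenger"
    if vision >= midpoint:
        return "Visionary"
    return "Niche Player"

execute_weights = {"product": 3, "sales_execution": 2, "operations": 1}
vision_weights = {"innovation": 3, "market_understanding": 2, "strategy": 1}

vendor = {"product": 4, "sales_execution": 4, "operations": 3,
          "innovation": 2, "market_understanding": 2, "strategy": 2}

x = axis_score(vendor, execute_weights)  # Ability to Execute, about 3.83
y = axis_score(vendor, vision_weights)   # Completeness of Vision, 2.0
print(quadrant(x, y))                    # strong execution, weaker vision
```

A vendor scoring well on execution but below the midpoint on vision lands in the Challengers square, which matches how the MQ narrative describes such suppliers.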

We reproduce last year's MQ alongside the latest version so that we can see how supplier positions have changed, and which suppliers have entered and exited the Gartner analysts' view of this field.

The 2020 (left) and 2021 (right) MQ diagrams from Gartner’s Distributed File Systems and Object Storage report.

As before Dell and IBM are the two leaders, with Dell top of the tree. Scality and Qumulo are also in the Leaders’ quadrant.

Pure Storage, with its FlashBlade product, has been promoted from a Challenger to the Leaders’ quadrant. Matt Burr, VP and GM, FlashBlade, at Pure Storage, issued a quote bigging up the company, the product and Gartner: “Since FlashBlade’s inception, we have believed that unifying unstructured file and object data to consolidate workloads on a single platform is critical to powering the future of modern applications. It is great to see the industry follow suit and our position in this Magic Quadrant validate this vision. This is an honor we could not have achieved without our great team, customers, and partners.”

You can get a copy of Gartner’s report from Pure’s website, with no registration required, and also from Nutanix and Weka, where registrations are needed.

Nutanix, with its Files and Objects offering, enters as a first-time Challenger, while Weka, with its WekaFS product, debuts as a Visionary, and both are delighted to be included. 

Rajiv Mirani, Nutanix CTO, put out a statement: “We believe being named in the Gartner Magic Quadrant for Distributed Files and Objects Storage is a significant recognition of Nutanix’s storage offerings, which aim to simplify and lower operating costs.” 

Liran Zvibel, Co-founder and CEO at Weka, said: “We are extremely pleased that Gartner has placed Weka in the Visionaries quadrant, which definitely is fitting for us at this stage of growth.”

MQ entry is great free marketing for both Nutanix and Weka, and Pure of course (apart from having to pay Gartner for the right to distribute report copies, that is).

Quantum has changed squares, moving from the Challengers' quadrant to the Visionaries' one. The Gartnerites reckon its ability to execute has decreased but its vision has become more complete. The report says: “ActiveScale is lacking in feature parity compared to its competitors. For example, it is missing features such as data deduplication and compression, QoS, mixed flash support, NFSv4 and SMB, hybrid cloud integration, and dual protocol access.”

Object storage supplier Caringo is the only vendor dropped from this MQ. It has been bought by DataCore, and its position has not been inherited by DataCore.

Cohesity, Minio and VAST Data get honourable mentions in this MQ report for being “noteworthy vendors that did not meet all inclusion criteria, but that could be appropriate for clients, contingent on requirements”.

NetApp setting up streaming TV service — but Netflix is in no danger of being stung

NetApp is setting up its own streaming TV service, starting with digital content from its Insight 2021 virtual event and including a performance by Sting. You remember Sting?

Whitney Cummings.

There will be an Insight event channel on the service, and the Insight event is morphing from an annual show to an always-on online hub for on-demand and live content. As well as webcasting a live Sting performance, NetApp’s Insight channel will feature hosting by Whitney Cummings — billed as “the reigning Queen of American stand-up”.

A sample Cummings joke goes: “Found a fragrance called Vixen. Guess they can’t name them after the people who actually wear them. Nobody’s going to buy Secretary.”

And another: “Stand-up is a lot like sex. There’s a lot of crying involved and I get paid to do it.”

We wonder — we seriously wonder — just how rude she will be and what she will say about NetApp.

We envisage NetApp TV being like a series of video blogs, podcasts and interviews, with execs and customer people, about NetApp’s products, services and views on industry trends. Maybe NetApp will use it for product and service launches as well.

NetApp Insight 2021 runs from October 20 to 21 and you can register here.

Micron steps into the ring with PCIe Gen-4 enterprise-class datacentre SSD

Micron is launching a PCIe Gen-4 enterprise-class NVMe SSD with ruler formats as it stakes a claim for greater datacentre SSD market share, but it’s using the same old 96-layer 3D NAND as the prior 7300 NVMe SSD, which was based on PCIe Gen-3.

The earlier 7300 SSD was a TLC (3bits/cell) drive which came in PRO (1 drive write per day) and MAX (3 DWPD) versions, both packaged in M.2 (gumstick) and 2.5-inch (U.2) form factors. The newer 7400 sticks with the PRO and MAX variants, but there are now seven form factors in total: M.2 2280 in 80mm and 110mm lengths, U.3 2.5-inch in 7mm and 15mm thicknesses, and three sizes of E1.S (short ruler, or large gumstick). The E1.S ruler is designed to take over from the M.2 format.

Jeremy Werner, Corporate VP and GM of Micron’s Storage BU, issued the announcement quote: “Our customers need improved storage density and efficiency to run their businesses. The Micron 7400 SSD is flexible in its ability to address myriad applications and system interoperability requirements, enabling deployments and delivering value from edge to cloud.” 

Micron 7400 SSD variants.

Customers don’t get any improved density from denser NAND, though; Micron is already shipping far denser 176-layer product than this legacy 96-layer stuff, for example in the 2450 and 3400 notebook and desktop/workstation PCIe Gen-4 drives in M.2 format, which were announced just four months ago. 7400 customers will instead get improved density from the E1.S format drives, more of which can be packed into a server chassis than the M.2 product.

7400 and 7300 capacities in the M.2 and 2.5-inch form factors are identical except in the MAX products. There, the 7300 MAX M.2 capacities are 400GB and 800GB while the 7400 MAX M.2 has these capacities plus 1.6TB and 3.2TB. A table provides the full capacity points and maximum performance numbers:

The performance increases with capacity and only the maximum numbers are shown.

PCIe Gen-4 helps give the 7400 a significant speed boost over the PCIe Gen-3 7300, as a glance at the following table will show:

Micron 7300 performance table.

The 7400 outperforms the PCIe Gen-4 2450 and compares closely with the 3400, which maxes out at 720,000/700,000 random read/write IOPS and 6.6/5.0GB/sec sequential read/write bandwidth. Micron’s 7400 PRO U.3 pumps out 1,000,000/400,000 random read/write IOPS and 6.6/5.4GB/sec sequential read/write bandwidth. The 7400 puts out more random read IOPS but fewer random write IOPS than the 3400, and has more sequential write bandwidth.

Controller security features are enhanced compared to the 7300, as the 7400 comes armed with TCG-Opal 2.01 and IEEE-1667, Firmware Activate without Reset, Power Loss Protection, Enterprise Data Path Protection, Secure Erase, Secure Boot, Hardware Root of Trust and Secure Signed Firmware.

The 7400 controller features support for 128 namespaces to increase scalability for software environments, and also supports Open Compute Project (OCP) deployments. 

Check out Micron’s 7400 web pages to find a product brief and other documentation.

The contenders

In the PCIe Gen-4 arena, Samsung has its PM9A3 — an E1.S format drive which complies with the OCP specification. Its capacities range from 960GB to 7.68TB — virtually the same as Micron’s 7400. Kioxia’s CM6 and CD6 datacentre SSDs in U.3 format outperform Micron’s 7400. Liqid’s LQD4500 Honey Badger SSD simply blows the Micron drive out of the water performance-wise, with its four million random IOPS, but it uses 16 PCIe lanes to the 7400’s four.

SK Hynix is sampling two PCIe 4.0 drives: the 96-layer TLC flash PE8010 and PE8030, both in U.2 format. It is also prepping the 128-layer TLC PE8111 in EDSFF long format. Provisional performance figures are similar to the 7400’s.

Micron may well have an edge because of its form factor range (M.2, U.3 and E1.S) and its security features.

Samsung SW virtualises CXL-attached memory

Samsung has an open-source Scalable Memory Development Kit (SMDK) which virtualises memory attached to the CXL interconnect.

Compute Express Link is a developing, open, industry-backed standard interconnect that enables servers, accelerators, memory expanders and smart I/O devices to exchange data at high speed, using shared memory across a PCIe Gen-5 connection. Samsung makes DRAM and NAND-based products which can be attached to a CXL link.

Cheolmin Park, VP of the Memory Product Planning Team at Samsung Electronics, said in a statement: “In order for datacentre and enterprise systems to smoothly run next-generation memory solutions like CXL, development of corresponding software is a necessity.”

Samsung wants to deliver “a total memory solution that encompasses hardware and software, so that IT OEMs can incorporate new technologies into their systems much more effectively.”

Samsung launched a CXL expander device in May, with CXL 2.0 support. This was a CXL-connected DDR5 memory module with memory mapping, interface converting and error management technologies to enable CPUs and GPUs to use its DDR5 DRAM as main memory. Samsung suggests a host server’s per-CPU memory capacity can be increased by up to 50 per cent and bandwidth boosted by up to 75 per cent. There would be an expander per CPU.

Samsung CXL Expander box and card.

The SMDK consists of pre-built code libraries and APIs to enable a host’s main memory and the CXL memory expander to work together in heterogeneous memory systems. There are two APIs. System developers can use a compatibility API to incorporate CXL-attached memory into IT systems without modifying existing app environments, or an optimisation API to optimise app software to suit special needs.

Samsung heterogeneous memory diagram. The memory zones recognise normal DRAMs and CXL memories separately.

The SMDK supports memory virtualisation, meaning separate pools of memory, such as server socket-attached DRAM and CXL-attached memory (such as DRAM or storage-class memory) can be shared. It includes what Samsung calls a proprietary Intelligent Tiering Engine, with which the SMDK user can identify and configure the pool memory type, capacity and bandwidth to match particular use cases with tiering priorities.
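The tiering idea can be sketched in a toy model: hot allocations go to the fastest tier with free capacity, cold ones to the slowest, keeping fast local DRAM free for active data. The tier definitions and policy below are invented for illustration; SMDK's real Intelligent Tiering Engine and its APIs differ.

```python
# Toy sketch of tiered-memory placement in the spirit of an intelligent
# tiering engine. Tier names, capacities and latencies are invented.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    capacity_gib: int
    latency_ns: int       # rough access latency for ordering tiers
    used_gib: int = 0

def place(tiers, size_gib, hot):
    """Hot allocations try the fastest tier first; cold allocations try
    the slowest first, preserving fast memory for hot data."""
    order = sorted(tiers, key=lambda t: t.latency_ns, reverse=not hot)
    for t in order:
        if t.capacity_gib - t.used_gib >= size_gib:
            t.used_gib += size_gib
            return t.name
    raise MemoryError("no tier has capacity")

tiers = [Tier("DDR-DRAM", 64, 100), Tier("CXL-DRAM", 256, 300)]
print(place(tiers, 16, hot=True))    # hot data lands in local DRAM
print(place(tiers, 128, hot=False))  # cold data goes to the CXL expander
```

A real engine would also weigh bandwidth and use-case priorities, as the SMDK description says, but the fast-tier-first spill-over shown here is the core placement decision.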

Samsung’s SMDK is available on a limited basis for initial testing and optimisation and will be open-sourced within the first half of next year.

Yesterday VMware announced its Project Capitola development to virtualise different memories into a single logical pool. That involves the CXL interconnect, and Samsung is one of VMware’s partners in the project. We would hope for similar initiatives to emerge from other hypervisor developers, such as Red Hat and Nutanix, leading to an industry standard so that application developers don’t have to re-invent their CXL wheel for each hypervisor they support.

Questions

We have asked Samsung several questions about this SMDK, and the answers are below each question:

1. The SMDK is open source. Will the Intelligent Tiering Engine be open sourced? 

→ Yes, it will be open sourced, too.

2. Will the Intelligent Tiering Engine (ITE) identify and configure the memory type, capacity and bandwidth of non-Samsung CXL-attached memory types? 

→ Yes, as long as they comply with CXL and PCI specifications.

3. Will it be used by server operating system developers, system SW developers (such as in-memory SW tools) or application developers or all three? 

→ Yes, it can be used by all three, however the current version mostly targets application developers and system SW developers.

4. Is Samsung expecting or hoping to work towards an open standard for CXL-attached memory devices?

→ Yes, absolutely.

NeuroBladers build a processing-in-memory analytics chip and server

An Israeli startup called NeuroBlade has exited stealth mode, built a processing-in-memory (PIM) analytics chip combining DRAM and thousands of cores, put four of them in an analytics accelerating server appliance box, and taken in $83 million in B-round funding.

The idea is to take a GPU approach to big data-style analytics and AI software by employing a massively parallel core design, but take it further by layering the cores on DRAM with a wide I/O bus architecture design linking the cores and memory to speed processing even more. This design vastly reduces data movement between storage and memory and also accelerates data transfer between memory and processing cores.

A statement from CEO Elad Sity said: “We built a data analytics accelerator that speeds up processing and analysing data over 100 times faster than existing systems. Based on our patented XRAM technology, we provide a radically improved end-to-end system for the data centre.”

A supportive quote from Patrick Jahnke, head of the innovation office at SAP, which has been working with NeuroBlade, said: “The performance projections and breadth of use cases prove great potential for significantly increased performance improvements for DBMS at higher energy efficiency and reduced total cost of ownership on-premises and in the cloud.” 

PIM XRAM chip

The rationale is the same as for having GPUs accelerate graphics workloads, but it goes a step further with a PIM architecture called XRAM computational memory. NeuroBlade says the XRAM processors “enable the system to compute inside the memory itself, drastically reducing data movement, saving energy, and speeding up data analytics processing times.”

NeuroBlade XRAM graphic.

The PIM XRAM chip is embedded into an Intense Memory Processing Unit (IMPU) and the appliance, in which a quartet of IMPUs is installed, is called Xiphos. This, NeuroBlade says, “has a parallel, scalable, and programmable architecture that is optimised for accelerated data analytics, enabled through terabytes-per-second of memory bandwidth.”

Xiphos appliance.

The Xiphos motherboard has a PCIe capability about which NeuroBlade said: “Everything is connected on top of PCIe fabric.” The appliance contains local direct-attached NVMe storage, with up to 32 NVMe SSDs per appliance. An x86 CPU running Linux acts as the appliance controller.

An Insights Data Analytics software suite is said to provide the software needed to support high-performance data analytics on Xiphos hardware and to integrate with the existing ecosystem.

Xiphos SW suite.

We asked about the bandwidth on the wide I/O bus and a spokesperson said: “We are talking about multiple x16 lanes PCIe buses, the official spec is still under NDA at this stage.”

We also asked what the 100x performance increase was based upon and were told: “We compare to standard TPC benchmarks and queries we work on with customers.”

Speedata

Coincidentally, Israel-based Speedata exited stealth at the end of September and announced an APU, or Analytics Processing Unit, chip along with $55 million in funding. A NeuroBlade spokesperson told us: “NeuroBlade has paying customers already and is shipping out to data centres all over the world — a big differentiator here.” NeuroBlade also positions itself as further along in the process, and its technology uses XRAM computational memory.

NeuroBlade said: “The data analytics market is projected to be somewhere at $65 billion so the fact that Speedata identified the same target is great. We see even the hyperscalers like Amazon working on new solutions. Couple the giants with other startups it really just suggests that this is a new market with plenty of room to approach in different ways.”

NeuroBlade background

NeuroBlade was founded in 2016 in Tel Aviv by CEO Elad Sity and CTO Eliad Hillel who is also VP for Product Strategy, and formally launched as a company in 2018. Sity and Hillel were in the technological unit of Israel’s Intelligence Corps and then worked at SolarEdge.

It raised a $4.5 million seed round in 2018 and a $23 million A-round the next year. The B-round was led by Corner Ventures with contribution from Intel Capital, and supported by current investors StageOne Ventures, Grove Ventures and Marius Nacht plus technology companies including MediaTek, Pegatron, PSMC, UMC and Marubeni. Total funding is now $117.5m.

Hillel and Sity have filed patents, such as US patent number 10,762,034 for memory-based distributed processor architecture.

The company has passed the 100-employee count and started shipping its Xiphos data accelerator to customers and partners worldwide. The new cash will be used to expand the engineering teams in Tel Aviv and build out sales and marketing teams globally. 

Bootnote: A xiphos is a short, straight, double-edged Iron Age sword used by the ancient Greeks.

VMware going to Capitola for memory tiering

Capitola beach front

VMware is developing vSphere software to virtualise different kinds of memory into a single logical tier, so that applications can have access to more memory than there is DRAM in their host physical server without having to use different coding methods.

The initiative is called Project Capitola and was revealed as a technology review at the VMworld 2021 event. It is discussed in a VMware blog by the vSphere team. It is, they write, “a software-defined memory implementation that will aggregate tiers of different memory types such as DRAM, PMEM, NVMe and other future technologies in a cost-effective manner, to deliver a uniform consumption model that is transparent to applications”.

The developing CXL interconnect has a role to play as a blog diagram shows:

We see pools of different kinds of memory: DRAM (DDR), CXL-attached Optane persistent memory (DIMMs)  and other “memory” accessed via CXL, RDMA-over-Ethernet (RoCE), NVMe — which must surely mean SSDs — and pooled NVMe. More than one physical server can be involved in this memory tiering and logical pooling, according to the diagram.

The pooled memory types will form a non-uniform memory architecture (NUMA) with different tiers having different access speeds. That will have to be managed by the Project Capitola software.

VMware is working with:

  • Memory vendors such as Samsung, Micron and Intel — memory here meaning DRAM and Optane and possibly Samsung’s Z-SSD;
  • Server vendors such as Cisco, Dell, HPE, and Lenovo;
  • Service providers — Equinix.

You can read VMware partner comments in the blog; here is a sample:

  • Cisco CTO Dan Hanson — “We are excited to partner with VMware on Project Capitola to further enhance the hybrid cloud vision we have with UCS, HyperFlex, and Intersight by including this software-defined memory management into our set of solution offerings.”
  • Dell chief technology & innovation officer Paul Perez — “With tiered memory technology from VMware on Dell EMC PowerEdge servers, we’re able to increase capacity and performance for memory-intensive workloads.”
  • Hazelcast chief product officer Manish Devgan — “Hazelcast is excited to partner with VMware on Project Capitola to deliver a flexible, simplified software defined memory management solution that brings together historical and real-time data at microsecond latencies to empower innovative applications.”
  • Micron senior director of the datacentre segment Ryan Baxter — “Project Capitola can deliver new levels of memory access to data-hungry applications, enabling customers to optimise for solution performance and performance per dollar. As an industry leader in DRAM and NAND technology, we are delighted to work with VMware to deliver this value to customers.”

It will be interesting to see what Micron brings to the Capitola table, as it exited its 3D XPoint partnership with Intel in favour of developing its own CXL-accessed memory products.

VMware says its leading partner is Intel and Capitola will come to the market, possibly in a first phase, using Xeon processors and Optane persistent memory. Trust Intel to support this idea.

If one hypervisor can abstract different tiers of memory into a single virtual tier then so can another — and we expect Nutanix’s AHV and other KVM versions such as Red Hat to do so as well. And if a hypervisor can do it then why not an operating system? 

Quantum writes old and cold objects to tape for archiving

Quantum is introducing an object-storage-on-tape tier to its ActiveScale object storage system, providing an on-premises Amazon S3 Glacier-like managed service offering.

The idea is to have a single namespace covering ActiveScale disk and tape storage with multi-site data durability, high capacity, low cost, and policy-based data transfer to the cold storage — all available through a subscription to a fully-managed service.

Bruno Hald, Secondary Storage GM at Quantum, provided an announcement quote: “ActiveScale is now the industry’s first and only on-premises object store system with an integrated cold storage class based on tape technology. In short, this means it dramatically lowers costs, consumes little power, and reliably stores data for decades.” 

Jamie Lerner, Quantum’s chairman and CEO, said: “The innovations announced today enable us to combine Quantum hyperscale tape architectures with Quantum software, package all of this technology as a cloud service that can be deployed anywhere and offer it to enterprises and cloud providers who are facing the same challenges as hyperscalers.”

Blocks & Files diagram.

Data durability

Quantum says the cold storage class has up to 19 nines of data durability. For comparison, Backblaze pegs its own durability at 11 nines, a 99.999999999 per cent chance of not losing data: “if you store one million objects in B2 for ten million years, you would expect to lose one file.” So, with 19 nines of durability, the chances of any data loss are astronomically remote.
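The durability nines translate into simple arithmetic. Treating N nines as an annual durability figure (the common convention, though vendors state it differently), the chance of losing a given object in a year is 10^-N, so the expected number of lost objects is just the population times that probability. The figures below are illustrative maths, not vendor claims.

```python
# Back-of-envelope durability arithmetic: with N nines of annual
# durability, P(losing a given object in a year) = 10 ** -N, so the
# expected objects lost per year from a population is population * 10**-N.

def expected_annual_loss(nines, objects):
    return objects * 10 ** -nines

million = 1_000_000
loss_11 = expected_annual_loss(11, million)  # about 1e-5 objects a year
loss_19 = expected_annual_loss(19, million)  # about 1e-13 objects a year
print(loss_11 / loss_19)  # 19 nines is roughly 100 million times safer
```

On this arithmetic, eight extra nines reduce the expected loss by a factor of a hundred million, which is why 19 nines reads as "astronomically remote" rather than merely "better".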

According to an ActiveScale datasheet, the extra durability comes from Quantum’s Reed-Solomon-based two-Dimensional Erasure Coding (2D EC) and a RAIL (Redundant Arrays of Independent Libraries) architecture. The 2D EC distributes object and parity data shards within and across tapes and libraries to maximise recoverability from data loss while limiting the extra data stored — down to 15 per cent, Quantum says. Restoring an object requires only a single tape read, and 2D EC uses local reconstruction codes to recover from nearly all tape and drive errors using just a single tape.

RAIL provides parallel access to multiple tape libraries, which can be geo-distributed, along with high availability and scalability. A 3-geo system has three separate libraries in geographically dispersed locations. The 2D EC allows for hierarchical data spreading, with object data split into chunks written across 18 drives in the three datacentres — an 18/8 erasure code policy in which objects can be decoded from just ten chunks. The system can recover from three tapes being lost.
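The resilience of an 18-chunk, decode-from-10 layout spread six chunks per site can be checked with some rough arithmetic. Note the naive raw-capacity overhead of such a code is higher than the 15 per cent figure the datasheet mentions, so Quantum presumably measures overhead differently; the numbers below are illustrative only.

```python
# Rough arithmetic for an 18-drive, decode-from-10 erasure layout:
# 10 data chunks plus 8 parity chunks, six chunks in each of three sites.
# Illustrative only; Quantum's published figures may be derived differently.

total_chunks = 18
data_chunks = 10                 # any 10 chunks can rebuild the object
parity_chunks = total_chunks - data_chunks
chunks_per_site = total_chunks // 3

raw_overhead = total_chunks / data_chunks - 1    # extra raw capacity used
survives_site_loss = chunks_per_site <= parity_chunks

print(f"extra raw capacity: {raw_overhead:.0%}")
print(f"tolerates up to {parity_chunks} lost chunks of {total_chunks}")
print(f"survives a full site outage: {survives_site_loss}")
```

Since a whole site holds only six chunks and the code tolerates eight losses, a full library outage (or three lost tapes) still leaves enough chunks to decode every object.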

Alternative deployments include a single datacentre or two datacentres with data replication. 

Lifecycle policies can be used to select objects in the active storage class for transfer to cold storage. Objects sent direct to the cold storage class (tape) by an application are first stored in the active storage class (disk) for fast acknowledgement. They are then batched, and interleaved with read requests, before being sent to cold storage to optimise tape streaming performance. Object restoration from the cold storage class will typically take less than five minutes.
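Since ActiveScale exposes an S3-compatible API, such a policy would presumably look like a standard S3 lifecycle rule. The sketch below moves objects to a cold class after 30 days; the bucket prefix and the "COLD" storage class name are invented, so check ActiveScale's documentation for the class identifiers its API actually accepts.

```python
# A sketch of an S3-style lifecycle rule that transitions objects to a
# cold storage class after 30 days. Prefix and class name are hypothetical.

lifecycle = {
    "Rules": [{
        "ID": "archive-after-30-days",
        "Status": "Enabled",
        "Filter": {"Prefix": "projects/finished/"},   # hypothetical prefix
        "Transitions": [{"Days": 30, "StorageClass": "COLD"}],
    }]
}

# With an S3-compatible client such as boto3, this would be applied with:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=lifecycle)

rule = lifecycle["Rules"][0]
print(rule["Transitions"][0]["Days"])  # objects older than this move to cold
```

The application never talks to tape directly; it just tags or ages objects, and the policy engine handles the batching and migration described above.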

Object data buckets can also be migrated to the public cloud if desired.

Services

Quantum is providing Object Storage Services to deliver this ActiveScale technology, with two classes of service, one for active data and a second for cold data, and an all-inclusive two-tier pricing model:

Service delivery is aided by Quantum’s AIOps software, Cloud-Based Analytics (CBA) with its predictive analytics based on device sensor telemetry, and the MyQuantum service delivery facility which provides access to CBA and the AIOps feature. 

Find out more about ActiveScale Cold Storage on Quantum’s mini-site.

Comment

Other object-to-tape technologies include Germany’s PoiNT Systemes with S3-supporting Archival Gateway software, which supports Quantum Scalar libraries as target tape systems. It also uses erasure coding.

A second competitor to Quantum is Fujifilm’s Object Archive which has an S3 API and uses OTFormat — an open-source file format — to store objects and their metadata on tape. Scality’s Zenko is the object server used inside this product. PoiNT Systemes said in June last year that it will support Fujifilm’s Object Archive format.

Quantum is introducing its cold object storage as an integrated offering in its ActiveScale product line which itself is integrated in Quantum’s overall StorNext portfolio, making it a potential add-on sale to existing Quantum customers. The managed services angle helps make it an affordable add-on as well.

Will we see other on-premises object storage providers add a tape storage archive tier to their products? The easiest way for them to do that would be through a partnership with a tape library vendor, such as HPE, IBM and Spectra Logic. Will they or won’t they? We will have to wait and see.