HPE has announced additions to its GreenLake subscription offerings, marking the company’s shift into direct competition with public cloud data warehouses like Snowflake and SaaS-based data protectors such as Cohesity, Druva, HYCU and others.
We are told this represents HPE’s entry into two large, high-growth software markets — unified analytics and data protection — and accelerates HPE’s transition to a cloud services company. HPE announced GreenLake for analytics with Ezmeral Unified Analytics and an Ezmeral Object Store, GreenLake for data protection with disaster recovery (DR) and backup cloud services, and a professional services-based Edge-to-Cloud Adoption Framework. No HPE announcement is complete these days unless it includes the phrase “edge-to-cloud” — preferably several times.
Antonio Neri, HPE president and CEO, said: “The $100 billion unified analytics and data market is ripe for disruption, as customers seek a hybrid solution for enterprise datasets on-premises and at the edge. … The new HPE GreenLake cloud services for analytics empower customers [and] gives them one platform to unify and modernise data everywhere. Together with the new HPE GreenLake cloud services for data protection, HPE provides customers with an unparalleled platform to protect, secure, and capitalise on the full value of their data, from edge to cloud.”
HPE’s GreenLake platform now has more than 1200 customers and $5.2 billion in total contract value, and is growing. In its most recent quarter, Q3 2021, the Annualised Revenue Run Rate was up 33 per cent year-over-year, and as-a-service orders up 46 per cent year-over-year.
Unified analytics
HPE reckons it can take on the big beasts in the cloud data warehouse and data lake market — like Snowflake and Databricks, which use proprietary technology — by offering a cloud-native, open-source software base with no lock-in. It has a two-pronged approach, offering Ezmeral-branded Unified Analytics and a Data Fabric Object Store. These are available as services or software, deployed on-premises, in the public cloud or across a hybrid environment.
HPE analytics chaos slide.
Matt Maccaux, the global CTO for Ezmeral software at HPE, said that different analytics users — data engineers, data analysts and data scientists — faced a massive diversity of siloed data sets and types. Ezmeral Unified Analytics has a common set of application tools, is usable by all three kinds of user, and is based on an underlying data fabric file and object store.
There are three Ezmeral offerings accessing this:
Ezmeral Runtime Essentials and Enterprise Editions — an orchestrated Kubernetes app modernisation platform;
The new Ezmeral Unified Analytics — a hybrid lakehouse offering for analytics and machine learning (ML);
Ezmeral ML Ops — integrated ML workflows, which Maccaux called the crown jewels.
The existing Ezmeral Data Fabric file and streaming data store has been used here, but the object store is new and can run on-premises or in the cloud. It can store files and streaming data as well as objects, and supports the S3 API.
Maccaux said it did not use MinIO technology, nor Scality’s: “We wrote our own object store abstraction and it’s based upon some of the underlying technology of the data fabric.” We understand it is cloud-native and optimised for small file performance — think 100-byte objects placed on NVMe flash, which are then aged into larger objects on slower storage.
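Since the object store speaks the S3 API, existing S3 tooling should work against it unchanged. Here is a minimal sketch using boto3; the endpoint URL, bucket name and credentials are placeholders chosen for illustration, not HPE-documented values.

```python
# Minimal sketch: writing and reading a small object against any S3-compatible
# endpoint, such as the Ezmeral object store is said to expose. The endpoint URL,
# bucket name and credentials below are placeholders, not HPE-documented values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ezmeral-object-store.example.internal",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# A tiny (sub-1KB) object of the kind the store is said to be optimised for.
s3.put_object(Bucket="sensor-data", Key="device-42/reading-0001", Body=b'{"temp": 21.4}')

obj = s3.get_object(Bucket="sensor-data", Key="device-42/reading-0001")
print(obj["Body"].read())
```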
Data protection
HPE is entering the rapidly growing Data Protection-as-a-Service market with GreenLake for Data Protection. This consists of two offerings:
GreenLake for Disaster Recovery — this uses acquired Zerto’s journal-based Continuous Data Protection (CDP) technology for disaster recovery, backup, and application and data mobility across hybrid and multi-cloud environments. It helps customers recover in minutes from ransomware attacks (see the sketch after this list).
GreenLake for Backup and Recovery — backup as a service built for hybrid cloud with integrated snapshot, on-premises backup, and cloud backup. It provides policy-based orchestration and automation and protects customers’ virtual machines. HPE says it eliminates the complexities of managing and operating backup hardware, software, or cloud infrastructure.
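Journal-based CDP, as described for the Zerto-derived DR service above, works by logging every write with a timestamp so a volume can be rewound to a point seconds before an incident. The following is a generic, minimal sketch of that idea; it is not Zerto code, and the class and field names are invented for illustration.

```python
# Illustrative sketch of journal-based continuous data protection: every write is
# appended to a journal with a timestamp, and recovery replays the journal up to a
# chosen point in time (e.g. just before a ransomware attack). Generic, not Zerto code.
import time
from dataclasses import dataclass, field

@dataclass
class JournalEntry:
    timestamp: float
    block: int
    data: bytes

@dataclass
class CdpVolume:
    blocks: dict = field(default_factory=dict)   # current volume state
    journal: list = field(default_factory=list)  # continuous, time-ordered write log

    def write(self, block: int, data: bytes) -> None:
        self.journal.append(JournalEntry(time.time(), block, data))
        self.blocks[block] = data

    def recover_to(self, point_in_time: float) -> dict:
        """Rebuild the volume as it looked at point_in_time by replaying the journal."""
        state = {}
        for entry in self.journal:
            if entry.timestamp > point_in_time:
                break
            state[entry.block] = entry.data
        return state

# Usage: capture a checkpoint time, then later roll back to it.
vol = CdpVolume()
vol.write(0, b"payroll v1")
checkpoint = time.time()
vol.write(0, b"encrypted-by-ransomware")  # the write we want to undo
print(vol.recover_to(checkpoint))          # {0: b'payroll v1'}
```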
The HPE Backup and Recovery Service code consists of new cloud-native microservices, along with existing Aruba Central, HPE Catalyst and HPE Recovery Manager Central code.
HPE storage czar Tom Black wrote in a blog, which we saw pre-publication, “With HPE GreenLake for data protection, now customers can secure their data against ransomware, recover from any disruption, and protect their VM workloads effortlessly across on-premises and hybrid cloud environments.”
The Edge-to-Cloud Adoption Framework is a professional services-led set of capabilities to help customers adopt HPE GreenLake offerings. It addresses eight domains which HPE says are critical to the strategy, design, execution, and measurement of an effective cloud operating model: Strategy and Governance, People, Operations, Innovation, Applications, DevOps, Data, and Security.
HPE supplies an AIOps for infrastructure offering, InfoSight, that constantly observes applications and workloads running on the GreenLake platform. A new capability, called HPE InfoSight App Insights, detects application anomalies, provides prescriptive recommendations, and keeps the application workloads running disruption-free.
Ezmeral Unified Analytics should become available in the first half of 2022.
Comment
Neri is in a tremendous hurry to transform HPE into a data services-led company, and boost GreenLake’s revenues. He has changed its leadership, recruiting VMware exec Fidelma Russo as CTO and effective GreenLake head. The move into analytics and head-on competition with Snowflake, Databricks and others is a massive bet that HPE, with an open-source unified offering, can offer a compelling choice to customers who are no doubt feeling pressured to jump aboard the analytics juggernaut.
The move into SaaS data protection is much less of a risk. This market is, unlike the cloud analytics market, nowhere near being dominated by elephant-sized players. Zerto has good DR technology and HPE acquired a failing company with tremendous promise. The SaaS data protection field is comparatively young. Dell has made its entry by OEMing Druva, and competitors such as Commvault (with Metallic), HYCU, Cohesity, Rubrik and others are not out of reach. If HPE’s backup service is good then it should grow nicely and penetrate HPE’s customer base.
Analytics is the one to watch. Snowflake is busily and effectively building out an ecosystem and marketplace around its offering. HPE will need to match that with its Ezmeral marketplace.
Pure Storage has greatly increased scale-out for its arrays, with clusters in availability zones, and has added cloud-like storage resource consumption and management as well as automated database deployment services for containerised apps.
It lifted the veil today on Pure Fusion, which it calls a self-service, autonomous, SaaS management plane; enhanced Pure1 operational management; and Portworx Data Services (PDS). Fusion is a federation capability for all Pure devices — on- and off-premises — with a cloud-like hyperscaler consumption model. PDS, meanwhile, brings the ability to deploy databases on demand in a Kubernetes cluster.
Pure’s chief product officer, Ajay Singh, said: “Customers want a new agile storage experience that is fully automated, self-service, and pay-as-you-go. Pure Fusion breaks down the traditional barriers of enterprise storage to deliver true storage-as-code and much faster time to innovation.”
Murli Thirumale, VP and GM, Cloud Native Business Unit, Pure Storage, said Portworx Data Services showed the firm had made progress in helping customers to deploy stateful applications on Kubernetes: “Now we don’t just give IT teams the tools needed to run data services in production, we are providing an as-a-service experience for the data services themselves so our customers can focus on innovation, not operations.”
Pure provides all-flash storage resources for both traditional applications and for containerised applications. Drilling down into the conceptual aspect, traditional apps running on-premises are provisioned using abstractions such as LUNs, and arrays with defined capacities, whereas public cloud provisioning is on the basis of capacity, service classes, protection levels and near-infinite scale-out. In the containerised world, Kubernetes facilitates storage provisioning but also brings a whole new dimension of complexity with application deployment and monitoring. It can be used to automate both storage provisioning and application deployment.
The idea is for the Pure infrastructure to combine to form greater resource pools. For traditional environments that infrastructure is presented as cloud-like, with service classes and in-built workload placement and balancing. For cloud-native — for example Kubernetes-based — shops, their ability to deploy databases on the Pure infrastructure is now automated, as is their ongoing (“day 2”, as the kids say) management.
Pure Fusion
Pure International CTO Alex McMullan told B&F: “Fusion … is a SaaS layer to federate Pure arrays together into a software-defined and multi-tenant private cloud.”
He added: “This is Pure’s first major journey into active management. Pure1 today is a monitoring and read-only portal. The change that comes with Fusion is that we’ll be managing Pure devices actively from the cloud layer, from the SaaS layer, and that’s a huge thing that has been planned, discussed, syndicated with customers for many years.”
Pure Fusion provides availability zones and regions across clusters of Pure arrays, including Cloud Block Store, with everything organised by service level. Storage admins can define a catalogue of storage classes with different access types (block, object, file) and different performance, protection and cost profiles. Users can then self-select the class they want with a UI click or software API call, relying on a “Pure1 layer for Fusion to automatically implement those changes behind the scenes, without anyone knowing.”
In theory, customers will be able to provision and deploy storage volumes faster, from common tools developers use today like Terraform and Ansible, with an API-first interface. Their storage will also have higher availability through the use of availability zones.
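To illustrate what API-first provisioning against an admin-defined storage-class catalogue might look like to a developer, here is a hypothetical sketch in Python. The host, URL paths and JSON fields are assumptions made for illustration only; they are not Pure’s published Fusion API.

```python
# Hypothetical sketch of API-first, class-based volume provisioning of the kind
# Pure Fusion describes. The host, paths and JSON fields are invented for
# illustration; they are not Pure's published API.
import requests

FUSION_API = "https://fusion.example.com/api/v1"   # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}

def provision_volume(tenant_space: str, name: str, size_gb: int, storage_class: str) -> dict:
    """Ask the management plane for a volume in a given storage class; placement,
    availability-zone choice and balancing are left to the SaaS layer."""
    payload = {
        "name": name,
        "size_gb": size_gb,
        "storage_class": storage_class,   # e.g. "gold-block" from the admin-defined catalogue
    }
    resp = requests.post(
        f"{FUSION_API}/tenant-spaces/{tenant_space}/volumes",
        json=payload,
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# A developer (or a Terraform/Ansible provider making the same call) self-selects a class:
# provision_volume("payments", "orders-db-data", 500, "gold-block")
```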
Suppose there is an array problem? McMullan said that, with Pure1’s access to array telemetry data, the firm and Fusion can “work between themselves to work out the best place for a data set to be and to move it there with Active Cluster. … it’s all completely transparent to the user, the storage administrator, the DBA and, of course, to the end application as well.”
McMullan added: “Each part of the federation of pools of capacity is carved into tenant spaces. For each of those tenant spaces there’s an overall administrator, allowing different storage classes to be defined per tenant space, each with its own separate administrator. … Each tenant space can call out into any Availability Zone, any region or more than one region with protection policies to be able to run their business in the way that they see is optimal.”
Pure Fusion scaling will integrate first with FlashArray//X, FlashArray//C, and Pure Cloud Block Store, and has future integrations planned with FlashBlade and Portworx.
Portworx Data Services
Cloud native development features rapid code iterations and complex database deployments, which means DevOps staff end up doing a lot of Ops work along with the Dev. Portworx Data Services is aimed at reducing their Ops-type workload by automating database deployments. It’s built on the Portworx Enterprise product.
Users select a database type — such as Cassandra, Couchbase, ElasticSearch, Kafka, or MongoDB — its name and size, and it is then downloaded from the container registry, installed on a cluster, and brought up by the Kubernetes operator. The user then gets a connection string back so they can use the database.
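That flow maps naturally onto a Kubernetes custom resource watched by an operator. The sketch below uses the official Kubernetes Python client, but the API group, kind and spec fields are invented for illustration; they are not the published Portworx Data Services schema.

```python
# Hypothetical sketch of the deployment flow described above, expressed as a
# Kubernetes custom resource created with the official Python client. The API
# group, kind and field names are invented for illustration; they are not the
# published Portworx Data Services schema.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

database_request = {
    "apiVersion": "dataservices.example.io/v1",   # placeholder group/version
    "kind": "DatabaseInstance",                    # placeholder kind
    "metadata": {"name": "orders-cassandra", "namespace": "team-a"},
    "spec": {
        "type": "cassandra",    # user-selected database type
        "nodes": 3,             # user-selected size
        "storageClass": "px-replicated",
    },
}

# The operator watching this resource pulls the image from the registry, installs
# it on the cluster and eventually publishes a connection string (for example in
# the resource's status or a Secret) for the user to consume.
api.create_namespaced_custom_object(
    group="dataservices.example.io",
    version="v1",
    namespace="team-a",
    plural="databaseinstances",
    body=database_request,
)
```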
Alex McMullan.
The data protection is taken care of by Portworx. The product is database-as-a-service, McMullan said: “No more ServiceNow tickets, no more help desk tickets, no more calling help desks. You click and you go; you deploy as you would do in the public cloud.”
A range of SQL and NoSQL databases will be supported. According to McMullan: “We’re going to start with a couple of the obvious ones in terms of Cassandra, Kafka. RabbitMQ, I think, is one of the others, and that list will get longer as we approach GA.”
He said: “Portworx Data Services is able to maintain, expand capacity, monitor, feedback, and take repetitive actions, if and when there’s a database problem.
“It won’t do DBA-based tasks in terms of table management, the traditional things. That still goes via the normal service, but the actual capability of bringing the database up inside the application for the developer, the DBA, the data scientists to consume the endpoint, is all taken care of automatically.”
He emphasised that “the ability to repeatedly deploy and manage and maintain and protect containerised applications is something that nobody else can offer, talk about or even do right now, so this is an industry first for us.”
Availability
Both Pure Fusion and Portworx Data Services are in Early Access Programs. Fusion will be in preview by the end of this year, with general availability to come in the first half of 2022. Portworx Data Services should be generally available in early 2022.
The Arctic World Archive has held a depositing event for data ingestion this month, following a prior one in February, thus providing an effective write latency of five to six months.
Data is written on reels of Piql-format 35mm film and is claimed to be recoverable for more than 1000 years. The AWA stores the film reels of data in shipping containers in a steel vault, 300m inside Gruve 3 — an old coal mine at Longyearbyen, in Spitsbergen. This is the largest island in the Svalbard archipelago, located between Norway and Greenland, 965km south of the North Pole. This offline archive is marketed as the place to store the most precious artworks, cultural artefacts and data in the world over the very long term. Customers include the Vatican Library, the National Museum of Norway, the European Space Agency, GitHub for source code, and several global corporations.
At the event Piql’s Managing Director, Rune Bjerkestrand, said in a portentous statement: “The data you have deposited today holds significance for communities around the world. Choosing to preserve these items and ensuring they will never be forgotten is passing on value to future generations.”
New data depositors included the Norwegian Armed Forces Museum with digitised photographs, Norway’s Natural History Museum, a digital art collection, the National Library of Hungary, and the corporate history of Norway’s Tronrud Engineering — shades of a vanity project.
We wondered if the read latency (restoring items from the archive) was five to six months as well. Not so. A spokesperson said: “If a client needs to retrieve their data, they make a request through our online platform, piqlConnect. The reel is then loaded on to a piqlReader at the vault and the files are made available online (or on a portable media device if the client wishes). … PiqlConnect is hosted in Microsoft Azure … and there is a high-speed fibre optic connection to the vault.” The film reel holding the data has to be identified and manually fetched from the vault before being placed in the reader. This process could take just 30 minutes, but is likely to take longer.
We also asked about prices but didn’t get anywhere. “We unfortunately don’t publish our prices as the needs of each client (including file preparation, cataloguing, archival actions, etc) vary from client to client.”
AWA clients can request a private deposit ceremony if they wish. The AWA says it won’t accept deposits from just anybody — it’s all very worthy.
Amazon has extended its QuickSight business intelligence (BI) service with QuickSight Q, which enables users to type questions about their business data in natural language and receive accurate answers in seconds.
QuickSight is a scalable, serverless, embeddable, machine learning-powered BI service built for the cloud that makes it easy to create and publish interactive BI dashboards. With the Q version, users can type in questions using natural language instead of using a point-and-click dashboard selection method.
QuickSight Q screengrab.
Matt Wood, VP of Business Analytics at AWS, provided a statement: “With Amazon QuickSight Q, anyone within an organisation has the ability to ask natural language questions and receive highly relevant answers and visualisations. For the first time, anyone can tap into the full power of data to make quick, reliable, data-driven decisions to plan more efficiently and be more responsive to their end users.”
QuickSight Q uses machine learning (natural language processing, schema understanding, and semantic parsing for SQL code generation) to automatically interpret the intent of a question and relationships among business data. The machine learning models behind QuickSight Q are pre-trained on data from various domains such as sales reporting, ads and marketing, financial services, healthcare, and sports analytics.
It can handle questions such as “How are my sales tracking against quota?” and “What are the top products sold week-over-week by region?” The answers come as visualisations and text and mean a business intelligence analyst is not needed to interrogate data to get the answers to such questions.
Users don’t need to learn SQL but, on the flip side, they can’t ask questions as detailed as they could with SQL. QuickSight Q continually improves over time based on the auto-complete phrases a user selects. Amazon says: “Customers can [also] refine the way Amazon QuickSight Q understands questions with an easy-to-use editor, which removes the need for complex data preparation before users can ask questions of data in natural language.”
We’re told that, as QuickSight does not depend on pre-built dashboards and reports, users are not limited to asking only a specific set of questions. QuickSight Q provides auto-complete suggestions for key phrases and business terms. It performs spell checking and acronym/synonym matching, so users need not worry about typos or remembering exact business terms for their data.
There are no upfront commitments to use Amazon QuickSight Q, and customers only pay for the number of users or queries. Amazon QuickSight Q currently supports questions in English and is generally available today to customers running Amazon QuickSight in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Ireland), and Europe (London), with availability in additional AWS Regions coming soon.
There’s a strong focus on data querying and analysis in this week’s digest. Data querying has been improved by Elastic enhancing its search products. A US research institute is using MemVerge’s virtual memory to speed plant DNA analysis. And data warehouse data feeder Fivetran has pulled off an amazing funding round.
Apart from these things we look at two funding rounds for Fivetran and Stairwell, and have masses of shorter news items about people being hired and product news. Grab a coffee and read on …
Elastic stretches its software
Elastic has announced enhancements to its Elasticsearch Platform with version 7.15. The platform includes Elasticsearch and Kibana, and three built-in features: Elastic Enterprise Search, Elastic Observability, and Elastic Security.
Web crawler improvements in Search include automatic crawling controls, content extraction tools, and the ability to analyse logs and metrics natively in Kibana, giving users a single platform to search all of their organisation’s data. There is a native Google Cloud data source integration with Google Cloud Dataflow, providing customers with faster data ingestion in Elastic Cloud.
APM (Application Performance Monitoring) correlations, now generally available in Elastic Observability, help DevOps teams and site reliability engineers accelerate root cause analysis and reduce mean time to resolution by automatically surfacing attributes correlated with high-latency or erroneous transactions.
Enhancements have been added to Limitless Extended Detection and Response (XDR) in Elastic Security, including malicious behaviour protection for Windows, macOS and Linux hosts, and one-click host isolation for cloud-native Linux environments. Powered by analytics that prevent attack techniques leveraged by known threats, malicious behaviour protection strengthens existing malware and ransomware prevention by pairing post-execution analytics with response actions to disrupt adversaries early in an attack.
Read a blog to find out about several more additions made by Elastic.
MemVerge case study
Penn State Huck Institutes of the Life Sciences is using MemVerge’s Memory Machine Big Memory software to speed genomic analytic processing in plant DNA research, to develop plant products that can better withstand droughts and diseases and achieve longer transportation and shelf life. The bioscientists at Penn State need to analyse genome sequences that are even more complex than human genomes. Using traditional analytic methods, it could take weeks to process specific workloads, and running jobs can often fail as the data is much larger than the available compute memory.
By deploying a pool of MemVerge software-defined DRAM and Optane persistent memory (PMem), the Institute has been able to speed the completion of its analytic pipeline and run jobs that it hasn’t been able to process before.
In the MemVerge announcement, Claude dePamphilis, Director of the Center for Parasitic and Carnivorous Plants at Penn State Huck Institutes of the Life Sciences, said: “The innovative technology helps us achieve DRAM-like performance from a mixed memory configuration. This has not only dramatically increased our time to research findings but has also saved us considerable budget in the process.”
We respectfully suggest he meant to say “dramatically decreased our time to research findings”. 🙂
Fivetran funding
Data integration startup Fivetran has taken in a massive $565 million at a valuation of $5.6 billion, taking total funding to $730 million. The round was led by Andreessen Horowitz with existing investor General Catalyst and new investors such as YC Continuity. Fivetran also bought private equity-owned HVR and its enterprise, mission-critical, real-time database replication technology for $700 million in a cash-and-stock transaction.
The new funding made the acquisition possible.
Fivetran was founded in 2012 and its funding history shows a hockey-stick curve:
2013 — $743,000 seed
2017 — $3.4M second seed
2018 — $15M A-round
2019 — $44M B-round
2020 — $100M C-round
2021 — $565M D-round
A chart shows this spectacular betting at the Fivetran funding casino in more detail:
The reason for this funding frenzy, panic investing almost — don’t you think it resembles panic buying? — is that Fivetran’s software, provided as a managed service, provides connectors to get formatted data into data warehouses, such as Snowflake, from disparate and distributed data sources. The sources can be databases, files, applications, events and functions. Fivetran is, in one way, just another ETL (Extract, Transform and Load) process, but provided as a separate function with more than 150 pre-built change data capture connectors and automated schema migrations.
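To make the connector idea concrete, here is a generic sketch of the cursor-based incremental sync a change data capture connector performs. It is not Fivetran code; the table and column names are assumptions, and the source table is assumed to have an `id` primary key and an `updated_at` change timestamp.

```python
# Generic illustration of the incremental, cursor-based sync a change data capture
# connector performs: read only rows changed since the last sync, then upsert them
# into the warehouse. Not Fivetran code; table and column names are assumptions.
import sqlite3

def sync_table(source: sqlite3.Connection, warehouse: sqlite3.Connection, last_cursor: str) -> str:
    """Copy rows changed since last_cursor from source.orders to warehouse.orders."""
    changed = source.execute(
        "SELECT id, customer, total, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_cursor,),
    ).fetchall()

    for row in changed:
        # Upsert keeps the warehouse copy idempotent if the same row is seen twice
        # (assumes id is the primary key of warehouse.orders).
        warehouse.execute(
            "INSERT INTO orders (id, customer, total, updated_at) VALUES (?, ?, ?, ?) "
            "ON CONFLICT(id) DO UPDATE SET customer=excluded.customer, "
            "total=excluded.total, updated_at=excluded.updated_at",
            row,
        )
    warehouse.commit()

    # The new cursor is the highest change timestamp seen; persist it for the next run.
    return changed[-1][3] if changed else last_cursor
```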
Fivetran has more than 1000 customers, such as ASICS, Autodesk, Monzo, Square, DocuSign, Everlane and Lime. A blog explains some of Fivetran’s thinking behind the HVR deal.
HVR sold itself for $51 million to private equity business Level Equity Management in 2019 and has a partnership with Snowflake and Snowflake competitor Databricks. The Fivetran deal represents a near 14x return for Level Equity Management.
VC bets on data analytics, spurred by the success of Snowflake’s IPO, are getting bigger and bigger. This is blitz-scaling with a vengeance.
Stairwell funding
Mike Wiacek.
Palo Alto-based startup Stairwell has raised $20 million in an A-round and is launching an Inception offering to detect threats by looking at a company’s data. Stairwell was founded in 2019 by CEO Mike Wiacek, an ex-principal software engineer at Google. He worked on threat analysis intelligence at an Alphabet company called Chronicle, which he co-founded.
There the main development was called Backstory — a planet-scale data analysis platform that could search petabytes of security data in just milliseconds.
Stairwell’s web site reads: “Security is currently practised as a linear process, but ideally it is circular, continuous and should always be building on itself. And, it should always start by looking at what’s most important to an organisation, which is their own internal data. Good security starts with a deep understanding of the assets, software, and files inside your environment.” It calls this an inside-out approach.
Investors include Sequoia, Accel, Allen and Company and Gradient Ventures. Inception is in a beta test release with GA slated for next year.
Shorts
archTIS, which supplies software for collaboration on sensitive information, has acquired select assets including customers, technology and the European operations of Cipherpoint Ltd’s software division. This includes the IP for cp.Protect, a SharePoint on-premises data encryption offering, and cp.Discover, a data discovery and classification platform. Existing cp.Protect and cp.Discover customers include DHL, Bank of Finland, California State University, Arthur J Gallagher, US DARPA, Singapore Power, Singapore Tote and Acronym Media.
Cirrus Migrate Cloud can migrate live block data from a public cloud-based or on-premises block storage system to Azure. Migration uses cMotion technology and is performed while the original system is still in operation. It involves distributed Migration Agents running on every host, allowing direct host-to-host connections. Each host-to-host migration is independent, making the solution infinitely scalable with no central bottlenecks in the dataflow. A user guide explains more.
Cloud-native service provider Civo has launched a new region in Germany, based in Frankfurt, for its Kubernetes-focussed cloud platform, adding to existing regions in the USA and UK. It aims to be a disruptive alternative to the hyperscale providers, such as AWS, Google and Microsoft. Civo’s platform launched to early access in May 2021, and the company has seen strong demand for Civo Kubernetes with users from over 130 countries worldwide, and Germany accounting for nearly five per cent of its user base. Civo will launch a further region into Asia this year, with another five regions planned across the globe in 2022.
FileCloud, which offers file sync-and-share to enterprises, has introduced v21.2 of its product, which features a native drag-and-drop workflow automation tool that allows managers and team members to create business workflows with no coding necessary.
Multi-cloud security vendor Fortanix announced a partnership with cloud data warehouser Snowflake to make its Data Security Manager SaaS (DSM SaaS) available to Snowflake customers, giving them the ability to tokenise data inside and outside Snowflake. Tokenisation is related to masking. It replaces personally identifiable information (PII), such as credit card account numbers, with non-sensitive and random strings of characters, known as “tokens”, which preserve the format of the data and the ability to extract the real information.
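A minimal sketch of the general tokenisation technique (not Fortanix’s implementation): the real value is swapped for a random token of the same format, and the mapping is held only in a protected vault so authorised callers can recover the original.

```python
# Minimal illustration of format-preserving tokenisation: the real card number is
# swapped for a random token of the same length and format, and the mapping lives
# only in a protected vault so the original can be recovered by authorised callers.
# This shows the general technique, not Fortanix's implementation.
import secrets

class TokenVault:
    def __init__(self):
        # In practice this mapping would live in an encrypted, access-controlled store.
        self._token_to_value = {}

    def tokenise(self, card_number: str) -> str:
        # Keep the format: same number of digits, so downstream systems still accept it.
        token = "".join(secrets.choice("0123456789") for _ in card_number)
        self._token_to_value[token] = card_number
        return token

    def detokenise(self, token: str) -> str:
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenise("4111111111111111")
print(token)                    # e.g. 9283710457120394 — safe to store in the warehouse
print(vault.detokenise(token))  # the real number, available only via the vault
```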
Public cloud supplier Linode is rolling out NVMe SSD storage. The rollout kicks off in the company’s Atlanta datacentre, with the remainder of its global network upgrading over the next quarter. It includes the company’s first erasure-coded block storage cluster, using Ceph. The performance upgrade is provided free to all Linode customers, and Linode Block Storage rates will remain at $0.10/GB per month. Customers with existing block storage volumes in Linode’s ten other global datacentres will be able to migrate their volumes when NVMe becomes available.
AIOps supplier OpsRamp announced a summer release which includes alert predictions for preventing outages and incidents, alert enrichment policies for faster incident troubleshooting, and new monitoring integrations for Alibaba Cloud, Prometheus metrics ingestion, Hitachi, VMware, Dell EMC, and Poly. CloudOps engineers can visualise, alert on, and perform root cause analysis for ECS instances, Auto Scaling, RDS, Load Balancer, EMR, and VPC services within Alibaba Cloud, and accelerate troubleshooting for multi-cloud infrastructure within a single platform.
Platform9 can run virtual machines (VMs) on Kubernetes and you can read about it in a white paper. This should be called Project Uznat <- read it backwards and think VMware.
Startup Pliops, which sells an x86 server-offloading XDP storage accelerator, has set up a distribution deal with TD SYNNEX covering North America. Pliops has implemented a partner assist, channel-centric go-to-market strategy and will strategically align with systems integrators and value-added resellers offering database, analytics, ML/AI, HPC and web-scale solutions, as well as cloud deployments.
Pure Storage’s FlashBlade file and object-storing array has been adopted by DC BLOX, a provider of interconnected multi-tenant datacentres across the Southeastern US. The array comes through Pure-as-a-Service and Pure’s Evergreen storage service for non-disruptive upgrades and future expansion.
Redstor, a UK-based SaaS data protection and management supplier, is partnering with Bocada, so its automated aggregation of backup and storage performance metrics reduces the work for MSPs involved in daily backup operations monitoring and monthly SLA audits, and provides better reporting.
Samsung’s T7 SSD is a credit card-sized portable external drive with up to 2TB capacity and a USB 3.2 Gen-2 connecting cable. The drive uses embedded PCIe NVMe technology and can transfer data fast, with sequential reads up to 1050MB/sec and sequential writes at 1000MB/sec. The aluminium-cased drive can survive a fall of up to two metres, weighs 58 grams, and has a three-year warranty. It’s compatible with PCs, Macs and Android devices and comes in silver, blue and red colours.
Samsung T7 external SSD.
Earlier this month Seagate’s LaCie unit launched the LaCie Mobile SSD Secure and the LaCie Portable SSD, both with up to 2TB capacity and the same read and write performance as Samsung’s T7 drive. OWC’s Envoy Pro Elektron portable external SSD, like the LaCie and Samsung products, also uses a USB 3.2 Gen-2 cable connector and has equivalent performance.
OWC Envoy Pro Elektron fast portable SSD.
Scale Computing, which supplies HCI systems for edge sites — smaller and remote IT sites — says it’s doing well in the North American public sector, with sales to the Kitselas First Nation, a self-governing nation and tribe in British Columbia; the Summit County Board of Elections in Ohio; and Madison County, Kentucky.
Scality has announced Splunk SmartStore-Ready validation with its RING product providing Splunk customers with petabyte-scale storage for SmartStore single-site, multi-site active-active and multi-site stretched deployments. A Scality blog explains the details: “Scality RING holds the master copy of warm buckets while indexers’ local storage is used for hot and cache copies of warm buckets. With most data residing on Scality RING, the indexer maintains a local cache that contains a minimal amount of data: hot buckets, copies of warm buckets participating in active or recent searches, and bucket metadata. This renders indexers stateless for warm and cold data, boosting operational flexibility and agility.”
Seagate is working with Microsoft to provide more affordable expansion SSDs for the Xbox Series X|S consoles. Such expansion storage has a proprietary format. There is talk of a 500GB capacity card costing around $150. The existing 1TB card costs $220 or so, but you can get a 1TB M.2 format SSD for $130 or less. A French media outlet, Xboxsquad.fr, reported this story.
The SNIA (Storage Networking Industry Association) has made v1.2.3 of Swordfish available for public review. Swordfish extends the DMTF Redfish Scalable Platforms Management API specification to define a comprehensive, RESTful API for managing storage and related data services. Version 1.2.3 adds enhanced support for NVMe advanced devices (such as arrays), with detailed requirements for front-end configuration specified in a new profile, and enhancements to the NVMe Model Overview and Mapping Guide. It also includes new content in both the User’s Guide and the Error Handling Guide. You can also learn more about SNIA Swordfish and access developer resources here.
The SNIA’s Developer Conference runs from September 28 to September 29. There are over 120 technical sessions on topics including NVMe, NVMe-oF, Zoned Storage, Computational Storage, Storage Networking, Persistent Memory, and more. Register here.
Synology is launching a cloud-based C2 Backup forever-incremental offering. It backs up Windows devices, whether they are located at home (C2 Backup For Individuals) or distributed across multiple offices (C2 Backup For Businesses). Data backed up on C2 Backup is fully shielded against unauthorised access by end-to-end AES-256 encryption, with a user-held private key necessary to unlock backup files and sensitive information. File-level recovery allows you to retrieve any file you need immediately, while entire devices can be restored to their previous state with bare-metal recovery.
TeamGroup is developing a liquid-cooled SSD family — yes, really. The T-Force Cardea Liquid II drives are for extreme gamers. This SSD is an M.2 format product and presumably will sustain high speeds for longer than it would without the cooling.
China’s TerraMaster has a D16 Thunderbolt 3 tower storage box for digital image technology workflows. It has 16 bays for 3.5-inch SATA disks or 2.5-inch SSDs and can be configured with a total storage capacity of up to 288TB (16 × 18TB HDDs). The box has a 40Gbit/sec connection, delivering speeds of up to 2817MB/sec when fitted with 16 SSDs in RAID 0 array mode on Windows. In RAID 6 mode, it can deliver speeds of up to 2480MB/sec.
Yugabyte, which supplies an open-source distributed SQL database, announced the general availability of Yugabyte Cloud, its fully managed, public database-as-a-service offering. Developers can create and connect to a highly scalable, resilient, Postgres-compatible database in minutes with zero operational overhead.
Managed storage provider Zadara has a deal with connectivity supplier Zenlayer for its fully managed zStorage to be available in Zenlayer’s North American locations, with expansion soon to follow into emerging markets such as India, China, and South America. Zadara and Zenlayer now offer managed storage solutions that businesses can deploy from on-premises datacentres, private colocation facilities, or the cloud with a 100 per cent OpEx model.
People
Cloud backup and file storage supplier Backblaze has appointed two new board members: Evelyn D’An and Earl E. Fry. D’An currently serves as a member of the board of directors of Summer Infant, Inc. and is a former partner of Ernst & Young, where she spent 18 years serving clients in retail, consumer products, technology, and other sectors. Fry serves as a member of the board of directors of Hawaiian Airlines, including as chair of their Audit and Finance Committee. Previously, Fry was chief financial officer, chief administrative officer, chief customer officer, and executive vice president, operations strategy at Informatica Corporation.
Danial Beer.
HCI and virtual SAN supplier StorMagic has hired a new CEO, Danial Beer, ex-CEO of Ottawa-based GFI Software, to replace the departed Hans O’Sullivan. Beer will oversee operational efficiencies and moves into Edge, security and video surveillance markets.
We asked if “operational efficiencies” refers to layoffs, office closures or other cost-cutting measures and a StorMagic spokesperson replied: “We are in a growth phase of the company and are expanding, not shrinking. The reference to operational efficiencies means we will align our operational processes with our GTM strategy to accelerate growth. We are continuing to find ways to update our operations to better enable our expected growth.”
Western Digital and Toshiba have promoted their advances in disk drive areal density and performance based on specific technology developments, such as bias currents, enhanced PMR using a microwave oscillator, and embedded NAND in the controller — Western Digital’s OptiNAND. Seagate, by contrast, has had little to say about such things, beyond stating that its over-arching recording technology development is heat-assisted magnetic recording (HAMR), in contrast to Toshiba and Western Digital’s focus on microwave-assisted magnetic recording (MAMR) as a stage before possible HAMR.
This can lead commentators to think it is running out of technology advancements. We asked Colin Presly, a Senior Director in Seagate’s CTO organisation, about the recent technology advances in the HDD area — bias currents, ePMR, and embedded controller NAND. Then we had a discussion about the state of HAMR.
Colin Presly
Bias current/ePMR
Without disputing Western Digital’s internal data, Presly doesn’t think that the use of bias current-type technology will provide an intrinsic advantage or gain. For him the changeover from longitudinal magnetic recording to perpendicular magnetic recording (PMR) provided an intrinsic gain in areal density. The bias current idea doesn’t, in his view, provide an intrinsic gain of itself.
He said: “We consider complete system optimisation for given head designs. The physics behind bias currents is well understood. … We can do it in other ways with head designs and geometry design.” Seagate could control the write field, head spacing, domain alignment and other items from the hundreds of features affecting a specific HDD design point.
According to Presly, Western Digital reached the 18TB level with ePMR and Seagate reached it without ePMR, supporting his point that Seagate can reach higher incremental capacity levels without relying on any one specific technology to get there.
Embedded controller NAND
Seagate does know about shipping NAND with disk drives because: “We have shipped approximately 40 million hybrid drives with NAND on them for performance gains. It’s not really new. … It was good technology but it’s not used in today’s nearline drives.”
Western Digital says its OptiNAND embedded NAND technology provides more performance and better metadata management, leading to areal density increases.
Embedding NAND in the controller is not something that Seagate thinks is worth doing. If customers want more performance then it has its MACH.2 dual-actuator technology, which provides two read/write channels to the drive and effectively doubles performance, rather than increasing it by low percentages through better metadata management.
Presly said: “We don’t see any reason to put NAND in the controller for performance.” As for metadata: “There are ways of managing metadata; other ways of solving the same problem. … We just don’t see the value of adding NAND to a nearline drive.”
A NAND-enhanced controller was not needed for the 20TB PMR drives that Seagate is now sample shipping, although one is used in Western Digital’s sampling 20TB drive. Presly reiterated Seagate’s over-arching direction for increasing areal density: “For capacity we’re focussed on HAMR.”
He was keen on the idea of deterministic performance gains, saying customers wanted predictable and consistent (deterministic) performance, not variable or non-deterministic performance, and that’s why it uses DRAM in its controllers and not NAND. But: “In the future, if there was a reason to do it then we could make that pivot.”
HAMR
Presly was clear about one aspect of HAMR: “[It] is really really hard technology.” Even so: “The industry across the board recognises HAMR is the road to high capacity.”
Seagate HAMR graphic.
Inside Seagate: “HAMR continues to progress. We’re not ready to publish an areal density record but we are progressing. … We’re strong believers in HAMR; it is a step change.”
I asked him about the apparent capacity gap between the latest 20TB sampling drive and the coming 30TB HAMR drive mentioned by Seagate’s CFO Gianluca Romano. Presly said: “The 20 to 30TB transition is a good discussion point … but I can’t talk about that. … We will have competitive technologies in the 20 to 30TB space. … We’ll absolutely be competitive.”
Comment
It is apparent that Seagate believes it is in pole position in the HAMR race. It is gaining customer feedback on its 20TB Gen-1 technology as well as manufacturing experience. This knowledge is being fed into its Gen-2 30TB product. If and when Toshiba and WD make the transition to HAMR, Seagate will have several years of experience from manufacturing and shipping millions of HAMR drives.
That could give it an advantage when competing with HAMR newbies. Until then it has to stay competitive as WD and Toshiba bring out their full MAMR technology. Who knows, but we could see Seagate’s Gen-2 HAMR drive facing up against WD’s first full MAMR drive. Competitive bakeoffs will be interesting. We hope Backblaze gets its hands on both types of drive and publishes its reliability stats for them.
Scality has patented software to capture billions of files in multi-petabyte environments in a single snapshot, orders of magnitude faster than traditional filers, and enabling better recovery from ransomware attacks.
The company’s claim means at least two orders of magnitude — a hundred times faster — and maybe three — a thousand times faster. How, after 20-plus years of snapshot technology development, is it possible to do this?
Scality CTO Giorgio Regni said in a statement that Scality “advances the state of the art in the storage industry and, more importantly, provides the foundation for new and enhanced products.”
Evaluator Group senior strategist and analyst Randy Kerns sang off the same hymn sheet: “Scality continues to push the envelope with innovations for protecting data at massive scale, as demonstrated by its new US patent for snapshot technology. This promises to be a key foundation for enabling future ransomware protection solutions.”
Ironically it uses some of the key concepts in Scality’s own RING object storage system — as we shall see.
Limited snapshot capabilities
Paul Speciale.
A Scality blog by Chief Product Officer Paul Speciale reads: “Snapshots are a traditional method of data protection in most NAS file systems. The challenge for snapshot implementations at larger scale is to keep them fast, accessible and space efficient. The underlying management and tracking needed for each snapshot is complex, creating overhead in both space, CPU consumption and time.
“That’s one reason why (in most NAS systems) snapshots haven’t been applicable for truly high-scale file systems. Most snapshot technology works well for use cases such as user directories or shared folders — typically in the range of thousands to tens of thousands of files — but it’s rare to see NAS file systems with millions or billions of files.
“The frequency of snapshots also tends to be quite sparse, typically with policies that take daily and weekly snapshots but purge more granular and longer-term ones.”
And the conclusion is this: “The important breakthrough in our patent is that Scality has figured out how to make snapshots work at cloud-scale, with millions to billions of files in a file system, and enable an essentially unlimited number of snapshots.”
Okay. We thought we’d better take a closer look at the patent. It is US Patent number 11,061,928: “Snapshots and forks of storage systems using distributed consistent databases implemented within an object store.” So Scality is using an object store, its own RING no doubt, as the basis for a high-speed snapshotting process. That seems surprising.
The intro para says the patent involves “providing a snapshot counter for a storage system implemented with multiple distributed consistent database instances.” Reading the Background part of the patent brings out some very interesting points. We summarise a great deal here. Bear with it — we’ll arrive at familiar territory.
Patent background
The text reminds readers of several core physical ways of accessing stored information which, ultimately, come down to accessing blocks on a disk or solid state drive. There are objects, accessed via their own unique alphanumeric IDs, and key:value stores, which link keys uniquely to values (stored information or objects). There are files holding individual pieces of information, accessed through folders, which themselves can contain folders, thus forming a directory tree.
There are also blocks on drives which store information and which are accessed by block number or offset from a starting point on the drive. We are told that a database is a way of using a core physical storage system. It consists of “a database interface 108, an indexing layer 109 and a storage layer 110.” [The numbers refer to defined items in the patent.] The indexing layer exists to speed up lookups into the storage layer and so avoid scrolling through the storage layer item by item until the right one is found. It is a mechanism for searching the storage layer’s contents.
The storage layer can be implemented using either object, file or block storage methods.
A relational database (RDBMS) is a type of database with a collection of tables using rows and columns. Rows are records in the database and columns are data items for a particular row with one column used as a primary key to identify a row in a table. Columns in a table may include primary keys of other rows in other tables.
An index layer, such as a B+ tree, can be added on top of tables to provide primary keys for items with specific values. This sounds vague, but the essential idea is that indexing provides faster access to items in the RDBMS than traversing the rows and columns following primary key references. The patent says: “In a basic approach there is a separate index for every column in a table so that any query for any item of information within the table can be sped up.”
Having laid all this out the patent describes a key:value [object] store accessed via a distributed database (DDS) management system and connector nodes, and diagrams it:
The numbers refer to defined items in the patent. Read the patent text to understand them.
The distributed databases provide scale and parallel access speed.
Scality’s patent has a Chord-like distributed hash table access mechanism. The hash table stores key:value pairs with keys indicating different computers (nodes) with Chord being a hash table protocol saying how the keys are assigned to nodes. Nodes and keys are logically arranged in an identifier circle in which each node has a predecessor and a successor. Keys can have successors as well.
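A minimal sketch of the identifier-circle idea described above, assuming SHA-1 hashing onto a small ring: nodes and keys hash onto the same circle, and each key is stored on its successor node. This is illustrative only, not Scality’s implementation.

```python
# Minimal sketch of a Chord-style identifier circle: nodes and keys are hashed onto
# the same ring, and each key is stored on its successor (the first node clockwise
# from the key's position). Illustrative only, not Scality's implementation.
import bisect
import hashlib

RING_BITS = 16
RING_SIZE = 2 ** RING_BITS

def ring_position(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING_SIZE

class Ring:
    def __init__(self, nodes):
        # Sorted node positions form the identifier circle.
        self.positions = sorted((ring_position(n), n) for n in nodes)

    def successor(self, key: str) -> str:
        """Return the node responsible for key: the first node at or after its position."""
        pos = ring_position(key)
        idx = bisect.bisect_left([p for p, _ in self.positions], pos)
        if idx == len(self.positions):   # wrap around the circle
            idx = 0
        return self.positions[idx][1]

ring = Ring(["node-a", "node-b", "node-c", "node-d"])
print(ring.successor("object-12345"))   # the node holding this key:value pair
```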
And here we are. We’ve arrived at Scality’s RING object storage system, which we first wrote about 11 years ago.
The DDS permits the underlying key:value store to be so used or “to be used as [a] file directory or block based storage system.” File accesses are converted into object ID accesses. Block access numbers are converted into Object IDs also. The patent reads: “If the KVS 201 is sufficiently large, one or more of each of these different types of storage systems may be simultaneously implemented” and “multiple distributed consistent database instances can also be coordinated together as fundamental kernels in the construction of a singular, extremely large capacity storage solution.”
These can “effect extremely large scale file directories, and … extremely large block storage systems.” The connector nodes can provide various interfaces to end users, such as the Cloud Data Management Interface (CDMI), the Simple Storage Service (S3), NFS, CIFS, FUSE, iSCSI, Fibre Channel and so on.
Snapshot layer added to previous diagram.
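To see why a snapshot counter matters at this scale, here is an illustrative sketch of the general approach (not the patented implementation): taking a snapshot merely increments a global counter, and every write records the counter value current at the time, so nothing is copied per file at snapshot time and old versions remain readable.

```python
# Illustrative sketch of counter-based snapshots: taking a snapshot is just an
# increment of a global counter, and every write records the current counter value,
# so no per-file copying is needed at snapshot time and old versions stay readable.
# General idea only, not the patented implementation.
from typing import Optional

class SnapshotStore:
    def __init__(self):
        self.snapshot_counter = 0
        # key -> list of (snapshot_counter_at_write, value), oldest first
        self.versions = {}

    def write(self, key: str, value: bytes) -> None:
        self.versions.setdefault(key, []).append((self.snapshot_counter, value))

    def take_snapshot(self) -> int:
        """O(1) regardless of how many billions of files exist."""
        self.snapshot_counter += 1
        return self.snapshot_counter - 1   # id of the snapshot just taken

    def read(self, key: str, snapshot_id: Optional[int] = None) -> bytes:
        """Read the live value, or the value as of a given snapshot."""
        history = self.versions[key]
        if snapshot_id is None:
            return history[-1][1]
        for counter, value in reversed(history):
            if counter <= snapshot_id:
                return value
        raise KeyError(f"{key} did not exist at snapshot {snapshot_id}")

store = SnapshotStore()
store.write("file-1", b"v1")
snap = store.take_snapshot()
store.write("file-1", b"v2")
print(store.read("file-1"))        # b'v2' (live)
print(store.read("file-1", snap))  # b'v1' (as of the snapshot)
```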
Implications
This seems to us to be an astonishingly powerful idea. It implies that this combined key:value store and DDS bundle can be used in many different ways, not just for storing the data for billions of files captured in snapshot exercises.
We asked Scality if the technology in the patent could specifically be used for blocks and objects as well as files, and what that means in terms of Scality’s software strategy.
Speciale and Regni replied, saying: “We do agree that the core innovations in our patent are applicable to both block and object storage over time (in principle they share similar semantics). Especially as platform technologies (flash, RDMA/NVMEoF and networking) speeds and latencies continue to progress — we certainly see ways to leverage the patent for block storage.
“The idea of implementing Point-in-Time snapshots for object storage is potentially very useful for future cloud-native application workloads, and we do believe our innovations will be applicable to it in the future.”
It will be interesting to see how and when Scality implements the technology in this patent. Clearly it now has a way of protecting data in files it stores in its own SOFS (Scale-Out File System) software.
Seagate has a second-generation 30TB HAMR disk drive in development, and sees Gen-1 HAMR as having limited appeal.
Heat-Assisted Magnetic Recording (HAMR) is Seagate’s chosen technology to increase disk drive areal density beyond the limits of current perpendicular magnetic recording (PMR) technology, and uses laser-produced local bit area heating to make a hardened magnetic material accept a write signal and then hold it in a stable fashion. Seagate is shipping a Gen-1 20TB HAMR drive in a limited way.
Not for public consumption
Blessed Seeking Alpha published a transcript of a September 15 Q&A session between Seagate CFO Gianluca Romano and Citi MD and senior analyst Jim Suva at a Citi 2021 Global Technology Virtual Conference.
Jim Suva.
Suva’s first statement included this gem: “I want to go over a few housekeeping items first. First is, no media or press are allowed on this conference call or this video connect. If you are media or press, please disconnect immediately.” Oh, how we laughed when we read this on the Seeking Alpha transcript. It implies that what follows is not for public consumption, which is intriguing.
Romano was very confident in the growth prospects for high-capacity, 3.5-inch disk drives: “We are very confident that the mass capacity part of the business will continue to grow very strongly. We said in the past we expect volume to grow about 35 per cent CAGR for the next several years. … [with] cloud and video and image applications as the segments that are growing the most.”
Key to this growth is continuing to increase disk capacity and so keep up with, or even surpass, competitors Western Digital and Toshiba. Hence the importance of HAMR.
Romano said: “We have a 20-terabyte HAMR that we actually started to sell December last year. That is a small-capacity drive for that new technology. So we are now ramping in volume. We are just producing enough quantity that we can sell to our main customers so that they get familiar with the new drive, with the new technology.” This is why we made the “limited appeal” comment in our first paragraph above.
Then Romano said: “In the meantime, we are developing our second-generation HAMR drive that will be probably around 30 terabyte. That is the drive that we want to ramp in volume.” In our view this means the 20TB HAMR drive is not “the drive we want to ramp in volume.”
The HAMR tech should enable a 50TB HAMR drive in Seagate’s fiscal 2025 year and a 100TB drive “at least by 2030.”
Demarcation
Suva asked Romano which applications would use a 20TB PMR drive and which a 20TB HAMR drive. Romano made a somewhat surprising reply: “The high volume will go to the PMR, the 20-terabyte PMR. For HAMR, I think the best solution is to wait for [the] next generation when another drive will be a 30-terabyte drive, that will be at the beginning, focused only on cloud applications, while PMR will be used for our enterprise OEM, for video image application and some of the legacy applications.”
We understand Seagate is currently sampling a 20TB PMR drive. Even the 30TB HAMR drive will not be applied across Seagate’s disk product range. And, in an implied question here, what capacity will the Seagate PMR drives be at that point?
HAMR technology could also be used to produce what Romano called a “mid-cap” drive in the future.
A snapshot of the timescales Romano revealed:
2021 — 20TB PMR and 20TB HAMR;
Some time between 2022 and 2025 — 30TB HAMR;
FY 2025 — 50TB HAMR;
2030 — 100TB HAMR.
Our take
Our thinking is that Western Digital (WD) has reached the 20TB capacity level using ePMR (enhanced PMR) and embedded flash in the drive controller — its OptiNAND technology. It’s going to extend areal density using these technologies to reach 50TB in the 2025–2030 time period.
WD can also use OptiNAND to increase the slow write performance of its 20TB shingled disk drives and push their capacity to 22 to 24TB and beyond, in our view.
The apparent big puzzle here is how Seagate can increase the areal density of its PMR drives and keep up with WD as it pushes its ePMR/OptiNAND conventional (non-shingled) drive capacity up to the 30TB area. We have had an interview with Colin Presly, a Senior Director in Seagate CTO John Morris’s organisation, and he explained how Seagate will be very competitive in the future, both by extending PMR technology and developing HAMR. A follow-up article based on what he said discusses Seagate’s approach to increasing areal density and performance.
Nutanix and Citrix are partnering so customers can get Citrix VDI running on Nutanix HCI in hybrid and multiple public clouds, raising doubt about Nutanix’s Frame VDI offering.
VDI stands for Virtual Desktop Infrastructure, by which Windows and Linux desktop PC screens are virtually presented to a remote terminal. Citrix is supplying its Desktop-as-a-Service (DaaS) and Virtual Apps and Desktops services, and Nutanix its HCI-based Cloud Platform software. Thus, Citrix VDI users will be able to access applications and services running on a Nutanix host system.
Tarkan Maner, Nutanix’s Chief Commercial Officer, who once ran the Wyse thin terminal business bought by Dell, was given the honour of a quote, and said: “Together, Nutanix and Citrix can deliver remote work solutions which can be deployed across private and public clouds, combining the simplicity of the Nutanix Cloud Platform, powered by the industry-leading HCI software, with Citrix Virtual Apps and Desktops services, to empower workers, wherever they happen to be.”
Nutanix and Citrix say they “can provide fully comprehensive Desktop-as-a-Service (DaaS) options for customers that enable them to procure, deploy, and manage their Citrix environments running on the Nutanix Cloud Platform, delivered with original equipment manufacturer (OEM), global system integrator (GSI), service provider (SP) and public cloud providers.”
Nutanix will become a Citrix preferred choice for HCI hybrid and multi-cloud deployments. Citrix will become the preferred enterprise end-user computing solution on the Nutanix Cloud Platform. They will work together on go-to-market programs and enablement, product roadmaps and customer support.
Hector Lima, Citrix EVP and Chief Customer Officer, said: “In strengthening our partnership, Citrix and Nutanix can deliver the right building blocks for customers to make the transition [to a hybrid workforce] successfully.”
Frame
Twinning Citrix VDI with Nutanix HCI software sets up a potential conflict inside the company. Nutanix has its own virtual desktop capabilities, in the form of the Xi Frame Desktop-as-a-Service (DaaS) offering running on AHV (private cloud), AWS and Azure. It acquired Frame in August 2018. Frame supplied a virtual desktop accessed on a host PC through a browser — a virtual desktop on a desktop, so to speak.
The Nutanix-Citrix announcement makes no mention of Frame, and it rather looks as if Frame is now out of the frame.
Research house GigaOm has published a Radar report on Disaster Recovery-as-a-Service (DRaaS) suppliers, saying the industry is young but oldster Microsoft is in the lead.
DRaaS is the provision of off-site disaster recovery facilities from a public cloud-delivered service instead of from a DR system hosted on the main and recovery datacentres. When a disaster strikes, the system is failed over to a recovery site which is typically hosted in a public cloud but could also be in a remote site. It fails back to the original site when it is operational once more. The DRaaS supplier has to look after copying new data to the recovery site and provide the failover and failback processes.
Analyst Alistair Cooke writes: “The market is still young, with plenty of space for vendors to enhance their positions.” He has evaluated eight vendors: Acronis, AWS, Druva, iland, Infrascale, Microsoft (Azure), RapidScale and VMware.
Cooke analysed their services using a set of key criteria and their impact on evaluation metrics to produce a Radar graphic — a forward-looking perspective on all the vendors in the report, based on their products’ technical capabilities and feature sets.
The arrows project each solution’s evolution over the coming 12 to 18 months.
We see one Leader — Microsoft (Azure) — with all the other suppliers regarded as Challengers. There are no new entrants and AWS, Druva and VMware are moving into the Leaders’ ring.
He writes: “The multi-purpose cloud platform vendors (AWS, Azure, and iland) offer multi-tenant disaster recovery with consumption-based pricing, so monthly costs vary depending on the amount of DR test and recovery activity undertaken. More specialised data protection providers, such as Acronis, Druva, and Infrascale, might provide more mature recovery methodologies due to their focus.”
Cooke highlights: “The lack of built-in DR orchestration from the two big cloud providers (AWS and Azure) stands out against the other great capabilities both platforms provide. We were also surprised by the progress Druva has made in the past couple of years, in particular how feature-packed its DRaaS service is.”
Supplier extracts from the report
“Acronis Cyber Protect Cloud DRaaS capabilities are a good fit for the smaller organisations that Acronis designed them to suit.” But there is not yet any automated script execution capability.
AWS CloudEndure Disaster Recovery provides agent-based replication and recovery from physical, virtual or cloud-based systems into an AWS Virtual Private Cloud (VPC), with RPOs down to the sub-second range. But “there seems to be no built-in capability to sequence the starting of multiple VMs or to use scripts to initiate VMs.”
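Customers who need an ordered start-up today would therefore script it themselves outside the service. A minimal sketch using standard boto3 EC2 calls (the instance IDs and tier layout are placeholder assumptions) might look like this:

```python
# Hedged sketch: sequencing recovered EC2 instances ourselves, since the report
# says CloudEndure has no built-in start-order capability. Instance IDs are
# placeholders; boto3 and valid AWS credentials are assumed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Start tiers in order: database first, then app servers, then the web front end.
boot_order = [
    ["i-0db0000000000001"],                        # database tier
    ["i-0app000000000001", "i-0app000000000002"],  # application tier
    ["i-0web000000000001"],                        # web tier
]

for tier in boot_order:
    ec2.start_instances(InstanceIds=tier)
    # Wait until this tier reports 'running' before starting the next one.
    ec2.get_waiter("instance_running").wait(InstanceIds=tier)
```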
“Druva does not attempt to reach the lowest possible RTO/RPO, recognising that Tier-0 applications are better deployed as active-active geo-clustered configurations. Druva simply wants to protect every other application and machine in your estate.”
“The iland Secure Disaster Recovery as a Service can be implemented with either Veeam or Zerto supplying the underlying protection.” We note that HPE has bought Zerto and is expected to offer Zerto technology as-a-service through its GreenLake operation. There are no iland egress charges by the way.
“The Infrascale Backup and Disaster Recovery Cloud (IBDR-Cloud) service … uses an on-premises appliance for local recovery. The appliance provides self-contained storage and compute capabilities to allow on-premises recovery if, for example, your storage array fails.”
“Microsoft Azure Site Recovery (ASR) provides DRaaS from on-premises into an Azure region, or from one Azure zone or region to another. … Orchestration and automation are strong in ASR, with customers able to choose from a variety of automation tools including Microsoft PowerShell, ARM templates, Azure policies, and third-party tools such as Terraform and Ansible.”
“RapidScale Cloud Recovery provides disaster recovery using the technologies from RapidScale Backup-as-a-Service (BaaS), Infrastructure-as-a-Service (IaaS), and Zerto, combined with some automation and orchestration.” It uses continuous data protection technology, which can have a NetApp array as a target, and has a short RPO.
“The VMware Cloud Disaster Recovery (VCDR) service allows any vSphere-based VMs to be protected and recovered to the VMware Cloud on AWS (VMC). As VCDR is a first-party product, integration into vSphere and VMC is very tight, but there’s no protection for other hypervisors or physical hosts.”
Find out more about the GigaOm Radar for Disaster Recovery-as-a-Service here.
Dell has announced CloudIQ support for its PowerEdge servers and PowerSwitch networking gear, meaning that CloudIQ predictive analytics now covers Dell’s IT infrastructure product range, providing level-3 (conditional automation) AIOps.
CloudIQ is an AI and machine learning-based monitoring and predictive analytics application, delivered as a SaaS offering for Dell’s IT infrastructure. Servers, storage, networking and other products in Dell’s range are fitted with sensors that send state and activity information to CloudIQ. The software then analyses patterns in trillions of data points to check whether a customer’s infrastructure components are working optimally and whether they are running short of any resource.
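To make the idea concrete, and only as an illustration of the general technique rather than Dell’s actual CloudIQ code, a basic health check over streamed telemetry could flag readings that fall well outside a component’s recent behaviour:

```python
# Illustrative only: flag a component whose latest telemetry reading falls well
# outside its recent behaviour. Not Dell's CloudIQ implementation.
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Return True if `latest` is more than `threshold` standard deviations
    from the mean of the recent `history` readings."""
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

# Example: response-time samples (ms) from a storage array port.
recent_latency = [1.1, 1.2, 1.0, 1.3, 1.1, 1.2]
print(is_anomalous(recent_latency, 9.8))   # True - worth a health notification
```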
A blog by Jeff Boudreau, Dell’s President and General Manager for the Infrastructure Solutions Group, says Dell is delivering the path to autonomous operations with CloudIQ, which has “intelligent AIOps capabilities and now covers the breadth of the Dell Technologies infrastructure portfolio, setting the stage for new levels of automation for autonomous operations and delivering up to 10x faster time to resolution.”
AIOps, or Artificial Intelligence for IT Operations, is a way of using AI software to analyse the data points collected from an IT system and recognise whether it is operating within or outside optimal limits for things such as performance, SLA compliance and resource capacity. The software can predict, based on past trends, what might happen and suggest, for example, that a virtual machine’s protection status needs uprating or that a network switch needs more bandwidth.
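As a toy example of the “predict based on past trends” idea, illustrative only and not any vendor’s algorithm, fitting a straight line through recent capacity usage gives a rough days-until-full estimate:

```python
# Toy trend prediction: estimate when a storage pool runs out of capacity by
# fitting a straight line through recent daily usage. Illustrative only.

def days_until_full(daily_used_tb, capacity_tb):
    """Least-squares slope of usage over time, projected forward to capacity."""
    n = len(daily_used_tb)
    if n < 2:
        return None
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(daily_used_tb) / n
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(xs, daily_used_tb)) / sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None          # usage flat or shrinking - no exhaustion forecast
    return (capacity_tb - daily_used_tb[-1]) / slope

# Example: a 100TB pool growing by roughly 0.5TB per day.
print(days_until_full([80.0, 80.6, 81.1, 81.5, 82.0], 100.0))  # ~36.7 days
```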
A fully autonomous system would use AIOps to look after itself, but that desired end state will be reached via steps along a pathway; Dell defines five such steps.
Boudreau says: “With AIOps-driven autonomous operations as part of your IT strategy, you get deeper access to operational data and the ability to act on it through automation at the infrastructure level.” This enables existing technical staff to be redeployed elsewhere.
With its servers and network switches now added to CloudIQ, Dell says: “This means that IT ops teams now have one UI and a single source of health notifications and recommended actions, real-time reports, and AI/ML-driven analytics for their entire fleet of Dell systems across all locations. CloudIQ also monitors data protection in public clouds and storage-as-a-Service as a key component of the Dell Technologies APEX Console and with APEX Data Storage Services.”
Boudreau says: “The combined intelligence of CloudIQ and the Dell EMC infrastructure portfolio supports up to Level 3 of the autonomous operations model for a sizeable and growing number of IT administration use cases.”
CloudIQ can integrate with third-party tools such as ServiceNow, Slack, Microsoft Teams, Ansible and vRealize, and provide them with its data and recommended resolutions to problems. This parallels what Infinidat has been doing with its InfiniBox-focussed AIOps and third-party integrations.
There is, as yet, no information about how Dell could move up to level-4 autonomous operations, and that would be a heck of a large step to make.
You can find out more about Dell’s ideas around autonomous operations by watching and listening to young Luke Skywalker, sorry, actor and voice-over man Mark Hamill moderate a panel at a Dell event entitled “Autonomous Operations: The Path to Your Digital Future” being held on September 22 and 23.
And remember, the AIOps force is strong in that one.
Scale-out filesystem supplier Qumulo is offering its own data protection and cloud-based disaster recovery facilities, saying it protects against ransomware and customers no longer need expensive secondary data centres for disaster recovery (DR).
Qumulo positions its new Recover Q software for business continuity and disaster recovery, and as a defence against ransomware.
Ben Gitenstein, VP of Product at Qumulo, said: “Recover Q provides our customers a radically simple solution to add an additional layer of strategic defence that helps mitigate attacks and provides the ability to seamlessly recover if one occurs.”
Qumulo Recover Q graphic.
Recover Q has two components: Qumulo Protect and Qumulo Secure. Qumulo Protect provides data protection features:
Erasure coding;
Snapshots;
Snapshot replication;
Continuous replication;
Failover and failback for simple disaster recovery;
Cloud volume snapshot to S3.
Qumulo Secure provides:
AES 256-bit software encryption at rest;
SMBv3 in-flight (over-the-wire) encryption;
Configure, schedule and query programmatically and securely with a RESTful API over HTTPS (see the sketch after this list);
Secure FTP traffic on TCP networks leveraging Transport Layer Security (TLS);
Track user activity within the filesystem with Audit logging;
Designate different levels of user and group privileges leveraging Role Based Access Control (RBAC).
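As a flavour of that programmatic angle, here is a hedged sketch of driving a cluster over HTTPS with a bearer token; the endpoint paths, port and field names are illustrative assumptions rather than Qumulo’s documented API, so check the vendor documentation for the real routes.

```python
# Hedged sketch of configuring and querying a cluster over a RESTful API using
# HTTPS. Endpoint paths, port and field names are illustrative placeholders,
# not Qumulo's documented API.
import requests

CLUSTER = "https://qumulo.example.com:8000"
CA_BUNDLE = "/etc/ssl/certs/cluster-ca.pem"   # verify TLS against the cluster's CA

# Authenticate and pick up a bearer token (login route is an assumption).
resp = requests.post(f"{CLUSTER}/v1/session/login",
                     json={"username": "admin", "password": "secret"},
                     verify=CA_BUNDLE)
resp.raise_for_status()
headers = {"Authorization": f"Bearer {resp.json()['bearer_token']}"}

# Query snapshot policies programmatically (path is a placeholder).
policies = requests.get(f"{CLUSTER}/v1/snapshots/policies",
                        headers=headers, verify=CA_BUNDLE)
print(policies.json())
```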
Both are built into Qumulo’s main Core software. Recover Q’s snapshot replication, failover and failback facilities can be provided on-premises or as a DR-as-a-Service offering from the public cloud. With this, customers replicate data and snapshots offsite, providing a near-instant failover capability in the event of a disaster.
The AWS and Google clouds are supported, with customers managing the required infrastructure; Recover Q can also work with Azure, in which case it is a fully managed service.
Qumulo has not said that Amazon’s S3 Object Lock is supported and, in fact, it isn’t. If you want immutability in your replicated snapshots, you will have to wait.
Qumulo is running a Recover Q online seminar on October 13th to present more information and, no doubt, get leads.