
Storage news ticker tape

Storage news comes in waves, and right now there’s so much of it that a weekly digest can’t keep up. Here’s the latest batch, presented in a ticker tape-style format as an experiment to see if we can keep pace with the rush.

Consultancy DCIG evaluated Large Enterprise VMware vSphere Backup Solutions suppliers: Acronis Cyber Protect, Arcserve Unified Data Protection (UDP), Cobalt Iron Compass, Cohesity DataProtect, Commvault Complete Data Protection, Dell EMC NetWorker, HYCU for VMware, IBM Spectrum Protect Plus, Quest NetVault Plus, Rubrik Polaris, Unitrends Unified BCDR and Veritas NetBackup.

The top 5 were Cobalt Iron Compass, Commvault Complete Data Protection, HYCU for VMware, Unitrends Unified BCDR and Veritas NetBackup.

Cohesity is hosting its inaugural user conference Cohesity Connect, a global, virtual event with a focus on cyber-resilience and ransomware in particular. It runs from October 19 to 21, and has more than 30 dynamic sessions and breakout discussions. Attendees can also obtain free professional certifications in data protection, file and object services, and multi-cloud solutions during half-day Cohesity Academy sessions.

Kasten by Veeam announced the launch of the Kasten Kubernetes Learning Series — a free educational program designed to improve the Kubernetes skill sets of all levels of practitioners, including novices, developers, operations and Kubernetes administrators.

SaaS-based data protector Druva introduced Druva Rollback Actions, which temporarily stores data for between 24 hours and seven days, enabling customers to roll back unauthorised or accidental deletion activity. In the case of credential misuse — where a bad actor may maliciously remove endpoints, users, virtual machines, NAS or file shares, or even databases — Druva Rollback Actions allows administrators to quickly recover not only the data from deleted backups but also the associated environmental objects, reverting the unintended action without any loss of data.

Blogger Ronen Schwartz, SVP and GM of NetApp’s Cloud Volumes business, announced the integration of Google Cloud VMware Engine with NetApp Cloud Volumes Service support for virtual machine (VM) datastores. It will be a fully-managed service which scales storage independent of compute and supports Google Cloud regional DR VMware deployments. Register for a preview. General availability is expected in 2022.

ATTO announced support for LTO-9 tape technology across its product lines: HBAs, bridges, and Thunderbolt devices.

Synopsys announced the industry’s first complete HBM3 IP solution — including controller, PHY, and verification IP for 2.5D multi-die package systems. Its pre-hardened or configurable HBM3 PHY in 5nm process operates at 7200Mbit/sec for up to 2x the data rate and improves power efficiency by up to 60 per cent compared to HBM2E. Micron, Samsung and SK hynix provided supporting statements.

Quantum has set up a StorNext + CatDV + archiving offering for Adobe Premiere Pro teams based in offices and/or working remotely. It’s called a Collaborative Workflow Solution and the components are:

  • StorNext shared storage provides the workflow storage,
  • CatDV Asset Management, with included CatDV Cloud Panel for Adobe Creative Cloud, delivers asset and project management and orchestration,
  • Archiving can be done to Scalar tape, ActiveScale object storage or any S3 target system.

The complete, integrated, tested and turnkey offering is installed and supported by Quantum Professional Services and reseller partners certified to install StorNext and CatDV.

Airbyte aims to kill proprietary ETL data warehouse feed pipelines

Airbyte has launched a cloud service for its open-source Extract, Transform and Load software product, until now only available on-premises.

Its Airbyte Cloud uses compute time rather than data volume-based pricing, providing up to a claimed 10x reduction in costs. Airbyte’s software, and now service, enables businesses to create data pipelines from sources such as PostgreSQL, MySQL, Facebook Ads, Salesforce, Stripe, and connect to destinations that include Redshift, Snowflake, and BigQuery.

Michel Tricot, co-founder and CEO of Airbyte, said in a statement: “Typically, companies use an ETL or ELT (extract, load, transform) technology to move data from the most common APIs (application programming interfaces), or they build in-house scripts for the less common ones, and have yet another technology for database replication.

“Using compute time as the basis for pricing is a well-understood concept and data processing platforms like Snowflake have already adopted that model.”

Companies can now build data pipelines using Airbyte Cloud at a fraction of the cost of volume-based pricing. Airbyte Cloud also enables companies to have multiple workspaces and access management for their teams. It supports OAuth authentication to enable less-technical users to connect their tools.

The company is growing both its customer and connector counts. It had 250 customers at the end of January and that number has now grown past 5000. Back in July Airbyte had more than 75 pre-built connectors, and it now has 130 — a year after it began operations. More and more connectors are coming in as contributions from the open-source community — these now account for 20 per cent of the connectors built. It says most ETL/ELT services plateau at around 150 connectors, and predicts: “At this pace, Airbyte’s data integration platform will have the most connectors in the industry by the end of this year.”

SingleStore says ETL procedures are not needed at all. Its distributed, relational SQL database can handle both transaction and analytic workloads and is available on-premises and in the public cloud.

Get a complete list of Airbyte connectors here. Customers can also build their own using the Airbyte CDK (Connector Developer Kit), which removes 75 per cent of the code needed to build a new connector. An Airbyte blog explains how it intends to make money and encourage ongoing third-party connector support.
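To give a flavour of what connector-building with the CDK involves, here is a minimal sketch of a source connector using the Python CDK’s HttpStream pattern. The endpoint, stream and field names are hypothetical, and the class and method names reflect our reading of the CDK’s documented pattern rather than a definitive implementation.

```python
# Hypothetical Airbyte source connector sketch using the Python airbyte-cdk.
# The API endpoint and "orders" stream are invented for illustration.
from typing import Any, Iterable, List, Mapping, Optional, Tuple

import requests
from airbyte_cdk.sources import AbstractSource
from airbyte_cdk.sources.streams import Stream
from airbyte_cdk.sources.streams.http import HttpStream


class Orders(HttpStream):
    url_base = "https://api.example.com/v1/"  # placeholder API
    primary_key = "id"

    def path(self, **kwargs) -> str:
        return "orders"  # GET https://api.example.com/v1/orders

    def next_page_token(self, response: requests.Response) -> Optional[Mapping[str, Any]]:
        return None  # no pagination in this sketch

    def parse_response(self, response: requests.Response, **kwargs) -> Iterable[Mapping]:
        yield from response.json().get("orders", [])


class SourceExample(AbstractSource):
    def check_connection(self, logger, config) -> Tuple[bool, Any]:
        return True, None  # a real connector would probe the API here

    def streams(self, config: Mapping[str, Any]) -> List[Stream]:
        return [Orders()]
```

The CDK supplies the plumbing — retries, rate limiting, state handling and the Airbyte protocol messages — which is where the claimed code saving comes from.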

Airbyte is phasing in users for Airbyte Cloud starting in the US.

Hitachi Vantara tries to catch up, with VSP arrays, hybrid cloud and containerisation support

Hitachi Vantara has added new top-end storage arrays along with support for hybrid clouds and containerisation, faster object storage and S3 object ingest, plus improved Ops Center management.

Update: VSP 5100 and 5500 latency numbers added, 13 Oct 2021.

The announcements were made at a Hitachi V event — “The Road Ahead: Digital Infrastructure for the Data-Driven” — at which execs revealed a roadmap combining hybrid cloud infrastructure products with Hitachi Virtual Storage Software for block, virtualised across distributed environments, spanning (with an echo of the HPE mantra) from edge to core to cloud. 

Mark Ablett, President of Digital Infrastructure at Hitachi Vantara, said in a statement: “Our new VSP 5000 and E Series hybrid cloud products deliver performance, consolidation, and enterprise class data services seamlessly on-prem, off-prem, and for cloud-based storage. Clients want agility and performance to meet the demands of digital business, and we’re delivering both.”

The new 5000s and updated E Series arrays will have common data services and virtualisation from Hitachi V’s software-defined and cloud-hosted offerings.

VSP 5000

The VSP (Virtual Storage Platform) is a mid-range to high-end storage array line starting with the entry-level all-flash F Series and hybrid and higher capacity SSD/HDD G Series. Then there is a three-model all-flash E Series (590, 790 and 990), followed by the high-end 5000 products.

There are H (hybrid) variant models of several all-flash arrays, which add disk drive capacity.

Hitachi V has refreshed its VSP 5000 range with two new models — the 5200 and 5600 — effectively replacing the current 5100 and 5500.

The proprietary FMDs (Flash Module Drives) used in the 5100 and 5500 are dropped and the capacity limits remain the same. Both IOPS and bandwidth performance are increased compared to the old models. Hitachi Vantara has subsequently told us that the latency for the 5100 and 5500 is 70 microseconds. The 5200 has the same latency as the 5100, but the 5600 has a much lower latency than the 5500.

Hitachi V does not supply the controller CPU details, but we would suppose that the new arrays have newer and more powerful Xeon processors.

Hitachi V says the new 5000s have a 42 per cent improvement in data reduction efficiency, increasing their usable capacity by up to that amount. The 5600 has an end-to-end NVMe design, with the company claiming an industry-leading 33 million IOPS and under 39 microseconds of latency.

Hitachi Vantara VSP 5600.

The 5000 software gets containerisation integration with Google’s Anthos (on-premises and major public cloud support) and Red Hat OpenShift, the leading enterprise Kubernetes platform. This lays the foundation, Hitachi V says, for extending data fabrics to the cloud.

There is also a Hitachi Replication Plug-In for Containers, to replicate data off the host 5000.

Finally, the new 5000s feature Hitachi Modern Storage Assurance, which provides “seamless upgrades to all future enhancements of the latest VSP technology — simplifying the procurement process for several years”. This can be included in an array purchase or be part of an EverFlex pay-as-a-service offering.

VSP E Series

There are H (hybrid flash/disk) versions of the E590 and E790 arrays, which means they get SAS drive expansion to increase their capacity. They can be configured as all-NVMe, all-SAS or mixed NVMe/SAS products, with the HDDs forming a data retention storage tier. The new capacity limits are 8.9PB internal and 144PB external for the E590, and 8.9PB internal and 216PB external for the E790.

Hitachi Vantara E590 or E790 chassis.

These refreshed E Series systems also get the Anthos and OpenShift integration and the replication plug-in plus a data-in-place, non-disruptive upgrade path to future E Series arrays.

The updated E Series has a simpler installation process and can be installed and switched on in 30 minutes or so. 

Hitachi Content Platform

The object-storing Hitachi Content Platform (HCP) has optimisation settings that distribute the objects across the architecture more efficiently and deliver a greater than 2x improvement in performance.

A new scale-out policy engine optimises performance through data services that can be customised per use case or workload. The actual performance numbers are not revealed, so comparisons with other suppliers are impossible to make.

HCP systems using Hitachi Content for File now support object ingestion using the S3 protocol, which broadens the range of devices that can send data to HCP for storage and analysis.
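As a rough client-side illustration, ingesting an object over the S3 protocol to an HCP endpoint looks like any other S3 put; here is a sketch using boto3, in which the endpoint URL, credentials and bucket name are placeholders rather than real HCP values.

```python
# Sketch: writing an object to an S3-compatible endpoint (e.g. an HCP tenant).
# Endpoint, credentials and bucket are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://tenant.hcp.example.com",  # placeholder S3 gateway
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(
    Bucket="sensor-data",
    Key="2021/10/13/device42.json",
    Body=b'{"temp_c": 21.4}',
)
```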

Ops Center

An updated Ops Center management facility applies AI and ML methodologies to reporting, performance optimisation and data management for simpler-to-use, cloud-based health monitoring. We’re told there is an integrated real-time analytics and automation capability which automates 90 per cent of manual tasks and helps admins get to root cause analysis up to 4x faster than before. Hitachi V is now calling Ops Center an AIOps facility.

On the green front, Hitachi V says customers can fine-tune system performance to reduce their overall carbon footprint by consuming less energy and cooling across the systems. 

Comment

These mid-range and high-end VSP product updates will strengthen Hitachi V’s ability to compete with Dell EMC (PowerMax, PowerStore), HPE, IBM, NetApp and Pure Storage.

Hitachi V is updating its marketing messages and tactics with containerisation support, phrases like “edge to core to cloud”,  the non-disruptive array upgrades (which Pure, for example, has been offering for several years), and a “Data fabric” notion — shades of NetApp.

What we are seeing here are elements of catch-up development, and there’s nothing wrong with that at all — it’s necessary. However, there’s not a lot here on the innovation front to persuade purchasers of Infinidat’s DRAM-cached storage or VAST Data’s single QLC flash tier storage to change their minds. Nor indeed is there much on offer to help Hitachi V claw market share away from its six main competitors in the enterprise storage array area: Dell, HPE, NetApp, Pure Storage, IBM and Huawei.

Availability

The VSP 5000, E Series and Hitachi Content Platform are available for purchase worldwide now from Hitachi Vantara and its partner network, and also through EverFlex.

To the stars: NetApp bringing cloud-native Astra

NetApp is announcing the availability of an “early preview” of a file-focussed addition to its Astra family of Kubernetes products, so that users get both block stores and a cloud-native file store.

Astra Data Store (ADS) is a Kubernetes-native shared file unified data store for containers and virtual machines (VMs) with advanced enterprise data management and a standard NFS client. The software is based on NetApp’s enterprise data management technologies — meaning, we understand, ONTAP.

Eric Han.

Eric Han, a NetApp VP of product management, said in a supplied statement: “With Astra Data Store we’re giving customers more infrastructure options to build modern datacentres, with the ability to deploy world-leading primary storage and data management solutions directly into their Kubernetes clusters.”

Back in August last year, Han blogged that: “the Project Astra team has been redesigning the NetApp storage operating system, ONTAP, to be Kubernetes-native.” We think this is the first appearance of cloud-native ONTAP functionality.

ADS has been designed to fix challenges for Kubernetes users, including the lack of mature shared file services, proprietary file clients, and managing data stores separately for virtual machines and containers. It is said to be one of the first Kubernetes-native, unified shared file services for containers and VMs, and offers multiple parallel file systems on the same resource pool.

The ADS software includes replication and erasure coding technologies for Kubernetes-native workloads so as to increase resiliency.

In the coming months NetApp will introduce more data services, hybrid, and multi-cloud capabilities by itself and co-developed with partners and customers. 

NetApp’s Astra portfolio.

The ADS preview will be publicly available over the coming months, with general availability targeted for the first half of 2022.

Comment

By converting its ONTAP storage software functionality to containerised code and moving it into the Kubernetes space, NetApp is making sure that it is in the front line for offering data storage and services to cloud-native applications and developers.

This means NetApp will be able to offer strong competition to cloud-native startups such as Ondat, the renamed StorageOS, and Pure’s Portworx business unit. It will be able to reassure its existing customers that they have no need to move to a risky startup to get such services — they can stay with trusty NetApp instead. This message could help prevent DevOps people inside NetApp’s customer base choosing a cloud-native startup for their storage. And NetApp can also go to cloud-native developers outside its base and say it is a more reliable storage supplier than any young startup.

Komprise adds global unstructured data search and subset move

Komprise has introduced Deep Analytics Actions (KDAA), a product which provides a systematic way to find specific data across hybrid cloud storage silos and can move a subset of data to data pipelines. 

KDAA is a managed hybrid cloud service. It builds a global file index, stored in the public cloud, by indexing data in-place across file, object and cloud data storage, and can then query that index to search petabytes of unstructured data distributed across multiple silos and locations. It enables Komprise to compete with suppliers such as OneSync and Rubrik, with its acquired Igneous technology.

Matt Madill, senior storage administrator at Komprise customer Duquesne University, said: “Different research groups have unique requirements which users can support with tagging so that those data sets can not only be discovered easily but they can apply the appropriate data management policies to them for long-term storage. We’ll be able to give users the power to have better control of their data and let us know what to archive and when.” 

The action part of this is that KDAA can create a virtual data set based on a query and systematically and continuously move data from multiple file and object silos to a target location.

Komprise diagram.

For example, researchers at a pharmaceutical company can query and extract the files related to a specific experiment generated by a set of researchers, when these files might be a small part of petabytes of research and other data scattered across datacentres and clouds. They can then import this virtual data set into a data lake or data warehouse for further analysis.

Komprise says users can move smaller subsets of data than otherwise into data lakes and warehouses for analysis, which speeds data lake/warehouse load time and hopefully speeds the analytics runs as well — since there is less data to analyse.

Kumar Goswami, co-founder and CEO of Komprise, said: “With Komprise Deep Analytics Actions, departmental users can maximise the business value of their unstructured data by leveraging their domain knowledge to cull and find the right data sets to operate on across all their silos.”

KDAA screen grab.

The global indexing is at file and object metadata level, not content level. That means users can create queries on file attributes and tags such as: data related to a specific tag or project name, inactive projects, file age, user/group IDs, path, file type (e.g. JPEG) and specific extensions, and data with unknown owners.

Tags are custom metadata that data owners and users can add. Unlike the metadata generated automatically by a file or object system, tagging is not necessarily systematic and certainly not automatic — though it can be, as KDAA can be set up to add tags based on an unstructured data item’s characteristics. A toy illustration of such a metadata-level query follows below.
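Here is that illustration — not the Komprise API — of what a metadata-level query amounts to: a filter over file attributes and tags that never opens the files themselves.

```python
# Illustration only (not the Komprise API): querying file metadata and tags.
files = [
    {"path": "/proj/alpha/run1.dat", "ext": ".dat", "owner": "jsmith",
     "age_days": 1200, "tags": {"project": "alpha"}},
    {"path": "/scratch/tmp.bin", "ext": ".bin", "owner": None,
     "age_days": 30, "tags": {}},
]

# Query: files tagged project=alpha that are more than two years old
matches = [f for f in files
           if f["tags"].get("project") == "alpha" and f["age_days"] > 730]

# A continuous "Action" would re-run the query on a schedule and move the
# matching subset to a target such as a data lake ingest bucket.
```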

Users could maximise the business value of unstructured data even more if they could search inside it — run a content-level search.

Check out Komprise’s KDAA web pages to learn more.

Comment 

Automated content indexing requires the content indexing system to look inside a file or object and recognise words that are relevant, rather than articles, pronouns or other relatively content-free items. A book indexer basically looks for nouns (array), names (Komprise) and actions (e.g. versioning). An automated content indexer would have to be able to recognise such words in a file or object and then list them as content metadata items for that file or object.

Such content-level indexing runs would need a huge amount of storage I/O and many processing cycles.

A content-level metadata list could be huge. A 350-page book could easily have a 550-item content index. With a million text files a content index could hold hundreds of millions of entries. An automated indexer in effect builds up a massive quasi-dictionary or key:value store in which words (keys) are listed but not explained. Instead they have references to their use in a file or object (values).
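A minimal sketch of such a quasi-dictionary, with a small stop-word list standing in for real linguistic filtering:

```python
# Toy inverted index: each content word (key) maps to references to its use
# in a file (values), as described above.
from collections import defaultdict

STOP_WORDS = {"the", "a", "an", "it", "they", "and", "or", "of", "to", "in"}

def build_index(docs):
    index = defaultdict(list)
    for name, text in docs.items():
        for pos, word in enumerate(text.lower().split()):
            word = word.strip(".,;:!?")
            if word.isalpha() and word not in STOP_WORDS:
                index[word].append((name, pos))
    return index

index = build_index({"notes.txt": "The array supports versioning."})
print(index["versioning"])  # [('notes.txt', 3)]
```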

If that existed then a content-level search could be run in the same way as KDAA searches file and object metadata today.

A digest with lots of storage news shrimps on the barbie: Cohesity, DDN’s Tintri and TrendFocus disk ships

The focus this week is on Cisco and Cohesity partnering to help security ops people fight malware, TrendFocus’s disk ship data for the third quarter — cue a PC drive increase, and exec changes at DDN’s Tintri business unit.

Cohesity links Helios to Cisco’s SecureX 

Cohesity’s Helios management system has been integrated with Cisco’s SecureX security risk monitoring and response system. It means SecureX admins can see Cohesity DataProtect information (anomaly spotting, resource management, migration, and backup and recovery status) in their dashboard, alongside SecureX’s existing capabilities to automatically aggregate signals from networks, endpoints, clouds, and apps.

This aggregation and correlation of Cisco and Cohesity information should help an IT security team see the emergence of malicious activity more quickly and fully, view operational performance, and shorten threat response times. The scope of ransomware attacks can be better understood, and the security operations (SecOps) team can initiate a workflow to restore compromised data or workloads to the last clean snapshot.

Al Huger, Cisco’s VP and GM of Security Platform and Response, said: “Cisco SecureX’s comprehensive security platform offers customers a system-wide view of security threats and issues. Adding the Cohesity Helios data protection and … data management solution to Cisco SecureX provides businesses with superior ransomware detection and response capabilities.”

Cohesity is now a Cisco Secure Technical Alliance Partner and a member of Cisco’s security ecosystem. The Cohesity-Cisco relationship has enabled:

  • Cohesity Helios as a validated, S3-compatible backup, disaster recovery, and long-term retention solution for Cisco Secure Workload (formerly Cisco Tetration);
  • Cohesity ClamAV app on Cohesity Marketplace based on a Cisco open source antivirus solution;
  • Cohesity integrated secure, single sign-on (SSO) with Cisco Duo.

Every Cisco Secure product includes Cisco SecureX. The integrated solution and support are generally available from Cisco worldwide.

Tintri exec churn

DDN’s Tintri business unit has seen three senior executives leave and a new one appointed:

Phil Trickovic.

Phil Trickovic was appointed SVP of Revenue for Tintri in April, coming from two years at Diamanti. He had previously been Tintri’s VP of worldwide Sales and Marketing.

General manager and SVP Field Operations Tom Ellery resigned in June to join Kubernetes-focussed StormForge.

Paul Repice, Tintri’s VP Channel Sales for the Americas and Federal, left in March this year to join DataDobi as its VP Sales Americas.

Amy Mollat-Medeiros, SVP Corporate Marketing & Inside Sales for Tintri and DDN brands, resigned in June and joined Tom Ellery at StormForge to become SVP Marketing and SDR.

Graham Breeze was appointed Tintri’s VP of Products in March and came from 18 months at Diamanti. He’d also been at Tintri before, in the office of the CRO.

Christine Bachmayer was promoted to run Tintri’s EMEA marketing in April.

Recent Tintri Glassdoor reviews are uniformly pretty negative. We hear changes are coming.

TrendFocus disk ship data: PC drive shipments increase

Thank you Wells Fargo analyst Aaron Rakers for telling subscribers that TrendFocus’s disk ship data for 2021’s third quarter saw about 67.8 million units shipped, up seven per cent year-on-year. Seagate had the leading share, at 42 per cent, Western Digital was ascribed around 37.5 per cent and Toshiba the rest, some 21 per cent.

It’s estimated that 19.3 million nearline disk drives were shipped — more than 250EB of capacity. This compares to the year-ago quarter when 13.3 million nearline disks hit the streets. Rakers thinks Seagate and Western Digital have a near-equal nearline disk ship share at 43 to 44 per cent.

TrendFocus estimates there were ~21 million 2.5-inch mobile and consumer electronics disk drives shipped, lower than the ~26 million shipped a year ago.

There were ~23.5–24.0 million 3.5-inch desktop/CE disk units shipped in the third quarter — an unexpected increase on the year-ago 21.5 million drives.

Shorts

Civo, a pure-play cloud native service provider powered by Kubernetes, announced general availability of its production-ready managed Kubernetes platform. It claimed that, at launch, it is the fastest managed Kubernetes provider in the world — deploying a fully usable cluster in under 90 seconds.

Cohesity has joined the Dutch Cloud Community — an association of hosting, cloud and internet service providers — as a supporting member. So what? Mark Adams, Cohesity’s Regional Director NEUR, said: “We are keen to work with the cloud community to offer either a customer-managed solution, or our Cohesity-managed SaaS implementation, or as some organisations prefer, a mix of both offerings. Together with this community, we will help service providers to consolidate silos and unleash the power of data and drive profitable growth for cloud and managed services platforms.”

DataStax, which supplies the Astra DB serverless database built on Cassandra, has new capabilities in its open-source GraphQL API, enabling developers to build applications with Apache Cassandra faster and manage multimodel data with Apollo. The API is available in Stargate, the open-source data gateway.

Cloud-based file collaborator Egnyte announced its Enterprise Plan ransomware protection is now available as part of its entry-level Business Plan (which starts at $20 per user per month). The offering can detect more than 2000 ransomware signatures, block attacks immediately, and automatically alert admins of the infected endpoint. New signatures are crowdsourced daily. It is also announcing a Ransomware Recovery solution as part of its Enterprise package. The recovery capability allows customers to “look back” at previous file snapshots to determine at which point ransomware infected a file and restore data to that point with a single click.

The latest version of FileCloud’s cloud-agnostic enterprise file sync, sharing and data governance product integrates with Microsoft Teams, so that organisations can share files and links from a single workspace. FileCloud can be self-hosted on-premises, operated as IaaS, or accessed in the cloud.

Iguazio, calling itself the MLOps (machine learning operations) company, today announced its software’s availability in the AWS Marketplace. This software automates machine learning (ML) pipelines end-to-end and accelerates deployment of artificial intelligence (AI) to production by 12x.

GigaOm Data Governance radar diagram, Oct 2021.

Immuta, a universal cloud data access control supplier, announced it was named a Leader in the GigaOm Radar Report for Data Governance Solutions. The company is positioned in the Leader category as a “Fast Mover,” the most innovative, and ahead of all other data access control providers.

iXsystems and Futurex have announced the integration of iXsystems’ TrueNAS Enterprise with Futurex’s Key Management Enterprise Server (KMES) Series 3 and Futurex’s VirtuCrypt Enterprise Key Management. This uses the Key Management Interoperability Protocol (KMIP) and enables centralised key management for TrueNAS.

Kingston Technology Europe announced its forthcoming DDR5 UDIMMs have received Intel Platform Validation, claiming this is the first and arguably most important milestone in validating compatibility between Kingston DDR5 memory and Intel platforms utilising DDR5.

Lenovo has joined Nvidia’s early access program in support of Project Monterey, with its use of the BlueField-2 SmartNIC to offload host server CPUs. It means Lenovo’s ThinkAgile VX and ThinkSystem ReadyNodes will support the BlueField-2 SmartNIC.

Scalable, high-performance file system supplier ObjectiveFS has announced its v6.9 release. This includes new features, performance improvements and efficiency improvements, such as integrated Azure blob storage support, Oracle Cloud support, macOS extended ACL, cache performance, memory usage improvements, compaction efficiency and more. For the full list of updates in the 6.9 release, see the release note.

Phison offers two grades of its Write Intensive SSD. The standard grade comes in a 2TB capacity and is capable of sustained writes of 1GB/sec. Its write endurance is 3,000TB — compared to a typical consumer-level SSD’s endurance of around 600TB. The pro grade is available in either 1 or 2TB capacities, both capable of sustained writes of 2.5GB/sec — more than three times a typical SSD’s speed of 0.8GB/sec. The 1TB pro grade SSD delivers write endurance of 10,000TB, and the 2TB 20,000TB.

In a heavy workload of ten drive writes a day, a standard grade endurance SSD will survive for 300 days of sustained work. And the pro grade 1TB and 2TB models survive 1000 and 2000 days respectively — possibly outlasting the machines they’re running in. The typical SSD running the equivalent workload will only survive 60 days. Phison is now offering write-intensive SSDs through OEMs serving professional users such as PNY.
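The lifetime arithmetic behind those figures is a simple division of endurance by daily write volume. The numbers above are reproduced if each “drive write” is taken as 1TB — that is, 10TB written per day — which is our assumption here:

```python
# Back-of-envelope SSD lifetime: endurance (TB written) / daily writes (TB).
# Assumes a 10TB/day workload (ten 1TB "drive writes" a day), per the figures.
def lifetime_days(endurance_tbw, daily_writes_tb=10):
    return endurance_tbw / daily_writes_tb

print(lifetime_days(3_000))   # standard grade: 300 days
print(lifetime_days(10_000))  # pro grade 1TB: 1000 days
print(lifetime_days(20_000))  # pro grade 2TB: 2000 days
print(lifetime_days(600))     # typical consumer SSD: 60 days
```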

Pure Storage announced the release of a new Pure Validated Design (PVD) in collaboration with VMware to provide mutual customers with a complete, full-stack solution for deploying mission-critical, data rich workloads in production on VMware Tanzu. It provides an architecture, including design considerations and deployment best practices, that customers can use to enable their stateful applications like databases, search, streaming, and AI/machine learning apps running on VMware Tanzu to have access to the container-granular storage and data management provided by Portworx.

Rambus has developed a CXL 2.0 controller with zero-latency integrated Integrity and Data Encryption (IDE) modules. The built-in IDE modules employ a 256-bit AES-GCM (Advanced Encryption Standard, Galois/Counter Mode) symmetric-key cryptographic block cipher. Check out the technical details on the CXL 2.0 controller with IDE here and the CXL 2.0/PCI Express 5.0 PHY here.
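AES-GCM is an authenticated cipher: it encrypts and produces an integrity tag in a single pass, which is what makes it suitable for inline integrity-plus-encryption duty. Here is a purely software illustration of 256-bit AES-GCM using Python’s cryptography package — nothing in it is Rambus- or CXL-specific.

```python
# 256-bit AES-GCM: authenticated encryption with associated data (AEAD).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit symmetric key
nonce = os.urandom(12)                     # 96-bit nonce, unique per message
aesgcm = AESGCM(key)

# The header is authenticated but not encrypted; the payload is both.
ciphertext = aesgcm.encrypt(nonce, b"link-layer payload", b"header-as-aad")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"header-as-aad")  # checks tag
assert plaintext == b"link-layer payload"
```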

Game Drive for Xbox SSD.

Seagate has launched a $169.99 Game Drive for Xbox SSD. It features a lightweight, slim design with an illuminating Xbox green LED bar, USB 3.2 Gen-1 universal compatibility, 1TB capacity, compatibility with Xbox Series X, Xbox Series S and any generation of Xbox One, and installation in under two minutes through Xbox OS. The drive comes with a three-year limited warranty and three-year Rescue Data Recovery Services.

Seagate has developed a Kubernetes CSI driver for its Exos disk drive. It’s available for download under Apache v2 license from Github and can be used by any customer running Seagate storage systems with 4xx5/5xx5 controllers.

Swordfish 1.2.3, having been approved by the SNIA Technical Committee as a working draft, is now available for public review. Swordfish defines a comprehensive, RESTful API for managing storage and related data services. V1.2.3 adds enhanced support for NVMe advanced devices (such as arrays), with detailed requirements for front-end configuration specified in a new profile, plus enhancements to the NVMe Model Overview and Mapping Guide.

Cloud data warehouser Snowflake announced its next Global Startup Challenge for early stage companies developing products for its Data Cloud. The Challenge invites entrepreneurs and early stage organisations that have raised less than $5 million in funding to showcase a data application with Snowflake as a core part of the architecture. It offers the three competition finalists the opportunity to be considered for an investment (a total of up to $1 million across the three finalists), and global marketing exposure.

Storage array software supplier StorONE has signed a strategic distribution agreement with Spinnaker to distribute StorONE’s S1 Enterprise Storage Platform software in the EMEA market.

Data manager and archiver StrongBox Data Solutions (SBDS) has announced a partnership in the UK and Benelux with value-added distributor Titan Data Solutions. Titan will offer end-to-end data management solutions and cybersecurity services.

For the fourth consecutive time, data integration and data integrity supplier Talend announced it has been recognised by Gartner as a Leader in data quality solutions as described in the 2021 Magic Quadrant for Data Quality Solutions.

TimescaleDB, which supplies a relational database for time-series data, announced the new Timescale Cloud, an easy and scalable way for developers to collect and analyse their time-series data. This offering is built around a cloud architecture with compute and storage fully decoupled. All storage is replicated, encrypted, and highly available; even if the physical compute hardware fails, the storage stays online and the platform immediately spins up new compute resources, reconnects them to storage, and quickly restores availability.

Veeam has announced an update to its Backup & Replication product, v11a, offering Red Hat Virtualization backup support, and native backup and recovery for Amazon Elastic File System and Microsoft SQL databases. There’s more support for archive storage backup, and security integrations with AWS Key Management Service and Azure Key Vault to safeguard encrypted backup data from ransomware. Kasten K10 v4.5 will be able to direct backups of Kubernetes clusters that leverage VMware persistent volumes to a Veeam Backup & Replication repository, where their lifecycle can be managed and additional Veeam features and capabilities leveraged.

Veritas has announced the Veritas Public Sector Advisory Board. This consists of “renowned public sector experts” who will advise Veritas, already the leading provider of data protection for the public sector, on ongoing developments such as the recent Executive Order on Improving the Nation’s Cybersecurity. It will work closely with Veritas executives to help prioritize the most important programs and initiatives in addition to recommending actions and direction on strategic business opportunities and go-to-market, route-to-market, customer and operational strategies for the public sector.

Hyperconverged infrastructure software provider Virtuozzo has acquired the technology and business of Jelastic, a multi-cloud Platform-as-a-Service (PaaS) software company, following a ten-year partnership. It says bringing Jelastic’s platform and application management capabilities in-house completes Virtuozzo’s core technology stack, delivering a fully integrated solution that supports all relevant anything-as-a-service (XaaS) use cases — from shared hosting to VPS to cloud infrastructure, software-defined storage and application management and modernisation.

VMware announced an upcoming update to VMware vSphere with Tanzu so that enterprises can run trials of their AI projects using vSphere with Tanzu in conjunction with the Nvidia AI Enterprise software suite. Nvidia AI Enterprise and VMware vSphere with Tanzu enable developers to run AI workloads on Kubernetes containers within their VMware environments. The software runs on mainstream, Nvidia-Certified Systems from leading server manufacturers, providing an integrated, complete stack of software and hardware optimized for AI.

Customer wins

The Hydroinformatics Institute in Singapore (H2i) uses Iguazio’s software on AWS to build and run a real-time Machine Learning pipeline that predicts rainfall by analysing videos of cloud formations and running CCTV-based rainfall measurements. Gerard Pijcke, Chief Consultancy Officer, H2i, said: “With Iguazio, we are able to analyze terabytes of video footage in real time, running complex deep learning models in production to predict rainfall. Repurposing CCTV-acquired video footage into rainfall intensity can be used to generate spatially distributed rainfall forecasts leading to better management of urban flooding risks in densely populated Singapore.”

StorMagic announced that Giant Eagle, Inc., a US food, fuel and pharmacy retailer with more than 470 locations across five states, has selected StorMagic SvSAN virtual SAN software and SvKMS encryption key management to store and protect data in its 200-plus supermarkets with in-store pharmacies. Today, SvSAN is running on three-node Lenovo clusters at each store, and SvKMS on three virtual machines at its primary datacentre.

People moves

Remember Milan Shetti? He was SVP and GM of HPE’s storage business unit, leaving in March last year, and before that CTO of the Datacenter Infrastructure and Storage divisions. He’s being promoted from President to CEO at Rocket Software, an IBM systems-focused business supplying software to run on legacy kit.

John Rollason.

John Rollason resigned from NetApp, where he was senior director for global revenue marketing after being senior director for EMEA marketing. An ex-SolidFire marketing director, he quit in August this year and has become a part-time marketing consultant at Nebulon. He is also MD at REMLIVE in the UK, which is an electrical safety warning indicator specialist.

Keith Parker, Product Marketing Director at Pavilion Data, is leaving for another opportunity.

Ceph hardware and software system builder SoftIron has appointed Kenneth Van Alstyne as its CTO, responsible for “building out SoftIron’s technology strategy and roadmap as the company advances its mission to re-engineer performance and efficiency in modern data infrastructure through its task-specific, open source-based solutions.” He comes from Peraton, the US Naval Research Laboratory, QinetiQ North America and Knight Point Systems.

Bold move: money and mouth co-located with Rubrik’s ransomware recovery warranty

Data protector Rubrik has announced a $5 million ransomware recovery warranty for Rubrik Enterprise Edition.

The warranty will cover expenses related to data recovery and restoration should Rubrik be unable to recover protected data following a ransomware attack.

Bipul Sinha.

Bipul Sinha, Rubrik’s CEO and co-founder, issued a statement: “With this new Ransomware Recovery Warranty, our customers have our commitment that we care as deeply about protecting their data as they do. With ransomware attacks increasing more than any time in history, having a recoverable copy of your data has become a top agenda item for CIOs and CISOs, and we understand how important data security is to ensuring the security of a business.”

This offer will be available for Rubrik customers running Rubrik Enterprise Edition and working with a Rubrik Customer Experience Manager (CEM) to ensure industry data security best practices are in place.

Rubrik’s SaaS-based Enterprise Edition includes zero-trust data protection, ransomware investigation, sensitive data discovery, incident containment (data quarantine) and orchestrated application recovery.

Matthew Day, CIO of Langs Building Supplies, said: “With this bold move, Rubrik’s Ransomware Recovery Warranty proves they’re putting their money where their mouth is.”

You can learn more about the ransomware recovery warranty here by registering your interest.

Quantum’s exabyte-munching scale-out modular tape library

Oh, it turns out Quantum’s success in selling tape libraries to three of the top hyperscalers is due to specially developed scale-out and modular tape libraries and object software.

Eric Bassier.

In August it said it was engaged with six of the top ten hyperscalers, either in production or in product trials. During an IT Press Tour online briefing yesterday Eric Bassier, Quantum’s senior director for product marketing, said: “Three of the world’s biggest hyperscalers, three of the top five, use Quantum in production. Predominantly, they’re using Quantum tape. … And we also have an initial tape footprint deployed at the other two … in a proof of concept stage.”

It’s also expanding to a tier of customers one down from the hyperscalers: “This last quarter, we added three design wins and I would call them international web scale companies. So these are not in the top 10 but they’re in the top 100. … One of these customers is a popular social media video sharing application. Our initial footprint is a two exabyte archive that combined StorNext 7 software with Quantum tape in a RAIL configuration as this massive video archive for all of their content.”

RAIL is Quantum’s Redundant Array of Independent Libraries concept, a RAID-like scheme providing increased scale, protection and performance. A second webscale client has a 1.3EB initial deployment.

The company has a Scalar line of tape library products:

  • i3 with 25 to 400 tapes and 18PB compressed capacity with LTO-9 media;
  • i6 with 50 to 800 tapes and 136PB of maximum compressed capacity;
  • i6000 with 100 to 12,000 tapes and 540PB of compressed capacity in 21 racks.
Quantum’s Scalar tape library line.

These are scale-up, monolithic libraries. Bassier believes the hyperscaler tape wins are unlikely to use these products, and he may be right.

He said: “We’ve done a lot of custom engineering work to make our tape systems designed for archives at that scale. This is really hardware-based engineering. And there are capabilities and there are actually even models of tape systems that we sell into this market that we do not have on our web site today.”

Scalability

What scale is that? One international webscale customer was said to have a two-exabyte archive — that would require four Scalar i6000 libraries to reach that level. We can imagine that the hyperscaler customers have multi-exabyte configurations. A 10EB deployment would need almost twenty i6000s and around 400 racks. That sounds unwieldy, and it would be ridiculously complex to manage twenty separate monolithic libraries.

This image from a Quantum RAIL slide looks like racks filled with eight Scalar i6 library 6U chassis. That would be 48U, higher than a standard 42U rack. Having multiple small library chassis, each with their own drive, would certainly help performance.

During the briefing Bassier said: “We have a model of tape library today that we don’t have on our web site that has better densities than anything that we show on our web site. And … that’s the model that we’re selling to some of these large hyperscalers.”

So Quantum has a Scalar iSomething that is denser — meaning more media in less space — than the i6000. He added: “If someone were to purchase ActiveScale cold storage, we would deploy this tape system as part of delivering that.”

It’s also scalable out to … well, 10EB and beyond, we think. Well beyond, because the hyperscalers could keep cold data for five years, possibly ten, possibly even more, and they just keep on accumulating it. Are we heading for 100EB archives?

Front-end management

The ActiveScale scheme has an S3-accessed front-end tier of active data storage using disk for data and flash for metadata, and a back-end cold tier, S3 Glacier-class storage, using objects written to tape in 1TB chunks and using erasure coding. 

This cold tier has multiple libraries in the RAIL scheme. It also uses dynamic data placement, with the ActiveScale software mapping an object’s name to a particular tape in a particular library. This is not an object content-addressed hashing scheme with objects stored in a ring of systems, like the Scality RING. Instead, think of it as a quasi-single-level file:folder scheme, with the folder containing mapping information to link object names to their addresses in the RAIL system.
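Our reading of that placement scheme, sketched as a toy catalogue (this is not Quantum code): object names map directly to physical tape addresses, with no content-addressed hashing involved.

```python
# Toy catalogue mapping object names straight to (library, tape, offset)
# addresses — a direct lookup, not a hash ring.
catalog = {}

def place(name, library, tape, offset):
    catalog[name] = {"library": library, "tape": tape, "offset": offset}

def locate(name):
    return catalog[name]  # one lookup resolves the physical location

place("video/clip-0001.mp4", library="rail-03", tape="LTO9-000123", offset=0)
print(locate("video/clip-0001.mp4"))
```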

In a way, Quantum has invented a way of clustering tape libraries behind the ActiveScale front-end system, making its monolithic libraries kind of modular through clustering. The new tape library hardware it has developed will be natively modular, we expect.

The ActiveScale system provides a single pane of glass for managing a cluster of modular libraries — that would solve the problem of managing multiple separate libraries — and Quantum’s CatDV software is set to be developed into a content indexing scheme for them.

We can expect more information to be revealed fairly soon, as Bassier said: “We’re going to have a lot more news around tape this quarter.”

Quantum said the hyperscale (and webscale) tape market is set for growth. It has more than 30EB of deployed capacity in these two markets already and reckons it is the runaway leader. Possibly other tape library vendors, such as IBM, HPE and Spectra Logic, will also follow the modular library route.

Who dat? Kubernetes wave rider StorageOS changes name to Ondat

StorageOS is rebranding to Ondat, signifying a Shift Left exercise.

The company supplies software-defined cloud-native storage for enterprise Kubernetes-orchestrated environments and has more than 5500 installed Kubernetes clusters worldwide. It has produced a blog explaining that the rebrand is about addressing a Shift Left in storage.

What is that when it’s at home? “Shift Left” is DevOps jargon for automating cloud-native application test, management and operational processes, and doing them early in an application’s life cycle. Imagine an application’s agile development life cycle flowing from left to right through define, plan, code, build, test, release, deploy, operate and monitor, and back to plan again in a loop. Shift Left means doing things earlier in the flow to get sight of problems sooner and fix them quicker.

VMware blog diagram.

The Ondat blog explains: “Data, storage and storage management is ‘shifting left’ to the developer. Kube-native developers and platform engineers are becoming the most influential consumers of enterprise storage and storage-based data services.“

We should realise that: “Developers compose the entire application platform — including storage, as all applications store state somewhere. Organisations must make it easier for developers to get storage right, from the start, in order to avoid having to fix it later.”

In Ondat’s view, Kube-native developers expect push-button access to a persistent data store with scale, high availability, flexibility, security and performance, and no cloud services lock-in.

It says other CSPs, storage vendors and system suppliers view developers “as a new route to lock in customers to their storage.” Ondat does not, and provides customers with the freedom to choose, configure and control the platform, and the places where their applications are built and run.

Its eponymous software is a — in fact it says the — Kube-native platform for running stateful applications, anywhere, at scale.

The name

But the name: Ondat. Where does that come from? StorageOS, as it then was, announced on October 4 that it had joined the Data on Kubernetes Community (DoKC), an openly governed community for data on Kubernetes, as a Silver Sponsor. It looks like Ondat is a rearrangement of “Data on” — shift the “on” left, drop an “a” and you have Ondat — a kind of visual Shift Left effect.

In StorageOS’s DoKC announcement, CEO Alex Chircop said: “As companies migrate more business-critical applications onto Kubernetes, DevOps teams and Platform Engineers are becoming the new controllers of enterprise data. This can open up enterprises to massive new risks, but offers equally large opportunities for storage innovation, freedom and cost savings. The DoKC is an open, collaborative community at the heart of this movement. These are exactly the people StorageOS is working to serve.”  

StorageOS wants to fuel open collaboration and knowledge-sharing in the way data is handled on Kubernetes. Chircop said this: “The more innovation we see in this space, the greater the demand will be for StorageOS technology. We offer Kube-native technology that delivers data freedom and control; we enable innovation and allow new workloads to be brought onto Kubernetes; and we give our users independence from storage vendors and cloud provider lock-in.”

The Ondat rebrand is entirely in keeping with this view.

Coldago pours cold water on Gartner Distributed Files and Object Storage MQ

An interview with Coldago research analyst Philippe Nicolas has revealed what he sees as vendor and product selection choices that in his view weaken Gartner’s Distributed File Systems and Object Storage MQ.

Read the Q&A below and see what you think.

Blocks and Files: Should distributed file systems and object storage be viewed as a single category?

Philippe Nicolas.

Philippe Nicolas: Hmm, it’s a good question. What is true is both address unstructured data but many applications can use one and not the other, even if the access method is standardised. At the same time, we see more and more vendors offering both interfaces. Clearly it creates a challenge when you need to analyse the segment.

If we consider these two access models as one category, Gartner has to select products that do both, which penalises file-only or object-only vendors. But why should a vendor be penalised when it delivers only one interface, especially when that interface can be a very good one?

Considering the two as one category invites us to make the same point we have made for years: Gartner considers one product for some vendors and multiple products for others, and therefore creates an unfair or unbalanced comparison. So the real question is, do we compare one product or do we compare vendors?

Blocks and Files: Some suppliers, such as Pure Storage and Scality, are combining file and object storage. Shouldn’t analysts do the same? And if not, why not?

And you can add Caringo (now DataCore), Qumulo, DDN, Dell, NetApp, VAST Data or Cloudian to extend the list; I have probably even forgotten a few. This is a general answer that demonstrates once again that differentiators across offerings are reduced year by year. It’s also a sign of maturity. Having check boxes ticked in RFPs does the job, but product behaviour is very different.

How vendors implement their access layers really differs. But it also confirms the merger between offerings — because it’s essentially two access methods to access the same unstructured content.

Also, you can merge the category, but what about pure object storage or pure file storage products/vendors? Does it mean we need a separate sub-MQ for each category, with the presence of players who deliver the individual access layers? I think this is where other analysts’ reports come into the game, and users must consider several of them to form their own ideas and opinions.

Purists would tell you that object storage is more than just an interface and they’re right, but nobody cares today about internal design, especially when products expose both interfaces. Many users ask their vendors: “Could you expose my content on a file server via S3?” and the reverse as well.

But all these products are far from equal when we look at access methods. Do you really compare native NFS access built on object layers and vice-versa? Of course, it can provide some flexibility but users’ experience shows very diverse capabilities and realities.

And lastly, the problem with grouping the two is that some pure file or object players are sanctioned. And this is a paradox — you can be a very good product in one category but badly positioned in the global quadrant. On the other side, having the two, let’s say with average capabilities, provides some artificial advantages.


Blocks and Files: With flash hardware and better-designed object software accelerating object storage to filer-level performance, and so into primary storage roles, aren’t the two access protocols (file and object) merging?

Flash has been used in object storage for metadata for a long time, but it was too expensive for data in large configurations. The reality was also that some object storage products didn’t get any performance gain from using flash for data, and several of them had to adapt, change and update their software to maximise the gain. And then flash pricing came down, which created some extra opportunities.

Your point is interesting. I remember a recent study by one vendor claiming that object storage with flash can do primary storage. In fact, primary storage is only determined by its role and not by a technology. Many people limit primary storage to block storage and it’s a very narrow view of the sector. Primary storage is where data is generated and thus it’s active and considered hot data. It supports production and sustains the business. With that in mind we understand that it can be block, file or object, whether HDD, flash, SCM or full DRAM lies underneath.

On the other side, secondary storage is a protection level, needed to protect the business and support IT in its mission. Data is not generated there — it’s copied from the primary level. This secondary level is full of inactive data — cold and even fixed or reference data. Here we see also some block, file or object access systems.

Your remark confirms, once again, that object storage has become an interface in people’s minds.

Blocks and Files: What is your view of the general relevance and usefulness of Gartner’s Magic Quadrant for Distributed File Systems and Object Storage?

I like it, I like that exercise. It’s good that such tools exist alongside several others, inviting users to read and analyse a few of them and understand the context and criteria so they can form their own opinion. We just regret that some visible players are not listed and that Gartner didn’t accept or consider points many other people make year after year.

Even if we understand the criteria chosen by Gartner, it is always a surprise not to see some players — whether because they refused to be listed or because Gartner eliminated them. Look at the trajectory of VAST Data in the market — not having it listed is pretty bizarre and makes this report a bit incomplete.

What about open-source? What about MinIO, clearly the number one object storage by the number of instances running on the planet?

And the reverse is also true in this MQ. I’m pretty sure that all readers were surprised to see some brands on it this year.

Blocks and Files: How should and could IT buyers find out MQ-type information about the distributed file systems and object storage suppliers if the Gartner MQ is rejected?

Hmm, there is no single source of information. I invite buyers to do their own search of similar reports and analyses, and to build their own matrix with their own criteria as a mix or union of these documents. Honestly, they already do this for RFPs; it’s just an extension. When they need to research the state of the art in a domain, they have to do it. A good source is a few key information sites like yours, StorageNewsletter, TechTarget, Speicherguide.de and a few others that go beyond posting press releases and actually analyse things. And lastly, if buyers can speak directly with users who have already deployed and adopted solutions, they’ll get excellent input.

Gartner 2021 files and objects MQ gets Purified, Nutanixed and Wekanated

Pure Storage has become a leader in Gartner’s latest Distributed File Systems and Object Storage magic quadrant, and both Nutanix and Weka enter this MQ for the first time.

The files and object storage MQ is produced once a year and features the well-known Leaders, Challengers, Niche Players and Visionaries quadrants in a square chart with Ability to Execute and Completeness of Vision axes. Suppliers are scored on various attributes of these two concepts and then placed in the MQ diagram according to their summed and weighted scores, with a presence towards the top right being desirable; that area combines the highest ability to execute with the most complete vision.
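For readers unfamiliar with how such placements are derived, here is a generic weighted-sum sketch; the attributes and weights are invented for illustration and are not Gartner’s actual criteria.

```python
# Generic weighted-criteria scoring: each MQ axis is a weighted sum of
# attribute scores. Attributes and weights below are invented.
execute_weights = {"product": 0.4, "viability": 0.3, "sales_execution": 0.3}
vendor_scores   = {"product": 4.0, "viability": 3.5, "sales_execution": 4.5}

ability_to_execute = sum(execute_weights[k] * vendor_scores[k]
                         for k in execute_weights)
print(ability_to_execute)  # 4.0 on a 1-5 scale
```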

We reproduce last year’s MQ alongside the latest version so that we can see how supplier positions have changed, and which suppliers have exited and entered the Gartner analysts’ view of this field.

The 2020 (left) and 2021 (right) MQ diagrams from Gartner’s Distributed File Systems and Object Storage report.

As before, Dell and IBM are the two leaders, with Dell top of the tree. Scality and Qumulo are also in the Leaders’ quadrant.

Pure Storage, with its FlashBlade product, has been promoted from a Challenger to the Leaders’ quadrant. Matt Burr, VP and GM, FlashBlade, at Pure Storage, issued a quote bigging up the company, the product and Gartner: “Since FlashBlade’s inception, we have believed that unifying unstructured file and object data to consolidate workloads on a single platform is critical to powering the future of modern applications. It is great to see the industry follow suit and our position in this Magic Quadrant validate this vision. This is an honor we could not have achieved without our great team, customers, and partners.”

You can get a copy of Gartner’s report from Pure’s website, with no registration required, and also from Nutanix and Weka, where registrations are needed.

Nutanix, with its Files and Objects offering, and Weka, with its WekaFS product, enter the MQ for the first time, and both are delighted to be included.

Rajiv Mirani, Nutanix CTO, put out a statement: “We believe being named in the Gartner Magic Quadrant for Distributed Files and Objects Storage is a significant recognition of Nutanix’s storage offerings, which aim to simplify and lower operating costs.” 

Liran Zvibel, Co-founder and CEO at Weka, said: “We are extremely pleased that Gartner has placed Weka in the Visionaries quadrant, which definitely is fitting for us at this stage of growth.”

MQ entry is great free marketing for Nutanix and Weka, and Pure of course — apart from having to pay Gartner for the right to distribute report copies, that is.

Quantum has changed square, moving from the Challengers’ quadrant to the Visionaries one. The Gartnerites reckon its ability to execute has decreased but its vision has become more complete. The report says: “ActiveScale is lacking in feature parity compared to its competitors. For example, it is missing features such as data deduplication and compression, QoS, mixed flash support, NFSv4 and SMB, hybrid cloud integration, and dual protocol access.”

Object storage supplier Caringo is the only deleted vendor in this MQ. It has been bought by DataCore and its position has not been inherited by DataCore.

Cohesity, MinIO and VAST Data get honourable mentions in this MQ report for being “noteworthy vendors that did not meet all inclusion criteria, but that could be appropriate for clients, contingent on requirements”.

NetApp setting up streaming TV service — but Netflix is in no danger of being stung

NetApp is setting up its own streaming TV service, starting with digital content from its Insight 2021 virtual event and including a performance by Sting. You remember Sting?

Whitney Cummings.

There will be an Insight event channel on the service, and the Insight event is morphing from an annual show to an always-on online hub for on-demand and live content. As well as webcasting a live Sting performance, NetApp’s Insight channel will feature hosting by Whitney Cummings — billed as “the reigning Queen of American stand-up”.

A sample Cummings joke goes: “Found a fragrance called Vixen. Guess they can’t name them after the people who actually wear them. Nobody’s going to buy Secretary.”

And another: “Stand-up is a lot like sex. There’s a lot of crying involved and I get paid to do it.”

We wonder — we seriously wonder — just how rude she will be and what she will say about NetApp.

We envisage NetApp TV being like a series of video blogs, podcasts and interviews, with execs and customer people, about NetApp’s products, services and views on industry trends. Maybe NetApp will use it for product and service launches as well.

NetApp Insight 2021 runs from October 20 to 21 and you can register here.