
Toshiba’s 2TB disk is a low-end gaming and archive drive 

Earlier this week Toshiba announced a 2TB disk drive designed for desktop, PC computing, gaming and storage applications, where performance, capacity and reliability are all critical.

We were surprised at this phrasing, which describes the highspin P300 Desktop PC Hard Drive, spinning at nearline drive speed – 7,200rpm – and using shingled magnetic recording (SMR) media, with its inherently slower data rewrite speed than conventional non-SMR media.

Toshiba’s specs say the drive has a larger-than-normal (for a 2TB drive) 256MB buffer to combat the SMR slowdown, giving a sustained transfer rate of up to 210 MiB/sec – a 19 percent increase over the conventional P300 Desktop PC Hard Drive.

We asked Toshiba three questions about the new drive’s characteristics.

Blocks & Files: It would be great if Toshiba could explain why, in these days of SSDs for notebooks, desktops and gaming systems, the updated P300 is described as “a highly optimized choice for addressing growing desktop computing demands, as well as web applications, gaming and data archiving work”? A 2TB disk is not that good in capacity terms for archiving when 18TB disks are readily available. How would Toshiba justify it as suitable for archiving?

Toshiba: Yes, there are higher capacities and better performing products on the market than the P300, but these usually come with a higher price tag. The P300 2TB is defined by us as an entry-level product, for everyday personal use. When referring to terms such as gaming, archiving etc., it refers to general personal use, not to professional or enterprise use. Therefore, for some end-customers, the P300 can be a very good and cost-effective solution for a range of activities, whether it is saving games, video footage or archiving data.

Blocks & Files: A 7,200rpm disk is not, in many people’s minds, suitable for gaming as it is too slow (compared to an SSD). How would Toshiba justify it as suitable for gaming?

Toshiba: The experience range of people gaming these days starts from the young (educational games), through teens and families, all the way to the professional gamer. HDDs may not offer the same speeds as SSDs, or the acoustic performance a professional gamer seeks. However, they are affordable and reliable for those starting out and the mid-field.

Additionally, hard drives can be a good option for games where loading speed is of lesser importance and where there are higher amounts of media files and data to be stored or backed up. Actually, we recommend using an HDD (for example with 2TB) in conjunction with an SSD for gaming, as a cost-effective solution that utilizes the advantages of both technologies. This is where we see the P300 fit in.

VAST Data flash cost heading below nearline disk

Alice by John Tenniel, 1865

VAST Data is making data reduction improvements to get its per-TB cost down to nearline disk levels and looks set to plunge below that level with coming hardware and software changes.

The company supplies a Universal Storage software product that provides file and S3 storage on certified hardware configurations from partners such as Avnet. The hardware is based on a single data tier of QLC (4bits/cell) NAND with a storage-class memory tier for metadata and buffering incoming writes.

CMO and co-founder Jeff Denworth presented to an IT press tour in Silicon Valley in June and talked about software changes that are going to increase the capacity of VAST’s Universal Storage systems.

The company’s software implements a disaggregated shared everything (DASE) architecture in which stateless compute nodes can see all of the NVMe SSD capacity. The average VAST customer has 12PB (raw) of flash capacity. Data reduction using similarity hashes is used to increase effective capacity.

On average VAST reckons it has a 3:1 data reduction ratio across its installed systems, which would give the average customer 36PB of effective capacity.

Denworth said VAST recently added adaptive chunking – meaning variable block length – to its data reduction system. Based on internal testing, it believes the combination of similarity hashing and adaptive chunking gives it a 30 to 70 percent advantage over Data Domain (PowerProtect in Dell’s new branding). The 70 percent number came from storing and reducing Commvault backup files for some SQL Servers.
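
Adaptive chunking is generally implemented as content-defined chunking: a rolling hash over the data stream decides where block boundaries fall, so an insert near the start of a file does not shift every downstream block and break deduplication matches. VAST has not published its algorithm, so the Python sketch below is only an illustration of the general technique, with hypothetical window, chunk-size and mask parameters.

    from collections import deque

    WINDOW = 48               # bytes in the rolling-hash window (hypothetical)
    MASK = (1 << 13) - 1      # boundary when hash & MASK == 0 -> ~8KiB average chunk
    MIN_CHUNK, MAX_CHUNK = 2 * 1024, 64 * 1024
    PRIME, MOD = 31, 1 << 32
    POW = pow(PRIME, WINDOW - 1, MOD)   # weight of the byte leaving the window

    def chunk_boundaries(data: bytes):
        """Yield (start, end) offsets of variable-length, content-defined chunks."""
        start, h, window = 0, 0, deque()
        for i, byte in enumerate(data):
            if len(window) == WINDOW:
                h = (h - window.popleft() * POW) % MOD   # drop the byte leaving the window
            h = (h * PRIME + byte) % MOD                 # fold in the incoming byte
            window.append(byte)
            length = i - start + 1
            if (length >= MIN_CHUNK and (h & MASK) == 0) or length >= MAX_CHUNK:
                yield (start, i + 1)                     # data-dependent boundary
                start, h, window = i + 1, 0, deque()
        if start < len(data):
            yield (start, len(data))                     # trailing partial chunk

Each chunk would then be fingerprinted – with a similarity hash, in VAST’s case – and compared against existing blocks, which is where the reduction comes from.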

Version 4.4 of VAST’s software will add data-awareness, which will apply specific reductions to particular types of data, such as integers and floating point numbers. If the reduction algorithm knows that a certain piece of data is a floating point number then it can reduce it more than if it were undifferentiated data. This could be applicable to VAST storage being used in market data, life sciences and data warehouse applications.
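
To see why type awareness helps, consider a column of floating point numbers: neighbouring values in time-series or sensor data usually share their sign, exponent and high mantissa bits, so XOR-ing each 64-bit value with its predecessor produces words that are mostly zero bits and tend to compress much better than the same bytes treated as opaque data. This is the general idea behind well-known schemes such as Gorilla-style encoding, not a description of VAST’s own algorithm; the sketch below is illustrative only.

    import struct
    import zlib

    def xor_delta_encode(values):
        """XOR each IEEE-754 double with its predecessor; similar neighbours leave
        long runs of zero bits that a generic compressor then squeezes well."""
        prev, out = 0, bytearray()
        for v in values:
            bits = struct.unpack("<Q", struct.pack("<d", v))[0]
            out += struct.pack("<Q", bits ^ prev)
            prev = bits
        return bytes(out)

    # Toy comparison on a slowly varying series (numbers are illustrative only).
    series = [20.0 + 0.001 * i for i in range(10_000)]
    raw = b"".join(struct.pack("<d", v) for v in series)
    print("undifferentiated:", len(zlib.compress(raw)))
    print("float-aware:     ", len(zlib.compress(xor_delta_encode(series))))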

VAST has seen an additional 25 percent data reduction in testing and believes there is more to come. A slide suggests this is worth $100,000 per petabyte – a huge saving with multi-petabyte systems. Denworth said VAST will supply specific compression algorithms for imagery and other file formats in the future, again looking to produce higher reduction ratios for these data types.

This is similar in concept to Ocarina’s content-aware file and image compression product technology which could compress image types previously thought incompressible. Dell bought Ocarina in 2010 and it’s not clear if Ocarina technology is used by the PowerProtect deduping algorithms. Denworth agreed there were parallels with Ocarina’s technology.

He thinks VAST will grow the average data reduction ratio to between 4:1 and 5:1 across its fleet this year. This will enable it to match or even beat nearline HDD $/TB.

Asked about PLC (5bits/cell) flash, Denworth said it will provide another 20 percent cost reduction. As 3D NAND layer counts go past 200, there is a further cost reduction benefit that will compound with PLC. We think this will mean VAST $/TB will go below nearline HDD cost, and that could happen in the 2024/2025 period.

This could provide a huge boost to VAST Data storage use, particularly for fast restores of cold data.

Storage news ticker – June 7

DPU and composable infrastructure supplier Fungible has announced Fungible Storage Cluster (FSC) 4.1, providing support for vSphere environments requiring high performance. Its all-flash array based on NVMe/TCP is now certified for vSphere and available for VMware virtualized environments. Customers can plug FSC storage into their ESXi servers with NVMe/TCP and get what appears to be local storage. The resulting performance is nearly identical to local storage even though it is a shared resource, Fungible said. vSphere users may also see storage cost savings since FSC’s erasure coding provides robust data protection which can be less expensive than traditional RF1 or RF2 replication.

Commvault chief revenue officer Riccardo Di Blasio has sold 17,182 shares at $65.03/share to receive $1,117,345.46. An SEC filing reports the details.

Open-source database company EDB has received “majority growth investment” from Bain Capital Private Equity to help it grow. This means Bain owns more than 50 percent of EDB, whose technology accelerates Postgres for enterprise customers. Ed Boyajian will stay on as president and CEO. Great Hill Partners, which acquired EDB in 2019, will maintain a significant shareholding. Financial terms of the transaction were not disclosed. EDB serves more than 1,500 customers in 86 countries, including leading financial services, government, media and communications, and information technology organisations such as Dell EMC, Ericsson, KT Corporation, Mastercard, Nokia, Siemens, and Sony. The open-source services market is projected to reach $66 billion by 2026.

File-based collaboration supplier Panzura has appointed David Wigglesworth as its new global VP of sales. He has served in senior leadership roles at EMC, OVH, VMware, and most recently Commvault. Panzura says that since its “refounding” in 2020, led by CEO Jill Stelfox and CRO Dan Waldschmidt, it has brought seven products to market, more than doubled its annual recurring revenue (ARR), and has achieved a pace of growth 4x that of its competitors. By June 2023, the company is expecting to achieve $100 million ARR.

Object storage supplier Scality has provided an unnamed US bank a primary 80PB data lake with a secondary disaster recovery (DR) site, active/active access and sub-60-second RPO across two datacenters separated by over 1,200 miles. The bank now has a fully active secondary site that avoids the disadvantages of an idle (passive) hot-standby setup. This RING setup was part of a $100 million HPE GreenLake deployment. It provides high-performance S3 API access, with peak performance rates to support hundreds of terabytes per day of new data being written and simultaneously replicated to the other site. RING replication is near-instantaneous in asynchronous mode. In the event one of the sites suffers an outage or failure, applications will access data seamlessly and without disruption so the bank can continue normal operations, Scality says.

Software-defined storage supplier StorPool Storage has said Namecheap, the world’s second-largest domain retailer and global hosting provider, is using its storage software as part of a new hyperconverged infrastructure (HCI) platform. The hardware is Supermicro servers with 64-core AMD EPYC 7742 processors and NVMe SSDs. The system cures noisy neighbor problems with the previous fleet of servers and directly attached storage.

SUSE Rancher now has more robust container storage. Longhorn 1.3 delivers an enhanced API using Kubernetes CRDs, which allows Longhorn settings to be customized via kubectl and GitOps-based tools. Customers also get a dedicated storage network option, which accelerates storage replication performance through dedicated NICs, volume cloning support that duplicates environments, and persistent data for scaling, testing, and validating cloud-native apps.

NetApp crams ransomware, single subscription features and more into hybrid multi-cloud


NetApp has announced improved ransomware protection, hybrid cloud storage in a single subscription, unified management in a single user interface, and closer collaboration with VMware to help transition workloads to the cloud.

The idea is to provide a single NetApp storage experience to the on-premises and public cloud worlds and the latest announcements help bring this closer.

Ronen Schwartz, SVP for Cloud Volumes Service at NetApp, said in a statement: “With NetApp’s simplified management and consumption experience, organizations can enjoy improved security, manageability, speed of operations, and cost savings.”

There’s an array of separate new features in this hybrid multi-cloud announcement starting with ONTAP. Version 9.11.1 of NetApp’s AFF and FAS storage array OS adds enhanced ransomware detection and expanded recovery from ransomware attacks.

In June last year NetApp announced Cloud Manager, a central console for accessing hybrid cloud services:

  • Cloud Volumes – ONTAP Cloud Volumes in AWS, Azure and GCP
  • Cloud Backup as-a-service for on-premises and in-cloud ONTAP data, with StorageGRID supported both as a source and a target
  • Cloud Data Sense for data discovery, classification and governance in NetApp’s hybrid cloud
  • Cloud Insights to visualise and optimise hybrid cloud deployments
  • Cloud Tiering to move cold data to lower-cost storage, including on-premises StorageGRID
  • Astra, which supports on-premises and in-cloud Kubernetes-orchestrated container workloads.

Cloud Manager can now manage Keystone services, track software licenses, monitor infrastructure health and provide proactive recommendations to optimize costs and data protection with automated actions. 

Keystone is NetApp’s storage-as-a-service (STaaS) subscription billing option. It gets added STaaS for hybrid cloud, a single hybrid cloud subscription covering Cloud Volumes ONTAP and Cloud Backup. Customers have the ability to dynamically move capacity and supporting licenses across clouds and can reallocate on-premises spend to cloud spend on a quarterly basis. There is a Keystone Advisor in AIQ to size conversion of existing systems to the Keystone service.

Cloud Backup and Cloud Data Sense also get better anti-ransomware features. Cloud Insights now has autonomous ransomware protection integration and NetApp’s Professional Services has a Ransomware Protection and Recovery service. A NetApp graphic details the new anti-ransomware features.

NetApp has been certified by VMware for use as an external supplemental NFS datastore with VMware Cloud Services, across AWS, Azure and Google Cloud. Such a datastore can be used for data-intensive workloads running in a single cloud or across multi-cloud environments. NetApp says it’s the only vendor certified. It says it can deliver the same levels of enterprise-class data management that joint NetApp/VMware customers have been used to on-premises to workloads running in any of the major public clouds. 

The NetApp supplemental VMware Cloud Services datastore facility is in private preview for AWS, Azure and GCP. Keystone hybrid STaaS early access is now available. A Ronen Schwartz blog provides background context.

Storage news ticker – June 6

Arcserve announced the expansion of its OneXafe family of immutable data storage products with the OneXafe 4500 Series. Its capacity can be up to 216TB. It needs near-zero deployment effort and integrates with existing OneXafe 4400 Series clusters. The 4500 has built-in ransomware protection (logical air-gap), and a scale-out clustered design with a single global file system. OneXafe offers global inline deduplication and data compression. 

Baffle announced the availability of its Data Protection Service Transform for Apache Kafka for on-the-fly data protection. Developers, data engineers, and operators can benefit from automated data de-identification and protection as information is ingested into the cloud and used by applications. Baffle automatically transforms data on the fly as it moves into the pipeline with a plug-in that utilizes the Single Message Transform (SMT) capability, de-identifies sensitive data, and controls who can access and use that data in the business. The Baffle DPS Transform has been verified by Confluent. Baffle, Kafka, and Confluent customers now have simplified integration of security controls into an Apache Kafka stream with the Baffle DPS Transform. 

Dell has upgraded its Azure Stack HCI system with extra security, Azure Arc management, Nvidia A30 GPU and Azure Virtual Desktop (AVD) service support. Azure Stack is Microsoft’s Azure public cloud Hyper-V and HCI stack software running in Microsoft partners’ hardware, and Dell is one such partner, with its servers and storage. Arc can now serve as the centralized control plane for distributed on-premises Azure Stack HCI deployments.

The security additions include:

  • Dell HCI Configuration Profile Policies for Azure Arc and Windows Admin Center (WAC) that prevent malicious threats and inadvertent changes to operating system, BIOS, and network settings; 
  • Dell Infrastructure Lock that protects system and configuration changes from unauthorized users;
  • Microsoft’s Secured-core server to add infrastructure hardware security.

Composable systems supplier GigaIO announced the rack-scale GigaIO Composability Appliance: University Edition, powered by AMD and purpose-built for the higher education market. It can be used in a classroom or laboratory setting without requiring dedicated IT expertise. The appliance, delivered with Nvidia Bright Cluster Manager pre-installed, is a complete, future-proofed composable infrastructure system that provides cloud-like agility to on-premises infrastructure, allowing cloud bursting. It can connect AMD accelerators, AMD-powered servers, and other devices in a seamless dynamic fabric. The University Edition units are container-ready and easily composed via bare metal. Future iterations of the appliance will bring composability to manufacturing and life science users over the coming year.

MariaDB announced a collaboration with MindsDB to make machine learning predictions accessible with MariaDB SkySQL. This should simplify analyzing and predicting future trends, and put ML capabilities into the hands of MariaDB users. By using MindsDB in SkySQL, data science and data engineering teams can increase their organization’s predictive capabilities. MindsDB’s open source framework allows ML models to be identified and developed quickly using AutoML and then deployed at speed and scale with AI Tables in MariaDB. MindsDB enables database users to get predictions as database tables, using simple queries to unlock the value in the data they already have.

Software-defined storage supplier OSNexus announced its QuantaStor platform now integrates with Resilio and its peer-to-peer N-way sync platform. QuantaStor users can synchronize NAS storage across clusters globally in real time. The combination enables myriad collaboration workflows, especially in industries like media & entertainment, where sharing large files and remote work is common. The Resilio agent has been containerised to run within each QuantaStor system.

The v2 release of Redgate Software‘s SQL Data Catalog provides a simple, policy-driven approach to data protection. It automatically scans columns within databases and uses intelligent rules to make recommendations about how they should be classified. It auto-generates static data masking sets from the classification metadata that can be used to protect the databases. SQL Data Catalog v2 marks a step change in this process by significantly reducing the time it takes to go from identification and classification to protection, and making maintenance far simpler. When connected to a SQL Server instance, it automatically examines both the schema and data of each database to determine where personal or sensitive data is stored.

MSP-focussed cloud backup supplier Redstor has added customizable user access management to its software, designed to simplify the implementation of a zero-trust policy. MSPs can create and manage user identities within a single interface, customize and control who has access to data and systems, stop the spread of compromised login credentials, have single sign-on, multi-layer security and tighter IAM control, update security policies instantly to comply with regulations, change access privileges across an entire environment in one action, and ensure secure collaboration for greater productivity by setting up third-party permissions without jeopardizing network security.

Rubrik has a new CISO advisory board chairperson: Chris Krebs, who is the former director of the Cybersecurity and Infrastructure Security Agency (CISA) within the US Department of Homeland Security. He should open a few enterprise and federal doors for Rubrik’s data protection/security reps.

Data integrator and manager Semarchy announced an update to its unified data platform: a connector for the xDI component with Snowflake-specific capabilities. It automates Snowflake’s optimizations, capabilities, and specificities. The connector means Semarchy’s ELT architecture can leverage the processing capabilities of the Snowflake engine to run all data flows, and adds features such as graphical universal mapping design for data flows and automated replication capabilities.

StorCentric announced GA of Nexsan EZ-NAS network attached storage with a 1U form factor and four drives with up to 72TB of raw capacity and 1.5GB/sec of throughput. This EZ-NAS array is aimed at small and medium-sized businesses (SMBs) and large enterprises’ edge deployments. The product has in-line compression, Active Directory support and data-at-rest encryption. EZ-NAS comes with the Retrospect software for optional add-on services, including data backup, cloud connector and ransomware anomaly detection.

Toshiba Electronics Europe GmbH announced the highspin P300 3.5-inch Desktop PC Hard Drive with 2TB storage capacity. Tosh says it is “designed for desktop, PC computing, gaming and storage applications, where performance, capacity and reliability are all critical, these drives support 7200rpm operation and each feature a 6Gbit/sec SATA interface.” This almost seems like a joke when Tosh is delivering 18TB nearline drives spinning at 7200rpm. Considering the needs of gaming apps, that speed is not fast. It’s slow, especially considering that these are SMR (shingled magnetic recording) drives with slow data rewrite speeds. Also, if capacity really is critical then 2TB is somewhat small.

Tosh says the drive has a 256MB buffer, which mitigates the SMR slowdown, and a sustained transfer rate up to 210 MiB/sec – a 19 percent increase over its conventional P300 Desktop PC Hard Drive. We’ve asked Toshiba twice about the slow overall speed and low capacity and received no reply.

Cloud storage supplier Wasabi has opened a new storage region located in Singapore. This is Wasabi’s 13th storage region globally and its 4th in APAC, following Tokyo, Osaka, and most recently Sydney. It has appointed former SAS Institute Japan executive Michael King to serve as VP and GM, APAC. Wasabi serves customers in over 100 countries, storing data including backups, disaster and ransomware recovery copies, archives, video surveillance, sports data, media and entertainment files, and more.

Clumio cuts anti-ransomware SecureVault expense with lower-cost version

AWS cloud protector Clumio has cut the cost of its virtual air-gapped SecureVault with a SecureVault Lite version priced at 30 percent less.

SecureVault protects AWS backups by storing them outside of the customer’s own AWS account, with copies that are immutable and cannot be deleted. That keeps bad actors and their malware at bay in a logical or virtual air gap, and is part of the Clumio Protect product suite. SecureVault Lite does the same as SecureVault but at a lower cost and only for Amazon Elastic Compute Cloud (EC2) and Elastic Block Store (EBS) volumes.

Poojan Kumar.

Clumio CEO and co-founder Poojan Kumar put out a statement: “With the increased threat of ransomware and insider attacks to organizations’ business-critical data in the cloud, implementing a cloud data protection strategy that delivers air-gapped, immutable backups has become table-stakes. In fact, cyber insurance companies now assess risk based on whether organizations have saved backups outside of their access domain.”

By making SecureVault Lite lower cost than the main SecureVault service, Clumio should be able to increase its penetration of the Amazon EC2 and EBS user base. SecureVault Lite backups are priced at $0.035 per GB per month – a similar cost, Clumio says, to local in-account snapshots. There is a 30-day minimum retention requirement and restores cost $0.04/GB. The software provides a calendar view to find all recovery points and has rapid recovery of EC2 instances or EBS volumes to any AWS account.
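
As a back-of-the-envelope illustration of the published rates, the snippet below works out a hypothetical monthly bill; the workload sizes are invented, and a real bill also depends on change rates, compression and the 30-day minimum retention.

    STORAGE_RATE = 0.035   # $ per GB per month for SecureVault Lite backup storage
    RESTORE_RATE = 0.04    # $ per GB restored

    def monthly_cost(protected_gb, restored_gb=0.0):
        """Hypothetical SecureVault Lite spend for one month."""
        return protected_gb * STORAGE_RATE + restored_gb * RESTORE_RATE

    # Example: 10TB of EC2/EBS backups held for a month, plus a 500GB test restore.
    print(f"${monthly_cost(10_000, 500):,.2f}")   # 10,000 x $0.035 + 500 x $0.04 = $370.00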

Data is encrypted at rest and in flight, and customers can bring their own keys. SecureVault backups can be protected in or out of the production region and recovered to any AWS account.

It features multi-factor authentication (MFA) with Single Sign-On (SSO) integration, access controls for assets and roles, and no delete button.

The product and underlying controls are compliant with HIPAA, PCI DSS, ISO 27001, and AICPA SOC. Reports in Clumio Protect and Clumio Discover enable compliance requirements to be met.

Kumar said that, by adding the SecureVault Lite product, Clumio is making air-gapped backups in AWS effortless and accessible to all.

Clumio Protect protects Amazon EBS, EC2, RDS, S3, Microsoft 365 and VMware Cloud on AWS, while SecureVault Lite is restricted to EBS and EC2. There is more information available here.

Storage news ticker – June 4

Library of Congress image. Rights Advisory: No known restrictions on publication.

AWS’s Elastic Block Store (EBS) now supports Elastic Volumes and Fast Snapshot Restore (FSR) for io2 Block Express. That means you can use Elastic Volumes to dynamically increase the capacity and tune the performance of an io2 Block Express volume with no downtime or performance impact. A fully initialized io2 Block Express volume can be created from a Fast Snapshot Restore (FSR) enabled snapshot. Such volumes instantly deliver their provisioned performance. An io2 Block Express volume runs on the EBS Block Express architectures and delivers up to 4x higher throughput, IOPS, and capacity than io2 volumes, and is designed to deliver sub-millisecond latency and 99.999 percent durability. You can provision a single io2 volume that delivers up to 256,000 IOPS, 4000MB/sec of throughput, and storage capacity of up to 64TiB for running mission-critical deployments of Oracle, SAP HANA, Microsoft SQL Server, and SAS Analytics. More info here.
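
As a rough sketch of what an online Elastic Volumes change looks like with the AWS SDK for Python (boto3), the snippet below grows an io2 Block Express volume and raises its provisioned IOPS, then polls the modification state; the volume ID, region and target figures are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    VOLUME_ID = "vol-0123456789abcdef0"   # placeholder io2 Block Express volume

    # Elastic Volumes: resize and re-tune the live volume with no downtime.
    ec2.modify_volume(VolumeId=VOLUME_ID, Size=2048, Iops=100_000)  # GiB and IOPS targets

    # Track the change until it reaches the 'optimizing' or 'completed' state.
    mods = ec2.describe_volumes_modifications(VolumeIds=[VOLUME_ID])
    for mod in mods["VolumeModifications"]:
        print(mod["ModificationState"], mod.get("Progress"))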

Dell Technologies is bundling Datadobi’s DobiMigrate software with its PowerStoreOS 3.0 release for the PowerStore unified file and block arrays. Customers can use it to migrate data off old non-Dell and Dell systems onto PowerStore. If a customer opts to use the included service, they can simply reach out to the Datadobi team for a Starter Pack to get the ball rolling on the project. PowerStoreOS 3.0 is PowerStore’s third major release in two years. This is great validation of DobiMigrate by Dell.

IBM announced that Spectrum Scale v5.1.4 reached general availability on June 3, 2022. One of the new features is scanning in online mode by using the mmfsckx command. This scans Spectrum Scale file systems and reports metadata corruptions, if any, even while the file system is mounted and in use. To repair corruptions in a file system, use the offline mode of the mmfsck command. A “Fine Grain Write Sharing” feature enables the performance of non-overlapping small strided writes to a shared file from a parallel application to be optimized through the gpfsFineGrainWriteSharing_t hint. The optimization can be tuned by using the configuration parameters that are prefixed with “dataship” and defined in the mmchconfig command. For more information, see the gpfs_fcntl() subroutine and mmchconfig command documentation. This has a massive impact on performance and benchmarking. More details can be found here.

The open source immutable immudb database added automatic data versioning with extensive querying capabilities, saying it’s an industry first, plus new levels of support and a 40 percent performance improvement. The versioning enables time travel navigation to see exactly what changed and when, using tamper-proof, immutable records. Data in immudb comes with cryptographic verification at every transaction to ensure there is no tampering possible. immudb supports key/value and SQL data and AWS S3 storage cloud access. Codenotary, the primary contributor to the open source immudb, announced three levels of support: community (free), project ($3,000/year) and business ($16,000/year).

High-end array supplier Infinidat has hired James Lewis as channel director, EMEA & APJ, based in Frankfurt. Most recently, Lewis worked for Data Interchange as head of channel sales and was the strategy and growth officer for Altdata Technology Solutions, focusing on the cyber security market. He also spent 15 years at EMC and RSA, based in London and Frankfurt.

Kioxia says it has completed the acquisition of Chubu Toshiba Engineering Corp. from Toshiba Digital Solutions Corporation (a subsidiary of Toshiba Corporation). Chubu Toshiba Engineering  will operate as a wholly owned subsidiary of Kioxia Corp. under the name of Kioxia Engineering Corporation. It offers semiconductor industry engineering services, including development, production, and manufacturing.

Kioxia has been awarded the 2022 Invention Prize by the National Commendation for Invention for its Invention of Optimization of Read Method for Multi-level Flash Memory (patent no. 4892307). In conventional bit-coding, reading a certain bit requires a greater number of determination operations (i.e., more intense reading) compared to reading other bits; this causes increased compound errors when reading such bits, which means more chip area is required to store the resulting error correction codes (ECCs). The increased number of determination operations with conventional bit-coding also increases read latency for those bits. By adopting a new and more evenly distributed bit-coding assignment, Kioxia’s technology reduces the expected maximum error rate in multi-level (TLC or greater) flash memory, and reduces the chip area required for storing ECCs. In addition, this breakthrough development improves the maximum read latency of flash memory.
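
The underlying issue is how a TLC cell’s three bits are mapped onto its eight voltage levels: reading a page needs one sense operation for every threshold at which that page’s bit flips between adjacent levels, so an uneven mapping makes one page slower and more error-prone to read than the others. Kioxia’s patented method is not spelled out in the award citation, so the Python sketch below simply counts sense operations per page for a conventional 1-2-4 Gray mapping versus a more evenly distributed 2-3-2 assignment, to show the effect being described.

    def senses_per_page(coding):
        """Count, for each bit position (page), the thresholds where that bit flips
        between adjacent voltage levels - the number of read senses that page needs."""
        bits = len(coding[0])
        return [sum(coding[lvl][b] != coding[lvl + 1][b] for lvl in range(len(coding) - 1))
                for b in range(bits)]

    conventional = ["000", "001", "011", "010", "110", "111", "101", "100"]  # reflected Gray: 1-2-4
    balanced     = ["111", "110", "100", "000", "010", "011", "001", "101"]  # evenly spread: 2-3-2

    print(senses_per_page(conventional))  # [1, 2, 4] - one page needs four senses
    print(senses_per_page(balanced))      # [2, 3, 2] - senses spread more evenly across pages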

Lenovo announced Chalmers University of Technology is using Lenovo and Nvidia’s technology infrastructure to power its large-scale computer resource, Alvis. This is a national supercomputer resource that helps researchers carry out academic research. Chalmers University of Technology is in Gothenburg – home to the EU’s largest research initiative, Graphene Flagship. This is Lenovo’s largest HPC (High Performance Computing) cluster for AI and ML in the Europe, Middle East and Africa region. Lenovo is delivering a scalable cluster with a variety of Lenovo ThinkSystem servers to deliver the right mix of Nvidia GPUs. The storage system has two tiers:

  • Flash tier based on Weka.io – ~0.6PB running on Lenovo ThinkSystem SR630 nodes with internal NVMe SSDs;
  • Capacity tier based on Ceph – ~7PB capacity based on SR630 nodes attached to ThinkSystem D3284 JBODs.

NVMe/TCP storage provider Lightbits has added world-renowned technology innovator and visionary Dr Yoav Intrator to its advisory board. Dr Intrator is a sought-after global executive with extensive expertise in financial services, HighTech, Telco, Software as a Service (SaaS), cloud technologies, artificial intelligence (AI), and machine learning (ML). Currently a board member of technology startups and a former board member of the Wall Street Technology Association (WSTA), he is also the co-founder of Ri$KQ and has operated at a C-level as president and board member of JPMorgan Israel Technology Center. He spent many years developing new and disruptive technologies at Microsoft Corporation.

Lightbits has also announced its TCO Calculator and Configurator tools. Developed in collaboration with Intel, these freely available online tools provide Cloud Service Providers (CSPs), Financial Services, and Telco organizations with an intuitive way of determining the value of the Lightbits Cloud Data Platform. The TCO Calculator provides price and performance comparisons of the Lightbits Cloud Data Platform against other direct-attached storage, HCI and software-defined storage solutions. It encompasses the full TCO environment of software, hardware, support, power, cooling, space, and administration time. The tool highlights the TCO savings that can be derived from using the Lightbits software with Intel hardware. Users can download the results.

Composable systems software supplier Liqid has appointed VMware Cloud CTO Marc Fleischmann to its board of directors. He will collaborate with the Liqid board and the company’s leadership team to identify opportunities to expand Liqid Matrix composable disaggregated infrastructure (CDI) software into new solutions and services for Liqid’s customers and partners. Before joining VMware as Cloud CTO, Fleischmann was founder and CEO of storage software company Datera and social gaming company Smeet. Broadcom has just agreed to buy VMware and, perhaps, intends to sharply reduce its R&D spend, limiting a CTO’s effective input.

Nebulon, which supplies add-in cards operated from the cloud to manage fleets of servers, has appointed Paul Brodie as its vice president of global channel sales and promoted Martin Cooper to vice president of customer experience. Brodie will oversee efforts to expand its OEM and channel partner-driven business, coming to Nebulon from IT operations management (ITOM) platform vendor OpsRamp where he held the same title and grew the company’s global channel sales operation. Previously, Brodie served as VP of OEM and channel sales at AIOps platform company Virtana, and before that spent 13 years at Brocade leading various OEM and channel sales teams. Cooper will lead an end-to-end technical go-to-market and customer satisfaction function, help drive new OEM partner-led customer acquisition, develop post-transaction services and enhancements, and lead the overall support of existing clients.

The ESG wave rises higher. Hyperscale data analytics startup Ocient announced its continued commitment to digital sustainability and carbon neutrality with carbon offsets, green datacenters and employee engagement programs. In addition to committing to carbon neutrality, Ocient supports initiatives to develop the next generation of talent in the industry while prioritizing diversity and inclusion. Ocient has committed to improving the environment through various trail-building, trash clean-up, and other employee-led initiatives. By minimizing Ocient’s carbon footprint and purchasing offsets for the carbon emissions generated by the company, all of Ocient’s operations, including 100 percent of OcientCloud deployments and all of its datacenter operations are carbon neutral – making Ocient a net zero carbon company.

A quick primer on HPE Cray Frontier’s parallel file system storage

The exciting news about the HPE Cray-built Frontier supercomputer formally passing the exascale test made me curious about its storage system. I pointed my grey matter at various reports and technical documents to understand its massively parallel structure better and write a beginners’ guide to Frontier storage.

Be warned. It contains a lot of three-letter abbreviations. HPE’s exascale Frontier supercomputer has: 

  • An overall Orion file storage system; 
  • A  multi-tier Lustre parallel file system-based ClusterStor E1000 storage system on which Orion is layered; 
  • An in-system SSD storage setup integrated into the Cray EX supercomputer, with local SSDs directly connected to compute nodes by PCIe 4. 

The Lustre ClusterStor system has a massive tier of disk capacity which is front-ended by a smaller tier of NVMe SSDs. These in turn link to near-compute node SSD storage capacity which feeds the Frontier cores.

Orion 

The Oak Ridge Leadership Computing Facility (OLCF) has Orion as a center-wide file system. It uses Lustre and ZFS software, and is possibly the largest and fastest single Posix namespace in the world. There are three Orion tiers: 

  • a 480x NVMe flash drive metadata tier; 
  • a 5,400x NVMe SSD performance tier with 11.5PB of capacity based on E1000 SSU-F devices; 
  • a 47,700x HDD capacity tier with 679PB of capacity based on E1000 SSU-D devices.

There are 40 Lustre metadata server nodes and 450 Lustre object storage service (OSS) nodes. 

A metadata server manages metadata operations for the file system and is set up with two nodes in an active:passive relationship. Each links to a metadata target system which contains all the actual metadata for that server and is configured as a RAID 10 array.

There are also 160 Orion nodes used for routing. Such LNET routing nodes run network fabric or address range translation between directly attached clients and remote, network-connected client compute and workstation resources. They enable compute clusters to talk to a single shared file system.

Here is a Seagate diagram of a Lustre configuration:

The routing and metadata server nodes exist to manage and make very fast data movement between the bulk Lustre storage devices, object storage servers (OSSs) and their object storage targets (OSTs) possible. HPE Cray’s ClusterStor arrays are used to build the OSS and OST structure.

ClusterStor

There is more than 700PB of Cray ClusterStor E1000 capacity in Frontier, with peak write speeds of >35 TB/sec, peak read speeds of >75 TB/sec, and >15 billion random read IOPS. 

ClusterStor supports two backend file systems for Lustre:

  • LDISKFS provides the highest performance – both in throughput and IOPS;
  • OpenZFS provides a broader set of storage features, such as data compression.

The combination of both back-end file systems creates a cost-effective setup for delivering a single shared namespace for clustered high-performance compute nodes running modeling and simulation (mod/sim), AI, or high performance data analytics (HPDA) workloads. 

Orion is based on the ClusterStor E1000 storage system’s hybrid Scalable Storage Units (SSUs). This hybrid SSU has two Object Storage Servers (OSS) which link to one performance-optimized object storage target (OST) and two capacity-optimized OSTs – three component OSTs in total:

  • 24x NVMe SSDs for performance (E1000 SSU-F for flash);
  • 106x HDD for capacity (E1000 SSU-D for disk);
  • 106x HDD for capacity (E1000 SSU-D). 

The hybrid SSU was developed for OLCF but is now being made generally available as an E1000 configuration option. It is an alternative to the original or classic four-way OSS designs. An example hybrid SSU-F and SSU-D configuration looks like this:

E1000 Scalable Storage Unit – All Flash Array (SSU-F)

A ClusterStor E1000 SSU-F provides flash-based file I/O data services and network request handling for the file system, with a pair of Lustre object storage servers (OSS), each configured with one or more Lustre object storage targets (OSTs) to store and retrieve the portions of the file system data committed to it.

The SSU-F is a 2U storage enclosure with a high-availability (HA) configuration of dual PSUs, dual active:active server modules, known as embedded application controllers (EAC), and 24x PCIe 4 NVMe flash drives.

Each OSS runs on one of the server modules, forming a node, and the two OSS nodes operate as an HA pair. Under normal operation each OSS node owns and operates one of two Lustre Object Storage Targets (OST) in the SSU-F. If an OSS failover happens then the HA partner of the failed OSS operates both OSTs.

Normally both OSSs are active concurrently, each operating on its own exclusive subset of the available OSTs. Thus each OST is active:passive.

A ClusterStor E1000 SSU-F is populated with 24x SSDs. For a throughput-optimized configuration, the capacity is split into two approximately equal halves, each configured with ClusterStor’s GridRAID declustered parity and sparing RAID system using LDISKFS. For an IOPS-optimized SSU-F configuration, a different RAID scheme is used to improve small random I/O workloads.

Each controller can be configured with two or three high-speed network adapters configured with Multi-Rail LNet to exploit maximum throughput performance per SSU-F. A ClusterStor E1000 configuration can be scaled to many SSU-Fs and/or combined with SSU-Ds to achieve specified performance requirements. 

E1000 Scalable Storage Unit – Disk (SSU-D)

The E1000 SSU-D provides HDD-based file I/O data services and network request handling for the file system, with similar OSS and OST features to the SSU-F. Specifically, an SSU-D is a 2U storage enclosure with an HA configuration of dual PSUs, dual server modules (EACs) and SAS HBAs for connectivity to a JBOD disk enclosure. The number of JBODs is customer-configured at order time as 1, 2, or 4.

Each JBOD is configured with 106x SAS HDDs and contains two Lustre OSTs, each configured with ClusterStor’s GridRAID declustered parity and sparing RAID system using LDISKFS or OpenZFS. 

As with the SSU-F, each OSS runs on one of the server modules, forming a node, and the two OSS nodes operate as an HA pair. Normally each OSS node owns and operates one of two Lustre Object Storage Targets (OST) in the SSU-D. If an OSS failover happens then the HA partner of the failed OSS operates both OSTs. Both OSSs are concurrently active with each operating on its exclusive subset of the available active:passive OSTs.

ClusterStor E1000 can be scaled to many SSU-Ds and/or combined with SSU-Fs to achieve specified performance requirements.

Comment

Frontier’s Lustre/ClusterStor system is split: server and target nodes for metadata storage, flash-based data storage and capacity disk-based storage – plus the router nodes that connect compute processes to the data they reference or move – are separated from basic data storage processing, enabling the whole distributed structure to operate in parallel and at high speed.

Such a complex multi-component system is needed by Frontier to keep its compute nodes fed with the data they need and take away (write) data they produce without bottlenecks freezing cores with IO waits. This structural split between data storage and data access managing nodes may well be needed by hyperscaler IT systems as they approach exascale. They might even be in use deep inside hyperscaler datacenters already.

Note

The ClusterStor E1000 also supports Nvidia Magnum IO GPUDirect Storage (GDS), which creates a direct data path between the E1000 storage system and GPU memory to increase I/O speed and so overall performance.

The four storage horsemen of the epochalypse

Forgive the headline pun but apocalypse it is not. We have compared the storage growth rates for Dell, HPE, NetApp, and Pure Storage and spotted standout differences, with HPE declining, Dell starting an upswing, NetApp rising on the back of eight consecutive growth quarters, and Pure starting its eighth year of growth, albeit with a two-quarter hiccup in 2021.

Our charts track storage revenues by quarter within fiscal years, and reveal quarterly revenue changes by fiscal year. Industry leader Dell’s pattern since 2018 is growth to 2019 and then a gentle downturn for two and a half years until Q3 of 2022 when growth restarted, accelerating sharply in its most recent quarter.

HPE exhibits a growth stoppage in Q4 2019, a quarter earlier than Dell, then declines until Q1 2021 after which growth restarts but gets snuffed out three quarters later with its most recent quarter showing a three percent decline to $1.1 billion. Its overall trend-line on the chart is one of decline.

NetApp’s history goes back earlier but, like Dell and HPE, it shows growth halting three years ago – in Q4 2019 – and revenues plunging throughout 2020. It then exhibits a consistent rise for eight straight quarters, beating both HPE and Dell in the percentage growth rate stakes.

That’s good stuff but NetApp, in turn, is outshone by Pure Storage, the smallest supplier in our foursome, whose growth rate consistency and revenue rise is spectacular.

All this prompts us to ask why these four suppliers have such different revenue change patterns.

One factor is that Pure only sells all-flash arrays (AFAs) to the on-premises and near-cloud markets while the others have wider product portfolios, with entry-level, mid-range, and high-end products, AFAs, disk and hybrid flash/disk arrays, purpose-built backup arrays (not NetApp), scale-out filers (Dell) and hyper-converged appliances (not NetApp again).

All four are transitioning away from perpetual licenses to as-a-service offerings, building a storage software presence in the public clouds, and providing storage for containerized workloads.

In general all four are pushing AFA products, so why is Pure doing so well? Does it simply have a better product? It is tempting to say that Dell and HPE are mature suppliers with less room in the market for dramatic growth Pure-style. NetApp is also a mature company, and it is growing at a faster rate than either Dell or HPE. It is possibly benefiting from being earlier into the public cloud arena with its Data Fabric concept, success in partnering with Amazon Web Services, Microsoft Azure, and Google Cloud, and recent CloudOps product services.

Pure would say that it is growing at a faster rate because it has better products, a better upgrade program, and better as-a-service offerings. The others will certainly dispute that but their arguments are made less convincing by either negative growth or significantly slower growth rates than Pure.

HPE revenues flat-line amid supply-chain woes


HPE revenues grew hardly at all in its second 2022 quarter, restrained by supply difficulties, but demand was robust and both orders and backlog grew strongly.

Revenues in the quarter ended April 30 were $6.7 billion, up just 0.2 percent year-on-year, with a profit of $250 million, 3.5 percent less than a year ago. The earnings were near the midpoint of HPE’s guidance with negative effects from the Ukraine war, HPE’s Russia operations closedown, and the China shutdown.

This prompted CEO Antonio Neri to mainly focus on underlying positivity in his results statement: “Persistent demand led to another quarter of significant order growth and higher revenue for HPE, underscoring the accelerating interest customers have in our unique edge-to-cloud portfolio and our HPE GreenLake platform.” Mentioning “higher revenue” with just 0.2 percent revenue growth is a tad unbelievable.

Neri said he is “optimistic that demand will continue to be strong, given our customers’ need to accelerate their business resilience and competitiveness. We remain focused on innovating for our customers and on executing with discipline so that we translate that demand into profitable growth for HPE.”

Yes, well, profitable growth has been lacking in HPE for quite some time, and was lacking this quarter too. It was not able to satisfy all the demand for its products due to supply chain woes, and stressed that the underlying business signals are very positive. For example:

  • Annualised revenue run-rate (ARR) was up 25 percent on the year to $829 million
  • Total as-a-service orders were up 107 percent; the third consecutive quarter of orders doubling

Business unit splits

HPE splits its business into several segments and the picture there was mixed:

There are two growth segments: HPC & AI on the one hand and Intelligent Edge on the other. We can see from the table above that these grew revenues year-on-year while the others were almost flat (Compute) or declined.

Yet orders for Compute were 20 percent-plus higher year-on-year for the fourth quarter in a row. The Storage backlog increased to record levels and this segment had the highest as-a-service ARR growth. There was double-digit year-on-year growth in Nimble, Big Data and HCI (SimpliVity), although Storage revenue overall declined for the second quarter in a row. That indicates that 3PAR/Primera and the Alletra 9000 did not do so well this quarter.

Within Intelligent Edge orders grew more than 35 percent for the fifth consecutive quarter. Aruba Services revenue was up double-digits from the prior-year period and Intelligent Edge as-a-Service ARR was up 50 percent-plus from the prior-year period. 

The HPC & AI segment had delayed acceptances and the order backlog grew to an impressive $3 billion with 15 percent-plus year-on-year order growth.

This is a jam-tomorrow story, and HPE’s revenues were hobbled by supply chain issues, unlike the revenues in the latest quarters for Dell, NetApp, and Pure Storage. Compared to them, and relatively-speaking, HPE is under-performing in its response to the supply-chain problems that are affecting the entire IT hardware industry.

Another issue is that HPE is withdrawing from Russia and Belarus and took a $126 million charge because of this in the quarter.

Financial summary

  • Gross margin: 32.4 percent, down 170 basis points on the year primarily due to $105 million of Russia-related charges 
  • Diluted EPS:  $0.19, flat from the prior-year period mostly due to $126 million of Russia-related charges 
  • Cash flow from operations: $379 million
  • Free cash flow: -$211 million in line with normal seasonality
  • Capital returns to shareholders of $214 million in the form of share repurchases and dividends 

HPE has not provided a revenue target for the next quarter but suggests it will see 3 to 4 percent revenue growth for the full 2022 year (adjusted for currency changes).

Comment

Storage, for HPE, is not a growth business.  

Like Compute, Storage is classed by HPE as a core business, while HPC & AI and Intelligent Edge (Aruba) are the two growth businesses. The aim with core businesses is to preserve revenues, profitability and margins rather than to build revenues higher. In contrast, storage is seen as very much a growth business by the storage-product-dominated NetApp and Pure Storage. It’s also seen as a growth opportunity by Dell. And that difference shows in all three of these vendors’ much better storage results.

It prompts the asking of a question: has HPE made a strategic mistake by not classing Compute and Storage as growth businesses? Is it, as a result, not spending enough R&D dollars to build better – and winning – Compute and Storage products? 

Look at this another way: HPE’s two growth businesses are simply not growing fast enough to offset declines elsewhere, as a chart of HPE’s quarterly segment revenues shows:

We can see that the overall growth in HPC & AI is slight and patchy, and only gradual in the Intelligent Edge segment. The much bigger generally declining $3 billion Compute and billion-dollar-plus Storage segments, not to mention flattish Corporate Investments and gently declining Financial Services areas, mask the positive single digit percent improvements in the two sub-billion dollar growth segments.

HPC & AI could and should grow much more in the future, propelled by HPE’s Frontier exascale supercomputer win. This business is lumpy, going up and down quarter by quarter, and revenues are not yet growing consistently or strongly. But HPE has a $3 billion backlog here so it should turn sharply upwards eventually. The Intelligent Edge needs double-digit growth to start making a real difference to HPE’s revenues and we are waiting for that to happen. Again, the order growth is promising.

All of this is taking place against the background of a transition away from perpetual license sales to GreenLake subscription revenues and that has a dampening effect on revenue growth. And the supply chain problems are also holding back revenue growth. But demand is strong, orders growing, billings growing, and ARR growing so we should – maybe – give HPE the benefit of any doubt and look forward to a glowing future. Antonio and his execs hopefully know what they are doing. 

Let’s close with a comment from EVP and CFO Tarek Robbiati: “With record levels of high-quality backlog, we are well positioned for growth in FY22 and beyond.” 

NetApp grows revenues 8 percent just as it said it would – but CloudOps held it back

Consistency is valued at NetApp and it has grown revenues consistently for eight quarters in a row with its latest fourth fiscal 2022 quarter’s results. But public cloud growth let the side down as poorly integrated CloudOps acquisitions were hard to sell.

Revenues in the quarter ended April 29 were $1.68 billion, 8 percent more than a year ago, with a profit of $259 million, down 20 percent from the year-ago $324 million. Full fiscal 2022 revenues were up 10 percent to $6.32 billion with profits of $937 million, up 28.4 percent on the year.

CEO George Kurian’s results statement said: “Our solid fourth quarter results cap off a strong year. We made sustained progress against our strategic goals: gaining share in enterprise storage, expanding our public cloud business, and, most notably, delivering record levels of gross margin dollars, operating income, and earnings per share.” He talked of alignment to customer priorities, a strong balance sheet, and prudent operational management.

Fourth quarter’s financial summary

  • Hybrid Cloud segment revenue: $1.56 billion, compared to $1.49 billion a year ago
  • Public Cloud segment revenue: $120 million, compared to $66 million last year, 82 percent higher
  • Gross margin: 65.7 percent compared to prior quarter’s 67.3 percent and 68.3 percent before that
  • Cash, cash equivalents, and investments: $4.13 billion at quarter end
  • Cash provided by operations: $411 million, compared to $559 million 12 months ago
  • Share repurchase and dividends: Returned $361 million to shareholders through share repurchases and cash dividends

Full-year summary

  • Billings: $6.7 billion, 13 percent higher than a year ago
  • Hybrid Cloud segment revenue: $5.92 billion, compared to last year’s $5.55 billion
  • Public Cloud segment revenue: $396 million, about double the year-ago $199 million
  • Cash provided by operations: $1.21 billion compared to $1.33 billion last year
  • Share repurchase and dividends: Returned $1.05 billion to shareholders through share repurchases and cash dividends

In the Hybrid Cloud category, product revenues were $894 million, up 6 percent annually, with support and other services contributing $666 million. NetApp said it gained share in enterprise storage with strong growth in all-flash array (AFA) and object-storage products. The AFA run rate is $3.2 billion, the same as last quarter, and 12 percent more than a year ago. Actual AFA revenues were up 20 percent annually in the quarter. Object-storage revenues grew faster, at 49 percent.

Public Cloud annual recurring revenue (ARR) was $505 million, 68 percent more than 12 months ago, with strength in Cloud Storage, led by Azure NetApp Files, but it was lower than hoped for. That was due to shortfalls in the Cloud Insights and Spot areas, which grew less than expected, not helped by sales force attrition, particularly with Spot.

Kurian referred to this in the earnings call, saying: “Our Public Cloud ARR came short of our expectations. Demand for our cloud storage solutions was strong in Q4. We also saw a healthy number of new customer additions across both cloud storage and cloud operations services in the quarter. Unfortunately, these tailwinds were not enough to offset the lower than expected growth created by higher churn, lower expansion rates, and sales force turnover in our cloud operations portfolio.”

NetApp has made organizational changes to increase its focus on renewal and expansion motions, refreshed the sales team and strengthened the leadership ranks. CFO Mike Berry talked about improving the operational rigor across the CloudOps products and NetApp is speeding up the integration of its CloudOps product portfolio, particularly Instaclustr, so that it’s easier to buy. This should also help the sales force cross-sell and upsell NetApp products and services to its CloudOps customers.

It emerged on the call that since some point this year, sales reps cannot hit their numbers without selling cloud as part of their overall quotas.

Kurian admitted mistakes had been made: “I think where we could do better is learn from the mistakes we made around integration, and we’re going to – everybody learns from that and we’re going to own that.”

In general NetApp plans to slow the pace of CloudOps-related acquisitions and reprioritize its use of cash in FY 2023 to favor shareholder returns. It is convinced it can achieve $2 billion in ARR exiting fiscal year 2026.

In the Hybrid Cloud segment of its business, issues with supply chains hindered its ability to ship product. Product revenues grew 10 percent in the full year, but only 6 percent in the fourth quarter reflecting this. Berry mentioned supply-constrained shipments, elevated freight and logistical expense, and component cost headwinds.

The company’s revenue growth rate has declined during the year, starting at 11.9 percent in Q1 and passing through 10.6 percent and 9.8 percent to the latest quarter’s 8 percent. Gross margin has also declined, with Berry saying Q4 should be the trough with gross margin improving during fiscal 2023. Pricing changes – increases – will help it as will supply-chain improvements. It sees customer demand as being steady and its ability to satisfy that demand will be gated by supply-chain issues, as it has been for the past two quarters.

Eight consecutive growth quarters in 2021 and 2022.

NetApp has not grown its revenues in the quarter anywhere near Pure’s 50 percent growth rate. Instead it has nearly matched Dell’s 9 percent storage revenue growth rate. Pure’s run rate is $2.48 billion, which compares to NetApp’s AFA run rate of $3.2 billion, up, as we have seen, 12 percent annually. If Pure continues growing faster than NetApp’s AFA revenues then it could eventually overtake NetApp on the AFA front.
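
A crude compound-growth calculation, using the run rates and growth rates quoted above and assuming (unrealistically) that both held steady, gives a feel for how soon that crossover could come.

    import math

    pure_run_rate, pure_growth = 2.48, 0.50            # $bn annualised, ~50% year-on-year
    netapp_afa_run_rate, netapp_growth = 3.2, 0.12     # $bn annualised, ~12% year-on-year

    # Solve pure * (1 + gp)^t = netapp * (1 + gn)^t for t in years.
    t = math.log(netapp_afa_run_rate / pure_run_rate) / math.log((1 + pure_growth) / (1 + netapp_growth))
    print(f"Crossover in roughly {t:.1f} years if both rates held")   # ~0.9 years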

Neither has NetApp seen customers wanting to pull shipments forward as happened with Pure in its comparable quarter.

Asked about the competitive environment and if it had changed, Kurian answered: “I think it’s pretty much the same, Pure and NetApp taking share from Dell and HP and several other players. So I would characterize it as no fundamental change, to be honest.”

The outlook for NetApp’s next quarter (Q1 FY 2023) is for revenues between $1.475 billion and $1.625 billion, $1.55 billion at the mid-point which Berry said is 6 percent higher than the year-ago quarter. Full FY 2023 revenues are expected to be 6 to 8 percent higher than for FY 2022.

NetApp anticipates sustained demand for its AFA and object-storage products, and continued share gain momentum, which should lead to product revenue growth in the mid-single digits.

SCSI

SCSI – Small Computer System Interface, pronounced ‘scuzzy.’ It is an interconnect standard for PCs and servers linking to peripheral devices, such as disk drives and SSDs, and covers physical connections and data transfer. It was originally a parallel interface but later moved to a serial one. Serial-Attached SCSI (SAS) is a serial physical implementation of the standard, while iSCSI carries SCSI commands over TCP/IP networks. See Wikipedia for a detailed look at SCSI.