
Five Ways Dell EMC PowerStore Doesn’t Measure Up to HPE Nimble Storage

measuring tape

Sponsored: Dell EMC recently announced PowerStore[1], their new mid-range storage platform intended to replace Unity, SC Series and XtremIO[2]. They spotlighted features including multi-array clustering, data-in-place upgrades and automation – features that we agree are important, but are also table stakes, as HPE Nimble Storage has been delivering them for years. And despite positioning PowerStore for the data era, it’s not doing nearly enough to deliver what enterprises need today from their storage platform.

What enterprises need today goes beyond the spec sheet. With data being the key to digital transformation, organizations need a proven storage platform that can run their business-critical apps, eliminate application disruptions, deliver the cloud experience, and unlock the potential of hybrid cloud.

That’s where HPE Nimble Storage shines and that’s where Dell EMC PowerStore falls short.

1. Running Business-Critical Apps

Enterprises are increasingly reliant on applications to handle everything from back-end operations to the delivery of products and services. That is why proven availability, data protection, and keeping applications up are more important than ever before.

But Dell EMC has the market[3] – and HPE – scratching our heads, because PowerStore only supports RAID 5, an ageing technology that can lead to catastrophic data loss if more than one drive fails. In comparison, with HPE Nimble Storage, enterprises can sustain three simultaneous drive failures thanks to Triple+ Parity, protection that is several orders of magnitude more resilient than RAID 5.

PowerStore is also marketed as “designed for 6x9s” availability[4]. But being “designed for” availability versus actually having a measured track record in customer production environments are vastly different. HPE Nimble Storage has 6x9s of proven availability[5] based on real, achieved values (as opposed to theoretical projections) and is measured for its entire installed base. And as enterprises entrust their data with us, we guarantee 6x9s availability for every HPE Nimble Storage customer.
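For context, six nines of availability is a downtime budget of roughly 31.5 seconds per year. The arithmetic is easy to check (a quick Python sketch; the seconds-per-year figure ignores leap years):

    # Downtime allowed per year for a given availability level
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

    def downtime_seconds(availability: float) -> float:
        return SECONDS_PER_YEAR * (1 - availability)

    print(downtime_seconds(0.99999))   # five nines: ~315 seconds/year
    print(downtime_seconds(0.999999))  # six nines:  ~31.5 seconds/year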

2. Eliminating Application Disruptions

Enterprises need to be always-on, always-fast, and always agile. They don’t have time to fight fires and react to problems. But with countless variables and potential issues across the infrastructure stack, IT continues to be held back, reacting to problems and dealing with disruptions.

The only way to get ahead is with intelligence. Intelligence that predicts problems. Intelligence that sees from storage to virtual machines. Intelligence that takes action to prevent disruptions. Dell EMC attempts to answer the call for intelligence with CloudIQ, an application that “provides for simple monitoring and troubleshooting” for storage systems[6]. But with the complexity that lives in IT today, this is simply not doing enough as it leaves customers with more questions than answers.

HPE InfoSight, on the other hand, is true intelligence. Since 2010, it has analysed more than 1,250 trillion data points from over 150,000 systems and has saved customers more than 1.5 million hours of lost productivity[7]. It uses machine learning to predict and prevent 86% of problems before customers can be impacted[8]. And its intelligence goes beyond storage, giving deep insights into virtual infrastructure that improve application performance and optimize resources.

The intelligence that enterprises can count on to ensure their apps stay up is HPE InfoSight.

3. Delivering the Cloud Experience

PowerStore has fewer knobs than Unity – a much-needed improvement. But it’s not nearly enough. Our customers tell us they want to deliver services and get out of the business of managing infrastructure. This requires having the right foundation for their on-premises cloud with the simplicity of consumption, the flexibility to support any app, and the elasticity to scale on-demand.

HPE Nimble Storage is a platform that goes beyond making storage “easy to manage” – it is a foundation for on-premises cloud. HPE Nimble Storage dHCI is that foundation: a category-creating, disaggregated HCI that delivers the HCI experience with the performance, availability, and resource efficiency needed for business-critical apps. And as announced in May, enterprises can have an on-premises cloud by consuming HPE Nimble Storage dHCI as a service through HPE GreenLake.

But what about a cloud experience for the edge? The edge is in need of modernization as enterprises look to streamline their multi-site and remote IT environments. Dell EMC positions PowerStore AppsOn here[9], but it’s not purpose-built for the job. HPE SimpliVity is an edge-optimized HCI platform with simple multi-site management, a software-defined scale-out architecture, and built-in data protection, making it the ideal choice for remote sites.

4. Unlocking the Potential of Hybrid Cloud

Every IT leader today looks at hybrid cloud as a potential enabler of innovation, only to run into the overwhelming challenges posed by fragmented clouds. With data at the center of innovation, realizing the potential of hybrid cloud requires an architecture – a fabric – that provides a seamless experience with the flexibility to move data across clouds throughout its lifecycle, from test/dev and production to analytics and data protection.

HPE delivers that seamless experience by extending HPE Nimble Storage to the public cloud with HPE Cloud Volumes, a suite of enterprise cloud data services that lets customers provision storage on-demand in minutes and bridges the divide between on-premises and public cloud. It brings consistent data services, bi-directional data mobility, and access to the public cloud, unlocking hybrid cloud use cases that range from data migration, data protection, test/dev, containers and analytics to running enterprise applications in the cloud.

While Dell EMC can connect their storage to the public cloud, HPE Nimble Storage goes further with a true cloud service, consumable on-demand, that helps our customers maximize their agility and innovation and optimize their economics with no cloud lock-in, nothing to manage, and no more headaches.

5. Delivering an Experience You’ll Love

On top of everything – from being a better fit for business-critical apps and being more intelligent and predictive to delivering on-premises cloud and making hybrid cloud valuable – HPE Nimble Storage delivers a customer experience that simply excels.

A perfect example is support. Multi-tiered support that shuffles customers from Level 1 to Level 2 to Level 3 is reactive and too slow to resolve problems. This isn’t the case with HPE Nimble Storage, as customers have direct access to Level 3, with Level 1 and Level 2 support cases predicted and automated. That means no more escalations, 73% fewer support tickets, and 85% less time spent resolving storage problems[10] – not to mention the friendliest support engineers, who help you solve problems even when they’re outside of storage.

Rethink What Storage Needs to Do for You

HPE Nimble Storage reimagines enterprise storage with a unique solution that simplifies IT and unlocks enterprise agility across the data lifecycle. It transforms operations with artificial intelligence, and it gives you an experience that you’ll truly love, as you can see in this video.

If you are a current customer of Dell EMC midrange storage products, we would welcome the opportunity to show you how HPE Nimble Storage can elevate your experience. Here are a dozen reasons why organizations are moving from Dell EMC to HPE Nimble Storage. And we can help make the investment in HPE Nimble Storage easier with financing offers that generate cash from your existing storage assets.

Learn more about HPE Nimble Storage, and how after a decade of innovation, it’s stepping into the future.

This article is sponsored by HPE.

Commvault hooks up Metallic with Azure Blobs

Commvault has integrated Metallic SaaS data protection with Azure Blob Storage, enabling customers to save on-premises storage capacity.

Commvault’s cloud-native Metallic service receives backup data from on-premises and in-cloud applications and stores it in AWS and Azure object storage. An existing Office 365 service saves its data in Azure. Now Metallic can save any data ingested on-premises by Commvault’s Backup and Recovery software and HyperScale X appliances. This encompasses physical, virtual and containerised workloads.

Commvault GM Manoj Naor issued a quote: “The need to leverage the cloud is only accelerating, and having simple, direct access to cloud storage as a primary or secondary backup target allows us to facilitate our customers’ journeys to the cloud while also providing a critical step in ransomware readiness with an air-gapped cloud copy.”

That’s air-gapped in the sense that, although network-linked, the Metallic-generated data copy cannot be directly accessed by the customer. So the data is virtually air-gapped, unlike a physically air-gapped tape cartridge.

Ranga Rajagopalan, VP of product management, answered our question about this point: “Yes. We’ve had customers defeat ransomware attacks with our virtual air-gapped approach.” He said there were “encryption and other hardening techniques, and separation of management and access for the Azure account. Customers can spin up their entire environment in the cloud.”

The new service is managed through the Commvault Command Centre. The fully managed Metallic Cloud Storage Service is available across North America, EMEA and APAC, and Metallic SaaS is available in North America and ANZ.

Huawei has fastest storage array in the world. SPC-1 says so

Huawei has smashed the SPC-1 benchmark with an all-NVMe flash OceanStor array that delivers double the performance of previous title holder Fujitsu, at 40 per cent higher cost.

SPC-1 tests a storage array’s ability to process IOs for a business-critical workload. Huawei’s OceanStor Dorado 18000 V6 used 576 NVMe SSDs, each with 1.92TB capacity, to score 21,002,561 SPC-1 IOPS. This translates into a price-performance rating of $429.10/KIOPS.

Fujitsu’s ETERNUS DX8900 S4 scored 10,001,522 IOPS with $644.16/KIOPS price-performance. It used slower SAS SSDs than the Huawei config and 16Gbit/s Fibre Channel links, compared with 32Gbit/s FC for the Huawei.
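SPC-1 price-performance is simply the total tested system price divided by the score in thousands of IOPS, so the headline comparison can be reconstructed from the published figures (a quick Python check on the numbers above; totals are approximate):

    # price-performance ($/KIOPS) = total price / (IOPS / 1000)
    # rearranged: total price = (IOPS / 1000) * price-performance
    huawei_total = 21_002_561 / 1000 * 429.10    # ~$9.01m
    fujitsu_total = 10_001_522 / 1000 * 644.16   # ~$6.44m

    print(f"Huawei:  ${huawei_total:,.0f}")
    print(f"Fujitsu: ${fujitsu_total:,.0f}")
    print(f"Ratio: {huawei_total / fujitsu_total:.2f}x")  # ~1.40x – the 40 per cent higher cost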

We have charted the top 10 SPC-1 results on an IOPS vs price-performance chart:

Top-10 SPC-1 results

Comment

Dell, HPE or NetApp could theoretically cobble together an all-NVMe SSD system with, say, 1,000 SSDs and sufficient 32Gbit/s FC links to handle the bandwidth. But a $10m-plus, 30 million-plus SPC-1 IOPS array would be a fairly esoteric beast, and vendors would ask: “Is it worth it?”

The array world is moving to radically faster NVMe-over-Fabrics links between arrays and accessing servers, and that is not reflected in the SPC-1 benchmark test. So why build a benchmark record-beating array with interconnect technology that’s going to be superseded? Vendors would see it as a dead end. Don’t expect to see any US supplier at the top of the SPC-1 charts anytime soon.

Veeam embraces container backup by buying Kasten

Veeam is buying Kasten, a Kubernetes-orchestrated container data protection startup, for $150m in cash and stock.

Kasten’s K10 software will continue to be available independently, and Veeam will also integrate it into its own Backup & Replication product. Veeam’s goal is to simplify enterprise data management by covering data protection for virtual machines, physical servers, SaaS applications, cloud workloads and containers in one platform.

Danny Allan, Veeam CTO, said in a statement: “With the acquisition of our partner Kasten, we are taking a very important next step to accommodate our customers’ shift to container adoption in order to protect Kubernetes-native workloads on-premises and across multi-cloud environments.” 

Kasten founders

Niraj Tolia, Kasten CEO, said: “The enterprise landscape is shifting as applications rapidly transition from monoliths to containers and microservices… Veeam’s success has been a beacon of inspiration for the Kasten team and we are very excited to join forces with a company where there is so much philosophical alignment.”

Kasten was started in January 2017 by Tolia and engineering VP Vaibhav Kamra. The company raised $3m in a March 2017 seed round and $14m in an August 2019 A-round. Veeam set up a reseller partnership with Kasten in May this year. Veeam and Kasten are both part of Insight Partners’ investment portfolio.

Kasten’s K10 software snapshots the entire state of an application container – not just the data that needs protecting. That means the K10-protected container can be migrated to a different system and instantiated there, and also sent to a disaster recovery site.

Veeam’s Kasten purchase follows last month’s $370m acquisition of Portworx by Pure Storage. This M&A activity highlights the growing importance of Kubernetes-orchestrated containers to enterprise application development and deployment.

This week in storage with Fujitsu, HPE, Intel and more

This week’s data storage standouts include Intel spinning out a fast interconnect business; HPE and Marvell’s high-availability NVMe boot drive kit for ProLiant servers; and Fujitsu going through its own branded digital transformation.

Intel spins out Cornelis

Intel has spun out Cornelis Networks, founded by former Intel interconnect veterans Philip Murphy Jr., Vladimir Tamarkin and Gunnar Gunnarsson.

Cornelis will compete with Nvidia’s Mellanox business unit and technology, and possibly also HPE’s Slingshot interconnect. The latter is used in Cray Shasta supercomputers and HPE’s Apollo high performance computing servers.

Cornelis aims to commercialise Intel’s Omni-Path Architecture (OPA), a low-latency HPC-focused interconnect technology. OPA has its roots in Intel’s 2012 acquisitions of QLogic’s TrueScale InfiniBand technology and of Cray’s Aries interconnect IP and engineers.

Cornelis’s initial funding is a $20m A-round led by Intel Capital, Downing Ventures, and Chestnut Street Ventures.

Fujitsu’s digital twin

Fujitsu is investing $1bn in a massive digital transformation project, which it is calling “Fujitra”.

The aim is to transform rigid corporate cultures such as “vertical division between different units” and “overplanning” by utilising frameworks such as Fujitsu’s Purpose, design-thinking, and agile methodology. Fujitsu’s purpose or mission is “to make the world more sustainable by building trust in society through innovation,” which seems entirely Japanese in its scope and seriousness.

Fujitsu will introduce a common digital service throughout the company to collect and analyse quantitative and qualitative data frequently and to manage actions based on such data. All information is centralised in real time to create a Fujitsu digital twin. 

Fujitsu has appointed DX officers for each of the 15 corporate and business units as well as five overseas regions. They will be responsible for promoting reforms across divisions, advance company-wide measures in each division and region, and lead DX at each division level.

Fujitra will be introduced at Fujitsu ActivateNow, an online global event, on Wednesday, October 14.

HPE and Marvell’s NVMe boot switch

Marvell’s 88NR2241 is an intelligent NVMe switch that enables data centres to aggregate resources, increase reliability, and manage multiple NVMe SSD controllers. The 88NR2241 delivers enterprise-class performance, system reliability, redundancy, and serviceability with consumer-class NVMe SSDs linked by PCIe. The switch has a DRAM-less architecture and supports low-latency NVMe transactions with minimal overhead.

HPE NS204i-p NVMe RAID 1 accelerator

HPE has implemented a customised version of the 88NR2241 for ProLiant servers, calling it an NVMe RAID 1 accelerator. The HPE NS204i-p NVMe OS Boot Device is a PCIe add-in card that includes two 480GB M.2 NVMe SSDs, enabling customers to mirror their OS through dedicated RAID 1.

The accelerator’s dedicated hardware RAID 1 OS boot mirroring eliminates downtime due to a failed OS drive – if one drive fails the business continues running. HPE OS Boot Devices are certified for VMware and Microsoft Azure Stack HCI for increased flexibility.
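The principle is simple: every OS write lands on both M.2 drives, and if one member fails, reads carry on from the survivor. Here is a toy Python model of RAID 1 semantics (purely illustrative – the NS204i-p does this in dedicated hardware, transparently to the host OS):

    # Toy RAID 1 mirror: writes go to every healthy drive, reads come from any survivor
    class Raid1Mirror:
        def __init__(self):
            self.drives = [{}, {}]        # two M.2 NVMe SSDs modelled as dicts
            self.healthy = [True, True]

        def write(self, block: int, data: bytes) -> None:
            for i, drive in enumerate(self.drives):
                if self.healthy[i]:
                    drive[block] = data   # identical copy on each healthy drive

        def read(self, block: int) -> bytes:
            for i, drive in enumerate(self.drives):
                if self.healthy[i]:
                    return drive[block]   # first healthy drive serves the read
            raise IOError("both mirror members have failed")

    mirror = Raid1Mirror()
    mirror.write(0, b"boot volume")
    mirror.healthy[0] = False             # one SSD dies...
    print(mirror.read(0))                 # ...the OS still boots from the other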

AWS Partner network news

  • Data protector Druva has achieved Amazon Web Services (AWS) Digital Workplace Competency status. This is Druva’s third AWS Competency designation. Druva has also been certified as VMware Ready for VMware Cloud.
  • Cloud file storage supplier Nasuni has achieved AWS Digital Workplace Competency status. This status is intended to help customers find AWS Partner Network (APN) Partners offering AWS-based products and services in the cloud.
  • Kubernetes storage platform supplier Portworx has achieved Advanced Technology Partner status in the AWS Partner Network (APN). 

The shorter items

DigiTimes speculates that China-based memory chipmakers ChangXin Memory Technologies (CXMT) and Yangtze Memory Technologies (YMTC) could be added to the US export ban list, which currently restricts deliveries of US technology-based semiconductors to Huawei.

The Nikkei Asian Review reports SK hynix wanted to buy more shares in Kioxia, taking its stake from 14.4 per cent to 14.96 per cent. That would link Kioxia and SK hynix in a defensive pact against emerging Chinese NAND suppliers. This plan is now delayed as Kioxia has postponed its IPO.

Veritas has bought data governance supplier Globanet to extend its digital compliance and governance portfolio, with visibility into 80-plus new content sources. These include Microsoft Teams, Slack, Zoom, Symphony and Bloomberg.

Dell has plunked Actifio’s Database Cloning Appliance (DCA) and Cloud Connect products on Dell EMC PowerEdge servers, VxRail and PowerFlex. Sales are fulfilled by Dell Technologies OEM Solutions.

Enmotus has launched an AI-enabled FuzeDrive SSD with 900GB and 1.6TB capacity points. It blends high-speed, high-endurance static SLC (1 bit/cell) with QLC (4 bits/cell) on the same M.2 board. AI code analyses usage patterns and automatically moves active and write-intensive data to the SLC portion of the drive. This speeds drive response and lengthens its endurance.

ExaGrid claims it has the only non-network-facing tiered backup storage solution with delayed deletes and immutable deduplication objects. When a ransomware attack occurs, this approach ensures that data can be recovered or VMs booted from the ExaGrid Tiered Backup Storage system. Not only can the primary storage be restored, but all retained backups remain intact. Check out a two-minute ExaGrid video.

Deduplicating storage software supplier FalconStor has announced the integration of AC&NC’s JetStor hardware with StorSafe, its long-term data retention and reinstatement offering, and StorGuard, its business continuity product.

HCL Technologies has brought its Actian Avalanche data warehouse migration tool to the Google Cloud Platform.

MemVerge has announced the general availability of its Memory Machine software, which transforms DRAM and persistent memory such as Optane into a software-defined memory pool. The software provides access to persistent memory without changes to applications and speeds persistent memory with DRAM-like performance. Penguin Computing uses Optane Persistent Memory and Memory Machine software to reduce Facebook Deep Learning Recommendation Model (DLRM) inference times by more than 35x over SSD.

SanDisk Extreme and Extreme PRO SSDs

Western Digital’s SanDisk operation has announced a new line of Extreme portable SSDs with nearly twice the speed of the previous generation. The Extreme and Extreme PRO products use the NVMe interface and offer capacities up to 2TB, with password protection and encryption. The Extreme reads and writes at up to 1,000MB/sec, while the Extreme PRO achieves up to 2,000MB/sec.

Nearline drives are bright spot in Gartner HDD forecast

Nearline disk drive capacity shipments and revenues will grow at double-digit percentages between 2019 and 2024, according to Gartner. The tech research firm predicts other disk categories will decline, with notebook HDDs heading the way of the dodo.

Aaron Rakers, a senior analyst at Wells Fargo, presented subscribers with Gartner hard disk drive (HDD) market notes for Q3 2020 and estimates for 2019-2024.

The total HDD market will decline by 6 per cent y/y in 2020 to $20.7bn, following a 12 per cent y/y decline in 2019 to $22.1bn. However, revenue should reach $21.9bn in 2021 (+5 per cent) and $22.6bn in 2022 (+4 per cent).

Gartner forecasts HDD market revenue overall will decline at 1.5 per cent CAGR through to 2024, with sales propped up by nearline disk drive growth and a small surveillance drive contribution.

Nearline 3.5-inch high-capacity HDD exabytes shipped will grow at a 39 per cent CAGR from 2019 to 2024. Revenue is estimated to grow at a 14 per cent CAGR, expanding from $8.9bn in 2019 to $17.1bn by 2024.
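Those two figures are mutually consistent – the compound annual growth rate falls straight out of the start and end values (a quick Python check on the numbers quoted above):

    # CAGR = (end / start) ** (1 / years) - 1
    def cagr(start: float, end: float, years: int) -> float:
        return (end / start) ** (1 / years) - 1

    print(f"{cagr(8.9, 17.1, 5):.1%}")  # ~14.0% over 2019-2024, matching the revenue estimate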

This implies nearline revenue will grow its share of total HDD revenue from 40 per cent in 2019 to 55 per cent in 2020, 62 per cent in 2021, and c.84 per cent by 2024, according to Rakers.

Nearline HDD capacity will expand from 52 per cent of total HDD capacity shipped in 2019 to 70 per cent in 2020 and 86 per cent by 2024.

Total non-nearline HDD capacity shipped will decline at a -1.3 per cent CAGR from 2019 to 2024. Overall non-nearline HDD revenue will decline from $13.2bn in 2019 and $9.4bn in 2020 to less than $3.4bn by 2024.

SSD cannibalisation will eat up mission-critical enterprise HDD revenue – $1.8bn in 2019, and zero in 2024. Gartner expects notebook PCs to move to a 100 per cent SSD/flash attach rate by 2024, an increase from 88 per cent in 2020.

The mobile and consumer HDD market will decline from $5.65bn in 2019 and $3.7bn in 2020 to slightly more than $820m by 2024. The 3.5-inch client and consumer HDD market is estimated to decline from $5.72bn in 2019 and $4.2bn in 2020 to $2.55bn by 2024. 

Surveillance drives will see some growth, with shipped units growing at nearly four per cent CAGR for 2019-2024.

3D XPoint patent suit against Micron and Intel is allowed to proceed

A US lawsuit filed two years ago by the liquidator of a bankrupt patent company against Intel and Micron has been allowed to proceed. The ruling, issued on October 1 by Judge Thomas J. Tucker, exposes Intel to claims that it has no right to its core 3D XPoint technology.

ECDL Trust accuses Micron and Intel of colluding fraudulently in transferring intellectual property to Micron and arranging royalty-free licenses with Intel. The plaintiff also alleges Intel has no right to certain technology used in its Optane 3D XPoint products.

In its filing, ECDL Trust said the technology, including non-volatile chalcogenide phase-change memory (PCM) and an Ovonic switch, was developed by Energy Conversion Devices (ECD), a company founded by Stan Ovshinsky to commercialise his inventions.

In 1998-1999, ECD signed a deal with Micron CTO Tyler Lowrey and Ovonyx, a newly-formed company that was set up to commercialise this IP. The royalty deal required Ovonyx to pay 0.5 per cent of its revenues on a quarterly basis to ECD. Lowrey subsequently became Ovonyx CEO.

Ovonyx earned $58m revenues from sub-licensing the ECD technology from the beginning of 2000 until 2012, when it stopped paying royalties to ECD.

Ovonyx stockholders included ECD, Lowrey, Intel, and former Micron CEO and chairman Ward Parkinson. He was also a VP and Director of Ovonyx.

ECD went bankrupt under Chapter 11 in February 2012. Energy Conversion Devices Liquidation Trust (ECDL Trust) was set up in the ECD liquidation plan and ECD subsequently sold its Ovonyx stock to Micron in August 2012. Micron owned 100 per cent of Ovonyx after July 2015.

In July 2015 the patents held by Ovonyx were transferred to a separate company, Ovonyx Memory Technology LLC (OMT), three days after Intel and Micron publicly launched 3D XPoint memory.

ECDL Trust argues the royalty payments to ECD should have continued after 2012 and is suing Lowrey, Ovonyx, OMC, Intel and Micron for the missing payments, in a Michigan bankruptcy court.

The defendants in the ECDL Trust lawsuit filed seven motions to dismiss ECDL Trust’s claims. Judge Tucker issued his judgement on October 1st, granting some motions and denying others. Tucker denied OMT’s motion to dismiss the fraudulent transfer complaint, and denied Intel’s motion to dismiss ECDL Trust’s claim it has no right to the Ovonyx IP used in 3D XPoint products.

The lawsuit is Case No. 12-43166 in the United States Bankruptcy Court, Eastern District of Michigan, Southern Division.

Hardware fault shuts Tokyo Stock Exchange for the day

A data storage hardware failure was to blame for the day-long Tokyo Stock Exchange outage on Thursday October 1.

In a press conference, TSE executives “squarely accepted responsibility for the incident, rather than trying to deflect blame onto the system vendor Fujitsu Ltd,” Bloomberg reports. “All the responsibility lies with us as the market operator,” TSE CEO Koichiro Miyahara said. “Fujitsu is merely a vendor that supplies the equipment.”

Fujitsu spokesman Takeo Tanaka said: “We apologise for any inconvenience caused to the concerned parties because of a failure in the hardware we delivered.” Fujitsu is working with the TSE to prevent a recurrence of the problem.

This is a refreshing example of companies taking it on the chin.

Failover failure

The Tokyo Stock Exchange (TSE) runs on a Fujitsu-developed Arrowhead hardware and software system. On Thursday, a data storage component, called the Number 1 Shared Disk device, detected a memory error.

In the event of a primary database failure, the database software was supposed to initiate an automated switchover to the secondary database. However, the automated failover to the Number 2 Shared Disk device did not happen.

TSE IT staff manually forced a changeover to the Number 2 device. But they then faced a total system reboot to start trading, with ongoing trading orders left hanging and incomplete.
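In outline, the switchover logic that should have fired looks something like this (an illustrative Python sketch of generic primary/secondary failover, not Fujitsu's actual Arrowhead code):

    # Illustrative primary/secondary failover logic
    class Device:
        def __init__(self, name: str):
            self.name = name
            self.faulted = False

    def failover(primary: Device, secondary: Device) -> str:
        if not primary.faulted:
            return f"{primary.name} healthy, no action"
        if not secondary.faulted:
            return f"automated switchover to {secondary.name}"
        return "switchover impossible: manual intervention required"

    disk1 = Device("Number 1 Shared Disk")
    disk2 = Device("Number 2 Shared Disk")
    disk1.faulted = True                    # the memory error of October 1
    print(failover(disk1, disk2))           # at TSE, this automated step never ran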

That was unacceptable and the exchange, the third largest in the world, had to shut down while a proper recovery took place.

The Shared Disk device has a central role, storing management data used by 400 Fujitsu Primergy RX2540 M4 system servers in the Arrowhead trading system. It also handles commands and ID/password combinations for terminals that monitor trades.

ETERNUS DX8900 S4 array.

Arrowhead uses Fujitsu’s PRIMEFLEX for HA Database, a converged infrastructure system, with software running on integrated Fujitsu hardware. This includes PCIe-connected SSDs and an ETERNUS storage array. There are primary and secondary databases synchronised through mirroring technology. Fujitsu discontinued the PRIMEFLEX for HA Database in March 2017. 

AWS Outposts racks up S3 support

Amazon has added S3 support to the on-premises AWS Outposts cloud-in-a-rack.

Using the Outposts converged infrastructure rack, customers deploy an all-AWS hybrid cloud within their own data centres.

AWS Outposts rack

They can now execute applications on the Outposts servers using faster-access local data instead of S3 stores in the AWS cloud. The same S3 Console, APIs, and SDKs are used for both environments.

This enables admins to redundantly store data across multiple devices and servers on an Outposts rack. The new service is size limited: users can only add 48TB or 96TB of S3 storage capacity to each rack and create up to 100 buckets.
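For the curious, bucket creation on an Outpost goes through AWS's S3 Control API rather than the regular S3 endpoint. A minimal boto3 sketch (the Outpost ID, account ID and VPC below are placeholders; see AWS's documentation for the full access-point workflow):

    import boto3

    # S3 on Outposts buckets are created via the S3 Control API
    s3control = boto3.client("s3control", region_name="us-west-2")

    bucket = s3control.create_bucket(
        Bucket="my-outposts-bucket",
        OutpostId="op-0123456789abcdef0",  # placeholder Outpost ID
    )
    print(bucket["BucketArn"])

    # Objects are then accessed through an access point in the Outpost's VPC
    s3control.create_access_point(
        AccountId="111122223333",          # placeholder account ID
        Name="my-outposts-ap",
        Bucket=bucket["BucketArn"],
        VpcConfiguration={"VpcId": "vpc-0abc123"},  # placeholder VPC
    )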

If you are using no more than 11TB of EBS storage on an existing Outpost today you can add 48TB of S3 storage with no hardware changes on the existing systems. Other configurations require additional hardware.

AWS video about S3 on Outposts

All S3 data stored in Outposts is encrypted with SSE-S3. An AWS DataSync service moves data to and from AWS cloud regions. The transfer is encrypted and can be automated, with selectable network bandwidth. The data can be deduped and compressed to lower network costs.

AWS Outposts is not priced as a pay-as-you-go service. Customers purchase capacity for a three-year term with a number of different payment schedules.

AWS Outposts launched late last year and at the time came with Elastic Block Store (EBS) support. RDS (Relational Database Service) support was added in July. In September, various third-party file systems and data protection services were made available from Clumio, Cohesity, Commvault, CTERA, Pure Storage, Qumulo and WekaIO, with more added since.

Glean more information from an AWS News Blog by tech evangelist Martin Beeby.

GigaOm: Qumulo tops Scale Out File Systems leader group

GigaOm lists 20 scale-out file system (SOFS) suppliers in a hotly-contested market, and a dozen, led by Qumulo, are duking it out in a leaders’ category.

Report author Enrico Signoretti places the suppliers in a 4-circle, 4-axis, 4-quadrant Radar screen. Qumulo, VAST Data, Quobyte, DDN, and Commvault are the top five in the Leaders’ area.

GigaOm SOFS radar screen diagram.

Unstructured data today accounts for up to 90 per cent of the total data under management for many enterprises. This explains the strong demand for SOFS and why so many suppliers are jockeying for position. The vendors form three distinct groups.

Scale-out file systems have received a consistent boost as enterprises have turned to high-performance computing (HPC) setups to analyse and process massive datasets. That means parallel-access file systems such as IBM Spectrum Scale and Panasas have moved into the enterprise space. Conversely, mainstream enterprise filers such as Dell EMC Isilon and NetApp have moved upmarket.

A bunch of startups such as Qumulo, VAST Data and Weka have also arrived on the scene. They scent an opportunity to scale further and boost performance through software and hardware developments.

Signoretti comments: “Most of the complexity of scale-out architectures is hidden behind the scenes today. Systems are more balanced than in the past, with improved efficiency, while providing the same or even better performance. All of this is thanks to the adoption of new technologies and integrations that take advantage of public cloud and other types of storage systems.”

He says the list of suppliers in the market is very long: “All scale to multi-petabyte capacities or more, and most exhibit performance as their other primary characteristic. But it is also true that there is increasing demand for data management and hybrid cloud integration – an area where most solutions remain immature. Many vendors are concentrating their efforts in this area to meet growing demand.”

Signoretti’s reports are available to subscribers but you can read the introductory paragraphs and inspect the diagrams by visiting GigaOm’s website.

VMware wants to play nice with Nvidia DPUs

VMware and Nvidia announced yesterday they are working to make VMware software work better with Nvidia chips. They say the joint initiative, dubbed Project Monterey, will “introduce a new security model that offloads hypervisor, networking, security and storage tasks from the CPU to the DPU”.

The aim is to offload hypervisor, networking, security and storage tasks from a host CPU to Nvidia’s BlueField data processing unit (DPU). This should be useful for AI, machine learning, and high-throughput, data-centric applications, according to the companies.

Nvidia CEO Jensen Huang said in the launch announcement: “Nvidia DPUs will give companies the ability to build secure, programmable, software-defined data centres that can accelerate all enterprise applications at exceptional value.”

Paul Perez, SVP and CTO, Infrastructure Solutions Group at Dell Technologies, also provided a statement: “We believe the enterprise of the future will comprise a disaggregated and composable environment.” 

SmartNIC, DPU and BlueField-2

Dell said VMware Cloud Foundation will be able to maintain compute virtualization on the server CPU while offloading networking and storage I/O functions to the SmartNIC CPU. VMware has taken the first step to achieve this by enabling VMware ESXi to run on SmartNICs.

A SmartNIC or DPU is a programmable co-processor that runs non-application tasks from a server CPU, so enabling the server to run more applications faster. DPUs can compose disaggregated data centre server compute, networking and storage resources. They can also function as intelligent network interface cards that provide security services and network acceleration.

Nvidia’s BlueField-2 is a Mellanox system-on-chip (SoC) that integrates a ConnectX-6 Dx ASIC network adapter with a PCIe Gen 4 x16 lane switch, 2 x 25/50/100 GbitE or 1 x 200GbitE ports, and an array of 8-core, 64-bit Arm processors. This provides an integrated crypto engine for IPsec and TLS cryptography, integrated RDMA and NVMe-oF acceleration, and dedupe and compression.

Use cases

Three use cases are envisaged. First, BlueField-2 can be used with disaggregated storage, which it virtualizes and enables remote, networked storage to be part of a composable infrastructure. Second, BlueField-2 can provision bare metal servers as a CSP operator service to cloud tenants.

VMware said it will re-architect VMware Cloud Foundation to enable disaggregation of the server including support for bare metal servers, a new Cloud Foundation facility. It will enable an application running on one physical server to consume hardware accelerator resources such as FPGAs from other physical servers. 

With ESXi running on the SmartNIC, customers will be able to use a single management framework to manage all their virtualized and bare metal compute infrastructure.

Third, BlueField-2 can be used for micro-segmentation at endpoints to isolate application workloads and their resources from each other.

There is a security aspect to Project Monterey. Each SmartNIC is capable of running a fully-featured stateful firewall and advanced security suite. Thousands of tiny firewalls will be able to be deployed and automatically tuned to protect the specific application services that make up an application.

Project Monterey is available as preview code.

Multiple open DPU partnering

VMware is collaborating with Intel, Nvidia and Pensando, and system vendors Dell, HPE and Lenovo, to deliver Project Monterey systems. Dell said it could deliver automated systems using SmartNICs from a broad set of vendors.

DPU suppliers include three startups: Fungible, Nebulon, and Pensando. Pensando recently announced it will provide its DPU as a factory-supported option on HPE servers across the VMware Cloud Foundation product line, including vSphere, VSAN, and NSX. Customers will be able to access Pensando’s platform directly within VMware environments.

Second VMware Nvidia partnership

Separately, VMware announced at VMworld 2020 yesterday that it is jointly building a deployment platform for VMware-controlled servers to run AI software on attached Nvidia A100 GPUs. The platform combines VMware’s vSphere, Cloud Foundation and Tanzu container orchestration software with Nvidia’s NGC software.  

NGC (Nvidia GPU Cloud) is a website catalogue of GPU-optimised software for deep learning, machine learning, and high performance computing. NGC software is supported on a select set of pre-tested Nvidia A100-powered servers expected from leading system manufacturers.

Scality’s SOFS in Azure makes Blobs blindingly fast

Scality has reached 1Tbit/sec in technical previews of its SOFS (scale-out file system) software on Azure.

According to a Scality FAQ, SOFS in Azure has “really massive performance that is typically achievable only by specialised HPC file systems on costly direct attached storage. The real power here is that linear scaling of cloud VMs lets SOFS achieve essentially any level of performance. This is very powerful, and can be used to solve key use cases in energy research, media and entertainment, big data analytics, AI/ML and more.”

The object storage supplier suggests many other cloud file services can only use capacity allocated to the local virtual machines they sit on. For example, Microsoft Azure Files has limitations in single file system size (100TB storage capacity) and maximum throughput per file system (300MB/sec). 

Scality SOFS on the other hand scales out to hundreds of petabytes with over 80GB/sec of performance.

The preview code has been measured at 1 terabit per second (c.125,000MB/sec) on Azure premium flash storage, and the performance scales out linearly for both read and write workloads. Azure SOFS scales to over 650Gbit/sec (81,250MB/sec) throughput on Azure’s Blob Hot Access Tier.
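Converting those line rates into byte throughput is a straight division by eight, using decimal units as above (a one-line Python check):

    # bits/sec -> MB/sec (decimal): divide by 8 bits/byte, then 10^6 bytes/MB
    def mb_per_sec(bits_per_sec: float) -> float:
        return bits_per_sec / 8 / 1_000_000

    print(mb_per_sec(1e12))   # 1Tbit/sec   -> 125,000 MB/sec
    print(mb_per_sec(650e9))  # 650Gbit/sec -> 81,250 MB/sec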

SOFS in Azure

Giorgio Regni

Scality announced the Azure SOFS development in February. File system metadata is stored in the Azure Cosmos DB and file data payloads are kept as objects in Azure Blob storage.

Giorgio Regni, CTO, said: “In late 2019, we decided to port our proven SOFS code base to Azure. … some of Azure’s differentiated features, such as a single API for storage tiers and Azure Data Lake Storage (ADLS), enabled us to quickly deliver an integrated solution on top of Azure Blob storage.”  

By using the Blob service for data, instead of the Azure Files service, customers get charged $0.0184 per GB per month. Azure Files costs more – $0.24/GiB/month for the premium Files service, $0.06/GiB/month for transaction-optimised files, $0.0255/GiB/month for general purpose (hot) files, and $0.015/GiB/month for the cool tier.
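At those list rates the gap adds up quickly. A rough monthly bill for 100TB, ignoring transaction and egress charges and treating GB and GiB as equivalent for estimation purposes (a quick Python sketch):

    # Approximate monthly storage cost for 100TB at the quoted list prices
    capacity_gb = 100 * 1000                # 100TB in decimal GB

    blob_hot = capacity_gb * 0.0184         # Blob, as used by Scality SOFS
    files_premium = capacity_gb * 0.24      # Azure Files premium tier

    print(f"Blob hot:      ${blob_hot:,.0f}/month")       # ~$1,840
    print(f"Files premium: ${files_premium:,.0f}/month")  # ~$24,000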

Scality Azure SOFS diagram.

SOFS in Azure supports SMB 3.0, and NFS v3 and v4.1. Data remains in native Azure format and is fully accessible by any Azure service.

Target customers need 100TB or more of storage capacity, throughput in the GB/sec to 100GB/sec range, and on-demand, bursty use cases (applications that do not require 100 per cent full-time processing).

SOFS in Azure is stateless and delivered as a software image that can be deployed on Azure cloud Virtual Machine (VM) instances. It is hosted in a customer’s Azure subscription and connects to the customer’s Azure Blob storage accounts. Any number of virtual machines can be spun up on-demand to linearly scale performance, and SOFS tiers data across Azure Blob to optimise performance and costs.

SOFS in Azure is available for selected customers.