
VMware and Nutanix grab lion’s share of spoils, in ‘difficult quarter’ for HCI vendors

IDC’s converged systems tracker for Q2 2020 shows a decline in a “difficult quarter”. But HCI sales did grow well in China and Japan, and HPE did well with its HCI systems.

As is now traditional, VMware led Nutanix and these two were a long way ahead of trailing pair HPE and Cisco.

IDC splits the market three ways:

  • HCI – $1.9bn revenues, up 1.1 per cent, 47.1 per cent revenue share
  • Certified Reference Systems & Integrated Infrastructure – $1.5bn revenues, down 7.6 per cent, 39.1 per cent share
  • Integrated Platforms – $544m revenues, down 13.1 per cent, 13.8 per cent share

IDC senior research analyst Paul Maguranis said in a statement: “The certified reference systems & integrated infrastructure and integrated platforms segments both declined this quarter while the hyperconverged systems segment was able to witness modest growth despite headwinds in the market.”

Charting revenue share trends since 2017 shows that certified reference systems like FlexPod are closing the gap with HCI (hyperconverged infrastructure), as HCI revenues decline from a peak at the end of 2019:

B&F Chart using IDC numbers.

IDC looks at the top three vendors in the HCI market, checking the branded product sales and, separately, the HCI software owner’s sales. Both VMware’s and Nutanix’s HCI software are sold by other vendors such as HPE.

Branded HCI sales numbers show a growth spurt by HPE:

HPE’s Y/Y 53.5 per cent revenue growth, from $84.7m to $130m, stands out from declines by Dell, Nutanix and the rest of the market. The picture revealed by slicing revenues by SW owner is quite different.

HPE’s software-owner revenues for 2Q20 were $83.3m, meaning it sold $46.7m ($130m – $83.3m) of someone else’s HCI software running on its servers. We understand this to be a mix of VMware and Nutanix. Without this software, HPE’s own SimpliVity HCI sales (IDC counts only SimpliVity) fell 4.9 per cent. HPE notes Nimble dHCI grew 112 per cent year-over-year in the period.
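The split is simple subtraction (our arithmetic from IDC’s published figures):

```python
# HPE Q2 2020 HCI revenue, split by brand vs software owner (IDC figures).
branded_sales_m = 130.0        # HCI systems sold under HPE's brand
hpe_sw_sales_m = 83.3          # HCI revenue where HPE owns the software
third_party_sw_m = branded_sales_m - hpe_sw_sales_m
print(round(third_party_sw_m, 1))  # 46.7 -> $46.7m of VMware/Nutanix SW on HPE servers
```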

This HCI software owners table also shows Cisco suffering a 32.8 per cent fall in HCI revenues Y/Y. Both VMware and Nutanix outgrew the market.

We checked to see if HCI is taking more revenue from the overall external storage market. The Q2 IDC numbers for both storage categories are:

  • HCI – $1.9bn revenue; 1.1 per cent change Y/Y.
  • External storage – $6.26bn revenue, -1.1 per cent change Y/Y.

Charting the trends to see the longer-term picture shows a growing gap between the two:

B&F Chart using IDC numbers.

The HCI-external storage sales gap (blue line on chart) has declined for the most recent two quarters in what could be a seasonal pattern. In each year on the chart the gap narrows in the fourth quarter, when HCI sales peak more strongly than external storage sales.

NetApp is morphing into a cloud beast. But can it change its spots quickly enough?

Reports emerging from this week’s NetApp Financial Analyst Day show the storage hardware veteran is changing its identity to present itself as a hybrid cloud data services company.

But according to William Blair analyst Jason Ader, “NetApp remains a ‘show-me’ story – in particular with respect to meeting its cloud targets and growing share in a crowded on-premises storage market.”

Wells Fargo senior analyst Aaron Rakers said NetApp emphasised the importance of its Cloud Data Services (CDS) unit. He quoted this claim from Anthony Lye, who runs CDS: “NetApp’s Cloud Volumes is the #1 shared storage platform (NFS, SMB), which is sold as a first party solution by Microsoft and available on GCP and AWS.

“The Spot acquisition brings compute and storage optimisation with a portfolio of services that provision, run, and scale applications in multiple clouds. With CloudJumper, NetApp can provide a virtual desktop service (VDS) and a managed virtual desktop service in the cloud. The company’s Cloud Insights solution then provides real time analytics across these services.” 

CEO George Kurian said: “NetApp’s strategy is to bring the simplicity and flexibility of cloud to the enterprise data center and to bring enterprise data services to the public cloud.” He noted that 93 per cent of enterprises now have a multicloud strategy and 87 per cent have a hybrid cloud strategy.

Cloud first

NetApp expects to drive revenue growth via all-flash array share gains and CDS expansion, Rakers reported. “The company now expects CDS ARR (Annual Recurring Revenue) to reach $250-$300m exiting F2021 (+141 per cent y/y at midpoint), $400-$500m exiting F2022 (+64 per cent y/y at midpoint), and >$1B exiting F2025. This compares to NetApp exiting F1Q21 with ARR at $178m.” 

NetApp pointed out 80 per cent of its engineers are software engineers. All-in-all, Rakers thinks “NetApp will benefit from its continued work to establish a software-first narrative.” 

Ader also highlighted NetApp’s CDS-first story. “To hammer home its bullish outlook on the cloud opportunity – and its current momentum – the company initiated a cloud ARR target of $1bn in fiscal 2025.”

“Management sees cloud services as complementary versus cannibalistic to its business,” he wrote, “with the firm already seeing its cloud technology bringing new logos and workloads into the fold.”

He said NetApp provided “incremental detail on the two strategic priorities outlined in recent quarters: 1) expanding the scale and scope of its cloud business, aided by the recent acquisitions of Spot, CloudJumper, and Talon and its deep partnerships with CSPs; and 2) returning to enterprise storage market share gains, primarily through differentiation in all-flash arrays (AFAs) and object storage.”

All-flash

Ader reported NetApp will have “a mid-range storage refresh (from hybrid systems to AFAs)”. Timings were not mentioned. Also, “Management noted that AFAs still only account for about 25 per cent of NetApp’s installed base systems, leaving significant runway for AFA refreshes over the next several years.”

Ader said NetApp’s Brad Anderson “commented that he expects virtually all performance workloads will move to all-flash in the near term while an increasing percentage of capacity-based systems will move to all-flash in the future as the cost of NAND flash comes down.”

NetApp estimates nine per cent CAGR for the AFA market, and 13 per cent CAGR for object storage. It sees object storage growing as a cold data tier for all-flash array data and as archiving data for verticals such as media and entertainment, and oil and gas.

“Management noted that NetApp has grown its object storage sales a striking 50 per cent annually over the last five years, largely under the radar of investors,” Ader wrote.

Seagate ships 18TB Exos disk drive

Seagate has begun shipping the 18TB nearline Exos X18 disk drive and is quoting an MSRP of $561.75.

The nearline disk drive capacity race is hotting up – Western Digital has shipped 18TB drives since July and Seagate aims to ship 20TB drives by the end of the year. We expect Seagate to ship 18TB IronWolf NAS drives later this month. 

The company today also released upgraded Exos AP controllers that enable its AP-2U12 storage arrays to handle Exos X18 drives.

Ken Claffey, GM of enterprise data solutions at Seagate, said the Exos X18 drives and upgraded controllers mean the company “offers the industry’s leading density and configurability with ease of deployment for data lakes and private storage clouds.”

Exos X18

The helium-filled Exos X18 spins at 7,200rpm and has either a single 6Gbit/s SATA or dual 12Gbit/s SAS interface. The drive has a 256MB cache, features Instant Secure Erase, is rated at 2.5 million hours MTBF, and comes with a five-year warranty.

We think it has nine platters, like the 16TB Exos, and understand it is the highest capacity conventionally recorded drive Seagate will produce – before moving to heat-assisted magnetic recording (HAMR) technology at the 20TB capacity level.

AP arrays

Seagate today announced the Exos AP 2U12 array, using the 18TB spinners, and said the AP 4U100 is getting a new controller. It also has a Nytro AP 2U24 system coming soon. 

With the latest announcement there are four AP systems:

  • Exos AP 2U12 – 12 x 3.5-inch 18TB Exos drives (216TB)
  • Nytro AP 2U24 – 24 x 2.5-inch SSDs (coming soon)
  • AP 4U100 – up to 1.6PB
  • AP 5U84 – up to 1.344PB

Seagate got into storage arrays in 2015 via the $694m acquisition of Dot Hill, an OEM supplier of drive array chassis. This business is now part of Seagate’s Cloud Systems division. The portfolio includes the Exos drives and Application Platform (AP) arrays, which Seagate describes as compute and convergence platforms. They are sold to OEM customers such as Cloudian, in a continuation of the Dot Hill business model.

WANdisco launches LiveData Migrator on AWS

WANdisco launched LiveData Migrator on AWS today. This is automated, self-service software for moving unstructured data while the source dataset is in active use. WANdisco said customers can initiate migration in minutes, without any consultancy or professional services. The company is offering a free trial with up to 5TB of data moved.

CEO David Richards said: “Regardless of company size or technical expertise, LiveData Migrator enables businesses to migrate their data risk-free to the cloud on a massive scale without any disruption to business operations.”

GoDaddy has already signed up. It is using LiveData Migrator to migrate an initial 500TB of actively-used Hadoop data to AWS S3. The aim is to make the data available for cloud analytics using AWS EMR. That 500TB will expand to multiple petabytes.

Wayne Peacock, GoDaddy’s chief data and analytics officer, said: “Rather than running an internal time-consuming and costly manual migration project, using LiveData Migrator has helped us avoid disruption to our production processes and made 70TB of data immediately available for Amazon S3 testing.”

WANdisco supplies Live Data Platform replication facilities to move data from a data centre to another data centre or AWS and, in preview mode, Azure. “Live” means that the source data can be read and updated while migration is ongoing to a target site.

The UK company is heavily loss-making and the revenue picture is muddied as it transitions from selling perpetual licences to subscription. However, it is buoyed by a recent capital raise of £25m and on the back of strong demand for its Azure software “expects to deliver significant revenue growth in FY2021 with the Board expecting a minimum revenue of $35 million.”

Snowflake completes biggest software IPO of all time

Snowflake’s shares begin trading today on the NYSE, with the company raising $3.36bn at a valuation of $33.6bn. This is the largest software IPO of all time.

Frank Slootman, Snowflake chairman and CEO

Snowflake CEO Frank Slootman received an IPO boost when the Sage of Omaha, Warren Buffett, said his Berkshire Hathaway investment business will invest $250m in Snowflake, and Salesforce, an existing investor, is buying another $250m in stock.

Some $1.4bn of VC capital was invested in Snowflake before the IPO and a $479m G-round in February valued the company at $12.4bn.

Snowflake, established in 2012, provides a public cloud-based data warehouse, running on AWS, Azure and Google. Using Snowflake’s cloud facilities means customers can run analytic routines without owning and operating their own data warehouse systems.

Western Digital speeds up shingled disk write performance

Phalanx of Sumerian troops

Western Digital has found a software partner called Kalista.IO that makes host-managed shingled magnetic recording drives appear like normal drives to applications.

Shingled Magnetic Recording crams more read tracks onto a platter than conventional disks and so increases drive capacity. The technology partially overlaps write tracks, taking advantage of the fact that a disk drive’s write tracks are wider than its read tracks. The tracks are organised into groups or zones, with a space between them.

However, an entire zone of tracks has to be read, erased and re-written when new data replaces old data on an SMR drive. This is time consuming and can sometimes result in longer than normal latencies when a lot of writes are queued to a drive. These so-called tail latencies mean SMR drive performance is inconsistent and unpredictable.

The SMR drive can manage the rewrite process (drive-managed SMR) or a host server can do the job by modifying applications or adding a software layer (host-managed SMR). The need for application changes has slowed SMR adoption. Now WD thinks it has found a way to overcome this obstacle.
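The rewrite penalty described above can be sketched with a toy Python model (ours, not WD’s firmware): because the overlapped tracks in a zone can only be written sequentially, changing one block forces the whole zone to be rewritten.

```python
# Toy model of an SMR zone: overlapping write tracks mean a zone can only
# be appended to sequentially, so an in-place update becomes read-modify-rewrite.

ZONE_BLOCKS = 8  # illustrative; real SMR zones are typically 256MB

def update_block(zone: list, index: int, data: str) -> tuple:
    """Replace one block; return the new zone and blocks physically written."""
    # A conventional drive would write 1 block; an SMR zone rewrites everything.
    new_zone = zone[:index] + [data] + zone[index + 1:]
    return new_zone, len(new_zone)  # entire zone rewritten sequentially

zone = [f"blk{i}" for i in range(ZONE_BLOCKS)]
zone, written = update_block(zone, 3, "new")
print(written)  # 8 blocks written to change 1 -> 8x write amplification
```

When many such rewrites queue up, the amplified writes are what produce the inconsistent tail latencies the article describes.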

Phalanx

Kalista IO has developed ‘Olympus’ software which makes SMR or zoned disk drives faster at writes. Olympus is available as a hardware appliance, software package and virtualized image. It has two primary services: storage (Phalanx) with a unified object, block and file interface, and compute orchestration (Abacus). They can be deployed, scaled and used independently.

Phalanx works with conventional HDDs and SSDs and zoned block devices like SMR HDDs. There is no need to recompile or change existing application software and there is no Linux kernel module to install.

Phalanx concept

The software has a row and column architecture that prevents concentration of ‘hot writes’ to a drive. This minimises IO contention and scales performance with capacity.

The software has a device-aware, log-structured data layout. Phalanx eliminates random writes and minimises IO contention by evenly distributing the sequential write workload across drives, thus preventing hot write areas. Performance and capacity are scaled out by adding drives.

Phalanx has three internal software layers:

  • Data access layer provides file, object and block interfaces 
  • IO engine layer calculates optimal data placement and prioritises incoming requests 
  • Device management layer optimises IO requests for each device and manages its state 

There is a separate metadata store to increase scalability, efficiency and robustness.
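The hot-write avoidance described above can be illustrated with a minimal sketch; this shows the general log-structured, round-robin technique, not Kalista’s actual implementation.

```python
# Sketch of a log-structured layout that spreads sequential appends evenly
# across drives, so no single drive becomes a 'hot write' target.
# Illustrative only -- not Kalista IO's code.

from collections import defaultdict
from itertools import cycle

class LogStripedStore:
    def __init__(self, num_drives: int):
        self.logs = defaultdict(list)               # per-drive append-only log
        self.next_drive = cycle(range(num_drives))  # round-robin placement

    def write(self, key: str, value: bytes) -> int:
        drive = next(self.next_drive)           # even distribution of writes
        self.logs[drive].append((key, value))   # sequential append, never random
        return drive

store = LogStripedStore(num_drives=4)
placements = [store.write(f"obj{i}", b"data") for i in range(8)]
print(placements)  # [0, 1, 2, 3, 0, 1, 2, 3] -- writes spread evenly
```

Because every drive only ever receives sequential appends, this layout suits zoned devices like SMR HDDs, where random in-place writes are the expensive operation.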

Phalanx with Ultrastar

Kalista is working with Western Digital to apply Phalanx to WD’s Ultrastar HC600 host-managed SMR drives, with Hadoop and Ceph usage in mind. The companies say Phalanx provides consistent and predictable performance with no long tail latencies.

For comparison, the 14TB HC530 is a conventionally recorded, non-SMR drive, while the 15TB HC620 is an SMR drive.

A joint Kalista and WD Phalanx solution brief describes the performance advantages in Hadoop and Ceph workloads. The testers compared the performance of Phalanx with host managed SMR HDDs against legacy file systems with conventional non-SMR HDDs.

Phalanx delivered a 58 per cent increase in average IOPS and a 10x gain in performance consistency with Ceph Rados write bench. It provided a 19 per cent higher average throughput with Hadoop TestDFSIO read bench. 

Kalista.IO

LinkedIn lists one Kalista IO employee – Albert Chen, who founded the company last year. Chen is a software engineer whose career includes stints at 3Com, Nishan, MaXXan Systems, Microsoft and Western Digital. At WD he was an ecosystem architecture engineering program director. 

Pure Storage buys Portworx for $370m

Pure Storage is buying Portworx, the container storage software startup, for $370m in cash.

Update. Pure Storage says it is buying Portworx. 16 September 2020.

By acquiring Portworx, Pure is obtaining one of the leading container storage suppliers and improving its ability to supply storage to enterprises that have adopted Kubernetes. It says it will be able to provide Kubernetes data services that run on any cloud, any storage, and any infrastructure.

Pure’s chairman and CEO Charles Giancarlo issued a quote: “We are thrilled to have the Portworx team and their groundbreaking technology joining us at Pure to expand our success in delivering multi-cloud data services for Kubernetes.” It is Pure’s largest acquisition to date.

Murli Thirumale, Portworx CEO and co-founder, said: “The traction and growth we see in our business daily shows that containers and Kubernetes are fundamental to the next-generation application architecture and thus competitiveness. We are excited for the accelerated growth and customer impact we will be able to achieve as a part of Pure.”

Last month Blocks and Files asked Pure international CTO Alex McMullan about the company’s intentions towards Portworx. “We’re always looking to explore ways of growing our business,” he replied, “but right now I can’t confirm anything specific.”

Portworx supplies container storage software, which includes backup and restore features. The company was founded in 2015 by Murli Thirumale and CTO Gou Rao, who previously set up deduplication startup Ocarina and sold it to Dell in 2010.

Murli Thirumale (L) and Gou Rao

To date, Portworx has bagged $55.5m in three funding rounds, including $27m last year. There are around 100 employees. Sales grew 500 per cent from 2016 to 2017, from an undisclosed base, and 100 per cent from 2018 to 2019. The company claims its Q2 2020 performance was a record.

All-flash array vendor Pure supplies on-premises FlashArray block, FlashBlade for file and object storage, and has a Cloud Block Store offering for public clouds. Pure has supported container storage since 2018 with the Pure Storage Orchestrator (PSO). This receives storage provisioning requests and responds in real time, setting up paths between containers and the back-end Pure arrays.

Pure says that, by combining Portworx container data services with Pure’s data platforms and PSO software, Pure will provide a comprehensive suite of data services that can be deployed in-cloud, on bare metal, or on enterprise arrays, all natively orchestrated in Kubernetes.

In April, Pure teamed up with Kasten to integrate its storage arrays with Kasten’s containerised app backup.

VAST Data tackles AI workloads at LightSpeed

VAST Data is enabling customers to tackle bigger AI and HPC workloads courtesy of a new storage architecture that works hand in glove with Nvidia GPU Direct.

Unusually, the technology – called LightSpeed – employs the NFS storage protocol rather than the parallel access file systems that most HPC competitors use. VAST claims the LightSpeed NVMe drive enclosure delivers twice the overall AI workload performance of the prior chassis.

VAST says a legacy NFS client system typically delivers around 2GB/s, which is why HPC systems have used parallel access file systems for many years to get higher speeds. LightSpeed nodes can each deliver 40GB/s to Nvidia GPUs, with the GPU Direct interface using the NFS protocol.

The company has demonstrated 92GB/s data delivery speed using plain NFS – nearly 3X the performance of VAST’s NFS-over-RDMA and nearly 50X the performance of standard TCP-based NFS.

LightSpeed customer Bruce Rosen, executive director at the Martinos Center for Biomedical Imaging, provided a supporting quote: “VAST delivered an all-flash solution at a cost that not only allowed us to upgrade to all-flash and eliminate our storage tiers, but also saved us enough to pay for more GPUs to accelerate our research.”

Jeff Denworth, VAST Data’s VP Product, proclaimed: “Let data scientists dance with their data by always feeding their AI systems with training and inference data in real-time. The LightSpeed concept is simple: no HPC storage muss, no budgetary fuss, and performance that is only limited by your ambition.”

LightSpeed clusters

There are three branded LightSpeed cluster configurations, using nodes with a 2U x 500TB NVMe SSD chassis, VAST server and Ethernet or InfiniBand connectivity:

  • The Line – 2 x LightSpeed nodes putting out 80GB/sec, supporting 32 x Nvidia GPUs
  • The Pentagon – 5 x LightSpeed nodes providing 200GB/sec for 80 x Nvidia GPUs
  • The Decagon – 10 x LightSpeed nodes delivering 400GB/sec to 160 x Nvidia GPUs

VAST says it provides 2:1 data reduction and so a LightSpeed enclosure provides 1PB of effective capacity.

LightSpeed boxes are heavily skewed towards read performance; the Line cluster achieves 10GB/s when writing data – just 12.5 per cent of its read speed. The same relative read/write performance difference applies to the Pentagon and Decagon clusters.
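VAST’s headline figures reduce to simple arithmetic; here is our back-of-envelope check of the published numbers:

```python
# Back-of-envelope check of VAST's LightSpeed figures.
raw_tb = 500               # NVMe capacity per 2U enclosure
reduction = 2              # claimed 2:1 data reduction
effective_pb = raw_tb * reduction / 1000
print(effective_pb)        # 1.0 -> 1PB effective capacity per enclosure

line_read_gbs = 80         # The Line: 2 nodes x 40GB/s
line_write_gbs = 10
print(line_write_gbs / line_read_gbs)  # 0.125 -> writes at 12.5% of read speed
```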

AI is characterised by read-heavy, random I/O workloads, according to VAST, which cites deep learning frameworks such as computer vision (PyTorch, Tensorflow), natural language processing (BERT), big data (Apache Spark) and genomics (Parabricks). 

Read more in a LightSpeed eBook.

Seven storage suppliers settle on AWS Outposts

Seven storage suppliers have announced AWS Outposts support, in a co-ordinated PR move by AWS.

AWS Outposts is a fully managed service based on a converged infrastructure rack, Amazon’s public cloud in a box, providing an AWS computing environment within a customer’s data centre. It is integrated with the public AWS cloud for a consistent AWS experience across a customer’s data centres and its AWS accounts.

Scale-out filesystem vendor Qumulo has announced Outposts support by getting certified in the AWS Outposts Ready programme.

AWS Outposts rack.

Qumulo said customers can run and manage Qumulo file data workloads on-premises and may now choose to run them on AWS Outposts using Qumulo’s file system. They can connect that file data to AWS and run services like Amazon SageMaker (AI) and AWS IoT Greengrass.

Joshua Burgin, GM for AWS Outposts at Amazon Web Services, issued a quote: “With Qumulo File Data Platform for AWS Outposts, customers can benefit from a comprehensive data management solution for any application in their environment, on AWS Outposts, or in AWS Regions, for a truly consistent hybrid experience.”

Clumio, Cohesity, Commvault, CTERA, Pure Storage and WekaIO have also announced an AWS Outposts Ready badge, and most supplied a version of the Burgin quote above, adapted to refer to their own products. We expect more storage suppliers to follow suit.

ObjectScale: Dell EMC bets on another horse in the object storage race

Dell EMC racing towards multi-cloud IT

Dell EMC is developing a cloud-native object storage system called ObjectScale that will work with Tanzu and provision object storage via Kubernetes.

The company will integrate ObjectScale software with the vSAN Data Persistence Platform so that it works with VMware Cloud Foundation.

ObjectScale will enable object storage to be deployed anywhere that VMware Cloud Foundation with Tanzu is running – the data centre, an edge location, or the public cloud. Tanzu is VMware’s initiative to enable vSphere to run Kubernetes-orchestrated containerised apps alongside its own virtual machines.

Travis Vigil, Dell EMC SVP product management, writes in a blog: “We have engineered our object storage platform to take advantage of Kubernetes’ native automated deployment, scaling and management capabilities. This next generation of object storage software will be lighter, faster and deployable on existing ESXi-virtualized infrastructure.”

Lee Caswell, VP Marketing, CPBU, VMware, said: “ObjectScale on VMware Cloud Foundation is an important, new part of our cloud-native toolset.”

ObjectScale is a distributed, scale-out multi-tenant system with S3-compatible APIs and a single global namespace with eventual consistency. Dell EMC says it will have superior performance for both small and large objects.

Eventual consistency means that information about a write to one node’s data store is rippled through the distributed system over time so that the nodes see the same data. This contrasts with a relational database’s ACID consistency in which a write to the database is not committed until all database users are guaranteed to see the same data value.
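A toy replicated store (purely illustrative, not ObjectScale’s design) shows the trade-off: writes are acknowledged after landing on one node, and the other replicas only catch up when a background pass propagates the update.

```python
# Toy contrast of eventual consistency: a write is acknowledged before all
# replicas have seen it, then a background pass brings the nodes into line.

class EventualStore:
    def __init__(self, replicas: int):
        self.nodes = [{} for _ in range(replicas)]
        self.pending = []                     # updates not yet propagated

    def write(self, key, value):
        self.nodes[0][key] = value            # acknowledged after one node
        self.pending.append((key, value))     # replicate in the background

    def sync(self):                           # background propagation pass
        for key, value in self.pending:
            for node in self.nodes:
                node[key] = value
        self.pending.clear()

store = EventualStore(replicas=3)
store.write("k", "v1")
print([n.get("k") for n in store.nodes])  # ['v1', None, None] -- stale reads possible
store.sync()
print([n.get("k") for n in store.nodes])  # ['v1', 'v1', 'v1'] -- converged
```

Under the ACID model described above, the `write` call would instead block until every node held "v1", so no reader could ever observe the stale state.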

ObjectScale software runs on VMware nodes with vSAN to form an underlying cloud. This enables the nodes to cross-replicate for greater access, reliability and redundancy.

Every ObjectScale function is built as an independent layer, Dell says, making them horizontally scalable across all nodes and enabling high availability.

ECS and ObjectScale

Dell EMC already has an object storage platform called ECS. A Dell EMC spokesperson said: “We see the two products, ECS and ObjectScale, as serving two unique use cases and deployment models.

“Today we have a purpose-built object storage platform in ECS and will satisfy demand for a modernized, software-defined object storage platform with ObjectScale. Both products are important parts of our portfolio and both are designed to provide enterprise-grade object storage. However, they are each optimised for different usage and deployment scenarios.”

The spokesperson added: “ObjectScale is great for edge use cases, or when organisations want to deploy flexible, lightweight object storage close to the applications they support, and it integrates with VMware vSphere and VCF. Because of its software-defined nature and rich S3 compatibility, ObjectScale can also be deployed on existing virtualized infrastructure.”

Regarding ECS: “[It] is our current enterprise-grade object storage system, which most customers procure as a fully integrated turnkey appliance. … We are committed to delivering innovation on ECS, and you’ll see more updates and announcements on the ECS front later this year.”

The revamped Dell EMC Isilon range, now called PowerScale, also has object storage capabilities.

An ObjectScale Early Access programme will soon be available to enterprise customers in beta on VMware Cloud Foundation with Tanzu.

VMware adds files, objects and more in vSAN two-step

VMware is extending its Kubernetes capabilities by offering third-party integration hooks to vSAN. The company today also announced that the next vSAN update will add file services, Active Directory integration, and cloud-native object and file storage.

The company is adding the Kubernetes API and vSAN Data Persistence platform with Tanzu to VMware Cloud Foundation 4.1. This enables certified ISV partners to hook up their storage software to vSAN via a vSAN Direct facility. On launch day, three partners have signed up: MinIO, Cloudian and DataStax.

AB Periasamy

AB Periasamy, CEO of MinIO, said the VMware initiative “represents a tectonic shift for the Kubernetes community. The native Kubernetes API enables VMware’s customers to immediately access a rich ecosystem of cloud-native applications, many of which are natively integrated with MinIO.”

vSAN Direct gives the ISVs direct access to the storage drives used by vSAN. It enables vSAN to present object storage, file storage and a NoSQL database to cloud-native applications controlled by VMware Tanzu through Kubernetes. A vCenter admin can provision such persistent storage services to the apps using these layers of functionality.

B&F diagram showing vSAN DP and ISVs

MinIO markets its object storage as being fast. The vSAN DP integration enables MinIO to operate at the supervisor cluster level, directly on top of vSAN Direct. By using this data path, MinIO operates at read and write speeds approaching 183 GB/sec and 171 GB/sec respectively. 

DataStax intends to introduce vSAN DP plug-ins to enable enterprises to use Kubernetes APIs to provision and scale its stateful services.

DataStax vSAN DP integration diagram

Lee Caswell, VP Marketing, CPBU, VMware said: “We have partnered closely with DataStax so that customers can run, manage, and protect DataStax Enterprise NoSQL stateful workloads seamlessly on the VMware vSAN Data Persistence platform and minimise replication and management overhead.”

Cloudian said proof-of-concept evaluations of coming Cloudian vSAN DP software are in planning with multiple customers, and the software is planned for release by year end. 

Dell said an as-yet-unannounced product, Dell EMC ObjectScale, will integrate with vSAN DP. It also said that, with all this new software, developers can use Kubernetes APIs to provision and scale stateful services from the ISVs. They can do this on-demand in a self-service model with minimal administrator intervention.

vSAN 7 update 1

VMware also today announced Update 1 to vSAN 7. With this, HCI Mesh is introduced to separate compute and storage. This allows incremental scaling and capacity sharing across a cluster. 

Jeff Boudreau, president, Dell Technologies Infrastructure Solutions Group, issued a quote: “As the first HCI solution with a fully integrated Tanzu portfolio, Dell EMC VxRail offers the fastest path to Kubernetes when deployed with vSphere with Tanzu and a fully integrated Kubernetes environment when deployed with VMware Cloud Foundation with Tanzu.”

Users also get the option of running in compression-only mode, whereas before they had to run both compression and deduplication. This releases CPU cycles back to applications while still providing a measure of data reduction.

The update brings in Active Directory integration and Kerberos network authentication support, and the addition of Microsoft’s SMB v3 and v2.1 file access protocols.

VMware vSphere with Tanzu, vSphere 7 Update 1, vSAN 7 Update 1, and VMware Cloud Foundation 4.1 with Tanzu, are all expected to become available in VMware’s Q3 FY21 (ending Oct. 30, 2020). 

Scality adds all-flash support to RING object and file storage

Scality has added all-flash support for RING8 object storage software. The tweak takes Scality RING8 into new performance-sensitive file and object workloads.

Scality RING already supports flash for metadata and internal indexes, but data payloads were stored on hard disk drives until now. The company thinks it is now practical and cost-effective to support data payloads on flash, thanks to the decreasing cost of TLC and QLC 3D NAND SSDs.

Giorgio Regni, Scality CTO and co-founder, issued a quote: “This isn’t Scality’s first rodeo with flash. We have a decade of experience using flash media in the RING architecture to accelerate a high percentage of operations for file and object access.”

Giorgio Regni.

These workloads include single-tier CCTV, fast tier 1 backup and restore, tier 1 medical imaging (PACS), and big data analytics on file. Scality also has a scale-out file storage capability and notes many analytics applications, such as Splunk, support file storage while also embracing the S3 object API.

The company claims object storage competitors that support all-flash typically do not have an integrated file access capability. They rely instead on a file access gateway, which can limit scalability and performance at the file endpoint servers. Scality claims RING8 performs faster than these competitors, without supplying examples, and scales out. It says vendors with integrated file capability – such as Pure Storage’s FlashBlade – use proprietary hardware.

Benchmarks

In testing, a RING8 system had data payloads stored in six HPE Apollo 4200 servers, each with 16 x 12Gbit/s SAS SSDs (96 SSDs in total). This setup delivered more than 10GB/sec to accessing clients across a 40Gbit/s Ethernet link, fully saturating it.

The six Apollo boxes were connected across 100GbitE to a seventh Apollo Connector server running S3/SMB software, and the >10GB/sec was demonstrated with S3 and SMB.

Five client systems, with 1MB block sizes, read and wrote 1GB objects or files from the Connector. RING8 used 4+2 erasure coding, so 50 per cent more data was written to RING8 from the Connector.
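The 50 per cent figure follows directly from the 4+2 scheme; a one-line check (our arithmetic, not Scality’s):

```python
# Write overhead of a k+m erasure code: k data shards plus m parity shards.
def ec_overhead(k: int, m: int) -> float:
    """Extra data written relative to the payload, as a fraction."""
    return m / k

print(ec_overhead(4, 2))   # 0.5 -> 50 per cent more data hits the drives
# A 1GB object written with 4+2 coding therefore puts 1.5GB onto the RING8 nodes.
```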

A Scality FAQ claims “at least a 50 per cent performance boost from using just SAS HDD (since we are network limited, it is possible the increase is even higher.)” The company notes that the SMB file system protocol lacks parallelism, so there are welcome performance benefits from scale-out RING8’s SSDs.

Scality reckons RING8 all-flash performance will increase as more nodes are added and the company will run tests with 100GbitE client access networking to check this out.

It said lower latency makes all-flash RING8 especially suitable for applications needing random access to small files; there is proportionally less time saving with large files. It also says many, if not most, object workloads today feature large objects. All-flash RING8 can therefore serve both needs, whereas the random-access small-file workloads were previously out of its reach.

Cloudian added all-flash support to its HyperStore object storage last week, saying it can deliver over 2GB/sec read performance per node. Six such nodes would deliver 12GB/sec read performance.