
Nexsan QLC E32F all-flash array is ‘seven times faster than 10K HDDs’

Nexsan has announced the E32F, a 32-bay storage array that fits snugly between the low-end 18-bay E18F all-flash array and the high-end E-Series 48P and 60P arrays.

The E48P and E60P are hybrid flash and disk drive arrays in a 4U base chassis with 48 or 60 drives. The E32F has a 2U chassis – in common with the E18F – which is filled with TLC (3bits/cell) or QLC SSDs.

The E32F has a 245TB raw capacity maximum when filled with QLC drives and 491TB when filled with TLC drives – even though QLC flash SSDs use 33.3 per cent denser NAND than TLC SSDs. That’s because the maximum capacity TLC drive that Nexsan supports is 15.36TB and the highest capacity QLC drive it supports is 7.68TB.
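A quick back-of-the-envelope check, sketched here in Python for illustration, shows where those quoted figures come from:

```python
# Raw capacity = number of drive bays x largest supported drive size.
BAYS = 32
QLC_MAX_TB = 7.68    # largest QLC SSD Nexsan supports
TLC_MAX_TB = 15.36   # largest TLC SSD Nexsan supports

qlc_raw_tb = BAYS * QLC_MAX_TB   # 245.76 TB, quoted as 245 TB
tlc_raw_tb = BAYS * TLC_MAX_TB   # 491.52 TB, quoted as 491 TB
print(f"QLC: {qlc_raw_tb:.2f} TB, TLC: {tlc_raw_tb:.2f} TB")
```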

Even so, Nexsan claims the QLC E32F is the most compact, high density QLC SSD system on the market. It said the QLC E32F, which is priced below the TLC E32F, delivers seven times the performance of 10,000rpm HDDs, making it “an ideal replacement for HDDs in performance-sensitive workloads”.

The E32F is expandable with two additional drive enclosures up to 2PB-plus and supports 16 Gbit/s Fibre Channel and 10 Gbit/s iSCSI connectivity. The array’s software includes power management, snapshots, replication and integrations with Windows, VMware, Veeam, Commvault, Hyper-V and Xen.

Other QLC flash drive arrays include models from VAST Data and the Pure Storage FlashArray//C.

Intel lets us have a sneaky peek at 144-layer flash and gen 2 Optane plans

Intel’s Non-volatile Memory Solutions Group briefed the press today on its NAND and Optane plans and status. The TL;DR version is that Intel SSDs are all going 144-layer; gen 2 Optane drives will use PCIe gen 4 – and we’ll hear more in June.

Rob Crooke, head of Intel NSG, who led the briefing, said Keystone Harbor, a 144-layer 3D NAND QLC SSD, is on track to ship later this year. Also, Intel’s entire SSD portfolio will transition to 144-layer NAND in 2021. For collectors of Intel codewords, that technology is called Arbordale Plus.

See how Intel’s 144-layer NAND compares to the competition

Crooke announced Intel QLC SSD shipments have surpassed 10 million units, citing “tremendous momentum” for a game-changing technology. He said Intel is still developing its PLC (5 bits per cell) SSD technology.

Intel will ship the Alder Stream Optane SSD in single port form this year and dual port form in 2021. Alder Stream uses second generation 3D XPoint media, with four layers instead of gen 1’s two. It has a new controller ASIC with new firmware and the aforementioned PCIe 4 link.

Gen 1 2-layer 3D XPoint diagram.

Alder Stream capacity points are still being planned. The current DC P4800X dual-port drive is available with 375GB, 750GB, and 1.5TB capacities. A simple doubling, consequent on the layer count doubling, would mean 3TB maximum capacity.

Crooke said Intel uses its Rio Rancho, New Mexico facility for Optane-branded 3D XPoint technology development and has a number of plants capable of making 3D XPoint chips but has not decided yet what goes where. In the meantime, the company retains a chip supply agreement with Micron.

3D XPoint media is used in the H10 drive – which twins it with QLC NAND – as well as in Optane SSDs and Optane Persistent Memory. Intel said it is seeing increased momentum with Optane. For example, more than 85 per cent of customers that run a proof-of-concept test go on to deploy Optane in production. Kristie Mann, Intel senior director for Optane DC persistent memory products, said more than 200 of the Fortune 500 are Optane customers.

She confirmed a gen 2 3D XPoint DIMM, codenamed Barlow Pass, will launch later this year and told us to expect an Optane update in June. Current gen 1 DIMM capacities are 128GB, 256GB and 512GB. Perhaps we’ll see a 1TB DIMM product as well as the Alder Stream product announced at the June event.

The company said it has no plans for external fit, portable Optane products.

HPE plunks AMD EPYC 7002 into remote worker VDI-friendly SimpliVity HCI system

HPE has recast its hyperconverged systems, adding gen 2 AMD processors to increase power, raise VDI performance and cut costs for the work-from-home era.

These are general purpose systems but HPE’s messaging today is all about working from home, virtual desktops, and lowering costs. As part of this, HPE is making the Nimble Storage dHCI systems available through a GreenLake pay-as-you-go scheme. GreenLake already includes SimpliVity HCI systems.

Patrick Osborne, general manager of HPE SimpliVity, said in a statement: “The covid-19 global pandemic is an unprecedented situation that is affecting all businesses, our communities, and our way of life … customers are looking to rapidly unleash mobile productivity and desktop virtualization, and HPE SimpliVity and Nimble Storage dHCI solutions provide performance and flexible payment options for our customers.”

The low-end 1U SimpliVity 325 is based on the ProLiant DL325 Gen10 server, which uses a gen 2 AMD EPYC 7002 series processor.

On average, the SimpliVity 325 delivers twice as many virtual desktops per server as rival HCI vendors’ systems and cuts per-user VDI costs by 50 per cent, HPE claims. The higher capacity SimpliVity 380s continue to use Intel Xeon CPUs.

Nimble Storage dHCI twins a Nimble storage array with ProLiant servers, and the supported server range has expanded to 32 models. This includes the DL325 and DL385 systems with gen 2 EPYC processors and the Intel-powered DL560 and DL580 servers.

Nimble Storage dHCI.

A Nimble dHCI software update includes one-click, unified software upgrades for the server firmware, hypervisor and storage software.

Rob Collins, head of infrastructure and service management at PetSure, gets some airtime in HPE’s press release: “In the past, we were using AWS for our VDI workloads, but the costs were too high for us. With HPE Nimble Storage dHCI, we are able to reduce our operating costs by 50 per cent, improve our performance 2X for our VDI workloads, and achieve 50 per cent faster application provisioning.”

You can check out a SimpliVity 325 spec sheet and the Nimble dHCI spec sheet.

The SimpliVity 325 Gen10 is available to order now and ships at the end of the month. Nimble Storage dHCI systems, with the expanded server support and software upgrade, are available in the middle of the year. Current Nimble Storage dHCI systems are available now through GreenLake.

Datrium extends VMware DRaaS to edge and ROBO

Datrium has made its DRaaS with VMware Cloud on AWS – a disaster recovery service for VMware – available to edge and remote office/branch office (ROBO) sites.

The company enables a data centre VMware site to fail over to VMware Cloud on AWS if it is afflicted with a ransomware attack or other disaster. Datrium spins up the VMs and runs them there until the data centre disaster is fixed. The VMs then fail back to the now-clean data centre.

In a blog, Datrium CTO Sazzala Reddy said the company is “now extending that product to protect edge sites by enabling enterprises to store backups both locally and in the cloud, giving them the choice of recovery path based on WAN bandwidth. It’s a modern disaster recovery software for edge environments and remote office/branch office (ROBO) deployments.” 

Datrium software runs in the edge or ROBO site – it is downloadable to third party on-premises systems. The technology is based on immutable snapshots that are sent to the AWS VMware Cloud facility.

IT staff can “create an onsite backup copy of data, and they can recover locally or failover on demand to Datrium DRaaS with VMware Cloud on AWS, in the event of a ransomware attack or other disasters.” Recovery can be a single-click operation, using runbook automation.

There can be hundreds or thousands of edge sites, all managed via a central SaaS portal, Reddy says. “Datrium DRaaS for Edge environments is easy to use, safe and secure, low cost, and low touch.”

Datrium has also extended VMware coverage with Datrium DRaaS Connect for VMware workloads running on HCI, SAN and NAS systems. 

We are told Datrium will support the Azure cloud by the end of the year. We might expect that the company will support Hyper-V virtualized servers by then.

Nutanix EMEA staff asked to take two weeks’ unpaid leave


Hyperconverger Nutanix has now asked its staff in Europe, the Middle East and Africa to take two weeks of unpaid leave as it tries to contain costs during the COVID-19 pandemic.

This comes after some of its US staff – including 1,465 workers in the state of California – were given two separate week-long unpaid leave periods between now and late October.

A Nutanix statement said: “Like many companies in today’s COVID-19 economic environment, we’re taking proactive steps to help minimize the long-term impact on our global team members and our customers. These steps include two, week-long unpaid furloughs for many of our U.S. team members over the course of the next six months.  

“We have also asked our staff outside of the U.S. to take a total of two weeks of voluntary unpaid leave, again over the course of the next six months.  Furloughed US staff, as well as staff outside the US who voluntarily take unpaid leave, will maintain their benefits and employment status with Nutanix while on furlough / unpaid leave.”

Many businesses in Europe have furloughed employees, with some governments paying part of their wages and salaries. Others are putting large scale layoffs in place, such as Virgin Atlantic, which said yesterday it was laying off 3,000 UK employees, 30 per cent of all roles in the country, and closing its London Gatwick base.

Nutanix’s unpaid leave scheme is on a far smaller scale, however, and it is not making permanent layoffs.

The firm added: “We have carefully scripted these actions to minimize the impact on our customers, with all Nutanix services fully available during this time.  Our philosophy as we navigate the COVID-19 pandemic is to do the ‘most good’ with the least amount of harm for all of our employees, and these actions will help us achieve that.”

We have asked Nutanix if its CEO, CFO and similar level executives would also be taking two weeks’ unpaid leave. A spokesperson replied: “I can confirm this is the case.”

AWS slashes log data search prices with UltraWarm Elasticsearch service

AWS has made it cheaper to search large volumes of log data by inserting an ‘UltraWarm’ storage tier between cheap, slow S3 and fast, expensive Elastic Block Store (EBS).

Using the open source Elasticsearch with UltraWarm is one-tenth the cost of other options, according to AWS. It does not say what those other options are – but it is safe to say it costs more to use Elasticsearch with EBS, which stores log data in volumes attached to each Elasticsearch node.

Raju Gulabani, AWS VP of databases and analytics, said in a statement: “Our customers tell us that log data offers a wealth of operational and security insights, but that the storage of log data quickly adds up, and proves cost-prohibitive over the medium and long term. UltraWarm is the most cost-effective Elasticsearch-compatible storage solution available. It is also performance-optimised, so customers can investigate and interactively visualise their data while they embrace data at scale.”

Log data in EBS is classified as hot data and copied to replicas to ensure its durability. EBS space is also reserved for Linux and for Elasticsearch. The log data is organised into shards – indices of documents – and a primary shard is the main index. A 10GiB primary shard takes up about 26GiB of EBS storage, due to the overhead that AWS requires. Elasticsearch customers pay for the entire space.
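The overhead factor implied by that example – 26GiB of EBS per 10GiB primary shard, or about 2.6x – makes the hot-tier bill easy to estimate. A rough sketch, assuming the factor holds at other sizes:

```python
# Overhead factor inferred from AWS's example (10 GiB shard -> ~26 GiB EBS).
# It covers replica copies plus space reserved for Linux and Elasticsearch;
# this is a single data point from the article, not an official AWS formula.
EBS_OVERHEAD = 26 / 10

def ebs_provisioned_gib(primary_shard_gib: float) -> float:
    """EBS capacity a customer ends up paying for, per primary shard."""
    return primary_shard_gib * EBS_OVERHEAD

# 100 GiB of index data needs roughly 260 GiB of provisioned (and billed) EBS.
print(ebs_provisioned_gib(100))
```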

S3 storage provides durability, removing the need for replicas, and abstracts EBS-required operating system or service considerations. But it can only be searched slowly, whereas the UltraWarm tier is searchable by Elasticsearch and “provides the type of snappy, interactive experience that Elasticsearch customers expect”.

Mix and match

UltraWarm is a distributed cache layered above S3. It is populated with frequently accessed blocks of data from S3 and placement algorithms are used to identify less frequently accessed blocks in the cache and shunt them back to S3.

You pay for the UltraWarm storage you use. For example, “An ultrawarm1.large.elasticsearch instance can address up to 20 TiB of storage on S3, but if you store only 1 TiB of data, you’re only billed for 1 TiB of data.”
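In other words, billing follows data stored rather than capacity addressable. A minimal sketch of that rule, using the 20TiB figure from AWS’s example:

```python
# An ultrawarm1.large.elasticsearch node can address up to 20 TiB on S3,
# but billing follows the data actually stored (node-hours billed separately).
NODE_ADDRESSABLE_TIB = 20

def billed_tib(stored_tib: float) -> float:
    # You pay for what you store; a single node cannot exceed what it addresses.
    return min(stored_tib, NODE_ADDRESSABLE_TIB)

print(billed_tib(1))   # store 1 TiB -> billed for 1 TiB, not for 20 TiB
```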

Customers pay an hourly rate for the storage provisioned and an hourly rate for each UltraWarm node. Pricing details can be found on the Elasticsearch website pricing page.

AWS suggests you use a hot EBS tier for indexing, updating, and getting the fastest access to data. The UltraWarm tier would be used for less frequently accessed data but where a fast search response is still needed. So, the customer could put current data in EBS and historical data in the UltraWarm tier and access both tiers, using the Elasticsearch Kibana interface.

UltraWarm supports Elasticsearch application programming interfaces (APIs), tools, and features, including enterprise-grade security with fine-grained access control, encryption at rest and in flight, integrated alerting and SQL querying.

UltraWarm is available on Amazon Elasticsearch version 6.8 and above in US East (N. Virginia, Ohio), US West (Oregon, N. California), AWS GovCloud (US-Gov-East, US-Gov-West), Canada (Central), South America (Sao Paulo), EU (Ireland, London, Frankfurt, Paris, Stockholm), Asia Pacific (Singapore, Sydney, Tokyo, Seoul, Mumbai, Hong Kong), China (Beijing, Ningxia), and Middle East (Bahrain), with additional regions coming soon.

Dell EMC launches PowerStore into converged storage array seas

Dell EMC today launched PowerStore, the much anticipated storage array line that unifies its formerly overlapping midrange products. This is the most important development in data storage hardware for many years and PowerStore will be the benchmark for the rest of the storage array industry to compare, contrast and compete with. Let the midrange storage array wars commence.

PowerStore has new hardware and software designed from the ground up, new consumption business models, management and support arrangements. Dell EMC said PowerStore – or Midrange.NEXT, as it was called before launch – is designed for a data-centric era with storage needed for physical, virtual and containerised workloads.

In a statement today, Dan Inbar, president and GM, storage at Dell Technologies, referred to a “constant tug-of-war between supporting the ever-increasing number of workloads, from traditional IT applications to data analytics, and the reality of cost constraints, limitations and complexity of their existing IT infrastructure”.

Dell Technologies’ ‘new approach needed’ slide: PowerStore is the new architecture

He said PowerStore “blends automation, next generation technology, and a novel software architecture to deliver infrastructure that helps organisations address these needs”.

Travis Vigil, SVP product management, told Blocks & Files: “More than 1,000 engineers worked in a collaborative effort on PowerStore, including VMware engineers.” Separate teams developed the migration facilities from existing Dell EMC midrange systems.

No more silos

Dell EMC is launching PowerStore to unify the different storage hardware silos accumulated over the years, as EMC acquired competitors such as XtremIO, and Dell acquired competitors such as EqualLogic (PS) and Compellent (SC), before Dell acquired EMC and its ownership of VMware.

PowerStore is a migration destination from these now legacy midrange lines, and its integration with VMware means it should on that basis alone become the default midrange storage offering for existing customers.

In addition, Dell EMC has equipped PowerStore with simpler and more powerful management, support for containers and lower latency analytics that make it attractive for new workloads.

Also, a single midrange storage array line reduces the costs of maintaining separately engineered storage hardware and software product lines, notably the SC, Unity and XtremIO all-flash and hybrid arrays, which are subsumed into PowerStore. They will be maintained and supported during an unspecified transition period.

Blocks & Files asked IDC analyst Eric Burgener about the ‘Power’ branding: “Dell EMC has been slowly moving to the Power brand over time with new system announcements, something that makes the optics of their product line look a little better. In 2018 they announced PowerMax, in 2019 they announced PowerProtect, and now they’ve announced PowerStore. We’ll see continued movement in this direction with future platform announcements from the vendor.”

Competition

PowerStore is an answer to, and competes against, all-flash storage arrays such as:

  • HPE’s Primera, 3PAR and Nimble arrays, which pioneered predictive analytics
  • Hitachi Vantara’s recently updated VSP E990
  • IBM’s newly-updated FlashSystem
  • NetApp’s ONTAP AFF arrays
  • Pure Storage FlashArray and FlashBlade and its Evergreen business model

It will also compete against Kaminario, VAST Data, Excelero, Pavilion and StorCentric’s Vexata arrays.

In general, the mainstream storage hardware suppliers have single OS all-flash product lines, with the notable exceptions of HPE and – until today – Dell EMC. They have all developed subscription pricing models and are moving towards flexibly timed, data-in-place upgrades. They have varying degrees of public cloud integration and most vendors have embraced Kubernetes, with CSI (container storage interface) plugins.

PowerStore enables Dell EMC to tick these boxes with similar or better features: a single OS product line, flexible data-in-place upgrades, subscription pricing, public cloud integration and Kubernetes support.

Dell EMC has probably the best integration going with VMware and the unique ability to run application VMs in the array (see below). If it has provided a sufficient performance boost, and the migration facilities are as good as claimed, PowerStore should find a natural home in the Dell EMC customer base and enable the company to gain new customers.

We asked Burgener about PowerStore coming up against NetApp: “I don’t think PowerStore will compete against the NetApp EF Series,” he replied. “The EFs are typically sold for dedicated high performance workloads that don’t require a lot of enterprise-class data services. SANtricity, the EF storage OS, is more limited in that sense, and they are block-only.  PowerStore is a unified (block/file) system with much more comprehensive data services. A single EF can handle 2M IOPS whereas it takes an appliance pair with PowerStore to do that.  

“Latencies are probably pretty close if you put Optane SSDs into PowerStore (the EF boasts sub 100 microsecond latencies with NAND flash-based NVMe SSDs and NVMe-oF as the host connection). I do think the PowerStore models would compete more with the NetApp A220 and A400 models though (both of which run ONTAP).” 

Let’s delve into PowerStore’s hardware, software, management, monitoring, migration, consumption and upgrade facilities.

Hardware scaling surprise

PowerStore has a standard dual-controller architecture with clusterable appliances, which scales in a novel fashion.

There are five PowerStore models – the 1000, 3000, 5000, 7000 and 9000. They each have exactly the same capacity as the others. In other words, there is no capacity scaling across the model range.

All five PowerStores have four processors, with two per controller. But they vary by compute cores and memory; the 1000 has 32 cores, the 3000 has 48, the 5000 has 64, the 7000 has 80 and the 9000 has 112.

DRAM grows in the same way, with 384GB for the PowerStore 1000 and steps of 768GB, 1,152GB, 1,536GB across the range to the 9000’s 2,560GB. Thus performance, but not capacity, grows across the range.

The base enclosure is a 2U, 25-slot unit housing two active:active controllers or nodes. The controllers use Xeon Scalable processors and process data from dual-ported NVMe NAND or Optane drives.

PowerStore expansion cabinet with bezel removed

There can be 1.92TB, 3.84TB, 7.68TB or 15.36TB NAND SSDs or 375GB or 750GB Optane drives. All the drives are self-encrypting. 

Providing scale-up capacity are up to three 25-slot SAS SSD expansion cabs. They connect to the base box across a redundantly paired four-lane x 12Gbit/s SAS backplane, which provides continuous drive access to hosts in the event of node or port failure. The maximum drive count is 96 and SSD capacities are 1.92TB, 3.84TB and 7.68TB.

In effect we have two tiers of flash storage: a fast NVMe drive tier and a slower SAS drive tier.

Burgener told Blocks & Files: “I believe they also allow HDDs in the expansion cabinets, making this system what IDC calls a Fusion Hybrid Array (FHA).”

Data reduction is provided in two ways. Compression is hardware-assisted, using Intel QuickAssist Technology (QAT) hardware. The PowerStoreOS software also provides in-line, always-on deduplication.

Dell guarantees a 4:1 data reduction ratio, so the capacity numbers are:

  • Base enclosure: up to 384TB raw, up to 1,536TB effective
  • Expansion cab: up to 192TB raw, up to 768TB effective
  • Overhead for virtual RAID, spare space, system/metadata: about 20 per cent
  • Total capacity: up to 718TB usable per appliance

The total raw capacity combines 384TB of NVMe flash and 576TB of slower-access SAS flash.
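Dell’s usable and effective figures can be approximated from the raw numbers, the stated ~20 per cent overhead and the 4:1 reduction ratio. A rough reconstruction – the small gaps against Dell’s quoted 718TB and 11.3PB figures are presumably down to rounding:

```python
# Capacity chain per appliance, using the figures Dell published:
# raw -> usable (minus ~20% for virtual RAID, spares, system/metadata)
# -> effective (guaranteed 4:1 average data reduction).
RAW_TB = 898.56      # max raw per appliance with 96 drives (per the cluster specs)
OVERHEAD = 0.20      # approximate overhead stated by Dell
REDUCTION = 4        # guaranteed average data reduction ratio
APPLIANCES = 4       # maximum cluster size

usable_tb = RAW_TB * (1 - OVERHEAD)       # ~718.8 TB, matching the quoted 718 TB
effective_tb = usable_tb * REDUCTION      # ~2,875 TB, i.e. roughly 2.8 PB
cluster_tb = effective_tb * APPLIANCES    # ~11,500 TB vs Dell's quoted 11.3 PB
print(usable_tb, effective_tb, cluster_tb)
```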

Dell is adding NVMe-over-Fabrics access at a future date but the full benefit is restricted to the base enclosures only.

Ports, RAID, availability and performance

PowerStore supports RAID 5. An appliance can have up to 16 x 16/32Gbit/s Fibre Channel ports or 24 x 10Gbase-T iSCSI or 24 x 10/25GbitE iSCSI ports. The overall appliance port count maxes at 24.

Rear view of PowerStore base enclosure.

With redundant components, PowerStore has a six ’nines’ (99.9999 per cent) availability rating.

Dell said a PowerStore 9000 four-appliance cluster with all-NVMe drives is seven times faster than the Unity XT 880 all-flash array, when running a 70:30 random read and write IO mix and active compression and de-dupe.

Burgener told us: “If you just compare PowerStore to the previous Unity XT, it’s a very nice price performance and scalability bump with some interesting new features such as hardware-assisted compression, AppsON, and four nodes instead of just two.”

A Dell spokesperson told Blocks & Files: “PowerStore performance, like all storage systems, varies by workload (block size, mix of IO, etc.). In the particular workload used to compare to Unity XT, we are seeing over 2 million IOPS with no impact to inline deduplication and compression… we expect our customers to see less than 500 microseconds of response time during normal storage operation.”

Burgener said: “From the numbers that Dell EMC shared, each appliance pair can handle up to 2 million IOPS and they can extend that to up to 4 nodes for a total of around 4 million IOPS (that’s a federated cluster like clustered ONTAP, not a true distributed environment like IBM Spectrum Scale).”

How does PowerStore compare to the high-end PowerMax array? An entry level PowerMax 2000 delivers up to 7.5m IOPS, sub-100μs read latency and 1PB effective capacity. Those figures make PowerStore look decidedly… midrange.

Clustering

PowerStore appliances can be clustered to provide scale-out capacity with up to four appliances per cluster. 

This Dell slide says there can be up to 8 active:active nodes in a cluster. A node is a controller, and appliances come in dual-controller form, so a cluster can have up to four appliances.

The clustering scale-out depends upon a software-defined configuration into X and T systems.

The PowerStoreOS runs on the bare PowerStore metal in T systems, which supply block, VVOLs and file storage, and represent the standard deployment mode. File storage runs as a container.

The supported file access protocols in T mode are: NFSv3, NFSv4, NFSv4.1; CIFS (SMB 1), SMB 2, SMB 3.0, SMB 3.02, and SMB 3.1.1; FTP and SFTP. The file stack in PowerStore will become the standard file stack across Dell EMC’s portfolio.

PowerStore T model appliances can be configured as either Block Optimized or Unified (block and file). The T-mode system supports 3-way NDMP backups. 3-way NDMP transfers both the metadata and backup data over the LAN. A white paper explains the file facilities in more detail.

PowerStore T clusters:

  • 96 drives per appliance,
  • four appliances per cluster
  • 898.56TB raw capacity per appliance
  • 718TB usable capacity per appliance
  • 2.8PB effective capacity per appliance after overhead for virtual RAID, etc.
  • 11.3PB effective capacity per cluster

PowerStore X clusters

In PowerStore X systems an ESXi hypervisor runs on the bare controller metal and the PowerStoreOS runs as a VM inside it. A certain amount of the controller compute power is reserved for this PowerStore VM. According to Dell, PowerStore is the only storage array with such a built-in hypervisor.

The X models do not scale out and provide block and VVOLs storage but not file-level access. Scale-out on PowerStore X will be available in a future release.

PowerStore X system:

  • 96 drives per appliance
  • one appliance per cluster – in effect, no clustering for now
  • 898.56TB raw capacity per appliance
  • 718TB usable capacity per appliance
  • 2.8PB effective capacity per appliance

AppsON

With PowerStore X models the array can run applications in other VMs alongside the PowerStoreOS VM, and Dell calls this AppsON – applications on the array. This software means compute is being brought to the storage, according to the company.

This compute-in-storage idea has been tried before, notably by Coho Data which crashed in August 2017. And in 2013, EMC touted the addition of compute to VMAX and Isilon storage. It never delivered the capability and instead developed hyperconverged VxRail systems, using VMware’s VSAN software.

According to Dell, AppsON is a good fit for data-intensive workloads with analytics run locally on the array from streamed-in data sources: Splunk, Flink and Spark, for example. This is useful in internet edge PowerStore deployments.

A Dell spokesperson told us: “A native file import technology is planned for an upcoming release.” Scale-out will also be supported in a future release.

The lack of file support is not seen as a problem by Scott Sinclair, an ESG analyst: “I would expect the majority of implementations to leverage the block-based storage.  If I am running an app in a VM with VVOLs, I typically care more about the storage capabilities, performance, and features, than I do about the underlying storage protocol.”

A PowerStore data sheet diagram confirms any VMware virtualized application can run on PowerStore:

This capability brings controller sizing considerations in its wake. Typically, customers will buy an array like PowerStore with controller power and array capacity sized for their storage workloads. A single node PowerStore X is effectively a server with NVMe direct-attached storage (DAS) and potential direct-access SAS SSD expansion storage. It could be sized like any other server plus DAS, but at the same time it is an array supporting host server access. Sizing for these two workloads could be tricky.

Dell’s VxRail hyperconverged (HCI) systems are built for general purpose compute and storage needs. PowerStore AppsON is for specialised storage-specific workloads such as anti-virus checking, data protection, and latency-sensitive, real-time analytics at edge locations.

In Burgener’s view there will be “a little confusion between which HCI product to buy, but note that Dell EMC emphasizes that PowerStore has more enterprise class storage capabilities than the HCI products”.

Dell said hypervisor mode provides storage OS abstraction – a hint perhaps that other storage operating systems could be used in the system?

Through its integration with VMware, PowerStore supports VAAI, VASA, VVOLs and the VMware Cloud Foundation. Also, VMware’s bare metal ESXi hypervisor runs on the array and PowerStore works with vRealize Orchestrator. This tight integration enables VMware-skilled admin staff to quickly adapt to PowerStore, Dell said.

Software power station

Dell has developed PowerStoreOS to support legacy and cloud-native workloads. The OS has a high degree of automation and, naturally enough, VMware integration.

PowerStoreOS has a modular, cloud-native architecture where individual OS components are isolated as microservices. This containerised approach will enable Dell to add functions such as deployment modes more quickly. Burgener said new features “may come out faster as well – because they don’t have to ship an entire new storage OS, just the feature in a module which communicates with the rest of the storage OS through APIs.”

PowerStoreOS supports access by physical and virtual server applications, databases and containerised applications via a CSI plug-in. 

PowerStoreOS has thin provisioning and always-on data deduplication. In conjunction with hardware compression, this provides a guaranteed 4:1 average data reduction ratio. 

The array software has Dynamic RAID capability, offering RAID 5 (4+1/8+1).

PowerStoreOS supports automated workflows. IT and DevOps users can programmatically provision PowerStore resources from an application toolset such as VMware, Kubernetes and Ansible. PowerStore resources are available from a self-service catalogue.

Snapshot and replication

PowerStoreOS has a newly-developed snapshot engine and built-in replication. It can also produce thin clones. The snapshot facility integrates with Dell EMC’s Avamar and NetWorker backup products.

PowerStore AppsON VMs can be moved via vMotion to VxRail hyperconverged systems, any PowerEdge ESXi server system and VMware Cloud Foundation or vice versa. That means PowerStore data or storage instances can be moved from the edge to the core to the cloud.

PowerStore is supported with Dell EMC Cloud Storage Services, which connects the system directly to the user’s public cloud as a managed service. Cloud Storage Services can provide DRaaS (disaster recovery as a service) to VMware Cloud on Amazon Web Services.

PowerStore will – “this summer” – become a storage option in PowerOne, Dell’s autonomous infrastructure cloud platform.

Management and monitoring

The PowerStore array has built-in automation and uses machine learning (ML) to optimise system resources and make it more autonomous. The ML engine handles initial volume placement, migrations and issue resolution. It also monitors and fine-tunes array performance.

ML is used after auto-discovering a new PowerStore cluster node to rebalance the cluster’s data storage workload across the now enlarged cluster. The ML engine makes data placement recommendations and up to 99 per cent of storage admin time can be saved in such volume rebalancing, according to Dell.

As we note above, PowerStore has fast NVMe drives in the base enclosure and slower SAS drives in the expansion cabinets. We asked Dell if the PowerStoreOS machine learning data placement facility takes account of this, placing the hottest data in NVMe flash and cooler data in SAS flash during migration. And is cooler data moved from NVMe flash to SAS flash as fresh hot data comes into the system?

A spokesperson told us: “The PowerStore OS automatically adds [SAS] flash drives to a combined common pool and automatically places data across the pool.” That indicates there is a single pool – no separate NVMe and SAS storage tiers.

ESG’s Sinclair said: “PowerStore’s NVMe storage, combined with its data reduction technology, [means] the NVMe performance offered by a single system is large enough to meet most application demands. And if it isn’t and if I still needed NVMe, I would be more likely to take advantage of the scale-out design to maximize performance as I increased capacity.”

PowerStore uses Dell’s CloudIQ SaaS storage monitoring software to spot anomalies before trouble occurs, detect failures, optimise operations and predict future capacity needs. Dell plans to extend CloudIQ’s scope beyond storage arrays to monitor servers and switches.

Migration

According to Dell EMC, migration from PS, SC, Unity, VNX and XtremIO arrays to PowerStore is completed in seven-to-10 clicks. Hosts are remapped transparently, and offloaded, keeping workload performance high during the data transfer.

Native tools for this are included in the PowerStoreOS. There is Native Block migration from Dell EMC Unity, VNX, SC Series and PS Series. A Dell spokesperson told us the company will ship this capability for XtremIO – “shortly”.

Other migration methods range from VPLEX and PowerPath/ME to host-based tools such as vMotion and Linux LVM, plus migration offerings from Dell Technologies Services.

Datadobi, a storage software migration specialist, told Blocks & Files it is an official migration system, “fully tested and confirmed with Dell PowerStore”. Datadobi could be considered as a potential migrator of data to PowerStore from non-Dell EMC arrays such as HPE 3PAR, Nimble and NetApp.

Deployment

PowerStore can be deployed at internet edge sites, where rack-mounted systems are supported, and also in core data centres. In deployment it could run alongside VxRail systems and ESXi servers, and be integrated with VMware Cloud Foundation systems.

The public cloud is available as a secondary or backing store for PowerStore. Dell EMC Cloud Storage Services directly connects PowerStore to all major public clouds including Amazon Web Services (AWS), Azure and Google Cloud as a managed service. Cloud Storage Services provide disaster recovery as a service (DRaaS) to VMware Cloud on AWS.

Managed service providers can use PowerStore to deliver storage services or colocate it with the public cloud for use by public cloud compute.

Consumption and Upgrades

Dell EMC offers various array payment arrangements, including Pay As You Grow and Flex on Demand metered usage. Customers can choose between two flexible pay-per-use consumption models with short- and long-term commitment options, including a new one-year term for flexible consumption.

Customers can scale up a PowerStore array by adding SAS SSD expansion shelves. Dell has three Anytime Upgrade options with data-in-place upgrades:

  • Next-Gen: Upgrade appliance nodes (controllers) to next generation equivalent models 
  • Higher Model: Upgrade to more powerful nodes within the current generation 
  • Scale-Out: Apply a discount to expand the environment with a second system equal to the current model.

No additional purchase or licensing is required, and upgrades can take place at any time within a PowerStore user contract without triggering contract renewal. All three upgrades are non-disruptive, preserving existing drive and expansion enclosures.

(Note that upgrades are available after 180 days and they require purchase of a minimum three-year ProSupport Plus with Anytime Upgrade Select or Standard add-on option at point of sale to qualify.) 

Dell EMC claims that PowerStore users will never need to migrate or endure forklift upgrades again, and there is no downtime with data-in-place hardware and software upgrades.

PowerStore is generally available globally and will be included as an option for PowerOne autonomous infrastructure this summer. 

Software futures

PowerStore does not yet support NVMe-over-Fabrics. Dell said NVMe-oF capabilities will come in a future version of the software.

PowerStore X scale-out will also come in a future release, a Dell spokesperson told us: “At first, we believe that, given AppsON is a new capability, most early deployments will only require single nodes. When customers are ready to expand, we plan to be ready, and new Anytime Upgrades as part of our Future-Proof Program are designed to make it easier and more cost-effective to add capacity.”

At that time a scale-out clustered PowerStore X will effectively be a hyperconverged system, like VxRail.

Dell will also develop a software-only version of PowerStoreOS, capable of running on industry-standard servers and, we suppose, in the public cloud. That will extend the PowerStore environment from edge and core to the cloud.


Cash-rich Cohesity blames pandemic for job cuts

Cohesity today cited the covid-19 pandemic for laying off or furloughing “a small percentage” of its 1300-strong workforce. The data management startup is cutting the jobs just one month after completing a $250m funding round.

The company declined to specify numbers, but said in a statement today that it “remains focused on spending investment dollars wisely to ensure fiscal responsibility and long-term success”.

Cohesity added: “To manage through this time of economic uncertainty and volatility, Cohesity has taken steps to reduce our operating expenses. Unfortunately, as part of that effort, a small percentage of employees have been furloughed or are no longer with the company. This is not a decision the company takes lightly. We value contributions from each and every employee and regret that the pandemic has created this challenging period.”

In semi-related news, Nutanix is to furlough 1465 staff – a quarter of the workforce – for two weeks, on a rolling basis between now and October.

Your occasional storage digest, featuring Pure Storage and others

FlashBlade gets file and object replication

Pure Storage has announced V3.0 of its Purity//FB FlashBlade operating system. FlashBlade is Pure’s unified, scale-out all-flash file and object storage system. New features include:

  • File Replication for disaster recovery of file systems. Read-only data in the target replication site enables data validation and DR testing.
  • Object Replication – replication of object data between two FlashBlades improves the experience for geographically distributed users by providing lower access latency and increasing read throughput. Replication of object data in native format from FlashBlade to Amazon S3 means customers can use the cloud for a secondary copy, or use public cloud services to access data generated on-premises. 
  • S3 Fast Copy – S3 copy processes within the same FlashBlade bucket now use “reference-based copies”. Data is not physically copied, so the process is faster. Fast Copy does not apply to S3 uploads or to copies between different buckets.
  • Zero touch provisioning (ZTP) – after FlashBlade hardware is installed, ZTP completes the setup remotely; an IP address is automatically obtained on the management port via DHCP. A REST token (“PURESETUP”) allows access to the array with a set of released APIs to perform basic configuration and set up the static management network. When setup completes, the “PURESETUP” token becomes invalid and DHCP is terminated.
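The reference-based copy idea behind S3 Fast Copy can be illustrated with a toy object store in Python (purely a sketch, not FlashBlade's implementation): a within-bucket copy duplicates only a pointer, so no data bytes move.

```python
# Toy illustration of "reference-based" copies: a copy inside one bucket
# records a second reference to the original data blob instead of
# duplicating the bytes. (Illustrative only.)

class Bucket:
    def __init__(self):
        self._blobs = {}    # blob_id -> bytes, stored once
        self._objects = {}  # key -> blob_id (many keys may share one blob)
        self._next_id = 0

    def put(self, key, data):
        """A fresh upload stores a new blob; Fast Copy does not apply here."""
        self._blobs[self._next_id] = data
        self._objects[key] = self._next_id
        self._next_id += 1

    def copy(self, src_key, dst_key):
        """Fast copy: only the reference is duplicated, not the data."""
        self._objects[dst_key] = self._objects[src_key]

    def get(self, key):
        return self._blobs[self._objects[key]]

    def bytes_stored(self):
        return sum(len(b) for b in self._blobs.values())
```

Copying a 1GB object this way is a metadata update rather than a 1GB read-and-write, which is why the process is faster.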

V3 also has File System Rollback, a data protection feature enabling fast recovery of file systems from snapshots, plus NFS v4.1 Kerberos authentication. Audit Logs and SNMP support enhancements improve security, alerting and monitoring.

FlashBlade now has a peak backup speed of 90TB/hour and peak restore speed of 270TB/hour.
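Some back-of-the-envelope arithmetic on those quoted peak rates:

```python
# Time to move 1PB at FlashBlade's quoted peak backup/restore rates.
BACKUP_TB_PER_HR = 90
RESTORE_TB_PER_HR = 270
data_tb = 1000  # 1PB

backup_hours = data_tb / BACKUP_TB_PER_HR    # about 11.1 hours
restore_hours = data_tb / RESTORE_TB_PER_HR  # about 3.7 hours
```

In other words, at peak rates a full petabyte could in principle be backed up in roughly half a day and restored in under four hours; real-world figures will depend on the backup software and network in front of the array.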

Public cloud disk drive and SSD ships

Wells Fargo analyst Aaron Rakers told subscribers that cloud-driven nearline HDD units are now approaching 70 per cent of total HDD industry capacity shipped, and account for more than 60 per cent of total HDD industry revenue.

Also, enterprise SSDs are estimated to account for 20-25 per cent of total NAND flash industry bits shipped, with cloud accounting for 50-60 per cent or more of total bit consumption.

Shorts

DigiTimes has reported (paywalled) that Western Digital is increasing enterprise disk drive prices by up to 10 per cent due to pandemic-caused production and supply chain cost increases. A WD spokesperson told Blocks & Files the company does not comment on its pricing.

NetApp is partnering with Iguazio so that NetApp’s ONTAP AI on-premises storage and public cloud Cloud Volumes storage participate in Iguazio’s machine learning data pipeline software. Iguazio is compatible with KubeFlow 1.0 machine learning software.

Alluxio, which supplies open source cloud data orchestration software, has collaborated with Intel on an in-memory acceleration layer using 2nd Gen Intel Xeon Scalable processors and Intel Optane persistent memory. Benchmarking results show 2.1x faster completion for decision support queries when adding Alluxio and PMem compared to using disaggregated S3 object storage alone. An I/O-intensive benchmark delivers a 3.4x speedup over disaggregated S3 object storage and a 1.3x speedup over a co-located compute and storage architecture.

Broadcom’s Emulex Fibre Channel host bus adapters (HBAs)  support ESXi v7.0, and provide NVMe-oF over FC to/from ESXi v7.0 hosts. NetApp, Broadcom and VMware have a validated NVMe/FC server and storage SAN setup.

China’s CXMT (ChangXin Memory Technologies) has signed a DRAM patent license agreement with Rambus, strengthening its potential as a DRAM chip supplier.

FileShadow has announced an integration partnership with Fujitsu, allowing consumers to scan documents, from Fujitsu scanners, directly into their FileShadow Cloud Storage Vault. FileShadow collects the file, preserves it with its secure cloud vault and curates it further with machine learning (ML)-generated tags for images and optical character recognition (OCR) of written text.

GigaSpaces, the provider of InsightEdge, an in-memory real-time analytics and data processing platform, has closed a $12m round of funding. Fortissimo Capital led the round, joined by existing investors Claridge Israel and BRM Group. Total funding is now $53m.

MemSQL has announced that it has been selected as a Red Hat Marketplace certified vendor.

Supermicro has introduced BigTwin SuperServers and Ultra systems validated to work with Red Hat’s hyperconverged infrastructure software.

Backblaze assails Big Three cloud download ‘tax’, slashes S3 prices

Backblaze, the cloud backup vendor, is picking a fight with Amazon in its own back yard by offering much cheaper S3-compatible cloud storage and quicker downloads.

The company has released S3-compatible APIs in a beta test to enable customers to redirect data workflows using S3 targets to Backblaze B2 Cloud Storage. It claims to offer infinitely scalable, durable offsite storage at a quarter of the price of other options, meaning Amazon S3, Azure, and Google Cloud Storage.

Backblaze storage pod

Blocks & Files asked a Backblaze spokesperson about price: “B2 Cloud Storage prices are not changing for people who want to use S3 APIs. There is one price for storage – $0.005 per GB per month… that’s one quarter of the price of S3, GCS, and Azure,” he said.

Gleb Budman talking about Backblaze in YouTube video

“On top of that, the Big Three have complicated tiered pricing that requires pricing tables to sort out. In addition, downloading data from B2 Cloud Storage is $0.01 per GB – one ninth of the price of S3, GCS, Azure. Again, the Big Three have complex pricing tables just for downloads.

“The tax that the Big Three charge for using your data in downloading is astounding and reflects the walled garden approach that Backblaze has disrupted.”
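Taking the quoted B2 prices and ratios at face value (real Big Three pricing is tiered and varies by region, so this is only illustrative), the download "tax" dominates quickly:

```python
# Cost sketch built purely from the prices and ratios quoted above.
B2_STORAGE = 0.005   # $/GB-month
B2_DOWNLOAD = 0.01   # $/GB
BIG3_STORAGE = B2_STORAGE * 4    # B2 is "one quarter of the price"
BIG3_DOWNLOAD = B2_DOWNLOAD * 9  # B2 is "one ninth of the price"

def monthly_cost(stored_gb, downloaded_gb, storage_rate, download_rate):
    return stored_gb * storage_rate + downloaded_gb * download_rate

# Example month: 10TB stored, 2TB downloaded.
b2 = monthly_cost(10_000, 2_000, B2_STORAGE, B2_DOWNLOAD)        # $70
big3 = monthly_cost(10_000, 2_000, BIG3_STORAGE, BIG3_DOWNLOAD)  # $380
```

On these numbers the hypothetical Big Three bill is more than five times the B2 bill, and nearly half of it is egress rather than storage.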

Blocks & Files asked how Backblaze B2 egress pricing and access times compare to AWS Glacier.

The spokesperson said: “Glacier is the closest in terms of pricing, but Glacier is not a good comparison. That is cold storage. But since Backblaze B2 Cloud Storage is hot the performance is more appropriately compared to S3, Azure, and GCS. That’s the beauty of it, one quarter the cost, but near 1:1 performance. This is why media and entertainment companies love it for their workflows. They can use B2 as active archive, among countless other things.”

IBM Aspera, Veeam, Quantum, Igneous, LucidLink, Storage Made Easy and other suppliers said they will support B2 Cloud Storage as a destination for customers using their S3 workflows.

Backblaze passed the milestone of storing an exabyte of customer data in March, and has built its business since its founding in 2007 on just $3m of external funding.

Check out a Backblaze pricing calculator to find out more.

Clumio debuts ‘air-gapped’ backup for Microsoft 365

Clumio customers can now protect their Microsoft 365 workloads using the startup’s AWS-based data protection as a service. Backup account-separation is a key aspect of the new facility.

Clumio backs up Microsoft 365 user data in separate accounts and claims that this imposes an air-gap between the user’s account and the backup data. We think this stretches the meaning of ‘air-gap’, which generally describes offline tape cartridges that have no network connection, rather than two separate public cloud accounts.

Microsoft advises Microsoft 365 users to “regularly back up your content and data that you stored on the services or store using third-party apps and services.”  Clumio says it’s the best such third party backup service – competitors include Druva – citing superior ransomware protection.

A spokesperson told us: “Clumio’s service backs up the data outside the customer’s account in an immutable format to ensure that data cannot be compromised. This means that even when the bad guys get access to the customer’s network, they have no access to compromise Clumio’s data.”

He claimed: “Other solutions use the same security credentials for backup and production. Others keep backup copies, backup storage, or compute, in the customer’s production accounts leaving them exposed to ransomware or data loss if the account credentials are compromised.”

Clumio’s Cloud Data Fabric backs up VMware virtual machines running vSphere on-premises or in the VMware Cloud, using AWS S3 object storage. Clumio also provides general SaaS Backup for AWS, backing up apps in AWS accounts that use EC2 and EBS, and storing them in a separate account.

Read a Clumio blog to find out more.

Strong flash performance underpins Western Digital Q3

Western Digital revenues climbed 14 per cent on the back of record flash memory performance to $4.18bn in the third fiscal 2020 quarter ended April 3. The company generated $17m in net income, a big improvement on the $581m loss for the same period last year.

David Goeckeler, WD’s new CEO, said in a statement: “While I couldn’t have anticipated the unprecedented series of events that have transpired, I’m very proud of how the company has responded to an extremely dynamic environment with dedicated focus both on our employees’ safety as well as delivering our market leading technology to our customers.”

Flash up, HDD down

WD’s HDD revenues fell 2.4 per cent in the quarter to $2.1bn. Total disk exabyte shipments fell six per cent Q/Q but capacity enterprise exabyte shipments grew 50 per cent Y/Y. WD said this means it maintained the leading position in the capacity enterprise drive category.

WD’s flash revenues jumped 28 per cent to $2.1bn, and now equals the company’s HDD business in size.

Earnings call

In the earnings call Goeckeler said: “We encountered some supply disruptions in the quarter. However, due to the efforts of our operations team, we saw supply trends improve as the quarter progressed.” However, there were “additional costs associated with logistics and other manufacturing activity.”

The disk issue

In the earnings call Wells Fargo analyst Aaron Rakers commented: “It looks like you definitely kind of underperformed some of your peers on nearline” – Seagate’s high-capacity enterprise disk drives, in other words.

Goeckeler replied: “On the nearline side, I mean, look, we’re happy with where the product performed. The 14-terabyte is still performing well. 18-terabyte shipped for revenue this quarter, as we talked about. That we made that commitment, we delivered on that… we’re happy with where the portfolio is.”

The problem, as we see it, is that Seagate is shipping a lot of 16TB drives, unlike WD which is focusing on 14TB drives. WD is pinning hopes on its 18TB drive doing well, while Seagate has a 20TB drive on its way.

Rakers’ chart showing WD’s loss of nearline disk exabytes shipped market share

In a mail to subscribers Rakers estimated WD has lost nearline drive market share to Seagate, with a nine per cent Q/Q drop to 45 per cent (see chart above.)

WD’s combined HDD and SSD client devices revenues grew 13 per cent in the quarter to $1.83bn; the company attributed this to pandemic-induced home working, which fuelled strong demand for notebook SSDs. Data Centre product revenues grew 2 per cent to $1.5bn and Client Solutions (consumer retail products) brought in $821m, up 2 per cent on the year, with retail sales affected by the pandemic.

The Q4 outlook is $4.35bn revenues at the mid-point estimate, up 19.7 per cent, which would mean full fy2020 revenues of $16.86bn, up 1.6 per cent. CFO Bob Eulau anticipates fourth-quarter client SSD revenues will grow strongly as working from home continues. Also, new games consoles will use more flash instead of disk storage.

WD has suspended dividend payments to conserve cash.