Dell Technologies World Dell EMC has added an Isilon H5600 general scale-out file storage cluster node, using a mix of SSDs and disk drives. It comes with a new version of the OneFS operating system, v8.2, which enables Isilon clusters to grow to 252 nodes, storing up to 58PB.
This is the first increase in cluster node count for many years – Isilon clusters have traditionally had a 144-node limit. The upgrade equips Isilon to cope with much bigger data sets.
An Isilon cluster can now offer up to 945GB/sec aggregate throughput, darn close to 1TB/sec. The OS refresh also increases cluster performance by up to 75 per cent, to 15.8 million IOPS.
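As a rough sanity check, the cluster maxima imply the per-node figures below (assuming linear scaling across nodes, which is our simplification):

```python
# Back-of-the-envelope per-node figures implied by the cluster maxima.
# Assumes capacity and throughput scale roughly linearly with node count.
MAX_NODES = 252
MAX_CAPACITY_PB = 58
MAX_THROUGHPUT_GBPS = 945

per_node_tb = MAX_CAPACITY_PB * 1000 / MAX_NODES   # ~230 TB per node
per_node_gbps = MAX_THROUGHPUT_GBPS / MAX_NODES    # ~3.75 GB/sec per node
print(f"{per_node_tb:.0f}TB and {per_node_gbps:.2f}GB/sec per node")
```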
The H5600 offers more capacity and performance than the current entry and mid-range hybrid models but is slower than the high-end hybrid Isilon system.
Isilon hardware range
There are three Isilon hardware product groups:
F800 and F810 all-flash nodes for the highest performance
H600, H500, H5600 and H400 hybrid flash/disk general-purpose nodes
A200 and A2000 nearline and archive nodes
H5600 chassis.
By tabulating the main hybrid range’s hardware characteristics we can see that the H5600 essentially uses the high-end H600’s CPU and memory while retaining the 3.5-inch capacity disk drives from the H400 and H500. It has 20 extra drive bays compared to these systems, and a drive capacity jump from 8TB to 10TB.
This takes the H5600 chassis capacity up to 800TB, almost doubling the 480TB limit of the H400 and H500.
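The arithmetic behind those chassis limits can be checked directly (the 60-bay count for the H400/H500 is inferred from the quoted 480TB limit and 8TB drives):

```python
# Sanity check of the chassis capacities quoted above.
h400_bays, h400_drive_tb = 60, 8     # H400/H500: inferred 60 bays of 8TB drives
h5600_bays = h400_bays + 20          # H5600 has 20 extra drive bays
h5600_drive_tb = 10                  # drive capacity jump from 8TB to 10TB

assert h400_bays * h400_drive_tb == 480    # TB, H400/H500 chassis limit
assert h5600_bays * h5600_drive_tb == 800  # TB, H5600 chassis limit
```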
OneFS 8.2 software is available now and the H5600 will be generally available in June 2019.
Dell Technologies World Dell EMC’s new Unity XT arrays add drives, memory and CPU power, with the hybrid models getting a stronger upgrade punch than their all-flash cousins.
Unity arrays are unified midrange file and block arrays, positioned between entry-level PowerVaults and high-end PowerMax arrays.
They are available as all-flash and hybrid SSD/disk drive systems, with four products in each category. The current all-flash models are the 350F, 450F, 550F and 650F and the hybrid models are the 300, 400, 500 and 600.
The new all-flash Unity XT models are the 380F, 480F, 680F and 880F, and Dell EMC said the 880F is twice as fast as its predecessor, the 650F. The new models are also claimed to be 67 per cent faster than their closest, unnamed competitor.
A table gives a snapshot of how they differ from the old Unity all-flash arrays:
Sticking the memory and maximum drive numbers in a chart makes the differences between old and new readily apparent:
The blue line connects the old Unity AFAs, which are positioned in a 2D space defined by memory capacity and the maximum number of drives.
The green line connects the new models and it is obvious that they have more drives and more memory.
We can do the same for the hybrid models, with a table first:
A second 2D scatter chart shows a larger difference between the old and new hybrid arrays than between the flash Unity arrays:
The blue line connects the old systems, with the green line connecting the new systems. In terms of maximum drives and memory, one configuration is common between old and new: the old 500 has the same memory and maximum drive count as the new 380. That configuration is now the entry model instead of the upper mid-point of the range.
The new hybrids have the same max drive count and memory capacity as the new Unity AFAs.
The new Unity arrays are SAS-based but are said to be ready for NVMe adoption. NVMe support had been expected…but not yet.
Unity arrays are validated as building blocks for the Dell Technologies Cloud. Unity XT Series arrays will be generally available in July 2019.
Western Digital was hit by weak disk drive demand and over-supply-induced flash price cuts in its third fiscal 2019 quarter. When will things improve for the flash, SSD and disk drive manufacturer?
Revenues of $3.7bn were 26 per cent down on a year ago. Last year’s $61m net income turned into a thumping $581m loss.
Discussing the results Steve Milligan, Western Digital CEO, saw “initial indications of improving trends…our expectation is for the demand environment to further improve for both flash and hard drive products for the balance of calendar 2019.”
Revenues and profits go up – and revenues and profits go down.
Q3 2019 revenue fell in all three business divisions: Data Centre Devices and Solutions, Client Solutions, and Client Devices.
Data Centre devices revenues were $1.25bn ($1.66bn a year ago)
Client Devices revenues were $1.63bn ($2.3bn)
Client Solutions revenues were $0.8bn ($1.04bn)
We charted the recent disk drive unit ship historical trends:
A pattern of disk drive unit shipments decline.
There is a discernible recent quarter-on-quarter pickup in data centre disk drive units.
WD attributed the fall in mobile embedded revenues to weak handset demand. Notebook and desktop revenue declined due to seasonality and flash price declines.
In the data centre devices unit, demand for capacity enterprise drives was better than expected.
WD was hit by a double whammy in the quarter. Total HDD revenues were $2.64bn a year ago and $2.1bn in the latest quarter. The equivalent flash revenues were $2.4bn a year ago and $1.6bn in Q3 fy’19. Wells Fargo senior analyst Aaron Rakers said flash capacity shipped was down 5 per cent year on year.
It’s not all bad news. Rakers noted that “client SSD exabyte shipments more than doubled year-on-year.”
And WD’s OEMs are qualifying new NVMe eSSDs, with shipments expected to start in three months or so.
The Intel Optane DC SSD D4800X (NVMe) offers a “24x7” available data path and super-fast storage, breaking through bottlenecks to increase the value of stored data. Intel Corporation on April 2, 2019, introduced a portfolio of data-centric tools to help its customers extract more value from their data. (Credit: Intel Corporation)
Dell Technologies World Dell EMC’s PowerMax arrays will incorporate Intel’s dual-port Optane SSD memory.
The technology is intended for applications that require 24×7 access to the data on the drive. Dual port redundancy means data can still be accessed through the second controller and NVMe port in the event of a single failure or controller upgrade.
The Optane DC D4800X has an NVMe interface with two PCIe Gen 3 lanes per port. The single-port P4800X uses all four lanes and delivers higher performance. Intel has not yet revealed the performance of the D4800X. It will have the same 30 DWPD endurance as the P4800X, according to Rob Crooke, Intel’s general manager of the non-volatile memory solutions group.
Intel Optane SSD DC D4800X
The D4800X comes in a U.2 (2.5-inch drive) format with 375GB, 750GB and 1.5TB capacities.
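A 30 DWPD rating translates into substantial lifetime writes. A quick sketch across the three capacities, assuming the usual five-year warranty term (the warranty period is our assumption, not an Intel-stated figure):

```python
# Lifetime writes implied by a 30 DWPD (drive writes per day) rating.
# The five-year warranty term is an assumption for illustration.
DWPD = 30
WARRANTY_DAYS = 5 * 365

totals_pb = {}
for capacity_gb in (375, 750, 1500):
    written_gb = capacity_gb * DWPD * WARRANTY_DAYS
    totals_pb[capacity_gb] = written_gb / 1e6        # convert GB to PB
    print(f"{capacity_gb}GB drive: ~{totals_pb[capacity_gb]:.1f}PB written over life")
```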
Intel said the D4800X is “the result of a three-year collaboration between Intel and Dell to improve the performance, management, quality, reliability, and scalability of storage.”
Dell’s high-end PowerMax array is the first to use this dual-port Optane SSD. Other storage array suppliers will probably announce similar support.
According to Amazon, customers want to process locally created or ingested data in real time for in-field analysis and automated decision-making, using the same resources and methods as in the cloud. Hence the addition of block storage volumes used by Amazon EC2 instances running in the Snowball Edge appliance.
Snowball users fill up the ruggedised system with data for local processing, and then transport it to an Amazon data centre for AWS cloud upload.
The appliance can use Performance-Optimized NVMe SSD volumes, while capacity and throughput-oriented workloads store data on Capacity Optimized Hard Disk Drive (HDD) volumes. Applications with capacity and long-term storage needs can use the Amazon S3 object storage API.
Amazon Snowball Edge device
To provide block storage to EC2 Amazon Machine Image-based applications on the Edge box, customers dynamically create and attach block storage volumes to the AMIs. Such volumes do not have to be pre-provisioned and they grow elastically to their defined size.
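The elastic, non-pre-provisioned behaviour described above can be sketched with a toy model (the class and method names are illustrative, not the AWS API):

```python
# Toy model of an elastic block volume: created with a defined size,
# but storage is only allocated as data is written, up to that limit.
# Illustrative sketch only -- not the actual Snowball Edge API.
class ElasticVolume:
    def __init__(self, defined_size_gb):
        self.defined_size_gb = defined_size_gb
        self.allocated_gb = 0          # nothing pre-provisioned up front

    def write(self, gb):
        if self.allocated_gb + gb > self.defined_size_gb:
            raise IOError("volume full: defined size reached")
        self.allocated_gb += gb        # grows elastically on demand

vol = ElasticVolume(defined_size_gb=100)
vol.write(30)
print(vol.allocated_gb)  # 30GB allocated of a 100GB defined size
```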
Find out more from an Amazon blog by Wayne Duso, AWS general manager for Hybrid and Edge.
Pavilion Data Systems, the NVMe-over-Fabrics array maker, has tweaked its software to pump performance to fresh heights.
Taking an untraditional approach, the RF100 series appliance incorporates a bank of controllers talking PCIe to NVMe SSDs.
Pavilion Data array.
In November 2018, the company recorded RF100 performance at up to 120GB/sec read bandwidth, 60GB/sec write bandwidth and average read latency of 117μs.
V2.2 of the Pavilion software improves write performance by 50 per cent, to up to 90GB/sec. Other features in the upgrade include:
Write latency as low as 40μs
RAID-6 protection, which tolerates two drive failures within a RAID set before any data is lost
SWARM recovery for RAID rebuilds, which can rebuild a single 2TB SSD in under ten minutes, a task that often takes several hours
Consistency groups for snapshots
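The rebuild claim implies a hefty jump in effective rebuild throughput. A quick back-of-the-envelope comparison (the "several hours" baseline is an illustrative assumption):

```python
# What a ten-minute rebuild of a 2TB SSD implies for aggregate rebuild
# throughput, versus a notional multi-hour conventional rebuild.
drive_tb = 2
swarm_minutes = 10
legacy_hours = 4   # "several hours" -- our illustrative assumption

swarm_gbps = drive_tb * 1000 / (swarm_minutes * 60)    # ~3.3 GB/sec
legacy_gbps = drive_tb * 1000 / (legacy_hours * 3600)  # ~0.14 GB/sec
print(f"SWARM: {swarm_gbps:.1f}GB/sec vs legacy: {legacy_gbps:.2f}GB/sec")
```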
Competition in the NVMe-oF market is sparking continuous product development. Pavilion claims its multi-controller architecture arrays deliver more scale and performance than dual-controller arrays.
It also has a bring-your-own-drive attribute: customers can populate Pavilion’s array with their existing 2.5-inch NVMe drives or buy from the supplier of their choice. This should help reduce cost/GB compared with competitors who supply bundled, high-priced drives.
Today’s storage briefs include Violin arrays adopting NVMe, NEC using Scale Computing to build a hyperconverged offering, Formulus Black getting its software added to HPE servers but not by HPE, and Hitachi Vantara bigging up object storage.
NEC and Scale Computing

NEC’s hyperconverged appliance, built with Scale Computing software, is pre-configured in three levels – base, mid-range and power – and can be customised if the pre-packed models don’t meet a customer’s needs.
The software, with in-built hypervisor, is pre-installed, and the customer just needs to add networking information to deploy the appliance.
New appliances can be added into a running cluster seamlessly and within minutes. Different models and capacities can be used together in various combinations to scale out resources as needed.
NEC’s HCI product is available in calendar Q2 2019 through its channel partners.
Hitachi Vantara’s IDC object survey
Hitachi Vantara has sponsored an IDC InfoBrief, titled “Object Storage: Foundational Technology for Top IT Initiatives.” In the survey, 80 per cent of respondents believe object storage can support their top three IT initiatives related to data storage – which include security, Internet of Things and analytics for unstructured data.
To date, object storage has been used predominantly for archiving, according to Hitachi Vantara. That is about to change, with object storage also used as a production data store for analytics routines, the company argues.
It quotes Amita Potnis, IDC research manager, file and object-based storage systems: “As we continue to see adoption across all environments – both on-and off-premises – features like All-flash will help usher in new, more performant use cases beyond archiving that drive additional value for organisations.”
All-flash object stores will provide much faster data access than disk-based stores and the latest 96-layer QLC (4bits/cell) 3D NAND SSDs will provide more affordable flash storage than the current TLC (3bits/cell) 64-layer 3D NAND.
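The density advantage of 96-layer QLC over 64-layer TLC is easy to quantify, holding cell geometry constant (a simplification, since layer counts and cell design interact in practice):

```python
# Relative bit density: 96-layer QLC (4 bits/cell) versus
# 64-layer TLC (3 bits/cell), all else held equal.
qlc_bits = 96 * 4   # layers x bits per cell
tlc_bits = 64 * 3
density_gain = qlc_bits / tlc_bits
print(f"{density_gain:.1f}x the bits in the same footprint")  # 2.0x
```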
The company supplies its HCP object storage platform and Pentaho analytics software. The all-flash HCP G10 configuration is the fastest performer. But Hitachi V said it has also sped up disk-based models with Skylake processors, and added capacity with 14TB disk drives and more drive bays in the product chassis.
Violin and NVMe
Violin Systems is adding NVMe SSDs to its XVS 8 all-flash array, lowering data access latency and also doubling usable capacity to 151TB.
According to IDC, by 2021 NVMe-based storage systems will drive over 50 per cent of all primary external storage system revenues. After that NVMe will be table stakes in all-flash arrays, meaning customers won’t buy all-flash arrays that don’t have NVMe drive support.
The software in the 3U XVS 8 provides deduplication and compression and effective capacity is up to six times greater than the usable capacity.
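That six-times data-reduction claim implies the following effective capacity against the 151TB usable figure quoted above:

```python
# Effective capacity implied by the up-to-6:1 data reduction claim.
usable_tb = 151
reduction_ratio = 6
effective_tb = usable_tb * reduction_ratio
print(f"up to {effective_tb}TB effective capacity")  # 906TB
```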
Formulus Black and HPE ProLiants
Formulus Black has signed a reseller deal with Nth Generation, a US IT consulting and engineering firm, to provide its ForsaOS software integrated with HPE ProLiant servers.
Wayne Rickard, Formulus Black’s chief marketing officer, said: “Nth Generation…will be able to deliver tailor-made solutions that satisfy the needs of customers requiring exceptional performance, even when utilising more-affordable, mid-tier Xeon processors.”
ForsaOS uses bit-marker technology to run applications in memory much faster than if they were run with working sets partially in memory and partially in storage. To accomplish this, CPUs have to do extra work in decoding and encoding bit markers.
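The idea of trading CPU work for memory footprint can be illustrated with a toy pattern-substitution encoder. This is our own simplified sketch, not Formulus Black’s actual bit-marker algorithm:

```python
# Toy pattern-substitution encoding in the spirit of bit markers:
# recurring byte patterns are replaced by short marker indices, shrinking
# the in-memory footprint at the cost of encode/decode CPU cycles.
# Simplified illustration only -- not Formulus Black's algorithm.
def encode(blocks):
    table, markers = {}, []
    for block in blocks:
        marker = table.setdefault(block, len(table))
        markers.append(marker)             # store a small int, not the block
    patterns = sorted(table, key=table.get)
    return patterns, markers

def decode(patterns, markers):
    return [patterns[m] for m in markers]  # CPU work to re-expand the data

data = [b"AAAA", b"BBBB", b"AAAA", b"AAAA", b"BBBB"]
patterns, markers = encode(data)
assert decode(patterns, markers) == data
print(len(patterns), "unique patterns stand in for", len(markers), "blocks")
```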
Testing on an HPE ProLiant DL380 Gen10 server, featuring mid-range Intel Xeon Gold 6126 processors and 24 x 32GB HPE DDR4-2666 MT/s DIMMs, yielded performance from 4.5 to 7.8 million random 4K IOPs. Average latencies for writes were 2.8µs and reads were 1.8µs per IOP with CPU utilisation below 50 per cent.
The results were achieved running VDBENCH on ForsaOS in host mode (non-virtualized). Formulus Black claims this performance is unmatchable by any SSD or other I/O-bound technology.
ForsaOS screenshot
ForsaOS treats memory as storage and completes in-memory system backup and restoration in minutes instead of hours or days, according to Formulus Black. By identifying and encoding data patterns using algorithms, the software enables more data to be securely persisted in memory, using powerfail protection to write DRAM contents to storage if power is lost.
The software stack can run any workload in memory without modification on any server.
Liqid, the composable systems maker, is extending connectivity fabric support to enable customers to dynamically compose servers from pools of CPU, GPU, FPGA, NVMe, and NICs regardless of underlying fabric type.
Liqid’s soon-to-be-released Command Center 2.2 software will support PCIe Gen 3 and Gen 4, Ethernet (10/25/100Gbit/s) and InfiniBand, and lays the foundation for supporting Gen-Z specifications.
Liqid is a member of the Gen-Z Consortium and will actively support Gen-Z in future releases of the Command Center software.
Customers will be able to simultaneously compose infrastructure across multiple fabric types. Resources such as NVMe (RoCE) storage, GPUs (with GPU-oF meaning GPUDirect RDMA) and FPGAs are deployed as needed via multiple fabrics, and monitored through a single GUI.
Liqid multi-fabric scheme.
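Composition across multiple fabric pools can be pictured with a toy allocator: each requested resource is drawn from whichever fabric’s pool can supply it. This is an illustrative model only, not the Liqid Command Center API:

```python
# Toy model of fabric-agnostic composition: resources living on different
# fabrics are gathered into one logical server. Illustrative only.
pools = {
    "pcie":       {"gpu": 4, "fpga": 2},
    "ethernet":   {"nvme": 8},
    "infiniband": {"nic": 4},
}

def compose(request):
    server = []
    for kind, count in request.items():
        for fabric, pool in pools.items():
            if pool.get(kind, 0) >= count:
                pool[kind] -= count            # claim from that fabric's pool
                server.append((kind, count, fabric))
                break
        else:
            raise RuntimeError(f"no pool can supply {count} x {kind}")
    return server

server = compose({"gpu": 2, "nvme": 4, "nic": 1})
print(server)  # resources drawn from three different fabrics
```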
Sumit Puri, Liqid CEO, gave out a quote: “Providing Ethernet and Infiniband composability in addition to PCIe is a natural extension of our expertise in fabric management.”
Removing the de facto requirement for a single composable systems fabric is a good thing. It will be interesting to see if other composable systems suppliers such as DriveScale, HPE (Synergy) and Dell EMC follow suit.
General availability of Liqid Command Center 2.2 is expected in the second half of 2019.
Dell Technologies World Dell introduced Microsoft, its hot new date in the cloud, at Dell Technologies World in Las Vegas today.
The company is extending VMware Cloud to support Azure as well as AWS and Google Cloud Platform. It announced the GCP tie-in at Google Cloud Next 2019 in San Francisco earlier this month.
The aim is to build a VMware Cloud environment covering on-premises, AWS, GCP and Azure environments with a consistent set of capabilities. In addition, Dell will provide integration with Azure-specific offerings.
Microsoft and Dell are offering Azure VMware Solutions built on VMware Cloud Foundation which represents software-defined compute, storage, networking and management deployed in Azure.
Azure VMware Solutions are first-party services from Microsoft developed in collaboration with VMware Cloud Verified partners CloudSimple and Virtustream, which is, of course, a Dell Technologies company.
Specifically, the two are announcing:
A fully native, supported, and certified VMware cloud infrastructure on Microsoft Azure.
Joint Microsoft 365 and VMware Workspace ONE customers will be able to manage Office 365 across devices via cloud-based integration with Microsoft Intune and Azure Active Directory.
VMware will extend the capabilities of Microsoft Windows Virtual Desktop by using VMware Horizon Cloud on Microsoft Azure.
Dell and Microsoft will explore bringing specific Azure services to VMware on-premises customers, but no details were supplied.
Customers should be able to migrate, extend and run existing VMware workloads from on-premises environments to Azure without the need to re-architect applications or retool operations. They should be able to use a single model for operations based on established tools, skills and processes.
This complements the VMware Cloud-VxRail HCI announcements also made today. We can envisage vSphere VMs and data being moved tri-directionally between on-premises VMware Clouds and AWS and Azure.
Dell and Microsoft suggest scenarios Azure VMware Solutions will support include application migration and datacenter expansion, disaster recovery and business continuity and modern application development, implying containers are involved as well.
They suggest Windows 10 adoption can be helped by the integration of Windows Autopilot and Dell Device Provisioning and Deployment Services, like Dell ProDeploy, enabled by the integration of Microsoft 365, Workspace ONE, and Dell Provisioning Services.
Three CEO quotes
Microsoft CEO Satya Nadella said: “Together with Dell Technologies and VMware, we are providing our mutual customers with an integrated cloud experience and digital workplace solutions.”
Michael Dell, chairman and CEO of Dell Technologies, said: “Our goal is to provide a single view from edge to core to cloud – an integrated platform for our customers’ digital future.”
VMware CEO Pat Gelsinger was on side too: “These innovative cloud and client offerings will deliver customers even more value, provide more flexibility to accelerate their hybrid multi-cloud and multi-device journey, and accelerate the digital transformation of their business.”
This adoption of Azure alongside AWS by VMware should allay concerns that VMware and Microsoft were at cross-purposes in the hybrid multi-cloud. It also strengthens VMware’s appeal to customers with Azure-based operations.
Pure Storage is rounding out its Evergreen Storage Service with movable block capacity and a backup offering. It is developing a block storage offering on AWS and has released a VM Analytics storage performance tool to investigate latency issues on its own- and third-party storage arrays.
E Pluribus Unum
Evergreen Storage Service (ES2) is now a unified subscription model across hybrid environments: on-premises, hosted and in the public cloud. It allows customers to move all or any portion of their pay-per-use block storage capacity between environments without any adjustments to their contract. The subscription is for the block capacity in general, not for the block capacity in a certain location or array. It comes with an integrated set of tools and APIs.
Customers could use it to test new configurations before full deployment by moving data without needing new contracts or subscriptions.
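The portable-subscription idea can be modelled simply: capacity moves between environments while the contract total stays fixed. An illustrative sketch, not Pure’s actual tooling:

```python
# Toy model of the unified ES2 subscription: block capacity is subscribed
# in aggregate and can be relocated between environments without changing
# the contract total. Illustrative sketch only.
class Subscription:
    def __init__(self, total_tb):
        self.total_tb = total_tb
        self.placement = {"on_prem": total_tb, "hosted": 0, "cloud": 0}

    def move(self, tb, src, dst):
        if self.placement[src] < tb:
            raise ValueError("not enough capacity at source")
        self.placement[src] -= tb
        self.placement[dst] += tb
        # the contracted total never changes, only where capacity sits
        assert sum(self.placement.values()) == self.total_tb

sub = Subscription(total_tb=500)
sub.move(100, "on_prem", "cloud")
print(sub.placement)  # {'on_prem': 400, 'hosted': 0, 'cloud': 100}
```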
Backup
Pure’s ES2 for backup data is a flash-to-flash-to-cloud architecture that harnesses the company’s FlashBlade and ObjectEngine arrays. With this addition, ES2 provides storage-as-a-service for block, file, object and backup data.
Choc-a-Block
Pure is developing a Cloud Block Store (CBS), block storage that runs on Amazon Web Services. It is described as industrial-strength and is intended for mission-critical applications in the cloud. Pure does not define “industrial strength” but presumably it will position Amazon’s rival EBS as less robust.
CBS could be used to provide a public cloud disaster recovery service with data asynchronously replicated to it from Pure on-premises systems.
It is in limited beta test and will be generally available in the second half of 2019. CBS will be available under ES2 through the unified subscription model.
Pane relief
Pure1 VM Analytics is a cloud-based, full-stack performance analytics tool to help IT storage admins identify the root cause of performance issues. It works with Pure and competing array products through a single pane of glass to more quickly identify and address performance bottlenecks.
Customers with storage latency issues can use VM Analytics to gain visibility across VMs, hosts, volumes, data stores and storage, identify bottlenecks with a graphical map of the entire infrastructure, and filter problematic VMs or arrays. VM Analytics supports hybrid cloud infrastructure, which allows customers to use storage and VMs in a private or public cloud.
Pure1 VM Analytics is available for free trial and no Pure hardware is required.
Dell Technologies World Dell Technologies Cloud, the much-signalled Project Dimension that puts VMware Cloud on on-premises kit, has landed at Dell Technologies World.
Dell EMC’s VxRail hyperconverged system is the hardware base.
Dell today also announced VMware Cloud on Dell EMC, a fully-managed data centre-as-a-service offering. In other words this is Dell Technologies Cloud (DTC) with a public cloud delivery model.
Hybrid cloud should be easier when public and private components are similar. The idea here is that VMware Cloud offers consistent infrastructure and operations for IT resources, across public and private clouds and edge locations. Virtual machines and data can move bi-directionally between the private and VMware Cloud-supported public clouds.
Customers get one development and deployment environment, and one throat to choke in contract negotiations.
According to Dell, when used as an operational hub for hybrid cloud environments, Dell Technologies Cloud can reduce the total cost of ownership by up to 47 per cent compared to native public cloud. The company presents its calculations in a sponsored IDC white paper, “Benefits of the Consistent Hybrid Cloud: A Total Cost of Ownership Analysis of the Dell Technologies Cloud,” published this month.
Dell Technologies Cloud includes services covering security, data protection and lifecycle management.
DCaaS
VMware Cloud on Dell EMC is Dell Technologies’ VMware infrastructure installed in your local data centre on VxRail HCI kit, and consumed as a cloud service. VMware fully manages the service.
Dell Technologies Cloud Platforms are available globally now, while Cloud Data Center-as-a-Service, delivered as VMware Cloud on Dell EMC with VxRail, is available in beta deployments with limited customer availability planned for the second half of 2019.
Jason Ader, analyst at the investment bank William Blair, has produced a Nutanix review for subscribers, delivering a coherent and useful deep dive into Nutanix’s products and strategy.
Nutanix aims to expand from its core hyperconverged infrastructure (HCI) starting point to become a full stack supplier in a hybrid multi-cloud world. Ader thinks this is the right strategy to deliver sustainable differentiation in the HCI market “and fight against the persistent pull of the public cloud”.
Nutanix’s product set is divided into three groups:
Core and its basic HCI technology
Essentials add-on features to help customers build a file-based automated private cloud
Enterprise products to support multiple-cloud services, including containerisation, block and object storage
The diagram below positions them and their component products:
There is a separate storage offerings diagram:
Multi-cloud control plane
Nutanix has built a multi-cloud control plane to enable customers to move applications between on-premises data centres and the public clouds and vice versa.
Move is the first iteration of Nutanix’s application multi-cloud mobility tool. This is a rebranding of Xtract for VMs, a tool for migrating Hyper-V and VMware ESXi virtual machines to Nutanix’s AHV hypervisor. Move v3.0 adds the migration of AWS EC2 VMs to AHV and the ability for Move services to run as Docker containers.
Nutanix will add the ability to migrate from AHV to AWS, with Move automatically mapping AHV VMs to the closest EC2 instance type, based on the AHV VM’s size. Move is available as a plug-in to Beam and Calm as part of an operations suite.
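The “closest EC2 instance type” mapping can be sketched as a nearest-fit lookup. The instance list below uses representative published vCPU/RAM figures; the distance metric is our illustrative assumption, not Nutanix’s actual matching logic:

```python
# Sketch of mapping a VM's size to the closest EC2 instance type, as the
# article describes Move doing for AHV-to-EC2 migration. Illustrative only.
INSTANCE_TYPES = {            # (vCPUs, RAM GiB) -- representative values
    "t3.large":   (2, 8),
    "m5.xlarge":  (4, 16),
    "m5.2xlarge": (8, 32),
}

def closest_instance(vcpus, ram_gib):
    def distance(spec):
        cpus, ram = spec
        # assumed metric: weight a GiB of RAM less than a vCPU
        return abs(cpus - vcpus) + abs(ram - ram_gib) / 4
    return min(INSTANCE_TYPES, key=lambda name: distance(INSTANCE_TYPES[name]))

print(closest_instance(4, 20))  # m5.xlarge is the nearest fit
```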
The overall Nutanix product set and the company’s strategy represent an audacious and far-reaching bet that it can create a full stack for a hybrid, multi-cloud world through organic development and acquisition. No other HCI vendor has an equivalent strategy.