
HPE composes SimpliVity love letter to Synergy

HPE plans to decompose SimpliVity hyperconverged systems into its Synergy pool of composable systems.

HPE launched Synergy composable systems technology in 2015. The underlying idea is to organise major IT components in an IT resource pool, thus ensuring they are not stranded inside fixed server configurations. Envisage a rack-level collection of processors, memory, storage, virtualization software and network connectivity.

Synergy rack storage tray

When a server system is needed to run an app it is dynamically composed from the appropriate elements. The app is executed and the resource elements are returned to the pool for re-use.

Synergy processor tray

According to HPE, composable systems lower system management burden and improve resource usage. Benefits include:

  • 25 per cent lower IT infrastructure costs by eliminating over-provisioning and stranded capacity
  • 71 per cent less staff time per server deployment and 30 per cent higher application team productivity by increasing operational efficiency and rapid deployment of IT resources
  • 60 per cent more efficient IT infrastructure teams by reducing complexity and manual tasks

Dell EMC, Western Digital, DriveScale, Liqid and Kaminario are all developing composable systems.

HPE Synergy is $1.5bn business

At HPE Discover in Las Vegas this month, CEO Antonio Neri said in a keynote: “We…knew that the future was about composability, so we were the first to bring to market a composable cloud experience to the data centre in HP Synergy, which is now a $1.5bn business.”

HPE said more than 3,000 customers have bought Synergy systems, and it reported 78 per cent year-over-year revenue growth for the product.

The company recently added ProLiant Gen 10 360, 380 and 560 servers to its Synergy-based composable rack infrastructure. Once composed, these can support Nimble arrays, vSAN and a pool of hyperconverged SimpliVity systems.

Decomposing SimpliVity

But HPE wants to go further than including SimpliVity merely as a usable element in a composable system. The company wants to dynamically compose SimpliVity systems. In other words, take the compute, networking, storage and software elements of a SimpliVity system and turn them into a composed system at application run-time.

Hyperconverged systems combine and integrate servers, storage, virtualization software and network connectivity into a single and scale-out system.

Paul Miller, VP of marketing for HPE’s converged data center infrastructure business, discussed the combination of Synergy and SimpliVity last week in an interview with our sister publication The Next Platform.

Miller said the company is “not composing hyperconverged yet. That’s what we want to do, compose hyperconverged as a workload. We’re going to focus on unique workloads that you can instantaneously compose.”

Net:net

HPE thinks composable systems are the next step on the route to simpler and more automated IT operations, better resource utilisation and lower cost of ownership.

But there is an interesting side effect when hyperconverged systems become composable. With hyperconvergence, tightly integrated server, storage, virtualization and network connectivity are treated as Lego blocks and operated as single, scale-out systems.

In a composable set-up, a hyperconverged system becomes a software abstraction, a composability template, defining a set of IT resources for a particular kind of workload. Hyperconvergence as an orderable and separate IT technology ceases to exist.

Adding SimpliVity to Synergy gives you synergies, so to speak. The marketeers can certainly have fun with that notion. 

Private cloud is more expensive than trad IT storage. Who knew?

Private cloud storage costs more than traditional IT storage and both cost way more than public cloud storage.

That is our summary of recent IDC numbers, covered by Wells Fargo senior analyst Aaron Rakers in his weekly newsletter to clients.

IDC has totted up storage spending and capacity shipped numbers for the first 2019 quarter. The analyst firm provides a breakdown between public cloud, private cloud and traditional IT, enabling us to work out the cost of each kind of storage.

The spending numbers:

  • Public cloud   – $2.813bn – down 14.8 per cent y-o-y
  • Private cloud  – $2.157bn – up 23.8 per cent y-o-y
  • Trad IT            – $6.024bn – down 3.3 per cent y-o-y
  • TOTAL            – $10.994bn – down 2.5 per cent y-o-y

The capacity shipped numbers:

  • Public cloud   – 51.62 EB    – down 20.4 per cent y-o-y
  • Private cloud  – 8.881 EB    – up 11.4 per cent y-o-y
  • Trad IT            – 31.957 EB  – up 17.0 per cent y-o-y
  • TOTAL            – 92.458 EB  – down 7.7 per cent y-o-y

This means we can calculate the overall cost per EB of public, private and traditional IT storage:
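A quick back-of-the-envelope sketch, using only the IDC spending and capacity figures above (the division is ours; IDC's own per-EB numbers may differ slightly):

```python
# Rough cost-per-EB calculation from the Q1 2019 IDC figures quoted above.
spend_bn = {"Public cloud": 2.813, "Private cloud": 2.157, "Traditional IT": 6.024}
capacity_eb = {"Public cloud": 51.62, "Private cloud": 8.881, "Traditional IT": 31.957}

# Spending ($bn) divided by capacity (EB), converted to $m per EB.
cost_per_eb = {k: spend_bn[k] * 1_000 / capacity_eb[k] for k in spend_bn}

for segment, cost in cost_per_eb.items():
    ratio = cost / cost_per_eb["Public cloud"]
    print(f"{segment:15s} ~${cost:,.1f}m per EB ({ratio:.1f}x public cloud)")

# Public cloud    ~$54.5m per EB (1.0x public cloud)
# Private cloud   ~$242.9m per EB (4.5x public cloud)
# Traditional IT  ~$188.5m per EB (3.5x public cloud)
```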

Clearly, public cloud is the best value storage per dollar spent. Traditional IT storage is next, albeit more than three times more expensive. But private cloud storage is almost five times more expensive than public cloud storage and more costly than traditional IT storage.

Why should that be the case? Any ideas?

Your occasional storage digest, featuring PCIe, HPE and Qumulo, Tosh, and more

Let’s end the week with some storage news stamps for your album. Slot in our PCIe v6 and v4 items, HPE snuggling closer to Qumulo, FileShadow adding file tagging, Hazelcast getting in-memory compute development cash, and…

PCIe v6.0

The PCIe SIG has announced v6.0 of the PCIe spec, less than a month after the v5.0 spec went live. It doubles PCIe 5.0 speed to 64GT/sec and 256GB/sec across 16 full duplex lanes.
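For context, a minimal sketch of the headline per-lane rates by generation (these are the widely published raw signalling rates; the x16 duplex figures ignore encoding overhead):

```python
# Raw transfer rate per lane (GT/s) for recent PCIe generations.
gts_per_lane = {"3.0": 8, "4.0": 16, "5.0": 32, "6.0": 64}

for gen, gts in gts_per_lane.items():
    # x16 lanes, both directions, ~1 byte per 8 transfers (encoding overhead ignored)
    duplex_gb_s = gts * 16 * 2 / 8
    print(f"PCIe {gen}: {gts} GT/s per lane, ~{duplex_gb_s:.0f} GB/s across x16 full duplex")

# PCIe 6.0's ~256 GB/s is eight times PCIe 3.0's ~32 GB/s.
```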

This represents an eight-fold increase on today’s PCIe v3.0 speed for endpoints such as graphics cards, SSDs, Wi-Fi, and Ethernet cards.

The PCIe v6.0 specification includes PAM-4 (Pulse Amplitude Modulation with four levels) encoding and low-latency Forward Error Correction (FEC), and maintains backwards compatibility with earlier generations.

The need for this speed and the fast development of quicker PCIe comes down to CPUs and GPUs greedily taking in data for AI and machine learning, as well as the usual suspects of high-performance computing and general networking and storage needs. 

Storage-class memory is also coming, and it will ingest and write data at speed either via the memory bus or through a direct PCIe pipe. Both cases need the ability to read and write data faster to and from peripheral devices across a PCIe link.

NVMe devices should run faster as well.

PCIe SIG (Peripheral Component Interconnect Special Interest Group) members can find out more here.

PCIe v4.0 and Toshiba

Toshiba is developing SSDs using the PCIe 4.0 spec, which doubles PCIe v3.0 speed. Tosh subsidiary Toshiba Memory America Inc. tested prototypes and engineering samples of PCIe v4.0 SSDs at the PCI-SIG Compliance Workshop #109 in Burlingame, California.

Compliance testing is carried out against both PCI-SIG-maintained systems and products from other PCI manufacturers.

Kazusa Tomonaga, senior SSD marketing manager at Toshiba Memory America, said: “We are pleased with the results of the early testing at the PCI-SIG 4.0 workshop. We are on track and moving forward with our release plans for PCIe 4.0-capable SSD products.”

Blocks & Files thinks this will happen in 2020. Maybe other vendors will go direct to v5. We reckon PCIe v5 and 6 might overlap in 2021. It’s a peripheral interconnect speed feast.

HPE hugs Qumulo

HPE has extended its reseller agreement with Qumulo to include the public cloud. Previously the company sold Qumulo’s high-performance parallel file system software running only on HPE ProLiant and Apollo servers on-premises.

HPE said its customers can use Qumulo’s hybrid cloud file software to gain real-time visibility, scale and control of data both on-premises and in public clouds, including Amazon Web Services and Google Cloud Platform. Qumulo enables data replication between clouds for migration or multi-copy requirements.

Qumulo’s software does not support Azure.

File tagging by FileShadow

FileShadow has added custom tagging capabilities – user-created metadata – to improve file classification and selection.

The FileShadow products, hosted on Google Cloud and IBM Cloud, combine remote and local files into a metadata-described vault, making users’ files and photos searchable and available.

That means files in cloud storage accounts – Box, Dropbox, Google Drive, OneDrive, OneDrive for Business, Adobe Creative Cloud and Lightroom, and iDrive – as well as local storage.

Local storage encompasses MacOS, Windows Desktops, Windows Virtual Desktops, Drobo network and direct attached storage (NAS/DAS) and Drobo NAS appliances.

Automatically-generated metadata includes location (GPS), optical character recognition of PDFs and machine learning-generated tags for images. A picture of someone golfing might automatically generate tags such as golf, golf ball, golf club, golf course, professional golfer and sand wedge, even if those terms are not in the file name.

Users can select one file or as many files as they want to adjust. They can add new tags, or edit or remove any auto-generated tags that aren’t relevant to their files in the vault.

Tyrone Pike, CEO of FileShadow, came up with a canned quote: “With FileShadow, customers can collect all of their content, and fine-tune the metadata tags so they can efficiently locate their files.”

FileShadow is free of charge for up to 100GB of data. Subscriptions are available for more storage ($15/month for 1TB; $25/month for 2TB; each additional terabyte is $10/month). Subscriptions for FileShadow for Virtual Desktops are available for $25/month for 2TB, and each additional terabyte is $10/month.
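As a hypothetical illustration of the standard tier pricing above (our own helper, not a FileShadow calculator):

```python
def monthly_cost_usd(terabytes: int) -> int:
    """Monthly cost of a standard FileShadow subscription, per the quoted tiers."""
    if terabytes <= 0:
        return 0                           # up to 100GB is free
    if terabytes == 1:
        return 15                          # $15/month for 1TB
    return 25 + (terabytes - 2) * 10       # $25/month for 2TB, +$10/month per extra TB

print(monthly_cost_usd(1), monthly_cost_usd(2), monthly_cost_usd(5))  # 15 25 55
```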

Hazelcast gets lots of $ for in-memory computing

Hazelcast, an in-memory computing startup, has nabbed $21.5m of D-round funding, taking the total raised to $30.5m. It was founded in 2008 and took in a $2.5m A-round in 2013, an $11m B-round in 2014. Five years on and it’s got a larger slug of cash to speed its product roadmap and beef up its sales teams.

Hazelcast has come up with a frankly awful marketing term, saying it provides a System of Now. It defines this as an ultra-fast processing architecture for mission-critical applications where microseconds matter. The old “real-time” term doesn’t cut it any more. Sigh.

A canned quote from Kelly Herrell, Hazelcast CEO, goes like this: “Hazelcast enables its customers to establish a System of Now through its platform that scales linearly and delivers the industry’s fastest processing for stored and streaming data, leaving competitive offerings far behind as data sizes and processing loads grow.”

Hazelcast said its annual recurring revenue has grown 300 per cent in the past three years, without providing any base revenue numbers. A useless claim then. More substantively, the CEO’s blog claims hundreds of commercial customers worldwide.

Shorts

Actifio, the copy data management software vendor, announced Multi-Cloud Mobility and Automation, providing enterprises with disaster recovery (DR) by using the on-demand capabilities of Amazon AWS, Microsoft Azure, Google Cloud Platform, and IBM Cloud. It said DR is a killer app for the public cloud and it provides it at the lowest cost, with no fear of lock-in to any cloud platform.

Cisco and IBM are announcing support for IBM Cloud Private on Cisco HyperFlex and HyperFlex Edge hyperconverged infrastructure. They want to provide a common developer experience across on-premises and cloud environments. Read a blog about it here.

Cloudian announced a fully native S3-compatible object storage offering for VMware Cloud Providers that is managed directly from VMware vCloud Director. VMware Cloud Providers can deploy Cloudian as a storage appliance or as software-defined storage anywhere in the world, and vCloud Director manages the storage, from a single location if desired, and provides native automation of workflows.

DDN said it has captured the number-one spot in the IO500 10-node benchmark, announced today by the Virtual Institute for IO at ISC in Frankfurt, Germany. DDN ranked in three of the top five system results as well.

Global Computing Components, a specialised business within Tech Data, has signed a pan-European agreement with Toshiba to offer its client and enterprise internal hard disk drive (HDD) portfolio to customers.

Email cybersecurity firm Mimecast will be opening a new London office in November. It’s a 79,000 sqft unit at 1FA Broadgate.

Hyperconverged software vendor Pivot3 is partnering with Red River, a company providing IT to US federal agencies. Rance Poehler, chief revenue officer at Pivot3, said: “The federal channel is critical to Pivot3’s growth and success.” The partnership comes after the two have won a few federal deals that indicated they could do more if they got organised to work closer together.

The UK’s Coventry University is using Rubrik products to protect its user and research data across Nutanix, Microsoft and AWS environments. 

Scale Computing, a hyperconverged systems vendor, announced that the KVM-based hypervisor in its HC3 product family is supported by Parallels Remote Application Server 17 (Parallels RAS). This enables administrators to rapidly provision and manage virtual machine (VM) thin clones centrally from the Parallels RAS Console, making VDI deployments faster, more affordable and easier to use.


Veritas bundles NetBackup with DR and storage capacity reporting

Veritas has updated NetBackup and combined it with InfoScale disaster recovery software and Information Studio data space reporting into the Enterprise Data Services Platform.

It said the aim is to make data management simpler by bundling enterprise protection, availability and insights into a single entity.

Blocks & Files diagram

The small and medium enterprise Backup Exec product is not included.

NetBackup

NetBackup v8.2 supports more than 500 data sources and 150-plus storage targets, including 60 cloud providers, and now gets:

Virtual infrastructure protection

  • Agentless architecture for VMware
  • Support for Red Hat Virtualization and OpenStack
  • Docker Certified backup and recovery offering for containers

Boosted public cloud support

  • 2X faster backups to the cloud
  • Support for Amazon Web Services (AWS) Snowball Edge, AWS access controls, AWS Glacier and Glacier Deep Archive, and Veritas Cloud Catalyst on AWS
  • Automated disaster recovery to and in the cloud
  • Cloud-native data protection with application consistency for Oracle, Microsoft SQL, and MongoDB

API, snapshots and self-service

  • API approach to enable app-integrated and automated data protection
  • Backup, orchestration, cataloging and replication with native snapshot technologies
  • Self-service with ServiceNow and VMware vRealize plugins

The Docker Certified backup and recovery offering for containers is claimed to be the first in the industry.

InfoScale

InfoScale provides enterprise IT service continuity, with resiliency and software-defined storage for data centre infrastructure services, via four individual products:

  • Foundation – app-aware storage management
  • Availability – high availability and disaster recovery
  • Storage – storage management across heterogeneous server and storage environments
  • Enterprise – combination of storage management and application availability

There’s an overview here and InfoScale now has:

  • Ability to cluster AWS Availability Zones for migrated, mission-critical applications
  • Support for Chef and Ansible platforms
  • IPv6 support
  • Support for Nutanix, Dell EMC ScaleIO, and NVM Express
  • Security enhancements

Information Studio

This product classifies content, such as files containing personally identifiable information (PII), reports capacity take-up and projects storage capacity trends. Analytic reports provide global, regional, data centre, and server-level overviews of storage capacity trends.

The software includes APTARE analytics, integrated with Veritas in March 2019, for unified backup and storage reporting across on-premises and hybrid-cloud environments. It supports hybrid cloud storage and backup systems and technologies such as OpenStack, software-defined storage and flash arrays.

The latest version gets:

  • Connectors to 20+ cloud and on-premises data repositories, including NetBackup
  • Visual rendering of metadata to identify what exists, where it exists, and who has access (Information Map data visibility functionality)
  • Classification of data to identify personally identifiable information (PII)
  • Deletion to reclaim storage resources, lower costs, and reduce risks

Availability

Blocks & Files asked Veritas about availability:

B&F: Is the Veritas Enterprise Data Services platform available as a single orderable offering? 

Veritas: No. The platform is not available as a single orderable offering for purchase, but it is available as separate components under the three pillars of Availability, Protection and Insights.

B&F: When will the Veritas Enterprise Data Services platform be available, either as a single offering, or as separate components?

Veritas: The Enterprise Data Services platform is already available as separate components under Availability, Protection and Insights.

B&F: Does the Veritas Enterprise Data Services platform have a single management layer or are the three component products separately managed with their own management interfaces?

Veritas: The Enterprise Data Services platform does not have a single management layer. However, the components share a common metadata catalogue and common UX design language so can be managed at an individual level with connectivity into other components. As an example, InfoStudio and APTARE can access the NetBackup catalogue to provide rich data and IT insights/analytics.

Want to compare HCI vendors side-by-side? This is the tool for you

WhatMatrix, the enterprise tech comparison site, has built a free, detailed online tool that delves into products from 11 hyperconverged suppliers. We like it.

The tool drills down into Nutanix, DataCore, Datrium, VMware, Pivot3, NetApp, Cisco, Dell EMC, Microsoft SDS, HPE SimpliVity and Scale Computing.

WhatMatrix ranks the companies in the above supplier order too, and examines the general offer, design and deployment, workload support, server support, storage support, data availability, data services, and management categories. There are traffic light rankings for each category element, plus element notes and overall category percentage scores.

A leaderboard tab shows the top six suppliers with arrow buttons – circled in the image below – to move to the right to see other lower-ranked suppliers and back again.

You can select up to three suppliers for detailed side-by-side comparisons and enhanced analytics are available for DataCore, Datrium and Scale Computing, with more to come.

This three-way report generation can take a minute or so with a few seconds wait before a spinning wheel appears on screen to indicate activity – we used a Mac.

The result consists of an overview followed by detailed comparison data:

Three-way look at NetApp, Cisco and HPE
Traffic light style look at the design and deployment category for NetApp, Cisco and HPE. The text font is a tad small.

The traffic light categories are:

  • Green – fully supported
  • Yellow – limited support
  • Red – not supported

Grey boxes have information in them with no ranking. Clicking on a traffic light element box gets you detailed information, such as this external performance validation for HPE’s SimpliVity:

This lets you see the information source details and the content owner (in this case, WhatMatrix HCI and SDS consultant Herman Rutten) and submit feedback.

Net:net

WhatMatrix has produced the most detailed hyperconverged supplier-product ranking tool to date. It’s free, it’s detailed and it’s transparent.

Yes, it’s sometimes hard to read and the report generation response time is a tad slow, but the quality of the information means it is well worth putting up with these minor niggles.

HPE’s Primera is Meaty Beaty Big and Bouncy

HPE’s newly-launched Primera array is a performance beast – and that’s before it gets NVMe over Fabrics and storage class memory.

HPE staffers told Blocks & Files in a product briefing that a 4U four-controller system could, in lab conditions, pump out 2.3 million IOPS and 75GB/sec of data with sub-millisecond latency. This approaches high performance computing territory.

Primera achieves this with a pool of memory encircled by four parallelised ASICs per controller. That is a massively-parallel design. Real-world performance may vary but there’s a lot of headroom for that variance.

We learnt the system’s maximum raw disk capacity, in the 4U C650/670 nodes, is 737TB. At 48 drives/node this means 15.36TB drives.

Fibre channel now, NVMe-oF later

Primera will go faster still when it gets NVMe over Fabrics support and storage-class memory Optane drives.

The system hasn’t got NVMe-oF yet because HPE’s customers told it to focus on Fibre Channel first and look at adding NVMe-oF later. They are, we think, risk-averse and regard NVMe-oF as still experimental.

However, NVMe-oF support is already baked into the Primera OS, Ram Gopichandran, HPE’s director for 3PAR portfolio product management, said. It can be turned on within minutes via a user-installable OS upgrade.

A Primera node can have one or two expansion trays. Gopichandran said there could be NVMe-oF access to these.

HPE is awaiting Intel’s launch of dual-port Optane drives before introducing Optane support. Dual-porting is needed because Primera is designed to be 100 per cent available, and port failure on a single port Optane drive would compromise this feature.

When Intel is ready Primera will get a U.2 Optane drive transplant and, boom, Optane caching/tiering etc. here we come.

Service mentality

Storage interfaces, such as files (NFS or SMB for example), key:value store and object (S3 presumably) are services to be added via the OS update process. Once installed they will run as native services, HPE said.

That means Primera could simultaneously support file, block, object and key:value store access at some stage. (Just to confirm, file services are not included at present.)

The HPErs stressed that the containerised service upgrade process is unique, the system is designed to handle the IO load of NVMe-oF and SCM, and there are lots of exciting things to come.

SK hynix bids to join enterprise SSD gang with PE6000 drive

SK hynix is dipping a toe into the enterprise SSD pool with an 8TB NVMe drive: the PE6000.

This, SK hynix’s first NVMe eSSD, is built with 72-layer 3D NAND formatted in TLC mode (3bits/cell). It comes in U.2 (2.5-inch drive) and M.2 formats with the M.2 version capped at 4TB.

The Korean semiconductor firm is keen to emphasise the drive uses flash made in its own fab and incorporates a self-made controller too. Vertical integration rules, OK!

The PE6000 is with OEMs for qualification and mass production is scheduled for next year.

SK hynix eSSD

Today we know that random write IOPS are up to 160,000 and random read IOPS up to 620,000, with sequential read bandwidth of 3.2GB/sec. The average latency figure is 95us. That’s it performance-wise – SK hynix has not yet revealed sequential write bandwidth or endurance.

We do know it uses PCIe gen 3 with 4 lanes and NVMe v1.3a and there are 8 NAND channels. Blocks & Files will update this story if we receive more information.

Bit part

The PE6000 is optimised for low power consumption, according to SK hynix, with the supplied figure being 160,000 random write IOPS at less than 14W.
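A rough sketch of what that implies, using only the figures SK hynix supplied (the 14W is quoted as a ceiling, so the real efficiency would be at least this):

```python
# Efficiency implied by SK hynix's figures: 160,000 random write IOPS at under 14W.
random_write_iops = 160_000
max_power_watts = 14.0   # quoted as "less than 14W", so an upper bound on power

iops_per_watt = random_write_iops / max_power_watts
print(f"At least {iops_per_watt:,.0f} random write IOPS per watt")  # ~11,429
```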

Some 8TB enterprise SSDs are available from Micron (9300 MAX), Samsung (NGSFF NF1), Toshiba (CD5) and Western Digital (Ultrastar DC). All but the Samsung NGSFF and Western Digital Ultrastar DC have sequential read bandwidths greater than the SK hynix drive.

The 7.68TB Ultrastar DC cranks sequential read bandwidth up to 2.77GB/sec but its power draw is typically less than 10.75W. This is better than the SK hynix drive.

Most of the other suppliers have lower random write IOPS numbers. SK hynix’s 160,000 random write IOPS figure is exceeded only by Micron’s 9300 MAX, which hits 310,000 and has a maximum capacity of 12.8TB. A PRO version of that drive reaches a monster 15.36TB, but the extra capacity comes with lower IOPS performance: 150,000 random write IOPS.

Only the Micron 9300s have higher random read IOPS, at up to 850,000, than SK hynix’s PE6000, while the Toshiba and Western Digital drives have lower numbers: 500,000 and 363,750 respectively.

Writer’s block

The SK hynix drive is for write-intensive or mixed use, where its random write IOPS performance stands out. Based on what we know so far, Western Digital’s Ultrastar DC could be more power-efficient for roughly equivalent performance.

SK hynix said it has a 16TB version coming by the end of the year, which is made from denser 96-layer NAND. This may perform better than today’s 72-layer product.

Druva dives into VC money trough to surface with $130m and unicorn status

Data protection as a service startup Druva has trousered $130m in a G-round of VC funding.

Druva backs up its data protection with data management, archival services and analytics and disaster recovery. Its services are based on Amazon Web Services infrastructure and the company partners with Amazon’s own backup service.

Data protection is a great business to be in. There’s always fresh data to protect, every day in fact, and threats like ransomware to contend with. Customers stick with their backup provider like glue.

There are more than 4,000 Druva customers, including 10 per cent of the Fortune 500. That means there’s 90 per cent to go for.

By offering cloud-based protection Druva enables its users to do away with on-site equipment. The VCs are likely thinking that, as public cloud use grows, Druva’s business should grow with it. They like the protection song that Jaspreet Singh, founder and CEO of Druva, is singing and want to join the chorus.

Total funding for the business, founded in 2008, is now $328m. This round was led by new investor Viking Global Investors, along with Neuberger Berman and Atreides Management, and existing investors Riverwood Capital, Tenaya Capital, and Nexus Venture Partners.

How big is a Unicorn?

A Druva source tells us: “With this round, we have crossed the $1bn valuation mark,” which makes Druva a so-called Unicorn startup.

He said: “Druva is approaching $100M in annually recurring revenue…our cloud workloads business is growing more than 50 per cent year-on-year for the last three years. The adoption of cloud continues unabated and the market is turning our way, setting the stage for exciting years to come.”

Druva will spend the VC cash on world-wide expansion, marketing and sales as well as product development, where a technology acquisition is on the horizon.

Our Druva source said: “In early July, we will be announcing the acquisition of CloudLanes. The company…has focused on enabling enterprises to more easily move data from on-premises to any cloud, between clouds, and utilise software from any cloud to enhance the integrity of their data.”

The global expansion includes Europe where Nick Turner has been hired as VP of sales for EMEA to expand Druva’s presence in the region. Within EMEA the UK is a key geography for Druva in the next 12 months.

Bill Losch, the CFO of Okta, has joined Druva’s board of directors. Druva said Okta is the No.1 SaaS-based security solution in the market right now. Datera CEO Guy Churchward is also on Druva’s board.

HPE trickles out more Primera hardware specs

HPE has opened the door on Primera’s hardware configuration – a little.

Field CTO Nick Triantos says in a LinkedIn article that Primera is based on a mostly re-designed 3PAR OS which does not run in existing 3PAR systems.

Primera is the only tier 0 array, he claims, that features:

  • Symmetric Active/Active on the front-end and back-end
  • Customer installable (Rack to Apps in 20 mins)
  • Customer HW upgradable/serviceable (Nodes, Adapters, Drives, Enclosures, Power Supplies, I/O Modules, Cables, SFP)
  • Data in-place HW upgrades
  • User-driven SW updates (5 mins)
  • AI-driven intelligence with embedded algorithms and recommendations from InfoSight
  • Multiple HW acceleration engines (ASICs) per node with point-to-point non-blocking connectivity for the massive parallelisation required for NVMe and NVMe-oF
  • No need for separate Service Processor
  • Onboard Element Manager
  • Sea of data sensors – New SW design allows for a ton of sensors for monitoring, alerting, hot spot and IO outlier detection, system headroom, and recommendations.

Triantos supplies some hardware details, saying there are three all-flash models – A630, A650 and A670 – and three hybrid ones – C630, C650 and C670 – which he calls Converged Flash systems; weird.

The A630 and C630 are 2U x 24-slot boxes containing two nodes. The A650/A670 and C650/C670 are 4U boxes which can scale from two to four nodes. The 630 chassis can contain 650 and 670 nodes.

Data-in-place upgrades are supported from a 630 to a 650 (2U or 4U) and a 670 (2U or 4U).

The A630 and C630 have a single ASIC per node while the A and C 650 and 670 systems have 4 ASICs per node.

Triantos says ASICs and scale-out architectures are becoming very important to drive consistent, predictable performance at scale for mission critical apps. The main bottlenecks to array performance are CPU and DRAM, so ASIC acceleration is necessary.

The Primera cache is unified, with no separate control and data caches. The system will dynamically adjust the read/write cache allocation based on workload demands.

RAID 6 is the only drive failure protection option. As capacity is added the Primera OS dynamically changes the RAID configuration, while prioritising user I/O over internal tasks.

Primera supports deduplication as well as compression. The Primera OS will run dedupe and compression in the x86 CPUs, the ASIC or the QAT chip, whichever best supports consistent performance under load.

Virtual storage volumes can be thinly-provisioned or thinly-provisioned with deduplication and compression turned on.

Triantos says Primera, with its so-called Timeless Storage business model, has all-inclusive SW packaging, flat support pricing forever, a node refresh every three years without the need to renew Timeless for three more years, media trade-in credits, and a 30-day satisfaction guarantee as well as a 100 per cent data availability guarantee.

HPE will cough up 20 per cent of the initial purchase price every time data is not available. That’s pretty clear cut.

ASIC extra

We were told a little more about the gen 6 ASIC in Primera. It features a lot of parallelisation. This is needed to work with new media that can benefit from it without inundating the CPUs.

And that means, we understand, NVMe-over Fabric accessed drives and storage-class memory.

Net:net

Triantos does not provide any performance data comparing existing 3PAR systems to the new Primera ones. This suggests that any performance increase is not that impressive.

He also does not say Primera supports storage-class memory now or NVMe-over-Fabrics access. Blocks & Files thinks Pure Storage is way ahead of HPE’s Primera with its NVMe-oF support, as is IBM with Storwize and NetApp with its Max Data technology.

NVMe-oF is obviously coming, but when? Why won’t HPE say?

Dell EMC has its Midrange.Next unified mid-range coming and Primera is in place to answer that.

HPE will be hoping its ASIC acceleration negates these technology differences for now, and that its InfoSight management beats other suppliers’ offerings by making Primera far simpler and less expensive to operate.

HPE doubles up on HCI and hugs the hybrid cloud

HPE yesterday unveiled two hyperconverged systems, a composability extension and much hybrid cloud activity at its Discover jamboree in Las Vegas. Let’s take a quick look.

HPE Nimble Storage dHCI

This system combines HPE’s ProLiant servers with Nimble storage trays to add another hyperconverged product alongside HPE’s SimpliVity systems.

HPE said the Nimble Storage dHCI combines the simplicity of hyperconvergence with the flexibility of a converged system.

It features:

  • Native, full-stack intelligence from storage to VMs and policy-based automation. 
  • Six nines (99.9999 per cent) of data availability and consistent, high performance (see the quick calculation after this list)
  • Ability to independently scale compute and storage non-disruptively.
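For a sense of scale, a quick sketch of what six nines of availability means in downtime terms (simple arithmetic, not an HPE figure):

```python
# Downtime per year implied by 99.9999 per cent ("six nines") availability.
seconds_per_year = 365.25 * 24 * 3600
unavailability = 1 - 0.999999

downtime_seconds = seconds_per_year * unavailability
print(f"{downtime_seconds:.1f} seconds of downtime per year")  # ~31.6 seconds
```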

Nimble CEO Suresh Vasudevan talked about a possible Nimble HCI offering in February 2017. At the time Nimble was an independent maker of all-flash and hybrid arrays with six “nines” availability that were monitored and managed with cloud-based analytics – InfoSight. HPE bought Nimble in March that year.

Moor Insights & Strategy senior analyst Steve McDowell told us by mail: “The Nimble HCI is being branded as the ear-catching ‘dHCI’, with ‘d’ standing for ‘disaggregated’. Frustratingly vague screenshot from the briefing deck below.”

Nimble dHCI slide with frustratingly vague screenshot

Language logicians will enjoy the paradoxical notion of having a disaggregated (non-hyperconverged) hyperconverged (aggregated) system. Presumably it’s a single product to order and a single product to install. But like NetApp’s HCI, just because it quacks like a duck and walks like a duck it doesn’t necessarily mean it’s a duck.

HPE said dHCI is an intelligent platform that enables ordinary and demanding apps to be deployed fast. It has sub-millisecond latency and is managed through InfoSight.

There is no dHCI configuration data available from HPE, which seems quite extraordinary. We understand detailed specs will be available in August. For now we know that three network switches are supported: StoreFabric 2100M, FlexFabric 5700 and Cisco Nexus 3K/5K/9K.

An HPE source told us dHCI supports ProLiant Gen 9/10 servers and Gen 5 Nimble arrays with Nimble OS v5.1.2 and up.

Odd duck

McDowell called dHCI an “odd-duck announcement…It flies in the face of HPE’s stance that HCI is all about SimpliVity. dHCI started out as Nimble’s ‘Nutanix killer’ product back in 2017…It’s the same stack. HPE shelved it after the acquisition and disintegrated the team, only to dust it off and productise it this week as dHCI.”

He thinks dHCI is a “nice approach to HCI with its independent scalability, running directly on the array, etc, and the integration with vCenter is really well done. That said, it’s confusing understanding how that fits into a SimpliVity-first HCI world.”

Beta test experience

Marc Bingham, a senior pre-sales architect at Cristie Data, has been involved with dHCI beta testing and likes the product. He blogs: “HCI is predominantly a management and purchasing experience; moreover, from a technical standpoint, it’s a scale-out architectural design rather than the familiar 3-tier design.”

He says: “dHCI is essentially high performance HPE Nimble Storage, FlexFabric SAN switches and Proliant servers converged together into a stack…deployment, operation and day-to-day management tasks have been hugely simplified …day-to-day tasks, such as adding more hosts or provisioning more storage, are simple “one click” processes.”

Storage, compute and networking can be scaled independently of each other. The whole stack plugs directly into the HPE InfoSight portal and support model. Bingham suggests one way of looking at dHCI is to see it as “Converged Infrastructure with significant improvement to the management experience.”

HPE SimpliVity

There are two new SimpliVity HCI products: the 325 and 380.

The 1U 325 is for remote and branch offices and is an AMD EPYC processor system with all-flash storage. The 2U 380 is a larger capacity, storage-centric product for long-term storage. It can centrally aggregate storage from multiple edge SimpliVity systems.

The 380 is presented as a backup and archive node for VMs and workloads, but it is old news – HPE announced it in March 2017, then basing it on a Gen 10 ProLiant server. What is new? Disk drives.

This 380 uses a mix of 4TB disk drives and SSDs. As a 2U system it’s clearly optimised more for small physical space than high disk capacity, as 16TB drives are available but come in a 3.5-inch format. The smaller 2.5-inch format drives, such as Seagate’s Exos 10E2400, top out at 4TB, a quarter of that capacity.

The 380 houses up to 12 x 2.5-inch drives and that gives it a 48TB maximum raw capacity; not that dense in terms of TB per rack unit – 24TB/U.

You have to scale the thing out to get more capacity, and can have up to 16 in a cluster.

HPE said the system backs up and restores a 1TB VM in 60 seconds. Also there is no need for WAN optimisation devices or expensive network bandwidth capabilities to link the edge SimpliVity products.

HPE has extended InfoSight, its cloud-based storage, server and networking management service, to include SimpliVity systems.

HPE also announced automated configuration of Aruba switches during deployment of new SimpliVity HCI nodes.

HPE composable rack

HPE is adding ProLiant Gen 10 360, 380 and 560 servers to its Synergy-based composable rack infrastructure. This can use either vSAN, HPE Storage arrays or, now, the SimpliVity HCI as the storage component.

Customers can deploy a pool of SimpliVity HCI nodes alone or alongside the other storage.

Hybrid cloud with Google

HPE and Google are providing a hybrid cloud for containers. It involves Google Cloud’s Anthos, HPE’s ProLiant servers and Nimble storage on-premises, HPE Cloud Data Services, and HPE GreenLake.

HPE’s Cloud Volumes will provide a storage service for Google Cloud Platform and other public clouds in the third quarter of this year. 

This hybrid cloud offering will feature bi-directional data and application workload mobility, multi-cloud flexibility, unified hybrid management, and the choice to consume the hybrid cloud as-a-Service, since a planned GreenLake for Google Cloud’s Anthos offering aims to provide the entire hybrid cloud as-a-Service.

Developers can write software in the cloud and deploy the application on-premises, or vice-versa, protect to the cloud and recover in the cloud, and gain the flexibility to run applications in multiple clouds.

HPE Cloud Volumes with Equinix

Customers using Equinix data centre facilities will be able to use the Equinix Marketplace to get Data as a Service based on HPE Cloud Volumes. HPE Cloud Volumes will be available over high-speed connectivity to compute in Equinix data centres.

Availability

SimpliVity with InfoSight will be available in August 2019. The Nimble dHCI will be available after that in the fourth quarter.

HPE Composable Rack is available in August 2019 in the United States, United Kingdom, Ireland, France, Germany, and Australia. HPE Nimble Storage hybrid cloud capabilities with Google Cloud’s Anthos are available in summer 2019.

HPE Cloud Volumes support for Google Cloud Platform will appear this summer and Cloud Volumes in the Equinix Marketplace is a third quarter 2019 offering.

HPE scales out 3PAR to build massively parallel Primera line

HPE’s Primera array, launched today, is an evolutionary upgrade of its 3PAR platform, with expanded InfoSight management and claimed Nimble array ease-of-use.

Our sister publication The Register carries the announcement story here. On this page Blocks & Files tries to figure out the speeds and feeds. This is a bit of a headscratcher as HPE has released next to no pertinent information – a glaring oversight for such a big launch.

Rant over. We have gleaned what we can and inferred the information from an HPE release, a presentation deck, and an HPE-sponsored IDC white paper (registration required), as HPE has declined to supply data sheets.

3PAR for the course

Primera can be considered a next-generation 3PAR design as 3PAR’s ASIC-based architecture is still used.

According to HPE, Primera uses a new scale-out system architecture to support massive parallelism. This is highly optimised for solid state storage.

Steve McDowell of Moor Insights & Strategy said in an email interview: “Primera is absolutely an evolution of 3PAR. It was built by 3PAR engineers. It’s based around an update to 3PAR’s ASIC. The Primera OS is based on 3PAR’s operating environment. At the same time, HPE is being very careful to distinguish that it’s a new product. I think that says less about what Primera is today, and more about how it will be the basis for HPE’s high-end storage moving forward. This is HPE’s ‘highend.next’.”

We cannot compare Primera to the obvious competitive array candidates – Dell EMC PowerMax, Infinidat InfiniBox, NetApp AFF and Pure Storage FlashArray – as we lack enough speeds and feeds information.

Primera will also compete with NVMe-oF startup products such as Apeiron, E8, Excelero and Pavilion.

McDowell said: “From a speeds/feeds perspective, I have no doubt that Primera will be competitive with PowerMax, AFF, FlashArray//X, and Infinidat. It’s less about technology in that space today, with all players being more or less equal depending on workload and day-of-the-week, and more about positioning and filling out the portfolio.”

He thinks Primera will do well against Dell EMC: “The HPE sales teams have cracked the nut and figured out how to sell storage against Dell and Pure – those are the players who HPE is running into most as it closes business. Primera gives them great ammunition in that fight.”

Blocks & Files believes HPE will focus on streamlined management through InfoSight and the 100 per cent availability guarantee as its main competitive differentiators, with performance coming once SCM and NVMe/NVMe-oF technologies are supported inside the array.

The hardware

There are three Primera models, the 630, 650 and 670. HPE has not provided comparison information and, yes, we have asked.

These are built from nodes or controllers – the terms are synonymous – and up to four nodes can be combined in a single system. Each node plugs into a passive backplane, avoiding cabling complexity. The system comes in two sizes:

  • 2U24 with two controllers
  • 4U48 with four controllers

Each controller has two Intel Skylake CPUs and up to four ASICs. HPE says this is a massively parallel system and we might have expected more nodes/controllers to justify that term. An HPE source said: “It’s massively parallelized inside the 4-node architecture, that’s true. But it’s not some gigantic scale-out box. It’s a high end box with all fancy data services that’s easy to consume.”

The 4U48 Primera node building block

We have asked HPE if the 2U24 building block has 24 drive slots, and the 4U48 one has 48 slots. A source tells us there are 12 drives per rack unit, which implies that there are 24 slots in the 2U controller and 48 in the 4U one.

Node à la mode

There are eight dual-purpose (SAS/NVMe) disk slots per controller pair. At time of writing HPE has not published raw capacity numbers per drive or revealed the available drive types.

An HPE source told us: “System is primarily all flash but there will be options to get it with spinning drives for archival type needs.”

A node can have up to 1PB of effective capacity in 2U (or 2PB in 4U), with additional external storage capacity expansion available in both form factors. HPE is not providing data reduction ratios, nor does it detail expansion cabinet capacities.

In this absence, we rely on our completely scientific speculative back-of-the-envelope calculation and note a 2U x 24-slot system with 1PB of effective capacity would have 1PB/24 capacity per drive; 41.66TB/drive. If we assume 1PB = 1,000TB and a 2.5:1 data reduction ratio then that gives us 16.66TB/drive. Possibly coincidentally, this is pretty close to the 16TB drives Seagate has just announced.
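The same back-of-the-envelope calculation as a sketch, using our stated assumptions (1PB = 1,000TB and a 2.5:1 reduction ratio):

```python
# Back-of-the-envelope drive size: 2U x 24 slots, 1PB effective capacity.
effective_tb = 1_000        # assuming 1PB = 1,000TB
slots = 24
reduction_ratio = 2.5       # assumed data reduction ratio

effective_per_drive = effective_tb / slots               # ~41.7TB effective per drive
raw_per_drive = effective_per_drive / reduction_ratio    # ~16.7TB raw per drive

print(f"~{effective_per_drive:.2f}TB effective, ~{raw_per_drive:.2f}TB raw per drive")
```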

A 4-node system can have up to eight Skylake CPUs and up to 16 ASICs. That’s the maximum system today.

Blocks & Files diagram of Primera hardware

There can be up to 12 host ports per node, hence 48 in total across the four nodes. These ports have 25GbitE or 32Gbit/s FC connectivity. NVMe-over-Fabrics is not mentioned by HPE.

The nodes have redundant, hot-pluggable controllers, disk devices, and power and cooling modules.

Primera has a so-called “all-active architecture,” with all controllers and cache active all the time, to provide low latency and high throughput. HPE has not released performance numbers for latency or throughput.

This slide notes 1.5 million IOPS, without saying whether that is per node or per system, or what kind of IOPS they are.

Gen 6 ASIC

The sixth generation ASIC provides zero detect, SHA-256, XOR, cluster communications, and data movement functions. Its design is said to optimise internode concurrency and feature a “lockless” data integrity mechanism.

Data reduction (inline compression) runs in either a QAT (Intel Quick Assist Technology) chip or a controller CPU, depending on which gives maximum real-time efficiency. This is determined by the system’s AI/ML-driven self-optimisation.

The data reduction is built-in and always-on but can be turned off.

One HPE source said Primera has dedicated hardware, the ASICs, to help with ultra-fast media that would otherwise overwhelm the fastest CPUs.

Moor Insights’ McDowell thinks this ASIC may be used in the future to upgrade 3PAR systems.

Storage class memory

HPE said Primera is built for storage class memory, without specifying if any SCM media is actually used. We have asked and are awaiting a reply.

In November 2018 HPE said it would add Optane caching to the 3PAR arrays, calling the scheme Memory-driven flash.

Services-centric OS

The Primera system features:

  • RAID, multi-pathing with transparent failover
  • Thin provisioning
  • Snapshots
  • QoS
  • Online replacement of failed components,
  • Non-disruptive upgrades
  • Replication options including stretch clusters
  • On-disk data protection
  • Self-healing erasure-coded data layout which varies based on the size of the system and is adjusted in real time for optimum performance and availability.

Features associated with high-end HPE storage – RAID, thin provisioning, snapshots, quality of service, replication, etc. – are implemented as independent services for the Primera storage OS.

Features can be added or modified without requiring a recompile of the entire OS. Such service upgrades take five minutes or less. HPE claims this approach enables Primera to be upgraded faster, more easily, more frequently, and with less risk than other high-end storage systems. Blocks & Files understands that this may not be the case for the base OS code.

McDowell told us: “The new OS uses containers to provide isolation for data services – this is different from 3PAR’s traditional approach. It’s also (interestingly) the approach that Dell has said is core to its forthcoming Midrange.next.”

According to the IDC white paper, “System updates are all pre-validated prior to installation by looking at configurations across the entire installed base (using HPE InfoSight) to identify predictive signatures for that particular update to minimise deployment risk.”

There is no mention of any file storage capability, although 3PAR has this with its File Persona.

Primera management

HPE stresses that its cloud-based InfoSight is AI-driven and manages servers, storage, networking and virtualization layers. It can predict and prevent issues, and accelerate application performance.

The IDC white paper states: “The system generally follows an ‘API first’ management strategy, with prebuilt automation for VMware vCenter, Virtual Volumes, and the vRealize Suite.”

HPE’s pitch here is that data centre systems and storage arrays such as Primera are becoming too complex for people to manage effectively, and AI software is needed to augment or replace human efforts.

The IDC white paper notes: “Fewer and fewer organizations will be able to rely entirely on humans to ensure that IT infrastructure meets service-level agreements (SLAs) most efficiently and cost effectively.”

Performance

InfoSight AI models trained in the cloud are embedded in the array for real-time analytics to ensure consistent performance for application workloads, according to HPE.

The system predicts application performance with new workloads using an on-board AI workload fingerprinting and headroom analysis engine.

We are told Primera has consistent, but unspecified, low latency. An HPE source said: “Latencies even with large configurations under pressure are in the low hundreds of microseconds.”

This is maintained at scale. Our source said: “It’s easy for most systems to maintain low latency for small capacities and specific simple types of workloads (like doing single block size benchmarks across a small working set), but doing so across a maxed-out system subjected to very mixed real workloads is far harder.”

Primera is 122 per cent faster running Oracle, according to HPE, without revealing what the base system is or specifying the Oracle software used.

Data protection

Data protection is provided through Recovery Manager Central (RMC), which provides application-managed snapshots and data movement from 3PAR to secondary StoreOnce systems, Nimble hybrid arrays, and onwards to HPE Cloud Bank Storage or Scality object storage for longer-term retention. Pumping out data to the AWS, Azure and Google clouds is supported.

RMC provides application-aware data protection for Oracle, SAP, SQL Server, and virtual machines.

How Primera stacks up with the rest of HPE’s storage lines

Our HPE sources say that Primera replaces no existing storage product. However, Blocks & Files thinks Primera will ultimately replace the 3PAR line as HPE’s mission-critical storage array. For now 3PAR is mission-critical and Primera is high-end mission-critical.

Blocks & Files suggested 3PAR and Primera positioning.

Nimble arrays remain as HPE’s business-critical arrays for enterprises and small and medium-business. The XP arrays continue to have a role as mainframe-connected systems.

Primera will have data mobility from both 3PAR and Nimble arrays.

The overall HPE storage portfolio, including the to-be-acquired Cray ClusterStor arrays and new Nimble dHCI product, looks like this:

Net:net

Primera promises to be a powerful and highly-reliable storage array for hybrid cloud use, with potentially the best management in the industry. But until performance data is released we can’t judge how powerful. It appears to lack current NVMe-oF and SCM support and also lacks file capability. We expect these features to be added in due course.

Double-headed Seagate disk drives? Yes, on their way

Seagate will introduce 18TB, 20TB+ and double-headed disk drives by the end of 2020.

Seagate CEO David Mosley signalled the company’s intentions at a recent investor briefing hosted by Wells Fargo Securities.

In his presentation Mosley said he expected 16TB nearline drives to be the company’s biggest product by early/mid-2020. It recently announced 16TB Exos, IronWolf and IronWolf Pro drives, and is the first hard drive vendor to announce 16TB drives.

Western Digital, Seagate’s arch-rival, also has a 16TB drive on its way. It will use eight platters and 16 heads, in contrast with Seagate’s nine platters and 18 heads in its 16TB drive. Fewer platters and heads mean lower costs.
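A quick per-platter comparison from those numbers (simple division, ignoring any reserved capacity):

```python
# Capacity per platter implied by the two 16TB drive designs mentioned above.
wd_tb_per_platter = 16 / 8        # eight platters -> 2.0TB per platter
seagate_tb_per_platter = 16 / 9   # nine platters  -> ~1.78TB per platter

density_gap = wd_tb_per_platter / seagate_tb_per_platter - 1
print(f"WD's design needs ~{density_gap:.1%} more capacity per platter")  # ~12.5%
```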

Seagate is planning an 18TB Shingled-Magnetic Recording (SMR) version of this 16TB technology. It expects to intro 20TB+ HAMR-based nearline HDDs in calendar 2020.

It will also introduce double-headed drives featuring MACH.2 Multi-Actuator Technology by the end of calendar 2019.

Aaron Rakers, Wells Fargo senior analyst, who attended the Seagate presentation, noted: “The first solutions will incorporate two actuators on a single pivot point with each actuator controlling half of the drive’s read/write head arms – providing as much as a 2x increase in performance (demonstrating ~480MB/s sustained throughput); the first major performance gain in HDDs seen in years.”

Market watcher

At the investor briefing Seagate pointed to strong growth in the surveillance market. It cited estimates by the market research firm TrendFocus that branded surveillance HDD shipments will grow from ~25.62 million in 2018 to ~48.2 million units shipped by 2025. This represents 13.5 per cent shipment compound annual growth. Over the same period average drive capacity for surveillance disks will grow from 4TB to 8TB.

Game consoles appear to be moving away from 500GB, 1TB, and 2TB HDDs to Flash/SSD storage. This will affect disk drive sales, but Seagate’s Mosley said he had “no comment” about this right now. He did say Flash and HDDs will both play an important role in the anticipated expansion of gaming content.

Rakers said Seagate remains committed to participating in the enterprise SSD market, but does not anticipate any significant revenue ramp.