
Nutanix reduces cash burn – but when will it turn a profit?

Nutanix delivered poor topline results for its fourth fiscal 2019 quarter and full year, with quarterly revenue down and deeper losses. But a good outlook, a smaller-than-expected loss per share and internal growth indicators made it all okay, and shares rose 19 per cent in post-market trading.

But when will the hyperconvergence vendor stop losing money?

Wells Fargo senior analyst Aaron Rakers said “Nutanix’s F4Q19 results and F1Q20 (Oct ’19) guide should be considered net-positive; albeit it remains difficult to envision Nutanix’s path to profitability.”

Jason Ader, an analyst at William Blair, said: “We believe Nutanix has… a path to cash breakeven within the next few years…. While Nutanix’s subscription transition has been painful, it should yield better visibility/predictability, customer lifetime value, and sales leverage over time.”

Cash breakeven “within the next few years” could mean three to five years’ time, and that’s breakeven, not a profit.

The Q1 fy2020 outlook is for revenues between $300m and $310m, which compares to $313.3m reported in Q1 fy2019. Shares rose 13.5 per cent after the results were announced.

Background

Nutanix believes customers want and need subscription software that can run on-premises and in the public cloud. Tying software to hardware is the wrong way to go in a hybrid cloud era. Nutanix says it is easier for a Linux-based on-premises offering such as its own to work with the big public cloud players, which also use Linux, than it is for non-Linux competitors such as VMware.

Consequently it is shifting from hardware life-bounded license sales to subscription billing. And it is also getting out of selling hardware. The transition has resulted in “revenue compression”. Nutanix says this is necessary pain and it will gain more predictable, consistent and larger revenues as a result of the two switches. It also claims competitors will have to go through the same painful transition.

Additionally the company reported revenue generation issues last quarter, which it attributed to a spending switch from lead generation to engineering and product development at the start of fiscal 2019. There was also a shortfall in sales rep headcount and sales execution issues in the USA.

Lastly, competitors NetApp and HPE have reported a slowdown in large systems sales to enterprises, due to macro-economic factors such as trade disputes, and this may have affected Nutanix too.

The numbers

In a statement CEO Dheeraj Pandey declared: “We delivered a solid fourth quarter and believe our performance reflects our execution improvements and the meaningful progress we have made transitioning our business to a subscription model.” 

The revenue and net income numbers were:

Q4 fy2019 revenues were down 1.3 per cent while the loss more than doubled. Full year revenues increased 6.9 per cent and the loss for the year also more than doubled.

Under the surface of this picture of a high-growth company apparently running towards the buffers, there are encouraging signs.

  • Q4 revenue came in at the top end of its $290m -$300m guidance.
  • Nutanix added 990 new customers in the quarter, taking the total to 14,180, the highest increase in six quarters.
  • Sales and marketing headcount increased by 254, compared to 107 and 207 in the two prior quarters respectively.
  • It exited the quarter with more than 2.5 times the prior quarter’s order backlog.
  • Subscription revenues were $196m, up 72 per cent, representing 65 per cent of revenues. The company is targeting 75 per cent subscription revenues in 12 months. 
  • Software and support revenues rose seven per cent to $286.9m.
  • Gross margin was a record 80 per cent, up 2.3 per cent.
  • Loss per share of $0.57 vs analyst expectations of $0.64.

Free cashflow was -$33.3m though, compared to $6.5m a year ago.

Subscriptions soaring

A look at the revenue mix trends shows the rising subscription revenue trend:

The revenue mix shows the increasing importance of subscription revenues and declining hardware ones. Non-portable SW is Nutanix SW delivered with and licensed to particular hardware. The proportion of this is declining as well. Portable SW can run on-premises or in the public cloud.

Sales of software above the basic hyperconverged hypervisor layer rose. CFO Duston Williams noted: ”26 per cent of our deals included a product outside our core offering.” He was talking about software such as Files, Era for database workloads, Calm and other multi-cloud workflow software.

Earnings Call

Pandey said in the earnings call: “Q4 was a good quarter for us, as we beat Street expectations on total billings and revenue and by $15 million each for software and support billings and revenue… our Q4 results demonstrated measurable progress in our subscription transformation, our pipeline funnel, our sales re-enablement, our simpler messaging on the platform versus new apps and our hybrid cloud journey.”

He added: “Our solid quarter-over-quarter billings and revenue growth, as well as our progress in sales hiring, are clear indicators that our execution is improving and our markets remain strong.”

Williams said the subscription transition cost Nutanix $20m to $25m in revenues in the quarter.

Asked about signs of macro-economic weakness, he said: “There’s really been no additional signs or signals that we’ve seen.” The effects of its own subscription transition have masked any indication of the enterprise buying slowdown that has affected NetApp and HPE.

Sales force segmentation

Nutanix plans to split its salesforce between the enterprise market and the commercial mid-market. Pandey said: “We have segmented commercial completely out of our enterprise sales figures and [are] building a focused U.S. commercial sales leadership and an organization under them.”

It wants reduced sales person involvement “so prospects can go without any human touch from digital ads to our clusters in the cloud with a few clicks.”

Comment

Are the glory growth days, with 20 per cent-plus revenue growth per quarter, over for Nutanix? The subscription transition will last through its fiscal 2020 and there’s no certainty beyond that. The company believes it will be better positioned than competitors for the hybrid multi-cloud software subscription era, but that means it predicts pain for them, not growth for itself.

Nutanix’s hardware-centric channel has to get used to selling subscription software deals, and those are uncharted waters. But there are good growth signs, including customer growth and the increased number of large deals.

Nutanix is an enigma. It’s like a hot air balloon that has stopped rising while it changes its fuel to an untried type, hovering in mid-air waiting for the old fuel to burn off and the new stuff to start working.

At some stage it has to turn a profit, and that means either growing quarterly revenues by more than the current $200m/quarter net loss, and/or taking more than $200m/quarter of costs out of the company. The first seems fanciful at the moment and the second … well, let’s not go there.
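
As a rough worked example of that arithmetic, here is a minimal sketch. The $200m quarterly loss and 80 per cent gross margin come from the figures above; treating that margin as the incremental margin on new revenue is our simplifying assumption.

```python
# Back-of-envelope breakeven arithmetic, using figures from the article.
quarterly_net_loss = 200e6        # approximate current quarterly net loss, in dollars
incremental_gross_margin = 0.80   # assumes new revenue lands at the record 80% gross margin

# Option 1: grow revenue with costs held flat.
extra_revenue_needed = quarterly_net_loss / incremental_gross_margin
print(f"Extra quarterly revenue for breakeven: ${extra_revenue_needed / 1e6:.0f}m")  # ~$250m

# Option 2: cut costs with revenue held flat.
print(f"Quarterly cost cut for breakeven: ${quarterly_net_loss / 1e6:.0f}m")  # $200m
```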

How long before SSDs replace nearline disk drives?

Aaron Rakers, the Wells Fargo analyst, thinks enterprise storage buyers will start to prefer SSDs when their prices fall to five times or less than those of hard disk drives. SSDs are cheaper to operate than disk drives, needing less power and cooling, and are much faster to access.

So when will the wholesale switch from nearline HDDs to SSDs begin? We don’t have a clear picture yet, but a chart of $/TB costs for enterprise SSDs and nearline disk drives shows how much closer the two storage media have come in the past 18 months.

It is unwise to extrapolate too much, but the general trend is clear: enterprise SSD cost per terabyte is falling faster than nearline disk drive cost/TB. Our chart below shows the price premium for enterprise SSDs has dropped from 18x in the fourth 2017 quarter to 9x in the second 2019 quarter.
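
For readers who want to reproduce the premium calculation, here is a minimal sketch. The $/TB values are placeholders chosen to match the 18x and 9x ratios, not the underlying IDC/TrendForce figures.

```python
# Illustrative $/TB values only - the real series comes from IDC and TrendForce.
ssd_dollars_per_tb = {"Q4 2017": 450.0, "Q2 2019": 180.0}  # hypothetical enterprise SSD $/TB
hdd_dollars_per_tb = {"Q4 2017": 25.0,  "Q2 2019": 20.0}   # hypothetical nearline HDD $/TB

for quarter in ssd_dollars_per_tb:
    premium = ssd_dollars_per_tb[quarter] / hdd_dollars_per_tb[quarter]
    print(f"{quarter}: enterprise SSD premium = {premium:.0f}x")
# Prints 18x and 9x; Rakers' buying-preference threshold is 5x or less.
```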

Chart prepared by Wells Fargo senior analyst Aaron Rakers, using IDC and TrendForce data.

Business disk drive sales mostly depend on high-capacity drives – 8TB to 14TB, spinning at 7,200rpm – that store secondary or nearline data. Fast mission-critical disk drives, spinning at 10,000 to 15,000rpm, have largely given way to faster-access SSDs.

Disk drives are getting a capacity jump from the use of heat-assisted and microwave-assisted magnetic recording (HAMR and MAMR) and a speed boost through dual read-write head technology.

However, SSD capacities are growing even faster. QLC (4 bits/cell) NAND adds a third more capacity than current TLC (3 bits/cell). And layer counts in 3D NAND are set to rise from the 64-layer mainstream and arriving 96-layer product to 128 layers and beyond.

A 128-layer 3D NAND die will have twice the capacity of a 64-layer die and a third more than a 96-layer die. By our simplistic math, that means a 128Gbit 64-layer TLC die becomes a 340Gbit 128-layer QLC die, costing substantially less per TB to make than the 64-layer TLC 128Gbit version.
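
That simplistic math works out as follows (a sketch of the scaling arithmetic only, not a claim about any particular vendor’s die):

```python
# Scaling a 64-layer TLC die to a 128-layer QLC die.
baseline_gbit = 128        # 64-layer TLC die capacity, in Gbit
layer_scaling = 128 / 64   # doubling the layer count doubles the capacity
cell_scaling = 4 / 3       # TLC (3 bits/cell) to QLC (4 bits/cell) adds a third

qlc_die_gbit = baseline_gbit * layer_scaling * cell_scaling
print(f"128-layer QLC die: ~{qlc_die_gbit:.0f} Gbit")  # ~341 Gbit, i.e. the ~340Gbit above
```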

And PLC (5bits/cell) flash is being developed, with 25 per cent more capacity than QLC flash.

This time next year the three disk drive makers – Seagate, Toshiba and Western Digital – could see their nearline business start spinning down.

Toshiba and Western Digital are partners in NAND fabrication and make and sell SSDs. Seagate has no NAND fab interest and is a small player in the SSD market, making it the most vulnerable to SSD cannibalisation of the nearline disk business.

Micron’s against this view

Colm Lysaght, Micron’s Senior Director, Marketing Strategy and Innovation, took issue with this and told Blocks & Files by email: “I don’t argue with the cost trends shown [above], and clearly SSD price/GB will get closer to HDD price/GB over time.

“I also don’t dispute the lower TCO of an SSD compared to an HDD, nor the faster access.

“However, the raw number of EB needed for a “wholesale switch” from nearline HDD to SSD is far too large for the NAND flash industry to contemplate. The capital investment needed to generate the EB required (which will continue to grow at a rate of about 30 per cent per year) is prohibitively expensive.

“SSDs may nibble (and maybe even munch) at the nearline HDD market, but both will coexist for many years to come.”

Good points, all.

Rakers rebuts

Wells Fargo senior analyst Aaron Rakers told subscribers: “While we see most investors already appreciating the SSD replacement of mission-critical HDDs, we have seen little debate at this point over the possible encroachment into nearline HDD workloads.

“However, we think this could evolve as the NAND industry looks to integrate / leverage QLC-based 3D NAND (note: Pure Storage expected to intro new QLC-based all-Flash arrays for lower-performance primary storage applications next week) and we have also seen SSD $/GB fall to sub-10x vs. nearline HDDs.”

Our take is that SSD encroachment of nearline drives could start but will take many years to complete, if it does complete, because vast amounts of new SSD foundry capacity will be needed.


Enterprise buying slowdown clips HPE storage wings

HPE storage revenues dipped five per cent Y/Y to $844m in its third 2019 quarter results, confirming NetApp’s enterprise buying slowdown signal as Trumponomics around China affect enterprise confidence.

Overall revenues slipped seven per cent to $7.2bn. Within that, server sales dipped 12 per cent to $3.15bn.

In the earnings call yesterday, CEO Antonio Neri talked of an uneven market and disciplined execution, which expanded profitability across the company while revenues fell.

The overall revenue downturn was attributed to portfolio rationalisation (getting out of cheap servers for hyperscalers) and macroeconomic factors. He said: “We continue to see uneven demand due in part to ongoing trade tensions which impact market stability and customer confidence.”

“This is showing up in elongated sales cycles, particularly in larger deals.” On storage, Neri said HPE “overall experienced a modest revenue decline against the tougher market backdrop in year-over-year compare.”

Storage sales

But there was a bright spot. In constant currency terms Nimble array sales grew 21 per cent while the SimpliVity hyperconverged product segment grew at a more modest four per cent, way down from its 25 per cent growth in the previous quarter.

The Synergy composable systems business grew most rapidly, up 28 per cent. This was also a marked reduction on its 78 per cent sales growth in the prior quarter. 

The SimpliVity and Synergy growth slowdowns could be attributed to enterprise buyers slowing purchases.

It’s clear that the enterprise 3PAR array business did not grow, and the newly announced Primera arrays were too recent to affect results.

Wells Fargo senior analyst Aaron Rakers told subscribers the “Primera (3PAR) high-end cycle will materialize through 2H2019.”

SMB and US sales

Neri noted: “The SMB mid-market continues to be strong, and this is where we are putting a lot of emphasis on what we call the no touch low touch model for the transaction of high velocity business.” 

As with NetApp, HPE acknowledged a US sales execution problem, which it is trying to fix: “We are actively addressing the sales coverage model in the United States. I am pleased with the actions taken to date that have resulted in positive [growth] with our US product business, which was up over 40 per cent sequentially.”

But the fundamentals are strong: “Explosion of data will continue to fuel underlying demand for solutions to help protect, store, manage and analyze their data. And this is where we are laser focused. We have a strong portfolio [of] solutions and services that span the Intelligent Edge and hybrid cloud.”

Looking for growth

HPE’s strategic areas of product investment are high-performance computing, hyperconverged infrastructure, hybrid cloud (meaning servers and storage) and the PointNext/GreenLake subscriptions business.

Neri pointed out underlying signs of health, with improving gross and operating margins and a record level of year-to-date free cash flow ($860m). HPE is shifting its product mix towards higher-margin, higher-value, software-defined products delivered as a service.

This will help it maintain margins as commodity prices, such as DRAM and NAND, fall. CFO Tarek Robbiati pointed out: “Fiscal year ’19 was not a year where we wanted to dial-up to growth. Fiscal year ’19 is a year, where we had to deliver on EPS commitment, drive free cash flow. These are the two most important metrics, prepare ourselves to dial-up to growth in the subsequent quarters.”

HPE’s fiscal 2020 then is being signalled as a revenue growth year, with Robbiati confirming that: “notwithstanding the consolidation of Cray, we will grow our business overall.” The Cray acquisition is expected to close in October.

The company has raised its non-GAAP earnings per share (EPS) outlook for the year, the seventh quarter in succession it has done this. There’s confidence and disciplined execution in action.

But when will the enterprise buying slowdown end? The China-US trade tensions could well affect the next quarter and the one after that. Trumponomics are unpredictable.

Intel-Micron 3D XPoint revenues to hit $3bn+ in 2023

3D XPoint media revenues will grow to more than $3bn in 2023, propelled by system software support and lowered manufacturing costs.

XPoint expert analyst presentations at the Flash Memory Summit 2019 in San Jose this month were based on a belief that XPoint is now a viable storage-class memory (also called persistent memory) and sales will take off in 2019 or 2020.

Jim Handy, the memory guy at Objective Analysis, showed a chart of predicted XPoint revenues in his FMS 2019 presentation:

Mark Webb of MKW Ventures Consulting also showed an XPoint projected revenue table.

Webb looks out a year further than Handy, and includes non-Optane media in 2024. Blocks & Files thought a direct comparison between the Handy and Webb projections would be a good idea. We guesstimated numbers from Handy’s chart and put them into a spreadsheet model.

Webb skipped 2020 and 2022 in his slide, but Blocks & Files interpolated numbers for these years by splitting the difference between the previous and following years to arrive at this table:
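
The “splitting the difference” step is a simple linear interpolation. Here is a minimal sketch with placeholder revenue figures; Webb’s actual numbers live in his slide and are not reproduced here.

```python
# Fill in the years Webb's slide skipped (2020 and 2022) by averaging the
# neighbouring years. Revenue values below are placeholders, in $bn.
webb_revenue = {2019: 0.9, 2021: 1.8, 2023: 3.0}

for missing_year in (2020, 2022):
    prev_value = webb_revenue[missing_year - 1]
    next_value = webb_revenue[missing_year + 1]
    webb_revenue[missing_year] = (prev_value + next_value) / 2

print({year: webb_revenue[year] for year in sorted(webb_revenue)})
```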

That enabled us to chart and compare both sets of numbers:

Handy thinks XPoint revenues start quite slowly in 2020 but then accelerate out to 2023, by which point his projected revenues surpass Webb’s. Webb differs, with a near-$1bn number this year and then a linear rise out to 2023.

Webb and Handy are making best-estimate assumptions about Optane DIMM and XPoint SSD sales; their relationship to Intel Cascade Lake server sales; the ratio of DRAM to XPoint DIMMs in servers; and the entry of Micron into the XPoint market.

They both arrive at a $3bn to $3.6bn revenue range in 2023.

Gen 2 XPoint

Intel and Micron are planning their generation 2 of Optane for 2020, according to Webb. He estimates it will have four stacks, instead of Gen 1’s two, but similar lithography. It will still be SLC (1 bit/cell) but will have a 35 per cent lower bit cost, with a 256Gbit die compared to Gen 1’s 128Gbit die.

He said there should be measurable Gen 2 XPoint volume in late 2020.

In Webb’s view Micron will not ramp its own Gen 1 XPoint sales. Its plan is to wait for Gen 2 and then develop its own markets. Blocks & Files thinks that Micron may make its XPoint media usable as memory by AMD processors, also perhaps ARM systems, IBM POWER processors and GPUs.

Webb points out that Micron’s Lehi, Utah plant is the only XPoint fab and will be able to support Intel’s needs and Micron’s initial ramp. It will output 45m GB/month of Gen 1 XPoint in the fourth quarter and 5m GB/month of Gen 2 XPoint. A year later the Gen 1 output will be the same but Gen 2 will have risen to 25m GB/month.
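
For context, those monthly output figures convert into annual exabytes as below (taking Webb’s GB/month numbers at face value and using decimal units):

```python
# Convert Lehi's quoted XPoint output from GB/month into EB/year.
GB_PER_EB = 1e9  # decimal units: 1 EB = 1,000,000,000 GB

output_gb_per_month = {
    "Q4 2019": 45e6 + 5e6,    # Gen 1 plus Gen 2, per Webb's figures above
    "Q4 2020": 45e6 + 25e6,   # a year later
}

for period, gb_per_month in output_gb_per_month.items():
    eb_per_year = gb_per_month * 12 / GB_PER_EB
    print(f"{period}: ~{eb_per_year:.2f} EB/year")
# ~0.60 EB/year rising to ~0.84 EB/year - small in absolute terms, consistent with
# Webb's view that one fab can cover Intel's needs and Micron's initial ramp.
```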

Under the terms of the Intel-Micron joint venture, now being dissolved, Micron is required to provide XPoint capacity to Intel for another year from October 2019. It could agree to provide capacity for longer still, thus saving Intel the cost of developing its own XPoint fab.

With an XPoint fab likely costing north of $10bn to develop from scratch, and the Lehi fab capable of doubling or tripling output, Intel could defer building its own fab for some time.

Note: the Handy and Webb presentations can be found in the presentation set downloadable from the FMS 2019 website.

VMware Cloud on Dell EMC is ready for those private moments

Dell Technologies has announced VMware Cloud on Dell EMC (VCDE), a grandly-titled data centre as a service.

This is part of its Dell Technologies Cloud, whereby Dell on-premises infrastructure operates private clouds. Dell supports elements of that infrastructure on AWS, Azure, Google and other public clouds.

Hybrid public-private clouds are supported with consistent management and operations. The Dell Technologies Cloud products are available under a pay-for-use, Flex on Demand model. HPE GreenLake offers this idea as well.

Dell’s VCDE in essence means VMware Cloud Foundation – VMware’s software-defined data centre (SDDC) stack, running on VxRail hyperconverged infrastructure (HCI). Components include SDDC Manager, vSphere, vSAN, NSX and vRealize.

The new service will support automated deployment of VMware PKS (Pivotal Kubernetes Services) on VxRail, adding Kubernetes and containers support to the VMware virtual machine mix.

Dell has produced a number of ‘Cloud Validated Designs’ for its servers, storage and networking equipment – for instance, PowerMax and Unity XT storage arrays and PowerEdge MX servers – that integrate with VMware Cloud Foundation. These designs are less integrated than VxRail and enable compute and storage to be scaled separately.

VCDE includes VMware’s vSphere and vSAN, and is available initially to US customers. Dell said the Dell EMC version is the first to market. It also said Dell EMC is its preferred data protection partner for VCDE. VMware and Lenovo discussed beta-testing this DCaaS (data centre as a service) idea under the Project Dimension banner in November 2018.

Lastly, it is Kubernetes everywhere in HCI land. Competitor Nutanix has its Karbon product which supports Kubernetes. HPE’s SimpliVity also supports Kubernetes via a GitHub download. NetApp’s hyperconverged products also support Kubernetes. 

Enterprise storage suppliers bring their toys to VMworld

Suppliers of ancillary products are integrating themselves deeper into the VMware environment as it becomes a dominant platform for enterprise server computing. Let’s look at some of the products on show at VMworld in San Francisco this week. And be sure to check out The Register’s VMworld first day coverage.

Our VMworld round-up shows how customers can get SANs from direct-attached storage, multi-tiered AWS backup and archiving, and fast NVMe storage fabric networking across ordinary TCP, data centre Ethernet and InfiniBand. Disaster recovery can be co-ordinated through vCenter via VAIO with Zerto’s latest offering.

ATTO, Cohesity and Lightbits

ATTO’s Xstream CORE FC 7550 and 700 accelerated protocol bridges have achieved VMware Ready status. These bridges can be used to build a Fibre Channel SAN by connecting direct-attached SAS storage to up to 64 physical hosts.

Cohesity, which produces so-called hyperconverged secondary storage products, said its DataProtect and DataPlatform offerings have attained “VMware Partner Ready for VMware Cloud on AWS” validation. These products enable customers to back up and recover virtual machines (VMs) running on VMware Cloud on AWS.

They can archive VM backups to Amazon S3 and Amazon Glacier for long-term retention, and get lower cloud storage costs with deduplication. Customers can recover data, whether it be a single file, a VM or their entire VM environment, from AWS.
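
Cohesity does this tiering inside its own platform; as a generic, non-Cohesity illustration of how aged S3 objects can be pushed down to Glacier and Deep Archive, a lifecycle rule set with boto3 might look like this (bucket name and prefix are hypothetical):

```python
import boto3

# Generic illustration only, not Cohesity's mechanism: transition objects under a
# backups/ prefix to Glacier after 30 days and to Glacier Deep Archive after 180 days.
s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",            # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-backups",
            "Filter": {"Prefix": "backups/"},  # hypothetical prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER"},
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }]
    },
)
```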

Lightbits Labs rolled out the Gen 2 version of its Lightfield card, which provides NVMe/TCP links to storage. It features improvements in performance, power efficiency and cost, and a smaller, half-height, half-length (HHHL) PCIe add-in card form factor. Lightbits said LightOS users can use NVMe-oF disaggregation, enabling the heterogeneous clustering of vSAN nodes with local drives or diskless hosts.

Mellanox

Mellanox has introduced ConnectX-6 Dx and BlueField-2 SmartNICs and I/O Processing Unit (IPU) products. The ConnectX-6 Dx SmartNICs provide up to two ports of 25, 50 or 100Gbit/s, or a single port of 200Gbit/s, Ethernet connectivity and PCIe 4.0 host connectivity. 

Its hardware offload engines include IPsec and TLS inline data-in-motion crypto, advanced network virtualization, RDMA over Converged Ethernet (RoCE), and NVMe over Fabrics (NVMe-oF) storage accelerations.  

Mellanox said BlueField-2 based SmartNICs act as a coprocessor to transform bare-metal and virtualized environments using software-defined networking, NVMe SNAP storage disaggregation, and enhanced security capabilities.

The BlueField-2 IPU integrates the ConnectX-6 Dx with a set of Arm processor cores and memory interfaces in a single System-on-Chip (SoC), supporting both Ethernet and InfiniBand connectivity at up to 200Gbit/s.

Charlie Boyle, VP and GM of DGX systems at Nvidia, provided a prepared quote: “We look forward to the new networking, security, and storage capabilities in the Mellanox ConnectX-6 Dx and BlueField-2, which we expect will greatly accelerate training and inference workloads in the data centre and at the edge.”

Zerto

Disaster recovery supplier Zerto announced the launch of an Early Access Program for its platform certified with VMware’s VAIO framework. VAIO is vSphere APIs for I/O Filtering; it enables vendors to develop filters that can intercept I/O requests from a VM to a virtual disk.

A VAIO-certified version of Zerto’s IT Resilience Platform  is expected to become GA in Q4 2019. Zerto says users will be able to create VM replication policies in vCenter’s Storage Policy Based Management for use in continuous data protection and disaster recovery. It will support Secure Boot for vSphere hosts which, Zerto says, is important for government agencies and highly secure environments.


Now that’s what I call future proofing. IBM makes world’s first quantum computing-safe tape drive

IBM is developing cryptographic security measures to protect archived data against attacks by quantum computers.

You can’t be too careful, IBM says: “While years away, data can be harvested today, stored and decrypted in the future with a powerful enough quantum computer.”

In a press statement this week, Vadim Lyubashevsky, a cryptographer at IBM Research, said: “IBM Research has been developing cryptographic algorithms that are designed to be resistant to the potential security concerns posed by quantum computers.” 

IBM Research is working with IBM’s tape development teams to implement new encryption algorithms in a TS1160 tape drive’s firmware. It seems a long technology journey between the far shores of quantum computing and the humble tape drive. Why the fuss?

Big Blue is developing quantum computers and envisages these becoming enormously powerful, advancing bio-informatics, chemistry and AI. Our sister publication The Register recently described quantum computers as forever-nearly-here.

Nevertheless, IBM research scientists say they could arrive in 10 to 30 years’ time and could be used to defeat encryption methods in use today. Therefore, banks and other organisations with lots of sensitive archived information need to prepare now and deploy new encryption algorithms designed to defeat quantum computer attacks.

Up the Khyber and into Star Trek

IBM has designed a couple of algorithms based on two cryptographic primitives – Kyber, a secure key encapsulation mechanism, and Dilithium, a secure digital signature scheme.

Lyubashevsky said these are “based on the hardness of mathematical problems that have been studied since the 1980s and have not succumbed to any algorithmic attacks, either classical or quantum.”

Kyber and Dilithium are included in CRYSTALS (Cryptographic Suite for Algebraic Lattices), developed by IBM in collaboration with several academic and commercial partners including ENS Lyon, Ruhr-Universität Bochum, Centrum Wiskunde & Informatica and Radboud University.

IBM has made CRYSTALS open source and submitted it to NIST for standardisation. It is also donating algorithms and support to a number of open source projects such as OpenQuantumSafe.org.
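
For the curious, the OpenQuantumSafe project mentioned above publishes liboqs-python bindings that expose Kyber as a key encapsulation mechanism. A minimal sketch of a key exchange, assuming the oqs package is installed with a Kyber512 build, looks like this:

```python
import oqs  # liboqs-python bindings from the OpenQuantumSafe project

# Minimal Kyber round trip: the client publishes a public key, the server
# encapsulates a shared secret against it, and the client decapsulates it.
kem_alg = "Kyber512"
with oqs.KeyEncapsulation(kem_alg) as client, oqs.KeyEncapsulation(kem_alg) as server:
    public_key = client.generate_keypair()
    ciphertext, server_secret = server.encap_secret(public_key)
    client_secret = client.decap_secret(ciphertext)
    assert client_secret == server_secret  # both sides now hold the same secret, which
    # could seed a symmetric cipher such as the AES-256 used on the tape drive
```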

IBM TS1160 tape drive getting entangled in quantum computing.

The company said it has tested CRYSTALS successfully on a prototype IBM TS1160 tape drive using both Kyber and Dilithium in combination with symmetric AES-256 encryption, enabling what it calls the world’s first quantum computing-safe tape drive. 

Air gaps, attacks and cloud marketing

This is all very well, but there are no attack-capable quantum computers yet, so the drive cannot be fully tested. Also, since archive tapes are stored offline, they are air gap-protected against malicious access.

Because of this, the idea of tape as a possible attack target seems odd – until we realise that the IBM mainframes used by banks will have IBM tape drives. Also, the IBM Cloud organisation is involved and tape is used for archiving data in its public cloud.

IBM will begin to provide quantum-safe cryptography services on the IBM public cloud in 2020. It will safeguard data in transit by enhancing TLS/SSL (Transport Layer Security/Secure Sockets Layer) implementations in its Cloud services using the CRYSTALS algorithms.

It is offering a ‘Quantum Risk Assessment’ from IBM Security straight away. Read more in an IBM blog.

A bit of a storage rounder-upper

August is coming to a close. Here is a round-up of storage news for the past few days.

AI-powered personal NAS

Say hello to Latticework’s Amber, a personal NAS that made its debut this week. It is also the “first-ever, AI-powered smart storage platform that allows users to safely backup, store and organise their digital data in the privacy of your own home while being able to access and share remotely with Amber’s Personal Hybrid Cloud.”

The classy-looking Amber box is a cube with rounded corners and a LED ring light in its top. 

It has an Intel Gemini Lake (Celeron/Pentium Silver class) CPU, two mirrored hard disk drives (2TB or 4TB max), loads of ports and a WiFi router. Amber is about the size of a bookshelf speaker and provides a personal hybrid cloud. It features music/video streaming and has, Latticework says, AI-powered data organisation for photos, videos and other files.

The personal hybrid cloud aspect means the box can be accessed over the internet and data copied to the LatticeNest cloud for protection and wide access and sharing.

Yes, but what does AI-powered mean? The tech specs say it has AI-powered facial indexing. In other words, it has algorithms for recognising faces in images. How disappointing.

Amber is available in the USA now, at $549, and may appear in Europe and Asia in the next few quarters.

Batman Vexata and Robin.io

All-flash array vendor Vexata is pairing up with Robin.io to add Kubernetes capability. This means its array can provide storage for stateful containers in Kubernetes-orchestrated devops shops.

Robin.io is a hyperconverged Kubernetes system, i.e. it includes its own storage layer. So why buddy up with Vexata? Because Robin.io has no physical storage of its own: its container-based software sits between the application and the infrastructure. Enter Vexata and its array. The Robin platform extends Kubernetes with built-in storage, networking, and application management for Oracle databases and their ecosystem of applications such as WebLogic Server, Oracle RAC and EBS.

Vexata now gets to play in this sandpit. The two said customers can provision databases in minutes, scale on-demand, reduce costs, and accelerate Oracle workloads. 

StorCentric, Vexata’s parent company, intends to add a Robin.io facility to its Nexsan arrays, which provide unified file and block storage and secure archiving.

Robin.io was founded in 2013 and has taken in $46m in funding.

Igneous puts DataDiscover in the AWS Marketplace

Igneous’s basic UDMaaS (Unstructured Data Management as a Service) product provides data protection, movement and discovery for file and object data. There are three components: DataDiscover (file analytics), Data Protect (backup, archive, recover) and Data Flow (copy, move, sync).

DataDiscover continuously finds files in NAS stores through API integrations for devices from NetApp, Isilon, Pure FlashBlade and Qumulo. It supports any NFS- or SMB-based filesystem. Igneous says it scans billions of files in a matter of hours, totting up how much data customers have on primary NAS, how old it is, and where it’s located.

The idea is that customers can shunt old file data off to cheaper storage, such as AWS S3 Glacier Deep Archive, and so save money.
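
DataDiscover does this scanning at API speed against the NAS controllers; as a crude, purely illustrative equivalent (not Igneous’s implementation), a filesystem walk that flags archive candidates might look like this:

```python
import os
import time

# Illustration only: walk a file tree and flag files untouched for two years
# as candidates for moving to cheaper archive storage.
CUTOFF_SECONDS = 2 * 365 * 24 * 3600
now = time.time()

def archive_candidates(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if now - os.stat(path).st_mtime > CUTOFF_SECONDS:
                    yield path
            except OSError:
                continue  # skip files that vanish or are unreadable mid-scan

for candidate in archive_candidates("/mnt/nas_share"):  # hypothetical mount point
    print(candidate)
```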

Try out a DataDiscover test drive.

Shorts

Cloud storage supplier Backblaze has opened a data centre in Amsterdam, its first in Europe. It has two data centres in California and one in Arizona. It is also introducing the concept of regions, starting with EU Central and US West.

IBM is open-sourcing reference designs for the architecture-agnostic Open Coherent Accelerator Processor Interface (OpenCAPI) and the Open Memory Interface (OMI). The OpenCAPI and OMI technologies help maximise memory bandwidth between processors and attached devices. It’s contributing the RTL for the OpenCAPI reference designs to the OpenCAPI Consortium.

Object Matrix has launched the latest version of its media-focused object storage software, MatrixStore v4.1. It features compatibility with more hardware and operating systems, and updates to MatrixStore Vision, web admin and monitoring.

The ObjectiveFS v6.4 release includes fixes, performance improvements and optimisations from past releases, including 350+MB/sec read and write of large files.

Panzura has announced availability of its Log Analytics Service (LAS) within the Vision.AI offering. This multi-cloud log analytics platform claims always-on availability, one-click deployment and flexible scale at a fraction of the cost of Splunk.

StorageCraft announced that an Evaluator Group TCO study shows its OneXafe scale-out converged solution has an almost 3:1 TCO advantage over traditional setups that use data protection software and servers to back data up to a dedicated data protection storage system.

Storj Labs has a Tardigrade decentralized cloud storage platform offering storage, scalability into the exabyte range, security, and efficiency for individuals and organisations. It says that, about half the time, Storj is actually 2x faster than Amazon S3. 

Veritas plays nicely with VMware in the cloud

Data protection stalwart Veritas has ported the Enterprise Data Services Platform (EDSP) for VMware onto the AWS, Azure and Google clouds.

For data protection vendors, the capability to back up to and within public clouds is becoming table stakes, along with providing data analytics, compliance reporting and anti-malware.

Veritas is a traditional legacy on-premises data protection vendor. Newer competitors such as Cohesity, Druva, Rubrik and Veeam provide central management for distributed data protection environments, support compliance needs and enable analytics to be run on the backed-up data.

That’s the background for Veritas combining disparate products into EDSP and now adding cloud support.

EDSP

EDSP shoehorns NetBackup protection, availability and Aptare analytic insights into a single product.

With the latest release customers can now easily recover granularly without mounting the virtual disk image (VMDK), which reduces recovery times. Alternatively they can automate and orchestrate recovery at scale. 

EDSP can also help migrations to the cloud.

Veritas has made EDSP available on-premises and in the three main public clouds for VMware users. A central management facility looks after a distributed EDSP environment. All this gives EDSP hybrid and multi-cloud credibility with VMware users.

Perhaps Hyper-V and KVM hypervisor users will get a version of EDSP in the future.


Pure Storage sees no enterprise sales slowdown – yet

Pure Storage made a net loss of $66m on fiscal Q2 revenues up 28 per cent year-on-year to $396.3m. The results were better than analyst forecasts, but shares still fell overnight in response to a reduced forward outlook from the company.

Pure quarterly revenue and net income history; showing a seasonal sawtooth pattern

Pure reported:

  • Non-GAAP gross profit of $275m, up from $210m a year ago.
  • Gross margin of 69.4 per cent, up from 68 per cent a year ago.
  • Free cash flow of $19.9m, compared to -$11.9m a year ago.
  • Cash and investments of $1.19bn.
  • Product revenue grew 24 per cent year-on-year to $300.1m.
  • Support and subscription revenue grew 42 per cent year-on-year to $96.2m.

The USA accounted for 74 per cent of sales, according to Pure, which gained 450 new customers in the quarter. It said the customer gain was the highest in any Q2 and takes the total past 6,600.

Earnings call

In the earnings call, Chairman and CEO Charlie Giancarlo said: “Looking at the market as a whole, Pure is clearly out-executing our traditional competitors, some of whom have expressed concerns around the macro economy. We do not believe the macro environment has affected us this past quarter.”

President David Hatfield added: “Pure is growing approximately 10x faster than any competitor and their rate of spending on innovation is on average 2x less than Pure’s R&D investment… Our win rates continued to hold nicely.”

He said: “Our gross margins continue to be industry-leading, with product margins well above our competitors.”

Pure said it has a lot of new products coming in the next three quarters, and these will help it take more market share.

Outlook, analysts and enterprise buying slowdown

The outlook for the next quarter is $440m at the midpoint, 18 per cent higher than the year-ago quarter but a lower growth rate than the current quarter’s 28 per cent. The full year outlook is pared back from $1.76bn to $1.68bn.

Pure execs said the new outlook was a case of prudence and due to lower NAND prices, not a response to an enterprise spending slowdown of the kind NetApp has experienced and Cisco and Intel have mentioned.

Wells Fargo senior analyst Aaron Rakers told subscribers: “We think a reduced forward outlook from Pure has been expected following NetApp’s weak July quarter results and reduced F2020 guide… along with weak enterprise results from Cisco and Intel.”

William Blair analyst Jason Ader sent this message to his readers: “The guide-down was attributed primarily to a precipitous decline in NAND component pricing in the second quarter (which is affecting end-user pricing and deal sizes) and secondarily to increased caution on the macro environment (although Pure has yet to see an impact on its business).”

Pure CFO Tim Riitters is leaving the company for personal reasons, and there were fulsome comments about his performance as CFO.

Datrium DRaaS takes the disaster out of recovery

Datrium today introduced a disaster recovery-as-a-service (DRaaS) using the VMware Cloud on AWS as the remote DR data centre.

The service is imaginatively called “Datrium DRaaS with VMware Cloud on AWS”, and the hybrid converged system supplier said it is much simpler for customers than running their own remote data centre. There is a consistent management environment and Datrium handles all ordering, billing and support. It claims its DRaaS is up to 10 times less expensive than a traditional DR operation, but has not yet provided an example.

Datrium organised a fun quote from Steve Duplessie, senior analyst at ESG: “Let’s face it – DR has been more disaster than recovery. It has been virtually impossible to have legitimate DR for decades, and the problem just keeps getting worse with exponential data growth and overall complexity.”

And here’s another from Bryan Betts, Principal Analyst, Freeform Dynamics. “Cloud-based disaster recovery has been something of a Holy Grail—great in theory, but a lot harder to achieve in practice, thanks to the complexity of marrying up disparate environments,” he said. “Datrium DRaaS has the potential to completely flip that around – given testing and careful planning, of course! – by guaranteeing a matching DR setup in the cloud.”

Datrium DRaaS, the features

Datrium copies on-premises virtual machine (VM) data to the AWS cloud. Should disaster strike, it can then start up a replacement cloud version of the on-premises DVX systems.

Datrium DRaaS diagram

Data upload takes place with deduplicated and compressed VM snapshots stored in AWS S3. These VMs are stored in native vSphere format and provide the basis for incremental backup and DR.

Datrium’s DRaaS offers a 30-minute recovery compliance objective (RCO) with autonomous compliance checks and just-in-time creation of VMware SDDCs from the S3-stored snapshots. 

The service also delivers a recovery point objective (RPO) from five minutes to multiple years ago, supporting primary and backup data simultaneously.

Single glass of pane

Datrium’s DRaaS runs on the company’s Automatrix data platform, which converges primary storage, backup, disaster recovery, encryption and data mobility capabilities into a single offering with a consistent data plane.

DR operation can be tested in an isolated and non-disruptive way. There is a consistent management environment covering the on-premises and cloud-resident Datrium systems. Customers can operate their cloud DR site and primary site with vSphere.

Failback is automated and can restore VMs once the primary site is operational again, or recover from a ransomware attack. To minimise egress charges, only changed data is sent back from the cloud.

Datrium DRaaS currently supports recovery to AWS regions including Asia Pacific (Tokyo), Canada (Central), Europe (London) and the East and West United States. Contact Datrium for pricing.

Competition

Druva also offers DRaaS based on AWS. On-premises VMs or ones in the VMware Cloud on AWS are backed up to Druva’s cloud in AWS, converted to EBS snapshots, and can be spun up as EC2 instances if disaster strikes the source site.

Zerto sells combined backup and DR facilities covering any-to-any mobility between vSphere, Hyper-V, AWS, IBM Cloud, Microsoft Azure and any of several hundred Zerto-powered Cloud Service Providers (CSPs).

Western Digital makes fast and fat portable hard drives for gamers

Western Digital today announced five external drives for gamers. The new WD_Black brand comprises four disk drives, including two Xbox-specific offerings, and one SSD.

They are designed to be fat and fast. In its release blurb WD said the products were “dedicated to gamers who face the dreaded challenge of choosing which of their favourite games to sacrifice when they reach the storage capacity limit of their gaming station.”

First, the hard disk drives:

  • P10 – 2TB to 5TB, USB 3.2 Gen 1 port, 3-year warranty. Costs $90 for 2TB to $149.99 for 5TB.
  • P10 for Xbox One – 3TB to 5TB, with two months’ Xbox Game Pass Ultimate membership. Costs $110 for 3TB or $150 for 5TB.
  • D10 – 8TB, 7,200rpm, up to 250MB/sec, 3-year warranty and active cooling (fan). Costs $200.
  • D10 for Xbox One – 12TB, 7,200rpm, with three months’ Xbox Game Pass Ultimate membership. Costs $300.

The P50 solid state drive has 2TB capacity, a USB 3.2 Gen 2x2 port with speeds up to 2,000MB/sec, and a 5-year warranty.

The P10 Game Drive for Xbox One, D10 Game Drive and D10 Game Drive for Xbox One are available this quarter. The P50 Game Drive is expected to be available in calendar Q4.

Internal disk drives

We’re puzzled about what actual disk drives are inside the P10 and D10 portable products.

Examining the dimensions of the P10 and D10 products shows that the D10 drives are physically larger and that there are two P10 sizes:

Going by their physical size and larger capacities, the D10 drives are 3.5-inch format products, as smaller 2.5-inch format drives top out well before 8TB.

It looks like the P10 range is based on 2.5-inch format drives, with the 4TB and 5TB models having an extra platter or two. This is why they are 8mm taller and weigh 0.9kg more than the 2TB product.

The P10’s spin speed isn’t revealed. We’ve asked WD what specific drives are inside the P10 products and how fast they rotate.

WD told us by email: “Our WD_Black P10 2.5-inch gaming drives are 5400RPM class. The 2TB is a 2-platter design (7mm) and the 3, 4 and 5TB are a 5-platter design (15mm).”