
Composable system startup DriveScale sets up EMEA office

Henk Jan Spanjaard has been hired to be a VP and general manager of EMEA for DriveScale as it enters Europe.

DriveScale is a composable systems startup, competing with HPE’s Synergy and systems from Liqid, Western Digital (OpenFlex) and, latterly, Dell EMC with its MX7000.

The idea is that composable systems make up virtual server systems from pools of compute, storage and networking resources, which makes better use of those resources than stranding some of them in physical servers, converged or HCI systems where they can be left unused.

Spanjaard’s career features a series of similar posts with other US companies, such as NetApp.

Gene Banman, CEO of DriveScale, said the company is experiencing high demand from Europe. He is “excited to be opening our first European office. Henk Jan is a strong addition to our growing team. His proven track record leading sales and operations of high tech companies will help us build market awareness and scale across EMEA regions.”

Rubrik goes on boardroom celebrity trophy hunt

Mega-financed data management startup Rubrik has appointed ex-Cisco boss John Chambers as a board advisor.

Chambers, now Chairman Emeritus of Cisco, and founder and CEO of JC2 Ventures, has also invested in Rubrik, adding to its near $300m funding total.

John Chambers.

JC2 Ventures says: “Teaming with strategic, visionary startups, we hope to create new jobs, new markets and new possibilities for generations to come.” JC2? Where did that name come from? Jesus Chri…..? No, no. How could we ever think that!

Bipul Sinha, Co-founder and CEO, Rubrik, released a canned quote: “John’s ability to identify market transitions and capitalize on them to maximum effect has made him a go-to advisor for world leaders.”

World leaders! Are we supposed to think of the likes of Donald Trump, Vladimir Putin, Angela Merkel and so forth?

Chambers’ canned quote went like this: “When I talk to Rubrik CEO Bipul Sinha about his vision for growing and scaling Rubrik, I have no doubt that the company is on its way to becoming a multibillion dollar business.”

Are we seeing somewhat excessive mutual flattery here?

Rubrik has also appointed Avon Puri, former VMware VP of Business Applications, as its Chief Information Officer (CIO). B&F

V7 Hive Fabric bees swarm out from AI-ready re-invented Atlantis apicultural HCI offering

HiveIO, which bought failed Atlantis Computing’s HCI and VDI tech, has released v7 of its SW saying it eliminates trad HCI vendor complexity.


Hive Fabric is a software HCI (Hyper-Converged Infrastructure) offering based on KVM. V7.0 delivers a Cluster Resource Scheduler (CRS) and virtual servers, which enable running multiple mixed-application workloads on a single infrastructure.

Hive Fabric diagram.

CRS monitors resource use across the Hive cluster and can move guest VMs between servers to ensure operational efficiency.
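HiveIO has not published how CRS chooses placements, but the basic idea (move a guest off the busiest node when the load gap to the least-busy node exceeds a threshold) can be sketched as follows. Everything here is illustrative, not HiveIO code:

```python
# Hypothetical sketch of cluster resource scheduling. HiveIO's actual
# CRS logic is not public; this collapses load to a single number.
def rebalance(hosts, threshold=0.2):
    """hosts: {host: {vm: load_fraction}}. Returns a (vm, src, dst)
    migration suggestion, or None if the cluster is balanced enough."""
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    src = max(load, key=load.get)   # busiest node
    dst = min(load, key=load.get)   # least-busy node
    if load[src] - load[dst] < threshold or not hosts[src]:
        return None
    vm = max(hosts[src], key=hosts[src].get)  # heaviest guest on src
    return (vm, src, dst)

cluster = {"node1": {"vm-a": 0.6, "vm-b": 0.3},
           "node2": {"vm-c": 0.2}}
print(rebalance(cluster))  # suggests moving vm-a from node1 to node2
```

A real scheduler would weigh CPU, memory and network separately and respect affinity rules; this one-dimensional version just conveys the mechanism.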

Dan Newton, CEO of HiveIO, claims: “With Hive Fabric, users can consolidate disparate meta-data into a single stream enabling your IT team to gain valuable insight into their business that leads to innovation and increases ROI.”

HiveIO says the fabric’s Message Bus, which holds the cluster’s metadata, is AI-ready; a hint that AI techniques like machine learning are coming to boost manageability.

CTO and co-founder Kevin Mcnamara said: “IT professionals [will be able to] leverage an AI engine to provide proactive support.”

HiveIO history

  • Founded in 2015 by CTO Kevin Mcnamara and Chief Development Officer Ofer Bezalel with a $1.8m A-round of funding and Joe Makoid as CEO and President,
  • November 2015 – launched software-defined Infrastructure-as-a-Service offering for SMBs, including VDI, virtualized servers, infrastructure, storage, and security,
  • B-round of funding in 2017 from Citrix Systems and three other investors,
  • July 2017 – acquired Atlantis Computing HCI and VDI technology assets for an undisclosed sum,
  • November 2017 – Joe Makoid left to become CEO and President at MakTRAX,
  • April 2018 – Dan Newton, ex-GM and SVP at Rackspace, hired as CEO,
  • August 2018 – released Hive Fabric v7.0.

Trad HCI complexity

On the trad HCI vendor complexity point HiveIO claims “Hive Fabric 7.0 satisfies the performance, scale, and security that enterprises demand while eliminating confusing licensing and multi-vendor integration and management that come with traditional hyper-converged infrastructure.”

Er, Blocks & Files has been told by HCI vendors, such as Cisco (HyperFlex), Dell EMC (VxRAIL), HPE (Simplivity), Nutanix, Scale and others that the whole point of HCI is to buy scale-out boxes that integrate all the HW and SW and licenses needed in one product with one SKU, eliminating complexity – but, hey, marketing!

We’ve asked HiveIO to respond to this point, and Newton said this regarding HCI vs. HCF: “Hive Fabric version 7.0 has all of the functionality of traditional HCI products but is offered under one license and by one vendor, which eliminates a variety of unnecessary costs. Hive Fabric also comes with its own native broker and gateway. Fees and vendor lock-ins from other solution providers can be detrimental to the functionality of the data center. So to make things even more simple, Hive Fabric can run on any x86 hardware without additional overlay, so the solution can be integrated into existing data centers, unlike many products on the market today.”

But B&F thinks you still have to buy x86 hardware to run Hive Fabric on, something you don’t do separately with an all-in-one HCI system. We think this HiveIO point is a dubious one.

Get a Hive Fabric overview here. V7.0 Hive Fabric is available now. B&F

Buy Pavilion Data array and lease drive slots

NVMe-over-Fabrics array supplier Pavilion Data has a new business model: buy the array and controller and lease drive slots.

Customers choose the level of controller performance they want and subscribe to a number of drive slots, populating the bays with whatever drives they want. It’s a BYOM – Bring Your Own Media – model called OPENCHOICE Storage.

OPENCHOICE Storage has an all-inclusive, cloud-like subscription model that bundles maintenance, support and software into a single price that will not change for the life of the array. Customers can license more slots as needed.

The premise is that customers can buy SSDs on the open market at a lower cost than buying them from array suppliers. Pavilion expects the $/GB to be half of what an all-flash array vendor might charge.

Pavilion says CPU, memory and SSD manufacturers are constantly innovating, but the three-year refresh cycles of traditional storage OEMs prevent customers from taking advantage of these innovations.  Its OPENCHOICE Storage enables IT to buy, scale and repurpose media on its own timelines.

Pavilion’s Head of Products, Jeff Sosa, told us: “OPENCHOICE was something we initially wanted to offer to some of the large hyperscale-type accounts, since they already buy SSDs for their servers with DAS. Since we use the same kind of SSDs in our platform, it made sense to allow them to continue buying SSDs the same way. In addition, it offers customers a lot of flexibility to change and mix/match media to fit specific application needs in real-time, even within the same chassis by using different tiers of flash in the same box.

“Other customers have been interested in repurposing resources when projects come and go, so this allows them to remove devices they no longer need in the array and repurpose them elsewhere, and replace them with something else in our array.  Therefore, flexibility and cost savings are the key business drivers behind it.”

CEO Gurpreet Singh says this eliminates capacity-based pricing: “With a 50 percent lower cost of acquisition than all flash arrays and three times better utilization than DAS, OPENCHOICE makes storage infinitely simpler to procure and manage.”

The company has qualified drives from the major drive manufacturers and says it qualifies new drives on a regular basis. B&F

Cohesity just BOOMS!

Secondary storage hyper-converger Cohesity saw revenues grow 300 per cent in its fiscal 2018 year and introduced Helios SaaS management.

The fiscal year closed July 31, and Cohesity also quadrupled its customer count year-over-year. It said customer adoption accelerated 75 per cent in the fourth quarter compared to the third.

It said 81 per cent of its partners grew their business in excess of 100 per cent in FY 2018, while 75 per cent grew their business by more than 200 per cent.

CEO and founder Mohit Aron said: “Cohesity empowers customers to consolidate a patchwork of secondary silos including backup, files and objects, test/dev, archiving, analytics, and cloud with a platform that scales in a way no other vendor can match.”

The company has tripled its headcount to nearly 700 globally. Within the last eight months, it christened its new headquarters in downtown San Jose while also opening its second major US office in Raleigh, North Carolina.

Helios

Helios provides a single dashboard with analytics and machine learning capabilities to hopefully generate insights from data. Customers can:

  • Access a smart assistant for IT: Administrators can define service-level agreements (SLAs) by job and workload and Cohesity SmartAssist will automatically evaluate required versus available resources (e.g., compute, storage) across all clusters to meet SLAs or suggest changes as needed.
  • Utilise machine learning for operational insights: Helios evaluates how infrastructure is being used and utilises ML to determine what adjustments or modifications may be necessary in the future. For example, based on current storage utilisation and workload makeup, Helios will proactively recommend when additional resources might be needed to continue meeting business requirements.
  • Benefit from peer comparisons: Helios evaluates a customer’s operational metadata and also analyses operational metadata from Cohesity customers globally. Organisations can compare infrastructure utilisation against anonymised benchmarks from their peers and uncover best practices.
  • Benefit from proactive health checks: Based on global operational metadata, Helios looks at the actions taken by an IT administrator across their sites and alerts them, for example, to anomalies that don’t follow best practices (e.g., if a financial services customer forgets to turn on encryption). Helios will then suggest a recommended path forward.
  • Relax with automation: Customers get peace of mind in knowing Helios can automate corrective action for failure of non-critical system resources, including proactively notifying the Cohesity global support team to ensure that any issue is quickly addressed.
  • Access a “crystal ball”: Helios empowers administrators to test the impact of future changes across their clusters before rolling them out. Administrators can also evaluate alternative approaches with “what-if” analyses powered by machine learning.

Customers can move data from one location to another, maximising capacity and improving disaster recovery while optimising costs. They can also use the Cohesity Analytics Workbench (AWB) to get insights from data, starting with three applications:

  • Detect patterns that help ensure compliance: With the Pattern Finder application, enterprises can search their secondary data for strings of characters that match the pattern of sensitive or personal data, for example, a Social Security number or a phone number. Organisations can then take corrective action to comply with stringent regulatory requirements.
  • Pinpoint bad passwords to mitigate risks: The smart Password Detection application gives companies a way to improve security by searching across global data sets to uncover passwords that fail to meet best practices, such as those that include personal or company names, or are stored in plain text.
  • Transcode videos to optimise for capacity: To help optimise resources, IT admins, including those within the security, surveillance and healthcare industries, can use the Video Compression application to reduce the size of large media files while optimising space.
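As a rough illustration of the kind of scan the Pattern Finder application performs (this is not Cohesity’s code, and the regexes are deliberately simplistic):

```python
import re

# Minimal pattern scan: naive regexes for a US Social Security number
# and a North American phone number. Real products use far more robust
# patterns plus validation to cut false positives.
PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def find_sensitive(text):
    """Return (pattern name, matched string) pairs found in text."""
    return [(name, m.group()) for name, rx in PATTERNS.items()
            for m in rx.finditer(text)]

sample = "Contact 555-867-5309; SSN on file: 078-05-1120."
print(find_sensitive(sample))
```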

Download the Cohesity Helios data sheet here.

Newcomer StorCentric biz buys Nexsan and Drobo

StorCentric is the brand new home for the Nexsan array and Drobo prosumer storage businesses, having bought them from their current owners.


Behind this apparently simple transaction is a tight mesh of private equity manoeuvres.

First we have to have a history lesson.

In 2007 Nexsan Technologies, founded in 1999, was a storage subsystem developer with SATA disk arrays and an Assureon content-addressed array for archiving.

Data Robotics was founded in 2007 by Geoff Barrall and developed a consumer/prosumer file storage product called Drobo. Barrall had previously founded BlueArc, the NAS supplier bought by HDS.

Connected Data was founded in 2012, also by Geoff Barrall. Its product is the private cloud file sync’n’share Transporter device.

Data Robotics, now with its name changed to Drobo, was bought by/merged with Connected Data in 2013.

Nexsan was bought by Imation for $120m in 2013. Imation was then in a loss-making and restructuring phase.

Drobo was bought by an investment group in May 2015, with ex-Brocade exec Mihir Shah becoming its CEO.

Imation bought Connected Data for $7.5m in October 2015.

Imation subsequently imploded, with Nexsan and Connected Data being bought in a non-cash deal by private equity house Spear Point Capital Management in Jan/February 2017. Ron Bienvenu, Spear Point co-founder and managing partner, became its CEO and exec vice-chair. Trevor Colhoun, Spear Point co-founder and managing partner, was exec chairman.

Geoff Barrall became the COO and John Westfield was hired as Nexsan’s CFO. Victoria Grey was hired as the CMO. She is no longer there. Neither is Geoff Barrall.

StorCentric

What has happened is that a new company, StorCentric, has been set up, with funds from Humilis Holdings and some others. Now it begins to get murky.

Spear Point Capital is being merged into Humilis Holdings and a Humilis managing partner is Trevor Colhoun.

StorCentric’s board has four members; Trevor Colhoun, Mihir Shah who is StorCentric’s CEO, Mike Edwards of the Atlas Technology Group investment bank and a Drobo board member, and Peter Richards who we understand has stints at Empire Capital and now Dune Road Capital in his CV.

Details of StorCentric’s purchase of Nexsan and Drobo have not been revealed: neither the price nor whether it was paid in cash or shares. Equity was provided by existing investors and the balance was a bank loan.

StorCentric will run Nexsan and Drobo as separate divisions. We’re told Connected Data is really not a major part of this. The technology is there but not being sold any more.

At Nexsan Gregg Pugmire is the Global VP of Sales and will have responsibility for both brand and revenue.

Read Fenner is the Global VP of Sales at Drobo and will have responsibility for both brand and revenue.

Nexsan CTO, and founder, Gary Watson, is StorCentric’s CTO.

Where do we go from here?

Mihir Shah tells us there is still a lack of profitability in the storage industry. The setting up of StorCentric is about taking two established brands with solid reputations and building on them to make one profitable, technically rich company.

It will have over 150 employees across locations in North America, Europe and Asia, and says it will help customers scale their businesses while accessing, backing up and archiving critical data.

A Shah quote said: “StorCentric is strongly positioned for future growth and innovation in the storage industry. We will continue to execute on our growth strategy, both organic and through acquisitions. Our focus will be on additional software and hardware products that address the needs of our customers and partners.”

Nineteen years after founding Nexsan a Watson quote said: “Both Drobo and Nexsan have been developing new integration points to the evolving hybrid cloud world and these acquisitions gives us the opportunity to work together on developing solutions which automatically optimise both on-premises and on-cloud information management and governance.”

Watch this space for a hopefully simpler and developing story.

Igneous: Bye-bye proprietary hardware. Hello data management services

Igneous Systems has seen the light and is becoming a data management supplier using commodity hardware.


In common with other data protection vendors selling proprietary hardware,  Igneous faces strong competition from players who treat data protection as an entry to the wider data management market. These rivals offer data analytics and insight, and use commodity hardware.

Igneous combines hybrid cloud services with a technically advanced proprietary on-premises hardware appliance built from nanoservers – disk drives with added ARM compute. The software includes replication-based tiering off to the public cloud (AWS), and features delivered as a service.

The company protects primary data-handling filers – basically offering NAS backup. But it has always had data services ambitions and now it is making its move into this market. At the same time it is deepening integrations with three key filer players – Pure Storage, Dell EMC Isilon and Qumulo.

This is a competitive necessity. The company needs to separate itself from the pure-play pack of backup vendors and from fast-growing unstructured data management startups such as Cohesity and Rubrik.

Three new hardware integrations*, on top of the existing NAS filer integrations, help fulfil both needs:

  • Pure Storage FlashBlade – NFS, SMB and S3 object integration
  • Dell EMC Isilon – direct API integration with OneFS providing concurrent multi-protocol support for NFS and SMB, with switching of ACLs and permissions
  • Qumulo QF2 – direct API integration via a strategic alliance

Got to pick a data set or two

Igneous classifies the market into customers with structured data sets (databases and VMs) up to 100TB; unstructured data sets up to 10PB or so; and more scalable and larger needs.

For the first group, Cohesity and Rubrik provide more modern and resource-efficient data protection than legacy backup software tools such as Veritas and CommVault, according to an Igneous spokesperson.

Traditional backup engines can capture NFS and SMB file permissions like Igneous but “they’re based on the NDMP backup protocol, which may take too long to complete and may impact filer performance,” he said.

“Neither Veritas nor Commvault supports Isilon multi-protocol (NFS+SMB) permission protection, API-level integration with Qumulo, or Object support with Pure FlashBlade.”

Igneous says it can protect customers with structured data out to the 100s of TBs and unstructured data beyond 10s of PBs, and do this more affordably and with simpler management. This is a vague boast. There is no simple structured-unstructured data capacity level crossover point and Igneous is claiming tricky-to-prove advantages.

The company is to rebrand its product set from the Hybrid Storage Cloud to something that better reflects the component data services it offers in this cloud. It will extend public cloud support, possibly adding Azure alongside AWS.

Other NAS supplier integrations are coming too, such as, possibly, WekaIO.

Commodity exchange

As part of its new software focus, Igneous is developing a generic hardware strategy for its on-premises appliance. This will involve close relationships with some unnamed hardware vendors.  And it is not too much of a stretch to assume that the partners will provide popular capacity-focused file storage boxes.

Igneous is also exploring a web-crawler approach to check through NAS data sets, building up metadata such as an index and doing analytic things with that indexed data. It may also extend data provisioning capability.

Igneous is a work in progress, in product development mode as it pivots from proprietary hardware. It is extending data services and adding NAS array integrations to compete with strong data management services competition. Actifio, Cohesity, Druva and Rubrik are just some of the companies breathing down its neck. Object storage suppliers are also hastening into the NAS access and data management area.

Like molten lava, Igneous has to settle on and find its way across an existing landscape before it cools, solidifies, gets stuck and can flow no more. You gotta keep flowing, Igneous. Standing still is not allowed. B&F

* Pure Storage customers must run Purity for FlashBlade version 2.1.3 or higher with API version 1.1 or higher to protect Object shares with Igneous.

Dell EMC Isilon OneFS 7.2.0 or higher can begin importing and protecting multi-protocol shares with Igneous.

Customers using Qumulo QF2 version 2.7.6 or higher can protect their system with Igneous.

StorageCraft builds Rubrik-esque system for SMBs

StorageCraft aims to provide single system Cohesity and Rubrik-style enterprise converged scale-out storage and data protection for small and medium businesses.

The company bought SMB object storage startup Exablox in January 2017. Exablox built deduplicated scale-out filer arrays using object storage internally.

The OneXafe box from StorageCraft is claimed to be a converged scale-out storage and data protection system for small and medium enterprises’ physical and virtual servers. Exablox had combined StorageCraft’s ShadowProtect backup SW with its OneBlox nodes in October 2016.

At the time we said StorageCraft’s portfolio includes an integrated suite of backup, disaster recovery, system migration, virtualisation and data protection software running on Windows and Linux for small and medium-sized businesses.

Now, in effect the StorageCraft and Exablox technologies have been combined, with StorageCraft saying neither NetBackup, Veeam nor Veritas can offer such a complete or affordable system for SMBs. Roughly similar systems to OneXafe from Cohesity and Rubrik are said to be designed for larger enterprises and be more costly.

There are three models, with OneXafe 4412 and 4417 capacity-optimised systems and an all-flash 5412 version offering instant recovery and fast-access unstructured data to serve primary virtual server production applications.

OneXafe has a patented distributed object-based file system – the Exablox SW technology – that is integrated with data protection services. The file system delivers NFS and SMB data access for primary and secondary storage.

It scales from a single node to a multi-petabyte cluster. The management is delivered remotely as-a-service by a OneSystem facility.

OneXafe has both VMware and VSS integration and provides work flow and capacity usage analytics. The data protection is SLA-based and policy-driven, and data recovery can be pretty much instant; a patented VirtualBoot feature is claimed to deliver terabyte (TB) virtual machine recovery in less than a second regardless of whether deployed on-premises or in the cloud.

StorageCraft_OneXafe

StorageCraft OneXafe

An on-premises OneXafe system’s policies can replicate data to StorageCraft Cloud Services, with a claimed single-click recovery of data, compute, and network services in an orchestrated recovery workflow.

Will SMBs want a single scale-out storage box with cloud-based management to handle all their storage and data protection needs? If their current system is a complex and costly mess then OneXafe might be an appealing fix.

Datasheets can be found here. OneXafe disk-based 4400 starts at less than $14,000 for 144TB (c$0.095/GB) with the all-flash 5400 series starting at less than $30,000 for 38TB ($0.77/GB). How do these stack up against Rubrik and Cohesity?

A web source says Rubrik’s R334 (3-node, 36TB) backup appliance unit list price is around $100,000 MSRP. That is $2.71/GB, more expensive than StorageCraft’s OneXafe 5400 at $0.77/GB.

A Cohesity pricing web source says a 3-node Cohesity C2300 (48TB raw, using 4TB disk drives, 800GB PCIe Flash cards per node) starts at $90,000. That’s $1.83/GB, again more expensive than OneXafe’s 4400 at $0.095/GB.
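The quoted $/GB figures can be reproduced with a quick calculation, assuming binary terabytes (1TB = 1,024GB), which is what matches the article’s numbers:

```python
# Reproducing the $/GB comparisons, using binary TB (1TB = 1,024GB).
def dollars_per_gb(price_usd, capacity_tb):
    return price_usd / (capacity_tb * 1024)

systems = {
    "OneXafe 4400 (144TB, <$14,000)":  dollars_per_gb(14_000, 144),
    "OneXafe 5400 (38TB, <$30,000)":   dollars_per_gb(30_000, 38),
    "Rubrik R334 (36TB, ~$100,000)":   dollars_per_gb(100_000, 36),
    "Cohesity C2300 (48TB, ~$90,000)": dollars_per_gb(90_000, 48),
}
for name, cost in systems.items():
    print(f"{name}: ${cost:.2f}/GB")
```

Note these are list-price comparisons of raw capacity only; effective $/GB after deduplication and discounting would differ.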

OneXafe products are available from StorageCraft’s channel.

Did NetApp over-pay buying SolidFire for $870 million?

Did NetApp pay too much for all-flash array vendor SolidFire?

Its latest results prompted financial analysts to update their clients on NetApp’s finances and one* included SolidFire revenues for the four fiscal 2018 quarters. They were $36.3m, $23.5m, $25m and $46m; a total of $130.8m for the year.

NetApp paid $870m for SolidFire in late 2015. A $130.8m run rate is 15 per cent of that, seemingly a not bad return. But that is not the actual return, because the actual return on the SolidFire investment is the profit on that $130.8m.

What is it?

We don’t know, as NetApp doesn’t reveal that information. However it does reveal its own revenues and profits, and the profit can be presented as a percentage of its revenues.

In NetApp’s fiscal 2018 quarters its profits as a percentage of its revenues were 9.92, 12.32, -33.29 and 16.52 per cent. That negative third quarter number was caused by US tax law changes.

Let’s ignore that number and use the other three to calculate NetApp’s average profit percentage for the year. It comes to 12.92 per cent.

We can apply that to the SolidFire revenues to work out a notional SolidFire profit for the year; $16.9m.

That is 1.94 per cent of the SolidFire acquisition cost. In other words, $870m was invested and is getting a 1.94 per cent return. Is this good or bad?

We understand, simplistically, that a business can evaluate an internal rate of return on investment by comparing it to their internal cost of raising cash, the weighted average cost of capital (WACC). That would be the cost of raising the $870m in this case.

Another way of looking at it is to use an internal rate of return (IRR), saying that returns on acquisition investments must be higher than an IRR threshold level.

If an acquisition brings in more than the WACC or IRR then it is a reasonable deal. Some business sources suggest a large and mature company could have a 7 or 8 per cent WACC.

A venture capitalist company might have a 50 per cent WACC while a private equity concern might have a 35 per cent one; both representing a risk premium, with the VC investment risk being higher than the private equity level.

We could class NetApp as a large and mature company. It seems unlikely that a SolidFire return on investment of 1.94 per cent would exceed NetApp’s WACC or IRR percentage.

Therefore, on the basis of the calculations and assumptions above, NetApp overpaid for SolidFire.
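The arithmetic behind that conclusion, using only the figures quoted above, runs as follows:

```python
# Back-of-envelope check of the SolidFire return calculation.
# All figures are taken from the article; the method is the article's own.

solidfire_fy18_revenues = [36.3, 23.5, 25.0, 46.0]   # $m per quarter
annual_revenue = sum(solidfire_fy18_revenues)        # $130.8m

# NetApp profit margins for three FY2018 quarters; the tax-distorted
# -33.29 per cent quarter is excluded, as in the text.
margins = [9.92, 12.32, 16.52]
avg_margin = sum(margins) / len(margins)             # ~12.92%

notional_profit = annual_revenue * avg_margin / 100  # ~$16.9m
return_on_price = notional_profit / 870 * 100        # vs the $870m price

print(f"Revenue: ${annual_revenue:.1f}m")
print(f"Average margin: {avg_margin:.2f}%")
print(f"Notional profit: ${notional_profit:.1f}m")
print(f"Return on $870m price: {return_on_price:.2f}%")
```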

Does this argument stand up?

William Blair analyst Jason Ader said: “Your math makes sense. But I would note that a big reason in my mind for the SolidFire acquisition was to get their hands on its HCI tech, which at the time of the deal, was still under development. While NetApp HCI is still small revenue for NetApp today, it could be much more significant over time, which could help justify the purchase price.”

*Wells Fargo senior analyst Aaron Rakers.

NetApp swaggers over competitors with rampaging flash array results

NetApp is growing revenues solidly with a booming flash array business, competitors in perceived disarray, and its Data Fabric concept resonating with customers looking to the public cloud.

Its first fiscal 2019 quarter revenues were $1.47bn, 12 per cent more than a year ago*. There was a profit of $283m, 58.7 per cent higher than the year-ago $166m. Product revenue was $875m, up 20 per cent annually, software maintenance was $229m (up 3 per cent y-o-y) and $370m came in from hardware maintenance and other services. This was flat y-o-y. Gross margin was 66 per cent.

All-flash revenues exhibited an annual $2.2bn run rate, up 50 per cent year-on-year. Installed base flash array penetration is around 14 per cent, up from under 10 per cent a year ago, implying there is a lot of potential flash array demand inside its customer base.

Financially it is in rude health, with free cash flow being 18 per cent of revenue and increasing 22 per cent year-on-year. Some $605m was spent on share repurchases and cash dividends. NetApp closed the quarter with $4.8bn in cash and short-term investments.

CEO George Kurian’s release quote said: “Enterprises are signalling strong confidence in NetApp by making long-term investments to enable the NetApp Data Fabric across their entire enterprise.”

NetApp CEO George Kurian.

NetApp said its cloud data services run rate was $20m.

SolidFire array revenues were $47m according to IDC. The all-flash EF series and ONTAP arrays brought in $433.4m according to senior Wells Fargo analyst Aaron Rakers. HCI revenues were not identified.

What did the earnings call tell us? Kurian said NetApp was starting the fiscal year: “in a stronger position than we’ve been in for years. Revenue, gross margin, operating margin and earnings per share were all above our guidance.”

Kurian said: “as businesses become more data-intensive and builds more data-intensive environments like machine learning, high performance storage and data services are benefiting from spending intention.” HCI product revenues were not separately identified in NetApp’s results. Asked by Rakers about this and NetApp’s HCI product progress, Kurian said: “With regard to whether we will breakout HCI or not, we’ll provide you that clarity when we do it.” Take that, Aaron.

He added: “At this point, HCI is a part of our product revenue. We have seen a broader number of competitive engagements and we are winning our share, right? … We’ve had good competitive wins. … we have some really exciting product announcements at NetApp Insight, and you’ll see how we build NetApp HCI into our compelling Data Fabric story there.”

NetApp Insight will be held in Las Vegas, October 22-24, and it should also contain news about the object storage StorageGRID product.

Kurian talked about the effect of lowering 3D NAND flash prices: “As prices ameliorate for 3D NAND, which we are starting to see clearly, that performance [disk] drive, the 10-K drive segment will concede to the all-flash array segment.”

Then he moved on to the competing suppliers, saying: “I think, the legacy competitors are in a variety of states of challenge. I think, if you look at the large players like EMC or HP, they’re still trying to rationalize their lead flash portfolio, because none of their products is complete.

“If you look at players like IBM and Hitachi and Fujitsu and Oracle, they basically conceded and are no longer in sort of new deployment considerations. They’re essentially defending the installed base and the start-ups are challenged.

“They have essentially been fast on product innovation, and they don’t have the market reach to compete. So it will be, I see some sense of consolidation coming up in the marketplace and we will benefit from that, because we’re very well-positioned.”

One questioner asked about NetApp’s view on composable infrastructure and HCI, to which Kurian replied: “Our solution has many of the elements of composable in it.”

We might hear more about that in October.

Next quarter revenues are expected to be between $1.45bn and $1.55bn, a 5.6 per cent rise on the year-ago Q2’s $1.42bn at the mid-point.

* Note: the fiscal 2017 first quarter revenues have been recalculated to $1.32bn based on NetApp’s adoption of the new accounting standard ASC 606. Originally they were $1.33bn, with profits of $136m, also retrospectively recalculated to be $131m.

Pavilion compares RoCE and TCP NVMe over Fabrics performance

Pavilion Data says NVMe over Fabrics using TCP adds less than 100µs of latency compared to RDMA RoCE and is usable at data centre scale.

It is an NVMe-over-Fabrics (NVMe-oF) flash array pioneer and is already supporting simultaneous RoCE and TCP NVMe-oF transports.

Head of Products Jeff Sosa told B&F: “We are … supporting NVMe-over-TCP.  The NVMe-over-TCP standard is ready to be ratified any time now, and is expected to be before the end of the year.

“We actually have a customer who is deploying both NVMe-oF with RoCE and TCP from one of our arrays simultaneously.”
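To illustrate what such a dual-transport deployment looks like from the client side, here is a minimal sketch using the standard Linux nvme-cli tool, assuming a kernel and nvme-cli build with NVMe/TCP support; the addresses, port and NQN below are hypothetical placeholders, not Pavilion specifics:

```shell
# Connect to the same subsystem over two different NVMe-oF transports.
# (Addresses, port and NQN are illustrative placeholders.)

# RDMA (RoCE) connection from a 25GbitE client:
nvme connect -t rdma -a 10.0.1.10 -s 4420 \
    -n nqn.2018-01.com.example:array1-subsys1

# Plain TCP connection from a 10GbitE client to the same array:
nvme connect -t tcp -a 10.0.2.10 -s 4420 \
    -n nqn.2018-01.com.example:array1-subsys1

# Verify the attached namespaces:
nvme list
```

Only the `-t` transport flag differs between the two connections; the array presents the same subsystem over both.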

Pavilion says NVMe-oF provides the performance of DAS with the operational benefits of SAN. Its implementation has full HA and no single point of failure, and it says it offloads host processing with centralised data management.

That data management, when used for MongoDB for example, allows users to:

  • Present writeable clones instantly to secondary hosts from centralised storage, avoiding copying data over the network
  • Dynamically increase disk space on-demand in any host
  • Instantly back up the entire cluster using high-speed snapshots
  • Rapidly deploy a copy of the entire cluster for test/dev/QA using clones
  • Eliminate the need for log forwarding by having each node write log data directly to a shared storage location
  • Orchestrate and automate all operations using Pavilion REST APIs

Pavilion compared NVMe-oF performance over RoCE and TCP with between 1 and 20 client accessors, and found average TCP latency was 183µs against RoCE’s 107µs; TCP was 71 per cent slower.
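As a sanity check on those figures, the relative slowdown and added latency work out as follows (a minimal sketch using only the numbers quoted above):

```python
# Average latencies reported by Pavilion (microseconds)
tcp_latency_us = 183.0
roce_latency_us = 107.0

# Extra latency TCP adds over RoCE -- under the 100µs headline figure
added_latency_us = tcp_latency_us - roce_latency_us

# Relative slowdown of TCP versus RoCE
slowdown_pct = (tcp_latency_us - roce_latency_us) / roce_latency_us * 100

print(f"TCP adds {added_latency_us:.0f}µs, i.e. {slowdown_pct:.0f}% slower than RoCE")
```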

A Pavilion customer NVMe-oF TCP deployment was data centre rather than rack scale, with up to six switch hops between clients and storage. It focused on random write latency, serving thousands of 10GbitE non-RDMA (NVMe-oF over TCP) clients and a few dozen 25GbitE RDMA (NVMe-oF with RoCE) clients.

The equipment included Mellanox QSA28 adapters enabling 10/25GbitE breakouts over optical fibre. Four switch ports were consumed to connect 16 physically cabled array/target ports; eight of those ports were dedicated to RDMA and eight to NVMe/TCP, with both options equally “on the menu.”

There were no speed transitions between the storage array and the 10GbitE or 25GbitE clients, which reduced the risk of overwhelming port buffers.

Early results put NVMe/TCP latency (~200µs) at twice that of RoCEv2 (~100µs) but half that of NVMe-backed iSCSI (~400µs). Ongoing experimentation and tuning is pushing these numbers, including iSCSI’s, lower.

It produced a table indicating how the strengths of RoCE and TCP NVMe-oF differed.

Suppliers besides Pavilion supporting NVMe-oF using TCP include Lightbits, Solarflare and Toshiba (KumoScale). Will we see other NVMe-oF startups and mainstream storage array suppliers supporting TCP as an NVMe-oF transport? There are no signs yet, but it would seem a relatively easy technology to adopt.

Pavilion’s message here is basically: unless you need the absolute lowest possible access latency, deploying NVMe-oF over standard Ethernet looks quite feasible and more affordable than alternative NVMe-oF transports – unless perhaps you run NVMe-oF over Fibre Channel.

B&F wonders how FC and TCP compare as transports for NVMe-oF. If any supplier knows please do get in touch. B&F


What the Dell? Qumulo gets on to PowerEdge servers

Qumulo’s QF2 (Qumulo File Fabric) software is now available on Dell’s PowerEdge R740xd storage server, a 2U 2-socket Xeon Skylake, all-NVMe flash drive system.

The QF2 SW is also available on Qumulo’s own storage server HW, HPE Apollo HW and in the AWS cloud.

It will compete, as a scale-out file system, with Dell EMC’s own Isilon gear, as well as with IBM’s Spectrum Scale SW, Panasas parallel file system SW, WekaIO, and also Elastifile. 

Qumulo says the Dell QF2 incarnation can scale out to 100s of petabytes and tens of billions of files. It claims QF2 is the highest performance file storage system in the data centre and the public cloud, being the most scalable and most efficient. 

There is no independent, industry-standard validation of these claims as, so far, QF2 is absent from industry-standard benchmarks, such as the one that features Spectrum Scale, WekaIO and NetApp amongst others.

Having Dell server support adds a HW platform and makes it easier for Qumulo to sell its SW into Dell’s server customer base, as well as enabling it to broadcast an anti-lock-in message.