Rubrik’s worldwide sales boss has left the company to spend more time with his family.
Mark Smith’s tenure as EVP for Global Sales and Business Development at the lavishly funded startup was just 19 months.
Mark Smith, now ex-EVP Global Sales and Business Development
A statement from a Rubrik spokesperson said: “I can confirm Mark Smith is leaving Rubrik after completing the company’s biggest bookings quarter ever. After 25 years of working in high velocity companies, Mark is looking forward to spending more time with his family.
“We thank Mark for his passion, dedication and hard work and wish him the very best. Mike Tornincasa will serve as the interim head of global sales and business development, effective immediately.”
So Smith has left the building already. It’s understood he is in his mid-sixties – he was in the class of 1972 at Calhoun High School, Merrick, NY – but the abruptness of this move suggests it’s probably not a normal retirement.
InfiniteIO, a startup building a metadata accelerator for NAS arrays, has received $10.3 million B-round funding.
It brings total funding to $13.7 million for the 2012-founded startup led by Mark Cree.
Mark Cree, Infinite IO’s CEO and co-founder.
It says its network switch-like technology responds to metadata requests faster than the most advanced all-flash storage arrays, significantly increasing the performance of existing storage systems. It has to do this because, otherwise, there is no need for its product in an all-flash NAS world.
Infinite IO performance with NetApp A200 all-flash array.
When metadata indicates files are no longer being accessed, they are migrated to low-cost cloud storage, without sacrificing security or availability.
That makes its technology into more of a file storage controller with HSM pretensions and an overlap with Komprise.
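As a rough illustration of the tiering decision this metadata handling enables, the sketch below walks a file tree and flags files unread for a configurable window. The 90-day threshold and the walk itself are our assumptions for illustration only; the appliance answers such metadata queries inline, without touching the filer.

```python
# Hypothetical sketch of metadata-driven tiering of the kind InfiniteIO
# describes: files whose last-access time exceeds a threshold become
# candidates for migration to low-cost cloud object storage.
import os
import time

COLD_AFTER_DAYS = 90  # assumed policy threshold, not a vendor default

def cold_files(root: str, cold_after_days: int = COLD_AFTER_DAYS):
    """Yield (path, size) for files with no access within the window."""
    cutoff = time.time() - cold_after_days * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_atime < cutoff:  # metadata only -- no file data read
                yield path, st.st_size
```

Note that `st_atime` depends on the filesystem's mount options (e.g. `relatime`); a production policy engine would track access patterns itself rather than trust atime.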
Stuart Larkins, a partner at Chicago Ventures, referred to this in a canned quote: “Infinite io’s metadata-driven approach to data management addresses a large market need by solving the data management problem without impacting storage performance – all other approaches force the customer to compromise performance when managing data at scale and usually involve other drawbacks like capacity-based pricing.”
The B-round was led by former Motorola CEO and Cleversafe Chairman Chris Galvin, and Chris Galvin’s son David of Three Fish Capital and formerly of IBM. It also includes capital from Chicago Ventures, John Anderson, Dougherty and Company, Equus Holdings, and PV Ventures.
Three members of the storage great-and-good society also put in cash:
Dean Drako, founder and former CEO of Barracuda Networks and founder and current CEO of Eagle Eye Networks,
Brett Hurt, co-founder and former CEO of Bazaarvoice and current co-founder and CEO of data.world,
Bill Miller, co-founder and former CTO of Storage Networks and current CEO of X-IO Technologies.
The new money will be used to scale operations globally and help the company grow faster.
Cloud data warehousing startup Snowflake has hired a CFO.
Thomas Tuchscherer starts at once and his CV is interesting; prior to Snowflake, he spent eight years as Talend’s CFO, leading Talend through a successful IPO and helping the company increase revenue 20x.
Before Talend, he had finance exec stints at SAP, Business Objects and Cartesis.
It looks like Snowflake could be gearing up for an IPO.
Thomas Tuchscherer
Snowflake CEO Bob Muglia issued a welcome quote saying the usual good stuff about a new hire: “I am delighted to have him on board to help Snowflake navigate through this phase of rapid growth and global expansion.”
Tuchscherer’s quote was the usual mind-numbing PR gush: “Snowflake has developed a disruptive, cloud-built data warehouse-as-a-service and the company’s number-one value is to ‘put customers first’. Snowflake has a tremendous market opportunity in transforming how companies share data across and beyond their organisations. I look forward to working with the Snowflake team to serve our customers, grow our business and realise our mission of enabling every organisation to be data-driven.”
B&F thinks that Snowflake’s IPO will be tremendous, and make lots of people very rich. B&F
Toshiba has added a high-capacity helium-filled 3.5-inch form factor SAS disk drive alongside its equivalent SATA product.
Like the SATA-based MG07 ACA, the MG07 SCA comes in 12 and 14TB capacities but uses a dual-port 12Gbit/s SAS interface instead of the MG07 ACA’s 6Gbit/s SATA link.
The MG07 ACA has already spawned an MN07 NAS drive in 10TB and 14TB capacities. Both Western Digital and Seagate have 14TB helium-filled drives, and product categories are spreading out as the technology matures.
The MG07 SCA, which effectively replaces Toshiba’s air-filled MG06 SCA with its 6Gbit/s SAS interface and 10TB capacity, is intended for 24 x 7 use. It is said to be 50 per cent more power-efficient than the MG06, and is rated for a 2.5m hours MTTF, a 550TB/year transferred workload rating and 600,000 load/unload cycles.
Toshiba MG07 SCA
There are 9 platters in the 14TB product; we understand there are 8 in the 12TB model.
It comes with a 256MiB cache and transfers data at 242MiB/s at the 12TB capacity level and 248MiB/s at the 14TB level.
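At those sustained rates, a full sequential pass over the drive takes the best part of a day. A quick back-of-envelope calculation (mixing decimal TB with binary MiB, as drive makers do):

```python
# Back-of-envelope: time for one full sequential pass at the quoted rates.
def full_pass_hours(capacity_tb: float, rate_mib_s: float) -> float:
    bytes_total = capacity_tb * 10**12   # TB is decimal in drive specs
    bytes_per_s = rate_mib_s * 2**20     # MiB is binary
    return bytes_total / bytes_per_s / 3600

print(round(full_pass_hours(14, 248), 1))  # 15.0 hours for the 14TB model
print(round(full_pass_hours(12, 242), 1))  # 13.1 hours for the 12TB model
```

These are best-case figures; real-world rebuild or fill times are longer, which is part of why multi-actuator designs are on the roadmap.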
Now that all three drive manufacturers have 14TB products, and Seagate anticipates shipping a multi-actuator 20TB HAMR-technology product in 2020, we ask ourselves: will there be one more capacity jump with current magnetic recording technology before then? Perhaps to 16TB? The manufacturers are not saying.
Toshiba’s MG07 SCA drive is sampling now. You can check out a spec sheet here. B&F
Updated HPE storage products and more sales heads should bring in the dollars and end the storage growth doldrums.
Its third fiscal 2018 quarter results showed 4 per cent revenue growth, which is okay, and 173 per cent profits growth, which is great, but storage was ho-hum, with revenues of $852.4m, up just 1 per cent on the $844m recorded a year ago.
“HPE: upside continues; server unit declines and slow storage growth in focus” was the headline from Wells Fargo senior analyst Aaron Rakers.
Server unit shipments fell as HPE extracted itself from the low-margin hyperscaler business – down in the low double digit percentage area – but ASPs were up around 25 per cent.
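Those two opposing moves still net out to revenue growth, which a line of arithmetic confirms (the 12 per cent unit decline is our assumption for “low double digits”):

```python
# Rough check: units down ~12% with ASPs up ~25% still nets revenue growth.
units_change = -0.12   # assumed figure for the "low double digit" decline
asp_change = 0.25      # reported ASP rise
revenue_change = (1 + units_change) * (1 + asp_change) - 1
print(f"{revenue_change:+.1%}")  # +10.0%
```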
Rakers writes: “Hyper-converged revenue (SimpliVity + Composable Infrastructure) was +130 % y/y now standing at a +$1B/annum run rate.” HPE CEO Antonio Neri says this 130% is three times the market growth rate.
Neri said in the earnings call that: “HPE Synergy delivered record revenue and has more than 1,600 customers.” Synergy is HPE’s composable infrastructure system.
Neri called out: “17% growth in big data storage.” He expects “improved organic growth in Q4 as we drive increased sales productivity and as our latest storage offerings gain customer traction.” HPE could do with that; it has no big-hitting all-flash array and updated 3PAR (with InfoSight) and Nimble offerings have yet to drive revenues higher.
Tim Stonesifer, the CFO, also pointed out that: “We should start seeing the benefits of the increased number of new sales specialists that we hired earlier in this year.” Taken with the 3PAR and Nimble product upgrades: “We expect the growth rates in storage to pick up next quarter.”
Neri emphasised this: “We are very confident about our ability to grow storage because we have a differentiated portfolio, very autonomous in many ways, self-healing.”
Rakers’ view on HPE storage was: “Revenue at $887m was below our $926m estimate; +1% y/y, which we view as negative given the relative strength seen in the overall storage market. Interestingly, HPE did not provide a growth disclosure for All-Flash (vs. +20% y/y in F2Q18).” This was unlike prior quarters.
This implies that HPE all-flash array revenue growth was less than 20 per cent, which contrasts with NetApp’s estimated 50 per cent growth and Pure’s 34 per cent rise. We (B&F) think HPE all-flash array revenues are poor compared to those from Dell EMC, IBM, NetApp and Pure.
HPE quarterly storage revenue history.
We don’t usually look at quarter-on-quarter changes but a year ago the third quarter’s storage revenues were up on the second and first quarters; now they are down, indicating a pattern has changed.
Rakers takes the view that “HPE’s lackluster storage growth (+1 per cent y/y) validates NetApp’s views that the storage market could be at the early stages of share consolidation among top storage-focused vendors.” B&F
StorONe says its TRU storage technology and S1 storage software ran at 1.7 million IOPS in a 2-node ESXi server system.
There was no independent verification of these numbers.
The two 2U dual-X86 processor Supermicro servers were in a high-availability configuration, attached to a Western Digital 2U24 (2U, 24-slot) flash storage JBOD.
There were four client servers: one Oracle Linux server with 4 x 16Gbit/s Fibre Channel ports, a similar CentOS server, and two CentOS servers with 40GbitE links. These talked to the StorONE appliance via a 16Gbit/s FC switch and a 40GbitE switch with mixed protocols.
StorONE benchmark config diagram
This system delivered 1.7 million random 4K IOPS for random reads at less than 0.3 ms latency, 15GB/sec sequential reads (128/256KB with 30 per cent CPU utilisation), 7.5GB/sec sequential writes (128/256KB), and 10GB/sec for a mixed 80/20 read/write (128KB) workload.
StorONE claims an all-flash array would need four times as much hardware to deliver that performance. If we take that literally it means 8 servers, 96 SSDs, 8 x (4 x 16Gbit/s FC) and 8 x 40GbitE links.
It says its S1 virtual offering supports both internal VMs and external iSCSI and Fibre Channel initiators, and either ESXi or physical machines.
CEO Gal Naor claims: “StorONE is the only enterprise storage vendor able to extract the full spec out of the drives and deliver them to the applications achieving 70,000 IOPS per drive with very low latency. I’m very proud of our results as we are 5-10 times more efficient compared to anyone else in the market.” And less expensive too.
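Naor’s 70,000 IOPS-per-drive figure follows directly from the headline numbers, assuming all 24 slots of the JBOD held SSDs (which the 2U24 configuration implies but the announcement doesn’t state):

```python
# Arithmetic behind the per-drive claim; assumes all 24 JBOD slots
# were populated with SSDs.
total_iops = 1_700_000          # headline random 4K read IOPS
ssd_count = 24                  # slots in the WD 2U24 JBOD
per_drive = total_iops // ssd_count
print(per_drive)                # 70833 -- i.e. ~70,000 IOPS per drive
```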
StorONE concept chart
No other enterprise storage supplier can do this with data protection turned on as well, it says. The TRU (Total Resource Utilisation) software has been in development for seven years and redesigns traditional storage software architecture in an unspecified way to make it more efficient; think software black box.
Naor has blogged; “Enterprise storage systems suffer from extreme inefficiencies that lead to an insane waste of hardware, budgets, management attention, environmental issues (e.g. power and cooling), infrastructures, real estate and more. … In many real-world cases, enterprise storage systems yield less than 10 per cent [of] the aggregate performance of the drives installed in the system.”
StorONE says its software provides unlimited snapshots, support for all storage protocols (block, file and object) on the same drives, and support for all drive types (SAS, SATA, NVMe SSD or HDD) in the same server.
It has claimed users can achieve half a million IOPS from a system with six SSDs and a single 40GbitE Mellanox port, and, using TRU storage software, a customer’s hardware investment will match the rated IOPS, throughput and capacity of the drives.
The company says interested potential customers should contact it for a demo.
+Comment
These hero numbers are great – as far as they go, which is not as far as independent, authoritative tests of StorONE’s product. If the software really is this good then put it out to independent test houses.
Cloudian has closed a $94m funding round, taking its total raised to $173m, more than competitor Scality’s $152m.
The two are the leading stand-alone object storage system suppliers, with their funding dwarfing that of Swiftstack’s $23.6m.
This Cloudian E-round had contributions from Digital Alpha, Fidelity Eight Roads, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures, Inc. and WS Investments.
The round includes a $25 million investment from Digital Alpha that was first announced in February.
Cloudian says this is the largest single round to date for a distributed file systems and object storage provider. It will use the cash to expand its worldwide sales and marketing efforts and increase its engineering team.
One of its engineering concerns could centre on the adoption of QLC (4bits/cell) flash technology, with SSDs using it becoming the first realistic alternative to disk drives for fast access object storage.
Cloudian HS4000 array
The latest Cloudian customers include public health agencies in the US and UK, two of the top five Formula One teams, a US national research lab, an online travel market leader, a top three pharmaceutical company, a top three global car maker, a top five European bank, an Ivy League university, and one of the world’s largest global engineering companies.
CEO Michael Tso placed the round in this context: “Cloudian’s unique architecture offers the limitless scalability, simplicity, and cloud integration needed to enable the next generation of computing driven by advances such as IoT and machine learning technologies.”
Investor Takayuki Inagawa, President and CEO of NTT DOCOMO Ventures, said: “Cloudian’s geo-distributed architecture creates a global fabric of storage assets that supports the next generation of connected devices.”
Comment: Cloudian is going for Global 2000 enterprise customers and believes its combination of on-premises storage, with public cloud S3 backend tiering, gives it the scale and affordability needed by businesses facing data inputs from hundreds of thousands to millions of connected devices in the coming years.
A looming collision in the market is between secondary storage convergers, such as Cohesity, putting out the idea of a single, global secondary storage repository, and object storage players resolutely resistant to this idea, unless it’s their object storage that is the convergence technology.
We’ll venture the view that no object storage startup has yet had a successful IPO. Exits, such as those by Amplidata and Bycast StorageGRID, have been via acquisition.
Both Scality and Cloudian are well-funded and each have a large number of customers. Either one could be the first object storage startup to IPO. B&F
Cohesity has Helios, Rubrik its Polaris, Druva its DCP, and now Panzura has Vizion.ai, a SaaS-delivered file data management service.
File sync’ and sharer Panzura has developed its Vizion.ai product to catalog and index a customer’s entire global file estate, and then optimise its management. It builds up a metadata index of on-premises files on filers including NetApp, Dell EMC (inc Isilon and VNX), Hitachi, Windows Server or any NFS or SMB compatible file shares.
It supports the AWS, Azure, Google Cloud Platform and IBM public clouds and incorporates a cloud-native, container-based deployment model with distributed caching and data reduction technology. Vizion.ai can be spun-up in any cloud, any region, and on-premises as a managed service.
A Panzura blog says it provides a unified view of all unstructured enterprise data, which implies that non-file-accessed object storage is not included unless it has a file gateway.
Once the index exists, Vizion.ai provides search, analysis, recovery and control, including analysis of hot, warm and cold data based on access patterns. Search and analytics functions are powered by what Panzura calls a hyper-scale, multi-cloud data engine; a Kubernetes-orchestrated platform with an application layer.
It has a scale-out distributed flash cache tier with global deduplication and tiering to object storage.
It integrates Elasticsearch and has machine learning for predictive analytics to help lower costs based on data access patterns and performance/cost analysis of different cloud storage locations.
Panzura points out there are more than 64 different cloud object storage/SLA pricing permutations between Google, AWS and Azure when accounting for multiple tiers with multiple regional pricing variations. Vizion.ai offers the ability to select the optimum tier and move data sets to it.
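The tier-selection problem reduces to minimising a cost function over those permutations. A toy sketch of the idea, with entirely made-up provider names and prices (not real cloud list prices):

```python
# Illustrative only: the kind of cost comparison Vizion.ai automates.
# Tier names and per-GB prices below are invented placeholders.
TIERS = {
    "provider_a/standard":   {"store": 0.023,  "retrieve": 0.00},
    "provider_a/infrequent": {"store": 0.0125, "retrieve": 0.01},
    "provider_b/archive":    {"store": 0.002,  "retrieve": 0.05},
}

def monthly_cost(gb_stored: float, gb_retrieved: float, tier: str) -> float:
    p = TIERS[tier]
    return gb_stored * p["store"] + gb_retrieved * p["retrieve"]

def cheapest_tier(gb_stored: float, gb_retrieved: float) -> str:
    return min(TIERS, key=lambda t: monthly_cost(gb_stored, gb_retrieved, t))

# Cold data (never retrieved) lands on the archive tier; frequently
# retrieved data stays on standard despite its higher storage rate.
print(cheapest_tier(1000, 0))     # provider_b/archive
print(cheapest_tier(1000, 5000))  # provider_a/standard
```

The real service layers machine learning over observed access patterns to predict `gb_retrieved` per data set, rather than taking it as an input.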
Vizion.ai screenshot.
The analytic tools include:
Storage Cost Optimizer applies machine learning to automatically determine the lowest cloud storage costs across multiple providers and Vizion.ai enables the movement of data between cloud storage tiers to optimize cost savings,
Storage Usage Profiler provides a heat map of what data’s hot, warm and cold based on access,
File Audit Report Filters provide administrative tools for policy enforcement to help data governance.
Searches can be run on filename, date, owner, groups and other file metadata items. An audit data search facility enables forensic discovery to track user activity across multiple files, folders and clouds. Content search is coming as are user-definable policies to create alerts.
There are plug-ins for Panzura and third-party data applications, and an API for third party developers to integrate Vizion.ai into their offerings. When integrated with Panzura’s Freedom NAS, users have the ability to restore or clone data sets from previous snapshots or backup sets to original or alternative locations.
Examples are cloning a database backup to a DevOps user or restoring a VMware backup set from a Los Angeles data centre to the eastern region of VMware Cloud in AWS.
Read a solution brief here and a white paper here. Interested customers can sign up for a free vizion.ai account here.
+Comment
Point-product file data managers are finding that their customers are heading towards the same place; one where files are stored in multiple on-premises and public cloud silos. Managing this disparate collection is difficult, especially with regard to regulatory compliance, and optimising file storage accessibility and costs.
Customers may settle on one supplier’s global file management system and then move other file-level point product operations to the same supplier, jeopardising other suppliers’ business.
By providing its own global file management service Panzura has a defence against this and an opportunity to take business from competing point product suppliers lacking their own offering. B&F
Composable systems supplier Liqid has intro’d a fast access Element NVMe SSD with up to 32TB capacity and two form factors.
The 88NR2241 HH-HL form factor AIC product has a PCIe gen3 x8 interface with 7 GB/sec throughput, 1.5M IOPS (4KB) and up to 4 x 8TB modules.
Prior 16TB Element AIC SSD product
A 2.5-inch format 88NR2241 has a PCIe gen3 x4 or x2x2 (Dual Port) interface, 3.6 GB/sec throughput, 850K IOPS (4KB) and 16TB max’ capacity with 4 x 4TB modules.
The 2.5-inch product has half the capacity and performance of the HH-HL product.
Back in August Liqid announced a PCIe gen 4 x16 Element NVMe SSD with Broadcom’s PLX88000 PCIe 4 switches, speeds up to ~25GB/sec and up to ~3.5m IOPS (4KB). There can be up to 8 x M.2 modules per card, with the same 32TB maximum capacity. The SSD has power loss protection and a multi-port capability for multi-host access.
The latest 88NR2241-powered SSDs are slower than the Broadcom-powered ones in both bandwidth and IOPS and have the same maximum 32TB capacity. How do the two products compare and contrast?
Liqid CEO Sumit Puri told us: “The two solutions share a lot in common, including similar performance capabilities. The main difference is that the Marvell version supports HW RAID and HW dual port support. On the PLX version RAID and Dual Port require a driver. Liqid’s expertise with the PLX version was helpful in our collaboration with Marvell on the Marvell 88NR2241-enabled Liqid Element solutions.”
Parent Dell Technologies has child Dell EMC singing off a multi-cloud and hyperconverged songsheet at sibling VMware’s Glitter Gulch VMworld event.
The two main product families featured are the Data Domain data protection target array and VxRail hyper-converged lines.
VMware is pushing the notion of a multi-cloud world needing common infrastructure platforms, services and tools across the on-premises and public cloud worlds to facilitate workload transfers between the on- and off-premises environments. Dell EMC is playing very nice with this idea.
Data Domain cloudification gets use of the public cloud as a disaster recovery (DR) site, with application-consistent cloud disaster recovery in AWS and recovery to VMware Cloud on AWS.
Data Domain Virtual Edition is Data Domain software running in the cloud. v4.0 adds KVM hypervisor support and up to 96TB of in-cloud capacity, and it can use AWS S3 and Azure Hot Blob object storage for backup storage.
Dell EMC’s Data Protection Suite can use cloud storage for backup and retention as well.
Cloud Snapshot Manager gets Azure support as well as AWS.
There is a Cloud Edition of the Unity VSA (Virtual Storage Appliance) block and file access array.
It can be deployed with VMware Cloud on AWS and have up to 256TB filesystems. Dell EMC suggests using it for test and development or for DR in the cloud.
CloudIQ
CloudIQ, the no-charge cloud-based array performance analysis service, gets mobile app access via iPhone and Android. It has VMware integration, providing virtual machine-level performance and capacity insights, and has added support for PowerMax, VMAX and XtremIO in addition to the existing Unity and SC Series arrays.
There is the potential here for cross-array type analysis and, dare we suppose, data movement?
VxRail gets stretch clusters in the cloud. New options of the VMware Validated Designs for VxRail support distributed multi-availability zones architecture and multi-site deployments with disaster recovery.
Releases will be synchronised, with less than 30 days between new VMware product versions and corresponding VxRail ones. VMware Cloud Assembly, a SaaS-based cloud management solution part of VMware Cloud Services, will feature VxRail integration.
There is also Dell EMC Networking Fabric Design Center support for VxRail. VMware and VxRail are playing together to boost each other’s products; vSAN from VMware and storage and servers from Dell EMC.
VxRack SDDC gets support for VMware Cloud Foundation v2.3.2. and will support future releases. It has almost full alignment with VxRail offerings; P, E, and V series and the storage-dense S-series.
Dell deal
Dell Financial Services aims to offer easier ways to pay for multi-cloudification, with Cloud Flex for HCI requiring no up-front investment and declining payments over time with no obligation after the first 12 months.
Ready Capacity provides on-demand storage and buffer capacity that can scale to match usage changes. Flex on Demand lets users deploy an initial base capacity and pay for buffer capacity as it is used.
The Dell EMC and VMware multi-cloud blanket is getting wider and deeper as Dell seeks to keep its customer base contented and reassured, and gain new converts to the Dell EMC VMware way of doing IT.
Source: Massive X-Class Solar Flare, uploaded by PD Tillman; author: NASA Goddard Space Flight Center.
Who needs RoCE or iWARP if NVMe over TCP is this fast?
Dell has validated superfast Ethernet NIC-maker Solarflare’s NVMe over TCP technology as being within 3 to 4 microseconds of RoCE NVMe latency.
This was much faster than Pavilion Data’s demonstration, which showed NVMe TCP as about 75µs slower than NVMe RoCE’s 107µs latency.
A 3 to 4µs latency difference between NVMe TCP and NVMe RoCE is immaterial.
NVMe over TCP uses ordinary Ethernet, not the more expensive lossless Data Centre Bridging (DCB) class required by RDMA-based NVMe RoCE.
Ahmet Houssein, Solarflare’s VP for Marketing and Strategic Development, said the lossy nature of ordinary Ethernet is mostly due to congestion. If that is controlled better then the loss rate falls dramatically: “Our version of ordinary Ethernet is nearly lossless. … If you take [lossy Ethernet] away why do you need RDMA and DCB extensions for Ethernet?”
“Dell says we’re almost within 3 per cent of RoCE.”
Possibly the Pavilion demo used an early version of Lightbits technology.
Houssein said Solarflare’s kernel bypass technology, which handles TCP in user space instead of switching into kernel space, is not proprietary; the POSIX-compliant Onload is available to anybody and its use needs no application re-writing.
Solarflare says Onload delivers half round trip latency in the 1,000 nanosecond range. In contrast, the typical kernel stack latency is about 7,000 nanoseconds.
Solarflare NIC and kernel bypass
TCPDirect
Solarflare’s TCPDirect API builds on Onload by providing an interface to an implementation of TCP and UDP over IP. TCPDirect is dynamically linked into the address space of user-mode applications, and granted direct (but safe) access to Solarflare’s XtremeScale X1 hardware.
Solarflare says TCPDirect, under very specific circumstances with ideal hardware, can reduce latency from 1,000 nanoseconds to 20-30 nanoseconds. According to a Google cache version of the TCPDirect user manual: “In order to achieve this, [extreme low latency] TCPDirect supports a reduced feature set and uses a proprietary API.”
Note: “To use TCPDirect, you must have access to the source code for your application, and the toolchain required to build it. You must then replace the existing calls for network access with appropriate calls from the TCPDirect API. Typically this involves replacing calls to the BSD sockets API. Finally you must recompile your application, linking in the TCPDirect library.”
In contrast: “Onload supports all of the standard BSD sockets API, meaning that no modifications are required to POSIX-compliant socket-based applications being accelerated. Like TCPDirect, Onload uses kernel bypass for applications over TCP/IP and UDP/IP protocols.”
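Onload’s no-modification claim means ordinary BSD-sockets code is the acceleration target. A minimal sketch of such a program, using nothing but the standard sockets API; per Solarflare’s documentation it would simply be launched under Onload rather than rewritten:

```python
# A stock TCP echo exchange using only the standard sockets API -- the
# kind of unmodified POSIX-socket program Onload says it accelerates
# transparently (same code, kernel bypass underneath).
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # any free port
srv.listen(1)
port = srv.getsockname()[1]

def echo_once():
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))   # echo the message straight back
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"ping")
reply = cli.recv(1024)
cli.close(); t.join(); srv.close()
print(reply)                        # b'ping'
```

TCPDirect, by contrast, would require replacing these socket calls with its proprietary API and recompiling, as the quoted manual text makes clear.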
Pavilion Data
Pavilion Data’s head of product, Jeff Sosa, commenting on the Solarflare demo, said: “As far as direct comparisons, it probably doesn’t make sense to compare results since we were running real-world workloads at higher-QD and examining average latency measured from the host generating the IO, not a single-QD IO to try to find the lowest latency number possible.
“Also, our customers’ results were using a similar methodology, over multiple subnets/switch hops in cases. In addition, Pavilion delivers full data management capabilities in the array used to produce the results, including RAID6, snapshots, and thin provisioning (it’s not a JBOF).
“Even with all that, on a system workload of 2 million IOPS, we showed that the average latency of RDMA and TCP were fairly close, and thus the driving decision factor will be cost savings through the avoidance of specialized hardware and software stacks for many users.”
How does Pavilion view NVMe over TCP and NVMe over RoCE?
“Pavilion’s position is that we will support NVMe-oF over both ROCE and TCP transports from the same platform, and don’t favor one over the other. Instead, we let our customers decide what they want to use to best meet their business requirements. The average latency results we presented at FMS were measured end-to-end (from each host) using a steady test workload of 2 Million IOPS on the Pavilion Array, which was leveraging RAID6, thin provisioning, and snapshots.
“However, when we measure only a single-IO, the latency of ROCE and TCP are within a few usec, but this is not a scenario our customers care about typically.”
Sosa emphasised Pavilion is very interested in seeing the momentum around NVMe-oF with TCP grow, and believes it has a big future. Pavilion looks forward to working with the broader community of vendors to optimize NVMe over TCP even further as the standard gets ratified and the open host protocol driver makes its way into the OS distributions, which should drive even wider adoption and lower cost for customers deploying NVMe-oF-based shared storage.
NVMe-oF supplier reactions
It is our understanding, from talking to Solarflare, that all existing NVMe-over-Fabrics suppliers and startups are adding NVMe TCP to their development roadmaps.
Houssein says that, with NVMe TCP, you don’t need all-flash arrays, merely servers talking NVMe TCP to flash JBODs. You don’t have to move compute to storage with this because the network pipe is so fast.
Pure Storage will support NVMe TCP in 2019. Houssein thinks NVMe-oF array startups will move their value up the stack into software.
The prospect offered is that NVMe-oF is an interim or starting phase on the transition of all-flash arrays from Fibre Channel (FC) and iSCSI access to either NVMe over FC or TCP. The NVMe TCP standard will be ratified in a few months and then server and all-flash system suppliers will adopt it as it provides the easiest route for current Ethernet users to upgrade to NVMe-oF storage access speed.
Ditto NVMe FC and FC SAN users.
We might expect customers to start adopting NVMe FC and TCP from 2019 onwards, once their main system suppliers have their NVMe TCP product offering ducks lined up in a row. B&F