Dell has issued a data protection report saying customers are managing and protecting more data than ever, yet lack confidence in their data protection arrangements. In response it has added protection software and services including one that enables VMware VMs to be snapshotted without being stunned.
Dell’s 2021 Global Data Protection Index says business execs worry that existing data protection arrangements won’t meet future needs, and think that home working has increased data vulnerability. The survey also reveals — nice one this — the cost of data loss is 4x as high on average for organisations that work with multiple data protection vendors instead of one ($1.08 million compared to $287,000). Better to work with one then. Who might Dell’s report suggest? Hmmm …
Jeff Boudreau, President and General Manager, Infrastructure Solutions Group at Dell Technologies, offers a hint in a statement: “While ransomware attacks can be devastating for people and businesses, accepting defeat as a foregone conclusion is not the answer.” Dell, of course, positions itself as the leading provider of data protection hardware and software.
Anyway, the two software advances both relate to the company’s PowerProtect appliances. Smart Scale is a management tool enabling up to 32 appliances to be managed as a single pool with up to three exabytes of logical capacity. Monitoring and managing a set of PowerProtect appliances is now easier.
However, Dell is not saying that PowerProtect appliances are clustered (scale-out) or that deduplication can work across the pooled appliances. But we have been told: “Deduplication is still limited to a single appliance. However, the storage unit mobility across appliances in a pool, combined with analytics, will allow us in the future to co-locate backup datasets that have better dedupe affinity on the same appliance, thus improving overall dedupe efficiencies in a pool.”
Stunning
Second, Transparent Snapshots have been added. These work on VMware virtual machines and are said to simplify and automate VM image-level backups, making them up to a claimed 5x faster while reducing VM latency by up to 5x.
A spokesperson told us: “Previously, PowerProtect Data Manager used proxies in a given ESXi cluster to act as the Data Movers for the protection copies. A single proxy uses a minimum of four vCPUs and 8GB of RAM. Typically, multiple proxies will be deployed into a cluster based on the backup workload of the cluster (ie. the number of VMs that will need to be backed up, the size of the VMDKs, the duration of the backup window, etc.). Each of these proxies exists as a VM in the workload domain, thus those four vCPUs and 8GB of RAM, per proxy, comes from the resource pool that your other workloads share.”
So … now: “We use the Transparent Snapshots Data Mover (TSDM). TSDM is an extremely lightweight VMware signed VIB which is installed directly into each node in an ESXi cluster. Install footprint is a few megabytes, with a ‘max load’ footprint at 700MB. This releases a significant amount of workload domain resources back to the pool, which can now be used to run more of your application VMs. In addition, we made the TSDM management process transparent and automated. As your cluster scales up or down (in node count), PowerProtect Data Manager will automatically deploy new TSDM instances to new nodes, and clean up references to TSDM instances on removed nodes. Simple, clean, lightweight, and hassle free!”
And we were told: “The complete re-architecture of VM Image backups with Transparent Snapshots completely eliminates proxies of all types, replacing them with our lightweight TSDM module, embedded directly into each ESXi node, with autoscaling and zero-touch management.”
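Dell's per-proxy figures make the resource saving easy to sketch. Here is a back-of-the-envelope Python sketch using the vCPU, RAM and TSDM footprint numbers quoted above; the proxy and node counts in the example are hypothetical figures of ours, not Dell's:

```python
# Back-of-the-envelope sketch (our own arithmetic, not Dell's): resources
# returned to an ESXi cluster's pool when per-cluster backup proxy VMs are
# replaced by the in-node TSDM module. Per-proxy figures are from Dell's
# statement; the proxy and node counts are hypothetical examples.
PROXY_VCPUS = 4          # minimum vCPUs per proxy VM
PROXY_RAM_GB = 8         # RAM per proxy VM
TSDM_MAX_LOAD_GB = 0.7   # TSDM "max load" footprint, ~700MB per node

def resources_freed(num_proxies: int, num_nodes: int):
    """Return (vCPUs freed, GB of RAM freed) for a cluster that drops
    num_proxies proxy VMs in favour of TSDM on num_nodes ESXi hosts."""
    vcpus_freed = num_proxies * PROXY_VCPUS
    ram_freed = num_proxies * PROXY_RAM_GB - num_nodes * TSDM_MAX_LOAD_GB
    return vcpus_freed, ram_freed

# A hypothetical 8-node cluster that had been running 4 proxies:
vcpus, ram = resources_freed(num_proxies=4, num_nodes=8)
print(f"{vcpus} vCPUs and {ram:.1f}GB RAM returned to the workload pool")
```

Even in this small example the cluster gets back 16 vCPUs and over 26GB of RAM, which is the substance of Dell's "releases a significant amount of workload domain resources" claim.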
There are speed advantages to this approach. VM latency is the total round-trip time of a single disk I/O operation — the total time an application must wait for a read or write operation to complete. With Transparent Snapshots, PowerProtect Data Manager 19.9 delivers up to a 5x reduction in VM latency compared to PowerProtect Data Manager with VADP. In Dell's testing, write latency dropped from a 5ms response time with VADP to 0.97ms with Transparent Snapshots.
A Dell VMware slide deck from 2020 says transparent snapshotting takes place without disrupting VM execution (stunning) or making VMs unavailable.
A Dell video by Brian Reynolds, Senior Manager of Product Management, explains what’s going on.
Dell is also announcing a Managed Services for Cyber Recovery Solution in which Dell staff manage day-to-day cyber recovery vault operations and support recovery activities.
The 2021 Global Data Protection Index should be available here and a Dell blog provides background here.
PowerProtect Data Manager with Transparent Snapshots will be globally available this quarter, with no charge for customers with existing maintenance contracts. Smart Scale for PowerProtect appliances is in technology preview today and should be generally available in the first half of 2022. Dell Technologies’ Managed Services for Cyber Recovery Solutions are available globally today.
Open source Kubernetes-focused startup MayaData has undergone a CEO change as — this is almost laughable — the Chief Transformation Officer transforms the CEO who hired him out of the job and himself into it.
Evan Powell.
Coincidentally, this information comes out as Kubernetes startup Diamanti’s CEO change news broke. It’s all-change in Kubernetes startup CEO land.
Evan Powell, the ex-Nexenta CEO, became MayaData’s CEO and Chairman in June 2017 when the original CloudByte startup renamed itself MayaData. Powell built the company up around its OpenEBS product to the point where there was a deal with DataCore in February. DataCore technology was then made available to MayaData, and AME Cloud Ventures, DataCore Software, and venture capital and private equity firm Insight Partners invested $26 million in MayaData.
Insight Partners is also a DataCore investor. We might imagine some Insight Partners string-pulling is going on here — but we have no insight (ahem). Following the DataCore deal, Don Williams was hired as COO and the somewhat oddly titled Chief Transformation Officer.
His LinkedIn profile says: “Joined to help MayaData scale in response to rapidly increasing adoption of OpenEBS for stateful workloads on Kubernetes. Responsibilities include insuring that every customer succeeds in their use of Kubernetes for data, with the assistance of OpenEBS and other MayaData software and services.”
Don Williams.
Fast forward four months and Powell is out, though still a board member, and Williams has the CEO reins in his hands. MayaData’s leadership webpage shows no sales, marketing or COO positions. Williams did what he was hired to do and, in effect, Powell hired his replacement.
This situation change is almost par for the course as far as Powell is concerned. He has a great early-stage startup CEO skillset, having built up Nexenta to the point where a new CEO, Tarkan Maner, was appointed in 2013. Maner then built the company up further and helped sell it to DDN in May 2019.
We might envisage Williams building up MayaData, on the foundations Powell laid, to the point where it gets acquired — possibly by DataCore through an Insight Partners push.
As we said yesterday commenting on Diamanti’s market situation, the Kubernetes storage world is intensely competitive, with suppliers such as NetApp Astra, Dell EMC, HPE Ezmeral, Pure-Portworx, not to mention SmartX, SUSE Rancher, StorageOS and many others duking it out for a piece of the action. It’s all good news for corporate recruitment agencies but means Williams has to get MayaData performing like a champ.
Pivoting from hyperconverged infrastructure software to hyperconverged Kubernetes software, startup Diamanti has changed its CEO — with a raft of consequent exec changes following the hiring.
Update: two more exec departures added. 10 Sep 2021. Hickey starting date changed to June 2021. 19 Jan 2022.
Prior incumbent Tom Barton assumed the CEO role in September 2018 after a less than stellar three-month episode as Tintri’s CEO in 2018, just as it was about to crash and get acquired by DDN. Barton quit Diamanti in May to co-found and run Astira in June. There is little information about this enterprise — not even a web site — and Barton’s LinkedIn profile reads: “Astira is changing the way satellites are designed, built, and managed.” Good luck with that.
Chris Hickey.
Diamanti’s board hired Chris Hickey to be the new CEO in June, the month after Barton resigned, implying no lengthy exec search process. This was an unexpected move, as Hickey was previously CEO of desktop publishing business Quark Software, a role he held for 28 months.
A person close to the situation said Hickey’s hiring was an interesting move by the board and many staff are fleeing. We note the following Diamanti exec changes:
CFO Jony Hartono left in September, replaced by Arnaldo Perez.
Communications Director Laura Finlayson resigned in August.
VP Product Brian Waldon left in June.
Chief Revenue Officer Andy Wild left in June, going to Mirantis.
VP Marketing Jenny Fong left in February to join Apptio in April.
COO Karthik Govindhasamy left in July to be the co-founder and CTO at Astira.
Founding Engineer Hiral Patel resigned in July 2021.
Field CTO/Product Evangelist Boris Kurktchiev went in August 2021.
San Jose-based Diamanti was founded in 2014 and has taken in $78 million in funding, with the last round being a $35 million C-round in 2019. That followed an $18 million B-round in 2017. Cash might be running tight. LinkedIn lists 92 employees. Its leadership web page shows no overall head of sales, no COO and no marketing head.
Taking a look at employee reviews on Glassdoor leaves one speechless, with “Cut throat snake pit” being one description and “Worst company in the world” being another. The embitterment factor on Glassdoor can be excessive as we all know, so let’s not pay overmuch attention to these eye-catching phrases, but still …
It looks as if the incoming CEO has a mountain to climb in terms of exec hires and sales coverage. The software technology looks good but Diamanti is facing competition from NetApp Astra, Dell EMC, HPE Ezmeral, Pure-Portworx, not to mention SmartX, SUSE Rancher, MayaData’s OpenEBS, StorageOS and many others. Diamanti’s employees need to pull together with a coherent and inspiring set of execs. Making that happen before competitors walk away with the market is Hickey’s task. The eight exec leavers listed don’t think he can do it.
Although Broadcom saw an overall rise in revenues and profit in its latest quarter, sales in the server-to-storage connectivity area were down. It expects a recovery and has cash for an acquisition.
Revenues in Broadcom’s third fiscal 2021 quarter, ended August 1, were $6.78 billion, up 16 per cent on the year. There was a $1.88 billion profit, more than doubling last year’s $688 million.
We’re interested because Broadcom makes server-storage connectivity products such as Emulex host bus adapters (HBAs), Brocade Fibre Channel switches, and SAS and NVMe connectivity products.
Hock Tan.
President and CEO Hock Tan’s announcement statement said: “Broadcom delivered record revenues in the third quarter reflecting our product and technology leadership across multiple secular growth markets in cloud, 5G infrastructure, broadband, and wireless. We are projecting the momentum to continue in the fourth quarter.”
There are two segments to its business: Semiconductor Solutions, which brought in $5.02 billion, up 19 per cent on the year; and Infrastructure Software, which reported $1.76 billion, an increase of ten per cent.
Tan said in the earnings call: “Demand continued to be strong from hyper-cloud and service provider customers. Wireless continued to have a strong year-on-year compare. And while enterprise has been on a trajectory of recovery, we believe Q3 is still early in that cycle, and that enterprise was down year on year.”
Server-storage connectivity
Inside Semiconductor Solutions, the server storage connectivity area had revenues of $673 million, nine per cent down on the year-ago quarter. Tan noted: “Within this, Brocade grew 27 per cent year on year, driven by the launch of new Gen 7 Fibre Channel SAN products.”
Overall, Tan said: “Our [Infrastructure Solutions] products here supply mission-critical applications largely to enterprise, which, as I said earlier, was in a state of recovery. That being said, we have seen a very strong booking trajectory from traditional enterprise customers within this segment. We expect such enterprise recovery in server storage.”
This will come from aggressive migration in cloud to 18TB disk drives and a transition to next-generation SAS and NVMe products. Tan expects “Q4 server storage connectivity revenue to be up low double-digit percentage year on year.” Think ten to twelve per cent.
The enterprise segment will grow more, with Tan saying: “Because of strong bookings that we have been seeing now for the last three months, at least from enterprise, which is going through largely on the large OEMs, who particularly integrate the products and sell it to end users, we are going to likely expect enterprise to grow double digits year on year in Q4.”
Broadcom Gen 7 Fibre Channel switches and blades.
That enterprise business growth should continue throughout 2022, Tan believes: “In fact, I would say that the engine for growth for our semiconductor business in 2022 will likely be enterprise spending, whether it’s coming from networking, one sector for us, and/or from server storage, which is largely enterprise, we see both this showing strong growth as we go into 2022.”
Broadcom is accumulating cash and could make an acquisition or indulge in more share buybacks. Tan said: “By the end of October, our fiscal year, we’ll probably see the cash net of dividends and our cash pool to be up to close to $13 billion, which is something like $6 billion, $7 billion, $8 billion above what we would, otherwise, like to carry on our books.”
SmartNICs and DPUs
Let us note that HBAs are NICs (Network Interface Cards) and that an era of SmartNICs is starting. It might be that Broadcom has an acquisitive interest in the SmartNIC area.
Broadcom is already participating in the DPU (Data Processing Unit) market, developing and shipping specialised silicon engines to drive specialised workloads for hyperscalers. Answering an analyst question, Tan said: “We have the scale. We have a lot of the IP calls and the capability to do all those chips for those multiple hyperscalers who can afford and are willing to push the envelope on specialised — I used to call it offload computing engines, be they video transcoding, machine learning, even what people call DPUs, smart NICs, otherwise called, and various other specialised engines and security hardware that we put in place in multiple cloud guys.”
Better add Broadcom to the list of DPU vendors such as Fungible, Intel and Pensando, and watch out for any SmartNIC acquisition interest.
HPE has added three new models to its StoreOnce purpose-built deduplicating backup appliance range, upping capacities and transfer speeds.
Update: HPE info added 9 September 2021 re HW boosts to performance and 5660 max usable capacity downshift from 5650. 5650 Cloud Bank max storage number corrected.
There are three tiers to the StoreOnce line: 36xx entry level, 52xx mid-range and 56xx high end. These systems compete with Dell EMC’s PowerProtect/Data Domain systems and also the ExaGrid and Quantum DXi products. The new StoreOnce models are the 3660, 5260 and 5660.
HPE’s announcement was made via a blog by product marketeer Ashwin Shetty who rates StoreOnce highly: “This week I will borrow the concept of superheroes, and relate it to our own storage super hero — HPE StoreOnce.”
He blogs: “Today, we are happy to introduce the next generation of HPE StoreOnce Systems that can scale from smaller, remote offices, to the largest enterprises and service providers. … HPE StoreOnce modernizes data protection for your hybrid cloud environment by neutralizing threats like ransomware, simplifying operations, delivering on SLAs, and protecting data — without any lock-in, and with rapid recovery on-premises in your datacenter or low-cost archiving in the cloud.”
Shetty’s blog has a slide showing the new products:
We added them into a table listing the old ones, with numbers obtained from HPE product spec sheets:
New models in the orange columns. The Cloud Bank concept copies older backups to AWS (S3), Azure or Scality object storage and the maximum capacity assumes 20:1 deduplication.
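The Cloud Bank maximum-capacity arithmetic is simple enough to sketch. Here is a quick Python illustration of the logical-capacity calculation at the stated 20:1 ratio, using the 5660's 1.1PB usable figure; this is our own calculation, not a number from an HPE spec sheet:

```python
# Logical (post-deduplication) capacity implied by a usable capacity figure,
# assuming the 20:1 deduplication ratio HPE uses for its Cloud Bank maximums.
# Our arithmetic for illustration, not an HPE spec line.
DEDUPE_RATIO = 20

def logical_capacity_pb(usable_pb: float, ratio: int = DEDUPE_RATIO) -> float:
    """Logical data protected per usable PB at the given dedupe ratio."""
    return usable_pb * ratio

# The 5660's 1.1PB of usable capacity protects roughly 22PB of logical data:
print(f"{logical_capacity_pb(1.1):.0f}PB logical at {DEDUPE_RATIO}:1")
```

The same multiplication explains why the Cloud Bank maximums in the table are so much larger than the usable capacities.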
Capacities and speed
The existing models are based on ProLiant Gen10 servers and we assume HPE has uprated the CPUs in the new products to accelerate transfer speeds, and increased disk capacities to lift overall capacity.
The 3660 tops the 36xx model range with enhanced capacities and transfer speed. The 5260 does the same for the 52xx range. But the 5660 has lower maximum usable capacity and Cloud Bank capacity than the 5650, although its transfer speed is much enhanced. We asked HPE to clarify what is happening here, and to confirm the new models’ CPU upgrades.
An HPE Spokesperson said: “We have updated the StoreOnce platform to the ProLiant DL385 Gen10+ server platform. The StoreOnce 5260 and 5660 boot from Smart Array Controller. In addition, 8 x SSDs are used for data cache in these models. The use of SSDs provides higher performance as an acceleration layer for storage.”
And: “Yes, the maximum usable capacity of the StoreOnce 5660 is 1.1PB as compared to StoreOnce 5650 (1.7PB). Based on our research and customer feedback, we found that a vast majority of customers have not utilized more than 1 PB of storage capacity and with the new StoreOnce Systems we have optimized the overall capacity to align with the customer requirements.”
The 5660 has been optimised for performance over capacity and its 105TB/hour exceeds the high-end PowerProtect DP8900’s 94TB/hour.
The transfer speed increases are impressive, and Shetty believes Primera and Nimble array users should be pleased with faster backups — ditto for 3PAR and Alletra array users.
The high-end 5660 can store 1.1PB locally (usable capacity), though a 32-node ExaGrid cluster of EX84 nodes exceeds this with 5.4PB; ExaGrid has a scale-out architecture.
Shetty’s concluding paragraph goes into blogger’s overkill: “The Avengers, Spiderman, Black Panther, Superman — they have some amazing powers like flight, shape-shifting, super speed and strength. We all need superheroes in our lives, and so do our business data and applications. With our enhanced superhero — HPE StoreOnce — and its amazing capabilities, you can rest assured that your data is always protected.”
The entire StoreOnce model range specs can be seen here.
GigaOm has taken an evaluative look at six distributed cloud file storage suppliers and judged CTERA to be sitting atop the heap.
Distributed cloud file storage is file storage based in the public cloud, and used to power applications such as collaboration software. The six suppliers are CTERA, Hammerspace, LucidLink, Nasuni, Panzura and Peer Software. GigaOm’s radar diagram places vendors in a 2D circular space defined by a vertical maturity-to-innovation axis and a horizontal feature-play-to-platform-play axis. There are three concentric circles with the inner being for leaders, the next for challengers and the outer for new entrants.
Suppliers are given a position and a direction-of-movement arrow: forward mover, fast mover or outperformer. Here’s the diagram:
The supplier closest to the centre is the top-ranked supplier. In this case it is CTERA which is also an outperformer. The GigaOm analysts write: “CTERA is the leader of this radar and … achieves outstanding ratings across a majority of key criteria and evaluation metrics. CTERA also proved to be the absolute leader in terms of edge deployment capabilities, providing organisations with the highest flexibility, enabling simplified edge data access, and offering comprehensive edge deployment options.”
The other outperformer is Hammerspace, classified as a challenger. GigaOm’s report reads: “Slightly behind the leaders’ pack, we find Hammerspace. The solution is well-suited to serve distributed cloud file storage requirements, but was built with more versatility and use cases in mind [than only distributed cloud file storage]. This explains a stronger focus on building a robust architectural foundation that excels in hybrid and multi-cloud capabilities, as well as integration with object storage.”
Two suppliers are close to CTERA, and challengers moving into the leaders circle: Panzura and Nasuni. We are told: “Panzura excels in integration with object storage, while Nasuni offers superior data management.”
All four of these suppliers are grouped in a quadrant between the platform-play end of the horizontal axis and the innovation end of the vertical axis.
The two remaining suppliers are feature-focused but still innovative: LucidLink and Peer Software, which is new to us. The analysts argue: “LucidLink focuses on globally, instantly accessible data with one particularity — data is streamed as it is read. Streaming makes the solution especially well-suited for industries and use cases that rely on remote access to massive, multi-terabyte files, such as the media and entertainment industry. LucidLink’s capabilities in this area are unmatched.”
As for Peer Software, it “has an interesting approach. This solution was conceived around the premise that organisations can build an abstracted distributed file service on top of existing storage infrastructure. It also supports scalability in the cloud. This makes Peer Software best in class when it comes to protecting existing storage investments and extending their usage to distributed cloud file storage.”
The report can be obtained from CTERA’s web site here. If you need reminding why CTERA has top place, then a CTERA blog will be happy to tell you all about it.
Finally we note that one supplier in this section of the market — Egnyte — is not even included.
Pure Storage has flagged a major announcement on September 28th. A financial analysts’ briefing is scheduled to follow the announcement, suggesting news that will affect investors’ views of Pure’s future revenues, costs and underlying profitability measures. The company is saying the announcement is about AIOps, the future of storage, storage and DevOps product innovations, and its as-a-Service offerings. What could it announce that could cause analysts to take stock and form a different view of the company?
We ignored the AIOps aspect, as that would be a fairly incremental move, and came up with a list of potential developments:
Hardware array refreshes;
FlashBlade software into the cloud (file, object);
Cloud Block Store ported to the Google Cloud Platform;
File and object support added to Cloud Block Store;
A cloud-native Purity OS running in the public clouds;
An exit from hardware manufacturing;
A move to commodity SSDs;
A strategic deal with a public cloud vendor;
Pure-as-a-Service expanded across the product line;
A cloud access brokerage service.
Hardware array refreshes would be good. Using the latest Xeon processors, for example, supporting PCIe gen-4, that sort of thing — but they would hardly move the needle for financial analysts. Possibly committing to support DPUs from Pensando or Fungible might do that. Still, not exactly that much impact on a financial analyst’s twitch-ometer.
Porting FlashBlade software to one or more public clouds would seem both logical and good sense. It would be additive to the FlashBlade market — and we think analysts would concur, nod approvingly and move on. Ditto porting Cloud Block Store to the Google Cloud Platform. Expansion into adjacent market? Tick. Stronger competition for NetApp data fabric idea? Tick. What’s not to like? Nothing. Move on.
Adding file and object support to Cloud Block Store? Trivially, there would be a naming problem: do we call it Cloud Block File Object Store? It would seem a logical extension of Pure’s public cloud capabilities and an improvement in the cross-cloud consistency of Pure’s hybrid cloud story. We can’t imagine analysts would see a downside here.
It could be achieved with another strategy: make the Purity OS software cloud-native and have it run in the public clouds. That would be a big deal, with a common code tree and four deployment targets: on-premises arrays, AWS, Azure and GCP. It would be a large — very large — software effort and would give Pure a great hybrid cloud story with lots of scope for software revenue growth. Cue making sure the analysts understand this. An AIOps extension could be added in to strengthen the story as well.
How about doing a Silk, Qumulo or VAST Data, and walking away from hardware manufacturing — using a close relationship with a contract manufacturer/distributor and certified configurations instead? This would be a major business change, and both analysts and customers would want reassurance that Pure would not lose its hardware design mojo.
A lesser hardware change would be to use commodity SSDs instead of Pure designing its own flash drives and controllers. Our instant reaction is a thumbs down, as Pure has consistently said its hardware is superior to that of vendors using off-the-shelf SSDs — such as Dell, HPE and NetApp — because it optimises flash efficiency, performance and endurance better than it could if limited by SSD constraints.
Such a change would still get analysts in a tizzy. But we don’t think it likely, even if Pure could pitch a good cost-saving and no-performance-impact story.
How about a strategic deal with a public cloud vendor similar to the AWS-NetApp FSx for ONTAP deal? That would indeed be a coup — having, say, Pure’s block storage available alongside the cloud vendor’s native block storage. We don’t think it likely, though it has to be on the “possibles” list.
Expanding the Pure-as-a-Service strategy to include all of Pure’s products would be an incremental move and so no big deal to people who had taken the basic idea on board already. Analysts would need a talking-to perhaps, to be persuaded that this was worth doing in Annual Recurring Revenue growth terms. This could be thought of as Pure doing a me-too with HPE’s GreenLake and Dell’s APEX strategies.
How about Pure acting as a cloud access broker and front-end concierge supplier, rather like NetApp with its Spot-based products? That would be big news and require new software and a concerted marketing and sales effort. AIOps could play a role here too. Our view, based on gut feelings alone, is that this is an unlikely move — although it would be good to see NetApp getting competition.
We are left thinking that the likeliest announcements will be about making more of Pure’s software available in the public clouds, plus an extension of Pure’s as-a-Service offerings and a by-the-way set of hardware refreshes. We’ll see how well our predictions match up with reality on September 28 and mentally prepare for a kicking just in case we are way off base.
NetApp is adding fast NVMe-over-TCP access to ONTAP, providing an upgrade to accelerated storage access for its iSCSI-using FAS and AFF array customers and for iSCSI users generally.
NVMe-over-TCP carries NVMe commands and data across a standard Ethernet TCP link. The idea of using NVMe in this way, extending the PCIe-based NVMe protocol across a network fabric with NVMe-oF, was first developed using RDMA over lossless Ethernet (RoCE), and then extended to Fibre Channel, which NetApp already supports.
Eric Burgener, Research Vice President, Infrastructure Systems, Platforms and Technologies Group at IDC, provided a statement: “With faster-than-expected adoption of NVMe-based all-flash arrays in recent years, new technologies like NVMe over Fabrics (NVMe-oF) will continue to fuel the evolution of the enterprise storage industry. NVMe/TCP is expected to be a key technology to drive mainstream market adoption due to its ubiquity and ease of deployment. Because it is based on Ethernet, it doesn’t require new hardware investment. It is particularly attractive for hybrid-cloud deployments.”
NetApp is specifically announcing that the next major release of ONTAP, v9.10.x, will include NVMe/TCP support. Octavian Tanase, NetApp’s SVP for Hybrid Cloud Engineering, tells us there will be an easy upgrade path to NVMe/TCP in this coming release, which, we think, might arrive before the end of the year.
NVMe/TCP is not quite as fast as NVMe over RoCE or FC, but it is way faster than standard iSCSI or Fibre Channel access to SAN data, as a general latency table indicates:
iSCSI and Fibre Channel — around 1,000 to 1,500µs;
NVMe/TCP — about 200µs;
NVMe/FC — about 150µs;
NVMe/RoCE — 100–120µs.
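To put those round numbers in perspective, here is a small Python sketch (our own arithmetic on the latencies above, using midpoints where a range is given) of how each transport compares with the slowest baseline:

```python
# Relative speedup of each transport versus a plain iSCSI/FC baseline,
# using the round-number latencies listed above (midpoints where a range
# is given). Indicative arithmetic only, not benchmark results.
LATENCY_US = {
    "iSCSI/FC": 1250,    # midpoint of the 1,000-1,500us range
    "NVMe/TCP": 200,
    "NVMe/FC": 150,
    "NVMe/RoCE": 110,    # midpoint of the 100-120us range
}

def speedup_vs_baseline(baseline: str = "iSCSI/FC") -> dict:
    """Latency ratio of each transport against the baseline transport."""
    base = LATENCY_US[baseline]
    return {name: round(base / lat, 1) for name, lat in LATENCY_US.items()}

for transport, factor in speedup_vs_baseline().items():
    print(f"{transport}: {factor}x faster than the iSCSI/FC baseline")
```

On these figures NVMe/TCP cuts latency by a factor of roughly six versus classic iSCSI, while giving up comparatively little against NVMe/FC and NVMe/RoCE, which is the heart of its appeal.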
NetApp NVMe/TCP graphic.
Because NVMe/TCP uses standard Ethernet, the same cabling that supports iSCSI external storage access can support the radically faster NVMe/TCP access. By adding NVMe/TCP support to ONTAP, existing ONTAP features — data reduction, management, protection, storage efficiency and so forth — become available to NVMe/TCP users.
NVMe/TCP will also be supported by ONTAP running in the public cloud, providing an NVMe namespace covering both the on-premises and public cloud environments.
Other supplier support
NVMe/TCP is supported by startups like Lightbits Labs and also by NAND and SSD supplier Kioxia (formerly Toshiba Memory) in its KumoScale product, sold through partners such as Quanta, Supermicro and Tyan.
Startup Infinidat supports NVMe/TCP access to its InfiniBox arrays. Another startup, Pavilion Data, supports NVMe/TCP as well as NVMe over RoCE. Pure Storage said it had NVMe/TCP support on its roadmap back in June last year but nothing has appeared yet.
NetApp looks to be the first major incumbent storage supplier to support NVMe/TCP, ahead of Dell, HPE, Hitachi Vantara, IBM and Pure.
Analysts are predicting that Intel’s Optane 3D XPoint shipped memory capacity could exceed that of DRAM in 2028.
Update: Jim Handy points added. 7 September 2021.
We have just learned about a report by Coughlin Associates and Objective Analysis called Emerging Memories Take Off, courtesy of Tom Coughlin. The report looks at 3D XPoint, MRAM, ReRAM and other emerging memory technologies and says their revenues could grow to $44 billion by 2031. That’s because they will displace some server DRAM, and also NOR flash and SRAM — either as standalone chips or as embedded memory within ASICs and microcontrollers.
The emerging memory market is set to grow substantially with 3D XPoint revenues reaching $20 billion-plus by 2031, and standalone MRAM and STT-RAM reaching $1.7 billion in revenues by then. The report predicts that the bulk of embedded NOR and SRAM in SoCs will be replaced by embedded ReRAM and MRAM.
A chart shows XPoint capacity ships crossing the 100,000PB level in 2028 and so surpassing DRAM, whose capacity growth is slowing slightly.
Note log scale.
The chart shows XPoint capacity shipped at 1,000PB this year, a number that grows 100x to 100,000PB in 2028.
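That 100x rise over seven years implies a very steep compound growth rate, which we can sketch from the chart's round numbers (our own arithmetic, not a figure from the report):

```python
# Implied compound annual growth rate (CAGR) for XPoint capacity shipments
# growing from 1,000PB in 2021 to 100,000PB in 2028, per the chart's round
# numbers. Our arithmetic for context, not a figure from the report.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate, as a percentage."""
    return ((end / start) ** (1 / years) - 1) * 100

growth = cagr(1_000, 100_000, 2028 - 2021)
print(f"Implied CAGR: {growth:.0f}% per year")  # roughly 93% per year
```

Near-doubling every year for seven years is a heroic assumption for any memory technology, which is worth bearing in mind when reading Jim Handy's caveats below.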
Jim Handy
Jim Handy.
We asked Jim Handy, the Objective Analysis co-author of the report, how they arrived at their XPoint revenue forecast.
He told us: “3D XPoint shouldn’t need a lot of wafers to achieve high revenues. You may recall that, during the technology’s 2015 introduction, Intel and Micron said that it would be “10 times denser than conventional memory” (meaning DRAM). That means that it takes 1/10th as many XPoint wafers as DRAM wafers to make the same number of exabytes.”
We should note though: “That said, the Objective Analysis 3D XPoint forecast is admittedly optimistic. It’s based on a server forecast that is ordinary enough, then makes assumptions about the acceptance of 3D XPoint DIMMs in those systems (officially known as “Optane DC Persistent Memory Modules”). Optane SSDs are not a big part of the equation, although they are getting more traction in data centers than they did in PCs.
“By the end of the forecast period (2031) we assume that XPoint DIMMs will have penetrated a little over 50 per cent of all servers, and that the majority of the memory in those servers will be Optane DIMMs, with a much smaller DRAM for the really fast work, much like a cache.”
Where will Intel get the manufacturing capacity, as its Rio Rancho plant, where Optane chips are made, is more of a development fab than a mass-production facility?
Handy tells us: “Given Intel’s manufacturing aspirations, Rio Rancho should be a very tiny portion of the company’s production by 2031. If XPoint volume gets high enough the economies of scale will drive down the costs and make it profitable, which XPoint has not been to date. If it’s profitable, then other companies will be interested in producing it, should Intel choose to source it externally.
“This all depends on Intel’s success in getting major server purchasers to adopt Optane, and on Intel’s willingness to continue to subsidize the technology. Both are difficult to predict.”
This puts Micron’s March 2021 withdrawal from the XPoint market in a new light. Did it have different figures for XPoint capacity growth?
Handy thinks: “I believe that Micron in 2015 expected for the XPoint market to develop faster than it did, and for Optane SSDs to be better accepted than they have been. With the lack of a sufficiently large market, and with the subsequent lack of the economies of scale, Micron had no clear path to profits. It’s unsurprising that the company dropped out of that business.”
It used to be thought that WD was betting big on a MAMR technology change — a big bang, as it were — like the change from longitudinal to perpendicular magnetic recording (PMR). Not so, says Dr Siva Sivaram, WD’s President of Technology and Strategy. Microwave-Assisted Magnetic Recording (MAMR) is part of WD’s energy-assisted perpendicular magnetic recording (ePMR) strategy. There will be a continuous stream of technology advances around ePMR, and MAMR is not being delayed.
Dr. Siva Sivaram.
We were briefed by Dr Sivaram after the OptiNAND news broke — the use of added embedded flash in a disk drive controller to provide NAND storage for drive metadata instead of storing it on disk. In its announcement, WD said: “we expect an ePMR HDD with OptiNAND to reach 50TB [in] the second half of this decade.” Which we took to mean full MAMR wasn’t needed until then.
What is full MAMR? It is surely the use of a write head with a microwave generator beaming microwaves at the bit area under the write head, making it more receptive to receiving a write signal setting its magnetic polarity. This enables smaller bits, greater areal density and higher disk capacity.
Two recent WD announcements do use energy-assist, but not in this way. The September 2020 18TB UltraStar DC HC550 and DC HC650 use ePMR tech, applying an electrical current to the write head to lower jitter and improve the strength of the write signal. This month’s OptiNAND adds a NAND-enhanced drive controller SoC to the mix, which processes drive metadata in a faster and more granular way, enabling tracks to be placed closer together and so raising capacity in a sample drive from 18TB to 20TB.
Dr Sivaram said: “MAMR is not being pushed away.” The ePMR technology applies to the drive’s data plane, whereas OptiNAND applies to its control plane. MAMR is part of WD’s overall ePMR technology — a series of improvements that electrically improve areal density. According to Dr Sivaram, “This is still on track.”
He says: “ePMR is a large bucket. All aspects of MAMR and HAMR are included within it.” The DC HC550/HC650 announcements referred to generation 1 of WD’s ePMR technology. There will be others. The 50TB ePMR disk prediction for the 2025–2030 period could well involve microwave use.
SMR and OptiNAND
Shingled Magnetic Recording (SMR) media disks could be one of the biggest beneficiaries of OptiNAND technology. In an SMR write event modifying existing data on the drive, a whole block or zone of tracks has to be erased and rewritten with the new data inserted. OptiNAND can make that operation faster, reducing an SMR drive’s write lag and bringing its performance closer to that of a conventional drive.
The details were not revealed, but we might envisage the size of an SMR write zone — the block of tracks treated as an entity — being reduced, shortening the time needed for a data rewrite operation. A 22 to 24TB SMR/OptiNAND drive could be on the way.
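To see why a smaller write zone would shorten rewrites, consider a toy read-modify-write model. All the numbers here (zone sizes, throughput) are invented for illustration, not WD figures:

```python
# Toy model of an SMR read-modify-write: updating data inside a shingled
# zone forces the whole zone to be read and rewritten, so rewrite time
# scales with zone size. Zone sizes and throughput are hypothetical.
def rewrite_seconds(zone_mb: float, mbps: float = 250.0) -> float:
    """Time to read the whole zone and write it back with the update merged in."""
    return 2 * zone_mb / mbps  # read pass + write pass

for zone_mb in (256, 64, 16):
    print(f"{zone_mb:>4}MB zone: {rewrite_seconds(zone_mb):.3f}s per rewrite")
```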
Also, OptiNAND’s use means the drive’s control plane can run in parallel with data plane operations — not possible when metadata has to be read from disk.
Seagate and Toshiba
Dr Sivaram said: “We will have products across the board with OptiNAND.” WD is at an advantage, he says, because it has HDD and NAND firmware engineers sitting in the same room — because it makes both disk drives and SSDs. Its disk drive competitors, Seagate and Toshiba, do not.
Our thinking is that Seagate and Toshiba will be talking to NAND suppliers, such as Micron, Samsung and SK hynix, and perhaps cheekily Kioxia (WD’s NAND joint-venture partner), about adding their embedded flash to disk drive controllers.
Toshiba already uses the bias current technology to improve its disk write signals with its Flux Control MAMR concept.
In theory Seagate could use similar technology and so gain a capacity jump without going to HAMR — adding a laser heating element to its write head and reformulating the drive recording medium. Will it? That is a big, big question.
Seagate has a US disk drive head bias current optimisation patent — number 6115201 — for determining a maximum magnitude of bias current that can be safely applied to a head of a disc. It was filed in 1998. It’s a relatively small disk drive engineering world and we can imagine Seagate is well up to speed with the technology.
It would not be a surprise if, one, Seagate introduced its own bias current/flux control technology and, two, both it and Toshiba added NAND to their disk controllers to store drive metadata off the disk, process it faster, and raise areal density.
Quantum is building out its credentials as an ADAS (Advanced Driver-Assistance Systems) supplier, reckoning that contributing a reference architecture will help acceptance of its storage HW+SW into ADAS workflow systems. DataCore is hiring product and sales management VPs to maintain its momentum, following a string of double-digit growth quarters. SoftIron has a new storage router to enable, it hopes, enterprise-wide adoption of Ceph.
And we have our own string of news bytes to follow that — especially one about Backblaze’s ransomware ecosystem exploration.
Quantum reference architecture for ADAS
File and object storage, management and workflow supplier Quantum announced the release of an end-to-end reference architecture for Advanced Driver-Assistance Systems (ADAS) and Autonomous Driving (AD) systems.
It combines an ultra-fast automotive- and mil-spec NVMe edge storage device with StorNext software to capture, manage, and enrich vast quantities of sensor data to help drive the future of autonomous vehicles.
Jamie Lerner, President and CEO, Quantum, said: “Although still relatively nascent, organisations developing autonomous vehicles are at a crossroads. The volume of data being captured is increasing exponentially, presenting an urgent need for speed, capacity and cost-efficiency in the data management lifecycle.”
Quantum ADAS Reference Architecture diagram.
Test vehicles typically capture terabytes of sensor data per hour generated by multiple video cameras, LiDARs, and Radars. ADAS/AD development systems rely on collecting and processing these large amounts of unstructured data to build sophisticated Machine Learning (ML) models and algorithms, requiring intelligent and efficient data management.
The Quantum R6000, with a removable storage canister, is an ultra-fast automotive & mil-spec edge storage device explicitly developed for high-speed data capture in challenging, rugged environments including car, truck, airplane, and other moving vehicles. StorNext software can help store and direct the data from the R6000 to ADAS/AD workflows.
DataCore exec hires
Software-defined storage supplier DataCore has hired Abhijit Dey as its Chief Product Officer and Gregg Machon to be its VP for Americas Sales. Dey comes from Agari, with time at Druva, Veritas and Symantec before that. Machon’s job history in reverse order is Radiant RFID, Qumulo (VP worldwide channels & OEMs), HPE and Nimble, with SolidFire and EMC before that.
Abhijit Dey (left) and Gregg Machon (right).
DataCore has made a significant investment in R&D, growing its technical talent by more than 40 per cent in the last two years alone, while modernising software development and testing practices, opening a centre of excellence in Bangalore, India, and a new office in Austin, Texas.
It had its 12th consecutive year of positive cash flow and double-digit growth in net new revenue over the last few quarters. This is a period in which the company has added an average of over 100 net new customers per quarter, with a strong performance in government, healthcare, and CSP (cloud service provider) verticals.
SoftIron’s new storage router
SoftIron, punting itself as the world leader in task-specific appliances for scale-out data centre solutions, announced general availability of its latest HyperDrive Storage Router, the HR61000 — an intelligent services gateway that provides interoperable high-throughput storage transactions for organisations using S3 or legacy protocols such as iSCSI, NFS, and SMB.
It provides gateway services and legacy file and block integration for enterprise applications. Combined with SoftIron’s Ceph-based HyperDrive Storage appliances, organisations can use it to gain virtually limitless storage scalability, while consolidating and simplifying their legacy storage systems management.
Networking — 2x NICs (100Gbit/sec)
Data resiliency — high availability per service/protocol
Storage protocols — iSCSI, SMB, NFS, Custom, CephFS
Management — 1x 1GbE, IPMI, HyperDrive Manager
Power supply — redundant (dual supplies); 120V–240V; 50Hz–60Hz
Power consumption — under 165 watts
Dimensions — 1 rack unit
The HR61000 Storage Router is available for POC and purchase today, via either traditional purchase (CAPEX) or as-a-Service (OPEX) options.
Shorts
Data protection and cyber-security supplier Acronis is entering into a training partnership with Nuremberg-based qSkills. qSkills will be offering training for Acronis products to partners and end users across EMEA.
A Backblaze blog opens the door on the ransomware economy and its ecosystem of players: developers, organised crime syndicates, brokers, operators, and more. It is a fascinating read.
Backblaze ransomware ecosystem diagram.
Object storage supplier Cloudian announced record bookings for the first half of its fiscal year ending July 31, increasing 50 per cent over the same period last year. The growth was driven by strength in both reorders from existing customers and sales to new customers. The company now has approximately 650 customers worldwide, up 40 percent over the past year.
Commvault sued Cohesity and Rubrik in April last year. It and Rubrik have now come to an agreement on all outstanding patent litigation proceedings between themselves. Commvault CEO Sanjay Mirchandani writes: “We have reached an amicable settlement that respects our mutual intellectual property and is in the best interest of our company and shareholders.” Will a similar agreement follow between Commvault and Cohesity?
DIGISTOR Citadel encrypted SSD.
Secure data-at-rest (DAR) supplier DIGISTOR and embedded cyber security supplier Cigent Technology announced a tech partnership to expand data security across the entire lifecycle of a storage drive, from initial deployment to end-of-life, for military, defence, and critical infrastructure applications. The effort will combine Cigent’s Dynamic Data Defense Engine (D³E) with DIGISTOR encrypted SSD storage products.
Data lake analysis startup Dremio has launched a global partner network which includes cloud, technology, consulting, and system integration (SI) partners such as AWS, Intel, Microsoft, Tableau, Privacera, dbt Labs, Twingo, InterWorks, and others. Features include a dedicated partner account manager, business planning, one-on-one support, education and enablement, sales and technical training and certification, and joint marketing support to drive growth. There are also substantial discounts, sales incentives and joint marketing funds.
FileCloud ships a cloud-agnostic enterprise file sync, sharing and data governance platform. It has announced its new Compliance Center, which enables US government agencies and organisations to run ITAR-compliant enterprise file share, sync, and endpoint backup solutions with the necessary encryption options. Some key highlights include:
Organizations without sophisticated risk management expertise can run their own compliance solution with necessary encryption options
Automated wizard streamlines compliance to just two clicks, guiding admins through configurations and identifying any missing elements
FileCloud for ITAR provides multi-level data protection through Data Leak Prevention capabilities
Backup target appliance maker ExaGrid has signed up TIM AG as a value-added distributor in the DACH region (Germany, Austria, Switzerland).
HPE has won a $2 billion contract to provide HPC and AI services to the US National Security Agency (NSA). Product will be supplied through the GreenLake subscription business over a ten-year period. There is an HPC-as-a-Service platform based on Apollo and ProLiant servers deployed in a QTS data center and managed by HPE.
Kasten by Veeam, a supplier of Kubernetes Backup, today announced that the CyberPeace Institute has deployed Kasten K10 to protect its Kubernetes applications and reduce the risk of data loss and corruption.
Kingston DataTraveler Max.
Taking advantage of the USB-C interface, Kingston has produced a DataTraveler Max USB 3.2 gen-2 thumb drive. It delivers up to 1000MB/sec read bandwidth and 900MB/sec write bandwidth. Capacities are 256GB, 512GB and 1TB. It weighs just 12g and has a five-year warranty.
Lightbits Labs, which supplies NVMe-optimized, SW-defined elastic block storage for private and edge clouds, has been assigned a patent (11,093,408) for “a system and method for optimizing write amplification of non-volatile memory storage media.”
The abstract reads: “A system and a method of managing storage of cached data objects on a non-volatile memory (NVM) computer storage media including at least one NVM storage device, by at least one processor, may include: receiving one or more data objects having respective Time to Live (TTL) values; storing the one or more data objects and respective TTL values at one or more physical block addresses (PBAs) of the storage media; and performing a garbage collection (GC) process on one or more PBAs of the storage media based on at least one TTL value stored at a PBA of the storage media.”
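Stripped to its essentials, the idea is to pick garbage-collection victims by how much of their data has already expired, since expired objects need not be copied out. A much-simplified sketch, with invented data structures:

```python
import time

# Sketch of TTL-aware garbage collection: each physical block address (PBA)
# records the expiry times of the objects stored there. The block with the
# highest fraction of expired objects is the cheapest GC victim, because
# expired data need not be copied out. Data structures are invented here.
def pick_gc_victim(blocks: dict, now: float) -> str:
    """Return the PBA whose objects are most expired."""
    def expired_fraction(expiries):
        return sum(1 for e in expiries if e <= now) / len(expiries)
    return max(blocks, key=lambda pba: expired_fraction(blocks[pba]))

now = time.time()
blocks = {
    "pba-0": [now - 10, now - 5, now + 60],    # two of three objects expired
    "pba-1": [now + 30, now + 90, now + 120],  # nothing expired yet
}
print(pick_gc_victim(blocks, now))  # pba-0
```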
New Yorker Electronics has announced its release of the new Innodisk industrial-grade DDR5 DRAM modules. The modules comply with all relevant JEDEC standards and are available in 16GB and 32GB capacities, at 4800MT/s. The Innodisk DDR5 DRAM also has a theoretical maximum transfer speed of 6400MT/s, doubling the rate of its predecessor, DDR4. In addition, the voltage has been dropped from 1.2V to 1.1V, reducing overall power consumption.
Server-embedded, infrastructure-as-a-service start-up Nebulon has signed up Boston to resell its product.
OWC has announced Jellyfish Manager 2.0, which works directly with Jellyfish servers — specialised shared storage devices that allow multiple post-production video editors to work simultaneously with 4K, 6K, and 8K footage. It integrates with the most-requested cloud backup services — AWS, Backblaze and Wasabi — allowing users to run scheduled backups and, if necessary, recover their data from the cloud.
OwnBackup, which supplies a cloud data protection platform, announced the acquisition of RevCult, a California-based software company that provides Salesforce security and governance solutions, often known as SaaS Security Posture Management (SSPM). SSPM helps organisations more easily secure data that is growing in volume, velocity and variety, as well as avoid exposure by continuously scanning for and eliminating configuration mistakes and mismanaged permissions, which are the top causes of cloud security failures.
Multi-cloud Kubernetes-as-a-Service supplier Platform9 has joined Intel’s Open Retail Initiative (ORI), whose mission is to enable retail transformation using open source, edge/IoT, and ISV ecosystem applications.
Storage tester SANBlaze announced its SBExpress Version 8.3/10.3 software release, which gives NVMe SSD manufacturers the ability to test PCIe-based NVMe devices as well as NVMe-oF (NVMe over Fabrics) devices. Comprehensive test suites are included for complete verification and compliance of ZNS, VDM, TCG, OPAL/Ruby and T10/DIF specifications for NVMe devices. Enhanced Python and XML APIs provide access to all tests and features, enabling integration of SANBlaze SBExpress NVMe systems into existing test infrastructure — all fully compatible with SANBlaze’s upcoming Gen-5 PCIe generation of products.
The SNIA tells the world that the latest revision of the SNIA Linear Tape File System (LTFS) Format Specification, v2.5.1, has been adopted as international standard ISO/IEC 20919:2021 through collaboration between SNIA and ISO. The LTFS Format Specification defines a self-describing data structure on tape for the long-term, low-cost retention of data, with the benefit of data portability between different systems and different sites using tapes. The changes from the previous ISO/IEC 20919:2016 improve storage efficiency through incremental index recording, and support a wider variety of characters in file names and extended attributes.
Storage SW startup StorONE has signed a reseller agreement with Virtual Graffiti, a California-based provider of network infrastructure systems. The agreement covers the entire StorONE product portfolio, as well as a series of next-generation hybrid storage systems built on Seagate hardware, that deliver high performance and high capacity at affordable prices.
Data integrity and integration supplier Talend has been named a Leader in the August 2021 Gartner Magic Quadrant for Data Integration Tools, for the sixth consecutive time. For a complimentary copy of the Gartner report, click here.
Cloud storage supplier Wasabi has signed an EMEA-wide distribution contract with Exclusive Networks, a cybersecurity specialist, which has its X-OD online delivery channel. Denis Ferrand-Ajchenbaum, VP Global Vendor Alliances and Business Development at Exclusive Networks, said: “Enterprise customers are budgeting more and more for their storage needs with public cloud providers like AWS, Azure, GCP and others, and frequently getting stung by extra charges for egress and API requests. Wasabi makes consumption easier, simpler and cheaper, and we at Exclusive are delighted to be able to offer EMEA partners the opportunity to enjoy enhanced benefits.”
ReRAM developer Weebit Nano has expanded its partnership with CEA-Leti, the French research institute. As part of the agreement, Weebit will incorporate additional IP licensed from CEA-Leti into its ReRAM offerings, further improving technical parameters such as endurance, retention and robustness. Tests show an order of magnitude improvement in array-level endurance, and a 2x increase in data retention under the same conditions compared to previous results. In addition, the technology will make it possible for Weebit to address new high-volume markets such as automotive and smart cards by enabling high-temperature reliability up to 175°C and high-temperature compatibility for wafer-level packaging.
Digitimes has reported China’s YMTC is experiencing low yields (30 to 40 per cent) on its 128-layer NAND chips. Overall the NAND industry is moving to 162–172 layer NAND, leaving YMTC behind. Wells Fargo analyst Aaron Rakers notes YMTC may not achieve its capacity plans until the second half of 2022 given the lower yields thus far, while production may reach 80–85k wafers per month by the end of 2021.
Virtual storage array supplier Zadara says it’s getting good traction with its recently launched Federated Edge programme. This is a fully managed, distributed cloud architecture sold through a global network of MHSPs. “We see a future where there is a Federated Edge Cloud in every city in the entire world, hosted by an MHSP, allowing edge customers to deploy workloads at sub-five milliseconds no matter where they are,” said Nelson Nahum, CEO, Zadara.
Amazon held a Storage Day on September 2 and announced a whole raft of new features for files, objects, blocks, file/object transfer, and backup.
They are aimed at lowering costs through tiering data to cheaper storage classes, simplifying access, automating data movements and verifying backup status. There is a list here, with — for us — a NetApp deal and file/object transfer facility being the highlights.
An AWS blog by senior developer advocate Marcia Villalba lays out the list of announcements.
File
The first and main one is FSx for NetApp ONTAP which we covered here and which provides ONTAP as a native managed service on AWS.
The second file announcement adds intelligent tiering to Amazon’s Elastic File System (EFS). This is similar to S3 tiering, with tiers cost- and performance-optimised on the basis of file access patterns. AWS customer Capital One is using this to get lower-cost options for its analytics workloads.
If an AWS user has a file that is not used for a period of time, EFS Intelligent Tiering will move it to the Infrequent Access (IA) storage class. If the file is accessed again, Intelligent Tiering will automatically move it back to the Standard storage class.
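That demote-on-age, promote-on-access behaviour can be modelled in a few lines. This is a toy simulation of the policy as described, not the real EFS API, and the 30-day threshold and field names are our assumptions:

```python
from dataclasses import dataclass

# Toy simulation of EFS Intelligent-Tiering as described in the text:
# files untouched for a threshold move to Infrequent Access (IA), and any
# access moves them back to Standard. Threshold and field names invented.
@dataclass
class File:
    name: str
    days_since_access: int
    tier: str = "Standard"

def apply_tiering(files: list, ia_after_days: int = 30) -> None:
    for f in files:
        if f.tier == "Standard" and f.days_since_access >= ia_after_days:
            f.tier = "IA"        # aged out: demote to Infrequent Access
        elif f.tier == "IA" and f.days_since_access == 0:
            f.tier = "Standard"  # just accessed: promote back

files = [File("logs.csv", 45), File("report.pdf", 0, tier="IA")]
apply_tiering(files)
print([(f.name, f.tier) for f in files])
# [('logs.csv', 'IA'), ('report.pdf', 'Standard')]
```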
File Transfer Family
A third file announcement is a completely new service. AWS Transfer Family Managed Workflows is an onramp/offramp into AWS that automates file and object transfers via SFTP (Secure Shell (SSH) File Transfer Protocol), FTPS (File Transfer Protocol over SSL) and FTP, into and out of S3 or EFS. The SCP, HTTPS and AS2 transfer protocols are not supported.
AWS Storage Day screen grab.
Villalba writes: “Without using Transfer Family, you have to host and manage your own file transfer service which requires you to invest in operating and managing infrastructure, patching servers, monitoring for uptime and availability, and building one-off mechanisms to provision users and audit their activity.” The Transfer Family is a fully managed service to accomplish this.
It can scan files for malware, personal identifying information and anomalies with customised and auto-triggered file upload workflows. Errors in processing files can be automatically handled with failsafe modes as well. AWS says all this can be done with low-code automation.
AWS Transfer Family Managed Workflows lets users configure all the necessary tasks at once so that tasks can automatically run in the background. Read a Transfer Family FAQ to find out more.
Object
S3 Intelligent-Tiering has had its small-object restrictions removed:
No monitoring and automation charges for small objects;
No need to analyse object sizes;
No minimum storage duration for objects;
No need to analyse an object’s expected life.
According to Villalba, “Now that there is no monitoring and automation charge for small objects and no minimum storage duration, you can use the S3 Intelligent-Tiering storage class by default for all your workloads with unknown or changing access patterns.”
S3 Multi-Region Access Points provide a global endpoint in front of buckets in multiple AWS regions. They work across multiple AWS Regions to provide better performance and resiliency. This feature dynamically routes requests over AWS’s network, to the lowest latency copy of your data, increasing read and write performance by up to a claimed 60 per cent, and providing operational resiliency.
We understand that S3 Multi-Region Access Points rely on S3 Cross Region Replication to replicate the data between the buckets in the Regions chosen by a customer. The customer selects which data is replicated to which bucket. There are replication templates available to help simplify applying replication rules to buckets.
Villalba blogs: “You can now build multi-region applications without adding complexity to your applications, with the same system architecture as if you were using a single AWS Region.”
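The routing behaviour reduces to picking the replica with the lowest observed latency. A minimal sketch, with invented region latencies (the real service measures and routes inside AWS’s own network):

```python
# Sketch of the routing idea behind S3 Multi-Region Access Points: a request
# is served from the replica bucket in the lowest-latency region. The region
# names and latency figures below are invented for illustration.
def route_request(latencies_ms: dict) -> str:
    """Pick the region with the lowest measured latency for this request."""
    return min(latencies_ms, key=latencies_ms.get)

observed = {"us-east-1": 82.0, "eu-west-1": 14.0, "ap-south-1": 190.0}
print(route_request(observed))  # eu-west-1
```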
Block
EBS direct API snapshots now support any volume up to 64TB, increased from 16TB and equal to the size of the largest EBS io2 Block Express volume. Snapshots can be recovered to EBS io2 Block Express volumes for protection and for test and dev.
AWS says it has built the first SAN for the cloud, with io2 Block Express volumes providing hundreds of thousands of IOPS at sub-millisecond latency and five-nines durability. It’s claimed to be good for SAP and Oracle ERP, SharePoint, MySQL, SQL Server, SAP HANA, Oracle DB, and NoSQL databases such as Cassandra, MongoDB and CouchDB.
AWS Storage Day screen grab.
Backup
AWS Backup is a fully managed service to initiate policy-driven backups and restores of AWS applications. AWS Backup Audit Manager provides customisable controls and parameters, like backup frequency or retention period for AWS backups. It provides evidence of backup compliance for data governance, continuously tracks AWS backup activities, audits backup practices, and generates audit reports.
The FSx for ONTAP facility will enable NetApp customers to use AWS much more easily. The File Transfer Family will enable other file-access and object-access customers to do so as well. File tiering will lower the cost of longer-term file storage, and the S3 tiering restriction removals will help with storing lots of smaller objects.
The upping of EBS snapshot capacity to 64TB is welcome as is the Backup Audit Manager. Altogether this set of announcements should help AWS to make progress in storing more of the world’s data.