
Your occasional Storage Digest, featuring Asigra, Coraid, Fujitsu, BackBlaze, Toshiba and…

Asigra, Coraid, Gartner and Toshiba supply a clutch of storage news to start the week, with better backup pricing, block and mobile phone storage, and a supplier-ranking look at business analytics and data warehousing.

There are updates from AOMEI, Backblaze, Fujitsu and StorageOS, followed by CxO hires at Pivot3 and Pure Storage.

Asigra goes unlimited

Backup supplier Asigra has introduced an unlimited-use subscription licensing model for managed service providers (MSPs), saying the existing pay-as-you-grow model is broken.

Asigra says the new scheme lets MSPs accurately forecast backup software licensing costs for customers with mixed environments.

Asigra bases the unlimited-use pricing on the service provider’s previous year’s usage plus a jointly-agreed growth factor for the term of the agreement. Usage is based on capacity, virtual machines, physical machines, sockets, users and other measures. There are no unexpected pricing uplifts during the term, meaning predictable Asigra-based costs for the service provider.
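As a back-of-the-envelope illustration, the mechanics work out as below. Only the structure (prior-year usage plus an agreed growth factor) comes from Asigra’s description; the numbers and the per-terabyte pricing unit are our own invented placeholders.

```python
# Hypothetical sketch of Asigra's unlimited-use pricing model.
# The structure comes from Asigra's description; all figures are invented.

def unlimited_use_price(prior_year_usage_tb: float,
                        price_per_tb: float,
                        agreed_growth_factor: float) -> float:
    """Fixed price for the coming term, set up front."""
    forecast_usage = prior_year_usage_tb * (1 + agreed_growth_factor)
    return forecast_usage * price_per_tb

# An MSP that protected 500TB last year, agrees 20% growth, at $10/TB/year:
print(unlimited_use_price(500, 10.0, 0.20))   # 6000.0 - fixed for the whole term
```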

David Farajun, CEO of Asigra, says: “This first-of-its-kind cloud backup software licensing approach fixes the broken pay-as-you-go model, which has become a sore point for service providers globally due to unplanned pricing spikes as backup volumes surge.”

The increased usage during the term is taken into account when setting the next term’s unlimited use pricing.

Coraid extends Ethernet block storage scalability

ATA-over-Ethernet (AoE) storage vendor Coraid has updated its VSX product to v8.1.0. 

Coraid founder and CEO Brantley Coile.

VSX is a diskless storage appliance that provides network storage virtualisation. It uses Coraid’s SRX storage appliances to create an elastic storage system. SRX is Coraid’s EtherDrive software, which provides Ethernet block storage and runs on commodity servers.

Coraid claims AoE has a 10x price/performance advantage over Fibre Channel and iSCSI block storage. A VSX with three years of software starts at $4,687.

Gartner’s business analytics MQ sees split market

Gartner’s 2019 Magic Quadrant for Data Management Solutions for Analytics (DMSA) report shows a two-way divide. There is just one visionary vendor, MarkLogic, with a cluster of eight leaders and a second cluster of ten niche players. There are no challengers.

The report mentions an increasingly split market, and implies the market is maturing, saying: “Disruption slows as cloud and non-relational technology take their place beside traditional approaches, the leaders extend their lead, and distributed data approaches solidify their place as a best practice for DMSA.”

Snowflake is the only recent vendor to make the jump into the leaders’ quadrant, having been ranked as a challenger a year ago. Google was a visionary then and is now also in the leaders’ quadrant. Others, like Cloudera and Hortonworks, are stuck in the niche players’ box.

Google will ship you a full copy of the report if you register.

Toshiba’s  mobile phone SSD goes faster

Toshiba is sampling 96-layer 3D NAND UFS v3.0 storage. UFS (Universal Flash Storage) is embedded storage for mobile devices. Toshiba’s device has 128GB, 256GB and 512GB capacities.

The UFS v3.0 standard has been issued by JEDEC and specifies an 11.5 x 13mm package.

Toshiba’s mobile phone SSD-like product contains flash memory and a controller which carries out error correction, wear levelling, logical-to-physical address translation, and bad-block management.

Toshiba says the new device has a theoretical interface speed of up to 11.6Gbit/s per lane, with two lanes providing 23.2Gbit/s. Sequential read and write performance of the 512GB device is improved by approximately 70 per cent and 80 per cent respectively over previous-generation devices.

Those previous-generation devices complied with the JEDEC UFS v2.0 standard, topped out at 256GB capacity, and used 64-layer 3D NAND. The maximum data rate was 1.166Gbit/s. Toshiba does not publish sequential read and write performance numbers for either its old or new UFS devices.
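For a sense of scale, the quoted v3.0 lane rates convert as follows. This is straightforward arithmetic on the figures above; the gigabyte conversion ignores encoding and protocol overhead.

```python
# Interface bandwidth arithmetic for the quoted UFS v3.0 figures.
lane_rate_gbit = 11.6               # Gbit/s per lane, as quoted
lanes = 2
total_gbit = lane_rate_gbit * lanes # 23.2 Gbit/s across two lanes
total_gbyte = total_gbit / 8        # ~2.9 GB/s raw line rate, before overheads
print(total_gbit, total_gbyte)
```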

Shorts

AOMEI’s Partition Assistant for Windows PC and Server helps in managing hard disks and partitions. Users can resize, move, extend, merge and split partitions. It has been revved to V8.0 with a new user interface and a modernised look and feel.

A Backblaze Cloud Backup v6.0 update enables users to save Snapshots directly to B2 Cloud Storage from their backup account and retain the Snapshots as long as they wish.

Fujitsu has updated one of its four hyperconverged infrastructure (HCI) products, the Integrated System PRIMEFLEX for VMware vSAN, to run SAP applications such as HANA. It was first introduced in 2016. Fujitsu provides no hardware details of the refreshed system, merely saying it has been optimised for SAP. We’re trying to find out more.

Startup StorageOS, which produces cloud native storage for containers and the cloud, has achieved Red Hat Container Certification. A StorageOS Node container turns an OpenShift node into a hyper-converged storage platform, with features like high availability and dynamic volume provisioning. StorageOS is now available through the Red Hat Container Catalog to provide persistent storage for containers.

People

All the ‘P’s this week:-

Rance Poehler has joined HCI vendor Pivot3 as its VP for global sales and chief revenue officer. He joins from Dell Technologies, where he was VP of worldwide sales for cloud client computing, an $8800m business.

Sudhakar Mungamoori has become senior director for product marketing at storage processor startup Pliops. His previous billet was at Toshiba Memory America, where he worked on KumoScale.

Pure Storage has hired Robson Grieve as its Chief Marketing Officer. He was previously CMO at software analytics company New Relic, and before that, was an SVP and CMO at Citrix.

IBM financials indicate weakness in storage hardware

IBM’s Q4 2018 results expose a storage hardware weakness that is unlikely to be fixed anytime soon.

This is not exactly an existential crisis for the company as storage hardware represents just two per cent of IBM’s business. But still…

IBM does not break out storage revenues – they are a component of the company’s systems business, along with z mainframes and Power servers.

According to IBM’s earnings charts, the segment posted Q4 revenues of $2.6bn, down 20 per cent on the year, as the z mainframe refresh cycle, now in its sixth quarter, winds down.

In the absence of a breakout, we estimate that storage hardware accounted for $450m-$460m of revenue in the quarter. IBM notes falls in mid-range storage revenues, with increased all-flash array sales unable to offset declines elsewhere.

IBM says it is experiencing pricing pressure in a very competitive storage market. Crudely speaking, mid-range products, except the all-flash arrays, are uncompetitive and too costly. 

William Blair analyst Jason Ader, in a recent enterprise storage market review, noted that IBM appears to be de-emphasising its storage array business.

Ader may be correct in his conclusion, but IBM continues to update its product lines. The company added NVMe technology to its mid-range products in December 2018 and will introduce this capability across its storage portfolio in the first half of 2019.

Comment

Blocks & Files thinks that IBM has better growth prospects in storage software than with storage hardware.

The company may pull in $460m a quarter from storage hardware but this is a tiny fraction of the company’s $21.8bn revenue in Q4 2018.

Nothing IBM can do with storage hardware will significantly affect its quarterly revenue number. It just does not matter that much.

The company’s recent NVMe refresh shows its commitment to safeguarding storage hardware revenues where it can. But that appears to be the extent of its strategy. Certainly it does not care enough about this market to take the plunge and, for example, buy Pure Storage.

Exploring the memory-flash gap through an Optane lens

Semiconductor consultant Mark Webb thinks XPoint will dominate the memory-flash performance gap and could be a $2.7bn business for Intel and Micron in 2023, with DIMM sales driving most revenue.

Webb is a credible analyst – his CV includes 23 years with Intel in NAND production and with IMFT, the joint Intel-Micron manufacturing venture.

His chart, presented at the 2018 Flash Memory Summit, details the memory (DRAM) and NAND performance gap in terms of latency.

The latency gap exists between 10ns DRAM and 100μs (100,000ns) NAND and is being filled by persistent or storage-class memory technology products.
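The size of that gap is easy to put in numbers, using only the figures quoted above.

```python
# The DRAM-to-NAND latency gap quoted above, expressed as a ratio.
dram_latency_ns = 10
nand_latency_ns = 100_000                 # 100 microseconds
print(nand_latency_ns / dram_latency_ns)  # 10,000x - four orders of magnitude to fill
```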

Intel’s Optane drives are built from 3D XPoint media produced at a Lehi, UT, foundry, operated by its IMFT joint venture with Micron.

Optane is available in SSD form, with an NVMe interface using the PCIe bus, and in DIMM form, plugging into a host’s faster CPU-memory bus.

But there are other candidate technologies and Webb positions them on a chart based on the latency time spectrum.

The other gap-filler candidates are MRAM, NAND-DRAM DIMMs and alternative ReRAMs (resistance RAMs, such as those using pure phase-change memory).

They are less mature than 3D XPoint and lack an industry giant like Intel to push them. As a result Intel’s Optane brand has advantages from a technological and product marketing perspective.

The technology’s advantages will soon become greater as Micron is set to enter the 3D XPoint market towards the end of this year. 

3D XPoint revenues

Webb has modelled 3D XPoint revenue out to 2024, and he thinks more than two-thirds will be attributable to DIMM products.

He notes that the DIMM forecasts contain assumptions for Cascade Lake share (Cascade Lake AP CPUs are needed to support Optane DIMMs), server DIMM attach rates and average Optane density.
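A simplified sketch of how such a forecast model hangs together is below. The factor names come from Webb’s list; every number is a placeholder we invented for illustration, not his data.

```python
# Skeleton of a DIMM revenue forecast built from the factors Webb lists.
# All values below are illustrative placeholders, not Webb's figures.

def optane_dimm_revenue(servers_shipped: int,
                        cascade_lake_share: float,  # share of servers on Optane-capable CPUs
                        dimm_attach_rate: float,    # share of those servers taking Optane DIMMs
                        avg_density_gb: int,        # average Optane capacity per attached server
                        price_per_gb: float) -> float:
    attached_servers = servers_shipped * cascade_lake_share * dimm_attach_rate
    return attached_servers * avg_density_gb * price_per_gb

# e.g. 12m servers, 40% Cascade Lake share, 10% attach, 512GB average, $4/GB:
print(optane_dimm_revenue(12_000_000, 0.40, 0.10, 512, 4.0))  # ~$983m
```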

In a blog he notes Intel is off to a slow start in 2019. The Optane DIMM-supporting Cascade Lake CPU has arrived later than Intel planned and that has pushed volume shipments back. He also reckons Cascade Lake take-up is below Intel’s expectations.

The attach rate for Optane DIMMs has been disappointing, and demand for Optane on desktop PCs has yet to materialise. Intel has announced notebook Optane drives in which the Optane functions as a cache for a slug of QLC (4 bits/cell) flash. That makes for a cost-effective performance SSD.

Optane DIMMs are not wholly persistent – DRAM cache contents are lost when power is switched off. This is a sales drawback.

To enable persistent Optane memory, says Webb, the application must explicitly write data to Optane, deciding where to write it in the Optane address space. This is called Application Direct Mode.

Existing application software has to be modified and this will impede take-up for Optane DIMMs.
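To show why application changes are needed, here is a minimal sketch of app-direct-style access. It assumes a file on a DAX-mounted persistent-memory filesystem; the path and record layout are invented, and real deployments would typically use Intel’s PMDK libraries rather than raw mmap.

```python
# Minimal, illustrative sketch of Application Direct-style persistent memory use.
# Assumes /mnt/pmem is a DAX-mounted filesystem backed by Optane DIMMs;
# the path and data layout are invented for this example.
import mmap
import os

PMEM_FILE = "/mnt/pmem/appdata"
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

# The application maps the persistent region and decides where each record
# lives in that address space - this is the code change that Application
# Direct Mode demands of existing software.
with mmap.mmap(fd, SIZE) as pmem:
    pmem[0:5] = b"hello"
    pmem.flush()        # push the store out to the persistent medium
os.close(fd)
```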

Webb tells us: “I have all the details on what happens next on 3D XPoint with increased density and cost reduction.”

We aim to find out more.

NVMe in the data centre. A right riveting read

Chris Evans, an IT storage consultant, has written a short, useful guide to implementing NVMe in the data centre.

Non-Volatile Memory Express (NVMe) is the fastest storage drive interface, and geared to solid state drives and not disks.  Evans looks at NVMe drives and NVMe-over-Fabrics (NVMe-oF) networking. He describes NVMe technology and its roadmap, and outlines NVMe advantages.

Contents of the NVMe in the Data Centre guide

The guide also explores NVMe drive form factors, such as AIC, M.2, U.2, NF1 and the ruler format. And Evans explores the various implementation alternatives for NVMe-oF: Fibre Channel, Ethernet, InfiniBand and TCP.

 In the final section of this guide, the author examines products from 11 vendors.

The guide is available at no charge to subscribers of Architecting IT or the Architecting IT blog, and there is no subscription fee. We have no financial interest to declare.

How do you sex up a tape library? Fujitsu marketer: “Ransomware!”

Fujitsu has applied a thin coat of ransomware marketing gloss to the most unlikely thing, a new mid-range tape library called the LT140.

Here’s the pitch. Stored tapes are offline and therefore provide an air-gap defence against ransomware attacks. That means clean offline versions restored from the library replace online files affected in an attack.

Hope springs Eternus

Olivier Delachapelle, head of data centre category management at Fujitsu EMEIA, says high-capacity tape libraries should be an “essential part of any 3-2-1 data protection strategy: keep at least three copies of data, store two backup copies on different storage media, and store one offline.”

Naturally, restores from tape are slower than restores from disk. Also data added to files after the last backup but before a ransomware attack is lost in a tape restore.

The vulnerability window is typically shorter if disk-based snapshotting and backup is used, as with Cohesity’s Data Platform v6.1.1. The choice comes down to data protection strategy and price/performance judgements. Tape is usually the lower cost option.

Meet the family

The new ETERNUS LT140 is a 20-280 slot, 3-21 drive system with a 3U base box. Total capacity scales up to 8.4PB (compressed) using LTO-8 tapes.

The LT140 is sold on a pay-as-you-grow basis and scales by adding drives and cartridge slots. The system supports LTFS access and WORM (Write Once, Read Many) format cartridges.

The LT140 sits above three lower-capacity libraries:

  • LT20 S2 – 8-slot, 1-drive, 2U cabinet and 240TB max capacity
  • LT40 S2 – 24-slot, 2-drive, 2U cab, 720TB max
  • LT60 S2 – 48-slot, 4-drive, 4U cab, 1,440TB max

The LT260 and LT270 libraries cap Fujitsu’s range.

The ETERNUS LT140 is available in EMEIA from February 2019, directly from Fujitsu and through channel partners. No published prices, as per usual.

P.S.

Fujitsu does not make its own tape libraries and drives. Only IBM and HPE make LTO-8 drives and, along with SpectraLogic, LTO-8 libraries. We reckon Fujitsu is OEMing HPE’s StoreEver MSL3040. The two pieces of hardware have similar specs and look identical.

Fujitsu ETERNUS LT140 on left and HPE StoreEver MSL3040 on the right.

Cohesity throws kitchen sink at ransomware

Cohesity hopes the latest version of Cohesity DataPlatform (CDP) will help to transform ransomware attacks from a devastating existential threat to a manageable nuisance.

The software update, v6.1.1, adds ransomware-resistant backups, detection of potential ransomware attacks, and faster restores.

In the event of a successful attack, CDP eases the pain by quickly restoring files and virtual machines in their hundreds, at any point in time, Cohesity claims.

Cohesity’s SpanFS filesystem keeps backups (snaps) in an immutable form. If changed data is applied to a backup, CDP writes it to a separate file.
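A toy sketch of that redirect-on-write idea is below. This is our illustration of the general technique, not Cohesity’s SpanFS implementation; the class and method names are invented.

```python
# Toy redirect-on-write snapshot: the original backup blocks are never
# modified; changed data lands in a separate overlay and reads resolve
# against both. Illustrative only - not SpanFS.

class ImmutableBackup:
    def __init__(self, blocks: dict[int, bytes]):
        self._blocks = dict(blocks)            # frozen base image
        self._overlay: dict[int, bytes] = {}   # changed data goes here

    def write(self, block_no: int, data: bytes) -> None:
        self._overlay[block_no] = data         # base blocks stay untouched

    def read(self, block_no: int) -> bytes:
        return self._overlay.get(block_no, self._blocks.get(block_no, b""))

b = ImmutableBackup({0: b"original"})
b.write(0, b"changed")
print(b.read(0), b._blocks[0])   # b'changed' b'original' - the base is intact
```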

Customer security settings can deny modify and delete access to backups. This enables the blocking of change requests – whatever the credentials of the applicant. Multi-factor authentication for identity verification is available.

Customers can use the Cohesity Helios service to monitor backup changes and data ingest rates on-premises and in the public cloud. Alerts are sent to IT support and to the Cohesity support team.

Helios also monitors general file and object data, watching access frequency and the number of files being modified, added or deleted by individual applications or users. This aids faster detection of a ransomware attack.

Cohesity Helios dashboard

A brief walk through filer software in the AWS, Azure and Google clouds

Updated April 2019. Filer software is becoming multi-cloud, with 12 cloud filer suppliers overall and several now supporting more than one public cloud. But they all have a way to go to provide full coverage across the three main public clouds.

How do the competitors stack up? Let’s find out.

Show me the data!

Elastifile has announced availability of its software in the Amazon marketplace. It claims high performance for its scale-out filer software, citing sub-200 microsecond latency and millions of IOPS. But there are no detailed IOPS numbers or benchmark data, which prevents us from making comparisons.

The product is also available (April 2019) for Google’s Cloud Platform. Azure availability is on its roadmap.

SoftNAS does not support GCP. Support was due last year, so it seems reasonable to expect this feature to be released in 2019.

NetApp’s ONTAP software provides block and filer support. We do not have performance data for its various cloud incarnations.

Qumulo is another scale-out filer supplier claiming high performance in the cloud – but without publishing detailed numbers. Its Core product is available in AWS and became available on the Google Cloud Platform in April 2019.

Dell EMC’s Isilon scale-out filer product had a GCP implementation in development in mid-2018 but it has not yet come to fruition. Its Unity array software can run in the VMware cloud on AWS but not natively in AWS.

In common with several competitors, Hedvig supplies block, file and object software. It says its software is high-performance but again provides no comparative data.

WekaIO does provide benchmark information, hurrah! In the absence of comparable performance information from the other suppliers, it gets the speedy performance crown.

Object storage suppliers such as Cloudian and Scality also support file protocol access. Cloudian is able to run in AWS and Azure. Scality can connect to Azure and run in AWS and GCP.

Comment

Blocks & Files thinks some empty cells in our File Software in the Public Cloud chart will be filled in by the end of the year. 

All independent file software suppliers will go multi-cloud, supporting AWS, Azure and GCP. This will give them an anti-lock-in marketing cudgel to wield.

The Big Three will face pressure to support multiple file protocols, such as NFS and SMB, plus S3, Azure Blob and, maybe, HDFS. Customers will want them to add file support to on-premises instantiations such as Amazon Outpost and Azure Stack. Google lacks an on-premises presence, so far.

Nutanix, Pure, VMware are booming. Other enterprise IT vendors? Not so much

Cisco is discounting its HyperFlex hyperconverged infrastructure product, Commvault is under competitive pressure,  Microsoft AzureStack lacks traction, and NetApp is replacing IBM storage arrays, while Nutanix, Pure and VMware are booming.

This market intelligence is provided by Jason Ader, analyst at William Blair, an investment bank, who surveyed 45 resellers in North America and Western Europe in the December 2018 quarter. He found “a continuation of the healthy [enterprise IT] spending environment” of the past few quarters, due to “normal budget flush at year-end, continued on-premises infrastructure refresh, and the strategic urgency of IT investments (which makes it difficult to delay projects.)”

He said lines of business (versus IT managers) are “driving project activity at large customers, and that this is a key reason for the current robustness in IT spending.” 

According to Ader’s survey panel, “areas of product strength include security, analytics, hybrid cloud, and HCI/storage. Momentum in on-premises storage refresh continued in the fourth quarter, driven by data growth and outdated infrastructure. Meanwhile, backup and disaster recovery continues to be a hot space, driven by private vendors Rubrik, Cohesity, Veeam, and Zerto.”

Ader delved into some vendors in more detail. We have reproduced his storage-related points from his communication to William Blair clients (thank you, Jason).

Cisco

  • Cisco’s HyperFlex HCI products looked to be gaining some steam last quarter but the excitement faded somewhat this quarter, perhaps due to mixed customer satisfaction. Cisco continues to seed customers with the product and offers heavy discounts to gain market share.

Commvault

  • Reseller feedback continued to show a business with surprising resilience but also under increasing pressure, with sales turnover and competitive encroachment taking their toll.
  • At the same time, VARs are positive on Commvault’s new pricing model, which massively simplifies licensing into four main SKUs: 1) Commvault Complete Backup & Recovery, 2) Commvault HyperScale, 3) Commvault Orchestrate, and 4) Commvault Activate.
  • Specifically, customers like Commvault Complete, the company’s new consolidated backup and recovery product package, which offers simplified pricing and includes several features that were previously only available as add-ons such as snapshot management, hardware replication support, reporting, synchronization of virtual machines across locations, and file sharing.
  • We heard of growing activity for Commvault HyperScale appliances with more active deals and a growing pipeline. VARs note that the appliances have worked through early technical and go-to-market kinks, and are now sparking interest among new and existing customers given the simpler and more resilient value proposition.
  • VARs continue to see traction converting customers from one-year maintenance plans to three-year subscription agreements; this drives up-sell opportunities and locks customers in to the Commvault solution (thwarting competitive takeout bids).
  • On the competitive front, VARs are battling to keep Rubrik, Cohesity, and Veeam out of Commvault accounts.
  • We have not heard much detail on the new CEO search, though we hear murmurings that a new CEO could be announced in the first quarter.

Microsoft

  • Microsoft’s cloud business continues to be extremely strong; both Office 365 and Azure are seeing broad adoption and activity.
  • We hear of minimal traction for AzureStack thus far; several VARs believe that Microsoft has relied too much on third-party hardware manufacturers who lack incentives to push the AzureStack solution.

NetApp

  • Overall channel feedback remains positive on NetApp’s business and channel friendliness, with several large VARs posting strong growth with NetApp in the December quarter and feeling good about sustained growth in calendar 2019.
  • Storage refresh activity remains elevated due to pent-up demand in a healthy economic environment and the shift to all-flash architectures.
  • VARs and customers believe that NetApp has become an innovator again and see NetApp’s product lineup as strongly positioned, especially around AFAs and cloud data services.
  • The channel is largely positive on NetApp’s HCI technology, and while deal-count is growing, some VARs think it will be hard for NetApp to catch HCI leaders Nutanix and VMware.
  • While Dell-EMC’s market position has stabilised, its product portfolio is still viewed as long in the tooth, which creates ongoing displacement opportunities for NetApp.
  • IBM appears to be de-emphasising its storage array business, which is leading to significant replacement of IBM storage by NetApp.

Nutanix

  • A continued robust demand environment for the hyper-converged infrastructure (HCI) category from both new and existing customers—customers love the simplicity, one-click operation, and scalability of the architecture.
  • Increasing deal sizes (including more $1 million-plus deals), indicative of customer confidence in Nutanix— they increasingly see Nutanix as a strategic vendor. In the past a big deal for Nutanix was in the $300,000 range; now a big deal is north of $1 million, and we heard of several mid-seven-figure deals closed in the quarter.
  • Best-of-breed product positioning, with VMware/Dell-EMC as the main HCI competitor at this point. Nutanix does lose to Dell at times, but almost never on technical merit.
  • Strong momentum in the federal and SLED verticals—the VARs we spoke with noted minimal impact from the partial government shutdown, with DoD and intelligence agency customers open for business with growing budgets.

Pure Storage

  • Continued positive feedback from VARs on sales momentum and market share gains.
  • We continue to hear of strong deal activity driven by on-premises storage refresh and the shift to all-flash arrays.
  • VARs consistently note that Pure is a great company to work with, with first-rate salespeople, products, and support.
  • Pure’s NVMe story (NVMe across its entire product line) and unique Evergreen model (eliminating the need for forklift upgrades) are driving product differentiation and getting Pure onto more customer short lists.
  • FlashBlade is starting to become material to key channel partners as use-cases become more apparent.
  • While Dell-EMC’s market position has stabilized, its product portfolio is still viewed as long in the tooth, which creates ongoing displacement opportunities for Pure.
  • Several VARs note success with Pure displacements of legacy IBM and HDS storage.

VMware

  • VAR feedback remains almost universally positive on VMware’s sales momentum, product positioning, and cloud story.
  • Several VARs point to particular momentum with vSAN (often via Dell VxRail) as the HCI category takes hold.
  • The overall number and size of commercial VMware Cloud on AWS (VMC) opportunities is ramping up nicely with some VARs noting that the lower node count for entry-level clusters, the ability to deploy mixed node types (all-flash or hybrid), and the recent price reduction are driving higher adoption.
  • Federal VARs are extremely positive on the launch of VMC on AWS GovCloud, with a very strong pipeline developing. Several VARs we spoke to believe this can be a game-changer for VMware in the federal market.

3D XPoint needs to be bigger and cheaper. But how will it get there?

Compared to NAND flash, Intel’s Optane DIMM product is low on capacity and high on price. Increasing its capacity should lower the cost/GB and so help address both issues. How might this be done?

Optane DC Persistent Memory is Intel and Micron’s 3D XPoint (3DXP) in a DIMM form factor with 128GB, 256GB and 512GB capacities. This is the first generation of 3DXP technology and features two stacked layers of single bit cells built with a 20nm process technology and a 128Gb die.

The cell size is 0.00176 µm², roughly half the size of a DRAM cell.

Capacity increases in NAND technology have come from reducing cell size through smaller process geometries – traditional lithographic pitch scaling – and then implementing multiple layers and increasing the number of bits per cell. The semiconductor industry is currently transitioning from 64 to 96 layers, for example. TLC NAND has 3 bits/cell and the newly introduced QLC has 4.

3D layering uses larger process sizes than the 14-15nm seen in 2D or planar NAND, moving to 40nm or so, and offsets the reduced number of cells per layer with a big increase in layer count.

So a quick comparison is that 3DXP is a 2-layer, single-bit/cell design built with a 20nm process, while current NAND is a 64-layer, 3 bits/cell design made with a 40nm process.

This suggests three possibilities for 3DXP capacity increases: shrinking the process size, adding more layers, and increasing the number of bits per cell.

My supposition is that Micron will introduce a gen 2 3DXP product when it unveils its own 3DXP products towards the end of this year.  It is already making gen 1 3DXP chips in the IMFT foundry and so has no technological need to wait until the end of 2019 to introduce its own branded 3DXP product. Ergo it is tweaking the technology.

How practical are the three capacity increase ideas identified here?

Process size shrink

Micron mentions process size shrink as a possibility but supplies no example numbers.

We could speculate that gen 2 3DXP could use a 15nm process, representing a 25 per cent reduction in size, with a consequent increase in the number of XPoint dice you could obtain from a wafer. 

But we don’t know the yield of XPoint chips from a wafer and so can’t estimate the effect of this.

Layering

Doubling the layer count from 2 to 4 would double the capacity of the 3DXP die from 128Gb to 256Gb. Quadrupling it to 8 layers would get us a 512Gb die and moving to 16 layers would produce a 1Tb die.  That represents an eightfold increase in capacity.
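The layer arithmetic above, gathered in one place (a simple scaling of the figures already quoted):

```python
# Die capacity as a function of layer count, scaling from the current
# two-layer, 128Gbit 3D XPoint die described above.
BASE_LAYERS = 2
BASE_DIE_GBIT = 128

for layers in (2, 4, 8, 16):
    die_gbit = BASE_DIE_GBIT * layers // BASE_LAYERS
    print(f"{layers:2d} layers -> {die_gbit} Gbit die")
# 2 -> 128, 4 -> 256, 8 -> 512, 16 -> 1024 Gbit (1Tbit)
```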

Micron and Intel will have learnt a lot about layering from 3D NAND and it seems reasonable to assume that increasing 3DXP’s layer count is feasible and could deliver significant capacity increases.

Multi-bit cells

The recording medium in 3DXP is phase-change memory and IBM has demonstrated TLC phase-change memory. A 3-bit cell would increase 3DXP capacity threefold, from a 128Gb die to a 384Gb die. That’s useful but less dramatic than a layer count increase.

Development priority

If the development priority is to increase capacity then bulking up the layer count seems to deliver most bangs for the buck. Tripling the cell bit count would deliver a threefold capacity increase. Reducing the cell size would produce an increase of two to three times.

I think it makes more sense to develop one method rather than two or three, as stacking layers on a known process geometry would be more practical than stacking layers on an untried and new, smaller process geometry.

These are my initial thoughts about 3DXP technology. What does the semiconductor industry think? Umm… Micron “has decided not to comment on this occasion.”  And Intel has yet to come back to us.  Let’s see what the analysts have to say.

Jim Handy

Jim Handy, a semiconductor analyst at Objective Analysis, sees 3DXP’s market situation “as a competitor to DRAM, not NAND flash. This lowers the bar a considerable amount.”

“Although 3D XPoint SHOULD be cheaper than DRAM based on the cell size (as you say) or, more simply, the number of bits per die area, it has failed to achieve that goal. This is because its production volume is too low. The “economies of scale” are in the way.

“NAND flash had the same issue, although it  wasn’t originally vying to compete against DRAM.  You can get twice as many SLC NAND bits on a certain size die as you can DRAM bits, when they are both made using the same process geometry.  This has always  been true.  Yet, NAND flash GB were more costly than DRAM until 2004, the year when NAND flash wafer production first came within an order of magnitude of DRAM wafers. ” 

Handy concludes: “NAND had to reach near-DRAM volumes to match DRAM costs.  3D XPoint must meet the same requirement to match DRAM costs.  Yet, there’s no real market for 3D XPoint unless it is sold for sub-DRAM prices.”

How do you square this circle?

According to Handy, “Micron knows exactly what 3D XPoint costs to produce, and the company is also fully aware of what it sells for.  Guess why their 3D XPoint products have been pushed out?  Guess why Intel’s Storage Devices Group has been losing money while other  NAND flash makers have been reaping handsome profits?

He supplies a chart to illustrate this point:

“Whenever 3D XPoint volume gets sufficiently high then all this will straighten out, and Intel will profit from a product that sells for maybe half as much as DRAM but costs less than that to produce.  It’s not there yet.

“Unfortunately for Intel, the DRAM market has started its 2019 price collapse (which we have been anticipating since late 2015) and this will require 3D XPoint prices to keep pace, worsening Intel’s losses.”

Handy’s view of 3DXP layering

Handy says 3DXP layering is more difficult than NAND layering: “There’s a BIG difference between 3D XPoint layers and NAND layers. With NAND you put down scads of layers (32, or 48, or even 64) and then do a single lithographic step, and then it’s usually at a relaxed process like 40nm.

“With 3D XPoint the device needs to be patterned with a more advanced (and costly) lithography, which is 20nm today, for every single layer of bits.  This is because you have to run conductors north-south on the first layer, then east-west on the next layer. 

“This not only adds phenomenally more cost than you have with a 3D NAND process, but it also complicates processing to the point where some process experts doubt that it will ever be economical to produce any kind of crosspoint chips with more than 4 bit layers. (See this 4DS presentation from the 2016 Flash Memory Summit.)”

That, if true, is a major difficulty.

Handy on adding 3DXP cell bits

“MLC and TLC are unlikely to be used for 3D XPoint because multibit technology is really slow.  Optane needs to support near-DRAM speeds.”

That’s true for Optane DIMMs but the SSD situation could differ:

“Perhaps MLC/TLC would be good for cost reductions on Optane SSDs, but the market acceptance for these has been very low so far, and I don’t expect for that to change. The NVMe interface hides most of 3D XPoint’s speed advantage. I documented that in a 2016 blog post.”

His overall conclusion is bleak: “In a nutshell, I don’t expect for 3D XPoint prices to ever approach NAND’s prices, and it can’t take advantage of any of the advancements in 3D NAND processing.  It  should, though, eventually be a cost-effective competitor to DRAM, but it’s unclear how long it will take to get there.

“Over the long term, if 3D XPoint succeeds, all systems will ship with very small DRAMs and a very large complement of 3D XPoint memory. Over the next 12-18 months, though, the big challenge will be to attain costs that allow a profit while DRAM prices continue to fall.”

Here’s a second chart Handy supplied, to show DRAM spot prices being a leading indicator of DRAM contract prices:

It looks as if a tremendous price drop is coming for DRAM contract prices.

Howard Marks

The founder and chief scientist at DeepStorage Net gave us a typically pithy comment: “I’m hopeful about XPoint 2.0. [having] 8 layers so the remains of IMFT can sell Xpoint at 2X flash prices not 5X … [That] would be nice.”

Rob Peglar

Rob Peglar, President of Advanced Computation and Storage LLC, said that prospects for increasing the capacity of 3DXP DIMMs centre on increasing die capacity: “This is really about the density of each 3DXP die more than the # of die they can squeeze onto a DIMM form factor (which is standardised by JEDEC.)”

He clarified our thinking on shrinking 3DXP process size: “You can be certain that Micron won’t divulge process sizes, yields, or any other fabrication-related detail.  Par for the course with the semiconductor vendors 🙂  

“Also, if you want to speculate, know that going from 20nm to 15nm is a huge decrease; it’s not 25 per cent, it’s nearly 50 per cent, because it’s the area (2D) of the cell, not just the width (1D), that is reduced.  For comparison, NAND went from 20nm to 18nm to 16nm, and it took quite a while to do that.  20nm to 15nm is nearly impossible.”
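A quick check of Peglar’s point (our arithmetic, not his): moving from 20nm to 15nm is a 25 per cent linear shrink, but cell area scales with the square of the feature size, so the area falls by roughly 44 per cent, which is why he calls it nearly 50 per cent.

```python
# Linear shrink vs area shrink for a 20nm -> 15nm process move.
old_nm, new_nm = 20, 15
linear_reduction = 1 - new_nm / old_nm          # 0.25   -> the "25 per cent" figure
area_reduction = 1 - (new_nm / old_nm) ** 2     # 0.4375 -> "nearly 50 per cent"
print(linear_reduction, area_reduction)
```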

Peglar also thinks adding bits to a 3DXP cell is problematic:

“Multi-bit is very difficult in crosspoint-style interconnections, as opposed to non-crosspoint-style interconnects.  While not outside the realm of possibility, it’s a (very) long shot.”  

The net conclusion is that increasing the layer count looks like the most practical route to increasing 3DXP density.

Conclusion

Increasing its layer count appears to be the only viable short-term way of increasing XPoint die (and hence DIMM) capacity. This is harder to do than adding layers in 3D NAND manufacturing. Unless 3DXP’s process size can be reduced, this may drastically limit 3DXP’s density increase possibilities.

Your occasional storage digest featuring Excelero, Commvault, SUSE Linux, MariaDB… and more!

Here’s a carton of storage news brought to you by the virtual drone delivery service at Blocks & Files.

Customers

PAIGE, a well-funded US start-up that uses AI to tackle cancer pathology, has signed up to Igneous Unstructured Data Management as-a-Service. It uses the service to protect and manage 8PB training datasets of anonymised tissue images. Igneous also provides a long-term protection and retention tier for the training datasets and post-processing results data. The datasets are used to train algorithms on a Pure Storage FlashBlade and NVIDIA GPU compute cluster.

Fox Sports Australia has bought Dell Isilon all-flash F800 arrays for outside broadcast vans and its data centre. They store data used in the production of live sports in 4K UHD video resolution. This will help the Foxtel channel broadcast in 4K resolution at twice the frame rate of Foxtel’s HD channels.

The UK’s Nottingham Building Society has bought Nexsan E18P storage to replace a six-year-old 32TB IBM DS3524 array, which couldn’t store enough data. The E18P can scale to 648TB and takes up half the rack space of the DS3524. Thank heavens for large-capacity disk drives.

Untold Studios, an independent creative studio based in the UK, is using the WekaIO Matrix file system on Amazon Web Services (AWS) to manage its visual effects (VFX) workflow. VFX artists can connect to Amazon EC2 G3 instances in the cloud and get the same performance as on-premises or local disk. Untold chose Matrix because it is cloud-native, built on AWS, and doesn’t require a big capex investment to spin up.

Shorts

Alluxio, a super-fast distributed storage system developer, has completed an $8.5m B-round, taking total funding to $16m. Veritas founder and former CEO Mark Leslie joined the round and will serve as a board observer. Alluxio says its technology is in production at Alibaba, Baidu, Barclays, Comcast, Development Bank of Singapore, ESRI, JD.com, Lenovo, Oracle, Paypal, Tencent and Wells Fargo.

Commvault’s IntelliSnap snapshot technology is validated to work with Cisco HyperFlex hyperconverged systems to protect application workloads, file systems and virtual machines (VMs). This complements the existing Commvault ScaleProtect with Cisco UCS offering.

Excelero, an all-flash NVMe-oF startup, claims revenues quadrupled in fy2018 and that it won three times as many customers as the previous year. It also claims that 2017 sales were four times higher than in 2016. It has released no figures.

Excelero says it sold thousands of licenses to hyperscale web companies in 2018. Revenues were split evenly between them and OEM-led deals by Supermicro and Lenovo, and also regional resellers and system integrators including Arcastream, CMA and Pixit Media.

MariaDB, the open source database, has released a version for data warehousing and analytics. MariaDB AX can be deployed on premises or across any public, private or hybrid cloud topology. 

Rambus has bought the assets of failed Diablo Technologies. Diablo was a pioneer in developing NVDIMMs, such as Memory1, but shut down in December 2017 while it was being sued by Netlist, another NVDIMM developer, for patent infringements.

Redis Labs announced that Redis Enterprise is available for Intel Optane DC persistent memory, across multiple cloud services or as downloadable software.  The software supports products from companies participating in Intel’s hardware beta. Two memory modes are available: using Memory Mode, Redis Enterprise-powered applications can achieve comparable performance to DRAM with cost benefits. Developers get greater cost savings using AppDirect mode with Redis Enterprise, while still delivering faster performance than NVMe.

SUSE Linux is supporting Intel Optane DC persistent memory with SAP HANA, running on SUSE Linux Enterprise Server for SAP Applications. Users can optimise their workloads by moving and maintaining larger amounts of data closer to the processor and minimising the higher latency of fetching data from system storage. The support comes in SUSE Linux Enterprise 12 Service Pack 4.

Zadara says its virtual private storage arrays are available to customers of VMware Cloud on AWS.

People

Bill Wohl left his post as Chief Communications Officer at Commvault in December 2018 to join United Rentals as Head of Brand and Communications.

Joe Consul has joined Datera as its CFO, reporting to CEO Guy Churchward.

Marlena Fernandez has been hired by HCI vendor Scale Computing as its VP for Marketing. She has worked for Seagate, Symantec and Oracle.

Enterprise Storage in 2019: Keep those industry predictions rolling

Updated on January 21, 2019 with more predictions from Pure Storage, IBM and Maxta following the addition of Archive 360, Arcserve, NetApp and StorPool predictions earlier in January.

Original introduction: It’s the most wonderful time of the year for crystal ball gazing, so here are six predictions about IT storage trends in 2019. There are good ideas here from our industry contributors – but do excuse the sometimes liberal application of tincture of marketing.

Purely Pure’s predictions

We have edited these predictions from Patrick Smith, Pure’s EMEA CTO, for brevity.

  1. Hybrid cloud improves: The arrival of a better hybrid architecture will create an environment that allows enterprises to combine the agility and simplicity of the public cloud with the enterprise functionality of on-prem. In this hybrid cloud world, applications can be developed once and deployed seamlessly across owned and rented clouds.
  2. Automated, intelligent and scalable storage provisioning makes deploying large-scale container environments to an enterprise data centre possible. As a result of the development of container storage-as-a-service, in 2019, we believe that the new normal will be running production applications in containers irrespective of whether they are state-less or data-rich. Container adoption will increasingly be driven by the demand for cost-effective deployments into hybrid cloud environments with the ability to flexibly run applications either on-premises or in the public cloud.
  3. We expect NVMe over Fabrics to move from niche deployments and take a step towards the mainstream next year. It makes everything faster – databases, virtualised and containerised environments, test/dev initiatives and web-scale applications. With price competitive NVMe-based storage providing consistent low latency performance, the final piece of the puzzle will be the delivery of an end-to-end capability through the addition of NVMe-oF for front-end connectivity.

Immense Indigo’s indicators for 2019

What does Eric Herzog, VP of Product Marketing and Management, IBM Storage Systems, think we should watch out for in 2019? We’ve prepared a precis of his thoughts.

  1. All primary storage workloads should sit on flash. NVMe will also expand within the storage industry as a high-performance protocol: in storage systems, servers, and storage area network fabrics.
  2. Storage will be ‘cloudified’ with the capability to transparently move data from on-premises configurations to public clouds and across private cloud deployments. You will be able to enjoy the application and workload SLAs of a private cloud and also the savings public clouds drive for backup and archival data.
  3. Data protection goes beyond backup and restore to be focused on how you can leverage secondary storage datasets (backups, snapshots, and replicas) to be used for DevOps, analytics and testing workloads.
  4. Storage processes should be automated across the board; for storage admins, DevOps, Docker experts, application owners, server and virtual machine administrators, and more, using APIs for automation and self-service.
  5. To enjoy the benefits of AI, your storage must have the ultimate in performance, availability and reliability. AI, at its core, requires massive amounts of data to be processed accurately and reliably, 365×24. Storage is essential for this.

Maxta’s 2019 predictions

Hyper-converged infrastructure (HCI) software supplier Maxta’s CEO and founder Yoram Novick predicts that:

  1. HCI will add software-defined storage capabilities to bypass the cluster size limitation imposed by some virtualisation software. HCI servers will be partitioned into “App Servers” (those with application VMs, virtualisation software, and possibly storage) and “Data Servers” (those with storage only) under a common management framework, so the same HCI software can scale to thousands of servers in the same cluster.
  2. Hybrid HCI will evolve to running the same applications on premises and in the public cloud. Using replication, recovery in the public cloud can be instantaneous with a near-synchronous recovery point and five-nines availability.
  3. HCI will develop to support containers as well as virtual machines with an Abstraction Converged Infrastructure, or ACI. (Hint; Maxta will do this.)
  4. Hyperconvergence appliance vendors will not provide prospects with all the benefits of a true software approach – no hardware lock-in – as HCI software vendors do.

We edited his predictions for brevity.

Three Archive360 predictions

Archive360 archives data to the Azure cloud. It reckons these things will happen in 2019:

1. To achieve defensible disposition of live data and ongoing auto-categorization, more companies will turn to a self-learning or “unsupervised” machine learning model, in which the program literally trains itself based on the data set provided. This means there will be no need for a training data set or training cycles.  Microsoft Azure offers machine-learning technology as an included service. 

2. Public cloud Isolated Recovery will help defeat ransomware. It refers to the recovery of known good/clean data and involves generating a “gold copy” pre-infection backup. This backup is completely isolated and air-gapped to keep the data pristine and available for use. All users are restricted except those with proper clearance. WORM drives will play a part in this.

3. Enterprises will turn to cloud-based data archiving in 2019 to respond to eDiscovery requests in a legally defensible manner, with demonstrable chain of custody and data fidelity when migrating data.

Three Arcserve predictions for 2019

1. Public cloud adoption gets scaled back because of users facing unexpected and significant fees associated with the movement and recovery of data in public clouds. Users will reduce public cloud use for disaster recovery (DR) and instead, use hybrid cloud strategies and cloud service providers (CSPs) who can offer private cloud solutions with predictable cost models.

2. Data protection offerings will incorporate artificial intelligence (AI) to predict and avert unplanned downtime from physical disasters before they happen. DR processes will get automated, intelligently restoring the most frequently accessed, cross-functional or critical data first and proactively replicate it to the cloud before a downtime event occurs.

3. Self-managed disaster recovery as a service (DRaaS) will increase in prominence as it costs less than managed DRaaS. Channel partners will add more self-service options to support growing customer demand for contractually guaranteed recovery time and point objectives (RTOs/RPOs) and expand their addressable market, free of the responsibility of managing customer environments.

NetApp’s five  predictions for 2019

These predictions are in a blog, which we have somewhat savagely summarised.

1. Most new AI development will use the cloud as a proving ground as there is a rapidly growing body of AI software and service tools there.

2. Internet of Things (IoT) edge processing must be local for real-time decision-making. IoT devices and applications – with built-in services such as data analysis and data reduction – will get better, faster and smarter about deciding what data requires immediate action, what data gets sent home to the core or to the cloud, and what data can be discarded.

3. With containerisation and “server-less” technologies, the trend toward abstraction of individual systems and services will drive IT architects to design for data and data processing and to build hybrid, multi-cloud data fabrics rather than just data centres. Decision makers will rely more and more on robust yet “invisible” data services that deliver data when and where it’s needed, wherever it lives, using predictive technologies and diagnostics. These services will look after the shuttling of containers and workloads to and from the most efficient service provider solutions for the job.

4. Hybrid, multi-cloud will be the default IT architecture for most larger organisations while others will choose the simplicity and consistency of a single cloud provider. Larger organisations will demand the flexibility, neutrality and cost-effectiveness of being able to move applications between clouds. They’ll leverage containers and data fabrics to break lock-in.

5. New container-based cloud orchestration technologies will enable better hybrid cloud application development. It means development will produce applications for both public and on-premises use cases: no more porting applications back and forth. This will make it easier and easier to move workloads to where data is being generated rather than what has traditionally been the other way around.

StorPool predicts six things

1. Hybrid cloud architectures will pick up the pace in 2019. But, for more demanding workloads and sensitive data, on-premise is still king. I.e. the future is hybrid: on-premise takes the lead in traditional workloads and cloud storage is the backup option; for new-age workloads, cloud is the natural first choice and on-prem is added when performance, scale or regulation demands kick-in.

2. Software-defined storage (SDS) will gain majority market share over the next 3 to 5 years, leaving SAN arrays with a minority share. SDS buyers want to reduce vendor lock-in, make significant cost optimisations and accelerate application performance.

3. Fibre Channel (FC) is becoming an obsolete technology and adds complexity in an already complex environment, being a separate storage-only component. In 2019, it makes sense to deploy a parallel 25G standard Ethernet network, instead of upgrading an existing Fibre Channel network. At scale, the cost of the Ethernet network is 3-5 per cent of the whole project and a fraction of cost of a Fibre Channel alternative.

4. We expect next-gen storage media to gain wider adoption in 2019. Its primary use-case will still be as cache in software-defined storage systems and database servers.

On a parallel track, Intel will release large capacity Optane-based NVDIMM devices, which they are promoting as a way to extend RAM to huge capacities, at low cost, through a process similar to swapping. The software stack to take full advantage of this new hardware capability will slowly come together in 2019.

There will be a tiny amount of proper niche usage of Persistent memory, where it is used for more than a very fast SSD.

5. ARM servers enter the data centre. However this will still be a slow pickup, as wider adoption requires the proliferation of a wider ecosystem. The two prime use-cases for ARM-based servers this year are throughput-driven, batch processing workloads in the datacenter and small compute clusters on “the edge.”

6. High core-count CPUs appear. Intel and AMD are in a race to provide high core-count CPUs for servers in the datacenter and in HPC. AMD announced its 64-core EPYC 2 CPU with an overhauled architecture (9 dies per socket vs EPYC’s 4 dies per socket). At the same time, Intel announced its Cascade Lake AP CPUs, which are essentially two Xeon Scalable dies on a single (rather large) chip, scaling up to 48 cores per socket. Both products represent a new level of per-socket compute density. Products will hit the market in 2019.

While good for the user, this is “business as usual” and not that exciting.

The disappearing data lake

Don Foster, senior director at Commvault: “Not fully knowing or understanding what is being placed in a data lake, why it is stored, and if it is even of proper data integrity will have proved untenable and inefficient for mining and insight gathering. The data lake will begin to disappear in favor of technology which can discover, profile, map data where it lives, reducing storage and infrastructure costs while implementing data strategies that can truly provide insights to improve operations, mitigate risks and potentially lead to new business outcomes.”

Tools for the multi-cloud

Jon Toor, CMO at Cloudian: “IBM’s acquisition of Red Hat will reverberate throughout 2019, giving enterprises more options for designing a multi-cloud strategy and highlighting the importance of data management tools that can work across public cloud, private cloud and traditional on-premises environments.” 

Capacity

Gary Watson, CTO and founder of Nexsan: “Spinning hard drives are getting bigger, and they are still 5-10x cheaper than flash per terabyte…The reality is that the future in 2019 is most likely to be built on a world of both flash and spinning hard drives.

“Next year, we will … see capacity become a major concern for organisations, fuelled by data growth. … As we store more and more data moving forward, it’s important to protect all of it, and the cloud may not always be suitable. Finding the perfect balance between cloud and on-premises storage, for short-term and long-term data alike, will drive storage needs for the data boom and software growth in 2019.” 

A permanent home for edge computing

Alan Conboy, Office of the CTO, Scale Computing: “According to Statista the global IoT market will explode from $2.9 trillion in 2014 to $8.9 trillion in 2020. That means companies will be collecting data and insights from nearly everything we touch from the moment we wake up and likely even while we sleep. 

“In 2019, edge computing will require a new level of intelligence and automation to make those platforms practical. Where once only a smidge of data was created and processed outside a traditional data center, we will soon be at a stage where nearly every piece of data will be generated far outside the data center. This amount of data will create a permanent home for edge computing.” 

Rethinking converged solutions

Gijsbert Janssen van Doorn, technology evangelist, Zerto: “In 2018 we saw hardware vendors trying to converge the software layer into their product offering. However, all they’ve really created is a new era of vendor lock-in – a hyper-lock-in in many ways. 

“In 2019 organisations will rethink what converged solutions mean. As IT professionals increasingly look for out-of-the-box ready solutions to simplify operations, we’ll see technology vendors work together to bring more vendor-agnostic, comprehensive converged systems to market.”

Veeam has a little list

Data protector Veeam has made these predictions for 2019.

  • Multi-Cloud usage and exploitation will rise
  • Flash memory supply shortages, and prices, will improve in 2019
  • Predictive Analytics, based on telemetry data, will become mainstream and ubiquitous
  • The “versatalist” (or generalist) admin role will increasingly become the new operating model for the majority of IT organizations
  • The top 3 data protection vendors in the market continue to lose market share in 2019 (thought by us to be Commvault, Dell EMC, and Veritas)
  • The arrival of the first 5G networks will create new opportunities for resellers and CSPs to help collect, manage, store and process the higher volumes of data.

Quantum booted off NYSE

The New York Stock Exchange has delisted the storage vendor Quantum for failure to file accounts.

The company’s self-inflicted accounting mess is to blame.  Quantum wrongly recognised up to $60m in revenues for fiscal 2015-2017, rendering incorrect many filed quarterly SEC reports. 

The stock exchange suspended Quantum on January 15 after the company said it would miss its deadline to refile and get up to date. Quantum stock now trades under the symbol “QMCO” on the OTC Pink exchange.

Quantum says there is no risk to financial stability as it completed a $210m refinancing package last month.

Jamie Lerner, Quantum chairman and CEO, said in a statement that Quantum’s delisting does “not reflect on the financial health of the company, which, as indicated by our recently announced refinancing, continues to improve. We will work diligently to complete our required filings and resolve our delisting as quickly as possible.”

He did not provide a timescale for completing the required filings.