
How does NetApp’s MAX Data use Storage-Class Memory? Let’s find out

In this article we explore how NetApp’s MAX Data uses server-based storage class memory to slay the data access latency dragon.

But first, a brief recap: MAX Data is server-side software, installed and run on existing or new application servers to accelerate applications. Data is fetched from a connected NetApp array and loaded into the server’s storage class memory. 

Storage class memory (SCM), also called persistent memory (PMEM), is a byte-addressable, non-volatile medium. Access speed is faster than flash but slower than DRAM. An example is Intel’s Optane, built using 3D XPoint memory technology. This is available in SSD form (Optane DC P4800X) and in NVDIMM form (Optane DC Persistent Memory).

For comparison, the Optane SSD’s average read latency is 10,000 nanoseconds; the Optane NVDIMM is 350 nanoseconds; DRAM is less than 100 nanoseconds; and a SAS SSD is about 75,000 nanoseconds.

Servers can use less expensive SCM to bulk out DRAM. This way IO-bound applications run faster because time-consuming storage array IOs are reduced in number.

ScaleMP’s Memory ONE unifies a server’s DRAM and SCM into a single virtual memory tier for all applications in the server. NetApp’s MAX Data takes a different approach, restricting its use somewhat. 

We talked to Bharat Badrinath, VP, product and solutions marketing at NetApp, to find out more about MAX Data.

MAX Data memory tier

MAX Data supports a memory tier of DRAM, NVDIMM and Optane DIMMs, backed by a storage tier of NetApp AFF array LUNs running ONTAP 9.5.

According to Badrinath, this setup “allows us to use a ratio of 1:25 between the memory tier and the storage tier so we can accelerate a large existing data set to memory speeds without requiring the full application fit into memory.”
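
For a back-of-the-envelope feel for what a 1:25 ratio implies, here is a minimal sizing sketch in Python. The server memory figures are illustrative assumptions, not NetApp sizing guidance.

```python
# Back-of-the-envelope sizing for a 1:25 memory-to-storage tier ratio.
# The server memory figures below are illustrative, not NetApp guidance.
MEMORY_TO_STORAGE_RATIO = 25  # 1TB of memory tier fronts 25TB of storage tier

def storage_tier_covered(memory_tier_tb: float) -> float:
    """Return the ONTAP LUN capacity (TB) a given memory tier can accelerate."""
    return memory_tier_tb * MEMORY_TO_STORAGE_RATIO

# Hypothetical two-socket server: 0.5TB of DRAM plus 1.5TB of Optane DC PM.
memory_tier_tb = 0.5 + 1.5
print(f"Memory tier: {memory_tier_tb:.1f} TB")
print(f"Storage tier it can front: {storage_tier_covered(memory_tier_tb):.0f} TB")
```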

The MAX Data server architecture

NetApp can protect memory tier data with MAX Snap and MAX Recovery aligned to ONTAP based Snapshots, SnapMirror and the rest of the ONTAP data protection environment.

MAX Data protection scheme

Server details

MAX Data runs on Linux today in bare metal configurations – either a single or dual-socket server with up to 128 vCPUs. Future MAX Data versions will support hypervisor configurations.

Any Intel x86 server can be used if DRAM is used as the memory tier. If Optane DIMMs are used, the server CPU must support them, which means Cascade Lake AP processors for now.

MAX Data filesystem

MAX Data provides a file system, MAX FS, that spans the PMEM and the storage tier. In this case that means an external NetApp storage array, connected to the server via NVMe over Fabrics.

Applications that access this file system get near-instant reads and writes – single-digit microseconds – so long as the data is in the memory tier.

MAX Data supports the POSIX API as well as a memory API, so applications can use block/file system semantics or memory semantics.

Applications that use a POSIX interface can run unmodified on MAX Data.
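
As a rough illustration of the two access styles, here is a minimal Python sketch. The /maxfs mount point and file name are hypothetical, and the calls are plain POSIX file I/O and memory mapping rather than any NetApp-specific API.

```python
# Minimal sketch of the two access styles an application might use against a
# MAX FS mount point. The /maxfs path and file name are hypothetical; this is
# ordinary POSIX file I/O and memory mapping, not a NetApp-specific API.
import mmap
import os

path = "/maxfs/orders.dat"  # hypothetical file on the MAX Data file system

# 1. POSIX-style access: unmodified applications just read() and write().
with open(path, "rb") as f:
    header = f.read(4096)

# 2. Memory-style access: map the file and address its bytes directly,
#    which is where byte-addressable persistent memory pays off.
with open(path, "r+b") as f:
    size = os.fstat(f.fileno()).st_size
    with mmap.mmap(f.fileno(), size) as mapped:
        first_record = bytes(mapped[:64])   # load
        mapped[0:4] = b"\x01\x00\x00\x00"   # store
        mapped.flush()                       # push changes towards persistence
```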

Inside and outside

Server-side applications that do not access the MAX Data filesystem will not see, nor gain any benefit from the MAX Data-owned PMEM in the server.

There is also no PMEM inside NetApp’s arrays with this scheme. We think the company will add PMEM to its arrays in due course and so decrease their data access latency. HPE has done this with its 3PAR arrays, and Dell EMC has baked this into its 2019 plans for PowerMAX arrays.

Note that MAX Data will support any Optane DIMM capacity available on the market.

External storage sales fall behind in bull market

IDC’s Q3 2018 worldwide storage tracker shows the overall storage market is booming except for external, shared array / filer technology.

The total storage numbers reported are summarised in The Register. We tabulate IDC’s numbers, since 2017’s third quarter, below and chart them to provide a snapshot of the market and individual performance.

First, the total storage market.

IDC publishes revenue figures for only the top five-to-seven suppliers each quarter – hence the gaps in the chart for some vendors.

Dell and HPE dominate named vendors but in turn are dominated by the Original Design Manufacturers (ODMs). NetApp shows signs of leaving the pack of other suppliers behind.

The total storage number chart shows good revenue growth.

Next we look at external storage on its own:

Charting the suppliers produces this graph:


Dell is top dog by some margin as you can see from the revenue lines of its biggest competitors: NetApp, HPE, Hitachi and IBM.

How is the external storage market changing quarter by quarter? Let’s add its line to the total storage trends chart for side-by-side comparison.

The external market is growing and loosely parallels the total storage curve, but its growth is anaemic compared with the overall storage market.

This chart shows that, after two quarters of decline in the first half of 2018, the external market returned to quarter-on-quarter growth in the third quarter. Each 2018 quarter’s external storage revenue is also higher than the equivalent 2017 quarter, so the market does show year-on-year growth.

Next we look at the trend of quarterly external storage revenue as a percentage of all storage.

This decline probably shows the influence of hyperconverged infrastructure systems and other server SANs. This also helps explain the relatively slow growth of the external storage market as a whole.

We can also acknowledge that some data that would have been stored in external arrays has gone to the public cloud. This is a second reason for the percentage decline.

To sum this all up, the overall storage market has a healthy rate of growth while the external storage market is looking, well, dampened.

Seagate declares war on tape

Seagate wants to take data stored on tape and migrate it to the cloud for faster access. Its Lyve Data Services use Tape Ark software and technology as a backup data transfer magic carpet.

Out of nowhere, Seagate has announced Lyve Data Services as a tape data migration offering. It has teamed up with Tape Ark, an Australian company based in Perth, that specialises in tape recovery and tape data migration, for instance, getting data off tape and into AWS.

It supports most tape formats, meaning ½ inch reels and cassettes, and ¼ inch, 8mm and 4mm tape cartridges, plus geophysical formats, video and film tapes as well.

Supported enterprise IT tape formats include AIT, DAT, DLT and Super DLT, Exabyte, LTO-1 through LTO-8, Mammoth, QIC and Mini QIC, and VXA. There is a full list here.

Tape Ark has facilities in the USA, South America, Europe, India, the Far East and Australia. It currently spans 53 AWS Availability Zones within 18 Geographic Regions.

Tape Ark sites.

A FAQ on the Tape Ark website says it can migrate any number of tapes to the cloud: “We can move data to the cloud faster than anyone else and on a massive scale. No matter what the tape type or data format.”

The migration is nominally done at no cost, with Tape Ark instead charging a monthly fee for access to the data.

AWS customers can access the data via a Tape Ark client access portal. Customers can log in, search for their data or tape, select the data to be restored, and either do a direct restore or request a service where Tape Ark does the restore, indexing or eDiscovery for them.

This is a restore-as-a-service scheme, and customers can order their legacy data via their Tape Ark portal and receive the files, folders or entire tapes through a secure download or FTP. 

Tape Ark claims cloud storage is cheaper than off-site tape storage: “Offsite tape storage pays no attention to the volume of data you store, so legacy tapes (greater than 7 years old) tend to be more cost-effective in the cloud than on a shelf in an air-conditioned room, where they are inaccessible and your hardware to read them is no longer supported.”

The Seagate angle

With Seagate the data recovered from tape can be sent to “large public cloud platforms of the customer’s choice.” 

Tape Ark says it: “will expand its operations into Seagate facilities globally, establishing scalable, mass ingest facilities capable of processing tens of thousands of tapes per day – by far the largest facilities of the kind anywhere in the world. These new facilities, in Oklahoma and Amsterdam, will become Tape Ark’s new North American and European operations base.” 

Seagate claims that its Lyve Data Services have advanced data management expertise, without explaining what this means.

Ted Oade, Seagate director of product marketing, dramatises the background: “There are over one billion stored tapes on the planet, and the value of the data on those tapes is incalculable.”

Just a reminder: incalculable value doesn’t necessarily mean all this data is valuable.

Seagate’s marketing pitch is that customers can liberate zombie data trapped in tape vaults, on tapes that can be decades old, and bring it to the public cloud where it can be searched, indexed and analysed, and become useful.

They can also stop paying to maintain any no-longer supported tape formats and drives.

Seagate doesn’t explicitly say but we imagine it will split the “fee on top of your monthly cloud storage bill to ingest the data, provide access and deliver analytics and other services,” with Tape Ark.

Datera goes Churchward

Startup Datera has a new CEO – exec chairman Guy Churchward who joined the board just four months ago.

Guy Churchward

Datera provides block-access, scale-out, server-based storage with its Elastic Data Fabric software running in x86 servers. This is supplied by Datera or bought off the shelf from Cisco, Dell, Fujitsu, HPE and Supermicro (see the hardware compatibility list here).

This software builds a virtual SAN which aggregates Optane, SATA and NVMe flash and disk storage. Features include auto-tiering and auto-migration of live data. It supports S3 object storage access and containerised applications, and can run either on premises, on bare metal or virtual servers, or in the cloud.

Datera says its software provides a scale-out Data Services Platform which delivers millions of IOPS with sub-200μs latency. There is more information here.

Datera’s Data Services Platform

The company was founded in 2013 and received $40mn in A-round funding in 2016. We wouldn’t be surprised if a second funding round took place to help expand the company’s infrastructure and development.

The previous CEO, co-founder Marc Fleishmann, steps sideways to become President. Churchward was previously CEO at Data Torrent and an exec at Dell EMC, NetApp and BEA Systems before that. He will lead Datera’s intended market expansion, while Fleishmann will focus more on building strategic relationships, driving Datera’s vision, and refining its technical foundations. 

Intel confirms Optane DIMM and SSD speed

Intel has revealed Optane DIMM and SSD read latency numbers.

Optane DIMMs and SSDs use 3D XPoint memory, which is non-volatile, faster than NAND but slower than DRAM, and based on Phase Change Memory cells.

Intel 3D XPoint graphic

Intel produces Optane DC P4800X SSDs and Optane DC Persistent Memory (PM), delivered in DIMM format.  Optane PM delivers cache line (64B) reads to the CPU and “average idle read latency with Optane persistent memory is expected to be about 350 nanoseconds when applications direct the read operation to Optane persistent memory, or when the requested data is not cached in DRAM.”

In comparison an Optane DC SSD “has an average idle read latency of about 10,000 nanoseconds (10 microseconds).”

Optane DC P4800X

That makes the Optane DIMM 28.6 times faster than the Optane SSD.

Back in the NAND world a Micron 9100 NVMe SSD’s write latency is around 30,000 nanoseconds and a Micron P420m SAS SSD MLC write takes 75,000 nanoseconds.

Intel points out that memory sub-system responsiveness is expected to be identical to DRAM (<100 nanoseconds) in cases where the requested data is in DRAM, either cached by the CPU’s memory controller or directed by the application. 
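
A quick sanity check of how those published figures relate to one another, using our own arithmetic on the numbers quoted above:

```python
# Relative read-latency arithmetic using the figures quoted above (nanoseconds).
latencies_ns = {
    "DRAM": 100,                        # "less than 100ns", treated as 100ns here
    "Optane DC Persistent Memory": 350,
    "Optane DC P4800X SSD": 10_000,
    "SAS SSD (Micron P420m write)": 75_000,
}

dimm = latencies_ns["Optane DC Persistent Memory"]
for medium, ns in latencies_ns.items():
    print(f"{medium:30} {ns:>7,} ns  ({ns / dimm:5.1f}x the Optane DIMM)")

# The Optane SSD vs Optane DIMM ratio works out at 10,000 / 350 = 28.6x.
```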

IBM enlists Nvidia for full-stack AI build-out

The 9:3 configuration refers to 9 x DGX-1 servers and 3 x Spectrum Scale NVMe all-flash appliances.

IBM has introduced Spectrum AI, a souped up integrated set of server, GPU and storage components. The reference architecture includes:

  • IBM Elastic Storage Server or Spectrum Scale NVMe all-flash appliance, which is due in 2019 and runs Linux
  • Three to nine Nvidia DGX-1 GPU servers (8 x Tesla V100 Tensor Core GPUs) functioning as Spectrum Scale client nodes
  • Spectrum Scale RAID v5
  • Spectrum Discover metadata management software
  • Mellanox 100Gbit/s EDR with SB7700 or SB7800 series fabric switch
  • 10GbitE management network
  • InfiniBand networking
  • Optional IBM Cloud Object Storage

The Spectrum Scale NVMe appliance is a 2U box storing up to 300TB. Three appliances working together can output up to 120 GB/sec throughput.
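
Some rough arithmetic on what those figures imply for the full 9:3 configuration – our calculation, not an IBM specification:

```python
# Rough arithmetic on the 9:3 SpectrumAI configuration using the figures above.
# This is our own calculation, not an IBM specification.
appliances = 3                        # Spectrum Scale NVMe all-flash appliances
dgx1_servers = 9                      # Nvidia DGX-1 GPU servers
tb_per_appliance = 300                # up to 300TB in a 2U box
gbps_for_three_appliances = 120       # combined throughput, GB/sec

print(f"Total flash capacity: {appliances * tb_per_appliance} TB")
print(f"Per-appliance throughput: {gbps_for_three_appliances / appliances:.0f} GB/sec")
print(f"Throughput per DGX-1: {gbps_for_three_appliances / dgx1_servers:.1f} GB/sec")
```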

Read the IBM SpectrumAI infrastructure brief here, and a more detailed description with training results here.

These results include Alexnet, Resnet-152, Resnet-50, Inception-3 and Lenet model training runs and IBM charts the results.

Training run

The Resnet-50 and Resnet-152 image recognition training model results enable us to compare SpectrumAI with other AI reference architectures. Examples include AIRI from Pure Storage, DDN’s A3I, A700 and A800-based ones from NetApp, Dell EMC AI-Ready Solutions, Cisco C480 M5, and IBM’s AC922 Power server.

A word of caution, though. IBM supplies charts showing the relative performance at 1, 4 and 8 GPU levels for the two Resnet tests, and not actual numbers. 

Also, we have estimated the numbers from the charts and acknowledge that this introduces a degree of error. But it gives us the opportunity to make the cross-supplier comparison.

IBM has the leading Resnet-152 system at the 4 and 8 GPU levels. We don’t have 2-GPU numbers from IBM; missing bars on the chart signify that other vendors haven’t supplied numbers at particular GPU levels.
IBM’s SpectrumAI leads the Resnet-50 results with 1, 4 and 8 GPUs. As above, missing bars signify vendors haven’t supplied numbers at particular GPU levels.

IBM has published training results for SpectrumAI systems with multiple DGX-1s and these show a broadly linear increase in performance, as DGX-1s are added. We are unable to compare these with other vendors as we don’t have their test run results for multiple DGX-1 servers.

But on these results IBM’s SpectrumAI reference architecture comes out on top for Resnet-152 and Resnet-50 training model performance.

HPE’s Neri hails storage market share gains

After growing faster than the storage market in its latest quarter, HPE expects to keep on doing that in 2019, buoyed by its InfoSight management tool and new products.  

To give you a flavour of HPE’s view of its storage performance, here is an extract of CEO Antonio Neri’s comments from last week’s earnings call.

“We grew 13 per cent for the full year, which is faster than the market, so we expect to gain share in external storage and that’s driven by a cohesive strategy with both Nimble and 3PAR enabled by a phenomenal platform called HPE InfoSight which provides predictive analytics for storing and managing that data and last week I announced that we’re extending that platform now to the rest of the on-premises infrastructure including both compute and networking.

“The customer sees the value of predictive analytics; fix the problems before they happen, and obviously now we keep adding features to both the InfoSight and the two platforms; both Nimble and 3PAR with the availability of new flash storage and so forth. 

“If you [put] the hyperconverged part of that on top of storage, well, actually we’ll be growing almost 20 per cent, 19 per cent and so the combination of different infrastructure for different use cases plus our intellectual property is paying off and again. 

“We expect that to continue to be the case in 2019 and beyond because we have some exciting solutions that are coming to market, and some of them we announced last week at HPE Discover in Madrid.”

If HPE gains market share in 2019 others will lose. Our best guess is that Dell EMC and NetApp are strongly positioned for 2019 and the losers will be found elsewhere.  Of course competitors could still grow revenues  and lose market share if, as expected, the storage market grows overall.


18 storage news bites pack very pleasant crunch

Eighteen storage news bites today for your delectation. Four are light bites and 14 are snacks. Cloudiness, containerisation, data backup and predictions feature in the light bites section of this week’s collection of storage news stories.

The predictions are for 2019 and come from Western Digital. They illustrate how far Western Digital is shifting its expansionary gaze away from basic disk drives, NAND chips and SSDs and towards systems.

The stories are organised alphabetically and we start with a backup supplier.

NAKIVO automates backup and improves recovery

NAKIVO has released NAKIVO Backup & Replication v8.1, which provides more automation and universal recovery of application objects.

Automation comes from NAKIVO introducing policy-based data protection for VMware, Hyper-V and AWS infrastructure. Customers can create backup, replication, and backup copy policies to fully automate data protection processes.

A policy can be based on VM name, size, location, power state, tag, or a combination of multiple parameters. Once set up, policies can regularly scan the entire infrastructure for VMs that match the criteria and protect the VMs automatically.
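
To make the policy idea concrete, here is an illustrative Python sketch of criteria-based VM matching. The rule names and VM records are hypothetical; this is not NAKIVO’s actual engine.

```python
# Illustrative sketch of policy-style VM matching on name, size, power state
# and tags. The rules and VM records are hypothetical; this is not NAKIVO code.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    size_gb: int
    power_state: str
    tags: set = field(default_factory=set)

def matches(vm: VM, policy: dict) -> bool:
    """Return True if a VM satisfies every criterion defined in the policy."""
    if "name_contains" in policy and policy["name_contains"] not in vm.name:
        return False
    if "min_size_gb" in policy and vm.size_gb < policy["min_size_gb"]:
        return False
    if "power_state" in policy and vm.power_state != policy["power_state"]:
        return False
    if "required_tags" in policy and not policy["required_tags"] <= vm.tags:
        return False
    return True

policy = {"name_contains": "sql", "power_state": "on", "required_tags": {"production"}}
inventory = [
    VM("sql-prod-01", 500, "on", {"production"}),
    VM("sql-dev-02", 200, "on", {"dev"}),
    VM("web-prod-03", 100, "off", {"production"}),
]
to_protect = [vm.name for vm in inventory if matches(vm, policy)]
print(to_protect)  # ['sql-prod-01']
```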

NAKIVO’s Universal Object Recovery enables customers to recover any application objects back to source, a custom location, or even a physical server. Customers can recover individual items from any file system or application by mounting VM disks from backups to a target recovery location, with no need to restore the entire VM first.

NAKIVO Backup & Replication runs natively on QNAP, Synology, ASUSTOR, Western Digital, and NETGEAR storage systems, and thereby delivers a claimed performance advantage of up to 2X. It supports high-end deduplication appliances including Dell EMC Data Domain and NEC HYDRAstor.

Portworx powers up multi-cloud containerisation

Stateful container management supplier Portworx  has announced the availability of PX-Enterprise 2.0, featuring PX-Motion and PX-Central. 

According to Portworx’s 2018 Annual Container Adoption Survey, one-third of respondents report running containers in more than one cloud, with 23 per cent running in two clouds and 13 per cent running in three clouds. 

However, 39 per cent of respondents cite data management as one of their top three barriers to container adoption. Also, 34 per cent cite  multi-cloud and cross-data centre management as a barrier. 

Enterprises must be able to move their application data across clouds safely and securely without negatively affecting operations. Portworx says it integrates with more container platforms than any other solution, including Kubernetes services running on AWS, Microsoft Azure, Google Cloud, IBM Cloud, and Pivotal as well as Red Hat OpenShift, Mesosphere DC/OS, Heptio and Rancher. 

Murli Thirumale, co-founder and CEO of Portworx, said: “Kubernetes does not inherently provide data management. Without solving data mobility, hybrid- and multi-cloud Kubernetes deployments will never be mainstream for the vast majority of enterprise applications.” PX-Motion and PX-Central have been designed to fix that.

PX-Motion allows Kubernetes users to migrate application data and Kubernetes pod configuration between clusters across hybrid- and multi-cloud environments. This enables entirely automated workflows like backup and recovery of Kubernetes applications, blue-green deployments for stateful applications, and easier reproducibility for debugging errors in production.

PX-Central is a single pane of glass for management, monitoring and metadata services across multiple Portworx clusters in hybrid- and multi-cloud environments built on Kubernetes. 

With PX-Central, enterprises can manage the state of their hybrid- and multi-cloud Kubernetes applications with embedded monitoring and metrics directly in the Portworx user interface. Additionally, DevOps users can control and visualize the state of an ongoing migration at a per-application level using custom policies.  

Go to Google Cloud Platform, quoth Quobyte

Quobyte’s Data Center File System is now available via the Google Cloud Platform Marketplace, enabling Google Cloud users to configure a hyperscale, high-performance distributed storage platform in a few clicks. It has native support for all Linux, Windows, and NFS applications.

Quobyte’s Data Center File System provides a massively scalable and fault-tolerant storage infrastructure. It decouples and abstracts commodity hardware to deliver low-latency and parallel throughput for the requirements of cloud services and apps, the elasticity and agility to scale to thousands of servers, and to grow to hundreds of petabytes with little to no added administrative burden. With Quobyte, databases, scale-out applications, containers, even big data analytics can run on one single infrastructure.

Users can run entire workloads in the cloud, or burst peak workloads; start with a single storage node and add additional capacity and nodes on the fly; and dynamically downsize the deployment when resources are no longer needed. Quobyte storage nodes run on CentOS 7.

Two of the company founders are Google infrastructure alums and so should understand hyperscale needs.

Western Digital’s 2019 predictions

Western Digital has made ten predictions for 2019. They are (with our comments in brackets):

  • Open composability, Western Digital’s composable systems scheme, will start to go mainstream and make headway against proprietary schemes (meaning, we think, ones from Dell EMC – MX7000, DriveScale, HPE – Synergy – and Liqid.)
  • Orchestration of large-scale containerization. We will see further disaggregation of all pieces – memory, compute, networking and storage – and the adoption of broad-based container capabilities. With this further disaggregation, organizations  will move toward the orchestration of large-scale containerization.
  • Proliferation of RISC-V based silicon as there will be an increased demand from organizations who are looking to specifically tailor (and adapt) their IoT embedded devices to a specific workload, while reducing costs and security risks associated with silicon that is not open-source. (Meaning not ARM, MIPS or X86).
  • Move towards fabric infrastructure, including fabric attached memory, with the wide-spread adoption of fabric-attached storage (NVMe-oF). This allows compute to move closer to where the data is stored rather than data being several steps away from compute.
  • Beginning of the adoption of energy-assist storage (meaning MAMR disk drives giving customers capacities greater than 16TB).
  • Devices will come alive at the edge – such as autonomous cars and medical diagnostics. In 2019, compute power will get closer to the data produced by the devices, allowing it to be processed in real-time, and devices to awaken and realise their full potential. Cars will be able to tap into machine learning to make the instantaneous decisions needed to manoeuvre on roads and avoid accidents.
  • Sprouting of the smaller clouds at the edge. With the proliferation of connected “things,” we have an explosion of data repositories. As a result, in 2019, we will see smaller clouds at the edge – or zones of micro clouds – sprout across devices in order to effectively process and consolidate the data being produced by not only the “thing,” but all of the applications running on the “thing.”
  • Expansion of new platform ecosystem as the demands associated with 5G increase. Because 5G won’t be able to support the bandwidth that is required to support all of the IoT devices, machine learning will need to occur at the edge to ensure optimization of the data. The new platform will be a complete edge platform supported by RISC-V processors, ensuring compute moves as close to the data as possible.
  • Adoption of machine learning into the business revenue stream. Up until now, for most organizations, machine learning has been a concept, but in 2019, we will see real production installations. As a result, organizations will adopt machine learning – at scale – and it will have a direct impact on the business revenue stream.
  • More data scientists will be needed. As noted in the 451 Research report, Addressing the Changing Role of Unstructured Data With Object Storage, “the interest in and availability of analytics is rapidly becoming universal across all vertical markets and companies of nearly every size, creating the need for a new generation of data specialists with new capabilities for understanding the nature of data and translating business needs into actionable insight.”  As a result of this demand to shift data into action, organizations will prioritize hiring data scientists, and in the next three years, 4 out of 10 new software engineering hires will be data scientists.

The first five points are all things that Western Digital is betting on for its growth.  So we can be sure it’s putting its money where these predictions are pointing.

Storage roundup

Backblaze, a public cloud storage supplier, has a cloud storage vs LTO tape cost calculator. You can download the spreadsheet here, test the assumptions, and try it yourself.

Barracuda Networks is providing the Leeds United football club with its Message Archiver in order to make the storage and access of emails simpler, quicker and more secure. It allows Leeds United to combine on-site hardware with cloud-based replication. This ensures that email data is easy to recover in the event of an attack or data loss.

DDN’s AI200 AI-focused storage box, twinned with Nvidia’s DGX-1 GPU server running Microvolution software, has delivered a 1,600-fold better yield compared with traditional microscopy workflows. We’re told researchers use sophisticated deconvolution algorithms to increase signal-to-noise and to remove blurring in the 3D images from Lattice LightSheet microscopes and other high-data-rate instruments. The DDN-Nvidia pairing enables real-time deconvolution during image capture in areas such as neuroscience, developmental biology, cancer research and biomedical engineering.

G2M Research has released a Fall 2018 NVM Express Market Sizing Report. It says the market for NVMe SSDs (U.2, M.2, and PCI AOCs) will reach $9bn by 2022, growing from just under $2bn in 2017.  More than 60 per cent of AFAs will utilize NVMe storage media by 2022, and more than 30 per cent of all AFAs will use NVMe over Fabric (NVMe-oF) in either front-side (connection to host) and/or back-side (connection to expansion shelves) fabrics in the same timeframe. The market for NVMe-oF adapters will exceed 1.75 million units by 2022, and the bulk of the revenue will be for offloaded NVMe-oF adapters (either FPGA or SoC based adapters). More information here.
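
For context, the implied compound annual growth rate from those G2M figures – our arithmetic, not the report’s:

```python
# Implied compound annual growth rate from the G2M figures quoted above.
start_value_bn, end_value_bn = 2.0, 9.0   # "just under $2bn" in 2017, $9bn by 2022
years = 2022 - 2017

cagr = (end_value_bn / start_value_bn) ** (1 / years) - 1
print(f"Implied NVMe SSD market CAGR 2017-2022: {cagr:.1%}")  # roughly 35 per cent
```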

In-memory computing supplier GridGain announced that the GridGain In-Memory Computing Platform is now available in the Oracle Cloud Marketplace and is fully compatible with Oracle Cloud Infrastructure.

Hitachi Vantara has upgraded its Pentaho analytics software to v8.2, integrating it with the Hitachi Content Platform (HCP). Users can onboard data into HCP, which functions as a data lake, and use Pentaho to prepare, cleanse and normalize the data.  Pentaho can then be used to make the logical determination of which prepared data is appropriate for each cloud target (AWS, Azure or Google).  

The new Pentaho version adds AMQP and OpenJDK support, improved Google Cloud security and Python functionality.

HubStor provides data protection as a service from within Azure and has announced a new continuous backup for its cloud storage platform, enabling users to capture file changes as they happen on network-based file systems and within virtual machines. File changes can be captured either as a backup with a very short recovery point objective (RPO) or as a WORM archive for compliance. 

It has also added version control: as it captures incremental changes it builds out a version history for each file and maintains point-in-time awareness in the cloud, allowing restoration to a known healthy period. Version control settings diminish the number of versions held over time as data ages.
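
As an illustration of how version thinning of that sort can work, here is a hypothetical Python sketch; the retention tiers are our own assumptions, not HubStor’s settings.

```python
# Illustrative sketch of age-based version thinning: keep every version from
# the last week, daily versions for a month, then weekly versions only.
# The tiers are hypothetical, not HubStor's actual retention settings.
from datetime import datetime, timedelta

def thin_versions(version_times, now):
    """Return the subset of version timestamps to retain."""
    kept, seen_buckets = [], set()
    for ts in sorted(version_times, reverse=True):
        age = now - ts
        if age <= timedelta(days=7):
            bucket = ("all", ts)                       # keep everything
        elif age <= timedelta(days=30):
            bucket = ("daily", ts.date())              # one per day
        else:
            bucket = ("weekly", ts.isocalendar()[:2])  # one per ISO week
        if bucket not in seen_buckets:
            seen_buckets.add(bucket)
            kept.append(ts)
    return kept

# Example: a version every six hours for 60 days collapses to a smaller set.
now = datetime(2018, 12, 14)
versions = [now - timedelta(hours=h) for h in range(0, 60 * 24, 6)]
print(f"{len(versions)} versions captured, {len(thin_versions(versions, now))} retained")
```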

Nutanix and GCP-focused data protector HYCU has announced a global reseller agreement with Lenovo to enable Nutanix-using customers to buy Lenovo offerings and HYCU Data Protection for Nutanix directly from Lenovo and its resellers in a single transaction.

Immuta, which ships enterprise data management products for AI, has new features to reduce the cost and risk of running data science programs in the cloud. It has eliminated the need for enterprises to create and prepare dedicated data copies in Amazon S3 for each compliance-type policy that needs to be enforced. Data owners can enforce complex data policy controls where storage and compute are separated to allow transient workloads on the cloud.

The NVM Express organisation has ratified TCP/IP as an approved transport for NVMe, and Lightbits Labs says its hyperscale storage platform is the industry’s first to take full advantage of this transport standard. Lightbits participated in the November 12-15 NVMe over Fabrics (NVMe-oF) Plugfest at the University of New Hampshire’s InterOperability Laboratory (UNH-IOL), and showcased interoperability with multiple NIC vendors.

NVM Express Inc. announced the pending availability of the NVMe Management Interface (NVMe-MI) specification. The NVMe-MI 1.1 release standardizes NVMe enclosure management, provides the ability to access NVMe-MI functionality in-band and delivers new management features for multiple NVMe subsystem solid-state drive (SSD) deployments.

The in-band NVMe-MI feature allows operating systems and applications to tunnel NVMe-MI Commands through an NVMe driver. The primary use for in-band NVMe-MI is to allow operating systems and applications to achieve parity with the management capabilities that are available out-of-band with a Baseboard Management Controller (BMC).

Rambus says Micron has selected its CryptoManager Platform for Micron’s Authenta secure memory product line. This will enable Micron to securely provision cryptographic information at any point in the extended manufacturing supply chain and throughout the IoT device lifecycle, enhancing platform protection while enabling new silicon-to-cloud services.

Snowflake Computing, which supplies a data warehouse in the AWS and Azure clouds, has announced the general availability of Snowflake on Microsoft Azure in Europe.

Unitrends has published a white paper comparing its backup and recovery options to those from Veeam. Get the white paper here.

Self-driving cars! The Quantum of some solace for ailing tape vendor

CEO Jamie Lerner is taking Quantum into vertical markets with a front-and-centre focus on StorNext, its scale-out file system manager. The first example is a mobile storage product designed for bulk data collection in autonomous car tests.

As I wrote at the beginning of this year, Quantum faces many challenges. 

  • It has a $450m revenue run rate, down from a $1bn run rate in 2007
  • Tape, its largest product business – $52.2m plus $9.3m in royalties last quarter – has a strong revenue stream but is likely to continue declining
  • The DXi deduplication business – $11.7m last quarter – is a small player in a declining market
  • StorNext file management and workflow hardware/software have a good niche in entertainment and media – $33.8m last quarter – but are a hard sell with lumpy business results
  • The Ceph-based open-source Rook product has no immediate revenue-earning significance

And if that isn’t enough, accounting issues have delayed publication of third quarter results. The company is conducting an internal investigation into accounting irregularities and negotiating an existential re-financing.

Lerner joined Quantum in June 2018 and is the fourth CEO in eight months. The rout was triggered by CEO Jon Gacek’s departure in November 2017, following the intervention of activist investor VIEX and a March 2017 NYSE delisting threat, sidestepped a month later by a reverse stock split.

Against this background Lerner must stabilise and grow the troubled business.

Sticky tape

Quantum’s original tape business spans drives, media formats and libraries. It is in long-term, slow decline but remains the company’s biggest revenue earner, generating around $50m a quarter.

The backup-to-disk DXi deduplicating storage arrays did not ride to the rescue, as market-leading Data Domain systems stood in the way. That said, the product line has a $30m-$35m quarterly run rate (except for two anomalous better quarters in 2017).

Quantum’s second shot at a growth business is the StorNext scale-out storage and file management product line. Encompassing software, Xcellis disk arrays, Lattus object storage and Scalar tape libraries, this substantial but slow-growing division fails to compensate for flat or declining revenues elsewhere.

Financial charts

A couple of charts illustrate these points. First, general revenues and profit/loss by quarter. This reveals a declining revenue trend, with losses in the three most recent quarters.

Second, quarterly product segment performance shows that StorNext is not the growth engine that Quantum had hoped for.

Overall, tape remains Quantum’s largest business segment, especially when royalty revenue is included.

To summarise, DXi disk backup is declining and StorNext is low growth, especially after the Q2 fy2017 great white hope quarter when revenues rose rapidly, only to fall back.

Quantum’s product moves

Faced with this, Lerner told a US IT press tour this month that StorNext can still grow by embedding the software in vertical applications that need storage and workflow integration. Examples include video and rich media where Quantum reckons it is the leader, with more than 25,000 customers worldwide.

The idea is that certain industries have formal and structured workflows with different storage needs at each stage, as this Quantum slide shows.

Quantum aims to integrate StorNext functionality with third-party products at each workflow stage. 

Quantum has identified several markets where product and workflow integration could yield growth magic. The company will update its products to support this strategy and it has kicked off with an in-vehicle data collection system.

In addition, the company plans product refreshes in early 2019 for QXS block storage and DXi backup appliances and the introduction of a Quantum archive cloud managed service.

Quantum’s Scalar tape offerings are due for an upgrade in 2019. We understand enhancements include an on-premises presence with a public cloud-like service, featuring elastic consumption, pay-per-use pricing and purported operational simplicity.

According to Quantum, the tape market will grow and become a key part of cloud infrastructure. It has produced this chart showing tape is almost a third of the cost of Amazon’s Glacier, over three years.

However, this calculation pre-dates Amazon’s November 2018 Glacier Deep Archive announcement, with a base storage cost one quarter of Glacier’s price. No doubt Quantum’s spreadsheet beancounters are busy working out if tape still comes out cheaper in a three-year total cost of ownership calculation.
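
Here is a rough Python sketch of that three-year comparison. The Glacier and Deep Archive per-GB prices are the published AWS figures at announcement time (worth re-checking); the tape figure simply takes Quantum’s “almost a third of Glacier” claim at face value rather than being a measured on-premises cost.

```python
# A three-year storage-cost sketch along the lines of Quantum's comparison.
# The Glacier prices are assumed AWS list prices at announcement time; the tape
# figure is a placeholder derived from Quantum's "almost a third of Glacier"
# claim, not a measured on-premises cost.
PB = 1_000_000  # GB per petabyte (decimal)

glacier_per_gb_month = 0.004
deep_archive_per_gb_month = 0.00099     # roughly one quarter of Glacier
months = 36
capacity_gb = 1 * PB

def three_year_cost(per_gb_month: float) -> float:
    return per_gb_month * capacity_gb * months

glacier = three_year_cost(glacier_per_gb_month)
deep_archive = three_year_cost(deep_archive_per_gb_month)
tape_assumed = glacier / 3              # Quantum's claim, taken at face value

for label, cost in [("Glacier", glacier), ("On-prem tape (assumed)", tape_assumed),
                    ("Glacier Deep Archive", deep_archive)]:
    print(f"{label:24} ${cost:>12,.0f} over 3 years for 1PB")
```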

Quantum’s IT Press Tour  presentation lifts a 2015 quote from Aaron Ogus, Partner Development Manager at Microsoft Azure: “All cloud vendors will be using tape and will be using it at a level never seen before.”

Yes, but… Glacier Deep Archive could itself be based on tape, yet if prices are low enough it presents a long-term threat to the entire on-premises tape market.

Where next for StorNext?

StorNext v6.2 provides S3 access to its filesystem, supports Azure’s Block Blob object format, and has multi-site synchronisation. It’s also introducing cloud-based analytics software to make StorNext more efficient.

Quantum thinks artificial intelligence applications and analytics will drive demand for deep media catalogues, where StorNext could play a role. There will be strong NVMe adoption for high-performance media workflows and Quantum has teamed up with Excelero to develop capabilities there.

Data intelligence

The company says it has evolved from offering basic storage capacity to providing data management as well. The next stage is data intelligence, locating files by content with recognition of faces and speech understanding in videos, for example.

This is a valid proposition, but Quantum needs backers and must marshal resources to fund substantial investment. Let’s see what shape the company is in when it emerges from its accounting and re-financing woes. We look forward to the publication of its results.

VMware rules HCI market (true)

Our thanks to VMware VP Lee Caswell, who has published extracts from a recent 451 Research survey of hyperconverged system vendors’ installed-base market share. Look at the chart below and you can see why.

The 451 researchers asked 256 enterprise and small/medium business respondents: “Which of the following vendors is your organization currently using for hyperconverged infrastructure?”

VMware is racing ahead of the pack and, together with hyperconverged arch-rival Nutanix, accounts for almost two-thirds of HCI installations. This confirms other surveys and market analysis from Gartner and IDC.

The second division comprises three big names: Dell EMC, Cisco and Microsoft. We are unable to assess how Microsoft is progressing as this is the first 451 HCI survey we have seen – and Microsoft does not appear in Gartner and IDC analyses.

Dell EMC’s hyperconverged business is growing and, after a slow start, Cisco is showing momentum. It recently joined the leaders in Gartner’s HCI Magic Quadrant.

The third division has just two names, but they are both biggies: HPE and NetApp. 

Left at the starting gate?

451’s HCI chart seems to demonstrate the characteristics of a mature market. As you can see, there are a few dominant suppliers followed by tail-enders with small market shares.

For instance, DataCore, lauded in a recent What Matrix HCI report, is in eighth position with one per cent share.

Below DataCore is Pivot3 with a sub-1 per cent share and Datrium with a lower sub-1 per cent share. Maxta and Scale Computing are in the Other category.

Vendors with less than five per cent share – and this includes HPE and NetApp – may be right to be optimistic if the HCI market is in its early stages and has five years or more of growth ahead. No-one knows what the future holds here. All the players are convinced that HCI is an immature growth market – so they have a real chance of gaining market share from the dominant players at the top of the 451 chart.

But they will have to work hard and some might not make it even if they have great technology. Acquisition by a bigger vendor is, of course, a realistic option for many.

Taejin Infotech gets up to speed in SPC-1 league table

Taejin Infotech has established the best price/performance for million-plus IOPS arrays in the SPC-1 benchmark.

Earlier this week we noted the emergence of a new SPC-1 price/performance class with Korean supplier Taejin scoring 1.5 million IOPS at $326.75 per thousand SPC-1 IOPS (SPC-1 KIOPS).

That was with a seven-node Jet-Speed system. Today we learnt it recorded 1,510,150 SPC-1 IOPS with a price/performance of $287.01/SPC-1 KIOPS, using a 10-node Jet-Speed rig.

By comparison the NetApp A800 scored 2,401,171 SPC-1 IOPS at $1,154.53/SPC-1 KIOPS – in the same performance league as the 10-node Jet-Speed but at roughly four times the cost per KIOPS.
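
For a feel of what those $/KIOPS figures mean in absolute terms, a little arithmetic on the numbers quoted above – our calculation, not figures taken from the SPC filings:

```python
# Unpicking approximate total system prices from the SPC-1 numbers quoted above.
# Our arithmetic, not figures taken from the SPC filings.
systems = {
    "Taejin Jet-Speed (10-node)": (1_510_150, 287.01),     # IOPS, $/KIOPS
    "NetApp AFF A800":            (2_401_171, 1_154.53),
}

for name, (iops, dollars_per_kiops) in systems.items():
    total_price = (iops / 1_000) * dollars_per_kiops
    print(f"{name:28} {iops:>9,} SPC-1 IOPS, about ${total_price:,.0f} in total")

print(f"A800 cost per KIOPS vs Jet-Speed: {1_154.53 / 287.01:.1f}x")  # ~4.0x
```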

Here is a list of the systems scoring more than a million IOPS on the SPC-1 rankings.

Measure for Measure: SPC-1 benchmark yields new class of storage price-performer

Korea’s Telecommunications Technology Association (TTA) has filed an SPC-1 benchmark result that ranks seventh in the performance list, cementing a new performance and price-performance grouping within the SPC-1 rankings.

The organisation filed the benchmark on behalf of Korea’s Taejin Infotech Co. The result, together with a recent filing by China’s Inspur, establishes an informal SPC-1 category: sub-$350/KIOPS at around the 1.5 million IOPS level.

The Taejin Infotech configuration features two Jet-Speed HHS3124F and five Jet-Speed HHS211F nodes, all using NVMe SSD data drives with either SATA SSDs or HDDs as system drives. The nodes use a Mellanox ConnectX-5 EDR 100Gbit/s InfiniBand interconnect.

Made to Measure

The SPC-1 benchmark tests shared external storage array performance in a test designed to simulate real-world business data access. Its main measurement unit is SPC-1 IOPS. These are not literally drive IOs per second but transactions consisting of multiple drive IOs. 

A second measure is price/performance, expressed as dollars per thousand IOPS ($/KIOPS), and then there are response times, application storage unit (ASU) capacity, ASU price and the total system price.

TTA used a 7-node system, interlinked with 200Gbit/s InfiniBand, and with NVMe SSDs for benchmark run data, and disk drives for system-level data. The TTA SPC-1 results summary table looks like this:

TTA’s SPC-1 benchmark summary table

Here’s how it compares.

With the top system scoring 7 million IOPS the TTA system doesn’t look impressive.

There are two things to consider when evaluating the TTA result. First, low-performing systems tend to have fewer configured storage drives and cost less for their performance, below $400/KIOPS, for example. They also generally run at sub-1 million IOPS. There are many SPC-1 results in this area.

Second, higher-performing systems are configured with more controllers and more drives – and cost more. But the cost-per-IOPS may be lower if they use clever software which extracts more IOPS from the hardware.

In recent SPC-1 results, three systems scored more than 1.5 million IOPS and cost less than $350/KIOPS. These are new records for performance and price/performance, and thus relevant to customers who need that level of IOPS but don’t have million-dollar-plus budgets.

The top performers are Huawei Fusion Storage with 4,500,392 IOPS and $329.90/KIOPS, Inspur’s AS5300G2 with 1,500,346 IOPS and $307.62/KIOPS, and, splitting the difference, TTA with 1,510,346 IOPS and $326.75/KIOPS. 

The chart below plots IOPS against $/KIOPS and shows a pack of sub-$400/sub-million IOPS entries. It also identifies higher-scoring systems and the three high-scorers with sub-$350/KIOPS ratings.

SPC-IOPS and cost/IOPS ($/KIOPS)

Seen from this standpoint the TTA result is, to say the least, respectable.

Another thing stands out in the SPC-1 listings: there are few US suppliers, and the list is dominated by Asian vendors, particularly Huawei.

Whatever happened to Dell EMC and HPE?

The table above shows the top eight results, all posted since June 2017. NetApp is the sole US vendor among them. An IBM system is at number nine – a DS8888 with 1,500,187 IOPS and a whopping $1,313.44/KIOPS – filed in a benchmark run in November 2016. The next four results are from Huawei and FusionStack.

From the look of it, SPC-1 is a Dell EMC and HPE-free zone. This is curious.  Have they abandoned this benchmark? We’ll find out more.