Kaminario, Commvault and DataCore are the top software-defined block storage suppliers. However, open source Ceph is a follower in this market.
So says GigaOm, the tech analyst firm, which has published supplier comparisons for software-defined block storage, using its Radar marketscape methodology.
Software-defined block storage is characterised as software available for multi-vendor commodity servers. It is sold as software-only, pre-installed and configured bundles or pre-configured appliances. GigaOm report author Enrico Signoretti therefore does not include companies that make their software available on-premises on the supplier’s hardware only – such as Dell EMC’s PowerFlex or NetApp ONTAP.
DataCore, founded in 1998, is the longest established of the top trio, followed by Kaminario, which set up in 2008 as an all-flash array supplier and has since evolved into a software-defined supplier using certified hardware delivered by its channel. Commvault is a new entrant, courtesy of last year’s acquisition of Hedvig. Datera and StorOne are also rated as leaders, and StorPool is on the cusp of breaking into the top ranks.
Signoretti seems to regard NVMe-oF as ‘table stakes’ – companies that lag in support for the fast network protocol fare less well in the GigaOm radar. They include Red Hat, SoftIron and SUSE, which all use Ceph as their software platform, and DDN Nexenta.
Comment
Today, hardware suppliers dominate the on-premises block storage market. Software-defined block storage has not made noticeable inroads yet into SAN arrays or hyperconverged systems.
Proprietary software-defined block storage is in better shape than the open source alternative. Ceph-based companies have made less impact than newcomers such as Datera, Commvault-Hedvig and StorOne.
Ceph is intended to be the data storage equivalent of Linux for servers, but the open source technology lags behind better-established proprietary competitors. For wider scale adoption, Ceph will need to raise its performance game.
OpenDrives, which provides video post-production NAS systems, has been in the spotlight this week after releasing three new customisable enterprise NAS hardware series.
The firm claims its video post-production platform can deliver 23GB/sec write and 25GB/sec read performance with an average of 13μs latency from a single chassis packed with NVMe SSDs. It has not yet explained how it achieves this level of performance.
Products
There are three lines in the firm’s Ultra hardware platform range:
Ultimate Series – a 2U chassis with the Ultimate 25 compute module, supporting NVMe SSDs; the fastest performer of the three.
Optimum Series – a 2U chassis with the Ultimate 15 compute module, configurable with all-flash NVMe, SAS disk drives, or both.
Momentum Series – a 2U chassis with the Ultimate 5 compute module, supporting disk drive-based capacity-optimised modules. Designed for write-intensive workflows, such as camera-heavy security surveillance.
OpenDrives Ultra hardware line
These have 2U base chassis and come with storage modules:
F – 8 x all-NVMe SSDs (960GB to 15.36TB)
FD – 24 x NVMe SSDs (1.92TB to 15.36TB)
H – 21 x SAS HDD (4, 8, 12 or 16TB)
HD – 72 x SAS HDD (4, 8, 12 or 16TB)
The F, FD and H have 2U enclosures while the HD comes in a 4U enclosure.
These systems are clusterable with a distributed file system, and run Atlas software. This is based on OpenZFS and provides data integrity, high availability (via a standby compute module), compression, inline caching in DRAM, RDMA, active data prioritisation for low latency, and a dynamic block feature, which fits incoming data to the most efficient block size from 4KB to 1MB – sketched below.
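As a rough illustration of what such a dynamic block feature does, here is a minimal Python sketch. It assumes – as with OpenZFS's variable record size, on which Atlas is based – that each write lands in the smallest power-of-two block that holds it. This shows the concept only, not OpenDrives' actual Atlas algorithm.

```python
# Minimal sketch of dynamic block sizing (concept only, not Atlas itself).

MIN_BLOCK = 4 * 1024        # 4KB floor
MAX_BLOCK = 1024 * 1024     # 1MB ceiling

def pick_block_size(write_len: int) -> int:
    """Smallest power-of-two block between 4KB and 1MB that fits the write."""
    size = MIN_BLOCK
    while size < write_len and size < MAX_BLOCK:
        size *= 2
    return size

# A 6KB write lands in an 8KB block; a large stream is chunked at 1MB.
assert pick_block_size(6 * 1024) == 8 * 1024
assert pick_block_size(3 * 1024 * 1024) == MAX_BLOCK
```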
Other features include single pane of glass management, single namespace, storage analytics and various data integrity techniques such as checksums, dual parity striping and inflight error correction.
Background
The company was founded in Los Angeles in 2011 by Jeff Brue and Kyle Jackson. Jackson left in 2015 and Brue passed away in 2019. They were media creatives looking for an efficient way to handle their large-file video and imaging workloads – mostly film dailies, the raw, unedited footage shot on a movie set each day. The company has since expanded from its original video market into healthcare, video games, e-sports, architecture, corporate video, and advertising agencies.
Chad Knowles joined in 2014 as CEO and is classed as a co-founder on LinkedIn. Retired US Navy Vice Admiral David Buss became CEO in September 2019, with Knowles staying on as Chief Strategy Officer before relinquishing that role to become a board member.
Sean Lee is now Chief Product and Strategy Officer.
The firm boasts that revenue has increased by 83 per cent annually, every year for five years, though it doesn't say from what base.
OpenDrives has so far taken in $11m in funding – peanuts in the great scheme of storage startup VC funding amounts.
Customers include Apple, Netflix, AT&T, Disney, Fox, Paramount, Universal, Sony, Warner Bros., CBS, ABC, NBC, HBO, Turner, Riot Games, Epic Games, Deluxe, Fotokem, Skydance, YouTube, Saatchi & Saatchi, Deutsch, NBA, NFL, PGA, NASCAR, Grammy Awards and Vox Media – a good roster to pin on the wall.
Competition
The 23GB/sec write, 25GB/sec read performance and average 13μs latency sound good. How do they compare with the competition?
A Dell EMC Isilon F800 delivers up to 250,000 IOPS and 15 GB/s aggregate throughput from its up to 60 SAS SSDs in a single chassis configuration. OpenDrives doesn’t provide IOPS numbers because, it claims, its system’s algorithms and efficiencies are optimised for large file performance, not small file IO.
Dell EMC has recently announced a PowerScale F600 system, PowerScale being the new brand name for Isilon systems going forward. It didn’t release specific performance numbers.
However, the F600 supports 8 x NVMe SSDs and has superiority in CPU socket number, DRAM capacity, SSD speed, and IO port bandwidth which suggests that its IOPS and throughput numbers will be significantly higher than those for the F800. Blocks & Files thinks the F600 will deliver 5x more performance than the F800, meaning 1.25 million IOPS and more throughput as well. A literal 5x throughput improvement would mean 75GB/sec but we think this could be unrealistic.
Envisaging the PowerScale F600 surpassing 25GB/sec throughput, however, does seem realistic.
Qumulo can scale to high GB/sec levels beyond 25GB/sec but, we suspect, its single chassis performance may fall behind OpenDrives.
Net:net
OpenDrives is relatively unknown outside its niche but has a solid roster of impressive customers for its large file IO-optimised systems. If it can see off PowerScale/Isilon competition then that will speak volumes about the strength of its product.
Teradata has become much more serious about working in the public clouds, and Western Digital has received a ratification present from NVM Express bolstering its zoned QLC SSD initiative.
Teradata’s head in the clouds
Legacy data warehouser Teradata has its head in the clouds in a seriously big way: AWS, Azure and GCP, to be precise. It's pushing more features into its Vantage-as-a-service products on the Big 3 clouds. Customers get:
Reduced network latency via Teradata’s growing global footprint, and upgrades to compute instances and network performance;
Support for customer-managed keys for Vantage on AWS and Azure;
Availability: the service level agreement (SLA) for availability is now 99.9% for every as-a-service offering – guaranteed, higher uptime;
Quicker compressed data migration times with Teradata's new data transfer utility (DTU), which delivers 20% faster transfers;
Self-service web-based console gets expanded options for monitoring and managing as-a-service environments;
Integration with Amazon cloud services, including Kinesis (data streaming); QuickSight (visualization); S3 (low-cost object store); SageMaker (machine learning); Glue (ETL pipeline); Comprehend Medical (natural language processing); and
Integration with Azure cloud services, including Blob (low-cost object store); Data Factory (ETL pipeline); Databricks (Spark analytics); ML Studio (machine learning); and Power BI Desktop (visualization).
There will be more to come over time.
The Amazon and Azure Vantage enhancements are available now. They will also apply to Vantage on Google Cloud Platform (GCP), which will begin limited availability in July 2020.
WD’s Zoned Namespace spec ratified
The NVMe Express consortium has ratified Western Digital’s ZNS (Zoned Namespace) command set specification. WD has a pair of zoned storage initiatives aimed at host management of data placement in zones on storage drives.
For SMR (Shingled Magnetic Recording) HDDs:
ZBC (Zoned Block Commands)
ZAC (Zoned ATA Command Set)
For NVMe SSDs:
ZNS (Zoned Namespaces)
These are host-managed as opposed to the drives managing the zones themselves. That means system or application software changes. It also requires support by other manufacturers to avoid zoned disk or SSD supplier lock-in. This ratification helps make ZNS support by other SSD manufacturers more likely.
ZNS is applicable to QLC SSDs, where data with similar access rates can be placed in separate zones to reduce overall write amplification and so extend drive endurance. Zoned drives can also provide improved I/O access latencies, as the sketch below illustrates.
WD’s ZNS concept.
The ZNS specification is available for download under the Developers -> NVMe Specification section of the www.nvmexpress.org public web site, as an NVM Express 1.4 Ratified TP.
WD has been working with the open source community to ensure that NVMe ZNS devices are compatible with the Linux kernel zoned block device interface. It says this is a first step and modifications to well-known user applications and tools, such as RocksDB, Ceph, and the Flexible IO Tester (fio) performance benchmark tool, together with the new libzbd user-space library, are also being released.
It claims public and private cloud vendors, all flash-array vendors, solid-state device vendors, and test and validation tool suppliers are adopting the ZNS standard – but these are not named.
Blocks & Files thinks ZNS support by other SSD suppliers such as Samsung, Intel, and Micron will be essential before storage array manufacturers and SW suppliers adopt it with real enthusiasm.
WD claimed that, with a small set of changes to the software stack, users of host-managed SMR HDDs can deploy ZNS SSDs into their data centres. More from WD here.
Shorts
Data warehouser Actian has announced GA of Vector for Hadoop. This is an upgraded SQL database with real-time and operational analytics not previously feasible on Hadoop. The SW uses patented vector processing and in-CPU cache optimisation technology to eliminate bottlenecks. Independent benchmarks demonstrated a more than 100X performance advantage with Vector for Hadoop over Apache Impala.
The Active Archive Alliance announced the download availability of a report: “Active Archive and the State of the Industry 2020,” which highlights the increased demand for new data management strategies as well as benefits and use cases for active archive solutions.
Backupper Asigra and virtual private storage array supplier Zadara announced that Sandz Solutions Philippines Inc. has deployed their Cloud OpEX Backup Appliance to defend its businesses against ransomware attacks on backup data.
AWS’Snowcone uses a disk drive to provide its 8TB of usable storage, not an SSD.
Enterprise Information archiver Smarsh has Microsoft co-sell status and its Enterprise Archive offering is available on Azure for compliance and e-discovery initiatives. Enterprise Archive uses Microsoft Azure services for storage, compute, networking and security.
Taipei-based Chenbro has announced the RB133G13-U10, a custom barebones 1U chassis pre-fitted with a dual Intel Xeon motherboard and ready for two Intel Xeon Scalable processors with up to 28 cores and 165W TDP. There is a maximum of 2TB of DDR4 memory, 2x 10GbitE connectivity, 1x PCIe Gen 3 x16 HH/HL expansion slot and support for up to 10x hot-swappable NVMe U.2 drives. It has Intel VROC and Apache Pass support, and Redfish compliance.
France-based SIGMA Group, a digital services company specialising in software publishing, integration of tailor-made digital solutions, outsourcing and cloud solutions, has revealed it uses ExaGrid to store its own and customer backups, and replicate data from its primary site to its disaster recovery site.
Estonia-based Diaway has announced a strategic partnership with Excelero and the launch of a new product, DIAWAY KEILA powered by Excelero NVMesh. Component nodes use AMD EPYC processors, PCIe Gen 4.0, WD DC SN640 NVMe SSDs, and 100GbitE networking. Sounds like a hot, fast box set.
FalconStor can place ingested backup data on Hitachi Vantara HCP object storage systems. This means data ingested by FalconStor through its Virtual Tape Library (VTL), Long-Term Retention and Reinstatement, and StorSafe offerings can be deduplicated and sent to an HCP target system. Physical tape can be ingested by the VTL product and sent on to HCP for faster-access archive storage.
Hitachi Vantara was cited as a Strong Performer in the Forrester Wave Enterprise Data Fabric, Q2 2020 evaluation. But Strong Performers rank second to Leaders, and the Leaders were Oracle, Talend, Cambridge Semantics, SAP, Denodo Technologies, and IBM. Hitachi V was accompanied as a Strong Performer by DataRobot, Qlik, Cloudera, Syncsort, TIBCO Software, and Infoworks. Well done Hitachi V – but no cigar.
Backupper HYCU has a Test Drive for Nutanix Mine with HYCU initiative. Customers can try out Nutanix Mine with HYCU at their own pace, with in-depth access and hands-on experience by launching a pre-configured software trial.
Data protector HubStor tells us it has revamped its company positioning as a SaaS-based unified backup and archive platform. Customer adoption remains strong, with the firm adding a petabyte of data to the service each month in recent months.
China’s Inspur has gained the number 8 position in the SPC-1 benchmark rankings with an AS5500 G3 system scoring 3,300,292 SPC-1 IOPS, $295.73/SPC-1 KIOPS and an 0.387ms overall response time.
Seagate’s LaCie unit announced new 1big Dock SSD Pro (2TB and 4TB SSD capacities) and 1big Dock (4TB, 8TB, and 16TB HDD capacities) storage for creative professionals and prosumers. Both are designed by Neil Poulton to look good on your desktop. The 1big Dock SSD Pro is for editing data-intense 6K, 8K, super slow motion, uncompressed video, and VFX content. The 1big Dock has direct ingestion of content from SD cards, CompactFlash cards, and USB devices and serves as the hub of all peripherals, connecting to the workstation with a single cable.
LaCie 1big Dock SSD Pro.
Micron Solutions Engineering Lab recently completed a proof of concept using Weka to share a pool of Micron 7300 PRO NVMe SSDs and obtained millions of IOPS from the file system. The testing used six nodes in a 4 + 2 (data + parity) erasure-coding configuration for data protection. There's more information from Micron here.
More than 4.5 million IOPS from Weka Micron system.
Nutanix Foundation Central, Insights and Lifecycle Manager have been updated to enable Nutanix HCI Managers to do their work remotely.
Foundation Central allows IT teams to deploy private cloud infrastructure on a global scale from a single interface, and from any location.
Insights will analyse telemetry from customers’ cloud deployments, including all clusters, sites and geographies, to identify ongoing and potential issues that could impact application and data availability. Once identified, the Insights service can provide customised recommendations.
Lifecycle Manager (LCM) will deliver seamless, one-click upgrades to the Nutanix software stack, as well as to appliance firmware – without any application or infrastructure downtime.
Nutanix and HPE have pushed out some new deals with AMD-based systems offering better price/performance for OLTP and VDI workloads, ruggedised systems for harsh computing environments, certified SAP ERP systems, higher capacity storage for unstructured data, and turnkey data protection with popular backup software. More from Nutanix here.
Cloud data warehouser Snowflake, with an impending IPO, today announced general availability on Google Cloud in London. The UK’s Greater Manchester Health and Social Care Partnership is using Snowflake in London. This follows Snowflake’s general availability on Google Cloud in the US and Netherlands earlier this year.
Storage Made Easy (SME) has signed an EMEA-wide distribution agreement with Spinnakar for its Enterprise File Fabric, a single platform that presents and secures data from multiple sources, be they on-premises, in a data centre, or in the cloud. The EFF provides an end-to-end brandable product set that is storage-agnostic and currently supports more than 60 private and public data clouds. It supports file and object storage solutions, including CIFS/NAS/SAN, Amazon S3 and S3-compatible storage, Google Storage and Microsoft Azure.
StorageCraft announced an upgrade of ShadowXafe, its data and system backup and recovery software. Available immediately, ShadowXafe 4.0 gives users unified management with the OneXafe Solo plug-and-protect backup and recovery appliance. It also has Hyper-V, vCenter and ESXi support and consolidated automated licensing and billing on ConnectWise Manage and Automate business management platforms.
StorONE has launched its Optane flash array, branded S1:AFAn (All-Flash Array.Next), claiming it is the highest-performing, most cost-effective storage system on the market today and a logical upgrade for ageing all-flash arrays. CompuTech International (CTI) is the distributor of StorONE's S1:AFAn. Its TRU price tool can be used to run cost comparisons.
Europe-based SW-defined storage biz StorPool has claimed over 40 per cent y-o-y growth in H1 2020 and a 73 per cent NPS score. New customers included a global IT services and consulting company, a leading UK MSP, one of Indonesia's largest hosting companies, one of the Netherlands' top data centres, and a fast-growing public cloud provider in the UK. StorPool is profitable and hasn't had any funding rounds since 2015.
TeamGroup announced the launch of the T-FORCE CARDEA II TUF Gaming Alliance M.2 Solid State Drive (512GB, 1TB) and T-FORCE DELTA TUF Gaming Alliance RGB Gaming Solid State Drive (5V) (500GB, 1TB), both certified and tested by the TUF Gaming Alliance.
TeamGroup gaming SSDs.
Frighteningly fast filesystem supplier WekaIO said it has been assigned a patent (10684799) for “Flash registry with write levelling,” and has forty more patents pending. Forty? Yes, forty.
Wells Fargo financial analyst Joe Quatrochi has issued a note to subscribers describing the state of capacity increase play in the NAND industry. He’s interested because investors are curious about publicly quoted NAND industry suppliers. B&F is interested because of the technology picture it provides.
Update: Flash endurance table replaced with updated values. 20 November 2020.
Flash dies are built on circular wafers using semi-conductor techniques such as vapour deposition and etching. Flash foundry operators are facing a conundrum: how do they increase capacity to meet bit demand?
There are five basic ways. One is to increase capacity per die by adding layers to the 3D NAND dies they build on their wafers. A second option is to add bits to cells – moving from, for example, 3 bits/cell (TLC) to four (QLC).
The third option is to shrink the physical cell size on a NAND die, meaning more dies per wafer. However, a cell's ability to store rewritten values reliably – its endurance in terms of write cycles – decreases as the process size shrinks, and also as the cell bit count rises, to the point where no further progress is possible.
Indicative values from averaged industry sources.
The fourth option is for manufacturers to build more wafers, measured in wafer starts per month (wspm), which means building new fabs once a fab’s wafer making capacity is fully allocated.
The fifth option is to combine two or more of the above: increase the layer count and the bits/cell count, shrink the physical cell size, and raise wspm, all at the same time.
Quatrochi’s research gives an insight into the issues involved in these choices.
Layer counts
Adding layers adds process complexity and process time. Etching a hole through 64 layers is easier than etching one through 96 layers and the difficulty rises again as the layer count increases to 112, 128, 144, etc. This means that the yield of good dies from a wafer can go down as badly-etched holes render a die ineffective. Yields going down mean costs per TB of NAND on the wafer go up.
NAND Suppliers and known layer count stages.
Quatrochi mentioned an aspect ratio issue, stating: “The vertical stacking attributes of 3D NAND make it increasingly dependent on precision in the etching process to leverage higher aspect ratios, while deposition consistency continues to be more difficult.”
He added: “Aspect Ratio continues to increase with the increase in layer count – a 96-layer device is estimated roughly 70:1 (vs. 60:1 for 64L). Continued increases in Aspect Ratio results in a number of potential issues during the deposition and etching steps, including non-uniform layers, incomplete etch (holes do not reach the bottom), bowing, twisting, and critical dimension variation between the top and bottom of the stack.”
These issues can render a die useless and lower the yield per wafer.
Adding layers, however you do it, increases process time and thus costs as well.
Quatrochi wrote: “Moves to higher layer counts result in lower wspm as processing time increases / additional steps are added. For example 128L single-stack etch has been estimated as taking 2x the time of 96L single-stack.”
If a single machine takes five days to build a wafer then it can achieve 6 wspm (taking the average month as 30 days). If it takes 10 days, that halves. Suppliers have to ask whether adding, say, 30 per cent more capacity per die by increasing the layer count actually gets them more output capacity if their machines then take twice as long to make each wafer.
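A quick worked version of that trade-off in Python – our own arithmetic with the paragraph's illustrative numbers, not Quatrochi's model:

```python
DAYS_PER_MONTH = 30   # taking the average month as 30 days, as above

def monthly_output(days_per_wafer: float, capacity_per_wafer: float) -> float:
    """Wafer starts per month for one machine times capacity per wafer."""
    wspm = DAYS_PER_MONTH / days_per_wafer
    return wspm * capacity_per_wafer

base   = monthly_output(5, 1.0)    # 6 wspm at baseline capacity
denser = monthly_output(10, 1.3)   # +30% per wafer, but 2x process time
print(f"net output change: {denser / base:.0%}")   # -> 65%: a net loss
```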
A way of dealing with manufacturing problems caused by layer count increases is to build a die from two or three separate sub-units, called strings. A 96-layer die can be made by stacking two 48-layer dies one above the other – so-called string stacking. That reduces overall hole etch depth.
String-stacking does not provide a get-out-of-jail-free card, though: “A string-stack could add as much as 30 per cent more cost with added steps.” Balancing process time and cost increases against capacity gain methods is difficult.
Factor in the possible yield of good dies per wafer going down as layer counts increase and the calculation becomes more complex. Yields tend to increase over time as the manufacturing process is tuned, but it is not an exact science.
Bits per cell
Yet another complicating factor is the bits-per-cell count. You get declining benefits from this too. An SLC to MLC (2 bits/cell) change gets you a 100 per cent capacity increase. Moving to TLC (3 bits/cell) brings a 50 per cent capacity rise. But the TLC to QLC (4 bits/cell) transition means a 33 per cent step up, QLC to PLC (5 bits/cell) is a 25 per cent rise and, were it possible, a move to 6 bits/cell would bump up capacity just 20 per cent.
Actual yield, TB per wafer, can differ from the theoretical gain. Quatrochi wrote: “QLC 3D NAND TB/wafer estimated at upwards of 70TB/wafer; W. Digital QLC 112-layer estimated 40 per cent higher TB/wafer than TLC 112-Layer.” That’s better than the theoretical 33 per cent increase we noted.
NAND recording quality deteriorates with each increase in the cell bit count. QLC NAND is not the same stuff as TLC NAND: it takes longer to read and write bits, and the life of a QLC cell is shorter than that of a TLC cell, as noted above. PLC makes things worse again.
SK Hynix 128-layer wafer, chips, U.2 and ruler drives.
Over-provisioning – adding spare cells to replace worn-out ones – increases costs. What would be the point of replacing a QLC SSD with 20 per cent over-provisioning by a PLC drive of similar usable capacity with 50 per cent over-provisioning and the same overall endurance, if there was no cost benefit? The arithmetic below spells this out.
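Here is that arithmetic sketched in Python, with illustrative numbers of our own choosing:

```python
def raw_capacity_needed(usable_tb: float, op_fraction: float) -> float:
    """Raw cell capacity required once over-provisioning is added."""
    return usable_tb * (1 + op_fraction)

usable = 10.0                                      # target usable TB
qlc_raw = raw_capacity_needed(usable, 0.20)        # 12.0TB of QLC cells
plc_raw = raw_capacity_needed(usable, 0.50)        # 15.0TB of PLC cells

# PLC stores 5 bits per cell vs QLC's 4, so it needs 4/5 the cells per TB.
plc_cells_vs_qlc = (plc_raw * 4 / 5) / qlc_raw
print(f"PLC cell count vs QLC: {plc_cells_vs_qlc:.0%}")   # -> 100%
# Same cell count for the same usable capacity: no cost benefit at all.
```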
Decreasing layer count gains
The ability to drive capacity per wafer higher by increasing layer counts brings diminishing gains. Moving from 64 to 96 layers is a 50 per cent capacity increase. Transitioning from 96 to 128 layers adds 33 per cent more capacity. Adding 32 layers again to reach 160 layers means a 25 per cent capacity rise, and progressing to 192 layers brings a smaller 20 per cent rise. Eventually the gains are outweighed by the extra processing time and yield issues.
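The same diminishing-returns percentage arithmetic applies to both capacity levers – layer count and bits per cell. A quick check of the figures quoted above:

```python
def step_gain(old: int, new: int) -> str:
    return f"{old}->{new}: +{(new - old) / old:.0%}"

layer_steps = [(64, 96), (96, 128), (128, 160), (160, 192)]
cell_steps  = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]

print("layers:   ", ", ".join(step_gain(a, b) for a, b in layer_steps))
# layers:    64->96: +50%, 96->128: +33%, 128->160: +25%, 160->192: +20%
print("bits/cell:", ", ".join(step_gain(a, b) for a, b in cell_steps))
# bits/cell: 1->2: +100%, 2->3: +50%, 3->4: +33%, 4->5: +25%, 5->6: +20%
```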
At that point you can add bit capacity by making more wafers. A 50 per cent utilised machine can make twice as many wafers by being 100 per cent utilised. After that you need more machines, and that leads to building a new fab – ker-ching – that will be $15bn please.
Net:net
The pace of gains in SSD capacity tends to slow because of these various NAND die production issues. The effect can be seen in a chart showing industry shipped capacity by layer count over time.
We see that it is taking longer (in terms of quarters after launch) for 96L NAND to rise to 70 per cent of industry shipped capacity than it took 64/72L NAND. The first iteration of 100+ layer NAND looks likely to take longer still.
The NAND and SSD industries are very fertile, technologically speaking. They are not, overall, running out of capacity-increase runway, but the costs and difficulties of increasing capacity are rising and cul-de-sacs are approaching, as with cell bit counts.
Ruler format drives will increase the physical space available for NAND dies. Drive and host controller error checking technology, write cycle reduction through random write avoidance, and over-provisioning will help QLC and then PLC NAND become usable in more cases. Better process technology and materials will help smaller cell sizes become practical. The outlook is positive.
Wafers and disks
Such layer-count issues do not affect disk drive manufacturers, who also use semiconductor-style techniques to build bit storage entities – magnetic domains – on circular substrates, called disks, that are physically smaller than NAND wafers. Disks have a single-layer, planar structure. The HDD manufacturers' problem is shrinking the size of the bits while keeping the stored bit values stable and readable.
With the magnetic materials currently in use, shrinking the bit area reduces the amount of magnetic material holding each bit's value. The stability of the bit area's magnetic field declines towards unpredictability at room temperature as bit size diminishes.
Disk suppliers are moving to energy-assisted recording to achieve bit stability and readability at room temperature, by writing the bits to a more stable recording material that resists magnetic polarity change more strongly. The bit area is made receptive to change with heat (HAMR) or microwave (MAMR) energy. This brings its own problems, but having multiple layers is not one of them.
Full-year results from video workflow storage-focussed Quantum show stabilising revenues, improving gross margin and lower costs but the fourth quarter was hit by the pandemic and revenues slumped. Growth prospects in hyperscalers and the US public sector look great though.
Chairman and CEO Jamie Lerner has been reshaping and recovering Quantum’s business, refocussing the strategy on supporting video workflows and cutting costs whilst preserving R&D. The pandemic sent revenues down towards the end of the fourth fiscal quarter but the year as a whole was pretty satisfactory.
Lerner’s prepared quote declared: “Quantum delivered significantly improved performance in fiscal 2020, particularly in terms of profitability, despite a marked slowdown in revenue in mid-March when the outbreak of the COVID-19 pandemic halted professional sporting events and many of our customers in the media and entertainment sectors temporarily ceased filming operations.”
Full year revenues to March 31 2020 were $402.9m, a mere $220K more than last year, but growth nonetheless. There was a net loss of $5.2m, a pleasingly large improvement on last year's $42.8m loss. Gross margin increased 120 basis points to 42.8 per cent and total operating expenses decreased $21.1m, or 12 per cent. Research and development expenses increased 13 per cent to $36.3m.
There was a 3 per cent increase in product revenue with growth in primary storage and devices and media partially offset by a decline in secondary storage systems. Quantum experienced a slight decline in service revenues, particularly from legacy tape backup customers as tape transitions from backup to archive.
Cash and cash equivalents of $6.4m at year end were down from the year-ago $10.8m. Quantum has secured an additional $20m credit facility to help here.
The fourth quarter
Q4 fy20 revenues were $88.2m, down 15 per cent from the year-ago’s $103.3m. Product revenue slumped 20 per cent due to lower secondary storage demand and lower hyperscale demand, caused by the pandemic.
There was a $3.8m loss, better than last year’s $9.4m loss, and also creditable since product revenues were down by a fifth.
We can see by charting Quantum’s quarterly revenues by fiscal year that fy20 had seen a levelling of the downward revenue trend – until Q4 was affected by the pandemic which sent revenue abruptly down.
Earnings call
In the earnings call Lerner said: “Quantum is a healthy, cost-efficient innovator focused on higher-value and higher-margin solutions. The pandemic will subside or at least be contained, and our customers in sports and entertainment will return.”
Discussing the tape market, CFO Mike Dodson said: “The adoption of LTO-8 has lagged expectations primarily due to attractive pricing for LTO-7, and customer anticipation of the next-generation LTO-9 that is expected to be launched in the next six months.”
Blocks & Files also points to the shoddy dispute between tape media suppliers Sony and Fujifilm, which limited LTO-8 media supplies.
Lerner said a major release of the StorNext file system software is expected for the end of 2020, with a software-defined architecture. Quantum will “simplify and converge our product offerings, expanding the use cases we can address to include edge environments in small and post animation houses.”
Largest archive in the world
Tape is not dead. Both it and object storage are used for archives by hyperscale customers, enterprises, governments and service providers. Lerner said: “Our experience in the last few years in building the largest archive in the world has informed our product road map and new architectures.” What hyperscaler’s archive is that, we wonder? Is it a glacial one, amazonian in size?
The hyperscale business is expanding, with Lerner saying: “Our hyperscaler business is actually starting to expand into very large telco, very large enterprise.”
The US public sector is a growth area for Quantum too, according to Lerner: “We are just becoming more relevant in unstructured data in the government, and that’s everything from the national labs to space programs with NASA. We want to go to Mars with NASA.
“There’s going to be a lot of imagery there. There’s a huge amount of work, whether it’s disease modelling, warfighter modelling. There’s just a huge amount of work happening in the national laboratories. And every single thing that we do in our military is video attached. So, I just think that is a big growth area for us.”
The guidance for the next quarter is for revenues of $73m plus/minus $1m, as Quantum expects the customer delays and disruptions experienced in the last two weeks of Q4 fy20 to have a more pronounced impact. That’s well down from Q1 fy20’s $106.6m.
But from then on, pandemic willing, things should pick up.
Dell EMC has confirmed PowerFlex as the new name for its scale-out, multi-hypervisor, virtual SAN and HCI VxFlex product set, and added data safety features to the OS.
PowerFlex rack.
PowerFlex, sold as a hardware appliance, is Dell EMC’s 3 to multi-thousand node scale-out block storage product providing a virtual SAN or hyper-converged deployment mode. It uses PowerEdge X86 server nodes in a parallel node architecture to provide basic storage, a scale-out SAN (2-layer), HCI (1-layer) or mixed system, and was available as a VxFlex appliance, VxRack with integrated networking or Ready Node. (Now they are PowerFlex-branded.) Dell EMC has said it’s software-defined but the SW doesn’t appear to be available on its own.
VxFlex itself is the 2018 rebrand of the acquired ScaleIO product, which EMC bought in June 2013 for circa $200m. This was a Dell EMC alternative to VMware’s vSAN. Compared to VxRail, which supports VMware exclusively and with which it overlaps, VxFlex supports multiple hypervisors and bare metal servers, and can be used as a traditional SAN resource or in HCI mode.
PowerFlex appliance.
PowerFlex storage can be used for bare-metal databases, virtualised workloads and cloud-native containerised applications. Its parallel architecture makes for quick rebuilds when drives or nodes fail.
Dell EMC PowerFlex performance claims.
The PowerFlex OS, Dell EMC said, now delivers native asynchronous replication, alongside the existing sync replication, and disaster recovery with RPO down to 30 seconds. It also delivers secure snapshots for customers with specific corporate governance and compliance requirements, including healthcare and finance, the company added.
PowerFlex appliance table.
PowerFlex appliances have all-flash hardware nodes with SATA, SAS and NVMe drive support. Blocks & Files expects more NVMe drive support, along with NVMe-oF and Optane DIMM and/or SSD support in the future to pump up the performance power further.
Dell EMC said it has completed its brand and product simplification under the Power portfolio umbrella. VMware-focussed products have a V-something moniker while Dell’s own products use the Power-prefix.
Intel’s DC P4510 is a U.2 (2.5-inch) data centre SSD with a PCIe 3 x 4 lane, NVMe interface. It was announced in January, 2019, comes in 1, 2, 4, and 8TB capacities and is built using 64-layer 3D NAND in TLC format. It pumps out up to 641,000 random read IOPS and 134,500 random write IOPS. The sequential read and write bandwidth numbers are up to 3.2GB/sec and 3.0GB/sec.
Intel ruler format.
Now there is a 15.36TB capacity version, using the same NAND, in the EDSFF E1.L format, L meaning long. You can cram more of these ruler drives into server chassis than U.2 format SSDs, thus gaining more storage density in the same server chassis space.
You can have either a 9mm or an 18mm heat sink with the drive and its random read/write IOPS numbers are 583,800 and 131,400 IOPS; both less than the U.2 drive with its 8TB maximum capacity. There must be a reason for this performance drop, despite the drive logically having more dies and thus more parallel access headroom, meaning more performance ought to be possible.
On the sequential read and write front the 15.36TB version performs at 3.1GB/sec for each; more or less the same as the U.2 version.
The endurance is 1.92, 2.61, 6.3, 13.88 and 22.7 PBW (petabytes written) as capacity rises from 1TB to 15.36TB, with a limited 5-year warranty.
You can check out an Intel DC P4510 data sheet for more information.
Blocks & Files expects major server manufacturers such as Dell EMC, HPE and Lenovo to embrace the EDSFF format from now on, with EDSFF servers becoming mainstream in 2021. We might also expect 32 and 64TB ruler capacity points to come onto the scene.
GPU-accelerated data warehouser SQream has put another round of VC cash in its pockets.
SQream took in $39.4m in a B+ round led by Mangrove Capital Partners and Schusterman Family Investments. Existing investors, including Hanaco Venture Capital, Sistema.vc, World Trade Center Ventures, Blumberg Capital, Silvertech Ventures and Alibaba Group, also pumped in cash. Total SQream funding is now $65.8m and this round follows a $19.8m B2 round in 2018.
The cash will pay for what SQream calls “top talent recruitment” to develop the product technology.
The company claims its growth is accelerating, despite the Covid-19 pandemic.
Two VCs – Roy Saar (Mangrove Capital and Wix founder) and Charlie Federman (partner at Silvertech Ventures) – will join the SQream board of directors.
A canned quote from Saar said: “Given the growth of data, which has been on a rocket-ship trajectory to zettabyte levels due to rapid digitalisation, we are just scraping the surface of how companies will be generating value from their data.”
The data warehouse business is on a roll, with Snowflake preparing for an IPO. SQream can’t afford to fall behind and its GPU turbo-charging should give it an edge.
Update: Read/write bandwidth and latency numbers added. 25 June 2020.
WD has built a faster NVMe SSD and populated an OpenFlex NVMe-oF composable JBOF enclosure with 24 of them to provide a hot box, fast-access data centre flash array.
This is somewhat surprising as WD has recently divested its data centre storage array business, witness the disposal of IntelliFlash arrays to DDN and ActiveScale archiving to Quantum. But the OpenFlex box does not have a storage controller delivering data services: it’s bare metal.
A canned quote about the new WD products from IDC research VP Jeff Janukowicz struggled to say anything specific about them: “The future of Flash is undoubtedly NVMe as it’s all about speed, efficiency, capacity and cost-effective scalability, and NVMe-oF takes it to the next level. … the company is well positioned to help customers fully embrace NVMe and get the most out of their storage assets.”
Yusuf Jamal, SVP of WD’s Devices and Platforms Business, provided an anodyne quote, too: “We’re fully committed to helping companies transition to NVMe and move to new composable architectures that can maximise the value of their data storage resources.”
The SSD
The DC (Data Centre) SN840 uses the same 96-layer 3D NAND TLC technology as the earlier SN640. Its U.2 (2.5-inch) capacity range is 1.6, 3.2 and 6.4TB (3 drive writes per day) or 1.92, 3.84, 7.68 and 15.36TB (1DWPD). The 15.36TB peak is double the earlier SN640’s 7.68TB U.2 maximum. The SN640 also came in a 30.72TB EDSFF E1.L ruler form factor, which is not available for the SN840.
The SN840 outputs up to 780,000/250,000 random read/write IOPS, much more than the SN640’s maximum 480,000/220,000 IOPS. The sequential read/write bandwidth numbers are 3.5GB/sec and 3.4GB/sec, and the latency is 157µs or lower.
WD SN840 performance charts
Its 1 and 3 drive write per day formats make it suitable for either read-intensive or mixed read/write-use workloads. The SN840 is a dual-ported drive with power-loss protection and TCG encryption.
The JBOF
The OpenFlex Data24 NVMe-oF Storage Platform is a 2U x 24 slot all-flash box populated with the SN840 SSDs. It uses RDMA-enabled RapidFlex controller/network interface cards, developed with WD’s acquired Kazan Networks NVMe-oF Ethernet technology. Notice the vertical integration here.
This JBOF (Just a Bunch of Flash) follows on from the OpenFlex F3100 which was an awkward design housing 10 x 2.5-inch SSDs inside 3.5-inch carriers.
OpenFlex Data24.
The 2U Data24 box houses 24 hot-swap SN840s with a maximum 368TB capacity. Up to 6 hosts can be directly connected to this JBOF with a 100GbitE link and 6 RapidFlex NICs. They get the benefit of up to 13 million IOPS, 70GB/sec throughput and sub-500 nanosecond latency.
RapidFlex controller.
The Data24 can interoperate with the existing OpenFlex F-series products and comes with a five-year limited warranty.
It’s certainly a fast box and comes with no traditional storage controller providing data services such as erasure-coding, RAID, snapshots, or data reduction. This is just a stripped down, NVMe-oF-accessed bare flash drive dragster box supporting composability.
WD said it can be used for high-performance computing (HPC), cloud computing, SQL/NoSQL databases, virtualisation (VMs/containers), AI/ML and data analytics. We envisage DIY hyperscalers and system builders will be interested in looking at the box, as a SAS JBOF replacement perhaps. All WD’s data centre SSD customers will pay attention to the DC SN840.
Ultrastar DC SN840 NVMe SSD shipments will begin in July. The OpenFlex Data24 NVMe-oF Storage Platform is scheduled to ship in autumn/fall. RapidFlex NVMe-oF controllers are available now.
Western Digital has responded to its shingled Red NAS drive issue by adding a Red Plus product variant, and details of the company’s coming 16TB and 18TB Gold disk drives have leaked into the market.
The Golden touch
Tom’s Hardware has revealed online eTailers are listing 16TB and 18TB Western Digital Gold drives ahead of their formal launch.
The Gold series officially spans a 1TB to 14TB capacity range, with the 12TB and 14TB models helium-filled. The drives use conventional magnetic recording, not shingled, spin at 7,200rpm and employ a 6Gbit/s SATA interface. They are also built for hard work: 24x7x365.
The UK’s Span lists a WD181KRYZ 18TB Gold model for £624.00. It has a 5-year limited warranty and a 2.5 million hours MTBF rating. The cache and bandwidth numbers are not certain, though Span suggests a 52GB cache and a 257-267MB/sec read speed. A 14TB Gold disk cost $578.00.
We expect a formal launch in a few days or weeks.
WD Red drives and shingling
WD has been facing user problems with its WD Red NAS disk drives using shingled magnetic recording. To give users clarity, it’s now splitting the line into shingled (Red) and non-shingled (Red Plus) product types. The Red Pro line is not affected by this change.
Expanded Red disk drive range.
A WD blog explained: “WD Red Plus is the new name for conventional magnetic recording (CMR)-based NAS drives in the WD Red family, including all capacities from 1TB to 14TB. These will be the choice for those whose applications require more write-intensive SMB workloads such as ZFS. WD Red Plus in 2TB, 3TB, 4TB and 6TB capacities will be available soon.”
“The Red line with device-managed shingled magnetic recording (DMSMR) (2TB, 3TB, 4TB, and 6TB capacities) will be the choice for the majority of NAS owners whose demands are lighter SOHO workloads.”
The firm added: “We want to thank our customers and partners for your feedback on our WD Red family of network attached storage (NAS) hard drives. Your real-world insights shared through in-depth reviews, blogs, forums and from our trusted partners are directly contributing to our work on an expansion of models and clarity of choice for customers. Please continue sharing your experiences and expectations of our products, as this input influences our development.”
Comment
This blog and the Red Plus product line clarification will be seen by some as recognition of the hundreds of users who reported problems with shingled Red NAS drives when the write load exceeded the drives’ ability to delay performance-sapping shingled writes by caching them in a buffer until the drive was idle.
It is very good that WD is listening to its users, but a pity that its product design engineers didn’t realise there would be a problem in the first place.
HPE wants to be part of the picture when businesses develop and deploy apps and workflows with AI and ML functionality in containerised environments and has announced a newly created software portfolio – Ezmeral – to that end.
HPE is another mainstream vendor looking to grab a part of the emerging Kubernetes container orchestration territory.
HPE’s Kumar Sreekanti, CTO and head of software, provided a canned overarching quote: “The HPE Ezmeral software portfolio fuels data-driven digital transformation in the enterprise by modernising applications, unlocking insights, and automating operations.” Yes, OK.
“Our software uniquely enables customers to eliminate lock-in and costly legacy licensing models, helping them to accelerate innovation and reduce costs, while ensuring enterprise-grade security.” (That refers to an open source piece of the news.)
HPE is getting on a Kubernetes train with other passengers already onboard. NetApp also has a Kubernetes framework initiative with its Project Astra, and MayaData has its Kubera Kubernetes management service.
Ezmeral brand
Ezmeral is wrapped in a trendy digital transformation marketing framework and is a brand name developed from esmeralda, the Spanish word for emerald, with lots of high-flown HPE quotes talking about “The transformation of raw emerald to a cut and polished stone to reveal something more beautiful and valuable is analogous to the digital transformation journey our customers are on.”
The marketing bods have clearly linked emeralds, which are green, with GreenLake, HPE’s all-singing, all-dancing, every-product subscription deal.
Ezmeral portfolio
Back in the real world, HPE has combined its acquired BlueData and MapR assets with open source Kubernetes-related software to build the Ezmeral portfolio. This spans IT environments from edge locations through data centres to the public cloud, and includes:
Container orchestration and management,
AI/ML and data analytics,
Cost control,
IT automation and AI-driven operations,
Security
The Ezmeral Container Platform and Ezmeral ML Ops are two software items announced as part of the portfolio.
Ezmeral Container Platform
This enables customers to manage multiple Kubernetes clusters with a unified control plane, and use a MapR distributed file system for persistent data and stateful applications. Interestingly HPE says customers can run both cloud-native and non-cloud-native applications in containers without having to rewrite the legacy apps.
This involves use of the HPE-contributed KubeDirector open source project which provides the ability to run non-cloud native stateful applications on Kubernetes without modifying the code.
But it is no silver bullet, being focused on distributed stateful applications. KubeDirector enables data scientists familiar with data-intensive distributed applications such as Hadoop, Spark, Cassandra, TensorFlow, Caffe2, etc. to run these applications on Kubernetes – with a minimal learning curve and no need to write Go code.
OK – so forget running the broad mass of legacy apps in containers.
Ezmeral ML Ops software uses containerisation to introduce a DevOps-like process to machine learning workflows. HPE claims it will accelerate AI deployments from months to days. The BlueData part of this refers to using container technology for Big Data analytics and machine learning.
The Ezmeral Container Platform and Ezmeral ML Ops products will be available as software and also delivered as a cloud service through GreenLake.
Startup Nebulon has come out of stealth to reveal scale-out, on-premises, server SAN, block-based storage using commodity X86 servers bolstered with storage processing offload cards, along with a data management service delivered from its cloud.
It claimed its so-called cloud-defined storage (CDS) is less pricey than equivalent all-flash SAN array storage and doesn’t use up CPU capacity in its host servers, a disadvantage it claims affects both software-defined storage (SDS) and hyperconverged infrastructure (HCI) systems using commodity server chassis.
Siamak Nazari.
A prepared quote from Siamak Nazari, co-founder and CEO of Nebulon, said: “Cloud-Defined Storage delivers global insights, AI-based administration and API-driven automation making enterprise-class storage a simple attribute of the data centre fabric with self-service infrastructure provisioning and storage operations as-a-service for application owners.”
Nebulon’s storage is embodied in its Storage Processing Unit, an add-in, FH-FL PCIe card, with an 8-core, 3GHz ARM CPU plus encryption/dedupe offload engine, that is layered in front of a host server’s SAS or SATA SSDs, and connects to them via a triple SerDes connector. Nebulon co-founder and COO Craig Nunes told B&F the SPU will support “NVMe when it becomes generally available in the early Fall.”
The SPU card, which looks to upstream system software like an HBA or RAID card, presents block volumes to applications running in the servers. Up to 32 servers can be clustered in an nPod, with the SPUs connected by a 10 or 25GbitE network. There is a separate 1GbitE port for management from the cloud.
Nebulon SPU
Data services provided by the SPU include deduplication, compression, encryption, erasure coding, snapshots and mirroring. There is no GPU on the card.
The SPU contains 32GB of NVRAM to speed writes, and reads come straight from the SSDs. NVRAM write caching means the SPU can turn random writes into sequential writes to the SSDs, thus helping to lengthen the drives’ endurance. Data is not striped across SPUs.
Initially 4TB SSDs are supported, with up to 24 per SPU, meaning 96TB, and a maximum capacity of 3,072TB across the 32 SPUs in an nPod. There is a single, all-flash, block-access storage tier.
The performance on reads is slower than if NVMe SSDs were supported. Nunes told us: “At the device level, SATA latencies can be many times [that of] NVMe. However, when measured with the enterprise data services software stack, the latencies at the application level will be in the 300us to 400us range and acceptable to the cloud native, container and hypervisor use cases we are targeting.”
OEM channel
Nebulon sells its SPU and ON service through an OEM channel, with both HPE and Supermicro signed up so far, and a third OEM likely. An HPE configuration is based on the ProLiant DL380 Gen 10 server in a 2U x 24 slot chassis, while Supermicro uses its Ultra line of servers for Nebulon storage.
That means that actual server hardware configurations, including drive types and capacities come from the OEMs. So too do purchase and/or subscription arrangements.
Nebulon is pitching its product, through its OEMs, at mid-to-large enterprises needing block storage at PB and up scale and wanting to increase storage and app server efficiency and reduce acquisition and management costs.
The card can be set up using application templates to optimise it for different workloads, such as VMware, MongoDB and Kubernetes. Nebulon storage supports any OS or hypervisor in the host server. NebOS upgrades are non-disruptive.
SPU management
The SPU runs NebOS software and is managed through a Nebulon ON SaaS service hosted in a Nebulon cloud which uses multiple CSPs and multiple regions for high-availability. It is updated through the ON service.
Nebulon says the ON service manages fleets of Nebulon systems at scale. These systems send telemetry messages to the ON Cloud; tens of thousands of storage, server and application metrics per hour. These are stored in a distributed time series database.
ON includes an AIOps function which looks at the telemetry, analyses it in real time, and responds to adverse events in seconds by re-jigging a Nebulon system to respond to changing operational patterns. It also provides storage usage metrics over time and predictive analytics.
Nebulon ON dashboard
Customer admin staff can self-provision Nebulon storage through an ON dashboard and ON can deliver automated updates across a Nebulon fleet.
Replication will be delivered as a future upgrade, possibly in the next software release. We expect stretch cluster support in a future release as well.
The SPU is a DPU
Nebulon said the SPU card is an example of a DPU (Data Processing unit), a dedicated storage or networking processor intended to offload storage and/or network processing from a host server’s CPU so that it can concentrate on application processing.
Examples of DPU supply and use include Diamanti, Fungible, Pensando, Nvidia (SmartNICs) and AWS for in-house use (Nitro).
We might say Nebulon is an HCI system on DPU steroids – so much so that Nebulon claims it is no longer an HCI system at all, but an ultraconverged system.
The SPU is a gateway to the storage for its host CPU. If it fails, the host server loses access to its storage. A server can have other storage installed which is not accessed through the SPU. A loss of internet connectivity will not prevent an SPU from functioning.
Competition
The Nebulon system, being a kind of super-server SAN system, will compete with Dell’s VxRail and Nutanix systems. It will also compete with disaggregated HCI systems such as Nimble’s dHCI and Datrium.
Inside HPE the Nebulon storage competes with or complements SimpliVity HCI and Nimble dHCI systems, and Primera, 3PAR and Nimble arrays. It also competes/complements other HPE storage partners providing block array services such as Datera.
Nebulon is not supported by HPE’s cloud-delivered, predictive analytics Infosight management or its GreenLake subscription service, but it is early days.
Nebulon’s Craig Nunes says more than half of HPE’s servers are sold into customers with non-HPE storage. The Nebulon storage, which should cost less than external array storage, and uses lower server SAN CPU resources, gives HPE a win-back opportunity in his view.
Regarding Pensando Nunes tells us: “Each Nebulon SPU has an 8-core 3GHz processor and 32GB of battery backed NVRAM, and runs the entire software stack you might find on a 3PAR or Pure Storage array controller. As a compare, Pensando supports 8GB of RAM—enough for the network/security functionality but not enough to run a full storage SW stack on the card.”
Executive Chairman David Scott says Nebulon and Pensando are complementary: “I could easily see some use cases where a customer has both a Pensando DSC card and a Nebulon SPU in the same application server(s). :)”
Comment
Nebulon is entering a new and undeveloped market, the DPU-enhanced server SAN market with cloud-delivered, AIOps management, and its competitors, at the OEM level, are suppliers such as Pensando and the other DPU suppliers.
At the end-user level its competitors are, well, legion, and existing SDS and HCI vendors will say Nebulon is just another SDS or HCI vendor, one using proprietary HW and SW to give its host server chassis a performance kick. If customers accept that positioning then suppliers will compete on speeds, feeds and support – the usual stuff.
If customers see Nebulon as a new class of server SAN, then the OEM+Nebulon offer will be differentiated, although this will require a marketing and sales push.