NetApp is setting up its own streaming TV service, starting with digital content from its Insight 2021 virtual event and including a performance by Sting. You remember Sting?
Whitney Cummings.
There will be an Insight event channel on the service, and the Insight event is morphing from an annual show to an always-on online hub for on-demand and live content. As well as webcasting a live Sting performance, NetApp’s Insight channel will feature hosting by Whitney Cummings — billed as “the reigning Queen of American stand-up”.
A sample Cummings joke goes: “Found a fragrance called Vixen. Guess they can’t name them after the people who actually wear them. Nobody’s going to buy Secretary.”
And another: “Stand-up is a lot like sex. There’s a lot of crying involved and I get paid to do it.”
We wonder — we seriously wonder — just how rude she will be and what she will say about NetApp.
We envisage NetApp TV being like a series of video blogs, podcasts and interviews with executives and customers about NetApp’s products, services and views on industry trends. Maybe NetApp will use it for product and service launches as well.
NetApp Insight 2021 runs from October 20 to 21 and you can register here.
Micron is launching a PCIe Gen-4 enterprise-class NVMe SSD with ruler formats as it stakes a claim for greater datacentre SSD market share, but it’s using the same old 96-layer 3D NAND as the prior 7300 NVMe SSD, which was based on PCIe Gen-3.
The earlier 7300 SSD was a TLC (3bits/cell) drive which came in PRO (1 drive write per day) and MAX (3 DWPD) versions, both packaged in M.2 (gumstick) and 2.5-inch (U.2) form factors. The newer 7400 sticks with the PRO and MAX variants, but there are now seven form factors in total: M.2 2280 in 80mm and 110mm lengths, U.3 2.5-inch 7mm and 15mm thick, and three sizes of E1.S (short ruler or large gumstick). The E1.S ruler is designed to take over from the M.2 format.
Jeremy Werner, Corporate VP and GM of Micron’s Storage BU, issued the announcement quote: “Our customers need improved storage density and efficiency to run their businesses. The Micron 7400 SSD is flexible in its ability to address myriad applications and system interoperability requirements, enabling deployments and delivering value from edge to cloud.”
Micron 7400 SSD variants.
Customers don’t get any improved density from denser NAND; Micron is already shipping far denser 176-layer product than the legacy 96-layer flash used here, for example in the 2450 and 3400 notebook and desktop/workstation PCIe Gen-4 drives in M.2 format, announced just four months ago. Instead, 7400 customers will get improved density from the E1.S format drives, more of which can be packed into a server chassis than the M.2 product.
7400 and 7300 capacities in the M.2 and 2.5-inch form factors are identical except in the MAX products. There, the 7300 MAX M.2 capacities are 400GB and 800GB while the 7400 MAX M.2 has these capacities plus 1.6TB and 3.2TB. A table provides the full capacity points and maximum performance numbers:
The performance increases with capacity and only the maximum numbers are shown.
PCIe Gen-4 helps give the 7400 a significant speed boost over the PCIe Gen-3 7300, as a glance at the following table will show:
Micron 7300 performance table.
The 7400 outperforms the PCIe Gen-4 2450, but against the 3400 the picture is mixed. The 3400 maxes out at 720,000/700,000 random read/write IOPS and 6.6/5.0 sequential read/write GB/sec bandwidth, while Micron’s 7400 PRO U.3 pumps out 1,000,000/400,000 random read/write IOPS and 6.6/5.4 sequential read/write GB/sec. So the 7400 delivers more random read IOPS but fewer random write IOPS than the 3400, matches it on sequential read bandwidth, and has more sequential write bandwidth.
Controller security features are enhanced compared to the 7300, as the 7400 comes armed with TCG-Opal 2.01 and IEEE-1667, Firmware Activate without Reset, Power Loss Protection, Enterprise Data Path Protection, Secure Erase, Secure Boot, Hardware Root of Trust and Secure Signed Firmware.
The 7400 controller features support for 128 namespaces to increase scalability for software environments, and also supports Open Compute Project (OCP) deployments.
Check out Micron’s 7400 web pages to find a product brief and other documentation.
The contenders
In the PCIe Gen-4 arena, Samsung has its PM9A3 — an E1.S format drive which complies with the OCP specification. Its capacities range from 960GB to 7.68TB — virtually the same as Micron’s 7400. Kioxia’s CM6 and CD6 enterprise and datacentre SSDs in U.3 format outperform Micron’s 7400. Liqid’s LQD4500 Honey Badger SSD simply blows the Micron drive out of the water performance-wise, with its four million random IOPS, but it does use 16 PCIe lanes instead of the 7400’s four.
SK Hynix is sampling two PCIe 4.0 drives: the 96-layer TLC flash PE8010 and PE8030, both in U.2 format. It is also prepping the 128-layer TLC PE8111 in EDSFF long format. Provisional performance figures are similar to the 7400’s.
Micron may well have an edge because of its form factor range (M.2, U.3 and E1.S) and its security features.
Samsung has an open-source Scalable Memory Development Kit (SMDK) which virtualises memory attached to the CXL interconnect.
The Compute eXpress Link is a developing open, industry-backed standard interconnect to enable servers, accelerators, memory expanders and smart I/O devices to exchange data using shared memory and at high speed across a PCIe Gen-5 connection. Samsung makes DRAM and NAND-based products which can be attached to a CXL link.
Cheolmin Park, VP of the Memory Product Planning Team at Samsung Electronics, said in a statement: “In order for datacentre and enterprise systems to smoothly run next-generation memory solutions like CXL, development of corresponding software is a necessity.”
Samsung wants to deliver “a total memory solution that encompasses hardware and software, so that IT OEMs can incorporate new technologies into their systems much more effectively.”
Samsung launched a CXL expander device in May, with CXL 2.0 support. This was a CXL-connected DDR5 memory module with memory mapping, interface converting and error management technologies to enable CPUs and GPUs to use its DDR5 DRAM as main memory. It suggests a host server’s per-CPU memory capacity can be increased up to 50 per cent and bandwidth boosted up to 75 per cent. There would be an expander per CPU.
Samsung CXL Expander box and card.
The SMDK consists of pre-built code libraries and APIs to enable a host’s main memory and the CXL memory expander to work together in heterogeneous memory systems. There are two APIs. System developers can use a compatibility API to incorporate CXL-attached memory into IT systems without modifying existing app environments, or an optimisation API to optimise app software to suit special needs.
Samsung heterogeneous memory diagram. The memory zones recognise normal DRAM and CXL memory separately.
The SMDK supports memory virtualisation, meaning separate pools of memory, such as server socket-attached DRAM and CXL-attached memory (such as DRAM or storage-class memory) can be shared. It includes what Samsung calls a proprietary Intelligent Tiering Engine, with which the SMDK user can identify and configure the pool memory type, capacity and bandwidth to match particular use cases with tiering priorities.
Samsung’s SMDK is available on a limited basis for initial testing and optimisation and will be open-sourced within the first half of next year.
Yesterday VMware announced its Project Capitola development to virtualise different memories into a single logical pool. That involves the CXL interconnect, and Samsung is one of VMware’s partners in the project. We would hope for similar initiatives to emerge from other hypervisor developers, such as Red Hat and Nutanix, leading to an industry standard so that application developers don’t have to re-invent their CXL wheel for each hypervisor they support.
Questions
We have asked Samsung several questions about this SMDK, and the answers are below each question:
1. The SMDK is open source. Will the Intelligent Tiering Engine be open sourced?
→ Yes, it will be open sourced, too.
2. Will the Intelligent Tiering Engine (ITE) identify and configure the memory type, capacity and bandwidth of non-Samsung CXL-attached memory types?
→ Yes, as long as they comply with CXL and PCI specifications.
3. Will it be used by server operating system developers, system SW developers (such as in-memory SW tools) or application developers or all three?
→ Yes, it can be used by all three, however the current version mostly targets application developers and system SW developers.
4. Is Samsung expecting, or hoping, to work towards an open standard for CXL-attached memory devices?
An Israeli startup called NeuroBlade has exited stealth mode, built a processing-in-memory (PIM) analytics chip combining DRAM and thousands of cores, put four of them in an analytics accelerating server appliance box, and taken in $83 million in B-round funding.
The idea is to take a GPU approach to big data-style analytics and AI software by employing a massively parallel core design, but take it further by layering the cores on DRAM with a wide I/O bus architecture design linking the cores and memory to speed processing even more. This design vastly reduces data movement between storage and memory and also accelerates data transfer between memory and processing cores.
A statement from CEO Elad Sity said: “We built a data analytics accelerator that speeds up processing and analysing data over 100 times faster than existing systems. Based on our patented XRAM technology, we provide a radically improved end-to-end system for the data centre.”
Patrick Jahnke, head of the innovation office at SAP, which has been working with NeuroBlade, provided a supportive quote: “The performance projections and breadth of use cases prove great potential for significantly increased performance improvements for DBMS at higher energy efficiency and reduced total cost of ownership on-premises and in the cloud.”
PIM XRAM chip
The rationale is the same as for having GPUs accelerate graphics workloads, but it goes a step further with a PIM architecture called XRAM computational memory. NeuroBlade says the XRAM processors “enable the system to compute inside the memory itself, drastically reducing data movement, saving energy, and speeding up data analytics processing times.”
NeuroBlade XRAM graphic.
The PIM XRAM chip is embedded into an Intense Memory Processing Unit (IMPU) and the appliance, in which a quartet of IMPUs is installed, is called Xiphos. This, NeuroBlade says, “has a parallel, scalable, and programmable architecture that is optimised for accelerated data analytics, enabled through terabytes-per-second of memory bandwidth.”
Xiphos appliance.
The Xiphos motherboard has a PCIe capability about which NeuroBlade said: “Everything is connected on top of PCIe fabric.” The appliance contains local direct-attached NVMe storage, with up to 32 NVMe SSDs per appliance. An x86 CPU running Linux acts as the appliance controller.
An Insights Data Analytics software suite is said to provide the software needed to support high-performance data analytics on Xiphos hardware and to integrate with the existing ecosystem.
Xiphos SW suite.
We asked about the bandwidth on the wide I/O bus and a spokesperson said: “We are talking about multiple x16 lanes PCIe buses, the official spec is still under NDA at this stage.”
We also asked what the 100x performance increase was based upon and were told: “We compare to standard TPC benchmarks and queries we work on with customers.”
Speedata
Coincidentally, Israel-based Speedata exited stealth at the end of September and announced an APU or Analytics Processing Unit chip along with $55 million in funding. A NeuroBlade spokesperson told us: “NeuroBlade has paying customers already and is shipping out to data centres all over the world — a big differentiator here.” NeuroBlade is also further along in the process, and its technology differs in using XRAM computational memory.
NeuroBlade said: “The data analytics market is projected to be somewhere at $65 billion so the fact that Speedata identified the same target is great. We see even the hyperscalers like Amazon working on new solutions. Couple the giants with other startups it really just suggests that this is a new market with plenty of room to approach in different ways.”
NeuroBlade background
NeuroBlade was founded in 2016 in Tel Aviv by CEO Elad Sity and CTO Eliad Hillel, who is also VP for Product Strategy, and formally launched as a company in 2018. Sity and Hillel were in the technological unit of Israel’s Intelligence Corps and then worked at SolarEdge.
It raised a $4.5 million seed round in 2018 and a $23 million A-round the next year. The B-round was led by Corner Ventures with contribution from Intel Capital, and supported by current investors StageOne Ventures, Grove Ventures and Marius Nacht plus technology companies including MediaTek, Pegatron, PSMC, UMC and Marubeni. Total funding is now $117.5m.
Hillel and Sity have filed patents, such as US patent number 10,762,034 for memory-based distributed processor architecture.
The company has passed the 100-employee count and started shipping its Xiphos data accelerator to customers and partners worldwide. The new cash will be used to expand the engineering teams in Tel Aviv and build out sales and marketing teams globally.
Bootnote: A xiphos is a double-edged, Iron Age straight and short sword used by the ancient Greeks.
VMware is developing vSphere software to virtualise different kinds of memory into a single logical tier, so that applications can have access to more memory than there is DRAM in their host physical server without having to use different coding methods.
The initiative is called Project Capitola and was revealed as a technology review at the VMworld 2021 event. It is discussed in a VMware blog by the vSphere team. It is, they write, “a software-defined memory implementation that will aggregate tiers of different memory types such as DRAM, PMEM, NVMe and other future technologies in a cost-effective manner, to deliver a uniform consumption model that is transparent to applications”.
The developing CXL interconnect has a role to play as a blog diagram shows:
We see pools of different kinds of memory: DRAM (DDR), CXL-attached Optane persistent memory (DIMMs) and other “memory” accessed via CXL, RDMA-over-Ethernet (RoCE), NVMe — which must surely mean SSDs — and pooled NVMe. More than one physical server can be involved in this memory tiering and logical pooling, according to the diagram.
The pooled memory types will form a non-uniform memory access (NUMA) architecture, with different tiers having different access speeds. That will have to be managed by the Project Capitola software.
VMware is working with:
Memory vendors such as Samsung, Micron and Intel — memory here meaning DRAM and Optane and possibly Samsung’s Z-SSD;
Server vendors such as Cisco, Dell, HPE, and Lenovo;
Service providers — Equinix.
You can read VMware partner comments and here is a sample of them:
Cisco CTO Dan Hanson — “We are excited to partner with VMware on Project Capitola to further enhance the hybrid cloud vision we have with UCS, HyperFlex, and Intersight by including this software-defined memory management into our set of solution offerings.”
Dell chief technology & innovation officer Paul Perez — “With tiered memory technology from VMware on Dell EMC PowerEdge servers, we’re able to increase capacity and performance for memory-intensive workloads.”
Hazelcast chief product officer Manish Devgan — “Hazelcast is excited to partner with VMware on Project Capitola to deliver a flexible, simplified software defined memory management solution that brings together historical and real-time data at microsecond latencies to empower innovative applications.”
Micron senior director of the datacentre segment Ryan Baxter — “Project Capitola can deliver new levels of memory access to data-hungry applications, enabling customers to optimise for solution performance and performance per dollar. As an industry leader in DRAM and NAND technology, we are delighted to work with VMware to deliver this value to customers.”
It will be interesting to see what Micron brings to the Capitola table, as it exited its 3D XPoint partnership with Intel in favour of developing its own CXL-accessed memory products.
VMware says its leading partner is Intel and Capitola will come to the market, possibly in a first phase, using Xeon processors and Optane persistent memory. Trust Intel to support this idea.
If one hypervisor can abstract different tiers of memory into a single virtual tier then so can another — and we expect Nutanix’s AHV and other KVM versions such as Red Hat to do so as well. And if a hypervisor can do it then why not an operating system?
Quantum is introducing an object-storage-on-tape tier to its ActiveScale object storage system, providing an on-premises Amazon S3 Glacier-like managed service offering.
The idea is to have a single namespace covering ActiveScale disk and tape storage with multi-site data durability, high capacity, low cost, and policy-based data transfer to the cold storage — all available through a subscription to a fully-managed service.
Bruno Hald, Secondary Storage GM at Quantum, provided an announcement quote: “ActiveScale is now the industry’s first and only on-premises object store system with an integrated cold storage class based on tape technology. In short, this means it dramatically lowers costs, consumes little power, and reliably stores data for decades.”
Jamie Lerner, Quantum’s chairman and CEO, said: “The innovations announced today enable us to combine Quantum hyperscale tape architectures with Quantum software, package all of this technology as a cloud service that can be deployed anywhere and offer it to enterprises and cloud providers who are facing the same challenges as hyperscalers.”
Blocks & Files diagram.
Data durability
Quantum says the cold storage class has up to 19 nines of data durability. Backblaze says its 11 nines of durability (a 99.999999999 per cent chance of any given object surviving a year) means that “if you store one million objects in B2 for ten million years, you would expect to lose one file.” So, with 19 nines of durability, your data is so much safer that the chances of any data loss are astronomically remote.
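As a back-of-the-envelope illustration (ours, treating the durability figure as the annual probability that any given object survives), the arithmetic behind the nines looks like this:

```latex
% Annual per-object loss probability implied by N nines of durability
p_{\mathrm{loss}} = 1 - d, \qquad d = \underbrace{0.99\ldots9}_{N\ \text{nines}}
  \;\Rightarrow\; p_{\mathrm{loss}} = 10^{-N}

% 11 nines versus 19 nines
p_{11} = 10^{-11}, \qquad p_{19} = 10^{-19}, \qquad p_{11} / p_{19} = 10^{8}

% Expected objects lost per year for M stored objects
\mathbb{E}[\text{losses/year}] = M \cdot p_{\mathrm{loss}}
  \;\Rightarrow\; M = 10^{9},\ N = 19:\ 10^{9} \times 10^{-19} = 10^{-10}
```

On that reading, 19 nines cuts the expected loss rate by a factor of a hundred million compared with 11 nines.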
According to an ActiveScale datasheet, the extra durability comes from Quantum’s Reed-Solomon-based two-dimensional erasure coding (2D EC) and a RAIL (Redundant Arrays of Independent Libraries) architecture. The 2D EC distributes object and parity data shards within and across tapes and libraries to maximise recoverability from data loss while keeping the storage overhead as low as 15 per cent. Quantum says restoring an object requires only a single tape read, and 2D EC uses local reconstruction codes to recover from nearly all tape and drive errors using just a single tape.
RAIL provides parallel access to multiple tape libraries, which can be geo-distributed, along with high availability and scalability. A 3-geo system has three separate libraries in geographically dispersed locations. The 2D EC allows for hierarchical data spreading, with object data split into chunks written across 18 drives in the three datacentres — an 18/8 erasure code policy in which objects can be decoded from just ten chunks. The system can recover from three tapes being lost.
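To make the 18/8, three-site geometry concrete, here is a minimal sketch (ours, not Quantum’s code) that checks which loss scenarios still leave the ten shards needed to decode an object; the one-shard-per-tape assumption is ours, for illustration only:

```python
TOTAL_SHARDS = 18      # data plus parity shards per object (the 18/8 policy)
DATA_SHARDS = 10       # any ten surviving shards can decode the object
SITES = 3              # geo-dispersed tape libraries
PER_SITE = TOTAL_SHARDS // SITES   # six shards written at each site

def recoverable(surviving_shards: int) -> bool:
    """An object can be decoded if at least DATA_SHARDS shards survive."""
    return surviving_shards >= DATA_SHARDS

# Losing an entire site removes six shards: twelve remain, still decodable.
print(recoverable(TOTAL_SHARDS - PER_SITE))    # True

# Losing three tapes removes at most three shards, assuming an object has
# no more than one of its shards on any given tape.
print(recoverable(TOTAL_SHARDS - 3))           # True

# Worst case tolerated: up to eight shards (the parity count) can be lost.
print(all(recoverable(TOTAL_SHARDS - lost) for lost in range(9)))   # True
print(recoverable(TOTAL_SHARDS - 9))           # False
```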
Alternative deployments include a single datacentre or two datacentres with data replication.
Lifecycle policies can be used to select objects in the Active Storage class for transfer to cold storage. Objects sent to the cold storage class (tape) by an application direct from other storage are first stored in the active storage class (disk) for fast acknowledgement. Then they are batched and interleaved with read requests before being sent to cold storage so as to optimise tape streaming performance. Object restoration from the cold storage class will typically occur in less than five minutes.
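Since ActiveScale presents an S3 interface, the tiering policy should look like a standard S3 lifecycle rule. A hedged sketch using boto3 follows; the endpoint URL, credentials and the “COLD” storage class name are placeholders of ours, not Quantum’s documented values:

```python
import boto3

# Point an S3 client at the ActiveScale endpoint (placeholder URL and credentials).
s3 = boto3.client(
    "s3",
    endpoint_url="https://activescale.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Move objects under archive/ to the cold (tape) storage class after 30 days.
# The storage class string is hypothetical; use whatever class name the
# ActiveScale documentation actually specifies for its cold storage tier.
s3.put_bucket_lifecycle_configuration(
    Bucket="media-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-cold-storage",
                "Filter": {"Prefix": "archive/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "COLD"}],
            }
        ]
    },
)
```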
Object data buckets can also be migrated to the public cloud if desired.
Services
Quantum is providing Object Storage Services to deliver this ActiveScale technology, with two classes of service: one for active data and a second for cold data, with an all-inclusive two-tier pricing model.
Service delivery is aided by Quantum’s AIOps software, Cloud-Based Analytics (CBA) with its predictive analytics based on device sensor telemetry, and the MyQuantum service delivery facility which provides access to CBA and the AIOps feature.
Find out more about ActiveScale Cold Storage on Quantum’s mini-site.
Comment
Other object-to-tape technologies include Germany’s PoiNT Systemes with S3-supporting Archival Gateway software, which supports Quantum Scalar libraries as target tape systems. It also uses erasure coding.
A second competitor to Quantum is Fujifilm’s Object Archive which has an S3 API and uses OTFormat — an open-source file format — to store objects and their metadata on tape. Scality’s Zenko is the object server used inside this product. PoiNT Systemes said in June last year that it will support Fujifilm’s Object Archive format.
Quantum is introducing its cold object storage as an integrated offering in its ActiveScale product line which itself is integrated in Quantum’s overall StorNext portfolio, making it a potential add-on sale to existing Quantum customers. The managed services angle helps make it an affordable add-on as well.
Will we see other on-premises object storage providers add a tape storage archive tier to their products? The easiest way for them to do that would be through a partnership with a tape library vendor, such as HPE, IBM and Spectra Logic. Will they or won’t they? We will have to wait and see.
Has ExaGrid achieved a $100 million revenue run rate? The deduping backup target shipper with fast restore capabilities has announced its third record-breaking quarter in a row with accelerating revenue growth and is pondering a $500 million run rate target for 2027.
Its year-on-year revenue growth was 24 per cent in the first quarter, 50 per cent in the second quarter, and 57 per cent in this third quarter. The full year revenues are looking to be excellent.
Bill Andrews.
Of its third quarter, which ended September 30, President and CEO Bill Andrews told us: “We are cash and P&L positive for each quarter and will be for the year. We brought on over 150 new logo customers in the quarter and 42 of those were six and seven figure deals. We are replacing low-cost primary storage disk behind backup apps as well as Dell EMC Data Domain, HPE StoreOnce, Veritas storage appliances, etc.”
He said: “We are expanding the sales team and currently have over 60 open positions worldwide in inside sales and field sales. We have over 3,100 installed customers worldwide in the upper mid-market to the enterprise.”
ExaGrid has not revealed its revenue run rate, but our impression (no more than that) is that it has achieved the $100 million mark and that is prompting its exploration of a $500 million run rate target.
Comment
The deduping backup target market is technologically mature. The only real innovations introduced in the last five years or so have been backup to all-flash systems, such as Pure’s FlashBlade for very fast restore, backup software help in speeding dedupe, such as Data Domain Boost, and public cloud-based long-term retention tiers.
All-flash target systems relied on 3bits/cell (TLC) flash which was more affordable than the then-mainstream 2bits/cell (MLC) NAND, and has now moved on to even more affordable QLC (4bits/cell) flash in systems such as Pure’s FlashArray//C.
But the bulk of backed-up data does not need all-flash speed and deduped disk provides the best mix of performance and cost. Against that background the supplier group is stable — Dell EMC with Data Domain, HPE with StoreOnce, Quantum with its DXi product plus systems from Veritas. By and large the only development is the addition of faster CPUs, more memory and larger-capacity disk drives.
As the amount of data to be backed up grows, system ingest and restore speeds become more important, and this is why ExaGrid sales are growing. It ingests data without deduping it, thus providing a fast data landing zone from which initial fast restores can be made, and then dedupes the data for longer-term and more efficient storage in a so-called retention zone.
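A minimal sketch of that two-stage flow (ours, purely conceptual and not ExaGrid’s code): ingest lands raw data for fast restores, and a background pass later chunks and deduplicates it into the retention zone.

```python
import hashlib

landing_zone = {}     # backup_id -> raw bytes, restorable at full speed
retention_zone = {}   # chunk hash -> chunk bytes, stored once (deduplicated)
catalog = {}          # backup_id -> ordered list of chunk hashes

CHUNK = 64 * 1024     # fixed-size chunks, for illustration only

def ingest(backup_id: str, data: bytes) -> None:
    """Accept a backup without deduplicating it (the fast landing zone)."""
    landing_zone[backup_id] = data

def dedupe_to_retention(backup_id: str) -> None:
    """Background job: chunk, hash and store unique chunks, then free the landing copy."""
    data = landing_zone.pop(backup_id)
    hashes = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        retention_zone.setdefault(h, chunk)   # store each unique chunk only once
        hashes.append(h)
    catalog[backup_id] = hashes

def restore(backup_id: str) -> bytes:
    """Recent backups restore straight from the landing zone; older ones are rehydrated."""
    if backup_id in landing_zone:
        return landing_zone[backup_id]
    return b"".join(retention_zone[h] for h in catalog[backup_id])
```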
ExaGrid’s systems also scale out and provide global deduplication, which, for example, prime competitor Data Domain does not.
So long as its competitors fail to match ExaGrid’s technology, it should keep on picking up their customers as they become dissatisfied with their installed system’s ingest and restore speeds and deduplication efficacy.
We think that there are two potential pressures ExaGrid could face. One is from Quantum, which is showing recent energetic technology development, and the other is from an expansion of all-flash backup target systems into the mainstream backup storage market. The speed of all-flash ingest and restore could then reduce ExaGrid’s advantages. There is however no indication that this is happening, or likely to happen.
Just seven months into his role as Nutanix Chief Revenue Officer, Chris Kaddaras has quit to join a pre-IPO security tech startup.
Rajiv Ramaswami, Nutanix President and CEO, said so in an email to all Nutanix staff, writing: “I can understand the draw for Chris to have a large hand in shaping a company at an early stage, and we wish him the very best on this next adventure.” Ramaswami became Nutanix CEO in December 2020 when Kaddaras was EVP Worldwide Sales and promoted him to the CRO position in February this year.
Chris Kaddaras.
He was at pains to say: “Chris will help us finish out our first quarter of fiscal 2022 before he departs on October 31, 2021 [and] our first fiscal quarter is on track with the guidance we shared on our September 1, 2021 earnings call.” That guidance was for between $172 million and $177 million in ACV billings in the quarter, up 26.4 per cent year-on-year. In other words, there’s nothing wrong with Nutanix’s business results to make Kaddaras or Ramaswami unhappy.
Ramaswami emphasised the capabilities and experience of Nutanix’s post-Kaddaras sales team: “We have a dedicated and passionate senior sales leadership team in place that will continue Nutanix’s positive momentum as we embark on our next era of innovation, customer satisfaction, and success.”
Andrew Brinded, Nutanix SVP and Worldwide Sales Chief Operating Officer, will step into the CRO slot on an interim basis.
It must be a heck of an opportunity to tempt Kaddaras away from the top of Nutanix’s sales tree. Here’s a roster of the top 22 cyber security companies; maybe Kaddaras’s startup is amongst them. Ironically, he is presumably being tempted by a king’s ransom in pay from a company that surely has fighting ransomware at the top of its capabilities list.
There are granularity consequences of the network fabric choices made by composable systems suppliers. Go the PCIe route and you can compose elements inside a server as well as outside. Go the Ethernet route and you’re stuck outside.
This is by design. PCIe is designed for use inside servers as well as outside, so it can connect to components both within and without. Ethernet and InfiniBand aren’t designed to work that way, so they don’t. You can only compose what you can connect to.
The composability concept is that an individual server, with defined CPU, network ports, memory and local storage, may not be perfectly sized in capacity and performance terms for the application(s) it runs. So applications can run slowly or server capabilities — DRAM, CPU cores, storage capacity, etc. — can be wasted, stranded and idle.
If only excess capacity from one server could be made available to another server, then the resources would not be wasted and stranded but become productive. A composable system dynamically sets up — or composes — a bare metal server from resource pools so that it is better sized for an application, and then decomposes this server when the application run is over. The individual resources are returned to the pool, for re-use by other dynamically-organised servers later.
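In code terms the idea is straightforward. Here is a conceptual sketch (ours, not any vendor’s API) of composing a bare-metal server from shared pools and handing the parts back afterwards:

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Free resources available across the fabric (GPUs, NVMe capacity, FPGAs)."""
    free: dict = field(default_factory=lambda: {"gpu": 8, "nvme_tb": 64, "fpga": 4})

    def allocate(self, wanted: dict) -> dict:
        """Reserve resources for a composed server, or fail if the pool is short."""
        for kind, qty in wanted.items():
            if self.free.get(kind, 0) < qty:
                raise RuntimeError(f"not enough {kind} in the pool")
        for kind, qty in wanted.items():
            self.free[kind] -= qty
        return dict(wanted)

    def release(self, resources: dict) -> None:
        """Decompose the server: return its resources to the pool for re-use."""
        for kind, qty in resources.items():
            self.free[kind] = self.free.get(kind, 0) + qty

pool = ResourcePool()

# Compose a server sized for a GPU-heavy analytics run...
server = pool.allocate({"gpu": 4, "nvme_tb": 16})

# ...run the application, then decompose it so the resources return to the pool.
pool.release(server)
print(pool.free)   # {'gpu': 8, 'nvme_tb': 64, 'fpga': 4}
```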
HPE more or less pioneered this with its Synergy composable systems, organised around racks of its blade servers and storage. A set of startups followed in its footsteps — such as DriveScale, bought by Twitter, as well as GigaIO and Liqid. Western Digital joined the fray with its OpenFlex scheme, and then Fungible entered the scene, with specifically designed Data Processing Unit (DPU) chip-level hardware. Nvidia is making composability waves also, with its BlueField-2 DPU or SmartNIC and partnership with VMware.
These suppliers can be divided into pro-Ethernet and pro-PCIe camps. A table shows this and summarises their different characteristics:
It notes that PCIe is evolving towards the CXL interconnect which can link pools of DRAM to servers.
Composing and invisibility
In order to bring an element into a composed system you have to tell it and the other elements that they are now all inside a single composed entity. They then have to work together as they would if they were part of a fixed physical piece of hardware.
If the hardware elements are linked across Ethernet, then some server components can’t be seen by the composing software because there is no direct Ethernet connection to them. The obvious items here are DRAM and Optane Persistent Memory (DIMMs), but FPGAs, ASICs and sundry hardware accelerators are also invisible to Ethernet.
Fungible, HPE and Nvidia are in the Ethernet camp. GigaIO and Liqid are in the PCIe camp. The Fungible crew firmly believe Ethernet will win out, but the founders do come from a datacentre networking background at Juniper.
Nvidia has a strong focus on composing systems for the benefit of its GPUs and is still exploring the scope of its composability, with VMware’s ESXi hypervisor running on its SmartNICs and managing the host x86 server into which they are plugged. Liqid and GigaIO believe in composability being supplied by suppliers who are not server or GPU suppliers, as does Fungible.
Liqid has made great progress in the HPC/supercomputer market, while the rumour is that Fungible is doing well in the large enterprise/hyperscaler space. GigaIO says it owns the IP to make native PCIe into a routable network enabling DMA from server to server throughout the rack. It connects servers over PCIe to compose masses and multiple types of accelerators to multiple nodes, getting around BIOS limitations and taking compute traffic off the congested Ethernet storage network.
Dell EMC has its MX7000 chassis with servers composed inside it, but this is at odds with Dell supporting Nvidia BlueField-2 SmartNICs in its storage arrays, HCI systems and servers. We think that there is a chance the MX7000 could get BlueField SmartNIC support at some point.
The composability product space is evolving fast and customers have to make fairly large bets if they decide to buy systems. It seems to us here at Blocks & Files that the PCIe-or-Ethernet decision is going to become more and more important, and we can’t see a single winner emerging.
NetApp has acquired CloudCheckr and its cost-optimising public cloud management CMx platform to expand its Spot by NetApp CloudOps offering.
CloudCheckr’s software analyses costs for the AWS, Azure and GCP clouds, looking at billing details for multiple accounts, and provides a full set of a customer’s cloud account details, including resources, configurations, permissions, security, compliance and changes. Customers should be able to analyse and control their cloud costs better with this software.
Anthony Lye.
An announcement quote from Anthony Lye, EVP and GM of NetApp’s Public Cloud Services business unit, said: “By adding cloud billing analytics, cost management capabilities, cloud compliance and security to our CloudOps platform through the acquisition of CloudCheckr, we are enabling organisations to deploy infrastructure and business applications faster while reducing their capital and operational costs. This is a critical step forward in our FinOps strategy … Simply put, NetApp continues to empower customers to achieve more cloud at less cost.”
Financial details of the transaction are not being disclosed. As CloudCheckr’s total funding is $67.4 million, it has been growing fast, and it is backed by private equity, we would expect an acquisition cost in the $200 million to $300 million range.
NetApp said that the acquisition extends Spot by NetApp’s cloud FinOps offerings by combining cost visibility and reporting from the CloudCheckr platform with continuous cost optimisation and managed services from Spot by NetApp. It suggests public cloud customers will be better able to understand and continuously improve their cloud resources.
Lye has written a “More cloud. Less cost” blog to discuss the acquisition. In it he writes: “CloudCheckr will add cost visibility with automated actions, secure configurations, deep insights, and detailed reporting to our Spot portfolio’s continuous optimisation.” It will help customers control their public cloud financial operations, or FinOps.
CloudCheckr was started up in Rochester, NY, in 2011 by COO Aaron Klein and previous CEO and now Exec Chairman Aaron Newman. Klein gave up the COO position at the end of 2017 and he and Newman founded BlocWatch, a SaaS offering to manage blockchain environments. CloudCheckr has taken in a total of $67.4 million in funding through a 2012 seed round for $400,000, a two-part A-round in 2013 for $2 million and (slight delay!) $50 million in 2017, and a 2019 B-round raising just $15 million. Both the late A-round and the B-round were led by Level Equity Management, which operates as a private equity firm.
At the time of the $50 million A-round funding event CloudCheckr serviced over 150 AWS and Azure authorised resellers and provided support to nearly 40 per cent of all AWS Premier Consulting Partners. Its direct client roster included managing over $1 billion in cloud spend for enterprises such as Nasdaq, Siemens, and Regeneron.
By B-round time CloudCheckr said it was experiencing explosive growth, doubling customers, and achieving a 5x increase in revenue over the last three years. It then claimed a full 40 per cent of the top 50 MSPs, ranked by ChannelE2E, were powered by CloudCheckr.
CloudCheckr is led by CEO Tim McKinnon, who joined in June 2019, two months before the $15 million B-round. He had been President and CEO at Sonian, an Amazon- and VC-funded company acquired by Barracuda Networks.
Interestingly Nutanix Chief Commercial Officer Tarkan Maner sits on CloudCheckr’s board. He was appointed at B-round time. At that time CloudCheckr said “Tarkan has had many successful exits and held executive roles at Nexenta (acquired by DDN), Dell, Wyse (acquired by Dell), CA (acquired by Broadcom), IBM, Quest, and Sterling Software (acquired by CA).” It seems that old Maner magic, aided by McKinnon, has worked again.
This week we have panned more nuggets and fragments of gold from the storage news stream. It features automated driving systems, Kubernetes container data protection, media workflow storage, Oracle supporting RoCE and Optane, Amazon Web Services partnering, technology patent awards and more, from ReRAM to data warehousing via edge computing right out to DNA-based malware.
The amount of what we might call weekly background news in the storage industry is getting steadily larger. Scan this bulletin and see for yourself.
IBM AREMA and ADAS
IBM Archive and Essence Manager (AREMA) orchestrates any file-based broadcast production workflow and is also used in Autonomous Driving (AD) and Advanced Driver Assistance Systems (ADAS) development. AREMA is a media asset management system with more than 140 different, ready-to-deploy agents, and new workflows can be swiftly modified or created. Each agent interacts with the workflow engine and executes one specific function, such as file transfers or adapters to control third-party systems. “AREMA for Automotive” was introduced about four years ago on a project-by-project basis and it is now becoming a standard IBM Software product.
We’re told AREMA is the “glue” software needed by automotive engineers to find the needle-in-the-haystack data snippets to be used for AI training for Level 4/5 robo cars. AREMA works with Spectrum Scale, NAS filers and any S3 object storage, all at the same time. OpenShift/Kubernetes is also supported.
You can ask questions like: “Please find all data in the data lake where the temperature was >5 degrees, the road was wet, speed 40–60km/h and yellow buses with flashing lights were involved”. It will find all relevant files and timings and works with video, radar and LiDAR data.
Catalogic crams more into CloudCasa
Cloud-native data protector Catalogic has added more features to its CloudCasa product. CloudCasa provides scalable data protection and disaster recovery for cloud-native apps, with a free service tier, and supports all leading Kubernetes distributions, managed Kubernetes services and cloud database services.
CloudCasa now supports backup of Kubernetes persistent volumes (PVs) to cloud storage, including Amazon Elastic Block Storage (EBS) persistent volumes, and is offered in a new capacity-based subscription model. The new offering provides:
Fair, capacity-based pricing for persistent volume backups — Users pay only for the data being protected vs what infrastructure is in use.
Free Service Tier — Unlimited PV and Amazon RDS snapshots with up to 30 days’ retention, with no limits on worker nodes or clusters, and Kubernetes resource data included.
Free Amazon RDS snapshot management — Multi-region copies with no limits on databases or accounts.
Software and storage included — as a SaaS application, there are no software costs, no storage to purchase, no infrastructure to provision.
Security and compliance — SafeLock protection provides tamper-proof backups that are locked from deletion by any user action or API call.
Oracle Exadata X9M, Optane and RoCE
Database and ERP supplier Oracle announced the availability of its Exadata X9M products, converged systems to run Oracle Database. They include the Exadata Database Machine X9M and Exadata Cloud@Customer X9M, which runs Oracle Autonomous Database in customer datacentres. It supports small databases running with fractional CPUs to enable agile, low-cost consolidation, application development, and testing.
Oracle says they deliver higher performance at the same price as the previous generation. They accelerate online transaction processing (OLTP) with more than 70 per cent higher IOPS rates and IO latencies of under 19 microseconds. They also deliver up to an 87 per cent increase in throughput for analytic SQL and machine learning workloads.
This enables customers to reduce the costs of running transactional workloads by up to 42 per cent, and analytics workloads by up to 47 per cent.
Juan Loaiza, EVP Mission-Critical Database Technologies at Oracle, said: “For X9M we adopted the latest CPUs, networking, and storage hardware, and optimized our software to deliver dramatically faster performance. Customers get the fastest OLTP, the fastest analytics and the best consolidation — all at the same price as the previous generation. No other platform, do-it-yourself infrastructure, database, or cloud service comes close to Exadata X9M performance, cost/performance, or simplicity.”
The X9M systems use Intel’s latest Xeon SP processors and support Optane persistent memory and RDMA over Converged Ethernet (RoCE). They have 33 per cent more database server CPUs and memory, as well as 28 per cent more storage than previous generations.
Oracle says that, compared to Amazon RDS using all-flash storage, Exadata Cloud@Customer X9M delivers 50x better OLTP latency. Compared to Microsoft Azure SQL using all-flash storage, Exadata Cloud@Customer X9M delivers 100x better OLTP latency. For analytics, Exadata Cloud@Customer X9M delivers up to 25x faster throughput than Microsoft Azure SQL, and up to 72x faster throughput than Amazon RDS. Exadata Cloud@Customer X9M also delivers 50x better OLTP latency and 18x more aggregate throughput than databases running on AWS RDS using a full rack AWS Outposts configuration.
DNA-based malware
A Wired article discusses a demonstration of DNA-based malware.
Read the article and enjoy the ingenuity of it all.
Symply’s new product range for digital media professionals
Media-centric shared storage supplier Symply has launched its SymplyFIRST product range with personal RAID protected storage, backup/archive, cloud archive/transfer, and connectivity features for media pipelines in on-set production, post production, VFX, and for independent filmmakers, in-house creatives, photographers, and other media professionals.
SymplyPRO LTO and SymplyPRO DIT are Thunderbolt 3-connected desktop and rack systems using LTO-9 tape, with up to 18TB native capacity per tape and over 400MB/sec read/write speed. Systems are available in multiple configurations, including a cable-less, all-metal enclosure design that allows for fast removal and insertion of tape drives for transport or upgrade. SymplyPRO LTO systems are certified for use with popular backup and archive utilities for macOS and Windows including YoYotta, Hedge Canister, and Archiware P5.
SymplyATOM (Advanced Tape Operations and Management) software is included with every SymplyPRO LTO system.
SymplyPRO DIT is an archive and transfer system with multi-access technology combining camera card readers for RED, Atomos and Blackmagic Design along with removable 2.5″ SSD and LTO tape for fast all-in-one DIT storage operations.
SymplySPARK is a personal transportable media-optimized RAID system with quiet operation, Thunderbolt 3 connectivity and capacities up to 144TB. Its impact-resistant flight-friendly carry case features tool-free user serviceability of drives, fans, and power supplies.
Symply LTO products.
SymplyNEBULA is a cloud backup and archive service with S3 compatibility, datacentres throughout the US and Europe, and no egress charges. It is available for a single per-TB price and is claimed to be up to ten times faster than other providers. It enables easy movement of content to and from AWS EC2 for a variety of media processing and compute functions.
SymplyADDR is a series of affordable high-performance PCIe expansion slot systems.
SymplyLOCK is a simple Thunderbolt cable lock that works with all SymplyFIRST products and any Apple-certified Thunderbolt cable preventing accidental disconnection of a Thunderbolt device.
SymplyFIRST products are now generally available from resellers worldwide. Prices range from:
$4,599 USD for SymplyPRO LTO (Thunderbolt-connected tape drive);
$3,299 USD for SymplyLTO (SAS-connected tape drive);
$4,199 USD for SymplySPARK (Thunderbolt-connected eight-bay storage);
$4.99 USD per month for SymplyNEBULA.
WANdisco replicates loss
Replicator WANdisco has seen revenues fall six per cent year-on-year. Its first half 2021 revenues of $3.4 million compare to the $3.6 million earned a year ago. Back in September 2020 we wrote that this UK company is heavily loss-making and the revenue picture is muddied as it transitions from selling perpetual licences to subscription. However, it is buoyed by a recent capital raise of £25 million and on the back of strong demand for its Azure software “expects to deliver significant revenue growth in FY2021 with the Board expecting a minimum revenue of $35 million.”
We now hear that the General Availability of its Azure service is expected in the next few weeks, “a critical step in converting pipeline customers.” You don’t say. Its statutory loss from operations has risen to $20.3 million from the $14.0 million reported a year ago. Also WANdisco has reduced its full year revenue target from $35 million to a minimum of $18 million. That is still an ambitious 71 per cent increase on the $5.2 million 2020 revenue amount.
The company has raised $42.4 million through a share placing and subscription to accelerate its growth ambitions and pursue near-term opportunities with channel partners. It has won a contract with the analytics division of a global telco to migrate analytical data to the Microsoft Azure cloud, worth a minimum of $1 million over a maximum term of five years.
The problem for WANdisco is that its ability to transfer bits from source to target is not matched by its ability to transfer dollars from customers to its own bank account.
Amazon Web Services
Chinese American distributed OLAP startup Kyligence, originator of Apache Kylin and developer of the AI-augmented data services and management platform Kyligence Cloud, has been accepted into the Amazon Web Services (AWS) Independent Software Vendor (ISV) Accelerate Program, a co-sell program for AWS Partners who provide software solutions that run on or integrate with AWS.
File data lifecycle manager Komprise announced a special offer with Amazon Web Services (AWS) to attendees of the AWS D.C. Summit, taking place September 28 and 29. AWS Summit attendees could get a risk-free assessment of their data savings from Komprise, and be eligible to receive up to 25 per cent off their estimated first year AWS spend if they convert within 90 days.
Cloud file storage supplier Nasuni is working with AWS to address ransomware in the public sector with three new offerings designed to deliver extremely fast ransomware file recovery, as well as built-in backup and disaster recovery, all at a reduced price. These new bundles are available for a limited time as Ransomware “First Aid Kits” for Public Sector Files in AWS. The Nasuni Ransomware First Aid Kits for Public Sector can be procured in AWS Marketplace, while also leveraging AWS Consulting Partners like CDW-G, SHI Government, Presidio and AHEAD.
Patents
Cloud-native storage startup Robin.io has been awarded nine new US patents in the first nine months of 2021. They boost the company’s IP portfolio in three technologies central to the management and orchestration of 5G and enterprise cloud-native applications with Kubernetes. The patents are:
Orchestration of heterogeneous multi-role applications (#11086725).
Rolling back Kubernetes applications (#11113158).
Reducing READ loads on cloned snapshots (#11099937).
Automated management of bundled applications (#11036439).
Automated management of bundled applications (#10908848).
Redo log for append-only storage scheme (#11023328).
Block map cache (#10976938).
Implementing secure communication in a distributed computing system (#10896102).
Health monitoring of automatically deployed and managed network pipelines (#11108638).
Ionir, which supplies storage and data services for Kubernetes-orchestrated containers, has been awarded a US patent for its system of synchronising data containers and improving data mobility. It says its patent allows application data to move with the ease of applications. This data mobility ensures the application can start working immediately at the new location. Full volumes — regardless of size or amount of data — are transported across clouds or across the world in less than 40 seconds. The company owns ten issued patents in this field and has submitted applications for many more.
Shorts
Backblaze says it offers instant recovery in any cloud through its use of Veeam Backup and Replication. Its “Backblaze Instant Recovery in Any Cloud [is] an infrastructure as code (IaC) package that makes ransomware recovery into a VMware/Hyper-V based cloud easy to plan for and execute for any IT team.” Customers “can use an industry-standard automation tool to run a single command to quickly bring up an orchestrated combination of on-demand servers, firewalls, networking, storage, and other infrastructure in phoenixNAP, drawing data for your VMware/Hyper-V based environment over immediately from Veeam Backup & Replication backups.”
Cloudflare’s R2 Storage zero egress cost takes on egregious S3 egress charges. R2 Storage is a public cloud storage service that is S3-compatible but at a lot lower cost than Amazon’s S3 service. The Register, our sibling publication, covered this last week. Our interest is that there is a growing number of cloud services taking on Amazon’s S3 — witness Backblaze — and we should see S3 price cuts shortly, even if they are nominal.
Red Hat OpenShift containers can read and write object data from Cloudian HyperStore object storage systems. Cloudian supports the Kubernetes CSI (Container Storage Interface) as well as Amazon’s S3 interface, and both SSDs and Optane for faster object IO performance. OpenShift containers get API call access, multi-tenancy, support for multiple public cloud (AWS, Azure and GCP) back-end storage, and data encryption.
Enterprise file collaboration supplier Egnyte released its 2021 Data Governance Trends Report, which surveyed 400 IT executives in July 2021. It found that unchecked data growth, combined with a lack of visibility, is increasing the risk of breaches, ransomware, and compliance violations dramatically. The most commonly cited application of AI that companies are currently using or plan to invest in the next 12 months is to identify and protect sensitive data in files (47 per cent). Another 42 per cent are using AI to assess risk and threats, and 40 per cent are using it to monitor for malicious behaviour. (Expect Egnyte to buff up its AI credentials.)
In-memory SW supplier Hazelcast announced GA of its Hazelcast Platform product for transactional, operational and analytical workloads. It combines the capabilities of a real-time stream processing engine and in-memory computing with a robust library of connectors, from MongoDB to Redis Labs to Snowflake.
SaaS data protector HYCU has recruited service provider Teraflow to its Cloud Services Provider Program. Teraflow is a data engineering and artificial intelligence firm based in London with offices in Johannesburg, South Africa. HYCU has also recruited Extreme Solution, a Google Cloud partner servicing customers across Egypt, the UAE and North America, which is used by millions of users across a number of industries for mobile and web solutions development, public cloud and mission-critical infrastructure initiatives.
IBM Spectrum Scale container native v5.1 adds:
Separate operator namespaces;
Multiple custom resources and controllers;
Core pods managed by daemon controller;
Online rolling pod upgrade;
Set memory and CPU requests and limits for pods;
Support for storage cluster encryption;
Automated deployment of CSI Driver;
Using the GUI;
Automated IBM Spectrum Scale performance monitoring bridge for Grafana.
NetApp is recruiting more staff. Wells Fargo analyst Aaron Rakers tracks some mainstream storage suppliers’ recruitment trends and a chart of NetApp’s open job positions shows a sustained rise over recent months.
We think this may be to do with its cloud services business unit led by Anthony Lye.
Blast-from-the-past: privately-owned and legacy storage supplier Overland-Tandberg, now separated from Sphere3D, has added RDX SSD drives alongside its RDX HDD line of removable disk drive cartridges. Having removable SSDs in RDX cartridges gives RDX users a faster backup and restore medium. The available capacities are 500GB, 1TB, 2TB, 4TB, and 8TB.
Phison has announced a technology development of PCIe Gen-5 customisable SSD controller products for enterprise and client SSDs. It is actively developing its first PCIe Gen-5 platform — the E26 Series controller and SSD, which will support PCIe Dual Port and advanced features such as SR-IOV and ZNS, plus the newest, fastest NAND interfaces, ONFI 5.x and Toggle 5.x. The E26 Gen-5 test chip has been successfully taped out in an advanced 12nm process, and customised SSD solutions are targeted to be shipping in the second half of 2022.
Managed multi-cloud Kubernetes-as-a-Service provider Platform9 has, during the past 12 months, grown as a company by 50 per cent, quadrupled its customer base and tripled revenue. Separately, Model9 has increased the capacity that its Cloud Data Manager protects by 10 times.
New Zealand-based Portainer launched its Portainer Business Charmed Operator, supporting integration with Canonical’s Charmed Kubernetes distribution. It enables the automatic installation and integration of Portainer Business as part of the Kubernetes cluster deployment process, using Juju, the Charmed Operator framework. Portainer Business transforms any Kubernetes implementation into a containers-as-a-service setup. [See bootnote about the use of the ‘Charmed’ term.]
IOMesh by Kubernetes storage supplier SmartX has achieved Red Hat OpenShift certification, and is officially available on Red Hat Ecosystem Catalog. Kyle Zhang, CTO & Cofounder of SmartX, said: “The Red Hat certification demonstrates our ability to support mission-critical workloads on OpenShift.”
24Gbit/sec SAS connectivity was highlighted by the SCSI Trade Association (STA) at the 2021 Storage Developer Conference, a virtual storage industry event sponsored by the Storage Networking Industry Association. A SAS Integrator’s Guide, which provides additional detail on the subject, can be downloaded here. See the event agenda here.
The SNIA’s Storage Management Initiative has announced the approved ISO certification for the Swordfish 1.1.0d specification. The SNIA Swordfish specification extends the DMTF Redfish Scalable Platforms Management API specification to define a comprehensive, RESTful API for managing storage and related data services.
Synology announced GA of its new operating system, DSM 7.0.1 for the vast majority of its storage devices, including its enterprise NAS and surveillance-focused offerings. It features expedited processing of support requests through Active Insight integration, Fibre Channel support, flash volume deduplication, and a K8s CSI driver to improve volume management in Kubernetes.
Synology also announced Hybrid Share which enables simple and fast file synchronisation between locations or offices, combining efficient local file caching with bandwidth offloading to the cloud to improve productivity while reducing IT burdens.
Data Warehouse-as-a-Service supplier Vertica has launched its Vertica Accelerator, which runs on the new Vertica 11 Unified Analytics Platform. Vertica Accelerator runs on AWS public cloud infrastructure in a customer’s own AWS account, providing the ability to preserve all negotiated pricing and committed spend while automating the setup and management of the Vertica environment. It includes all of the core functionality, including advanced analytical functions for time series, pattern matching and geospatial analysis, plus in-database end-to-end machine learning.
Server virtualiser and containeriser VMware has a blog arguing for the use of computational storage at the edge. It’s to get rid of computational bottlenecks caused by moving data to compute. “Computational storage devices (CSx) enhance the capabilities of traditional storage devices by adding compute functionality that can improve application performance and drive infrastructure efficiency.”
VMware computational storage diagram.
The blog explains: “Instead of moving the data to a central location for processing, we move the processor to the data! By allowing the storage devices to process the raw data locally, CSx (SSDs with embedded compute, like those provided by NGD Systems) are able to reduce the amount of data that needs to be moved around and processed by the main CPU. Pushing analytics and raw data processing to the storage layer frees up the main CPU for other essential and real-time tasks.”
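A toy illustration of the data-movement argument (ours, not VMware’s or NGD Systems’ code): pushing a filter down to the device means only matching records cross the bus, rather than the whole dataset.

```python
# Simulate 100,000 fixed-size records held on a computational storage device.
RECORD_SIZE = 512
records = [bytes([i % 256]) * RECORD_SIZE for i in range(100_000)]

def host_side_filter():
    """Conventional path: move every record to the host CPU, then filter there."""
    bytes_moved = sum(len(r) for r in records)        # everything crosses the bus
    matches = [r for r in records if r[0] == 0]
    return bytes_moved, len(matches)

def device_side_filter():
    """Computational storage path: filter next to the flash, ship only the matches."""
    matches = [r for r in records if r[0] == 0]
    bytes_moved = sum(len(r) for r in matches)        # only matches cross the bus
    return bytes_moved, len(matches)

host_moved, _ = host_side_filter()
dev_moved, _ = device_side_filter()
print(f"host-side: {host_moved/1e6:.1f}MB moved; device-side: {dev_moved/1e6:.1f}MB moved")
```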
ReRAM developer Weebit Nano has implemented its ReRAM in a 28nm process. This is much denser than the 130nm process with which most of its development work has been done, and a step on from the prior 40nm process. It says this is a key step in productising the technology for the embedded memory market and the technology can support smaller geometries used in AI, autonomous driving, 5G and advanced IoT.
Western Digital says the World Economic Forum (WEF) has named two of its smart factories, in Penang, Malaysia and Prachinburi, Thailand, to its Global Lighthouse Network, a community of world-leading companies that have demonstrated leadership in applying Fourth Industrial Revolution (4IR) technology at scale in both technology innovation and workforce engagement. Lighthouses apply 4IR technologies such as artificial intelligence, 3D printing and big data analytics to maximise efficiency and competitiveness at scale, transform business models and drive economic growth, while augmenting the workforce, protecting the environment, and contributing to a learning journey for manufacturers of all sizes across all geographies and industries.
Zadara’s enterprise-grade, expert-managed cloud services — including compute, storage and networking — are now available to Cyxtera customers. Zadara’s fully managed storage-as-a-service has been available in Cyxtera’s 61 highly connected, hybrid-ready datacentres worldwide. Now Zadara’s zCompute has been added and delivers a seamless on-demand cloud experience available in Cyxtera datacentres, built and managed by Zadara.
People
Victoria Grey has resigned from her role as CMO at Arm server startup Bamboo Systems. Previously she had been CMO at Nexsan and VP Marketing at Quantum as well as a VP at EMC after it bought Legato where she was OEM Sales Manager.
NVMe-oF storage supplier Excelero has hired Jeff Whitaker as its VP of Product. He joins Excelero from NetApp, where he was a founding member of the NetApp cloud team that defined its first cloud-based products. His most recent role was NetApp’s senior manager of hyperscaler portfolio marketing.
HYCU has hired Michele Lynch to be its EMEA Channel Director. She comes via stints at Spirent, Localis, Scality and Commvault.
Panzura CEO Jill Stelfox has been named Transformation Leader of the Year for 2021 in the annual Globee Awards by an international panel of experts. A Panzura spokesperson tells us: “The accolade is based on Jill’s leadership through Panzura’s Refounding process which started with our acquisition last May and officially concluded a few weeks ago.” Among the achievements are a Net Promoter Score of 87, and rocketing past all previous sales records with year-over-year revenue growth of 525 per cent and ARR growth of 96 per cent.
Charming Bootnote. Where did the term Charmed Operator come from? Portainer tells us: “Juju was created in 2009 in order to tackle the infrastructure and application management problem, trying to take the solution a step forward from the traditional configuration management tools. Ever since its beginning, Juju used ‘charms’, wrappers of code to manage infra/app components by encapsulating the knowledge of human operators.
“These two names were employed, because at that time people considered the solution working “like magic”. In recent years, Kubernetes operators have started to make their appearance to solve the same problem within the cloud native ecosystem. Since Kubernetes quickly became a first class citizen in Juju, we decided to slightly rebrand charms into “charmed operators”. The difference between standard Kubernetes operators and charmed operators is the latter’s ability to integrate and drive software lifecycle management across different cloud environments, not only Kubernetes, but also legacy systems, virtualization platforms (e.g. Openstack) and public clouds.”
Cloud backup and data storage provider Backblaze is finding its SSDs fail at nearly the same rate as its disk drives did at the equivalent stage in their life cycle.
A Backblaze blog by Andy Klein, its oddly titled Principal Cloud Story Teller, outlines how Backblaze used to boot its systems off disk drives and then started using SSDs instead. It monitors its disk drive reliability and does the same for its SSDs, so it can compare their failure rates. To make the comparison fair, it compared the SSD failure rates to the HDDs used for boot drive duty at a similar age in their life cycle and found similar failure rates.
Klein wrote: “Where does that leave us in choosing between buying a SSD or a HDD? Given what we know to date, using the failure rate as a factor in your decision is questionable. Once we controlled for age and drive days, the two drive types were similar and the difference was certainly not enough by itself to justify the extra cost of purchasing a SSD versus a HDD. At this point, you are better off deciding based on other factors: cost, speed required, electricity, form factor requirements, and so on.”
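Klein’s mention of controlling for drive days points at Backblaze’s published failure-rate arithmetic. A quick sketch of the calculation behind the headline percentages; the failure and drive-day counts below are made-up placeholders, not Backblaze’s actual data:

```python
def annualised_failure_rate(failures: int, drive_days: int) -> float:
    """Backblaze-style AFR: failures per drive-year, expressed as a percentage."""
    return failures / drive_days * 365 * 100

# Placeholder numbers chosen purely to show the arithmetic.
print(f"{annualised_failure_rate(25, 912_500):.2f}%")    # 1.00%
print(f"{annualised_failure_rate(69, 1_825_000):.2f}%")  # 1.38%
```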
Here’s the tabulated Backblaze summary data:
The SSDs did fail less often than the HDDs — with a 1.0 per cent annual failure rate as opposed to 1.38 per cent for the disks. Then Klein charted the data over time to see what the curves looked like:
There’s an uncanny resemblance in the shape of the two curves. However the HDD curve steepens substantially after four years of deployment. So the big, big question is whether the SSD curve will do the same thing. We guess we’ll see in October 2022, so long as Andy Klein puts the data out for us all to see.