Meet Tiger Technology. It provides a hybrid multi-cloud file namespace for Windows Servers and enables space-saving file tiering from on-premises servers to cheaper file and object stores with ancillary backup, archive, file sync, business continuity, and disaster recovery benefits.
Alexander Lefterov
Tiger Technology is a 50-person storage company based in Sofia, Bulgaria, with offices in the USA and UK. It was started in 2003 by founder and CEO Alexander Lefterov. He saw that Windows Server data sharing could be enhanced, both for SANs and files, by manipulating metadata. The company’s MetaSan software product evolved into Tiger Store, which provides on-premises file sharing. Tiger Pool combines multiple volumes into a single pool, and Tiger Spaces enables file sharing among work group members.
Tiger Bridge was then developed as a cloud storage gateway and tiering product. Before getting into that, we’ll note there are two rack-level hardware products: the Tiger Box appliance and the Tiger Server metadata controller. Both come with Tiger Store and can have the Pool, Spaces, and Bridge software added.
Tiger Bridge is a Windows Server kernel-level filesystem filter driver. It monitors an on-premises fileset and can move infrequently accessed files to cheaper storage, saving primary storage capacity. Files are selectively moved according to settable policies, and their metadata remains on-premises in so-called stub files.
When a user or application needs to access them, they are fetched from the destination storage transparently to the requesting entity. Tiger Bridge implements a single namespace across the source Windows Server and destination storage, using an NTFS extension over HTTPS/SSL that adheres to Active Directory ACLs for access control.
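Tiger Bridge does all of this inside a kernel-mode filter driver, so nothing below reflects its actual implementation. As a rough, user-space sketch of the policy idea only – with invented age and size thresholds, and a small JSON file standing in for a stub – the logic looks something like this:

```python
import json
import os
import shutil
import time

# Hypothetical policy: tier files not read for 90 days and larger than 100MB.
AGE_LIMIT_SECONDS = 90 * 24 * 3600
SIZE_LIMIT_BYTES = 100 * 1024 * 1024

def tier_cold_files(source_dir: str, destination_dir: str) -> None:
    """Copy cold files to cheaper storage and leave a small stub behind."""
    now = time.time()
    for name in os.listdir(source_dir):
        path = os.path.join(source_dir, name)
        if not os.path.isfile(path) or path.endswith(".stub"):
            continue
        info = os.stat(path)
        if now - info.st_atime < AGE_LIMIT_SECONDS or info.st_size < SIZE_LIMIT_BYTES:
            continue  # still hot, or too small to be worth moving
        shutil.copy2(path, os.path.join(destination_dir, name))  # data goes to the cheap tier
        with open(path + ".stub", "w") as f:
            json.dump({"size": info.st_size, "tiered_to": destination_dir}, f)  # metadata stays local
        os.remove(path)  # reclaim primary capacity; a recall would reverse these steps
```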
The destination systems can be on-premises NAS filers, tape libraries, and object stores (S3), the Fujifilm Object Archive, and hot, cool, and archive object stores in the AWS, Azure, Google, and IBM clouds. Cloud storage provider Wasabi OEMs Tiger Bridge, and Tiger Bridge also supports the Seagate Lyve Cloud and is compatible with Veeam Backup and Replication.
File data is replicated to the destination systems and policies can be set so that hot files are replicated whenever changes are made. This provides a mechanism for file synchronisation across multiple sites for file-sharing scenarios and also for disaster recovery. A failed source file server can be reinstated at a remote site using the replicated files. The file folder system can be set up virtually instantaneously, using metadata, and then file data streamed to the new server in the background. Any files directly accessed before being streamed get pulled to the head of the queue and streamed at once.
Tiger Bridge CloudDR
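The “accessed files jump the queue” behaviour described above can be pictured as a priority queue. The sketch below is purely illustrative – the file names, priority values, and single-threaded pull model are all invented, not Tiger’s mechanism:

```python
import heapq
from itertools import count

class RestoreQueue:
    """Background restore queue in which on-demand accesses jump to the front."""
    ON_DEMAND, BACKGROUND = 0, 1   # lower value is served first

    def __init__(self, files):
        self._order = count()
        self._heap = [(self.BACKGROUND, next(self._order), f) for f in files]
        heapq.heapify(self._heap)
        self._restored = set()

    def touch(self, filename):
        """A user or application just asked for a file that is not yet restored."""
        if filename not in self._restored:
            heapq.heappush(self._heap, (self.ON_DEMAND, next(self._order), filename))

    def next_file(self):
        """Return the next file to stream back from the destination store."""
        while self._heap:
            _, _, filename = heapq.heappop(self._heap)
            if filename not in self._restored:
                self._restored.add(filename)
                return filename
        return None

queue = RestoreQueue(["a.mov", "b.mov", "c.mov"])
queue.touch("c.mov")                         # a user opens c.mov before it has streamed
print(queue.next_file())                     # c.mov – served first
print(queue.next_file(), queue.next_file())  # a.mov b.mov – background order resumes
```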
Lefterov says that, although Tiger Bridge can be used for migration to the cloud, its main purpose is to enable on-premises file-based workloads to extend out to the cloud, using its elastic and affordable capacity, with no change to workflow procedures.
Tiger Tech has a roster of thousands of customers, many in the media and entertainment market. It provides Tiger Bridge as a way for them to bring the cloud’s scalable capacity and relative low cost into their on-premises workflows with little or no change.
A specific version of the software, Surveillance Bridge, has been built to store video files in the cloud with their stubs on the video server for fast search and identification.
Surveillance Bridge
The Bridge software is available under subscription and fixed-term contracts.
Competition
Tiger Technology competitors include Komprise, which eschews stub file technology, preferring its own dynamic link software, and which provides a set of analytic software layered on top. Another competitor is Data Dynamics and its StorageX file virtualization software, and we should also include Rubrik with its acquired Igneous unstructured Data-Management-as-a-Service.
Finally we should mention the Cohesity DataPlatform and its SmartFiles tiering technology. That’s four substantial competitors – so Lefterov’s Tiger needs a strong software roar to progress against them.
“You have to spend money to make money” seems to be the mantra being adopted by Backblaze.
The cloud storage and backup provider reported Q4 fiscal 2021 revenues up 28 per cent year-on-year at $18.7 million with a net loss of $9.6 million. B2 cloud storage revenues were 56 per cent higher at $6.6 million while Computer Backup revenues rose 16 per cent to $11.9 million. Full-year revenues were $67.5 million, up 25 per cent, with a net loss of $21.7 million.
The chart below only shows the information we have – Backblaze has not released the missing numbers. We can see, though, that losses are deepening faster than revenues are rising. Spending has risen.
William Blair analyst Jason Ader told subscribers: “While we expected this bootstrapped business to use its IPO proceeds to drive growth, management made clear that given Backblaze’s market opportunity ($91B cloud storage TAM in 2025, according to IDC), the time is right to step on the sales and marketing gas.”
Ader reckons “Backblaze is a pure-play on SMB cloud adoption, with investors getting a high-growth, independent cloud storage platform supported by a cash-cow computer backup business that can also drive cross-selling opportunities.”
Backblaze had 498,933 customers at quarter end versus 466,298 a year ago.
The outlook for the current quarter (Q1 2022) is for revenues between $19.0 million and $19.5 million, with $83–86 million full-year revenues.
As of 3 March, Backblaze will no longer offer USB flash drive restores. Small restores can be done over the internet with larger restores available on an orderable USB disk drive. Here are the full details on the Backblaze USB Restore, Return, Refund program.
Though Pure Storage and Nutanix can now be regarded as incumbents, they still behave like growth-hungry startups.
Nutanix went public in 2016 following Pure’s 2015 IPO. Since then Nutanix has amassed 20,700 customers and enjoyed $1.39 billion in revenues in its fiscal 2021. Pure Storage has just under 10,000 customers and reported $1.68 billion in revenues for its fiscal 2021.
Both have embraced cloud and subscription business models, and both face strongly placed legacy incumbents: Pure competes with all-flash arrays from Dell EMC, HPE, IBM, and NetApp, while Nutanix competes primarily with VMware and also with hyperconverged infrastructure appliances (HCIA) from Dell EMC, Cisco, and HPE.
And yet the two companies are markedly different in a couple of respects. First, Pure’s on-premises arrays are based on proprietary hardware – its homegrown solid-state modules – whereas Nutanix is a software-only supplier. Second, Nutanix has a history of making increasingly large losses, whereas Pure, while still loss-making, is almost conservatively run by comparison.
Nutanix
Here is Nutanix’s quarterly revenue and profit/loss history:
Its latest quarter losses, $418.9 million, exceeded revenues of $378.5 million. The annual revenue and profit/loss history shows a pattern of losses increasing faster than revenues from its fiscal 2019 onwards:
A look at quarterly revenues cut by fiscal year shows that Nutanix revenue growth hit a wall in its fiscal 2019 and only now, in the last two quarters, has it regained a high growth rate:
Despite this, Nutanix is in no danger of running out of cash. That’s thanks to Bain Capital in large part, as a look at its cash and short-term investment history illustrates:
Why the jump in Q1 2021? CFO Dustin Williams said in that quarter’s earnings call: “We closed the quarter with cash and short-term investments of $1.32 billion versus $720 million in Q4 ’20. The Q1 cash total includes $750 million from the Bain convertible note, less expenses, and $125 million stock buyback.”
The Bain Capital private equity convertible note was a $750 million investment made in August 2020 with the intention of helping Nutanix grow. Note also that Nutanix spent $125 million in cash to buy back its own shares at that time.
Pure Storage
Pure looks to be a conservatively run company in comparison – with a much lower level of losses, and a more consistent growth rate. Here’s the quarterly revenue and profit/loss story:
The annual one:
And a look at quarterly revenues by fiscal year to view the revenue growth trajectory:
Apart from a blip in its fiscal 2021 Pure has grown consistently, with a much more consistent set of curves than the equivalent Nutanix chart.
On its storage array front Pure is facing two burgeoning startups in the form of Infinidat and VAST Data, both angling for enterprises to use their kit for storing primary and secondary data. VAST doesn’t have a block-based offering but punts its product as being Universal Storage. Then there are other all-flash startups with their hats in the ring, such as Pavilion Data and StorONE.
Nutanix is in a safer position in that there are no new startups offering hyperconverged appliance software and equivalent platform functionality. Attention has moved to Kubernetes and the HCIA startup era is finished. In that sense Nutanix, unlike Pure, is safe from being attacked from below by hungry startups.
Profitability
Eventually both will have to become profitable. That’s the whole point of their customer growth – to amass a customer base large enough to sell into at a profit. Just how many customers does Nutanix want before it decides profitability should be prioritised over growth? 25,000? 50,000? VMware’s vSAN had passed 30,000 customers in 2020’s fourth quarter, according to IDC. Perhaps that’s Nutanix’s aim: equal the vSAN customer count.
Pure should have a much easier path to profitability – its losses are a fraction of Nutanix’s. A look at the quarterly revenue history chart indicates that, were it to push revenues up to the $600 million level, it would probably enter profit territory.
Nutanix might do the same if its revenues rose to the $800 million per quarter level, which is a much higher mountain to climb. Put another way, growth matters more to Nutanix than to Pure, and profitability is closer for Pure than it is for Nutanix.
Analysis: N-able provides data protection and other services to more than 25,000 MSPs, yet many of us in the storage world have not heard of it – which is unsurprising. Although the company was started in 2000, it was bought by SolarWinds in 2013 ($120M in cash) and disappeared, so to speak, becoming the SolarWinds MSP business when combined with LOGICNow, which SolarWinds acquired in 2016. Now it has come back into public view, reappearing in July last year as an independent company.
What happened and what kind of data protection company is N-able?
Spin-off
Although parent company SolarWinds was affected by a cyberattack on 14 December 2020, the separation of N-able had been discussed from August 2020 onwards. In the cyber incident, Sunburst malicious code was injected into SolarWinds’ Orion software. N-able said it had found no evidence of Sunburst in its own systems.
However, in N-able’s March 2021 SEC 10-12B registration statement it declared: “We believe the cyber incident has caused reputational harm to SolarWinds and also had an adverse impact on our reputation, new subscription sales and net retention rates, although the extent of such impact was not significant in our financial results during 2020. In the final weeks of December 2020 and in the first quarter of 2021, we experienced an adverse impact to new subscription sales and expansion rates relative to historical levels. We believe this was due in part to our decision in response to the Cyber Incident to temporarily reduce investments in demand generation activities through January 2021, as well as a result of certain of our MSP partners delaying their purchasing decisions as they assessed the potential impact of the Cyber Incident.”
Seen in this context it’s conceivable that N-able’s separation had added impetus.
When N-able was spun off, it claimed more than 25,000 MSP partners and more than 500,000 SME customers for its services from those MSPs. The majority shareholders are SolarWinds itself, SolarWinds shareholders, Silver Lake, and Thoma Bravo.
Image vs file backup
In an IT Press Tour briefing this month, N-able laid out why it thought its cloud-based data protection services were a good deal for its MSP customers. The basic reason is that it provides file-based rather than virtual machine image-based protection. This means it requires less storage capacity than image-based protection while still enabling recovery via VM creation.
Chris Groot, GM for N-able’s data protection business, said: “If you’re capturing all the information from within the operating system, you can actually bring back a full machine as a virtual machine, because a virtual machine is just a series of files organised in a very specific way. So being able to rehydrate that way is actually a different approach, one that we take, and at the end of it, it does provide that image recovery experience.”
A comparison between the two styles of backup showed that incremental image backups could consume 30GB/day while the file-level alternative used up 0.5GB, due to change rate differences – as the slide below illustrates.
When compounded over 60 days, and taking compression into account, the image backups need 2,150GB of capacity while N-able’s file-level ones need 380GB – a 5.6x capacity advantage. There is also a time and bandwidth advantage, as shooting 0.5GB up to N-able’s cloud takes far less of both than pumping up 30GB.
Groot said when “running an incremental backup in one hour; with 30 gigs, you’re going to need an upload speed of 68 megabits. That same example, if you’re reading with half a gig, you’re going to need an upload speed of 1.1 megabits. So that just means … you don’t have to worry about those bandwidth constraints.”
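A quick back-of-the-envelope check (decimal gigabytes assumed, nothing vendor-specific) reproduces those bandwidth figures:

```python
# Back-of-the-envelope check of the upload speed needed to finish an
# incremental backup inside a one-hour window (decimal gigabytes assumed).

def required_mbps(gigabytes: float, window_seconds: float = 3600.0) -> float:
    """Sustained upload speed in megabits per second."""
    return gigabytes * 8_000 / window_seconds  # 1GB = 8,000 megabits

for size_gb in (30.0, 0.5):
    print(f"{size_gb:>4} GB/hour needs {required_mbps(size_gb):.1f} Mbit/sec")

# Prints roughly 66.7 Mbit/sec for 30GB and 1.1 Mbit/sec for 0.5GB, in line
# with the ~68Mbit and 1.1Mbit figures quoted above.
```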
It can also mean that more backups can be run, improving backup granularity as customers have more restore points available.
Business results
As N-able is now a public company it has to report on its activities. Its revenues in the third 2021 quarter, ended 30 September, were $88.4 million, 16 per cent higher than a year ago. It made a profit of $1.87 million, which compares to the year-ago loss of $1.13 million.
There was subscription revenue of $86.1 million, representing approximately 17 per cent year-over-year growth.
Growth has been consistent over the years, as indicated by a chart showing MSP cohort revenues by year since 2015:
We’ve been able to compare some of N-able’s Q3 numbers with fellow MSP backup provider Datto:
N-able does not provide a quarterly ARR number, nor an ARR-per-MSP number.
Datto’s revenues of $157.9 million were almost double N-able’s $88.4 million. They both have huge numbers of MSP customers and both provide, albeit differently, their number of high income MSPs: 1,300 >$100,000 customers for Datto vs 1,662 >$50,000 customers for N-able.
We’ve computed the nominal revenue per MSP for each and Datto comes out ahead – at $8,676, vs N-able’s $3,563. N-able says its average revenue per partner is $13,000. Multiplying that by its 25,000 total partner number gives us $325 million, so we think this could be average revenue per year or for a subset of its partners.
Datto is also more profitable than N-able – $13.5 million vs $1.9 million.
All of which goes to show that N-able has a lot of room for growth. It competes with Kaseya and Veeam for MSP data protection business. Privately owned Kaseya doesn’t reveal the number of its MSP partners or the revenue it gets from them. Neither does Veeam. We have no means of comparing N-able’s position in the MSP data protection market with either Veeam or Kaseya.
Footnote: N-able has just announced DNS Filtering which, it says, strengthens an MSP’s ability to proactively safeguard itself and its customers from threats and cyberattacks, all managed from within the N-central dashboard. DNS Filtering provides real-time, smart identification of malicious websites; multi-client management from a single platform and single screen; a fast, redundant Anycast network; and on-demand, drill-down reporting and analytics.
N-able says 12,000 of its 25,000 MSPs use its data protection/backup offering.
The Piql-based digital storage vault in Svalbard is storing NFTs of money created for use on the remote Arctic island.
We’re told that, throughout much of the 20th century, American, British, Norwegian, Soviet and Swedish coal mining companies created their own banknotes. The Soviets also created coins. The miners received their salary in local currency, and used it for purchasing goods – exclusively on Svalbard.
Last year Sparebank 1, a Norwegian alliance of savings banks, launched digital Svalbard banknotes and coins that could be bought and sold via a Svalbard money website. The coins and notes are so-called NFTs (“non-fungible tokens”) based on the Ethereum blockchain. That operation closed down on 7 February, with the NFTs remaining tradeable on the OpenSea secondary market site.
Svalbard money website
NFTs are unique digital units or tokens that can be bought and sold and are stored on an immutable blockchain ledger. The NFT ledgers are said to provide a public certificate of authenticity or proof of ownership but they don’t prevent the sharing or copying of the underlying digital files or, apparently, prevent other NFTs being created based on the same digital files.
Here’s where we go quietly nuts. From 14 February, the digital Svalbard money will be stored for eternity in the Arctic World Archive located inside an old Svalbard coalmine. The actual data from the notes and coins, as well as historically collected material, will be stored on PiqlFilm – reels of 35mm film stock said to be immutable and good for a 1,000-year-plus lifetime. I mean, really, who cares that a bank virtually no one has heard of is storing money NFTs, which are of dubious and erratic value, inside shipping containers in an island coalmine off Norway and beyond the Arctic Circle?
The ingenious Morten Søberg, responsible for public relations at Sparebank 1 and the initiator of the Svalbard money project, said: “With the help of Piql’s technology, this unique part of financial history will be stored safely for eternity in the Arctic World Archive. In one way, the Svalbard money will be everlasting. It is a beautiful idea.”
In your dreams, Morten.
Rune Bjerkestrand, managing director of Piql, bigged up the idea as well. “The Svalbard money has come full circle. They have been returned to where it all began more than 100 years ago, deep in one of the Arctic coalmines. The dry, cool permafrost will help preserve this fascinating piece of Svalbard memory for future generations.”
Broadcom announced the availability of the industry’s most secure and highest density Gen-7 64Gbit/sec Fibre Channel switch platform: the Brocade 128-port G730 Switch. It also announced the industry’s first double-density 64Gbit/sec Fibre Channel optical transceiver, which expands the port density for the Brocade G730 and G720 switches. The company says Brocade Gen-7 Fibre Channel has autonomous SAN capabilities with self-optimising and self-healing features to maximise performance and availability. It can automatically detect and mitigate issues that can lead to disruptions or outages. For example, by understanding and analysing network telemetry data in real time, the SAN can automatically make intelligent decisions on traffic prioritisation and congestion mitigation to ensure non-stop operations.
…
Catalogic has updated its CloudCasa Kubernetes data protection product, enabling organisations to restore data across clusters, regions, cloud accounts, and cloud providers using storage-class remapping.
It claims CloudCasa is the first Kubernetes data protection product or service to allow auto-creation of an Amazon EKS cluster during recovery, based on the configuration of the backed-up cluster. Catalogic has also introduced security scanning for Kubernetes clusters and AWS cloud accounts to protect against intrusion and data exfiltration due to misconfiguration. Other additions include Kubernetes security posture review, AWS cloud security posture review, Kubernetes cross-cluster, cross-account, and cross-cloud restores, organisation support for enterprises, and agent auto-updates.
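CloudCasa’s internals aren’t documented here, but as an illustration of what storage-class remapping means in practice, the hypothetical sketch below rewrites spec.storageClassName in a backed-up PVC manifest so the volume is provisioned with a class that exists on the target cluster (the gp2-to-managed-premium mapping is invented):

```python
# Illustrative only: rewrites the storage class in a backed-up PersistentVolumeClaim
# manifest before it is re-applied on a different cluster or cloud. The mapping
# below (AWS gp2 -> Azure managed-premium) is a made-up example.

STORAGE_CLASS_MAP = {"gp2": "managed-premium", "io1": "managed-premium"}

def remap_storage_class(pvc_manifest: dict) -> dict:
    """Return a copy of a PVC manifest with its storage class remapped."""
    remapped = dict(pvc_manifest)
    spec = dict(remapped.get("spec", {}))
    source_class = spec.get("storageClassName")
    if source_class in STORAGE_CLASS_MAP:
        spec["storageClassName"] = STORAGE_CLASS_MAP[source_class]
    remapped["spec"] = spec
    return remapped

pvc = {"kind": "PersistentVolumeClaim",
       "metadata": {"name": "app-data"},
       "spec": {"storageClassName": "gp2", "resources": {"requests": {"storage": "10Gi"}}}}
print(remap_storage_class(pvc)["spec"]["storageClassName"])  # managed-premium
```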
…
Data protector Commvault announced GA for Feature Release 11.26. Protection enhancements include:
Cloud-native APIs like Amazon Elastic Block Store (EBS) direct write and Azure Stack Changed Block Tracking (CBT) incremental snapshots to increase performance and reduce reliance on cloud access nodes;
Extending Commvault Disaster Recovery orchestration to include Object Storage and Big Data File Systems like Hadoop;
Protecting and preserving cloud metadata tagging of workloads to simplify conversion and migration across Azure and AWS;
Accelerating cross-region disaster recovery in AWS with EBS direct-write APIs;
Allowing Service Providers to deliver Commvault DRaaS to their clients.
Security enhancements include:
Utilising hardware-based security tokens like those offered through YubiKey and the US DoD, along with common access card support;
Leveraging secure cloud authentication methods, including the AWS Key Management System (KMS) and Azure Key Vault.
Data Insights enhancements include:
Upgraded entity extraction engine reduces memory requirements and increases performance of eDiscovery and compliance operations;
Utilising ML-driven data insights within Data Governance to calculate sensitive data risk assessments and identify anomalies in user behaviour.
…
Confluent, which supplies a data streaming platform, announced fourth quarter revenue of $120 million, up 71 per cent year-on-year. Full year revenue was $388 million, a rise of 64 per cent year-on-year. There was an operating loss of $113.7 million in the quarter and $339.6 million in the year. William Blair analyst Jason Ader said “The Confluent Cloud business continued its trajectory of strong growth, accounting for almost 30 per cent of revenue and growing more than 210 per cent in the quarter.”
…
Distributed file storage supplier CTERA announced record results for 2021. It grew annual recurring revenue for its edge-to-cloud offerings by 43 per cent year-on-year. The US federal government sector grew over 300 per cent with wins in Veterans Affairs and the US Air Force.
…
Data migrator and manager Datadobi has announced that National Cooperative Purchasing Alliance (NCPA) members can now purchase Datadobi’s software suite through the cooperative via its distribution partner Climb Channel Solutions – a subsidiary of Wayside Technology Group and an international value-added distributor. NCPA is a US national government purchasing cooperative working to reduce the cost of goods and services by leveraging the purchasing power of public agencies in all 50 states. By using the cooperative’s purchasing contracts and extensive agency network, Datadobi says it will be able to more effectively deliver its unstructured data management offerings to over 90,000 public-sector and non-profit organisations in the US.
…
DDN has announced that the US Department of Defense High Performance Computing Modernization Program (DoD HPCMP) will implement its EXAScaler ES400X appliances in conjunction with Penguin Computing’s TrueHPC supercomputing platform. The new systems will be integrated at the Navy DoD Supercomputing Resource Center (Navy DSRC) and Air Force Research Laboratory DoD Supercomputing Resource Center (AFRL DSRC). The Navy DSRC system uses 26PB of EXAScaler storage, including 4PB of NVMe-based flash storage, and 22PB of disk drive-based storage for long-term data. The AFRL DSRC system, supported by over 20PB of DDN’s EXAScaler storage, enables high-performance data analytics as well as adding to the HPCMP’s capability to support DoD artificial intelligence requirements.
…
SaaS data protector Druva has announced the launch of its Managed Service Provider (MSP) channel programme in Asia-Pacific and Japan (APJ).
…
FADU Technology says its first-generation FC3081 and second-generation FC4121 SSD controllers are in mass production. Both are fully compliant with the Open Compute Project (OCP) NVMe Cloud SSD Specifications.
FADU’s first-generation FC3081 SSD controller (NVMe 1.3a/PCIe 3.1 x 4) is the first to deliver 100K IOPS/watt in the industry.
The second-generation FC4121 SSD controller (NVMe 1.4a/PCIe 4.0 x 4) doubles SSD performance over the first generation, with a Sequential Read of 7,300MB/sec, Sequential Write of 4,800MB/sec, Random Read of 1,500K IOPS, and Random Write of 185K IOPS.
…
FileShadow has announced an Android app that gives users control over their data from a mobile device, regardless of where it is located. The service connects data repositories from the cloud such as Box, Dropbox, Google Drive, iCloud, and Slack; local storage (macOS, Windows Desktops, Windows Virtual Desktops); and network and direct-attached storage (NAS/DAS) devices. Users can manage collections, upload and download files, apply tags, view and publish files, and manage their account from the app. With machine learning, data can be searched or organised based on file content, OCR results, GPS location, or image analysis. A search for sailing can find images with a sailboat or the word “sailing” in a document.
…
We have been sent an IBM FlashSystem and Spectrum Virtualize FAQ PDF document. It’s full of good stuff about the latest FlashCore Modules, safeguarded copies, configuration details, and so on. Download it here.
Sean Milner
…
Big memory supplier MemVerge has appointed Sean Milner as VP of sales, reporting to COO Jonathan Jiang. Milner brings more than 20 years of experience in selling SaaS, cloud, enterprise software, and IT infrastructure systems. Most recently he was SVP Sales for Avochato where he led the company’s sales, business development, and customer success teams.
…
Mainframe VTL converter Model9 has hired Mike Canavan away from his role as VP Sales for Cohesity’s SaaS business to become VP Sales Americas, charged with growing the sales team, acquiring new talent, and driving Model9’s GTM strategy in the Americas. Canavan has held senior leadership roles at Hitachi Vantara (more than 12 years), Pure Storage (FlashBlade), and EMC. Dan Shprung, Model9’s chief revenue officer, said Canavan “is exactly what Model9 needs to make a huge impact as we move forward with our ambitious expansion plans for 2022 and beyond.”
…
Nebulon has developed the first Red Hat Ansible collection for smart infrastructure, which includes a set of modules that customers can use to integrate Nebulon infrastructure management into their Ansible automation playbooks. With the Nebulon Ansible Collection and a combination of Nebulon cluster (nPod), operating system, and storage provisioning, IT organisations can deploy infrastructure end to end and configure applications entirely from within their Ansible playbooks. Nebulon says the Ansible Collection, combined with the Nebulon ON cloud control plane, reduces operational overhead by up to 75 per cent compared to hyperconverged infrastructure (HCI) and 3-tier infrastructure alternatives. More details are available on our sister site, The Register.
…
Netlist announced the United States District Court for the Central District of California (the Court) entered Judgment in Netlist’s favor and against Samsung for material breaches of various obligations under the Joint Development and License Agreement (JDLA), which the parties executed in November 2015. The Court entered Judgment for Netlist on each of its three claims: (1) that Samsung breached its supply obligations to Netlist; (2) that Samsung breached its payment obligations to Netlist; and (3) that Netlist properly terminated the JDLA such that Samsung’s licenses and rights under the JDLA have ceased. There is pending patent litigation between Netlist and Samsung in federal court. Netlist won $40 million from SK hynix in a patent cross-licensing deal in April last year.
…
StorCentric’s Retrospect data protection business has announced GA of Retrospect Backup 18.5, featuring new anomaly detection, customisable filtering and thresholds, and enhanced ransomware protection. It has deeper Azure Blob integration for Immutable Backups and integrated cloud bucket creation. Administrators can tailor anomaly detection to their business’s specific systems using customisable filtering and thresholds for each of their backup policies. Detected anomalies are aggregated in the Retrospect Management Console across the entire business’s Retrospect Backup instances, or across a partner’s client base, with a notification area for responding to them. V18.5 also includes support for LTO-9, with capacities up to 18TB (45TB compressed).
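Retrospect doesn’t spell out its detection logic, but threshold-based anomaly flagging of this kind can be pictured roughly as follows – the policy names, thresholds, and metric fields here are invented for illustration, not Retrospect’s implementation:

```python
# Illustrative only: flag a backup run as anomalous if the fraction of changed
# files exceeds a per-policy ceiling. All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class BackupRun:
    policy: str
    files_changed: int
    files_total: int

CHANGE_RATE_THRESHOLDS = {"file-server": 0.05, "laptops": 0.20}  # hypothetical ceilings

def is_anomalous(run: BackupRun) -> bool:
    threshold = CHANGE_RATE_THRESHOLDS.get(run.policy, 0.10)  # default ceiling
    return run.files_total > 0 and run.files_changed / run.files_total > threshold

runs = [BackupRun("file-server", 120, 1_000_000),
        BackupRun("file-server", 400_000, 1_000_000)]  # mass change: possible ransomware
print([is_anomalous(r) for r in runs])  # [False, True]
```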
…
Seagate has announced 2TB and 4TB external Game Drives for Sony PS5 and PS4 storage, from which PS4 games can be played directly. They are specifically designed and optimised with firmware to work with PS4 and PS5 consoles, are lightweight, and offer plug-and-play installation (USB 3.2 Gen 1) in under two minutes with no tools required. Seagate has also announced 2TB and 5TB Horizon Forbidden West Limited Edition Game Drives, with “graphics on the exterior of the drive that highlight the strength and mystery of the Horizon Forbidden West hero, Aloy.” Shipments start next month. Seagate’s Game Drive MSRP is $92.49 (2TB) and $139.99 (4TB), and the Horizon Forbidden West Limited Edition Game Drive retails at a suggested price of $99.99 (2TB) and $159.99 (5TB).
…
Cloud data warehouser Snowflake has made global technology consulting and solutions company Hexaware a Premier Services Partner. Hexaware says it will deliver accelerated outcomes at scale and speed on Snowflake’s Data Cloud through its proprietary Amaze for Data & AI platform, backed by its Snowflake Center of Excellence with more than 400 data consultants and SnowPro-certified architects.
…
Western Digital says CERN, the European Organization for Nuclear Research which operates the Large Hadron Collider, is a customer for its Data60 disk drive chassis (JBOD). Each JBOD is equipped with 60x 14TB Ultrastar SAS HDDs and is connected to a front-end server with 4x 12Gbit/sec SAS links. A differentiator of the Data60 JBOD is its ability to let each drive spin at its maximum performance, thanks to the ArcticFlow and patented IsoVibe technologies. CERN required high-capacity storage systems capable of reading and writing data at a rate of 12.5GB/sec in each direction for its Run 3 project, and the Data60 met that need.
…
HPE said its acquired Zerto business unit saw strong demand in 2021 for its DRaaS ransomware recovery capabilities. Zerto had “a record number of customer and partner wins.”
These included AssureStor, developer of a Zerto-powered cloud-based DR offering; StorMagic, which is now delivering an HPE validated design for edge-to-edge workload protection; Epiq Global, which is using Zerto to protect petabytes of data for hundreds of legal sector customers; and IEWC, which implemented Zerto across its IT environment. New customers using Zerto include Atlantic Constructors; Boston Medical Center; City of Georgetown, TX; Fairfax County, VA; and Washington County, TX.
Memory and NAND manufacturer SK hynix has demonstrated processor-in-memory (PIM) technology with a sample graphics memory chip using it.
Its GDDR6-AiM chip (AiM meaning Accelerator in Memory) adds computational functions to GDDR6 memory chips. GDDR6 is Graphics Double Data Rate 6 Random Access Memory, which processes data at 16Gbit/sec. GDDR, as opposed to DDR, was originally intended for use by GPUs. GDDR prioritises bandwidth over latency, which is a main focus of DDR memory design.
The company says GDDR chips are among the most popular memory chips for AI and big data applications.
GDDR6-AiM paired with a CPU or GPU can run certain computations 16 times faster than the same CPU/GPU paired with DRAM. We’re told SK hynix’s PIM reduces data movement to the CPU or GPU, so lowering power consumption by up to 80 per cent. The GDDR6-AiM chip runs on 1.25V, lower than SK hynix’s existing GDDR6 product’s operating voltage of 1.35V.
SK hynix does not say which processing instructions have been added to the graphics memory or how the processing elements are distributed among the memory cells. This is in contrast to Samsung, which provided more information about its MRAM-based PIM technology in January. MRAM is faster than DRAM and uses less electricity in its operations.
An SK hynix-sponsored EE Times article by Dae-han Kwon, PhD, project leader of custom design at SK hynix, published in October 2021, adds a lot of extra information.
He shows a chart of power consumption for the elements involved in CPU processing of data; moving data from DRAM into the CPU cache area consumes vastly more energy than any of the other operations:
Then he discusses typical mathematical operations used in neural network processing, singling out the multiplication and addition of matrices when computing multi-output neuron operations.
Single vs multi-output neuron operations in neural networking
The author writes: “If circuits for these operations are added to the memory, data does not need to be transferred to the processor, and only results need to be processed in the memory and delivered to the processor.” This is very likely what SK hynix has done.
He continues: “For memory-bound applications such as RNN (Recurrent Neural Networks), significant improvements in performance and power efficiency are expected when the application is performed with a computational circuit in the DRAM. Considering that the amount of data to be processed will increase tremendously, PIM is expected to be a strong candidate to improve the performance limit of the current computer system.”
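The multi-output neuron operation discussed above boils down to a matrix-vector multiply-accumulate. The plain NumPy sketch below (nothing SK hynix-specific; shapes chosen arbitrarily) shows the arithmetic a bank-level PIM circuit would perform next to the data, so that only the small result vector has to travel to the processor:

```python
import numpy as np

# A multi-output neuron step is a matrix-vector multiply-accumulate: each
# output is a weighted sum of the inputs. In a PIM design the weight rows stay
# in the memory banks and only the small result vector travels to the
# processor. Shapes here are arbitrary and purely illustrative.

rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 4096))  # resident in (GDDR6) memory banks
inputs = rng.standard_normal(4096)           # activation vector from the host

outputs = weights @ inputs                   # the work a bank-level MAC circuit would do
print(outputs.shape)                         # (1024,) – only this returns to the host
```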
SK hynix intends to exhibit its PIM development at the 2022 International Solid-State Circuits Conference (ISSCC) in San Francisco, 20–24 February. It hopes its GDDR6-AiM chips will be used in machine learning, high-performance computing, big data computation and storage applications.
Ahn Hyun, SK hynix head of solution development who spearheaded the chip’s development, said “SK hynix will build a new memory solution ecosystem using GDDR6-AiM.” The company plans to introduce an artificial neural network technology that combines GDDR6-AiM with AI chips in collaboration with SAPEON, a recently spun-off AI chip company from SK Telecom. Ryu Soo-jung, CEO of SAPEON, said “We aim to maximise efficiency in data calculation, costs, and energy use by combining technologies from the two companies.”
An aerial view shows the Intel Rio Rancho campus in New Mexico. (Credit: Intel Corporation)
While checking out 3D XPoint’s manufacturing location – Intel’s Rio Rancho fab in New Mexico – we came across evidence that Intel had a gen-4 Optane development in April 2020.
Optane uses 3D XPoint technology which features a two-layer or deck crosspoint array with phase change memory cells and selectors providing a non-volatile memory that is faster than flash but not as fast as DRAM. Intel has locked its use to features in Xeon processors with each XPoint generation requiring a specific class of Xeon processor.
Optane products are available in DIMM (PMem) or slower SSD format, and their capacity depends upon the capacity of the 3D XPoint die used to build them. Increasing XPoint layering is more complex than adding layers to 3D NAND and, so far, we have seen gen-1 (Apache Pass) with two layers and gen-2 (Barlow Pass) with four.
The evidence we found contained a contribution by Darren Denardis, who was titled 4th Gen Optane Program Manager.
What he said was generic – Intel and Rio Rancho are both great stuff. He didn’t say anything specific about gen-4 Optane.
An odd thing is that LinkedIn knows nothing about Darren Denardis; a search returns “No results found”. He is listed as a symposium organiser from Intel for the 2010 Materials Research Society Spring Meeting and has various patents listed by Justia. We have asked Intel if he is still working on the 4th gen Optane program. If and when we get an answer we’ll update this article.
The arrival, acceptance, and establishment of the public cloud has prompted a whole new application infrastructure to be erected across the clouds and on-premises datacentres, easing and optimising the development and running of apps as virtual machines and containers.
William Blair analyst Jason Ader has described this new cloud infrastructure stack in a mail to subscribers. It acts as a great window through which to identify and place suppliers in the cloud infrastructure landscape.
He writes: “Applications don’t live in a vacuum. Every app has a stack of technology components or layers that sits underneath … While the application stack still depends upon core infrastructure elements like compute, storage and networking, these have increasingly become commoditised.”
Hence “IT practitioner mindshare … has shifted from base infrastructure to higher layers of the stack.”
Ader identifies four key layers of this stack:
Infrastructure automation – a middleman function between core infrastructure components and people managing them (DevOps teams, SREs, operators, cloud specialists).
Event streaming – for moving time-stamped data (events) between producers and consumers (apps, databases, data warehouses).
Operational databases – where app data is stored and retrieved, spanning back-end system-of-record databases and front-end system-of-engagement ones.
DevOps – tooling to build, test, deploy, and update custom apps at the top of the stack.
Off to the sides are complementary and necessary function areas such as data management and protection, security and compliance, monitoring and observability, and data analytics and business intelligence. Our diagram above positions these components.
Having set up this group of stack components and layers, Ader populates them with suppliers:
He then places the suppliers in the platform layers into four categories: prime disruptors, innovators, incumbents and CSPs (Cloud Service Providers).
The prime disruptors are:
DevOps – GitLab and GitHub (Microsoft)
Event streaming – Confluent
Operational database – MongoDB
Infrastructure automation – HashiCorp
Ader says the prime disruptors “were first movers in the shift to cloud, have established themselves as thought leaders in their respective areas, and are operating at or near scale.”
Innovators have much the same vision as the prime disruptors but may lack equivalent scale, technology and/or mindshare.
Both prime disruptors and innovators generally feature product-led growth, a hybrid go-to-market model (bottom-up and top-down selling), cloud-agnostic platforms, and a fully managed SaaS offering.
Incumbents were leaders in the pre-cloud era and are working to pivot their portfolios to align to customers’ needs in a cloud-centric world.
The CSPs provide IaaS functions and have built out higher-level functions as well. They may be innovators, and all provide single-destination, one-stop-shop stacks using off-the-shelf, open-source technologies for the various layers.
Ader classifies suppliers into four groups per stack layer. Here are his infrastructure automation supplier groupings:
Event streaming supplier groupings:
Operational database supplier groupings:
DevOps supplier groupings:
Ader devised and wrote his white paper to help provide a context and guide for the investors that subscribe to William Blair services. We can piggyback on his paper and get a great insight into how the cloud infrastructure stack is constructed, which supplier fits where, and a quick rating of the suppliers. It’s a fine piece of work.
Nutanix is reshaping its product set into five groups that run across public, private, and hybrid clouds with a consistent operating model and simplified packaging, metering, and pricing.
This move is in response to customers finding that although moving to a hybrid multi-cloud model for deploying and managing applications is sensible, the complexity of management, security, data integration, and cost is daunting.
Thomas Cornely, Nutanix SVP product management, issued a statement, saying “Our new, simplified portfolio brings together our rich product capabilities across on-premises and public clouds to deliver consistent infrastructure, data services, management, and operations for applications in virtual machines and containers.”
Nutanix’s global Enterprise Cloud Index (ECI) survey found that multi-cloud is currently the most commonly used deployment model and adoption will jump to 64 per cent in the next three years. It also showed that 87 per cent of respondents thought that successful multi-cloud deployments needed simpler management across mixed-cloud infrastructures.
Top multi-cloud challenges included managing security (49 per cent), data integration (49 per cent), and cost (43 per cent) across cloud borders. Some 80 per cent of respondents agreed that moving a workload to a new cloud environment can be costly and time-consuming.
Going all-in on one public cloud would solve some of these problems but customers, mindful of lock-in, don’t want to do that.
Rajiv Ramaswami, president and CEO at Nutanix, played the public-cloud-is-not-the-end-result card. “Solving these complexities is giving way to a new hybrid multi-cloud model that makes cloud an operating model rather than a destination.”
Nutanix’s new product groupings, with individual products still visible.
Nutanix says it has built an enterprise-ready, unified cloud platform with its HCI (hyperconverged infrastructure) system as the foundation. The elements provided on this base are:
Nutanix Cloud Infrastructure (NCI), a complete software stack including virtual compute, storage, and networking for virtual machines and containers that can be deployed in private datacentres on customers’ chosen hardware or in public clouds. Nutanix Cloud Clusters (NC2) are used to run NCI in a public cloud and its use cases include cloud bursting, disaster recovery, and datacentre lift and shift.
Nutanix Cloud Manager (NCM) works across public and private clouds, providing monitoring, insights, automated remediation, resource optimisation, unified security operations, regulatory compliance, and visibility into cloud metering and chargeback.
Nutanix Unified Storage (NUS), distributed and software-defined storage for volumes, files, and objects across private, public, or hybrid clouds, with license portability between them.
Nutanix Database Service (NDB) helps customers deliver Database-as-a-Service with database engines like PostgreSQL, MySQL, Microsoft SQL Server, and Oracle Database, with automated provisioning, scaling, patching, protection, and cloning of database instances across hybrid multi-cloud environments.
Nutanix End User Computing Solutions deliver virtual applications and desktops with a per-user licensing option for NCI to simplify capacity planning by matching the infrastructure cost model to that of the end user computing platform. There is a Desktop-as-a-Service (DaaS) platform that can run end user workloads on NCI, public clouds or hybrid clouds.
Blocks & Files’ Nutanix software stack diagram
All the new products are currently available to customers.
Comment
When an enterprise runs its own datacentres, app development and deployment is keyed into the datacentre features. This is not the case with AWS, Azure or GCP because they each have their own software abstractions between the hardware they use and the instances, compute, storage, and services they offer customers.
A company like Nutanix can offer its own abstraction layer across the public clouds, unifying them into a set of platforms on which to run Nutanix services (products). These services are available on-premises as well so that customers can choose where to run applications, virtual or containerised, and associated Nutanix products across the on-premises and AWS/Azure/GCP datacentre environments.
The two big obvious wins are the ability to optimise costs and avoid lock-in to any one cloud vendor. An equally obvious attribute of this is that you now look at your hybrid, multi-cloud datacentre environment through a Nutanix lens. But if you go hybrid or multi-cloud, you either manage the complexity directly yourself, or you look through a supplier’s lens and use its facilities.
The choice you make is based on your view of the breadth, depth, flexibility, and cost of the supplier’s products and services and the quality of its management facilities. There are few companies capable of offering such hybrid, multi-cloud app, service and management environments – Dell EMC (APEX), HPE (GreenLake), IBM (Red Hat), Nutanix, and VMware come to mind as examples.
Nutanix has a firm grasp on what it needs to offer and Ashish Nadkarni, group VP, Infrastructure Systems, Platforms and Technologies Group at IDC, recognised this. “Nutanix’s growth started as a more flexible, easier to manage alternative to SAN storage, and has grown into a full cloud platform from virtualization to networking to security, and supports all workloads across on-premises and public clouds. The new product portfolio delivers even more flexibility to help customers adapt to their business’s changing needs.”
Western Digital has quietly slipped a new PCIe 4 SSD for gaming into its range, which is faster than the old PCIe 3 model but not as quick as the million IOPS SN850.
The Black SN850 M.2 2280 format drive uses 96-layer 3D NAND and is much faster than the earlier 64-layer Black SN750 with its PCIe 3 bus. The new Black SN770 slots in between the two in terms of performance, as the table illustrates:
The SN770 is much faster than the 750, and has an SLC cache with a DRAM-less design, relying on its host’s DRAM during read, write and other operations. WD is pricing it under the SN850.
A second table lists the prices and endurance. We can see that WD has kept the endurance levels (terabytes written) fairly constant across these three models on a per-capacity basis:
Soon we shall start seeing PCIe 5 gaming SSDs and their performance should make PCIe 4 look pedestrian, with 2 million-plus IOPS and greater than 10GB/sec throughput levels – which means blisteringly fast loading times for gamers. There should be an equivalent uptick in enterprise application performance with throughput-bound applications getting a substantial jump in performance.
BPM – Bit-Patterned Media – a type of magnetic storage technology designed to increase the areal density of data storage beyond what is achievable with the conventional continuous granular media used in hard disk drives (HDDs). In BPM, data bits are stored in discrete, isolated magnetic islands or dots, each representing a single bit (either a “1” or a “0”). This contrasts with conventional magnetic media, where bits are stored in regions of a continuous magnetic film with tightly packed random grains.
Intevac BPM image.
By controlling the size and placement of these magnetic islands, BPM can theoretically achieve much higher data densities. Each island can be made smaller while still reliably storing a bit, pushing the limits of physical storage.
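To get a feel for the scaling, areal density rises with the inverse square of the island pitch. The sketch below uses a purely hypothetical 10nm pitch, not a shipping-product figure:

```python
# A rough feel for why smaller, well-placed islands matter: areal density scales
# with the inverse square of the island pitch. The 10nm pitch is hypothetical,
# chosen only for illustration.

INCH_IN_METRES = 0.0254

def areal_density_tbit_per_in2(pitch_nm: float) -> float:
    """Bits per square inch (in terabits) for a square island lattice."""
    pitch_m = pitch_nm * 1e-9
    bits_per_m2 = 1.0 / (pitch_m ** 2)       # one bit per pitch x pitch cell
    return bits_per_m2 * INCH_IN_METRES ** 2 / 1e12

print(f"{areal_density_tbit_per_in2(10):.1f} Tbit/in^2")  # ~6.5 Tbit/in^2 at a 10nm pitch
```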
In conventional media, closely packed magnetic grains can lead to interference or noise because the magnetic fields from adjacent bits can affect each other. BPM mitigates this by physically separating the bits, reducing bit-to-bit interference.
Creating uniform, nanoscale islands with precise placement is technically challenging. This involves advanced lithography techniques like nanoimprint lithography or self-assembly methods.
The complexity of manufacturing BPM can significantly increase costs, although this could potentially be offset by the benefits in data density.
HDD read/write heads must be designed to interact correctly with these smaller, discrete bits, which might require new technology or adaptations of existing technology.