Nasdaq-listed Quantum Corporation has released pre-configured data bundles to make it easier to purchase and deploy the company’s ActiveScale Cold Storage, the Amazon S3-like object storage system.
The ActiveScale offering claims to reduce cold storage costs by up to 60 percent by making it easier for organisations to dynamically move cold data across their own on-premises cloud infrastructure, rather than storing it in the public clouds run by the likes of AWS and Microsoft Azure.
Customers can build their own cloud storage system to control costs while ensuring “fast, easy access” to their data, said Quantum, for compliance, analysis, and business insight requirements. The data can be held in their own datacenter, colocation facility or at edge locations.
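Because ActiveScale presents an S3-compatible interface, applications written for Amazon S3 can in principle be pointed at a private deployment just by changing the endpoint. A minimal sketch in Python with boto3, assuming a hypothetical on-premises endpoint, bucket, and credentials:

```python
# Minimal sketch: pointing a standard S3 client at a private,
# S3-compatible object store. The endpoint URL, credentials, and
# bucket name are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://activescale.example.internal",  # hypothetical on-prem endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Archive a file to a cold storage bucket exactly as you would to AWS S3...
s3.upload_file("archive/results.tar", "cold-archive", "2023/results.tar")

# ...and read it back later for compliance or analysis.
s3.download_file("cold-archive", "2023/results.tar", "/tmp/results.tar")
```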
“We’re using Quantum ActiveScale to build Amidata Secure Cloud Storage, which delivers highly scalable, available, and affordable storage for organisations,” said Rocco Scaturchio, director of sales and marketing at Amidata, a managed service provider based in Australia. “The new bundles offering enables organizations to tap into cloud storage benefits even if they lack internal cloud expertise.”
ActiveScale Cold Storage is available in pre-configured bundles complete with all the components customers need to deploy it. The bundles come in four standard sizes – small, medium, large and extra large – ranging from 10 petabytes up to 100 petabytes of capacity.
ActiveScale combines the required object store software with hyperscale tape technology to provide “massively scalable, highly durable, and extremely low-cost storage for archiving cold data,” said Quantum. If organizations don’t have the internal skills to manage tape systems, they can use a managed service provider to run the system for them. The public cloud hyperscalers themselves use tape systems to hold customer data, as it is a cheap medium delivering bumper data density.
Jochen Kraus, managing director at German managed service provider MR Datentechnik, said: “ActiveScale provides the reliable, highly scalable, S3-compatible object storage we needed for building our new storage service. We have a very flexible platform for meeting a broad spectrum of customer needs.”
“Quantum’s end-to-end platform empowers customers to address their unstructured data needs across the entire data lifecycle and create flexible, hybrid cloud workflows that are designed for their unique needs and goals,” added Brian Pawlowski, chief development officer at Quantum.
Growing business caution has hit European server and storage sales, according to analysts, leaving the industry in negative growth territory.
Analyst house Context has revised down its overall 2023 forecast for sales growth through the European IT channel – from 1.6 percent growth for the year to -3.3 percent.
“The cumulative impact of the war in Ukraine, the high cost of living, the cost of capital and high interest rates has led to sinking business confidence, and not just in the European Union, but also in China and the US,” said Context.
The enterprise server market saw revenue decline 3.8 percent year-on-year in the first quarter, said Context, with an even bigger drop of 11.7 percent in the second quarter.
Performance was worst in the UK (-38 percent) and Italy (-29 percent), two of the bigger European markets, although sales are said to have “held up” in Spain, France and Germany.
Volume server sales fared little better, however, declining 21 percent across Europe. This was driven by a 26 percent year-on-year drop in the two-socket segment.
In the storage space, revenue growth fell significantly from 17.3 percent in the first quarter to -9.5 percent in the second quarter, thanks largely to the poor performance of the storage array segment (-13 percent), and especially hybrid and flash arrays.
The hyperconverged infrastructure (HCI) segment recorded a slight revenue increase in the second quarter of 2.9 percent, thanks to a strong performance in the UK (up 15 percent), and Italy – up 53 percent.
Other factors potentially influencing sales going forward include component pricing and availability, especially for artificial intelligence-ready components, the analyst said.
Comparatives from last year will also drag down server and storage growth figures for the rest of 2023. “Those double-digit server sales in 2022 are not going to be replicated in 2023, as they were driven mainly by the clearing of backlogs last year,” said Context.
So will there be a recovery in the server and storage market any time soon? The analyst says inflation is slowly receding across the European Union and opportunities remain for the IT channel thanks to enterprise digital transformation efforts, public sector investments, product refreshes, and the drive for sustainability.
Context is “hopeful” for a return to growth next year.
Data integrity biz Index Engines has boosted its product development and sales efforts with the hiring of two well-known industry veterans.
Update: Index Engines says no change to existing exec roster. 16 Sep 2023.
Geoff Barrall and Tony Craythorne have joined the leadership team – Barrall as the new chief product officer, and Craythorne as chief revenue officer.
Their additions come as Index Engines says it continues to experience “rapid growth” driven by the rise of ransomware attacks, with its CyberSense analytics engine able to detect ransomware data corruption and expedite the recovery of that data.
“They’re amazing executives with decades of experience in scaling businesses and storage technologies,” said Tim Williams, Index Engines chief executive officer. “These are the right people to help support our current strategic partners and the ones we are onboarding.”
“We’ve both followed Index Engines for years and we’ve been admirers of what they’ve built,” Craythorne said of himself and Barrall, who previously worked together at three other organizations. “They have a solution that is completely differentiated and superior to any other product on the market.”
Barrall added: “Ransomware detection on primary and secondary storage is no longer optional, it is a necessary component to ensure cyber resiliency across the enterprise.”
Prior to Index Engines, Barrall was the vice president and chief technology officer at Hitachi Vantara, where he led product strategy, partnerships, advanced research and strategic mergers and acquisitions. Before that, he founded several infrastructure companies, including BlueArc and Drobo. He has also undertaken leadership roles at Nexsan, Imation, and Overland Storage, where he was involved in product development.
Barrall is currently involved in various board memberships and advisory roles for both public and private companies.
Craythorne has experience of leading high-growth, scale-up companies in the US, Europe and Asia. Before joining Index Engines, Craythorne served as chief revenue officer of edge cloud provider Zadara, where he rebuilt the sales process and go-to-market strategy. Prior to that, he was CEO of Bamboo Systems Group where he led a rebrand, and developed and launched Bamboo’s Arm Server.
Craythorne has also served as the senior vice president of worldwide sales at unstructured data management vendor Komprise, where he built a new sales strategy, sales team, and channel strategy. In addition, he has held senior management positions at Nexsan, Brocade, Hitachi Data Systems, Nexgen, Bell Micro, and Connected Data. He is currently a board member and advisor to various startups.
Update: We asked Index Engines: “What happens to Jeff Sinnott whom we have listed as Index Engines’ VP Sales? Also what happens to Jeff Lapicco, Index Engines’ current VP Engineering?”
An Index Engines statement said: “Index Engines has unified product management, engineering, and support under Geoff as Chief Product Officer to help scale these areas as the company grows, and all sales and marketing roll up under Tony as Chief Revenue Officer to continue our successful expansion into new customer markets, while driving innovation into our product. No change for existing management.”
AI is becoming a major data management challenge for IT and business leaders, according to just-published research.
Companies are largely allowing employee use of generative AI, but two-thirds (66 percent) are concerned about the data governance risks it may pose, including privacy, security, and the lack of data source transparency in vendor solutions, according to the research.
The “State of Unstructured Data Management” survey, commissioned by data management vendor Komprise, collected responses from 300 enterprise storage IT and business decision makers at companies with more than 1,000 staff in the US and the UK.
While only 10 percent of organizations did not allow employee use of generative AI, the majority feared unethical, biased or inaccurate outputs, and corporate data leakage into the vendor’s AI system.
The research found that, to cope with these challenges while also searching for a competitive edge from AI, 40 percent of leaders are pursuing a multi-pronged approach to mitigating the risks unstructured data poses in AI, encompassing storage, data management and security tools, as well as internal task forces to oversee AI use.
The top unstructured data management challenge for leaders is “moving data without disrupting users and applications” (47 percent), but it is closely followed by “preparing for AI and cloud services” (46 percent).
“Generative AI raises new questions about data governance and protection,” said Steve McDowell, principal analyst at NAND Research. “The research shows IT leaders are working hard to responsibly balance the protection of their enterprise’s data with the rapid rollout of generative AI solutions, but it’s a difficult challenge, requiring the adoption of intelligent tools.”
“IT leaders are shifting focus to leverage generative AI solutions, yet they want to do this with guardrails,” added Kumar Goswami, chief executive officer of Komprise. “Data governance for AI will require the right data management strategy, which includes visibility across data storage silos, transparency into data sources, high-performance data mobility and secure data access.”
Edge-to-cloud file services provider CTERA has today taken the wraps off its Vault WORM (write once, read many) protection technology. The Vault offering provides regulatory-compliant storage for the CTERA Enterprise File Services Platform.
CTERA Vault helps enterprises preserve data in tamper-proof form while strictly enforcing compliance with data regulations.
Many industries need WORM-compliant storage, driven by regulations requiring immutable storage, such as HIPAA and FDA 21 CFR Part 11 for healthcare, NIST 800-53 for government, Sarbanes-Oxley for legal services, and the Financial Industry Regulatory Authority’s FINRA Rule 4511(c) for financial services.
WORM compliance ensures that data remains inviolable, creating a trustworthy and irrefutable business record – essential both for regulatory compliance and for cyber security best practices.
“Regulated industries require a tamper-proof, secure, reliable, and scalable storage system to safely and confidently protect their data,” said Oded Nagel, CEO of CTERA. “CTERA Vault provides robust immutable WORM compliant storage for our global file system, allowing our customers to create permanent records that can be easily managed and which prevent unauthorised changes.”
Vault is designed to provide the flexibility and granularity to create WORM Cloud Folders with customised retention modes for specific periods of time to fit organisations’ specific regulatory or compliance requirements. The CTERA Portal provides centralised control and management of the policies that are enforced at every remote CTERA Edge Filer.
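CTERA’s own API calls are not shown in this piece, but the underlying WORM retention concept resembles the object lock features of the public clouds. As a rough illustration using AWS S3 Object Lock – a comparable, publicly documented API, not CTERA’s – with hypothetical bucket, key, and dates:

```python
# Illustrative sketch of WORM retention using AWS S3 Object Lock as an
# analogue; bucket, key, and retention date are hypothetical.
from datetime import datetime, timezone
import boto3

s3 = boto3.client("s3")

# Write a record into a bucket created with Object Lock enabled,
# stamping it with a compliance-mode retention date. Until that date,
# no user (including the account root) can delete or overwrite it.
s3.put_object(
    Bucket="finra-records",
    Key="trades/2023-09-05.csv",
    Body=open("trades.csv", "rb"),
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime(2029, 9, 5, tzinfo=timezone.utc),
)
```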
Hitachi Vantara recently entered into an OEM deal with CTERA for file and data management services, with its Hitachi Content Platform storing the data. “CTERA Vault’s innovative data retention and protection technology represents a significant step toward robust data resilience,” said Dan McConnell, senior vice president of product management and enablement at Hitachi Vantara in a statement. “As customers seek enterprise-grade solutions for regulatory compliance, Vault’s immutability, combined with our Hitachi Content Platform, can become the top choice for safeguarding critical data.”
Startup Enfabrica, which is developing an Accelerated Compute Fabric switch chip to combine and bridge PCIe/CXL and Ethernet fabrics, has raised $125 million in B-round funding.
Its technology, which converges memory and network fabric architectures, is designed to bring more memory to GPUs than current HBM technology using CXL-style pooling, and to feed data into memory through a multi-terabit Ethernet-based switching scheme. This relies on the ACF-S chip, under development, and 100-800Gbps Ethernet links.
CEO and co-founder Rochan Sankar stated: “The fundamental challenge with today’s AI boom is the scaling of infrastructure… Much of the scaling problem lies in the I/O subsystems, memory movement and networking attached to GPU compute, where Enfabrica’s ACF solution shines.”
The funding round was led by Atreides Management with participation from existing investors Sutter Hill Ventures, Valor, IAG Capital Partners, Alumni Ventures, and from Nvidia as a strategic investor. Enfabrica’s valuation has risen 5x from its 2022 $50 million A-round valuation.
Enfabrica is developing an 8Tbps switching platform, enabling direct-attach of any combination of GPUs, CPUs, CXL-attached DDR5 memory, and SSD storage to high-performance, multi-port 800-Gigabit-Ethernet networks. It has around 100 engineers building chips and software, based in Mountain View, CA, Durham, NC, and Hyderabad, India.
It sees memory in terms of tiers. DDR and HBM form the fastest tier, with single-digit terabyte capacities. Tier 2 is CXL memory local to the ACF-S device, coming in at tens of terabytes. Tier 3 is networked memory accessed by RDMA send/receive, with thousands of terabytes of capacity. ACF-S-connected memory can be composed for specific workloads as needed.
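A toy sketch of that tiering model, using the article’s rough capacity figures; the placement logic is illustrative, not Enfabrica’s:

```python
# Illustrative three-tier memory model; capacities are the article's
# approximate figures, and the placement rule is an invented example.
from dataclasses import dataclass

@dataclass
class MemoryTier:
    name: str
    transport: str
    capacity_tb: float  # approximate, per the article

TIERS = [
    MemoryTier("Tier 1: HBM/DDR local to the processor", "direct attach", 5),
    MemoryTier("Tier 2: CXL DDR5 local to ACF-S", "CXL", 50),
    MemoryTier("Tier 3: networked memory", "RDMA send/receive", 5000),
]

def place(working_set_tb: float) -> MemoryTier:
    """Pick the smallest (fastest) tier that can hold a working set."""
    for tier in TIERS:
        if working_set_tb <= tier.capacity_tb:
            return tier
    return TIERS[-1]

print(place(30).name)  # -> Tier 2: CXL DDR5 local to ACF-S
```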
The company has proved that its switch chip architecture and design are correct and should accelerate data delivery to GPUs as intended. The VCs have been convinced by its progress so far, and it now has the funding to build the silicon. Sankar said: “Our Series B funding and investors are an endorsement of our team and product thesis, and further enable us to produce high-performance ACF silicon and software that drive up the efficient utilization and scaling of AI compute resources.”
Enfabrica thinks its technology is especially suited to AI/ML inference workloads, as distinct from AI/ML training. Sankar said current training systems gang together more GPUs than are needed for compute just to get sufficient memory. According to Sankar, this means the GPUs are under-utilized and their expensive processing capacity is wasted.
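A worked example makes the claim concrete; the figures below are illustrative assumptions, not Enfabrica’s numbers:

```python
# Memory-driven GPU over-provisioning, with invented example figures.
model_memory_gb = 2_000   # assumed model weights + KV-cache footprint
hbm_per_gpu_gb = 80       # e.g. a current high-end GPU's HBM capacity
gpus_for_memory = -(-model_memory_gb // hbm_per_gpu_gb)  # ceiling division
gpus_for_compute = 8      # assumed number needed for throughput alone

print(f"GPUs bought for memory: {gpus_for_memory}, needed for compute: {gpus_for_compute}")
# 25 GPUs vs 8 -> 17 GPUs' worth of expensive compute sits under-utilized
```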
The ACF-S chip and allied software should enable customers to cut their cost of GPU compute by an estimated 50 percent for LLM inferencing and 75 percent for deep learning recommendation model inferencing at the same performance point. This is on top of interconnect device savings as the Enfabrica scheme replaces NICs, SerDes, DPUs, and top-of-rack switches.
ACF switching systems with Enfabrica’s ACF-S silicon will have 100 percent standards-compliant interfaces and Enfabrica’s host networking software stack running on standard Linux kernel and userspace interfaces.
Sankar confirmed that the Enfabrica technology will be relevant to high-performance computing and little development will be needed to enter that market. Enfabrica could, in theory, support InfiniBand as well as Ethernet, but sees no need to do so.
Customers can pre-order systems by contacting Enfabrica.
Bootnote
A SerDes is a Serializer/Deserializer, an integrated circuit that converts data between parallel and serial interfaces, interconnecting parallel buses and serial network links. The serializer (transmitter) converts parallel data into a serial stream, while the deserializer (receiver) converts a serial stream back into parallel data.
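A toy Python illustration of the idea – eight parallel bits become a serial stream and are then reassembled:

```python
# Toy serializer/deserializer: a byte's eight parallel bits become a
# serial bit stream (LSB first), then are reassembled into the byte.
def serialize(byte: int) -> list[int]:
    return [(byte >> i) & 1 for i in range(8)]        # parallel -> serial

def deserialize(bits: list[int]) -> int:
    return sum(bit << i for i, bit in enumerate(bits))  # serial -> parallel

assert deserialize(serialize(0xA5)) == 0xA5
```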
Data acceleration firm SQream has completed a $45 million Series C funding round to help it win more business in the big data analytics and artificial intelligence/machine learning (AI/ML) workload markets.
According to Blocks & Files record-keeping, this round completion takes the company’s publicly declared total funding past the $111 million mark since 2012. The last raise, in 2020, totaled over $39 million. The firm was founded in Tel Aviv, Israel, in 2010.
The latest round was led by World Trade Ventures, with participation from new and current investors, including Schusterman Investments, George Kaiser Foundation (Atento), Icon Continuity Fund, Blumberg Capital, and Freddy & Helen Holdings.
SQream said the new money will be used to further expand its footprint in North America, extend its strategic partnerships, and further develop its capabilities in enterprise AI/ML.
The firm’s relational database management system (RDBMS) runs on third-party graphics processing units (GPUs), with the aim of getting the most out of the hardware – reducing the amount that has to be deployed, processing data faster, and delivering quicker analytics results to business users.
“By utilising the massive and parallel processing capabilities of GPUs, SQream’s solution allows companies to process extremely large and complex datasets faster, more cost-efficiently, with a smaller carbon footprint, using less hardware, and consuming less energy than conventional big data solutions that rely strictly on CPUs (central processing units),” the vendor said.
Ami Gal, chief executive officer of SQream, said of the cash injection: “Companies are very focused on driving analytics maturity right now, and this funding round is another step in our mission to better equip our customers with cutting-edge data analytics and processing solutions that empower them to derive meaningful insights from their vast datasets, and drive growth in ways previously thought impossible.”
He acknowledged the cash had been raised in a “tight funding climate.”
Abraham Schwartz, partner at World Trade Ventures, added: “We are pleased to contribute to SQream’s ascent towards reshaping the big data and AI landscape, and look forward to seeing the significant impact they will make in the market, especially in North America, to meet this increasing demand.”
SQream recently appointed Deborah Leff, the former global head of business analytics sales at IBM, as chief revenue officer as part of its effort to bump up US sales.
The firm’s international customers include Samsung, LiveAction, Sinch, Orange, AIS, Vodafone, and LG. Last year, data systems provider Hitachi Vantara announced a go-to-market alliance with SQream.
Edge and cloud data migration and management specialist WANdisco has reported more financial pain as it goes through what it terms its “transitional” year. After previous misreporting of sales, the company’s shares on AIM – the London Stock Exchange sub-market – were suspended at one point this March. The firm is now to be rebranded as Cirata.
For the half-year ended 30 June 2023, the company saw sales slump to just $3 million, compared to $5.8 million generated in the same period last year. Bookings in the period also crashed to $2.8 million from $7.3 million, and the adjusted EBITDA loss widened to $14.8 million from $14.1 million.
The statutory loss from operations was $18.8 million, up from $17.2 million.
“The discovery of the irregularities had a significant impact on prospective customers, strategic partners, the pipeline and the overall business,” said the firm in its financial statement this morning. “Not only did the company suffer interruption to normal commercial activities, but a review of pipeline qualification was also a necessary step in the instigation of the turnaround plan to set a realistic baseline.”
The pipeline, it reckons, has now been “appropriately cleansed and qualified”, and management are confident that what remains is “robust and of high quality.” However, “that pipeline continues to be in the early stages of a rebuild.” The company is predicting a return to growth next year.
After the March share suspension, WANdisco revealed the extent of the problem: “The board now expects that anticipated FY22 revenue could be as low as $9 million and not $24 million as previously reported.”
And in 2021, the company ran at an operating loss of almost $40 million, on sales of just $7.3 million.
Though WANdisco has never made a profit since its formation in 2005, it was valued at an estimated $1 billion before the share suspension.
As previously announced, the firm is to be rebranded as Cirata, and over the coming weeks there will be a “rolling programme of brand introduction and delivery” across the company’s operations.
The company’s share ticker on AIM is expected to be changed from “WAND” to “CRTA” by “early October 2023”.
Stephen Kelly, the former chief executive officer of accountancy software firm Sage, was brought in as CEO to turn the company around after the sales reporting fiasco, and recently helped seal a $30 million equity fundraise from shareholders as part of the company’s recovery efforts, which also included being relisted on AIM.
Kelly said this morning management had conducted a “root and branch” review of the company, and admitted that “sadly, very little from the past deserves preservation.” He added: “We are building from the ground up.”
A snapshot survey just published indicates smaller companies may well baulk at paying rapidly increasing cloud storage costs, with a number instead intending to move some of their data onto on-premises storage systems to mitigate the increases.
All the hyperscale cloud storage suppliers have announced price increases recently, and last week, IBM announced its own raft of more expensive services, taking effect from January 2024.
The research suggests that almost two-thirds (62 percent) of UK SMEs that use cloud computing services will take steps to combat their rising cost by the end of 2023.
The report, commissioned by business internet service provider Beaming, questioned the leaders of 500 UK companies employing fewer than 250 people, and found that a third (33 percent) of them planned to reduce the amount of data they stored in the cloud. In addition, a quarter (24 percent) were going to reduce the number of cloud services they used.
Although more than a quarter (27 percent) of SMEs said they initially adopted cloud to reduce computing expenditure, they now expect a 10 percent increase in the cost of cloud services this year. Several major cloud providers have already introduced double-digit price increases for services used by SMEs.
Just one in five (20 percent) of SMEs say they will absorb the extra costs, however.
In addition to reducing the amount of data stored in the cloud and cutting the number of cloud services used, 21 percent of firms say they will downgrade some subscriptions to basic services, and a further 17 percent plan to move data or applications from the cloud to on-premises systems instead.
“While the cloud has delivered many benefits to businesses, the cost of cloud has been creeping up for some time now, and that creep is starting to look unjustified to businesses dealing with wider inflationary pressures,” said Sonia Blizzard, managing director at Beaming. “Many SMEs, some of which rushed to the cloud to support remote working during the pandemic, are questioning the value of these services for the first time, and are now taking action to get on top of cost increases.”
Uber is using Alluxio’s virtual distributed file system to speed its Hadoop-based analytics processing by caching hot read data.
Alluxio has developed open source data store virtualization software, a virtual distributed file system with multi-tier caching. Analytics apps can access data stores in a single way instead of using unique access routes to each storage silo. This speeds up data silo access and includes an HDFS driver and interface, which proved useful to Uber.
The ride-hailing company stores exabytes of data across Hadoop Distributed File System (HDFS) clusters based on 4TB disk drives. It’s moving to 16TB drives and has a disk IO problem – capacity per drive has gone up but bandwidth has not, causing slow reads from data nodes.
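A back-of-envelope calculation shows why; the ~200MB/s sequential bandwidth figure below is an assumed typical HDD number, not one from Uber:

```python
# IO-density arithmetic: capacity quadruples, bandwidth stays flat.
bw_mb_s = 200  # assumed typical HDD sequential bandwidth
for cap_tb in (4, 16):
    hours = cap_tb * 1e6 / bw_mb_s / 3600
    print(f"{cap_tb} TB drive: ~{hours:.1f} h to read fully at {bw_mb_s} MB/s")
# 4 TB: ~5.6 h ; 16 TB: ~22.2 h -> the same data takes 4x longer to serve
```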
It analyzed traffic on the production cluster and found “that the majority of I/O operations usually came from the read traffic and targeted only a small portion of the data stored on each DataNode.” Statistics gathered from roughly 20 hours of read/write traces from DataNodes across multiple hot clusters showed that the majority of the traffic was read IO, and that less than 20 percent of the data blocks were read in a 16-to-20-hour window. In any one hour, the top 10,000 blocks were responsible for 90 percent or more of the read traffic.
Based on this, the IT people at Uber reckoned that caching the hot read data would speed up overall IO. They calculated that a 4TB SSD cache should be able to store 10,000 blocks, based on the average block size in hot clusters.
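A quick check of that sizing logic; the 400MB average block size below is an assumed figure chosen to make the numbers concrete, not one Uber reported:

```python
# Back-of-envelope cache sizing (decimal units).
hot_blocks = 10_000
avg_block_mb = 400  # assumed average block size in hot clusters
cache_needed_tb = hot_blocks * avg_block_mb / 1e6
print(f"cache needed: ~{cache_needed_tb:.1f} TB")  # ~4.0 TB -> fits a 4 TB SSD
```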
They used Alluxio to implement a read-only local cache for the hot data on each DataNode, alongside 525TB of disk capacity using 16TB HDDs. The SSD cache sits within the DataNode process and is entirely transparent to the HDFS NameNode and clients. It resulted in “nearly 2x faster read performance, as well as reducing the chance of process blocking on read by about one-third.”
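Conceptually, the arrangement behaves like a read-through cache in front of the slow HDD path. A minimal model – an illustration of the idea, not Uber’s implementation:

```python
# Read-through LRU block cache: serve hot reads from SSD, fall back to
# HDD on a miss, evicting the least recently used block when full.
from collections import OrderedDict

class LocalBlockCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()            # block_id -> data (on SSD)

    def read(self, block_id, read_from_disk):
        if block_id in self.cache:            # hit: fast SSD path
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = read_from_disk(block_id)       # miss: slow HDD path
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:   # evict the coldest block
            self.cache.popitem(last=False)
        return data
```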
Uber experienced a 99.5 percent cache hit ratio, meaning data blocks in the cache were very likely to be read again. Cached DataNode read throughput was found to be significantly higher – nearly twice that of non-cached DataNode read throughput.
Uber concluded: “Overall, the local caching solution has proven to be effective in improving HDFS’s efficiency and enhancing the overall performance.”
An Uber blog contains much more information about the Alluxio caching arrangements.
SaaS backup supplier Druva has passed $200 million in annual recurring revenue, with co-founder and CEO Jaspreet Singh claiming it’s “the first (and only) 100 percent SaaS data resiliency platform to reach this milestone.” [See bootnote below.]
Update: N-able and Druva subscription differences explained by Druva. 8 Sep 2023.
The company’s Data Resiliency Cloud is a cloud-native backup-as-a-service offering based on AWS’s cloud infrastructure. It provides protection for SaaS applications, public and private cloud environments, and end-user devices. Druva sells its product itself and through Dell EMC as PowerProtect and in APEX Backup Services. It announced a $10 million SaaS data protection service warranty in August last year. The company has more than 5,000 customers, growing by 30 percent in the last year.
Singh writes: “In an era where evolving cyber threats shape the global IT landscape, Druva has proven itself as a disruptive force driving industry change and a more secure future.”
He says SaaS backup is better than purchased backup software because “Druva customers do not have to think about procuring backup hardware and managing supply chain issues. They do not have to spend weekends ensuring their hundreds of appliance nodes are patched with security updates, and they don’t have to worry about scaling their infrastructure to protect new applications.”
Commvault’s Metallic SaaS data protection service had a $113 million ARR last month from almost 4,000 customers – $28,250 per customer compared to Druva’s $40,000 per customer.
Competitor HYCU reported having more than 3,000 customers at the start of 2022, and passed the 3,200 mark in June of that year. Its ARR number hasn’t been revealed.
Druva has launched an MSP program, which Singh says “has now grown by nearly 300 percent annually.”
We note that N-able, which also supplies backup-as-a-service through MSPs, recorded $106.1 million revenue in the second 2023 quarter, giving it a $424 million annual revenue run rate. Druva’s $200 million number is for its annual recurring revenue. [See below for a discussion of Druva and N-able’s subscription revenues.]
Druva intends to harness AI to “enable users to improve backup operations with automation, efficiency, and simplicity while reducing threat response and recovery time through trend analysis and orchestration.” CTO Stephen Manley writes: “We will further incorporate AI into Druva’s platform to take the next step towards autonomous backups, which will deliver the simplicity, efficiency and proactive security that will revolutionize data protection.”
The company is also intending to expand from AWS with Azure protection. Azure and AI announcements are said to be coming soon.
Druva was founded in 2008 and has raised $475 million through eight funding rounds, the latest one for $147 million in 2021. It made the shift from selling backup software licenses to a SaaS business model in 2013 and now carries out more than 6 billion backups annually, managing more than 275PB of data.
Update – Druva and N-able
We asked Druva some questions about its positioning vs N-able, and Druva CFO Mahesh Patel provided an explanation of relevant terms and phrases in Druva’s blog and release.
First and Only 100% SaaS Data Resiliency Platform
N-able’s PRIMARY business is MSP Remote Monitoring and Management, and it also offers security with EDR and DNS filtering. It does not disclose the breakdown of its revenue across these services and backup but, of note, there is no mention of storage in any financial filings, and cloud hosting costs are its second highest COGS item behind technical support personnel. From our view of SaaS data protection, customer storage/cloud hosting costs would be the highest component of COGS, but it is not for N-able. Our read is that MSP Remote Monitoring and Management is a dominant portion of its ARR, which validates our position.
SaaS versus Subscription
We believe SaaS is materially different from Subscription ARR. SaaS in our view is outcome driven, meaning we provide a turnkey solution which includes the storage as well as the automation (patching, upgrades, etc.) at a predictable price. Providing software for a customer to deploy on existing hardware is subscription, but not SaaS. In these cases, when the hardware depreciates or fails, the software gets replaced.
Please note the highlighted language from N-able’s SEC-filed 10-K below, which states that a portion of revenue is “self-managed” by its customers. While it recognizes this revenue upfront in the “subscription revenue” line item, it ratably recognizes the maintenance portion for these customers in subscription revenues. SaaS companies do not have a separate maintenance component. Also below is a picture from N-able’s website offering customers an option to deploy on existing storage. Based on the notes in the financials, a minimum of 20 percent of its revenue is “self-managed.”
N-able note
“Subscription Revenue. We primarily derive subscription revenue from the sale of subscriptions to the SaaS solutions that we host and manage on our platform. Our subscriptions provide access to the latest versions of our software platform, technical support and unspecified software upgrades and updates. Subscription revenue for our SaaS solutions is generally recognized ratably over the subscription term once the service is made available to the MSP partner or when we have the right to invoice for services performed. In addition, our subscription revenue includes sales of our self-managed solutions, which are hosted and managed by our MSP partners. Subscriptions of our self-managed solutions include term licenses, technical support and unspecified software upgrades. Revenue from the license performance obligation of our self-managed solutions is recognized at a point in time upon delivery of the access to the licenses and revenue from the performance obligation related to the technical support and unspecified software upgrades of our subscription-based license arrangements is recognized ratably over the agreement period. We generally invoice subscription agreements monthly based on usage or in advance over the subscription period on either a monthly or annual basis.”
Bootnote
A Druva spokesperson told us: “Our ‘first and only’ was specifically intended to describe our ‘100% SaaS Data Resiliency platform,’ not the $200 million ARR figure.”
Two semiconductor analysts have written an Emerging Memories report and suggest MRAM has good prospects for replacing SRAM and NOR flash in edge computing devices needing to process and analyze data in real-time.
Tom Coughlin of Coughlin Associates and Objective Analysis’ Jim Handy have produced a 272-page, 30-table analysis of the prospects for five emerging memory technologies: MRAM, Phase-Change Memory (PCM), Ferroelectric RAM (FeRAM), Resistive RAM (ReRAM) and NRAM/UltraRAM. These are typically non-volatile memories with DRAM data access speeds, and roadmaps to increased densities that promise to go beyond the scaling limits of NAND and NOR and use less electricity than constantly refreshed DRAM and SRAM.
Emerging memories report graphic.
Coughlin writes: “As the Internet of Things builds out, and as a growing number of devices make new measurements of data that was previously unmeasured, the world’s data processing needs will grow exponentially, including AI training and inference using data from these devices. This growth will not be matched with increases in communication bandwidth, despite the adoption of new wireless standards like 5G.”
That means the data will need to be processed where it is generated and where analytics results have to be applied – in IoT edge locations. Communication links between edge sites and remote datacenters will be too slow, Coughlin writes, for remote datacenters to do the work within the real-time and near-real-time limits needed.
Intel made the most determined attempt to popularise an emerging memory technology with its Optane 3D XPoint products, based on PCM technology. This failed because of the complexity of its programming needs and its cost. The cost of producing Optane chips is basically production equipment and materials cost divided by chip output numbers – and it is inherent in semiconductor fabrication that the higher the output volume, the lower the per-chip cost.
With Optane, Intel could have priced its chips as if output were much higher and sold more of them, but it would have had to book the shortfall as a loss – and that loss would have been astronomically high. This is the emerging memory production cost trap. High sales volume requires low prices, which depend on high-volume output from a costly plant. But the supplier starts out with low production volume, meaning high chip costs, and has to bear the loss until sales volume ramps up enough to justify production increases that lower per-chip costs.
It is not guaranteed that sales volumes will ramp up enough, so it is a gamble that the manufacturer can sell enough chips over time to justify production increases. This did not happen with Optane, and Intel and its manufacturing partner Micron pulled the plug and killed the product.
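A toy model makes the trap concrete; all figures below are invented for illustration:

```python
# Per-chip cost as fixed plant cost divided by output volume.
fixed_cost = 2_000_000_000  # invented fab equipment + materials cost, $/year
for chips_per_year in (10_000_000, 100_000_000, 1_000_000_000):
    print(f"{chips_per_year:>13,} chips/year -> ${fixed_cost / chips_per_year:,.2f} per chip")
# 10M chips  -> $200.00/chip  (startup volumes: the price is uncompetitive)
# 100M chips -> $20.00/chip
# 1B chips   -> $2.00/chip    (mass-market volumes: the price works)
```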
None of the five emerging memory technologies identified by Coughlin and Handy is particularly new, but none has yet crossed the chasm into mass production. The two think MRAM and its close cousin STT-RAM have the best chance of becoming widely used, with MRAM ahead of the pair.
They estimate that 133TB of MRAM was produced in 2022, generating $118 million in revenue, and this could rise to 4.56EB and revenues of $980 million in 2033. At that point DRAM and NAND revenues would be vastly higher, as a chart they use (above) indicates, with DRAM approaching $100 billion.
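Dividing the report’s revenue figures by its shipped-capacity figures gives the implied average price per gigabyte, and shows the steep cost decline the forecast assumes (decimal units):

```python
# Implied $/GB from the figures cited above.
rev_2022, cap_2022_tb = 118e6, 133        # $118M on 133 TB shipped
rev_2033, cap_2033_tb = 980e6, 4.56e6     # $980M on 4.56 EB forecast

print(f"2022: ${rev_2022 / (cap_2022_tb * 1e3):,.0f}/GB")    # ~$887/GB
print(f"2033: ${rev_2033 / (cap_2033_tb * 1e3):,.3f}/GB")    # ~$0.215/GB
```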
The Emerging Memories Branch Out report is available for $7,500 from Coughlin Associates with a substantial brochure freely available here.
Bootnote
MRAM manufacturer Everspin was profiled here. NRAM is Nantero’s Nano-RAM. UltraRAM is a charge-based memory.