
Pliops bypasses HBM limits for GPU servers

Key-value accelerator card provider Pliops has unveiled the FusIOnX stack as an end-to-end AI inference offering based on its XDP LightningAI card.

Pliops’ XDP LightningAI PCIe card and software augment the high-bandwidth memory (HBM) tier in GPU servers and accelerate vLLM on Nvidia Dynamo by 2.5x. UC Berkeley’s open source virtual large language model (vLLM) library for LLM inference and serving uses a key-value (KV) cache as short-term memory when batching user responses. Nvidia’s Dynamo framework is open source software that optimizes inference engines such as TensorRT-LLM and vLLM. The XDP LightningAI is a PCIe add-in card that functions as a memory tier for GPU servers. Powered by an ASIC and software, it caches intermediate LLM processing values on NVMe/RDMA-accessed SSDs.

Pliops slide

Pliops says GPU servers have limited amounts of HBM. Its technology is intended to deal with the situation where a model’s context window – its set of in-use tokens – grows so large that it overflows the available HBM capacity, and evicted contexts have to be recomputed. The model is memory-limited and its execution time ramps up as the context window size increases.

By storing the already-computed contexts on fast-access SSDs and retrieving them when needed, the model’s overall run time is reduced compared with recomputing the contexts. Users can get more HBM capacity by buying more GPU servers, but the cost of doing so is high; bulking out HBM with a sub-HBM storage tier is much less expensive and, we understand, almost as fast. The XDP LightningAI card with FusIOnX software provides, Pliops says, “up to 8x faster end-to-end GPU inference.”
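The tiering logic can be sketched as a two-level cache: contexts evicted from HBM are demoted to an SSD tier and promoted back on reuse, so only genuinely new contexts pay the full prefill (recompute) cost. This is an illustrative toy, not Pliops’ implementation; the class and method names are invented.

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy model of HBM-plus-SSD KV-cache tiering (names invented).

    When the fast tier (HBM) overflows, the least recently used
    context is demoted to the slow tier (SSD) instead of being
    discarded, so it can later be promoted back rather than
    recomputed from scratch.
    """

    def __init__(self, hbm_capacity: int):
        self.hbm_capacity = hbm_capacity
        self.hbm = OrderedDict()   # context_id -> KV data (fast tier)
        self.ssd = {}              # context_id -> KV data (slow tier)
        self.recomputes = 0        # times we paid the full prefill cost

    def put(self, context_id, kv):
        self.hbm[context_id] = kv
        self.hbm.move_to_end(context_id)
        while len(self.hbm) > self.hbm_capacity:
            evicted_id, evicted_kv = self.hbm.popitem(last=False)
            self.ssd[evicted_id] = evicted_kv   # demote, don't drop

    def get(self, context_id, recompute_fn):
        if context_id in self.hbm:              # HBM hit
            self.hbm.move_to_end(context_id)
            return self.hbm[context_id]
        if context_id in self.ssd:              # SSD hit: promote
            kv = self.ssd.pop(context_id)
        else:                                   # true miss: full prefill
            self.recomputes += 1
            kv = recompute_fn(context_id)
        self.put(context_id, kv)
        return kv
```

Serving six requests over three contexts, with fast-tier room for only two, triggers three computes rather than six; the other three accesses are served from one tier or the other.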

Think of FusIOnX as glue for AI workloads. Pliops provides several examples:

  • FusIOnX vLLM production stack: Pliops vLLM KV-Cache acceleration, smart routing supporting multiple GPU nodes, and upstream vLLM compatibility.
  • FusIOnX vLLM + Dynamo + SGLang BASIC: Pliops vLLM, Dynamo, KV-Cache acceleration integration, smart routing supporting multiple GPU nodes, and single or multi-node support.
  • FusIOnX KVIO: Key-Value I/O connectivity to GPUs, distributed Key-Value over network for scale – serves any GPU in a server, with support for RAG/Vector-DB applications on CPU servers coming soon.
  • FusIOnX KV Store: XDP AccelKV Key-Value store, XDP RAIDplus Self Healing, distributed Key-Value over network for scale – serves any GPU in a server, with support for RAG/Vector-DB applications on CPU servers coming soon.
Pliops slide

The card can be used to accelerate one or more GPU servers hooked up to a storage array or other stored data resource, or it can be used in a hyperconverged all-in-one mode, installed in a GPU server, providing storage using its 24 SSD slots, and accelerating inference – an LLM in a box, as Pliops describes that configuration. 

Pliops slide

Pliops’ PCIe add-in-card approach is independent of both the storage system feeding the GPUs with the model’s bulk data and the GPU supplier. The XDP LightningAI card runs in a 2RU Dell server with 24 SSD slots. Pliops says its technology accelerates the standard vLLM production stack by 2.5x in terms of requests per second:

Pliops slide

XDP LightningAI-based FusIOnX LLM and GenAI is in production now. It provides “inference acceleration via efficient and scalable KV-Cache storage, and KV-Cache Disaggregation (for Prefill/Decode node separation)” and has a “shared, super-fast Key-Value Store, ideal for storing long-term memory for LLM architectures like Google’s Titans.”

There are three more FusIOnX stacks coming. FusIOnX RAG and Vector Databases is in the proof-of-concept stage and should provide index building and retrieval acceleration.

FusIOnX GNN is in development and will store and retrieve node embeddings for large GNN (graph neural network) applications. A FusIOnX DLRM (deep learning recommendation model) is also in development and should provide a “simplified, superfast storage pipeline with access to TBs-to-PBs scale embedding entities.”

Comment

There are various AI workload acceleration products from other suppliers. GridGain’s software enables a cluster of servers to share memory and so run apps needing more memory than a single server supports. It provides a distributed memory space atop a cluster, or grid, of x86 servers with a massively parallel architecture. AI is another workload it can support.

GridGain for AI can support RAG applications, enabling the creation of relevant prompts for language models using enterprise data. It provides storage for both structured and unstructured data, with support for vector search, full-text search, and SQL-based structured data retrieval. And it integrates with open source and publicly available libraries (LangChain, Langflow) and language models. A blog post can tell you more.

Three more alternatives are Hammerspace’s Tier Zero scheme, WEKA’s Augmented Memory Grid, and VAST Data’s VUA (VAST Undivided Attention), and they all support Nvidia’s GPUDirect protocols.

Asigra improves SaaS app data restorability

Canadian backup vendor Asigra has unveiled SaaSAssure 2025, its latest data protection platform for SaaS apps, now featuring granular restore and automatic discovery capabilities.

SaaSAssure was launched in summer 2024 with pre-configured integrations to protect customer data with connectors for Salesforce, Microsoft 365, Exchange, SharePoint, Atlassian’s Jira and Confluence, Intuit’s QuickBooks Online, Box, OneDrive, HubSpot, and others. It is available to both enterprises and MSPs so that they can offer SaaS app customer data protection services. SaaSAssure is built on AWS and offers flexible storage options, including Asigra Cloud Storage and Bring Your Own Storage (BYOS). This new release is available to customers in North America, the UK, and the European Union.

Eric Simmons, Asigra

CEO Eric Simmons stated: “The international availability of SaaSAssure, including the United Kingdom and Europe, expands our support for MSPs and enterprises who need advanced SaaS backup that goes beyond Microsoft 365 or Salesforce. With expanded Exchange and HubSpot granularity, plus Autodiscovery and UI upgrades, customers gain comprehensive data protection in a way that integrates smoothly with other critical SaaS applications.”

The new features in this release include:

• Exchange Granular Restore for individual mailboxes, folders, emails, contacts, events, and attachments, as well as full backups and mailbox restores.
• HubSpot Granular Restore for specific CRM categories, object groups (e.g. contacts, companies, custom objects), and individual records with or without associated data, and full backup restoration.
• HubSpot Custom Object Restore means previously backed up custom objects are now fully restorable.
• Autodiscovery for Exchange automatically detects and adds new mailboxes – including shared, licensed, and resource types – into domain-level backups.
• Autodiscovery for SharePoint automatically includes newly created SharePoint sites in domain level backups for improved coverage.
• Domain Level SharePoint Backup simplifies multi-site backup management for SharePoint users.
• Intuitive restore interface with a redesigned UI streamlining the recovery process for IT teams and MSPs.
• Configurable email alerts for activities like backup failures to improve incident response.
• Pendo Resource Center Integration offers enhanced in-platform user guidance and support.

New SaaS app connectors are coming for ADP, BambooHR, Docusign, Entra ID, Freshdesk, Trello, and Zendesk. You can be notified about new connectors by filling in a form here.  SaaSAssure is available for immediate deployment.

Xinnor reports rapid growth as xiRAID sales climb sharply

Software RAID supplier Xinnor saw first quarter sales of its xiRAID product reach 86 percent of the company’s total revenue for all of 2024.

Israel-based Xinnor’s xiRAID provides a local block device to the system, with data distributed across drives for faster access. It has a declustered RAID feature for HDDs, which places spare zones over all drives in the array and restores the data of a failed drive to these zones, making drive rebuilds faster. The software supports NVMe, SAS, and SATA drives, and works with block devices, local or remote, using any transport – PCIe, NVMe-oF or SPDK target, Fibre Channel, or InfiniBand. Xinnor says its recent growth has been driven by a series of strategic partnerships, including a major agreement with Supermicro, and an expanded global reseller channel.
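The declustered rebuild idea can be illustrated with a toy placement model. This is not Xinnor’s actual layout algorithm – the round-robin stripe placement and spare-selection policy below are invented for illustration – but it shows the key property: chunks lost with a failed drive are rebuilt into spare zones spread across surviving drives rather than funnelled onto a single hot spare.

```python
def declustered_layout(num_drives: int, num_stripes: int, stripe_width: int):
    """Toy declustered placement: stripe i occupies `stripe_width` drives
    chosen round-robin, so every drive holds chunks of many different
    stripes as well as spare zones."""
    return [[(i + k) % num_drives for k in range(stripe_width)]
            for i in range(num_stripes)]

def rebuild_write_targets(layout, num_drives: int, failed: int):
    """After drive `failed` dies, each affected stripe's lost chunk is
    rebuilt into a spare zone on a surviving drive outside that stripe,
    so rebuild writes fan out instead of hitting one spare drive."""
    writes = {d: 0 for d in range(num_drives) if d != failed}
    spare = 0
    for members in layout:
        if failed not in members:
            continue  # stripe untouched by the failure
        while spare == failed or spare in members:
            spare = (spare + 1) % num_drives   # skip ineligible drives
        writes[spare] += 1
        spare = (spare + 1) % num_drives       # rotate for the next stripe
    return writes
```

With 8 drives, 64 stripes, and 4-wide stripes, a failure touches 32 stripes and the rebuild writes spread over several drives, each also serving rebuild reads in parallel; with a dedicated hot spare, all 32 writes would queue on one drive.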

Davide Villa, Xinnor

A statement from chief revenue officer Davide Villa said: “The momentum we’ve built in Q1 is truly exceptional. Our patented xiRAID technology is proving to be a game-changer in the data storage market. The fact that in one quarter we achieved what took us all last year to accomplish demonstrates the accelerating market recognition of our unique value proposition.

“We are extremely proud that several leading institutions around the world selected xiRAID to protect and accelerate access to critical data for innovative AI projects. The channel partner extension and the reseller agreement with Supermicro will enhance our reach, enabling more customers to experience the performance lead of xiRAID.”

New resellers include:

  • APAC: Xenon Systems in Australia, CNDfactory in South Korea, DigitalOcean in China
  • Europe: NEC Deutschland GmbH in Germany, HUB4 in Poland, 2CRSI in France, and BSI in the UK
  • Americas: Advanced HPC, SourceCode, Colfax International in the US

And recent customer wins:

  • A leading financial company deployed xiRAID across all the NVMe servers within its datacenters.
  • Two major universities in Central Europe, active in advanced AI research, implemented xiRAID in high-availability mode in two independent all-NVMe storage clusters, for over 20 PB. 
  • The Massachusetts Institute of Technology (MIT) deployed xiRAID to protect around 400 NVMe drives for a variety of use cases.

We think Xinnor is benefiting from a rise in AI workloads needing RAID-protected NVMe SSDs.

VAST Data adds vector search and deepens Google Cloud ties

VAST Data has added vector search to its database and integrated its software more deeply into Google’s cloud.

The database is part of its software stack layered on top of its DASE (Disaggregated Shared Everything) storage foundation, along with the Data Catalog, DataSpace, unstructured DataStore, and DataEngine (InsightEngine). Generative AI large language models (LLMs) manipulate and process data indirectly, using numeric representations – vector embeddings, or just vectors – that encode multiple dimensions of an item. Words in text documents are first reduced to tokens, an intermediate abstraction. These are vectorized, and a document item’s vectors are stored in a multi-dimensional space that the LLM searches as it computes the steps in generating a response to a user request. This is called semantic search.

A VAST Data blog by Product Marketing Manager Colleen Quinn says: “Vector search is no longer just a lookup tool; it’s becoming the foundation for real-time memory, context retrieval, and reasoning in AI agents.”

Vectors are stored by specialized vector database suppliers – think Pinecone, Weaviate and Zilliz – and are also being added as a data type by existing database suppliers. Quinn says that the VAST Vector Search engine “powers real-time retrieval, transactional integrity, and cross-modal governance in one platform without creating new silos.” 

In the VAST world, there is a single query engine, which can handle SQL and vector and hybrid queries. It queries VAST’s unstructured DataStore and the DataBase, where vectors are now a standard data type. Quinn says: “Vector embeddings are stored directly inside the VAST DataBase, alongside traditional metadata and full unstructured content to enable hybrid queries across modalities, without orchestration layers or external indexes.”

“This native integration enables agentic systems to retrieve memory, reason over metadata, and act – all without ETL pipelines, external indexes, or orchestration layers.”

“The system uses sorted projections, precomputed materializations, and CPU fallback paths to maintain sub-second performance – even at trillion-vector scale. And because all indexes live with the data, every compute node can access them directly, enabling real-time search across all modalities – text, images, audio, and more – without system sprawl or delay.”

“At query time, VAST compares the input vector to all stored vectors in parallel. This process uses compact, columnar data chunks to prune irrelevant blocks early and accelerate retrieval.”
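The prune-then-scan pattern can be sketched as follows, assuming (for illustration) non-negative embedding coordinates, so that a per-chunk coordinate-wise maximum gives a valid upper bound on any member’s dot-product score. This is not VAST’s code; the function names and the bound are invented for the sketch.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def chunked_max_dot(query, chunks):
    """Exact nearest-by-dot-product search with per-chunk pruning
    (illustrative of columnar-chunk pruning, not VAST's implementation).

    Assumes all coordinates are non-negative, so the coordinate-wise
    maximum over a chunk upper-bounds any member's score.
    """
    best_id, best_score = None, float("-inf")
    pruned = 0
    for chunk in chunks:
        # cheap per-chunk metadata: coordinate-wise maximum vector
        bound_vec = [max(col) for col in zip(*(v for _, v in chunk))]
        if dot(query, bound_vec) <= best_score:
            pruned += 1          # whole chunk cannot beat the current best
            continue
        for vec_id, vec in chunk:
            score = dot(query, vec)
            if score > best_score:
                best_id, best_score = vec_id, score
    return best_id, best_score, pruned
```

A chunk whose upper bound cannot beat the best score found so far is skipped without touching its vectors, which is the point of keeping cheap per-chunk metadata alongside the columnar data.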

“Future capabilities will expand beyond vector search, enabling new forms of hybrid reasoning, structured querying, and intelligent data pipelines.” Think multi-modal pipelines and intelligent data preparation.

Google Cloud

Building on its April 2024 announcement that it had ported its Data Platform software to Google’s cloud, enabling users to spin up VAST clusters there, VAST has now gone further. It says its Data Platform “is fully integrated into Google Cloud – offering a unified foundation for training, retrieval-augmented generation (RAG), inference, and analytics pipelines that span across cloud, edge, and on-premises environments.”

Renen Hallak, VAST founder and CEO, spoke of a “leap forward,” stating: “By combining the elasticity and reach of Google Cloud with the intelligence and simplicity of the VAST Data Platform, we’re giving developers and researchers the tools they need to move faster, build smarter, and scale without limits.”

The additional VAST facilities now available on GCP include:

  • InsightEngine enabling developers and researchers to run data-centric AI pipelines—such as RAG, preprocessing, and indexing—natively at the data layer.
  • DataSpace with its exabyte-scale global namespace which connects data on-premises, at the edge, and in Google Cloud as well as other hyperscalers for data access and mobility.
  • Unified file (NFS, SMB), object (S3), block, and database access.

VAST says customers can run AI, ML, and analytics initiatives without operational overhead and unify their AI training, RAG pipelines, high-throughput data processing, and unstructured data lakes on its single, high-performance platform.

The base VAST software has already been ported to AWS, with v5.2 available in the AWS Marketplace. We understand v5.3 is the latest version of VAST’s software. 

There is limited VAST availability on the Azure Marketplace, where “VAST’s virtual appliances on Azure allow customers to deploy VAST’s disaggregated storage processing from the cloud of their choice. These containers are free of charge and customers interested in deploying Universal Storage should contact VAST Data to get their capacity under management. This product is available as a Directed Availability release.”

Comment

With its all-in-one storage and AI stack, VAST Data is becoming the equivalent of a software AI system infrastructure mainframe environment, built from modular storage hardware boxes with NVMe RDMA links to x86 and GPU compute, not forgetting Arm (BlueField). Both compute and storage hardware are commodities for VAST. But the software is far from a commodity. It is VAST’s core proprietary IP, being developed and extended at a high rate, with a promise of being uniformly available across the on-premises environment and the AWS, Azure, and Google clouds. For better or worse, as far as we are aware, no other storage or data infrastructure company is working on such a broad and deep AI stack at the same pace.

DRAM and NAND: Micron and SK Hynix’s paths to production

Analysis: Two companies are highly focused on DRAM and NAND production – Micron and SK hynix. Both compete intensively in enterprise SSDs and high-bandwidth memory, but they came to their dual market focus in involved and indirect ways, via early sprawling business expansion, with mis-steps and inspired moves en route.

One was blessed by Intel and one was cursed. Micron got into bed with Intel on the ill-fated Optane technology, which crashed and burned, while SK hynix bought troubled Intel’s SSD and NAND fab business and moved fast into the high-capacity SSD market, which took off and is flying high. It also stopped Western Digital merging with Kioxia, and then pushed early into the high-bandwidth memory (HBM) business and is now soaring on the coat tails of Nvidia’s GPU memory demand.

Micron

Micron was started in 1978 in Boise, Idaho, by Ward Parkinson, Joe Parkinson, Dennis Wilson, and Doug Pitman as a semiconductor design operation. It started fabbing 64K DRAM chips in 1981 and IPO’d in 1984. A RISC CPU project came and went in the 1991-1992 period. Micron acquired the NetFrame server business in 1997. It entered the PC business but exited in 2002, and bought into the retail storage media business by acquiring Lexar in 2006.

Micron entered the flash business in 2005 via a joint venture with Intel. It bought Numonyx, which made flash chips, in 2010 for $1.27 billion. It then developed its memory business by buying Elpida Memory in 2013, giving it an Apple iPhone and iPad memory supply business, and also bought PC memory fabbers Rexchip and Inotera Memories in 2016.

However, Micron entered into what was, with hindsight, a major mis-step in 2013 by joining Intel in the Optane 3D XPoint storage-class memory business and manufacturing the phase-change memory technology chips. It was even involved in producing its own branded QuantX 3D XPoint chips – but these went nowhere.

Despite Intel pouring millions of dollars into Optane, the technology failed to take off, with production volumes never growing large enough to lower the per-chip cost and so enable profitable manufacture. Eight years later, in March 2021, Micron cancelled the collaboration and walked away, stopping Optane chip production. Intel saw the writing was on the wall and canned its Optane business in mid-2022.

Ironically, Intel sold its NAND and SSD business to SK hynix in 2021, the same year that Micron up-ended the Optane collaboration. If only Micron had been in a position to buy that business, it would now have a stronger SSD market position.

Sanjay Mehrotra became Micron’s CEO in 2017 and it was he who pushed Optane out of Micron’s door. He also sold off the Lexar business to focus on DRAM and NAND.

A look at Micron’s revenues and profits from 2016 to date shows a pronounced shortage-and-glut, peak-and-trough pattern, characteristic of the DRAM and NAND markets:

During the Optane period from 2013 to 2021, Micron diverted production capacity and funding away from DRAM and NAND to Optane and, with hindsight again, we could say it would be a larger company now, revenue-wise, if it had not done that.

SK hynix

SK hynix has a more recent history than Micron. It was founded as Hyundai Electronics Industries Co., Ltd. by Chung Ju-yung in 1983, as part of the Hyundai Group. It produced SRAM products in 1984 and DRAM in 1985. The company built a range of products including PCs, car radios, telephone switchboards, answering machines, cameras, mobile phones, and pagers – sprawling even more, in a product sense, than the early Micron.

Hyundai Electronics Industries bought disk drive maker Maxtor in 1993 and IPO’d in 1996. It bought LG Semiconductor in 1998. In 2000, amid financial difficulties caused by DRAM price drops, it restructured, spinning off subsidiaries. It rebranded its core business as Hynix Semiconductor in 2001 and was then itself spun out of the Hyundai Group.

More subsidiary divestitures followed in 2002 and 2004. The business then recovered, but not for long, as it defaulted on loans and went through a debt-for-equity swap. Its lenders put it up for sale in 2009, and Hynix partnered with HP to productize Memristor technology, but that was a bust.

To add to its troubles, Hynix was fined for price-fixing in 2010, and it was eventually acquired in 2012 by SK Telecom for $3 billion. SK Telecom rebranded it as SK hynix, with a focus on DRAM and NAND, and it has prospered ever since. SK hynix is headquartered in Icheon, South Korea.

The company was part of the Bain consortium which purchased a majority share in the financially troubled Toshiba Memory Systems NAND business in 2017. This business had a NAND fab joint venture with Western Digital and was rebranded as Kioxia. 

SK hynix then bought Intel’s NAND business for $9 billion in 2021 in a multi-year deal that completed earlier this year, incorporating it as its Solidigm division, with NAND fabs in Dalian, China. This gave it an excellent position in the high-capacity SSD market and cemented its twin focus on DRAM and NAND.

A merger between Western Digital and Kioxia was suggested as a way forward for Kioxia in 2023 but was eventually called off after SK hynix apparently blocked it. A combined Kioxia-WD business would have had a larger NAND market share than SK hynix, and a single NAND technology stack, lowering its costs.

SK hynix has two stacks – its own and Solidigm’s – and faced being relegated to number three in the market, with an 8.5 percent market share, behind Samsung and a combined Kioxia-WD, both with about 33 percent.

Currently Samsung has a leading 36.9 percent share and SK hynix + Solidigm is in second place with 22.1 percent. These two are followed in declining order by Kioxia (12.4 percent), Micron (11.7 percent), Western Digital (now Sandisk and 11.6 percent) and others.

Solidigm took an early lead in high-capacity enterprise SSDs in 2024, with a 61.44 TB QLC drive and then a 122 TB drive in late 2024. This was well timed for the rapid rise in demand for fast access to masses of data needed for generative AI processing. Micron delivered its own 61.44 TB SSD later in 2024.

SK hynix started mass-producing high-bandwidth memory (HBM) in 2024 and has become the dominant supplier to Nvidia of this type of memory, needed for GPU servers. As of 2025’s first quarter, SK hynix holds 70 percent of the HBM market, with Micron and Samsung sharing the rest in unknown proportions. Micron says its HBM capacity is sold out for 2025, while Samsung’s latest HBM chips are being qualified.

Revenue-wise, SK hynix and Micron were neck and neck during the DRAM/NAND market glut in mid-2023. But since then, SK hynix has grown its revenues faster, led by HBM, with a widening gap between the two.

It seems unlikely that, absent SK hynix mis-steps, Micron will catch up. Its possibilities for catching up in the DRAM market could include getting early into the 3D DRAM business. Any NAND market catchup seems more likely to come from an acquisition; Sandisk anyone?

The two companies, Micron and SK hynix, have both been radically affected by Intel; Micron by the loss-making and jinxed Optane technology, and SK hynix by its market share-expanding Solidigm acquisition. Intel’s CEO at that time, Pat Gelsinger, said Intel should never have been in the memory business. Because it was, it helped hang an albatross around Micron’s neck and gave SK hynix a Solidigm shove upwards in the NAND business. Icheon benefited while Boise did not.

Commvault and Deloitte team up on enterprise cyber resilience

Enterprise data protector Commvault is allying with Big Four accountancy firm Deloitte to pitch to customers trying to become more resilient against cyber threats.

Commvault has been layering cyber resilience features on top of its core data protection facilities. It recently improved its Cleanroom Recovery capabilities and has a CrowdStrike partnership to detect and respond to cyberattacks. A CrowdStrike alert to the Commvault Cloud can trigger a ThreatScan check for affected data, and restore compromised data to a known good state using backups. Deloitte has a set of Cyber Defense and Resilience services, including forensic specialists who investigate cyber-incidents and help contain and recover from them.

Alan Atkinson, Commvault

Alan Atkinson, chief partner officer at Commvault, stated: “By combining Commvault’s cyber resilience technologies with Deloitte’s deep technical knowledge in cyber detection and response, we are creating a formidable defense for our joint customers against today’s most sophisticated cyber threats.” 

The two aim to integrate Commvault’s cyber resilience services with Deloitte’s cyber defense and response capabilities to help businesses maintain operational continuity before, during, and after a cyber incident. Such services might have mitigated the impact on UK retailers like Marks & Spencer, Co-Op, and Harrods during their recent cyberattacks. Deloitte, coincidentally, is Marks & Spencer’s auditor.

Specifically, before an attack, Commvault and Deloitte will assist organizations in understanding and defining their minimum viability – the critical set of applications, assets, processes, and people required to operate their business following an attack or outage. Once defined, Commvault’s Cleanroom Recovery can assist enterprises in assessing their minimum viability state and testing their recovery plans in advance.

Then, during an attack, the two say Deloitte’s cyber risk services combined with Commvault’s AI-enabled anomaly detection capabilities help joint clients identify and mitigate potential threats before they escalate.

After an attack, during the recovery phase, Deloitte’s incident response capabilities combined with the Commvault Cloud platform, which includes resilience offerings like Cloud Rewind, Clumio Backtrack, and Cleanroom Recovery, help customers “quickly recover, minimize downtime, and operate in a state of continuous business.” 

David Nowak, Deloitte

David Nowak, Principal, Deloitte & Touche LLP, said: “Together, we are offering a strategic and broad solution that not only helps our clients fortify their defenses but also helps with recovering from outages and cyberattacks.” 

Commvault competitors Cohesity, Rubrik, and Veeam partner with the main IT services firms, such as Deloitte, EY, KPMG, and PwC, on a tactical basis but don’t have strategic alliances with them.

Find out more about Commvault and Deloitte at a Commvault microsite.

Bootnote

Commvault’s Azure infrastructure was breached by a suspected nation-state actor at the end of April, but its customer backup data was not accessed. A Commvault Command Center flaw, CVE-2025-34028, is being fixed.

Cerabyte gains backing from Western Digital

Western Digital has invested in archival ceramic tablet technology developer Cerabyte.

This follows strategic investments from Pure Storage and In-Q-Tel. Cerabyte’s technology involves a femtosecond laser burning nanodot holes in the ceramic coating of glass tablets. The holes form part of QR-type data patterns, provide 1 GB capacity per tablet surface, and are read using scanning microscopes. Tablets are stored offline in shelves with a robot system transporting them to and from writing and reading stations. Their contents can last in immutable form for hundreds if not thousands of years and require no energy while offline. They are denser than tape cartridges and provide faster data access.

Shantnu Sharma, Western Digital

Shantnu Sharma, Western Digital’s Chief Strategy and Corporate Development Officer, stated: “We are looking forward to working with Cerabyte to formulate a technology partnership for the commercialization of this technology. Our investment in Cerabyte aligns with our priority of extending the reach of our products further into long-term data storage use cases.”

Western Digital has spun off its NAND fab and SSD unit, SanDisk, and is now a pure play disk drive manufacturer. 

Cerabyte was founded in Germany and opened a Silicon Valley office and another in Boulder, Colorado, in summer 2024 with Steffen Hellmold recruited as a director to help with product commercialization. Hellmold used to work for Western Digital as a corporate strategy VP from 2016 to 2021, and was involved with DNA storage technology, joining Twist Bioscience after WD.

Steffen Hellmold, Cerabyte

Hellmold has said: “It was previously thought only DNA storage could develop to store exabytes per rack. But Cerabyte can scale there as well.” Its tech is more practical and closer to commercialization than DNA storage.

According to Cerabyte, its ceramic data storage has the potential to enable new use cases with a better total cost of ownership (TCO) than current cold-tier solutions, such as tape. It says long-term permanent data storage must be affordable, sustainable, and resilient to bit rot, and must not require periodic maintenance, environmental control, or additional energy to reliably retain the stored data.

CEO and co-founder Christian Pflaum said: “Our ceramic data storage offers a vital, complementary long-term data storage layer that ensures rapid data retrieval – often within seconds – unlocking new revenue streams. We are excited to be working with Western Digital to define a technology partnership, fueling our ability to deliver accessible permanent storage solutions at scale.”

Christian Pflaum, Cerabyte

The company developed a demonstration prototype system in 2023 and will now be working with manufacturing partners on building a commercial product, meaning a library system with reading and writing drives, robotics, tablet-storing shelves, tablet load and removal functionality, software to manage and control it, and a ceramic tablet manufacturing capability. 

A company like Quantum or Spectra Logic could help develop a ceramic tablet library system, using its existing tape technology as a starting point. Western Digital, with its experience manufacturing disk platters, which can have glass substrates, could help in the tablet production area, and, like Pure Storage, it has a channel through which to deliver technology products.

NEO expands 3D X-DRAM tech for denser, faster memory

NEO Semiconductor has expanded its 3D X-DRAM concept with new variants aimed at improving retention time, density, and power efficiency. By modifying its original one-transistor, zero-capacitor (1T0C) design to include either a capacitor (1T1C) or additional transistors (3T0C), the company is pushing toward denser, faster, and more energy-efficient DRAM structures that align with 2D DRAM scaling roadmaps.

A recently published NEO white paper details the architecture and design choices behind its updated memory cells. The original 1T0C design uses floating body cell technology while the 1T1C development uses an Indium Gallium Zinc Oxide (IGZO) transistor channel. An N+ polysilicon capacitor plate facilitates effective electron storage in the IGZO layer. 

The overall 1T1C cell structure looks like this: 

The left-hand diagram shows a top word line in place while it has been removed in the right-hand diagram to show the underlying components. NEO says: “The IGZO layer is coupled to a metal word line layer, which acts as the transistor gate. The drain of the IGZO layer connects to a vertical bit line made of materials. A thin high-k dielectric layer, serving as a capacitor, is formed along the cylindrical sidewall on the source side of the transistor between the IGZO channel and a capacitor plate layer. The capacitor plate, composed of conductors such as N+ polysilicon, is biased at VDD to facilitate effective electron storage in the N-type IGZO layer.”

This configuration can achieve a retention time of more than 450 seconds, meaning cell refreshes are needed less often, and supports stacking of up to 128 layers.

By adding 5nm-thick spacer components between the vertical bit lines and the word line layers, NEO reduces parasitic bit line capacitance so that more than 512 layers can be stacked, increasing device capacity.

There are other 1T1C variants, such as one with a conductor plate, and another that eliminates the insulator between the vertical bit line and the IGZO layer. The 3T0C design looks like this:

It incorporates two IGZO layers for enhanced performance. This design relies on current sensing and is “particularly well-suited for in-memory computing and artificial intelligence (AI) applications, where high-speed data processing and efficient power management are crucial.”

NEO says its 3D X-DRAM array architecture is like a 3D NAND array with layer access stairs at the sides: “The array is segmented into multiple sectors by vertical slits. The multiple word line layers within each sector are connected to decoder circuits through staircase structures located along both edges of the array.”

NEO asserts that its “3D X-DRAM cell can be fabricated using a 3D NAND-like process, requiring only modifications to accommodate IGZO and capacitor formation.” The white paper goes into the process steps in some detail.

It also compares the density of 2D (planar) DRAM with 3D X-DRAM, saying: “According to public estimation, 2D DRAM at the 0a node can reach a density of 48 Gb. In contrast, 3D X-DRAM 1T1C cells can reach densities ranging from 64 Gb to 512 Gb, corresponding to 64 to 512 layers and a more than 10x increase.”

Were 3D X-DRAM to be used for high-bandwidth memory (HBM), it would have greater bandwidth due to a wider bus width. HBM3e supports a 1K bit bus width with HBM4 projected to attain 2K bits by 2026. In contrast, “3D X-DRAM’s unique array structure eliminates the need for TSV and enables hybrid bonding technology, which can scale bus width beyond 4K bits to 32K bits, increasing bandwidth by up to 16X while significantly reducing power consumption and heat generation – making it a game-changer for AI applications.”

A proof-of-concept 1T0C test chip is being developed. Proof-of-concept test chips for the 1T1C design are currently in the planning stage, with availability expected in 2026. 

Bootnote

Samsung is exploring VS-DRAM (vertically stacked DRAM), which we understand to be a 1T1C structure.

Newcomer Arcfra lands spot in Gartner’s HCI vendor lineup

Singapore-based startup Arcfra has entered the hyperconverged infrastructure (HCI) market and been named in Gartner’s latest Market Guide for Full-Stack HCI Software less than a year after launch.

Wenhao Xu, Arcfra

CEO Wenhao Xu founded Arcfra in May 2024 and the company has since expanded operations into South Korea and other parts of the Asia-Pacific region. He was previously co-founder and board chair of Beijing-based Chinese HCI company SmartX from 2013 to 2024, and a software engineer at Oracle-acquired Nimbula in Mountain View from 2011 to 2012. He was a summer intern at VMware during his studies.

Wenhao Xu stated: “We are excited to be recognized by Gartner in the 2025 Market Guide for Full-Stack HCI Software. We believe this recognition reflects our vision of delivering agile, cost-effective, and future-ready infrastructure to enterprises navigating the post-VMware era.”

The company provides full infrastructure support for compute, storage (block and file), networking, security, and disaster recovery with a single resource pool. The company is featured in Gartner’s updated Market Guide for Full-Stack Hyperconverged Infrastructure Software report, which was published last month. 

The Arcfra Enterprise Cloud Platform (AECP) supports both virtualized and containerized applications, and provides security and disaster recovery features. The management layer includes API access along with observability and automation functions. The software is presented as a set of cloud service modules.

AECP System diagram

AECP requires a minimum of three nodes as a starting point. It is available through either a subscription or a perpetual license model, and there are four offerings: AECP Essential, AECP Standard, AECP Advanced, and AECP VDI Essential. There are two supported hypervisors: VMware vSphere and AVE (Arcfra Virtualization Engine), which is built on KVM technology.

AECP Essential supports 80 TB of raw capacity per node; the other editions support up to 256 TB. Both NFS and iSCSI protocols are available, with the AECP VDI Essential offering limited to NFS. The file system is built directly on bare devices, which Arcfra says suits high-performance block storage by avoiding the overhead of a conventional Linux file system.

Arcfra storage diagram

Both all-flash and hybrid SSD-plus-disk storage are supported, with cold data automatically tiered to disk. Data is moved between cluster nodes using RDMA. With a cluster boost mode setting, the “vhost protocol shares memory between Guest OS, QEMU, and ABS to optimize I/O request processing and data transfer, improving VM performance and reducing I/O latency.” ABS stands for Arcfra Block Storage.

All four AECP versions support erasure coding and encryption at rest. Drive-level data block checksums protect against silent data corruption. System-level protection is available at the node, rack, and site level. The system provides high availability and stretched active:active clustering, plus a storage snapshot function and automatic recovery from component or node failure.
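Arcfra does not document which erasure-coding scheme it uses, but the principle behind this kind of node-level protection can be sketched with the simplest possible case, single XOR parity, which lets a stripe survive the loss of any one data block:

```python
# Minimal single-parity erasure-coding sketch (illustrative only;
# Arcfra's actual scheme is not documented here). XOR-ing equal-size
# data blocks yields a parity block that can rebuild any one of them.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"node-A..", b"node-B..", b"node-C.."]  # data blocks on 3 nodes
parity = xor_blocks(data)                       # stored on a fourth node

# Lose node B, then rebuild its block from the survivors plus parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Production systems use stronger codes (e.g. Reed-Solomon) that tolerate multiple simultaneous failures, but the space saving over full replication works the same way.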

The specifications are listed here. Download an AECP product brief here.

Arcfra claims a 50 percent reduction in total cost of ownership (TCO) compared with VMware, and it is positioning itself as a VMware migration target with cost-efficiency as a focal point.

Gartner’s report lists eight representative HCI product suppliers: Anchao Cloud Software, Arcfra, Broadcom (VMware), Microsoft, Nutanix, Oxide, Sangfor, and SoftIron. It does not include Scale Computing, SmartX, or StorMagic, but then it is a representative rather than an exhaustive list.

Anchao, Arcfra, Oxide, Sangfor, and SoftIron are classed as regional suppliers, with the others being global.

Bootnote

We are told Wenhao founded Arcfra for three reasons:

  • Using a simpler, more streamlined product to address enterprise cloud infrastructure challenges in the AI era, especially building enterprise AI infrastructure.
  • Capturing market trends such as VMware alternatives, cloud repatriation, and VM-container convergence.
  • Providing products and services for worldwide customers. With its current installed base in Asia, Arcfra plans to set up regional headquarters in Europe and the USA in H2 2025.

Arcfra is funded by Vertex Ventures and other major investors.

N-able posts Q1 loss as revenue growth slows

Data protection vendor N-able reported a loss and lower growth in its first 2025 quarter.

John Pagliuca

The company supplies data protection and security software to more than 25,000 managed service providers (MSPs), which in turn supply services to small and mid-market businesses. It also sells to distributors, SIs, and VARs. Revenues in the first 2025 quarter were $118.2 million, up 3.9 percent year-on-year and above its guidance range, with a GAAP loss of $7.2 million. Subscription revenues grew 4.8 percent to $116.8 million and gross margin was 76.6 percent.

N-able president and CEO John Pagliuca stated: “Our earnings reflect continued progress advancing cyber-resiliency for businesses worldwide. The launch of new security capabilities, strong addition of channel partners in our Partner Program, and our largest new bookings deal ever showcase that N-able is innovating and growing. We look forward to building on this progress throughout the year.”

The loss was due to increased cost of revenue, operating expenses, and acquisition costs.

Pagliuca said in the earnings call that N-able had signed “our largest new bookings deal ever.”

A revenue history chart shows N-able’s growth rate has been declining:

N-able’s net retention rate (NRR) is 101 percent, indicating modest revenue growth from existing customers and low churn. It was 103 percent in 2024, 110 percent in 2023, and 108 percent in 2022. NRR measures the recurring revenue retained from existing customers over a year: at 100 percent, revenue lost to churn and downgrades is exactly offset by upsells and cross-sells to the customers that remain, while below 100 percent, revenue from existing customers is shrinking. N-able is above that threshold, but not by much.
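As a sketch of how NRR is computed (the figures below are hypothetical, not N-able’s actuals):

```python
# Net retention rate: recurring revenue from the customers you had a
# year ago, divided by what those same customers paid then. New-logo
# revenue is excluded. All figures below are hypothetical.

def nrr(start_arr, expansion, churn):
    """NRR as a percentage, from starting ARR, expansion, and churn."""
    return (start_arr + expansion - churn) / start_arr * 100

# A 101% NRR means expansion only just outruns churn:
print(f"{nrr(100_000, 6_000, 5_000):.0f}%")  # 101%
```

At 101 percent, a supplier’s installed base grows its revenue by roughly 1 percent a year on its own; meaningful growth has to come from new customers.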

The issue is that N-able’s quarterly revenue growth rate has been slowing, with seven declining quarters since a high point in 2023’s second quarter, as a second chart indicates:

Delivering cyber-resiliency services through MSPs is akin to a franchise model. N-able’s revenues should increase as existing MSP partners bring on new clients and grow revenue from existing ones, and as it recruits new MSP partners. As the chart above shows, N-able’s franchisees are growing its revenues at a declining rate.

The cybersecurity-focused acquisition of Adlumin last November was intended to scale its security portfolio, giving MSP partners new services to sell to their clients. The Adlumin deal also brings in distributors, VARs, and SIs, giving N-able access to more channel partners and cross-selling scope.

Pagliuca expects the NRR percentage to grow, saying: “The improvements we’re looking to drive will be driven mostly through the cross-sell opportunity that we have within the customer base.”

N-able’s second quarter outlook is $126 million ± $500,000, a 5.5 percent year-on-year increase at the midpoint. Its full 2025 outlook is $494.5 million ± $2.5 million, a 6 percent year-on-year increase at the midpoint, which suggests that it sees its revenue growth rate staying above 5 percent in the second, third, and fourth quarters.
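Working backwards from the guidance midpoints and growth rates gives the implied year-ago figures (our derivation from the numbers above, not disclosed values):

```python
# Implied year-ago revenue from a guidance midpoint and its stated
# year-on-year growth rate. Figures are the guidance numbers quoted
# above; the back-calculation is ours.

def implied_prior(midpoint_m, growth_pct):
    """Prior-year revenue implied by a midpoint and YoY growth rate."""
    return midpoint_m / (1 + growth_pct / 100)

print(f"Q2 FY24 implied: ${implied_prior(126.0, 5.5):.1f}M")  # ~$119.4M
print(f"FY24 implied:    ${implied_prior(494.5, 6.0):.1f}M")  # ~$466.5M
```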

Analyst Jason Ader said: “While the company is going through multiple simultaneous transitions this year (channel expansion, investments in security products, and cross-sell/platformization motions), which are pressuring profitability and creating some execution risk, we like management’s aggressive posture and believe it holds the promise of a larger TAM, faster growth, and greater operating leverage in the future.”

N-able has instituted a $75 million share buyback program.

Storage news ticker – May 8

Data protector Arctera has updated its InfoScale cyber resilience product, saying it features:

  • Real-time, application-aware resilience: Arctera InfoScale spans both data and applications, enabling real-time recovery and proactive resilience management to minimize downtime.
  • Cyber-ready operational defense: With built-in immutable snapshots and zero-trust principles, InfoScale ensures tamper-proof data recovery, protecting against ransomware attacks and emerging threats like AI-related downtime.
  • Proactive recovery: By integrating continuous system monitoring and automated, application-aware response actions, Arctera InfoScale empowers IT teams to shift from reactive to proactive disaster recovery, all while maintaining business continuity.

ADP (Assured Data Protection) is partnering with Nutanix to deliver what they call “a first-of-its-kind backup and DR solution,” Nutanix Disaster Recovery as-a-Service. It requires no investment in facilities or hardware, operationalizes Nutanix disaster recovery, and serves customers in more than 70 countries. The managed service can be operational within hours.

ADP has set up an Innovation Team, a strategic initiative aimed at expanding the company’s DR, backup, and cyber resiliency services with the addition of new technologies that complement current data protection services.

Cloud storage supplier Backblaze reported revenues of $34.6 million in the first 2025 quarter, up 15 percent year-over-year, with a loss of $9.3 million compared to an $11.1 million loss a year ago. B2 cloud storage revenues were $18 million, up 23 percent, while Computer Backup revenues were $16.6 million, up 8 percent year-over-year but down from the prior quarter’s $16.7 million. Analyst Jason Ader said Backblaze closed its largest deal in the quarter, a multi-year, multimillion-dollar contract with an application customer. It’s predicting Q2 revenues of $35.2 million to $35.6 million.

Backblaze said: “A false and misleading short-and-distort report recently raised claims about our financial statements. An independent review confirmed there was no wrongdoing and no issues with our financial statements. For further information, please listen to our earnings call listed below and see our blog entitled ‘Setting the Record Straight’ here.”

Databricks has appointed two EMEA execs: Nico Gaviola as VP, Emerging Enterprise and Digital Natives, and Daniel Holz as VP CEMEA. Gaviola brings over a decade of leadership experience from Google Cloud, where he was Director of Data and AI, South EMEA. At Databricks, he will help emerging enterprises and digital native businesses such as Flo Health, Kraken, and Skyscanner seamlessly adopt the Data Intelligence Platform. Holz joins from Oracle, where he was SVP of North East Europe, responsible for leading the cloud technology division. He has also held leadership positions at Google Cloud and SAP.

Gartner has produced a Market Guide for Hybrid Cloud Storage. Its recommendations are:

  • Take advantage of hybrid cloud storage capabilities by identifying workloads, datatypes and use cases that will benefit from integration with the public cloud.
  • Build a business case for hybrid cloud storage beyond just the price per terabyte by valuing the end-to-end hybrid workflow and standardization enabled by the solutions.
  • Prioritize hybrid cloud storage solutions that enable cloud-native data access capability to best support applications within the public cloud.
  • Choose a hybrid cloud storage provider by its ability to deliver additional services, such as metadata insights, cyberstorage, global access, life cycle management, multi-cloud support, performance acceleration, and data analytics and mobility.
  • Build a comprehensive hybrid cloud data services catalog to define and maintain global hybrid cloud storage services and to ensure standardization and end-user transparency.

You can download a copy courtesy of Nasuni here.

Hazelcast, which supplies combined distributed compute, in-memory data storage, stream processing and integration for enterprise AI applications, is working with IBM to bring its data caching, data integration, and distributed computing capabilities to LinuxONE and Linux on the Z mainframe. Learn more here.

3D DRAM developer NEO Semiconductor has produced what it says are industry-first 1T1C and 3T0C-based 3D X-DRAM cells, whose designs combine the performance of DRAM with the manufacturability of NAND and density up to 512 Gb – a 10x improvement over conventional DRAM. They use IGZO channel technology, and manufacturing will use a modified 3D NAND process with minimal changes, enabling full scalability and rapid integration into existing DRAM manufacturing lines. TCAD (Technology Computer-Aided Design) simulations confirm 10-nanosecond read/write speeds and retention times of more than 450 seconds, dramatically reducing refresh power. NEO says it employs unique array architectures for hybrid bonding to significantly enhance memory bandwidth while reducing power consumption. Proof-of-concept test chips are expected in 2026.

NEO 1T1C 3D DRAM cell

NEO Semiconductor’s technology platform now includes three 3D X-DRAM variants:

  • 1T1C (one transistor, one capacitor) – The core design for high-density DRAM, fully compatible with mainstream DRAM and HBM roadmaps.
  • 3T0C (three transistor, zero capacitor) – Optimized for current-sensing operations, ideal for AI and in-memory computing.
  • 1T0C (one transistor, zero capacitor) – A floating-body cell structure suitable for high-density DRAM, in-memory computing, hybrid memory and logic architectures.

Graph database and analytics player Neo4j announced Aura Graph Analytics, a serverless offering that it says delivers graph analytics to users of all skill levels, achieving twice the insight precision and quality of traditional analytics. Neo4j Aura Graph Analytics is generally available now on a pay-as-you-use basis and works with databases such as Oracle and Microsoft SQL Server, and with cloud data warehouses and data lake platforms such as Databricks, Snowflake, Google BigQuery, and Microsoft OneLake, on any cloud. It removes the need for custom queries, ETL pipelines, or specialized graph expertise.

Neo4j Graph Analytics for Snowflake, a native integration, will be generally available in Q3. Visit the website and blog for more details.

NetApp has recruited two US sales execs. Jim Gannon joins as VP of Strategic Sales, bringing experience from Sysdig, Pure Storage, and VMware, with a track record of scaling high-performing global teams. Darrin Hands returns to NetApp as VP of Corporate, Midmarket and SMB Sales, after leading commercial sales at Pure Storage and previously spending four years at NetApp.

Other World Computing (OWC) has launched the My OWC iOS app, “an all-in-one mobile companion that makes setup, troubleshooting, and staying updated effortless. From real-time firmware update alerts to instant access to support to tailored how-to guides, the app turns every OWC device into a smarter, more connected experience.” The My OWC app is available now as a free download from the Apple App Store here.

Panmnesia showcased its high fan-out CXL 3.x Switch offering at CXL DevCon 2025. This is designed for next-generation AI infrastructure and high-performance computing (HPC) systems – including retrieval-augmented generation (RAG), large language models (LLMs), and scientific simulations. The demo featured a CXL 3.x Composable Server consisting of multiple CXL-enabled server nodes interconnected via Panmnesia’s CXL 3.x Switch. Each node featured disaggregated CPU, GPU, and memory resources powered by Panmnesia’s CXL IP. This composable architecture enables dynamic system configuration based on workload demands.

Panmnesia has launched a $30 million project focused on developing next-generation AI infrastructure products. It’s going to develop chiplet-based modular AI accelerators that integrate next-generation memory functions, including in-memory processing. The new AI accelerators can be used to accelerate the execution of large-scale AI services such as RAG and recommendation systems. The new products will optimize overall cost, enhance resource utilization, and reduce power consumption in AI infrastructure, while delivering high performance. 

Teradata has announced a data integration with ServiceNow’s Workflow Data Fabric that it says will fuel AI agents, autonomous workflows, and analytics at scale. The integration lets joint customers use AI agents to access enterprise-wide data in real time. Teradata will enable data access through a Zero Copy connector within the ServiceNow Workflow Data Network, meaning Teradata’s hybrid, multi-cloud analytics and data platform for Trusted AI is now part of that ecosystem of more than a hundred enterprise data partners.

Western Digital has appointed Kris Sennesael as CFO. He most recently served as CFO at Skyworks Solutions.

PeerGFS adds simultaneous multi-protocol file access

The latest release of PeerGFS allows the same file to be accessed through the SMB and NFS protocols at the same time.

PeerGFS (GFS stands for Global File Service) provides real-time, active-active replication of file volumes between datacenters, the public cloud, and edge locations. It has a multi-master approach based on the idea that a distributed organization’s files should be treated as dynamic entities without a fixed location, with the source of truth constantly updated and distributed as needed. The Peer software already supports both SMB and NFS file protocols when used to access separate files. Now it can provide SMB and NFS access to a file volume at the same time across multiple storage systems and geographic distances.

Jimmy Tam, PeerGFS

PeerGFS CEO Jimmy Tam stated: “Multi-protocol support breaks down barriers that have long forced IT teams to create redundant copies of data for different applications or environments. Now, whether you’re ingesting data via SMB at the edge or analyzing it with AI engines using NFS-based storage in the core or cloud, PeerGFS ensures a single, synchronized and accessible dataset.”

This latest v6.20 version of PeerGFS provides support for Amazon FSxN, Dell PowerScale, NetApp ONTAP, and Nutanix Files, and adds Linux file server support for kernel 5.9 and above to its existing Windows server and multi-supplier file storage array support. There is also improved data management and storage optimization for edge locations plus PostgreSQL support.

The release adds signature checking to its Malicious Event Detection (MED) feature, as well as the ability to update parts of the MED configuration while jobs are running.

Peer says the simultaneous multi-protocol support can be “particularly impactful for AI workflows, where data is often ingested at the edge using SMB and processed centrally using Linux-based tools that typically require NFS.”

PeerGFS graphic

Another example use case is a medical organization that ingests patient MRI scans at local hospitals on SMB-based storage and automatically synchronizes that data to centralized NFS-based storage for analysis with AI. There is now no need for redundant volumes or manual data transfers. Peer claims this can help reduce wait times for diagnosis. Download the PeerGFS datasheet here.