
HPE aims at VMware refugees with Morpheus upgrades

HPE is developing its Morpheus portfolio to lure VMware customers, adding efficiency, zero data loss, and ransomware resilience guarantees to its flagship Alletra MP B10000 scale-out block access array, and launching new entry-level StoreOnce backup appliances.

The company acquired multi-cloud management platform supplier Morpheus Data, which supplied the software used by HPE’s GreenLake subscription offerings, in August last year. It combined Morpheus features with its in-house KVM-based virtualization offering to create VM Essentials, looking to appeal to VMware customers dissatisfied with Broadcom’s stewardship. VM Essentials can run standalone or on HPE’s own systems, and manages both HPE VMs and traditional VMware VMs. Now HPE is announcing Morpheus Enterprise Software and integrating VM Essentials into its HPE Private Cloud Business Edition. Both software offerings include HPE’s HVM hypervisor and are licensed per socket to reduce TCO.

Fidelma Russo, HPE

Fidelma Russo, HPE EVP, CTO, and GM of its Hybrid Cloud business, stated: “Enterprises are at a pivotal moment in IT modernization where they must address escalating management complexity and increasing virtualization costs to free investments for core growth areas. We are the leader in disaggregated infrastructure and our private cloud combines that leadership with new software for unified virtualization and cloud management. HPE is giving customers the choice, simplicity and cost efficiencies to outpace the competition and reinvest in innovation.”

Morpheus Enterprise enables a customer’s IT department to become an internal IT services supplier. It has an interface that can be accessed via a GUI, API, Infrastructure-as-Code, or ITSM plug-ins, and can manage both HPE-native KVM and Kubernetes runtimes alongside other applications and public cloud infrastructure.

The product is hypervisor, hardware, and cloud-agnostic, and integrates with surrounding toolsets like ServiceNow, DNS, backup providers, and task orchestration tools to manage application dependencies end-to-end. HPE claims it accelerates provisioning by up to 150x, cuts cloud costs by up to 30 percent, and reduces risk through granular, role-based access controls.

A combination of VM Essentials and Aruba Networking CX 10000 is claimed to lower TCO by up to 48 percent, increase performance by up to 10x, and provide microsegmentation, DPU (data processing unit) acceleration, and enhanced security.

Morpheus VM Essentials customers can upgrade to Morpheus Enterprise. Morpheus VM Essentials and Morpheus Enterprise software can run on specified Dell PowerEdge servers and NetApp AFF arrays. VM Essentials also provides simple, granular storage management for the Alletra Storage MP B10000. Commvault will be the first VM Essentials ecosystem partner to support image-based VM backup and recovery with an upcoming release in May.

The integration of Morpheus VM Essentials into Private Cloud Business Edition delivers:

  • Cost efficiency: a socket-based pricing model, coupled with independent compute and storage scaling, significantly lowers total cost of ownership (TCO) compared with core-based licensing and fixed hardware approaches. It can reduce VM license costs by up to 90 percent.
  • Unified management across HPE’s KVM-based hypervisor and VMware-based virtualization environments, meaning businesses can land workloads on the right platform.
  • Operational simplicity: AI and automation streamline setup and lifecycle operations, eliminate routine tasks, and accelerate VM provisioning.
  • Lower TCO: up to 2.5x lower in datacenter and departmental deployments, while SimpliVity-powered solutions deliver up to 45 percent lower cost at the edge.
  • Unified management across edge, core, and cloud environments.
  • Availability in hyperconverged (HCI) or disaggregated (dHCI) form with external storage.
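The per-socket versus per-core arithmetic behind claims like these can be sketched with invented numbers (the prices and core counts below are hypothetical illustrations, not HPE's or any vendor's actual list prices):

```python
# Hypothetical per-socket vs per-core licensing comparison.
# All prices and core counts are invented for illustration only.

def per_core_cost(sockets: int, cores_per_socket: int,
                  price_per_core: float) -> float:
    # Core-based licensing scales with the total core count.
    return sockets * cores_per_socket * price_per_core

def per_socket_cost(sockets: int, price_per_socket: float) -> float:
    # Socket-based licensing is flat per CPU socket, however many cores.
    return sockets * price_per_socket

# A two-socket server with 64-core CPUs:
core_based = per_core_cost(sockets=2, cores_per_socket=64, price_per_core=100.0)
socket_based = per_socket_cost(sockets=2, price_per_socket=1500.0)
print(core_based, socket_based)                        # 12800.0 3000.0
print(f"saving: {1 - socket_based / core_based:.0%}")  # saving: 77%
```

The higher the core count per CPU, the wider the gap, which is the mechanism behind per-socket licensing savings claims.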

HPE Advisory and Professional Services now offers Virtualization Modernization services with cost analytics, migration tooling, orchestration blueprints and DevOps pipeline integration.

Alletra guarantees

The Alletra MP (multiprotocol) B10000 is a disaggregated, block-access, all-flash array built from controller and storage nodes, with a 100 percent data availability feature. It shares the top spot in the Alletra range with the object storage MP X10000; these 10000 offerings sit above the Alletra 9000, 6000, and 5000 systems.

HPE Alletra MP B10000

The MP B10000 cyber resilience guarantee says that customers will have access to an HPE services expert within 30 minutes of reporting an outage resulting from a ransomware incident. It also assures them that all immutable snapshots created on the B10000 remain accessible for the specified retention period. Compensation will be offered if these commitments cannot be kept.

The energy efficiency guarantee says the B10000 power usage will not exceed an agreed maximum target each month. If the energy usage limit is exceeded, you will receive a credit voucher to offset the additional energy costs. HPE says that unlike competitive energy efficiency SLAs, the B10000 guarantee is applicable whether you purchase your B10000 system through a traditional upfront payment or the HPE GreenLake Flex pay-per-use consumption model. (To qualify for this guarantee, an active HPE Tech Care Service or HPE Complete Care Service contract is required.)

The B10000 is a highly available system. The zero data loss and downtime guarantee specifies that if your application loses access to data during a failover, HPE will provide credit that can be redeemed upon making a future investment in B10000. 

These new guarantees join existing ones available to B10000 customers, including 100 percent data availability, StoreMore data efficiency for at least 4:1 cost savings, and a free, non-disruptive controller refresh for 30 percent lower TCO.

StoreOnce

HPE is introducing StoreOnce 3720 and 3760 appliances intended for use in small and medium businesses (SMB) and remote office/branch office (ROBO) locations. StoreOnce appliances can achieve a claimed 20:1 dedupe ratio and employ multi-factor authentication, encryption and immutability to help combat ransomware. They compete with similar deduping backup target appliances from Dell (PowerProtect), ExaGrid, Quantum (DXI), and Veritas Flex appliances from Cohesity. 
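How a deduplicating target reaches ratios like 20:1 can be sketched in a few lines: split the backup stream into chunks, fingerprint each chunk, and store each unique chunk only once. The sketch below uses fixed-size chunks and SHA-256 for simplicity; real appliances such as StoreOnce use variable-size, content-defined chunking:

```python
import hashlib

def dedupe_ratio(stream: bytes, chunk_size: int = 4096) -> float:
    """Logical bytes ingested divided by physical bytes stored after
    chunk-level dedupe (fixed-size chunks, SHA-256 fingerprints)."""
    store = {}   # fingerprint -> chunk; each unique chunk is kept once
    logical = 0
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        logical += len(chunk)
        store[hashlib.sha256(chunk).hexdigest()] = chunk
    physical = sum(len(c) for c in store.values())
    return logical / physical

# Four distinct 4 KiB chunks make one 16 KiB "backup"; ten identical
# backup runs then dedupe at exactly 10:1.
backup = b"".join(bytes([i]) * 4096 for i in range(4))
print(dedupe_ratio(backup))       # 1.0 -- the first backup is all-unique
print(dedupe_ratio(backup * 10))  # 10.0
```

Repeated full backups of mostly unchanged data are why dedupe ratios in the tens are achievable for backup targets while primary storage typically sees far less.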

According to HPE, the 3720 and 3760 scale from 18 TB to 216 TB of usable local capacity, extendable to 648 TB usable with optional cloud storage, and achieve backup speeds of up to 25 TB/hour. No data sheets are available, but we can place them in a table with existing StoreOnce systems to see how they rate:

We think the two new appliances will have capacities and speeds greater than the systems to their left in the table and approaching the systems to their right. Effectively, they replace the systems on their immediate left – the 3620 and the 3660, and possibly the 3640 as well. An HPE spokesperson told us: “The datasheet will be available closer to availability date. The 3720 and 3760 are an improved offering targeting a similar small-to-mid-range customer segment as the 3620, 3640, and 3660 range. We’re still offering the 3660, but haven’t sold the 3620 and 3640 for some time.”

Availability

The new guarantees for IT outcomes are available starting today as part of the HPE Storage Future-Ready Program. 

HPE Private Cloud Business Edition with Morpheus VM Essentials is available now. New Business Edition systems with HPE SimpliVity will be available in the third quarter of 2025. Morpheus Software integration for the Alletra Storage MP B10000 is available today and is planned for June for HPE Aruba Networking CX 10000. Morpheus Enterprise Software is available now as standalone software.

The StoreOnce 3720 and 3760 will be available early in the third quarter of 2025.

Sandisk launches fastest gumstick SSD yet

Sandisk just launched the SN8100 from its WD_BLACK range, currently the fastest M.2-format (gumstick) drive available.

It is a PCIe Gen 5 drive and succeeds the PCIe Gen 4 WD_BLACK SN850X with twice the read speed, twice the power efficiency, and the same 8 TB maximum capacity. The older drive used the 96-layer BiCS4 3D NAND generation, whereas the new drive is built with 218-layer BiCS8 chips, formatted as TLC (3 bits/cell) like the SN850X.

Eric Spanneut, Sandisk

Eric Spanneut, Sandisk VP of devices, stated: “The WD_BLACK SN8100 NVMe SSD with PCIe Gen 5.0 delivers peak storage performance for the most discerning users.” 

That means the 2 TB and 4 TB versions deliver over 2.3 million random read IOPS, while sequential read and write bandwidth numbers are 14.9 GBps and 14 GBps respectively. A comparison with other suppliers’ equivalent PCIe Gen 5 M.2 drives shows that Sandisk’s new drive is the fastest in terms of sequential read and write performance:

Spanneut said: “Whether it’s for high-level gaming, professional content creation, or AI applications, high-performance users now have a PCIe Gen 5.0 storage solution that matches speed with power efficiency to help them build the ultimate gaming rig or best-in-class workstation, enabling them to play and create with next-level performance and reliability.”

The SN8100’s average operating power is 7 W and it offers endurance of up to 2,400 terabytes written (TBW).

Sandisk could potentially build an enterprise version of this drive, perhaps with QLC formatting and the hot-swappable E1.S form factor, and reach 16 TB of capacity, which would be impressive.

This SN8100 drive is available for purchase at sandisk.com and select retailers worldwide in 1 TB ($179.99), 2 TB ($279.99), and 4 TB ($549.99) capacities – US MSRP amounts. The heatsink-equipped version will be available this fall in the same capacities for $20 extra. The 8 TB version, with and without a heatsink, is expected to be available later this year.

Pliops bypasses HBM limits for GPU servers

Key-value accelerator card provider Pliops has unveiled the FusIOnX stack as an end-to-end AI inference offering based on its XDP LightningAI card.

Pliops’ XDP LightningAI PCIe card and software augment the high-bandwidth memory (HBM) tier in GPU servers and accelerate vLLM on Nvidia Dynamo by 2.5x. UC Berkeley’s open source vLLM library for LLM inference and serving uses a key-value (KV) cache as short-term memory when batching user requests. Nvidia’s Dynamo framework is open source software that optimizes inference engines such as TensorRT-LLM and vLLM. The XDP LightningAI is a PCIe add-in card that functions as a memory tier for GPU servers. It is powered by ASIC hardware and software, and caches intermediate LLM process-step values on NVMe/RDMA-accessed SSDs.

Pliops slide

Pliops says GPU servers have limited amounts of HBM. Its technology is intended to deal with the situation where a model’s context window – its set of in-use tokens – grows so large that it overflows the available HBM capacity, and evicted contexts have to be recomputed. The model is memory-limited and its execution time ramps up as the context window size increases.

By storing already-computed contexts on fast-access SSDs and retrieving them when needed, the system reduces the model’s overall run time compared with recomputing the contexts. Users can get more HBM capacity by buying more GPU servers, but the cost of this is high; bulking out HBM capacity with a sub-HBM storage tier is much less expensive and, we understand, almost as fast. The XDP LightningAI card with FusIOnX software provides, Pliops says, “up to 8x faster end-to-end GPU inference.”
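The tiering idea can be illustrated with a toy two-tier cache in which evicted contexts are demoted to a slower store rather than discarded, so a later hit costs a retrieval instead of a full recompute (an illustrative sketch, not Pliops’ actual implementation):

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy model of HBM-plus-SSD context caching: a small fast tier
    (standing in for HBM) backed by a large slower tier (standing in
    for NVMe SSDs). Evicted contexts are demoted, not discarded."""

    def __init__(self, hbm_slots: int):
        self.hbm = OrderedDict()   # context_id -> KV tensors, LRU order
        self.ssd = {}              # demoted contexts
        self.hbm_slots = hbm_slots
        self.recomputes = 0        # count of full recomputations paid

    def get(self, context_id, recompute_fn):
        if context_id in self.hbm:            # fast-tier hit
            self.hbm.move_to_end(context_id)
            return self.hbm[context_id]
        if context_id in self.ssd:            # slow-tier hit: promote it
            value = self.ssd.pop(context_id)
        else:                                 # true miss: pay a recompute
            self.recomputes += 1
            value = recompute_fn(context_id)
        self.hbm[context_id] = value
        if len(self.hbm) > self.hbm_slots:    # demote the LRU context
            old_id, old_val = self.hbm.popitem(last=False)
            self.ssd[old_id] = old_val
        return value

cache = TieredKVCache(hbm_slots=2)
for ctx in ["a", "b", "c", "a", "b"]:   # "a" and "b" overflow, then return
    cache.get(ctx, lambda c: f"kv({c})")
print(cache.recomputes)                 # 3: one per first touch, not 5
```

Without the slow tier, the revisits to "a" and "b" would each cost another recompute; with it, they cost only a (much cheaper) retrieval, which is the effect Pliops quantifies as faster end-to-end inference.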

Think of FusIOnX as AI stack glue for AI workloads. Pliops provides several examples:

  • FusIOnX vLLM production stack: Pliops vLLM KV-Cache acceleration, smart routing supporting multiple GPU nodes, and upstream vLLM compatibility.
  • FusIOnX vLLM + Dynamo + SGLang BASIC: Pliops vLLM, Dynamo, KV-Cache acceleration integration, smart routing supporting multiple GPU nodes, and single or multi-node support.
  • FusIOnX KVIO: Key-Value I/O connectivity to GPUs, distributed Key-Value over network for scale – serves any GPU in a server, with support for RAG/Vector-DB applications on CPU servers coming soon.
  • FusIOnX KV Store: XDP AccelKV Key-Value store, XDP RAIDplus Self Healing, distributed Key-Value over network for scale – serves any GPU in a server, with support for RAG/Vector-DB applications on CPU servers coming soon.
Pliops slide

The card can be used to accelerate one or more GPU servers hooked up to a storage array or other stored data resource, or it can be used in a hyperconverged all-in-one mode, installed in a GPU server, providing storage using its 24 SSD slots, and accelerating inference – an LLM in a box, as Pliops describes that configuration. 

Pliops slide

Pliops has its PCIe add-in-card method, independent of the storage system, to feed the GPUs with the model’s bulk data, independent of the GPU supplier as well. The XDP LightningAI card runs in a 2RU Dell server with 24 SSD slots. Pliops says its technology accelerates the standard vLLM production stack 2.5x in terms of requests per second:

Pliops slide

XDP LightningAI-based FusIOnX LLM and GenAI is in production now. It provides “inference acceleration via efficient and scalable KV-Cache storage, and KV-Cache disaggregation (for Prefill/Decode node separation)” and has a “shared, super-fast Key-Value Store, ideal for storing long-term memory for LLM architectures like Google’s Titans.”

There are three more FusIOnX stacks coming. FusIOnX RAG and Vector Databases is in the proof-of-concept stage and should provide index building and retrieval acceleration.

FusIOnX GNN is in development and will store and retrieve node embeddings for large GNN (graph neural network) applications. A FusIOnX DLRM (deep learning recommendation model) is also in development and should provide a “simplified, superfast storage pipeline with access to TBs-to-PBs scale embedding entities.”

Comment

There are various AI workload acceleration products from other suppliers. GridGain’s software enables a cluster of servers to share memory and therefore run apps needing more memory than a single server supports. It provides a distributed memory space atop a cluster or grid of x86 servers with a massively parallel architecture. AI is another workload it can support.

GridGain for AI can support RAG applications, enabling the creation of relevant prompts for language models using enterprise data. It provides storage for both structured and unstructured data, with support for vector search, full-text search, and SQL-based structured data retrieval. And it integrates with open source and publicly available libraries (LangChain, Langflow) and language models. A blog post can tell you more.

Three more alternatives are Hammerspace’s Tier Zero scheme, WEKA’s Augmented Memory Grid, and VAST Data’s VUA (VAST Undivided Attention), and they all support Nvidia’s GPUDirect protocols.

Asigra improves SaaS app data restorability

Canadian backup vendor Asigra has unveiled SaaSAssure 2025, its latest data protection platform for SaaS apps, now featuring granular restore and automatic discovery capabilities.

SaaSAssure was launched in summer 2024 with pre-configured integrations to protect customer data with connectors for Salesforce, Microsoft 365, Exchange, SharePoint, Atlassian’s Jira and Confluence, Intuit’s QuickBooks Online, Box, OneDrive, HubSpot, and others. It is available to both enterprises and MSPs so that they can offer SaaS app customer data protection services. SaaSAssure is built on AWS and offers flexible storage options, including Asigra Cloud Storage and Bring Your Own Storage (BYOS). This new release is available to customers in North America, the UK, and the European Union.

Eric Simmons, Asigra

CEO Eric Simmons stated: “The international availability of SaaSAssure, including the United Kingdom and Europe, expands our support for MSPs and enterprises who need advanced SaaS backup that goes beyond Microsoft 365 or Salesforce. With expanded Exchange and HubSpot granularity, plus Autodiscovery and UI upgrades, customers gain comprehensive data protection in a way that integrates smoothly with other critical SaaS applications.”

The new features in this release include:

• Exchange Granular Restore for individual mailboxes, folders, emails, contacts, events, and attachments, as well as full backups and mailbox restores.
• HubSpot Granular Restore for specific CRM categories, object groups (e.g. contacts, companies, custom objects), and individual records with or without associated data, and full backup restoration.
• HubSpot Custom Object Restore means previously backed up custom objects are now fully restorable.
• Autodiscovery for Exchange automatically detects and adds new mailboxes – including shared, licensed, and resource types – into domain-level backups.
• Autodiscovery for SharePoint automatically includes newly created SharePoint sites in domain level backups for improved coverage.
• Domain Level SharePoint Backup simplifies multi-site backup management for SharePoint users.
• Intuitive restore interface with a redesigned UI streamlining the recovery process for IT teams and MSPs.
• Configurable email alerts for activities like backup failures to improve incident response.
• Pendo Resource Center Integration offers enhanced in-platform user guidance and support.

New SaaS app connectors are coming for ADP, BambooHR, Docusign, Entra ID, Freshdesk, Trello, and Zendesk. You can be notified about new connectors by filling in a form here.  SaaSAssure is available for immediate deployment.

Xinnor reports rapid growth as xiRAID sales climb sharply

Software RAID supplier Xinnor saw first quarter sales of its xiRAID product reach 86 percent of the company’s total revenue for all of 2024.

Israel-based Xinnor’s xiRAID presents a local block device to the system, with data distributed across drives for faster access. It has a declustered RAID feature for HDDs, which places spare zones across all drives in the array and restores a failed drive’s data to these zones, making drive rebuilds faster. The software supports NVMe, SAS, and SATA drives, and works with block devices, local or remote, over any transport – PCIe, NVMe-oF or SPDK target, Fibre Channel, or InfiniBand. Xinnor says its recent growth has been driven by a series of strategic partnerships, including a major agreement with Supermicro, and an expanded global reseller channel.
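The declustered spare-zone idea can be sketched as follows: instead of a dedicated hot spare that becomes the rebuild bottleneck, every drive reserves a spare zone, so a failed drive’s data is reconstructed across all survivors in parallel (an illustrative sketch, not xiRAID’s actual placement algorithm):

```python
def spare_zone_layout(drives: int, zones_per_drive: int) -> dict:
    """Reserve one spare zone on every drive instead of keeping a
    whole dedicated spare drive (illustrative layout only)."""
    return {d: ["data"] * (zones_per_drive - 1) + ["spare"]
            for d in range(drives)}

def rebuild_targets(layout: dict, failed_drive: int) -> list:
    """After a failure, every surviving drive with a spare zone
    receives a slice of the reconstructed data."""
    return [d for d in layout
            if d != failed_drive and "spare" in layout[d]]

layout = spare_zone_layout(drives=8, zones_per_drive=4)
survivors = rebuild_targets(layout, failed_drive=3)
print(len(survivors))  # 7 -- rebuild writes fan out across all survivors
```

With a classic hot spare, one drive absorbs every rebuild write; here the write load (and hence rebuild time) is divided across the remaining drives, which is why declustered rebuilds finish faster.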

Davide Villa, Xinnor

A statement from chief revenue officer Davide Villa said: “The momentum we’ve built in Q1 is truly exceptional. Our patented xiRAID technology is proving to be a game-changer in the data storage market. The fact that in one quarter we achieved what took us all last year to accomplish demonstrates the accelerating market recognition of our unique value proposition.

“We are extremely proud that several leading institutions around the world selected xiRAID to protect and accelerate access to critical data for innovative AI projects. The channel partner extension and the reseller agreement with Supermicro will enhance our reach, enabling more customers to experience the performance lead of xiRAID.”

New resellers include:

  • APAC: Xenon Systems in Australia, CNDfactory in South Korea, DigitalOcean in China
  • Europe: NEC Deutschland GmbH in Germany, HUB4 in Poland, 2CRSI in France, and BSI in the UK
  • Americas: Advanced HPC, SourceCode, Colfax International in the US

And recent customer wins:

  • A leading financial company deployed xiRAID across all the NVMe servers within its datacenters.
  • Two major universities in Central Europe, active in advanced AI research, implemented xiRAID in high-availability mode in two independent all-NVMe storage clusters, for over 20 PB. 
  • The Massachusetts Institute of Technology (MIT) deployed xiRAID to protect around 400 NVMe drives for a variety of use cases.

We think Xinnor is benefiting from a rise in AI workloads needing RAID-protected NVMe SSDs.

VAST Data adds vector search and deepens Google Cloud ties

VAST Data has added vector search to its database and integrated its software more deeply into Google’s cloud.

The database is part of its software stack layered on top of its DASE (Disaggregated Shared Everything) storage foundation, along with the Data Catalog, DataSpace, unstructured DataStore, and DataEngine (InsightEngine). Generative AI large language models (LLMs) manipulate and process data indirectly, using numeric representations – vector embeddings, or just vectors – that encode multiple dimensions of an item. An intermediate abstraction of words in text documents is the token. Tokens are vectorized, and a document item’s vectors are stored in a multi-dimensional space; the LLM searches for nearby vectors as it computes steps in generating a response to user requests. This is called semantic search.

A VAST Data blog by Product Marketing Manager Colleen Quinn says: “Vector search is no longer just a lookup tool; it’s becoming the foundation for real-time memory, context retrieval, and reasoning in AI agents.”

Vectors are stored by specialized vector database suppliers – think Pinecone, Weaviate and Zilliz – and are also being added as a data type by existing database suppliers. Quinn says that the VAST Vector Search engine “powers real-time retrieval, transactional integrity, and cross-modal governance in one platform without creating new silos.” 

In the VAST world, there is a single query engine, which can handle SQL and vector and hybrid queries. It queries VAST’s unstructured DataStore and the DataBase, where vectors are now a standard data type. Quinn says: “Vector embeddings are stored directly inside the VAST DataBase, alongside traditional metadata and full unstructured content to enable hybrid queries across modalities, without orchestration layers or external indexes.”

“This native integration enables agentic systems to retrieve memory, reason over metadata, and act – all without ETL pipelines, external indexes, or orchestration layers.”

“The system uses sorted projections, precomputed materializations, and CPU fallback paths to maintain sub-second performance – even at trillion-vector scale. And because all indexes live with the data, every compute node can access them directly, enabling real-time search across all modalities – text, images, audio, and more – without system sprawl or delay.”

“At query time, VAST compares the input vector to all stored vectors in parallel. This process uses compact, columnar data chunks to prune irrelevant blocks early and accelerate retrieval.”

“Future capabilities will expand beyond vector search, enabling new forms of hybrid reasoning, structured querying, and intelligent data pipelines.” Think multi-modal pipelines and intelligent data preparation.
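The comparison step Quinn describes amounts to scoring the query vector against every stored embedding and keeping the best matches, as in this toy exact-search sketch (the document IDs and three-dimensional embeddings are invented for illustration; real embeddings have hundreds or thousands of dimensions, and VAST's engine adds pruning and columnar layout on top):

```python
import math

def cosine(a, b):
    """Cosine similarity: how closely two embeddings point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def vector_search(query, store, top_k=2):
    """Exact nearest-neighbor search: score the query against every
    stored embedding and return the best-matching document IDs.
    A toy sketch of the semantic-search step, not VAST's engine."""
    scored = sorted(store.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

store = {
    "doc_cat":   [0.9, 0.1, 0.0],   # invented 3-D embeddings
    "doc_dog":   [0.8, 0.2, 0.1],
    "doc_stock": [0.0, 0.1, 0.9],
}
print(vector_search([1.0, 0.0, 0.0], store))  # ['doc_cat', 'doc_dog']
```

Brute-force comparison like this is exact but linear in the number of vectors; the columnar-chunk pruning the blog describes is one way to avoid scoring every stored vector at trillion-vector scale.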

Google Cloud

Building on its April 2024 announcement that it had ported its Data Platform software to Google’s cloud, enabling users to spin up VAST clusters there, VAST has now gone further. It says its Data Platform “is fully integrated into Google Cloud – offering a unified foundation for training, retrieval-augmented generation (RAG), inference, and analytics pipelines that span across cloud, edge, and on-premises environments.”

Renen Hallak, VAST founder and CEO, spoke of a “leap forward,” stating: “By combining the elasticity and reach of Google Cloud with the intelligence and simplicity of the VAST Data Platform, we’re giving developers and researchers the tools they need to move faster, build smarter, and scale without limits.”

The additional VAST facilities now available on GCP include:

  • InsightEngine enabling developers and researchers to run data-centric AI pipelines—such as RAG, preprocessing, and indexing—natively at the data layer.
  • DataSpace with its exabyte-scale global namespace which connects data on-premises, at the edge, and in Google Cloud as well as other hyperscalers for data access and mobility.
  • Unified file (NFS, SMB), object (S3), block, and database access.

VAST says customers can run AI, ML, and analytics initiatives without operational overhead and unify their AI training, RAG pipelines, high-throughput data processing, and unstructured data lakes on its single, high-performance platform.

The base VAST software has already been ported to AWS, with v5.2 available in the AWS Marketplace. We understand v5.3 is the latest version of VAST’s software. 

There is limited VAST availability on the Azure Marketplace, where “VAST’s virtual appliances on Azure allow customers to deploy VAST’s disaggregated storage processing from the cloud of their choice. These containers are free of charge and customers interested in deploying Universal Storage should contact VAST Data to get their capacity under management. This product is available as a Directed Availability release.”

Comment

With its all-in-one storage and AI stack, VAST Data is becoming the equivalent of a software AI system infrastructure mainframe environment, built from modular storage hardware boxes and NVMe RDMA links to x86 and GPU compute, not forgetting Arm (BlueField). Both compute and storage hardware are commodities for VAST. But the software is far from a commodity. It is VAST’s core proprietary IP, being developed and extended at a high rate, with a promise of being uniformly available across the on-premises environment and the AWS, Azure, and Google clouds. For better or worse, as far as we are aware, no other storage or system data infrastructure company is working on such a broad and deep AI stack at the same pace.

DRAM and NAND: Micron and SK Hynix’s paths to production

Analysis: There are two companies highly focused on DRAM and NAND production – Micron and SK hynix. Both are competing intensively in enterprise SSDs and high-bandwidth memory but came to their dual market focus in involved and indirect ways, via early sprawling business expansion, with mis-steps and inspired moves en route.

One was blessed by Intel and one was cursed. Micron got into bed with Intel and the ill-fated Optane technology, which crashed and burned, while SK hynix bought troubled Intel’s SSD and NAND fab business and moved fast into the high-capacity SSD market, which took off and is flying high. It also stopped Western Digital merging with Kioxia, then pushed early into the high-bandwidth memory (HBM) business and is now soaring on Nvidia’s GPU memory coat tails.

Micron

Micron was started up in 1978 in Boise, Idaho, by Ward Parkinson, Joe Parkinson, Dennis Wilson, and Doug Pitman as a semiconductor design operation. It started fabbing 64K DRAM chips in 1981 and IPO’d in 1984. A RISC CPU project came and went in the 1991-1992 period. Micron acquired the NetFrame server business in 1997. It entered the PC business but exited it in 2002, and bought into the retail storage media business by acquiring Lexar in 2006.

Micron entered the flash business in 2005 via a joint venture with Intel. It bought Numonyx, which made flash chips, in 2010 for $1.27 billion. It then developed its memory business by buying Elpida Memory (along with its Rexchip fab) in 2013, giving it an Apple iPhone and iPad memory supply business, and acquiring Taiwanese DRAM maker Inotera Memories in 2016.

However, Micron entered into what was, with hindsight, a major mis-step in 2013 by joining Intel in the Optane 3D XPoint storage-class memory business and manufacturing the phase-change memory technology chips. It was even involved in producing its own branded QuantX 3D XPoint chips – but these went nowhere.

Despite Intel pouring millions of dollars into Optane, the technology failed to take off, with production volumes never growing large enough to lower the per-chip cost and enable profitable manufacture. Eight years later, in March 2021, Micron cancelled the collaboration and walked away, stopping Optane chip production. Intel saw the writing on the wall and canned its Optane business in mid-2022.

Ironically, Intel sold its NAND and SSD business to SK hynix in 2021, the same year that Micron up-ended the Optane collaboration. If only Micron had been in a position to buy that business, it would now have a stronger SSD market position.

Sanjay Mehrotra became Micron’s CEO in 2017 and it was he who pushed Optane out of Micron’s door. He also sold off the Lexar business to focus on DRAM and NAND.

A look at Micron’s revenues and profits from 2016 to date show a pronounced shortage and glut, peak and trough pattern, characteristic of the DRAM and NAND markets:

During the Optane period from 2013 to 2021, Micron diverted production capacity and funding away from DRAM and NAND to Optane and, with hindsight again, we could say it would be a larger company now, revenue-wise, if it had not done that.

SK hynix

SK hynix has a more recent history than Micron. It was founded as Hyundai Electronics Industries Co., Ltd. by Chung Ju-yung in 1983, as part of the Hyundai Group. It produced its first SRAM in 1984 and DRAM in 1985. The company built a range of products including PCs, car radios, telephone switchboards, answering machines, cameras, mobile phones, and pagers. It sprawled even more than the early Micron in a product sense.

Hyundai Electronics Industries bought disk drive maker Maxtor in 1993 and IPO’d in 1996. It bought LG Semiconductor in 1998. In 2000, in financial difficulties caused by DRAM price drops, it restructured and spun off subsidiaries. It rebranded its core business as Hynix Semiconductor in 2001 and was then itself spun out of the Hyundai Group.

More subsidiary divestitures followed in 2002 and 2004. The business then recovered, but not for long, as it defaulted on loans and went through a debt-for-equity swap. Its lenders put it up for sale in 2009, and Hynix partnered with HP to productize Memristor technology, but that was a bust.

Hynix was fined for price-fixing in 2010, adding more trouble, and was eventually acquired in 2012 by SK Telecom for $3 billion. SK Telecom rebranded it as SK hynix with a focus on DRAM and NAND, and it has prospered ever since. SK hynix is headquartered in Icheon, South Korea.

The company was part of the Bain consortium which purchased a majority share in the financially troubled Toshiba Memory Systems NAND business in 2017. This business had a NAND fab joint venture with Western Digital and was rebranded as Kioxia. 

SK hynix then bought Intel’s NAND business for $9 billion in 2021 in a multi-year deal that completed earlier this year, incorporating it in its Solidigm division, with NAND fabs in Dalian, China. This gave it an excellent position in the high-capacity SSD market and cemented its twin focus on DRAM and NAND.

A merger between Western Digital and Kioxia was suggested as a way forward for Kioxia in 2023 but eventually called off after SK hynix apparently blocked it. A combined Kioxia-WD business would have had a larger NAND market share than SK hynix, and a single NAND technology stack, lowering its costs.

SK hynix has two stacks: its own and Solidigm’s, and faced being relegated to number three in the market, with an 8.5 percent market share, behind Samsung and Kioxia-WD, both with about 33 percent. 

Currently Samsung has a leading 36.9 percent share and SK hynix + Solidigm is in second place with 22.1 percent. These two are followed in declining order by Kioxia (12.4 percent), Micron (11.7 percent), Western Digital (now Sandisk and 11.6 percent) and others.

Solidigm took an early lead in high-capacity enterprise SSDs in 2024, with a 61.44 TB QLC drive and then a 122 TB drive late in the year. This was well timed for the rapid rise in demand for fast access to masses of data needed for generative AI processing. Micron delivered its own 61.44 TB SSD later in 2024.

SK hynix started mass-producing high-bandwidth memory (HBM) in 2024 and has become the dominant supplier of this type of memory, needed for GPU servers, to Nvidia. As of 2025’s first quarter, SK hynix holds 70 percent of the HBM market, with Micron and Samsung sharing the rest in unknown proportions. Micron says its HBM capacity is sold out for 2025 while Samsung’s latest HBM chips are being qualified.

Revenue-wise, SK hynix and Micron were neck and neck during the DRAM/NAND market glut in mid-2023. But since then SK hynix has grown its revenues faster, led by HBM, with a widening gap between the two.

It seems unlikely that, absent SK hynix missteps, Micron will catch up. Its best chance of catching up in the DRAM market could be an early entry into the 3D DRAM business. Any NAND market catch-up seems more likely to come from an acquisition; Sandisk, anyone?

The two companies, Micron and SK hynix, have both been radically affected by Intel; Micron by the loss-making and jinxed Optane technology, and SK hynix by its market share-expanding Solidigm acquisition. Intel’s CEO at that time, Pat Gelsinger, said Intel should never have been in the memory business. Because it was, it helped hang an albatross around Micron’s neck and gave SK hynix a Solidigm shove upwards in the NAND business. Icheon benefited while Boise did not.

Commvault and Deloitte team up on enterprise cyber resilience

Enterprise data protector Commvault is allying with Big Four accountancy firm Deloitte to pitch to customers trying to become more resilient against cyber threats.

Commvault has been layering cyber resilience features on top of its core data protection facilities. It recently improved its Cleanroom Recovery capabilities and has a CrowdStrike partnership to detect and respond to cyberattacks. A CrowdStrike alert to the Commvault Cloud can trigger a ThreatScan check for affected data, and restore compromised data to a known good state using backups. Deloitte has a set of Cyber Defense and Resilience services, including forensic specialists who investigate cyber-incidents and help contain and recover from them.

Alan Atkinson, Commvault
Alan Atkinson

Alan Atkinson, chief partner officer at Commvault, stated: “By combining Commvault’s cyber resilience technologies with Deloitte’s deep technical knowledge in cyber detection and response, we are creating a formidable defense for our joint customers against today’s most sophisticated cyber threats.” 

The two aim to integrate Commvault’s cyber resilience services with Deloitte’s cyber defense and response capabilities to help businesses maintain operational continuity before, during, and after a cyber incident. Such services might have mitigated the impact on UK retailers like Marks & Spencer, Co-Op, and Harrods during their recent cyberattacks. Deloitte, coincidentally, is Marks & Spencer’s auditor.

Specifically, before an attack, Commvault and Deloitte will assist organizations in understanding and defining their minimum viability – the critical set of applications, assets, processes, and people required to operate their business following an attack or outage. Once defined, Commvault’s Cleanroom Recovery can assist enterprises in assessing their minimum viability state and testing their recovery plans in advance.

Then, during an attack, the two say Deloitte’s cyber risk services combined with Commvault’s AI-enabled anomaly detection capabilities help joint clients identify and mitigate potential threats before they escalate.

After an attack, during the recovery phase, Deloitte’s incident response capabilities combined with the Commvault Cloud platform, which includes resilience offerings like Cloud Rewind, Clumio Backtrack, and Cleanroom Recovery, help customers “quickly recover, minimize downtime, and operate in a state of continuous business.” 

David Nowak, Deloitte
David Nowak

David Nowak, Principal, Deloitte & Touche LLP, said: “Together, we are offering a strategic and broad solution that not only helps our clients fortify their defenses but also helps with recovering from outages and cyberattacks.” 

Commvault competitors Cohesity, Rubrik, and Veeam partner with the main IT services firms, such as Deloitte, EY, KPMG, and PwC, on a tactical basis but don’t have strategic alliances with them.

Find out more about Commvault and Deloitte at a Commvault microsite.

Bootnote

Commvault’s Azure infrastructure was breached by a suspected nation-state actor at the end of April, but its customer backup data was not accessed. A Commvault Command Center flaw, CVE-2025-34028, is being fixed.

Cerabyte gains backing from Western Digital

Western Digital has invested in archival ceramic tablet technology developer Cerabyte.

This follows strategic investments from Pure Storage and In-Q-Tel. Cerabyte’s technology involves a femtosecond laser burning nanodot holes in the ceramic coating of glass tablets. The holes form part of QR-type data patterns, provide 1 GB capacity per tablet surface, and are read using scanning microscopes. Tablets are stored offline on shelves, with a robot system transporting them to and from writing and reading stations. Their contents can last in immutable form for hundreds if not thousands of years and require no energy while offline. They are denser than tape cartridges and provide faster data access.

Shantnu Sharma, Western Digital
Shantnu Sharma

Shantnu Sharma, Western Digital’s Chief Strategy and Corporate Development Officer, stated: “We are looking forward to working with Cerabyte to formulate a technology partnership for the commercialization of this technology. Our investment in Cerabyte aligns with our priority of extending the reach of our products further into long-term data storage use cases.”

Western Digital has spun off its NAND fab and SSD unit, SanDisk, and is now a pure play disk drive manufacturer. 

Cerabyte was founded in Germany and opened a Silicon Valley office and another in Boulder, Colorado, in summer 2024 with Steffen Hellmold recruited as a director to help with product commercialization. Hellmold used to work for Western Digital as a corporate strategy VP from 2016 to 2021, and was involved with DNA storage technology, joining Twist Bioscience after WD.

Steffen Hellmold, Cerabyte
Steffen Hellmold

Hellmold has said: “It was previously thought only DNA storage could develop to store exabytes per rack. But Cerabyte can scale there as well.” Its tech is more practical and closer to commercialization than DNA storage.

According to Cerabyte, its ceramic data storage has the potential to enable new use cases with better total cost of ownership (TCO) than current cold-tier solutions, such as tape. It says long-term permanent data storage must be affordable, sustainable, and resilient to bit rot, and must not require periodic maintenance, environmental control, or additional energy to reliably retain the data stored.

CEO and co-founder Christian Pflaum said: “Our ceramic data storage offers a vital, complementary long-term data storage layer that ensures rapid data retrieval – often within seconds – unlocking new revenue streams. We are excited to be working with Western Digital to define a technology partnership, fueling our ability to deliver accessible permanent storage solutions at scale.”

Christian Pflaum, Cerabyte
Christian Pflaum

The company developed a demonstration prototype system in 2023 and will now work with manufacturing partners on building a commercial product: a library system with reading and writing drives, robotics, tablet-storing shelves, tablet load and removal functionality, and management and control software, plus a ceramic tablet manufacturing capability.

A company like Quantum or Spectra Logic could help develop a ceramic tablet library system by using its existing tape library technology as a starting point. Western Digital, with its experience in manufacturing disk platters, which can have glass substrates, could offer help in the tablet production area, and, like Pure Storage, it has a channel through which to deliver technology products.

NEO expands 3D X-DRAM tech for denser, faster memory

NEO Semiconductor has expanded its 3D X-DRAM concept with new variants aimed at improving retention time, density, and power efficiency. By modifying its original one-transistor, zero-capacitor (1T0C) design to include either a capacitor (1T1C) or additional transistors (3T0C), the company is pushing toward denser, faster, and more energy-efficient DRAM structures that align with 2D DRAM scaling roadmaps.

A recently published NEO white paper details the architecture and design choices behind its updated memory cells. The original 1T0C design uses floating body cell technology while the 1T1C development uses an Indium Gallium Zinc Oxide (IGZO) transistor channel. An N+ polysilicon capacitor plate facilitates effective electron storage in the IGZO layer. 

The overall 1T1C cell structure looks like this: 

The left-hand diagram shows a top word line in place while it has been removed in the right-hand diagram to show the underlying components. NEO says: “The IGZO layer is coupled to a metal word line layer, which acts as the transistor gate. The drain of the IGZO layer connects to a vertical bit line made of conductive materials. A thin high-k dielectric layer, serving as a capacitor, is formed along the cylindrical sidewall on the source side of the transistor between the IGZO channel and a capacitor plate layer. The capacitor plate, composed of conductors such as N+ polysilicon, is biased at VDD to facilitate effective electron storage in the N-type IGZO layer.”

This configuration can achieve a retention time of more than 450 seconds, meaning cell refreshes are needed less often, and supports stacking of up to 128 layers.

By adding 5nm-thick spacer components between the vertical bit lines and the word line layers, NEO reduces parasitic bit line capacitance so that more than 512 layers can be stacked, increasing device capacity.

There are other 1T1C variants, such as one with a conductor plate, and another that eliminates the insulator between the vertical bit line and the IGZO layer. The 3T0C design looks like this:

It incorporates two IGZO layers for enhanced performance. This design relies on current sensing and is “particularly well-suited for in-memory computing and artificial intelligence (AI) applications, where high-speed data processing and efficient power management are crucial.”

NEO says its 3D X-DRAM array architecture is like a 3D NAND array with layer access stairs at the sides: “The array is segmented into multiple sectors by vertical slits. The multiple word line layers within each sector are connected to decoder circuits through staircase structures located along both edges of the array.”

NEO asserts that its “3D X-DRAM cell can be fabricated using a 3D NAND-like process, requiring only modifications to accommodate IGZO and capacitor formation.” The white paper goes into the process steps in some detail.

It also compares the density of 2D (planar) DRAM with 3D X-DRAM, saying: “According to public estimation, 2D DRAM at the 0a node can reach a density of 48 Gb. In contrast, 3D X-DRAM 1T1C cells can reach densities ranging from 64 Gb to 512 Gb, corresponding to 64 to 512 layers and a more than 10x increase.”

Were 3D X-DRAM to be used for high-bandwidth memory (HBM), it would have greater bandwidth due to a wider bus width. HBM3e supports a 1K bit bus width with HBM4 projected to attain 2K bits by 2026. In contrast, “3D X-DRAM’s unique array structure eliminates the need for TSV and enables hybrid bonding technology, which can scale bus width beyond 4K bits to 32K bits, increasing bandwidth by up to 16X while significantly reducing power consumption and heat generation – making it a game-changer for AI applications.”
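The density and bus-width multiples quoted above can be sanity-checked with simple arithmetic on the figures the white paper cites (an illustrative calculation, not from NEO):

```python
# Sanity-check the density and bus-width multiples quoted above,
# using the figures cited in the article.

planar_density_gb = 48        # 2D DRAM at the 0a node, per public estimation
x_dram_density_gb = 512       # 3D X-DRAM 1T1C at 512 layers

# ~10.7x, consistent with the claimed "more than 10x increase"
print(f"Density gain: {x_dram_density_gb / planar_density_gb:.1f}x")

hbm4_bus_bits = 2 * 1024      # HBM4's projected 2K-bit bus
x_dram_bus_bits = 32 * 1024   # claimed hybrid-bonding upper bound of 32K bits

# 16x, matching the claimed bandwidth increase "by up to 16X"
print(f"Bus-width gain over HBM4: {x_dram_bus_bits // hbm4_bus_bits}x")
```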

A proof-of-concept 1T0C test chip is being developed. Proof-of-concept test chips for the 1T1C design are currently in the planning stage, with availability expected in 2026. 

Bootnote

Samsung is exploring VS-DRAM (vertically stacked DRAM), which we understand to be a 1T1C structure.

Newcomer Arcfra lands spot in Gartner’s HCI vendor lineup

Singapore-based startup Arcfra has entered the hyperconverged infrastructure (HCI) market and been named in Gartner’s latest Market Guide for Full-Stack HCI Software less than a year after launch.

Wenhao Xu, Arcfra
Wenhao Xu

CEO Wenhao Xu founded Arcfra in May 2024 and the company has since expanded operations into South Korea and other parts of the Asia-Pacific region. Wenhao Xu was previously co-founder and board chair of Beijing-based Chinese HCI company SmartX from 2013 to 2024, and a software engineer at Oracle-acquired Nimbula in Mountain View from 2011 to 2012. He was a summer intern at VMware during his studies.

Wenhao Xu stated: “We are excited to be recognized by Gartner in the 2025 Market Guide for Full-Stack HCI Software. We believe this recognition reflects our vision of delivering agile, cost-effective, and future-ready infrastructure to enterprises navigating the post-VMware era.”

The company provides full infrastructure support for compute, storage (block and file), networking, security, and disaster recovery within a single resource pool. It is featured in Gartner’s updated Market Guide for Full-Stack Hyperconverged Infrastructure Software report, published last month.

The Arcfra Enterprise Cloud Platform (AECP) supports both virtualized and containerized applications, and provides security and disaster recovery features. The management layer includes API access along with observability and automation functions. The software is presented as a set of cloud service modules.

Arcfra storage diagram
AECP System diagram

AECP requires a minimum of three nodes as a starting point. It is available through either a subscription or a perpetual license model, and there are four offerings: AECP Essential, AECP Standard, AECP Advanced, and AECP VDI Essential. Two hypervisors are supported: VMware vSphere and AVE (Arcfra Virtualization Engine), which is built on KVM technology.

AECP Essential supports 80 TB of raw capacity per node; the other editions support up to 256 TB. Both NFS and iSCSI protocols are available, with AECP VDI Essential limited to NFS. The file system is built directly on bare devices, which Arcfra says is better suited to high-performance block storage because it avoids the overhead of a conventional Linux file system.

Arcfra storage diagram
Arcfra storage diagram

Both all-flash and hybrid SSD-plus-disk-drive storage are supported, with cold data automatically tiered to disk. Data is moved between cluster nodes using RDMA. With a cluster boost mode setting, the “vhost protocol shares memory between Guest OS, QEMU, and ABS to optimize I/O request processing and data transfer, improving VM performance and reducing I/O latency.” ABS stands for Arcfra Block Storage.

All four AECP versions support erasure coding and encryption at rest. Drive-level data block checksums provide silent data corruption protection. System-level protection is available at the node, rack, and site level. The system provides high-availability and stretched active:active clustering. There is a storage snapshot function and automatic recovery for component or node failure.

The specifications are listed here. Download an AECP product brief here.

Arcfra claims a 50 percent reduction in total cost of ownership (TCO) compared with VMware, and it is positioning itself as a VMware migration target with cost-efficiency as a focal point.

Gartner’s report lists eight representative HCI product suppliers: Anchao Cloud Software, Arcfra, Broadcom (VMware), Microsoft, Nutanix, Oxide, Sangfor, and SoftIron. It does not include Scale Computing, SmartX, or StorMagic, but then it is a representative rather than an exhaustive list.

Anchao, Arcfra, Oxide, Sangfor, and SoftIron are classed as regional suppliers, with the others being global.

Bootnote

We are told Wenhao founded Arcfra for three reasons:

  • To address enterprise cloud infrastructure challenges in the AI era with a simpler, more streamlined product, especially for building enterprise AI infrastructure.
  • To capture market trends such as VMware alternatives, cloud repatriation, and VM-container convergence.
  • To provide products and services to customers worldwide. With its current installed base in Asia, Arcfra plans to set up regional headquarters in Europe and the US in the second half of 2025.

Arcfra is funded by Vertex Ventures and other major investors.

N-able posts Q1 loss as revenue growth slows

Data protection vendor N-able reported a loss and lower growth in its first 2025 quarter.

John Pagliuca

The company supplies data protection and security software to more than 25,000 managed service providers (MSPs), which in turn supply services to small and mid-market businesses. It also sells to distributors, SIs, and VARs. Revenues in the initial 2025 quarter were $118.2 million, up 3.9 percent year-on-year and above its guidance range, with a GAAP loss of $7.2 million. Subscription revenues grew 4.8 percent to $116.8 million and its gross margin was 76.6 percent.

N-able president and CEO John Pagliuca stated: “Our earnings reflect continued progress advancing cyber-resiliency for businesses worldwide. The launch of new security capabilities, strong addition of channel partners in our Partner Program, and our largest new bookings deal ever showcase that N-able is innovating and growing. We look forward to building on this progress throughout the year.”

The loss was due to increased cost of revenue, operating expenses, and acquisition costs.


A revenue history chart shows N-able’s growth rate has been declining:

N-able’s net retention rate (NRR) is 101 percent, which indicates some customer revenue growth and low customer churn. It was 103 percent in 2024, 110 percent in 2023, and 108 percent in 2022. The higher the NRR, the faster a company grows revenue from its existing customer base. A 100 percent NRR indicates that any revenue lost to customer churn and downgrades is offset by increased revenue from existing customers through upsells and cross-sells. An NRR below 100 percent suggests a business is keeping most of its customers but not growing its revenue from them. N-able’s NRR is not in that category, but it is not much above it either.
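NRR is conventionally computed over the existing-customer cohort only: starting recurring revenue, plus expansion, minus downgrades and churn, divided by starting recurring revenue. A minimal sketch of that standard formula, with made-up cohort figures (not N-able's):

```python
def net_retention_rate(start_arr, expansion, contraction, churn):
    """NRR for an existing-customer cohort over a period:
    (starting recurring revenue + upsell/cross-sell expansion
     - downgrades - churned revenue) / starting recurring revenue."""
    return (start_arr + expansion - contraction - churn) / start_arr

# Illustrative cohort: $100M of recurring revenue at the start of the year,
# $8M of expansion, $3M of downgrades, $4M lost to churn.
nrr = net_retention_rate(100.0, 8.0, 3.0, 4.0)
print(f"NRR: {nrr:.0%}")  # 101%, in line with N-able's reported figure
```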

The issue is that N-able’s quarterly revenue growth rate has been slowing, with seven declining quarters since a high point in 2023’s second quarter, as a second chart indicates:

Delivering cyber-resiliency services through MSPs is akin to a franchise model. N-able’s revenues should increase as existing MSP partners bring on new clients and grow revenue from existing ones, and as it recruits new MSP partners. As the chart above shows, N-able’s franchisees are growing its revenues at a declining rate.

The cybersecurity-focused acquisition of Adlumin last November was intended to scale N-able’s security portfolio, giving MSP partners new services to sell to their clients. The deal also brings in distributors, VARs, and SIs, giving N-able access to more channel partners and greater cross-selling scope.

Pagliuca expects the NRR percentage to grow, saying: “The improvements we’re looking to drive will be driven mostly through the cross-sell opportunity that we have within the customer base.”

N-able’s second quarter outlook is $126 million ± $500,000, a 5.5 percent year-on-year increase at the midpoint. Its full 2025 outlook is $494.5 million ± $2.5 million, a 6 percent year-on-year increase at the midpoint, which suggests that it sees its revenue growth rate staying above 5 percent in the second, third, and fourth quarters.
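The year-ago comparables implied by those guidance midpoints can be backed out with simple arithmetic (an illustrative calculation, not company-reported figures):

```python
# Back out implied year-ago revenue from the guidance midpoints and the
# stated year-on-year growth percentages at the midpoint.

q2_midpoint = 126.0   # $M, ±$0.5M
q2_growth = 0.055     # 5.5 percent YoY at the midpoint
print(f"Implied Q2 2024 revenue: ${q2_midpoint / (1 + q2_growth):.1f}M")

fy_midpoint = 494.5   # $M, ±$2.5M
fy_growth = 0.06      # 6 percent YoY at the midpoint
print(f"Implied FY 2024 revenue: ${fy_midpoint / (1 + fy_growth):.1f}M")
```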

Analyst Jason Ader said: "While the company is going through multiple simultaneous transitions this year (channel expansion, investments in security products, and cross-sell/platformization motions), which are pressuring profitability and creating some execution risk, we like management’s aggressive posture and believe it holds the promise of a larger TAM, faster growth, and greater operating leverage in the future."

N-able has instituted a $75 million share buyback program.