
NetApp touches down with the San Francisco 49ers

NetApp has become a founding level partner of the San Francisco 49ers professional football team in a multi-year deal and is now its Official Intelligent Data Infrastructure Partner.

NetApp is also the presenting sponsor of the 49ers’ 2025 NFL Draft and of the Levi’s Stadium Owners Club, located on the east side of the building, which will be renamed to recognize NetApp’s sponsorship. The company will also be the presenting sponsor of the 49ers’ strategy, data, and analytics conference, Horizon Summit, returning to Levi’s Stadium in June 2025.

Costa Kladianos

Costa Kladianos, 49ers EVP and Head of Technology, stated: “When Levi’s Stadium opened in 2014, it was one of the most technologically advanced stadiums in the country. While we have remained diligent in making improvements to the fan experience year over year, NetApp will empower us to make major changes that will bring the building into the forefront of technology-focused sports and entertainment venues.”

NetApp says the two brands will use intelligent data infrastructure to support 49ers business operations throughout the organization, starting with the reimagination of the fan experience at Levi’s Stadium. The 49ers plan to make major tech enhancements to the venue with the goal of creating “a new seamless and connected fan experience” with improvements to ingress and egress, bathroom and concession wait times, mobile app functionality, and more. This will involve NetApp’s Keystone storage-as-a-service offering.

NetApp CEO George Kurian said: “Intelligent data infrastructure is crucial to the sports fan experience and team performance. Our support of the 49ers’ ambitious goals for making its home stadium the benchmark for excellence in fan experience and team performance demonstrates our strong ties to our San Francisco Bay Area home and our unique capabilities to make data an asset in leading organizations achieving their transformation goals.” 

We understand that the cost of becoming a 49ers founding-level partner is significant, typically negotiated case by case, and not publicly disclosed. Levi Strauss & Co did reveal it is paying $17 million a year for its ten-year stadium naming rights deal.

NetApp is heavily involved in sports sponsorship, having deals with the San Jose Sharks pro ice hockey team, Porsche Motorsport, TAG Heuer Porsche Formula E team, and the Formula 1 Aston Martin Aramco Racing team. Its annual sports sponsorship budget must be significant.

Spectra Logic optical SAS switch expands tape connectivity

Spectra Logic has launched an optical SAS switch that supports distances of up to 100 meters between servers and tape drives, providing cheaper connectivity than Fibre Channel.

Its OSW-2400 Optical SAS Switch supports the SAS-4 standard for connectivity between servers and tape storage systems, and features 48 x 24G lanes operating at 22.5 Gbps. That means 1.08 Tbps total bandwidth and an aggregate 108 GBps data transfer rate. The 100 m distance enables a SAS fabric to cover datacenter floor spaces of up to 10,000 m² (107,639 sq ft), connect between building floors, or extend to nearby buildings.
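Those headline numbers follow directly from the lane count and signaling rate. As a quick sanity check (assuming, as the 108 GBps figure implies, roughly 10 raw bits per transferred byte of encoding and protocol overhead – our inference, not a Spectra Logic statement):

```python
# Sanity-check the OSW-2400 bandwidth figures quoted above.
# Assumption (ours, not Spectra Logic's): the aggregate GBps number
# counts ~10 raw bits per transferred byte to cover line overhead.
LANES = 48
GBPS_PER_LANE = 22.5  # SAS-4 signaling rate per lane

raw_tbps = LANES * GBPS_PER_LANE / 1000      # total raw bandwidth, Tbps
aggregate_gbps = LANES * GBPS_PER_LANE / 10  # data transfer rate, GBps

print(raw_tbps)        # 1.08
print(aggregate_gbps)  # 108.0
```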

Nathan Thompson

Spectra Logic CEO and chairman Nathan Thompson stated: “The Spectra OSW-2400 Optical SAS Switch represents a unique and transformative step forward in datacenter tape connectivity. By reducing or eliminating the need for expensive Fibre Channel infrastructure, organizations can simplify their tape operations and achieve greater flexibility, while maintaining the performance and reliability they expect.”

Fibre Channel supports much larger-scale storage networking than SAS, with distances exceeding a kilometer and speeds of up to 64 Gbps, with 128 Gbps on its roadmap. But it costs more.

The company says OSW-2400 per-port connection costs are up to 70 percent less than comparable Fibre Channel infrastructure, resulting in savings on connectivity acquisition, maintenance, and upgrade costs.

Simon Robinson, Principal Analyst at Enterprise Strategy Group, part of Informa TechTarget, said: “When managing data at scale, the cost of access can be a significant component of overall storage costs. Extending SAS beyond the rack is a practical way to reduce these costs.” 

The switch has 1RU short-depth packaging with front or back mounting options, hot-swappable dual power supplies, redundant cooling fans, and “a low 50-watt maximum energy consumption.”

It supports SAS-3 tape drives including LTO-9 and IBM TS1170 Enterprise products, while maintaining backward compatibility with SAS-2 devices, including LTO-6, LTO-7, LTO-8, and IBM TS1160 Enterprise Tape Drives. End device frame buffering (EDFB) optimizes bandwidth when using slower devices, improving data transfer rates by as much as 50 percent.

The switch also supports T10 and port-to-port zoning, enabling one-to-many, many-to-one, and many-to-many sharing of Spectra Logic tape libraries. Switch cascades can expand the number of fabric connections or extend connection distances beyond the limits of a single switch.

The OSW-2400 features 12 x 4 wide Mini-SAS HD ports. Each port is capable of connecting four devices such as servers or tape drives. Switch configurations start at 12 SAS-4 lanes (3 ports) and scale in increments of 12 lanes (3 ports) up to a maximum of 48 lanes (12 ports). Both active optical and passive cables are supported. Field-installed port upgrades are available in increments of 12 lanes (3 ports). A maximum of 40 tape drives per switch can be configured.

For high-availability configurations, a second switch may be deployed in a dual-ported configuration. A 10/100/1,000 Mbps Ethernet port and application software are also included for out-of-band management access.

The Spectra OSW-2400 optical SAS switch is available for Q1 delivery. For complete specifications, more information, or to schedule a consultation, click here.

Bootnote

The SAS protocol is developed and maintained by the T10 technical committee of the International Committee for Information Technology Standards (INCITS), while the technology is promoted by the SCSI Trade Association (STA).

New year, new data strategy

COMMISSIONED: The new year has arrived, bringing with it the usual resolutions: get fitter, read more books, maybe finally tackle that ever-growing email backlog.

But for tech leaders, this time of year isn’t just about personal betterment; it’s about rethinking how to unlock business value. And in 2025, one resolution towers above all: getting your data strategy AI-ready.

Let’s face it, data is the lifeblood of modern business, but without a solid infrastructure, it’s like trying to train for a marathon by eating donuts and binge-watching TV. (Tempting, but not effective.) The explosive growth of artificial intelligence (AI) has made it crystal clear that traditional data systems – those dusty warehouses and disjointed lakes – are holding organizations back. This year, it’s all about building a scalable, secure, and flexible data strategy that doesn’t just keep up with AI but accelerates it.

According to GlobalData, global data creation is forecast to exceed a mind-boggling 175 zettabytes this year. A significant chunk of that data will likely be unstructured – images, videos, and text. AI thrives on this diversity, but only if your data strategy can handle it. Unfortunately, many organizations are relying on legacy systems designed to manage spreadsheets, not neural networks.

Remember dial-up internet (I bet many of you can even recall the sound)? Slow, clunky, and completely unsuited to today’s needs. That’s what legacy data systems feel like in an AI-powered world. Traditional data warehouses weren’t built for the massive throughput, variety, and velocity of modern AI workloads. Worse, they struggle to support semi-structured and unstructured data – the very types AI feeds on.

To add insult to injury, fragmented data across silos makes it nearly impossible to draw actionable insights. Data lakes were supposed to fix this but often turned into data swamps – disorganized, inaccessible, and riddled with performance bottlenecks.

Enter 2025’s shiny new alternative: the AI-driven data platform.

A resolution worth keeping: The Dell AI data platform

Let’s pause the doom and gloom and talk solutions. The Dell Data Platform for AI is like upgrading from that rusty, old station wagon (your legacy system) to a sleek, self-driving EV (AI-ready infrastructure). Here’s how it powers your data strategy to meet the demands of AI:

– Open, flexible, and secure architecture
The platform’s open design supports a wide variety of data types and sources. Whether you’re working with structured sales data, semi-structured IoT logs, or unstructured video content, the Dell Data Platform for AI ensures everything is accessible, queryable, and ready for analysis.

– High performance for GPU-accelerated workloads
AI workloads demand serious compute power, and GPUs are the engines of choice. The platform is engineered to maximize performance, from model training and inferencing to checkpointing during development. It scales effortlessly, letting you process petabytes of data without breaking a sweat.

– Unified Dell Data Lakehouse with Dell PowerScale
Forget the chaos of separate systems. The Dell Data Lakehouse unifies storage and compute, enabling high-speed querying and analytics. Dell PowerScale’s scale-out storage architecture is optimized for AI, ensuring seamless data flow for model refinement and development. It’s the ultimate tool for turning disorganized lakes into productive powerhouses.

Why data governance is your secret asset

AI is only as good as the data that feeds it, and this is where data governance comes into play. Poor governance leads to biases, inaccuracies, and costly compliance issues. With the Dell Data Platform, organizations gain self-service access to high-quality data while maintaining control over security, privacy, and compliance. Think of it as the Marie Kondo of data – keeping everything tidy and purposeful.

AI’s impact isn’t limited to tech giants. In media and entertainment, for example, AI-driven workflows have transformed movie-making. Think advanced visual effects rendering, real-time editing, and personalized viewer recommendations. At the heart of it all? Scalable storage solutions like Dell PowerScale.

Meanwhile, in manufacturing, predictive maintenance and automated quality checks are becoming standard thanks to AI models trained on enormous datasets. The same principles apply – flexible, high-performance storage makes these innovations possible.

This year, 75 percent of enterprises will shift from piloting to operationalizing AI, driving a 5x increase in streaming data volumes according to Gartner’s “AI Adoption Trends” 2023 report. And in its 2023 “Overcoming Data Siloes in AI” report, McKinsey and Company estimates that over 60 percent of companies cite data silos as the biggest obstacle to scaling AI. Elsewhere, Forrester’s “State of AI in Enterprises” report published in 2024 indicates that AI adoption has grown 270 percent in the past four years, and it’s not slowing down.

These stats underscore the urgency of modernizing your data infrastructure. Staying ahead of the curve requires not just investment but a strategic shift in how you think about data.

2025’s data strategy checklist

Ready to kickstart your resolution? Here’s a quick, five-step checklist:

– Audit your current data architecture: Identify gaps and pain points.

– Embrace unified platforms: Eliminate silos with a solution like the Dell AI Data Platform.

– Invest in scalable storage: Prioritize systems designed for high-performance AI workloads.

– Focus on governance: Implement robust policies to ensure data quality and compliance.

– Plan for the future: Choose solutions that can evolve with your business.

This year is a perfect opportunity to reimagine your data strategy. By adopting AI-ready, scalable storage solutions, you’ll do more than keep up with 2025’s challenges – you’ll thrive in them. So ditch the old dial-up mindset and embrace the high-speed potential of a modern data platform. Your AI models (and your business) will thank you.

Happy New Year – here’s to resolutions worth keeping!

For more information about Dell Data Platform for AI, please visit us online at www.delltechnologies.com/powerscale.

Brought to you by Dell Technologies.

Commvault latest to cut deal with CrowdStrike for cyber resilience

Commvault has joined other data protection suppliers by integrating CrowdStrike’s malware-detecting Falcon XDR into its Commvault Cloud to better detect and respond to cyber threats against its customers.

Commvault Cloud, previously called Metallic, is a SaaS backup service protecting hybrid environments against data loss. Its own backup stores are protected against cyberattacks with a ThreatScan facility, which inspects the backup data for signs of malware access and compromise. CrowdStrike provides a Falcon XDR (Extended Detection and Response) service that checks customer endpoint and network data for IOCs (indicators of compromise), using AI techniques to look for real-time evidence of malware attacks in access behavior data and system telemetry.

A CrowdStrike alert can be used by Commvault Cloud to trigger a ThreatScan check for affected data, and restore compromised data to a known good state using its backups.
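The alert-to-scan-to-restore loop amounts to event-driven glue between the two services. Here is a minimal sketch of that flow, in which every class and method name is hypothetical rather than a real CrowdStrike or Commvault API identifier:

```python
# Hypothetical sketch of the workflow described above; the names here
# are illustrative, not real CrowdStrike or Commvault API identifiers.
from dataclasses import dataclass

@dataclass
class IocAlert:
    """An indicator-of-compromise alert for one protected host."""
    host: str
    indicator: str

def handle_alert(alert, scanner, restorer):
    """On an XDR alert, scan the host's backups; restore if compromised."""
    result = scanner.threat_scan(alert.host)      # inspect backup copies
    if result["compromised"]:
        restorer.restore_last_clean(alert.host)   # roll back to known good state
        return "restored"
    return "clean"
```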

Alan Atkinson

Alan Atkinson, Commvault’s Chief Partner Officer, stated: “By partnering with CrowdStrike, we are combining our deep expertise in cyber resilience with their advanced threat detection capabilities, empowering our joint customers with faster response times and a stronger cyber resilience posture.”

Atkinson said: “The average organization has seen eight cyber incidents in the last year, four of which are considered major.¹”

Commvault says this CrowdStrike deal provides proactive threat detection, before any ransomware message, allowing businesses to identify threats earlier, respond faster, and mitigate attacks effectively. They can clear out infected data and potentially prevent a ransomware attack. If customers have separate security and IT operations teams, the partnership can enable a unifying workflow for more efficient attack responsiveness, helping to maintain business uptime.

Dell, Cohesity-Veritas, and Rubrik have all partnered with CrowdStrike to achieve the same level of malware protection for their customers. 

The partnership provides another defense against malware attacks for Commvault Cloud customers alongside its ThreatWise detection facility, which deploys honeypots as attractive malware targets, luring them in for recognition and response.

CrowdStrike is a popular malware threat detection supplier, claiming that 300 of the Fortune 500 companies use its services. We expect other data protection and cyber-resilience suppliers will strike partnership deals with CrowdStrike during 2025.

Bootnote

1. According to DeMattia, A., & Gruber, D. (2024). Trends in CR/DR Plans: Contrast and Convergence – Final Survey Results [Unpublished data]. TechTarget, Inc.

Object First’s business is scaling up

Object First, the Veeam-specific object-based backup appliance startup, says it registered 389 percent bookings growth in 2024 along with a 374 percent increase in transacting partners.

The company was started in 2022 by Veeam founders Ratmir Timashev and Andrei Baronov to provide the S3-based Ootbi backup appliance, launched in February 2023, offering immutable object storage in a four-way clustered, non-deduplicating appliance. There are now three node raw capacity points: 64 TB, 128 TB, and 192 TB, with an NVMe SSD landing zone and disk drives for actual retention. It also offers on-prem to public cloud data copies. The company has been growing sales fast, and recruiting staff and worldwide channel partners.

David Bennett

CEO David Bennett stated: “Organizations worldwide are increasingly prioritising secure, easy-to-use and powerful data protection solutions, and our Ootbi appliances continue to set the standard for true immutability and simplicity. With the momentum we’ve built this year, we’re well positioned for even greater achievements in 2025.”

It issues growth statistics primarily as bookings and transacting-partner percentage growth, and secondarily as customer numbers.

Object First said 2024 bookings grew 389 percent over 2023, “including over triple-digit growth in six-figure and above deals,” and said the partner count increased 262 percent. It said its global customer base increased by 374 percent year-over-year in 2024 compared to 2023, although no numbers were provided.

We have kept a tally of its publicly declared quarterly percentage growth statistics, and the numbers show declining growth rates, as you would expect given the small initial base of sales, partners, and customers:

Object First’s discussed bookings, partner and customer numbers percentage growth rates in 2024

Nevertheless, there has been an increase in the percentage partner growth rate from Q3 2024 to Q4, reflecting Object First’s emphasis on growing its partner base. It said T-note became Object First’s first Platinum Partner in LATAM and there was “significant partner expansion in EMEA, where the number of transacting partners nearly quadrupled.” New partnerships included Axians and DMS in the UK, Insight in France, ContecNow and Seidor Soluciones Globales in Spain, and German partner Erik Sterck. In the USA, Object First strengthened its presence in Southern California through its collaboration with GST, serving state, local, and healthcare organisations.

The company has not revealed actual customer or partner numbers. It did say it had 57 appliance deployments in Q1 2024 but has not disclosed deployment numbers since then.

During 2024 it hired 104 employees across 12 countries and opened an office in Barcelona. Object First notes that Veeam’s latest 12.x releases open the door to backup storage capacities beyond 3 PB, as Ootbi clusters can now be used as multiple extents in the Veeam SOBR (Scale-Out Backup Repository).

Get an Object First product white paper at the bottom of this webpage.

Hammerspace gets WEKA’s ex-CRO as its sales boss

Data orchestrator Hammerspace has orchestrated its first Chief Revenue Officer, Jeff Giannetti, who abruptly left the CRO spot at WEKA last month.

David Flynn

Hammerspace sells Global Data Platform software based on parallel NFS and uses it to orchestrate files and objects stored on other suppliers’ filers and object stores, with the ability to support NVIDIA’s GPUDirect and ship file data fast to GPUs. WEKA has developed its own fast-access file system software for high-performance computing, AI processing by GPUs (also supporting GPUDirect), and enterprise high-speed file access. WEKA’s president and CFO left at the same time as Giannetti.

David Flynn, Hammerspace founder and CEO stated: “The days of data silos are behind us. Organizations worldwide are unifying their unstructured data through orchestration, empowering AI, driving GPU performance and unlocking unparalleled efficiency. Jeff will be instrumental in building a high-performing global team of sales leaders to help organizations harness the full potential of their data.”

Chris Bowen has been SVP Sales at Hammerspace since August 2021, with Jim Choumas filling the VP Channel Sales role since the same date. Giannetti is a heavyweight hire, having been CRO at Cleversafe (acquired by IBM) and Deep Instinct, and having held several leadership positions at Sun Microsystems, Veeam, DigitalOcean, and Forcepoint. He worked in NetApp’s sales organization for more than a decade, during which the company grew from $700 million in revenue to over $6 billion.

Jeff Giannetti

The company says Giannetti will “drive the expansion of the company’s rapid growth in global demand” and “lead the global sales team to accelerate revenue growth through new customer acquisition and use case expansion within existing customers.”

Giannetti said: “AI is trending to be the biggest technical development in our lifetime, but the challenge for organizations is creating a data infrastructure that can provide high-performance access to unstructured data anywhere. Hammerspace solves these challenges using a standards-based approach, at a massive scale, while providing orchestration and global namespace capabilities that are wholly unique. I’m thrilled to be a part of Hammerspace, a world-class team enabling organizations to experience the full value of their investments in their AI infrastructure and ecosystem.”

iXsystems says unified TrueNAS open storage software almost here

The latest version of the TrueNAS open source storage software from iXsystems will soon be available, according to the company’s latest update notes.

The release is called TrueNAS Fangtooth, and promises to improve performance, security, and scalability for both users and developers. Fangtooth is the successor to TrueNAS Electric Eel.

Fangtooth is based on TrueNAS SCALE, and marketed as the common TrueNAS Community Edition. TrueNAS Fangtooth will be an upgrade for both SCALE 24.10 and CORE 13.x users, and introduces new features for Community and Enterprise users.

The more mature CORE was seen as delivering better performance than SCALE, and it needs less CPU power and memory. TrueNAS CORE is the original successor of FreeNAS, based on the FreeBSD operating system. With the introduction of TrueNAS SCALE in 2022, more modern Linux capabilities were introduced to TrueNAS, enabling adoption by a much larger community.

SCALE was initially created as a fork of CORE, with each version continuing their development, bug fixes, and security updates independently.

But by the back end of last year, there were roughly equal numbers of SCALE and CORE users, with SCALE having doubled its system count over the year, and CORE “declining slowly” as users migrated to SCALE, said iXsystems.

“The benefits of unification will be enormous for the community, both users and developers. Before the end of 2025, we expect most TrueNAS users will be on Fangtooth,” the supplier said.

Fangtooth (aka TrueNAS 25.04) is already available for developers, and the BETA1 version is expected to be ready by February 11. Bug fixes, feature updates, and “ongoing polishing” will continue until the targeted release date for a “stable” community version on April 15.

Notable new capabilities in Fangtooth include TrueNAS Versioned API, which allows third parties to use APIs to control TrueNAS, knowing that future versions of TrueNAS will honor the same API schemas.

“TrueNAS can evolve and improve in a more organized manner,” said iXsystems, “allowing external tools to run with longer stability.” Future versions of TrueCommand are expected to enhance system longevity. User-Linked API Tokens are also included to provide secure and restricted management.

In addition, Fast Dedup promises a “significant reduction” in storage media costs, and iSCSI Block Cloning allows virtualization solutions to benefit from using iSCSI XCOPY commands for efficient and rapid data copying.

There is also upgraded containerization and virtualization, with TrueNAS integrating Incus support, and an upgraded WebUI with support for native LXC containers.

Also, by upgrading to Linux kernel 6.12 LTS, Fangtooth will support new hardware. This will be an advantage for both CORE and SCALE users upgrading their hardware.

Apps in Electric Eel use TrueNAS’s host IP address. Fangtooth enables IP alias addresses to be created and assigned to one or more apps. A number of other new features can be viewed here.

By July, it is expected that Fangtooth will be recommended to enterprise users.

RiverMeadow flows on-premises Pure Storage workloads to the cloud

Workload and data mobility player RiverMeadow has upgraded its platform to help extend Pure Storage’s on-premises Evergreen storage-as-a-service to the cloud.

RiverMeadow says it offers its Workload Mobility Platform and services to allow businesses to migrate and “optimize” workloads with “unprecedented scale, speed, and certainty.”

With the upgrade, customers will now be able to use RiverMeadow to access Pure Cloud Block Store on platforms like Azure, AVS (Azure VMware Solution), and AWS. “This advancement provides unprecedented flexibility and elasticity for cloud-based workloads and disaster recovery,” claimed RiverMeadow.

Jim Jordan

The two companies said the collaboration represents a “significant step forward” in reducing cloud storage costs through enhanced data optimization during workload migration. RiverMeadow says it is offering “fixed-price” migration capabilities to support Pure Storage’s enterprise-grade data platforms.

Jim Jordan, president and CEO of RiverMeadow, explained: “For customers moving storage-bound workloads to Azure, AVS, or AWS, Pure Storage now offers the ability to scale up their storage capacity without increasing the number of overall nodes. RiverMeadow’s integration with Pure Cloud Block Store means customers can move workloads faster, while simultaneously optimizing the target architecture.”

Cody Hosterman

“Pure Storage is committed to innovating to meet the evolving needs of our customers, and our work with RiverMeadow is an example of this commitment,” said Cody Hosterman, senior director of cloud product management at Pure Storage. “As businesses continue to migrate workloads due to shifts in strategy, cost management, or as part of their cloud efforts, our collaboration provides scalable and efficient solutions that enable customers to leverage Pure Storage capabilities to consume the cloud dynamically and cost-effectively.”

As part of a wide-ranging upgrade to its PowerMax high-end enterprise block storage arrays last October, Dell offered “simple options” to move live PowerMax workloads to and from its on-demand APEX Block Storage, which can be located on-premises and in the AWS and Azure public clouds. The company said the offer relied on RiverMeadow’s technology to do it.

Microsoft proposes Managed Retention Memory to tackle AI workloads

Microsoft researchers have proposed Managed Retention Memory (MRM) – storage-class memory (SCM) with short-term persistence and IO optimized for AI foundation model workloads.

Sergey Legtchenko

MRM is described in an arXiv paper written by Microsoft Principal Research Software Engineer Sergey Legtchenko and other researchers looking to sidestep high-bandwidth memory (HBM) limitations in AI clusters. They say HBM is “suboptimal for AI workloads for several reasons,” being “over-provisioned on write performance, but under-provisioned on density and read bandwidth, and also has significant energy per bit overheads. It is also expensive, with lower yield than DRAM due to manufacturing complexity.”

The researchers say SCM approaches – such as Intel’s discontinued Optane and potential alternatives using MRAM, ReRAM, or PCM (phase-change memory) – all assume a sharp divide between memory (volatile DRAM, which needs constant refreshes to retain data) and storage, which persists data for the long term, meaning years.

They say: “These technologies traditionally offered long-term persistence (10+ years) but provided poor IO performance and/or endurance.” For example: “Flash cells have a retention time of 10+ years, but this comes at the cost of lower read and write throughput per memory cell than DRAM. These properties mean that DRAM is used as memory for processors, and Flash is used for secondary storage.”

But the divide need not actually be sharp in retention terms. There is a retention spectrum, from zero to decades and beyond. DRAM does persist data for a brief period before it has to be refreshed. The researchers write: “Non-volatility is a key storage device property, but at a memory cell level it is quite misleading. For all technologies, memory cells offer simply a retention time, which is a continuum from microseconds for DRAM to many years.”

By tacitly supporting the sharp memory-storage divide concept, “the technologies that underpin SCM have been forced to be non-volatile, requiring their retention time to be a decade or more. Unfortunately, achieving these high retention times requires trading off other metrics such as write and read latency, energy efficiency, and endurance.”

General-purpose SCM, with its non-volatility, is unnecessary for AI workloads like inference, which demand high-performance sequential reads of model weights and KV cache data but lower write performance. The tremendous scale of such workloads requires a new memory class as HBM’s energy per bit read is too high and HBM is “expensive and has significant yield challenges” anyway.

The Microsoft researchers say their theorized MRM “is different from volatile DRAM as it can retain data without power and does not waste energy in frequent cell refreshes, but unlike SCM, is not aimed at long term retention times. As most of the inference data does not need to be persisted, retention can be relaxed to days or hours. In return, MRM has better endurance and aims to outperform DRAM (and HBM) on the key metrics such as read throughput, energy efficiency, and capacity.”

They note: “Byte addressability is not required, because IO is large and sequential,” suggesting that a block-addressed structure would suffice.

The researchers are defining in theory a new class of memory, saying there is an AI foundation model-specific gap in the memory-storage hierarchy that could be filled with an appropriate semiconductor technology. This “opens a field of computer architecture research in better memory for this application.”

Endurance requirements for KV cache and model weights vs endurance of memory technologies

A chart (above) in the paper “shows a comparison between endurance of existing memory/storage technologies and the workload endurance requirements. When applicable, we differentiate endurance observed in existing devices from the potential demonstrated by the technology.” Endurance is the length of time over which write cycles can be continued. “HBM is vastly over-provisioned on endurance, and existing SCM devices do not meet the endurance requirements but the underlying technologies have the potential to do so.”
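A back-of-envelope calculation illustrates why a continuously rewritten KV cache sets a far higher endurance bar than archival storage. All input figures below are illustrative assumptions, not numbers from the paper:

```python
# Illustrative endurance estimate for a device used as an inference
# KV cache. Every input here is our assumption, not the paper's figure.
SECONDS_PER_YEAR = 365 * 24 * 3600

write_rate_gbps = 100    # sustained write rate, GB/s (assumed)
capacity_gb = 10_000     # device capacity: 10 TB (assumed)
lifetime_years = 5       # service life (assumed)

# Total data written over the device's life, then expressed as
# full-device write cycles (a proxy for per-cell endurance needed).
total_written_gb = write_rate_gbps * lifetime_years * SECONDS_PER_YEAR
full_device_cycles = total_written_gb / capacity_gb
print(f"{full_device_cycles:.1e}")  # 1.6e+06
```

Roughly 1.6 million write cycles is orders of magnitude beyond flash endurance but well below what HBM-class DRAM provides, which is the gap the chart highlights.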

The Microsoft researchers say: “We are explicitly not settling on a specific technology, instead highlighting an opportunity space. This is a call for action for those working on low-level memory cell technologies, through those thinking of memory controllers, to those designing the software systems that access the memory. Hail to a cross-layer collaboration for better memory in the AI era.”

They conclude: “We propose a new class of memory that can co-exist with HBM, Managed-Retention Memory (MRM), which enables the use of memory technologies originally proposed for SCM, but trades retention and other metrics like write throughput for improved performance metrics crucial for these AI workloads. By relaxing retention time requirements, MRM can potentially enable existing proposed SCM technologies to offer better read throughput, energy efficiency, and density. We hope this paper really opens new thinking about innovation in memory cell technologies and memory chip design, tailored specifically to the needs of AI inference clusters.”

Storage news ticker – January 24

Observability and data management supplier Apica announced a Freemium version of its Ascent offering, providing free access to an enterprise-grade telemetry pipeline and intelligent observability, processing up to 1 TB/month of logs, metrics, traces, events, and alerts. Ascent is designed to help organizations centrally manage and automate their telemetry data workflows and gain insights from their data. It supports OpenTelemetry. Ascent Freemium users can upgrade to paid tiers as their needs evolve. 

CTO/CPO Ranjan Parthasarathy said: “With Ascent Freemium, we offer a comprehensive platform that consolidates telemetry data management and observability, while leveraging AI/ML workflows and built-in AI agents to significantly reduce troubleshooting time.” Sign up here.

Data orchestrator and manager Hammerspace is a finalist in three major categories at theCUBE Technology Innovation Awards: “Most Innovative Tech Startup Leaders” for CEO and co-founder David Flynn, the “HyperCUBEd Innovation Award – Private Company,” and “Top Data Storage Innovation.” Winners will be announced on February 18. The Most Innovative Tech Startup Leaders honors exceptional individuals from a B2B tech company who have significantly advanced the industry through groundbreaking ideas, leadership, and execution. 

In-memory grid provider Hazelcast has joined the STAC Benchmark Council.

On-prem, hybrid, public cloud, and SaaS app data protector HYCU is now a Dell Technologies Extended Technologies Complete (ETC) program member, one of only two data protection providers to be a member. The other is Druva. There is more information in CEO Simon Taylor’s blog post.

HYCU R-Cloud was named a winner in the 23rd annual TechTarget Storage Products of the Year awards, taking Bronze in the Backup and Disaster Recovery Hardware, Software, and Services category.

The IBM Storage Ceph for Beginners document has been updated to v2.0 and includes NVMe-oF Gateway and native SMB protocol support. NVMe-oF Gateway is said to be ideal for VMware and other high-performance block workloads. Download the document here.

Tape backup hardware supplier and service provider MagStor has joined the Active Archive Alliance. Pete Paisley, MagStor’s VP of Business Development, said: “As MagStor has grown to offer increasingly capable tape storage hardware, media, and data services, we seek opportunities to add our voice to the storage archive ecosystem to help customers better solve problems related to AI and exponential data growth at the lowest possible cost.” Active Archive Alliance members and sponsors include Fujifilm, MediQuant, Spectra Logic, Arcitecta, Cerabyte, IBM, Iron Mountain, Overland Tandberg, PoINT Software & Systems, QStar Technologies, Savartus, S2|Data, Western Digital, and XenData.

Wedbush analysts told subscribers they believe nearline disk drive sales volumes came in the low-to-mid 16 million unit range, implying an incremental few hundred thousand units each for Seagate and Western Digital. The firm said: “Generally, we see favorable trends continuing through the next few quarters with industry units likely holding around current levels and ASPs appearing set to lift modestly.”

The flash market was different: “For NAND, we believe the quarter was defined by sharper than expected ASP declines as enterprise demand dipped sharply.” It was due to three factors: continued workdowns in client device OEM SSD and module inventories; a sharp drop in high capacity SSD demand tied to GPU server shipment delays; and a push by hyperscale customers to moderate pricing.

Solidigm announced a multi-year extension of its agreement with Broadcom on the use of high-capacity SSD controllers to support AI and data-intensive workloads. Broadcom’s custom controllers have served as a critical component of Solidigm SSDs for more than a decade, with more than 120 million units of Solidigm SSDs shipped featuring Broadcom controllers. The agreement also includes collaboration on Solidigm’s recently announced 122 TB D5-P5336 datacenter SSD, at the time of publication the world’s highest capacity PCIe SSD.

SMART Modular announced that its 4-DIMM and 8-DIMM CXL (Compute Express Link) memory Add-in Cards (AICs) have passed CXL 2.0 compliance testing and are now listed on the CXL Consortium’s Integrators’ List.

Multi-protocol storage array provider StorONE has added Kubernetes integration to its ONE Enterprise Storage Platform product. All customers can use the Kubernetes functionality as part of their existing license, free of charge, after upgrading to the latest version, gaining access to StorONE’s data protection and security capabilities in Kubernetes environments, with features like auto-tiering, snapshots, and replication working seamlessly alongside Kubernetes. More information here.

TrendForce said a magnitude 6.4 earthquake struck southern Taiwan, with its epicenter in Chiayi, at 12:17 AM local time on January 21. TSMC and UMC’s Tainan fabs, which experienced seismic intensity levels above 4, initiated immediate personnel evacuation and equipment shutdowns for inspections. While no critical equipment damage was reported, unavoidable debris was generated inside furnace equipment. Operations at these facilities began resuming on the morning of January 21, with TrendForce noting that the earthquake’s impact on production appears to be within controllable limits.

Dell opts for CrowdStrike to up threat detection game

Dell is trying to beef up data protection services to customers via its security operations centers (SOCs) in a bid to stop cybercriminals who are targeting backup and restore systems in the datacenter.

It has expanded its managed detection and response (MDR) services through an agreement with CrowdStrike. Dell is now using CrowdStrike’s Falcon Next-Gen SIEM (security information and event management) as part of its MDR, to “simplify” threat detection and response with a unified platform, “boosting visibility” and helping to prevent breaches.

The combo promises to give enterprises visibility into their infrastructure that’s “not possible with off-the-shelf tools”.

Dell says cyber baddies are increasingly targeting data protection environments first, because they are fundamental to recovering and restoring corrupted data. Currently, many IT security teams rely on the infrastructure to provide system log information to a SIEM tool. But this can create a flood of unprioritized alerts that security teams have to spend significant amounts of time manually reviewing and addressing, adding another layer of complexity to managing infrastructure security, according to Dell.

As an alternative, Dell and CrowdStrike have developed more than 60 unique indicators of compromise (IOCs) tailored specifically for Dell PowerProtect Data Domain and PowerProtect Data Manager. The IOCs are surfaced within Falcon Next-Gen SIEM’s AI-powered detections, ranked by severity, and provide forensics data to Dell security analysts to “accelerate” responses, we’re told.

Examples of the IOCs include disabled multi-factor authentication, login from a public IP address, mass data deletion, and multiple failed login attempts.
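For illustration only, the logic behind indicators like these can be sketched in a few lines of Python. The event schema, field names, and thresholds below are hypothetical examples, not Dell’s or CrowdStrike’s actual detection rules:

```python
# Toy indicator-of-compromise (IOC) check over backup-appliance audit events.
# Event format and thresholds are invented for illustration.
from collections import Counter
from ipaddress import ip_address

def evaluate_iocs(events):
    """Return a severity-ranked list of triggered IOCs."""
    alerts = []
    failed_logins = Counter()
    deletions = Counter()

    for e in events:
        if e["action"] == "mfa_disabled":
            alerts.append(("high", f"MFA disabled by {e['user']}"))
        elif e["action"] == "login" and e["result"] == "success" \
                and ip_address(e["src_ip"]).is_global:
            alerts.append(("medium", f"Login from public IP {e['src_ip']}"))
        elif e["action"] == "login" and e["result"] == "failure":
            failed_logins[e["user"]] += 1
        elif e["action"] == "delete":
            deletions[e["user"]] += e.get("count", 1)

    for user, n in failed_logins.items():
        if n >= 5:  # hypothetical threshold
            alerts.append(("medium", f"{n} failed logins for {user}"))
    for user, n in deletions.items():
        if n >= 1000:  # hypothetical mass-deletion threshold
            alerts.append(("high", f"{user} deleted {n} objects"))

    # Rank by severity, mirroring how a SIEM surfaces detections to analysts
    order = {"high": 0, "medium": 1, "low": 2}
    return sorted(alerts, key=lambda a: order[a[0]])
```

The severity ranking at the end reflects the approach described above: detections are ordered so analysts see the highest-risk events first.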

Mihir Maniar

“Extending MDR to cover data protection infrastructure and software enhances visibility and proactive threat detection across the environment, providing exceptional protection from threats,” said Mihir Maniar, vice president, infrastructure, edge and security services portfolio, Dell Technologies. “Dell and CrowdStrike have developed advanced threat detection capabilities to provide actionable, high-quality data to our security experts. With this expansion, we’ve extended our MDR service to provide end-to-end coverage across IT environments.”

“Falcon Next-Gen SIEM provides Dell MDR with a powerful, foundational new platform to seamlessly ingest rich data backup and protection telemetry, and rapidly detect and respond to threats,” added Daniel Bernard, chief business officer, CrowdStrike. “Together, we look forward to delivering the technology and services that customers need to transform security operations, protect critical data, and stop breaches.”

This isn’t the first time that Dell has integrated its services with third-party technologies to boost protection. Dell’s on-premises and in-cloud PowerProtect Cyber Recovery vault products use Index Engines’ CyberSense software to provide full content indexing and searchability for ransomware activity. IBM’s Storage Defender product also uses CyberSense software, as does Infinidat’s InfiniSafe Cyber Detection.

Last year, both Rubrik and Cohesity announced service integration deals with CrowdStrike to improve their threat protection offer to customers.

Dell MDR services are currently available in 75 countries.

Scale Computing claims sales growth amid demand for VMware alternative

Edge hyperconverged infrastructure player Scale Computing is claiming strong annual growth for its solutions across the market.

The company, which is privately owned (so B&F has no way of verifying its financial results), said software sales rose more than 45 percent year-on-year in 2024, and it “more than doubled” its number of new customers. In the fourth quarter of the year, the business said software sales jumped 77 percent, and new logos were up 350 percent compared to the last quarter of 2023.

Scale Computing chose not to reveal its actual dollar sales, nor its profit figures.

Regarding overall growth, it claimed customers were increasingly seeing Scale as an alternative virtualization option to VMware, as businesses and the channel wrestle with licensing and support changes since Broadcom acquired VMware in 2023. It added that AI inference solutions were also driving sales.

Jeff Ready, Scale Computing
Jeff Ready

“We currently see an unprecedented opportunity to enable the best outcomes for our customers and partners as they navigate the industry disruption caused by Broadcom and VMware,” Jeff Ready, CEO and co-founder of Scale Computing said in a statement. “Scale Computing Platform (SC//Platform) provides a major upgrade to VMware by providing a hypervisor alternative, while simultaneously enabling edge computing and AI inference at the edge. Our partners and customers get a two-for-one: a solution to today’s Broadcom problem, and a technology roadmap into the future of edge and AI.”

While not revealing the value, Ready claimed the firm was seeing “record profitability” on “record demand.”

The company claims that SC//Platform reduces downtime “by up to 90 percent” and “decreases total costs by up to 40 percent” compared to VMware, through “simpler” management, integrated backup and disaster recovery, built-in “high availability,” and “effortless scalability.”

In Q4, Scale Computing launched the SC//Fast Track Partner Promotion, offering new resellers a free hyperconverged edge computing system to experience the company’s technology. It also announced a new agreement with 10ZiG to provide “managed, secure, and flexible” virtual desktop infrastructure (VDI) by combining 10ZiG’s hardware and software tech with SC//Platform.