
Komprise adds AI-powered PII detection to protect sensitive data

Komprise automated AI data pipelines can now detect and protect personally identifiable information (PII).

The unstructured data management supplier's Smart Data Workflow Manager product is aimed at organizations wanting to prevent the leakage of PII and other sensitive data. It says IT teams fear the risks of sensitive data “lurking where it shouldn’t be,” and these risks are increasing as unstructured data grows. The popularity of GenAI is exacerbating the issue.

Kumar Goswami

Komprise points out that storage admins are generally responsible for data governance and compliance but lack ways to do this systematically across their data estate. CEO Kumar Goswami stated: “The risk of sensitive data breaches is escalating and thereby paralyzing organizations from using AI. We are pleased to systematically reduce sensitive data risks so that our customers can improve their cybersecurity and provide data governance for AI ingestion with the new sensitive data detection and mitigation capabilities in Komprise Smart Data Workflow Manager.”

Co-founder, president, and COO Krishna Subramanian said this feature was coming in an interview with Blocks & Files earlier this month. Smart Data Workflow Manager now includes:

  • Standard PII detection: Select which PII data types to scan for such as national IDs, credit card numbers and email addresses. Multiple classifications are supported to identify multiple types of PII within any given file.
  • Custom Sensitive Data Detection: Customers can find any text patterns in their data via both keyword and regular expression (regex) searches to identify specific data formats like employee IDs, machine or instrument IDs, product or project codes, or PHI data like healthcare-system-specific patient record IDs (a minimal regex sketch follows this list).
  • Scans Sensitive Data in Place: Executes locally behind enterprise firewalls so sensitive data stays in place, unlike cloud-based data detection services.
  • Remediate and Move: Once PII data is detected, users can set up a workflow to confine the data or move it to a safe location.
  • Pre-Process for AI Ingest: Sensitive data detection can be a pre-process step for an AI ingest workflow to eliminate sensitive data leakage to AI.
  • Ongoing Workflow: Users can set workflows to run periodically, so Komprise automatically finds and acts on any new sensitive data for ongoing detection, tagging and mitigation, with full audit capabilities.
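As a rough illustration of how keyword-plus-regex scanning of the kind described in the list above can work in principle, here is a minimal Python sketch. The patterns, keywords, and file handling are invented for illustration and are not Komprise's implementation.

```python
# Minimal sketch of keyword-plus-regex sensitive-data scanning.
# All patterns and keywords below are illustrative assumptions, not Komprise's rules.
import re
from pathlib import Path

PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "employee_id": re.compile(r"\bEMP-\d{6}\b"),  # hypothetical "EMP-" + six digits format
}
KEYWORDS = {"confidential", "patient record"}

def scan_file(path: Path) -> dict:
    """Return the regex and keyword hits found in one text file."""
    text = path.read_text(errors="ignore")
    hits = {}
    for name, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[name] = matches
    lowered = text.lower()
    found = [kw for kw in KEYWORDS if kw in lowered]
    if found:
        hits["keywords"] = found
    return hits

if __name__ == "__main__":
    for f in Path(".").rglob("*.txt"):
        result = scan_file(f)
        if result:
            print(f, result)  # a real workflow would tag, quarantine, or move the file
```

In a production workflow, a scan like this would run in place behind the firewall and feed its findings into tagging, confinement, or move actions, as the list above describes.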

The Komprise software maintains a full audit record of all data processed by any workflow, such as copying data to a location for ingestion by an AI or ML system.

Komprise Smart Data Workflows, along with the new sensitive data detection and regex search, are currently in early access for customers and partners. They will be generally available by the end of Q1 as part of the Komprise Intelligent Data Management Platform.

Scality launches pay-as-you-go ARTESCA storage for Veeam Backup

Scality has an all-new pay-as-you-go licensing program featuring ARTESCA as a software storage backup target for Veeam Backup-as-a-Service offerings. It’s aimed at Scality Cloud & Service Providers (SCSP), meaning cloud providers, IT service providers, and VARs. 

ARTESCA is a cloud-native version of Scality’s RING object storage, co-designed with HPE, and S3-compatible. It can be a backup target for Veeam and there is a hardware appliance version specifically for Veeam customers. 

Amaury Dutilleul-Francoeur, Veeam

Amaury Dutilleul-Francoeur, VP of EMEA Channels and Alliances at Veeam, stated: “Our joint customers appreciate a bundled Veeam + Scality solution offering, which provides solid data resilience capabilities. It’s exciting to now see Scality launch its own global Cloud Service Provider Programme, as it’s a go-to-market model that has seen continuous growth over the years. At Veeam, we are particularly proud of our network of Veeam Cloud & Service Provider (VCSP) partners and the quality of service they offer to hundreds of thousands of customers.”

This ARTESCA PAYG pricing model aligns with Veeam’s pay-as-you-go licensing framework. Scality claims it can minimize sales friction, streamline order processing, and enhance backup functionality with reduced complexity, “making ARTESCA the premier storage solution for service providers who offer Veeam Backup-as-a-Service.”

Scality says its ARTESCA PAYG will help SCSPs keep operations streamlined. Customers can choose a one, two, or three-year ARR commitment, and, with dynamic monthly billing, only pay based on utilized storage capacity, ensuring costs align directly with monthly revenue. The cost savings are important, and Scality says the backups are secure because of its CORE5 cyber-resilience features, which go beyond immutability.

The competition to provide object storage-based Veeam backup targets is intensifying. Object First, for example, announced 389 percent bookings growth in 2024.

Eric LeBlanc, Scality

Eric LeBlanc, GM of ARTESCA and Channel Chief, said: “ARTESCA’s unprecedented growth in 2024 underscored its flexibility, simplicity, and impact. With this launch, we’re enabling Veeam Cloud & Service Providers and partners worldwide to deliver comprehensive security, performance, scalability, and cost efficiency to their customers.”

The growth LeBlanc is referring to was 500 percent year-over-year, all through partners, with LeBlanc positioning the product as a channel offering including ransomware protection.

Scality partners can upsell with ARTESCA on existing Veeam systems “to build a solid ARR model as capacity increases over time.” ARTESCA PAYG starts at 50 TB of capacity and can scale out as customers’ storage needs grow, with no stated upper limit.

Maybe we’ll see similar ARTESCA programs for other major backup brands, such as Cohesity/Veritas, Commvault, and Rubrik.

The ARTESCA pay-as-you-go software offering for Veeam Cloud & Service Providers and partners is globally available now through select Scality distribution partners. There’s more about ARTESCA and the pay-as-you-go licensing model here.

How legacy storage infrastructure could endanger your future

SPONSORED FEATURE: There can’t be many tech leaders who don’t realize that AI relies on vast amounts of data.

Yet it’s not always clear that everyone has grasped just how fundamental their storage choices are to the success of their AI initiatives.

It’s not for want of spending. The first half of 2024 saw organizations increase spending on compute and storage infrastructure for AI deployments by 37 percent to $31.8bn, according to research firm IDC. Almost 90 percent of this was accounted for by servers, with 65 percent being deployed in cloud or shared environments. The storage element increased 36 percent as organizations strive to manage the massive datasets needed for training as well as data repositories for inference.

Gartner’s John-David Lovelock also sounded a warning in October, as he predicted increased spending on AI infrastructure in 2025: “The reality of what can be accomplished with current GenAI models, and the state of CIOs’ data, will not meet today’s lofty expectations.”

In fact, CIOs could be justified in asking whether they are pouring all that time and money into fighting what might prove to be the last storage war. AI, and transformations in general, have implications not just for how much data and storage is used, but the way that it is used and how it is provisioned and managed.

As HPE’s vice president of product management Sanjay Jagad explains: “Storage has gone from a repository of data value to where value gets created through data.”

The challenge for IT is making sure that’s what happens. For all the obsession with GPUs, Jagad says, those powerful processors cannot deliver AI on their own. “You need to still understand what data you have, what value you want to derive out of it, and how are you going to monetize it?”

Nor does it help that the majority of customers’ data is “dark,” according to some reports – i.e. tied up in repositories that don’t allow live access, or in formats that are hard to analyze. Or even languishing on tape in off-site facilities. That’s because traditional – legacy – storage systems were designed for legacy, monolithic apps such as databases and ERP systems.

A question of control

Jagad explains: “They have a box, there are two controllers in them. They have a mid-plane, and a bunch of drives attached to it. That’s a legacy design.” And because of configurations like active-passive, typically one controller is doing all the work. These architectures are not easy to expand and tend to be siloed. Scaling them tends to increase, rather than solve, inefficiencies.

As a result, accommodating increasing application demands becomes an ongoing headache, because keeping pace with transformational changes in technology typically means a whole new box.

“That means you are going to now go do a forklift upgrade, or you have to do a rip and replace and do costly migrations,” says Jagad. That all adds up to constant firefighting just to keep things stable, with an inevitable firestorm of upgrades every three to five years. This model was clumsy enough when it came to those legacy applications. It becomes even more problematic when it comes to modern applications, including AI systems.

It’s not just that AI and other modern workloads require large amounts of data, increasing the burden on storage infrastructure. They are also effectively cloud native, meaning they are distributed, with a focus on orchestration and reusability. Access and scalability matter as much as raw performance, and developers launching new projects, or rapidly iterating on and expanding existing ones, need resources that can be provisioned easily.

All of which means IT departments need a storage platform that is not just performant but also disaggregated and easily scalable.

That’s the promise of HPE’s Alletra Storage MP B10000, which relies on “standardized, composable” building blocks, in the shape of compute nodes, JBOF expansion shelves for capacity with up to 24 drives per enclosure, and switches. The system, configured for block storage, can scale from 15TB to 5.6PB, with controllers upgraded one at a time and storage added in two-drive increments.

The B10000 is optimized for NVMe, offering 100Gbit/s data speeds today, with 200Gbit/s and 400Gbit/s on the roadmap. So, the fabric will not be a bottleneck, Jagad says. The controllers themselves are powered by AMD EPYC™ embedded CPUs, which, alongside high performance, are built to offer a high number of PCIe lanes, high memory bandwidth, and low energy consumption.

The architecture has been designed to liberate “ownership” of application data from a specific controller. Because they are stateless, each controller can see every disk beneath it, which means no silos. As a disaggregated system, if admins want to add more performance or more capacity, new controllers or new media or both can be added directly. That echoes the sort of iterative, rapid deployment we see in cloud native or DevOps paradigms.

Working over time?

More tangibly, under HPE’s Timeless program, hardware upgrades are included at no additional cost, meaning admins can take advantage of processor or media improvements. If that sounds more like a software subscription or XaaS offering, you wouldn’t be wrong.

As Jagad says, “The hardware is the smallest value story in terms of the whole economics.” Rather, the value comes in software and the disaggregated architecture, he argues.

“I’m pretty confident that you are going to get the value out of it through higher performance, better SLAs and being able to avoid the expensive migrations and forklift upgrades,” he adds. Modern media often has a much longer lifecycle, meaning that over time the storage blocks can be recast as a lower tier of storage rather than simply dumped. After all, as Jagad asks: “Do you really need all that media on your primary storage?”

The B10000 was built to meet a range of deployment needs across the hybrid cloud – including on-prem, a software-defined option for the public cloud, and a disconnected, air-gapped option for highly regulated environments. That ties in with the third pillar of HPE’s strategy, its HPE GreenLake cloud, which automates management of compute, networking, and storage across those on-prem, colo, and cloud environments, with a cloud-like experience from a single console.

HPE GreenLake itself has become increasingly AI-driven, Jagad explains, with GenAI-based hybrid intelligence in the back end that handles key tasks, such as managing and moving workloads and optimizing storage accordingly, as well as predictive space reclamation, upgrade recommendations, and running simulations.

It also draws on HPE’s Zerto continuous data protection technology to protect data from cyberattacks, including ransomware, and streamline disaster recovery, whether from malicious actions or the myriad other problems to which storage systems are prone. These innovations all add up. HPE’s internal analysis suggests that customers can expect a 30 percent TCO saving by avoiding the need for one forklift upgrade, and a 99 percent operational time saving due to AI-driven self-service provisioning. Earlier research on HPE GreenLake suggested a 40 percent saving on IT resources – and that was before the current level of AIOps.

HPE backs up its confidence in the platform with a 100 percent data availability guarantee, a 4:1 StoreMore total savings guarantee, and a 30-day satisfaction guarantee. But that’s not the most important benefit. “Imagine the resources that I can free up from an IT operation so that they don’t have to worry about life cycle support,” says Jagad.

That’s an immense amount of time and resources that can be redirected to other tasks. Not least figuring out the potential value of data, and what they can do to best realize it. “That’s where their time needs to go, right?” says Jagad. “The role of IT is changing.” But for IT to make the most of its potential, storage needs to change too.

Sponsored by Hewlett Packard Enterprise and AMD.

Market reacts after Commvault beats Q3 forecast

Commvault substantially outperformed its revenue guidance of $245 million by reporting a 21 percent year-over-year rise to $262 million in its third fiscal 2025 quarter, ending December 31.

The news had the company’s stock price seesawing wildly, starting the day at $163.613, falling to $142.16, recovering to $155.60, and then settling back to $153.16.

GAAP net income was down 35.7 percent to $11 million despite the revenue rise. The customer count rose 31 percent year-over-year to 11,500, including over 7,000 SaaS customers and 1,000 new customers in the quarter, around 200 from Clumio. Annual recurring revenue (ARR) was up 18 percent to $890 million, while subscription revenue increased strongly by 39 percent to $158.3 million, driving subscription ARR up 29 percent to $734 million. Within that, the company talked up its pure SaaS [Commvault Cloud] ARR, which rose by 71 percent to $259 million.

Sanjay Mirchandani, Commvault

President and CEO Sanjay Mirchandani said: “Once again, Commvault has delivered a record-breaking quarter with accelerating revenue growth … We saw a strong uptick in transaction volume, impressive growth in our land and expand business, and an acceleration in our organic growth rate … As we look to the future, we believe our unified platform which enables customers to anticipate, prepare for, and recover from inevitable attacks will be more critical than ever.”

Financial summary:

  • Gross margin: 81.5 percent – down from year-ago 82 percent
  • Operating cash flow: $30.1 million
  • Free cash flow: $29.9 million

CFO Jen DiRico talked about “the fantastic results this quarter and year-to-date,” telling the earnings call: “The growth in subscription revenue resulted from continued SaaS momentum and significant improvement in the volume of both term software and SaaS transactions compared to the prior year. Revenue from term software transactions over $100,000 increased by 18 percent, benefiting from a 30 percent rise in volume. This included more than a dozen wins over $1 million. In addition, we saw robust growth in landing new large enterprise customers this quarter, including Equinix, AXA, Vanderbilt University Medical Center, and DenizBank Financial Services.”

She added: “We added a record number of SaaS customers this quarter. We’re pleased with this increase in volume with small and medium-sized enterprises.” 

Costs went up in the quarter, according to DiRico. “Q3 operating expenses included costs associated with SHIFT in London, the onboarded Clumio employees, and our continued investments to accelerate revenue momentum, which included higher commissions and bonuses on a record sales result.”

The SaaS net dollar retention (NDR) rate was 127 percent, up from 125 percent a year ago. Adopting a cyber-resilience strategy is paying off, with Mirchandani saying: “Our cyber resilience marketing message is resonating with customers and we’re seeing record inflows and pipeline growth. And in Q3, we saw increased transaction volumes, strong close rates, accelerating customer numbers, and robust expansion activity.”

Commvault revenue
Note the steepening (accelerating) growth curves in the 2025 column

The final quarter’s outlook is for revenues of $262 million ± $2 million, a 17.3 percent year-over-year rise at the midpoint. The full FY 2025 revenue forecast is for $982.5 million ± $2.5 million, indicating a 17 percent year-over-year increase at the midpoint.

Commvault aims to achieve more than $1 billion in ARR in fiscal 2026, along with over $330 million in SaaS ARR – a >40 percent CAGR from fiscal 2024. DiRico said: “Looking forward to FY26, we are currently trending ahead of the financial targets that we originally shared, and we now expect to achieve those targets earlier than our initial plan.” Its previous guidance had been $1 billion ARR and $330 million SaaS ARR by the end of fiscal 2026.

The company is capitalizing on the opportunity to sell hybrid cloud products to SaaS customers and SaaS services to hybrid cloud customers. Such cross-selling represents a significant customer base revenue expansion opportunity for Commvault.

This is a steadily growing business that has made relevant acquisitions, like Clumio, and partnerships, and generated a profit for seven successive quarters. It’s surprising that investors don’t sit back and let it grow.

Hammerspace claims tenfold revenue growth for 2024

Hammerspace has reported a tenfold increase in revenue from 2023 to 2024, showcasing the confidence behind its recent hiring of Jeff Giannetti as chief revenue officer. As a private company, it doesn’t reveal revenue figures.

The startup, which orchestrates and manages data through a parallel NFS-based global namespace, was founded in 2018 by David Flynn to make an enterprise’s unstructured data accessible through one interface, appearing local to all users. In 2022, the company reported a 200 percent revenue increase compared to 2021. In June 2024, Flynn remarked: “For the first half of this year, we have already generated sales that are ten times higher than what we made for the whole of last year.” 

Allowing for some hyperbole, this indicates that sales in the first half of 2024 outpaced those in the second half of 2023. Meta announced in early 2024 that Hammerspace was providing it with data orchestration software to support data feeds for two clusters of 24,576 Nvidia H100 GPUs used in training the Llama large language model. The company also recorded its first customer wins in Germany, the Middle East, and India in 2024.

Flynn stated: “The days of proprietary software locking data to the systems that created it, risky and tedious manual data copies, and IT headaches caused by proprietary client-side software are over. Our 10X revenue growth in 2024 reflects how our Global Data Platform and data orchestration capabilities are redefining what’s possible ‒ unifying fragmented, siloed data and enabling industries to achieve unprecedented efficiency and innovation. The future of data isn’t just orchestrated; it’s limitless.” 

Hammerspace cited two primary drivers of its growth: the increasing demand for cost and power-efficient infrastructure to support GPU computing at scale, and the rapid adoption of hybrid cloud and multi-datacenter architectures. As an example of the first, the company announced a Tier 0 offering in November, using a GPU server’s direct-attached drives as part of a unified, fast-access, multi-tier namespace, and has already booked its first sale for it.

Hammerspace in MLPerf Storage

The company also outperformed competitor WEKA and others in the MLPerf storage benchmark, signed a go-to-market partnership with Hitachi Vantara, and integrated Cloudian’s object storage last year. Hiring Giannetti away from WEKA’s CRO position underscores a competitive focus for Hammerspace.

In 2024, Hammerspace reported a 32 percent increase in its customer base and highlighted its expansion into new geographies, alongside “strong customer retention and account expansion metrics.” The company’s Gross Revenue Retention (GRR) exceeded 95 percent, “reflecting strong customer satisfaction and retention strength,” while Net Revenue Retention (NRR) surpassed 330 percent, showcasing organic growth within its customer base.

NRR measures revenue from a company’s existing customer base over a period, accounting for lost customers or downsells and including additional spending or upsells. An NRR above 100 percent indicates growth within the customer base, with greater growth potential as NRR rises further.
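As a hedged worked example of the arithmetic (the figures below are invented for illustration and are not Hammerspace's):

```python
# Illustrative NRR calculation with invented figures (not Hammerspace's actual numbers).
def net_revenue_retention(start_arr, expansion, churn, downsell):
    """NRR = (starting ARR + expansion - churn - downsell) / starting ARR."""
    return (start_arr + expansion - churn - downsell) / start_arr

# Example: $10M starting ARR, $24M of expansion, $0.5M churned, $0.5M downsold
# gives an NRR of 330 percent, i.e. existing customers more than tripled their spend.
print(f"{net_revenue_retention(10.0, 24.0, 0.5, 0.5):.0%}")  # -> 330%
```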

Hammerspace increased its headcount by 75 percent in 2024, focusing on go-to-market and customer support teams. It recently launched operations in Asia, establishing resources in China, South Korea, Japan, Singapore, and India. 

This expanding business is now Gianetti’s responsibility to grow further. The company aims to attract new customers and capitalize on the rising demand for AI-driven solutions and hybrid cloud file and object storage access to fuel its next phase of growth.

Broadcom unveils Emulex Secure HBAs with post-quantum encryption

Emulex Secure host bus adapters (HBAs) from Broadcom now include post-quantum encryption algorithms and zero trust architecture to secure storage area network (SAN) data in flight.

HBAs are the endpoints in a Fibre Channel network connecting block access storage arrays to servers in what’s called a SAN. If data crossing the network is not encrypted, it could be accessed and copied. Encrypting HBAs secure all data crossing the SAN. Sufficiently powerful quantum computers, when and if developed, could potentially break existing encryption algorithms, necessitating the development of new cryptographic methods to prevent such attacks. Post-quantum cryptography is based on mathematical problems that are believed to be resistant to quantum attacks.

Government mandates and regulations, including the United States’ Commercial National Security Algorithm (CNSA) 2.0, the European Union’s Network and Information Security (NIS) 2 directive, and the Digital Operational Resilience Act (DORA), require enterprises to update IT infrastructures with post-quantum encryption algorithms and zero trust architecture.

Jeff Hoogenboom, Broadcom

Jeff Hoogenboom, Emulex Connectivity Division VP and GM at Broadcom, stated: “Customers are seeking ways to protect themselves against crippling and expensive ransomware attacks as well as complying with new government regulations mandating all data be encrypted.”

He said the Emulex Secure HBA encrypts all data across all applications, unlike application-level encryption. The devices feature:

  • Encryption algorithms support CNSA 2.0, DORA, and NIS2 mandates.
  • Zero trust platform with Security Protocol and Data Model (SPDM), cryptographic authentication of endpoints, and silicon root-of-trust authentication. 
  • Compliance with the NIST 800-193 framework – secure boot, digitally signed drivers, T10-DIF, and more.
  • Dedupe/compression storage services remain intact.
  • Runs on existing Fibre Channel infrastructure.
  • Cryptography offloaded to hardware, providing encryption with no performance impact.
  • Simple session-based key management with on-demand key generation; transparent and compatible with existing operating systems, applications, and SAN management tools.

Other Fibre Channel HBA suppliers include ATTO and Marvell. Cisco supplies encrypting MDS 9000 series SAN switches, which apply the Fibre Channel Security Protocol (FC-SP) between switches, but not between the HBA and the switch.

Broadcom Emulex diagram

FC-SP is not inherently resistant to quantum attacks.

Marvell QLogic 2780 series 32 Gbps HBAs feature StorCryption to encrypt data in flight between initiator and target endpoints across a Fibre Channel SAN. StorCryption complies with the FC-SP-2 standard, and these HBAs incorporate a hardware root of trust that prevents malicious firmware from hijacking the HBA.

Broadcom Emulex Secure HBA. Source: Storage Review via Broadcom

Emulex 32G and 64G Secure HBAs are now available and shipping in 1, 2, and 4-port configurations. Get more information here and read a Storage Review article about the device here.

NetApp touches down with the San Francisco 49ers

NetApp has become a founding level partner of the San Francisco 49ers professional football team in a multi-year deal and is now its Official Intelligent Data Infrastructure Partner.

NetApp is also the presenting sponsor of the 49ers’ 2025 NFL Draft and of the Levi’s Stadium Owners Club, located on the east side of the building, which will be renamed to recognize NetApp’s sponsorship. The company will also be the presenting sponsor for the 49ers’ strategy, data, and analytics conference, Horizon Summit, returning to Levi’s Stadium in June 2025.

Costa Kladianos, 49ers

Costa Kladianos, 49ers EVP and Head of Technology, stated: “When Levi’s Stadium opened in 2014, it was one of the most technologically advanced stadiums in the country. While we have remained diligent in making improvements to the fan experience year over year, NetApp will empower us to make major changes that will bring the building into the forefront of technology-focused sports and entertainment venues.”

NetApp says the two brands will use intelligent data infrastructure to support 49ers business operations throughout the organization, starting with the reimagining of the fan experience at Levi’s Stadium. The 49ers plan to make major tech enhancements to the venue with the goal of creating “a new seamless and connected fan experience” with improvements to ingress and egress, bathroom and concession wait times, mobile app functionality, and more. This will involve NetApp’s Keystone storage-as-a-service offering.

NetApp CEO George Kurian said: “Intelligent data infrastructure is crucial to the sports fan experience and team performance. Our support of the 49ers’ ambitious goals for making its home stadium the benchmark for excellence in fan experience and team performance demonstrates our strong ties to our San Francisco Bay Area home and our unique capabilities to make data an asset in leading organizations achieving their transformation goals.” 

We understand that the cost to become a 49ers founding-level partner can be significant, and is typically negotiated on a case-by-case basis and not publicly disclosed. Levi Strauss & Co did reveal it is paying $17 million a year for its ten-year stadium naming rights deal.

NetApp is heavily involved in sports sponsorship, having deals with the San Jose Sharks pro ice hockey team, Porsche Motorsport, TAG Heuer Porsche Formula E team, and the Formula 1 Aston Martin Aramco Racing team. Its annual sports sponsorship budget must be significant.

Spectra Logic optical SAS switch expands tape connectivity

Spectra Logic has an optical SAS switch supporting 100-meter distances between servers and tape drives, providing connectivity at a lower cost than Fibre Channel.

Its OSW-2400 Optical SAS Switch supports the SAS-4 standard for connectivity between servers and tape storage systems, and features 48 x 24G lanes, each operating at 22.5 Gbps. That means 1.08 Tbps of total raw bandwidth and an aggregate 108 GBps data transfer rate. The 100-meter distance enables a SAS fabric to cover datacenter floor spaces of up to 10,000 m² (107,639 sq ft), connect between building floors, or extend to nearby buildings.
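A quick back-of-the-envelope check of the headline bandwidth figure, using only the per-lane line rate quoted above (the lower 108 GBps aggregate transfer figure is the vendor's number, which presumably reflects encoding and protocol overhead):

```python
# Raw aggregate bandwidth of the OSW-2400: 48 SAS-4 lanes at 22.5 Gbps each.
lanes = 48
line_rate_gbps = 22.5                 # 24G SAS line rate per lane
total_gbps = lanes * line_rate_gbps
print(total_gbps, "Gbps")             # 1080.0 Gbps, i.e. 1.08 Tbps of raw line-rate bandwidth
```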

Nathan Thompson, Spectra Logic

Spectra Logic CEO and chairman Nathan Thompson stated: “The Spectra OSW-2400 Optical SAS Switch represents a unique and transformative step forward in datacenter tape connectivity. By reducing or eliminating the need for expensive Fiber Channel infrastructure, organizations can simplify their tape operations and achieve greater flexibility, while maintaining the performance and reliability they expect.”

Fibre Channel supports much larger-scale storage networking than SAS, with distances exceeding a kilometer and speeds of up to 64 Gbps, with 128 Gbps on its roadmap. But it costs more.

The company says OSW-2400 per-port connection costs are up to 70 percent less than comparable Fibre Channel infrastructure, resulting in savings on connectivity acquisition, maintenance, and upgrade costs.

Simon Robinson, Principal Analyst at Enterprise Strategy Group, part of Informa TechTarget, said: “When managing data at scale, the cost of access can be a significant component of overall storage costs. Extending SAS beyond the rack is a practical way to reduce these costs.” 

The switch has 1RU short-depth packaging with front or back mounting options, hot-swappable dual power supplies, redundant cooling fans, and “a low 50-watt maximum energy consumption.”

It supports SAS-3 tape drives including LTO-9 and IBM TS1170 Enterprise products, while maintaining backward compatibility with SAS-2 devices, including LTO-6, LTO-7, LTO-8, and IBM TS1160 Enterprise Tape Drives. End device frame buffering (EDFB) optimizes bandwidth when using slower devices, improving data transfer rates by as much as 50 percent.

The switch also supports T10 and port-to-port zoning, enabling one-to-many, many-to-one, and many-to-many sharing of Spectra Logic tape libraries. Switch cascades can expand the number of fabric connections or extend connection distances beyond the limits of a single switch.

The OSW-2400 features 12 x4-wide (four-lane) Mini-SAS HD ports. Each port is capable of connecting four devices such as servers or tape drives. Switch configurations start at 12 SAS-4 lanes (3 ports) and scale in increments of 12 lanes (3 ports) up to a maximum of 48 lanes (12 ports). Both active optical and passive cables are supported. Field-installed port upgrades are available in increments of 12 lanes (3 ports). A maximum of 40 tape drives per switch can be configured.

For high-availability configurations, a second switch may be deployed in a dual-ported configuration. A 10/100/1,000 Mbps Ethernet port and application software are also included for out-of-band management access.

The Spectra OSW-2400 optical SAS switch is available for Q1 delivery. For complete specifications, more information, or to schedule a consultation, click here.

Bootnote

The SAS protocol is developed and maintained by the T10 technical committee of the International Committee for Information Technology Standards (INCITS), while the technology is promoted by the SCSI Trade Association (STA).

New year, new data strategy

COMMISSIONED: The new year has arrived, bringing with it the usual resolutions: get fitter, read more books, maybe finally tackle that ever-growing email backlog.

But for tech leaders, this time of year isn’t just about personal betterment; it’s about rethinking how to unlock business value. And in 2025, one resolution towers above all: getting your data strategy AI-ready.

Let’s face it, data is the lifeblood of modern business, but without a solid infrastructure, it’s like trying to train for a marathon by eating donuts and binge-watching TV. (Tempting, but not effective.) The explosive growth of artificial intelligence (AI) has made it crystal clear that traditional data systems – those dusty warehouses and disjointed lakes – are holding organizations back. This year, it’s all about building a scalable, secure, and flexible data strategy that doesn’t just keep up with AI but accelerates it.

According to GlobalData, by this year, global data creation is forecast to exceed a mind-boggling 175 zettabytes. A significant chunk of that data will likely be unstructured data such as images, videos, and text. AI thrives on this diversity, but only if your data strategy can handle it. Unfortunately, many organizations are relying on legacy systems designed to manage spreadsheets, not neural networks.

Remember dial-up internet (I bet many of you can even recall the sound)? Slow, clunky, and completely unsuited to today’s needs. That’s what legacy data systems feel like in an AI-powered world. Traditional data warehouses weren’t built for the massive throughput, variety, and velocity of modern AI workloads. Worse, they struggle to support semi-structured and unstructured data – the very types AI feeds on.

To add insult to injury, fragmented data across silos makes it nearly impossible to draw actionable insights. Data lakes were supposed to fix this but often turned into data swamps – disorganized, inaccessible, and riddled with performance bottlenecks.

Enter 2025’s shiny new alternative: the AI-driven data platform.

A resolution worth keeping: The Dell AI data platform

Let’s pause the doom and gloom and talk solutions. The Dell Data Platform for AI is like upgrading from that rusty, old station wagon (your legacy system) to a sleek, self-driving EV (AI-ready infrastructure). Here’s how it powers your data strategy to meet the demands of AI:

– Open, flexible, and secure architecture
The platform’s open design supports a wide variety of data types and sources. Whether you’re working with structured sales data, semi-structured IoT logs, or unstructured video content, the Dell Data Platform for AI ensures everything is accessible, queryable, and ready for analysis.

– High performance for GPU-accelerated workloads
AI workloads demand serious compute power, and GPUs are the engines of choice. The platform is engineered to maximize performance, from model training and inferencing to checkpointing during development. It scales effortlessly, letting you process petabytes of data without breaking a sweat.

– Unified Dell Data Lakehouse with Dell PowerScale
Forget the chaos of separate systems. The Dell Data Lakehouse unifies storage and compute, enabling high-speed querying and analytics. Dell PowerScale’s scale-out storage architecture is optimized for AI, ensuring seamless data flow for model refinement and development. It’s the ultimate tool for turning disorganized lakes into productive powerhouses.

Why data governance is your secret asset

AI is only as good as the data that feeds it, and this is where data governance comes into play. Poor governance leads to biases, inaccuracies, and costly compliance issues. With the Dell Data Platform, organizations gain self-service access to high-quality data while maintaining control over security, privacy, and compliance. Think of it as the Marie Kondo of data – keeping everything tidy and purposeful.

AI’s impact isn’t limited to tech giants. In media and entertainment, for example, AI-driven workflows have transformed movie-making. Think advanced visual effects rendering, real-time editing, and personalized viewer recommendations. At the heart of it all? Scalable storage solutions like Dell PowerScale.

Meanwhile, in manufacturing, predictive maintenance and automated quality checks are becoming standard thanks to AI models trained on enormous datasets. The same principles apply – flexible, high-performance storage makes these innovations possible.

This year, 75 percent of enterprises will shift from piloting to operationalizing AI, driving a 5x increase in streaming data volumes according to Gartner’s “AI Adoption Trends” 2023 report. And in its 2023 “Overcoming Data Siloes in AI” report, McKinsey and Company estimates that over 60 percent of companies cite data silos as the biggest obstacle to scaling AI. Elsewhere, Forrester’s “State of AI in Enterprises” report published in 2024 indicates that AI adoption has grown 270 percent in the past four years, and it’s not slowing down.

These stats underscore the urgency of modernizing your data infrastructure. Staying ahead of the curve requires not just investment but a strategic shift in how you think about data.

2025’s data strategy checklist

Ready to kickstart your resolution? Here’s a quick, five-step checklist:

– Audit your current data architecture: Identify gaps and pain points.

– Embrace unified platforms: Eliminate silos with a solution like the Dell AI Data Platform.

– Invest in scalable storage: Prioritize systems designed for high-performance AI workloads.

– Focus on governance: Implement robust policies to ensure data quality and compliance.

– Plan for the future: Choose solutions that can evolve with your business.

This year is a perfect opportunity to reimagine your data strategy. By adopting AI-ready, scalable storage solutions, you’ll do more than keep up with 2025’s challenges – you’ll thrive in them. So ditch the old dial-up mindset and embrace the high-speed potential of a modern data platform. Your AI models (and your business) will thank you.

Happy New Year – here’s to resolutions worth keeping!

For more information about Dell Data Platform for AI, please visit us online at www.delltechnologies.com/powerscale.

Brought to you by Dell Technologies.

Commvault latest to cut deal with CrowdStrike for cyber resilience

Commvault has joined other data protection suppliers by integrating CrowdStrike’s malware-detecting Falcon XDR into its Commvault Cloud to better detect and respond to cyber threats against its customers.

Commvault Cloud, previously called Metallic, is a SaaS backup service protecting hybrid environments against data loss. Its own backup stores are protected against cyberattacks with a ThreatScan facility, which inspects the backup data for signs of malware access and compromise. CrowdStrike provides a Falcon XDR (Extended Detection and Response) service that checks customer endpoint and network data for IOCs (indicators of compromise), using AI techniques to look for real-time evidence of malware attacks in access behavior data and system telemetry.

A CrowdStrike alert can be used by Commvault Cloud to trigger a ThreatScan check of the affected data, and to restore compromised data to a known good state from its backups.
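In outline, the integration is an event-driven loop: an XDR detection on an endpoint prompts a targeted scan of the backups covering that endpoint, followed by a restore from the newest clean copy if compromise is confirmed. The sketch below is purely hypothetical, with every function stubbed out; none of it is Commvault or CrowdStrike API code.

```python
# Hypothetical sketch of alert-driven backup remediation, with every function stubbed.
# These are NOT Commvault or CrowdStrike APIs; they only illustrate the control flow
# described above (XDR alert -> targeted backup scan -> restore to a known good copy).
from dataclasses import dataclass

@dataclass
class Alert:
    host: str          # endpoint flagged by the XDR platform
    indicator: str     # indicator of compromise (IOC) that fired

def find_backups_for_host(host: str) -> list:
    return [f"{host}-2025-01-29", f"{host}-2025-01-30"]   # stub: newest copy last

def backup_contains_ioc(backup: str, indicator: str) -> bool:
    return backup.endswith("01-30")                       # stub: pretend the latest copy is tainted

def restore(host: str, backup: str) -> None:
    print(f"restoring {host} from {backup}")              # stub

def handle_alert(alert: Alert) -> None:
    backups = find_backups_for_host(alert.host)
    clean = [b for b in backups if not backup_contains_ioc(b, alert.indicator)]
    if len(clean) < len(backups):                         # some copies look compromised
        restore(alert.host, clean[-1])                    # roll back to the newest clean copy

handle_alert(Alert(host="fileserver01", indicator="suspicious-hash"))
```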

Alan Atkinson, Commvault

Alan Atkinson, Commvault’s Chief Partner Officer, stated: “By partnering with CrowdStrike, we are combining our deep expertise in cyber resilience with their advanced threat detection capabilities, empowering our joint customers with faster response times and a stronger cyber resilience posture.”

Atkinson said: “The average organization has seen eight cyber incidents in the last year, four of which are considered major.¹”

Commvault says this CrowdStrike deal provides proactive threat detection, before any ransomware message, allowing businesses to identify threats earlier, respond faster, and mitigate attacks effectively. They can clear out infected data and potentially prevent a ransomware attack. If customers have separate security and IT operations teams, the partnership can enable a unifying workflow for more efficient attack responsiveness, helping to maintain business uptime.

Dell, Cohesity-Veritas, and Rubrik have all partnered with CrowdStrike to achieve the same level of malware protection for their customers. 

The partnership provides another defense against malware attacks for Commvault Cloud customers alongside its ThreatWise detection facility, which deploys honeypots as attractive malware targets, luring attackers in for recognition and response.

CrowdStrike is a popular malware threat detection supplier, claiming that 300 of the Fortune 500 companies use its services. We expect other data protection and cyber-resilience suppliers will strike partnership deals with CrowdStrike during 2025.

Bootnote

1According to DeMattia, A., & Gruber, D. (2024). Trends in CR/DR Plans: Contrast and Convergence – Final Survey Results [Unpublished data]. TechTarget, Inc.

Object First’s business is scaling up

Object First, the Veeam-specific object-based backup appliance startup, says it registered 389 percent bookings growth in 2024, along with a 374 percent increase in its customer base and a 262 percent rise in transacting partners.

The company was started in 2022 by Veeam founders Ratmir Timashev and Andrei Baronov to provide an S3-based Ootbi backup appliance, launched in February 2023, offering immutable object storage in a four-way clustered, non-deduplicating appliance. There are now three node raw capacity points: 64TB, 128TB, and 192TB, with an NVMe SSD landing zone and disk drives for actual retention. It also offers on-prem to public cloud data copies. The company has been growing sales fast, and recruiting staff and worldwide channel partners.

David Bennett

CEO David Bennett stated: “Organizations worldwide are increasingly prioritising secure, easy-to-use and powerful data protection solutions, and our Ootbi appliances continue to set the standard for true immutability and simplicity. With the momentum we’ve built this year, we’re well positioned for even greater achievements in 2025.”

Object First reports its growth mainly as percentage increases in bookings and transacting partner counts, and secondarily in customer numbers.

Object First said 2024 bookings grew 389 percent over 2023, “including over triple-digit growth in six-figure and above deals,” and that the partner count increased 262 percent. Its global customer base increased by 374 percent year-over-year, although no absolute numbers were provided.

We have kept a tally of its publicly declared quarterly percentage growth statistics, and the numbers show declining growth rates, as you would expect from a standing start with small initial sales, partner, and customer numbers:

Object First’s discussed bookings, partner and customer numbers percentage growth rates in 2024

Nevertheless, there has been an increase in the percentage partner growth rate from Q3 of 2024 to Q4, reflecting Object First’s emphasis on growing its partner base. It said T-note became Object First’s first Platinum Partner in LATAM and there was “significant partner expansion in EMEA, where the number of transacting partners nearly quadrupled.” New partnerships included Axians and DMS in the UK, Insight in France, ContecNow and Seidor Soluciones Globales in Spain, and German partner Erik Sterck. In the USA, Object First strengthened its presence in Southern California through its collaboration with GST, serving state, local, and healthcare organizations.

The company has not revealed actual customer or partner numbers. It did say it had 57 appliance deployments in Q1 2024 but has not disclosed deployment numbers since then.

During 2024 it hired 104 employees across 12 countries and opened an office in Barcelona. Object First notes that Veeam’s latest 12.x releases open the door to backup storage capacities beyond 3PB, as Ootbi clusters can now be used as multiple extents in the Veeam SOBR (Scale-Out Backup Repository).

Get an Object First product white paper at the bottom of this webpage.

Hammerspace gets WEKA’s ex-CRO as its sales boss

Data orchestrator Hammerspace has orchestrated its first Chief Revenue Officer, Jeff Giannetti, who abruptly left the CRO spot at WEKA last month.

David Flynn

Hammerspace sells Global Data Platform software based on parallel NFS and uses it to orchestrate files and objects stored on other suppliers’ filers and object stores, with the ability to support NVIDIA’s GPUDirect and ship file data fast to GPUs. WEKA has developed its own fast-access file system software for high-performance computing, AI processing by GPUs (also supporting GPUDirect), and enterprise high-speed file access. WEKA’s President and CFO left at the same time as Giannetti.

David Flynn, Hammerspace founder and CEO stated: “The days of data silos are behind us. Organizations worldwide are unifying their unstructured data through orchestration, empowering AI, driving GPU performance and unlocking unparalleled efficiency. Jeff will be instrumental in building a high-performing global team of sales leaders to help organizations harness the full potential of their data.”

Chris Bowen has been SVP Sales at Hammerspace since August 2021, with Jim Choumas filling the VP Channel Sales role since the same date. Giannetti is a heavyweight hire, having been CRO at Cleversafe (acquired by IBM) and Deep Instinct and holding several leadership positions at Sun Microsystems, Veeam, Digital Ocean and Forcepoint. He worked in NetApp’s sales organization for more than a decade, where the company grew from $700 million in revenues to over $6 billion during his tenure.

Jeff Giannetti

The company says Giannetti will “drive the expansion of the company’s rapid growth in global demand” and “lead the global sales team to accelerate revenue growth through new customer acquisition and use case expansion within existing customers.”

Giannetti said: “AI is trending to be the biggest technical development in our lifetime, but the challenge for organizations is creating a data infrastructure that can provide high-performance access to unstructured data anywhere. Hammerspace solves these challenges using a standards-based approach, at a massive scale, while providing orchestration and global namespace capabilities that are wholly unique. I’m thrilled to be a part of Hammerspace, a world-class team enabling organizations to experience the full value of their investments in their AI infrastructure and ecosystem.”