
Wasabi CEO sees IPO ramp as firm reaches 100,000 customers

Cloud storage unicorn Wasabi says it has passed the 100,000-customer mark after continued sales growth.

The company has raised a total of $500 million since its inception in 2017, and gained unicorn status in 2022 after raising $250 million, split roughly 50/50 between equity investment and a credit facility.

Since then, it has seen 60-70 percent annual revenue growth and its partner numbers have also just reached 15,000 globally.

David Friend

On the IT Press Tour of Boston and Massachusetts this week, David Friend, the company’s founder and CEO, said: “Cloud storage is an infinite market, and you have to try and dominate it as quickly as possible.”

Wasabi, of course, is very much smaller than its hyperscale cloud service rivals AWS, Azure, and Google, but its strategy of offering a rival service to AWS S3 cloud storage appears to be gaining traction. It claims its offer can be up to 80 percent cheaper than AWS, saying it only charges for the amount of data users store in its cloud, with no extra charges for moving data in and out of that cloud, and no other operational charges.

Friend also claims similar savings can be made if customers put more of their data into Wasabi’s cloud instead of buying more large on-premises storage boxes, once maintenance and support fees and the overall cost and depreciation of those boxes over a five-year contract period are taken into account.

As for building up sales, Wasabi is a channel company and has a simple formula for signing up the best partners globally. “When we are entering a market, the first thing we do is sign up the best distributors in that market, before any rivals get there,” says Friend.

Wasabi price comparison slide

“Once those distributors’ resellers have invested in our systems, benefited from the marketing and development funds, training, integration, etc, when the next guy comes along and asks to work with them instead, they’ll say ‘why should I, I’m working with Wasabi already’.”

He added: “It’s an easy channel model, ‘How much storage do you want, and for how long’. $6.99 per TB per month, that’s the list price.”

As a private company, Wasabi doesn’t release its annual sales figures, but some analysts put these at around $135 million last year, so if the current annual growth continues, revenue could be around $230 million in 2024. One way or another, Friend told the IT Press Tour that Wasabi has its first $1 billion-revenue year within its sights, or “within five years” anyhow.

Friend is also a fan of going public. “We’d like to be a public company. At Carbonite [the backup firm he co-founded and led as CEO] we became a public company, and more people took us seriously. It’s good to have a public profile, it can help.”

Storage news ticker – October 10

Analytics company Cloudera has launched Cloudera AI Inference, powered by Nvidia NIM microservices and accelerated computing (Tensor core GPUs), boosting LLM speeds by a claimed 36x. A service integration with Cloudera’s AI Model Registry enhances security and governance by managing access controls for both model endpoints and operations. Cloudera AI Inference protects sensitive data from leaking to non-private, vendor-hosted AI model services by providing secure development and deployment within enterprise-controlled environments. This enables “the efficient development of AI-driven chatbots, virtual assistants, and agentic applications impacting both productivity and new business growth.”

Data protector CrashPlan has acquired Parablu, a data security and resiliency supplier with a claimed market-leading offering protecting Microsoft 365 data. This is said to position CrashPlan “to deliver the industry’s most comprehensive backup and recovery capabilities for data stored on servers, on endpoint devices and in Exchange, OneDrive, SharePoint, and Teams to Azure, their cloud, or to CrashPlan’s proprietary cloud.” The Parablu acquisition “enables CrashPlan to offer a complete cyber-ready data resilience solution that protects intellectual property and other data from accidental data deletion, ransomware, and Microsoft service interruptions.”

Software-defined multi-protocol datacenter and edge storage supplier Datacore has added a NIS2-supporting cybersecurity capability “designed to equip organizations with the means to anticipate, withstand, and recover from sophisticated cyber threats while aligning with regulatory standards.” The offering has no specific brand name. Read the solution brief here.



Lucid Motors has again chosen Everspin Technologies’ MRAM for its latest all-electric Gravity SUV, continuing its reliance on Everspin’s memory solutions after integrating them into the Lucid Air back in 2021. Everspin’s PERSYST MRAM ensures data logging and system performance within the Lucid Gravity SUV’s powertrain. Its 256 Kb MRAM is also used by Rimac Technology (sister company to Bugatti-Rimac) in the all-electric Nevera supercar.


HighPoint Technologies claims its RocketStor 654x series NVMe RAID Enclosures have established a new industry standard for external storage products. Each enclosure supports up to eight NVMe SSDs, delivering nearly 0.5 PB of storage with 28 GBps transfer speeds over a PCIe Gen4 x16 connection. The device stands under five inches tall, has an integrated power supply and cooling system to isolate the NVMe media from the host, and offers hot-swap capability to allow storage expansion without downtime.

Germany’s Infodas has announced a secure data transfer and patch management offering called Infodas Connect. Its primary function is to securely distribute updates and patches to critical systems that are not connected to the internet. The data is processed on a system that has internet access. It is then securely transferred to a central system known as the sender service within the Infodas Connect architecture. The collected data is then transmitted to the receiver services on the isolated systems (which have no internet connection). This transfer is highly secure, ensuring that no vulnerabilities are introduced during the update process. Once the data reaches the target system, an administrator manually reviews and installs the updates. This extra layer of manual control ensures that all changes are checked before implementation. Find a detailed product presentation here.

Michael Amsinck

SaaS data protector Keepit has appointed Copenhagen-based Michael Amsinck as chief product officer. He joins Keepit from Cision, where he was chief product and technology officer. He will be instrumental in shaping and executing Keepit’s short and long-term product roadmap. “In the months and years ahead, a key priority will be expanding our comprehensive ecosystem of workloads to meet the evolving SaaS data protection needs of the enterprise segment. We’ll also prioritize strengthening disaster recovery strategies and developing opinionated frameworks to guide and educate our customers,” Amsinck said.

Keepit published a Gatepoint Research SaaS backup survey of senior decision-makers today that reveals:

  • 58 percent of respondents report using Microsoft to back up their SaaS data. However, shared responsibility models mean that SaaS providers are not accountable for customers’ data backup, leaving a critical gap in protection
  • Only 28 percent of respondents have high confidence in their data protection measures
  • 31 percent report moderate to severe lapses in their data protection
  • 57 percent of respondents identify brand and reputation damage as the most significant business impact of data loss, followed closely by financial consequences
  • When it comes to blockers to improving data protection strategies, 56 percent of respondents cite budget constraints, while 33 percent note a lack of expertise and resources
  • 50 percent of respondents cite increased compliance requirements as their top challenge

Keepit will host a free webinar titled “Protecting Your SaaS Data – Pitfalls and Challenges to Overcome” on October 17 at 1400 UTC. Register for the webinar here.

Micron has changed its logo. It says: “Inspired by the curves and colors of its silicon wafers, the new logo design embodies what is at the core of Micron’s technology leadership: staying ahead of the curve – anticipating future needs and driving the next generation of technology. Innovation and rapid execution are central to Micron’s vision of transforming how the world uses information to enrich life for all.”

Old (above) and new (below) Micron logos

NetApp has hired Mike Richardson as VP of US Solutions Engineering. He will lead NetApp’s technical sales strategy across North America. Richardson joins from Pure Storage, where he was VP of Systems Engineering for the Americas. Before that he worked at Forsythe and Commvault, and was a professional services consultant at NetApp. What goes around comes around.

OpenDrives has gained Light Sail VR as a customer for its Atlas storage product. Light Sail makes immersive virtual reality media for Meta, Amazon, Lionsgate, Paramount, Canon, Adidas, and others.

Replicator Peer Software, a supporter of hybrid on-prem-public cloud storage, is making Gartner’s latest hype cycle for storage report available here.

Rocket Software has a report out saying mainframe-stored data should be used by AI models. It says: “If the mainframe is such a wealth of knowledge, why isn’t it being used to inform AI models? According to the survey, 76 percent of leaders said they found accessing mainframe data and contextual metadata to be a challenge, and 64 percent said they considered integrating mainframe data with cloud data sources to be a challenge. For this reason, businesses need to prioritize mainframe modernization – setting themselves up to be able to easily and successfully incorporate their mainframe data into their AI models.” Get the report here.

The SNIA is running a webinar entitled “The Critical Role of Storage in Optimizing AI Training Workloads” on October 30 at 1700 UTC. The webinar has “a primary focus on storage-intensive AI training workloads. We will highlight how AI models interact with storage systems during training, focusing on data loading and checkpointing mechanisms. We will explore how AI frameworks like PyTorch utilize different storage connectors to access various storage solutions. Finally, the presentation will delve into the use of file-based storage and object storage in the context of AI training.” Register here.

Tape, object, and archive system vendor SpectraLogic announced Rio Media Migration Services, a professional service offering designed to transition digital media assets from outdated media asset management systems to modern object-based archive infrastructures. A seven-step process features database/datasets analysis, solution design, project scoping, statement of work, installation and validation, production and post-migration review. Rio Media Migration Services is designed for media and video editors, producers, and system administrators, and suited for studios, post-production houses, sports teams, news and cable networks, and corporate media departments.

Decentralized storage provider Storj has acquired UK-based PetaGene, creator and supplier of cunoFS. PetaGene, founded in 2006, has gathered numerous awards for its high-performance file storage and genomic data compression technology. CunoFS is a high-performance file system for accessing object storage. It lets users interact with object storage as if it were a fast native file system, with POSIX compatibility, so it can run new or existing applications. PetaGene will operate as a wholly owned subsidiary of Storj, and all current PetaGene employees will stay on.

CunoFS positioning chart

StorONE is launching a StorONE Podcast “designed to explore the world of entrepreneurship, cutting-edge technology, and the ever-evolving landscape of data storage. Hosted by StorONE Solution Architect James Keating, the podcast will provide listeners with valuable insights from industry experts and StorONE’s in-house thought leaders … In the debut episode, CEO Gal Naor shares his journey from founding Storwize, through its acquisition by IBM, to the vision behind creating StorONE’s ONE Enterprise Storage Platform to solve the industry’s biggest challenges.” The StorONE Podcast will release bi-weekly episodes covering topics such as leveraging AI in storage solutions, strategic planning for long-term success, overcoming obstacles in scaling storage startups, and navigating the balance between innovation and risk. The podcast is available now on Spotify, Apple Podcasts, and iHeartRadio.



Storware has signed up Version 2 Digital as a distributor in APAC.

Wedbush tells subscribers that Western Digital reportedly completed the sale of 80 percent of its Shanghai facility to JCET. Western Digital will receive $624 million for the majority stake. The source is a South China Morning Post article.

The future of security: Harnessing AI and storage for smarter protection

COMMISSIONED: As the world becomes more data-driven, the landscape of safety and security is evolving in remarkable ways. Artificial Intelligence (AI) technology is transforming how organizations manage mission-critical infrastructure, from video surveillance systems to real-time data analysis. At the heart of this transformation lies an important shift: a growing emphasis on the sheer volume of data being collected, how it’s processed, and the insights that can be derived from it. In this evolving ecosystem, edge AI and advanced storage systems are reshaping how security teams work and the role of data in decision-making.

Dell Technologies stands at the forefront of this shift, driving innovation through cutting-edge storage solutions like Dell PowerScale and collaborating with software technology partners to provide smarter, more efficient tools for data management and AI-powered analysis. The opportunities presented by AI in the safety and security space are immense, and the organizations that embrace these advancements will unlock new levels of efficiency, insight, and protection.

The data age in safety and security

Today, security is no longer just about physical safeguards or outdated surveillance systems. It’s about managing vast amounts of data to enhance decision-making, improve response times, and streamline operations. Mission-critical infrastructure now revolves around data volumes and data hygiene – how well that data is stored, maintained, and accessed. In this new paradigm, we see a shift in who’s attending security meetings: Chief Data Officers (CDOs), data scientists, and IT experts are playing pivotal roles in shaping security strategies. They’re not only ensuring systems run smoothly but also leveraging video data and AI-driven insights to drive broader business outcomes.

The introduction of AI into security systems has brought new stakeholders with unique needs, from managing complex datasets to developing real-time analytics tools that improve efficiency. For instance, consider the rise of body cameras equipped with edge AI capabilities. These cameras are no longer just passive recording devices – they’re active data processors, analyzing video footage in real time and even assisting officers in the field by automatically annotating scenes and generating reports. This reduces the time officers spend writing reports at the station, improving productivity and allowing for faster, more efficient operations.

Technologies transforming the security ecosystem

One of the most exciting aspects of AI technology in the security space is the way it is enabling new players to emerge. Independent Software Vendors (ISVs) are stepping into the fold, introducing a range of applications aimed at enhancing organizational efficiency and transforming the traditional security landscape. These new ISVs are bringing innovative solutions to the table, such as edge AI applications that run directly on video surveillance cameras and other security devices.

These advancements have revolutionized how data is collected and processed at the edge. Cameras can now run sophisticated AI models and Video Management Systems (VMS) onboard, transforming them into intelligent, autonomous devices capable of making decisions in real-time. This shift toward edge computing is powered by the increasing presence of Graphics Processing Units (GPUs) at the edge, enabling high-performance AI computing on-site.

For security integrators, this evolution has been transformative. The emergence of edge AI has introduced a new skill set into the industry – data scientists. Traditionally, security teams focused on camera placement, network design, and video storage. Now, they must also manage complex AI models and large datasets, often requiring the expertise of data scientists to oversee and fine-tune these systems. This shift is opening the door for new ISVs and dealers, changing the service landscape for security integrators.

The changing role of storage in AI-powered security

One of the most critical aspects of this AI-driven revolution in security is the need for robust, scalable storage solutions. Traditional storage systems, such as RAID-based architectures, are simply not up to the task of handling the demands of modern AI applications. AI models rely on massive datasets for training and operation, and any gaps in data can have a detrimental impact on model accuracy. This is where advanced storage solutions, like Dell PowerScale, come into play.

Dell PowerScale is designed specifically to meet the needs of AI workloads, offering extreme scalability, high performance, and superior data management. As video footage and other forms of security data become more complex and voluminous, traditional storage systems with Logical Unit Numbers (LUNs) struggle to keep pace. LUNs can complicate data mapping for data scientists, making it difficult to efficiently analyze and retrieve the vast amounts of data generated by AI-driven security systems.

In contrast, PowerScale provides seamless, flexible storage that can grow as security systems expand. This is crucial for AI models that require consistent, high-quality data to function effectively. By offering a scalable solution that adapts to the changing needs of AI-powered security applications, PowerScale ensures that organizations can maintain data hygiene and prevent the bottlenecks that would otherwise impede AI-driven insights.

Edge AI and the future of security

The advent of edge AI is arguably one of the most transformative developments in the security industry. By processing data closer to where it’s collected, edge AI enables real-time decision-making without the need for constant communication with centralized cloud servers. This shift is already being seen in body cameras, security drones, and other surveillance tools that are equipped with onboard AI capabilities.

As GPUs become more prevalent at the edge, the compute and storage requirements of these devices are evolving as well. Cameras and other edge devices can now run custom AI models and scripts directly onboard, reducing latency and improving response times. However, this also means that security teams must manage not only the hardware but also the datasets and AI models running on these devices. Data scientists, once peripheral to the security industry, are now becoming essential players in managing the AI models that power edge-based security systems.

This evolution is also changing the nature of cloud services in the security space. Edge computing reduces the reliance on cloud-based storage and processing, but it doesn’t eliminate it entirely. Instead, we are seeing a more hybrid approach, where edge devices process data locally and send only critical information to the cloud for further analysis and long-term storage. This hybrid approach requires a new level of agility and flexibility in both storage and compute infrastructure, underscoring the need for scalable solutions like PowerScale.

Embracing AI and data-driven security

Despite the clear advantages of AI and edge computing, the security industry has been slow to adopt these technologies. For over six years, IP convergence in security stalled as organizations hesitated to move away from traditional methods. A lack of investment in the necessary skills and infrastructure further delayed progress. However, the time for change is now.

As other industries move swiftly to embrace AI-driven solutions, the security sector must follow suit or risk falling behind. The convergence of AI, data science, and advanced storage solutions like Dell PowerScale represents a tremendous opportunity for growth and innovation in safety and security. Network value-added resellers (VARs) are well-positioned to capitalize on this shift, offering modern mission-critical architectures that support AI-driven security applications.

The future of security lies in data – how it’s collected, processed, and stored. With the right infrastructure in place, organizations can unlock the full potential of AI, driving greater efficiency, faster response times, and more effective security outcomes. Dell Technologies is committed to leading the charge in this transformation, providing the tools and expertise needed to support the AI-powered security systems of tomorrow.

The security industry is at a pivotal moment. The rise of AI and edge computing is transforming how organizations approach safety and security, but these advancements require a shift in both mindset and infrastructure. Dell Technologies, with its industry-leading storage solutions like PowerScale, is helping organizations navigate this new landscape, ensuring they have the scalable, high-performance infrastructure needed to unlock AI’s full potential.

As we move deeper into the data age, embracing these emerging technologies will be critical for staying ahead of the curve. The future of security is bright, but only for those prepared to invest in the right infrastructure to support the AI-driven innovations that will define the next era of safety and security.

For more information, visit Dell PowerScale.

Brought to you by Dell Technologies.

Spotting signs of malware in the age of ‘alert fatigue’

Sam Woodcock, senior director of Cloud Strategy at 11:11 Systems, tells us that, according to Sophos, 83 percent of organizations that experienced a breach had observable warning signs beforehand and ignored the canary in the coal mine. Further, 70 percent of breaches succeeded, with the threat actors encrypting the organization’s data to prevent access to it.

11:11 Systems offers on-premises and cloud backup services. For example, it protects customers’ unstructured on-premises data using SteelDome’s InfiniVault storage gateway for on-premises data storage, protection, and recovery. For Azure, it has 11:11 DRaaS (disaster recovery as a service), and for AWS, 11:11 Cloud Backup for Veeam Cloud Connect, 11:11 Cloud Backup for Microsoft 365, and 11:11 Cloud Object Storage.

We asked Woodcock about these signs and what affected organizations should do about them.

Blocks & Files: What warning signs were these? 

Sam Woodcock, 11:11 Systems

Sam Woodcock: Warning signs come in a variety of forms that can be observed independently or in various combinations. Some examples of typical warning signs would be unusual network activity such as excessive or unusual network traffic, spikes in failed login attempts, unusual system activity, unusual file access patterns, and alerts coming from security tools and endpoint device solutions.

Blocks & Files: Why weren’t they seen? 

Sam Woodcock: Typically warning signs can be missed for a variety of reasons – however, one of the most common reasons is “alert fatigue.” Forty percent of organizations receive over 10,000 security alerts on a daily basis. This sheer volume of information results in organizations simply being unable to properly process and respond to every indicator generated from their security solution set. 

Secondly, organizations often realize the need to invest in security technologies. However, often the vital security expertise needed to interpret and react to alerting and information coming from these tools is in low supply and high demand. This can result in a lack of expertise within an organization to triage and respond to vital alerting and monitoring information. Also, organizations may not have full 24x7x365 coverage to monitor, react, and triage security incidents; therefore missing vital signals and opportunities to prevent attacks.

Blocks & Files: How could they have been seen? 

Sam Woodcock: Detecting and responding to threats requires a combination of security tools, monitoring, security expertise, 24x7x365 coverage, robust processes, and proactive and reactive measures. The best practice is to have a multi-layered approach combining preventative security solutions, and reactive data protection and cyber recovery solutions.

It is also critical for organizations to perform proactive vulnerability assessments and penetration testing to understand gaps and risks that may exist within their application and security landscape. An essential part of any approach is to centralize logging and telemetry data into a Security Information and Event Management (SIEM) system, aggregating log and real-time alerting data across applications and workloads running on a wide variety of platforms and in many physical locations. With an effective SIEM solution in place, organizations must also invest in security expertise and coverage to observe and react to patterns and information coming from such a system.

Blocks & Files: What should enterprises do when they see such signs? 

Sam Woodcock: Organizations need to react immediately, in a structured and strategic manner, to mitigate threats and prevent them from growing. Because of that immediacy, organizations must invest in first- or third-party security expertise that operates 24x7x365, so threats are not missed or allowed to grow in scope.

The first step of any approach should be to investigate the alerts or logs created by security tools and validate whether the threat is an actual threat or a false positive. If the threat is a true positive, affected systems should be isolated and quarantined immediately to prevent the spread or movement of the attack. Having an incident response team and plan is essential to coordinate the required response and to resolve and remediate the issue. Having a combination of people, processes, and technology working in partnership is essential to swift resolution and recovery.

Blocks & Files: Can 11:11 Systems help here?

Sam Woodcock: 11:11 was formed to ensure organizations can keep their applications and systems always running, accessible, and protected. As previously mentioned, preventative security solutions are essential to preventing attacks or limiting scope. 11:11 provides a combination of security technology (MDR, XDR, Managed Firewall, Real Time Vulnerability scanning) aligned with a global 24x7x365 Security Operations Center with a robust process.

This is to ensure that we understand threats in real time and react accordingly, providing actionable remediation information to resolve incidents. In combination with our Managed Security services approach, 11:11 has a deep heritage in data protection, disaster recovery, and cyber resilience with capabilities to provide end-to-end Managed Recovery of systems, workloads, and applications. 

This Managed Recovery solution set is essential to ensure vital data assets are protected in real time, with a tested and validated recovery plan to ensure swift recovery of a business’s most essential assets.

***

Comment

It seems that a generative AI security agent could be used to look for IT system warning signs, scanning network traffic and IT systems for “excessive or unusual network traffic, spikes in failed login attempts, unusual system activity, unusual file access patterns” and the like. This agent could also take in the “alerts coming from security tools and endpoint device solutions” that Woodcock mentions.

A precondition here is that the agent understands the usual network traffic rate, file access patterns, and login attempt levels.

Such an agent could put these inputs together and analyze them in a false-positive or real-positive assessment process, so helping a security team or person defeat “alert fatigue,” make more sense of the threat environment, and deal with threats more effectively.
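To make the baseline idea concrete, here is a minimal sketch (ours, not 11:11’s) of flagging a spike in failed logins against a learned baseline; the numbers and threshold are illustrative only:

```python
import statistics

def is_anomalous(history, current, z_threshold=3.0):
    """Flag the current hourly count if it sits far above the learned baseline.

    history: hourly failed-login counts observed during normal operation (the baseline)
    current: the count for the hour being assessed
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero on a flat baseline
    z_score = (current - mean) / stdev
    return z_score > z_threshold, z_score

# Hypothetical baseline: failed logins per hour over the past few days
baseline = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12, 11, 10]

flagged, score = is_anomalous(baseline, current=220)
print(f"anomalous={flagged}, z-score={score:.1f}")  # a spike like this would be surfaced for triage
```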

The notion that “affected systems should be isolated and quarantined immediately” is sensible, of course, but can have far-reaching effects. For example, having your ERP database attacked and needing to be quarantined means that you have no ERP system. It seems to be a very, very good idea that malware attack detection and response should be carefully and thoroughly planned, tested, and rehearsed to prevent a real attack causing chaos and panic.

Having reliable, clean data copies and restartable IT system components would seem to be a precondition for effective malware attack response.

Such a malware threat agent could likely do even more and we’re certain that cybersecurity suppliers, such as Rubrik, are thinking along these lines already.

IBM says mainframes and AI are essential partners

Big Blue wants the tech industry to use its mainframes for AI workloads.

A 28-page IBM “Mainframes as mainstays of digital transformation” report, produced by its Institute for Business Value, found that 79 percent of IT executives agree that mainframes are essential for enabling AI-driven innovation. It states that, after six decades of evolution, mainframes are mainstays, storing and processing vast amounts of business-critical data. As organizations embark on AI-driven digital transformation journeys, mainframes will play a critical role in extending the value of data.

IBM’s concern seems to be that mainframe users should not just assume modern, generative AI workloads are for the public cloud and/or x86 and GPU servers in an organization’s data centers. Mainframes have a role to play as well.

The report, which we saw before publication, starts from a hybrid mainframe-public cloud-edge approach, with workloads put on the most appropriate platform. AI can be used to accelerate mainframe app modernization, enhance transactional workloads and improve mainframe operations. The report says “Combining on-premises mainframes with hyperscalers can create an integrated operating model that enables agile practices and interoperability between applications.”

It suggests mainframe users “leverage AI for in-transaction insights to enhance business use cases including fraud detection, anti-money laundering, credit decisioning, product suggestion, dynamic pricing, and sentiment analysis.”

Mainframe performance can improve AI-powered, rules-based credit scoring. The report cites a North American bank that was scoring only 20 percent of its credit card transactions, at 80ms per transaction, using public cloud processing. By moving the app onto its mainframe, it was able to score 100 percent of transactions at 15,000 transactions/sec and 2ms per transaction, saving an estimated $20 million a year in fraud prevention spend.

Mainframes with embedded on-chip AI accelerators “can scale to process millions of inference requests per second at extremely low latency, which is particularly crucial for transactional AI use cases, such as detecting payment fraud.” IBM says “traditional AI may be used to assess whether a bank payment is fraudulent, and LLMs (Large Language Models) may be applied to make the prediction more accurate.”

This is IBM’s Ensemble AI approach: combining existing machine learning models with newer LLMs.
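IBM doesn’t publish the mechanics, but in outline an ensemble like this pairs a fast traditional model with an LLM opinion on borderline cases. The sketch below is ours, with stand-in scoring functions rather than IBM’s models:

```python
def ml_fraud_score(txn):
    """Stand-in for a traditional ML/rules model: returns a fraud probability 0..1."""
    score = 0.0
    if txn["amount"] > 5000:
        score += 0.5
    if txn["country"] != txn["card_home_country"]:
        score += 0.3
    return min(score, 1.0)

def llm_fraud_opinion(txn):
    """Stand-in for an LLM assessment of transaction context (hypothetical)."""
    # A real system would prompt an LLM with the transaction narrative and parse its verdict.
    return 0.8 if "urgent wire transfer" in txn["memo"].lower() else 0.2

def ensemble_score(txn, low=0.3, high=0.7):
    """Fast path for clear cases; invoke the slower LLM only on borderline scores."""
    base = ml_fraud_score(txn)
    if low <= base <= high:
        return 0.5 * base + 0.5 * llm_fraud_opinion(txn)
    return base

txn = {"amount": 6200, "country": "US", "card_home_country": "US", "memo": "Urgent wire transfer"}
print(f"fraud score: {ensemble_score(txn):.2f}")
```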

AI can be used to improve mainframe management. The report found that “74 percent of executives cite the importance of integrating AI into mainframe operations and transforming system management and maintenance. AI-powered automation, predictive analytics, self-healing, and self-tuning capabilities, can proactively detect and prevent issues, optimize workflows, and improve system reliability.”

Mainframes can use AI for monitoring, analyzing, detecting, and responding to cyber threats. Also, GenAI LLMs and code assistants can be used to speed up work with older coding languages such as Cobol, their conversion to Java, and JCL development, so “closing mainframe skills gaps by enabling developers to modernize or build applications faster and more efficiently.”

IBM is taking an AI processing offload approach with AI-specific DPUs (Data Processing Units) for its next-generation z16 mainframe, due in 2025. This will be equipped with up to 32 Telum II processors with on-chip AI inferencing acceleration at a 24 TOPS rate. A Spyre accelerator will add 32 AI accelerator cores and 1 GB of DRAM, with similar performance to the Telum II on-chip AI accelerator. Up to eight Spyre accelerators can be used in addition to the Telum II units on the next mainframe generation.

Big Blue is not talking about adding GPUs to its mainframe architecture though. Inferencing workloads will run effectively on the mainframe but not AI training workloads. We can expect IBM to arrange mainframe vectorization and vector database capabilities to support retrieval-augmented generation (RAG) in inferencing workloads. 

For this commentator, adding GPUs to a mainframe would be a kind of Holy Grail, as it would open the door to running AI training workloads on this classic big iron platform. Maybe this notion, GPU co-processors, will be a z17 mainframe generation thing.

Get the report here and check out an announcement blog here.

HYCU expands SaaS data protection ahead of DORA regulations

HYCU has found a new niche: SaaS app background infrastructure configurations and resource settings that can be mission-critical and need protecting. It also expects new regulations like DORA, which impose personal liability on executives, to expand the SaaS app backup business.

Organizations use SaaS services at many stages of their operations and across their application ecosystem, from build to run. They can be used in infrastructure services, IT service management, software development, app management and DevOps, information security and compliance, data management and analytics, and collaborative work management. SaaS app services can work their way into mission-critical operations.

Simon Taylor, HYCU

HYCU CEO Simon Taylor presented on this topic to an IT Press Tour audience in Boston. He said an example is AWS and its Lambda functions, with Lambda used for notifications of security events within an organization’s security unit: “Once you break a lambda function, it breaks the flow … We’re talking thousands of functions. All it takes is an intern cleaning up the wrong YAML files, and because you rely on the notification, you no longer get notifications of breaches.”

Another is “cloud formations. If you don’t back it up correctly, you can accidentally redeclare someone’s environment null … These are all little universes where people just take it for granted as a default service. They don’t realize that, when you ask an intern to ‘go clean this up,’ enormous damage can be caused … That’s where we’re seeing a lot of the issues come out.”

HYCU currently protects 86 SaaS applications and databases, with Taylor claiming: “We are the world’s number one SaaS data protection platform [and] we cover more than ten times the rest of the industry at large.” Protecting SaaS app infrastructure items is now becoming a visible need. “Configuration protection is one of the most under-served markets in backup,” he said.

Having realized that SaaS app infrastructure settings, like configurations, need protecting too, HYCU is adding new capabilities to its SaaS-protecting R-Cloud offering.

Taylor said: “Imagine things like GitHub, Bitbucket, and GitLab. What do they have in common? They all store your source code, a pretty important thing if you’re running a software company … When we started this process, people said, ‘Why would I back that up?’ We said, ‘Well, it’s your source code.’ And then you see the light bulb go off, and they’re like, ‘Oh my god, I’m not backing that up.'”

Another example: “There’s a customer, they actually leverage the assets in Jira Service Management for inventory. Yet if they delete those assets, they have actually deleted their inventory.”

“One last example, Jira product discovery … We use that ourselves, and you would be surprised at how critical that becomes within three weeks. It’s last year’s fastest growing application. Every single piece of feedback that your company has from a product development perspective now lives there. What if you lose that? You basically lost product management when you’ve done that, right?”

Subbiah Sundaram, SVP, Products, said HYCU’s aim is to protect the entire tech stack including IT tools and services:

HYCU graphic

He said HYCU is looking at providing cross-cloud regional mobility, citing a US federal customer request to provide VMware to Azure Government to AWS GovCloud to Microsoft Azure Stack HCI mobility. A financial customer wanted Nutanix to/from VMware to AWS and AWS zone, and to GCP and GCP zone. HYCU demonstrated its cross-cloud movement capabilities.

DORA and its consequences

HYCU is also providing data protection and residency for compliance with the European Union’s DORA and NIS2 regulations. DORA’s article 12 requires secure, physically and logically separated backup storage:

HYCU extract of DORA Article 12 section 3 requirements

This has a possibly unexpected significance, HYCU says, in that the data needing to be backed up includes SaaS app data. Taylor said: “Now the government is mandating that [customers] have their own copy of the data. It’s not even about just backing up your data and recovering it for usage, etc. They now legally have to have a local copy. And what they have to start doing is asking their SaaS vendor, ‘Where am I supposed to get that from?’

“This is a game changer. So they must have to back up their Office 365, and show they have a copy, sure, but at least they can do that. What about Workday? What do they do when it’s Jira and they haven’t thought about backup? What do they do when the government comes and says, ‘Well, wait, where’s all your payroll data, right? Do you have that?’ Oh, those guys have not. That was before DORA. Now you legally have to have that.”

DORA is different from previous regulations: “The big difference here is that there’s personal liability. Now within DORA, this is no longer, oh, the company will pay the fine. Now the CIO, or the operating board member, is responsible for the fines and for personal prosecution.” 

Taylor added: “In other ways, this is happening in the US. You know, regulators are starting to ask those questions of CISOs in particular. We spoke at a CISO forum recently, and you know, it was amazing to me, the fear in the world, fear, actual fear. Because, this time, the CISO community is now personally liable for some of these things.”

There’s a supply chain aspect to this: “If you supply to a [DORA-regulated] financial institution, you have to make sure you are compliant … The government is making sure everybody’s there, that the entire value chain is supported.”

HYCU is providing secure offsite storage for on-premises, cloud, and SaaS workloads. It already supports AWS S3, Google Cloud Storage, and Azure Blob, and is adding support for object storage from Dell, Cloudian, and OVHcloud.

With Commvault developing automation for the cloud application rebuild process and HYCU working on protecting the SaaS app background infrastructure components, the world of public cloud-delivered data protection is becoming more mature – both broader and deeper.

Macrium brings bare metal restore to CoPilot+ Windows PCs

UK-based Macrium has added native bare metal restore for CoPilot+ Windows PCs to its Reflect X backup product.

CoPilot+ is Microsoft’s brand for Qualcomm Snapdragon devices, representing Microsoft’s third attempt to build an Arm-powered PC/laptop franchise paralleling x86. Macrium, formerly known as Paramount Software UK Ltd, started out as a disk image backup utility for Windows using Volume Shadow Copy Services for point-in-time backups. Its Reflect product has gone through eight major versions and Reflect X is the latest, providing the CoPilot+ support as well as a general 2x speed increase on v8. 

Dave Joyce, Macrium

Macrium CEO Dave Joyce stated: “With Reflect X, we’re not just updating our software; we’re resetting the industry standard for backup and recovery. In today’s market, where tighter regulations and economic pressures are squeezing businesses from all sides, the tolerance for downtime has hit an all-time low. What was considered acceptable even a few years ago simply doesn’t cut it anymore.”

Reflect X can restore an image up to 5x faster than Reflect 8 because it has multi-threading capabilities, better compression, and backup optimization. It can also resume interrupted backups. A Macrium blog says: “Our new resumable imaging feature uses checkpoint technology to ensure that even if a backup is interrupted, it can seamlessly continue from the last saved point … It reduces the risk of incomplete backups due to outages or reboots, uses resources efficiently by not restarting large backups from scratch, and improves overall reliability for backups in distributed or remote office scenarios.”
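Macrium hasn’t detailed how its checkpoint technology works internally, but the general resumable-copy pattern it describes can be sketched like this (chunk size, file paths, and the checkpoint file are illustrative, not Reflect X internals):

```python
import os

CHUNK = 4 * 1024 * 1024  # 4 MiB chunks; real imaging tools work at block/sector level

def resumable_copy(src, dst, checkpoint=".copy.checkpoint"):
    """Copy src to dst in chunks, recording progress so an interrupted run can resume."""
    offset = 0
    if os.path.exists(checkpoint):
        with open(checkpoint) as f:
            offset = int(f.read().strip() or 0)  # resume from the last saved point

    mode = "r+b" if os.path.exists(dst) else "wb"
    with open(src, "rb") as s, open(dst, mode) as d:
        s.seek(offset)
        d.seek(offset)
        while True:
            chunk = s.read(CHUNK)
            if not chunk:
                break
            d.write(chunk)
            offset += len(chunk)
            with open(checkpoint, "w") as f:
                f.write(str(offset))  # checkpoint after every chunk

    if os.path.exists(checkpoint):
        os.remove(checkpoint)  # a clean finish discards the checkpoint
```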

The Reflect X product also “introduces file filtering when creating a disk image, which provides users with options to exclude files from disk images. This ensures that only the necessary data is captured, reducing the storage requirements when creating images [and] images can be created and restored faster helping meet [recovery time objective and recovery point objective] targets.”

Dave Joyce joined Macrium as COO in 2023, and took over as CEO from the company’s founder, Nick Sills, in September.

CoPilot+ PCs need so-called NPUs (neural processing units) supporting 40-plus TOPS (trillions of operations per second), a Microsoft Pluton security processor, a minimum 16 GB of DRAM and 256 GB storage, plus a CoPilot key and Windows 11 24H2 runtime software with its AI-focused CoPilot layer.

CoPilot+ PC

Macrium has offices in the UK, US (Denver), and Canada (Winnipeg), and sells its product direct and via OEM and ODM deals. Customers include Sysmex, General Motors, and Disney. Reflect X will be launched on October 8.

ExaGrid reports record Q3 with 150 new customers

Privately held ExaGrid says it has recorded its strongest Q3 in the company’s history, gaining 150 new customers to take its total past 4,400.

The backup appliance vendor boasted of double-digit annual revenue growth, said it sealed 48 six- and seven-figure deals, and claimed it was both free cash flow and P&L positive for the 15th consecutive quarter. As a private company, ExaGrid doesn’t make the actual figures public.

Bill Andrews

President and CEO Bill Andrews stated: “ExaGrid continues to profitably grow as it keeps us on our path to eventually becoming a billion-dollar company. We are the largest independent backup storage vendor and we’re very healthy, continuing to drive top-line growth while maintaining positive EBITDA, P&L, and free cash flow … ExaGrid continues to have an over 70 percent competitive win rate replacing primary storage behind the backup application, as well as deduplication appliances such as Dell Data Domain and HPE StoreOnce.”

ExaGrid is a niche storage market business providing Tiered Backup Storage with a disk-cache Landing Zone for the fastest backups, restores, and instant VM recoveries, plus a long-term deduplicated, virtual air-gapped retention repository with immutable objects, scale-out architecture, and comprehensive security features.

Andrews said: “ExaGrid prides itself on having a product that just works, is sized properly, is well-supported, and just gets the job done. We can back up these claims with our 95 percent net customer retention, [net promoter score] of +81, and the fact that 94 percent of our customers have our Retention Time-Lock for Ransomware Recovery feature turned on, 92 percent of our customers report to our automated health reporting system, and 99 percent of our customers are on our yearly maintenance and support plan.”

ExaGrid deal history (B&F chart)

He told us: “We have paid off all debt and have zero debt of any kind and continue to generate cash every quarter. [We] have not raised any capital in 12 years [and have] very healthy financials and are in control of our own destiny.”

The company sells to upper mid-market to large enterprise customers, with 45 percent of bookings coming from existing customers and 55 percent from new logos. Andrews tells us: “Our product roadmap throughout 2025 will be the most exciting we have ever had, especially in what we will announce and ship in the summer of 2025. We don’t see the competitors developing for backup storage. Our top competitors in order are Dell, HPE, NetApp, Veritas Flexscale Appliances, Pure, Hitachi, IBM, Huawei. Everyone else is a one-off sighting here and there.”

CTERA launches Data Intelligence to link file data to AI models

CTERA says its new Data Intelligence offering supports retrieval-augmented generation (RAG) by linking its cloud file services data to customer-selected GenAI models and giving them real-time private context to help prevent inadequate responses.

Oded Nagel, CTERA

GenAI chatbots can provide poor responses to user requests when relying solely on their generic training data sets. Giving them access to an organization’s proprietary data is like giving a hotel’s concierge staff access to its guest register so they know who is staying in the hotel and what their preferences are, making for a much better service. CTERA thinks that, while AI services offer some methods for uploading files, these integrations fall short in their ability to handle live enterprise file storage, impose extensive network and compute overhead, and expose organizations to sensitive data leakage.

CEO Oded Nagel stated: “With CTERA Data Intelligence, we combine our expertise in secure file services, connecting distributed object and file data sources with automated AI data processing. We’re enabling AI that truly understands your data – delivering relevant, up-to-date insights grounded in the unique context of your enterprise, all while ensuring the highest levels of data privacy and security.”

CTERA Data Intelligence capabilities include:

  • AI that knows your data: A semantic RAG engine that uses the CTERA Notification API to constantly update its knowledge with live data sources.
  • Identity-based enforcement: Restricts AI data visibility based on the granular file-level access permissions (ACLs) of the currently logged-in user; a sketch of the idea follows this list.
  • AI experts: Customizable virtual assistants with predefined personas and domain-specific knowledge scopes.
  • Fully private: On-premises option for organizations that wish to keep their data and LLM 100 percent private.
  • Distributed ingestion: In-cloud and edge data processing using the CTERA Direct protocol, eliminating costly egress fees and latency impact.
  • OpenAI and Microsoft Copilot integration: Agentic extensions for popular AI services with SSO authentication.
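To illustrate the identity-based enforcement idea in the most general terms, retrieval results can be filtered against per-file ACLs before anything reaches the model. This is a conceptual sketch, not CTERA’s code:

```python
def filter_by_acl(retrieved_chunks, user, user_groups):
    """Drop retrieved chunks the logged-in user is not entitled to see."""
    allowed = []
    for chunk in retrieved_chunks:
        acl = chunk["acl"]  # e.g. {"users": {"alice"}, "groups": {"finance"}}
        if user in acl.get("users", set()) or user_groups & acl.get("groups", set()):
            allowed.append(chunk)
    return allowed

chunks = [
    {"text": "Q3 board pack summary", "acl": {"groups": {"finance"}}},
    {"text": "HR salary review notes", "acl": {"groups": {"hr"}}},
]

# Only ACL-cleared chunks are passed on as context for the LLM prompt
context = filter_by_acl(chunks, user="alice", user_groups={"finance"})
print([c["text"] for c in context])  # -> ['Q3 board pack summary']
```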
Aron Brand, CTERA

We asked CTERA CTO Aron Brand some questions to find out more.

Blocks & Files: How is ingested data vectorized? Which vectorizing engine is used?

Aron Brand: The vectorizing engine is configurable. Customers have a choice between public embedding models like OpenAI’s and private Ollama-based embedding models of their choice.

Blocks & Files: How does the system recognize incoming data and trigger the vector engine?

Aron Brand: The CTERA Notification Service, which is part of the CTERA SDK, provides an API for microservices to subscribe to file events that meet a certain filter criteria. CTERA Data Intelligence uses this API to trigger the ingestion when needed according to a predefined policy.
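In outline, that event-driven ingestion pattern looks like the sketch below. The event feed, filter policy, and field names are hypothetical stand-ins, not the actual CTERA SDK API:

```python
# Hypothetical event feed standing in for a file-notification subscription;
# a real implementation would subscribe via the vendor's SDK instead.
def file_events():
    yield {"path": "/finance/q3-report.pdf", "action": "write", "size": 1_200_000}
    yield {"path": "/tmp/cache.bin", "action": "write", "size": 50}
    yield {"path": "/hr/policy.docx", "action": "delete", "size": 0}

INGEST_SUFFIXES = (".pdf", ".docx", ".pptx")  # policy: which file types feed the RAG index

def should_ingest(event):
    """Filter criteria: only writes to supported document types get vectorized."""
    return event["action"] == "write" and event["path"].endswith(INGEST_SUFFIXES)

def ingest(event):
    print(f"vectorizing and indexing {event['path']}")  # embedding + index update would go here

for event in file_events():
    if should_ingest(event):
        ingest(event)
```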

Blocks & Files: How are the vector embeddings stored? Where is the index of vector embeddings (used for semantic search) stored, i.e. in which public cloud repositories?

Aron Brand: The embedding vectors and index are stored in a database that is part of the offering. Customers have the option to deploy it on-prem or in-cloud based on their security and performance needs.

Blocks & Files: Does CTERA have a capability for supporting edge location GenAI LLMs?

Aron Brand: Yes. This is a multi-LLM platform. Meaning you can configure multiple LLM targets to work simultaneously based on the usage profile. Those LLMs can be either public (like OpenAI), hosted open source (like groq or Together AI), or fully private (Ollama-based).

Blocks & Files: What kind of unstructured data can be ingested from which source systems?

Aron Brand: Our focus for now is on data from the CTERA global file system. So any files stored in the CTERA Portal can be ingested. We support a range of file formats, including PDF, office files, and media.

Blocks & Files: Is a demo available showing a customer employee interaction with a CTERA RAG system?

Aron Brand: See this video

CTERA Data Intelligence dashboard

Blocks & Files: What about customizable virtual assistants, are these agentic?

Aron Brand: The CTERA Experts are virtual assistants that have predefined content scope, agent profile, and target LLM. Customers can use them to create domain specific “experts” that can be used with the built-in web UI or in an agentic way from external AI services such as OpenAI and Copilot.

Blocks & Files: Tell us more about the OpenAI and Copilot integration.

Aron Brand: CTERA Data Intelligence is accessible from OpenAI and Copilot as an external agent. Customers can create a new GPT or Copilot app that is defined to use the CTERA Data Intelligence API for fetching data from CTERA and embedding it into its context.

On a general note, you might have noticed some analogies to our global file system architecture: Multi-Cloud/Multi-LLM, Edge Filer/Edge Ingestion Data Connectors, file system metadata database/embedding database, public-hybrid-private deployment options, end-to-end file permissions enforcement, etc. This is no coincidence, as the challenges enterprises are facing when extending AI services to corporate unstructured data are similar to those faced when creating global file systems, having to deal with distributed locations, security compliance, and flexible deployment options.

The first stage of our journey was being able to connect unstructured data under a global file system, as done by the CTERA File Services Platform, which is a prerequisite for enabling AI. Once we solved that, the next phase is moving from regular file system metadata to AI metadata and enabling AI inference workflows and agentic access. This is what CTERA Data Intelligence is all about.

***

Camberley Bates, Chief Technology Advisor at The Futurum Group, said: “The success and progress of enterprise AI projects are very much linked to the data quality including classification, security, and data governance requirements … By directly addressing these requirements, CTERA Data Intelligence has the potential to significantly accelerate AI adoption across the enterprise landscape.”

Commvault expands cloud and cyber resilience solutions, including Cloud Rewind

Commvault is announcing new cloud support and anti-malware updates at its London Shift customer event, with full Commvault Cloud availability on AWS, protection of the Google Workspace suite, point-in-time recovery and rebuild of SaaS suite backups, as well as a partnership with Pure Storage to support financial customers affected by the EU’s Digital Operational Resilience Act (DORA).

Commvault Cloud is the new branding for its Metallic SaaS data protection service. DORA is a set of EU regulations to enhance the cyber resilience of financial institutions aiming to ensure they can continue to function during cyberattacks or other potentially disastrous IT incidents. It is scheduled to come into force from January 2025.

Pranay Ahlawat, Commvault

Pranay Ahlawat, Commvault Chief Technology and AI Officer, stated: “We are proud to extend the full power of the Commvault Cloud platform and cloud-native solutions to AWS. We believe our game-changing technology will empower joint customers to recover faster, mitigate threats more effectively, and enhance their cyber resilience strategies.”

Offerings that will be available to AWS customers include: 

  • Cloud Rewind: Based on Appranix technology, Cloud Rewind acts as an AWS time machine, enabling customers to “rewind” to the last clean copy of their data, recover that data, and automate the cloud application rebuild process so that businesses can get back to normal in minutes.
  • Cyber Resilience for Amazon S3: Through its recent acquisition of Clumio, Commvault will be introducing new technology in the coming months that will also bring time machine capabilities to Amazon Simple Storage Service (S3) customers. In the event of an attack, this technology will allow customers to revert quickly to a clean copy of data that has not been infiltrated with malware.
  • Air Gap Protect: This will provide AWS customers with immutable, isolated copies of data in a Commvault tenant as a service, giving AWS customers another way to keep their data safe and resilient. 
  • Cleanroom Recovery: This is already present in Azure and being extended to AWS. It will allow organizations to automatically provision recovery infrastructure, enabling recovery to an isolated location in AWS and restore infrastructure workloads like Active Directory and production workloads. They can also conduct forensics in this clean, safe location, and it enables IT and security teams to test their cyber recovery plans in advance so that they know that when they are hit, they can recover quickly.

Cloud Rewind is unique to Commvault. “When organizations are attacked, restoring the data is only half the battle,” the company says. “The truly laborious task is actually restoring the distributed cloud applications, which are used to run and power that data.”

A typical enterprise could run more than 350 SaaS apps and attack recovery can require these apps to be restored in a systematic way that takes a lot of time when done manually. Appranix technology automatically identifies and catalogs all cloud components in use, offering full visibility into what assets need protection and recovery, analyzing and defining the relationships and dependencies between various cloud components.

It builds an operational blueprint, capturing the data and the full map of applications, infrastructure, and networking configurations so that “systems can be restored with their complete operational blueprint intact, reducing guesswork for a thorough recovery … When a system is restored, all connected resources and services are aligned, with little to no human involvement.”
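Commvault doesn’t disclose Cloud Rewind’s internals, but the core idea of rebuilding in dependency order can be sketched as a topological sort over the captured blueprint (the component names below are invented):

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Blueprint fragment: each component maps to the components it depends on
blueprint = {
    "vpc": set(),
    "subnet": {"vpc"},
    "security_group": {"vpc"},
    "database": {"subnet", "security_group"},
    "app_server": {"subnet", "security_group", "database"},
    "load_balancer": {"app_server"},
}

# static_order() yields each component only after everything it depends on,
# so networking comes back before the database, and the database before the app tier
restore_order = list(TopologicalSorter(blueprint).static_order())
print(restore_order)
# e.g. ['vpc', 'subnet', 'security_group', 'database', 'app_server', 'load_balancer']
```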

Cloud Rewind supports all major public and private cloud platforms, including AWS, Google Cloud, and Microsoft Azure. 

Brian Brockway, Commvault

Brian Brockway, Commvault CTO, stated: “What we are doing with Cloud Rewind is unlike anything offered on the market today. In the ransomware era, recovering data is important, but it’s table stakes. We’re ushering in an entirely new chapter in cyber resilience that not only expedites data recovery, but recovery of cloud applications. This is the gold standard in recovery for a cloud-first world.”

A new Cyber Resilience Dashboard provides continuous ransomware readiness assessments and indicates gaps in resilience plans. It provides a view across the entire data estate, assessing components such as testing frequency and success, and availability of immutable air-gapped copies of critical data.

We understand that a coming Clumio capability will be the instant restore of massive S3 buckets with tens of billions of objects. Restoring single S3 objects is easy, but restoring buckets at massive scale is a different and vastly more difficult proposition. Clumio’s tech uses snapshotting and versioning with the ability to go back to a point in time and restore a bucket’s state.
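Conceptually, point-in-time restore of a single versioned S3 object can be done with standard AWS APIs along the lines below; Clumio’s bucket-scale approach is its own technology, and a real tool would add pagination and massive parallelism (bucket and key names are placeholders):

```python
from datetime import datetime, timezone
import boto3

s3 = boto3.client("s3")

def restore_object_to_point_in_time(bucket, key, cutoff):
    """Copy the newest object version created at or before the cutoff back on top."""
    versions = s3.list_object_versions(Bucket=bucket, Prefix=key).get("Versions", [])
    candidates = [v for v in versions if v["Key"] == key and v["LastModified"] <= cutoff]
    if not candidates:
        raise RuntimeError(f"no version of {key} at or before {cutoff}")
    target = max(candidates, key=lambda v: v["LastModified"])
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key, "VersionId": target["VersionId"]},
    )
    return target["VersionId"]

# Example: roll a single object back to its state as of October 1, 2024 (placeholder names)
restore_object_to_point_in_time("my-bucket", "reports/q3.csv",
                                datetime(2024, 10, 1, tzinfo=timezone.utc))
```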

Google Cloud

Commvault has launched SaaS-based Cloud Backup & Recovery for Google Workspace. This provides Gmail, Google Drive, Shared Drives, and built-in Google Cloud Storage for Google Workspace protection. Google Workspace customers will be able to effortlessly discover active data, rapidly recover from inadvertent or malicious data deletion, and maintain a copy of valuable data in the Commvault Cloud for compliance mandates.

It is expanding its Google Cloud capabilities with Cloud Rewind, which integrates Appranix’s application rebuild capabilities into the Commvault Cloud platform, providing Google Cloud customers with an automated, cloud-native rebuild solution to recover from cyber incidents more rapidly.

Earlier this year, Commvault announced support for object retention lock for Google Cloud Storage, providing customers with immutable cloud storage on Google Cloud infrastructure.

Pure Storage

Commvault announced a joint cyber-readiness offering with Pure Storage to deliver the ability to continuously test recovery in secure, isolated environments, either on demand in cloud-isolated tenants via Commvault’s Cleanroom Recovery solution or within isolated recovery environments. The two say that customers “can easily deliver rapid, frictionless recovery of clean data to isolated environments with the flexibility needed to meet operational and data sovereignty requirements.”

This is not only applicable to DORA but also supports compliance with other cybersecurity and privacy regulations such as the EU’s NIS2 Directive and e-mandates from the Reserve Bank of India (RBI) for recurring transactions.

Patrick Smith, Field CTO, EMEA, Pure Storage, said: “Through our partnership with Commvault, we are giving financial institutions critical tools that not only help comply with regulations like DORA but advance their cyber resilience to help ensure enterprise data remains secure, protected, and if necessary, recoverable.”

Availability

Commvault’s suite of offerings for AWS will be generally available in the coming months. Following that, joint customers will be able to access them in the AWS Marketplace. Commvault Cloud Backup & Recovery for Google Workspace is targeted for availability by the end of the calendar year. Cloud Rewind and the Cyber Resilience Dashboard will be generally available in the coming months.

AI vectorization signals end of the unstructured data era

Analysis: A VAST Data blog says real-time GenAI-powered user query response systems need vector embeddings and knowledge graphs created as data is ingested, not in batch mode after the data arrives.

VAST co-founder Jeff Denworth writes in a post titled “The End of the Unstructured Data Era” about retrieval-augmented generation (RAG): “AI agents are now being combined to execute complex AI workflows and enhance each other. If a business is beginning to run at the speed of AI, it’s unthinkable that AI engines should wait minutes or days for up-to-date business data.”

Jeff Denworth

He says early “solutions for chatbots … front-ended small datasets such as PDF document stores,” but now we “need systems that can store and index trillions of vector embeddings and be able to search on them in real time (in order to preserve a quality user experience).”

A vector embedding is a calculated array of numbers (a vector) representing an unstructured data item’s characteristics along dimensions such as color, position, shape components, aural frequency, and more. A document, file, data object, image, video, or sound recording can be analyzed by a vectorization engine, which generates hundreds, thousands, or more vector embedding values that characterize the item. These vector embeddings are indexed and stored in a vector database. When a user makes a request to a GenAI chatbot, that request is turned into vectors and the vector database is searched for similar vectors inside a so-called semantic space.
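
A minimal, self-contained sketch of that query flow, using random placeholder vectors in place of a real embedding model and a brute-force cosine-similarity search in place of a production vector database:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "vector database": one 512-dimensional embedding per item.
item_ids = ["contract.pdf", "switch-photo.jpg", "earnings-call.mp3"]
index = rng.standard_normal((len(item_ids), 512)).astype(np.float32)
index /= np.linalg.norm(index, axis=1, keepdims=True)  # normalize once

def search(query_vec: np.ndarray, k: int = 2) -> list[tuple[str, float]]:
    """Return the k items whose embeddings are most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = index @ q                       # cosine similarity: all vectors normalized
    top = np.argsort(scores)[::-1][:k]
    return [(item_ids[i], float(scores[i])) for i in top]

# In a real system the query embedding would come from the same model that
# vectorized the stored items; here it is just another random vector.
print(search(rng.standard_normal(512).astype(np.float32)))
```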

The user could supply an image and ask “What is the name of this image?” The image and question are vectorized and the chatbot semantically searches its general training dataset for image vectors that are closest to the user-supplied image, and responds with that painting’s name: “It is the Mona Lisa painted by Leonardo da Vinci,” or, more useful to an enterprise: “It is a Cisco Nexus 9200 switch.”

To improve response accuracy, the chatbot can be given access to a customer organization’s own data, and its response is augmented with material retrieved from that data, hence retrieval-augmented generation.
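
In sketch form, and using placeholder embed, search, and generate callables standing in for a real embedding model, vector database lookup, and LLM call, retrieval-augmented generation amounts to folding the retrieved enterprise content into the prompt:

```python
def answer_with_rag(question: str, embed, search, generate) -> str:
    """Hedged RAG sketch: embed the question, retrieve matching enterprise
    snippets, and prepend them to the prompt so the model answers from them.
    `embed`, `search`, and `generate` are placeholders; `search` is assumed
    to return (snippet, score) pairs."""
    hits = search(embed(question), k=3)
    context = "\n\n".join(snippet for snippet, _score in hits)
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate(prompt)
```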

A knowledge graph is generated from structured or block data and stores and models relationships (events, objects, concepts, or situations) between so-called head and tail entities, with a “triple” referring to a head entity + relationship + tail entity. Such triples can be linked, and the relationships are the semantics. Crudely speaking, they describe how pairs of data items are linked in a hierarchy. Chatbots at the moment do not use knowledge graphs, but suppliers like Illumex are working on what we could call knowledge graph-augmented retrieval.
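
A toy illustration of the triple structure described above, using an in-memory list of made-up entities rather than a real graph database:

```python
# (head entity, relationship, tail entity) triples -- the relationships
# are the semantics linking the entities.
triples = [
    ("Nexus 9200", "is_a", "network switch"),
    ("Nexus 9200", "manufactured_by", "Cisco"),
    ("network switch", "part_of", "datacenter fabric"),
]

def tails(head: str, relation: str) -> list[str]:
    """Follow one hop in the graph: everything `head` relates to via `relation`."""
    return [t for h, r, t in triples if h == head and r == relation]

print(tails("Nexus 9200", "manufactured_by"))   # ['Cisco']
```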

Denworth’s company has announced InsightEngine, an AI-focused system for real-time data ingestion, vectorization, storage, and response, and his blog extols its virtues and explains the need for its development.

He writes: “We’re watching the exponential improvements in embedding models and seeing a time in the not-too-distant future where these tools will be of a caliber that they can be used to index the entirety of an enterprise knowledge base and even help to curate enterprise data. At that point, hyper-scale vector stores and search tools are table stakes. The trillion-way vector problem is already a reality for large AI builders like OpenAI and Perplexity that have gone and indexed the internet.”

As well as vectorizing existing data sets, companies will “need to be able to create, store and index embeddings in real time.”

“I think of vectors and knowledge graphs as just higher forms of file system metadata,” Denworth writes. “Why wouldn’t we want this to run natively from within the file system if it was possible?”

Existing file and object systems “force IT teams to build cumbersome retrieval pipelines and wrestle with the complexity of stale data, stale permissions and a lot of integration and gluecode headaches … The idea of a standalone file system is fading as new business priorities need more from data infrastructure.”

Let’s think about this from a business perspective. A business has data arriving or being generated in multiple places inside its IT estate: the mainframe app environment, distributed ROBO systems, datacenter x86 server systems, top-tier public cloud apps, SaaS apps, security systems, data protection systems, employee workstations, and more.

Following Denworth’s logic, all of this data will need vectorizing in real time, at the point and time of ingest or generation, and then storing in a central or linked (single namespace) database so that semantic searches can be run against it. That means all the applications and storage systems will need to support local and instant vectorization, and knowledge graph generation as well.
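
A bare-bones sketch of what “vectorize at the point of ingest” could look like, with dict-backed stores and an embed_fn placeholder standing in for whatever model and database a given system would actually use:

```python
import hashlib
from typing import Callable

def ingest(object_bytes: bytes,
           object_key: str,
           blob_store: dict,
           vector_store: dict,
           embed_fn: Callable[[bytes], list[float]]) -> None:
    """Store the object and its embedding in one step, so search indexes
    are never waiting on a separate batch vectorization job."""
    blob_store[object_key] = object_bytes
    vector_store[object_key] = embed_fn(object_bytes)

# Toy stand-ins: in-memory stores and a fake 4-dimensional "embedding".
blobs, vectors = {}, {}
fake_embed = lambda b: [byte / 255 for byte in hashlib.sha256(b).digest()[:4]]
ingest(b"quarterly report text ...", "reports/q3.txt", blobs, vectors, fake_embed)
print(vectors["reports/q3.txt"])
```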

There will need to be some form of vectorization standard developed, and storage capacity would need to be set aside for stored vectors. How much? Let’s take a PDF image. Assuming 512 vector dimensions and a 32-bit floating point number per dimension, we’d need around 2 KB of capacity per item. Increase the dimension count and up goes the capacity; halve the floating point precision and down it goes.
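
The arithmetic behind that 2 KB figure, and how it moves with dimension count and precision:

```python
def embedding_bytes(dimensions: int, bits_per_dimension: int) -> int:
    """Raw storage for one embedding, ignoring index and metadata overhead."""
    return dimensions * bits_per_dimension // 8

print(embedding_bytes(512, 32))    # 2048 bytes, roughly 2 KB
print(embedding_bytes(1536, 32))   # 6144 bytes: more dimensions, more capacity
print(embedding_bytes(512, 16))    # 1024 bytes: half precision, half the capacity
```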

This means that file-handling and object systems from Dell, DDN, HPE, Hitachi Vantara, IBM, NetApp, Pure Storage, Qumulo etc. would need to have vectorization, embedding storage, and metadata added to them – if Denworth is right. Ditto all the data lake and lakehouse systems. 

Ed Zitron

AI bubble or reality

Of course, this will only be necessary if the generative AI frenzy is not a bubble and develops into a long-lived phenomenon with real and substantial use cases emerging. Commentators such as Ed Zitron have decided that it won’t. OpenAI and its like are doomed, according to critics, with Zitron writing: “it feels like the tides are rapidly turning, and multiple pale horses of the AI apocalypse have emerged: a ‘big, stupid magic trick’ in the form of OpenAI’s (rushed) launch of its o1 (codenamed: strawberry) model, rumored price increases for future OpenAI models (and elsewhere), layoffs at Scale AI, and leaders fleeing OpenAI. These are all signs that things are beginning to collapse.”

But consultancies like Accenture are going all-in on chatbot consultancy services. An Accenture Nvidia Business Group has been launched with 30,000 professionals receiving training globally to help clients reinvent processes and scale enterprise AI adoption with AI agents. 

Daniel Ives

Financial analysts like Wedbush also think the AI hype is real, with Daniel Ives, managing director, Equity Research, telling subscribers: “The supply chain is seeing unparalleled demand for AI chips led by the Godfather of AI Jensen [Huang] and Nvidia and ultimately leading to this tidal wave of enterprise spending as AI use cases explode across the enterprise. We believe the overall AI infrastructure market opportunity could grow 10x from today through 2027 as this next generation AI foundation gets built with our estimates a $1 trillion of AI capex spending is on the horizon the next three years.

“The cloud numbers and AI data points we are hearing from our field checks around Redmond, Amazon, and Google indicates massive enterprise AI demand is hitting its next gear as use cases explode across the enterprise landscape.”

Ben Thompson

Stratechery writer Ben Thompson is pro AI, but thinks it will take years, writing: “Executives, however, want the benefit of AI now, and I think that benefit will, like the first wave of computing, come from replacing humans, not making them more efficient. And that, by extension, will mean top-down years-long initiatives that are justified by the massive business results that will follow.”

Who do we believe? Zitron or the likes of VAST Data, Accenture, Wedbush, and Thompson? Show me enterprises saving or making millions of dollars from GenAI use cases with cross-industry applicability and the bubble theory will start receding. Until that happens, doubters like Zitron will have an audience.

Catalogic adds clean-room recovery, immutability to DPX

Privately held Catalogic has added a Deletion Lock feature and a Cyber Resilient Recovery feature with a clean-room environment for verifying that restores are free from malware.

Catalogic Software, founded in 2003, had three product lines by 2020: ECX for copy data management, DPX for endpoint and server data protection, and its CloudCasa SaaS service for Kubernetes app protection. In May 2021, IBM, which accounted for approximately 80 percent of ECX sales, bought the ECX product line. A CloudCasa spinoff was envisaged last year but has been placed on the back burner. CloudCasa partnered with IONOS Cloud in July and received a Persistent Volume upgrade last month. Now DPX v4.11 has been upgraded with immutability and clean room recovery.

Pawel Staniec, Catalogic

Catalogic CTO Pawel Staniec stated: “DPX features a software-defined storage layer with built-in ransomware protection, offering a unique ‘out-of-the-box’ solution. Unlike other vendors that require assembling various components to achieve the same functionality, DPX delivers it all at a fraction of the cost.”

The Deletion Lock feature prevents backed-up data from being deleted or altered. The clean-room facility creates a quarantined space isolated from the customer’s network. Within it, admins can check and verify that a backup is free from malware before restoring its data to production.

Catalogic has a backup repository and storage appliance called vStor. vStor Snapshot Explorer is an anti-malware file-scanning facility, distributed as an agent installable on Windows and Linux hosts. It exposes a REST API and has a built-in plugin architecture that can be used for integration with existing security or data protection infrastructure.

A vStor GuardMode feature provides immutable backups with retention policies, access controls, logging, and auditing.

The latest version of DPX integrates both vStor Snapshot Explorer and vStor GuardMode. Snapshot Explorer now scans snapshot backups to verify they are free from ransomware and enables granular file recovery even if catalog information is lost.