
Stanford team proposes hybrid gain cell memory to boost GPU performance

A Stanford University team is researching gain cell memory, which combines SRAM and DRAM features to speed up GPU memory access, validating Israeli startup RAAAM Memory Technologies’ system-on-chip (SoC) development.

H.-S. Philip Wong

Gain cells use an additional transistor to amplify memory cell read signals and boost data access speed compared to DRAM. The research team, led by H.-S. Philip Wong, a Stanford Professor of Electrical Engineering, says there is a memory wall problem: a GPU’s fast but expensive on-chip SRAM has to be loaded with data from a server’s comparatively slow main memory (DRAM). This takes too much time and electrical energy, and it would be better if the two memory types could be combined, or SRAM replaced by something better.

An SRAM (static random access memory) cell is quite large in area, needing four transistors to store a bit and two to control access to the cell. The ability to reduce the size of SRAM cells is running out, limiting SRAM chip capacity increases.

DRAM cells are simpler and smaller. They need a single transistor, with its source connected to a bit line and its gate to a word line, plus a capacitor to store charge: a 1T1C architecture. They are volatile, needing constant refreshing of their bit state, and suffer from destructive reads, with the bit state having to be re-established after every read.

Embedded DRAM (eDRAM) could be placed on the GPU system-on-chip (SoC) and so increase capacity, but Wong’s team says its capacitor process is incompatible with logic fabrication. What is needed is on-chip memory that combines SRAM speed and DRAM capacity, which can be done by having DRAM-like cells with separate read and write (storage) transistors. No extra capacitor is needed because this design amplifies the read signal.
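
As a rough conceptual illustration of how a two-transistor gain cell behaves (our own simplified sketch, not the Stanford device model; the retention, gain, and sense-threshold figures are placeholders), the write transistor deposits charge on a storage node and the read transistor turns that charge into an amplified, non-destructive read signal:

```python
import math

class GainCell:
    """Toy 2T gain cell: the write transistor charges a storage node (the read
    transistor's gate); the read transistor turns the remaining charge into an
    amplified current without disturbing it, so reads are non-destructive."""

    def __init__(self, retention_s=5000.0, gain=50.0, sense_threshold=5.0):
        self.retention_s = retention_s          # assumed retention time constant (s)
        self.gain = gain                        # assumed read amplification factor
        self.sense_threshold = sense_threshold  # assumed sense-amp trip point
        self.charge = 0.0                       # normalized stored charge (0..1)

    def write(self, bit: int) -> None:
        self.charge = float(bit)                # write path via the write transistor

    def read(self, elapsed_s: float) -> int:
        remaining = self.charge * math.exp(-elapsed_s / self.retention_s)  # slow leakage
        return 1 if self.gain * remaining > self.sense_threshold else 0

cell = GainCell()
cell.write(1)
print(cell.read(elapsed_s=3600))    # an hour later: still reads 1
print(cell.read(elapsed_s=60000))   # well past retention: the charge has leaked away -> 0
```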

Wong’s team first devised a twin oxide-semiconductor (OS-OS) gain cell, but then developed a hybrid gain cell in which the two transistors are made from different materials, as having two oxide-semiconductor transistors slows bit-state signal reading.

Wong’s team developed an ALD ITO (atomic layer deposition indium tin oxide) FET write transistor paired with a Si PMOS (silicon P-channel metal-oxide-semiconductor) read transistor. A scientific paper on the subject published by the IEEE can be found here (subscription required). According to research results presented at the IEEE Symposium on VLSI Technology and Circuits in June, the gain cell device retains data (its bit state) for more than an hour (>5,000 seconds), contrasting with DRAM’s need to be refreshed every 64 ms, and data is read up to 50x faster than from an OS-OS gain cell, with a 1-10 ns access time.
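
Some back-of-the-envelope arithmetic on those retention figures shows why this matters for refresh overhead (our own sketch, using only the numbers quoted above):

```python
# How often each technology must be refreshed to hold a bit for one hour,
# using the 64 ms DRAM refresh interval and >5,000 s gain cell retention
# figures quoted above.
HOUR_S = 3600
dram_refresh_interval_s = 0.064      # DRAM cells refreshed every 64 ms
gain_cell_retention_s = 5000         # hybrid gain cell holds its state for >5,000 s

dram_refreshes_per_hour = HOUR_S / dram_refresh_interval_s
gain_cell_refreshes_per_hour = HOUR_S // gain_cell_retention_s

print(f"DRAM: {dram_refreshes_per_hour:,.0f} refresh cycles per bit per hour")      # 56,250
print(f"Gain cell: {gain_cell_refreshes_per_hour} refresh cycles per bit per hour")  # 0
```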

Also, PMOS transistors consume very little power in the off state. Gain cell reads are non-destructive, and preliminary results show the cells to be fairly close to DRAM in several characteristics.

RAAAM says its GCRAM (gain-cell random access memory) technology can be used as a drop-in SRAM replacement in any SoC, allowing for lower fabrication costs through reduced die size or enhanced system performance by increasing memory capacity within the same die size. Its technology is described in a downloadable slide deck.

In essence, both GCRAM and the Wong team’s hybrid gain cell memory are intended to replace SRAM rather than DRAM. Gain cell memory is slower than SRAM in some respects, but it makes up for that with much higher capacity, reducing the existing SRAM-to-DRAM data transfer traffic and thus increasing overall system performance.

Potential customers are producers of processor SoCs for GPUs and CPUs in datacenters and also embedded systems.

The role of Slough in AI infrastructure

PARTNER CONTENT: The UK is at a crossroads. We have the talent, the innovation, and the investment in artificial intelligence (AI) that can make us a true global leader.

But there’s one crucial factor that’s often overlooked in this conversation: infrastructure. I’m not talking about roads or bridges, but the digital infrastructure – the high-performance data centres, the connectivity, and the raw computing power that AI demands.

As we look to position the UK as a frontrunner in the global AI race, it’s clear to me that our digital infrastructure will determine whether we lead or fall behind. AI has the potential to totally transform industries, but to keep up – and more importantly, to stay ahead – we need to make sure we’ve got the right foundations in place.

AI is the future, but I think many of us underestimate just how much infrastructure it requires. It’s not just about clever algorithms and brilliant minds. AI needs vast amounts of data to operate, and that data must be stored, processed, and analysed at lightning speed. The reality is the massive computational power required to train and run AI models cannot be handled by standard infrastructure.

From autonomous vehicles to real-time financial algorithms and advanced medical diagnostics, AI applications are incredibly demanding. They need low-latency, high-capacity data centres to function effectively. And here’s the thing: AI isn’t just data-intensive; it’s power-hungry. To support the growth of AI, we need facilities that are not only powerful but also energy-efficient, because sustainability is non-negotiable.

Anticipating AI’s impact early

At Digital Realty, we recognised early on that AI wasn’t just another technological trend – it was a complete game changer. Back in the early 2000s, we saw the growing momentum behind machine learning and deep learning, and we started planning for how AI would reshape industries. By 2017, we had developed a roadmap to help our customers leverage AI, gaining invaluable insights from early adopters in fields like healthcare, finance, and logistics.

We knew that data was going to be at the heart of this AI revolution, and our mission became clear: help businesses manage, store, and derive value from their data. That focus has shaped our strategy ever since. This year, for example, we hosted one of the world’s most powerful AI supercomputers in Copenhagen, and we expanded our collaboration with Oracle to boost AI adoption in enterprises across industries. Over the last two years alone, we’ve supported more than 60 AI-driven projects, working with companies to unlock the power of AI.

This brings me to our most recent investment here in the UK: our Slough data centre campus, which we acquired for $200 million in July.

The Slough campus features two data centres with a combined capacity of 15 megawatts (MW), excellent connectivity, and room for future expansion. It’s not just a powerful facility; it’s already an established hub for over 150 businesses, from technology companies to financial services firms. These companies are using over 2,000 cross-connects to fuel their operations, and I firmly believe that the Slough campus will become an even more critical asset as AI continues to grow.

With its AI-ready infrastructure, Slough is designed to support high-density workloads, making it ideal for companies deploying advanced AI and machine learning applications at scale. Its scalable design allows for dense server deployments, ensuring businesses are equipped to handle evolving AI demands. The facility also offers robust connectivity and direct access to major cloud service providers, facilitating low-latency operations and enabling seamless integration into hybrid or multi-cloud environments.

Consistent with our ambitious commitment to sustainability, our new Slough campus is powered entirely by renewable energy, aligning with our broader practice of matching 100 percent of the energy used in our European portfolio with renewable sources and our goal of achieving carbon neutrality for our European portfolio by 2030.

This acquisition reinforces Digital Realty’s position in the UK market as a leader in digital infrastructure and a key home for AI innovation.

Supporting AI’s high-density and cooling needs

AI’s impact on infrastructure is something we anticipated from the start. Over the years, we’ve invested heavily in making sure our data centres are built to handle AI’s unique demands. Today, we offer high-density deployments that can support up to 150 kilowatts (kW) per rack. That means businesses running AI workloads don’t have to worry about hitting power or space constraints – they can scale their operations as needed.

But power isn’t the only challenge AI presents – there’s also the heat it generates. AI systems run hot, and advanced cooling solutions are essential to keep them operating efficiently. That’s why over half of our 170+ global data centres, including many here in Europe, now support direct liquid cooling. This technology ensures we can maintain the performance of AI systems while significantly reducing their energy consumption. In a world increasingly focused on sustainability, this kind of innovation isn’t just a “nice-to-have” – it’s essential.

What’s important here is that we’ve designed these data centres with AI in mind. We’re making sure businesses have the flexibility and scalability they need to grow as AI’s role in their operations expands. With our modular design philosophy, we can cater to a variety of deployment types, giving companies the room to scale their AI infrastructure without having to overhaul everything every few years.

So where does this leave the UK in the global AI race? Well, we have the talent, we have the innovation, and now – with investments like our Slough campus – we’re building the infrastructure needed to support the next generation of AI-driven growth.

If there’s one thing I’ve learned from decades of technological change, it’s that those who invest early in the right infrastructure are the ones who come out ahead. AI is no different. As companies across the UK continue to embrace AI, they’ll need the support of high-capacity, scalable, and energy-efficient data centres.

That’s where we come in. At Digital Realty, we’ve always been committed to helping our customers navigate major technological shifts, from the rise of the web to the mobile revolution and beyond. Now, we’re doing the same with AI, providing the digital backbone businesses need to thrive.

If the UK is to solidify its position as an AI powerhouse, we must continue to invest in infrastructure that can keep up with the demands of tomorrow’s AI breakthroughs. The Slough campus is just the beginning. The future of AI is here, and it’s our job to make sure the UK has everything it needs to lead.

Contributed by Digital Realty.

Hydrolix pipes in more data lake growth via key partners

Streaming data lake supplier Hydrolix has reported 12x customer growth over the last 12 months as it widens its integration with other platforms and takes advantage of its relationship with content delivery network Akamai.

At the IT Press Tour of Boston and Massachusetts, Blocks & Files heard that worldwide customer numbers for the Portland, Oregon, company had reached 287, with 59 of them now in the EMEA region.

The company boasted of large worldwide brands using its Hydrolix product to squeeze their customer and operational data, with users across the media, consumer packaged goods, gaming, finance, automotive, sportswear, and security segments, among others.

The idea is that Hydrolix offers all the properties of a traditional data lake, including a flexible schema, raw storage, decoupled storage, and independently scalable query and ingest compute.

Use cases include platform and network observability, compliance, SIEM, real user monitoring, ML/AI anomaly detection, and bot, piracy, and fraudulent user detection, the company says. The data lake can be deployed on Akamai’s LKE, Google GKE, Amazon EKS, and Azure AKS, and stored in S3-compatible object stores.

Hydrolix diagram

Hydrolix’s data ecosystem includes connectors to leading data platforms like Splunk, Spark, and Kibana (ELK), and works complementarily with BigQuery and Snowflake.

The solution is sold direct and through partners like Akamai, which white-label the technology. For instance, on the IT Press Tour we heard from executives from both Akamai and Boston-headquartered sports shoe and apparel manufacturer New Balance about how Hydrolix’s tech is used effectively.

Hydrolix comparison slide

New Balance uses the Akamai content delivery network to distribute content and collect analytics on its effectiveness, then stores that data in the Hydrolix streaming data lake.

This use case supports the Akamai Connected Cloud, a massively distributed edge and cloud platform that brings together core cloud computing and edge computing, along with security and content delivery.

TrafficPeak is the observability product built on the Akamai Connected Cloud – a white-labeled Hydrolix streaming data lake that enables users to ingest, monitor, query, store, and analyze massive amounts of data in real time, at “75 percent less cost than other providers,” claim the partners.

Marty Kagan, Hydrolix

The Akamai connections run deep. Marty Kagan, CEO and co-founder of Hydrolix, used to be Boston-headquartered Akamai’s director of technology, international, based in Paris. Other Hydrolix executives also used to work for the CDN provider.

Maybe it’s not surprising, therefore, that Akamai has made a minority investment in Hydrolix. The company has so far raised $65 million in total funding, including a B round for $35 million completed this May.

Asked whether the firm was profitable yet, Kagan said: “We’re not profitable so far, we have a long runway. We expect to employ 140 by the end of the year, and are hiring staff in Singapore, India and Japan, for instance.”

He added: “We didn’t develop Hydrolix just for the CDN market, but it’s ideal in terms of penetrating the overall data lake market. Akamai are a revenue generator. Before they came along we were mainly dealing with terabytes, not petabytes – they are selling our solution to the world’s 10,000 biggest companies.”

Hydrolix ARR history

Hydrolix charges for its streaming data lake per gigabyte of ingested data.

Microsoft and AWS lead Gartner report on distributed HCI market

The top two public clouds – Amazon’s and Microsoft’s – lead the distributed HCI market in Gartner’s Magic Quadrant (MQ) report, with Nutanix and Broadcom (VMware) following close behind.

Gartner stopped doing MQ reports on the hyperconverged infrastructure (HCI) appliance market last year and transitioned to looking at distributed hybrid infrastructure (DHI) suppliers, which provide edge and datacenter/colo on-premises systems as well as public cloud HCI offerings.

The research house defines distributed hybrid infrastructure as “offerings that deliver cloud-native attributes, which can be deployed and operated where the customer prefers.” It contrasts this with public cloud IaaS, “which is based on a centralized approach.”

Its strategic planning assumption is that: “By 2026, 50 percent of enterprises will initiate proofs of concept for alternative distributed hybrid infrastructure (DHI) products to replace their VMware-based deployments and embrace hybrid cloud infrastructure delivery, up from 10 percent in 2024.” That’s good news for Nutanix and Oracle in North America and elsewhere, and the Chinese distributed HCI suppliers who are subject to geopolitical US-China tensions and effectively locked out of American-influenced markets.

Gartner’s first distributed HCI MQ lists nine suppliers, with Microsoft Azure, AWS, Nutanix, Broadcom, and Oracle classed as Leaders.


There are no Challengers and three Niche Players, all Chinese suppliers: Alibaba Cloud, Huawei, and Tencent Cloud. IBM is the sole Visionary.

Appliance-based HCI systems from Dell (VxRail and PowerFlex), HPE (Alletra 5000 and SimpliVity), Quantum (Pivot3), and Scale Computing (HyperCore) are not mentioned in this MQ. The basic inclusion criterion they would have to meet is a full public cloud port, followed by other entry qualifications:

  • Must show evidence of 60 enterprise customers deploying products in distributed hybrid infrastructure scenarios or must have reported over $50 million in ARR contract value as of 1 May 2024.
  • Must show evidence that all DHI production customer deployments are across on-premises and at least one hyperscale strategic public cloud environment (see Magic Quadrant for Strategic Cloud Platform Services). For hyperscale cloud providers, there should be evidence of on-premises or edge DHI deployment; for software providers or full-stack HCI vendors, there should be evidence of the deployment of a DHI offering in the hyperscale strategic public cloud.

Gartner gives honorable mentions to Google (Google Distributed Cloud), Red Hat (OpenShift), and SUSE (Harvester). 

William Blair financial analyst Jason Ader discusses Nutanix, telling subscribers: “A lot of things are falling into place for the company—from the large VMware displacement opportunity to the addition of new OEM partners, the buildout of a modern container platform, and an improving financial model. … The disruption caused by Broadcom’s acquisition of VMware has created a huge vacuum in the infrastructure software space that Nutanix believes it can fill.”

There is opportunity here for the other DHI players mentioned by Gartner, and also the HCI appliance vendors.

MLPerf storage benchmark: A user guide from the maker

The latest version of the MLPerf Storage benchmark requires careful study to compare vendor single and multi-node systems across the three workloads, system characteristics, and two benchmark run types. MLCommons, the organization behind the benchmark, contacted us with information about the benchmark’s background and use. We’re reproducing it here to help with the assessment of its results.

Kelly Berschauer

Kelly Berschauer, marketing director for MLCommons, told us the benchmark was originally tightly focused on the AI practitioner, who thinks in terms of training data samples and samples/second, not in terms of files or MBps or IOPS like storage people do. The MLPerf Storage working group members decided for v1.0 to align the storage benchmark’s reporting with the more traditional metrics that purchasers look for (MB/s, IOPS, etc). Each workload in the benchmark defines a “sample” (for that workload) as a certain amount of data, with some random fluctuation of the size of each individual sample around that number to simulate some of the natural variation that we would see in the real world. In the end, MLPerf treats the samples as a constant size, the average of the actual sizes. The average sizes are Cosmoflow (2,828,486 bytes), Resnet50 (114,660 bytes), and Unet3D (146,600,628 bytes). Since the samples are of a “fixed” size, we multiply by the samples/sec to get MiB/sec, and the AI practitioner can do the reverse to get the samples/sec number they are interested in.
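
In other words, the conversion is simple multiplication by the fixed average sample size. Here is a quick sketch of the arithmetic (our own illustration, not MLCommons code; the 10,000 samples/sec figure is made up for the example):

```python
# Convert between the AI practitioner's metric (samples/sec) and the storage
# purchaser's metric (MiB/s), using the average sample sizes quoted above.
AVG_SAMPLE_BYTES = {
    "cosmoflow": 2_828_486,
    "resnet50": 114_660,
    "unet3d": 146_600_628,
}
MIB = 1024 * 1024

def samples_per_sec_to_mib_per_sec(workload: str, samples_per_sec: float) -> float:
    return samples_per_sec * AVG_SAMPLE_BYTES[workload] / MIB

def mib_per_sec_to_samples_per_sec(workload: str, mib_per_sec: float) -> float:
    return mib_per_sec * MIB / AVG_SAMPLE_BYTES[workload]

mib_s = samples_per_sec_to_mib_per_sec("resnet50", 10_000)
print(f"{mib_s:.1f} MiB/s")                                                   # ~1093.5 MiB/s
print(f"{mib_per_sec_to_samples_per_sec('resnet50', mib_s):.0f} samples/sec")  # back to 10,000
```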

The normalization of MiB/s per GPU is not terribly valuable because the benchmark considers a “passing result” to be an Accelerator Utilization (AU) of 90 percent or higher for Unet3D and ResNet50, and 70 percent or higher for Cosmoflow. The graph included in our article only shows that the MiB/s/GPU results vary by up to 10 percent. The benchmark uses the AU percentage as a threshold because GPUs are a significant investment, and the user wants to ensure that each GPU is not starved for data because the storage system cannot keep up. The benchmark places no additional value on keeping the GPU more than 90 percent busy (or more than 70 percent for Cosmoflow).
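
A minimal sketch of that pass/fail rule, using the thresholds quoted above (our illustration, not MLCommons code):

```python
# A result "passes" when Accelerator Utilization meets the per-workload
# threshold; higher AU earns no extra credit.
AU_THRESHOLD = {"unet3d": 0.90, "resnet50": 0.90, "cosmoflow": 0.70}

def result_passes(workload: str, accelerator_utilization: float) -> bool:
    return accelerator_utilization >= AU_THRESHOLD[workload]

print(result_passes("unet3d", 0.93))     # True
print(result_passes("cosmoflow", 0.68))  # False: below the 70 percent bar
```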

The normalization of MiB/s per client system (what the benchmark calls “host nodes”) is also not terribly valuable because, while there are physical limits on how many GPUs can be installed in a given computer system, there are no limits on the number of simulated GPUs that can be run on a “host node”. As a result, no conclusions can be drawn from the relationship between the number of “host nodes” reported by the benchmark and the number of real computer systems that would be required to host the same number of GPUs. Simulated GPUs are used by the benchmark to enable storage vendors to run it without the significant investment of obtaining such a (typically large) number of host nodes and GPUs.

The “scale” of the simulated GPU training cluster supportable by a given set of storage gear is the core insight provided by the benchmark: for that set of gear, how many simulated GPUs it can support. With modern scale-out architectures (whether compute clusters or storage clusters), there is no “architectural maximum” performance; if you need more performance you can generally just add more gear. This varies by vendor, of course, but is a good rule of thumb. The submitters in the v1.0 round used widely varying amounts of gear for their submissions, so we would expect to see a wide variety in the reported topline numbers.

Different “scales” of GPU clusters will certainly apply different amounts of load to the storage, but the bandwidth per GPU must remain at 90 percent or greater of what is required or the result will not “pass” the test.

Distributed Neural Network (NN) training requires a periodic exchange of the current values of the weights of the NN across the population of GPUs. Without that weight exchange, the network would not learn at all. The periodicity of the weight exchange has been thoroughly researched by the AI community, and the benchmark uses the accepted norm for the interval between weight exchanges for each of the three workloads. As the number of GPUs in the training cluster grows, the time required to complete the weight exchange grows, but the weight exchange is required, so this has been treated as an unavoidable cost by the AI community. An “MPI barrier” is used by the benchmark to simulate the periodic weight exchanges. The barrier forces the GPUs to all come to a common stopping point, the same as the weight exchanges do in real-world training. The AU calculated by the benchmark does not include the time the GPU spends waiting for the simulated weight exchange to complete.

The bandwidth per GPU will be the same value no matter the scale of the GPU cluster, except that there will periodically be times when no data is requested at all, during the simulated weight exchanges. It only appears that the required bandwidth per GPU is dropping as scale increases because the “dead time” of weight exchange is not accounted for correctly if one simply divides the total data moved by the total runtime.
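
Here is a small worked example of that effect, using assumed numbers (our sketch, not benchmark code): per-GPU bandwidth during data loading stays constant, but the apparent figure falls as weight-exchange dead time grows with cluster scale.

```python
# Dividing total data moved by total runtime (which includes weight-exchange
# "dead time") makes the apparent per-GPU bandwidth fall, even though the
# bandwidth demanded during data loading never changes. All numbers assumed.
per_gpu_bw_mib_s = 1000          # assumed bandwidth required while loading data
load_time_s = 100                # assumed time spent actually reading samples

for exchange_dead_time_s in (5, 20, 60):   # dead time grows with cluster scale
    total_runtime_s = load_time_s + exchange_dead_time_s
    data_moved_mib = per_gpu_bw_mib_s * load_time_s
    apparent_bw = data_moved_mib / total_runtime_s
    print(f"dead time {exchange_dead_time_s:3d}s -> apparent {apparent_bw:6.1f} MiB/s "
          f"(true demand {per_gpu_bw_mib_s} MiB/s)")
```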

MLPerf plans to include power measurement as an optional aspect of the benchmark in the v2.0 round, and will very likely tighten the requirements for reporting the rack units consumed by the gear used in each submission. It is also considering several additional features for v2.0. As with all its benchmarks, the working group will continue to refine it over time.

Gartner’s file and object MQ drops Cloudian, DDN, NetApp and Quantum

Cloudian, DDN, NetApp and Quantum have been dropped from Gartner’s 2024 Magic Quadrant (MQ) for File and Object Storage Platforms, while VAST Data has moved from challenger to leader status.

The 2024 MQ report starts by stating “Market demand for a single platform supporting both file and object storage workloads has led to significant changes in the vendor landscape.” It’s not kidding. Excluding NetApp, Cloudian and the others while including vendors such as DataCore may arouse disbelief among many observers.

Dell has the top position in this MQ, rated as highest in a combined ability to execute and completeness of vision, followed by Pure Storage and then VAST Data. Scality has dropped from being a 2023 MQ leader to a visionary, and Hitachi Vantara changed from being a challenger to a niche player. A diagram showing the 2023 and 2024 MQs highlights the major changes. 

Gartner 2023 and 2024 file and object storage MQs with B&F annotations and additions on the 2024 MQ. Excluded vendor names are in italic script with red crosses, and 2023 dot positions are shown by blue circles.

Gartner appears to have changed its view of the market and hence its vendor inclusion criteria. The 2024 MQ’s market overview says: “The market for unstructured data has been slowly converging from separate file and object products to a single platform that can support all unstructured data workloads. In 2024, there are more vendors supporting a single platform as opposed to separate products. The shift from product to platform is more than supporting all unstructured data workloads. It is also about integrated capabilities to provide cyber resilience, global namespace and file systems and storage as a service.”

The 2024 MQ says that the missing vendors – Cloudian, DDN, NetApp and Quantum – were “dropped for not meeting the 2024 inclusion criteria of a single platform for file and object workloads.”

Jon Toor

Jon Toor, Cloudian CMO, disagrees with this. He told us: “It is important to note that in the upcoming report, Gartner will claim that Cloudian did not meet the inclusion criteria. This claim is questionable on its face. Regardless, while the criteria for 2024 were indeed changed from previous years, we had already informed Gartner of our decision to opt out before the MQ process even began.”

He said: “In early 2024, Cloudian made a strategic decision to opt out of the upcoming Gartner Magic Quadrant (MQ) for File and Object Storage.” 

“Our rationale to opt out was clear: the Gartner MQ increasingly diverges from the needs and priorities of our customers. Over recent years, this MQ has progressively emphasized file storage features while marginalizing the unique strengths of object storage, which remain core to Cloudian’s offering.” 

“This emphasis on file capabilities does not resonate with Cloudian’s customer base, most of whom focus on the S3 API object storage capabilities that Cloudian offers. To them, evaluating a solution based primarily on its file-centric features is akin to choosing between a Porsche and a Ferrari based on who manufactures the better wristwatch—it’s simply irrelevant.”

Toor added: “The most recent 2023 MQ made Gartner’s bias towards file systems especially clear. Vendors specializing in object storage were systematically downgraded compared to their 2022 positions, while those focusing on file storage either improved or retained their placements.”

He added that this year Gartner has removed the word “Distributed” from the report’s title: “The former ‘Distributed File Systems and Object Storage’ report is now labeled simply ‘File and Object Storage Platforms.’ This shift reflects a movement towards traditional file solutions and away from the highly scalable, distributed architectures that are foundational to object storage.”

Paul Speciale

Scality CMO Paul Speciale said: “The wide-ranging changes to the inclusion criteria of this year’s Gartner’s Magic Quadrant for File and Object Storage Platforms show a heavy emphasis on vendor products being measured on their file — not object storage — offerings. Many object storage vendors that were previously included, based on the criteria of previous reports, are completely absent in this edition of the report.”

Jerome Lecat, Scality’s CEO, said: “While combining file and object in the same report may have been relevant in the past, it is not relevant anymore, since the object storage market has matured, and because the Magic Quadrant report only compares one specific product per vendor. For example, we have never seen PowerScale from Dell EMC or Vast Data deployed at scale for object storage needs.”

He added: “We believe Scality’s inclusion in the 2024 Magic Quadrant for File and Object Storage Platforms underscores our vision to deliver the most reliable and performant object storage solutions for AI and cyber resilience. Looking beyond 2024, we’re pioneering technologies that meet the massive data demands of the world’s most critical industries — from space exploration and genomic research to the stringent cybersecurity requirements of the financial industry and the high-performance standards of cloud service providers.”

Toor believes there should be a specific object storage MQ, saying: “We would always welcome the opportunity to work with the Gartner analysts in a comparison that evaluates object storage on its true merits — without conflating it with other technologies like NAS. If Gartner can provide a fair, relevant comparison — one that values Ferrari for being a Ferrari rather than for its ability to tell time — we would be more than willing to engage.”

We have asked DDN, NetApp and Quantum for comment. Check out the MQ courtesy of VAST Data here.

Wasabi CEO sees IPO ramp as firm reaches 100,000 customers

Cloud storage unicorn Wasabi says it has reached the 100,000-customer mark amid continuing sales growth.

The company has raised a total of $500 million since its inception in 2017, and gained unicorn status in 2022 after raising $250 million, split 50/50 between investment and a credit facility.

Since then, it has seen 60-70 percent annual revenue growth and its partner numbers have also just reached 15,000 globally.

David Friend

On the IT Press Tour of Boston and Massachusetts this week, David Friend, the company’s founder and CEO, said: “Cloud storage is an infinite market, and you have to try and dominate it as quickly as possible.”

Wasabi, of course, is very much smaller than its hyperscale cloud service rivals AWS, Azure and Google, but its strategy of basing its service on a rival to AWS S3 cloud storage appears to be gaining traction. It claims its offer can be up to 80 percent cheaper than AWS, saying it only charges for the amount of data users store in its cloud, with no extra charges for moving it in and out of that cloud, or any other operational charges.

Friend also claims similar savings can be made if customers choose to put more of their data into Wasabi’s cloud instead of buying more large on-premises storage boxes, over a five-year contract period, when maintenance support fees and the overall cost and depreciation of those boxes are taken into account.

As for building up sales, Wasabi is a channel company with a simple formula for getting to the best partners globally. “When we are entering a market, the first thing we do is sign up the best distributors in that market, before any rivals get there,” says Friend.

Wasabi price comparison slide

“Once those distributors’ resellers have invested in our systems, benefited from the marketing and development funds, training, integration, etc, when the next guy comes along and asks to work with them instead, they’ll say ‘why should I, I’m working with Wasabi already’.”

He added: “It’s an easy channel model, ‘How much storage do you want, and for how long’. $6.99 per TB per month, that’s the list price.”
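
As a rough illustration of how that flat pricing model compares with one that also bills for egress (our own sketch; the rival’s storage and egress rates are assumptions for the example, not any vendor’s published prices):

```python
# Flat per-TB storage pricing with no egress fees, versus a hypothetical
# provider that also bills for data moved out. Only the $6.99/TB/month list
# price comes from the article; the rival's rates are illustrative assumptions.
def wasabi_monthly_cost(stored_tb: float, price_per_tb: float = 6.99) -> float:
    return stored_tb * price_per_tb                  # storage only, no egress charges

def rival_monthly_cost(stored_tb: float, egress_tb: float,
                       storage_per_tb: float = 21.0,       # assumed rate
                       egress_per_tb: float = 90.0) -> float:  # assumed rate
    return stored_tb * storage_per_tb + egress_tb * egress_per_tb

stored, egress = 100, 20   # e.g. 100 TB stored, 20 TB read back out in a month
print(f"Flat model:  ${wasabi_monthly_cost(stored):,.2f}/month")
print(f"Rival model: ${rival_monthly_cost(stored, egress):,.2f}/month")
```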

As a private company, Wasabi doesn’t release its annual sales figures, but some analysts put these at around $135 million last year, so if the current annual growth continues, it could be around $230 million in 2024. One way or another, Friend told the IT Press Tour that Wasabi has its first $1 billion-revenue year within its sights, “within five years” anyhow.

Friend is also a fan of going public. “We’d like to be a public company. At Carbonite [the backup firm where he was co-founder and CEO] we became a public company, and more people took us seriously. It’s good to have a public profile, it can help.”

Storage news ticker – October 10

Analytics company Cloudera has launched Cloudera AI Inference, powered by Nvidia NIM microservices and accelerated computing (Tensor core GPUs), boosting LLM speeds by a claimed 36x. A service integration with Cloudera’s AI Model Registry enhances security and governance by managing access controls for both model endpoints and operations. Cloudera AI Inference protects sensitive data from leaking to non-private, vendor-hosted AI model services by providing secure development and deployment within enterprise-controlled environments. This enables “the efficient development of AI-driven chatbots, virtual assistants, and agentic applications impacting both productivity and new business growth.”

Data protector CrashPlan has acquired Parablu, a data security and resiliency supplier with a claimed market-leading offering protecting Microsoft 365 data. This is said to position CrashPlan “to deliver the industry’s most comprehensive backup and recovery capabilities for data stored on servers, on endpoint devices and in Exchange, OneDrive, SharePoint, and Teams to Azure, their cloud, or to CrashPlan’s proprietary cloud.” The Parablu acquisition “enables CrashPlan to offer a complete cyber-ready data resilience solution that protects intellectual property and other data from accidental data deletion, ransomware, and Microsoft service interruptions.”

Software-defined multi-protocol datacenter and edge storage supplier DataCore has added a NIS2-supporting cybersecurity capability “designed to equip organizations with the means to anticipate, withstand, and recover from sophisticated cyber threats while aligning with regulatory standards.” The offering has no specific brand name. Read the solution brief here.



Lucid Motors has again chosen Everspin Technologies’ MRAM for its latest all-electric Gravity SUV, continuing its reliance on Everspin’s memory solutions after integrating them into the Lucid Air back in 2021. Everspin’s PERSYST MRAM provides data logging and supports system performance within the Lucid Gravity SUV’s powertrain. Its 256 Kb MRAM is also used by Rimac Technology (sister company to Bugatti-Rimac) in the all-electric Nevera supercar.


HighPoint Technologies claims its RocketStor 654x series NVMe RAID enclosures have established a new industry standard for external storage products. Each enclosure supports up to eight NVMe SSDs, delivering nearly 0.5 PB of storage with 28 GBps transfer speeds and PCIe Gen4 x16 connectivity. The device stands under five inches tall, has an integrated power supply and cooling system to isolate NVMe media from the host, plus a hot-swap capability to allow storage expansion without downtime.

Germany’s Infodas has announced a secure data transfer and patch management offering called Infodas Connect. Its primary function is to securely distribute updates and patches to critical systems that are not connected to the internet. The data is processed on a system that has internet access. It is then securely transferred to a central system known as the sender service within the Infodas Connect architecture. The collected data is then transmitted to the receiver services on the isolated systems (which have no internet connection). This transfer is highly secure, ensuring that no vulnerabilities are introduced during the update process. Once the data reaches the target system, an administrator manually reviews and installs the updates. This extra layer of manual control ensures that all changes are checked before implementation. Find a detailed product presentation here.

Michael Amsinck

SaaS data protector Keepit has appointed Copenhagen-based Michael Amsinck as chief product officer. He joins Keepit from Cision, where he was chief product and technology officer. He will be instrumental in shaping and executing Keepit’s short and long-term product roadmap. “In the months and years ahead, a key priority will be expanding our comprehensive ecosystem of workloads to meet the evolving SaaS data protection needs of the enterprise segment. We’ll also prioritize strengthening disaster recovery strategies and developing opinionated frameworks to guide and educate our customers,” Amsinck said.

Keepit published a Gatepoint Research SaaS backup survey of senior decision-makers today that reveals:

  • 58 percent of respondents report using Microsoft to back up their SaaS data. However, shared responsibility models mean that SaaS providers are not accountable for customers’ data backup, leaving a critical gap in protection
  • Only 28 percent of respondents have high confidence in their data protection measures
  • 31 percent report moderate to severe lapses in their data protection
  • 57 percent of respondents identify brand and reputation damage as the most significant business impact of data loss, followed closely by financial consequences
  • When it comes to blockers to improving data protection strategies, 56 percent of respondents cite budget constraints, while 33 percent note a lack of expertise and resources
  • 50 percent of respondents cite increased compliance requirements as their top challenge

Keepit will host a free webinar titled “Protecting Your SaaS Data – Pitfalls and Challenges to Overcome” on October 17 at 1400 UTC. Register for the webinar here.

Micron has changed its logo. It says: “Inspired by the curves and colors of its silicon wafers, the new logo design embodies what is at the core of Micron’s technology leadership: staying ahead of the curve – anticipating future needs and driving the next generation of technology. Innovation and rapid execution are central to Micron’s vision of transforming how the world uses information to enrich life for all.”

Old (above) and new (below) Micron logos

NetApp has hired Mike Richardson as VP of US Solutions Engineering. He will lead NetApp’s technical sales strategy across North America. Richardson was previously VP Systems Engineering Americas at Pure Storage. Before that he worked at Forsythe and Commvault, and was a professional services consultant at NetApp. What goes around, comes around.

OpenDrives has gained Light Sail VR as a customer for its Atlas storage product. Light Sail makes immersive virtual reality media for Meta, Amazon, Lionsgate, Paramount, Canon, Adidas, and others.

Replicator Peer Software, a supporter of hybrid on-prem-public cloud storage, is making Gartner’s latest hype cycle for storage report available here.

Rocket Software has a report out saying mainframe-stored data should be used by AI models. It says: “If the mainframe is such a wealth of knowledge, why isn’t it being used to inform AI models? According to the survey, 76 percent of leaders said they found accessing mainframe data and contextual metadata to be a challenge, and 64 percent said they considered integrating mainframe data with cloud data sources to be a challenge. For this reason, businesses need to prioritize mainframe modernization – setting themselves up to be able to easily and successfully incorporate their mainframe data into their AI models.” Get the report here.

The SNIA is running a webinar entitled “The Critical Role of Storage in Optimizing AI Training Workloads” on October 30 at 1700 UTC. The webinar has “a primary focus on storage-intensive AI training workloads. We will highlight how AI models interact with storage systems during training, focusing on data loading and checkpointing mechanisms. We will explore how AI frameworks like PyTorch utilize different storage connectors to access various storage solutions. Finally, the presentation will delve into the use of file-based storage and object storage in the context of AI training.” Register here.

Tape, object, and archive system vendor SpectraLogic announced Rio Media Migration Services, a professional service offering designed to transition digital media assets from outdated media asset management systems to modern object-based archive infrastructures. A seven-step process features database/dataset analysis, solution design, project scoping, statement of work, installation and validation, production, and post-migration review. Rio Media Migration Services is designed for media and video editors, producers, and system administrators, and suited to studios, post-production houses, sports teams, news and cable networks, and corporate media departments.

Decentralized storage provider Storj has acquired UK-based PetaGene, creator and supplier of cunoFS. PetaGene started in 2006 and has gathered numerous awards for its high-performance file storage and genomic data compression technology. CunoFS is a high-performance file system for accessing object storage. It lets you interact with object storage as if it were a fast native file system with POSIX compatibility, and it can run any new or existing applications. PetaGene will operate as a wholly owned subsidiary of Storj. All current PetaGene employees will continue on as employees.

CunoFS positioning chart

StorONE is launching a StorONE Podcast “designed to explore the world of entrepreneurship, cutting-edge technology, and the ever-evolving landscape of data storage. Hosted by StorONE Solution Architect James Keating, the podcast will provide listeners with valuable insights from industry experts and StorONE’s in-house thought leaders … In the debut episode, CEO Gal Naor shares his journey from founding Storwize, its acquisition to IBM and vision behind creating StorONE’s ONE Enterprise Storage Platform to solve the industry’s biggest challenges.” The StorONE Podcast will release bi-weekly episodes covering topics such as leveraging AI in storage solutions, strategic planning for long-term success, overcoming obstacles in scaling storage startups, and navigating the balance between innovation and risk. The podcast is available now on Spotify, Apple Podcasts, and iHeart Radio.



Storware has signed up Version 2 Digital as a distributor in APAC.

Wedbush tells subscribers that Western Digital reportedly completed the sale of 80 percent of its Shanghai facility to JCET. Western Digital will receive $624 million for the majority stake. The source is a South China Morning Post article.

The future of security: Harnessing AI and storage for smarter protection

COMMISSIONED: As the world becomes more data-driven, the landscape of safety and security is evolving in remarkable ways. Artificial Intelligence (AI) technology is transforming how organizations manage mission-critical infrastructure, from video surveillance systems to real-time data analysis. At the heart of this transformation lies an important shift: a growing emphasis on the sheer volume of data being collected, how it’s processed, and the insights that can be derived from it. In this evolving ecosystem, edge AI and advanced storage systems are reshaping how security teams work and the role of data in decision-making.

Dell Technologies stands at the forefront of this shift, driving innovation through cutting-edge storage solutions like Dell PowerScale and collaborating with software technology partners to provide smarter, more efficient tools for data management and AI-powered analysis. The opportunities presented by AI in the safety and security space are immense, and the organizations that embrace these advancements will unlock new levels of efficiency, insight, and protection.

The data age in safety and security

Today, security is no longer just about physical safeguards or outdated surveillance systems. It’s about managing vast amounts of data to enhance decision-making, improve response times, and streamline operations. Mission-critical infrastructure now revolves around data volumes and data hygiene – how well that data is stored, maintained, and accessed. In this new paradigm, we see a shift in who’s attending security meetings: Chief Data Officers (CDOs), data scientists, and IT experts are playing pivotal roles in shaping security strategies. They’re not only ensuring systems run smoothly but also leveraging video data and AI-driven insights to drive broader business outcomes.

The introduction of AI into security systems has brought new stakeholders with unique needs, from managing complex datasets to developing real-time analytics tools that improve efficiency. For instance, consider the rise of body cameras equipped with edge AI capabilities. These cameras are no longer just passive recording devices – they’re active data processors, analyzing video footage in real time and even assisting officers in the field by automatically annotating scenes and generating reports. This reduces the time officers spend writing reports at the station, improving productivity and allowing for faster, more efficient operations.

Technologies transforming the security ecosystem

One of the most exciting aspects of AI technology in the security space is the way it is enabling new players to emerge. Independent Software Vendors (ISVs) are stepping into the fold, introducing a range of applications aimed at enhancing organizational efficiency and transforming the traditional security landscape. These new ISVs are bringing innovative solutions to the table, such as edge AI applications that run directly on video surveillance cameras and other security devices.

These advancements have revolutionized how data is collected and processed at the edge. Cameras can now run sophisticated AI models and Video Management Systems (VMS) onboard, transforming them into intelligent, autonomous devices capable of making decisions in real-time. This shift toward edge computing is powered by the increasing presence of Graphics Processing Units (GPUs) at the edge, enabling high-performance AI computing on-site.

For security integrators, this evolution has been transformative. The emergence of edge AI has introduced a new skill set into the industry – data scientists. Traditionally, security teams focused on camera placement, network design, and video storage. Now, they must also manage complex AI models and large datasets, often requiring the expertise of data scientists to oversee and fine-tune these systems. This shift is opening the door for new ISVs and dealers, changing the service landscape for security integrators.

The changing role of storage in AI-powered security

One of the most critical aspects of this AI-driven revolution in security is the need for robust, scalable storage solutions. Traditional storage systems, such as RAID-based architectures, are simply not up to the task of handling the demands of modern AI applications. AI models rely on massive datasets for training and operation, and any gaps in data can have a detrimental impact on model accuracy. This is where advanced storage solutions, like Dell PowerScale, come into play.

Dell PowerScale is designed specifically to meet the needs of AI workloads, offering extreme scalability, high performance, and superior data management. As video footage and other forms of security data become more complex and voluminous, traditional storage systems with Logical Unit Numbers (LUNs) struggle to keep pace. LUNs can complicate data mapping for data scientists, making it difficult to efficiently analyze and retrieve the vast amounts of data generated by AI-driven security systems.

In contrast, PowerScale provides seamless, flexible storage that can grow as security systems expand. This is crucial for AI models that require consistent, high-quality data to function effectively. By offering a scalable solution that adapts to the changing needs of AI-powered security applications, PowerScale ensures that organizations can maintain data hygiene and prevent the bottlenecks that would otherwise impede AI-driven insights.

Edge AI and the future of security

The advent of edge AI is arguably one of the most transformative developments in the security industry. By processing data closer to where it’s collected, edge AI enables real-time decision-making without the need for constant communication with centralized cloud servers. This shift is already being seen in body cameras, security drones, and other surveillance tools that are equipped with onboard AI capabilities.

As GPUs become more prevalent at the edge, the compute and storage requirements of these devices are evolving as well. Cameras and other edge devices can now run custom AI models and scripts directly onboard, reducing latency and improving response times. However, this also means that security teams must manage not only the hardware but also the datasets and AI models running on these devices. Data scientists, once peripheral to the security industry, are now becoming essential players in managing the AI models that power edge-based security systems.

This evolution is also changing the nature of cloud services in the security space. Edge computing reduces the reliance on cloud-based storage and processing, but it doesn’t eliminate it entirely. Instead, we are seeing a more hybrid approach, where edge devices process data locally and send only critical information to the cloud for further analysis and long-term storage. This hybrid approach requires a new level of agility and flexibility in both storage and compute infrastructure, underscoring the need for scalable solutions like PowerScale.

Embracing AI and data-driven security

Despite the clear advantages of AI and edge computing, the security industry has been slow to adopt these technologies. For over six years, IP convergence in security stalled as organizations hesitated to move away from traditional methods. A lack of investment in the necessary skills and infrastructure further delayed progress. However, the time for change is now.

As other industries move swiftly to embrace AI-driven solutions, the security sector must follow suit or risk falling behind. The convergence of AI, data science, and advanced storage solutions like Dell PowerScale represents a tremendous opportunity for growth and innovation in safety and security. Network value-added resellers (VARs) are well-positioned to capitalize on this shift, offering modern mission-critical architectures that support AI-driven security applications.

The future of security lies in data – how it’s collected, processed, and stored. With the right infrastructure in place, organizations can unlock the full potential of AI, driving greater efficiency, faster response times, and more effective security outcomes. Dell Technologies is committed to leading the charge in this transformation, providing the tools and expertise needed to support the AI-powered security systems of tomorrow.

The security industry is at a pivotal moment. The rise of AI and edge computing is transforming how organizations approach safety and security, but these advancements require a shift in both mindset and infrastructure. Dell Technologies, with its industry-leading storage solutions like PowerScale, is helping organizations navigate this new landscape, ensuring they have the scalable, high-performance infrastructure needed to unlock AI’s full potential.

As we move deeper into the data age, embracing these emerging technologies will be critical for staying ahead of the curve. The future of security is bright, but only for those prepared to invest in the right infrastructure to support the AI-driven innovations that will define the next era of safety and security.

For more information, visit Dell PowerScale.

Brought to you by Dell Technologies.

Spotting signs of malware in the age of ‘alert fatigue’

Sam Woodcock, senior director of Cloud Strategy at 11:11 Systems, tells us that, according to Sophos, 83 percent of organizations that experienced a breach had observable warning signs beforehand and ignored the canary in the coal mine. Further, 70 percent of breaches were successful, with threat actors encrypting the organization’s data to prevent access to it.

11:11 Systems offers on-premises and cloud backup services. For example, it stores customers’ unstructured, on-premises data using SteelDome’s InfiniVault storage gateway for on-premises data storage, protection, and recovery. For Azure, it has 11:11 DRaaS (disaster recovery as a service), and for AWS, 11:11 Cloud Backup for Veeam Cloud Connect, 11:11 Cloud Backup for Microsoft 365, and 11:11 Cloud Object Storage.

We asked Woodcock about these signs and what affected organizations should do about them.

Blocks & Files: What warning signs were these? 

Sam Woodcock, 11:11 Systems

Sam Woodcock: Warning signs come in a variety of forms that can be observed independently or in various combinations. Some examples of typical warning signs would be unusual network activity such as excessive or unusual network traffic, spikes in failed login attempts, unusual system activity, unusual file access patterns, and alerts coming from security tools and endpoint device solutions.
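
As a simple illustration of how one of those warning signs might be flagged automatically (our own sketch, not an 11:11 Systems tool; the baseline data and threshold are assumptions), a spike in failed logins can be compared against a learned baseline:

```python
# Flag an hour whose failed-login count sits far above a learned baseline.
# The baseline samples and z-score threshold are illustrative assumptions.
from statistics import mean, stdev

baseline_failed_logins_per_hour = [12, 9, 15, 11, 10, 13, 8, 14]  # "normal" hours
mu = mean(baseline_failed_logins_per_hour)
sigma = stdev(baseline_failed_logins_per_hour)

def is_spike(observed: int, z_threshold: float = 3.0) -> bool:
    """True when the observed count deviates strongly from the baseline."""
    return (observed - mu) / sigma > z_threshold

print(is_spike(14))    # False: within normal variation
print(is_spike(120))   # True: worth a triage look
```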

Blocks & Files: Why weren’t they seen? 

Sam Woodcock: Typically warning signs can be missed for a variety of reasons – however, one of the most common reasons is “alert fatigue.” Forty percent of organizations receive over 10,000 security alerts on a daily basis. This sheer volume of information results in organizations simply being unable to properly process and respond to every indicator generated from their security solution set. 

Secondly, organizations often realize the need to invest in security technologies. However, often the vital security expertise needed to interpret and react to alerting and information coming from these tools is in low supply and high demand. This can result in a lack of expertise within an organization to triage and respond to vital alerting and monitoring information. Also, organizations may not have full 24x7x365 coverage to monitor, react, and triage security incidents; therefore missing vital signals and opportunities to prevent attacks.

Blocks & Files: How could they have been seen? 

Sam Woodcock: Detecting and responding to threats requires a combination of security tools, monitoring, security expertise, 24x7x365 coverage, robust processes, and proactive and reactive measures. The best practice is to have a multi-layered approach combining preventative security solutions, and reactive data protection and cyber recovery solutions.

It is also critical for organizations to perform proactive vulnerability assessments and penetration testing to understand gaps and risks that may exist within their application and security landscape. An essential part of any approach is to centralize logging and telemetry data into a Security Information and Event Management (SIEM) system; aggregating log and real-time alerting data across application and workloads running across a wide variety of platforms and physical locations. With an effective SIEM solution in place, organizations must also invest in security expertise and coverage to observe and react to patterns and information coming from such a system.

Blocks & Files: What should enterprises do when they see such signs? 

Sam Woodcock: Organizations need to react immediately in a structured and strategic manner to mitigate threats and prevent their further growth. Due to this immediacy, organizations must invest in first or third-party security expertise that is 24x7x365 in nature so as not to miss threats or let them grow in scope.

The first step of any approach should be to investigate the alerts or logs created by security tools and validate whether the threat is an actual threat or a false positive. If the threat is a true positive, affected systems should be isolated and quarantined immediately to prevent the spread or movement of the attack. Having an incident response team and plan is essential to coordinate the required response and to resolve and remediate the issue. Having a combination of people, processes, and technology working in partnership is essential to swift resolution and recovery.

Blocks & Files: Can 11:11 Systems help here?

Sam Woodcock: 11:11 was formed to ensure organizations can keep their applications and systems always running, accessible, and protected. As previously mentioned, preventative security solutions are essential to preventing attacks or limiting scope. 11:11 provides a combination of security technology (MDR, XDR, Managed Firewall, Real Time Vulnerability scanning) aligned with a global 24x7x365 Security Operations Center with a robust process.

This is to ensure that we understand threats in real time and react accordingly, providing actionable remediation information to resolve incidents. In combination with our Managed Security services approach, 11:11 has a deep heritage in data protection, disaster recovery, and cyber resilience with capabilities to provide end-to-end Managed Recovery of systems, workloads, and applications. 

This Managed Recovery solution set is essential to ensure vital data assets are protected in real time, with a tested and validated recovery plan to ensure swift recovery of a business’s most essential assets.

***

Comment

It seems that a generative AI security agent could be used to look for IT system warning signs, scanning network traffic and IT systems for “excessive or unusual network traffic, spikes in failed login attempts, unusual system activity, unusual file access patterns” and the like. This agent could also ingest the “alerts coming from security tools and endpoint device” systems.

A precondition here is that the agent understands the usual network traffic rate, file access patterns, and login attempt levels.

Such an agent could put these inputs together and analyze them in a false-positive versus true-positive assessment process, helping a security team or person defeat “alert fatigue,” make more sense of the threat environment, and deal with threats more effectively.
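As a rough illustration of the baseline-plus-anomaly logic such an agent would need, here is a minimal Python sketch that flags metrics straying well beyond their learned baselines and only escalates when several signals corroborate each other. The metric names, baselines, thresholds, and escalation rule are all invented for illustration and are not taken from any real product.

# Minimal sketch of baseline-based anomaly triage for a hypothetical security agent.
# Metric names, baselines, and thresholds are illustrative, not from any real product.
from statistics import mean, stdev

# Hypothetical learned baselines: recent per-hour observations for each signal.
baselines = {
    "network_mbps":   [120, 135, 110, 128, 140, 125],
    "failed_logins":  [3, 5, 2, 4, 6, 3],
    "files_accessed": [400, 380, 420, 390, 410, 405],
}

# Latest observations to triage.
current = {"network_mbps": 980, "failed_logins": 57, "files_accessed": 415}

def anomalous(signal: str, value: float, z_threshold: float = 3.0) -> bool:
    """Flag a signal whose current value is more than z_threshold standard
    deviations above its historical mean (a crude baseline model)."""
    history = baselines[signal]
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (value - mu) / sigma > z_threshold

flagged = [s for s, v in current.items() if anomalous(s, v)]

# Crude triage rule: one anomalous signal alone is treated as a likely false
# positive; two or more corroborating signals are escalated to a human analyst.
if len(flagged) >= 2:
    print(f"ESCALATE: corroborating anomalies in {flagged}")
elif flagged:
    print(f"LOG ONLY: single anomaly in {flagged}, possible false positive")
else:
    print("All signals within baseline")

A production agent would, of course, learn its baselines continuously and attach the corroborating evidence to the alert it raises for the security team.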

The notion that “affected systems should be isolated and quarantined immediately” is sensible, of course, but can have far-reaching effects. For example, having your ERP database attacked and needing to be quarantined means that you have no ERP system. It seems a very, very good idea that malware attack detection and response should be carefully and thoroughly planned, tested, and rehearsed to prevent a real attack from causing chaos and panic.

Having reliable, clean data copies and restartable IT system components would seem to be a precondition for effective malware attack response.

Such a malware threat agent could likely do even more, and we’re certain that cybersecurity suppliers, such as Rubrik, are thinking along these lines already.

IBM says mainframes and AI are essential partners

Big Blue wants the tech industry to use its mainframes for AI workloads.

A 28-page IBM “Mainframes as mainstays of digital transformation” report, produced by its Institute for Business Value, found that 79 percent of IT executives agree that mainframes “are essential for enabling AI-driven innovation.” It states that, after six decades of evolution, mainframes are mainstays, storing and processing vast amounts of business-critical data. As organizations embark on AI-driven digital transformation journeys, mainframes will play a critical role in extending the value of data.

IBM’s concern seems to be that mainframe users should not just assume modern, generative AI workloads are for the public cloud and/or x86 and GPU servers in an organization’s data centers. Mainframes have a role to play as well.

The report, which we saw before publication, starts from a hybrid mainframe-public cloud-edge approach, with workloads put on the most appropriate platform. AI can be used to accelerate mainframe app modernization, enhance transactional workloads and improve mainframe operations. The report says “Combining on-premises mainframes with hyperscalers can create an integrated operating model that enables agile practices and interoperability between applications.”

It suggests mainframe users “leverage AI for in-transaction insights to enhance business use cases including fraud detection, anti-money laundering, credit decisioning, product suggestion, dynamic pricing, and sentiment analysis.”

Mainframe performance can improve AI-powered, rules-based credit scoring. A North American bank that was scoring only 20 percent of its credit card transactions with public cloud processing, at 80 ms per transaction, was able to score 100 percent of them by moving the app onto its mainframe, achieving 15,000 transactions per second at 2 ms per transaction and saving an estimated $20 million a year in fraud prevention spend.

Mainframes with embedded on-chip AI accelerators “can scale to process millions of inference requests per second at extremely low latency, which is particularly crucial for transactional AI use cases, such as detecting payment fraud.” IBM says “traditional AI may be used to assess whether a bank payment is fraudulent, and LLMs (Large Language Models) may be applied to make prediction more accurate.”

This is IBM’s Ensemble AI approach: combining existing machine learning models with newer LLMs.
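IBM does not spell out how the ensemble is wired together, but a common pattern is to let a fast traditional model score every transaction and refer only the uncertain middle band to a slower, richer second-stage model. The Python sketch below illustrates that routing idea under those assumptions; the scoring functions, thresholds, and transaction fields are invented and are not IBM’s implementation.

# Illustrative two-stage "ensemble" fraud check: a cheap first-stage model scores
# every transaction, and only borderline cases are escalated to a second-stage
# model (standing in for an LLM-based assessor). All logic here is hypothetical.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    hour: int  # hour of day, 0-23

def first_stage_score(tx: Transaction) -> float:
    """Stand-in for a fast rules/ML model returning a fraud probability."""
    score = 0.1
    if tx.amount > 5000:
        score += 0.4
    if tx.country not in {"US", "CA"}:
        score += 0.2
    if tx.hour < 6:
        score += 0.2
    return min(score, 1.0)

def second_stage_score(tx: Transaction) -> float:
    """Stand-in for a slower, richer model (e.g. an LLM given more context).
    Here it just adjusts the first-stage view; a real system would call the
    heavier model only for this minority of transactions."""
    return min(first_stage_score(tx) + (0.15 if tx.amount > 2000 else -0.15), 1.0)

def assess(tx: Transaction, low: float = 0.3, high: float = 0.7) -> str:
    p = first_stage_score(tx)
    if p < low:
        return "approve"   # confidently clean: no second stage needed
    if p > high:
        return "block"     # confidently fraudulent
    # Uncertain band: spend the extra latency on the second-stage model.
    return "block" if second_stage_score(tx) > high else "approve"

print(assess(Transaction(amount=60.0,   country="US", hour=14)))  # approve
print(assess(Transaction(amount=7200.0, country="RU", hour=3)))   # block
print(assess(Transaction(amount=2600.0, country="FR", hour=2)))   # escalated case

The attraction of this kind of design is that the expensive model is only invoked on the small fraction of transactions where it can actually change the decision.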

AI can be used to improve mainframe management. The report found that “74 percent of executives cite the importance of integrating AI into mainframe operations and transforming system management and maintenance. AI-powered automation, predictive analytics, self-healing, and self-tuning capabilities, can proactively detect and prevent issues, optimize workflows, and improve system reliability.”

Mainframes can use AI for monitoring, analyzing, detecting, and responding to cyber threats. Also, GenAI LLMs and code assistants can be used to speed up work with older coding languages such as COBOL, its conversion to Java, and JCL development, so “closing mainframe skills gaps by enabling developers to modernize or build applications faster and more efficiently.”

IBM is taking an AI processing offload approach with AI-specific DPUs (Data Processing Units) for the next generation of its z16 mainframe, due in 2025. This will be equipped with up to 32 Telum II processors with on-chip AI inferencing acceleration at a 24 TOPS rate. A Spyre accelerator will add 32 AI accelerator cores and 1GB DRAM, with similar performance to the Telum II on-chip AI accelerator. Up to eight Spyre accelerators can be used alongside the Telum II units on the next mainframe generation.

Big Blue is not talking about adding GPUs to its mainframe architecture though. Inferencing workloads will run effectively on the mainframe but not AI training workloads. We can expect IBM to arrange mainframe vectorization and vector database capabilities to support retrieval-augmented generation (RAG) in inferencing workloads. 
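To make the RAG point concrete, the retrieval step boils down to finding the stored vectors closest to a query vector and handing the matching text to the model as context. The toy Python sketch below shows that nearest-neighbor lookup using cosine similarity over hand-made vectors; the documents and embeddings are invented, and a real deployment would use a proper embedding model and a vector store.

# Toy sketch of the retrieval step in retrieval-augmented generation (RAG):
# find the stored documents whose vectors are most similar to the query vector,
# then hand their text to the generative model as context. Vectors are made up.
import math

docs = {
    "doc1": ("Card payments over $5,000 from new devices require step-up auth.",
             [0.9, 0.1, 0.2]),
    "doc2": ("Quarterly revenue report for the retail banking division.",
             [0.1, 0.8, 0.3]),
    "doc3": ("Known fraud pattern: rapid small transactions before a large one.",
             [0.8, 0.2, 0.4]),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, k=2):
    """Return the k document texts whose vectors best match the query vector."""
    ranked = sorted(docs.values(), key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query vector that would come from embedding "is this payment fraudulent?"
query = [0.85, 0.15, 0.3]
context = retrieve(query)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: is this payment fraudulent?"
print(prompt)  # this augmented prompt is what the LLM would actually see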

For this commentator, adding GPUs to a mainframe would be a kind of Holy Grail, as it would open the door to running AI training workloads on this classic big iron platform. Maybe this notion, GPU co-processors, will be a z17 mainframe generation thing.

Get the report here and check out an announcement blog here.

HYCU expands SaaS data protection ahead of DORA regulations

HYCU has found a new niche: SaaS app background infrastructure configurations and resource settings that can be mission-critical and need protecting. New regulations such as DORA, which bring personal liability for executives, will expand the SaaS app backup business.

Organizations rely on SaaS cloud services at many stages of their operations and application ecosystem, from build to run. They can be used for infrastructure services, IT service management, software development, app management and DevOps, information security and compliance, data management and analytics, and collaborative work management. In this way, SaaS app services can work their way into mission-critical operations.

Simon Taylor, HYCU

HYCU CEO Simon Taylor presented on this topic to an IT Press Tour audience in Boston. He said an example is AWS and its Lambda functions, with Lambda used for notifications of security events within an organization’s security unit: “Once you break a lambda function, it breaks the flow … We’re talking thousands of functions. All it takes is an intern cleaning up the wrong YAML files, and because you rely on the notification, you no longer get notifications of breaches.”

Another is CloudFormation: “If you don’t back it up correctly, you can accidentally redeclare someone’s environment null … These are all little universes where people just take it for granted as a default service. They don’t realize that, when you ask an intern to ‘go clean this up,’ enormous damage can be caused … That’s where we’re seeing a lot of the issues come out.”

HYCU currently protects 86 SaaS applications and databases, with Taylor claiming: “We are the world’s number one SaaS data protection platform [and] we cover more than ten times the rest of the industry at large.” Protecting SaaS app infrastructure items is now becoming a visible need. “Configuration protection is one of the most under-served markets in backup,” he said.

Having realized that SaaS app infrastructure settings, like configurations, need protecting too, HYCU is adding new capabilities to its SaaS-protecting R-Cloud offering.

Taylor said: “Imagine things like GitHub, Bitbucket, and GitLab. What do they have in common? They all store your source code, a pretty important thing if you’re running a software company … When we started this process, people said, ‘Why would I back that up?’ We said, ‘Well, it’s your source code.’ And then you see the light bulb go off, and they’re like, ‘Oh my god, I’m not backing that up.'”

Another example: “There’s a customer, they actually leverage the assets in Jira Service Management for inventory. Yet if they delete those assets, they have actually deleted their inventory.”

“One last example, Jira Product Discovery … We use that ourselves, and you would be surprised at how critical that becomes within three weeks. It’s last year’s fastest growing application. Every single piece of feedback that your company has from a product development perspective now lives there. What if you lose that? You’ve basically lost product management when you’ve done that, right?”

Subbiah Sundaram, SVP, Products, said HYCU’s aim is to protect the entire tech stack including IT tools and services:

HYCU graphic

He said HYCU is looking at providing cross-cloud regional mobility, citing a US federal customer request to provide VMware to Azure Government to AWS GovCloud to Microsoft Azure Stack HCI mobility. A financial customer wanted Nutanix to/from VMware to AWS and AWS zone, and to GCP and GCP zone. HYCU demonstrated its cross-cloud movement capabilities.

DORA and its consequences

HYCU is also providing data protection and residency capabilities for compliance with the European Union’s DORA and NIS2 regulations. DORA’s Article 12 requires secure, physically and logically separated backup storage:

HYCU extract of DORA Article 12 section 3 requirements

This has a possibly unexpected significance, HYCU says, in that the data needing to be backed up includes SaaS app data. Taylor said: “Now the government is mandating that [customers] have their own copy of the data. It’s not even about just backing up your data and recovering it for usage, etc. They now legally have to have a local copy. And what they have to start doing is asking their SaaS vendor, ‘Where am I supposed to get that from?’

“This is a game changer. So they must have to back up their Office 365, and show they have a copy, sure, but at least they can do that. What about Workday? What do they do when it’s Jira and they haven’t thought about backup? What do they do when the government comes and says, ‘Well, wait, where’s all your payroll data, right? Do you have that?’ Oh, those guys have not. That was before DORA. Now you legally have to have that.”

DORA is different from previous regulations: “The big difference here is that there’s personal liability. Now within DORA, this is no longer, oh, the company will pay the fine. Now the CIO, or the operating board member, is responsible for the fines and for personal prosecution.” 

Taylor added: “In other ways, this is happening in the US. You know, regulators are starting to ask those questions of CISOs in particular. We spoke at a CISO forum recently, and you know, it was amazing to me, the fear in the world, fear, actual fear. Because, this time, the CISO community is now personally liable for some of these things.”

There’s a supply chain aspect to this: “If you supply to a [DORA-regulated] financial institution, you have to make sure you are compliant … The government is making sure everybody’s there, that the entire value chain is supported.”

HYCU is providing secure offsite storage for on-premises, cloud, and SaaS workloads. It already supports AWS S3, Google Cloud Storage, and Azure Blob, and is adding support for object storage from Dell, Cloudian, and OVHcloud.

With Commvault developing automation for the cloud application rebuild process and HYCU working on protecting the SaaS app background infrastructure components, the world of public cloud-delivered data protection is becoming more mature – both broader and deeper.