
Asigra expands SaaS data protection suite for MSPs

Asigra is increasing the number of SaaS apps it protects with its SaaSAssure platform, which lets MSPs offer data protection services.

The Canada-based data protector now has pre-configured integrations to protect customer data in Salesforce, Microsoft 365, Microsoft Exchange, SharePoint, Atlassian’s JIRA and Confluence, Intuit’s Quickbooks Online, Box, OneDrive, and HubSpot, with more coming.  

Eric Simmons, Asigra

Asigra CEO Eric Simmons said: “SaaS app data protection has become a legal obligation and a crucial aspect of maintaining reputation and financial security.”

We think he might be overstating things here: national compliance and privacy regulations may well apply with legal force to SaaS app operators, but data protection in the backup sense generally does not, unless it is specifically called out in regulatory requirements such as GDPR, HIPAA, or PCI DSS.

A Microsoft 365 connector bundle covering Exchange, OneDrive, SharePoint and Teams costs $1.30 per user per month. Other business connector apps – ADP, Big Commerce, DocuSign, CRM Dynamics, Epicor, FreshBooks, Freshdesk, BambooHR, Monday.com, Salesforce, ServiceNow, Slack, QuickBooks, Zendesk, Box, Dropbox – cost $3.00 per user per app per month. OEM and enterprise agreements are available.

Asigra cites BetterCloud’s State of SaaSOps survey to say that SaaS apps will make up 85 percent of all business software in 2025. It claims that, with 67 percent of companies using SaaS apps experiencing data loss through accidental or malicious deletions, it is imperative that users protect their own information. The SaaS app providers protect their own infrastructure but not customer data.

Asigra SaaSAssure app coverage

SaaSAssure is built on AWS and claimed features include:

  • Multi-tenant SaaS app coverage featuring data assurance, control, risk and compliance mitigation
  • Multi-Person Approvals (MPA), Multi-Factor Authentication (MFA), AES 256-bit encryption, ransomware protection, and more
  • A choice of backup targets, including Asigra Cloud Storage, Bring Your Own Storage (BYOS), or a data sovereignty location of the customer’s choice
  • Quick to set up and easy to use, progressing from start to protection in under five minutes
  • Multi-tenant capable single dashboard for required actions and notifications to maximize IT resources
  • Pre-configured multi-tenant SaaS App Integrations with the user only required to configure authorizations

Asigra says SaaSAssure is complementary to existing backup solutions, allowing MSP partners to expand their service portfolios and revenues without having to switch from existing backup software partner(s). It also includes Auvik SaaS Management to discover shadow IT and sanctioned SaaS app utilization enterprise-wide.

Competitor HYCU’s R-Cloud SaaS data protection scheme is claimed to cover more than 200 SaaS apps. It can be used directly by enterprises or through an MSP. The Own Company (OwnBackup as was) targets mission-critical SaaS app backup. Druva covers Microsoft 365, G-Suite, Slack, and Salesforce. Asigra is ahead of Druva and OwnBackup in the number of SaaS apps protected but behind HYCU.

SaaSAssure is available for immediate deployment. MSPs can register to receive updates or become a Launch Partner, and Asigra has published a SaaSAssure video.

Clumio anoints new CEO as co-founder steps down

Clumio co-founder and CEO Poojan Kumar has stepped back from operational control to become part-time board chairman, succeeded by CRO Rick Underwood.

The 2017-founded company provides SaaS data protection services for Amazon’s S3, EC2, EBS, RDS, SQL on EC2, DynamoDB, VMware on AWS, and Microsoft 365, storing its backups in virtually air-gapped AWS repositories.

Rick Underwood, Clumio

Kumar announced the transition via a LinkedIn post, and Underwood posted a corresponding message.

Underwood joined Clumio as CRO in November last year. Seven months later he is taking over the top job. Three months after he joined, the company raised $75 million in a Series D funding exercise, with total funding rising to $262 million.

There was a fourfold growth in annual recurring revenue (ARR) for Clumio in 2023, the privately owned company claims, with double-digit millions of dollars in ARR and more than 100 PB of cloud data protected. Following the last funding round, Clumio said it would develop protection for cloud databases, data lakes, high-performance storage, and support for other “major cloud providers.”

Clumio co-founder Kaustubh Patil is listed on LinkedIn as VP Engineering but Kumar’s post says CTO Woon Ho Jung runs products and engineering. Patil doesn’t appear on Clumio’s leadership webpage. We’ve asked the company about this.

B&F looks forward to seeing Clumio’s services appear in Azure and the Google Cloud Platform, and possibly extending to cover other SaaS applications. Perhaps there will be a stronger focus on ransomware and security generally. Asigra, Cohesity-Veritas, Commvault, Druva, HYCU, Keepit, Rubrik, and other SaaS data protection companies will be watching what happens closely.

Druva CSO on ransomware’s impact on cyber insurance

Interview. Yogesh Badwe, chief security officer at SaaS-based data protector Druva, caught up with Blocks & Files for a Q&A session to discuss how ransomware, as a data security problem, is affecting cyber insurance.

Blocks & Files: Is ransomware a data security problem rather than a firewall, anti-phishing, credential stealing exercise?

Yogesh Badwe: Ransomware is a serious concern for businesses, and data security is absolutely a major part of that. If an attack is successful, ransomware can impact confidentiality, integrity and/or availability of data, and a strong approach to data security can reduce the probability of the negative outcomes that are associated with ransomware. While there is no silver bullet to preventing ransomware, ensuring a strong approach to data security alongside some of these other critical cyber hygiene practices – like properly segmented firewalls, anti-phishing practices, and strong password policies and management to prevent tactics like credential stealing – are all critical pieces of the puzzle to reducing the likelihood of being impacted by ransomware. 

Yogesh Badwe, Druva

Blocks & Files: As ransomware attacks are increasing, what will happen to cyber insurance premiums?

Yogesh Badwe: As ransomware attacks increase, we have also seen an increasing trend of ransomware victims paying ransom. Average ransomware payments are also on the uptick. This likely changes the calculus for insurance underwriters. In response, we have and will continue to see a few things:

  1. Scoped-down cyber insurance policies, with sub limits enforced on ransomware payments 
  2. Increased premiums
  3. Premiums that are tightly tied to real-time risk postures (as opposed to a one-time understanding of clients’ risks)
  4. Increased stringency on risk assessments during the initial policy/premium formulation
  5. Requirements for continuous monitoring – it is in the best interest of cyber insurance providers to monitor for and inform their clients of outside-in cyber weaknesses that they see. We will see increasing use of this outside-in open source security monitoring to mitigate risk faced by clients.

Blocks & Files: Do open source supply chains contribute to the risks here and why?

Yogesh Badwe: Yes, vulnerable OSS supply chains can be a surface area that is targeted by ransomware threat actors for initial intrusion or lateral movement inside an organization. We have also seen persistent and well-resourced threat actors stealthily insert backdoors inside commonly used libraries. 

Blocks & Files: What role does non-human identity (NHI) security play in this?

Yogesh Badwe: NHI is an increasing area of focus for security practitioners. From a ransomware perspective, NHIs are yet another vector for initial intrusion or lateral movement inside an organization. Orgs have spent a lot of time securing human identities via SSO and strong policies around password hygiene – rotation, session lifetime, etc. 

To put it into perspective, there are more than 10x NHIs to human identities and as an industry we haven’t spent enough time improving NHI security posture. In fact, over the last 18 months, the majority of public breaches have had some sort of NHI component associated with them. 

The reality is that NHIs cannot have the same security policy enforcement that we assume for human identities. For example, holistically for all NHIs in an organization, it is difficult to have strict provisioning and de-provisioning processes around NHIs similar to what we do for humans – to enforce MFA and password rotation, and to notice the misuse of NHIs as compared to human identities.

Due to NHI sprawl, it is trivial for an attacker to get their hands on an NHI, and typically, NHIs have broad sets of permissions that are not monitored to the extent that human identities are. We’re seeing a number of startup companies focused on securing NHIs get top-tier VC funding due to the nature and uniqueness of this problem.
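To illustrate the rotation-policy gap Badwe describes, here is a minimal sketch of a stale-credential check for non-human identities. The inventory format, field names, and 90-day window are hypothetical illustrations, not Druva's tooling:

```python
from datetime import datetime, timedelta

MAX_KEY_AGE_DAYS = 90  # hypothetical rotation policy

def stale_nhis(inventory, now):
    """Return non-human identities whose credentials exceed the rotation window.

    `inventory` is a list of dicts with 'name', 'kind' ('human' or 'nhi'),
    and 'last_rotated' (datetime). All names and fields are illustrative.
    """
    cutoff = now - timedelta(days=MAX_KEY_AGE_DAYS)
    return [i["name"] for i in inventory
            if i["kind"] == "nhi" and i["last_rotated"] < cutoff]

now = datetime(2024, 6, 1)
inventory = [
    {"name": "ci-deploy-token", "kind": "nhi", "last_rotated": datetime(2023, 1, 15)},
    {"name": "billing-api-key", "kind": "nhi", "last_rotated": datetime(2024, 5, 20)},
    {"name": "alice", "kind": "human", "last_rotated": datetime(2022, 3, 1)},
]
print(stale_nhis(inventory, now))  # ['ci-deploy-token']
```

Note that the human identity is excluded: the point is that NHIs need their own enforcement path, since SSO and MFA controls do not reach them.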

Blocks & Files: Should there be a federal common standard for cybersecurity?

Yogesh Badwe: Absolutely. Approximately two decades ago we had GAAP (generally accepted accounting principles) come out, which laid down a clear set of guidelines, rules, expectations, and processes for the bare-minimum, baseline accounting standard in the finance world. 

We don’t have a GAAP for security. What we have is a variety of overlapping (and sometimes subjective) industry standards and industry frameworks that different organizations use differently. Duty of care as it relates to reasonable security measures that an entity should take is left to the judgment and description of each individual entity without any common federally accepted definition of what good security looks like. 

Only a federal common standard on cybersecurity will help convert the tribal knowledge of what good looks like into an enforceable and auditable framework like GAAP.

Blocks & Files: Can AI be used to improve data security, and how do you ensure it works well?

Yogesh Badwe: Generative AI can be leveraged to improve a number of security paradigms, including data security. It can play a transformative role in everything from gathering and generating relevant context about the data and its classification, generating anomalies around permissions and activity patterns, and helping security practitioners either prioritize or help action and remediate data security concerns. 

One simple example is a security analyst reviewing a data security alert around activity related to sensitive data. He or she can leverage generative AI to get context about the data, context about the activity, and the precise next steps to triage or mitigate the risk. The possibilities to leverage AI to improve data security are limitless.

How do we ensure it works well? Generative AI itself is a data security problem. We have to be careful in ensuring the security of data that is leveraged by GenAI technologies. As an example, we have to think about how to enforce permission and authorization that exists on source data, as well as on the output generated by the AI models. 

It’s essential to continue with human-in-the-loop processes, at least initially, until the use cases and technology mature where we can rely on it 100 percent and allow it to make state changes in response to data security concerns. 

Quobyte tackles high-speed file querying demand with new engine

Unified high-performance storage platform Quobyte has taken the wraps off its distributed File Query Engine, intended to allow users to query file system metadata at high speed.

Quobyte allows small teams to run large-scale high-performance computing (HPC) infrastructures across various industry segments, including education and health research.

Aimed at environments with massive data sets, File Query Engine offers a range of capabilities, including the ability to query user-defined metadata for AI/ML training, enabling users to label files with metadata directly instead of managing separate small metadata files.

Additionally, administrators can quickly answer operational questions, such as identifying space-consuming cold files or locating files owned by specific users.

File Query Engine replaces slow file system tree walks (“find”), offering a faster and more efficient alternative for large volumes. It is integrated with Quobyte’s distributed and replicated key-value store, which stores metadata.

And, unlike other products, the engine does not require an additional database layer, resulting in faster queries and “significant” resource savings, claimed Quobyte. Queries are executed in parallel across all metadata servers for fast scans across the entire cluster or select volumes.

“File Query Engine is a game-changer for our customers,” said Bjorn Kolbeck, CEO of Quobyte. “It streamlines the process of querying file system metadata, offering fast and efficient results even for large datasets, AI, and machine-learning workloads.”

The technology is part of Quobyte release 3.22 and is automatically available without any configuration. Users can run file metadata queries using the command-line tool “qmgmt,” which supports output in CSV or JSON formats.

Additionally, queries can be initiated via the Quobyte API, providing “flexibility and ease of use”, said the provider.
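To illustrate the sort of post-processing such queries enable – for example, the cold-file question mentioned above – here is a minimal sketch that filters JSON query results. The record layout (path, size, atime fields) is an assumed illustration, not Quobyte's documented qmgmt output schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical JSON query output -- field names are illustrative,
# not Quobyte's actual schema.
SAMPLE = '''
[
  {"path": "/vol1/scratch/run01.dat", "size": 7516192768, "atime": "2022-11-02T10:00:00+00:00"},
  {"path": "/vol1/models/weights.bin", "size": 2147483648, "atime": "2024-05-30T08:12:00+00:00"}
]
'''

def cold_files(records, cutoff):
    """Return files not accessed since `cutoff`, largest first."""
    old = [r for r in records
           if datetime.fromisoformat(r["atime"]) < cutoff]
    return sorted(old, key=lambda r: r["size"], reverse=True)

cutoff = datetime(2024, 1, 1, tzinfo=timezone.utc)
for r in cold_files(json.loads(SAMPLE), cutoff):
    print(f'{r["path"]}: {r["size"] / 2**30:.1f} GiB')  # /vol1/scratch/run01.dat: 7.0 GiB
```

The point of the engine is that producing such a record set is a parallel metadata scan rather than a tree walk over the file system itself.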

Among various existing use cases, Quobyte’s unified block, file, and object storage is being used by the HudsonAlpha Institute for Biotechnology in the US to store primary life sciences and genomics data in a hybrid disk-plus-flash system.

Nutanix reports solid revenue rise, signs Dell deal to aid VMware migration

Nutanix notched up yet another quarter of solid revenue, ARR and customer count growth in its latest SEC results report and signed a deal with Dell aimed at capturing displeased VMware customers.

Revenue generated in Nutanix’ Q3 of fiscal 2024, ended April 30, was $524.6 million, up 17 percent year-on-year and beating consensus Wall Street analyst estimates by $9 million. The company reported a $15.6 million net loss, much better than the year-ago loss of $70.8 million. The third quarter is seasonally lower than Nutanix’ second quarter, and the software biz has not yet reached sustained profitability after last quarter’s landmark first-ever profit.

President and CEO Rajiv Ramaswami said in a statement: “We delivered solid third quarter results reflecting disciplined execution and the strength of our business model,” with CFO Rukmini Sivaraman adding: “Our third quarter results demonstrated a good balance of top and bottom line performance with 24 percent year-over-year ARR growth and strong year-to-date free cash flow generation. We remain focused on delivering sustainable, profitable growth.”

Nutanix added 490 new customers in the quarter, taking its total customer count to 25,860. Its Average Contract Value (ACV) billings rose 20 percent Y/Y to $288.9 million, and annual recurring revenue (ARR) increased 24 percent to $1.8 billion.

Financial summary

  • Gross margin: 84.8% vs 81.6% last year
  • Free cash flow: $78.3 million vs year-ago’s $52.7 million
  • Operating cash flow: $96.4 million vs year-ago $74.5 million
  • Cash, cash equivalents and short-term investments: $1.651 billion

William Blair analyst Jason Ader told subscribers: “The top-line outperformance was mainly driven by large wins with the Nutanix Cloud Platform (NCP) and consistent renewals performance from steady infrastructure modernization demand.”

Nutanix is involved in more large deals and these can come with their own challenges. The Blair analyst said: “The number of opportunities in the pipeline with ACV billings greater than $1 million has grown more than 30 percent over the last three quarters, while the total dollar value of those deals is up 50 percent over the same period.” But the deals can take longer to negotiate to a close and add variability to Nutanix’ income rate.

Ramaswami said in an earnings call with financial analysts: “Our largest new customer win of the quarter was an eight-figure ACV deal with a North American-based Fortune 50 financial services company that was looking to streamline and automate the deployment and management of their substantial fleet of databases. … This win was substantially larger than our typical land win and marks the culmination of an approximately two year engagement.”

Sivaraman expanded on this, saying: “The dollar amount of pipeline from opportunities greater than $1 million in ACV has grown at well over 50 percent and for each of the last three quarters compared to the corresponding quarters last year. These larger opportunities often involve strategic decisions and C-suite approvals, causing them to take longer to close and to have greater variability in timing, outcome and deal structure.”

Both Workday and Salesforce also recently noted an elongation of project approvals for enterprise software contracts.

Dell standalone AHV deal

Nutanix is trying to respond to Broadcom’s VMware acquisition by making it easier for dissatisfied VMware customers to migrate to Nutanix, by enabling Nutanix’ AHV hypervisor to run on existing Dell servers. That means decoupling AHV from Nutanix’ full software stack, which otherwise has to run on Nutanix-certified hardware. The Dell AHV server will connect to storage-only HCI nodes available in the next few months and then, in calendar 2025, to Dell’s PowerFlex-based storage systems using IP networking.

This enables legacy 3-tier architecture customers wanting to depart from VMware to do so immediately rather than waiting for a 3-year depreciation cycle for their hardware to end. Ramaswami said: “This gives us easier insertion into accounts where they’re not quite ready to go depreciate their hardware yet, allowing us to then over time convert them over to HCI.”

Nutanix will support AHV running stand-alone on other OEMs’ servers and various storage nodes, but IP-access storage, not Fibre Channel. Migration then to the full Nutanix stack would be a land-and-expand type opportunity.

Nutanix’ AHV already runs on Cisco UCS servers and the AHV/UCS server combo connects to Nutanix storage-only nodes. Ramaswami said: “We expect to see a growing contribution from Cisco in, and of course into, FY’25.”

The outlook for Nutanix’ final quarter of fiscal 2024 is $535 million +/- $5 million, an 8.3 percent year-on-year rise. William Blair’s Ader said: “The current guidance includes impact from the increasing variability that management has seen from larger deals in the pipeline, reflecting new and expansion bookings tracking below management expectations. Full-year guidance assumes modest impact from the VMware displacement opportunity (which management continues to see as a multi-year tailwind) and developing OEM partnerships, both of which should have a more material impact in fiscal 2025.”

Since buying VMware, new owner Broadcom has made a number of sweeping changes to licences, products and worldwide channel programs that govern who can and can’t sell VMware.

AI server sales drive topline growth for Dell

Dell’s revenues have finally grown after six successive quarterly drops, led by AI-driven server sales. 

Revenues in the first quarter of Dell’s fiscal 2025 ended May 3, 2024, were up six percent year-on-year to $22.2 billion. The PC-centric Client Solutions Group (CSG) was flat at $12 billion: Commercial client revenue was $10.2 billion, up 3 percent year-on-year; and Consumer revenue was $1.8 billion, down 15 percent. The Infrastructure Solutions Group (ISG) pulled in $9.2 billion, 22 percent higher, driven by AI-optimized server demand and traditional server growth. Servers and networking booked $5.5 billion, up 42 percent. Storage sales remained flat at $3.8 billion. The Texan tech corp reported $955 million net profit, up 65 percent.

Jeff Clarke

COO and vice chairman Jeff Clarke said: “Servers and networking hit record revenue in Q1, with our AI-optimized server orders increasing sequentially to $2.6 billion, shipments up more than 100 percent to $1.7 billion, and backlog growing more than 30 percent to $3.8 billion.”

Quarterly financial summary

  • Gross margin: 21.6 percent vs 24 percent a year ago
  • Operating cash flow: $1.0 billion
  • Free cash flow: $0.6 billion vs $0.7 billion last year
  • Cash, cash equivalents, and restricted cash: $7.3 billion
  • Diluted earnings per share: $1.3, up 67 percent y/y

Dell returned $1.1 billion to shareholders through $722 million share repurchases and $336 million dividends.

A welcome rise in Q1 fy25 revenues after 6 successive down quarters

Within ISG, trad server demand grew sequentially for the fourth consecutive quarter and was up Y/Y for the second consecutive quarter. But AI servers led the charge, and the PowerEdge XE9680 server is the fastest ramping new server in Dell history. Storage was left behind. There was demand strength in HCI, PowerMax, PowerStore and PowerScale. The new PowerStore and PowerScale systems should help lift sales next quarter.

AI boosted server sales, but so far it has not increased demand for storage to hold model data.

Dell is convinced a storage AI demand-led increase is going to happen. Clarke said in the earnings call: “our view of the broad opportunity hasn’t changed around each and every AI server that we sell. I think we talked last time, but maybe to revisit that, we think there’s a large amount of storage that sits around these things. These models that are being trained require lots of data. That data has got to be stored and fed into the GPU at a high bandwidth, which ties in network. The opportunity around unstructured data is immense here, and we think that opportunity continues to exist.” 

He added: “We expect the storage market to return to growth in the second half of the year. And for us outperform the marketplace. … I would call out PowerStore Prime. The addition of QLC allows us to be more competitive, our performance and have a native … sync replication … allow us to be more competitive in the largest portion of the storage marketplace. And our storage margins need to improve and will improve over the course of the year.” 

Dell wants us to know that, even with storage revenues flat, it retains its storage market leadership.

Both HPE and VAST Data have new disaggregated shared everything (DASE) block and file storage systems, claiming more efficient scale-out than their competitors. Dell will be hoping that its new QLC flash-aided PowerStore and PowerScale, especially the coming parallel file system extension for PowerScale, will stop any inroads into its market leadership.

In CSG, Dell introduced five NextGen AI PCs but sales have yet to take off. Clarke said: “We remain optimistic about the coming PC refresh cycle, driven by multiple factors. The PC install base continues to age. Windows 10 will reach end-of-life later next year. And the industry is making significant advancements in AI-enabled architectures and applications.”

Dell’s focus on AI, with its AI Factory series of announcements, gives it a near pole position in the market for helping customers adopt AI. The amount of AI adoption will depend on the technology delivering accurate and relevant results without going off into hallucinatory lies and mis-statements. AI has to stand for artificial intelligence and not artificial idiocy.

Komprise is praised by analyst for its unstructured data management effort

Unstructured data management specialist Komprise has been named an “innovator” in its field by analyst house IDC.

In the analyst’s report, IDC Innovators: Knowledge Management Technologies, Komprise was recognized for its Intelligent Data Management offering, a single platform to analyze, move and manage unstructured data.

Komprise customers are enterprises with petabyte-scale environments, including Pfizer, Marriott, Kroger, NYU and Fossil.

The IDC report says “over 90 percent” of data is unstructured, and that it is a “key asset” of enterprise intelligence, as well as a “big part” of storage costs.

IDC says Komprise reduces complexity in managing unstructured data growth with location-agnostic file analysis and indexing. “That analysis is purpose-built to not ‘get in the way’, i.e., it will not disrupt data movement and operations,” IDC said.

The platform enables migration, data tiering, replication, workflows, and AI data preparedness.

Intelligent Data Management, adds IDC, helps enterprises unlock the value hidden in unstructured data and reduces storage costs. “Proper metadata tagging and access ensures AI solutions can extract and present the right data, at the right time, and to the right person,” it says.

Earlier this month, Komprise announced Smart Data Workflow Manager, a no-code AI data workflow builder that addresses use cases such as sensitive data identification, chatbot augmentation, and image recognition.
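As a rough illustration of what sensitive data identification involves, the sketch below flags text containing common personal-data patterns. The patterns and category names are illustrative assumptions, not Komprise's actual detection logic:

```python
import re

# Illustrative patterns only -- real products use far richer detection.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US Social Security number shape
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_text(text):
    """Return the set of sensitive-data categories found in `text`."""
    return {name for name, pat in PATTERNS.items() if pat.search(text)}

print(sorted(scan_text("Contact bob@example.com, SSN 123-45-6789")))  # ['email', 'ssn']
```

In a workflow tool, the category set produced per file could then drive a policy action, such as tiering the file to a restricted target or excluding it from chatbot augmentation.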

Krishna Subramanian

“Being named an IDC Innovator is a great honor and we believe our inclusion indicates how organizations are starting to treat data independently of storage, to ascertain and nurture its true value across hybrid cloud infrastructures,” said Krishna Subramanian, chief operating officer and co-founder of Komprise.

IDC Innovators are vendors with current annual sales of under $100 million, chosen by an IDC analyst within a specific market, that offer a new technology, a “ground-breaking solution” to an existing issue, and/or an innovative business model. It is not an exhaustive evaluation or a comparative ranking of all companies, says IDC.

In the past, like many early bird technology companies, VC-backed Komprise has been shy about revealing its sales figures. At least we now know it hasn’t broken the $100 million barrier, yet.

Earlier this year, Komprise introduced elastic replication, which it says provides more affordable disaster recovery for non-mission-critical file data at the sub-volume level.

All-flash leads NetApp revenues higher

NetApp’s latest quarterly revenues rose again, led by all-flash array demand.

Revenues rose for the third successive quarter to $1.67 billion in the fourth quarter of NetApp’s fiscal 2024, ended April 26, 2024, up 6 percent and exceeding its guidance mid-point. There was a $291 million profit, up 18.8 percent annually, NetApp’s 25th successive profitable quarter. This Kurian-led cash machine is so consistent. The all-flash array annualized revenue run rate beat last quarter’s record to reach $3.6 billion, with $850 million of AFA revenues in the quarter.

Full fy2024 revenues of $6.27 billion were 1 percent down Y/Y with profits of $986 million down 2.2 percent Y/Y.

Hybrid cloud revenues of $1.52 billion in the quarter rose 6.3 percent, basically driving the business. Cloud revenues, meaning public cloud, remain depressingly low at $152 million, a minuscule 0.7 percent higher Y/Y, as the Anthony Lye-led buying spree still fails to deliver meaningful revenue increases.

George Kurian

CEO George Kurian’s results quote said: “We concluded fiscal year 2024 on a high note, delivering company records for annual gross margin, operating margin, EPS, operating cash flow, and free cash flow and building positive momentum. … In fiscal year 2025, we will remain laser focused on our top priorities of driving growth in all-flash and cloud storage services while maintaining our operational discipline.”

NetApp probably has the best public cloud storage service offerings of all the storage suppliers, with its data fabric spanning the on-prem ONTAP world, software instantiations on AWS, Azure, and GCP, and BlueXP hybrid cloud storage monitoring and management services. Yet significant demand has proved almost illusory, and its competitors, notably Dell, HPE, and Pure, are slowly but steadily catching up. Indeed, Pure has a big announcement coming in this area next month.

Kurian pointed out NetApp’s public cloud progress in the earnings call, saying: “In Q4, we had a good number of takeouts of competitors’ on-premises infrastructure with cloud storage services based on NetApp ONTAP technology, which helped drive our best quarter for cloud storage services with each of our hyperscaler partners. We are well ahead of the competition in cloud storage services and we are innovating to further extend our leadership position.”

He referred to expected public cloud revenue growth, saying: “We expect that cloud first-party and marketplace cloud storage should continue to ramp strongly, which will deliver overall growth in cloud – consistent revenue growth in cloud in fiscal year ’25, stronger in the second half than in the first half.”

Quarterly financial summary

  • Consolidated gross margin: 71.5 percent vs 68 percent a year ago
  • Operating cash flow: $613 million vs $235 million a year ago
  • Free cash flow: $567 million vs $196 million last year
  • Cash, cash equivalents, and investments: $3.25 billion
  • EPS: $1.37 vs $1.13 a year ago
  • Share repurchases& dividends: $204 million in stock repurchases

NetApp widened the gap between its own all-flash array revenues and Pure Storage’s.

Kurian singled out the AFA revenues, saying: “Strong customer demand for our broad portfolio of modern all-flash arrays, particularly the C-series capacity flash and ASA block optimized flash, was again ahead of our expectations.”

A look at NetApp’s quarterly revenue history shows that it reversed the lower Q1 and Q2 revenues this fiscal year with higher ones in Q3 and Q4, ending the year pretty much level-pegging with the previous one.

Just like its competitors NetApp is seizing the AI opportunity in front of it, positioning itself as a provider of the data infrastructure foundation for enterprise AI.

Kurian said: “Customers choose NetApp to support them at every phase of the AI lifecycle due to our high performance all-flash storage complemented by comprehensive data management capabilities that support requirements from data preparation, model training and tuning, retrieval-augmented generation or RAG, and inferencing, as well as requirements for responsible AI including model and data versioning, data governance and privacy. We continue to strengthen our position in enterprise AI, focusing on making it easier for customers to derive value from their AI investments.”

He then said: “We had about more than 50 AI wins in Q4 across all elements of the AI landscape I talked about, both in data foundations like data lakes as well as model training and inferencing across all of the geographies. I would tell you that in the AI market, the ramp on AI servers will be much ahead of storage because what clients are doing is they’re building new computing stacks but using their existing data. And so we expect that over time there will be a lot more data created and unified to continue to feed the model. But at this stage, we are in proof of concept. We think that there’s a strong opportunity over time for us and all of the AI growth is factored into our guidance for next year.”

He reckons that: “AI … is the opportunity that will become much more meaningful over time. We are well positioned with the huge installed base of unstructured data, which is the fuel for GenAI, and we are focused on helping customers do in-place RAG and inferencing of that data.” 

Next quarter’s revenue outlook is $1.455 billion to $1.605 billion, a $1.53 billion mid-point, which would be a 7.0 percent annual rise. The full fiscal 2025 revenue outlook is $6.45 billion to $6.65 billion, with the $6.55 billion mid-point representing a 4.5 percent Y/Y increase and NetApp’s highest-ever annual revenue.
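As a sanity check, the guidance mid-points and growth rates quoted above reduce to a few lines of arithmetic (the fy2024 revenue base is the $6.27 billion reported earlier in the piece):

```python
# NetApp guidance figures quoted above, in $ billions.
q1_low, q1_high = 1.455, 1.605
fy_low, fy_high = 6.45, 6.65
fy24_revenue = 6.27

q1_mid = (q1_low + q1_high) / 2          # 1.53
fy_mid = (fy_low + fy_high) / 2          # 6.55
fy_growth = (fy_mid - fy24_revenue) / fy24_revenue

print(f"Q1 mid-point: ${q1_mid:.2f}B")
print(f"FY25 mid-point: ${fy_mid:.2f}B, {fy_growth:.1%} over FY24")  # 4.5% over FY24
```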

Fiscal year 2025 projected revenue is the red bar

This would end NetApp’s 11-year failure to beat its fy2013 $6.332 billion highpoint. George Kurian became CEO in 2015, and reaching a record revenue highpoint in fy2025, after ten years in office, would be quite a satisfying feat.

Wedbush analyst Matt Bryson told subscribers: “Q1 and FY’25 guidance both presented upside vs. prior Street expectations, while management’s tendency towards conservatism likely means that NTAP will eventually deliver results closer to the high end of the new forecast.”

Scality launches partner recruitment drive in response to growth

Object storage software specialist Scality has made a clarion call for new partners on the back of company growth and profitability.

Jerome Lecat

Scality CEO Jérôme Lecat confirmed the biz is profitable, on the back of 20 percent annual growth, at this week’s Technology Live! Paris – a vendor showcase event for press and analysts. 

He said regional and global growth meant the firm was in need of new partners to meet demand for its software, which is targeted at two different segments using two different products.

The RING product is aimed at large organizations building their own substantial cloud infrastructure, and its ARTESCA platform – which has just been updated to version 3.0 – is used by smaller organizations or edge/branch sites that have less dense object storage needs. 

Lecat said of the channel: “We have tripled investment in our VAR network and ARTESCA is already handled by the big three distributors – Ingram Micro, Arrow and TD Synnex – but we still need many more partners globally.”

He added: “Even in territories like France, where we are not spread thinly, we still need more partners, and that goes for the likes of Germany and the UK too. If anyone has any potential partners send them to me.”

Lecat said the RING product, which is primarily sold through HPE, has only a four percent churn rate. That might not be surprising, though, as its customers manage high numbers of petabytes – they’re not exactly going to migrate at the drop of a hat.

As for ARTESCA, over half of that business comes through the Veeam partner channel, with Scality’s technology tightly integrated with the cloud data management/backup king.   

At the back end of last year, Scality launched an ARTESCA hardware appliance specifically for Veeam users that competes with a product from Object First, another object storage partner of Veeam.

Keepit makes big promises around its SaaS backup service

SaaS data protection vendor Keepit has set out its stall to win big in the relatively fledgling market.

The firm is exploiting increasing demand for SaaS protection – particularly as companies begin to realize that they, not the SaaS providers, are responsible for backing up their own SaaS data.

At this week’s Technology Live! Paris – a regular data management vendor showcase for press and analysts – Copenhagen, Denmark-headquartered Keepit said it plans to increase the number of SaaS workloads it protects for end customers. The company reaches those customers through channel partners.

Currently, it protects SaaS data generated through eight key business applications/software suites: Microsoft 365, Google Workspace, Dynamics 365, Salesforce, Azure DevOps, Zendesk, Power Platform, and Entra ID (formerly Azure AD).

Keepit declared it will be increasing this number over the short to medium term. To do this, it will have to develop connectors that customers can use to back up data from the newly covered applications.

Overall, it’s a market that is also targeted by the likes of cloud data management vendor Veeam, which offers protection for Salesforce and Microsoft 365 data, for instance. And cloud data backup player HYCU has been making big noise in the market too, after commercially launching a specific SaaS backup service about 18 months ago.

Keepit has been building up its own datacenter capacity to support customers’ SaaS data, which can be accessed by them on demand. Six of the seven datacenters the company runs involve capacity it rents in Equinix facilities across the US, Canada, Australia, the UK, Germany, Denmark, and Switzerland. To address data sovereignty requirements, customers can stipulate their data stays in their particular region.

The service is aimed at companies of all sizes and charged per seat/user, with no extra charges for putting in or taking out data from the Keepit backup infrastructure. The cost is around €3 a month for each user, with the average cost coming down for enterprises with large numbers of users.

Sylvain Siou

Sylvain Siou, VP of go-to-market technology at Keepit, said: “All data storage is included in the cost, so customers have simplicity and total cost control.

“We are much cheaper than storing the data in the public cloud – over time, that cost tends to steadily increase. Our charges are flat.”

Users have to buy seats for each separate workload that is backed up. Asked how its charges compare with those of HYCU and the many other data backup companies, Keepit maintained it is rarely beaten on price once it sits down with customers.

One reason Keepit reckons it can win on price is that it runs its own infrastructure, unlike some rivals that pay to store customers’ data in the public cloud.

All Keepit data is stored on high-density disks and is deemed “hot” – so customers can get immediate access to it when needed. More expensive flash systems are not seen as “viable” for the service Keepit provides. There is no room for tape either, as none of the data is regarded as “cold.” The company mainly uses Dell Technologies hardware in its datacenters.

Pure Storage pitching hard to hyperscalers as it reports Q1 topline boost

Pure substantially beat its own first fiscal 2025 quarter revenue estimate and is getting more confident about winning a future hyperscaler customer.

The $693.5 million revenue in the quarter ended May 5 was up 18 percent year-on-year and way past the $680 million outlook given in the prior quarter’s earnings report. There was a loss of $35 million, better than the year-ago $67.4 million loss.

Charlie Giancarlo, Pure’s CEO, said: “We are pleased with our Q1 performance, returning to double-digit revenue growth for the quarter.”

CFO Kevan Krysler said: “Revenue growth of 18 percent and profitability both outperformed.” Why the surprise revenue jump this quarter? Krysler said: “Two key drivers of our revenue growth this quarter were: one, sales to new and existing enterprise customers across our entire data storage platform; and two, strong customer demand for our FlashBlade solutions, including FlashBlade//E. … We are aggressively competing and winning customers’ secondary and lower storage tiers with our //E family solutions and FlashArray//C.”

There were 262 new customers in the quarter with the total customer count being more than 12,500. The subscription annual recurring revenue (ARR) of $1.45 billion was up 25 percent year-on-year. Product revenues were $347.4 million, an increase of 12.5 percent Y/Y. There were record first quarter sales of the FlashBlade product.

A gratifying return to double digit revenue growth in fy2025’s first quarter.

Quarterly financial summary

  • Gross margin: 73.9 percent, up from year-ago 72.2 percent
  • Free cash flow: $173 million vs year-ago $122 million
  • Operating cash flow: $221.5 million vs year-ago $173 million
  • Total cash, cash equivalents and marketable securities: $1.72 billion
  • Remaining Performance Obligations: $2.29 billion vs year-ago $1.8 billion. 

Pure won a Meta (Facebook) deal a couple of years ago but little has been heard about that recently. However, Giancarlo was bullish about future hyperscaler business in his prepared remarks: “Our //E family continues its strong growth and was also a key enabler in our discussions with hyperscalers. … The quantity and quality of our discussions with hyperscalers have advanced considerably this past quarter.”

He explained: “Hyperscalers have a broad range of storage environments. These include high-performance storage based on SSDs, multiple levels of lower-cost HDD-based nearline storage, and tape-based offline storage. We are in a unique position to provide our Purity and DirectFlash technology for both their high performance and their nearline environments, which make up the majority of their storage purchases. Our most advanced engagements now include both testing and commercial discussions. As such, we continue to believe we will see a design win this year.”

Krysler talked about Pure’s “pursuit of replacing the vast majority of data storage with Pure Flash, for all customer workloads, including hyperscalers’ bulk [disk] storage.” He means the public cloud and Meta-like hyperscalers – the FAANG-type companies: Meta (Facebook), Amazon, Apple, Netflix, and Alphabet (Google’s parent). Pure is looking at their general-purpose bulk storage – the 80 to 90 percent of their total storage buildouts sitting on disk. CTO Rob Lee said in the earnings call: “That’s really the opportunity that we see as we refer to the hyperscalers.”

A win there would be a landmark deal.

Giancarlo sees three other opportunities ahead: “The recent advances in AI have opened up multiple opportunities for Pure in several market segments. Of greatest interest to the media and financial analysts has been the high-performance data storage market for large public or private GPU farms. A second opportunity is providing specialized storage for enterprise inference engine or RAG environments. The third opportunity, which we believe to be the largest in the long term, is upgrading all enterprise storage to perform as a storage cloud, simplifying data access and management, and eliminating data silos, enabling easier data access for AI.”

He asserted: “We also believe that long-term secular trends for data storage are no longer based on the expectation of commoditized storage, but rather on high-technology data storage systems, and run very much in our favor.”

Next quarter’s revenues are expected to be $755 million, 9.6 percent up on the year-ago quarter. Pure is looking for 2025 revenues to be around $3.1 billion, meaning a 10.5 percent rise on 2024.

Giancarlo teased future announcements, saying: “At our June Accelerate conference, global customers will see how our latest innovations enable enterprises to adapt to rapid technological change with a platform that fuses data centers and cloud environments.” It sounds like Cloud Block Store is getting a big boost. Maybe we’ll see the Purity OS ported to the public cloud and a cloud version of FlashBlade.

Scality pushes out latest ARTESCA object storage, claiming to give ransomware a 5-level brush-off

ARTESCA object storage supplier Scality says its latest v3.0 release has five levels of ransomware protection, and claims API-level immutability is not good enough.

Update, 3 June 2024: Time-shift attack and Network Time Protocol attack points amended.

The ARTESCA product is a cloud-native, S3-compatible version of Scality’s RING object storage, co-designed with HPE. It can be a backup target for Veeam, and there is a hardware appliance version specifically for Veeam customers.

Paul Speciale

Scality CMO Paul Speciale said in a canned quote: “Every vendor selling immutable storage claims its solution will make your data ransomware-proof, but it’s clear — immutability is not enough to keep data 100 percent protected.

“94 percent of IT leaders rely on immutable storage as a foundational aspect of their cybersecurity strategy.  If immutable backups were the answer, then why did ransom payments double in 2023 to more than $1 billion? It’s time that the storage industry goes beyond immutability to deliver end-to-end cyber resilience.”

A Scality blog states: “Today, a staggering 91 percent of ransomware attacks involve data exfiltration. This meteoric rise can be seen as a direct attempt by threat actors to sidestep the protections afforded by immutability.” 

Speciale’s suggestion that immutability failings have helped cause a doubling of ransomware payments seems unlikely when a blog by his own company attributes 91 percent of ransomware attacks to exfiltration attacks. Immutability will not stop an exfiltration attack.

Be that as it may, ARTESCA v3.0’s five levels of defense are:

  • API-level: Immutability implemented via S3 object lock ensures backups are immutable the instant they’re created. Multi-factor authentication (MFA) and access control help administrators prevent breaches of employee accounts.
  • Data-level: Multiple layers of data-level security measures are employed to prevent attackers from accessing and exfiltrating stored data.
  • Storage-level: Encoding techniques prevent destruction or exfiltration of backups by rendering stored data indecipherable to attackers, even when stolen access privileges are used to bypass higher-level protections.
  • Geographic-level: Multi-site data copies prevent data loss even if an entire data centre is targeted in an attack.
  • Architecture-level: An intrinsically immutable core architecture ensures data is always preserved in its original form once stored, even if the attacker attains the necessary access privileges to bypass API-level immutability.
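The API-level control above maps onto S3’s object lock feature. A minimal sketch of how a client might request a compliance-mode lock – the helper function, bucket, and key names are our own illustration, not Scality code, though the `ObjectLockMode` and `ObjectLockRetainUntilDate` parameters are standard S3 `put_object` arguments:

```python
from datetime import datetime, timedelta, timezone

def object_lock_params(bucket, key, body, retention_days):
    """Build the parameters an S3 put_object call needs to write an
    object that is immutable (WORM) until the retention date passes."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        # COMPLIANCE mode: the retention period cannot be shortened and the
        # object version cannot be deleted early, even by privileged accounts.
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": retain_until,
    }

# Usage with boto3 against an object-lock-enabled bucket (endpoint hypothetical):
#   s3 = boto3.client("s3", endpoint_url="https://artesca.example.com")
#   s3.put_object(**object_lock_params("backups", "job-42.vbk", data, 30))
params = object_lock_params("backups", "job-42.vbk", b"backup bytes", 30)
print(params["ObjectLockMode"])
```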

Speciale makes the point in a Solved magazine article that file-based storage systems basically allow file data to be re-written while object storage always creates a new object when data in an existing object changes. It is architecturally immutable whereas file storage is not.

He writes: “This means data remains intrinsically immutable, even to an attacker with superadmin privileges, due to the way the system handles data writes to the drive. The effect is simple — no deletes or overwrites, ever. Additionally, all Scality products disallow root access by default, reducing exposure to common vulnerabilities and exposures (CVEs) and a wide range of threats.”
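The no-overwrite write path Speciale describes can be illustrated with a toy append-only, content-addressed store – a sketch of the general principle, not Scality’s actual implementation:

```python
import hashlib

class AppendOnlyStore:
    """Toy illustration of an architecturally immutable write path: every
    put creates a new version keyed by content hash; existing blobs are
    never overwritten or deleted."""
    def __init__(self):
        self._blobs = {}      # content hash -> bytes (write-once)
        self._versions = {}   # object name -> list of content hashes

    def put(self, name, data):
        digest = hashlib.sha256(data).hexdigest()
        self._blobs.setdefault(digest, data)            # never overwrite
        self._versions.setdefault(name, []).append(digest)
        return digest

    def get(self, name, version=-1):
        """Read any version; -1 is the latest."""
        return self._blobs[self._versions[name][version]]

store = AppendOnlyStore()
store.put("invoice.txt", b"v1")
store.put("invoice.txt", b"v2")     # an "update" creates a new object
print(store.get("invoice.txt"))     # latest version
print(store.get("invoice.txt", 0))  # original is still intact
```

An attacker who "modifies" an object this way only adds a new version; the original bytes remain addressable.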

Scality claims that the following offerings are considered insufficient when it comes to immutability: 

  • NAS/file system snapshots
  • Dedupe appliances
  • Linux-hardened repositories 
  • Tape
  • S3 proxies (S3 API implemented on mutable architectures)

Scality asserts that only solutions based on a native object storage design are truly immutable, because they preserve data in its original form the moment it is written and never overwrite existing data.

Scality immutability checklist graphic.

File storage is much weaker on this front in Speciale’s view: “because the underlying file system is still inherently mutable, data remains vulnerable to attacks below the API layer. This creates multiple viable avenues for a skilled attacker to bypass the system’s defenses using common tactics like privilege escalation and time-shift attacks.”

Privilege escalation means getting higher-level access, such as root. A time-shift attack targets Network Time Protocol (NTP) servers, shifting a system’s clock forward so that a retention period appears to have expired before it actually has. Scality tells us: “Immutability is driven by specific time periods – let’s say a bad actor does something as simple as going in and changing the system clock. If a server believes it’s day 31, then a 30-day object lock is invalid and the data is no longer immutable. Similarly, it’s also possible to compromise credentials and change the policy on immutability. 

“These vulnerabilities are very much legitimate, so these soft spots absolutely must be addressed if storage vendors want to truly earn their ‘ransomware-proof’ claims.”
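The clock dependence Scality describes is easy to demonstrate: a naive retention check trusts the system clock, so shifting that clock forward makes a live lock appear expired. A minimal sketch, purely illustrative:

```python
from datetime import datetime, timedelta, timezone

def is_locked(retain_until, now=None):
    """A retention check that trusts the system clock -- the weakness
    a time-shift attack exploits. `now` is injectable for illustration."""
    now = now or datetime.now(timezone.utc)
    return now < retain_until

written = datetime.now(timezone.utc)
retain_until = written + timedelta(days=30)   # 30-day object lock

print(is_locked(retain_until))                # lock holds today

# Attacker shifts NTP-derived time forward: the server believes it's "day 31"
shifted = written + timedelta(days=31)
print(is_locked(retain_until, now=shifted))   # lock appears expired
```

A design that stores retention state independently of a spoofable clock, or rejects large clock jumps, closes this particular gap.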

We could find no published reports that explicitly mention NTP time shifts as a tactic employed by ransomware attackers. That is not to say it could not happen but, overall, there seems to be no empirical evidence that a ransomware attack has circumvented a file’s immutability using an attack below the API layer.

Also, object storage has always had the create-a-new-object-when-an-old-one-is-changed attribute, so there is nothing new in ARTESCA v3 here. S3 object locking was introduced in ARTESCA v2.0, so that is not new to v3 either.

ARTESCA v3.0 features:

  • Design in accordance with US Executive Order 14028 Improving the Nation’s Cybersecurity and zero-trust architecture principles, including enforced authentication and end-to-end encryption of data. 
  • Multi-factor authentication (MFA) for admin users that can now be globally enforced to provide additional login protection for admins and data managers
  • Integration with Microsoft Active Directory (AD), configurable directly through the secure ARTESCA administrative UI. 
  • Center for Internet Security (CIS) compliance testing through OpenSCAP project tools for continual conformance with CIS cybersecurity recommendations, including password strength compliance based on length, complexity and history.
  • Extended security-hardening of the integrated OS that disallows root access, including remote shell or su as root; admin access is only granted through a system-defined Artesca-OS user identity adhering to the principle of least privilege.
  • A software bill of materials (SBOM) of components and suppliers, scanned and continuously patched for CVEs, to provide customers with visibility into their software supply chain, and automated OS updates to patch vulnerabilities.
  • Increased scalability to 8.5 PB of usable capacity, with support for high-density servers and a wide choice of storage hardware, including multiple types of flash drives.
  • Enhanced dual-level erasure-coding.
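On the last bullet: erasure coding splits data into fragments plus parity so that reads survive drive or node failures. Scality does not detail its dual-level scheme, but the principle can be shown with the simplest case, single-parity XOR (our sketch; production systems use stronger codes such as Reed-Solomon):

```python
from functools import reduce

def encode(fragments):
    """Single-parity XOR erasure code: append one parity fragment so that
    any one lost fragment can be rebuilt from the survivors."""
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*fragments))
    return fragments + [parity]

def rebuild(stripe, lost_index):
    """Recover the missing fragment by XOR-ing all surviving fragments."""
    survivors = [f for i, f in enumerate(stripe) if i != lost_index]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

data = [b"AAAA", b"BBBB", b"CCCC"]   # equal-sized data fragments
stripe = encode(data)                 # 3 data fragments + 1 parity
print(rebuild(stripe, 1))             # lose fragment 1, rebuild it
```

A dual-level scheme applies coding at two tiers (for example within a server and across servers), so the system tolerates both drive and node losses.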

Speciale proclaims: “ARTESCA’s CORE5 capabilities set the bar for a new standard of truly cyber-resilient storage in modern data centres. Windows of exposure are effectively eliminated by providing not only the strongest form of data immutability, but also cyber resilience at all levels of the system. Together with Veeam, our customers achieve unbreakable data protection.”

Scality says ARTESCA 3.0 for Veeam is:

  • Veeam Ready validated for Veeam high-performance tier deployments on hybrid and all-flash storage servers at an affordable cost
  • VMware Instant Recovery Ready with ultra-high performance on all-flash servers
  • Simple-to-use compatibility with Veeam Backup & Replication, Veeam Backup for Microsoft 365 and Veeam Kasten in a single system
  • Quickly and effortlessly configured as a ransomware-hardened Veeam repository, thanks to its unique built-in Veeam Assistant tool
  • Offered as a turnkey hardware appliance for Veeam with a Quickstart Wizard to simplify integration into network environments

ARTESCA 3.0 can be deployed as a turnkey hardware appliance for Veeam built on Supermicro servers, software on industry-standard servers, or a virtual appliance for VMware-powered data centres.

It will be available in Q3 2024 through global resellers, supported by distribution partners Ingram Micro, Carahsoft, TD Synnex, Arrow, and other regional distributors.