As it projected it would back in June, ExaGrid now has over 4,000 customers following what it described as the strongest Q3 in the company’s history.
The company supplies scale-out tiered backup appliances in which incoming data is written as-is to a so-called landing zone for fast ingest and quick restores, and deduplicated later for longer-term, capacity-efficient retention.
Bill Andrews
CEO Bill Andrews said: “ExaGrid prides itself on having a highly differentiated product that just works, does what we say it does, is sized properly, is well supported, and just gets the job done. We can back up these claims with our 95 percent net customer retention, NPS score of +81, and the fact that 94 percent of our customers have our Retention Time-Lock for Ransomware Recovery feature turned on, and 99.1 percent of our customers are on our yearly maintenance and support plan.”
This is ExaGrid’s 11th growth quarter in a row with an average 20 percent revenue growth rate. In summary:
Cash, free cash flow, EBITDA: All positive
New customers added: 130
Customer count: >4,000
7-figure deals: 2
ExaGrid competes with backup appliance leader PowerProtect from Dell, and other suppliers such as HPE (StoreOnce), Quantum (DXi), and Veritas (NetBackup Flex). All of these suppliers dedupe data on ingest, so they have slower data load rates and slower restores of recent backups, since backups must first be rehydrated (the deduplication reversed).
The company also competes with all-flash, direct-to-object backup stores such as Pure’s FlashBlade, Object First, and VAST Data. The bulk of its competition is against disk-based Dell, Veritas, HPE, and Quantum, which would need to re-engineer their software to add an undeduplicated “landing zone” for incoming data and global deduplication across a cluster of appliances, as ExaGrid has. So far none of them has shown any public sign of planning to do so.
They could go all flash, using QLC SSDs like VAST Data and Pure’s FlashBlade//E, to get extra ingest and restore speed, but then so could ExaGrid, which could also support object (S3) ingest. Competitors such as Dell are incrementally improving their products but, so far, not enough to derail ExaGrid’s progress.
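For readers who want the architectural contrast made concrete, here is a minimal, hypothetical Python sketch of dedupe-on-ingest versus a landing zone with post-process deduplication. The chunk size, hashing scheme, and two-phase flow are illustrative assumptions, not ExaGrid’s or any competitor’s actual implementation.

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # illustrative fixed-size chunks; real appliances vary

def chunks(data: bytes):
    """Split a backup stream into fixed-size chunks."""
    for i in range(0, len(data), CHUNK_SIZE):
        yield data[i:i + CHUNK_SIZE]

def dedupe_on_ingest(backup_stream: bytes, store: dict) -> None:
    """Inline dedupe: hash every chunk before writing it, keeping only unique
    chunks. Restores must later 'rehydrate' data from these chunks."""
    for chunk in chunks(backup_stream):
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)

def landing_zone_ingest(backup_stream: bytes, landing_zone: list) -> None:
    """Landing-zone model: write the data as-is for fast ingest, and fast
    restores of recent backups straight from the undeduplicated copy."""
    landing_zone.append(backup_stream)

def post_process_dedupe(landing_zone: list, store: dict) -> None:
    """After the backup window, move landed data into the deduplicated
    retention tier for capacity-efficient longer-term storage."""
    while landing_zone:
        dedupe_on_ingest(landing_zone.pop(), store)

if __name__ == "__main__":
    store, zone = {}, []
    data = b"example backup data " * 10_000
    landing_zone_ingest(data, zone)    # fast path during the backup window
    post_process_dedupe(zone, store)   # capacity-efficient retention later
    print(f"unique chunks retained: {len(store)}")
```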
Data protector Acronis announced its Acronis CyberApp Standard integration technology, which broadens the Acronis ecosystem to third-party vendors, offering them the means to integrate their products and services into Acronis Cyber Protect Cloud. It claims this is the only framework that allows vendors to integrate into the Acronis platform with natively incorporated vendor workloads, alerts, widgets, and reports, ensuring a unified user experience for Acronis-owned and integrated applications. Vendors interested in becoming part of the Acronis ecosystem gain access to the Vendor Portal, from which they can build applications, share app details, upload marketing materials, and publish directly to the Acronis Application Catalog. For more information, visit: https://developer.acronis.com/.
…
Data protector Commvault has announced new enterprise backup and recovery systems powered by Lenovo server hardware. There are no details of these systems available but visitors to the GITEX Dubai exhibition event can learn more at booth #H5-A40. We’ve asked for more information about this.
…
Data lakehouse supplier Databricks says easyJet is using its Lakehouse and Generative AI. The airline has been a customer of Databricks for almost a year, using it for data engineering and warehousing. It just migrated all of its data science workloads and started to migrate analytics workloads onto Databricks. Now it’s started using Lakehouse AI for generative AI work, providing a tool for non-technical users to ask their questions in natural language and get insights from its rich datasets. There’s more information here in a blog.
…
Dell’s PowerProtect Data Manager emerged as the top choice in customer satisfaction for data protection software according to a recent independent double-blind study conducted by a third-party vendor. The alternative choices included Rubrik, Cohesity, Veeam, Commvault and Veritas. A Dell blog provides more info.
…
Real-time data platform supplier GridGain has released v8.9 of its eponymous software, with integrations for Apache Parquet, Apache Iceberg, CSV, and JSON to support more complex datastores, including enterprise data lakes and NoSQL/semi-structured document databases. It brings more storage- and read-efficient management of massive data tables, with support for high-performance, ACID-compliant queries and diverse document data types, helping developers build new and more complex applications faster. GridGain v8.9 extends its distributed, colocated, memory-centric computing capabilities to NoSQL and data lake technologies, enabling faster analytics. GridGain Platform v8.9 is available now and can be downloaded from the GridGain website.
Kristy Mao
…
TrueNAS software supplier iXsystems has appointed Kristy Mao as its SVP of Finance. She joins from Siemens Digital Industries Software, where, as VP of Finance & Performance Management, she led a global team of 90 overseeing strategic financial planning, FP&A, finance IT, digital transformation, and enterprise risk management. Michael Lauth, President & CEO of iXsystems, said: “iXsystems has evolved into a medium-sized enterprise with a global reach, and Kristy’s expertise and leadership will be instrumental in driving our continued growth and prosperity.”
…
Micron has launched 16Gb DDR5 memory with speeds of up to 7,200 MTps, made with its 1β (1-beta) process node technology. The company claims its 1β-based DDR5 memory, with advanced high-k CMOS device technology, four-phase clocking, and clock sync, provides up to a 50 percent performance uplift and a 33 percent improvement in performance per watt over the previous generation. The new 1β DDR5 DRAM product line offers current module densities at speeds ranging from 4,800 MTps up to 7,200 MTps for use in data center and client applications.
Micron 1 beta process DDR5
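As a rough sense-check of what those transfer rates mean, a standard DDR5 channel is 64 bits (8 bytes) wide, so peak per-channel bandwidth is simply the transfer rate multiplied by eight bytes. The sketch below is back-of-envelope arithmetic on the figures above, not a Micron specification.

```python
# Back-of-envelope DDR5 bandwidth: transfer rate x 8-byte channel width.
CHANNEL_WIDTH_BYTES = 8  # a standard DDR5 channel is 64 bits wide

for mtps in (4800, 7200):
    peak_mb_per_s = mtps * CHANNEL_WIDTH_BYTES
    print(f"{mtps} MTps -> ~{peak_mb_per_s / 1000:.1f} GBps peak per channel")
# 4,800 MTps -> ~38.4 GBps; 7,200 MTps -> ~57.6 GBps
```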
…
Quantum has announced new DXi-Series Backup Appliance bundles; DXi being its deduplicating backup appliance target system. They have built-in replication and newly announced DXi Cloud Share tiering. DXi appliances may be deployed across edge sites, central data centers, and the public cloud. DXi Edge-Core-Cloud Bundles are now available with all the components customers need to deploy across their enterprise.
They include pre-configured physical and virtual appliances and are available in four standard capacity sizes — Small, Medium, Large and Extra Large — in support of multiple edge locations, central data centers, and cloud-based archiving targets. Logical capacities range from 400 terabytes up to 228 petabytes. The new DXi Edge-Core-Cloud bundles are available immediately. DXi Cloud Share is available as part of the DXi 4.9 software release, planned for release in December 2023.
…
Samsung and SK hynix will be allowed to supply US chip equipment to their China factories indefinitely without separate US approvals, according to Reuters. The US has extended existing waivers. Samsung Electronics makes about 40 percent of its NAND flash chips at its plant in Xian, China, while SK hynix makes about 40 percent of its DRAM chips in Wuxi and 20 percent of its NAND flash chips in Dalian.
…
Digitimes reports that both Samsung and SK hynix expect strong growth in HBM shipments due to the surge in generative AI and the corresponding need for GPUs, which use this form of memory. Samsung will likely complete a capacity expansion investment in the third quarter of 2024, which should boost output of HBM3 and later HBM generations. Omdia forecasts a CAGR of at least 40 percent for HBM from now until 2028.
…
SIOS Technology announced that MailHippo – which enables healthcare providers to send and receive encrypted, HIPAA-compliant emails and to collect medical data using HIPAA-compliant web forms – is using SIOS DataKeeper Cluster Edition to protect its critical secure email platform from downtime and disasters.
…
Sony has officially licensed the Seagate Game Drive PS5 NVMe SSD for the PlayStation 5. It delivers sequential read speeds of up to 7,300MBps, using a Seagate-validated E18 controller (made by Phison, we think) with 3D TLC NAND. The drive is available in capacities up to 2TB, with a 1.8 million-hour MTBF rating and endurance of up to 2,550 TBW. It has a five-year limited warranty and is priced at $99.99 (1TB) and $149.99 (2TB). Gamers can play PlayStation 5 games directly from the drive.
Seagate PS5 drive
…
UK web hosting business Krystal is using StorPool software-defined storage and NVMe storage clusters for its Katapult virtual infrastructure platform, due to its performance, scalability, and reliability. Founded in 2002, Krystal says it has nearly 30,000 clients who host over 200,000 websites. Read a case study here.
…
My Truong
Stravito, the Swedish SaaS startup founded in 2017 that provides a central place to store and analyze market research data, has appointed My Truong as VP of marketing. It says this follows a period of rapid growth in the US market – as of September 2023, leading US consumer goods brands in food, beverage, furniture, and telecommunications account for almost 40 percent of Stravito’s business. Stravito works with brands including McDonald’s, Comcast, Burberry, and Danone, and is poised to extend its market reach further, with Truong leading its marketing initiatives worldwide. Truong has been part of teams behind successful exits including Endicia (Newell Rubbermaid), VerticalResponse (Deluxe), Demandforce (Intuit), Nexmo (Vonage), and Adyen (ADYEN). Prior to joining Stravito, he served as CMO of SaaS company Surfly.
…
Version 10.10 of NAKIVO Backup & Replication is now available for download. It adds a beta version of real-time replication for VMware. Organizations with virtualized environments can now create replicas of their VMware vSphere VMs and keep them updated with the source VM as changes are made. Replicas are updated in real time, as frequently as every second, allowing for near-zero application downtime and minimal data loss in the event of a disaster. Get a datasheet here.
…
EE Times China reports that Yingren Technology (aka InnoGrit) has announced its enterprise-level YR S900 PCIe 5.0 SSD controller, China’s first in-country PCIe 5.0 SSD controller. It uses a RISC-V processor, has 16 or 18 NAND channels, and is said to suit NAND from Yangtze Memory Technologies Corp (YMTC).
Spotting an opportunity to expand its customer base, cyber resilience supplier Rubrik has come up with an MSP service.
Rubrik provides backup and security products through the Rubrik Security Cloud. It says it is offering MSPs turnkey Cyber Resiliency-as-a-Service (CRaaS) – a zero-trust platform with cyber resiliency that scales from mid-market to enterprise businesses – which they can sell on to customers.
Ghazal Asif
Ghazal Asif, Rubrik’s VP of worldwide channels and alliances, says this gives “existing and new Rubrik MSP partners the ability to realize operational efficiency and manage multiple customers in a fully managed or co-managed environment with unique service levels with customizable plans, and gain detailed visibility across customer accounts in real time.”
She says the product addresses three specific IT challenges for businesses:
How can I recover data quickly and effectively to a known point in time that is safe?
Given the ever-increasing risk associated with ransomware and the associated cost pressures of driving efficiency, how can I achieve best practices without being a best practitioner?
How can I be sure what data has been exfiltrated, and do I have to worry about any of it being sensitive?
This third point has special resonance for Rubrik as in March this year the vendor itself suffered an internal data breach through the Fortra file transfer service. It is working with Zscaler to stop sensitive content files being exported outside an organization’s IT boundaries.
As more businesses are turning to MSPs, the MSPs face these challenges too. Asif claims the Rubrik MSP service will:
Reduce operational complexity by offering fully managed or co-managed services with a multi-tenant platform providing real-time visibility into customers’ data health, with customizable MSP service plans.
Enable MSPs to make cybersecurity their customers’ priority, with a platform that meets the industry’s most stringent security and compliance requirements for cyber resiliency.
Improve time to revenue, with MSPs able to onboard and start protecting customers quickly and efficiently.
Provide flexibility and simplicity with on-premises, cloud workload, and SaaS protection, less upfront expense, scalability, and an operational pricing model that ensures flexibility, predictability, controlled costs, and scaling.
Rubrik has MSP-as-a-Service Consultative Enablement to help MSPs get on board, and says it incentivizes its field sales force to sell with its MSPs, rather than against them.
The Rubrik MSP offering is immediately available globally. MSPs interested in joining can submit a Partner Onboarding Request on the Rubrik website.
File-based collaborator Egnyte has set up a secure Virtual Data Room (VDR) service for sharing sensitive documents with third parties, with the idea of removing the need for MOVEit-like file transfer services.
A VDR is an online vault for document sharing and access, with actions such as copying, printing, and forwarding disabled for its content. It is generally a standalone service and used as part of a due diligence process before a business acquisition, merger, or IPO so that sensitive business documents can be shared between the parties involved. Egnyte claims VDRs can be more secure than actually moving physical documents as loss in transit or accidental destruction does not happen.
Rajesh Ram, Egnyte co-founder and chief growth officer, said: “Virtual Data Rooms arose to fill a gap in the market around a niche set of highly sensitive, external file-sharing use cases. With this latest release, we have closed that gap so that our customers who might otherwise have needed a standalone VDR can now get everything they need through a single, intuitive content platform. The result is less complexity and friction for the end user, and ultimately less data sprawl and risk.”
Egnyte’s VDR, called Egnyte Document Room, includes:
Security with encryption, multi-factor authentication, Single Sign On, Active Directory integration, audit trails, and anomalous behavior detection.
Granular Access Control, with permissions definable for different users, limiting who can view, download, print, or edit particular documents.
Reporting and Analytics for monitoring user behavior and document access patterns, and notifications when someone accesses or makes changes to files.
Customization to match each customer’s branding needs and transaction type.
It says customers can manage, deploy, and monitor data rooms to streamline the sharing of large volumes of sensitive documents between parties involved in M&A transactions and due diligence processes, legal proceedings, private equity asset purchases, property sales and lease agreements, compliance and audit processes, IP licensing and research collaboration, and more.
The Egnyte Document Room is the business’s third purpose-built secure enclave product, joining previous data enclaves to address Good “x” Practice (GxP), and Cybersecurity Maturity Model Certification (CMMC) regulations.
Comment
Third-party file transfer services such as Fortra and MOVEit have been breached by malware exposing dozens of organizations to file copying and exfiltration. A VDR prevents the physical transmission of files across a network and removes that threat.
If the file transfer is part of an automated workflow, migrating it to a VDR alternative would require customization to include VDR file availability and access in the workflow procedure. More info on the platform here.
Data intelligence company Alation’s ALLIE AI co-pilot is now in public preview. The concept is pitched at AI engineers, data analysts and data stewards looking to increase productivity. An Analytics Cloud providing a visual map of an organization’s data usage is also in the offing.
Alation’s software catalogs or maps a customer’s data, reading metadata from sources such as file systems and databases via a host of connectors. It provides intelligence about the types of data, its location and properties, helps feed it to upstack suppliers such as Databricks and Snowflake, and also provides data governance facilities. Alation was founded in 2012 and has pulled in $340 million in funding. HPE contributed to its latest and largest round of $123 million last year.
Alation CTO Junaid Saiyed said in a statement: “A data-driven organization can quickly address strategic questions. However, as data volumes grow, locating, understanding, and trusting the data becomes increasingly challenging. This challenge becomes more pronounced when businesses invest in data initiatives like generative AI.”
“Such projects demand extensive data stores to operate as intended. … With ALLIE AI integrated into our data intelligence platform, data teams can expedite the discovery of relevant data, gain insights into the lineage of their AI models, and effectively manage business outcomes on a larger scale.”
ALLIE AI builds upon Alation’s machine learning (ML) capabilities so customers can use Alation more effectively, saving time, scaling data programs faster, and advancing AI initiatives. Its intelligent curation capabilities enable customers to accelerate the population of their data catalog by automatically documenting new data assets and suggesting appropriate data stewards. ALLIE AI’s intelligent search and SQL auto-generation let analysts find the data they need without specialist data analyst skills, without knowing the names of the underlying datasets, and without writing SQL.
Jonathan Bruce, Alation VP of Product Management, said in a statement: “Alation Analytics Cloud shows customers their data in a way that hasn’t been accessible before – targeted at data analysts and stewards who deliver data catalog adoption and support usage analytics at their organizations.”
Alation says its Analytics Cloud enables customers to:
Measure their data culture maturity in terms of data leadership, data search and discovery, data literacy, and data governance
Score data initiatives across metrics such as total assets curated, total active users, and more
Build a visual map of data consumption: see the effectiveness of individual data products by tracking usage with reports showing which queries are being run by which users against which data stores – plus details on the total execution time of database queries highlighting areas for optimization.
This is large enterprise-class stuff. The Data Management Association (DAMA) is a global community of data management professionals organised around local chapters, such as in New England or the UK. There is a DAMA data maturity model which has 120 questions and can take hours to complete. Alation’s data maturity assessment is shorter and quicker to run but supports the notions of data culture and data maturity.
One aspect of all this is that it provides numerical ratings of qualitative activities and enables numbers-led managers to measure and monitor progress. Alation says its Analytics Cloud provides a framework to articulate the business value of management data initiatives, with customers able to check out the ROI of data initiatives.
It says: “Query Log Ingestion technology enables leaders to understand which data sources are most frequently used and which teams are running which queries. This enables data leaders to map data consumption across their entire enterprise.”
“These insights can then be used to measure the effectiveness of different data programs and, in turn, assess the maturity of an organization’s data culture.”
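To picture what that query-log rollup looks like in practice, here is a short, generic sketch that groups logged queries by user and data store, counting them and summing execution time. It is not Alation’s implementation, and the record fields are invented for illustration.

```python
from collections import defaultdict

# Hypothetical query-log records; the field names are illustrative,
# not Alation's actual schema.
query_log = [
    {"user": "ana", "datastore": "snowflake_sales", "exec_ms": 1200},
    {"user": "ana", "datastore": "snowflake_sales", "exec_ms": 800},
    {"user": "raj", "datastore": "databricks_ops", "exec_ms": 4500},
]

usage = defaultdict(lambda: {"queries": 0, "exec_ms": 0})
for rec in query_log:
    key = (rec["user"], rec["datastore"])
    usage[key]["queries"] += 1
    usage[key]["exec_ms"] += rec["exec_ms"]

# The rollup shows which users hit which data stores and where total query
# execution time (an optimization target) is concentrated.
for (user, datastore), stats in sorted(usage.items()):
    print(f"{user} on {datastore}: {stats['queries']} queries, {stats['exec_ms']} ms total")
```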
Maxine Geltmeier, Data Governance manager at First United Bank & Trust, is quoted in Alation’s announcement: “Understanding and illustrating value is paramount. We need to know our data maturity level and which data programs align with key organizational priorities and drive business value. We’re excited to have access to a tool that helps mature and evolve our data culture and will serve as a mechanism to prove a correlation between data initiatives and business performance. Ultimately, this empowers us to act as responsible custodians of the data we’ve been entrusted with.”
This is all a long way up the stack from storage’s foundational bits, bytes, disks, and SSDs, but very real for large enterprises and organizations.
Alation Analytics Cloud is available today and you can find out more here. The company has a downloadable white paper, “The Alation Data Culture Maturity Model: The Path to Data Excellence,” which you can access here. ALLIE AI is in public preview and is expected to be generally available in H1 2024. To learn more about ALLIE AI, click here.
Mainstream SaaS app data protector OwnBackup is renaming itself the Own Company and putting out a Discover product that finds and analyzes historical SaaS backup data.
The Israeli startup, founded in 2012, initially backed up customer data in the Salesforce SaaS application. This was successful and it attracted significant VC funding, including from Salesforce, with investment rounds annually from 2016 to 2021 totalling more than $500 million and giving it unicorn status. At that point it protected Sage Business Cloud Financials, Veeva (life sciences) and nCino (financial data) as well as Salesforce. Microsoft Dynamics 365 CRM and Power platform protection support was added in August 2021 and ServiceNow support came in June last year.
CTO Adrian Kunzle said in a statement: “For the first time, customers can easily access all of their historical SaaS data to understand their businesses better, and I’m excited to see our customers unleash the potential of their backups and activate their data as a strategic asset.”
Own says it’s committed to helping customers ensure the availability, compliance, and security of mission-critical data, and being able to use that data in their business.
CEO Sam Guttman said: “When we established the Company eight years ago, we were primarily a backup and recovery product. Today, we are a full service SaaS data platform, and as our company has evolved, so have our product offerings and brand identity. The new name reflects the fact that we empower our customers to own their own data, unlocking more ways to transform their businesses.”
Own Discover enables customers to look into their backed up SaaS data, access it in a self-service way with output in a time-series format and use it to:
Analyze their historical SaaS data to identify trends and uncover activity patterns,
Train machine learning models,
Integrate SaaS data with external systems while maintaining security and governance.
Own says Discover “provides zero-ETL to reduce the need for data engineering and development resources while also eliminating the overhead of building a data warehouse.”
Some Own Discover use cases.
Comment
The number of SaaS app data protection suppliers is increasing. While some are intent on increasing the number of SaaS apps they protect by developing or obtaining more connectors – think HYCU and Asigra for example – Own is not going down that route. Instead, it’s keeping its focus on mission-critical SaaS apps and wants its customers to be able to use the data that it accumulates for historical analysis.
An Own spokesperson told us: “Our strategy is to protect and activate our customers’ critical SaaS data. We intend to keep innovating, bringing new solutions to our existing ecosystems, as well as expand into new ecosystems.”
Rockport Networks, rebranded as Cerio, is to roll out composable datacenter IT infrastructure that goes beyond a single PCIe domain.
Startup Rockport has developed switchless and flat network technology for interconnecting servers, storage, and clients, with each endpoint effectively being its own switch. It announced initial product availability in October last year when it came out of stealth mode. Phil Harris became CEO in June that year as investors put more money into the business. Now he’s rebranded the company and its technology as Cerio and says it delivers new scale economics for AI and cloud datacenters.
Harris said: “Every major inflection point in computing is driven by the need for better economics, a better operational model – and now, greater sustainability. For the past couple of decades, we’ve been limited in how we build systems. No longer bound by a single PCIe domain, our customers can compose resources from anywhere in the datacenter to any system.”
The idea is that IT resources such as CPUs, GPUs, NVMe storage drives, DPUs, FPGAs, and other accelerators can be composed into workload-dedicated systems with the right amount of each resource applied to the workload, and then returned to a resource pool for subsequent reallocation when the workload completes. Composability has been a long sought-after technology with large-scale datacenters the main target, but it has not become a mainstream technology, despite the efforts of HPE with Synergy, Dell EMC with its MX7000, and startups such as Liqid and GigaIO in hyperscale and HPC datacenters.
Other startups such as DriveScale and Fungible have had a brief period selling product and then been acquired with their technology removed from general availability.
Cerio is hoping its technology will appeal to hyperscale datacenter operators as it enables them to decrease resource idle time.
Harris said: “Pre-orders of the Cerio platform from hyperscaler, cloud service provider, enterprise and government organizations are a clear signal of the demand for a fundamentally new system architecture that is more commercially and environmentally sustainable.”
Cerio is working with early access customers in North America, Europe, and Asia-Pacific on the implementation of scalable GPU capacity and storage agility use cases.
Its technology is based on high-radix distributed switching, advanced multipathing, and intelligent adaptation of protocols across low-diameter network topologies – ones in which traffic passes through fewer intermediate devices, such as routers and switches, than in conventional designs. Multipathing keeps separate traffic flows across the network from interfering with one another. Radix is the number of I/O ports on a network device such as a switch or router.
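For a sense of why radix matters in low-diameter designs, the arithmetic below sketches a generic two-tier leaf-spine fabric: the higher the switch port count (radix), the more endpoints can be connected without adding switching tiers and hops. This is illustrative topology math, not a description of Cerio’s fabric.

```python
def max_endpoints_two_tier(radix: int) -> int:
    """Non-blocking two-tier leaf-spine: each leaf splits its ports evenly
    between endpoints and spine uplinks, and a spine switch can reach at
    most `radix` leaves, so endpoint count scales with radix squared."""
    leaves = radix
    endpoints_per_leaf = radix // 2
    return leaves * endpoints_per_leaf

for r in (32, 64, 128):
    print(f"radix {r:>3}: up to {max_endpoints_two_tier(r):>5} endpoints in a two-tier fabric")
```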
Dr Ryan Grant, assistant professor in the Department of Electrical and Computer Engineering at Queen’s University, Kingston, Ontario, said: “The Cerio platform is driving groundbreaking research into AI acceleration, to optimize the flow of data on a per-application basis. We’re using the unique multipathing capabilities of the Cerio fabric to optimize the precise calibrations of GPU selection, density and communications that will make traffic flows highly efficient and responsive in distributed, heterogeneous systems.” His research involves collaboration with Cerio.
Matt Williams, Cerio CTO, said: “The work we’re doing with Dr Grant and his team will help us calibrate the per-workload optimizations that will make traffic flows highly responsive for complex AI, machine learning and deep learning applications.”
Grant says Cerio’s technology involves PCIe decoupling from the underlying fabric. PCIe is a bus and not a network fabric, giving it fundamental scale limitations: “PCIe decoupling in the Cerio platform makes it possible to extend PCIe beyond the compute node – and the compute rack – to provide configurable, efficient row-scale computing that changes the economics of the datacenter.”
A Cerio white paper, Beyond the Rack: Optimizing Open Infrastructure for AI, is available here (registration only). It explains how decoupling PCIe from the underlying fabric overcomes the issues of using native PCIe in a large distributed system.
Hitachi Vantara is unifying its product portfolio as a single hybrid storage platform called Virtual Storage Platform One.
The company’s portfolio includes the VSP (Virtual Storage Platform) high-end and mid-range block arrays, HNAS file storage, VSS (software-defined storage), and the HCP (Hitachi Content Platform) for object data. These products are managed through an Ops Center product. Hitachi Vantara says the VSP One single data platform will provide a simplified experience to consume and manage block, file, object, and mainframe data, with flexible consumption as an appliance or software-defined storage across public cloud and on-premises.
Dan McConnell
According to Dan McConnell, Hitachi Vantara SVP of product management for storage and data infrastructure: “Virtual Storage Platform One marks a significant milestone with our infrastructure strategy. With a consistent data platform, we will provide businesses with the reliability and flexibility to manage their data across various storage environments without compromise.”
This is a strategic reshaping of Hitachi Vantara’s storage offerings. It involves creating a single control plane, data fabric, and data plane across block, file, object, cloud, mainframe, and software-defined storage workloads, managed by a single AI-enabled software stack. The intention is that separate block, file, object, and mainframe data silos will become unified.
VSP One will feature cloud self-service for replication and other data services, and intelligent workload management to optimize storage pools by assigning and rebalancing workloads as conditions change, without hands-on management. There will be integrated copy data management to ensure global availability and fault tolerance, without impacting performance, using replication and synchronous active storage clusters.
Getting there from here
Hitachi Vantara told us: “Starting in early 2024, new products will be brought to market under the Virtual Storage Platform One umbrella. This initially includes the new HNAS (file), VSS (Software Defined Storage, Block and Software Defined Storage, Cloud) followed later by the VSP line and HCP object. Once we have new offerings available under VSP One, the old brands will be retired in line with End-of-Life policies and support. Customers will have seamless, non-disruptive upgrades just like they do with any new product release.”
“Hitachi Ops Center will become the primary brand for infrastructure data management on the platform. Ops Center Clear Sight will become the Customer Experience Portal, giving oversight to all Hitachi Vantara infrastructure, including built-in element managers to administer the embedded software. Ops Center will still include the same software as in our AIOps software suite, allowing for new projects to be developed and launched under the Ops Center family.”
“Under Virtual Storage Platform One you will still be able to reference specific solutions. These names will be reserved for ordering, support and technical documentation only:
VSP will become “Block”
HNAS will become “File”
VSS block will become “SDS Block”
VSS Cloud will become “SDS Cloud”
“Our Integrated System Solutions known as Unified Compute Platform (UCP), for both converged and hyper-converged solutions, will continue as an integrated solution using the platform. We will focus on using the platform as the base for the development of integrated solutions that differentiate us from competitive offerings and address specific market needs.”
For more information about Hitachi Vantara’s Virtual Storage Platform One, click here. Hitachi Vantara will be publishing a Virtual Storage Platform One blog, video, and e-book. It is running a launch event titled Architecting Future Innovations With Data over October 10-11.
Pure has introduced a Disaster Recovery service and is offering financial and service operation terms that it hopes will encourage customer retention and attract fresh users.
The vendor is pledging to pay customers’ power and rack space costs for the Evergreen//One Storage-as-a-Service (STaaS) and Evergreen//Flex subscriptions. Specifically, it will be a one-time, upfront payment for the entire term of these contracts, made either directly as cash or via service credits (Evergreen//Flex), and based on fixed kilowatt-hour (kWh) and rack unit (RU) rates. The payment varies with the customer’s geographic location and contract size.
At the same time, Pure is going to bring out No Data Migration, Zero Data Loss, and Power and Space Efficiency guarantees, coupled with flexible upgrades and financing, across the Evergreen portfolio. It also announced Pure Protect//DRaaS – a Disaster Recovery as a Service – along with energy efficiency guarantees for its Evergreen portfolio, and scalable AI-powered storage services via its Pure1 management platform.
Prakash Darji, VP and GM, Digital Experience Business Unit, Pure Storage, issued a statement: “The introduction of Pure Protect //DRaaS, unique Pure1 capabilities for subscription lifecycle operations, and an industry-first sustainability commitment underscore Pure’s pledge to deliver the most secure, smart, and energy-efficient storage services required by modern businesses.”
Scott Sinclair, practice director, Enterprise Strategy Group (ESG), added: “The introduction of a Paid Power and Rack commitment stretches the limits of innovation in the antiquated enterprise storage market. The latest Evergreen enhancements successfully balances enterprise requirements to make progress towards achieving critical ESG and net zero goals using incentives, while establishing peace of mind when it comes to data loss.”
Pure’s announcement indicates it is “eliminating the growing challenges of managing rising electricity costs and rack unit space” and this “exemplifies what it means to offer a true, seamless cloud experience, on premises.” It’s launching, it claims, enterprise STaaS that aligns TCO savings and long-term efficiency goals. Pure will now pay its customers’ power and rack space costs for storage supplied through Evergreen STaaS and Flex subscriptions.
Power and rack space payment details
DRaaS
Pure Protect //DRaaS applies to any storage infrastructure. It is a consumption-based disaster recovery-as-a-service offering that provides customers with clean environments and multiple restore points to recover clean copies of their on-premises vSphere data to native AWS EC2. If the DRaaS failover is triggered by a ransomware or similar attack, it ensures data centers remain isolated for attack investigation.
Pure has also introduced consumption-based disaster recovery via Pure Protect, and a data resilience scoring system via Pure1. This offers the ability to assess entire Pure fleet configurations against leading practices.
Evergreen guarantees and more
Pure’s portfolio of guarantees and business deals is increasing. There are No Data Migration and Zero Data Loss guarantees for Evergreen//One (SLA), Evergreen//Flex, and Evergreen//Forever customers. With the Zero Data Loss guarantee, Pure assures data protection with data recovery services for any hardware or software product-related incidents, at no cost. With the No Data Migration guarantee, Pure covers technology upgrades with no data migrations. Pure says its Evergreen architecture extends equipment life up to ten years or more.
There are expanded guarantees for customers who opt to own their storage via an Evergreen//Forever subscription. A Power and Space Efficiency Guarantee has Watts per tebibyte (TiB) and TiB/Rack measures. If the guaranteed Watts/TiB or TiB/Rack is not met, Pure Storage will cover the tab.
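Checking a guarantee like that is simple arithmetic: measured power draw divided by effective capacity against the guaranteed Watts/TiB, and effective capacity divided by rack space against the guaranteed TiB/Rack. Every number in the sketch below is invented for illustration; these are not Pure’s published thresholds.

```python
# Hypothetical efficiency check; every figure below is an invented example,
# not a published Pure Storage guarantee value.
guaranteed_watts_per_tib = 1.0   # assumed guarantee threshold
guaranteed_tib_per_rack = 500.0  # assumed guarantee threshold

measured_watts = 1100.0          # measured array power draw
effective_tib = 1200.0           # effective (usable) capacity in TiB
racks = 2.0                      # rack space occupied

watts_per_tib = measured_watts / effective_tib  # ~0.92 W/TiB
tib_per_rack = effective_tib / racks            # 600 TiB per rack

print(f"{watts_per_tib:.2f} W/TiB vs guaranteed {guaranteed_watts_per_tib} W/TiB")
print(f"{tib_per_rack:.0f} TiB/rack vs guaranteed {guaranteed_tib_per_rack} TiB/rack")
if watts_per_tib > guaranteed_watts_per_tib or tib_per_rack < guaranteed_tib_per_rack:
    print("Guarantee missed: under the program, Pure covers the shortfall.")
else:
    print("Guarantee met.")
```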
Pure’s Ever Agile program includes a capacity plus controller trade-in delivered at up to 20 percent lower price than new controller costs. Its Capacity Consolidation program now includes expanded capacity trade-in credits valued at up to 50 percent.
There is an Asset Management and Genealogy service allowing customers and Pure to jointly optimize labor costs to run and operate storage. Customers get full transparency to manage Evergreen assets, contracts, subscriptions, and lifecycle, and get visibility into capacity, energy, and rack space usage.
Customers can also view how each asset or subscription has evolved over time – including software updates, ramps, expansions, and renewals – and gain insight into upcoming lifecycle events such as EOL, upgrades, or contract expiration.
Pure customers get a subscription viewer to understand when subscriptions require attention and renewal, predictive tracking of capacity utilization with actionable alerts to optimize reserve commit vs on-demand consumption, and SLA indicators to track how well Pure Storage is meeting performance and efficiency SLAs.
Policy-driven Upgrades are claimed to take the guesswork out of choosing the right Purity release and simplify fleet management.
Object storage supplier Cloudian has managed to wring 17.7GBps writes and 25GBps reads from a six-node all-flash cluster in a recent benchmark.
Cloudian said these are real-world results generated with GOSBENCH, an industry-standard benchmark that simulates real-life workloads, not an in-house benchmark tool. The servers used were single-processor nodes, each with a single, non-bonded 100Gbps Ethernet network card and four Micron 6500 ION NVMe drives.
The company supplies HyperStore object storage software, and this speed run was done with servers using AMD’s EPYC 9454 CPUs and the upcoming v8 HyperStore software.
Michael Tso
Cloudian CEO Michael Tso said in a statement: “Our customers need storage solutions that deliver extreme throughput and efficiency as they deploy Cloudian’s cloud-native object storage software in mission-critical, performance-sensitive use cases. This collaboration with AMD and Micron demonstrates that we can push the boundaries.”
AMD corporate VP for Strategic Business Development, Kumaran Siva, backed him up: “Our 4th Gen AMD EPYC processors are designed to power the most demanding workloads, and this collaboration showcases their capabilities in the context of object storage.”
CMO Jon Toor told us: “Most of our customers today are talking with us about all flash for object storage, if they’re not already there. Increased performance is a driver, especially as we move into more primary storage use cases. Efficiency is a driver also. With these results we showed a 74 percent power efficiency improvement vs an HDD-based platform, as measured by power consumed per GB transferred.”
HyperStore 8.0 incorporates multi-threading technology and kernel optimizations to capitalize on the EPYC 9454 processor, with its 48 cores and 128 PCIe lanes. The combination was then optimized for Micron’s 6500 ION 232-layer TLC SSDs, which deliver 1 million 4KB random write IOPS.
Object storage tends to scale linearly as nodes are added to a cluster, so high aggregate speeds are possible. Cloudian’s per-node performance was 2.95GBps write and 4.15GBps read.
In October 2019, OpenIO achieved 1.372Tbps throughput (171.5GBps) using an object storage grid running on 350 commodity servers. That’s 0.49GBps per server.
A month later, MinIO went past 1.4Tbps for reads using 32 AWS i3en.24xlarge instances, each with eight NVMe drives – 256 NVMe drives in total – which equates to 175GBps overall and 5.5GBps per AWS instance, outperforming Cloudian. We don’t know the NVMe drive performance numbers, but MinIO used twice as many drives per instance as Cloudian used per node. Object storage performance benchmarks are bedevilled with apples-and-oranges comparison difficulties.
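Redoing the per-node division from the aggregate figures quoted above shows where the apples-and-oranges caveat comes from:

```python
# Per-node/per-server throughput derived from the aggregate figures quoted above.
def per_node(aggregate_gbps: float, nodes: int) -> float:
    return aggregate_gbps / nodes

print(f"Cloudian write: {per_node(17.7, 6):.2f} GBps per node")       # 2.95
print(f"Cloudian read:  {per_node(25.0, 6):.2f} GBps per node")       # ~4.17 from the rounded 25GBps aggregate
print(f"OpenIO read:    {per_node(171.5, 350):.2f} GBps per server")  # ~0.49
print(f"MinIO read:     {per_node(175.0, 32):.2f} GBps per instance") # ~5.47
# None of this adjusts for drives per node, CPU, networking, or benchmark tool,
# which is why the comparisons are apples and oranges.
```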
Check out a Cloudian speed run Solution Brief here.
SPONSORED: In August 2023, Danish hosting subsidiaries CloudNordic and AzeroCloud were on the receiving end of one of the most serious ransomware attacks ever made public by a cloud services company. During the incident, CloudNordic suffered a complete encryption wipe-out that took with it applications, email services, websites, and databases, and even backup and replication servers. In a memorably frank admission, the company said that all customer data had been lost and would not be recoverable.
To the hundreds of companies Danish media reported as having lost data in the incident, this must have sounded incredible. Surely service providers are supposed to offer protection, not even greater vulnerability? Things were so bad, CloudNordic even offered customers last resort instructions on recovering lost website content through the Wayback Machine digital archive. The company reportedly refused to pay a ransom demanded by the attackers but even if it had paid there is no guarantee it would have made any difference.
Ransomware attacks are a dime a dozen these days and the root causes are various. But the assumption every customer makes is that behind a service provider’s virtual machine (VM) infrastructure is a comprehensive data protection and disaster recovery (DR) plan. Despite the common knowledge that ransomware targets backup and recovery systems, there is still a widespread belief that the same protections will always ride to the rescue and avoid catastrophic data loss. The CloudNordic attack is a warning that this isn’t always the case. Doubtless both companies had backup and data protection in place, but it hadn’t been enough.
“The attack and its outcome is not that extraordinary,” argues Kevin Cole, global director for technical product marketing at Zerto, a Hewlett Packard Enterprise company. “This probably happens more than we know. What’s unusual about this incident is simply that the service provider was open about the fact their backups had been attacked and deleted.”
This is what ransomware has done to organizations across the land. Events once seen as extreme and unusual have become commonplace. Nothing feels safe. Traditional assumptions about backup and data resilience are taking a battering. The answer should be more rapid detection and response, but what does this mean in practice?
The backup illusion
When responding to a ransomware attack, time is of the essence. First, the scale and nature of the incursion must be assessed as rapidly as possible while locating its source to avoid reinfection. Once this is within reach, the priority in time-sensitive industries is to bring multiple VM systems back online as soon as possible. Too often, organizations lack the tools to manage these processes at scale or are using tools that were never designed to cope with such an extreme scenario.
What they then fall back on is a mishmash of technologies, the most important of which is backup. The holes in this approach are well documented. Relying on backup assumes attackers haven’t deactivated backup routines, which in many real-world incidents they manage to do quite easily. That leaves offline and immutable backup, but these files are often old, which means that more recent data is lost. Even getting that far takes possibly days or weeks of time and effort.
Unable to contemplate a long delay, some businesses feel they have no option but to risk paying the ransom in the hope of rescuing their systems and data within a reasonable timescale. Cole draws a distinction between organizations that pay ransoms for strategic or life and death reasons – for example healthcare – and those who pay because they lack a well-defined strategy for what happens in the aftermath of a serious attack.
“Organizations thought they could recover quickly only to discover that they are not able to recover within the expected time window,” he explains. “They pay because they think it’s going to ease the pain of a longer shutdown.”
But even this approach is still a gamble that the attackers will hand back more data than will be recovered using in-house backup and recovery systems, Cole points out. In many cases, backup routines were set up but not properly stress tested. Under real-world conditions, poorly designed backup will usually fall short as evidenced by the number of victims that end up paying.
“Backup was designed for a different use case and it’s not really ideal for protecting against ransomware,” he says. “What organizations should invest in is proper cyber recovery and disaster recovery.”
In the end, backup falls short because even when it works as advertised the timescale can be hugely disruptive.
Achieving ransomware resilience
It was feedback from customers using Zerto to recover from ransomware that encouraged the company to add new features tailored to this use case. The foundation of the Zerto offering is its continuous data protection (CDP) technology, combining replication with unique journaling, which reached version 10 earlier this year. Ransomware resilience is an increasingly important part of this suite, as evidenced by version 10’s addition of a real-time anomaly detection system that can spot data being maliciously encrypted.
Intercepting ransomware encryption early not only limits its spread but makes it possible to work out which volumes or VMs are bad and when they were infected, so that they can be quickly rolled back to any one of thousands of clean restore points.
“It’s anomaly and pattern analysis. We analyze server I/O on a per-volume basis to get an idea of what the baseline is at the level of virtual machines, applications and data,” explains Cole. “Two algorithms are used to assess whether something unusual is going on that deviates from this normal state.”
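Cole does not detail the two algorithms, but the general idea – learn a per-volume I/O baseline and flag strong deviations – can be illustrated with a toy rolling z-score detector. The sketch below is a generic illustration of that approach, not Zerto’s actual detection logic.

```python
from collections import deque
from statistics import mean, stdev

class VolumeBaseline:
    """Toy per-volume anomaly detector: keep a rolling window of write-rate
    samples and flag samples that deviate strongly from the learned baseline.
    Purely illustrative; Zerto's real algorithms are not public."""

    def __init__(self, window: int = 60, threshold: float = 4.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, write_mbps: float) -> bool:
        anomalous = False
        if len(self.samples) >= 10:
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(write_mbps - mu) / sigma > self.threshold:
                anomalous = True  # e.g. a burst of whole-volume re-encryption
        self.samples.append(write_mbps)
        return anomalous

detector = VolumeBaseline()
for rate in [20, 22, 19, 21, 20, 23, 22, 21, 20, 22, 400]:  # final sample spikes
    if detector.observe(rate):
        print(f"anomaly: {rate} MBps deviates from the volume's baseline")
```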
An important element of this is that Zerto is agentless which means there is no software process for attackers to disable in order to stop backup and replication from happening behind the victim’s back.
“It sounds small but it’s a really big advantage,” says Cole. “Many ransomware variants scan for a list of backup and security agents, disabling any they find protecting a VM. That’s why relying on a backup agent represents a potential weakness.”
A second advanced feature is the Zerto Cyber Resilience Vault, a fully isolated and air-gapped solution designed to cope with the most serious attacks, where ransomware has infected the main production and backup infrastructure. Zerto stresses that this offers no point of compromise to attackers – replication from production via a ‘landing zone’ CDP mirror happens periodically over a FIPS-validated encrypted replication port rather than through a management interface, which might expose the Vault to compromise.
The possibility of a total compromise sounds extreme, but Cole points out that the use of this architecture is being mandated for financial services by the SEC in the U.S., and elsewhere by a growing number of cyber-insurance policies. The idea informing regulators is that organizations should avoid the possibility of a single point of failure.
“If everything blows up, do you have copies that are untouchable by threat actors and which they don’t even know exist?” posits Cole. “In the case of the Cyber Resilience Vault, it’s not even part of the network. In addition, the Vault also keeps the Zerto solution itself protected – data protection for the data protection system.”
Ransomware rewind
The perils of using backup as a shield against ransomware disruption are underscored by the experience of TenCate Protective Fabrics. In 2013, this specialist manufacturer of textiles had multiple servers at one of its manufacturing plants encrypted by the infamous CryptoLocker ransomware. This being the early days of industrial ransomware, the crippling power of mass encryption would have been a shock. TenCate had backups in place but lost 12 hours of data and was forced to ship much of its salvageable data to a third party for slow reconstruction. In the end, it took a fortnight to get back up and running.
In 2020, a different version of CryptoLocker returned for a second bite at the company, this time with very different results. By now, TenCate was using Zerto. After realizing that one of its VMs had been infected, the security team simply reverted it to a restore checkpoint prior to the infection. Thanks to Zerto’s CDP, the total data loss was a mere ten seconds and the VM was brought back up within minutes.
According to Cole, TenCate’s experience shows how important it is to invest in a CDP that can offer a large number of recovery points across thousands of VMs with support for multi-cloud.
“Combined with encryption detection, this means you can quickly roll back, iterating through recovery points that might be only seconds apart until you find one that’s not compromised.”
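Conceptually, that rollback workflow is a search backwards through closely spaced journal checkpoints for the most recent one not flagged as encrypted. A minimal, hypothetical sketch, assuming a simple checkpoint record populated by the encryption detector:

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    timestamp: float         # seconds since epoch
    flagged_encrypted: bool  # set by the encryption/anomaly detector

def find_last_clean(checkpoints: list) -> Checkpoint | None:
    """Walk backwards from the newest journal checkpoint (checkpoints may be
    only seconds apart) and return the most recent one not flagged as
    encrypted. Illustrative only; not Zerto's recovery API."""
    for cp in sorted(checkpoints, key=lambda c: c.timestamp, reverse=True):
        if not cp.flagged_encrypted:
            return cp
    return None

# 60 checkpoints one second apart; encryption detected in the last three.
journal = [Checkpoint(1000.0 + i, flagged_encrypted=(i >= 57)) for i in range(60)]
clean = find_last_clean(journal)
print(f"roll the VM back to t={clean.timestamp}")  # last checkpoint before encryption began
```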
While loss of service is not the only woe ransomware causes its victims, the inability to run applications and process data is where the immediate economic damage always begins. For the longest time, the only remedy was to keep the attackers out. But when those defenses fail as they surely will one day, it is better to fail in style, says Cole.
“The choice is not just backup as people have come to know it,” he advises. “Continuous data protection and isolated cyber recovery vaults are the future anyone can have now.”
David Goeckeler, CEO of Western Digital, has the highest disapproval rating in a survey of tech bosses by social network company TeamBlind.
The Blind social network is formed of business professionals who register through their work email to comment anonymously on their work experience, without the threat of reprisals.
Blind asked 13,171 members of its network about 103 CEOs in August: “Do you approve or disapprove of the way your CEO is handling their job?” The CEOs were then ranked in terms of their approval and disapproval ratings. There were many storage and storage-related companies in the ratings list and we have picked them out below. Overall Jensen Huang of Nvidia was the most approved CEO while Western Digital’s David Goeckeler had the strongest disapproval rating. The average CEO approval rating was 32 percent.
We have presented the disapproval ratings as negative approval ratings to make the chart below easier to read.
Huang had a 96 percent approval, 3 percent disapproval and 2 percent no opinion rating amongst survey respondents. Databricks’ Ali Ghodsi had an 83 percent approval rating, 13 percent disapproval, and 4 percent no opinion. Other highly ranked CEOs included Apple’s Tim Cook; 83 percent approve, 13 percent disapprove, while 4 percent had no opinion.
Goeckeler had a zero approval rating, 94 percent disapproval rating, and 6 percent had no opinion. Amazon’s Andy Jassy was also unpopular; 10 percent approval, 86 percent disapproval, and 5 percent no opinion. Dropbox CEO Drew Houston was lowly rated too, with only 17 percent approving, 78 percent disapproving, and 5 percent having no opinion.
Blind noted: “Top bosses with the highest CEO approval ratings have generally led their companies to higher valuations in 2023… Nvidia shares now trade more than three times higher (+208 percent year-to-date) than they did at the beginning of the year under Huang’s leadership.
“Perceived job security may be one of the most significant factors in determining a chief executive’s approval among employees… some chief executives with the worst CEO approval ratings had cut jobs in 2023, including some of this year’s highest-profile workforce reductions.”
Blind registration page
David Goeckeler
Western Digital laid off 211 staff in the Bay Area in June this year and cut 60 jobs in Israel in May. Goeckeler’s compensation was $32 million in 2022 while the average Western Digital employee earned $108,524. He became CEO in May 2020. We have asked Western Digital if it has any comment about this survey.
Bootnote
Blind survey respondents could answer “strongly approve,” “somewhat approve,” “somewhat disapprove,” “strongly disapprove” or “no opinion.” The approval rating is the sum of “strongly approve” and “somewhat approve” responses. In other words, a CEO with a 0 percent approval rating indicates no employee answered “strongly approve” or “somewhat approve” in the survey.