NetApp has said it will provide storage as a service through its Keystone subscription offering, with arrays deployed in Equinix colocation centers in an Equinix Metal arrangement.
Equinix Metal comprises physical servers and arrays made available to customers as a service. Equinix has offered Pure arrays this way since 2021. Equinix colocation centers are located near public cloud datacenters for fast interconnection. Keystone is NetApp’s storage subscription service. NetApp Storage on Equinix Metal combines Equinix’s colos with NetApp arrays and Keystone billing so customers can store and access data on NetApp arrays without installing, managing and operating the necessary infrastructure in their own datacenters.
NetApp’s Sandeep Singh, SVP and GM for enterprise storage, said organizations can “move more workload types to the hybrid multicloud more easily and in greater numbers than ever before,” and claimed NetApp was a “leader in delivering native cloud and first-party solutions on all three major public clouds that provide a unified hybrid multi cloud experience for customers.”
The NetApp-Equinix deal provides a single-tenant environment, built to meet customer capacity and performance requirements, delivered as part of a compute, network and storage (file, block, and object) stack to run customer applications, under a single contract and support mechanism. There is an automated Equinix provisioning platform and customers can scale storage resources up and down as needs change.
NetApp says that, when its storage is combined with Equinix Metal’s high-performance bare metal servers, users experience fast data access, low latency, and the ability to handle the most demanding workloads, including AI.
NetApp Storage on Equinix Metal features
It is a hybrid multicloud offering because Equinix Metal colos have connectivity to all the major public cloud environments. This means NetApp cloud services such as Cloud Volumes ONTAP, and NetApp first-party services – Amazon FSx for NetApp ONTAP, Azure NetApp Files, and Google Cloud NetApp Volumes – can be added to NetApp Storage on Equinix Metal.
Equinix had a total of 236 International Business Exchange (IBX) colo datacenters located around the world at the end of last year, and now 26 of them are Metal locations.
Equinix Metal datacenter locations
NetApp Storage on Equinix Metal is a first-party service offered through Equinix. Those interested can look at the NetApp Storage on Equinix Metal web page.
Quietly and behind the scenes, startup Index Engines has notched up Dell, IBM, and Infinidat as OEMs for its ransomware-detecting CyberSense technology.
Index Engines was started up in 2003 in Holmdel, New Jersey, by CEO Tim Williams, a former Bell Labs engineer. In July 2021 we wrote about Williams’ background in founding startups and selling them, saying: “Index Engines was started up a month after he left Tacit (Networks), and its software provides searchable indexes of primary and secondary stored data. There is no VC funding whatsoever in this now 18-year-old and profitable business. It sells to relatively few OEM customers and has little need to market its products to the wider enterprise world.”
It has just recruited two senior execs – Geoff Barrall as chief product officer and Tony Craythorne as chief revenue officer – and they spoke to us about the technology and go-to-market activity of Williams’ company.
In a briefing, they identified three current OEMs using the CyberSense technology:
Dell’s on-premises and in-cloud PowerProtect Cyber Recovery vault products use Index Engines’ CyberSense software, with full content indexing and searchability for ransomware activity
IBM’s Storage Defender product is also based on CyberSense software
Infinidat’s InfiniSafe Cyber Detection capability is likewise powered by CyberSense
In 2021, we noted the company had four product lines:
Catalyst – to index terabytes to petabytes of unstructured file and mail data using file metadata, and identify aged data, abandoned and active data, duplicates, large files, multimedia files, Personally Identifiable Information (PII), and more
Octane eDiscovery – to search, cut out, and archive online and offline data
Backup Catalog – for legacy backups
CyberSense – to scan indexed data using a full content analytics engine that looks inside files and databases to detect invalid data. It includes machine learning to check for malware-caused data corruption
Indexing was the core technology, and it has become highly relevant. The rising plague of ransomware has brought with it the need to recover from attacks, since prevention is practically impossible – witness the frequency and severity of attacks such as those on Caesars and MGM in Las Vegas. The MGM attack could cost the company $100 million, with entry apparently gained either by a bad actor using LinkedIn details to impersonate an employee and talk a help desk into granting access, or by compromising MGM’s Okta system.
If you can’t prevent attacks, you absolutely must be able to recover your data, or at least the vast majority of it. That’s the sweet spot Index Engines wants to hit now that cyber resilience has become a boardroom-level topic.
The technology
How does CyberSense work? Barrall said: “It’s been able to do some things that I would have thought were close to impossible… They’ve been able to reverse engineer a giant number of formats. And they’ve been able to produce technology which applies that intellectual property recursively. And that’s a hell of a challenge.
“They can take a Commvault backup, and then find a virtual machine in the Commvault backup. They open the virtual machine, because they understand the binary format, then they parse the file system in the virtual machine because they understand the binary format, then they find a Word file in the virtual machine file system. Remember, this is just a binary dump. Then they can open that file using the recursion, the word parser opens up, and then they find an Excel spreadsheet embedded inside the Word document inside the file system inside the virtual machine and they parse that… They look at the binary pattern of that and try and decide where the corruptions occurred.”
In effect, Index Engines has developed connectors into data sources such as Commvault and other backups, snapshots, file systems, databases, and VMs.
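To make that recursion concrete, here is a minimal sketch of the pattern Barrall describes – a registry of format handlers applied recursively until only leaf objects remain, which are then analyzed. The format signatures and handler registry are illustrative assumptions, not Index Engines code.

```python
# Hypothetical sketch of recursive container parsing - not Index Engines' code.
# Each handler understands one binary format and yields the objects embedded in it.
from typing import Callable, Iterator

# Registry mapping a detected format to a parser that yields (name, bytes) children.
PARSERS: dict[str, Callable[[bytes], Iterator[tuple[str, bytes]]]] = {}

def detect_format(blob: bytes) -> str | None:
    """Identify a container format from magic bytes (illustrative signatures only)."""
    if blob.startswith(b"PK\x03\x04"):
        return "zip_like"   # docx and xlsx files are zip containers
    if blob.startswith(b"KDMV"):
        return "vmdk"       # VMware virtual disk image
    return None             # not a known container: treat as a leaf object

def scan(name: str, blob: bytes, analyze: Callable[[str, bytes], None]) -> None:
    """Recursively open containers; analyze every leaf found at any depth."""
    fmt = detect_format(blob)
    if fmt is None or fmt not in PARSERS:
        analyze(name, blob)  # e.g. run corruption statistics over the raw bytes
        return
    for child_name, child_blob in PARSERS[fmt](blob):
        scan(f"{name}/{child_name}", child_blob, analyze)
```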
Craythorne said: “There are 200 different inspections that we do on every file that we open, that we then feed into machine learning… We are able to spot changes within the file that nobody else can… We can do the same … on a database. We can go down to the record and apply those 200 change algorithms, and feed that into our machine learning engine to see if it’s been corrupted.”
The company says it has reverse engineered database and other structured and unstructured data formats so that its detection engine can look inside database records, for example, and find evidence of corruption.
Barrall said: “They’ve had 20 years to build up this giant library. And most of it, obviously, was produced for indexing. But it also works great for malware detection, which is a very high priority problem right now.”
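As a toy illustration of the kind of per-file statistical inspection such an engine might feed into machine learning, consider Shannon entropy: encrypted (ransomed) bytes look near-random, while most legitimate document formats do not. The feature set below is invented for illustration and is not CyberSense internals.

```python
# Toy per-file "inspections" of the kind a corruption detector might compute.
# These features are hypothetical examples, not CyberSense's actual 200 checks.
import math
from collections import Counter

def shannon_entropy(blob: bytes) -> float:
    """Bits of entropy per byte: 0.0 for constant data, up to 8.0 for uniform random."""
    if not blob:
        return 0.0
    n = len(blob)
    return -sum((c / n) * math.log2(c / n) for c in Counter(blob).values())

def features(blob: bytes) -> dict[str, float]:
    """A few of the many per-file signals that could feed an ML classifier."""
    return {
        "entropy": shannon_entropy(blob),        # near 8.0 suggests encryption
        "printable_ratio": sum(32 <= b < 127 for b in blob) / max(len(blob), 1),
        "size_bytes": float(len(blob)),
    }

print(features(b"Quarterly report: revenue grew 12% year on year."))
```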
How much data do customers using CyberSense scan? Barrall replied: “Customers want to scan more or less everything. So we have customers with petabytes of data that we have to see. And so the product has to be clever… We pay a lot of attention to change logs and other triggers that help us find the things that have actually been modified.
“When you’re scanning that kind of data, first of all, you need parallelism… We’ve got customers with 30 or 40 servers that can engage in a giant scan. But also… you need to use a lot of tricks to be able to scan through that data.”
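A rough Python sketch of that approach – using change records to limit the scan set, then fanning the per-file work out across parallel workers – might look like the following. The changelog format and the scan_file placeholder are assumptions, not Index Engines APIs.

```python
# Sketch of change-log-driven incremental scanning with worker parallelism.
# The changelog record format and scan_file() are placeholders for illustration.
from concurrent.futures import ProcessPoolExecutor

def changed_since_last_scan(changelog: list[dict]) -> list[str]:
    """Use change records to avoid re-reading files that were never modified."""
    return [rec["path"] for rec in changelog if rec["op"] in ("create", "write")]

def scan_file(path: str) -> bool:
    """Placeholder per-file inspection; returns True if the file looks corrupted."""
    return False

def incremental_scan(changelog: list[dict], workers: int = 32) -> list[str]:
    """Scan only changed files, spread across a pool of worker processes."""
    targets = changed_since_last_scan(changelog)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        verdicts = pool.map(scan_file, targets)
    return [path for path, bad in zip(targets, verdicts) if bad]
```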
OEMs and profile raising – ‘no plans on going direct’
Craythorne said the company’s revenues were in the tens of millions of dollars. “We’re profitable, we’re cash-flow positive… The company is in really good shape through just a very small number of customers right now. But we are expanding that pretty quickly at the moment. It’s really exciting.”
He was brought on board partly to help raise Index Engines’ profile. “One of the reasons I’ve been hired is nobody knows who we are, that we are the ransomware detection engine behind these products. We do want to promote our own brand a little bit.”
Craythorne said the product is customized quite heavily for each OEM and is not a generic software component.
He said of IBM and Infinidat: “We’re in heavy engagement with them right now, both from an engineering perspective and in sales engagement.” He’s also talking to other companies in the top right of Gartner’s magic quadrant and close to it, emphasizing: “We have no plans on going direct.”
Effectiveness
Index Engines claims its analytics and machine learning detect corruption with 99.5 percent confidence. A detection alert is issued and post-attack diagnostics identify when the attack took place, its source, the corrupted data, and the most recent clean version.
Craythorne told us: “We’ve got thousands of customers and some are very small, just scanning a few hundred terabytes, some are scanning 30 petabytes.” These are end-user customers. An OEM told Index Engines that around 30 percent of its CyberSense customers “have had a ransomware attack, and have managed to successfully rollback without any issues whatsoever. They’ve done the scan, they’ve identified the malware. They then have been able to identify where it came in, and when it came in, and so… isolate those files, and then successfully rollback without any phone call to support.” It’s self-service ransomware recovery, as he tells it.
Index Engines, he said, intends “to be the leader in the convergence of cyber resilience and storage. That’s our strategy right there.”
If it manages to notch up one or more of Hitachi Vantara, HPE, NetApp, Pure, or VAST Data as OEM customers, Index Engines could be set for substantial revenue growth.
As it projected it would back in June, ExaGrid now has over 4,000 customers following what it described as the strongest Q3 in the company’s history.
The company supplies scale-out tiered backup appliances: incoming data is written as-is to a so-called landing zone for fast ingest and fast restores of recent backups, then deduplicated later for capacity-efficient, longer-term retention.
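The landing-zone pattern can be sketched in a few lines: ingest writes backups as-is for fast access, and a background pass later chunks and dedupes them into a content-addressed store, from which older backups are rehydrated on restore. The chunking and hashing choices here are illustrative, not ExaGrid’s implementation.

```python
# Minimal sketch of post-process ("landing zone") deduplication - illustrative only.
import hashlib

LANDING_ZONE: dict[str, bytes] = {}   # recent backups, kept whole for fast restores
CHUNK_STORE: dict[str, bytes] = {}    # hash -> unique chunk, stored once
RECIPES: dict[str, list[str]] = {}    # backup name -> ordered list of chunk hashes

def ingest(name: str, data: bytes) -> None:
    """Fast path: land the backup as-is, with no inline dedup work."""
    LANDING_ZONE[name] = data

def dedupe_later(name: str, chunk_size: int = 4096) -> None:
    """Background pass: split into chunks and keep only one copy of each."""
    data = LANDING_ZONE.pop(name)
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        CHUNK_STORE.setdefault(digest, chunk)   # duplicate chunks cost nothing extra
        recipe.append(digest)
    RECIPES[name] = recipe

def restore(name: str) -> bytes:
    """Recent backups come straight from the landing zone; older ones are rehydrated."""
    if name in LANDING_ZONE:
        return LANDING_ZONE[name]
    return b"".join(CHUNK_STORE[digest] for digest in RECIPES[name])
```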
Bill Andrews
CEO Bill Andrews said: “ExaGrid prides itself on having a highly differentiated product that just works, does what we say it does, is sized properly, is well supported, and just gets the job done. We can back up these claims with our 95 percent net customer retention, NPS score of +81, and the fact that 94 percent of our customers have our Retention Time-Lock for Ransomware Recovery feature turned on, and 99.1 percent of our customers are on our yearly maintenance and support plan.”
This is ExaGrid’s 11th growth quarter in a row with an average 20 percent revenue growth rate. In summary:
Cash, free cash flow, EBITDA: All positive
New customers added: 130
Customer count: >4,000
7-figure deals: 2
ExaGrid competes with Dell’s market-leading PowerProtect backup appliances, and with other suppliers such as HPE (StoreOnce), Quantum (DXi), and Veritas (NetBackup Flex). All of these dedupe on data ingest, so they have slower data load rates and slower retrieval of recent backups, as all data has to be rehydrated (the deduplication reversed).
The company also competes with all-flash, direct-to-object backup stores such as Pure’s FlashBlade, Object First, and VAST Data. The bulk of its competition is against disk-based Dell, Veritas, HPE, and Quantum, which would need to re-engineer their software to match ExaGrid’s non-deduplicated landing zone for incoming data and its global deduplication across a cluster of appliances. So far none of them has shown any public sign of planning to do so.
They could go all flash, using QLC SSDs like VAST Data and Pure’s FlashBlade//E, to get extra ingest and restore speed, but then so could ExaGrid, which could also support object (S3) ingest. Competitors such as Dell are incrementally improving their products but, so far, not enough to derail ExaGrid’s progress.
Data protector Acronis announced its Acronis CyberApp Standard integration technology, which broadens the Acronis ecosystem to third-party vendors, offering them the means to integrate their products and services into Acronis Cyber Protect Cloud. It claims this is the only framework that allows vendors to integrate into the Acronis platform with natively incorporated vendor workloads, alerts, widgets, and reports, ensuring a unified user experience across Acronis-owned and integrated applications. Vendors interested in becoming part of the Acronis ecosystem gain access to the Vendor Portal, from which they can build applications, share app details, upload marketing materials, and publish directly to the Acronis Application Catalog. For more information, visit: https://developer.acronis.com/.
…
Data protector Commvault has announced new enterprise backup and recovery systems powered by Lenovo server hardware. There are no details of these systems available but visitors to the GITEX Dubai exhibition event can learn more at booth #H5-A40. We’ve asked for more information about this.
…
Data lakehouse supplier Databricks says easyJet is using its Lakehouse and Generative AI. The airline has been a customer of Databricks for almost a year, using it for data engineering and warehousing. It just migrated all of its data science workloads and started to migrate analytics workloads onto Databricks. Now it’s started using Lakehouse AI for generative AI work, providing a tool for non-technical users to ask their questions in natural language and get insights from its rich datasets. There’s more information here in a blog.
…
Dell’s PowerProtect Data Manager emerged as the top choice in customer satisfaction for data protection software according to a recent independent double-blind study conducted by a third-party vendor. The alternative choices included Rubrik, Cohesity, Veeam, Commvault and Veritas. A Dell blog provides more info.
…
Real-time data platform supplier GridGain has released v8.9 of its eponymous software, with integrations for Apache Parquet, Apache Iceberg, CSV, and JSON to support more complex datastores, including enterprise data lakes and NoSQL/semi-structured document databases. It adds more storage- and read-efficient management of massive data tables, with support for high-performance, ACID-compliant queries and diverse document data types, helping developers build new and more complex applications faster. GridGain v8.9 takes its distributed, colocated, memory-centric computing capabilities to NoSQL and data lake technologies, enabling faster analytics. GridGain Platform v8.9 is available now. Visit the GridGain website to download.
…
Kristy Mao
TrueNAS software supplier iXsystems has appointed Kristy Mao as its SVP of Finance. She comes from being VP of Finance & Performance Management at Siemens Digital Industries Software, where she led a global team of 90 employees overseeing strategic financial planning, FP&A, finance IT, digital transformation and enterprise risk management. Michael Lauth, President & CEO of iXsystems, said: “iXsystems has evolved into a medium-sized enterprise with a global reach, and Kristy’s expertise and leadership will be instrumental in driving our continued growth and prosperity.”
…
Micron has launched 16Gb DDR5 memory with speeds up to 7,200 MTps, made with its 1β (1-beta) process node technology. The company claims its 1β-based DDR5 memory, with advanced high-k CMOS device technology, four-phase clocking, and clock sync, provides up to a 50 percent performance uplift and a 33 percent improvement in performance per watt over the previous generation. The new 1β DDR5 DRAM product line offers current module densities at speeds ranging from 4,800 MTps up to 7,200 MTps for use in datacenter and client applications.
Micron 1 beta process DDR5
…
Quantum has announced new DXi-Series Backup Appliance bundles; DXi being its deduplicating backup appliance target system. They have built-in replication and newly announced DXi Cloud Share tiering. DXi appliances may be deployed across edge sites, central data centers, and the public cloud. DXi Edge-Core-Cloud Bundles are now available with all the components customers need to deploy across their enterprise.
They include pre-configured physical and virtual appliances and are available in four standard capacity sizes — Small, Medium, Large and Extra Large — in support of multiple edge locations, central data centers, and cloud-based archiving targets. Logical capacities range from 400 terabytes up to 228 petabytes. The new DXi Edge-Core-Cloud bundles are available immediately. DXi Cloud Share is available as part of the DXi 4.9 software release, planned for release in December 2023.
…
Samsung and SK hynix will be allowed to supply US chip equipment to their China factories indefinitely without separate US approvals, according to Reuters. The US has extended existing waivers. Samsung Electronics makes about 40% of its NAND flash chips at its plant in Xian, China, while SK hynix makes about 40% of its DRAM chips in Wuxi and 20% of its NAND flash chips in Dalian.
…
Digitimes reports that both Samsung and SK hynix are expecting strong growth in HBM shipments due to the surge in generative AI and the corresponding need for GPUs, which use this form of memory. Samsung will likely complete a capacity expansion investment in the third quarter of 2024, which should boost output of HBM3 and later HBM generations. Omdia forecasts at least a 40% CAGR for HBM between now and 2028.
…
SIOS Technology announced that MailHippo, which enables healthcare providers to send and receive encrypted HIPAA-compliant emails and to collect medical data using HIPAA-compliant web forms, is using SIOS DataKeeper Cluster Edition to protect its critical secure email platform from downtime and disasters.
…
Sony has officially licensed the Seagate Game Drive PS5 NVMe SSD for the PlayStation 5. It delivers sequential read speeds of up to 7,300 MBps, using a Seagate-validated E18 controller (made by Phison, we think) with 3D TLC NAND. The drive is available in capacities up to 2TB, with a 1.8 million-hour MTBF reliability rating and endurance of up to 2,550 TBW. It has a five-year limited warranty and is priced at $99.99 (1TB) and $149.99 (2TB). Gamers can play PlayStation 5 games directly from the drive.
Seagate PS5 drive
…
UK web hosting business Krystal is using StorPool software-defined storage and NVMe storage clusters for its Katapult virtual infrastructure platform, due to its performance, scalability, and reliability. Founded in 2002, Krystal says it has nearly 30,000 clients who host over 200,000 websites. Read a case study here.
…
My Truong
Stravito, the Swedish SaaS startup founded in 2017 that provides a central place to store and analyze market research data, has appointed My Truong as VP for marketing. It says this comes after a period of rapid growth in the US market – as of September 2023, leading US consumer goods brands in food, beverage, furniture, and telecommunications account for almost 40 percent of Stravito’s business. Stravito works with brands including McDonald’s, Comcast, Burberry, and Danone, and is poised to extend its market reach further with Truong leading its marketing initiatives worldwide. Truong has been a member of teams that achieved successful exits, including Endicia (Newell Rubbermaid), VerticalResponse (Deluxe), Demandforce (Intuit), Nexmo (Vonage), and Adyen (ADYEN). Prior to joining Stravito, he served as CMO of SaaS company Surfly.
…
Version 10.10 of NAKIVO Backup & Replication is now available for download. It adds a beta version of real-time replication for VMware. Organizations with virtualized environments can now create replicas of their VMware vSphere VMs and keep them updated with the source VM as changes are made. Replicas are updated in real time, as frequently as every second, allowing near-zero application downtime and minimal data loss in the event of a disaster. Get a datasheet here.
…
EE Times China reports Yingren Technology (aka InnoGrit) has announced its enterprise-level YR S900 PCIe 5.0 SSD controller, China’s first in-country PCIe 5.0 SSD controller. It is built around a RISC-V processor and has 16 or 18 NAND channels. It is said to suit NAND from Yangtze Memory Technologies Corp (YMTC).
Spotting an opportunity to expand its customer base, cyber resilience supplier Rubrik has come up with an MSP service.
Rubrik provides backup and security products through the Rubrik Security Cloud. It says it is offering MSPs turnkey Cyber Resiliency-as-a-Service (CRaaS) – a zero-trust platform with cyber resiliency that scales across enterprise and mid-market businesses – which they can sell on to their customers.
Ghazal Asif
Ghazal Asif, Rubrik’s VP of worldwide channels and alliances, says this gives “existing and new Rubrik MSP partners the ability to realize operational efficiency and manage multiple customers in a fully managed or co-managed environment with unique service levels with customizable plans, and gain detailed visibility across customer accounts in real time.”
She says the product resolves three specific IT challenges for businesses:
How can I recover data quickly and effectively to a known point in time that is safe?
Given the ever-increasing risk associated with ransomware and the associated cost pressures of driving efficiency, how can I achieve best practices without being a best practitioner?
How can I be sure what data has been exfiltrated, and do I have to worry about any of it being sensitive?
This third point has special resonance for Rubrik as in March this year the vendor itself suffered an internal data breach through the Fortra file transfer service. It is working with Zscaler to stop sensitive content files being exported outside an organization’s IT boundaries.
As more businesses are turning to MSPs, the MSPs face these challenges too. Asif claims the Rubrik MSP service will:
Reduce operational complexity by offering fully managed or co-managed services with a multi-tenant platform providing real-time visibility into customers’ data health, with customizable MSP service plans.
Enable MSPs to make cybersecurity their customers’ priority, with a platform that meets the industry’s most stringent security and compliance requirements for cyber resiliency.
Improve time to revenue, with MSPs able to onboard and start protecting customers quickly and efficiently.
Provide flexibility and simplicity with on-premises, cloud workload, and SaaS protection, less upfront expense, and an operational pricing model that ensures flexibility, predictability, controlled costs, and scalability.
Rubrik has MSP-as-a-Service Consultative Enablement to help MSPs get on board, and says it incentivizes its field sales force to sell with its MSPs, rather than against them.
The Rubrik MSP offering is immediately available globally. MSPs interested in joining can submit a Partner Onboarding Request on the Rubrik website.
File-based collaborator Egnyte has set up a secure Virtual Data Room (VDR) service for sharing sensitive documents with third parties, with the idea of removing the need for MOVEit-like file transfer services.
A VDR is an online vault for document sharing and access, with actions such as copying, printing, and forwarding disabled for its content. It is generally a standalone service and used as part of a due diligence process before a business acquisition, merger, or IPO so that sensitive business documents can be shared between the parties involved. Egnyte claims VDRs can be more secure than actually moving physical documents as loss in transit or accidental destruction does not happen.
Rajesh Ram, Egnyte co-founder and chief growth officer, said: “Virtual Data Rooms arose to fill a gap in the market around a niche set of highly sensitive, external file-sharing use cases. With this latest release, we have closed that gap so that our customers who might otherwise have needed a standalone VDR can now get everything they need through a single, intuitive content platform. The result is less complexity and friction for the end user, and ultimately less data sprawl and risk.”
Egnyte’s VDR, called Egnyte Document Room, includes:
Security with encryption, multi-factor authentication, Single Sign On, Active Directory integration, audit trails, and anomalous behavior detection.
Granular Access Control, with permissions definable for different users, limiting who can view, download, print, or edit particular documents.
Reporting and Analytics for monitoring user behavior and document access patterns, and notifications when someone accesses or makes changes to files.
Customization to match each customer’s branding needs and transaction type.
It says customers can manage, deploy, and monitor data rooms to streamline the sharing of large volumes of sensitive documents between parties involved in M&A transactions and due diligence processes, legal proceedings, private equity asset purchases, property sales and lease agreements, compliance and audit processes, IP licensing and research collaboration, and more.
The Egnyte Document Room is the business’s third purpose-built secure enclave product, joining previous data enclaves to address Good “x” Practice (GxP), and Cybersecurity Maturity Model Certification (CMMC) regulations.
Comment
Third-party file transfer services such as Fortra and MOVEit have been breached by malware exposing dozens of organizations to file copying and exfiltration. A VDR prevents the physical transmission of files across a network and removes that threat.
If the file transfer is part of an automated workflow, migrating it to a VDR alternative would require customization to include VDR file availability and access in the workflow procedure. More info on the platform here.
Data intelligence company Alation’s ALLIE AI co-pilot is now in public preview. The concept is pitched at AI engineers, data analysts and data stewards looking to increase productivity. An Analytics Cloud providing a visual map of an organization’s data usage is also in the offing.
Alation’s software catalogs or maps a customer’s data, reading metadata from sources such as file systems and databases via a host of connectors. It provides intelligence about the types of data, its location and properties, helps feed it to upstack suppliers such as Databricks and Snowflake, and also provides data governance facilities. Alation was founded in 2012 and has pulled in $340 million in funding. HPE contributed to its latest and largest round of $123 million last year.
Alation CTO Junaid Saiyed said in a statement: “A data-driven organization can quickly address strategic questions. However, as data volumes grow, locating, understanding, and trusting the data becomes increasingly challenging. This challenge becomes more pronounced when businesses invest in data initiatives like generative AI.”
“Such projects demand extensive data stores to operate as intended. … With ALLIE AI integrated into our data intelligence platform, data teams can expedite the discovery of relevant data, gain insights into the lineage of their AI models, and effectively manage business outcomes on a larger scale.”
ALLIE AI builds on Alation’s machine learning (ML) capabilities so customers can use Alation more effectively, saving time, scaling data programs faster, and advancing AI initiatives. Its intelligent curation capabilities let customers accelerate the population of their data catalog by automatically documenting new data assets and suggesting appropriate data stewards. ALLIE AI’s intelligent search and SQL auto-generation enable analysts to find the data they need without specialist data analyst skills, without knowing the names of the underlying datasets, and without knowing how to write SQL.
Jonathan Bruce, Alation VP of Product Management, said in a statement: “Alation Analytics Cloud shows customers their data in a way that hasn’t been accessible before – targeted at data analysts and stewards who deliver data catalog adoption and support usage analytics at their organizations.”
Alation says its Analytics Cloud enables customers to:
Measure their data culture maturity in terms of data leadership, data search and discovery, data literacy, and data governance
Score data initiatives across metrics such as total assets curated, total active users, and more
Build a visual map of data consumption: see the effectiveness of individual data products by tracking usage with reports showing which queries are being run by which users against which data stores – plus details on the total execution time of database queries highlighting areas for optimization.
This is large enterprise-class stuff. The Data Management Association (DAMA) is a global community of data management professionals organized around local chapters, such as in New England or the UK. DAMA has a data maturity model with 120 questions that can take hours to complete. Alation’s data maturity assessment is shorter and quicker to run, but supports the same notions of data culture and data maturity.
One aspect of all this is that it provides numerical ratings of qualitative activities and enables numbers-led managers to measure and monitor progress. Alation says its Analytics Cloud provides a framework to articulate the business value of management data initiatives, with customers able to check out the ROI of data initiatives.
It says: “Query Log Ingestion technology enables leaders to understand which data sources are most frequently used and which teams are running which queries. This enables data leaders to map data consumption across their entire enterprise.”
“These insights can then be used to measure the effectiveness of different data programs and, in turn, assess the maturity of an organization’s data culture.”
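A hypothetical illustration of that query-log aggregation idea – rolling raw logs up into per-source query counts, distinct users, and total execution time – could look like this. The record fields and source names are invented, not Alation’s schema.

```python
# Invented example of aggregating query logs into a per-source usage map.
from collections import defaultdict

query_log = [
    {"user": "ana", "source": "warehouse.sales", "ms": 1200},
    {"user": "ben", "source": "warehouse.sales", "ms": 300},
    {"user": "ana", "source": "lake.clickstream", "ms": 8700},
]

usage = defaultdict(lambda: {"queries": 0, "users": set(), "total_ms": 0})
for rec in query_log:
    entry = usage[rec["source"]]
    entry["queries"] += 1
    entry["users"].add(rec["user"])
    entry["total_ms"] += rec["ms"]

# Rank sources by total execution time to highlight candidates for optimization.
for source, e in sorted(usage.items(), key=lambda kv: -kv[1]["total_ms"]):
    print(f"{source}: {e['queries']} queries, {len(e['users'])} users, {e['total_ms']} ms")
```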
Maxine Geltmeier, Data Governance manager at First United Bank & Trust, is quoted in Alation’s announcement: “Understanding and illustrating value is paramount. We need to know our data maturity level and which data programs align with key organizational priorities and drive business value. We’re excited to have access to a tool that helps mature and evolve our data culture and will serve as a mechanism to prove a correlation between data initiatives and business performance. Ultimately, this empowers us to act as responsible custodians of the data we’ve been entrusted with.”
This is all a long way up from the storage technology stack’s foundation of bits, bytes, disks, and SSDs, but very real for large enterprises and organizations.
Alation Analytics Cloud is available today and you can find out more here. The company has a downloadable white paper, “The Alation Data Culture Maturity Model: The Path to Data Excellence,” which you can access here. ALLIE AI is in public preview and is expected to be generally available in H1 2024. To learn more about ALLIE AI, click here.
Mainstream SaaS app data protector OwnBackup is renaming itself the Own Company and putting out a Discover product that finds and analyzes historical SaaS backup data.
The Israeli startup, founded in 2012, initially backed up customer data in the Salesforce SaaS application. This was successful and attracted significant VC funding, including from Salesforce, with investment rounds annually from 2016 to 2021 totaling more than $500 million and giving it unicorn status. At that point it protected Sage Business Cloud Financials, Veeva (life sciences), and nCino (financial data) as well as Salesforce. Microsoft Dynamics 365 CRM and Power Platform protection support was added in August 2021, and ServiceNow support came in June last year.
CTO Adrian Kunzle said in a statement: “For the first time, customers can easily access all of their historical SaaS data to understand their businesses better, and I’m excited to see our customers unleash the potential of their backups and activate their data as a strategic asset.”
Own says it’s committed to helping customers ensure the availability, compliance, and security of mission-critical data, and being able to use that data in their business.
CEO Sam Gutmann said: “When we established the Company eight years ago, we were primarily a backup and recovery product. Today, we are a full service SaaS data platform, and as our company has evolved, so have our product offerings and brand identity. The new name reflects the fact that we empower our customers to own their own data, unlocking more ways to transform their businesses.”
Own Discover enables customers to look into their backed up SaaS data, access it in a self-service way with output in a time-series format and use it to:
Analyze their historical SaaS data to identify trends and uncover activity patterns,
Train machine learning models,
Integrate SaaS data with external systems while maintaining security and governance.
Own says Discover “provides zero-ETL to reduce the need for data engineering and development resources while also eliminating the overhead of building a data warehouse.”
Some Own Discover use cases.
Comment
The number of SaaS app data protection suppliers is increasing. While some are intent on increasing the number of SaaS apps they protect by developing or obtaining more connectors – think HYCU and Asigra for example – Own is not going down that route. Instead, it’s keeping its focus on mission-critical SaaS apps and wants its customers to be able to use the data that it accumulates for historical analysis.
An Own spokesperson told us: “Our strategy is to protect and activate our customers’ critical SaaS data. We intend to keep innovating, bringing new solutions to our existing ecosystems, as well as expand into new ecosystems.”
Rockport Networks, rebranded as Cerio, is to roll out composable datacenter IT infrastructure that goes beyond a single PCIe domain.
Startup Rockport has developed switchless and flat network technology for interconnecting servers, storage, and clients, with each endpoint effectively being its own switch. It announced initial product availability in October last year when it came out of stealth mode. Phil Harris became CEO in June that year as investors put more money into the business. Now he’s rebranded the company and its technology as Cerio and says it delivers new scale economics for AI and cloud datacenters.
Harris said: “Every major inflection point in computing is driven by the need for better economics, a better operational model – and now, greater sustainability. For the past couple of decades, we’ve been limited in how we build systems. No longer bound by a single PCIe domain, our customers can compose resources from anywhere in the datacenter to any system.”
The idea is that IT resources such as CPUs, GPUs, NVMe storage drives, DPUs, FPGAs, and other accelerators can be composed into workload-dedicated systems with the right amount of each resource applied to the workload, and then returned to a resource pool for subsequent reallocation when the workload completes. Composability has been a long sought-after technology with large-scale datacenters the main target, but it has not become a mainstream technology, despite the efforts of HPE with Synergy, Dell EMC with its MX7000, and startups such as Liqid and GigaIO in hyperscale and HPC datacenters.
Other startups such as DriveScale and Fungible have had a brief period selling product and then been acquired with their technology removed from general availability.
Cerio is hoping its technology will appeal to hyperscale datacenter operators as it enables them to decrease resource idle time.
Harris said: “Pre-orders of the Cerio platform from hyperscaler, cloud service provider, enterprise and government organizations are a clear signal of the demand for a fundamentally new system architecture that is more commercially and environmentally sustainable.”
Cerio is working with early access customers in North America, Europe, and Asia-Pacific on the implementation of scalable GPU capacity and storage agility use cases.
Its technology is based on high-radix distributed switching, advanced multipathing, and intelligent system adaptation of protocols across low-diameter network topologies – topologies with fewer intermediate nodes, such as routers and switches, than conventional designs. Multipathing keeps traffic flows across the network separate so they do not interfere with one another. Radix here means the number of IO ports on a network device such as a switch or router.
Dr Ryan Grant, assistant professor in the Department of Electrical and Computer Engineering at Queen’s University, Kingston, Ontario, said: “The Cerio platform is driving groundbreaking research into AI acceleration, to optimize the flow of data on a per-application basis. We’re using the unique multipathing capabilities of the Cerio fabric to optimize the precise calibrations of GPU selection, density and communications that will make traffic flows highly efficient and responsive in distributed, heterogeneous systems.” His research involves collaboration with Cerio. Matt Williams, Cerio CTO, said: “The work we’re doing with Dr Grant and his team will help us calibrate the per-workload optimizations that will make traffic flows highly responsive for complex AI, machine learning and deep learning applications.”
Grant says Cerio’s technology involves PCIe decoupling from the underlying fabric. PCIe is a bus and not a network fabric, giving it fundamental scale limitations: “PCIe decoupling in the Cerio platform makes it possible to extend PCIe beyond the compute node – and the compute rack – to provide configurable, efficient row-scale computing that changes the economics of the datacenter.”
A Cerio white paper, Beyond the Rack: Optimizing Open Infrastructure for AI, is available here (registration only). It explains how decoupling PCIe from the underlying fabric overcomes the issues of using native PCIe in a large distributed system.
Hitachi Vantara is unifying its product portfolio as a single hybrid storage platform called Virtual Storage Platform One.
The company’s portfolio includes the VSP (Virtual Storage Platform) high-end and mid-range block arrays, HNAS file storage, VSS (software-defined storage), and the HCP (Hitachi Content Platform) for object data. These products are managed through an Ops Center product. Hitachi Vantara says the VSP One single data platform will provide a simplified experience to consume and manage block, file, object, and mainframe data, with flexible consumption as an appliance or software-defined storage across public cloud and on-premises.
Dan McConnell
According to Dan McConnell, Hitachi Vantara SVP of product management for storage and data infrastructure: “Virtual Storage Platform One marks a significant milestone with our infrastructure strategy. With a consistent data platform, we will provide businesses with the reliability and flexibility to manage their data across various storage environments without compromise.”
This is a strategic reshaping of Hitachi Vantara’s storage offerings. It involves creating a single control plane, data fabric, and data plane across block, file, object, cloud, mainframe, and software-defined storage workloads, managed by a single AI-enabled software stack. The intention is that separate block, file, object, and mainframe data silos will become unified.
VSP One will feature cloud self-service for replication and other data services, and intelligent workload management to optimize storage pools by assigning and rebalancing workloads as conditions change, without hands-on management. There will be integrated copy data management to ensure global availability and fault tolerance, without impacting performance, using replication and synchronous active storage clusters.
Getting there from here
Hitachi Vantara told us: “Starting in early 2024, new products will be brought to market under the Virtual Storage Platform One umbrella. This initially includes the new HNAS (file), VSS (Software Defined Storage, Block and Software Defined Storage, Cloud) followed later by the VSP line and HCP object. Once we have new offerings available under VSP One, the old brands will be retired in line with End-of-Life policies and support. Customers will have seamless, non-disruptive upgrades just like they do with any new product release.”
“Hitachi Ops Center will become the primary brand for infrastructure data management on the platform. Ops Center Clear Sight will become the Customer Experience Portal, giving oversight to all Hitachi Vantara infrastructure. Including built-in element managers to administer the embedded software. Ops Center will still include the same software as in our AIOps software suite, allowing for new projects to be developed and launched under the Ops Center family.”
“Under Virtual Storage Platform One you will still be able to reference specific solutions. These names will be reserved for ordering, support and technical documentation only:
VSP will become “Block”
HNAS will become “File”
VSS block will become “SDS Block”
VSS Cloud will become “SDS Cloud”
“Our Integrated System Solutions known as Unified Compute Platform (UCP), for both converged and hyper-converged solutions, will continue as an integrated solution using the platform. We will focus on using the platform as the base for the development of integrated solutions that differentiate us from competitive offerings and address specific market needs.”
For more information about Hitachi Vantara’s Virtual Storage Platform One, click here. Hitachi Vantara will be publishing a Virtual Storage Platform One blog, video, and e-book. It is running a launch event titled Architecting Future Innovations With Data over October 10-11.
Pure has introduced a Disaster Recovery service and is offering financial and service operation terms that it hopes will encourage customer retention and attract fresh users.
The vendor is pledging to pay customers’ power and rack space costs for the Evergreen//One Storage-as-a-Service (STaaS) and Evergreen//Flex subscriptions. Specifically, it’ll be a one-time, upfront payment for the entire term of these contracts, made directly as cash or via service credits (Evergreen//Flex), based on fixed kilowatt-hour (kWh) and rack unit (RU) rates. The payment varies with the customer’s geographic location and contract size.
At the same time, Pure is going to bring out No Data Migration, Zero Data Loss, and Power and Space Efficiency guarantees, coupled with flexible upgrades and financing, across the Evergreen portfolio. It also announced Pure Protect//DRaaS – a Disaster Recovery as a Service – along with energy efficiency guarantees for its Evergreen portfolio, and scalable AI-powered storage services via its Pure1 management platform.
Prakash Darji, VP and GM, Digital Experience Business Unit, Pure Storage, issued a statement: “The introduction of Pure Protect //DRaaS, unique Pure1 capabilities for subscription lifecycle operations, and an industry-first sustainability commitment underscore Pure’s pledge to deliver the most secure, smart, and energy-efficient storage services required by modern businesses.”
Scott Sinclair, practice director, Enterprise Strategy Group (ESG), added: “The introduction of a Paid Power and Rack commitment stretches the limits of innovation in the antiquated enterprise storage market. The latest Evergreen enhancements successfully balances enterprise requirements to make progress towards achieving critical ESG and net zero goals using incentives, while establishing peace of mind when it comes to data loss.”
Pure’s announcement indicates it is “eliminating the growing challenges of managing rising electricity costs and rack unit space” and this “exemplifies what it means to offer a true, seamless cloud experience, on premises.” It’s launching, it claims, enterprise STaaS that aligns TCO savings and long-term efficiency goals. Pure will now pay its customers’ power and rack space costs for storage supplied through Evergreen STaaS and Flex subscriptions.
Power and rack space payment details
DRaaS
Pure Protect //DRaaS applies to any storage infrastructure. It is a consumption-based disaster recovery-as-a-service offering that provides customers with clean environments and multiple restore points to recover clean copies of their on-premises vSphere data to native AWS EC2. If the DRaaS instantiation is due to a ransomware or similar attack, it keeps the affected datacenters isolated for attack investigation.
Pure has also introduced consumption-based disaster recovery via Pure Protect, and a data resilience scoring system via Pure1. This offers the ability to assess entire Pure fleet configurations against leading practices.
Evergreen guarantees and more
Pure’s portfolio of guarantees and business deals is increasing. There are No Data Migration and Zero Data Loss guarantees for Evergreen//One (SLA), Evergreen//Flex, and Evergreen//Forever customers. With the Zero Data Loss guarantee, Pure assures data protection with data recovery services for any hardware or software product-related incidents, at no cost. With the No Data Migration guarantee, Pure covers technology upgrades with no data migrations. Pure says its Evergreen architecture extends equipment life up to ten years or more.
There are expanded guarantees for customers who opt to own their storage via an Evergreen//Forever subscription. A Power and Space Efficiency Guarantee has Watts per tebibyte (TiB) and TiB/Rack measures. If the guaranteed Watts/TiB or TiB/Rack is not met, Pure Storage will cover the tab.
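As a worked example of how a check against such Watts/TiB and TiB/RU thresholds might operate – all figures below are invented for illustration, not Pure’s guaranteed numbers:

```python
# Invented worked example of a power/space efficiency guarantee check.
def efficiency_check(watts: float, usable_tib: float, rack_units: int,
                     max_watts_per_tib: float, min_tib_per_ru: float) -> dict:
    watts_per_tib = watts / usable_tib
    tib_per_ru = usable_tib / rack_units
    return {
        "watts_per_tib": round(watts_per_tib, 2),
        "tib_per_ru": round(tib_per_ru, 2),
        "power_guarantee_met": watts_per_tib <= max_watts_per_tib,
        "space_guarantee_met": tib_per_ru >= min_tib_per_ru,
    }

# A hypothetical 1,300 W array exposing 800 TiB usable in 5 rack units,
# checked against made-up thresholds of 2.0 W/TiB and 100 TiB/RU:
print(efficiency_check(1300, 800, 5, max_watts_per_tib=2.0, min_tib_per_ru=100))
# -> 1.62 W/TiB and 160 TiB/RU, so both guarantees are met in this example
```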
Pure’s Ever Agile program includes a capacity plus controller trade-in delivered at up to 20 percent lower price than new controller costs. Its Capacity Consolidation program now includes expanded capacity trade-in credits valued at up to 50 percent.
There is an Asset Management and Genealogy service allowing customers and Pure to jointly optimize labor costs to run and operate storage. Customers get full transparency to manage Evergreen assets, contracts, subscriptions, and lifecycle, and get visibility into capacity, energy, and rack space usage.
Customers can also view how each asset or subscription has evolved over time – including software updates, ramps, expansions, and renewals – and gain insight into upcoming lifecycle events such as EOL, upgrades, or contract expiration.
Pure customers get a subscription viewer to understand when subscriptions require attention and renewal, predictive tracking of capacity utilization with actionable alerts to optimize reserve commit vs on-demand consumption, and SLA indicators to track how well Pure Storage is meeting performance and efficiency SLAs.
Policy-driven Upgrades are claimed to take the guesswork out of choosing the right Purity release and simplify fleet management.
Object storage supplier Cloudian has wrung 17.7GBps writes and 25GBps reads from a six-node all-flash cluster in a recent benchmark.
Cloudian said these are real-world results, generated with GOSBENCH, an industry-standard benchmark that simulates real-life workloads rather than an in-house tool. The servers used were single-processor nodes, each with a single, non-bonded 100Gbps Ethernet network card and four Micron 6500 ION NVMe drives.
The company supplies HyperStore object storage software, and this speed run was done with servers using AMD’s EPYC 9454 CPUs and the upcoming v8 HyperStore software.
Michael Tso
Cloudian CEO Michael Tso said in a statement: “Our customers need storage solutions that deliver extreme throughput and efficiency as they deploy Cloudian’s cloud-native object storage software in mission-critical, performance-sensitive use cases. This collaboration with AMD and Micron demonstrates that we can push the boundaries.”
AMD corporate VP for Strategic Business Development, Kumaran Siva, backed him up: “Our 4th Gen AMD EPYC processors are designed to power the most demanding workloads, and this collaboration showcases their capabilities in the context of object storage.”
CMO Jon Toor told us: “Most of our customers today are talking with us about all flash for object storage, if they’re not already there. Increased performance is a driver, especially as we move into more primary storage use cases. Efficiency is a driver also. With these results we showed a 74 percent power efficiency improvement vs an HDD-based platform, as measured by power consumed per GB transferred.”
HyperStore 8.0 incorporates multi-threading technology and kernel optimizations to capitalize on the EPYC 9454 processor, with its 48 cores and 128 PCIe lanes. This combination was then optimized for Micron’s 6500 ION 232-layer TLC SSDs, which deliver up to 1 million 4KB random read IOPS.
Object storage tends to scale linearly as nodes are added to a cluster, so very high aggregate speeds are possible. Cloudian’s per-node performance was 2.95GBps write and 4.17GBps read.
In October 2019, OpenIO achieved 1.372Tbps throughput (171.5GBps) using an object storage grid running on 350 commodity servers. That’s 0.49GBps per server.
A month later MinIO went past 1.4Tbps for reads, using 32 AWS i3en.24xlarge instances, each with eight NVMe drives – 256 NVMe drives in total. That means 175GBps overall and 5.5GBps per AWS instance, outperforming Cloudian on a per-node basis. We don’t know the NVMe drive performance numbers, but MinIO used twice as many drives per instance as Cloudian used per node. Object storage performance benchmarks are bedeviled by apples-and-oranges comparison difficulties.
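For readers checking the arithmetic, the conversions behind these comparisons are straightforward: divide terabits per second by eight for gigabytes per second, then divide cluster throughput by node count for per-node figures.

```python
# Reproducing the throughput arithmetic used in the comparisons above.
def tbps_to_gbps(tbps: float) -> float:
    """Terabits/sec to gigabytes/sec: x1000 for tera to giga, /8 for bits to bytes."""
    return tbps * 1000 / 8

print(round(25 / 6, 2))                     # Cloudian read per node: 4.17 GBps
print(tbps_to_gbps(1.372))                  # OpenIO cluster: 171.5 GBps
print(round(tbps_to_gbps(1.372) / 350, 2))  # OpenIO per server: 0.49 GBps
print(tbps_to_gbps(1.4))                    # MinIO cluster: 175.0 GBps
print(round(tbps_to_gbps(1.4) / 32, 2))     # MinIO per instance: 5.47 GBps
```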
Check out a Cloudian speed run Solution Brief here.