
Mainframers are keeping tape as they add cloud backup, archive storage

An Evaluator Group research project has found that tape use by mainframe users is not going away and that cloud storage of their backup and archive data is increasing at the same time.

The downloadable Technical Insight report, "Use of Cloud Storage in Mainframe Environments" by analyst Randy Kerns, looked at enterprise mainframe users, not service providers or consulting companies. It says that "Public cloud storage usage for mainframe environments is expanding for a variety of reasons" and that mainframe cloud storage "is dominated by storing backup and archive data from the mainframe."

General mainframe application storage in the cloud was found to be too expensive: "storage costs were prohibitive because of the class (expensive performance level) of storage required and the time-based charges for the amount that was resident." This led to some application repatriation. Specific archive and backup storage use in the cloud, though, was common.

The report states: “More than half of respondents reported storing more than one half of a petabyte of data in clouds. That is a large amount of data for mainframe environments and the expense of storage and management, either on-premises or in a public cloud, would be a noticeable amount in an overall budget.” 

Is it increasing or decreasing in amount? It is increasing by more than 11 percent for more than two thirds of the surveyed mainframe users.

The analyst found that: “It is easier to acquire and utilize cloud storage than on-premises systems. Backup and archive were the primary types of data cited by every interviewee and the growth was primarily due to expanded usage.”

Also the data stored in the clouds is in tape image format, even though some of the users do not have tape on premises: “Interviews indicated that even though some are not using tape, they are still using tape images stored on disk-based systems such as an IBM TS7700 system. The commentary about moving away from operational practices around tape or tape images was that it would be very difficult to change.”

Why are mainframers in general still using tape? Tape was comparatively cheap and mainframe tape-based "operational procedures could not be changed without great difficulty." Having a physical air gap with tape was cited as good news in this age of ransomware. But "only a few of those interviewed were using immutable settings for cloud storage. This seemed to be a contradiction in the concern for security and the ransomware recovery strategy. Probing further, the belief was ransomware recovery would be done from on-premises copies."

Tape use was increasing for about a third of the respondents, due to organic data growth, with the majority saying it was staying at the same level.

The report states: “The use of cloud storage for mainframe data was seen as a more modern approach for backup and archive, much simpler to acquire than virtual tape or physical tape with required technology transitions, and would help in executive directives about use of public cloud. However, use of existing tape was so ingrained in the operations that making a change was a major effort unlikely to have budget assigned.”

Overall: "The research and interviews show that use of cloud storage will continue to increase and that tape usage will continue in most cases." Cloud storage is being used for the backup and archive of relatively inactive data, and that use is increasing faster than the organic growth rate of mainframe data.

You can download the report here (registration required). 

OpenDrives now a backup target through Zmanda deal

OpenDrives Ultra

NAS supplier OpenDrives has set its product up as a backup target through a deal with Zmanda and its open source Amanda Enterprise backup software.

OpenDrives provides scale-out and high-performance NAS systems to hundreds of media and entertainment companies, such as HBO, CBS Sports, Spotify, and Sony Interactive Entertainment, and also sells into healthcare and advertising. Zmanda provides on-premises backup software. Betsol, a data management and automation supplier, bought Zmanda in 2018. The Betsol-OpenDrives partnership will deliver a container-native enterprise backup and recovery (EBR) offering integrated with OpenDrives' NAS systems for OpenDrives' customers. It should have throughput up to 15GB/sec.

Sean Lee, Chief Strategy and Operations Officer at OpenDrives, said of the announcement: “Container-native EBR provides a cost-effective way to run this critical workflow at scale. For enterprise organizations, the sheer volume of data moving within backup and restore procedures demands high-performance – partnering with Zmanda allows us to accelerate both throughput speeds and deployment.”  

OpenDrives Amanda use diagram

The joint system features:

  • On-storage EBR software, containerized for increased performance, which eliminates the need for a separate backup server
  • Immutable snapshots to combat ransomware attacks
  • Intelligent, touch-free scheduling for automated backups, said to reduce operational overhead and human error
  • All-inclusive licensing model and centralized customer support
  • Legacy service for current clients and hosts, including those running on Solaris
  • Intuitive UI with clear terminology for admin functions to reduce training time
  • Turnkey EBR solution that scales to any enterprise data environment

Get a solutions brief document here.

Background

We have not heard of Zmanda or Betsol before and did some research, finding that the open source Advanced Maryland Automatic Network Disk Archiver (Amanda) software, running on Unix systems, was initially developed at the University of Maryland in 1991, 31 years ago, in the IT client:server Stone Age. Zmanda was started up in 2005 in Sunnyvale and develops and maintains Amanda. The software executes on a server and backs up multiple client systems on a network, contacting each client to run a backup at scheduled times. Zmanda provides a commercial release of Amanda with a GUI and optional S3 cloud backup.
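As a rough illustration of that server-initiated, pull-style model, here is a minimal Python sketch; it is not Amanda code, and the hostnames and paths are hypothetical.

```python
# Illustrative sketch only - not Amanda code. It mimics the pull model the
# article describes: a central server contacts each client at a scheduled
# time and streams its backup to local media. Hostnames and paths are
# hypothetical.
import subprocess
from datetime import datetime

CLIENTS = {
    "db01.example.com": ["/var/lib/mysql"],
    "web01.example.com": ["/etc", "/var/www"],
}

def run_backup_cycle(archive_dir="/backup"):
    """Contact each client in turn and pull a tar stream over SSH."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    for host, paths in CLIENTS.items():
        target = f"{archive_dir}/{host}-{stamp}.tar.gz"
        # The server initiates the connection, as Amanda's server does.
        cmd = ["ssh", host, "tar", "czf", "-", *paths]
        with open(target, "wb") as out:
            subprocess.run(cmd, stdout=out, check=True)

if __name__ == "__main__":
    run_backup_cycle()
```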

Zmanda was acquired by Carbonite in 2012, and Betsol bought it from Carbonite in 2018, when it protected more than 1 million systems worldwide and had hundreds of enterprise customers. It complemented Betsol's Rebit consumer and SMB-focussed product. The OpenDrives-Zmanda announcement says: "Zmanda has protected over 1 million servers and served customers in 40+ countries since 1991."

The number of protected servers has remained constant since 2018 and the software has an old school flavor about it. For example, Zmanda has a forum for registered users and this is dated in appearance:

Zmanda has a list explaining/claiming why Amanda Enterprise is different from other backup products:

We registered at the Zmanda Network and downloaded the 17-page “Advantages of Amanda over Proprietary Backup Software” white paper. There is no publication date but it is potentially a bit dated, as it states: “most commercial backup software packages have not yet released support for IPv6 and do not provide backup to a storage cloud.” 

Au contraire Zmanda, most commercial backup software packages do provide backup to a storage cloud. The paper also cites traditional NetWorker, NetBackup and BackupExec as proprietary product examples, and not Cohesity, Rubrik or Veeam. It does have a good explanation of Amanda’s features though.

A v3.5.2 Amanda community edition release in July this year ensured that the tapes with old data are not overwritten with new backup set data. The v4.3.1 release of the Zmanda SW at the same time integrated LDAP, simplified user management and improved data restoration.

Zmanda makes extravagant claims about its backup software, viz: “Zmanda’s mission is to strive and set the standard for data protection and recovery,” and, “we [st]rive to deliver the best enterprise data management solution. With the 4.1.3 release, we have done it by offering extraordinary new features and enhancements.”

We’ll leave you to make up your own mind.

All this is a long way from the data protection world of Cohesity, Commvault Metallic, Druva, HYCU, Rubrik and Veeam.

Cloudera SaaSifies CDP with data lakehouse service

Big data analytics supplier Cloudera has launched its Cloudera Data Platform (CDP) One, an all-in-one data lakehouse software-as-a-service (SaaS) offering for self-service analytics and exploratory data science on any type of data. 

The data lakehouse concept was popularized by Databricks and Dremio. The idea is to combine any data – structured or unstructured – from virtually any source to support multiple analytic functions, including data warehouse and machine learning workloads. The concept is to enable analytics on ingested data without needing, or before using, Extract, Transform and Load (ETL) procedures to move it into a data warehouse. Data warehouse suppliers such as Snowflake are adding data lake analytics capabilities.
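To make the query-in-place idea concrete, here is a minimal sketch of the general lakehouse pattern using DuckDB as a stand-in engine; it is not Cloudera's CDP One API, and the Parquet file name is hypothetical.

```python
# A minimal sketch of the lakehouse pattern, not Cloudera's CDP One:
# query raw files where they land instead of first ETL-ing them into a
# warehouse. DuckDB is used purely as a stand-in engine; 'events.parquet'
# is a hypothetical file.
import duckdb

con = duckdb.connect()
top_pages = con.execute("""
    SELECT page, COUNT(*) AS views
    FROM 'events.parquet'          -- raw, un-ETL'd data queried in place
    WHERE event_type = 'page_view'
    GROUP BY page
    ORDER BY views DESC
    LIMIT 10
""").fetchdf()
print(top_pages)
```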

Cloudera CTO Ram Venkatesh blogs: “CDP One enables companies to produce data products that offer data and analytics to more end users than ever before – it’s designed to enable both expert developers and low-code data analysts to get more value from their data.” 

The Cloudera service enables multiple analytic engines to deliver large-scale analytics on any data for ingestion, preparation, analysis, and prediction without the need for specialized operations. It is deployed on a private, single tenant cloud infrastructure managed by Cloudera. This follows news of Dremio’s data lakehouse with a faster SQL engine and new metastore a few months ago.

Cloudera CDP One

Cloudera says this is the first all-in-one data lakehouse SaaS offering, and that everyone from novice data practitioners to skilled developers get access to low-code tools, streaming data analytics, and ML to perform ad-hoc, highly customizable analysis across the full data lifecycle via one secure, centralized data platform.

CDP One provides:

  • Environment Monitoring & Optimization (Cost, SLAs, etc)
  • Compliance and Audit Management
  • End-to-end Integration between Cloud, On-premises and 3rd party apps
  • Environment Security Configuration & Operations
  • Cloud Configuration & Management Infrastructure Provisioning

CDP One’s launch follows Cloudera acquisitions of Datacoral and Cazena in June last year. Datacoral provides no-code connectors so customers can extract data from multiple data sources, orchestrate automated transformations using SQL, and set up end-to-end data pipelines from ingestion to transformation to publishing. Cazena’s instant cloud data lake SaaS platform includes end-to-end infrastructure and orchestration so customers get a fully managed, continuous operations model that delivers guaranteed security and performance 24/7.

Venkatesh said: “Rapidly onboarding new and interesting data and incorporating it into the analytics process is a critical part of helping an organization become data driven. Traditionally provisioning new data feeds and moving models into production environments can take weeks if not months.” Cloudera says CDP One speeds all this up so customers can get information from data faster.

Holy grail

Mike Feibus, principal analyst of market research firm FeibusTech, commented: “It’s inspiring to see what insights organizations can uncover from their data once the barriers are removed. Tearing down those barriers has been Cloudera’s mission since I’ve been following the industry. And now, finally, the self-service simplicity of CDP One is making those insights accessible for a whole new class of companies. Watch for services like CDP One to emerge as the Holy Grail of the data-driven era.”

CDP One appears to be using Talend data management and integration software. Talend’s Data Fabric platform can ingest different types of data from multiple sources into a data lake and provide API access to it. 

We’re told the launch of CDP One continues a long-standing and strategic relationship with Cloudera ISV partner Talend. Rolf Heimes, VP Global Channel & Alliances at Talend, is quoted in Cloudera’s announcement: “The huge volume of data spread across multiple cloud environments and on-premises locations makes it extremely difficult for businesses to ensure high-quality, well-governed data management processes.

“With the combination of Talend’s easy-to-use data management technologies with Cloudera’s powerful data and analytics service, we’re making it easier for our joint customers to use healthy data to drive business outcomes and accelerate their journey to the cloud.”

Private equity-owned business reshaping is going on here. We note that Snowflake has a partnership with Thoma Bravo-owned Talend. Coincidentally, Cloudera is owned by private equity firms CD&R and KKR, which bought it for around $5.3 billion last June.

We have asked Cloudera if and how Datacoral, Cazena, and Talend technology is used in CDP One and will update this story when we hear anything back.

Get a downloadable CDP One datasheet here.

HYCU offers protection freebie on AWS EC2

Three clouds

Backup-as-a-Service biz HYCU is offering a free-for-life tier of data protection on AWS, a rung below its paid-for tier and missing things like immutability.

You will still pay for AWS consumption used to protect the data, however.

The offer is based on HYCU's Protégé for AWS service. HYCU announced Protégé late last year as a cloud-native offering and began previewing it. It is a fully managed data protection service that provides 24/7 support and includes monitoring of backup outcomes, with proactive alerting and ransomware protection. The service uses EBS snapshots, which are application-consistent and have no impact on application performance.
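For context, the raw AWS building block involved looks like this: a hedged boto3 sketch of creating an EBS snapshot directly, not HYCU's Protégé code, with a hypothetical volume ID and no application quiescing shown.

```python
# A hedged sketch of the raw AWS building block the article mentions -
# an EBS snapshot - using boto3 directly. This is not HYCU Protege code;
# the volume ID and tag values are hypothetical, and application
# consistency (quiescing the app before the snapshot) is not shown.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",          # hypothetical volume
    Description="nightly backup via snapshot",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "policy", "Value": "daily"}],
    }],
)
print("Snapshot started:", resp["SnapshotId"], resp["State"])
```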

HYCU told attendees at AWS Summit Anaheim 2022 about the free tier, which it will formally announce next week.

HYCU founder and CEO Simon Taylor said: “By releasing a free tier of HYCU Protégé for AWS, we have created the first cloud-native backup solution purpose-built for cloud and DevOps teams.”

In June, HYCU announced a free-for-life offering on AWS for 1TB of AWS data using Protégé. Now it has come up with another eye-catching offer, clearly wanting to grow its AWS data protection user base as fast as possible.

The AWS Data Protection Free for Life offer is for EC2 instances. It includes:

  • Automated backups with 1-click VM and file restore
  • Granular file restores
  • Daily notifications and alerts of backups
  • 24/7 live view of protection status
  • Recover from outages with cross-region recovery
  • Cross-regional disaster recovery
  • No credit card or billing rights required
  • No capacity or time limits
  • Community support

HYCU says customers only pay for the AWS consumption used to protect their data. The differences between HYCU’s free and paid-for AWS tiers are shown below:

HYCU Protégé for AWS

Note that backup immutability is in the paid-for service.

Protégé is a multi-cloud offering, supporting AWS, Azure, and the Google Cloud, as well as Nutanix and VMware environments.

Competition

HYCU is facing a lot of competition. Commvault protects EC2, EKS, AWS Outposts, VMware Cloud on AWS, RDS, DocumentDB, Redshift, DynamoDB, RDS on VMware, and Aurora. Find out more here.

Startup Clumio offers cloud-native AWS data protection services, and its portfolio includes ransomware protection and offerings for S3, DynamoDB, RDS, SQL Server on EC2, M365, and VMware Cloud on AWS. Download a product overview here.

Druva also offers cloud-native AWS workload data protection, including EC2, RDS, and EKS with automated creation, retention, and deletion of EBS snapshots, RDS DB snapshots, and Amazon Machine Images (AMIs). It also supplies cross-regional disaster recovery. Get more information here.

With this depth and breadth of competition, it’s not surprising that HYCU is using eye-catching offers to get customers on board: land and expand will be its watchwords.

Find out more about HYCU Protégé for AWS here and get a datasheet here.

Barracuda sold to another private equity business

Barracuda

Mail backup and security vendor Barracuda Networks has been sold by its private equity owner to another, giving an indication of late-stage startup life after an IPO had been ruled out.

Barracuda was acquired for $1.6 billion in November 2017 by Thoma Bravo, a private equity investment company. Thoma Bravo has now sold it to KKR for an undisclosed sum. KKR sponsors investment funds that invest in private equity, credit, and real assets, and has strategic partners that manage hedge funds.

Barracuda CEO Hatem Naguib said: "We're grateful to Thoma Bravo for their valuable strategic and operational support over the last four years."

Chip Virnig, a partner at Thoma Bravo, added: “Barracuda has been a tremendous partner over the last four years and has experienced strong product, customer and revenue growth. We have enjoyed working closely with Hatem and his team through multiple acquisitions and operational improvements, and we are confident that the company is well-positioned for continued success.”

The Barracuda Networks business was started in 2003 and took in $46.4 million through five funding rounds, according to Crunchbase, but it lists two rounds for undisclosed sums so that is probably understating the total raised.

When it was bought by Thoma Bravo, Barracuda was a $400 million run-rate business. Financial analyst Jason Ader said at the time: “For Barracuda, we believe the Thoma Bravo acquisition is probably the best scenario for shareholders, given skepticism regarding the company’s ability to transition the business from on-premises appliances to cloud-based solutions and recent margin challenges.”

Thoma refocused the business on public cloud security along with network security (firewalls) and small and medium business customers. Now John Park, a partner at KKR, says: “We are excited to complete this transaction and begin working with the Barracuda team to support their continued growth and delivery of next generation cloud-first cybersecurity solutions that protect SMEs from an evolving landscape of threats.”


+Comment

The sequence of events is that Barracuda was founded in 2003, grew with VC funding to its $400 million/year run rate, but an IPO did not happen. Instead, 14 years after being founded, it was bought in a first-stage private equity transaction. Nearly five years later, it is in the second stage of its private equity life. 

We think stage one reshaped it as a continuing and dependable revenue-earning business. We also think it likely that Thoma Bravo would have sold Barracuda to a publicly owned corporation, if one had wanted to be a buyer, but KKR bought it instead.

The pattern of events here is what probably awaits all late-stage technology startups that have a decent run rate but whose IPO dreams have faded. Private equity ownership beckons if the private equity people can see a viable business (as in, we can do stuff with it and sell it on later). 

Data protection business Datto IPO'd in late 2020 with Vista Equity Partners holding 69 percent of the shares. Datto was then sold to Kaseya for $6.2 billion. There's money to be made here by PE businesses.

Veritas, coincidentally another data protection and security business, is in stage one private equity ownership, according to this reckoning. The Carlyle Group bought it for $7.4 billion in January 2016. It's a potential outcome for other data protection and security startups that can't quite get to the IPO level but have dependable and sticky revenue streams. These are big bets by private equity players and can take years to play out.

Samsung said to be working on >200-layer NAND

Samsung is reportedly preparing to launch 236-layer 3D NAND product this year and open a NAND R&D center in Korea.

Update, 19 August 2022: Samsung's answers to our questions have been added to the end of the article.

Currently Samsung's highest 3D NAND layer count is 176 with its seventh-generation V-NAND, which uses charge-trap technology and string-stacking – two 88-layer sections added together to reach the 176 level. Competing NAND foundry operators have already passed the 200-layer level: Micron has a 232-layer technology, SK hynix is at 238 layers, Western Digital and Kioxia's joint venture is at 212 layers but not yet shipping product, and China's YMTC has just announced a 232-layer product.

Adding more layers increases a NAND die's density and therefore enables higher-capacity SSDs or smaller devices. By moving to 236 layers, Samsung would add another 60 layers and around 34 percent more capacity per die, assuming the same TLC (3 bits/cell) format.
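A quick back-of-the-envelope check of that figure:

```python
# Quick check of the layer-count arithmetic in the paragraph above.
current, planned = 176, 236
extra_layers = planned - current                  # 60 more layers
uplift = extra_layers / current * 100             # ~34.1 percent
print(f"{extra_layers} extra layers = {uplift:.1f}% more capacity per die,"
      " assuming the same TLC (3 bits/cell) encoding")
```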

Samsung is not quoted directly in this Korean report nor in a second one, which means the 236-layer technology development has not been confirmed by Samsung. The company did present at FMS 2022, where a slide said eighth-generation V-NAND would appear this year with a 1Tbit die formatted as TLC and operating at 2.4Gbit/sec.

Compared to the 176-layer seventh-gen V-NAND, V8 cells have been shrunk laterally and vertically, and the base peripheral logic layer has been reduced in size as well. A slide note said: "V8 1Tb will be released with physical scaling technology, improving storage capacity without adding volume."

Dylan Patel

SemiAnalysis chief analyst Dylan Patel believes Samsung has a top-down management style cultural problem which is inhibiting its technology progress. He writes scathingly in another article: “Despite leading at 128-layer, Samsung has not shipped a new NAND process technology in years. Their 176-layer and >200-layer NAND process technologies have not been found in any SSDs by reverse engineering firms or teardowns. This is despite their claims of shipping 176-layer consumer SSDs in 2021.”

We have asked Samsung if this is correct and to confirm whether it is developing a 236-layer V-NAND product. A Samsung spokesperson told us: "Yes, we are shipping 176-layer V-NAND based SSDs as well as UFS 3.1 mobile storage to customers. We also began mass production of the 176-layer UFS 4.0 solution this month."

The company was less forthcoming about 236-layer NAND: "Our 200+ V-NAND is well on-track and will be released as planned. We cannot confirm the layer count or a specific timeline at this time."

Bocada says automated BackupOps is coming

Data protection status monitor Bocada says automation is coming to manual backup operations, such as detecting a failed backup operation and doing something about it. About time too.

The company was founded in 1999 and developed backup status monitoring technology, but ran into company direction problems around 2013 when CEO Nancy Hurley left. Four years later, in 2017, it was taken into private equity ownership. Since then it has been extending the range of backup suppliers covered by its product, improving its backup monitoring capabilities, and, as far as we can see, keeping a low profile.

The company has issued a Global Backup Monitoring Trends Report, summarizing survey data from 260-plus IT professionals about backup monitoring trends. We’ve picked out a few points. One is that managing heterogeneous backup suppliers and backup types – such as on-premises, in-cloud, and SaaS app data – is the main backup operations challenge. No surprise there.

Another is that most of the respondents do not automate manual activities that become increasingly repetitive as data volumes and locations grow. A chart illustrates this:

Bocada diagram

It also seems quite realistic to us. For example, admins should receive automatic alerts when backup storage runs low, or the backup activity pattern becomes suspicious.

Many of the respondents do plan to add automation to these things, as a second chart illustrates:

Bocada diagram

Approximately 50 percent of backup professionals anticipate at least some automation implementation, with backup failure remediation (54 percent) being the top choice. Again, that seems common sense.

A move to the cloud was also detected. The report says that while just over half of backup operations take place on-premises today, backup professionals expect over 60 percent of these operations to move to the cloud within the next three years. As organizations transition to cloud-only or hybrid-cloud environments, there is an acknowledgement that backup oversight must transition there as well.

The report says that backup professionals report budget and headcount growth compared to three years ago. This is a significant signal that organizations are placing greater importance on these activities and are positioning teams to better adapt to this shifting backup landscape.

Off the radar

Bocada has fallen off our radar in recent years and we asked CEO Matt Hall about this.

He told us: “While it may appear that things have been quiet with Bocada, that’s far from the case. The company was acquired by a private equity firm in 2017 and a major commitment was put in place to invest in product development.

“Bocada forward invested in building out cloud backup monitoring and reporting capabilities and addressed needs for broader data protection automation by developing automated approaches to ticketing and asset protection monitoring that seamlessly integrate with enterprise incident management tools (e.g. ServiceNow, Jira, BMC Remedy).”

A look at Bocada's blogs over the past two or three years shows a steady stream of new backup supplier and product support news, along with a constant flow of new software versions.

“These investments are reflected in an engineering team that’s three times larger than it was pre-acquisition, and a monthly release schedule based heavily on customer feedback. Over this span of time, Bocada has organically acquired many new enterprise customers, spanning from global MSPs (e.g. HCL, DXC, Datacom) to standalone enterprise accounts (e.g. Merck, Thermo Fisher, GM Financial).

“So, while it may appear things are being re-energized today, the truth is we’ve quietly but steadily been building the product out to be a holistic solution for data protection monitoring.”

Hall is listed on LinkedIn as a Bocada Customer Advocate, having joined the company in 2016. He’s also a general partner at HFM Capital and an Oregon-based investor. LinkedIn says Bocada has 40 employees.

Key:value SSD controller Pliops gains $100 million funding

Pliops XDP

SSD performance-boosting startup Pliops has gained $100 million funding to continue developing its XDP specialized processor card, and confirmed some layoffs in sales and marketing as the price of growth.

The XDP is a hardware-and-software product that provides a key:value store interface and functionality to an NVMe SSD, speeding up applications such as relational and NoSQL databases – think MySQL, MongoDB, and Ceph.
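Conceptually, the interface the XDP presents to applications looks like a key:value store rather than a raw block device; the toy Python sketch below illustrates the abstraction only and is not Pliops' API.

```python
# Purely illustrative sketch of the key:value abstraction the XDP exposes -
# put/get/delete on keys rather than reads and writes on raw blocks. This is
# not Pliops' API; a real accelerator implements the mapping (indexing,
# compression, garbage collection) in hardware below an NVMe SSD.
class KeyValueStore:
    def __init__(self):
        self._index = {}          # key -> value; hardware keeps this on-SSD

    def put(self, key: bytes, value: bytes) -> None:
        self._index[key] = value

    def get(self, key: bytes):
        return self._index.get(key)

    def delete(self, key: bytes) -> None:
        self._index.pop(key, None)

kv = KeyValueStore()
kv.put(b"user:42", b'{"name": "Ada"}')
print(kv.get(b"user:42"))
```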

Uri Beitler

Uri Beitler, Pliops’s founder and CEO, issued a statement saying: “The ability to monetize data faster and get much more while paying much less is the core priority of every organization, especially in times of market volatility. Our transformative product offers this exact unique capability, making it imminent that Pliops XDP will be the cornerstone of every modern datacenter.”

It’s that prospective “imminence” which must have incentivized the contributing VCs.

Beitler said: “With the trust of our existing customers and partners, and our commitment to align company resources with the current economic climate, this funding round will accelerate our market adoption and help move us closer to becoming the market leader.”

Aligning company resources means making staff redundant. Israel’s Calcalist media outlet reported that 12 Pliops sales and marketing employees had been let go and Pliops is focusing its business on the USA.

Beitler confirmed this, telling the outlet: "We are downsizing our sales and marketing department to continue and grow and to focus on the next generation of our product that will be released in 2023."

Pliops XDP

The funding round was led by Koch Disruptive Technologies (KDT), with participation from SK hynix, Lip-Bu Tan (chairman of Walden International), and State of Mind Ventures Momentum, plus previous investors. Pliops has raised a total of $215 million to date.

The company’s most recent round was for $65 million only last year, so it must have had a pressing need for new money if it has just taken in another $100 million. The VCs probably told it to control its costs better, which explains Beitler’s comment about “our commitment to align company resources with the current economic climate.”

The VCs and Pliops believe that by reliably accelerating performance in existing and new datacenters, the XDP enables customers to process, store, analyze, and move data faster and better. They get more from their growing data volumes and datacenter footprint, leading to reduced costs and energy consumption.

Cash contributor Jin Lim, head of Solution Lab (SOLAB) at SK hynix, said: “Pliops’s technology is well aligned with our storage, and we consider it an important tool and stepping stone toward next-generation storage systems that maximize the potential of data applications, including AI/ML and data analytics.”

Pliops says it’s looking at global cloud service providers, enterprises, and HPC customers. The XDP works with SmartNICs and DPUs, like Nvidia’s BlueField products, and is not in competition with them.

A next-generation XDP is being developed and should appear in 2023.

Dell’s PowerStore OS is even more agile as it hits the big 3.0

Sponsored Feature: Very few people in the software world today would claim that "monoliths" are the best way forward for delivering new features or matching your company's growth trajectory.

It might be hard to shrug off a legacy database or ERP system completely. But whether you’re tied to an aging, indispensable application or not, most organizations’ software roadmaps are focused on microservices and containers, with developers using continuous delivery and integration to ensure a steady stream of incremental, sometimes radical, improvements for both users and customers.

This marks an increasingly stark contrast with the infrastructure world, particularly when it comes to storage. Yes, relatively modern storage systems will have some inherent upgradability. You can probably add more or bigger disks, albeit within limits that are sometimes arbitrary. And new features do come along eventually, usually in the shape of major operating system or hardware upgrades. You just need to be patient, that’s all.

Unfortunately, patience is in short supply in the tech world. Modern applications involving analytics or machine learning demand ever more data, in double quick time. Adding more disks, or even appliances, to existing systems can only close the gap so far. And manually tuning systems for changing workloads is hardly a real time solution.

Large-scale rip and replace refreshes are at best a distraction from the business of innovation. At worst they will freeze it or destroy it completely as infrastructure teams struggle to integrate disparate systems with different components and processes. Switching architectures can mean learning entirely new tooling. More perilously, it can mean complicated migrations, which may put data at risk.

Any respite is likely to be temporary as software – and customer – demands quickly increase. Oh, and while all of this is happening, those creaking systems and the data they hold present a tempting target for ransomware gangs and cybercriminals.

Which is why Dell, with the launch of its PowerStore architecture in 2020, rethought how to build a storage operating system and its underlying hardware to deliver incremental upgradability as well as scalability. At the time, Dell’s portfolio included its own legacy systems such as EqualLogic and SC (Compellent), alongside those from XtremIO and Unity it inherited through its acquisition of EMC.

As Dell’s PowerStore global technology evangelist Jodey Hogeland puts it, the company asked itself, “How do we do what the industry has done with applications? How do we translate that to storage?”

The result is a unified architecture built around a container-based storage operating system, with individual storage management features delivered as microservices. This means refining or adding one feature doesn’t need to wait for a complete overhaul of the entire system.

Containers for enterprise storage? How does that work?

“So, for example, our data path runs as a container,” Hogeland explains. “It’s a unified architecture. So the NAS capabilities or file services run as a container.” Customers have the option “to deploy or not deploy, it’s a click of a button: do I want NAS? Or do I just want a block optimized environment?”
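As a generic illustration of the feature-as-container idea (not PowerStoreOS internals), enabling or disabling an optional service reduces to starting or stopping one container rather than reinstalling a monolithic array OS; the image name below is hypothetical.

```python
# A generic illustration of "feature as a container" - not PowerStoreOS
# internals. Toggling an optional file-services component on or off becomes
# starting or stopping one container. The image name "nas-service:latest"
# is hypothetical.
import docker

client = docker.from_env()

def enable_nas():
    client.containers.run("nas-service:latest", name="nas", detach=True)

def disable_nas():
    c = client.containers.get("nas")
    c.stop()
    c.remove()

enable_nas()      # "do I want NAS?" -> start the container
```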

Needless to say, with other vendors, having file-based storage AND block-optimized hardware would typically mean two completely different platforms. The comparative flexibility of Dell's unified architecture is also one of the reasons why PowerStore provides an upgrade path for all those earlier Dell and EMC storage lines. A core feature of the PowerStore interface is a GUI-based import external storage option, which supports all Dell's pre-existing products, offering auto discovery and data migration.

More importantly, the container-based approach means the vendor has been able to massively ramp up the pace of innovation on the platform since launch, says Hogeland. “It’s almost like updating apps on your cell phone, where it’s, ‘Hey, there’s this new thing available, but I don’t have to go through a major update of everything every few months.’”

The latest PowerStore update, 3.0, was launched in May and is the most significant to date, delivering over 120 new software features. Its debut coincided with the launch of the next generation of hardware controllers for the platform. Existing Dell customers can easily add these to the appliances they already own through Dell's Anytime Upgrade program. This allows them to choose between one or two data-in-place upgrade options on existing kit or get a discount on a second appliance.

The new controllers feature the latest Intel Xeon Platinum processors which add a number of cybersecurity enhancements – silicon-based protection against malware and malicious modifications for example, as well as Hardware Root of Trust protection and Secure Boot. And enhanced support for third party external key managers increases data at rest encryption security and protects against theft of the entire array.

The controller upgrades also bring increased throughput via 100GbE support, up from 25GbE, and expanded support for NVMe. The original platform offered NVMe in the base chassis, but further expansion meant switching to SAS drives. Now the expansion chassis also offers automated, self-discovering NVMe support. PowerStore 3.0 software enhancements make the most of those features with support for NVMe over VMware virtual volumes, or vVols, developed in collaboration with VMware. This is in addition to the NVMe over TCP capabilities that Dell launched earlier this year.

Preliminary tests suggest this combination of new features can deliver a performance boost of as much as 50 per cent on mixed workloads, with writes up to 70 per cent faster, and a factor of 10 speed boost for copy operations. Maximum capacity is increased by two thirds to 18.8PBe per cluster, with up to eight times as many volumes as the previous generation.

But it’s one thing to deliver more horsepower, another to give admins the ability to easily exploit it. The rise of containerization in the application world has happened in lock step with increased automation. Likewise, it’s essential to allow admins to manage increasingly complex storage infrastructure without diverting them from other, more valuable work. This is where automation, courtesy of PowerStore’s Dynamic Resiliency Engine, comes into play.

We have the power…now what do we do with it?

For example, Hogeland explains, a cluster can include multiple PowerStore appliances. If a user needs to create 100 volumes, how do they work out the most appropriate place to host each one? The answer is, they shouldn't have to. "In PowerStore today, you can literally say 'I want to create 100 volumes.' There's a drop down that says auto disperse those workloads, based on what the requirements are, and the cluster analytics that's going on behind the scenes."
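Here is a hedged sketch of the kind of placement decision being automated: a toy least-utilized-first heuristic with made-up appliance names. PowerStore's real balancer weighs live cluster analytics; this is illustration only.

```python
# A toy placement heuristic - pick the appliance with the most headroom for
# each new volume. Appliance names and utilization figures are made up;
# PowerStore's actual resource balancer uses live cluster analytics.
appliances = {"appliance-1": 0.42, "appliance-2": 0.18, "appliance-3": 0.63}  # used fraction

def place_volumes(count, utilization):
    placements = []
    for _ in range(count):
        target = min(utilization, key=utilization.get)   # least-utilized wins
        placements.append(target)
        utilization[target] += 0.001                     # rough cost of one more volume
    return placements

plan = place_volumes(100, dict(appliances))
print({a: plan.count(a) for a in appliances})
```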

At the same time automation at the backend of the array, such as choosing the most appropriate backend path, is completely transparent to the administrator regardless of what is happening on the host side. “We can make inline real time adjustments on the back end of the array to guarantee that we’re always 100 per cent providing the best performance the array can provide at any given time,” says Hogeland.

With PowerStore 3.0, this has been extended to offer self-optimization allied to incremental growth of the underlying infrastructure. When it comes to hardware upgrades, the PowerStore platform can scale up from as few as six flash modules to over 90. And, Hogeland points out, “We can mix and match drive sizes, we can mix and match capacities. And we can do single drive additions.”

As admins build up their system, the system will self-optimize accordingly. “As a customer goes beyond these thresholds, where they might get a better raw to usable ratio, because of the number of drives that are now in the system, PowerStore automatically begins to leverage that new width.”

"I don't have to worry about reconfiguring the pool. There's only one pool in PowerStore. I don't have to move things around or worry about RAID groups or RAID sets," adds Hogeland.

On a broader scale, version 3.0 adds the ability to use PowerStore to create a true metro environment for high availability and disaster recovery without the need for additional hardware and licensing, or indeed cost. "The 3.0 release leverages direct integration with VMware and vSphere, Metro stretch cluster," says Hogeland, meaning the sites can be up to 100km apart. Again, this is a natively integrated capability, which Hogeland says requires just six clicks to set up.

This native level of integration with VMware means "traditional" workloads such as Oracle or SQL Server instances are landing on PowerStore, says Hogeland, along with virtual infrastructure and VDI deployments.

At the same time, Dell also provides integration with Kubernetes. A CSI Driver enables K8s-based systems, including Red Hat OpenShift, to use PowerStore infrastructure for persistent storage. “Massive shops that are already on the bleeding edge of K8s clusters are building very robust, container-based applications infrastructure on PowerStore.”

Which shows what you can do when you grasp that storage architectures, like software architectures, don’t have to be monolithic, but can be built to be flexible, even agile.

As Hogeland points out, historically it has taken storage players six or seven years to introduce features like Metro support, or the advanced software functions seen in PowerStore, into their architectures: “We’ve done it in roughly 24 months.”

Sponsored by Dell.

Storage news ticker – August 17

Storage news

Codenotary has announced widespread adoption of immudb, its open-source enterprise-class database with data immutability at scale, by multiple organizations and cloud service providers. Following the release of immudb 1.3 last month, Codenotary has seen increased adoption in financial services, government, military, and ecommerce. With thousands of production deployments, immudb automatically stores data versioned and timestamped, and provides a cryptographic guarantee of zero tampering. There have been more than 15 million downloads of immudb so far.
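As a conceptual illustration of cryptographic tamper evidence (not immudb's actual implementation), here is a minimal hash-chained ledger sketch: each record is chained to the hash of the previous one, so any later edit breaks verification of everything after it.

```python
# A conceptual sketch of tamper evidence, not immudb's implementation: each
# record is chained to the hash of the previous one, so any later edit
# breaks verification of everything after it.
import hashlib, json, time

ledger = []

def append(ledger, data):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"ts": time.time(), "data": data, "prev": prev}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)

def verify(ledger):
    prev = "0" * 64
    for e in ledger:
        body = {k: e[k] for k in ("ts", "data", "prev")}
        if e["prev"] != prev or e["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

append(ledger, {"account": "42", "balance": 100})
append(ledger, {"account": "42", "balance": 90})
print(verify(ledger))   # True; edit any entry and this returns False
```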


File-based content collaborator Egnyte has looked at the AEC (Architecture, Engineering, and Construction) market and says data growth is exploding. Its AEC Data Insights Report is based on an analysis of data trends among more than 3,000 customers in the AEC industry. On average, Egnyte’s AEC customers increased storage by 31.2 percent compound annual growth rate (CAGR) from 2017 to 2021.

SaaS data protector HYCU has hired its first CFO, Justin Schumacher. He was VP Finance at JumpCloud, where he helped close out Series E and F rounds totaling $325 million. He was previously Vice President of Business Operations at Uplight, Vice President of Finance at Simple Energy, controller at SyncHR, and held senior management finance positions at SolidFire (acquired by NetApp), Datalogix (acquired by Oracle), and KPMG.

Automated data access and security supplier Immuta has set up a European HQ in London, run by Colin Mitchell, GM for EMEA. European customers include Roche, Swedbank, and Billie. Immuta announced $100 million in Series E funding in June.

John Spiers

IntelliProp has hired John Spiers as its CEO and president, with co-founder Hiren Patel stepping back from the CEO role to become CTO. IntelliProp's technology is focused on CXL-powered systems that enable the composing and sharing of memory across disparate systems. The company was founded in 1999 to provide ASIC design and verification services for the data storage and memory industry. Spiers co-founded HCI pioneer LeftHand Networks, acquired by HPE, and was CEO at NexGen Storage, acquired by Fusion-io.

Storage supplier Kioxia has announced industrial-grade flash products using its BiCS 5 (112-layer) TLC 3D NAND technology, available in a 132-BGA package. Densities range from 512 gigabits (64 gigabytes) to 4 terabits (512 gigabytes). They support a wide temperature range (-40°C to +85°C) and can convert Triple-Level Cell (TLC, 3 bits per cell) flash memory to Single-Level Cell (SLC, 1 bit per cell) mode, improving performance and reliability. Kioxia also offers low-density SLC industrial-grade flash products covering the same wide temperature range.

Big memory supplier MemVerge has released two new software products, Memory Machine Cloud Edition and Memory Viewer. Memory Machine Cloud Edition software uses patented ZeroIO memory snapshot technology and cloud service orchestration to transparently checkpoint long-running applications and allow customers to safely use low-cost Spot Instances, reducing cloud cost by up to 70 percent. Memory Viewer software provides system administrators with actionable information about DRAM. Its topology maps and heat maps provide system administrators new insights into their memory infrastructure. The software is free and available now for download.

Micron, the only US-based manufacturer of memory, today announced plans to invest $40 billion through the end of the decade to build leading-edge memory manufacturing in multiple phases in the US. With the anticipated grants and credits made possible by the CHIPS and Science Act, this investment will enable the world’s most advanced memory manufacturing in America. Micron expects to begin production in the second half of the decade, ramping overall supply in line with industry demand trends.

MSP backup service provider N-able has launched the N-able Cloud User Hub following the acquisition of Spinpanel in July. It’s a multi-tenant Microsoft 365 management and automation platform built for Microsoft Cloud Solution Providers and allows users to automate the management and security of all Microsoft tenants, users, and licenses.

Ocient is releasing a report, "Beyond Big Data: The Rise of Hyperscale", which forecasts exponential data growth. The study is based on a survey of 500 technology and data professionals, across a variety of industries, who are managing active data workloads of 150 terabytes or more. It concluded:

  • 97 percent of respondents indicated that the volume of data managed by their organization will grow over the next one to five years
  • 72 percent of C-level executives expect the volume to grow "very fast" over the next five years

To support such tremendous data growth, 98 percent of respondents agreed it’s somewhat or very important to increase the amount of data analyzed by their organizations in the next one to three years.

Matt Kixmoeller

Matt Kixmoeller stopped being VP Strategy at Pure Storage in September last year and has become a full-time product marketing and design person at Ghost Automotive. This startup was founded in 2017 by CEO John Hayes and CTO Volkmar Uhlig. Hayes was a Pure Storage co-founder and chief architect at the company from 2009 to 2017. Uhlig designed and built a fully automated programmatic media trading platform at Adello.

Red Hat has announced a new iteration of Red Hat OpenShift Platform Plus with underlying technology updates: Red Hat OpenShift 4.11 (based on Kubernetes 1.24 and the CRI-O 1.24 runtime interface), Red Hat Advanced Cluster Management for Kubernetes 2.6, and Red Hat OpenShift Data Foundation 4.11. OpenShift 4.11 includes Installer provisioned infrastructure (IPI) support for Nutanix, letting users employ the IPI process for fully automated, integrated, one-click installation of OpenShift on supported Nutanix virtualized environments.

It also includes the operator-based OpenShift API for data protection, which can be used to back up and restore applications and data natively or by using existing data protection applications across the hybrid cloud.


David Chapa has left his chief evangelist role at Qumulo to join Weka as a full time product marketeer.

Weebit Nano is qualifying its first embedded ReRAM module to verify it meets industry standards for thermal stability, non-volatile cycling endurance (it passed 10K cycles with no failures), and data retention. Initial qualification results have been successful. There's more info in a blog.

In other news… Mark Waite of Cohesive sent in this gem:

The world’s most famous magic society, The Circle of Strife, is taking legal action against a leading tech analyst firm in the first known case of “shapeism.” 

In a statement a spokesperson for the Magic Circle, a Mr A. Cadabra said: “We are taking a stand for circles everywhere who are being discriminated against in favour of four-sided geometric figures. For far too long squares, cubes, oblongs and quadrants have been getting preferential treatment and it’s time to put a stop to it. They will have their comeuppance… what goes round comes round!”

A spokesperson for ovals and triangles was not available at the time of writing.

Meanwhile, there has been a backlash within the IT community too. Dave the Dealer from Top Right Technologies said: “There’s nothing magic about paying an analyst company tens of thousands of dollars to get into a report that no one takes any notice of anyway. It’s more like the tragic quadrant if you ask me.”

To read the full story scroll up, up, up, to the right, right a bit, up, up, right…

TidalScale and inverted server virtualization

TidalScale

Nine-year-old startup TidalScale has developed a distributed hyperkernel which combines different nodes in a cluster of physical servers into a software-defined server. It enables inverse virtualization, combining a cluster of so-called worker nodes into a single large virtual machine with a big memory pool, and is based on the FreeBSD hypervisor bhyve. The hyperkernel virtualizes CPU, memory, and IO into vCPUS, vPages, and vI/O so that they can move around the cluster. The ebb and flow of these virtual components around the cluster is behind the name TidalScale.

Guest operating systems then run on the software-defined server (SDServer), and Red Hat and Ubuntu have certified the hyperkernel through their hardware certification programs. For this to work the hyperkernel has to guarantee distributed coherent shared memory (DCSM), and it does so without a single change to the guest operating systems. It hides the NUMA complexity, presenting a uniform memory architecture to the guest OS and enabling in-memory computing. 

Each node’s memory is like a level 4 cache for the entire software-defined server. The nodes are connected by a private Ethernet interconnect. Nodes can be added or taken away to right-size the cluster at any time. All worker nodes used in a single SDServer must have the same technical specifications in terms of motherboard and BIOS settings, CPUs, memory (DIMM size, speed, quantity), and network interfaces. Built-in, real-time machine learning technology dynamically optimizes both CPU and memory placement for best performance.
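A toy sketch of the bookkeeping behind distributed coherent shared memory, not TidalScale's hyperkernel: a directory records which node currently owns each virtual page, and a remote access triggers a migration so the guest OS sees one flat memory space.

```python
# A toy sketch of distributed shared memory bookkeeping - not TidalScale's
# hyperkernel. A directory tracks which node currently owns each virtual
# page; touching a remote page migrates it (or, in a real system, possibly
# the vCPU) so the guest OS sees a single flat memory space.
class SoftwareDefinedServer:
    def __init__(self, nodes):
        self.nodes = nodes
        self.page_owner = {}                 # vPage id -> node name

    def read(self, node, page):
        owner = self.page_owner.setdefault(page, node)
        if owner != node:
            # Simplest policy: migrate the page to the requesting node.
            # A real hyperkernel may instead move the vCPU to the data.
            self.page_owner[page] = node
        return f"page {hex(page)} served on {node}"

sds = SoftwareDefinedServer(["node-a", "node-b", "node-c"])
print(sds.read("node-a", 0x10))   # first touch: node-a owns the page
print(sds.read("node-b", 0x10))   # remote touch: page migrates to node-b
```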

TidalScale can run Oracle Linux, CentOS, and SUSE. Windows Server support was coming in spring 2021. However, nothing has yet been announced. TidalScale runs on AWS, aggregating multiple bare-metal Amazon Elastic Compute Cloud (EC2) instances into a single virtual server, and also in the IBM cloud. It does not run in Azure or the Google Cloud platform.

Chief engineer Michael Berman discusses the hyperkernel in a video.

With a TidalScale virtual server, cores and memory within the cluster are transparently moved and reallocated in microseconds to fit the target application and optimize performance.

A TidalGuard feature monitors the health of the physical servers, and enables them to be hot-swapped in minutes prior to a system failure. Applications continue to execute and require no downtime. TidalScale claims this effectively adds two nines to the base server's overall system uptime by detecting and avoiding over 90 percent of hardware failures. The feature also enables servers to be taken down for patching and maintenance.

You can get a TidalScale product sheet here.

History

TidalScale was started in 2013 by original CEO and now CTO Ike Nassi, lead software engineer Kleoni Ioannidou, and chief engineer Michael Berman. Nassi was EVP and chief scientist at SAP when the category of in-memory databases – think HANA – was established. Ioannidou left in 2018 to join C3.ai as a lead software engineer. 

TidalScale had $3 million in seed funding in 2013 and an $8 million A-round the next year, led by Bain Capital. Then there were what looked like two top-up funding events: a $1.5 million venture round in 2016 and another $1.5 million venture round in late 2018, followed a month or so later by a $27 million B-round headed up by SK hynix.

There was another top-up-like venture round for $1.2 million in 2020. This pattern suggests it might need another funding round in a year or two.

Gary Smerdon joined as president and CEO in 2016, coming from being a senior advisor at SK Telecom Americas, acting CEO at Pavilion Data Systems for less than a year while the main CEO secured VC funding, and EVP, Chief Strategy and Product Officer at Fusion-io before that.

TidalScale and composable systems

TidalScale’s technology creates a virtual scale-up server with cores, memory, and Ethernet IO composed, as we might say, into its SDServer.

A composable system, such as Liqid’s, does the same but includes accelerators and directly attached storage. Its software composes dynamic virtual servers using pools of separate CPU+DRAM, GPU, Optane SSDs, FPGA, networking, and storage resources accessed over a PCIe fabric. Composed servers are deconstructed after use and their elements returned to the source pools for reuse.

In theory, any PCIe-connected server component can be composed, but Liqid does not deconstruct, as it were, CPU and DRAM separately; they are not connected by the PCIe bus. TidalScale can, but its remit does not extend to having non-x86 server processors be part of its SDServer scheme, meaning no FPGAs, GPUs, DPUs, or SmartNICs.

Liqid can look forward to the CXL (Compute Express Link) world with externally connected CXL memory-sharing devices becoming part of its composable ecosystem. TidalScale is excluded from this unless it adopts PCIe and the CXL protocol to expand its SDServer scope beyond the limits of Ethernet-connected bare metal servers.

Storage suppliers make the top 5000 US fastest-growing companies list

Five storage suppliers have been ranked in the top 5000 list of North America’s fastest-growing companies: OwnBackup, Panzura, Komprise, SingleStore and OpenDrives. 

Update. Panzura added to the list, 18 Aug 2022.

The Inc. 5000 is an annual report published by Inc. business magazine, and the 2022 edition ranked companies by revenue growth between 2018 and 2021. Inc. explains that the minimum revenue required for list inclusion is $100,000 in 2018 and $2 million in 2021. Top-ranked BlockFi, a financial services business, achieved spectacular 245,616 percent revenue growth in the period. SnapNurse in second place grew at 146,319 percent. Third-ranked CDL 1000 attained 56,135 percent growth. The numbers decrease rapidly as we go down the ranks.

  • Backup services supplier OwnBackup was ranked number 839 with 752 percent growth.
  • Cloud file sync-and-sharer Panzura at 1,343 with 485 percent ARR growth.
  • Unstructured data manager Komprise came in at number 1,971 with 306 percent growth.
  • Database supplier SingleStore achieved the 3,819 rank with 130 percent growth.
  • Storage hardware supplier OpenDrives came in at number 4,652 with 92 percent growth.

These are highly respectable numbers.

Panzura said it was growing at four times the rate of its nearest competitors within the survey period. CEO Jill Stelfox said: “Joining the Inc. 5000 list of fastest-growing private companies in America is an honor. … Our mission to provide every CIO with rock-solid security, complete visibility, and the advanced ability to put their data to work drives us each day.”

Kumar Goswami, CEO and co-founder of Komprise, said he was incredibly humbled in a statement, and: “Our rapid growth over the past few years speaks to real enterprise pains in managing data growth and discovering a frictionless, safe path to the cloud. We’re excited to help our customers on the next stage of unstructured data management maturity by enabling automated workflows to move the right unstructured data into cloud AI/ML and other big data tools.”

The list’s overall median growth rate was 230 percent, but the median of the top 500 companies was 2,144 percent – up from 1,820 percent for the 2021 list.

The Worldwide Insurance Network came in at 5,000 with 80 percent growth. Can we conclude that storage vendors not included in the Inc. 5000 were excluded because they had less than 80 percent revenue growth from 2018 to 2021?

Inc. states: "To qualify, companies must have been founded and generating revenue by March 31, 2018. They must be US-based, privately held, for-profit, and independent – not subsidiaries or divisions of other companies – as of December 31, 2021."

There is also a need to apply to Inc. for consideration as a listed business. Here’s what Inc said about applying for the 2021 list: “The application process is also simple: Input your contact information and a few company details, submit your revenue numbers, and finalize your payment. Once we’ve received your application, we’ll ask you to verify your 2017 and 2020 revenue figures. You’ll learn whether you made this year’s list by the end of July.”

The application fee for the 2023 list is $195 – not that much, but think about this: Inc. made $975,000 just from the 5,000 successful applicants. If there were 5,000 unsuccessful applicants then it made $975,000 from them as well. (Ed – Perhaps Blocks & Files should run an annual top 1,000 storage company growth ranking report. We could make hundreds of thousands of … dream on.)

Our takeaway is that companies in the list nominated themselves and paid Inc. a small fee. Thus the Inc. 5000 does not review all companies in the USA and rate their growth. We could conclude that storage companies not on the list simply did not bother applying – rather than that they failed to pass the bar to get into the top 5000. That's why we don't see companies like Weka and HYCU and many, many more in the list.

Bootnote

The top 500 companies are featured in the September issue of Inc. magazine, which will be available on August 23.