
Your occasional storage digest featuring Druva, WekaIO with STACs of benchmarks, and more

PHAEDO manga comic cover

This week, Druva helps notebook and desktop users defeat ransomware, and WekaIO feasts on more STAC benchmarks.

Druva gets fiery-eyed

Druva, the backup service vendor, has cut a deal with security outfit FireEye to provide ransomware protection for desktop and notebook users. 

FireEye Helix, a cloud-hosted security operations platform, integrates via APIs with Druva InSync to inspect endpoint restoration from backup files.

Sean Morton, customer experience VP at FireEye, said: “Traditional backup solutions can be a ‘black box’, but Druva’s unique capabilities offer greater visibility into ongoing activities.”

Druva said the combined system identifies abnormal data restoration, ensuring data being restored stays within the enterprise's network. It verifies compliance with geography-based data access and restoration policies and makes visible who is accessing the system, tracking unauthorised admin login attempts, password changes and admin attempts to download or recover data.

The joint software generates alerts according to pre-built rules. These trigger pre-configured playbooks that help security analysts assess an event and mitigate or fix it.

WekaIO wins STACs of benchmarks

WekaIO, the fast parallel file system software startup, has topped another set of STAC benchmarks.

As we wrote 12 months ago: “The STAC-M3 Antuco and Kanaga benchmark suites are regarded as an industry standard for evaluating the performance of systems performing high-speed analytics on time series data. STAC benchmarks are widely used by banks and other financial services organisations.”

The STAC-M3 tests involved a hefty setup with 32 HPE servers. The Kx kdb+ 3.6 database system was distributed across 14 HPE ProLiant XL170r Gen10 servers, with data stored in a cluster of 18 HPE ProLiant XL170r Gen10 servers with a total of 251TiB of SSD capacity, all accessed via WekaIO WekaFS 3.6.2 software.

WekaIO outperformed all publicly disclosed results in 11 of 24 Kanaga mean-response time (MRT) benchmarks and outperformed all publicly disclosed results in all Kanaga throughput benchmarks (STAC-M3.β1.1T.*.BPS).

When compared to a kdb+ solution involving an all-flash NAS and 4 database nodes (SUT ID KDB190430), it was faster in all 24 Kanaga MRT benchmarks and in 15 of 17 MRT Antuco benchmarks.

There’s more information in STAC benchmark test documentation (registration required).

Shorts

Fujitsu has announced PRIMEFLEX for VMware vSAN to streamline the deployment, operation, scalability and maintenance of VMware-based hyperconverged infrastructure (HCI). The system is intended for general purpose virtualization, virtual desktop infrastructures, big data and analytics, remote and branch office, edge computing and mission-critical workloads such as SAP HANA.

Kioxia has published a Manga comic designed by AI, called PHAEDO. This is the story of a homeless philosopher and Apollo, his robot bird, who try to solve crimes in Tokyo in 2030. Basically, AI software developed the character images using Nvidia GPUs and Kioxia SSDs. Kioxia says PHAEDO is the world’s first international manga created through human collaboration, high-speed and large-capacity memory and advanced AI technologies.

HPE has created vSAN ReadyNodes with VMware using ProLiant servers to add to its hyperconverged infrastructure appliance (HCIA) portfolio. The ProLiants are installed with vSphere, vSAN, and use firmware that complies with VMware Hardware Compatibility List (HCL). Customers receive HPE-based support for all Level 1 and Level 2 requests, with a handoff to VMware support for Level 3 software support requests.

IBM has joined the Active Archive Alliance. Chris Dittmer, IBM VP for high end storage, said: “Our archive storage solutions combine Exabytes of storage with geo-dispersed technology and built-in encryption for data integrity and confidentiality. We are excited to join the Active Archive Alliance and to help promote solutions that deliver rapid data search, retrieval, and analytics.” He’s talking mainly about disk-based IBM COS.

Wells Fargo senior analyst Aaron Rakers tells subscribers Micron pre-announced a solid (surprising) upside for its F3Q20 this week – revenue of $5.2-$5.4B and non-GAAP EPS of $0.75-$0.80 vs. the company’s initial $4.6-$5.2B / $0.40-$0.70 guide. Rakers reckons demand for Micron’s server DRAM has shot up.

WANdisco, which supplies replication technology, has won a global reseller agreement with an unnamed systems integrator. This integrator, with 240,000 people in 46 countries – think a Capgemini-class business – is going to build its own data migration practice for moving data at scale into the public cloud.

Veeam has published the Veeam 2020 Data Protection Trends Report, which reports that almost half of global organisations are hindered in their digital transformation (DX) journeys by unreliable, legacy technologies. Forty-four per cent cite a lack of IT skills or expertise, and one in 10 servers suffers unexpected outages each year — problems that last for hours and can cost hundreds of thousands of dollars. Veeam’s conclusion? This points to an urgent need to modernise data protection and focus on business continuity to enable DX. Buy Veeam products, in other words. No surprise here.

People

Commvault has hired Jonathan Bowl as general manager of Commvault UK, Ireland & Nordics.

Beth Phalen, President of Dell EMC’s Data Protection Division, has resigned.

SoftIron has appointed Andrew Moloney as its VP of strategy to lead the company’s go-to-market planning and execution as it expands its product portfolio and global presence. SoftIron recently completed a $34m Series B funding round.

IBM soups up Spectrum Protect Plus

IBM has extended Spectrum Protect Plus (SPP), a data protection and availability product for virtual environments.

In hybrid clouds, enterprises can deploy IBM Spectrum Protect Plus on premises to manage AWS EC2 EBS snapshots. Alternatively, they can deploy IBM Spectrum Protect Plus on AWS for the “all-in-the-cloud experience”. Enhanced AWS workload support includes EC2 instances, VMware virtual machines, databases, and Microsoft Exchange.

There is better protection for containers. Developers can set policies to schedule Kubernetes persistent volume snapshots, replication to secondary sites and copying data to object storage or IBM Spectrum Protect for secure long-term data retention.

Developers can back up and recover logical persistent volume groupings using Kubernetes labels. IBM said this capability is key, as applications built using containers are actually logical groups of multiple components. For example, an application may have a MongoDB container, a web service container and middleware containers. If these application components share a common label, users can use the Kubernetes label feature to select the logical application grouping instead of picking the individual persistent volumes that make up the application, as the sketch below illustrates.
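To make the label idea concrete, here is a minimal sketch of label-based selection of persistent volume claims using the Kubernetes Python client. The label "app=webshop" and the cluster setup are hypothetical examples, not an SPP-specific convention or IBM's implementation.

```python
# Minimal sketch: find every PersistentVolumeClaim carrying a shared application
# label, the same kind of grouping SPP uses to protect a logical application
# rather than individual volumes. The label value is hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

pvcs = v1.list_persistent_volume_claim_for_all_namespaces(label_selector="app=webshop")
for pvc in pvcs.items:
    print(pvc.metadata.namespace, pvc.metadata.name, pvc.spec.volume_name)
```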

Logical persistent volumes associated with Kubernetes namespaces can be backed up and recovered by developers.

Users can now back up, recover, reuse, and retain data in Windows file systems, with Spectrum Protect Plus agentless technology, including file-level backup and recovery of file systems on physical or virtualized servers.

Spectrum Protect

IBM continues to sell Spectrum Protect, a modern incarnation of the old Tivoli product. The software backs up physical and virtual servers on premises and in the public cloud. The latest release includes retention set to tape.

Spectrum Protect also enables users to back up the Spectrum Protect database directly to object storage including IBM Cloud Object Storage, AWS S3, Microsoft Azure, and other supported S3 targets.

There is a new licensing option for the Spectrum Protect Suite: the Committed Term License. These can be bought for a minimum of 12 months and up to a five-year term. IBM says the new pricing provides a lower entry-cost option, a flat price per TB, and increased flexibility with no vendor lock-in.

A new release of IBM’s Spectrum Copy Data Management lets users improve SAP HANA point-in-time (PIT) recovery with native log backups. Prior to this release, users could recover data using hourly snapshots. Log support enables much more granular recoveries.

Scality claims big savings for hospital data storage

Object storage supplier Scality says five healthcare customers each save an average $270,000 per PB over three years compared to their previous storage, and get data 52 per cent faster.

Applications include imaging, patient electronic health records, genomics sequencers, Biomedical, CCTV, EMR, radiology and accounting.

Data growth is a big issue for hospitals, Paul Speciale, Scality chief product officer, writes. “If we look at medium-to-large hospitals (those over 250 patient beds, which is common in cities with 100,000 people), in most cases there is now a petabyte scale data problem – medical image storage is becoming a much more prominent part of budgets.”

Scality has sponsored an IDC report to help persuade other healthcare customers to save money by using its RING software.

IDC interviewed five Scality customers – four hospital groups and a genomics research institution – with an average of 2.7PB of data in their Scality RING object storage. A chart in the report shows the average costs for the Scality system and the customers’ equivalent storage systems not using Scality. Costs are split into initial and ongoing costs for storage hardware and software.

The Scality cost saving is 28 per cent, equating to $90,000 per PB per year or $90 per terabyte per year. Customers also required 46 per cent less staff time, and cited benefits such as scalability and ease of integration with other systems. One interviewee said: “Scality does not require a lot of management effort, unlike a SAN. Scality has allowed us to consolidate our storage arrays, which reduced the heterogeneous systems to manage a unified backup system.”

Scality counts more than 40 hospitals, hospital systems and genomics research institutions worldwide as customers.

Fujifilm creates software framework for object storage on tape

  • Update: Scality Zenko is the S3 server inside Fujifilm’s Object Archive. 24 June 2020.
  • Update: Cloudian and the Object Archive added. 9 June 2020.

Fujifilm is paving the way for tape cartridges and libraries to become object storage archives.

Traditionally files from backup software are stored sequentially on magnetic tape for long-term retention. The last major innovation in tape storage access was LTFS, the Linear Tape File System, in 2011. This provides a file:folder interface to tape cartridges and libraries, with a drag and drop method for adding or retrieving files.

Now Fujifilm has developed a method for transferring objects and their metadata from disk or SSD drives to a tape system. This means tape can be used as an archive medium for object data.

Fujifilm’s new open source file format, OTFormat, enables objects and metadata to be written to, and read from, tape in native form.

The company’s Object Archive software uses OTFormat and an S3 API interface to provide the framework for an object storage tape tier – a software-defined tier, as Fujifilm puts it.
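Because Object Archive presents a standard S3 API, clients should be able to address the tape tier with ordinary S3 tooling. Here is a minimal sketch using boto3; the endpoint URL, bucket name, object key and credentials are hypothetical placeholders, and the actual interface details depend on Fujifilm's implementation.

```python
# Minimal sketch: writing and reading an object against an S3-compatible
# endpoint, as a client of an object-to-tape tier might. All names below are
# hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-archive.example.local",  # hypothetical Object Archive endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

with open("scan-123.dcm", "rb") as f:
    s3.put_object(Bucket="cold-archive", Key="scans/2020/scan-123.dcm", Body=f)

obj = s3.get_object(Bucket="cold-archive", Key="scans/2020/scan-123.dcm")
print(obj["ContentLength"], "bytes retrieved")
```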

Update: Scality’s Zenko is the S3 server inside the Object Archive.

Fujifilm Object Archive

This tape tier can be an on-premises tape library. Fujifilm said storing petabytes of object data is much cheaper on tape than on disk drives or SSDs. It is also cheaper than the public cloud. However, tape’s slow access speed makes it suitable only for archiving low access rate objects or for object storage backup where slow restoration times are acceptable.

According to a Fujifilm white paper, tape has a 4x data reliability advantage over disk drives. Tape also provides a literal air gap between the network and the tape cartridges when the latter are offline. This provides immunity from ransomware and other malware attacks, and a reliable means of recovering ransomware-encrypted data.

LTO roadmap. Assume 18 – 24 months between generations.

Fujifilm is a maker of LTO tape; the current generation is LTO-8, with 12TB raw capacity per cartridge and 30TB compressed. The LTO roadmap extends to LTO-12 with 192TB raw capacity and 480TB compressed. LTO-12 tapes will hold up to 16 times more data than LTO-8 cartridges. There’s lots of capacity headroom, in other words.

No UI or data mover

At the time of publication, Fujifilm had not revealed a user interface or data mover for its Object Archive. Such a UI/data mover is needed to select older, low-access objects from a disk or SSD-based object store and move them to a tape system. It would also present a list of objects on the tape system and a means of retrieval.

Fujifilm perhaps needs one or more third parties to provide object backup and restore capabilities. There is a long list of potential partners. Speaking of which… Jerome Lecat, Scality’s CEO, when asked about Fujifilm’s Object Archive, told us: “Unfortunately I cannot tell you anything more specific today, but watch our news in the coming weeks, especially around Zenko.”

Cloudian and the Object Archive

Cloudian CEO Michael Tso tells us: “We know a lot about this. About 3 years ago, Fujifilm’s Japan-based product team reached out to us to partner with them for this product. They use HyperStore as their reference S3 platform for their internal development and test, we acted as their advisor on architecture and S3 API design/compatibility. We also hosted them in our US office multiple times, and are engaged with Fujifilm US team as well.”

“We like Fujifilm’s team, their product, we’ve supported their journey. And yes, HyperStore can tier into it. 

“I think this product is interesting in a few ways:

  1. S3 is THE standard – they didn’t implement some multi-protocol translation gateway, or their own dialect of S3 (like some other tape vendors), they built S3 natively into the box. 
  2. Object is the present – their data layout on tape is optimized for object. That’s a really big deal because (a) it’s a lot of work and (b) it helps resiliency and efficiency. They went to all this work because object is the present, file/block are the past.
  3. Object Stores need multi-cloud – OnPremises/Dark site cold archive is a real use case. HyperStore has native multi-cloud data management, so we simply plug in Fujifilm boxes as another “cloud target”, tier data in/out based on lifecycle policies. HyperStore’s single namespace can cover data in multiple clouds. All metadata stays in HyperStore so users can browse and search their entire namespace whether the actual data is in HyperStore, in a deep archive tape box, or in a public cloud.

This information means that Cloudian is ready to rock and roll with the Object Archive. We await the forthcoming news about its availability.

Snowflake climbs into embed with Salesforce

Snowflake has snuggled closer to its investor Salesforce with two tools that link their cloud-native systems.

The integrations enable customers to export Salesforce data to Snowflake and query it with Salesforce’s Einstein Analytics and Tableau applications. The idea is that enterprises should have a single repository for all their data.

Einstein Analytics Output Connector for Snowflake lets customers move their Salesforce data into the Snowflake data warehouse alongside data from other sources. Joint customers can consolidate all their Salesforce data in Snowflake. Automated data import keeps the Snowflake copy up to date.

Tableau dashboard

Einstein Analytics Direct Data for Snowflake enables users to run queries on their Salesforce data held in Snowflake. The queries can also draw on data from other sources such as business applications, mobile apps, web activity, IoT devices, and datasets acquired through the Snowflake Data Marketplace and Private Data Exchange.
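For a flavour of the cross-source querying this enables, here is a minimal sketch using the Snowflake Python connector. The account, warehouse, database and table names are hypothetical placeholders, not the connectors' actual schema or Salesforce's export layout.

```python
# Minimal sketch: joining Salesforce data landed in Snowflake with data from
# another source. All connection parameters and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="analyst",
    password="********",
    warehouse="ANALYTICS_WH",
    database="SALES",
)

cur = conn.cursor()
cur.execute("""
    SELECT o.account_name,
           SUM(o.amount)      AS pipeline,
           SUM(w.page_views)  AS web_activity
    FROM salesforce_opportunities o
    JOIN web_events w ON w.account_name = o.account_name
    GROUP BY o.account_name
    ORDER BY pipeline DESC
""")
for row in cur.fetchall():
    print(row)
```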

Einstein Analytics Output Connector for Snowflake will be available for customers later this year. Einstein Analytics Direct Data for Snowflake is in open beta and will also be generally available later this year.

Salesforce was a co-lead investor in Snowflake’s $479m funding round earlier this year, so it is literally invested in Snowflake’s success.

NetApp acquires Spot, a ‘game-changing’ cloud compute broker

NetApp is buying Spot.io, an Israeli startup that specialises in cloud cost controls, for an undisclosed sum. The price is said to be $450m, according to Calcalist, an Israeli publication.

Amiram Shachar, Spot CEO and co-founder, said in a blog post about the deal: “We are going to build the next game-changing business unit at NetApp.”

Spot claimed more than 500 customers, including Intel – a backer – Sony, Ticketmaster and Verizon, in a 2017 funding round. The company was founded just two years earlier and has amassed $52.6m in venture funding in total.

So what is NetApp buying? In effect, Spot is a cloud compute broker or virtual cloud provider that works with all three cloud giants. Spot’s Elastigroup product runs business applications on the lowest cost, discounted cloud compute instances, while maintaining service level agreements.

NetApp claims the technology can save up to 90 per cent of public cloud compute and storage expenses. These typically account for up to 70 per cent of total cloud spending.

Push these numbers through a mental spreadsheet: assume $1,000 per month public cloud spend, of which compute and storage costs $700. By NetApp’s reckoning, Spot technology would save $630, leaving total spend of $370 per month. If this technology works, it’s a no-brainer.

NetApp will use Spot to establish an “application driven infrastructure” to enable customers to deploy more applications to public clouds. Spot’s as-a-service will provide continuous compute and storage optimisation of legacy enterprise applications, cloud-native workloads and data lakes.

Anthony Lye, head of NetApp’s public cloud services business, said in a canned quote: “Waste in the public clouds driven by idle resources and over-provisioned resources is a significant and a growing customer problem slowing down more public cloud adoption.

Diagram showing Spot optimising cloud compute costs and NetApp optimising cloud storage costs

“The combination of NetApp’s… shared storage platform for block, file and object and Spot’s compute platform will deliver a leading solution for the continuous optimisation of cost for all workloads, both cloud-native and legacy. Optimised customers are happy customers and happy customers deploy more to the public clouds.”

NetApp and the public cloud

Here we have an on-premises storage array supplier encouraging customers to consume its storage services in the public cloud. That seems significant.

NetApp bought CloudJumper at the end of April and gained technology on which to base a NetApp Virtual Desktop Service providing virtual desktop infrastructure from the public cloud to work-from-home office staff. This will involve cloud compute and storage and Spot technology could be used to optimise the associated costs.

In its latest results (Q4 fy2020) NetApp’s public cloud business contributed 7.9 per cent of the revenue in the quarter. There was a $111m annual recurring revenue run rate in the business, up 113 per cent. This is good growth but from a small base and William Blair analyst Jason Ader thinks the “ramp has been well below management targets”.

In short, NetApp needs to grow its public cloud business.

“The Public Clouds have become the default platforms for all new application development,” Lye writes in a company blog. Application developers “don’t want to have to understand infrastructure details to be able to develop and deploy their code… Why shouldn’t the infrastructure be able to determine how to optimise performance, availability and cost, even as demands change and evolve?”

The deal should close by the end of October subject to the usual conditions. Spot will continue to offer and support its products as part of NetApp.

Elastigroup tech

Elastigroup creates a public cloud virtual server compute instance composed of Spot Instances, reserved instances and on-demand instances. This virtual instance is always available and dynamic, changing the actual instance type to continually find and use the lowest cost instances available.

The number of servers is scaled as demand requires and as public cloud billing periods start and terminate. Spot Instance pricing operates in a spot market; the company’s algorithms predict spot market price fluctuations and provision on-demand instances when spot prices rise too high.
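Elastigroup's internals are not public, but the underlying mechanics can be sketched against the AWS APIs: check recent spot pricing for an instance type and fall back to on-demand capacity when spot rises above a chosen threshold. The instance type, on-demand price and threshold below are hypothetical, and Spot's ML-based prediction is far more sophisticated than this.

```python
# Minimal sketch of the decision Elastigroup automates: inspect recent spot
# pricing and fall back to on-demand capacity when spot gets too expensive.
# Prices and threshold are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

history = ec2.describe_spot_price_history(
    InstanceTypes=["m5.xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=10,
)
cheapest_spot = min(float(p["SpotPrice"]) for p in history["SpotPriceHistory"])

ON_DEMAND_PRICE = 0.192           # hypothetical on-demand $/hour for m5.xlarge
THRESHOLD = 0.7 * ON_DEMAND_PRICE  # hypothetical cut-off

if cheapest_spot <= THRESHOLD:
    print(f"Use spot capacity at ${cheapest_spot:.4f}/hr")
else:
    print(f"Spot at ${cheapest_spot:.4f}/hr is too close to on-demand; provision on-demand instead")
```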

Spot background

Spot was established in Tel Aviv in 2015 by Shachar, Liran Polak and Ahron Twizer, who has since left the company.

Shachar and Polak were members of MAMRAM, the ‘cloud infrastructure’ unit of the Israel Defense Forces, and were responsible for managing the IDF’s data centres and virtualization systems.

They later pursued degrees in computer science and their academic research focused on addressing data centre inefficiencies by utilising Spot Instances in the Amazon cloud.

Spot Instances were excess compute capacity sold by AWS with discounts up to 90 per cent compared to normal compute instance pricing. They could be turned off with two minutes notice if AWS needed the capacity for its normal business.

Shachar and Polak designed a machine learning model to predict when AWS would want the spot instances terminated, and Shachar’s employer moved to AWS Spot Instances for its IT compute needs. From there, they extended the service to Azure and Google Cloud.

IBM AI storage supports Nvidia’s A100 GPU powerhouse

IBM’s Storage for Data and AI portfolio now supports the recently announced Nvidia DGX A100, which is designed for analytics and AI workloads.

David Wolford, IBM worldwide cloud storage portfolio marketing manager, wrote last week in a company blog: “IBM brings together the infrastructure of both file and object storage with Nvidia DGX A100 to create an end-to-end solution. It is integrated with the ability to catalog and discover (in real time) all the data for the Nvidia AI solution from both IBM Cloud Object Storage and IBM Spectrum Scale storage.”

Big Blue positions IBM Storage for Data and AI as components for a three-stage AI project pipeline: Ingest, Transform, and Analyse/Train. There are five products:

  • Cloud Object Storage (COS) data lake storage
  • Spectrum Discover file cataloguing and indexing software
  • Spectrum Scale scale-out parallel access file storage software,
  • ESS 3000 – an all-flash NVMe drive array, with containerised Spectrum Scale software installed on its Linux OS and with 24 SSD bays in a 2U cabinet
  • Spectrum LSF (load sharing facility) – a workload management and policy-driven job scheduling system for high-performance computing
IBM’s view of its storage and the AI Project pipeline

IBM is updating a Solutions Blueprint for Nvidia to include support for the DGX A100. The new server uses A100 GPUs, which Nvidia claims are 20 times faster at AI work than the Tesla V100s used in the prior DGX-2.

Nvidia DGX-A100.

The IBM blueprint recommends COS to store ingested data and function as a data lake. Spectrum Discover indexes this data and adds metadata tags to its files. LSF manages AI project workflows and is triggered by Spectrum Discover to move selected data from COS to the ESS 3000 with its Spectrum Scale software. There, it feeds the GPUs in the DGX A100 when AI models are being developed and trained.

Other storage vendors, such as Dell, Igneous, NetApp, Pure Storage and VAST Data will also support the DGX A100. Some may try to cover the AI pipeline with a single storage array.

Data storage CEOs decry US racism, in response to George Floyd protests

The killing of George Floyd in Minneapolis on May 25 sparked mass protests against police brutality against black Americans across the USA. With President Trump urging the states to use the US military to quell civic unrest, the mostly peaceful protests show little sign of abating.

There are plenty of recent cases of unarmed black Americans who have died in police custody. But it feels different this time. For instance, in our small neck of the woods, CEOs of US tech companies are lining up to decry endemic racism against African-Americans and urging action to end this blight.

We now publish statements from four CEOs of data storage companies. We will update this article, if more storage leaders follow suit.

George Kurian

NetApp CEO George Kurian said in a statement that during the Covid-19 pandemic “at precisely the time when we are called upon to be our very best, we are witness to multiple acts of unspeakable cruelty and social injustice, particularly against the African-American community in the United States.”

He decried “the longstanding inability of America to truly confront our shared history,” and said NetApp has conducted an all-hands meeting to discuss the issue. He called on NetApp employees to “listen and learn from our underrepresented colleagues. 

“For it is in listening, that we can begin to really understand, and from genuine understanding comes empathy. And from empathy begins the journey to root out intolerance, bigotry, and injustice in all its forms.”

Dheeraj Pandey

Nutanix Chairman and CEO Dheeraj Pandey posted a message on LinkedIn: “Nutanix stands in solidarity with the Black community against hate, racism, and injustice. We join the community, and all of humanity, as we mourn the death of George Floyd, the latest senseless loss among many others.”

“To the Nutanix family, and most especially to our Black colleagues: we grieve with you. We see you and we support you. A world where you feel unsafe is a world that is unsafe for everyone. A world that is set up to see you fail, fails all of humankind. … Changing the status quo is everyone’s responsibility.”

Mohit Aron.

Mohit Aron, Cohesity founder and CEO, wrote on Twitter: “My heart goes out to anyone affected by racial intolerance. @Cohesity we don’t tolerate racism, bigotry, gender bias, or discrimination. I came to the U.S. because of the ideology that everyone should be treated equally and that is also why Respect is one of our core values.”

Dell Technologies founder and CEO Michael Dell posted a message to all Dell employees. “The murder of George Floyd is an atrocity,” he wrote. “We all stand in horror, grieving as a nation alongside his family and his community. To see a man killed, a life ended cruelly and senselessly is something that will haunt me forever. But for people of color in communities all over this country and around the world – that footage is not a surprise, it is all too familiar. The fault lines of our society are laid bare. From the devastating and disproportionate impacts of COVID-19 to the devastating impacts of police brutality, the long-standing racial injustice in America that began 400 years ago is impossible to ignore. And the people who have been ignored are now demanding to be heard. We are listening.”

Michael Dell

He declared: ”Because for all the work we do within our own company, there will never be true justice or equality until we root out the rotten underbelly of racism that is eating away at the most cherished values we hold dear. 

Real change requires us all to actively participate in the hard work that lies ahead … the hard work that has to be done for our nation and our world to heal, grow stronger, and for us to move forward as one people with a shared voice.”

Datadobi gives object lesson in migrating S3 data

Datadobi, the file data migration specialist, has added S3 object transfer to its capabilities.

The company has supported S3 since 2017 via Dobisync, a product that copies files to an S3 target as a form of data protection. It has updated the technology to migrate from any S3 source to any S3 target and has included this tool in DobiMigrate 5.9, which hit the streets yesterday.

Carl D’Halluin, Datadobi CTO, said the new software release incorporates “everything we have learned about S3 object migrations into our DobiMigrate product, making it available to everyone. We have tested it with the major S3 platforms including AWS to enable our customers’ journey to and from the cloud.”

DataDobi migration diagram.

The S3 migration facility covers migration from any vendor or public cloud S3 store to any vendor or public cloud S3 store. It moves objects and object metadata, and verifies object transfer correctness by hashing each object as it is migrated. A fresh hash is calculated at the target site, and the source and target hashes are compared. If they match, the transfer has been successful. 

DobiMigrate creates a report to show every single hash of every single object, which is kept for future auditing. This could be an extremely large report if millions of S3 objects are migrated.
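Datadobi has not published its hashing internals, but the verification pattern it describes can be sketched in a few lines: hash each object as it is read from the source, copy it, independently hash the copy at the target, compare, and log both hashes for audit. This is a minimal illustration under those assumptions, not Datadobi's actual implementation; bucket names, endpoints and the audit-log path are hypothetical.

```python
# Minimal sketch of hash-verified S3-to-S3 object migration, in the spirit of
# the DobiMigrate approach described above (not Datadobi's implementation).
import hashlib
import boto3

src = boto3.client("s3", endpoint_url="https://source.example.com")   # hypothetical
dst = boto3.client("s3", endpoint_url="https://target.example.com")   # hypothetical

def migrate_object(key: str) -> None:
    # Read from the source and hash the object content.
    body = src.get_object(Bucket="src-bucket", Key=key)["Body"].read()
    source_hash = hashlib.sha256(body).hexdigest()

    # Write to the target.
    dst.put_object(Bucket="dst-bucket", Key=key, Body=body)

    # Re-read from the target and hash independently to verify the transfer.
    copied = dst.get_object(Bucket="dst-bucket", Key=key)["Body"].read()
    target_hash = hashlib.sha256(copied).hexdigest()

    if source_hash != target_hash:
        raise RuntimeError(f"Hash mismatch for {key}")

    # Record both hashes for later auditing, as DobiMigrate's report does.
    with open("migration-audit.log", "a") as audit:
        audit.write(f"{key} {source_hash} {target_hash}\n")
```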

Migration progress is tracked on a dashboard. A formal switchover process can be run once all the objects have been moved.

DobiMigrate dashboard

DobiMigrate pricing is based on the number of terabytes to be migrated. Customers buy a fresh license if more data needs to be migrated subsequently.

Clumio debuts free snapshots with Amazon RDS data protection service

Clumio has expanded its AWS service coverage from EBS (Elastic Block Store) to RDS (Relational Database Service).

The SaaS backup startup, which came out of stealth last August, stores EBS and RDS snapshots in a separate AWS account from the customer’s own AWS account. This “air-gapping” separation adds security, particularly against ransomware, according to Clumio. (Note, the two AWS accounts are on the same AWS online system and so Clumio’s use of the term “air-gapping” is an analogy, not a description of a physical reality.)
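Clumio's plumbing is its own, but the generic AWS mechanism for holding snapshot copies under a second "vault" account can be sketched with boto3: grant the vault account access to a snapshot, then copy it there so the copy lives outside the production account. This is a hedged illustration of the pattern, not Clumio's implementation; the snapshot ID, account ID and profile name are hypothetical.

```python
# Minimal sketch: keeping an EBS snapshot copy in a separate AWS account, in the
# spirit of Clumio's out-of-account protection (not its actual implementation).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

SNAPSHOT_ID = "snap-0123456789abcdef0"   # hypothetical EBS snapshot
VAULT_ACCOUNT = "111122223333"           # hypothetical protection account ID

# Grant the vault account permission to read the snapshot...
ec2.modify_snapshot_attribute(
    SnapshotId=SNAPSHOT_ID,
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=[VAULT_ACCOUNT],
)

# ...then, authenticated as the vault account, copy it so the copy is owned
# and retained outside the production account.
vault_ec2 = boto3.Session(profile_name="vault").client("ec2", region_name="us-east-1")
vault_ec2.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=SNAPSHOT_ID,
    Description="Out-of-account protection copy",
)
```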

Poojan Kumar, Clumio CEO, said in a press statement yesterday: “Clumio delivers an authentic data protection service that protects data outside of customer accounts for ransomware protection and lowers costs for long-term compliance. There is no other company, product, or service that can deliver these values in the public cloud today.”

 Clumio RDS protection has three attributes: 

  • Operational point-in-time recovery and free snapshot orchestration for EBS and RDS workloads
  • RDS data recovery with restores, granular record and column retrieval and the aforesaid air-gapped snapshots
  • Compliance with long-term retention, record-level retrieval and deletion, export for legal hold/eDiscovery and data store. 
Poojan Kumar

Kumar told us the RDS record and column-level retrieval and EBS file retrieval “is huge and the beginning of more applications and use cases that can be run on top of the data we are protecting… There is much more to come.”

Clumio does not charge for the snapshots that Amazon RDS data protection creates, Kumar told us. “When it comes to the snapshot orchestration that everyone else charges for, that functionality is available in our free tier that is available for every customer.”

He said: “Enterprise customers tell us there isn’t much value in snapshot orchestration so we are instead focused on what is valuable to them – lower TCO for long-term retention and compliance for their data in the cloud.”

Kumar then took the gloves off: “No longer do customers need to pay for a product that just does snapshot orchestration. They can manage in-account snapshots with Clumio’s free tier. This throws out the window any product to protect cloud workloads by players like Rubrik, Cohesity, Druva, etc.”

Brian Cahill, technology director at FrogSlayer, a Texas software developer, provided a customer quote: “Clumio seamlessly handles operational recovery while significantly reducing our snapshot costs. Even better, I don’t have to worry about snapshot limitations, or develop scripts and retention algorithms because Clumio handles all of this. We now have line of sight to significantly reduce our TCO for long-term data retention since it is an integral part of the Clumio service.”

Clumio’s SaaS backup also protects VMware, VMware Cloud on AWS and Microsoft 365. Clumio for Amazon RDS goes live on June 11.

Cloudian object storage gains observability and analytics tool

Cloudian has released HyperIQ, a monitoring and observability system for its HyperStore object storage platform. The analytics software gives a single view of the Cloudian object storage infrastructure across the estate and can reduce mean time to repair, increase system availability and save costs, the company said today.

Functionality includes monitoring with real-time interactive dashboards and historical data. Customers can analyse resource utilisation by data centre, node, and services, with drill down, using customisable dashboards. There are more than 100 available data panels for the dashboard.

HyperIQ screen.

Customers can monitor user activities, gain insights into usage patterns such as uploads/downloads, API usage, S3 transactions, request sizes and HTTP response codes, and enforce security and compliance policies.

HyperIQ identifies trends and faults and sends alerts, triggering the creation of support cases, repair activity or system tuning to optimise operations. The software issues alerts for predicted hardware failures, helping assess maintenance needs and avoid performance impacts. These alerts can be sent through multiple notification channels such as Slack, OpsGenie, Kafka and PagerDuty. HyperIQ also has a ServiceNow plug-in.
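HyperIQ configures these channels through its own interface, but the Slack path is essentially an incoming-webhook post. A minimal sketch of that mechanism is below; the webhook URL and the alert text are hypothetical placeholders, not HyperIQ output.

```python
# Minimal sketch: posting an alert to a Slack incoming webhook, the kind of
# notification channel HyperIQ supports. URL and message are hypothetical.
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical

alert = {"text": "HyperIQ: predicted disk failure on node cloudian-03, data centre DC1"}
requests.post(SLACK_WEBHOOK, json=alert, timeout=10)
```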

HyperIQ disk IO write time chart.

HyperIQ 1.0 supports HyperStore. The company will release a version later this year that supports HyperFile, Cloudian’s file gateway to HyperStore. Cloudian’s ultimate goal is to make HyperIQ an AIOps self-driving management tool.

Two versions of HyperIQ are available: HyperIQ Basic includes pre-configured dashboards and is offered to customers at no charge. HyperIQ Enterprise includes the analytics features, and is licensed by capacity at 0.025 cents per GB per month, including support. You can check out a HyperIQ datasheet for more information.

Cloudian said it has not yet looked at integrating HyperIQ with HPE InfoSight – HPE is a Cloudian reseller.

Rancher Labs branches out with Longhorn K8s storage

Rancher Labs, the Kubernetes management software developer, has moved up the stack with Longhorn, an open source storage solution for containers.

Sheng Liang, CEO of the California startup, said today: “Longhorn fills the need for 100 per cent open source and easy-to-deploy enterprise-grade Kubernetes storage solution.”

With this launch, Rancher Labs is directly competing with suppliers of storage for Kubernetes-orchestrated containers – which are also Rancher partners.

For cloud native development, enterprises need a container orchestration facility. Wrapping in storage provision with your Kubernetes distribution could save the bother of dealing with a third-party storage supplier such as Portworx and StorageOS. Also, Longhorn includes backup, which means that Rancher is competing with Kasten.

Grabbing K8s by the Longhorn

Longhorn is cloud-native and provides vendor-neutral distributed block storage. Features include thin-provisioning, non-disruptive volume enlargement, snapshots, backup and restore and cross-cluster disaster recovery. Users also get Kubernetes CLI integration and a standalone user interface.

Longhorn in Rancher Labs catalogue.

Storage volumes provisioned by Longhorn can live on the server’s local drives, but Longhorn can also present existing NFS, iSCSI and Fibre Channel storage arrays. For instance, Dell EMC, NetApp and Pure Storage arrays and Amazon’s Elastic Block Store could be block storage sources. Longhorn will also work with Rancher Labs partners – and now competitors – Portworx, StorageOS and MayaData’s OpenEBS to provide their storage for apps orchestrated by Rancher software.
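Provisioning a Longhorn volume follows the standard Kubernetes pattern: create a PersistentVolumeClaim against the Longhorn StorageClass. Here is a minimal sketch using the Kubernetes Python client; the claim name, namespace and size are hypothetical, and "longhorn" is assumed to be the StorageClass name Longhorn installs by default.

```python
# Minimal sketch: requesting a Longhorn-backed volume by creating a
# PersistentVolumeClaim against the "longhorn" StorageClass.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),   # hypothetical claim name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="longhorn",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```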

Longhorn’s backup is a multi-cluster facility and can send backup data to external storage. Its disaster recovery has defined RTO and RPO numbers. Longhorn can also be updated without affecting storage volumes.

Longhorn is free of charge. Rancher has an app catalogue, and users can download Longhorn from its website and also purchase support. There are no licensing fees, and node-based subscription pricing for support keeps costs down.

Longhorn console

Rancher Labs reminds us it is the cloud native industry’s favourite Kubernetes distribution, with some 30,000 active users and more than 100 million downloads. The company cites an IDC forecast that 70 per cent of enterprises will have deployed unified VMs, Kubernetes, and multi-cloud management processes by 2022.

Bootnote: MayaData’s OpenEBS technology uses a fork of Longhorn to manage replication among the storage pods in its Virtual Storage Machines. A blog explains the details.