
Virtual Instruments changes name to Virtana

Virtual Instruments has changed its moniker to Virtana and is rebranding as ‘CloudWisdom’ the cloud cost control application gained through its acquisition of Metricly in August this year.

Virtana can monitor application performance and cost in the public clouds and compare them with the equivalent on-premises numbers, so customers can make better choices about where to run workloads. Using CloudWisdom, customers plan, analyse and optimise their cloud workloads, services and resources.

Virtana said the company name change from Virtual Instruments reflects the change of direction since the company started out as a storage infrastructure monitor with instrumentation hooks into storage networking gear and storage systems. It added WorkloadWisdom, with its app workload performance analytics via the merger with Load DynamiX in 2016. It also bought Xangati that year to add virtualized server and cloud infrastructure performance monitoring. Then it bought Metricly and gained public cloud performance, capacity, and cost analysis capabilities.

Virtana has also developed resource capacity tracking and prediction alongside building application-to-base-infrastructure capabilities. It can also build application-to-and-through-infrastructure path maps: becoming app-aware. That means application outages and slowdowns can be tracked to specific infrastructure events such as a malfunctioning HBA port or network switch buffer overload, for resolution.

The public cloud adds difficulty because applications run on the cloud provider’s masked hardware and software resources in remote data centres. These are inaccessible at the raw hardware level and can be monitored only through the cloud provider’s access layer, as compute instances and storage services.

We need AI help

Virtana is an enthusiastic advocate for “AIOps”, a Gartner-defined concept of performance and infrastructure monitoring. The company argues AI and machine learning models are needed because the incoming event stream for workloads in on-premises and public cloud IT environments is too big and too complex for human operators to detect and fix problems in real time. Virtana expects the monitoring and management of a customer’s IT estate to gradually embrace autonomous functions.

Virtana CEO Philippe Vincent said: “The future of IT operations is autonomous and hybrid… By providing customers with deep infrastructure visibility through an app-centric, real-time approach, we’re delivering the foundation for intelligent automation and helping customers de-risk their journey to the hybrid cloud.”

CloudWisdom and VirtualWisdom are available for free trials on Virtana’s website.

Commvault CEO boosts company growth hormone

Commvault is an old school enterprise data protection business re-inventing itself in a hurry.

I came to this conclusion after meeting CEO Sanjay Mirchandani last week at the Commvault GO conference in Denver. With my usual flair for putting people at ease, I kicked off the interview by asking him if he was a knee replacement, heart transplant or growth for Commvault. “Growth hormone,” he replied.

Let’s see why Commvault could do with a growth injection.

Sanjay Mirchandani

Commvault’s traditional business market has been assaulted on all fronts: by backup appliances; by Veeam offering simpler and faster virtual server backup; by the public cloud offering cheaper storage; by secondary data management startups like Cohesity; and by a potential killer – SaaS-based data protection as a service.

The company remains at the top of Gartner’s backup and recovery Magic Quadrant but revenue growth since 2014 has been anaemic. Currently it is a $160m to $180m revenue per quarter company – a $640m – $720m run rate business. Faster-moving competitors like Veeam have surged up to a billion dollar run rate and beyond. Also, Commvault typically makes small losses or low single-digit net profits each quarter.

This has aroused the interest of activist investor Elliott Management, which took a 10 per cent stake in Commvault in 2018 and pressed for changes. The upshot was the resignation last year of previous CEO Bob Hammer and restructuring.

Now Mirchandani, who joined the company from Puppet Labs where he was also CEO, is accelerating and widening the reforms instigated by Hammer.

He is nine months into the job of rejuvenating Commvault and has already experienced a couple of tricky quarters with revenue misses. The first quarter was already part-way through when he arrived, but the second happened on his watch, and the effect was to galvanise his change agenda.

“The number one priority is top line growth,” Mirchandani told me: “It absolutely should be a billion dollar company.”

The route is via unifying data and storage management across on-premises IT and public clouds. Hedvig’s DSP, running in both environments, will provide the unified storage, and Commvault can provide the unified data management.

The moves Commvault is making include:

Growing outside the backup reservation

Commvault is trying to expand beyond data protection and into data management and primary storage. That means taking on the primary storage systems suppliers such as Dell EMC, Hitachi Vantara and HPE, IBM, NetApp and Pure Storage plus upstarts including Excelero, Kaminario, and Pavilion.

In doing this, Commvault is attempting something no backup supplier has done successfully, with the exception of Veritas – and as that company has been offloaded by Symantec to private equity and is struggling, I think we can discount it.

Primary storage supplier collision

Commvault is not making a full-frontal assault on primary storage now. The company forecasts a collision with primary storage vendors, based on future enterprise IT containerisation under a unified data and storage management umbrella. Hedvig and its Distributed Storage Platform is the path to this unified world.

The collision with primary storage will happen because Commvault’s enterprise data protection market is growing too slowly. However there is a growth opportunity in the small and medium enterprise market, where Commvault has been less successful to date. The company’s new Metallic SaaS is pitched at this market as a CAPEX-saving data protection service that removes the need for on-premises backup servers and dedupe-to-disk target appliances.

Companies risk data fragmentation as they adopt hybrid on-premises and public cloud IT models. This is because their data will reside on separate on-premises and public cloud data protection silos. Commvault thinks it has the remedy.

A data management software layer will bridge these silos, to glue fragmented data back together and provide a unified data management sphere across them. That’s the first step. Containerised applications will move between public clouds and the on-premises world, and data will have to move with them, in a universal data plane.

The crux is that primary storage and secondary storage become use cases for data rather than separate silos. Data flows from a primary to a secondary storage state as business needs dictate. For example, primary data will be needed for, and generated by, transactions. It will be created by and flood in from sensors.

Then it will be protected, used for analysis or DevOps, needed for compliance, and saved for reference in archives. Data flows between the states will be easier when overseen by a single storage utility or management system.

Unified storage and data management

The unified data management layer will need a unified storage layer underneath that supports block, file and object storage protocols. These are seen as different ways to access data at different stages in its lifecycle. Hedvig’s Distributed Storage Platform is that unified storage layer in Commvault’s scheme, and it provides both primary and secondary storage capability. “I feel the Hedvig bet is reasonably risk-free,” Mirchandani told us. Let’s see.

As enterprises realise in the next few years that they need unified data management and storage management software covering primary and secondary storage applications, Commvault will be ready. It will be able to supply them with that software, and be a primary as well as a secondary storage supplier.  


Pensando bags big bucks to take on AWS Nitro

MPLS, the renowned Cisco spin-in quartet, are back in town with their latest startup, an edge computing company called Pensando Systems that seeks to out-do Amazon Nitro. Only this time it is HPE that is doing the backing.

Pensando emerged from stealth last week, announcing it had raised $145m in a Series C round led by HPE and Lightspeed Venture Partners. It will use the money to accelerate engineering, operations, and go-to-market activities.

Pensando claims its proprietary accelerator cards can turn existing on-premises architectures into next-generation clouds.

It said cloud service providers can use its product to gain an edge over Amazon’s Nitro for specialised compute instances, and claimed the accelerator cards deliver five to nine times better productivity, performance and scale than AWS Nitro.

Nitro systems are dedicated hardware cards using ASICs that offload networking, storage and management tasks from EC2 host servers. AWS uses Nitro to design and deliver EC2 instance types with a selection of compute, storage, memory, and networking options.

Pensando claims to be able to do the same kind of customisable host server hardware-offload – only better, with no risk of lock-in and in a scale-out fashion.

The company said its developing product has been influenced by input from vendors such as HPE, NetApp and Equinix and it is already used by multiple Fortune 500 customers such as Goldman Sachs, an investor.

Fab four

Pensando was set up in 2017 and has had three funding rounds, totalling $278m – the latest reportedly gives a post-money valuation of $645m.

  • Series A founder-led round of $71m,
  • Series B customer-led round of $62m,
  • Series C round to raise up to $145m.

The founders are Mario Mazzola, CEO Prem Jain, Luca Cafiero and chief business officer Soni Jiandani. They are the MPLS team – the acronym is derived from their first names – who joined Cisco when it bought Crescendo Communications in 1993. They built eight $bn/year spin-in businesses for Cisco and were richly rewarded for their labours.

The software-defined networking business was devised at Insieme, a spin-in startup acquired by Cisco for $836m in 2013. SAN switch developer Andiamo and Nexus switch and UCS server inventor Nuova were two previous spin-ins at Cisco; Mazzola, Cafiero and Jain were involved with both, and Jiandani was at Nuova.

The team were close to CEO John Chambers and they all resigned when Chuck Robbins replaced him in 2015. Chambers is Pensando’s chairman and an investor.

Close to the edge

Pensando’s product is a programmable edge computing accelerator that offloads a host server to deliver cloud, compute, networking, storage and security services which can be chained together in any order.

The edge angle is that edge computing will mean more data has to be processed where it is generated. Pensando helps servers to keep up, as its technology can terminate and encrypt one billion IoT connections in half a rack of servers.

The technology is called the Edge Services Platform. It is based on a custom programmable processor that executes a software stack delivering the various services, all managed by Venice, a centralised policy and services controller. The platform comprises Naples 100 and Naples 25 Distributed Services Cards (DSCs), which deliver the service functions to a host server. They have:

  • Custom processor chip to accelerate on-premises servers
  • Software-defined customisable distributed infrastructure services across cloud, compute, networking, storage, security, and virtual appliances
  • Edge-acceleration, scaling out linearly with any server environment and minimising latency and jitter while freeing up host CPU cycles
  • 100Gb/sec wire-speed encryption, and hardware isolation
  • Compatible with virtualized or bare-metal servers as well as containerised workloads
  • APIs for management.

The Venice controller distributes infrastructure services policies to active Naples nodes. It handles lifecycle management such as deploying in-service software upgrades to Naples nodes and delivers always-on telemetry and end-to-end observability.

The DSC-host attachment method is not revealed and nor is the environment required for the Venice controller software.

Known unknowns

Pensando claims the DSCs reduce host CPU utilisation by 20-40 per cent. This implies the DSC card fits between the host server and existing storage and networking resources. Pensando doesn’t say if this is the case, and doesn’t say how the card connects to those resources, but implies 100GbitE. Nor does it say which host server environments are supported. We are not told how the DSC card is programmed, nor how Pensando’s technology provides cloud services.

It stands to reason that potential Pensando customers have large and sustained incoming data flows requiring a workflow that can be implemented using Pensando programmable hardware to assist the host server’s CPU. The freed-up CPU cycles enable it to do more work and enable customers to avoid buying specialised devices or software to provide the functions that Pensando can host. For example, Pensando said its DSCs can replace firewalls, load balancers and encryption devices.

General availability for the DSCs has not been revealed.

Dell EMC diverges HCI into networking and storage units

Dell EMC has shunted its hyperconverged infrastructure (HCI) products from the server business unit and divided them between the storage and networking units.

Last month Jeff Boudreau was promoted from head of storage to run the Infrastructure Systems Group (ISG), which embraces servers, storage, data protection and networking.

Ashley Gorakhpurwalla, general manager, Server and Infrastructure Systems, a 19-year Dell veteran, has resigned. Arthur Lewis, his replacement, was formerly SVP of the ISG Center of Competence. 

Boudreau has taken the opportunity to move the bulk of the HCI business, centered on the VxRail, to Tom Burns’ networking and solutions unit. The rationale is that HCI is a solutions business and best handled there. 

VxRail, VxRack, VxFlex, and XC HCI were moved to Gorakhpurwalla’s server unit in January last year by then ISG boss, Jeff Clarke.

The VxFlex part of the HCI product set has been assigned to Dan Inbar’s storage unit, as it is aligned with software-defined storage.

Concerning the mistaken belief that key value databases chew up SSDs

Blog: There is a misguided notion that key value databases are unfriendly to SSDs. The notion arises because compaction layers, left unattended, can shorten an SSD’s life.

You might think that key value code needs to be rewritten to cope with this. Wrong. I don’t claim to be an expert on key value databases, but I do know storage.

At risk of ruining the ending … just migrate cooler data off SSDs onto HDDs.

The Problem

Many key value databases employ compaction layers. If poorly administered and left unattended, the data piles up. The compaction layers fill up again and again, causing data to be copied to another layer. And this cycle repeats.

The result is excessive and SSD-unfriendly write amplification; this threatens the SSD’s write endurance.

For those unfamiliar with compaction layers and Log Structured Merge Trees, there’s a great primer by Ben Stopford.

Compaction diagram.

The Fix: A storage solution for very large, but cooler key value databases

Large capacity databases store cooler, historical data and experience minimal writes. Understandably, cooler/bigger data is stored on HDDs, not SSDs. Common sense can prevail.

Performance-sensitive databases are rarely high capacity, for the simple reason that most key value database admins have common sense and move ageing logging information to cooler, larger historical databases.

Additionally, if SSDs are used in situations where reliability is threatened by excessive writes, simply buy higher capacity SSDs. Bigger SSDs are harder to repeatedly fill up. 

Compare the Terabytes Written per Day (TWD) rating of small capacity versus larger capacity SSDs to understand why this is the case. (My previous blog explains why you should buy bigger capacity drives.) Then have spares and migrate cooler data off the SSDs.
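To see the arithmetic behind that advice, here is a minimal sketch. The capacities, endurance rating and daily write volume are invented for illustration, not vendor specifications; the point is simply that at a fixed daily write rate (compaction traffic included), a larger drive consumes its total write budget proportionally more slowly.

```python
# Hypothetical endurance arithmetic: same daily write volume, different
# capacities. All figures are invented for the example, not vendor specs.

def years_of_life(capacity_tb, rated_drive_writes_per_day, daily_writes_tb,
                  warranty_years=5.0):
    """Estimate drive lifetime at a given write rate.

    The total endurance budget (terabytes written) is taken as
    capacity x rated drive-writes-per-day x warranty period.
    """
    endurance_tb = capacity_tb * rated_drive_writes_per_day * 365 * warranty_years
    return endurance_tb / (daily_writes_tb * 365)

daily_writes_tb = 2.0   # the workload writes 2 TB/day, compaction included

for capacity_tb in (1, 4, 16):
    life = years_of_life(capacity_tb, 1.0, daily_writes_tb)
    print(f"{capacity_tb:>2} TB drive: ~{life:.1f} years")
# Prints ~2.5, ~10.0 and ~40.0 years respectively.
```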

Put yourself in the shoes of a large-ish data centre which uses key value stores for large but low write rate (cooler) databases.

Would it make sense to capture logs for this week’s data on SSDs, and then move older log data onto cooler HDD based systems? Of course, it would. 

For those few large and hot databases, does it make sense to just buy larger capacity, higher TWD SSDs for them? Of course it does. 

So why rewrite key value database code when the same problem can be solved better and cheaper with good admin?

In summary, be diligent about migrating cooler key value data off SSDs and onto HDDs. For bigger, hotter databases, buy larger capacity, higher TWD SSDs, use more SSDs, and have some spare SSDs on hand for failures.
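To make the housekeeping concrete, here is a deliberately simplified sketch of the kind of scheduled migration job the article has in mind. The paths and the seven-day threshold are invented, and a real key value store would use its own tiering, export or archiving mechanism rather than raw file moves.

```python
# Simplified illustration of age-based tiering: move files that have not
# been modified for a week from a (hypothetical) SSD tier to an HDD tier.
import os
import shutil
import time

SSD_DIR = "/mnt/ssd/kv-logs"       # hypothetical hot tier
HDD_DIR = "/mnt/hdd/kv-archive"    # hypothetical cool tier
MAX_AGE_DAYS = 7

def migrate_cool_files():
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for name in os.listdir(SSD_DIR):
        path = os.path.join(SSD_DIR, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            shutil.move(path, os.path.join(HDD_DIR, name))  # frees SSD capacity

if __name__ == "__main__":
    migrate_cool_files()   # run daily from a scheduler such as cron
```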

Veeam saves N2WS’s federal business – by selling the company

Veeam Software has divested N2WS because the US government – an N2WS customer – was unhappy that the subsidiary was owned by a company controlled, and partly owned, by two Russian executives.

A Veeam spokesperson told us: “Several months following Veeam’s acquisition of N2WS in late 2017, the US Government had requested certain information regarding the transaction. After some discussion with the US Government, in the first half of 2019, Veeam voluntarily agreed to sell N2WS so as to focus on a unified cloud platform. The sale of N2WS was completed in the third quarter of 2019.”

Veeam acquired N2WS for $42.5m to gain cloud-native enterprise backup and disaster recovery technology for Amazon Web Services. N2WS operated as a stand-alone company and was known as N2WS – A Veeam Company.

Details of the purchaser were not disclosed. The divestment is to be officially announced at Microsoft Ignite next month. We think Veeam will use the show to announce its own cloud-native backup storage service running on Azure and AWS.

Veeam is based in Switzerland and was founded in 2005 by CEO Andrei Baronov and Ratmir Timashev, head of sales and marketing. The privately owned company claimed $1bn-plus revenues for calendar year 2018.

How key value databases shorten the lifespan of SSDs

Explainer Organising a key value database for fast reads and writes can shorten an SSD’s working life. This is how.

Key value database

A relational database stores data records organised into rows and columns and located by row and column addresses. A key value database, such as Redis or RocksDB, stores records (values) using a unique key for each record. Each record is written as a key value pair and the key is used to retrieve the record.
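As a minimal sketch of that access pattern (keys and values invented for illustration; real stores such as Redis or RocksDB persist the pairs rather than holding them in a Python dictionary):

```python
# A key value store in miniature: records are written and read by unique key,
# with no row/column addressing involved.
store = {}   # key -> value

# Write: each record is a key value pair.
store["user:1001"] = {"name": "Alice", "last_login": "2019-10-14"}
store["user:1002"] = {"name": "Bob", "last_login": "2019-10-15"}

# Read: presenting the same key retrieves the record.
print(store["user:1001"]["name"])   # Alice
```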

Compaction layers

It is faster to write data sequentially on disks and on SSDs. Writing key value pairs sequentially (logging or journalling) is fast, but finding (reading) a particular key value pair requires a slow trawl through this log. Methods to speed up reading include organising the data into Log Structured Merge (LSM) trees. This slows writes a little and speeds up reads a lot.

There is a fairly detailed introduction to LSM tree issues by Confluent’s Ben Stopford.

In an LSM scheme, groups of writes are saved sequentially to smaller index files. These are sorted to speed up reads. New, later groups of writes go into new index files, and layers of such files build up.

Each index has to be read with a separate IO. The index files are merged or compacted every so often to prevent their number becoming too large and so making reads slower overall.

On SSDs such compaction involves write cycles in addition to the original data writes. This larger or amplified number of writes shortens the SSD’s life, since that is defined as the number of write cycles it can support.
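A toy model makes the write amplification visible. This is not how RocksDB or any real engine is implemented – the levels, file sizes and compaction trigger below are invented – but it shows how merging rewrites bytes the application only wrote once:

```python
# Toy LSM compaction model: count bytes the application writes versus bytes
# the SSD actually receives once compaction rewrites are included.
FILES_PER_LEVEL = 4          # compact a level once it holds this many files

levels = [[]]                # levels[i] is a list of file sizes (bytes)
user_bytes = 0               # bytes produced by the application
device_bytes = 0             # bytes physically written to the SSD

def compact(level):
    """Merge every file in `level` into a single file at the next level."""
    global device_bytes
    if level + 1 == len(levels):
        levels.append([])
    merged = sum(levels[level])
    device_bytes += merged               # the merge rewrites existing data
    levels[level] = []
    levels[level + 1].append(merged)
    if len(levels[level + 1]) >= FILES_PER_LEVEL:
        compact(level + 1)

def write_batch(size):
    """Flush a sorted batch of key value pairs as a new level-0 file."""
    global user_bytes, device_bytes
    user_bytes += size
    device_bytes += size
    levels[0].append(size)
    if len(levels[0]) >= FILES_PER_LEVEL:
        compact(0)

for _ in range(1024):                    # 1,024 flushes of 4 MB each
    write_batch(4 * 1024 * 1024)

print(f"write amplification: {device_bytes / user_bytes:.1f}x")   # ~6x here
```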

What can be done? One response is to rewrite the key value database code to fix the problem. Another is to simply use larger SSDs and eject low access rate data periodically to cheaper storage. This saves the SSD’s capacity and working life for hot data. Read on for our columnist Hubbert Smith’s thoughts on this matter – Concerning the mistaken belief that key value databases chew up SSDs.


Commvault: Containers will enable unified data and storage management

Commvault and its new subsidiary Hedvig anticipate future production use of containers, with unified data and storage management.

CEO Sanjay Mirchandani and his chief storage strategist, ex-Hedvig CEO Avinash Lakshman, made this clear in briefings at Commvault GO in Denver this week.

They see enterprises making production use of containers in the near future – it is just DevOps use at present. Commvault sees a coming hybrid on-premises and multi-cloud environment. Containers and their data will move between on-premises and cloud environments.

Customers will want an integrated system framework or control plane that abstracts data from the infrastructure and manages it and the storage on which it resides. A data management facility separate from the storage management facility will lead to complexity and inefficiency. They need to be combined, and to provide freedom from lock-in to storage players or cloud suppliers.

Primary storage

This is why Commvault bought Hedvig last month. And why it is saying it will move into primary storage. But there is no direct competition with Dell EMC, HPE or NetApp right now. Instead, in this coming container-centric future, customers will need unified and integrated storage and data managing software that runs across the hybrid multi-cloud environment they occupy. 

Commvault and Hedvig are ready to satisfy this need – and ready to satisfy it better than anyone else, be that Dell EMC, HPE, NetApp or others, because Commvault is preparing for this future now, Mirchandani claimed.

Hedvig news

In line with this idea, Hedvig is adding Container Storage Interface (CSI) support for Kubernetes management, erasure coding to use less space for data integrity than RAID, multi-tenancy and multi-data centre cluster management. Specifically:

  • Container Storage Interface (CSI) support, which enables enterprises to use Commvault for the management of Kubernetes and other container orchestrators (COs).
  • Built-in data centre availability, which helps enterprises improve data resiliency.
  • Support for erasure coding, which improves storage efficiency.
  • Support for multi-tenant data centres, including the ability to manage tenant-level access, control, and encryption settings, which will allow managed service providers (MSPs) to deliver storage services across hybrid cloud environments.
  • Multi-data centre cluster management, alerting and reporting, so enterprises and MSPs can configure and administer all their data centres’ software-defined storage infrastructure from a single location.

Lakshman said in a prepared quote that these “new capabilities converge many of the latest storage, container and cloud technologies, allowing enterprises to automate manual infrastructure management processes and simplify their multi-cloud environments.”

By this line of thinking, the separation between primary data and secondary data will diminish, and that in turn will reduce the separation between primary storage and secondary storage. There is just data, and it resides on whatever storage is necessary for its use at any particular time.

We might also consider the notion that data lives in a unified data space, with a Hammerspace-like abstraction layer integrated with the Hedvig Distributed Storage Platform software enabling this.

We can expect a continual flow of developments as Commvault prepares Hedvig for this unified data and storage management future that embraces containerisation. It will also be integrating Hedvig’s technologies into its own portfolio of data protection offerings.

OpenIO ‘solves’ the problem with object storage hyperscalability

OpenIO, a French startup, has claimed the speed record for serving object data – blasting past one terabit per second to match high-end SAN array performance.

Object storage manages data as objects in a flat address space spread across multiple servers, and IO speed is inherently slower than SAN arrays or filers. However, in August 2019 Minio showed its object storage software can be a fast access store – and now OpenIO has demonstrated its technology combines high scalability with fast access.

Laurent Denel, OpenIO CEO

The company created an object storage grid on 350 commodity servers owned by Criteo, an ad tech firm, and achieved 1.372 Tbit/s throughput (171.5GB/sec). Stuart Pook, a senior site reliability engineer at Criteo, said in a prepared quote: “They were able to achieve a write rate close to the theoretical limits of the hardware we made available.”

OpenIO claimed this performance is a record because to date no other company has demonstrated object storage technology at such high throughput and on such a scale.

To put the performance in context, Dell EMC’s most powerful PowerMax 8000 array does 350GB/sec. Hitachi Vantara’s high-end VSP 5500 does 148GB/sec, slower than the OpenIO grid. Infinidat’s InfiniBox does 25GB/sec. 

Laurent Denel, CEO and co-founder of OpenIO, said:  “Once we solved the problem of hyper-scalability, it became clear that data would be manipulated more intensively than in the past. This is why we designed an efficient solution, capable of being used as primary storage for video streaming… or to serve increasingly large datasets for big data use cases.”

Benchmark this!

Blocks & Files thought that the workload on the OpenIO grid had to be spread across the servers, as parallelism was surely required to reach this performance level. But how was the workload co-ordinated across the hundreds of servers? What was the storage media?

Blocks & Files: How does OpenIO’s technology enable such high (1.37Tbit/s) write rates? 

OpenIO: Our grid architecture is completely distributed and components are loosely coupled. There is no central, nor distributed lock management, so no single component sees the whole traffic. This enables the data layer, the metadata layer and the access layer (the S3 gateways) to scale linearly. At the end of the day, performance is all about using the hardware of a server with the least overhead possible and multiplying by the number of servers to achieve the right figure for the targeted performance. That makes it really easy!

Blocks & Files: What was the media used?

OpenIO: We were using magnetic hard drives in 3.5 LFF format, of 8TB each. There were 15 drives per server; 352 servers. This is for the data part. The metadata were stored on a single 240GB SSD on each of the 352 servers. OpenIO typically requires less than 1% of the overall capacity for the metadata storage.

Blocks & Files: What was the server configuration? 

OpenIO: We used the hardware that Criteo deploys in production for their production datalake… These servers are meant to be running the Hadoop stack with a mix of storage through HDFS and big data compute. They are standard 2U servers, with 10GigE ethernet links. Here is the detailed configuration:

  • HPE 2U ProLiant DL380 Gen10, 
  • CPU 2 x Intel Skylake Gold 6140
  • RAM 384 GB (12 x 32 GB – SK Hynix, DDR4 RDIMM @ 2,666 MHz)
  • System drives 2 x 240 GB SSD (only one used for the metadata layer) Micron 5100 ECO
  • Capacity drives SATA HDD – LFF – Hotswap 15 x 8 TB Seagate
  • Network 2 x SFP+ 10 Gbit/s HPE 562FLR-SFP + Adapter Intel X710 (only one attached and functional)

Blocks & Files: Was the write work spread evenly across the servers?

OpenIO: Yes. The write work was coming from the production datalake of Criteo (2,500 servers), i.e. it was coming from many sources, and targeting many destinations within the OpenIO cluster, as each node, each physical server, is an S3 gateway, a metadata node and a data node at the same time. Criteo used a classic DNS Round Robin mechanism to route the traffic to the cluster (350 endpoints) as a first level of load balancing.

As this is never perfect, OpenIO implements our own load balancing mechanism as a secondary level: each of the OpenIO S3 gateways is able to share the load with the others. This produced a very even write flow, with each gateway taking 1/350 of the whole write calls.

Blocks & Files: How were the servers connected to the host system?

OpenIO: There is no host system. It is one platform, the production datalake of Criteo (2,500 nodes) writing data to another platform, the OpenIO S3 platform (352 nodes). The operation was performed through a distributed copy tool from the Hadoop environment. The tool is called distCp and it can read and write, from and to HDFS or S3.

From a network standpoint, the two platforms are nested together, and the servers from both sides belong to the same fully-meshed fabric. The network is non-limiting, meaning that each node can reach its theoretical 10 Gbit/s bandwidth talking to the other nodes.
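A quick back-of-the-envelope check, using only the figures quoted above and ignoring any replication or protection overhead (which the interview does not quantify), shows the per-node share of that aggregate sits comfortably below each node’s 10 Gbit/s link:

```python
# Sanity check on the quoted figures: aggregate throughput shared across nodes.
aggregate_tbit_s = 1.372     # measured aggregate write throughput
nodes = 352                  # OpenIO nodes, each also acting as an S3 gateway
link_gbit_s = 10             # per-node Ethernet bandwidth

per_node_gbit_s = aggregate_tbit_s * 1000 / nodes
print(f"{per_node_gbit_s:.1f} Gbit/s per node "
      f"({per_node_gbit_s / link_gbit_s:.0%} of a 10 Gbit/s link)")
# 3.9 Gbit/s per node (39% of a 10 Gbit/s link)
```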

Blocks & Files: How many hosts were in that system and did the writing?

OpenIO: It was 352 nodes running the OpenIO software, every node serving a part of the data, metadata and gateway operations. All were involved in the writing process… the nicest part is that with more nodes, or more network bandwidth and more drives on each node, the achievable bandwidth would have been higher, in a linear way. As we are more and more focused on providing the best big data storage platform, we believe that the performance and scalability of the design of OpenIO will put us ahead of our league.

No change in Gartner backup and recovery MQ – and that’s a problem, according to Rubrik and Arcserve

The big news is that there are no significant changes in the Gartner 2019 Data Centre Backup and Recovery Magic Quadrant compared with the 2017 edition, and that angers Arcserve and Rubrik because they think they deserve higher positions.

They have gone public with their disappointment, dismay and displeasure.

The 2019 MQ:

There is a Gartner MQ Explainer at the end of the article.

The scheduled 2018 edition had to be cancelled because most of the analyst team was recruited by Rubrik. This caused a delay until a May 2019 issue date, which was then pushed back after Rubrik complained, prompting a further delay while Gartner reviewed its submission and status.

So here we are, and no supplier present in the 2017 edition has improved its position enough to cross into a better box in the MQ.

Commvault is the top supplier and has been for 8 years. Veritas is in the Leaders’ quadrant and has been for 14 years. Other Leaders’ quadrant suppliers are Dell EMC, IBM and Veeam, and all three were present in the 2017 edition.

 

There are no challengers and four niche players: previous entrants Arcserve and Unitrends, and newcomers Micro Focus and Acronis. HPE, a 2017 player here, has been ejected.

Micro Focus was added because it bought HPE’s Data Protector product, which is why HPE was dropped.

There are three Visionaries: newcomer Cohesity and two suppliers from 2017, Actifio and Rubrik. Although Rubrik was adjudged by the Gartner analysts to have significantly improved its execution ability, the improvement was not enough to take it over the border into the Leaders’ quadrant – hence its irritation.

Arcserve was rated lower on execution ability than in 2017, despite having increased its sales relative to other suppliers and made organisational changes to improve its execution. It is not a challenger and, like Rubrik, appealed to Gartner’s ombudsman – and, also like Rubrik, effectively got nowhere.

Gartner counsels MQ readers to read a Critical Capabilities document and not make snap decisions based merely on MQ placements.

Note. Here’s a standard MQ explainer: the “magic quadrant” is defined by axes labelled “ability to execute” and “completeness of vision”, and split into four squares tagged “visionaries”, “niche players”, “challengers” and “leaders”.

Commvault guns for mid-market with Metallic SaaS backup and recovery

Commvault today launched Metallic Backup, a set of cloud-native data protection services targeted at the “most commonly used workloads in the mid-market”.

The SaaS-y backup and recovery service is available on monthly or annual subscription and is wrapped in three flavours:

Metallic Core Backup and Recovery backs up file servers, SQL servers and virtual machines in the cloud and on-premises. Linux and Windows file servers are supported.

Metallic Office 365 Backup and Recovery backs up Exchange, OneDrive and SharePoint.

Metallic Endpoint Backup and Recovery backs up notebooks and PCs running Linux, Windows and MacOS. An endpoint software package is installed on the notebooks and PCs.

Metallic is a data protection game-changer, Robert Kaloustian, general manager of Metallic, proclaimed at Commvault GO in Denver, describing the service as fast, secure and reliable, with enterprise-grade scalability.

According to Commvault, customers can sign up for Metallic and start their first backup in 15 minutes. Using Metallic, customers back up on-premises data to their own backup target system, to their public cloud, to Metallic’s public cloud, or to a mixture of these. Metallic supports AWS and Azure.

Servers in the cloud are backed up to the cloud.

An on-premises backup gateway functions as a proxy between the on-premises data source and the cloud backup service. When using on-premises backup storage, backup data is stored on this backup gateway.

These three services are evolutions, functionally, of previously announced Commvault data protection-as-a-service offerings.

Commvault’s Complete Backup and Recovery as-a-Service (B&RaaS) portfolio was announced in October 2018, comprising Complete B&RaaS, Complete B&RaaS for Virtual Machines and Commvault Complete B&RaaS for Native Cloud Applications. 

Commvault Endpoint Data Protection as a Service was launched in November 2018. Metallic beta testing supported backing up Salesforce data, but this is not supported in the initial release of the product. Neither does Metallic support disaster recovery-as-a-service.

Competition

Metallic is competing with other public cloud-based SaaS backup services from Clumio, Druva, and Igneous. Clumio is based on a Cloud Data Fabric hosted on Amazon S3 object storage, and backs up AWS and on-premises VMware virtual machines. Druva’s offering is AWS based and can tier data in AWS S3, Glacier and Glacier Deep Archive. Igneous is also AWS-based and can tier data to S3, Standard-IA, Glacier, and Glacier Deep Archive.

Prices and availability

Commvault partners sell Metallic, which is available in the USA now and will roll out worldwide in the future. Prices are based on monthly usage: $0.20/GB/month for 0-4TB, $0.18/GB/month for 5-24TB, and $0.14/GB/month for 25-100TB. Customers need to contact a Commvault partner for pricing above 100TB/month.
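As an illustration of how that tiering works out, here is a sketch which assumes the published per-GB rate applies to a customer’s entire protected volume at whichever band it falls into, rather than marginally per band – the announcement does not spell out the exact billing mechanics:

```python
# Illustrative only: assumes the quoted $/GB/month rate for a band applies to
# the whole protected volume. Commvault's actual billing may differ.
TIERS = [          # (upper bound in TB, $ per GB per month)
    (4,   0.20),
    (24,  0.18),
    (100, 0.14),
]

def monthly_cost(protected_tb):
    for upper_tb, rate in TIERS:
        if protected_tb <= upper_tb:
            return protected_tb * 1024 * rate     # TB -> GB
    raise ValueError("over 100 TB: contact a Commvault partner for pricing")

for tb in (2, 10, 50):
    print(f"{tb:>3} TB protected -> ${monthly_cost(tb):,.0f}/month")
#   2 TB protected -> $410/month
#  10 TB protected -> $1,843/month
#  50 TB protected -> $7,168/month
```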

A 45-day free trial of Metallic can be accessed via metallic.io.


Your occasional storage digest, including Cohesity, SIOS and SAP

Cohesity grows nicely, SAP heads for the cloud and SIOS can save SAP in the cloud from disaster. Read on.

Cohesity has a cracking FY 2019

Secondary storage converger Cohesity claimed a 100 per cent increase in software revenues in FY 2019, ended July 31, as it completed a transition to a software-led business model. As it is a privately owned US entity, the company revealed no meaningful numbers in its announcement – a so-called ‘momentum release’.

Now for the bragging. Cohesity booked its first $10m-plus software order and the number of seven-figure orders grew 350 per cent. Customer numbers doubled and more than half licensed Cohesity’s cloud services. In the second half of FY 2019, more than 50 per cent of new contracts were recurring, Cohesity said to emphasise the appeal of its subscription-based software.

Its channel had a good year too. In the fourth quarter, more than 30 per cent of Cohesity’s software sold was installed on certified hardware from technology partners, up from single digits in FY 2018.

SAP gets sappy happy in the cloud

SAP announced that its SAP HANA Cloud Services combines SAP Data Warehouse Cloud, HANA Cloud and Analytics Cloud.

Data Warehouse Cloud is, as its name suggests, a data warehouse in the cloud with users able to use it in a self-service way. SAP said customers can avoid the high up-front investment costs of a traditional data warehouse and easily and cost-effectively scale their data warehouse as data demands grow. 

This Data Warehouse Cloud can be deployed either stand-alone or as an extension to customers’ existing on-premise SAP BW/4HANA system or SAP HANA software. More than 2,000 customers are registered for the beta program for SAP Data Warehouse Cloud.

SAP HANA Cloud is cloud-native software and offers one virtual interactive access layer across all data sources with a scalable query engine, decoupling data consumption from data management. Again it can be deployed stand-alone or as an extension to customers’ existing on-premise environments, allowing them to benefit from the cloud and the ability of SAP HANA to analyse live, transactional data. 

SAP Analytics Cloud is planned to be embedded in SAP SuccessFactors solutions as well as SAP S/4HANA. The embedded edition of SAP Analytics Cloud is planned to be offered as a service under the SAP Cloud Platform Enterprise Agreement. Developers can activate this analytics service to build and integrate analytics into their applications through live connectivity with SAP HANA.

General availability for SAP Data Warehouse Cloud, HANA Cloud and Analytics Cloud is planned for Q4 2019.

Blocks & Files notes this SAP cloudy data warehouse activity is taking place as Yellowbrick and Snowflake continue their cloud data warehouse service growth. This sector is hot.

SIOS saves SAP in the cloud

SIOS Technology Corp. announced LifeKeeper 9.4 and DataKeeper 9.4 to deliver high availability (HA) and disaster recovery (DR) for SAP clusters in physical, virtual and cloud environments. 

Typical clustering solutions use a shared storage cluster configuration to deliver high availability, which is not available in the cloud, according to SIOS. The company provides the ability to create a clustered HA solution in the cloud, without shared storage, ensuring data protection between two separate servers.  It claims no other supplier can do this without using shared storage. Its alternative is simpler and cheaper.

SIOS DataKeeper continuously replicates changed data between systems in different cloud availability zones and regions, while LifeKeeper monitors SAP system services to ensure availability of SAP applications. If an outage occurs and the services cannot be recovered, SIOS LifeKeeper will automatically orchestrate a failover to the standby systems. 

Shorts

Aparavi File Protect & Insight (FPI) provides backup of files from central storage devices and large numbers of endpoints, to any or multiple cloud destinations.  It features Aparavi Data Awareness for intelligence and insight, along with global security, search, and access, for file protection and availability. Use cases for Aparavi FPI include file-by-file backup and retention for endpoints and servers, automation of governance policies at the source of data, and ransomware recovery.

Apple subsidiary Claris International has announced FileMaker Cloud and the initial rollout of Claris Connect. The company told us FileMaker Cloud is an expansion of the FileMaker platform that allows developers to build and deploy their FileMaker apps fully in the cloud or in a hybrid environment.

The Evaluator Group has introduced a study of Storage as a Service (STaaS). You can download an abbreviated version from its website. It aims to guide enterprise STaaS users through the maze of approaches and issues they will encounter when evaluating a STaaS offering.

IBM Storage Insights has been redesigned with a new operations dashboard. Storage systems are listed in order of health status, and users can drill down to the components. Health status is based on the status that the storage system reports for itself and all its components. In previous versions the health status was based on Call Home events. The health status in IBM Storage Insights is closer now to what is shown in the storage system’s GUI and CLI.

Kingston Memory shipped more than 13.3m SSDs in the first half of 2019 – 11.3 per cent of the total number of SSDs shipped globally, according to TrendFocus research. That makes Kingston the third-largest supplier of SSDs in the world, behind Samsung and Western Digital.

TrendFocus VP Don Jeanette said: “Our research finds that client SSDs make up the majority portion of units shipped while NVMe PCIe also saw gains due to demand in hyperscale environments. The storyline for the first half of 2019 is NAND shipments are increasing and pricing has bottomed out, thus driving SSD demand.”

According to Mellanox, lab tests by The Tolly Group prove its ConnectX 25GE Ethernet adapter outperforms the Broadcom NetXtreme E series adapter in terms of performance, scalability and efficiency. It has up to twice the throughput.

Violin Systems has formally announced it has moved to Colorado, merged with X-IO Storage, appointed Todd Oseth as President and CEO, and developed a flash storage product roadmap combining Violin and X-IO storage technologies. Oseth said: “We will soon be announcing our first step into the combined product line, which will deliver performance, reduced cost and enterprise features to the market.”

VMware has completed its acquisition of Carbon Black, in an all-cash transaction for $26 per share, representing an enterprise value of $2.1 billion.

WekaIO has been assigned a patent (10437675) for “distributed erasure coded virtual file system,” and has forty more patents pending. This fast file system supplier is a patent production powerhouse.

China’s Yangtze Memory Technology has improved its 64-layer 3D NAND production yield, Digitimes reports. Expect the first products as early as the first quarter of 2020.