
Veeam saves N2WS’s federal business – by selling the company

Veeam Software has divested N2WS because the US government – an N2WS customer – was unhappy that the subsidiary belonged to a company controlled and partly owned by two Russian executives.

A Veeam spokesperson told us: “Several months following Veeam’s acquisition of N2WS in late 2017, the US Government had requested certain information regarding the transaction. After some discussion with the US Government, in the first half of 2019, Veeam voluntarily agreed to sell N2WS so as to focus on a unified cloud platform. The sale of N2WS was completed in the third quarter of 2019.”

Veeam acquired N2WS for $42.5m to gain cloud-native enterprise backup and disaster recovery technology for Amazon Web Services. N2WS operated as a stand-alone company and was known as N2WS – A Veeam Company.

Details of the purchaser were not disclosed. The divestment is to be officially announced at Microsoft Ignite next month. We think Veeam will use the show to announce its own cloud-native backup storage service running on Azure and AWS.

Veeam is based in Switzerland and was founded in 2005 by CEO Andrei Baronov and Ratmir Timashev, head of sales and marketing. The privately owned company claimed $1bn-plus revenues for calendar year 2018.

How key value databases shorten the lifespan of SSDs

Explainer Organising a key value database for fast reads and writes can shorten an SSD’s working life. This is how.

Key value database

A relational database stores data records organised into rows and columns and located by row and column addresses. A key value database, such as Redis or RocksDB, stores records (values) using a unique key for each record. Each record is written as a key value pair and the key is used to retrieve the record.
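As a rough illustration of the model (not the actual Redis or RocksDB API), here is a minimal Python sketch of writing and reading a key value pair; the key and record contents are invented for the example:

```python
# Minimal sketch of the key value model: each record (value) is written and
# read back via its unique key rather than a row and column address.
store = {}

def put(key, value):
    store[key] = value            # write the key value pair

def get(key):
    return store.get(key)         # retrieve the record by its key

put("user:1001", {"name": "Alice", "last_login": "2019-10-14"})
print(get("user:1001"))           # -> {'name': 'Alice', 'last_login': '2019-10-14'}
```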

Compaction layers

It is faster to write data sequentially on disks and on SSDs. Writing key value pairs sequentially (logging or journalling) is fast, but finding (reading) a particular key value pair requires a slow trawl through this log. Methods to speed up this reading include organising the data into Log-Structured Merge trees (LSM trees). This slows writes a little and speeds up reads a lot.

There is a fairly detailed introduction to LSM tree issues by Confluent CTO Ben Stopford.

In an LSM tree scheme groups of writes are saved, sequentially, to smaller index files. They are sorted to speed up reads. New, later groups of writes go into new index files. Layers of such files are built up.

Each index has to be read with a separate IO. The index files are merged or compacted every so often to prevent their number becoming too large and so making reads slower overall.
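To make the mechanism concrete, here is a toy, hedged sketch of the idea – an in-memory buffer flushed as sorted runs, reads that search the newest runs first, and a merge-style compaction. It is a teaching illustration, not how Redis or RocksDB actually implements LSM trees:

```python
# Toy LSM tree sketch: buffered writes, sorted runs, newest-first reads, compaction.
class ToyLSM:
    def __init__(self, flush_threshold=4):
        self.memtable = {}               # in-memory buffer of recent writes
        self.runs = []                   # sorted runs (index files), oldest first
        self.flush_threshold = flush_threshold

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.flush_threshold:
            self.runs.append(sorted(self.memtable.items()))   # flush as a sorted run
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in reversed(self.runs):  # newest run first
            for k, v in run:             # real systems use indexes/Bloom filters here
                if k == key:
                    return v
        return None

    def compact(self):
        merged = {}
        for run in self.runs:            # later runs overwrite older values
            merged.update(dict(run))
        self.runs = [sorted(merged.items())]

db = ToyLSM()
for i in range(10):
    db.put(f"key{i}", i)
db.compact()
print(db.get("key7"))                    # -> 7
```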

On SSDs such compaction involves write cycles in addition to the original data writes. This larger or amplified number of writes shortens the SSD’s life, since that is defined as the number of write cycles it can support.
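A back-of-envelope example shows why this matters; the figures below are assumptions for illustration, not measurements from any particular SSD or database:

```python
# Write amplification = total bytes written to flash / bytes the application wrote.
app_writes_gb = 100          # data the application actually wrote (assumed)
compaction_writes_gb = 250   # the same data rewritten during merges/compaction (assumed)
write_amplification = (app_writes_gb + compaction_writes_gb) / app_writes_gb
print(write_amplification)   # 3.5 -> the SSD's rated write cycles are consumed 3.5x faster
```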

What can be done? One response is to rewrite the key value database code to fix the problem. Another is to simply use larger SSDs and eject low access rate data periodically to cheaper storage. This saves the SSD’s capacity and working life for hot data. Read on for our columnist Hubbert Smith’s thoughts on this matter – Concerning the mistaken belief that key value databases chew up SSDs.


Commvault: Containers will enable unified data and storage management

Commvault and its new subsidiary Hedvig anticipate future production use of containers, with unified data and storage management.

CEO Sanjay Mirchandani and his chief storage strategist, ex-Hedvig CEO Avinash Lakshman, made this clear in briefings at Commvault GO in Denver this week.

They see enterprises making production use of containers in the near future – it is just DevOps use at present. Commvault sees a coming hybrid on-premises and multi-cloud environment. Containers and their data will move between on-premises and cloud environments.

Customers will want an integrated system framework or control plane that abstracts data from the infrastructure and manages it and the storage on which it resides. A data management facility separate from the storage management facility will lead to complexity and inefficiency. They need to be combined, providing freedom from lock-in to storage players or cloud suppliers.

Primary storage

This is why Commvault bought Hedvig last month. And why it is saying it will move into primary storage. But there is no direct competition with Dell EMC, HPE or NetApp right now. Instead, in this coming container-centric future, customers will need unified and integrated storage and data managing software that runs across the hybrid multi-cloud environment they occupy. 

Commvault and Hedvig are ready to satisfy this need. And ready to satisfy it better than anyone else – Dell EMC, HPE, NetApp or others – because Commvault is preparing for this future now, Mirchandani claimed.

Hedvig news

In line with this idea, Hedvig is adding Container Storage Interface (CSI) support for Kubernetes management, erasure coding to use less space for data integrity than RAID, multi-tenancy and multi-data centre cluster management. Specifically:

  • Container Storage Interface (CSI) support, which enables enterprises to use Commvault for the management of Kubernetes and other container orchestrators (COs) – see the sketch after this list.
  • Built-in data center availability, which helps enterprises improve data resiliency.
  • Support for erasure coding, which improves storage efficiency.
  • Support for multi-tenant data centres, including the ability to manage tenant level access, control, and encryption settings, which will allow managed service providers (MSPs) to deliver storage services across hybrid cloud environments.
  • Multi-data centre cluster management, alerting and reporting, so enterprises and MSPs can configure and administer all their data centres’ software-defined storage infrastructure from a single location.
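Commvault has not published the driver or parameter names, but as a hedged sketch of what CSI support typically enables, the snippet below uses the standard Kubernetes Python client to request a volume from a hypothetical ‘hedvig’ StorageClass provisioned by the CSI driver:

```python
# Hedged sketch: request a persistent volume through a CSI-backed StorageClass.
# The StorageClass name "hedvig", the PVC name and the size are assumptions for
# illustration; the actual Hedvig driver and parameter names may differ.
from kubernetes import client, config

config.load_kube_config()                      # use the local kubeconfig

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="hedvig",           # hypothetical CSI-provisioned class
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```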

Lakshman said in a prepared quote that these “new capabilities converge many of the latest storage, container and cloud technologies, allowing enterprises to automate manual infrastructure management processes and simplify their multi-cloud environments.”

By this line of thinking the separation of primary data and secondary data will reduce and that in turn will reduce the separation between primary storage and secondary storage. There is just data and it resides on whatever storage is necessary for data use at any particular time. 

We might also consider the notion that data lives in a unified data space, with a Hammerspace-like abstraction layer integrated with the Hedvig Distributed Storage Platform software enabling this.

We can expect a continual flow of developments as Commvault prepares Hedvig for this unified data and storage management future that embraces containerisation. It will also be integrating Hedvig’s technologies into its own portfolio of data protection offerings.

OpenIO ‘solves’ the problem with object storage hyperscalability

OpenIO, a French startup, has claimed the speed record for serving object data – blasting past one terabit per second to match high-end SAN array performance.

Object storage manages data as objects in a flat address space spread across multiple servers, and its IO speed is inherently slower than that of SAN arrays or filers. However, in August 2019 Minio showed its object storage software can be a fast access store – and now OpenIO has demonstrated its technology combines high scalability and fast access.

Laurent Denel, OpenIO CEO

The company created an object storage grid on 350 commodity servers owned by Criteo, an ad tech firm, and achieved 1.372 Tbit/s throughput (171.5GB/sec). Stuart Pook, a senior site reliability engineer at Criteo, said in a prepared quote: “They were able to achieve a write rate close to the theoretical limits of the hardware we made available.”

OpenIO claimed this performance is a record because to date no other company has demonstrated object storage technology at such high throughput and on such a scale.

To put the performance in context, Dell EMC’s most powerful PowerMax 8000 array does 350GB/sec. Hitachi Vantara’s high-end VSP 5500 does 148GB/sec, slower than the OpenIO grid. Infinidat’s InfiniBox does 25GB/sec. 

Laurent Denel, CEO and co-founder of OpenIO, said:  “Once we solved the problem of hyper-scalability, it became clear that data would be manipulated more intensively than in the past. This is why we designed an efficient solution, capable of being used as primary storage for video streaming… or to serve increasingly large datasets for big data use cases.”

Benchmark this!

Blocks & Files thought that the workload on the OpenIO grid had to be spread across the servers, as parallelism was surely required to reach this performance level. But how was the workload co-ordinated across the hundreds of servers? What was the storage media?

Blocks & Files: How does OpenIO’s technology enable such high (1.37Tbit/s) write rates? 

OpenIO: Our grid architecture is completely distributed and components are loosely coupled. There is no central, nor distributed lock management, so no single component sees the whole traffic. This enables the data layer, the metadata layer and the access layer (the S3 gateways) to scale linearly. At the end of the day, performance is all about using the hardware of a server with the least overhead possible and multiplying by the number of servers to achieve the right figure for the targeted performance. That makes it really easy!

Blocks & Files: What was the media used?

OpenIO: We were using magnetic hard drives in 3.5 LFF format, of 8TB each. There were 15 drives per server; 352 servers. This is for the data part. The metadata were stored on a single 240GB SSD on each of the 352 servers. OpenIO typically requires less than 1% of the overall capacity for the metadata storage.

Blocks & Files: What was the server configuration? 

OpenIO: We used the hardware that Criteo deploys in production for their production datalake… These servers are meant to be running the Hadoop stack with a mix of storage through HDFS and big data compute. They are standard 2U servers, with 10GigE ethernet links. Here is the detailed configuration;

  • HPE 2U ProLiant DL380 Gen10, 
  • CPU 2 x Intel Skylake Gold 6140
  • RAM 384 GB (12 x 32 GB – SK Hynix, DDR4 RDIMM @ 2,666 MHz)
  • System drives 2 x 240 GB SSD (only one used for the metadata layer) Micron 5100 ECO
  • Capacity drives SATA HDD – LFF – Hotswap 15 x 8 TB Seagate
  • Network 2 x SFP+ 10 Gbit/s HPE 562FLR-SFP + Adapter Intel X710 (only one attached and functional)

Blocks & Files: Was the write work spread evenly across the servers?

OpenIO: Yes. The write work was coming from the production datalake of Criteo (2,500 servers), i.e. it was coming from many sources, and targeting many destinations within the OpenIO cluster, as each node, each physical server, is an S3 gateway, a metadata node and a data node at the same time. Criteo used a classic DNS Round Robin mechanism to route the traffic to the cluster (350 endpoints) as a first level of load balancing.

As this is never perfect, OpenIO implements our own load balancing mechanism as a secondary level: each of the OpenIO S3 gateways is able to share the load with the others. This produced a very even write flow, with each gateway taking 1/350 of the whole write calls.

Blocks & Files: How were the servers connected to the host system?

OpenIO: There is no host system. It is one platform, the production datalake of Criteo (2,500 nodes) writing data to another platform, the OpenIO S3 platform (352 nodes). The operation was performed through a distributed copy tool from the Hadoop environment. The tool is called distCp and it can read and write, from and to HDFS or S3.

From a network standpoint, the two platforms are nested together, and the servers from both sides belong to the same fully-meshed fabric. The network is non-limiting, meaning that each node can reach its theoretical 10 Gbit/s bandwidth talking to the other nodes.

Blocks & Files: How many hosts were in that system and did the writing?

OpenIO: It was 352 nodes running the OpenIO software, every node serving a part of the data, metadata and gateway operations. All were involved in the writing process… the nicest part, is that with more nodes, or more network bandwidth and more drives on each node, the achievable bandwidth would have been higher, in a linear way. As we are more and more focused on providing the best big data storage platform, we believe that the performance and scalability of the design of OpenIO will put us ahead of our league.
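As a quick sanity check on the figures quoted above, using only the numbers OpenIO and Criteo supplied:

```python
# 1.372 Tbit/s spread across 352 nodes, each with a 10 Gbit/s link.
total_tbit_s = 1.372
nodes = 352

total_gbyte_s = total_tbit_s * 1000 / 8        # = 171.5 GB/sec, as reported
per_node_gbit_s = total_tbit_s * 1000 / nodes  # ~3.9 Gbit/s per node

print(round(total_gbyte_s, 1), round(per_node_gbit_s, 1))
# Each node ran well under its 10 Gbit/s link, consistent with the claim that
# the network was not the limiting factor.
```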

No change in Gartner backup and recovery MQ – and that’s a problem, according to Rubrik and Arcserve

The big news is that there are no significant changes in the Gartner 2019 Data Centre Backup and Recovery Magic Quadrant compared to the 2017 edition, and that angers Arcserve and Rubrik because they think they deserve higher positions.

They have gone public with their disappointment, dismay and displeasure.

The 2019 MQ:

There is a Gartner MQ Explainer at the end of the article.

The scheduled 2018 edition had to be cancelled because most of the analyst team was recruited by Rubrik. This pushed the issue date to May 2019, which slipped again after Rubrik complained, triggering a review of its submission and status.

So here we are, and no supplier present in the 2017 edition has improved its position enough to cross into a better box in the MQ.

Commvault is the top supplier and has been for 8 years. Veritas is in the Leaders’ quadrant and has been for 14 years. Other Leaders’ quadrant suppliers are Dell EMC, IBM and Veeam, and all three were present in the 2017 edition.

 

There are no challengers and four niche players: previous entrants Arcserve and Unitrends, and newcomers Micro Focus and Acronis. HPE, a 2017 player here, has been ejected.

Micro Focus was added because it bought HPE’s Data Protector product, which is why HPE was dropped.

There are three Visionaries: newbie Cohesity and two suppliers from 2017, Actifio and Rubrik. Although Rubrik was adjudged by the Gartner analysts to have significantly improved its execution ability, it was not by enough to take it over the border into the Leaders’ quadrant, hence its irritation.

Arcserve was rated with a lower execution ability than in 2017, despite having increased its sales relative to other suppliers and made organisational changes to improve its execution ability. It is not a challenger and, like Rubrik, made an appeal to Gartner’s ombudsman, and, also like Rubrik, effectively got nowhere.

Gartner counsels MQ readers to read a Critical Capabilities document and not make snap decisions based merely on MQ placements.

Note. Here’s a standard MQ explainer: the “magic quadrant” is defined by axes labelled “ability to execute” and “completeness of vision”, and split into four squares tagged “visionaries”, “niche players”, “challengers” and “leaders”.

Commvault guns for mid-market with Metallic SaaS backup and recovery

Commvault today launched Metallic Backup, a set of cloud-native data protection services targeted at the “most commonly used workloads in the mid-market”.

The SaaS-y backup and recovery service is available on monthly or annual subscription and is wrapped in three flavours:

Metallic Core Backup and Recovery backs up file servers, SQL servers and virtual machines in the cloud and on-premises. Linux and Windows file servers are supported.

Metallic Office 365 Backup and Recovery backs up Exchange, OneDrive and SharePoint.

Metallic Endpoint Backup and Recovery backs up notebooks and PCs running Linux, Windows and MacOS. An endpoint software package is installed on the notebooks and PCs.

Metallic is a data protection game-changer, Robert Kaloustian, general manager of Metallic, proclaimed at Commvault GO in Denver: “Metallic is fast, secure, reliable and, with enterprise-grade scalability…”

According to Commvault, customers can sign up for Metallic and start their first backup in 15 minutes. Using Metallic, customers back up on-premises data to their own backup target system, to their public cloud, to Metallic’s public cloud, or to a mixture of these. Metallic supports AWS and Azure.

Servers in the cloud are backed up to the cloud.

An on-premises backup gateway functions as a proxy between the on-premises data source and the cloud backup service. When using on-premises backup storage, backup data is stored on this backup gateway.

These three services are evolutions, functionally, of previously announced Commvault data protection-as-a-service offerings.

Commvault’s Complete Backup and Recovery as-a-Service (B&RaaS) portfolio was announced in October 2018, comprising Complete B&RaaS, Complete B&RaaS for Virtual Machines and Commvault Complete B&RaaS for Native Cloud Applications. 

Commvault Endpoint Data Protection as a Service was launched in November 2018. Metallic beta testing supported backing up Salesforce data, but this is not supported in the current release of the product. Neither does Metallic support disaster recovery-as-a-service.

Competition

Metallic is competing with other public cloud-based SaaS backup services from Clumio, Druva, and Igneous. Clumio is based on a Cloud Data Fabric hosted on Amazon S3 object storage, and backs up AWS and on-premises VMware virtual machines. Druva’s offering is AWS-based and can tier data in AWS S3, Glacier and Glacier Deep Archive. Igneous is also AWS-based and can tier data to S3, S3 Standard-IA, Glacier, and Glacier Deep Archive.

Prices and availability

Commvault partners sell Metallic, which is available in the USA now and will roll out worldwide in the future. Prices are based on monthly usage: $0.20/GB/month for 0-4TB, $0.18/GB/month for 5-24TB, and $0.14/GB/month for 25-100TB. Customers need to contact a Commvault partner for pricing at 100-plus TB/month.
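As a rough guide to how the tiers work out, here is a small estimator using the quoted rates. It assumes the whole protected volume is billed at its tier’s per-GB rate; Commvault has not said whether the tiers are applied flat or graduated, so treat this as an illustration only:

```python
# Rough monthly cost estimator for the quoted Metallic tiers (assumption: the
# entire protected volume is billed at its tier's per-GB rate).
def metallic_monthly_cost(tb):
    gb = tb * 1000
    if tb <= 4:
        return gb * 0.20
    if tb <= 24:
        return gb * 0.18
    if tb <= 100:
        return gb * 0.14
    return None   # 100-plus TB: contact a Commvault partner for pricing

print(metallic_monthly_cost(10))   # 10 TB -> $1,800/month under this assumption
```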

A 45-day free trial of Metallic can be accessed via metallic.io.


Your occasional storage digest, including Cohesity, SIOS and SAP

Cohesity grows nicely, SAP heads for the cloud and SIOS can save SAP in the cloud from disaster. Read on.

Cohesity has cracking FY 2019

Secondary storage converger Cohesity claimed a 100 per cent increase in software revenues in FY 2019, ended July 31, as it completed a transition to a software-led business model. As it is a privately owned US entity, the company revealed no meaningful numbers in its announcement – a so-called ‘momentum release’.

Now for the bragging. Cohesity booked its first $10m-plus software order and the number of seven-figure orders grew 350 per cent. Customer numbers doubled and more than half licensed Cohesity’s cloud services. In the second half of FY 2019, more than 50 per cent of new contracts were recurring, Cohesity said to emphasise the appeal of its subscription-based software.

Its channel had a good year too. In the fourth quarter, more than 30 per cent of Cohesity’s software sold was installed on certified hardware from technology partners, up from single digits in FY 2018.

SAP gets sappy happy in the cloud

SAP announced that its SAP HANA Cloud Services combines SAP Data Warehouse Cloud, HANA Cloud and Analytics Cloud.

Data Warehouse Cloud is, as its name suggests, a data warehouse in the cloud with users able to use it in a self-service way. SAP said customers can avoid the high up-front investment costs of a traditional data warehouse and easily and cost-effectively scale their data warehouse as data demands grow. 

This Data Warehouse Cloud can be deployed either stand-alone or as an extension to customers’ existing on-premise SAP BW/4HANA system or SAP HANA software. More than 2,000 customers are registered for the beta program for SAP Data Warehouse Cloud.

SAP HANA Cloud is cloud-native software and offers one virtual interactive access layer across all data sources with a scalable query engine, decoupling data consumption from data management. Again it can be deployed stand-alone or as an extension to customers’ existing on-premise environments, allowing them to benefit from the cloud and the ability of SAP HANA to analyse live, transactional data. 

SAP Analytics Cloud is to be embedded in SAP SuccessFactors solutions as well as SAP S/4HANA. The embedded edition of SAP Analytics Cloud is planned to be offered as a service under the SAP Cloud Platform Enterprise Agreement. Developers can activate this analytics service to build and integrate analytics into their applications through live connectivity with SAP HANA.

General availability for SAP Data Warehouse Cloud, HANA Cloud and Analytics Cloud is planned for Q4 2019.

Blocks & Files notes this SAP cloudy data warehouse activity is taking place as Yellowbrick and Snowflake continue their cloud data warehouse service growth. This sector is hot.

SIOS saves SAP in the cloud

SIOS Technology Corp. announced LifeKeeper 9.4 and DataKeeper 9.4 to deliver high availability (HA) and disaster recovery (DR) for SAP clusters in physical, virtual and cloud environments. 

Typical clustering solutions use a shared storage cluster configuration to deliver high availability, which is not available in the cloud, according to SIOS. The company provides the ability to create a clustered HA solution in the cloud, without shared storage, ensuring data protection between two separate servers.  It claims no other supplier can do this without using shared storage. Its alternative is simpler and cheaper.

SIOS DataKeeper continuously replicates changed data between systems in different cloud availability zones and regions, while LifeKeeper monitors SAP system services to ensure availability of SAP applications. If an outage occurs and the services cannot be recovered, SIOS LifeKeeper will automatically orchestrate a failover to the standby systems. 

Shorts

Aparavi File Protect & Insight (FPI) provides backup of files from central storage devices and large numbers of endpoints, to any or multiple cloud destinations.  It features Aparavi Data Awareness for intelligence and insight, along with global security, search, and access, for file protection and availability. Use cases for Aparavi FPI include file-by-file backup and retention for endpoints and servers, automation of governance policies at the source of data, and ransomware recovery.

Apple subsidiary Claris International has announced FileMaker Cloud and the initial rollout of Claris Connect. The company told us FileMaker Cloud is an expansion of the FileMaker platform that allows developers to build and deploy their FileMaker apps fully in the cloud or in hybrid environment.

The Evaluator Group has introduced a study of Storage as a Service (STaaS). You can download an abbreviated version from its website. It aims to guide enterprise STaaS users through the maze of approaches and issues they will encounter when evaluating a STaaS offering.

IBM Storage Insights has been redesigned with a new operations dashboard. Storage systems are listed in order of health status, and users can drill down to the components. Health status is based on the status that the storage system reports for itself and all its components. In previous versions the health status was based on Call Home events. The health status in IBM Storage Insights is closer now to what is shown in the storage system’s GUI and CLI.

Kingston Memory shipped more than 13.3m SSDs in the first half of 2019 – 11.3 per cent of the total number of SSDs shipped globally, according to TrendFocus research. That makes Kingston the third-largest supplier of SSDs in the world, behind Samsung and Western Digital.

TrendFocus VP Don Jeanette said: “Our research finds that client SSDs make up the majority portion of units shipped while NVMe PCIe also saw gains due to demand in hyperscale environments. The storyline for the first half of 2019 is NAND shipments are increasing and pricing has bottomed out, thus driving SSD demand.”

According to Mellanox, lab tests by The Tolly Group prove its ConnectX 25GE Ethernet adapter outperforms the Broadcom NetXtreme E series adapter in terms of performance, scalability and efficiency. It has up to twice the throughput.

Violin Systems has formally announced it has moved to Colorado, merged with X-IO Storage, appointed Todd Oseth as President and CEO, and developed a flash storage product roadmap combining Violin and X-IO storage technologies. Oseth said: “We will soon be announcing our first step into the combined product line, which will deliver performance, reduced cost and enterprise features to the market.”

VMware has completed its acquisition of Carbon Black, in an all-cash transaction for $26 per share, representing an enterprise value of $2.1 billion.

WekaIO has been assigned a patent (10437675) for “distributed erasure coded virtual file system,” and has forty more patents pending. This fast file system supplier is a patent production powerhouse.

China’s Yangtze Memory Technology has improved its 64-Layer 3D NAND production yield, Digitimes reports. Expect the first products as early as the first 2020 quarter.

Arcserve’s anguished appraisal of Gartner backup MQ process

It’s not just Rubrik; backup supplier Arcserve also has strong reservations about the validity of Gartner’s Data Centre Backup and Recovery MQ and is considering where else it could find reliably independent and objective analysis of its place in the market.

Yesterday Rubrik publicly revealed its dismay and strong objections to what it sees as unfairness, lack of objectivity and rigour in this MQ process and outcome. It also identified a conflict of interest between itself and the lead analyst, as, it claimed, he had pursued a job opportunity at Rubrik, and been rejected.

This MQ has been hotly awaited, as the last edition came out in 2017 and there have been changes in the suppliers’ organisations, strategies, products and market fortunes since then. The latest edition is due, we understand, on Monday.

It has had a somewhat tortuous progression with a history of departed analysts replaced by a new team, altered criteria, strong supplier objections to provisional ratings by this team, and a delay due to a review process following this.

Changed criteria

When the new Gartner analyst team started work it updated the criteria for inclusion and evaluation in the Data Centre Backup and Recovery MQ. A third supplier, troubled by this MQ’s process, gave us a look at it. The introduction states:

This 2019 “Magic Quadrant for Data Center Backup and Recovery Solutions” is an update to the “Magic Quadrant for Data Center Backup and Recovery Solutions” that last published in July 2017. As the backup and recovery market is continuously changing, we simplified the market definition.

Because The “Magic Quadrant for Data Center Backup and Recovery Solutions” focuses on upper-midmarket and large enterprise organizations we refined the inclusion / exclusion criteria by increasing focus on international presence and size of the protected environment.

The weight given to certain criteria was changed. For example:

  • In Completeness of Vision, the weighting for marketing strategy and sales strategy was changed from high to medium.
  • There was no rating in 2017 for “The soundness and logic of a vendor’s underlying business proposition” or its vertical/industry strategy and geographic strategy; they each received a medium weight in the 2019 MQ.
  • In the Ability to Execute section, a supplier’s marketing execution was downrated from high to medium.

These changes make comparison with the 2017 MQ rankings harder.

Arcserve points

According to Arcserve:

  • The Gartner analysts did not seem to have a good understanding of the market. Like Rubrik it felt that the analysts did not listen to raised objections and submissions, or simply did not understand them.
  • The Arcserve team was told that placement in the MQ had more to do with the Gartner analysts’ feelings than the results of a rigorous methodology.
  • Its growth was not reflected in its positioning, and growth declines by Veritas and Dell EMC were ignored by the analysts as well. The feeling is that the supplier growth area ratings are inconsistent and unfair.

Arcserve feels that the standing of this MQ is compromised by the poor and inconsistent information gathering process and the unfairness of the results. 

Rubrik took its objections to Gartner’s ombudsman, but to little or no avail. So too did Arcserve, again, it felt, with little benefit.

Arcserve cannot use this MQ and the accompanying critical capabilities document to assess how it and other suppliers are doing, because of the problems it identified. If the information gathering and assessing process for the other vendors was the same as that which it (and Rubrik) experienced, then comparisons with their ratings and assessments are simply unreliable.

It thinks that it needs to find a better and more solid source for analysis and reviews of suppliers’ standings in the data centre backup and recovery market.

An alternative

No doubt it is well aware of the Forrester Wave.

Rubrik CEO Bipul Sinha asks: “Could an experienced and objective third party come to a different conclusion [than Gartner]? Naveen Chhabra, an Analyst with 20 years’ experience at IBM and Forrester, recently published the Forrester Wave for Data Resiliency Solutions, placing Rubrik in the Leader Category with the highest possible score for strategy. We now know the answer to that question.”

In this Wave, Rubrik, Cohesity, Veeam and Commvault were all in the leaders’ section, while, in the forthcoming Gartner backup and recovery MQ, Rubrik and Cohesity are visionaries.

The stakes are high. CIOs pay a lot of attention to MQ position and ratings. A supplier’s growth could be hindered by an adverse placement and Rubrik, with its mega-funding, needs high growth to continue.

Arcserve would just like a fair crack of the whip, and doesn’t think Gartner is supplying that.

Gartner viewpoint

We asked Gartner what it thinks about the points made by Arcserve and it replied: “Please see the Gartner Office of Ombudsman’s blog post titled ‘Gartner Research Does Not Please Everyone, All the Time’ in response to your inquiry.”

That states:

Independence and objectivity are paramount attributes of Gartner research, so even the perception of a conflict of interest requires careful examination by the Office of the Ombudsman. Rubrik absolutely did the right thing to contact us and voice its concerns. In this case, it took a considerable amount of time to thoroughly investigate every single complaint raised by Rubrik, ultimately resulting in the assignment of a new lead analyst for this research. We are fully satisfied that Gartner’s rigorous research methodologies, combined with the actions taken by the Research & Advisory leadership team throughout this process, ensures all the vendors in this market segment are accurately—and fairly—evaluated relative to their competitors in the final Magic Quadrant.

Unfortunately, Rubrik does not agree with Gartner’s point of view expressed in the Magic Quadrant, but we respect the company’s right to voice its opinion. We believe Gartner’s opinion on vendor capabilities in this market is accurately expressed in the Magic Quadrant, a rigorous, independent analysis that helps buyers navigate technology purchase decisions.


Rubrik declares war on Gartner over Data Center Backup and Recovery Magic Quadrant

Rubrik CEO Bipul Sinha has attacked Gartner’s about-to-be-published Data Centre Backup and Recovery MQ as “seriously flawed” and produced by an analyst who applied to join Rubrik but was rejected.

This Magic Quadrant hasn’t even been revealed by Gartner yet. But here we have Rubrik launching a pre-emptive strike.

The MQ has been delayed since last year as, first, Gartner lost four analysts to Rubrik and one to Veeam. It had to appoint new analysts for the work and, according to Rubrik, the lead analyst was the person it decided not to hire, leading to a conflict of interest in the MQ’s production. 

Rubrik states: “After his sustained pursuit of a position with Rubrik, we declined to offer Analyst #5 a role, and he expressed his clear disappointment in light of his colleagues’ hirings at Rubrik.”

It appealed to Gartner’s ombudsman about this and there was a review of the MQ, causing a second delay of, we understand, some months. That review effectively came to naught and Rubrik’s position in the MQ, as a visionary we understand, was unchanged. We think it feels strongly that it should be in the Leaders’ quadrant.

Sinha states Rubrik “engaged with the Gartner team over several months to remedy a number of significant issues and concerns to no avail, so we felt it was important that the market, including our customers, potential customers, partners and employees, have the full set of facts that are pertinent in objectively evaluating the information contained in this MQ.“

The 2017 Gartner Analyst Team positioned Rubrik in the lower right Visionary quadrant in that version of the MQ:

2017 Gartner Data Centre Backup and Recovery Magic Quadrant

Sinha thinks Rubrik has been treated unfairly: “Based on objective data, since the 2017 MQ, Rubrik has made significant progress in its business, out-paced all of its competitors, and has had a disproportionately large impact on the Data Center Backup and Recovery market. Yet, this progress has not manifested in any significant movement, as reflected in Rubrik’s position within the 2019 MQ.”

Rubrik states: “Despite a comprehensive 30-page survey submission and 25 formal analyst inquiries over the preceding 12 months, Gartner failed to get many basic facts correct in the draft MQ and Critical Capabilities. In the draft summaries shared with vendors, Rubrik found 17 inaccuracies covering missing functionality, customer adoption, and deployability.

“In some cases, it was clear that the analysts confused us with a smaller competitor in their description of an OEM relationship and in multiple descriptions of how our technology works.”

Blocks & Files cannot recall any vendor going to war with Gartner publicly over its position and standing in a Magic Quadrant. It is a measure of Rubrik’s displeasure and annoyance, and of the reputation of Gartner’s MQs, that Rubrik has taken this unprecedented step.

Fellow supplier Cohesity enters the MQ for the first time and is also positioned as a visionary. It is, we understand, pleased with that recognition and confident its position will improve, given what it sees as the strength of its vision and product roadmap.

We also understand Veeam is pleased to stay in the Leaders’ quadrant.

Note. Here’s a standard MQ explainer: the “magic quadrant” is defined by axes labelled “ability to execute” and “completeness of vision”, and split into four squares tagged “visionaries”, “niche players”, “challengers” and “leaders”.

Hitachi Vantara: our VSP 5000 is the world’s fastest storage array

Hitachi Vantara has beefed up its high-end storage line-up with the VSP 5000 series, which it claims is the world’s fastest enterprise array.

The company can certainly claim bragging rights in IOPS terms but others perform better on latency and bandwidth.

The obvious comparison is with Dell EMC’s PowerMax which has a higher latency (sub-200μs), is slower and has lower capacity limits.

PowerMax 2000 has up to 2.7m random read IOPS and 1PB effective capacity. The VSP 5100 exceeds both, with 4.2m IOPS and 23PB of capacity.

PowerMax 8000 offers up to 15m IOPS, 350GB/sec and 4PB effective capacity. The VSP 5500 has up to 21m IOPS, 148GB/sec and 69PB capacity, making it slower on bandwidth but better in the other categories.

Infinidat’s InfiniBox array delivers 32μs latency for reads and 38μs for writes. This is lower than the VSP 5000’s 70μs. Also, InfiniBox has up to 1.3m IOPS and 15.2GB/sec throughput, giving it lower performance than the VSP 5100 (4.2m IOPS and 25GB/sec) and much lower than the VSP 5500 (21m IOPS and 149GB/sec).

Speccing out the VSP 5000

Hitachi Vantara’s 5000s have much higher capacity than the existing VSP F all-flash and G hybrid Series. They are designed to accelerate and consolidate transactional systems, containerised applications, analytics and mainframe storage workloads, so as to reduce data centre floor space and costs.

Two all-flash 5000 Series arrays – the 5100 and 5500 – provide block and file access storage. They have hybrid flash+disk variants – the 5100H and 5500H – and use an ‘Accelerated Fabric’ for internal connectivity between controllers and drives. Minimum latency is 70μs.

VSP 5000.

Claimed performance is 4.2m IOPS for the 5100 and 5100H, and 21m for the 5500 and 5500H arrays.

The F Series uses SAS commodity SSDs with up to 15TB capacity, or Hitachi’s proprietary Flash Module Drives (FMDs) with a maximum 14TB. NVMe SSDs are introduced with the 5000 Series, ranging up to 30TB in capacity. SAS SSDs and FMDs are also available.

The hybrid variants can use 2.4TB, 10,000rpm, 2.5-inch disk drives or 14TB, 7,200rpm, 3.5-inch disks.

The internal raw capacity limits for the 5100 and 5100H are 23PB. It is 69PB for the 5500 and 5500H. External capacity limits are 287PB for all 5000s.

The 5100s have a single controller block with two controllers and 4 acceleration modules for the internal fabric. The 5500s have 1, 2 or 3 controller blocks, with 4 controllers and 8 acceleration modules per block. These use FPGAs and link to a central fabric infrastructure switch.

The FPGAs offload IO work from the controllers. The fabric uses direct memory access which, in tandem with the FPGAs, enables the high IOPS performance. Read a Hitachi Accelerated Fabric white paper here.

This internal PCIe fabric provides the foundation for a future internal NVMe-oF scheme.

The fabric is based on PCIe Gen 3 x4 lanes and is built with quadruple redundancy to deliver 99.999999 per cent availability, Hitachi Vantara claims. It allows tiering of data across controller blocks for improved price-performance.

Host interfaces supported are 16 and 32Gbits/sec Fibre Channel, 16Gbit/s FICON for mainframes, and 10Gbit/s iSCSI.

The 5000s can be upgraded to use NVMe over Fabrics and storage-class memory (Optane). Neither is supported yet. (PowerMax is shipping both.)

Deduplication

The 5000s have up to a 7:1 data reduction ratio with deduplication and compression. The deduplication method uses machine learning models to optimise deduplication block size and uses either in-line or post-process dedupe to get as much deduplication as possible while reducing the performance impact of dedupe processing.

Up to 5.5:1 data reduction is achievable without measurable reduction in system performance, Hitachi said.
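For a sense of what those ratios mean, effective capacity is roughly usable capacity multiplied by the reduction ratio. The 100TB usable figure below is an assumption for illustration, not a VSP 5000 configuration:

```python
# Effective capacity ~ usable capacity x data reduction ratio (illustrative only).
usable_tb = 100
for ratio in (5.5, 7.0):
    print(f"{ratio}:1 reduction -> ~{usable_tb * ratio:.0f} TB effective")
# 5.5:1 -> ~550 TB effective; 7.0:1 -> ~700 TB effective
```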

We can position the VSP 5000 and F series in a 2D IOPS x bandwidth chart: 

The IOPS numbers are millions of IOPS.

This indicates how the existing VSP F1500 (4.8m IOPS, 48GB/sec) is faster than the 5100 (4.2m IOPS, 25GB/sec.) The 5500 is, of course, vastly more powerful than the F1500.

Existing VSP systems can be virtualised by the 5000 series – a property of the SVOS software – and become resources for the 5000s.

A VSP Cloud Connect Pack adds an HNAS 4000 file storage gateway to the system. It moves data to a public cloud to free up capacity. The moved data is made indexable and searchable.

There is a 100 per cent availability guarantee. Hitachi’s Global-Active Device (GAD) delivers synchronous clustering of applications between VSP 5000 sites that are up to 500 kilometres apart.

SVOS and Ops Center

Hitachi Vantara today also introduced Ops Center – infrastructure management software that uses AI to automate data centre management tasks.

The Storage Virtualization Operating System (SVOS) RF 9 offers scale-out architecture and supports NVMe over Fabrics and Optane drives. Hitachi Vantara said the software incorporates “AI intelligence that adapts to changing conditions to optimise any workload performance, reduce storage costs and predict faults that could disrupt operations.”

The system has NVMe flash, SAS flash and disk drive storage classes. AI techniques and machine learning are used to dynamically promote and demote data to an optimized tier to accelerate applications.

Hitachi says Ops Center can automate up to 70 per cent of data centre workloads and provides “faster, more accurate insights to diagnose system health” and keep operations running.

HPE has announced its HPE XP8 Storage system; it OEMs Hitachi’s VSP arrays under the XP8 brand.

Hitachi Virtual Storage Platform (VSP) 5000 series, Hitachi Ops Center and SVOS RF 9 are available now.

Nutanix integrates with ServiceNow, dives into HPE’s GreenLake, wins HPE OEM deal


Nutanix opened its .NEXT conference in Copenhagen today, announcing an HPE GreenLake deal, its software pre-installed on HPE servers, and integration with ServiceNow for automated incident handling and management.

Nutanix’s Enterprise Cloud OS software, including the AHV hypervisor, is to be offered as part of a fully HPE-managed private cloud with customers paying on a service subscription basis. The agreement is initially focused on simplifying customer deployments of end user computing, databases, and private clouds.

GreenLake Nutanix is available to order across the 50-plus countries where HPE GreenLake is offered. Customers have the option to outsource operations to HPE PointNext services.

The second part of today’s HPE announcement is that Nutanix software – Acropolis, AHV and Prism – will be pre-installed on HPE ProLiant DX servers and shipped from HPE factories as a turnkey solution. 

The focus is on enterprise apps, big data analytics, messaging, collaboration, and dev/test.

ProLiant DX with Nutanix is generally available now. The hardware is supported by HPE and the software by Nutanix.

SimpliVity under pressure

Blocks & Files sees this as a near-OEM deal, not a full one, as the Nutanix software is still visible to customers and there is split support. The announcement comes days after HPE’s adoption of the Datera server SAN into its line-up for resellers. The turnkey Nutanix and Datera offerings both compete with HPE’s own SimpliVity HCI systems.

IDC noted that Cisco’s HyperFlex HCI product overtook SimpliVity to take third place in HCI market revenue share terms in this year’s second quarter. HPE also said SimpliVity sales grew four per cent in its third fiscal 2019 quarter, down from its 25 per cent growth in the previous quarter.

It looks like the SimpliVity product is not motoring fast enough to meet all of HPE’s HCI sales needs.

ServiceNow

Nutanix has integrated its hyperconverged infrastructure system with ServiceNow’s IT Operations Management (ITOM) cloud service. Nutanix and ServiceNow customers can automate incident handling with ITOM automatically discovering Nutanix systems data: HCI clusters, individual hosts, virtual machine (VM) instances, storage pools, configuration parameters and application-centric metrics. 

ServiceNow users can provision, manage and scale applications via Nutanix Calm blueprints, published as service catalog items in the Now Platform.

ServiceNow’s ITSM service is linked to Nutanix’ Prism Pro management facility and its X-Play automation engine. There is an X-Play action for ServiceNow so IT managers can notify their teams of incidents and alerts in the Nutanix environment, such as a host losing power or server running out of capacity.

Nutanix says incident handling is a mundane part of IT department activity, and automating it reduces the time spent on servicing incidents and issues.

Rajiv Mirani, Nutanix Cloud Platforms CTO, issued a quote: “By integrating Nutanix software with ServiceNow’s leading digital workflow solutions, we are making it easier to deliver end-to-end automation of infrastructure and application workflows so that private cloud can deliver the same simplicity and flexibility as public cloud services.”

You can read a blog about the Nutanix ServiceNow integration. This ServiceNow capability is available now, through platform discovery for Acropolis and a Calm plug-in in the ServiceNow Store.


Backblaze 7.0


Backblaze today launched Backblaze Cloud Backup 7.0. It offers the ability to keep updated, changed and deleted files in its cloud-stored backups forever.

Backblaze is a cloud backup and storage service for businesses and consumers. It has 800PB under management, integrates with Veeam, and pitches itself as being cheaper than the public cloud giants.

For example, its B2 Cloud Storage costs $0.005/GB/month compared to Amazon S3’s $0.021/GB/month, Azure’s $0.018/GB/month and Google Cloud’s $0.020/GB/month.

At time of writing, the company quotes Carbonite at $288/year for 100GB of cloud backup, CrashPlan Small Business at $120/year for unlimited data, IDrive at $99.50/year for 250GB and Backblaze Cloud Backup at $60/year for unlimited data.

Backblaze Cloud Backup 7.0 extends the recover-from-delete/update function from the default 30 days to one year for $2/month, or forever for $4/month plus $0.005/GB/month for versions modified on a customer’s computer more than one year ago.
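A worked example of how the forever option adds up, assuming a hypothetical 200GB of versions modified more than a year ago:

```python
# Forever version history cost, using the rates quoted above. The 200 GB of
# year-old versions is a hypothetical figure for illustration.
forever_base = 4.00              # $/month for the forever option
old_versions_gb = 200            # versions modified more than 1 year ago (assumed)
monthly_cost = forever_base + old_versions_gb * 0.005
print(f"${monthly_cost:.2f}/month")   # -> $5.00/month in this example
```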

Customers see a version history that extends back in time for 12 months or as long as they have been a Backblaze customer.

Backblaze 7.0 also uploads files better. The maximum packet size has increased from 30MB to 100MB, which enables the app to transmit data more efficiently by better leveraging threading. This smooths upload performance, reduces sensitivity to latency, and leads to smaller data structures. It puts a smaller load on the source system.

Customers can sign into Backblaze using Office 365 credentials with single sign-on support. MacOS Catalina is also supported.