
Arcserve’s anguished appraisal of Gartner backup MQ process

It’s not just Rubrik; backup supplier Arcserve also has strong reservations about the validity of Gartner’s Data Centre Backup and Recovery MQ and is considering where else it could find reliably independent and objective analysis of its place in the market.

Yesterday Rubrik publicly revealed its dismay and strong objections to what it sees as unfairness, lack of objectivity and rigour in this MQ process and outcome. It also identified a conflict of interest between itself and the lead analyst, as, it claimed, he had pursued a job opportunity at Rubrik, and been rejected.

This MQ has been hotly awaited, as the last edition came out in 2017 and there have been changes in the suppliers' organisations, strategies, products and market fortunes since then. The latest edition is due, we understand, on Monday.

Its progression has been somewhat tortuous: the original analysts departed and were replaced by a new team, the criteria were altered, suppliers objected strongly to the new team's provisional ratings, and publication was delayed by the review process that followed.

Changed criteria

When the new Gartner analyst team started work it updated the criteria for inclusion and evaluation in the Data Centre Backup and Recovery MQ. A third supplier, troubled by this MQ's process, gave us a look at the document. The introduction states:

This 2019 “Magic Quadrant for Data Center Backup and Recovery Solutions” is an update to the “Magic Quadrant for Data Center Backup and Recovery Solutions” that last published in July 2017. As the backup and recovery market is continuously changing, we simplified the market definition.

Because The “Magic Quadrant for Data Center Backup and Recovery Solutions” focuses on upper-midmarket and large enterprise organizations we refined the inclusion / exclusion criteria by increasing focus on international presence and size of the protected environment.

The weight given to certain criteria was changed. For example:

  • In Completeness of Vision, the weighting for marketing strategy and sales strategy was lowered from high to medium.
  • There was no rating in 2017 for “The soundness and logic of a vendor’s underlying business proposition” or its vertical/industry strategy and geographic strategy; they each received a medium weight in the 2019 MQ.
  • In the Ability to Execute section, a supplier’s marketing execution was downrated from high to medium.

These changes make comparison with the 2017 MQ rankings harder.

Arcserve points

According to Arcserve:

  • The Gartner analysts did not seem to have a good understanding of the market. Like Rubrik, Arcserve felt that the analysts did not listen to its objections and submissions, or simply did not understand them.
  • The Arcserve team was told that placement in the MQ had more to do with the Gartner analysts’ feelings than the results of a rigorous methodology.
  • Its growth was not reflected in its positioning, and growth declines by Veritas and Dell EMC were ignored by the analysts as well. The feeling is that the supplier growth area ratings are inconsistent and unfair.

Arcserve feels that the standing of this MQ is compromised by the poor and inconsistent information gathering process and the unfairness of the results. 

Rubrik took its objections to Gartner’s ombudsman, but to little or no avail. So too did Arcserve, again, it felt, with little benefit.

Arcserve cannot use this MQ and the accompanying critical capabilities document to assess how it and other suppliers are doing, because of the problems it identified. If the information gathering and assessment process for them was the same as that which it (and Rubrik) experienced, then comparisons with the other vendors’ ratings and assessments are simply unreliable.

It thinks that it needs to find a better and more solid source for analysis and reviews of suppliers’ standings in the data centre backup and recovery market.

An alternative

No doubt it is well aware of the Forrester Wave.

Rubrik CEO Bipul Sinha asks: “Could an experienced and objective third party come to a different conclusion [than Gartner]? Naveen Chhabra, an Analyst with 20 years’ experience at IBM and Forrester, recently published the Forrester Wave for Data Resiliency Solutions, placing Rubrik in the Leader Category with the highest possible score for strategy. We now know the answer to that question.”

In this Wave, Rubrik, Cohesity, Veeam and Commvault were all in the Leaders’ section, while in the forthcoming Gartner backup and recovery MQ, Rubrik and Cohesity are visionaries.

The stakes are high. CIOs pay a lot of attention to MQ positions and ratings. A supplier’s growth could be hindered by an adverse placement, and Rubrik, with its mega-funding, needs high growth to continue.

Arcserve would just like a fair crack of the whip, and doesn’t think Gartner is supplying that.

Gartner viewpoint

We asked Gartner what it thinks about the points made by Arcserve and it replied: “Please see the Gartner Office of Ombudsman’s blog post titled “Gartner Research Does Not Please Everyone, All the Time” in response to your inquiry.”

That states:

Independence and objectivity are paramount attributes of Gartner research, so even the perception of a conflict of interest requires careful examination by the Office of the Ombudsman. Rubrik absolutely did the right thing to contact us and voice its concerns. In this case, it took a considerable amount of time to thoroughly investigate every single complaint raised by Rubrik, ultimately resulting in the assignment of a new lead analyst for this research. We are fully satisfied that Gartner’s rigorous research methodologies, combined with the actions taken by the Research & Advisory leadership team throughout this process, ensures all the vendors in this market segment are accurately—and fairly—evaluated relative to their competitors in the final Magic Quadrant.

Unfortunately, Rubrik does not agree with Gartner’s point of view expressed in the Magic Quadrant, but we respect the company’s right to voice its opinion. We believe Gartner’s opinion on vendor capabilities in this market is accurately expressed in the Magic Quadrant, a rigorous, independent analysis that helps buyers navigate technology purchase decisions.


Rubrik declares war on Gartner over Data Center Backup and Recovery Magic Quadrant

Rubrik CEO Bipul Sinha has attacked Gartner’s about-to-be-published Data Centre Backup and Recovery MQ as “seriously flawed” and produced by an analyst who applied to join Rubrik but was rejected.

This Magic Quadrant hasn’t even been revealed by Gartner yet. But here we have Rubrik launching a pre-emptive strike.

The MQ has been delayed since last year as, first, Gartner lost four analysts to Rubrik and one to Veeam. It had to appoint new analysts for the work and, according to Rubrik, the lead analyst was the person it decided not to hire, leading to a conflict of interest in the MQ’s production. 

Rubrik states: “After his sustained pursuit of a position with Rubrik, we declined to offer Analyst #5 a role, and he expressed his clear disappointment in light of his colleagues’ hirings at Rubrik.”

It appealed to Gartner’s ombudsman about this and there was a review of the MQ, causing a second delay of, we understand, some months. That review effectively came to naught and Rubrik’s position in the MQ, as a visionary we understand, was unchanged. We think it feels strongly that it should be in the Leaders’ quadrant.

Sinha states Rubrik “engaged with the Gartner team over several months to remedy a number of significant issues and concerns to no avail, so we felt it was important that the market, including our customers, potential customers, partners and employees, have the full set of facts that are pertinent in objectively evaluating the information contained in this MQ.”

The 2017 Gartner analyst team positioned Rubrik in the lower-right Visionary quadrant in that version of the MQ:

2017 Gartner Data Centre Backup and Recovery Magic Quadrant

Sinha thinks Rubrik has been treated unfairly: “Based on objective data, since the 2017 MQ, Rubrik has made significant progress in its business, out-paced all of its competitors, and has had a disproportionately large impact on the Data Center Backup and Recovery market. Yet, this progress has not manifested in any significant movement, as reflected in Rubrik’s position within the 2019 MQ.”

Rubrik states: “Despite a comprehensive 30-page survey submission and 25 formal analyst inquiries over the preceding 12 months, Gartner failed to get many basic facts correct in the draft MQ and Critical Capabilities. In the draft summaries shared with vendors, Rubrik found 17 inaccuracies covering missing functionality, customer adoption, and deployability.

“In some cases, it was clear that the analysts confused us with a smaller competitor in their description of an OEM relationship and in multiple descriptions of how our technology works.”

Blocks & Files cannot recall any vendor going to war with Gartner publicly over its position and standing in a Magic Quadrant. It is a measure of Rubrik’s displeasure and annoyance, and of the reputation of Gartner’s MQs, that Rubrik has taken this unprecedented step.

Fellow supplier Cohesity enters the MQ for the first time and is also positioned as a visionary. It is, we understand, pleased with that recognition and confident its position will improve, given the strength of its vision and product roadmap.

We also understand Veeam is pleased to stay in the Leaders’ quadrant.

Note. Here’s a standard MQ explainer: the “magic quadrant” is defined by axes labelled “ability to execute” and “completeness of vision”, and split into four squares tagged “visionaries”, “niche players”, “challengers” and “leaders”.

Hitachi Vantara: our VSP 5000 is the world’s fastest storage array

Hitachi Vantara has beefed up its high-end storage line-up with the VSP 5000 series, which it claims is the world’s fastest enterprise array.

The company can certainly claim bragging rights in IOPS terms but others perform better on latency and bandwidth.

The obvious comparison is with Dell EMC’s PowerMax, which has a higher latency (sub-200μs), is slower and has lower capacity limits.

PowerMax 2000 has up to 2.7m random read IOPS and 1PB effective capacity. The VSP 5100 exceeds both, with 4.2m IOPS and 23PB of capacity.

PowerMax 8000 offers up to 15m IOPS, 350GB/sec and 4PB effective capacity. The VSP 5500 has up to 21m IOPS, 148GB/sec and 69PB capacity, making it slower on bandwidth but better in the other categories.

Infinidat’s Infinibox array delivers 32μs latency for reads and 38μs for writes. This is lower than the VSP 5000’s 70μs. Also, Infinibox has up to 1.3m IOPS and 15.2GB/sec throughput, giving it lower performance than the VSP 5100 (4.2m IOPS and 25GB/sec) and much lower than the VSP 5500 (21m IOPS and 149GB/sec).

Speccing out the VSP 5000

Hitachi Vantara’s 5000s have much higher capacity than the existing VSP F all-flash and G hybrid Series. They are designed to accelerate and consolidate transactional systems, containerised applications, analytics and mainframe storage workloads, so as to reduce data centre floor space and costs.

Two all-flash 5000 Series arrays – the 5100 and 5500 – provide block and file access storage. They have hybrid flash+disk variants – the 5100H and 5500H – and use an ‘Accelerated Fabric’ for internal connectivity between controllers and drives. Minimum latency is 70μs.

VSP 5000.

Claimed performance is 4.2m IOPS for the 5100 and 5100H, and 21m for the 5500 and 5500H arrays.

The F Series uses SAS commodity SSDs with up to 15TB capacity, or Hitachi’s proprietary Flash Module Drives (FMDs) with a maximum 14TB. NVMe SSDs are introduced with the 5000 Series, ranging up to 30TB in capacity. SAS SSDs and FMDs are also available.

The hybrid variants can use 2.4TB, 10,000rpm, 2.5-inch disk drives or 14TB, 7,200rpm, 3.5-inch disks.

The internal raw capacity limit for the 5100 and 5100H is 23PB. It is 69PB for the 5500 and 5500H. External capacity limits are 287PB for all 5000s.

The 5100s have a single controller block with two controllers and four acceleration modules for the internal fabric. The 5500s have one, two or three controller blocks, with four controllers and eight acceleration modules per block. These use FPGAs and link to a central fabric infrastructure switch.

The FPGAs offload IO work from the controllers. The fabric uses direct memory access which, in tandem with the FPGAs, enables the high IOPS performance. Read a Hitachi Accelerated Fabric white paper here.

This internal PCIe fabric provides the foundation for a future internal NVMe-oF scheme.

The fabric is based on PCIe Gen 3 x4 lanes and is built with quadruple redundancy to deliver 99.999999 per cent availability, Hitachi Vantara claims. It allows tiering of data across controller blocks for improved price-performance.

Host interfaces supported are 16 and 32Gbits/sec Fibre Channel, 16Gbit/s FICON for mainframes, and 10Gbit/s iSCSI.

The 5000s can be upgraded to use NVMe over Fabrics and storage-class memory (Optane), but neither is supported yet. (PowerMax is shipping both.)

Deduplication

The 5000s offer up to a 7:1 data reduction ratio with deduplication and compression. The deduplication method uses machine learning models to optimise the deduplication block size and chooses in-line or post-process dedupe, extracting as much deduplication as possible while limiting the performance impact of dedupe processing.

Up to 5.5:1 data reduction is achievable without measurable reduction in system performance, Hitachi said.

We can position the VSP 5000 and F series in a 2D IOPS x bandwidth chart: 

The IOPS numbers are millions of IOPS.

This indicates how the existing VSP F1500 (4.8m IOPS, 48GB/sec) is faster than the 5100 (4.2m IOPS, 25GB/sec). The 5500 is, of course, vastly more powerful than the F1500.

Existing VSP systems can be virtualized by the 5000 series – a property of the SVOS software – and become resources for the 5000s.

A VSP Cloud Connect Pack adds an HNAS 4000 file storage gateway to the system. It moves data to a public cloud to free up capacity. The moved data is made indexable and searchable.

There is a 100 per cent availability guarantee. Hitachi’s Global-Active Device (GAD) delivers synchronous clustering of applications between VSP 5000 sites that are up to 500 kilometres apart.

SVOS and Ops Center

Hitachi Vantara today also introduced Ops Center – infrastructure management software that uses AI to automate data centre management tasks.

The Storage Virtualization Operating System (SVOS) RF 9 offers scale-out architecture and supports NVMe over Fabrics and Optane drives. Hitachi Vantara said the software incorporates “AI intelligence that adapts to changing conditions to optimise any workload performance, reduce storage costs and predict faults that could disrupt operations.”

The system has NVMe flash, SAS flash and disk drive storage classes. AI techniques and machine learning are used to dynamically promote and demote data to an optimized tier to accelerate applications.
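Hitachi has not detailed how SVOS decides what to move, but the general shape of access-frequency tiering can be sketched in a few lines of Python. The tier names, thresholds and extent tracking below are illustrative assumptions, not Hitachi's algorithm.

```python
# Illustrative sketch of access-frequency tiering; not Hitachi's SVOS code.
# Tier names and thresholds are assumptions for the example.
from collections import defaultdict

TIERS = ["nvme_flash", "sas_flash", "disk"]  # fastest to slowest

class TieringEngine:
    def __init__(self, promote_threshold=100, demote_threshold=10):
        self.promote_threshold = promote_threshold  # accesses per interval
        self.demote_threshold = demote_threshold
        self.access_counts = defaultdict(int)       # extent id -> access count
        self.placement = {}                         # extent id -> tier index

    def record_access(self, extent_id):
        self.access_counts[extent_id] += 1
        self.placement.setdefault(extent_id, 1)     # new extents land on SAS flash

    def rebalance(self):
        """Promote hot extents towards NVMe flash, demote cold ones towards disk."""
        for extent_id, count in self.access_counts.items():
            tier = self.placement[extent_id]
            if count >= self.promote_threshold and tier > 0:
                self.placement[extent_id] = tier - 1
            elif count <= self.demote_threshold and tier < len(TIERS) - 1:
                self.placement[extent_id] = tier + 1
        self.access_counts.clear()                  # start a new measurement interval
```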

Hitachi says Ops Center can automate up to 70 per cent of data centre workloads and provides “faster, more accurate insights to diagnose system health” and keep operations running.

HPE has announced its HPE XP8 Storage system; it OEMs Hitachi’s VSP arrays under the XP8 brand.

Hitachi Virtual Storage Platform (VSP) 5000 series, Hitachi Ops Center and SVOS RF 9 are available now.

Nutanix integrates with ServiceNow, dives into HPE’s GreenLake, wins HPE OEM deal


Nutanix opened its .NEXT conference in Copenhagen today, announcing an HPE GreenLake deal, its software pre-installed on HPE servers, and integration with ServiceNow for automated incident handling and management.

Nutanix’s Enterprise Cloud OS software, including the AHV hypervisor, is to be offered as part of a fully HPE-managed private cloud with customers paying on a service subscription basis. The agreement is initially focused on simplifying customer deployments of end user computing, databases, and private clouds.

GreenLake Nutanix is available to order across the 50-plus countries where HPE GreenLake is offered. Customers have the option to outsource operations to HPE PointNext services.

The second part of today’s HPE announcement is that Nutanix software – Acropolis, AHV and Prism – will be pre-installed on HPE ProLiant DX servers and shipped from HPE factories as a turnkey solution. 

The focus is on enterprise apps, big data analytics, messaging, collaboration, and dev/test.

ProLiant DX with Nutanix is generally available now. The hardware is supported by HPE and the software by Nutanix.

SimpliVity under pressure

Blocks & Files sees this as a near-OEM deal, not a full one, as the Nutanix software is still visible to customers and there is split support. The announcement comes days after HPE’s adoption of the Datera server SAN into its line-up for resellers. The turnkey Nutanix and Datera offerings both compete with HPE’s own SimpliVity HCI systems.

IDC noted that Cisco’s HyperFlex HCI product overtook SimpliVity to take third place in HCI market revenue share terms in this year’s second quarter. HPE also said SimpliVity sales grew four per cent in its third fiscal 2019 quarter, down from its 25 per cent growth in the previous quarter.

It looks like the SimpliVity product is not motoring fast enough to meet all of HPE’s HCI sales needs.

ServiceNow

Nutanix has integrated its hyperconverged infrastructure system with ServiceNow’s IT Operations Management (ITOM) cloud service. Nutanix and ServiceNow customers can automate incident handling with ITOM automatically discovering Nutanix systems data: HCI clusters, individual hosts, virtual machine (VM) instances, storage pools, configuration parameters and application-centric metrics. 

ServiceNow users can provision, manage and scale applications via Nutanix Calm blueprints, published as service catalog items in the Now Platform.

ServiceNow’s ITSM service is linked to Nutanix’ Prism Pro management facility and its X-Play automation engine. There is an X-Play action for ServiceNow so IT managers can notify their teams of incidents and alerts in the Nutanix environment, such as a host losing power or a server running out of capacity.

Nutanix says incident handling is a mundane part of IT department activity, and automating it reduces the time spent servicing incidents and issues.

Rajiv Mirani, Nutanix Cloud Platforms CTO, issued a quote: “By integrating Nutanix software with ServiceNow’s leading digital workflow solutions, we are making it easier to deliver end-to-end automation of infrastructure and application workflows so that private cloud can deliver the same simplicity and flexibility as public cloud services.”

You can read a blog about the Nutanix ServiceNow integration. This ServiceNow capability is available now, through platform discovery for Acropolis and a Calm plug-in in the ServiceNow Store.


Backblaze 7.0


Backblaze today launched Backblaze Cloud Backup 7.0. It offers the ability to keep updated, changed and deleted files in its cloud-stored backups forever.

Backblaze is a cloud backup and storage service for businesses and consumers. It has 800PB under management, integrates with Veeam, and pitches itself as being cheaper than the public cloud giants.

For example, its B2 Cloud Storage costs $0.005/GB/month compared to Amazon S3’s $0.021/GB/month, Azure’s $0.018/GB/month and Google Cloud’s $0.020/GB/month.

At time of writing, the company quotes Carbonite at $288/year for 100GB of cloud backup, Crashplan Small business at $120/year for unlimited data, iDrive at $99.50/year for 250GB and Backblaze Cloud Backup at $60/year for unlimited data.

Backblaze 7.0 extends the recover-from-delete/update function from the default 30 days to one year for $2/month, or forever for $4/month plus $0.005/GB/month for versions modified on a customer’s computer more than one year ago.
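As a rough illustration of the new pricing, here is how the monthly add-on cost works out using the rates quoted above; the 500GB of year-old versions in the example is a made-up figure.

```python
# Rough cost sketch for Backblaze's extended version history, based on the
# rates quoted above. The volume of >1-year-old versions is a made-up example.
def version_history_cost(plan, old_version_gb=0.0):
    """Monthly add-on cost for extended version history.

    plan: "1year" ($2/month) or "forever" ($4/month plus $0.005/GB/month
    for versions modified more than a year ago).
    """
    if plan == "1year":
        return 2.00
    if plan == "forever":
        return 4.00 + 0.005 * old_version_gb
    raise ValueError("unknown plan")

# Example: forever version history with 500GB of year-old versions.
print(version_history_cost("forever", old_version_gb=500))  # 6.5 dollars/month
```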

Customers see a version history that extends back in time for 12 months or as long as they have been a Backblaze customer.

Backblaze 7.0 also uploads files better. The maximum packet size has increased from 30MB to 100MB, which enables the app to transmit data more efficiently by better leveraging threading. This smooths upload performance, reduces sensitivity to latency, and leads to smaller data structures. It puts a smaller load on the source system.
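Backblaze has not published the client internals, but the general pattern – cutting a file into larger chunks and uploading them from a thread pool, so fewer requests carry the same amount of data – can be sketched as below. The send_chunk function, chunk size constant and worker count are placeholders, not Backblaze's code.

```python
# Generic sketch of chunked, threaded uploading. send_chunk() is a placeholder,
# not Backblaze's API; the point is that bigger chunks mean fewer requests
# (and less per-request overhead) for the same amount of data.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 100 * 1024 * 1024  # 100MB, the new maximum packet size

def send_chunk(index, data):
    # Placeholder for the real network call.
    print(f"uploaded chunk {index}: {len(data)} bytes")

def upload_file(path, workers=4):
    chunks = []
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            chunks.append(data)
    # Upload chunks in parallel; larger chunks keep each thread busy longer.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for i, data in enumerate(chunks):
            pool.submit(send_chunk, i, data)
```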

Customers can sign into Backblaze using Office 365 credentials with single sign-on support. MacOS Catalina is also supported.


Micron: HPC users should drop disk drives for ‘faster, more reliable’ QLC SSDs

QLC NAND memory is more reliable than HDDs and organisations that need storage for read-intensive applications should switch to these next generation SSDs. So says Micron.

Appropriate QLC flash workloads include processing data lakes, machine and deep learning, real-time analytics and Hadoop-style applications, plus Ceph, SQL and noSQL business intelligence and media streaming by content delivery networks, according to Micron.

Cloud services, vSAN capacity storage, and financial, regulatory and compliance storage could also get a QLC SSD boost, the computer memory maker said.

Micron notes these applications are read-intensive and require fast access. They do not require much over-writing, which shortens the life of SSDs, as their working life (endurance) is defined by the number of write cycles they support.

Micron’s interest here is that it has launched QLC (4bits/cell) SSDs which have a lower $/TB cost than current TLC (3bits/cell) drives – but a lower endurance. For SSDs this is measured in drive writes per day (DWPD). By contrast, for disk drives the number of terabytes written per year over the warranty period is quoted.

QLC SSD’s lower endurance means they are unsuitable for write-intensive workloads but they represent an alternative to disks for fast access read-intensive work. Kent Smith, a Micron SSD marketer, showed the best fit workload chart for its SSDs on this basis at the Flash Memory Summit 2019.

A slide from Micron’s FMS19 presentation

Smith also compared disk drives from two unnamed manufacturers and Micron’s 5210 ION QLC SSD. He devised a DWPD rating for the disk drives to enable a like-for-like comparison between the two storage technologies. The disk DWPD calculation was simple: divide the TB/year number by 365 to get terabytes written per day, then divide by the drive’s capacity. Here is the results table he presented:

Slide extract from Micron’s FMS19 presentation.

Micron says its 5210 SSD has a DWPD rating of 0.8. That means it has a 4x to 7x advantage over the HDDs in the table above. For example, vendor C’s 14TB drive has a TB/year rating of 550, which means a 0.11 DWPD value. Since the 5210’s DWPD rating is 0.8, that is slightly more than a 7x advantage in favour of the 5210.
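The arithmetic is easy to reproduce. The sketch below converts a disk drive's TB/year rating into a DWPD figure, and reverses the calculation for the SSD; the 7.68TB capacity used for the 5210 is an assumption for illustration, as the article only quotes the DWPD and TB/year figures.

```python
# Convert an HDD's TB/year workload rating into a DWPD-style figure, as in
# Smith's comparison. Drive capacities are from the text; the arithmetic is
# the only thing being illustrated here.
def hdd_dwpd(tb_per_year, capacity_tb):
    return (tb_per_year / 365) / capacity_tb

print(round(hdd_dwpd(550, 14), 2))   # vendor C's 14TB drive -> ~0.11 DWPD

# And the other direction: an SSD's DWPD rating expressed as TB/year.
def ssd_tb_per_year(dwpd, capacity_tb):
    return dwpd * capacity_tb * 365

# A 7.68TB 5210 at 0.8 DWPD works out to roughly 2,242 TB/year (the capacity
# is an assumption here; the article quotes only the DWPD and TB/year numbers).
print(round(ssd_tb_per_year(0.8, 7.68)))  # ~2243
```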

[N.B. Seagate’s 14TB Exos disk drive has a 550TB/year workload rating. Western Digital’s 8TB, 10TB and 12TB Gold disk drives also have a 550TB/year rating. We deduce that vendor B is Western Digital and vendor C  is Seagate. These drives are warranted for five years.]

Why it matters

To show that this is important Micron quotes a Seagate disk drive datasheet: “Maximum rate of <550TB/YR (5-year warranty). Workloads exceeding the annualized rate may degrade the drive MTTF and impact reliability.”

That means HDD failure rates increase when HDDs hit their maximum workload rating – an HDD metric of total throughput. This applies to reads and writes whereas SSDs only wear when writing.

Counter arguments

Competing SSD manufacturer Samsung thinks that DWPD is no longer fit for purpose as a useful measure of SSD endurance. It argues that terabytes written (TBW) is a more appropriate way to measure endurance.

Samsung asserts that, as SSD capacities increase, DWPD requirements decrease. Content is rewritten fewer times on larger capacity SSDs and so it is inappropriate to use the same DWPD rating as lower capacity SSDs.

Blocks & Files thinks this is not relevant to Micron’s argument. The table above provides a 2,242TB/year workload rating for its SSD’s data writes, which is 4x higher than the disk drives. But the point Micron makes is that its SSD can specifically sustain more read work than the disk drives because their reliability deteriorates after the TB/year number is exceeded.

Consultant Hubbert Smith is not a fan of QLC SSDs, asserting in a recent Blocks & Files column that they are “a whole new type of ugly”. Using QLC SSDs for cold storage is a bad idea, he said, but he excludes system vendors using these SSDs to drive to lower cost/GB for storage generally. “In a system this is likely workable,” he writes.

Micron’s argument withstands comparison with Samsung’s and Smith’s views. In a nutshell, customers can read data in greater amounts and more frequently from the 5210 SSD than they can from HDDs, and at rates that exceed the disk drive’s warranted performance.

So, Micron wants us to understand – SSDs rule, OK! Eat your heart out, disk drive fans.

HPE inserts Datera into integrated server SAN

HPE has assembled a server SAN for resellers that integrates its servers and networking with Datera scale-out storage software.

Startup Datera develops enterprise-class clustered block and object storage software, which it pitches as an alternative to hyperconverged infrastructure (HCI), such as Nutanix and HPE’s SimpliVity. The company also markets the software as a fast and flexible alternative to traditional external SAN array storage, a category that includes HPE’s 3PAR and Primera arrays.

The HPE Datera Cloud Kit is orderable from its resellers with a single SKU and includes:

  • 4 x ProLiant DL360 Gen 10 server nodes, each with 19.2TB or 38.4TB SATA SSDs
  • 76.8TB or 153.6TB raw capacity
  • 2 x M-Series SN2100M switches for the Datera backend network
  • Datera SW subscription license
  • 40/50Gb or 100Gb DAC backend network cables
  • Smart Fabric Orchestrator license for network configuration and optimisation
  • Racking hardware for servers and switches

Cloud Kit supports containers via Kubernetes, virtual machines, and bare metal servers.

HPE Datera Cloud Kit diagram. iSCSI front-end cabling is not supplied.

HCI scalability

The HPE partnership provides Datera with big player credibility and a means to knock on enterprise doors. Datera CTO Hal Woods told us in a telephone briefing that enterprises want to buy from established suppliers.

According to Woods, customers tell Datera that HCI systems are difficult to manage after they grow to a few hundred TBs of capacity. Datera is part of a disaggregated alternative to HCI that can grow past that capacity level and is easier to manage, he said.

The Datera HPE Cloud Kit enables customers to expand their environment, with their choice of servers and storage media. Datera’s software automatically incorporates new nodes into the cluster and redistributes data to maximise performance.

It uses a lock-less, time-based protocol for cluster node communications and the cluster can respond in 37 microsecs to an IO request. Network and media access times are added to this, with 187 microsecs mentioned as a total latency figure. This is without using NVMe over Fabrics.

The software can automatically incorporate Optane Apache Pass DIMMs, assigning them for use as storage volumes by applications allotted the best storage service in Datera’s policy-driven scheme.

Datera CEO Guy Churchward gave out a quote, saying Datera notched up 500 per cent revenue growth in the first half of 2019, “fuelled by our go-to-market traction with HPE. The future looks even brighter for Datera, given IDC’s recent prediction that server-based storage will grow at seven times the rate of traditional storage technologies moving forward. Together with our reseller partners, the new Cloud Kit will drive trial and traction to capture that growth at the high end.”

Marty Lans, HPE GM, storage technologies and connectivity, said in a statement: “The move to the software-defined data centre has driven increased demand from HPE’s customer base for the Datera platform and the simplicity, automation, and resiliency it delivers.”

Woods told us that Datera’s product development roadmap includes looking at NVMe over Fabrics, with NVMe/TCP favoured at present. The company is developing exposure to automation players with APIs. Datera will also integrate with HPE’s InfoSight management and analytics offering and GreenLake services.

Hyperscalers will save disk drive makers from death by SSD

Will SSDs kill the enterprise disk drive business? Hyperscalers will continue to use disks out to 2035, if Seagate’s revenue forecasts are accurate, but enterprises will switch en masse to SSDs.

In his presentation at the Flash Memory Summit 2019 in Santa Clara, consultant David McIntyre cited Wikibon research showing enterprise disk drive use tailing off by 2026.

Wikibon Annual Flash Controller update – slide extract from McIntyre’s presentation at FMS 2019

Most enterprise disk drive capacity usage is in data centres with nearline 7,200rpm 3.5-inch capacity drives – 8TB, 10TB and so forth. Wikibon forecasts this falling to less than five per cent of the revenue for capacity storage by 2026. SSDs will then take slightly more than 85 per cent of capacity storage revenues, effectively killing off disk drive use in enterprise data centres.

From the consulting room

Blocks & Files asked three consultants how they thought enterprise data disk drive use would change with the advent of SSDs.

GigaOm consultant Enrico Signoretti told us: “My opinion is that, if there isn’t something really big that we don’t know of yet, HDDs will remain for a long time but will become more of a niche product, only for very big infrastructures that can manage them… more or less like tapes now.

Hyperscaler data centres are an example of such big infrastructures.

“Simply put, HDD-based infrastructures will have a TCO that is not really compatible with enterprise operations (Cattle vs Pets kind of reasoning). In the research I’m doing right now on object storage, half of the vendors have (or are planning to release) all-flash object stores for enterprise use cases…”

That’s pretty supportive of the idea that SSD growth will drive down enterprise disk drive use to tape drive level.

Storage architect Chris Evans said: “I think [the scenario] makes sense as a simple cost level.”

He sees more than simple cost affecting the issue: “There’s a “slot cost” in using SSDs or HDDs that includes power, space, cooling, maintenance etc… For example, a 16TB Seagate NL drive will consume 5W idle, and up to 10 watts max. Compare this to a 2TB QLC Intel 660p M.2 drive with 0.1W active and 0.04W idle. Even if you multiply the SSD by a factor of 8, it is still much more power efficient.”  
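Evans's point is simple to check with the figures he quotes; a back-of-envelope watts-per-terabyte comparison looks like this (the power and capacity numbers below are his, the arithmetic is ours).

```python
# Back-of-envelope watts-per-terabyte comparison using the figures Evans quotes.
hdd_capacity_tb, hdd_active_w = 16, 10          # 16TB nearline drive, max power
ssd_capacity_tb, ssd_active_w = 2, 0.1          # 2TB QLC M.2 drive, active power

ssds_needed = hdd_capacity_tb // ssd_capacity_tb        # 8 drives to match capacity
print(hdd_active_w / hdd_capacity_tb)                   # 0.625 W/TB for the HDD
print(ssds_needed * ssd_active_w / hdd_capacity_tb)     # 0.05  W/TB for the SSDs
```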

More peripheral componentry is needed for disk drives and disk drives are slower to access than SSDs. “The only ways to solve HDD performance are to either go multi-actuator or add more cache… Both of these solutions will drive up HDD $/GB cost and so make SSDs more attractive earlier than we think.  This pushes HDDs towards a really low-use status.” 

Evans suggests that if SSDs get more traction at the protocol level to make them usable by hyperscalers, this again may tip the scales towards SSDs.

Like Signoretti, Evans is comfortable with the idea that SSDs will eventually diminish enterprise disk drive use to a low level.

Hypo or Hyper?

Rob Peglar of Advanced Computation and Storage told us: “Eventually, SSDs will ‘takeover’ all but a trivial part of consumption and usage in enterprise.”

But not so with the hyperscalers; he cites Amazon, Apple, Facebook, Google, Microsoft, Baidu, Alibaba, and Tencent, which currently buy around half of all disk drives. This amounts to 400EB of capacity per year – and several million disk drives each quarter.

They will carry on buying. Hyperscalers need an acceptable time to the first byte and good streaming speed for user-requested video, image and photo files. Disk is good enough and cheap. Tape is cheaper but too slow. SSD is too fast for the need and too expensive.

Peglar concludes: “SSD will, in our collective lifetimes, not replace HDD entirely. It will in enterprise, to a great extent, but not in hyperscale, which is where the majority of the business lies today and onward (perhaps increasingly so) into the future.”

So the consensus is that SSDs will kill the enterprise disk drive business but not the huge hyperscale HDD business. The hyperscalers also buy tape drives and libraries by the truck-load. Their capacity requirements are huge and they need to maximise price/performance for different storage use cases.

WD and Seagate mull 10-platter HDDs: Stopgap or BFF?

HAMR and MAMR disk drives could be delayed until 2022 with 10-platter, 20TB conventionally recorded drives coming in 2021.

Trendfocus, a research firm specialising in data storage, suggests the launch of conventional technology in 18TB capacities in the second half of 2020 could delay the adoption of shingled magnetic recording (SMR) until later in 2020. Such SMR drives would have a 20TB capacity.

Trendfocus said SMR nearline volumes remain low, Wells Fargo senior analyst Aaron Rakers wrote in an October 4 report to subscribers. Vendors must sort out their host-managed schemes to better meet performance requirements before large-scale adoption takes place, according to Trendfocus.

In turn this could shift HAMR/MAMR adoption to 2022.

In the meantime disk manufacturers could add capacity by bringing 10-platter drives to market. An 18TB nine-platter drive would then become a 20TB 10-platter product, assuming no increase in areal density. All HDD manufacturers continue to assess back-up plans for such 10-platter disk drives, Trendfocus said. That would imply they would arrive in 2021.

Spinning platters

Conventional disk drives use perpendicular magnetic recording technology. There is a limit to their areal density, and hence capacity, caused by increasing recorded bit state unreliability. As the bit areas become smaller, with areal densities beyond 1,200Gb/sq in, the binary values become harder and harder to read, and can reverse.

Seagate and Western Digital have turned to alternative technologies to increase capacity beyond this limit.

Seagate’s HAMR (Heat-Assisted Magnetic Recording) HDDs and WD’s MAMR (Microwave-Assisted Magnetic Recording) HDDs are classed as nearline drives, and rotate at 7,200rpm. They are filled with helium to allow thinner and more platters than air-filled disks – helium’s resistance is lower than air’s.

Seagate and Western Digital are currently making nine-platter drives. WD’s DC H550 tops out at 18TB and the SMR DC H650 reaches 20TB. They use some aspects of WD’s MAMR technology but are not full MAMR drives.

Seagate’s Exos X16 is a 16TB, nine-platter conventionally recorded drive with 1,000Gb/sq in area density. The company expects to introduce 18TB conventional HAMR drives and 20TB SMR HAMR drives with production ramping in the first half of 2020, both with nine platters.

Your occasional storage digest, including Violin, Lenovo and more

It’s time to listen to some Violin practice and find out about Lenovo file tiering, plus a host of news snippets from the storage industry. Read on.

Violin Systems re-org

All-flash array pioneer Violin Systems has decamped from Silicon Valley to Colorado where it is (a) saving money and (b) developing a new product.

CEO Todd Oseth told Blocks & Files office rents in Colorado are a twelfth of their San Jose equivalent and electricity is 30 per cent cheaper. “We’re healthy and doing better than we’ve ever done.”

Violin merged with Xiotech last year when it separated from the Axellio business. Violin is co-located with Axellio in the old Xiotech facility in Colorado.

The new product combines the attributes of Violin’s 7000 series and the VXS 8. The VXS 8 is effectively the old Xiotech ISE array.

Oseth said this next generation product “combines the best of both companies. We will have a platform that has performance and all of the enterprise features of both product lines and will add some new technology that our great Richie Larry has developed that will give us some wonderful new features for the industry.”

There is a software focus so Violin “can use the multitude of NVMe platforms coming on line next year”.

Axellio CEO Bill Miller is on Violin’s board, along with two people from the Soros organisation; the Soros group owns 90 per cent of Violin. Mark Lewis, who was executive chairman, has moved on to other things. He thought Oseth was better placed to execute the new product strategy and so resigned, the company told us. CMO Gary Lyng is on the hunt for a new opportunity.

Lenovo adds file tiering with Data Dynamics reselling deal

Lenovo wants to clear out old files from filers onto its own storage and is reselling Data Dynamics software as a means to that end.

Data Dynamics’ StorageX is the software; it locates old, less accessed files on NAS systems and migrates them to less expensive storage. On top of that it provides file data indexing and tagging for analytics processing.

StorageX was developed as file virtualization technology by Nuview Systems. It aggregated files across filers into a single namespace and could move files between the systems. Nuview itself was founded in 1996 by CEO Rahul Mehta. Brocade, a supplier of storage networking systems, bought Nuview and its StorageX product in 2006, thinking it could sell file virtualization running on its network switches.

This was not successful and Brocade closed the product down in 2012. Data Dynamics then bought the IP and made a go of it as a file replication and migration product, quietly developing it into a file lifecycle manager and analyser for large enterprises.

Shorts

Acronis has announced a $10 million investment in its service provider incentive program. Partners who sign up for a cloud service provider contract at the Acronis Cyber Summit will get six months of free usage. Acronis will also fast-track development of products in its Cyber suite.

Cobalt Iron has changed the name of Adaptive Data Protection to ‘Cobalt Iron Compass’. It is planning a series of updates over the next 12 months for the SaaS-based enterprise data protection platform.

Databricks has teamed up with visual analytics supplier Tableau. A new Databricks Connector, released in Tableau 2019.3, is optimized for performance and leverages integration with Delta Lake, an open source storage layer that makes existing data lakes reliable at scale. Tableau users can now access and analyze massive datasets across the entire data lake with up-to-date and real-time data.

Dell EMC PowerVault ME4 Series is to support CloudIQ, the company’s free storage application that lets customers monitor, analyze and troubleshoot a storage environment from anywhere. It will also have a new 3.84TB SAS SSD for read-intensive operations. There is additional software and host OS support: XenDesktop 7.1, OME 3.2, RHEL 8, VMware 6.7 U2, SRM 8.1.x and vCenter Plugin 6.7.

Dell EMC Virtual Storage Integrator is a plug-in that extends the VMware vSphere Client, allowing users of the vSphere Client to perform Dell EMC storage management tasks within the same tool they use to manage their VMware virtualised environment. VSI 8.0 was the first release to support the new vSphere HTML 5 Client.

Druva has been awarded three patents by the U.S. Patent and Trademark Office for its deduplication technology and data search capabilities. Patent No. 10,275,317 covers moving aged backup data to lower cost storage while maintaining deduplication benefits. Patent 10,387,378 covers network bandwidth and storage optimisation deployed while backing up multiple versions of a file. Patent 10,417,268 is about extracting the most relevant phrases from given unstructured data.

Excelero has been assigned US patent #10,372,374 – its fourth US patent – governing an approach that better addresses the “tail latency” experienced by large-scale private cloud operators with demanding application workloads. This IP can enable developers to achieve better storage efficiency and performance from shared NVMe Flash resources across an enterprise.

Chinese SSD controller supplier Goke demonstrated its prototype 2311 series drive at FMS19. It uses Kioxia’s XL-FLASH, a 3D NAND SLC technology, with an overall 4K random read latency under 20μs; the final drives will offer a 4K random read latency of less than 15μs. They support up to 4TB capacity with a maximum write bandwidth of 1GB/sec and read bandwidth of 3GB/sec through a PCIe Gen3 x4 interface. They will also support SM2/3/4 and SHA-256/AES-256 with built-in security engines. Blocks & Files thinks PCIe Gen 4.0 would be a better fit to the basic speed of Toshiba’s NAND.

IBM Spectrum Scale has a new offering for persistent storage for containers, which is based on the Container Storage Interface (CSI) specification. This is available at https://github.com/IBM/ibm-spectrum-scale-csi-driver as an open beta. It provides static provisioning, lightweight dynamic provisioning, fileset-based dynamic provisioning, multiple file systems support and remote mount support.

Kioxia America, Toshiba Memory America as was, demoed TrocksDB, an open-source improvement to RocksDB, at September’s Storage Developer Conference in Santa Clara. The demo used Kioxia SSD storage. TrocksDB uses key values more efficiently with SSDs to enable improvements in storage and DRAM usage. It reduces the repeated data rewriting caused by application-generated write amplification and runs on any Linux HW supported by RocksDB.

Micron is working with Xilinx to boost the boot and dynamic configuration performance of Xilinx’s Versal, described as the industry’s first adaptive compute acceleration platform (ACAP). Versal will use Micron Xccela flash and other Micron memory technology to reduce system startup times and increase system responsiveness in automotive, industrial, networking and consumer applications that use artificial intelligence.

Percona Backup for MongoDB v1 is the first GA version of Percona’s MongoDB backup tool. It is designed to assist users who don’t want to pay for MongoDB Enterprise and Ops Manager but who do want a fully-supported community backup tool that can perform cluster-wide consistent backups in MongoDB.

RAID Inc has launched Pangea, a turnkey Lustre on ZFS appliance. It involves a partnership with Whamcloud and uses commodity hardware. Its ZFS Online Operating Module allows users to locate failed devices, scan subsystems, show inventory and more.

Retrospect has released Backup 16.5 with granular remote management functionality through the Retrospect Management Console and macOS Catalina support. The release marks the 30th anniversary of Retrospect Backup. The company is rolling out worldwide subscription pricing, online and through distribution, starting at $3.99 per month for a single computer.

Cloud storage company Wasabi is partnering with LucidLink for object storage. Customers can buy Wasabi storage bundled with LucidLink’s monthly managed storage service or bring their own Wasabi storage account.


Ignore drive write messaging… SSDs are more reliable than HDDs, Period.

This blog mostly pertains to SSDs in server and storage arrays.

All NAND relies on electrical charges in silicon. Writing data involves a Program Erase (P/E) cycle, also known as a write cycle. And there are write cycle limits as to how many times a NAND cell can hold a charge:

  • SLC (1 bit/cell): 100,000 write cycles
  • MLC (2 bits/cell): 10,000 write cycles
  • TLC (3 bits/cell): 3,000 write cycles
  • QLC (4 bits/cell): 1,000 write cycles

This is a race to zero. Why? Because more bits per cell equals a lower cost/bit. QLC NAND is cheap compared to SLC or MLC NAND.

Use cheap NAND for cat-videos or for USB thumb drives? Okay, fine.

But using cheap NAND for servers and storage arrays is a profoundly bad idea: high write workloads are a threat to SSD reliability, so the cheapest possible flash is unwise there.

I am not a fan of the SSD marketing teams who repeatedly offered the lead marketing message of “the SSD I want you to buy … 3,000 P/E write endurance… they will surely fail”. Yuck – when that marketing message could have been “the SSDs I want you to buy are far more reliable than hard disk drives, which fail about one per cent per year.”

SSDs are more reliable than hard drives, period.

But we still have to deal with the reality of NAND write endurance. We should rightfully focus on the device – the SSD – and not the NAND itself.

Server and storage SSDs added over-provisioning, with spare cells held in reserve to take over when used cells reach their wear limit. Next, SSDs were rated in Drive Writes Per Day (DWPD), and then further improved with TWD (Terabytes Written per Day) ratings so that the wear rate can be tracked.

It is obvious, but bears repeating: a 4TB SSD will have twice the TWD rating of a 2TB SSD. Buy bigger SSDs for write-heavy workloads (duh).
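The scaling is easy to show: terabytes writable per day is roughly capacity multiplied by the rated drive writes per day, and a lifetime figure multiplies that by the warranty period. The DWPD and warranty values in the sketch below are illustrative assumptions, not ratings for any particular drive.

```python
# Sketch of how endurance scales with capacity. DWPD and warranty length are
# illustrative assumptions, not figures for any particular drive.
def tb_written_per_day(capacity_tb, dwpd):
    return capacity_tb * dwpd

def lifetime_tb_written(capacity_tb, dwpd, warranty_years=5):
    return capacity_tb * dwpd * 365 * warranty_years

print(tb_written_per_day(2, 1.0))   # 2.0 TB/day for a 2TB drive at 1 DWPD
print(tb_written_per_day(4, 1.0))   # 4.0 TB/day for a 4TB drive -- twice as much
print(lifetime_tb_written(4, 1.0))  # 7300 TB over a five-year warranty
```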

Our humble advice… Use SSDs. SSDs are more reliable than HDDs, period. And when the workload is write-heavy, simply buy bigger SSDs, as they are harder to fill up.

The endurance problem can also be solved another way: compression writes less data to SSDs, thereby preserving the available write endurance (P/E cycle) capacity. Today we know GZIP and its kin. These are antiquated and inefficient; you can expect to be less than delighted. Keep an eye out for a new generation of “cloud-native” compression storage services and software.

Note: Consultant and patent holder Hubbert Smith (Linkedin) has held senior product and marketing roles with Toshiba Memory America, Samsung Semiconductor, NetApp, Western Digital and Intel. He is a published author and a past board member and workgroup chair of the Storage Networking Industry Association and has had a significant role in changing the storage industry with data centre SSDs and enterprise SATA disk drives.

FileCloud plugs data leaks with Smart DLP

FileCloud has added data loss prevention via the release of Smart DLP.

FileCloud is an enterprise file synchronisation and sharing (EFSS) system supplier whose software runs on-premises or in the public cloud (AWS and Azure). It is an alternative to Owncloud, Box, Dropbox and Egnyte, and can be run as a self-hosted cloud or hybrid private/public cloud storage.

The intention for Smart DLP is to prevent data leaks and safeguard enterprise content. The software allows or bars file-level actions based on applied and pre-set rules that are checked in real time. In effect the access rules form a layer of custom metadata about files managed by FileCloud, and Smart DLP is the gatekeeper.

We think the idea of using a repository’s file metadata as the basis for data loss prevention processes is a simple, smart and obvious – with hindsight – starting point. The concept has a lot of merit, subject to the admin overhead of setting up the rules and applying them.

Governance

Smart DLP is claimed to simplify compliance with GDPR, HIPAA, ITAR and CCPA data access regulations by identifying and classifying data. The product supports integration with external security information and event management (SIEM) systems, and SMS two-factor authentication (2FA).

Smart DLP rule management screen. Applying rules like this to thousands or tens of thousands of files will need some kind of process.

Enterprise data leak prevention features include the automation of prevention policies, monitoring data movement and usage, and detecting security threats. An admin person is required to define and apply the rules to files in the FileCloud repository.

Smart DLP controls user actions, such as the ability to login, download and share files, based on IP range, user type, user group, email domain, folder path, document metadata and user access agents; web browsers and operating systems. It will allow or deny selected user actions and log rule violation reports for future auditing.
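FileCloud has not published the rule engine's internals, but the gatekeeping it describes – checking a requested action against pre-set conditions and logging any violation for auditing – can be sketched roughly as follows. The rule fields, names and example values are illustrative assumptions, not FileCloud's schema.

```python
# Illustrative sketch of rule-based allow/deny gatekeeping; not FileCloud's code.
# Rule fields mirror the conditions described above (IP range, user group, action).
import ipaddress

violation_log = []  # rule violations kept for later auditing

def evaluate(action, user, rules):
    """Return True if the action is allowed, False (and log it) otherwise."""
    for rule in rules:
        if action not in rule["actions"]:
            continue
        ip_ok = any(ipaddress.ip_address(user["ip"]) in ipaddress.ip_network(net)
                    for net in rule["allowed_networks"])
        group_ok = user["group"] in rule["allowed_groups"]
        if not (ip_ok and group_ok):
            violation_log.append((user["name"], action, rule["name"]))
            return False
    return True

rules = [{
    "name": "no-external-downloads",
    "actions": {"download", "share"},
    "allowed_networks": ["10.0.0.0/8"],        # internal addresses only
    "allowed_groups": {"finance", "legal"},
}]

user = {"name": "alice", "ip": "203.0.113.5", "group": "finance"}
print(evaluate("download", user, rules))  # False -- external IP, logged as a violation
```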

A download stopped by Smart DLP.

The product will also find personally identifiable information (PII), protected health information (PHI), payment card information (PCI) and other sensitive content across user databases and team folders. Users can deploy built-in search patterns to identify PII and create custom search patterns and metadata sets for vertical business content.

FileCloud is a product of CodeLathe, a privately held software company, founded in 2016 and headquartered in Austin, TX. It claims 3,000-plus business customers for FileCloud and more than one million users.