
Western Digital begins mending fences in WD Red NAS drive SMR spat

Western Digital has heralded a positive shift in its approach to users of shingled WD Red NAS drives, via a short statement on the company blog.

As described in our recent article Western Digital admits 2TB-6TB WD Red NAS drives use shingled magnetic recording (SMR), some users can experience performance problems in situations such as adding the drives to RAID groups that use conventional magnetic recording (CMR) drives.

Fellow disk drive makers Seagate and Toshiba also use undocumented SMR technology in consumer desktop drives, but only WD has used it in low-end NAS drives.

WD wrote in the un-bylined blog, dated April 22:

The past week has been eventful, to say the least. As a team, it was important that we listened carefully and understood your feedback about our WD Red NAS drives, specifically how we communicated which recording technologies are used. Your concerns were heard loud and clear. Here is that list of our client internal HDDs available through the channel:

A table in the blog lists which of its internal consumer/small business/NAS drives use SMR and which use CMR technology.

WD said it will update its marketing materials – brochures and datasheets – to provide similar data and “provide more information about SMR technology, including benchmarks and ideal use cases”.

The final paragraphs affirm that WD recognises some customers are experiencing problems and is doing something about it:

“Again, we know you entrust your data to our products, and we don’t take that lightly. If you have purchased a drive, please call our customer care if you are experiencing performance or any other technical issues. We will have options for you. We are here to help.

More to come.”

Caringo claims cost advantage for object storage appliances

Caringo, an object storage software supplier, has launched a set of appliances that run a new version of its Swarm software.

Swarm 11.1 includes built-in content management, search and metadata management. It has improved S3 compliance, faster software performance, email and Slack alerting, and integrates Elasticsearch 6.

Caringo claims Swarm Server Appliances (SSA) start at 32 per cent less than the cost of other on-premises object storage systems, and 42 per cent less than Amazon S3 storage service fees for the same capacity over 3 years.

CEO Tony Barbagallo said the new appliances “can deliver instant access to archives, enabling remote workflows and streaming services”.

The company has launched four appliances.

  • The 1U SSA (Single Server Appliance) with 2 x 7.68TB SSDs, for remote offices and small-to-medium workloads
  • s3000 1U Standard Server with 12 x 14TB disk drives, giving 168TB raw (111.4TB usable after replication and erasure coding), clustered with a minimum of three nodes
  • hd5000 4U High-Density server with 60 x 14TB drives, giving 840TB raw (665TB usable)
  • m1000 1U Management Server with 4 x 960GB SATA SSDs and a single 256GB NVMe SSD
Caringo Swarm Server Appliances.

A cluster can scale to more than 1,000 nodes. A minimum three-node s3000 cluster delivers 504TB raw in 3U.

Caringo s3000 Standard Storage Appliance
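
The usable-capacity figures follow from erasure-coding arithmetic. Here is a minimal sketch; the k=2, m=1 scheme is our assumption for illustration, not a published Caringo parameter, though it lands close to the quoted numbers.

```python
# Sketch: usable capacity under k+m erasure coding. The k=2, m=1 scheme is
# an assumption for illustration, not a published Caringo parameter.
def usable_tb(raw_tb: float, k: int, m: int) -> float:
    """Usable TB when every k data segments carry m parity segments."""
    return raw_tb * k / (k + m)

raw = 12 * 14.0                      # s3000: 12 x 14TB drives = 168TB raw
print(usable_tb(raw, k=2, m=1))      # 112.0 -- close to the quoted 111.4TB
print(3 * raw)                       # 504.0TB raw for a minimum 3-node cluster
```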

All software functions run in virtual machines on the SSA; when clustered s3000 and/or hd5000 appliances are used, they run in the m1000 Management Server instead. Alternatively, they can run on VMs in a customer’s own virtual environment. Using the m1000 means content-related software functions run in flash, while bulk storage uses nearline drives.

Content can be backed up to any S3-compliant target, either in the public cloud or on-premises. 

The appliances and Swarm 11.1 are available now.

The ‘nines’ numbers

Caringo claims the SSA provides 10 ‘nines’ of data durability (99.99999999 per cent), while a cluster of s3000s and hd5000s can provide between 13 and 25 ‘nines’ (up to 99.99999999999999999999999 per cent), dependent upon the specific data protection method and number of deployed nodes.

Two ‘nines’ (99%) means you could lose one object out of 100 in a year. Five ‘nines’ (99.999%) means you could lose one object out of 100,000 in a year. Ten ‘nines’ means a loss of up to one object from 10,000,000,000 (10 billion) in a year. And 25 ‘nines’ means you could lose one object from 10,000,000,000,000,000,000,000,000 objects in a year. That’s one in 10 septillion objects lost per year.
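
For readers who want to check the arithmetic, a minimal sketch: N ‘nines’ of annual durability means a loss probability of 10^-N per object per year.

```python
# Sketch: annual object-loss odds implied by N 'nines' of durability.
# N nines of durability means a loss probability of 10**-N per object per year.
for n in (2, 5, 10, 25):
    print(f"{n} nines: up to 1 object lost per {10 ** n:,} per year")
```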


‘Recovery timing’ is everything: SK Hynix sees revenue growth on server demand, but warns of uncertainty ahead

Korean fabber SK Hynix’s repositioning toward newer and denser DRAM and NAND products paid off in the first 2020 quarter as it reported 6 per cent y/y revenue growth from its DRAM and NAND operations.

However, the firm’s CFO, Cha Jin-seok, said that the timing of the recovery of global economies hit by the COVID-19 pandemic was crucial to ascertaining demand, telling an earnings conference call: “The biggest factor to our demand forecast is the stabilisation of COVID-19 and the recovery timing of global economic activity. If the economic recession is prolonged, we can’t rule out that even memory demand for servers could slow down.”

SK Hynix is the second largest manufacturer of memory semiconductors globally, and competes with Samsung and Micron.

Its first 2020 quarter saw revenues of ₩7.2trn ($6.2bn), up from the year-ago ₩6.7trn ($6.0bn), and net income falling 41 per cent from ₩1.1trn ($947m) a year ago to ₩649bn ($559m). Substantial product cost decreases, helped by SSD sales, enabled it to stay profitable.

The company had a dismal 2019 as lower demand created supply gluts leading to price falls. As a result it decided to accelerate transitions to denser DRAM and NAND processes, which would lower production costs and enable it to compete better. In DRAM that meant planning a transition from 1Ynm to 1Znm products; in 3D NAND, increasing layer counts.

How did the novel coronavirus pandemic affect the company in the short term? PC and mobile DRAM demand fell but server demand remained strong. NAND shipments rose because of this server demand strength.

Speaking about the firm’s new fab in Wuxi, Jiangsu Province, and its $13.2bn 53,000m² M16 semiconductor factory in Icheon city, Gyeonggi-do province, SK Hynix said: “With Wuxi, as you know, we have done the buildout last year, and the equipment starts to be moved in.” It added that it was on track for completion and that “for M16 as well work is still underway to complete the clean room by the end of this year.”

It said the rest of the year is full of unprecedented uncertainty because of the pandemic. The company expects global smartphone sales to decline, but demand for IT products and services driven by the social distancing trend should fuel growth in server memory demand in the mid-to-long term.

SK Hynix plans to move some DRAM capacity to making CMOS sensors. It will boost production of 1Ynm mobile DRAM and start mass-producing 1Znm DRAM in the second half of the year. The company is also boosting production of GDDR6 and HBM2E DRAM.

Wells Fargo managing director analyst Aaron Rakers noted that new gaming consoles in the second half could increase GDDR6 demand, with high-performance computing needs lifting HBM2E (high bandwidth memory) sales.

He thinks 5G smartphone sales could potentially increase in the second half of the year, which would also lift DRAM demand.

On the NAND front, SK Hynix will focus more on 96-layer 3D NAND, lessening the amount of 72-layer product. It will also start 128-layer product mass production in this, the second quarter of 2020. Rakers told subscribers: “The company expects combined 96 and 128-Layer NAND to exceed 70 per cent of shipments in 2020.”

The company aims to sell more SSDs, now accounting for 40 per cent of NAND flash revenues, and add a data centre PCIe SSD product line to widen its market and increase its profitability.

The business picture for SK Hynix, looking ahead, is not that bleak. Absent a prolonged pandemic, it should be able to continue growing.

WFH economy fuelled ‘strong, accelerated’ demand from cloud, hyperscale, says Seagate as nearline disk ships drive topline up 18%

Seemingly driven by the remote working trend of the past few months, Seagate revenues rose strongly in its latest quarter, fuelled by demand for high-capacity drives from public cloud and hyperscale customers.

It reported revenues of $2.72bn, 18 per cent up on a year ago, in its third fiscal 2020 quarter ending March 31, 2020. Its net income was $320m, 64.1 per cent higher than a year ago.

The Seagate money-making machine’s quarterly progress.

While the Seagate topline swan glided smoothly over the waters, its feet paddled furiously to overcome supply chain and logistics problems, and build and ship record exabytes of nearline disk capacity. Consumer and mission-critical drive numbers were more affected by the pandemic.

CEO David Mosley said: “We delivered March quarter revenue and non-GAAP EPS above the midpoint of our guided ranges, supported by record sales of our nearline products and strong cost discipline,” in a prepared quote.

Summary financial numbers:

  • Free cash flow – $260m
  • Gross margin – 27.4 per cent
  • Diluted EPS – $1.22
  • Cash and cash equivalents – $1.6bn

Total hard disk drive (HDD) revenues were $2.53bn, up 19 per cent y/y. But non-HDD revenues, which include Seagate’s SSD business, were more affected by pandemic supply chain issues, showing a mere 1.6 per cent y/y rise to $192m.

Earnings call

In the earnings call Mosley said Seagate had worked to overcome pandemic-related supply chain problems, saying: “Today, our supply chains in certain parts of the world are almost fully recovered, including China, Taiwan and South Korea and we see indications for conditions to begin improving in other regions of the world.”

He said: “Demand from cloud and hyperscale customers was strong and accelerated toward the end of the quarter, due in part to the overnight rise in data consumption, driven by the remote economy brought on by the pandemic. … The strength in nearline demand more than offset below-seasonal sales for video and image applications such as smart cities, safety and surveillance, as COVID-19 related disruptions impacted sales early in the quarter.”

But: “With the consumer markets among the first to get impacted by the onset of the coronavirus, we saw greater than expected revenue declines for our consumer and desktop PC drives.”

Capacity rises

Seagate shipped 120.2EB of disk drive capacity, up 56.7 per cent y/y, with an average of 4.1TB per drive. Mass capacity (nearline) drives accounted for 57 per cent of Seagate’s overall revenue in the quarter ($1.56bn), up from 40 per cent a year ago. This was 62 per cent of Seagate’s HDD revenues, up from 44 per cent a year ago.
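
Those two figures imply the number of units shipped; a quick back-of-the-envelope check (1EB = one million TB):

```python
# Sketch: implied drive shipments from Seagate's stated capacity figures.
exabytes_shipped = 120.2
avg_tb_per_drive = 4.1
drives = exabytes_shipped * 1_000_000 / avg_tb_per_drive
print(f"~{drives / 1e6:.1f} million drives shipped")   # ~29.3 million
```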

CFO Gianluca Romano said: “The mass capacity part of the business is really growing strongly.” Mosley confirmed that Seagate should ship 20TB HAMR drives by the end of the year.

Nearline drives rule, it seems, with continued demand expected in the next quarter from cloud service suppliers and hyperscalers, and possibly the quarter after that too.

Seagate’s guidance for the fourth FY2020 quarter is for revenues of $2.6bn, plus or minus 7 per cent.

Western Digital implies WD Red NAS SMR drive users are responsible for overuse problems

Western Digital published a blog earlier this week that suggests users who are experiencing problems with their WD Red NAS SMR drives may be over-using the devices. The unsigned article suggests they should consider more expensive alternatives.

WD said it regretted any misunderstanding.

Western Digital Shingled Magnetic Recording diagram

The WD blog contains two paragraphs about performance:

“WD Red HDDs are ideal for home and small businesses using NAS systems. They are great for sharing and backing up files using one to eight drive bays and for a workload rate of 180 TB a year. We’ve rigorously tested this type of use and have been validated by the major NAS providers.”

The second paragraph explains: “The data intensity of typical small business/home NAS workloads is intermittent, leaving sufficient idle time for DMSMR drives to perform background data management tasks as needed and continue an optimal performance experience for users.”

WD suggests: “If you are encountering performance that is not what you expected, please consider our products designed for intensive workloads. These may include our WD Red Pro or WD Gold drives, or perhaps an Ultrastar drive. Our customer care team is ready to help and can also determine which product might be best for you.”

Defining moments

We think that the WD Red NAS SMR drives are not ideal for customers experiencing problems. The workload rate number – 180TB written per year – ignores the need for an intermittent workload that leaves sufficient idle time for background data management.

WD shingled tracks diagram.

We also think that terms used by WD are not defined. For example:

  • What is data intensity?
  • What does a “typical small business/home NAS workload” mean, apart from one writing up to 180TB/year?
  • What does “intermittent” mean? Does it mean X minutes active followed by Y minutes inactive? What are X and Y?
  • What does “sufficient idle time” mean? Does it mean Z minutes per hour? What is Z?

This woolliness makes it difficult to understand if a WD Red NAS SMR drive is suited to a particular workload or not.
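
To illustrate why those definitions matter, here is a minimal sketch that turns an assumed burst write rate and duty cycle (stand-ins for WD’s undefined X and Y) into an annual workload figure:

```python
# Sketch: does a given duty cycle fit WD's 180TB/year workload rating?
# The write rate and active/idle split (WD's undefined X and Y) are
# assumptions for illustration, not WD figures.
def annual_tb_written(write_mb_s: float, active_min: float, idle_min: float) -> float:
    duty = active_min / (active_min + idle_min)
    return write_mb_s * duty * 86_400 * 365 / 1e6    # MB/s -> TB/year

# e.g. 100MB/s bursts, X=10 minutes active per Y=50 minutes idle:
tb = annual_tb_written(100, active_min=10, idle_min=50)
print(f"{tb:.0f}TB/year -> {'within' if tb <= 180 else 'exceeds'} the 180TB rating")
```

Even this modest-looking duty cycle works out at roughly 526TB/year, well beyond the rating, which is why the undefined idle-time terms are doing so much work in WD’s claim.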

The trade-off for HDD vendors

We asked Chris Evans, a data storage architect based in the UK, what he thought about WD’s blog. We publish his response below:

Chris Evans

With any persistent storage medium, we are at the mercy of how that technology is implemented. The trade-off for HDD vendors has been in making products capable of ever-increasing capacities while continuing to deliver reliability. Almost all the new techniques used in HDD capacity gains have a side effect. 

A few years ago, for example, HDDs started to get rate limits quoted – this wasn’t explicitly mentioned in product specifications, but obviously needed to be added as a warranty restriction because drives couldn’t write 24×7 with some of the latest technologies.

SMR represents a significant challenge (I wrote about it recently here – https://www.architecting.it/blog/managing-massive-media/) to the extent that WD’s own website (zonedstorage.io) references drive-managed SMR as having “highly unpredictable device performance”.  

That WD website, dated 2019, states: “Drive Managed disks are suitable for applications that have idle time for the drive to perform background tasks such as moving the data around. Examples of appropriate applications include client PC use and external backup HDDs in the client space.”

Evans continues: “I would expect in this circumstance that all HDD manufacturers explain when and how they are using SMR. It could be that SMR is used as a background task, so drives can cope with a limited amount of sustained write I/O, after which the performance cliff is hit and the drive has to drop to a consolidation mode to restack the SMR data. Customers would then at least know if they purchased SMR technology, that some degree of performance impact would be inevitable.

“Whilst HDD vendors want to increase capacity and reduce costs (the $/GB equation is probably the only game in town for HDDs these days), a little transparency would be good. Tell us when impactful technology is being used so customers can anticipate the challenges – and of course appliance and SDS vendors can accommodate for this in their software updates.”
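
Evans’s performance cliff can be caricatured in a few lines. This is a toy model with assumed numbers, not measured WD Red behaviour: writes land at full speed in a CMR-style cache zone until it fills, after which throughput collapses to a consolidation rate.

```python
# Toy model of a drive-managed SMR performance cliff. All numbers are
# assumptions for illustration, not measured WD Red figures.
def dmsmr_throughput(gb_written: float, cache_gb: float = 20,
                     fast_mb_s: float = 180, slow_mb_s: float = 10) -> float:
    """Sustained write throughput after gb_written GB with no idle time."""
    return fast_mb_s if gb_written < cache_gb else slow_mb_s

for gb in (1, 10, 19, 25, 100):
    print(f"after {gb:>3}GB of sustained writes: {dmsmr_throughput(gb):.0f}MB/s")
```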

NetApp unveils Project Astra for Kubernetes love-in

NetApp today launched Project Astra, an initiative aimed at developing application data lifecycle management for Kubernetes-orchestrated containerised applications.

This is to be NetApp’s replacement for the now-cancelled NetApp Kubernetes Service (NKS), which did not support other Kubernetes distributions or provide data lifecycle services.

Anthony Lye, head of NetApp cloud data services, said: “Project Astra will provide a software-defined architecture and set of tools that can plug into any Kubernetes distribution and management environment.” 

That means containerised data creation, protection, re-use, archiving and deletion. Astra is based on the conviction that a stateful micro-services application and its data are a single entity and must be managed accordingly. For NetApp, container portability across environments really means container and data portability.

Astra is a work in progress and is conceived of as a cloud-delivered service. It has a managing element called the Astra control tower, which discovers applications and their data orchestrated by any Kubernetes distribution in public clouds or on-premises. 

The Astra control tower then optimises storage for performance and cost, unifies or binds the application with data management and provides backup and restore facilities for the containerised app and data entity.
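
NetApp has not published an Astra API. As a purely hypothetical sketch of the ‘app plus its data as one managed entity’ idea, with every name invented for illustration:

```python
# Hypothetical sketch only -- invented names, not NetApp's Astra API.
# Models a stateful app and its data being protected as a single unit.
from dataclasses import dataclass, field

@dataclass
class AppDataEntity:
    app_name: str                                  # e.g. a PostgreSQL deployment
    volumes: list[str] = field(default_factory=list)
    snapshots: dict[str, list[str]] = field(default_factory=dict)

    def backup(self, label: str) -> None:
        """Snapshot app definition and data together, as one entity."""
        self.snapshots[label] = list(self.volumes)

    def restore(self, label: str) -> None:
        """Bring the app and its data back as the same single unit."""
        self.volumes = list(self.snapshots[label])

entity = AppDataEntity("postgres-prod", volumes=["pvc-data-0"])
entity.backup("pre-upgrade")
entity.restore("pre-upgrade")
```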

The apps are conceived of as using data sources and generators such as Cassandra, Kafka, PostgreSQL and TensorFlow. Their data is stored on NetApp storage in AWS, Azure, GCP or on-premises ONTAP arrays. That means Cloud Volumes Service for AWS and GCP, and Azure NetApp Files. Astra provides authorisation and access control, storage provisioning, catalogs and app-data lifecycle tracking.

Astra’s control tower also handles portability, moving the app and its data between public clouds and the on-premises ONTAP world.

Project Astra sees NetApp collaborating with developers and operations managers to extend the capabilities of Kubernetes to stateful, data-rich workloads. NetApp intends to offer Astra as a service or as built-in code.

Eric Han.

Eric Han, NetApp’s Project Astra lead, was the first product manager for Kubernetes at Google in 2014. He said in today’s press release: “With Project Astra, NetApp is delivering on the true promise of portability that professionals working with Kubernetes require today and is working in parallel with the community and our customers to make all data managed, protected, and portable, wherever it exists.” 

Comment

NetApp is competing with Portworx, which aims to help Kubernetes manage containerised apps and infrastructure for all workloads. A containerised app lifecycle will be managed by Kubernetes with high-availability, disaster recovery, backup and compliance extensions. In a sense Portworx aims to be an orchestrator of storage services for containers while NetApp intends to be both an orchestrator and supplier of such storage services.

Quantum, a $400m t/o data storage vendor, nabs $10m small business PPP loan

COVID-19 virus image: CDC/Alissa Eckert, MS; Dan Higgins, MAM (CDC Public Health Image Library, #23312).

Updated: 17.22 BST, April 22. Quantum statements added. NAICS classification corrected.

Quantum, the veteran data storage vendor, has received a $10m loan from the US PPP fund, which is designed to help small businesses weather the Covid-19 pandemic.

According to an SEC filing dated 16 April, Quantum has received a $10m loan – the maximum allowable under the US Paycheck Protection Program (PPP).

A PPP fact sheet says the loans are intended for small businesses and sole proprietorships. Quantum reported $402.7m in revenues in its fiscal 2019 – which is not exactly small.

The PPP loan is ‘forgivable’ – in other words, it is written off if the business uses the money to “cover payroll costs, and most mortgage interest, rent, and utility costs over the 8 week period after the loan is made [and] Employee and compensation levels are maintained”.

Payroll costs are capped at $100,000 per year per employee and loan payments are deferred for six months.

Although the loans are intended primarily for small businesses and sole proprietorships, all businesses “including nonprofits, veterans organisations, Tribal business concerns, sole proprietorships, self-employed individuals, and independent contractors – with 500 or fewer employees can apply.”

Quantum’s 2019 annual report states: “We had approximately 800 employees worldwide as of March 31, 2019.”

The PPP fact sheet states: “Businesses in certain industries can have more than 500 employees if they meet applicable SBA (Small Business Administration) employee-based size standards for those industries.”

Update. A Quantum spokesperson said: “The SBA (US Small Business Administration) sets its size standards for qualification based on the North American Industry Classification System (NAICS) industry code, and the size standard for the Computer Storage Device Manufacturing industry (NAICS code 334112) is 1,250 employees.

“Quantum qualifies for the PPP which allows businesses in the Computer Storage Device Manufacturing industry with fewer than 1,250 employees to obtain loans of up to $10 million to incentivize companies to maintain their workers as they manage the business disruptions caused by the COVID-19 pandemic. Quantum employs 550 in the U.S. and 800 worldwide.”

SBA affiliation standards are waived for small businesses (1) in the hotel and food services industries; or (2) that are franchises in the SBA’s Franchise Directory; or (3) that receive financial assistance from small business investment companies licensed by the SBA.

The spokesperson added: “The PPP loan is saving jobs at Quantum — without it we would most certainly be forced to reduce headcount. We owe it to our employees – who’ve stuck with us through a long and difficult turnaround – to do everything we can to save their jobs during this crisis.”

Hitachi Vantara launches all-NVMe E990 flash array

Hitachi Vantara has added a high performance, all-flash E990 array to the VSP storage line, filling a gap between the high-end 5000 Series and the mid-range F Series.

Brian Householder, president of digital infrastructure at Hitachi Vantara, said in a statement: “Our new VSP E990 with Hitachi Ops Center completes our portfolio for midsized enterprises, putting AIOps to work harder for our customers so they can work smarter for theirs.”  

Hitachi V’s VSP – Virtual Storage Platform – consists of three tiers.

  • Top-end 5000 Series multi-controller, all-flash NVMe and SAS drive arrays with up to 21m IOPS, and down to 70μs latency 
  • Mid-range dual controller, all-flash F-Series with 600K to 4.8 million IOPS
  • Mid-range dual controller, hybrid flash/disk G Series with up to 4.8 million IOPS

The E990 is more powerful than the F Series and also the entry-level 5000 Series model, the 5100, which delivers 4.2 million IOPS. But it slots underneath the 5500, which delivers 21 million IOPS.

E990 hardware and software

The E990 is a dual active:active controller array with an internal PCIe fabric and global cache design, as used in the 5000 Series. Latency is down to 64μs and performance is up to 5.8 million IOPS.

E990 controller chassis.

Colin Gallagher, VP for infrastructure product marketing, told us this latency was lower than the 5000’s because the caching system is global across two controllers, not four as with the 5000. Also the system uses hardware-assisted direct memory access and “looks like a multi-controller architecture”.

Raw capacity ranges from 6TB to 1.4PB in the 4U base enclosure. Always-on adaptive data reduction pumps this up to a guaranteed 4:1 effective capacity. Commodity SSDs are used throughout, with 2U expansion cabs lifting capacity to the 287PB raw limit. Available SSDs come in 1.9TB, 3.8TB, 7.6TB and 15TB capacities.

E990 rack with 2U expansion cabs.
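
A quick sketch of what the guaranteed 4:1 reduction ratio implies for effective capacity:

```python
# Sketch: effective capacity implied by the guaranteed 4:1 data reduction.
def effective_pb(raw_pb: float, ratio: float = 4.0) -> float:
    return raw_pb * ratio

print(effective_pb(1.4))   # base enclosure: 1.4PB raw -> 5.6PB effective
print(effective_pb(287))   # full system: 287PB raw -> 1,148PB effective
```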

The system’s maximum bandwidth is 30GB/sec, which is faster than the 5100’s 25GB/sec. There can be up to 80 x 32 or 16Gbit/s Fibre Channel ports and 40 x 10Gbit/s iSCSI (Ethernet) ports.

The system is controlled by Hitachi’s Storage Virtualization Operating System (SVOS) RF, which runs the other VSP arrays.

Hitachi categorises Ops Center as an AIOps management system. It uses AI and machine learning techniques to simplify system management and provisioning for virtualised and containerised applications.

Like the 5000 Series, the E990 is ready to support storage-class memory and NVMe-over-Fabrics when customers demand them. Gallagher said polls of VSP customers indicate little or no demand for either technology at present.

The E990 has a 100 per cent data availability guarantee.

Hitachi EverFlex

Hitachi’s EverFlex offers consumption-based options that range from basic utility pricing through custom outcome-based services to storage-as-a-service.

The company claims the E990 offers the industry’s lowest-cost IOPS – as low as $0.03 per IOPS. That means a 5.8 million IOPS system could cost $174,000.

The VSP E990, Hitachi Ops Center and EverFlex are available globally from Hitachi Vantara and resellers today.

Mainframe demand (again) boosts IBM storage sales

IBM has reported good storage revenue growth in the first 2020 quarter as robust demand for the System z15 mainframe carried DS8900 array sales in its wake.

The Register has covered IBM’s overall results and we focus on the storage side here.

IBM introduced the z15 mainframe in September 2019 and its revenue impact was apparent in the final 2019 quarter. The uplift in high-end DS8900 shipments helped to edge storage sales up three per cent in that quarter and 18 per cent in Q1 2020.

IBM’s Systems business unit reported $1.4bn in revenues, up 4 per cent, with system hardware climbing 10 per cent to $1bn. Mainframe revenues grew 61 per cent. However the midrange POWER server line declined 32 per cent and operating system software revenue fell nine per cent to $400m.

Storage growth in Q1 2020 (blue) accelerated the trend in Q4 2019 (red).

Citing the Covid-19 pandemic, IBM said general sales fell in March and that this had affected sales of Power systems.

IBM does not break Systems revenues down by segment or product line but CFO Jim Kavanaugh said in prepared remarks that the DS8900, which is tightly integrated with the mainframe, had a good quarter “especially in support of mission-critical banking workloads”.

He also referred to IBM’s FlashSystem line as a “new and simplified distributed storage portfolio, which supports hybrid multi-cloud deployments”.

IBM said it is expanding the digital sales channel for the Storage and Power business and that it has a good pipeline in System Z and storage.

Lots of storage software

IBM CEO Arvind Krishna this week said the company’s main intention is to regain growth, with a focus on the hybrid cloud and AI. He said IBM will continue investing through acquisitions and may divest parts of the business that do not fit the new direction.

Blocks & Files anticipates that IBM will reorganise its overall storage portfolio in the next few quarters as Krishna’s intentions are put into action.

With the July 2019 acquisition of Red Hat, IBM has two storage software product portfolios – the legacy Spectrum line plus Red Hat’s four storage products. These are:

  • OpenShift container storage
  • Ceph
  • Hyperconverged infrastructure
  • Gluster

We might expect these two portfolios to eventually converge.

Commvault sues Cohesity and Rubrik

Update: April 21, 2020 – Cohesity statement added. April 22, 2020 – Rubrik statement added.

Commvault has filed suit against data management upstarts Cohesity and Rubrik, alleging patent infringement. It is seeking injunctive relief and unspecified monetary damages.

The patents in question concern data management technologies including cloud, data deduplication, snapshots, search, security and virtualization.

Commvault alleges that Rubrik and Cohesity have appropriated Commvault-patented inventions to short-circuit their development processes and minimise the investment required to build competitive products.

Warren Mondschein, Commvault general counsel, said in a statement: “Commvault is not a litigious company but given this clear patent infringement by Cohesity and Rubrik, we have a responsibility to file these lawsuits – we must stand up for our innovation and intellectual property.”

We understand Commvault did not talk with either company before announcing its lawsuits. This was confirmed by a statement issued by Cohesity CMO Lynn Lucas:

“It is not uncommon for legacy vendors to attempt to disrupt the disruptors with frivolous lawsuits in an attempt to stifle innovation and sales. In this case, we were made aware of Commvault’s lawsuit not by their legal representatives but via the media. We believe there is no merit to this complaint, and we will, of course, stand our ground and defend our technology vigorously. 

“This complaint appears to be an attempt to slow our rapid growth and impede our accelerating success. Our view is that innovation can’t be stopped. We believe the market is excited about our vision and extraordinary solutions, as evidenced by our recent $250 million Series E funding round and the 100 percent increase we’ve seen in customers as well as data under management.” 

A Rubrik statement said: “Rubrik does not comment on pending litigation.”

The three companies compete for data protection and management business. Commvault is a long-established vendor, while Rubrik and Cohesity are the well-funded and fast-growing new kids on the block.

Specifically the lawsuits, filed in Delaware, claim that Cohesity has infringed and continues to infringe at least one claim of U.S. Patent Nos. 7,725,671, 7,840,533, 8,762,335, 9,740,723, 10,210,048, and 10,248,657, and Rubrik has infringed and continues to infringe at least one claim of U.S. Patent Nos. 7,725,671, 7,840,533, 8,447,728, 9,740,723, 10,210,048, and 10,248,657.

The wording “at least one claim” is oddly non-specific.

Note. Commvault is currently facing engagement with activist investor Starboard Value, which wants its directors on Commvault’s board.

Spacebelt aims to store data in satellites

Cloud Constellation Corporation is building Spacebelt, a data storage service using low Earth orbit (LEO) satellites that is claimed to be more secure than any data vault on Earth.

The satellites are to form a patented high speed global cloud storage network of space-based data centres continuously interconnected with their own dedicated telecom backbone for high-value and highly sensitive data assets.

Spacebelt’s satellite storage and transmission network sidesteps worldwide jurisdictional restrictions and laws regarding how data is moved between countries. Using its private network and ultra-secure dedicated terminals, the system bypasses leaky internet and leased lines, CCC says.

A short April 14 announcement connecting IBM with Spacebelt prompted Blocks & Files to take a look at Cloud Constellation Corp. and its plans.

In the announcement, CCC said IBM had been given the results of a benchmarking test for VGG-13 Model Machine Learning applications hosted on Spacebelt’s satellite hardware. This was claimed to show “it’s a scalable, secure platform for highly secure services and mission-specific ML applications for commercial, government and military organizations.”

Cloud Constellation Corporation joined IBM’s PartnerWorld program in May 2018 to collaborate on cloud services based on IBM’s blockchain technology. It said it has a roadmap to support a portfolio of IBM cloud services on a SpaceBelt OpenShift cloud infrastructure, but no further details are available at time of publication.

Cloud Constellation Corporation

CCC was founded in 2015 and is based in Los Angeles. The company claimed at the time that using a satellite network for cloud storage would greatly reduce carbon emissions and energy bills.

CCC bagged a $5m A-round in 2016 and said in December 2018 it was arranging a $100m investment from the Hong Kong-based HCH Group, as part of a $200m B-round of funding. It said Spacebelt needed $480m to get the satellites into orbit and the system working.

However, in November 2019 the company said that the Committee on Foreign Investment in the United States (CFIUS) had identified difficulties centred on the HCH Group being a Chinese company. CCC said at the time it was talking with three other sources of funding, but has announced no further funding details.

Spacebelt hardware

The planned hardware is a ring of 10 low Earth orbit (LEO) satellites in a 650-kilometre equatorial orbit. They will be accessed from ground level via geostationary satellites orbiting 36,000 kilometres above the Earth.

LeoStella, a joint venture of Thales Alenia Space and Spaceflight Industries, will build the satellites, which are planned to be operational with CCC’s first data storage-as-a-service (DSaaS) offering in the second 2022 quarter.

Access points on Earth must have a ground station with a very small aperture terminal (VSAT) that can link to these geostationary satellites. Then there is a network hop to the Spacebelt ring.
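
That geostationary hop carries an unavoidable physics cost. A back-of-the-envelope sketch of the speed-of-light propagation delay alone, ignoring processing and queuing:

```python
# Sketch: speed-of-light delay for one up-and-down hop via a geostationary
# relay at ~36,000km altitude (processing and queuing delays excluded).
C_KM_PER_S = 299_792
GEO_ALT_KM = 36_000

delay_s = 2 * GEO_ALT_KM / C_KM_PER_S
print(f"~{delay_s * 1000:.0f}ms per GEO up-and-down traverse")   # ~240ms
```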

The Spacebelt satellites will be connected with redundant and self-healing photonic (laser) rings in a Layer 3 topology.

The number of satellites in this ring has risen and fallen as CCC has worked to develop Spacebelt technology and economics. Back in 2016 it said there would be 16 satellites in the belt. This dropped to 12 in September 2017, 8 in December 2018 and then rose to 10 in January this year.

That number could grow – CCC has said it will add satellites to the constellation for service scaling, new services, and new technology.

In 2017 it signed a deal with Virgin Orbit as launch partner for the 12 satellites then planned. Virgin Orbit planned an air-launched rocket, carrying a small satellite, released from a Boeing 747 flying at 35,000 feet. This obviates the need for a ground launch with a thumping big first-stage rocket, and the launch process does not need the typical expensive spaceflight ground installation. There would have been 12 individual missions, with the first launch scheduled for 2019. That launch did not take place.

CCC is now considering an Arianespace Vega C rocket which could launch 10 satellites in a single mission, as an alternative to Virgin Orbit. The per satellite launch cost could be cheaper than the multiple launch Virgin Orbit scheme.

3-node space-based data vault

Only three satellites in Spacebelt’s ring are data stores, and data is replicated between them for redundancy. The other seven satellites are involved in relaying data.

The Spacebelt satellites are not geostationary, which means they move across the sky relative to any ground station. The Spacebelt system has to work out which satellite is above a particular ground station, then organise data transmission to and from the three data storage satellites around the ring, across the relay satellite network, to reach the ground station.

Spacebelt’s storage capacity has changed from an initial 12 petabytes in February 2018 to 5PB in December 2018. Dennis Gatens, CCC chief commercial officer, told us in an email interview last week: “Our design has evolved where we will initially have 1.6 PB distributed across the 10 satellite constellation.”

Storage medium

In essence CCC is offering to store data in a three-node distributed data centre whose nodes happen to be in orbit. How fast you can get data in and out seems a basic question, as does whether the access protocol is block, file (NFS or SMB) or object (S3).

We understand that VSAT data rates range from 4Kbit/s up to 16Mbit/s. In IT data communications terms this is slow. We think this implies a file transfer protocol – either NFS or SMB – rather than block access.
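
At those rates, bulk data movement is glacial. A quick sketch of the best case:

```python
# Sketch: best-case time to move 1TB at the top quoted VSAT rate.
TB_IN_BITS = 8e12          # 1TB = 8 x 10^12 bits
RATE_BPS = 16e6            # 16Mbit/s

days = TB_IN_BITS / RATE_BPS / 86_400
print(f"~{days:.1f} days per TB at 16Mbit/s")   # ~5.8 days
```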

Gatens said the data storage medium used in the satellites is a “closely held design detail”, as are the read and write IO rates and access protocols.

From the Blocks & Files point of view the basic answer is surely flash, hardened to withstand the solar radiation levels found in orbit. Disk drives are likely to break and are unfixable – unless a techie is rocketed into orbit to replace them.

Technology ageing

An aspect of the service is that the storage technology will be fixed for the operational life of the satellites. If that life is 10 years then the technology will be 10 years old at the satellite’s end-of-life.

It’s not really conceivable that a ground-based data storage facility would use the same storage technology for 10 years. That would be like using NAND flash from the year 2010 today, which would seem slow and expensive. It also means that the storage satellites would need to have sufficient over-provisioning to cope with their flash stores having a 10-year operational life.

A typical enterprise SSD has a 5-year warranty and is over-provisioned to support that.

To overcome this disadvantage Spacebelt has to offer pretty compelling benefits. CCC’s pitch is security and claimed fast data transmission speed.

Spacebelt users will be able to transport and store large blocks of data quickly and securely, CCC claims, and without exposure to terrestrial communications infrastructure. This will protect their critical data from unauthorised access and also provide global communications with lower latency than today’s multi-hop networks.

Cloud Constellation’s marketing message is: “SpaceBelt DSaaS serves as a key market differentiator for our global partners, offering the ultimate air gap security to their enterprise customers reliant on moving highly sensitive, high value and mission-critical data around the world each day. Cloud Constellation’s mission is to insure our customers data is securely stored while providing robust, secure global connectivity.”

So, Spacebelt is both a secure data vault and high-speed data mover using its own private network. How is this more secure than a ground-based 3-node data vault?

Air-gapping

CCC said Spacebelt is air-gapped and therefore secure. Blocks & Files understands air-gapping to mean no network connectivity – as is the case with offline tape cartridges. We asked Gatens how CCC can say Spacebelt data storage is air-gapped when the satellites are permanently online.

Gatens replied: “We refer to the air gap concept as there is no connection to our network that is not installed and controlled by our operations, and each end point is located within our enterprise customer’s facilities and is directly attached to their network. There is no terrestrial network connectivity to SpaceBelt for users or network management.”

He is saying it’s all effectively private. An end-user customer’s own network connects to Spacebelt’s network via geostationary satellites acting as transponders that hook up to the Spacebelt ring. That means ransomware could in theory attack data held in Spacebelt – unless there is some barrier to that happening.

CCC needs to build a ground-based version of its 3-node data store, accessible through always-on VSAT connections and then prove to a satisfactory level that ransomware can’t attack the data in it.

Datera co-founder heads off to pastures new

Datera co-founder Marc Fleischmann has announced his departure from the data storage startup via a LinkedIn posting.

In his statement he said he was proud of what he and Datera had achieved. But he acknowledged: “I accept not guiding Datera as well as I could have over the last year. I’m glad I was able to help us finding strategic partners that were necessary for our survival and growth. I’ll always be grateful for what I learned at Datera, from all of you, and I hope I have given you what you need to succeed.”

Fleischmann, who was Datera’s first CEO, said he looks forward to “exploring my creativity again. Building new things requires that we step back, understand what inspires us and match that with what the world needs; that’s what I love and plan to do next.”

Marc Fleischmann

Datera CEO Guy Churchward said: “Regarding Marc, he’s an extremely smart, accomplished and driven entrepreneur and during the early phases of Datera he was absolutely instrumental in getting the business off the ground and rolling forward.

“As Datera moved into its next phase (the business of enterprise delivery and GTM focus), Marc concentrated on specific customer and regional opportunities for us and was not involved in the day to day operations throughout FY2019. I do obviously wish him the very best of luck in his future endeavours, I am sure it was a tough decision for him to make but he did feel it was time he wrote his next chapter.”

Datera provides an enterprise class high-performance scale-out and distributed software SAN with storage lifecycle management and an object storage facility. Channel partners include HPE. Founded in 2013, the company has taken in $63.9m in funding, including a $26m C-round in September 2018. Board member Churchward was appointed CEO a few months later.