NetApp today launched Project Astra, an initiative aimed at developing application data lifecycle management for Kubernetes-orchestrated containerised applications.
This is to be NetApp’s replacement for the now-cancelled NetApp Kubernetes Service (NKS), which did not support other Kubernetes distributions or provide data lifecycle services.
Anthony Lye, head of NetApp cloud data services, said: “Project Astra will provide a software-defined architecture and set of tools that can plug into any Kubernetes distribution and management environment.”
That means containerised data creation, protection, re-use, archiving and deletion. Astra is based on the conviction that a stateful micro-services application and its data are a single entity and must be managed accordingly. For NetApp, container portability across environments really means container and data portability.
Astra is a work in progress and is conceived of as a cloud-delivered service. It has a managing element called the Astra control tower, which discovers applications and their data orchestrated by any Kubernetes distribution in public clouds or on-premises.
The Astra control tower then optimises storage for performance and cost, unifies or binds the application with data management and provides backup and restore facilities for the containerised app and data entity.
The apps are conceived of as using data sources and generators such as Cassandra, Kafka, PostgreSQL and TensorFlow. Their data is stored on NetApp storage in AWS, Azure, GCP or on-premises ONTAP arrays. That means Cloud Volumes Service for AWS and GCP, and Azure NetApp Files. Astra provides authorisation and access control, storage provisioning, catalogs and app-data lifecycle tracking.
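NetApp has not published Astra's interfaces, but the provisioning it describes rests on standard Kubernetes primitives. As a rough sketch of those mechanics – generic Kubernetes via the official Python client, not Astra code, and with a hypothetical storage class name – dynamically provisioning a volume for a containerised database looks like this:

```python
# Illustrative only: dynamic volume provisioning with the official Kubernetes
# Python client. "netapp-cvs-standard" is a hypothetical storage class name.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="cassandra-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="netapp-cvs-standard",  # hypothetical class
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

Astra's premise is that this claim, the volume behind it and the application that mounts it should then be protected, moved and cloned as one entity.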
Astra’s control tower also handles portability, moving the app and its data between public clouds and the on-premises ONTAP world.
Project Astra sees NetApp collaborating with developers and operations managers to extend the capabilities of Kubernetes to stateful, data-rich workloads. NetApp intends to offer Astra as a service or as built-in code.
Eric Han.
Eric Han, NetApp’s Project Astra lead, was the first product manager for Kubernetes at Google in 2014. He said in today’s press release: “With Project Astra, NetApp is delivering on the true promise of portability that professionals working with Kubernetes require today and is working in parallel with the community and our customers to make all data managed, protected, and portable, wherever it exists.”
Comment
NetApp is competing with Portworx, which aims to help Kubernetes manage containerised apps and infrastructure for all workloads. A containerised app lifecycle will be managed by Kubernetes with high-availability, disaster recovery, backup and compliance extensions. In a sense Portworx aims to be an orchestrator of storage services for containers while NetApp intends to be both an orchestrator and supplier of such storage services.
Updated: 17.22 BST, April 22. Quantum statements added. NAICS classification corrected.
Quantum, the veteran data storage vendor, has received a $10m loan from the US PPP fund, which is designed to help small businesses weather the Covid-19 pandemic.
According to an SEC filing dated 16 April, Quantum has received a $10m loan – the maximum allowable under the US Paycheck Protection Program (PPP).
A PPP fact sheet says the loans are intended for small businesses and sole proprietorships. Quantum reported $402.7m in revenues in its fiscal 2019 – which is not exactly small.
The PPP loan is ‘forgivable’ – in other words, it is written off if the business uses the money to “cover payroll costs, and most mortgage interest, rent, and utility costs over the 8 week period after the loan is made [and] Employee and compensation levels are maintained”.
Payroll costs are capped at $100,000 per year per employee and loan payments are deferred for six months.
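For context, PPP loans are sized at 2.5 times a borrower's average monthly payroll costs, subject to the per-employee cap and the $10m ceiling. An illustrative calculation – which unrealistically assumes every one of roughly 550 US employees, Quantum's stated US headcount, is paid at or above the cap – shows how a company that size reaches the maximum:

```python
# Illustrative PPP loan sizing: 2.5x average monthly payroll, $10m ceiling,
# each employee's payroll counted only up to $100,000 a year.
EMPLOYEES = 550           # Quantum's stated US headcount
SALARY_CAP = 100_000      # per-employee annual payroll cap
LOAN_CEILING = 10_000_000

annual_payroll = EMPLOYEES * SALARY_CAP    # upper bound: everyone at the cap
monthly_payroll = annual_payroll / 12      # ~$4.6m a month
loan = min(2.5 * monthly_payroll, LOAN_CEILING)

print(f"Eligible loan: ${loan:,.0f}")      # $10,000,000 -- the ceiling binds
```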
Although the loans are intended primarily for small businesses and sole proprietorships, all businesses “including nonprofits, veterans organisations, Tribal business concerns, sole proprietorships, self-employed individuals, and independent contractors – with 500 or fewer employees can apply.”
Quantum’s 2019 annual report states: “We had approximately 800 employees worldwide as of March 31, 2019.”
The PPP fact sheet states: “Businesses in certain industries can have more than 500 employees if they meet applicable SBA (Small Business Administration) employee-based size standards for those industries.”
Update. A Quantum spokesperson said: “The SBA (US Small Business Administration) sets its size standards for qualification based on the North American Industry Classification System (NAICS) industry code, and the size standards for the Computer Storage Device Manufacturing Industry (NAICS code 334112) is 1,250 employees.
“Quantum qualifies for the PPP which allows businesses in the Computer Storage Device Manufacturing industry with fewer than 1,250 employees to obtain loans of up to $10 million to incentivize companies to maintain their workers as they manage the business disruptions caused by the COVID-19 pandemic. Quantum employs 550 in the U.S. and 800 worldwide.”
SBA affiliation standards are waived for small businesses (1) in the hotel and food services industries; or (2) that are franchises in the SBA’s Franchise Directory; or (3) that receive financial assistance from small business investment companies licensed by the SBA.
The spokesperson added: “The PPP loan is saving jobs at Quantum — without it we would most certainly be forced to reduce headcount. We owe it to our employees – who’ve stuck with us through a long and difficult turnaround – to do everything we can to save their jobs during this crisis.”
Hitachi Vantara has added a high performance, all-flash E990 array to the VSP storage line, filling a gap between the high-end 5000 Series and the mid-range F Series.
Brian Householder, president of digital infrastructure at Hitachi Vantara, said in a statement: “Our new VSP E990 with Hitachi Ops Center completes our portfolio for midsized enterprises, putting AIOps to work harder for our customers so they can work smarter for theirs.”
Hitachi V’s VSP – Virtual Storage Platform – consists of three tiers.
Top-end 5000 Series multi-controller, all-flash NVMe and SAS drive arrays with up to 21 million IOPS and latency down to 70μs
Mid-range dual controller, all-flash F-Series with 600K to 4.8 million IOPS
Mid-range dual controller, hybrid flash/disk G Series with up to 4.8 million IOPS
The E990 is more powerful than the F Series and also outpaces the entry-level model in the 5000 Series – the 5100, with its 4.2 million IOPS. But it slots underneath the 5500, which delivers 21 million IOPS.
E990 hardware and software
The E990 is a dual active:active controller array with an internal PCIe fabric and global cache design, as used in the 5000 Series. Latency is down to 64μs and performance is up to 5.8 million IOPS.
E990 controller chassis.
Colin Gallagher, VP for infrastructure product marketing, told us the E990’s performance is lower than the 5000 Series’ because its cache is global across two controllers – and not four, as with the 5000. The system also uses hardware-assisted direct memory access and “looks like a multi-controller architecture”.
Raw capacity ranges from 6TB to 1.4PB in the 4U base enclosure. Always-on adaptive data reduction pumps this up to a guaranteed 4:1 effective capacity. Commodity SSDs are used throughout, with 2U expansion cabs lifting capacity to the raw 287PB limit. Available SSDs come in 1.9TB, 3.8TB, 7.6TB and 15TB capacities.
E990 rack with 2U expansion cabs.
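The effective capacity implied by the guaranteed 4:1 reduction ratio is simple arithmetic; a quick sanity check of the quoted figures:

```python
# Effective capacity at the guaranteed 4:1 data reduction ratio.
RATIO = 4

for raw_tb in (6, 1_400, 287_000):  # base min, base max (1.4PB), system max (287PB)
    print(f"{raw_tb:>9,} TB raw -> {raw_tb * RATIO:>11,} TB effective")
```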
The system’s maximum bandwidth is 30GB/sec, which is faster than the 5100’s 25GB/sec. There can be up to 80 x 32 or 16Gbit/s Fibre Channel ports and 40 x 10Gbit/s iSCSI (Ethernet) ports.
The system is controlled by Hitachi’s Storage Virtualization Operating System (SVOS) RF, which runs the other VSP arrays.
Hitachi categorises Ops Center as an AIOPs management system. It uses AI and machine learning techniques to simplify system management and provisioning for virtualized and containerised applications.
Like the 5000 Series, the E990 is ready to support storage-class memory and NVMe-over-Fabrics when customers demand them. Gallagher said polls of VSP customers indicate little or no demand for either technology at present.
The E990 has a 100 per cent data availability guarantee.
Hitachi EverFlex
Hitachi’s EverFlex offers consumption-based options that range from basic utility pricing through custom outcome-based services to storage-as-a-service.
The company claims the E990 offers the industry’s lowest-cost IOPS – as low as $0.03 per IOPS. That means a 5.8 million IOPS system could cost $174,000.
The VSP E990, Hitachi Ops Center and EverFlex are available globally from Hitachi Vantara and resellers today.
IBM has reported good storage revenue growth in the first 2020 quarter as robust demand for the System z15 mainframe carried DS8900 array sales in its wake.
The Register has covered IBM’s overall results and we focus on the storage side here.
IBM introduced the z15 mainframe in September 2019 and its revenue impact was apparent in the final 2019 quarter. The uplift in high-end DS8900 shipments helped edge storage sales up three per cent in that quarter and 18 per cent in Q1 2020.
IBM’s Systems business unit reported $1.4bn in revenues, up 4 per cent, with system hardware climbing 10 per cent to $1bn. Mainframe revenues grew 61 per cent. However the midrange POWER server line declined 32 per cent and operating system software revenue fell nine per cent to $400m.
Storage growth in Q1 2020 (blue) accelerated the trend in Q4 2019 (red)
Citing the Covid-19 pandemic, IBM said general sales fell in March and that this had affected sales of Power systems.
IBM does not break Systems revenues down by segment or product line but CFO Jim Kavanaugh said in prepared remarks that the DS8900, which is tightly integrated with the mainframe, had a good quarter “especially in support of mission-critical banking workloads”.
He also referred to IBM’s FlashSystem line as a “new and simplified distributed storage portfolio, which supports hybrid multi-cloud deployments”.
IBM said it is expanding the digital sales channel for the Storage and Power business, and that it has a good pipeline in System Z and storage.
Lots of storage software
IBM CEO Arvind Krishna this week said the company’s main intention is to regain growth, with a focus on the hybrid cloud and AI. He said IBM will continue investing through acquisitions and may divest parts of the business that do not fit the new direction.
Blocks & Files anticipates that IBM will reorganise its overall storage portfolio in the next few quarters as Krishna’s intentions are put into action.
With the July 2019 acquisition of Red Hat, IBM has two storage software product portfolios – the legacy Spectrum line plus Red Hat’s four storage products. These are:
OpenShift container storage
Ceph
Hyperconverged infrastructure
Gluster
We might expect these two portfolios to eventually converge.
Update: April 21, 2020 – Cohesity statement added. April 22, 2020 – Rubrik statement added.
Commvault has filed suit against data management upstarts Cohesity and Rubrik, alleging patent infringement. It is seeking injunctive relief and unspecified monetary damages.
The patents in question concern data management technologies including cloud, data deduplication, snapshots, search, security and virtualization.
Commvault alleges that Rubrik and Cohesity have appropriated Commvault-patented inventions to short-circuit their development processes and minimise the investment required to build competitive products.
Warren Mondschein, Commvault general counsel, said in a statement: “Commvault is not a litigious company but given this clear patent infringement by Cohesity and Rubrik, we have a responsibility to file these lawsuits – we must stand up for our innovation and intellectual property.”
We understand Commvault did not talk with either company before announcing its lawsuits. This was confirmed by a statement issued by Cohesity CMO Lynn Lucas:
“It is not uncommon for legacy vendors to attempt to disrupt the disruptors with frivolous lawsuits in an attempt to stifle innovation and sales. In this case, we were made aware of Commvault’s lawsuit not by their legal representatives but via the media. We believe there is no merit to this complaint, and we will, of course, stand our ground and defend our technology vigorously.
“This complaint appears to be an attempt to slow our rapid growth and impede our accelerating success. Our view is that innovation can’t be stopped. We believe the market is excited about our vision and extraordinary solutions, as evidenced by our recent $250 million Series E funding round and the 100 percent increase we’ve seen in customers as well as data under management.”
A Rubrik statement said: “Rubrik does not comment on pending litigation.”
The three companies compete for data protection and management business. Commvault is a long-established vendor while Rubrik and Cohesity are the well-funded and fast growing new kids on the block.
Specifically the lawsuits, filed in Delaware, claim that Cohesity has infringed and continues to infringe at least one claim of U.S. Patent Nos. 7,725,671, 7,840,533, 8,762,335, 9,740,723, 10,210,048, and 10,248,657, and Rubrik has infringed and continues to infringe at least one claim of U.S. Patent Nos. 7,725,671, 7,840,533, 8,447,728, 9,740,723, 10,210,048, and 10,248,657.
The wording “at least one claim” is oddly non-specific.
Note: Commvault is currently engaged with activist investor Starboard Value, which wants its own directors on Commvault’s board.
Cloud Constellation Corporation is building Spacebelt, a data storage service using low Earth orbit (LEO) satellites that is claimed to be more secure than any data vault on Earth.
The satellites are to form a patented high speed global cloud storage network of space-based data centres continuously interconnected with their own dedicated telecom backbone for high-value and highly sensitive data assets.
Spacebelt’s satellite storage and transmission network sidesteps worldwide jurisdictional restrictions and laws regarding how data is moved between countries. Using its private network and ultra-secure dedicated terminals, the system bypasses leaky internet and leased lines, CCC says.
A short April 14 announcement connecting IBM with Spacebelt prompted Blocks & Files to take a look at Cloud Constellation Corp. and its plans.
In the announcement, CCC said IBM had been given the results of a benchmarking test for VGG-13 Model Machine Learning applications hosted on Spacebelt’s satellite hardware. This was claimed to show “it’s a scalable, secure platform for highly secure services and mission-specific ML applications for commercial, government and military organizations.”
Cloud Constellation Corporation joined IBM’s PartnerWorld program in May 2018 to collaborate on cloud services based on IBM’s blockchain technology. It said it has a roadmap to support a portfolio of IBM cloud services on a SpaceBelt OpenShift cloud infrastructure, but no further details are available at time of publication.
Cloud Constellation Corporation
CCC was founded in 2015 and is based in Los Angeles. The company claimed at the time that using a satellite network for cloud storage would greatly reduce carbon emissions and energy bills.
CCC bagged a $5m A-round in 2016 and said in December 2018 it was arranging a $100m investment from the Hong Kong-based HCH Group, as part of a $200m B-round of funding. It said Spacebelt needed $480m to get the satellites into orbit and the system working.
However, in November 2019 the company said that the Committee on Foreign Investment in the United States (CFIUS) had identified difficulties centred on the HCH Group being a Chinese company. CCC said at the time it was talking with three other sources of funding, but has announced no further funding details.
Spacebelt hardware
The planned hardware is a ring of 10 low Earth orbit (LEO) satellites in a 650-kilometre equatorial orbit. They will be accessed from ground level via geostationary satellites orbiting 36,000 kilometres above the Earth.
LeoStella, a joint venture of Thales Alenia Space and Spaceflight Industries, will build the satellites, which are planned to be operational with CCC’s first data storage-as-a-service (DSaaS) offering in the second 2022 quarter.
Access points on Earth need a ground station with a very small aperture terminal (VSAT) that can link to these geostationary satellites. Then there is a network hop to the Spacebelt ring.
The Spacebelt satellites will be connected with redundant and self-healing photonic (laser) rings in a Layer 3 topology.
The number of satellites in this ring has risen and fallen as CCC has worked to develop Spacebelt technology and economics. Back in 2016 it said there would be 16 satellites in the belt. This dropped to 12 in September 2017, 8 in December 2018 and then rose to 10 in January this year.
That number could grow – CCC has said it will add satellites to the constellation for service scaling, new services, and new technology.
In 2017 it signed a deal with Virgin Orbit as launch partner for the 12 satellites then planned. Virgin Orbit planned an air-launched rocket, containing a small satellite, released from a Boeing 747 flying at 35,000 feet. This obviated the need for a ground launch with a thumping big first-stage rocket, and the launch process did not need the typical expensive space flight ground installation. There were to be 12 individual missions, with the first launch scheduled for 2019. That launch did not take place.
CCC is now considering an Arianespace Vega C rocket, which could launch 10 satellites in a single mission, as an alternative to Virgin Orbit. The per-satellite launch cost could be lower than with the multiple-launch Virgin Orbit scheme.
3-node space-based data vault
Only three satellites in Spacebelt’s ring are data stores, and data is replicated between them for redundancy. The other seven satellites are involved in relaying data.
The Spacebelt satellites are not geostationary, which means they move relative to any ground station. The Spacebelt system has to work out which satellite is above a particular ground station, then organise data transmission to and from the three data storage satellites around the ring, across the relay satellite network, in order to reach that ground station.
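CCC has not described its routing scheme, but the topology suggests a simple model: hand data to whichever relay satellite currently serves the ground station, then forward it the shorter way around the ten-node ring to the nearest storage satellite. A toy sketch of that idea – positions and placement hypothetical, not CCC’s design:

```python
# Toy model of ring routing in a ten-satellite belt -- purely illustrative.
RING_SIZE = 10
STORAGE_NODES = {0, 3, 7}  # hypothetical slots of the three data-store satellites

def ring_hops(src: int, dst: int) -> int:
    """Shortest hop count around a bidirectional ring."""
    d = abs(src - dst) % RING_SIZE
    return min(d, RING_SIZE - d)

def nearest_storage_node(relay: int) -> tuple[int, int]:
    """(hops, node) for the storage satellite closest to this relay."""
    return min((ring_hops(relay, s), s) for s in STORAGE_NODES)

hops, node = nearest_storage_node(relay=5)
print(f"Relay 5 reaches storage node {node} in {hops} hops")  # node 3, 2 hops
```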
Spacebelt’s storage capacity has changed from an initial 12 petabytes in February 2018 to 5PB in December 2018. Dennis Gatens, CCC chief commercial officer, told us in an email interview last week: “Our design has evolved where we will initially have 1.6 PB distributed across the 10 satellite constellation.”
Storage medium
In essence CCC is offering to store data in a three-node distributed data centre whose nodes happen to be in orbit. How fast data can get in and out is a basic question, as is whether the access protocol is block, file (NFS or SMB) or object (S3).
We understand that VSAT data rates range from 4 Kbit/s up to 16 Mbit/s. In IT data communications terms this is slow. We think this implies a file transfer protocol – NFS or SMB – rather than block access.
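To put those rates in perspective, here is the raw transfer time for a single terabyte at each end of the quoted VSAT range, ignoring protocol overhead and contention:

```python
# Time to move 1TB at the quoted VSAT rate extremes (raw arithmetic only).
BITS_PER_TB = 8 * 10**12

for label, bps in (("4 Kbit/s", 4_000), ("16 Mbit/s", 16_000_000)):
    days = BITS_PER_TB / bps / 86_400
    print(f"{label:>9}: {days:,.1f} days per TB")
# 16 Mbit/s -> ~5.8 days per TB; 4 Kbit/s -> ~23,148 days (roughly 63 years)
```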
Gatens said the data storage medium used in the satellites is a “closely held design detail”, as are the read and write IO rates and access protocols.
From the Blocks & Files point of view the basic answer is surely flash, hardened to withstand the solar radiation levels found in orbit. Disk drives are likely to break and are unfixable – unless a techie is rocketed into orbit to replace them.
Technology ageing
An aspect of the service is that the storage technology will be fixed for the operational life of the satellites. If that life is 10 years then the technology will be 10 years old at the satellite’s end-of-life.
It’s not really conceivable that a ground-based data storage facility would use the same storage technology for 10 years. That would be like using NAND flash from the year 2010 today, which would seem slow and expensive. It also means that the storage satellites would need to have sufficient over-provisioning to cope with their flash stores having a 10-year operational life.
A typical enterprise SSD has a 5-year warranty and is over-provisioned to support that.
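The endurance arithmetic behind that point uses the standard formula TBW = capacity × drive-writes-per-day × 365 × years: doubling service life at the same workload doubles the writes the flash must absorb. With illustrative figures:

```python
# Endurance required for a 10-year service life vs a typical 5-year warranty.
def tbw(capacity_tb: float, dwpd: float, years: float) -> float:
    """Total terabytes written over the drive's life."""
    return capacity_tb * dwpd * 365 * years

CAPACITY_TB, DWPD = 7.68, 1.0  # illustrative enterprise SSD figures
print(f" 5 years: {tbw(CAPACITY_TB, DWPD, 5):>9,.0f} TBW")   # ~14,016
print(f"10 years: {tbw(CAPACITY_TB, DWPD, 10):>9,.0f} TBW")  # ~28,032
```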
To overcome this disadvantage Spacebelt has to offer pretty compelling benefits. CCC’s pitch is security and claimed fast data transmission speed.
Spacebelt users will be able to transport and store large blocks of data quickly and securely, CCC claims, without exposure to terrestrial communications infrastructure. This will protect their critical data from unauthorised access and also provide global communications with lower latency than today’s multi-hop networks.
Cloud Constellation’s marketing message is: “SpaceBelt DSaaS serves as a key market differentiator for our global partners, offering the ultimate air gap security to their enterprise customers reliant on moving highly sensitive, high value and mission-critical data around the world each day. Cloud Constellation’s mission is to insure our customers data is securely stored while providing robust, secure global connectivity.”
So, Spacebelt is both a secure data vault and high-speed data mover using its own private network. How is this more secure than a ground-based 3-node data vault?
Air-gapping
CCC said Spacebelt is air-gapped and therefore secure. Blocks & Files understands air-gapping to mean no network connectivity – as is the case with offline tape cartridges. We asked Gatens how CCC can say Spacebelt data storage is air-gapped when the satellites are permanently online.
Gatens replied: “We refer to the air gap concept as there is no connection to our network that is not installed and controlled by our operations, and each end point is located within our enterprise customer’s facilities and is directly attached to their network. There is no terrestrial network connectivity to SpaceBelt for users or network management.”
He is saying it’s all effectively private. An end-user customer’s own network connects to Spacebelt’s network via geostationary satellites acting as transponders that hook up to the Spacebelt ring. That means ransomware could in theory attack data held in Spacebelt – unless there is some barrier to that happening.
CCC needs to build a ground-based version of its 3-node data store, accessible through always-on VSAT connections and then prove to a satisfactory level that ransomware can’t attack the data in it.
Datera co-founder Marc Fleischmann has announced his departure from the data storage startup via a LinkedIn posting.
In his statement he said he was proud of what he and Datera had achieved. But he acknowledged: “I accept not guiding Datera as well as I could have over the last year. I’m glad I was able to help us finding strategic partners that were necessary for our survival and growth. I’ll always be grateful for what I learned at Datera, from all of you, and I hope I have given you what you need to succeed.”
Fleischmann, who was Datera’s first CEO, said he looks forward to “exploring my creativity again. Building new things requires that we step back, understand what inspires us and match that with what the world needs; that’s what I love and plan to do next.”
Marc Fleischmann
Datera CEO Guy Churchward said: “Regarding Marc, he’s an extremely smart, accomplished and driven entrepreneur and during the early phases of Datera he was absolutely instrumental in getting the business off the ground and rolling forward.
“As Datera moved into its next phase (the business of enterprise delivery and GTM focus), Marc concentrated on specific customer and regional opportunities for us and was not involved in the day to day operations throughout FY2019. I do obviously wish him the very best of luck in his future endeavours, I am sure it was a tough decision for him to make but he did feel it was time he wrote his next chapter.”
Datera provides an enterprise class high-performance scale-out and distributed software SAN with storage lifecycle management and an object storage facility. Channel partners include HPE. Founded in 2013, the company has taken in $63.9m in funding, including a $26m C-round in September 2018. Board member Churchward was appointed CEO a few months later.
This week’s digest covers file-sharing, flash, hyperconverged infrastructure, all-in-one storage, object storage, data protection and supplier responses to the Covid-19 pandemic. Dive straight in with Nutanix.
Nutanix China win
Nutanix software has been chosen to help to run more than 60 Tsingtao breweries in China.
Tsingtao had a general wish to upgrade its IT infrastructure and a particular need to support an intelligent retail business model. It is using Nutanix AHV systems to do this, plus the Enterprise Cloud OS and Prism Pro management software.
The focus is on enterprise mobility management, risk management and financial accounting, content management systems, business process management systems and manufacturing enterprise systems.
Nutanix remote working help
Nutanix helped JM Finn, a UK investment firm, to support remote working for all employees in response to COVID-19 – and did it in about a week. Jon Cosson, head of IT at JM Finn, said: “Our infrastructure was already completely virtualised which made a big difference in enabling remote work. … Our Nutanix private cloud infrastructure, which powers all of our workloads including VDI, played an integral part in keeping our employees safe and productive while working remotely.”
OpenIO matches Minio, beats HDFS
Object storage supplier OpenIO says it is as fast as competitor Minio on the TeraSort benchmark and faster than HDFS:
It is on a par with Minio on the Random TextWriter and Wordcount benchmark. Both outperform HDFS.
OpenIO vs Minio
HDFS is faster than OpenIO in the DFSIO benchmark, when using only a small number of small files. But, as the size of the datasets increases, OpenIO outperforms HDFS. This is especially true for very large datasets.
OpenIO claims these tests make it very clear that S3 object storage is now a credible primary storage solution for Hadoop. If your application manages a dataset of dozens of terabytes, as with Big Data use cases, you should consider OpenIO instead of Hadoop’s HDFS, it says.
CTERA for DevOps
File-sharing cloud storage gateway supplier CTERA is supporting DevOps by making its products more manageable with a software development kit (SDK) for Python and Ansible automation.
CTERA’s software and devices enable global file sharing and access at endpoints ranging from single users to branch offices, via private or public cloud fabrics.
The CTERA SDK enables Python developers to create applications that use CTERA file storage. CTERA says these apps can scale to any size.
CTERA has made the Python facilities available so that an Ansible playbook can automate the provisioning of CTERA storage resources worldwide across multiple cloud providers. It says the Ansible Collection embodies an infrastructure-as-code approach, meaning no scripting or other programming is needed.
The CTERA DevOps SDK and Ansible Collections are available on GitHub today under an open source license.
StorONE
The latest S1 Enterprise Storage Platform release from all-in-one storage supplier StorONE adds:
S1:Tier: moves data across multiple tiers of storage, from high-performance Optane or NVMe flash, to high-density SAS flash such as QLC, to hard disk drives, and then to the cloud for long-term archive. There can be a separate resource pool for NVMe SSDs.
S1:Snap: zero-impact, unlimited snapshots can tier older snapshot data to less-expensive hard disk-based or cloud storage. This lessens the need for a separate backup system.
S1:Object: create a volume that supports object storage via the S3 protocol. Now a single S1-powered storage server can support high-performance (1 million-plus IOPS) block storage over Fibre Channel or iSCSI, and cost-effective, high-capacity NAS or object storage via NFS, SMB or S3.
S1:Replicate: provides asynchronous, semi-synchronous and synchronous replication of data from one StorONE system to another. Asynchronous replication acknowledges when the writes complete locally and to the local TCP buffer. The semi-synchronous replication setting acknowledges when data is written locally to the remote TCP buffer. Synchronous replication acknowledges when data is written locally and to the remote storage system.
With S1:Replicate, source and target storage clusters can have different drive redundancy settings, snapshot data retention policies and drive pool types. This means customers can run a less-expensive system at their disaster recovery site.
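The three replication modes differ only in where the acknowledgement point sits. A schematic sketch of our reading of StorONE’s description – not its implementation – with the actual I/O stubbed out:

```python
# Schematic of the three S1:Replicate acknowledgement points (illustrative).
def write_local(data): pass                 # stub: commit on the source array
def enqueue_local_tcp_buffer(data): pass    # stub: queue for background send
def send_to_remote_tcp_buffer(data): pass   # stub: land in remote TCP buffer
def wait_remote_commit(data): pass          # stub: remote array commits

def replicated_write(data: bytes, mode: str) -> str:
    write_local(data)                    # every mode writes locally first
    if mode == "async":
        enqueue_local_tcp_buffer(data)   # ack: local write + local TCP buffer
    elif mode == "semi-sync":
        send_to_remote_tcp_buffer(data)  # ack: data in the remote TCP buffer
    elif mode == "sync":
        wait_remote_commit(data)         # ack: remote storage system has it
    else:
        raise ValueError(mode)
    return "ACK"

print(replicated_write(b"block", "semi-sync"))  # -> ACK
```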
The company said last week it has the financial reserves to weather the COVID-19 pandemic.
Veritas
APTARE IT Analytics 10.4 has new regulation support for public sector environments, and reporting engine upgrades. It enables new data collection from NetBackup Appliances, Dell EMC Avamar 19.1, Dell EMC Data Domain 6.2, Dell EMC NetWorker 9.2.1, HPE Nimble, NAKIVO 9.1.1, and VMware ESXi 6.5. APTARE IT Analytics 10.4.1 also features additional supported languages including French, Chinese, Korean and Japanese.
Backup Exec (BE) 21 has per-instance licensing, automated license updates, and enhanced security to guard against ransomware. It has day-one support for vSphere 7.0 and vSAN 7.0, additional cloud regional support and broader physical platform support (CentOS 7.7 x64, Debian 10.0 x64, Oracle Linux 8 and 8.1, Red Hat Enterprise Linux 8 and 8.1).
Veritas SaaS backup adds support for Microsoft Dynamics 365 CRM, with protection for Azure, Dynamics 365 and Office 365.
eDiscovery Platform 9.5 (eDP 9.5) introduces support for all major Web browsers, with legal holds and security enhancements. It has support for Enterprise Vault 12.5, Exchange 2019 and SharePoint 2016.
Veritas EV.cloud now includes Veritas Advanced Supervision 2.0, bringing intelligence and analysis to data supervision for organisations targeting advanced cloud-based archiving with Microsoft Office 365 or Google Gmail for data governance. Updates allow for classification-driven sampling and searching to help customers restrict relevant content from view sets and ensure that content is included in classification.
Shorts
Amazon Web Services ECS (Elastic Container Service) now supports the Amazon Elastic File System (EFS). Containers running on both ECS and AWS Fargate can use EFS. AWS says this will help customers containerise applications that require shared storage, such as content management systems, internal DevOps tools, and machine learning frameworks.
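As a minimal sketch of what this enables – names and IDs are placeholders, and the EFS file system with its mount targets is assumed to exist – an ECS task definition can now mount an EFS volume directly:

```python
# Registering an ECS task definition with an EFS-backed shared volume (boto3).
# Family, image and file system ID are placeholder values.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="cms-task",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",
        "mountPoints": [{"sourceVolume": "shared",
                         "containerPath": "/usr/share/nginx/html"}],
    }],
    volumes=[{
        "name": "shared",
        "efsVolumeConfiguration": {"fileSystemId": "fs-12345678",
                                   "rootDirectory": "/"},
    }],
)
```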
AWS is introducing a new Snowball management platform, new IAM capabilities, and support for task automation.
VMware has announced the integration of its Site Recovery Manager (SRM) with Pure Storage’s FlashArray products using VMware vSphere Virtual Volumes (vVols).
Backup as a service startup Clumio has achieved Amazon Web Services (AWS) Storage Competency status for its enterprise backup service.
Forward Insights has ranked Kingston in first place in worldwide channel SSD shipments with 18.3 per cent market share, ahead of semiconductor manufacturers Western Digital and Samsung (16.5 per cent and 15.1 per cent, respectively). According to Forward Insights, almost 120 million SSDs were shipped in the channel in 2019.
ObjectiveFS 6.7 includes dynamic scaling of threads, small file performance speedup, faster performance when running with disk cache, EC2 Instance Metadata Service v2 (IMDSv2) support, S3 connection tuning, and more – including 350-plus MB/sec read and write of large files.
Entertainment and media object storage supplier Object Matrix and Signiant have announced improved workflow compatibility between MatrixStore and Signiant Media Shuttle, a de facto industry standard for sending and sharing large files fast.
OwnBackup is providing assistance to pandemic-strained healthcare organisations with an OwnBackup Gratitude package providing backup and security services free of charge. It integrates with Salesforce Health Cloud.
Pavilion, an NVMe-oF array supplier, has gained VMware vSphere 7 certification for its Hyperparallel Flash Array.
HCI supplier Scale Computing said Q1 2020 revenue reached a record, growing more than 30 per cent. It reports growth from local government and education customers where IT demands have skyrocketed due to the pandemic – that’s led to work-from-home/teach-from-home requirements.
StorCentric’s Retrospect is offering free 90-day subscription licenses for every Retrospect Backup product. There are no strings attached. The user can backup all of their data absolutely free for 90 days. If at the end of the 90 days, they no longer wish to use Retrospect software — they can still access, retrieve, and restore all of their backed-up data.
NVMe SSD tester SANBlaze has announced the availability of TCG Opal verification testing for NVMe SSDs.
SoftNAS has changed its name to Buurst. It intends to charge a price for its products that is not based on capacity. It announced $5m additional capital from its investor base, bringing total equity capital raised to $35 million. SoftNAS will remain a core product offering from Buurst and is available on the AWS Marketplace and Azure Marketplace.
SMART MIP
Smart Modular Technologies has announced a higher-density 16GB DDR4 Module-in-a-Package (MIP). The MIP is a tiny form factor design targeted at uses in IIoT, embedded computing, broadcast video, and mobile routers. It is available in two configurations, a standard 1Gx64 version or a two-channel x32 configuration, to replace either soldered-down DRAMs or SO-DIMMs.
Replication supplier WANdisco is donating its software to help researchers share and analyse big data to develop potential treatments and cures for COVID-19.
Hybrid cloud data warehouse supplier Yellowbrick is providing free access to its cloud data warehouse to aid researchers and companies actively working on a vaccine for COVID-19. Virtusa has teamed up with Yellowbrick to provide implementation consulting and access to its Life Sciences platform, vLife. Apply at www.yellowbrick.com/covid19/.
People
Acronis has hired Amy Luber as its Channel Chief Evangelist.
Quantum has hired James Mundle as global channel chief. He most recently served as VP of worldwide channel programs at Veeam. Before that he was VP of worldwide channel sales for Seagate’s Cloud Systems and Solutions business.
Renaud Perrier, formerly Google’s Head of Cloud ISV Partnerships, has become Senior Vice President of International Business Development and Operations at Virtru. The company has created TDF (Trusted Data Format): privacy technology built on its data protection platform to govern access to data throughout its lifecycle – from creation to transmission, storage, analysis, and sharing.
Toshiba and Seagate confirmed to Blocks & Files that there is undocumented use of SMR technology in some of their drives. We think it is now time for the PC vendors to come clean.
Desktop and laptop system makers need to be explicit in data sheets and marketing literature when their disk drives use SMR. This will prevent avoidable mishaps of the WD Red NAS variety.
A senior industry source, who declined to be named, told us: “It’s actually not surprising that WD and Seagate offered to OEM out SMR HDDs for desktops – after all, they are cheaper per TB. And sadly, it is also not surprising that the desktop vendors such as Dell and HP integrated them into their machine without ‘telling’ their customers, the end-user consumer (and/or the business desktop buyer, usually a procurement agent)… So, I think the fault is spread around the supply chain – not just the HDD manufacturers.”
SMR is cheaper
In its statement (full text below), WD explains that certain sub-8TB WD Red SMR drive users could experience problems, and also that it uses conventional magnetic recording (CMR) technology in 8TB-14TB WD Red NAS drives.
So why did WD use SMR drives for the sub-8TB capacity points? Very simply, with fewer platters and read and write heads, SMR is a cheaper way to deliver the same capacity as CMR.
WD uses SMR in its 1TB, 2TB, 3TB, 4TB and 6TB Red drives and conventional recording in its 8TB, 10TB, 12TB and 14TB Red drives. We see here a split product line, with each half using a different disk recording technology under one brand.
And why did WD not use SMR in the 8TB-and-above drives, if it delivers “an optimal performance experience for users”?
WD said in its statement: “In our testing of WD Red drives, we have not found RAID rebuild issues due to drive-managed SMR technology.”
However, users on the Reddit, Synology and smartmontools forums did find problems, for example with ZFS RAID set enlargements and with FreeNAS.
Alan Brown, a network manager at UCL Mullard Space Science Laboratory, who alerted us to the SMR issue, said: “These drives are not fit for purpose. In this case because they have a relatively provable and repeatable firmware bug which result in them throwing hard errors, but in more general purposes because SMR drives marketed as NAS/RAID drives have such appalling and variable throughput that they are unusable.”
“Even the people using Seagate SMR drives are reporting 10 second pauses in writes at times and those who had reasonable performance with SMR-from-start arrays have confirmed that resilvering a replacement drive in has turned out to be a major issue which they didn’t fully appreciate until they actually tried it.”
Western Digital statement
Shingled magnetic recording (SMR) is a hard drive technology that efficiently increases areal density and capacity for users managing increasing amounts of data, thus lowering users’ TCO. There are both device-managed and host-managed types, each for different use cases.
All our WD Red drives are designed to meet or exceed the performance requirements and specifications for common and intended small business/home NAS workloads. WD Red capacities 2TB-6TB currently employ device-managed shingled magnetic recording (DMSMR) to maximize areal density and capacity. WD Red 8-14TB drives use conventional magnetic recording (CMR). DMSMR should not be confused with host-managed SMR (HMSMR), which is designed for data center applications having respective workload requirements and host integration.
DMSMR is designed to manage intelligent data placement within the drive, rather than relying on the host, thus enabling a seamless integration for end users. The data intensity of typical small business/home NAS workloads is intermittent, leaving sufficient idle time for DMSMR drives to perform background data management tasks as needed and continue an optimal performance experience for users.
WD Red drives are designed and tested for an annualized workload rate up to 180TB. Western Digital has seen reports of WD Red use in workloads far exceeding our specs and recommendations. Should users’ use cases exceed intended workloads, we recommend WD Red Pro or Ultrastar data center drives.
Western Digital works extensively with customers and the NAS vendor and partner communities to continually optimize our technology and products for common uses cases. In collaboration with major NAS providers, we work to ensure WD Red HDDs (and SSDs) at all capacities are compatible with a broad set of host systems. In our testing of WD Red drives, we have not found RAID rebuild issues due to DMSMR technology.
Our customers’ experience is important to us. We will continue listening to and collaborating with the broad customer and partner communities to innovate technologies that enable better experiences with, more efficient management of and faster decisions from data.
ScaleFlux has added hardware compression to its computational flash storage drive, effectively doubling capacity and increasing performance by 50 per cent.
JB Baker, senior director of product management for ScaleFlux, supplied a quote: “Experience gained from the global deployment of our previous drives have led us to significant enhancements in the CSD 2000. Customer feedback is showing that the simultaneous reduction in storage costs and improvements in application latency and performance … is a compelling value proposition.”
Computational storage systems process data in the storage drive, thereby offloading the host server CPUs and improving overall performance.
The CSD 2000’s hardware engine has been updated with GZIP compression/decompression – which means no added latency. This doubles effective capacity – 4TB and 8TB raw capacity options increase to 8TB and 16TB. Application performance also improves.
Aerospike ACT 3.2 transactions per second (tps) increase 1.5x,
MySQL SysBench tps 1.5x,
PostgreSQL SysBench update_non_index 2.8x.
According to ScaleFlux, the CSD 2000 delivers 40 to 70 per cent more IOPS than NVMe SSDs on mixed read and write OLTP workloads. NVMe SSD performance typically drops off as the write proportion of any workload increases according to ScaleFlux, which claims the CSD 2000 maintains performance within a narrow band, regardless of the read and write mix.
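Compression ratios are data-dependent, so the doubled effective capacity is a typical rather than guaranteed figure. A quick host-side way to gauge what GZIP achieves on your own data – Python standard library, not the drive’s hardware engine:

```python
# Estimate GZIP compressibility of a sample file: a rough proxy for the
# effective-capacity gain a compressing drive could deliver on similar data.
import gzip
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "/etc/services"  # any sample file
raw = open(path, "rb").read()
packed = gzip.compress(raw, compresslevel=6)

print(f"{len(raw):,} -> {len(packed):,} bytes "
      f"(ratio {len(raw) / max(len(packed), 1):.2f}:1)")
```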
Alibaba has qualified ScaleFlux’s computational storage for use with the Chinese hyperscaler’s data centre infrastructure stack, specifically the POLARDB relational database.
ScaleFlux’s original CSS 1000 drive incorporates a Xilinx FPGA paired with 2TB to 8TB of 3D NAND flash. It uses off-the-shelf code packages to accelerate Aerospike, Apache HBase, Hadoop and MySQL, OpenZFS and Ceph.
The CSD 2000 comes in the 2.5-inch (U.2) form factor. A PCIe add-in card will be available in a few weeks.
Kaminario has adapted its VisionOS storage software for AWS, Azure and Google Cloud Platform – and claims it offers cheaper storage and more services than the cloud vendors’ native offerings.
Kaminario is the first block access array vendor to port its storage array software to all three public clouds. The company said it provides a consistent storage facility covering on-premises all-flash array SANs and their equivalents on AWS, Azure and GCP.
Kaminario’s Flex container orchestration and information services run across these environments as well as its Clarity management and AIOps service.
CEO Dani Golan claimed in a press briefing this week that no other supplier has this level of private and public cloud orchestration. The service enables customers to avoid storage and storage service lock-in to any public cloud supplier, he said.
Kaminario signalled its hybrid multi-cloud intentions in December last year. At the time CTO Eyal David said: ”There needs to be a data plane which delivers a common set of shared services that enable companies to decouple the management and movement of data from the infrastructure it runs on.”
Flex and Clarity form that data plane.
Cost savings
Kaminario said it can provide a 30 per cent or greater cost-saving compared to the public cloud’s own block-access storage services. It suggests customers with 100TB storage or more in the public cloud could benefit from the service.
Derek Swanson, Kaminario field CTO, said VisionOS in the public cloud ‘thin-provisions’ storage – meaning you pay for what you use. In contrast, the cloud providers ‘thick-provision’ storage – i.e. you pay for what you allocate. Also snapshots in the public cloud are full copies whereas Kaminario snapshots are metadata-based and almost zero-space. This saves a huge amount of money compared to native public cloud snapshots, according to Swanson.
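The thin-versus-thick difference is easy to quantify. With illustrative numbers – 100TB allocated, 40 per cent actually written, and a nominal $0.10 per GB-month that is neither Kaminario’s nor any cloud’s real pricing:

```python
# Thick vs thin provisioning cost at a nominal price -- illustrative only.
ALLOCATED_GB = 100 * 1_000          # 100TB allocation
USED_FRACTION = 0.40                # share of the allocation actually written
PRICE_PER_GB_MONTH = 0.10           # nominal, not real pricing

thick = ALLOCATED_GB * PRICE_PER_GB_MONTH                 # pay for allocation
thin = ALLOCATED_GB * USED_FRACTION * PRICE_PER_GB_MONTH  # pay for what's used

print(f"Thick: ${thick:,.0f}/month; thin: ${thin:,.0f}/month; "
      f"saving {100 * (1 - thin / thick):.0f}%")          # 60% saving
```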
Storage performance in the public cloud typically rises with allocated capacity, he said. But Kaminario decouples storage from compute in the public cloud – so you could have high-performance and low-capacity Kaminario storage in the cloud.
The competition
Golan said Kaminario’s hybrid multi-cloud capability means it no longer competes for legacy SAN business with suppliers such as Dell EMC, NetApp or Pure Storage.
According to Swanson, Pure’s Cloud Block Store, with its active:passive controllers, is slower than Kaminario’s VisionOS in the public cloud and lacks data services. He also pointed out that Pure uses proprietary hardware for its on-premises arrays, which is not replicated in Cloud Block Store, again making it slower.
NetApp’s Cloud Volumes services were also limited compared to Kaminario’s offerings, Swanson argued. He said NetApp’s Cloud Volumes lacks active:active symmetric controllers, unlike Kaminario, and so is a slower performer than VisionOS.
Kaminario roadmap
Blocks & Files expects Kaminario to add support for tiering data off to public cloud archive services, such as Amazon’s Glacier, with an S3 interface. File-level access protocols might also be supported.
Swanson and Golan said other public clouds would be supported in the future.
Kaminario in brief
Kaminario was founded in 2008 and has taken in $218m in funding. The initial product was the scale-up and scale-out K2 all-flash array. The company separated itself from hardware manufacture in January 2018 with a deal for Tech Data to build certified appliance hardware.
Later that year it embraced Western Digital’s composable systems. The company began moving to a subscription-based business model in mid 2019 and now it is 100 per cent subscription-based and “cashflow-positive”, Golan said.
VAST Data has completed a $100m funding round during the Covid-19 pandemic which values the all-flash array storage startup at $1.2bn.
The company will spend the new money on building sales teams and on research and development. This includes work on the next-generation product line, which is expected to launch in 2022 or 2023.
VAST Data publicly launched its first high-end array in February 2019. Deduplicated data is stored in QLC SSDs, referenced using metadata stored on Intel Optane drives, with NVMe-over-Fabrics access to the flash SSDs.
VAST sums of money
Renen Hallak
VAST claims first-year sales were significantly higher than those of any other storage vendor in IT history, but did not reveal numbers. Pure Storage reported $6m revenues in its first fiscal year – so that provides a base comparison. VAST’s average selling price is more than $1m.
VAST told us the sales momentum had prompted unsolicited funding approaches from new VCs. Due to Covid-19 there were no face-to-face meetings with the investors, CEO Renen Hallak said. “It was all done through videoconferencing.”
VAST Data notes it has achieved $1bn unicorn status faster than any IT infrastructure startup to date, and has made a little graph to show this.
Total funding is now $180 million and the latest round includes cash from new investors Next47, Commonfund Capital and Mellanox Capital plus existing investors 83 North, Dell Technologies Capital, Goldman Sachs, Greenfield Partners and Norwest Venture Partners.
Hallak wants the world to know that the company is well-funded: “Considering that VAST has not even tapped into its $40m Series B financing, the company now has a $140m war chest to satisfy global customer demand for next-gen infrastructure, and to enable data driven applications through continued innovation.”
The pandemic has encouraged some customers, especially in the hedge fund and health sectors, to buy because they can converge other systems onto VAST and save money. Also they can run and analyse more historic data than before, according to Hallak.
He anticipates VAST’s support of Intel Optane and container storage will fuel sales growth as both technologies are gaining traction.
File and object workloads
VAST Data rack
Hallak told Blocks & Files that the VAST array is used mostly by large enterprises for file and object workloads.
They like being able to store primary data on the array because of its speed, as well as secondary and tertiary data because of its cost-effectiveness.
This is valued by data-intensive customers such as hedge funds, which can run real-time analyses on more old data than with other arrays, according to Hallak.
He said the Dell EMC and NetApp scale-out file systems are typical competitors, adding that the company has also won AI deals against Dell.
VAST will make a major Universal Storage v3.0 software release in coming weeks. This may include support for SMB and S3, along with military-grade encryption and cloud-based replication.
Data storage simplification
VAST Data claims that the data storage market has reached a tipping point and that simplified storage is the way forward. Certainly, the trend in the storage array business is for product line simplification.
For example, IBM has converged its midrange Storwize and FlashSystem lines into a single FlashSystem product. And Dell is preparing the imminent launch of MidRange.Next, which unifies the Unity, XtremIO and SC arrays.
Hitachi Vantara, like Pure Storage, has several hardware arrays running the same operating system.
Infinidat’s single-tier high end Infinibox system uses nearline disk drives for bulk data storage and DRAM caching for performance. Unlike VAST Data, the Infinibox is primarily used for block storage and the companies do not compete for business, Hallek told us.
NetApp focuses on AFF ONTAP but still sells E-Series and Solidfire all-flash arrays.
HPE has yet to simplify its array line-up, which features the XP8, Primera, 3PAR and Nimble products. Increasingly this seems like a matter of ‘when’, rather than ‘if’.