
Your occasional storage round-up featuring Kioxia, Samsung, Veritas and more

Kioxia America

Kioxia America has added snapshots and clones to its KumoScale all-flash storage system.

KumoScale

It has done this by adding Kubernetes CSI-compliant snapshots and clones to the KumoScale NVMe-over-Fabrics software. This means faster backup for large, stateful containerised applications such as databases. Generally, database operations have to be quiesced during backup. Snapshotting takes seconds and releases the database for normal operation more quickly than backup methods based on copying database records or their changes.

KumoScale snapshotting works with the CSI-compatible snapshot feature of Kubernetes v1.17. Users can incorporate snapshot operations in a cluster-agnostic way and enable application-consistent snapshots of Kubernetes stateful sets, directly from the Kubernetes command line.
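To make that concrete, here is a minimal sketch of creating a CSI snapshot programmatically with the official Kubernetes Python client, equivalent to applying a VolumeSnapshot manifest from the command line. The snapshot class and PVC names are our hypothetical examples, not KumoScale's actual names, and the snapshot API was at v1beta1 in Kubernetes v1.17.

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside a pod

# VolumeSnapshot is a CRD, so it is created via the generic CustomObjectsApi.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",  # beta API level in Kubernetes v1.17
    "kind": "VolumeSnapshot",
    "metadata": {"name": "db-snap-001"},
    "spec": {
        # Hypothetical snapshot class backed by the KumoScale CSI driver.
        "volumeSnapshotClassName": "kumoscale-snapclass",
        # Hypothetical PVC holding the database's data.
        "source": {"persistentVolumeClaimName": "postgres-data"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1beta1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```

Because the snapshot call completes in seconds, the database only needs to be quiesced momentarily rather than for the duration of a full copy.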

Separately, Kioxia has rebranded all the old Toshiba Memory Corp. consumer products with the Kioxia moniker, meaning microSD/SD memory cards, USB memory sticks and SSDs. These are for use with smartphones, tablets and PCs, digital cameras and similar devices.

Qumulo gets cosy with Adobe

Scale-out filesystem supplier Qumulo is working with Adobe so that work-from-home video editors and production staff can collaborate on work previously done in a central location.

Two Adobe software products are involved: Premiere Pro and After Effects. The two companies say these enable collaborative teams to create and edit video footage using cloud storage with the same levels of performance, access and functionality as workstations in the studio.

You can register for a Qumulo webinar to find out more.

Samsung develops 160+ layer 3D NAND

Samsung has accelerated the development of its 160-plus layer 3D NAND, a string-stacking arrangement of two 80+ layer components, Korea's ET News reports. This will be Samsung's seventh 3D NAND generation in its V-NAND product line.

Samsung's gen 6 V-NAND is a 100+ layer chip – the precise layer count is a secret – which started sampling in the second half of 2019. There is no expected date for the 160+ layer chip to start sampling, but it looks like Samsung wants to ensure it is a generation ahead of its competitors, and thereby have lower costs in $/TB terms.

China’s Yangtze Memory Technology Corporation announced this month that it is sampling string-stacked 128-layer 3D NAND. SK hynix should sample a 128-layer chip by the end of 2020.

WekaIO launches Weka AI

WekaIO, a vendor of fast filesystems, talks of AI distributed across edge, data centres (core) and the public cloud, with a multi-stage data pipeline running across these locations. It says each stage within this AI data pipeline has distinct storage IO requirements: massive bandwidth for ingest and training; mixed read/write handling for extract, transform, load (ETL); ultra-low latency for inference; and a single namespace for entire data pipeline visibility.

Naturally, Weka says its AI offering meets all these varied pipeline stage requirements and delivers fast insights at scale.

Weka AI is a framework of customisable reference architectures (RAs) and software development kits (SDKs), built with technology alliance partners such as Nvidia and Mellanox. The company said these engineered systems ensure that Weka AI will provide data collection, workspace and deep neural network (DNN) training, simulation, inference, and lifecycle management for the AI data pipeline.

Weka claims its filesystem can deliver more than 73 GB/sec bandwidth to a single GPU client. You can check out a datasheet to get more information.

Veritas says dark data causes CO2 emissions

Data protector and manager Veritas says storing cold, infrequently-accessed data on high-speed storage makes the global warming crisis worse. This so-called dark data sits on fast flash or disk drives and so consumes energy that it doesn’t actually need.

Veritas claims on average 52 per cent of all data stored by organisations worldwide is ‘dark’ as those responsible for managing it don’t have any idea about its content or value.

The company estimates that 6.4 million tonnes of CO2 will be unnecessarily pumped into the atmosphere this year as a result. It cites an IDC forecast that the amount of data the world stores will grow from 33ZB in 2018 to 175ZB by 2025. This implies that, unless people change their habits, there will be 91ZB of dark data in five years' time (52 per cent of 175ZB).

Veritas’ announcement says we should explore its Enterprise Data Services Platform to get more information on data protection in the world of dark data – but there’s no specific information there linking it to decreasing dark data to reduce global warming.

Shorts

Databricks is hosting the Hackathon for Social Good as part of the Spark + AI Summit virtual event on June 22-26. The data analytics vendor is encouraging participants to focus on one of three issues for their project: providing greater insights into the COVID-19 pandemic; reducing the impact of climate change; or driving social change in their community.

Enterprises with office workers accessing critical data face having these people, sometimes thousands of them, work from home and, by default, use relatively insecure internet links to access this sensitive data. They can set up virtual private networks (VPNs) to provide secure links, but this entails additional complexity. FileCloud says it can provide VPN-level security with seamless access to on-premises file shares from home, without a VPN.

Its software uses common working folders. It has built-in ransomware protection, anti-virus and smart data leak protection, and there is no need to change file access permissions.

DBM Cloud Systems, which automates data replication with metadata, has joined Pure Storage’s Technical Alliance Partner program. That means DBM’s Advanced Intelligent Replication Engine (AIRE) is available for Pure Storage customers to replicate and migrate petabyte-scale data directly to Pure Storage FlashBlade, from most object storage platforms, including AWS, Oracle, Microsoft and Google.

In-memory real-time analytics processing platform GigaSpaces has announced its v15.5 software release. The upgrade doubles performance overall and introduces ElasticGrid, a cloud-native orchestrator, which the company claims is 20 per cent faster than Kubernetes.

Igneous has updated its DataDiscover and DataFlow software services.

DataDiscover provides a global catalog of all a customer's files across its on-premises and public cloud stores. A new LiveView feature provides real-time insight into files by type, group, access time, keyword and so forth, helping users find files faster. The LiveViews (reports) can be shared with other users, taking account of their permissions.

DataFlow is a migration tool. It supports new NAS devices and cloud filesystems or object stores without vendor lock-in. Data can be moved between on-premises and public cloud environments, whether it is stored in NFS, SMB or S3 object form. NFS and SMB files can be moved to an S3 object store.

Nutanix HCI software has been certified to run with Avid Media Composer video editing software and the Avid MediaCentral media collaboration platform. Nutanix says it is the first HCI-powered system to be certified for Avid products.

Verbatim is launching a range of M.2 2280 internal SSDs, delivering high speeds and low power consumption for desktop, ultrabook and notebook client upgrades.

Infinidat adds K8s support and multi-cloud storage data services

Infinidat has plunked a CSI (container storage interface) driver into high-end InfiniBox arrays, along with services that support multi-cloud environments.

Infinidat VP Erik Kaulberg said in a phone briefing last week that companies are getting strategic about containers, and he cited Rancher Labs CEO Sheng Liang’s proclamation that “Kubernetes is the new Linux; you run it everywhere.”

He said multi-cloud is “the strategic direction for most companies”, and that this had led Infinidat to write a CSI driver that supports both strategic trends. This entailed adding services on top of the basic Kubernetes integration.

Customers are accustomed to the storage data services that legacy applications provide and they want these services with containerised applications, Kaulberg said.

The Infinidat CSI driver provides persistent volume (PV) block storage to a Kubernetes pod – which is a set of containers with shared storage and networking and a runtime specification. The driver provides a range of services such as dynamic provisioning, resizing, cloning, snapshots, external dataset import and restores.

This enables the transfer of PVs between InfiniBox arrays in a customer data centre, and to and from Infinidat’s public Neutrix cloud service. The PVs can be transferred from there to Kubernetes pods running in an EKS cluster in AWS, Azure or GCP.
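To make the provisioning service concrete, here is a minimal sketch, using the official Kubernetes Python client, of a PersistentVolumeClaim dynamically provisioned from a CSI-backed StorageClass. The StorageClass name "infinibox-block" and the claim details are our hypothetical examples, not taken from Infinidat documentation; any CSI-backed StorageClass works the same way.

```python
from kubernetes import client, config

config.load_kube_config()

# A PVC requesting a dynamically provisioned volume from a CSI-backed
# StorageClass. "infinibox-block" is a hypothetical class name.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        storage_class_name="infinibox-block",
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

A pod then references the claim by name in its volume spec, and the CSI driver provisions and attaches the backing volume behind the scenes.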

Elastic Data Fabric

These storage systems form an Elastic Data Fabric. This clusters multiple Infinidat storage systems across multiple on-premises data centres and data in public clouds into a single global storage system that is scalable up to multiple exabytes.

The PVs are movable within the Elastic Data Fabric and are accessed across Fibre Channel, iSCSI, NFS and NFS TreeQs links. (NFS TreeQ is an Infinidat NFS implementation featuring directory quotas.)

Kaulberg expects container numbers to grow substantially as enterprises adopt a microservices approach. That means PV counts will increase and, in anticipation, Infinidat has scaled its CSI driver to support hundreds of thousands of PVs.

Infinidat's CSI driver enables InfiniBox to operate with VMware's Tanzu Kubernetes Grid, Red Hat OpenShift, Docker, Google's Anthos and other Kubernetes distributions.

InfiniBox arrays are distinguished by their use of DRAM caching, supported by some flash storage, to provide flash levels of performance with disk storage levels of pricing. It is the only prominent block and file storage array supplier that does not have an all-flash array.

The InfiniBox CSI driver is available free of charge with an Apache 2 license via GitHub and VSX, with native deployment by an OpenShift Operator or Helm.

Critical thinking: NetApp builds Scale-out Data Protection with Commvault

NetApp has launched a backup and disaster recovery system based on Commvault software that runs on NetApp HCI and stores backup data on its all-flash FAS arrays and StorageGRID object storage.

According to NetApp, Scale-out Data Protection (SDP) protects “all major operating systems, applications, and databases on virtual and physical servers, NAS shares, cloud-based infrastructures, and endpoint/mobile devices”.

Brett Roscoe, VP, product management, NetApp, said: “The launch of SDP provides our joint customers with a simple, turn-key solution that uses NetApp HCI to enhance the scalability and robustness of the Commvault software in protecting their most critical data across hybrid cloud environments.”

NetApp and Commvault said they have around 1,200 joint customers.

SDP

SDP has a NetApp validated architecture and incorporates Commvault Complete Backup and Recovery software, which executes as a traditional backup-orchestrating media server in the HCI system.

Blocks & Files diagram.

Commvault runs on the source systems and its snapshot capability, with support for more than 300 array snapshot engines, provides the first line of defence against data loss. SDP has near-instant restoration capabilities, according to NetApp, because of this.

The primary backup tier is a NetApp AFF array, which is designed for fast access to primary data file and block storage.

NetApp Commvault SDP diagram

Protection data can be copied across to a StorageGRID object storage system for secondary, longer-term retention. The StorageGRID system can be in the same data centre or remote, thus providing disaster recovery capability. Virtual machines restored from the StorageGRID or AFF systems can be fired up on the NetApp HCI appliance as a stopgap until the damaged source systems are made functional again.

Protection data can also optionally be written to supported public clouds. Customers can get air-gapped ransomware protection from S3-accessed tape-based services in these clouds; meaning AWS Glacier/Glacier Deep Archive and Azure Archive. This functionality has not specifically been tested in the NetApp validated architecture.

Scale-out

NetApp and Commvault emphasise the scale-out capabilities of the NetApp HCI control system. Coupled with the AFF system as the primary tier, this positions SDP as a high-end data protection system for critical data.

Pure Storage's all-flash FlashBlade also uses fast flash and is positioned both as a primary data array and as a backup target. Pure's FlashArray//C uses slower, cheaper flash.

The NetApp SDP product bundle is available from NetApp and its channel partners.

Western Digital begins mending fences in WD Red NAS drive SMR spat

Western Digital has heralded a positive shift in its approach to users of shingled WD Red NAS drives, via a short statement on the company blog.

As described in our recent article Western Digital admits 2TB-6TB WD Red NAS drives use shingled magnetic recording (SMR), some users can experience performance problems in situations such as adding the drives to RAID groups which use conventional magnetic recording (CMR) drives.

Fellow disk drive makers Seagate and Toshiba also use undocumented SMR technology in consumer desktop drives, but only WD has used it in low-end NAS drives.

WD wrote in the un-bylined blog, dated April 22:

The past week has been eventful, to say the least. As a team, it was important that we listened carefully and understood your feedback about our WD Red NAS drives, specifically how we communicated which recording technologies are used. Your concerns were heard loud and clear. Here is that list of our client internal HDDs available through the channel:

A table in the blog lists which of its internal consumer/small business/NAS drives use SMR and which use CMR technology.

WD said it will update its marketing materials – brochures and datasheets – to provide similar data and “provide more information about SMR technology, including benchmarks and ideal use cases”.

The final paragraphs affirm that WD recognises some customers are experiencing problems and is doing something about it:

“Again, we know you entrust your data to our products, and we don’t take that lightly. If you have purchased a drive, please call our customer care if you are experiencing performance or any other technical issues. We will have options for you. We are here to help.

More to come.”

Caringo claims cost advantage for object storage appliances

Caringo, an object storage software supplier, has launched a set of appliances that run a new version of its Swarm software.

Swarm 11.1 includes built-in content management, search and metadata management. It has improved S3 compliance, faster software performance, email and Slack alerting, and integrates Elasticsearch 6.

Caringo claims Swarm Server Appliances (SSA) start at 32 per cent less than the cost of other on-premises object storage systems, and 42 per cent less than Amazon S3 storage service fees for the same capacity over 3 years.

CEO Tony Barbagallo said the new appliances “can deliver instant access to archives, enabling remote workflows and streaming services”.

The company has launched four appliances.

  • The 1U SSA (Single Server Appliance) with 2 x 7.68TB SSDs, for remote offices and small-to-medium workloads.
  • s3000 1U Standard Server with 12 x 14TB disk drives, giving 168TB raw (111.4TB usable after replication and erasure coding), clustered with a minimum of three nodes.
  • hd5000 4U High-Density Server with 60 x 14TB drives, meaning 840TB raw (665TB usable).
  • m1000 1U Management Server with 4 x 960GB SATA SSDs and a single 256GB NVMe SSD.
Caringo Swarm Server Appliances.

A cluster can scale to more than 1,000 nodes. A minimum three-node s3000 cluster delivers 504TB raw in 3U.

Caringo s3000 Standard Storage Appliance

All software functions run in virtual machines on the SSA, but in the m1000 Management Servers when clustered s3000 and/or hd5000 appliances are used. Alternatively, they can run on VMs in a customer's virtual environment. Using the m1000 means content-related software functions run in flash, while bulk storage uses nearline drives.

Content can be backed up to any S3-compliant target, either in the public cloud or on-premises. 

The appliances and Swarm 11.1 are available now.

The ‘nines’ numbers

Caringo claims the SSA provides 10 'nines' of data durability (99.99999999 per cent), while a cluster of s3000s and hd5000s can provide between 13 and 25 'nines' (up to 99.99999999999999999999999 per cent), dependent upon the specific data protection method and the number of deployed nodes.

Two 'nines' (99 per cent) means you could lose one object out of 100 in a year. Five 'nines' (99.999 per cent) means you could lose one object out of 100,000 in a year. Ten 'nines' means a loss of up to one object in 10,000,000,000 (10 billion) in a year. And 25 'nines' means you could lose one object in 10,000,000,000,000,000,000,000,000 in a year – one in 10 septillion.
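A minimal sketch of the arithmetic behind these figures, assuming (as stated above) that n 'nines' of durability corresponds to an annual loss rate of one object in 10^n:

```python
def expected_annual_loss(nines: int, objects_stored: int) -> float:
    """Expected objects lost per year, given 'nines' of annual durability."""
    loss_rate = 10.0 ** -nines  # e.g. 10 nines -> 1e-10 losses per object per year
    return objects_stored * loss_rate

# One billion stored objects at each durability level quoted by Caringo.
for nines in (2, 5, 10, 13, 25):
    losses = expected_annual_loss(nines, objects_stored=10**9)
    print(f"{nines:>2} nines: ~{losses:.3g} objects lost per year")
```

At 10 'nines', a billion-object store would expect to lose about 0.1 objects a year; at 25 'nines', effectively none.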


‘Recovery timing’ is everything: SK Hynix sees revenue growth on server demand, but warns of uncertainty ahead

Korean fabber SK Hynix's re-positioning toward newer and denser DRAM and NAND products paid off in the first 2020 quarter, as it reported 6 per cent y/y revenue growth from its DRAM and NAND operations.

However, the firm's CFO, Cha Jin-seok, said that the point at which global economies affected by the COVID-19 pandemic bottom out and recover was crucial to ascertaining demand. He told an earnings conference call: "The biggest factor to our demand forecast is the stabilisation of COVID-19 and the recovery timing of global economic activity. If the economic recession is prolonged, we can't rule out that even memory demand for servers could slow down."

SK Hynix is the second largest manufacturer of memory semiconductors globally, and competes with Samsung and Micron.

Its first 2020 quarter saw revenues of ₩7.2trn ($6.2bn), up from the year-ago ₩6.7trn ($6.0bn), with net income falling 41 per cent from ₩1.1trn ($947m) a year ago to ₩649bn ($559m). It was mainly because of substantial product cost decreases that it was able to make a profit, helped by SSD sales.

The company had a dismal 2019 as lower demand created supply gluts, leading to price falls. As a result it decided to accelerate transitions to denser DRAM and NAND processes, which would lower production costs and enable it to compete better. In DRAM that meant planning a transition from 1Ynm to 1Znm products; in NAND, increasing layer counts.

How did the novel coronavirus pandemic affect the company in the short term? PC and mobile DRAM demand fell but server demand remained strong. NAND shipments rose because of this server demand strength.

Speaking about the firm's new fab in Wuxi, Jiangsu province, and its $13.2bn 53,000m² M16 semiconductor factory in Icheon city, Gyeonggi-do province, SK Hynix said: "With Wuxi, as you know, we have done the buildout last year, and the equipment starts to be moved in." It added that it was on track for completion and that "for M16 as well, work is still underway to complete the clean room by the end of this year."

SK Hynix says the rest of the year is full of unprecedented uncertainty because of the pandemic. The company expects global smartphone sales to decline, but demand for IT products and services driven by the social distancing trend to lift server memory demand in the mid to long term.

SK Hynix plans to move some DRAM capacity to making CMOS sensors. It will boost production of 1Ynm mobile DRAM and start mass-producing 1Znm DRAM in the second half of the year. The company is also boosting production of GDDR6 and HBM2E DRAM.

Wells Fargo managing director analyst Aaron Rakers noted that new gaming consoles in the second half could increase GDDR6 demand, with HBM2E (high bandwidth memory) demand lifted by high-performance computing needs.

He thinks 5G smartphone sales could potentially increase in the second half of the year, which would also lift DRAM demand.

On the NAND front, SK Hynix will focus more on 96-layer 3D NAND, lessening the amount of 72-layer product. It will also start 128-layer product mass production in this, the second quarter of 2020. Rakers told subscribers: “The company expects combined 96 and 128-Layer NAND to exceed 70 per cent of shipments in 2020.”

The company aims to sell more SSDs, now accounting for 40 per cent of NAND flash revenues, and add a data centre PCIe SSD product line to widen its market and increase its profitability.

The business picture for SK Hynix, looking ahead, is not that bleak. Absent a prolonged pandemic, it should be able to continue growing.

WFH economy fuelled ‘strong, accelerated’ demand from cloud, hyperscale, says Seagate as nearline disk ships drive topline up 18%

Seemingly driven by the remote work trend of the past few months, Seagate revenues rose strongly in its latest quarter, fuelled by demand for high-capacity drives from public cloud and hyperscale customers.

It reported revenues of $2.72bn, 18 per cent up on a year ago, in its third fiscal 2020 quarter ending March 31, 2020. Its net income was $320m, 64.1 per cent higher than a year ago.

The Seagate money-making machine’s quarterly progress.

While the Seagate topline swan glided smoothly over the waters, its feet paddled furiously to overcome supply chain and logistics problems, and build and ship record exabytes of nearline disk capacity. Consumer and mission-critical drive numbers were more affected by the pandemic.

In a prepared quote, CEO David Mosley said: "We delivered March quarter revenue and non-GAAP EPS above the midpoint of our guided ranges, supported by record sales of our nearline products and strong cost discipline."

Summary financial numbers:

  • Free cash flow – $260m
  • Gross margin – 27.4 per cent
  • Diluted EPS – $1.22
  • Cash and cash equivalents – $1.6bn

Total hard disk drive (HDD) revenues were $2.53bn, up 19 per cent y/y. But non-HDD revenues, which include Seagate's SSD business, were more affected by pandemic supply chain issues, showing a mere 1.6 per cent y/y rise to $192m.

Earnings call

In the earnings call Mosley said Seagate had worked to overcome pandemic-related supply chain problems, saying: “Today, our supply chains in certain parts of the world are almost fully recovered, including China, Taiwan and South Korea and we see indications for conditions to begin improving in other regions of the world.”

He said: “Demand from cloud and hyperscale customers was strong and accelerated toward the end of the quarter, due in part to the overnight rise in data consumption, driven by the remote economy brought on by the pandemic. …. The strength in nearline demand more than offset below seasonal sales for video and image applications such as smart cities, safety and surveillance, as COVID-19 related disruptions impacted sales early in the quarter.”

But: “With the consumer markets among the first to get impacted by the onset of the coronavirus, we saw greater than expected revenue declines for our consumer and desktop PC drives.”

Capacity rises

Seagate shipped 120.2EB of disk drive capacity, up 56.7 per cent y/y, with an average of 4.1TB per drive (implying roughly 29 million drives). Mass capacity (nearline) drives accounted for 57 per cent of Seagate's overall revenue in the quarter ($1.56bn), up from 40 per cent a year ago. This was 62 per cent of Seagate's HDD revenues, up from 44 per cent a year ago.

CFO Gianluca Romano said: “The mass capacity part of the business is really growing strongly.” Mosley confirmed that Seagate should ship 20TB HAMR drives by the end of the year.

Nearline drives rule, it seems, with continued demand expected in the next quarter from cloud service suppliers and hyperscalers, and possibly the quarter after that too.

Seagate’s guidance for the fourth fy2020 quarter is for revenues of $2.6bn plus or minus 7 per cent.

Western Digital implies WD Red NAS SMR drive users are responsible for overuse problems

Western Digital published a blog earlier this week that suggests users who are experiencing problems with their WD Red NAS SMR drives may be over-using the devices. The unsigned article suggests they should consider more expensive alternatives.

WD said it regretted any misunderstanding.

Western Digital Shingled Magnetic Recording diagram

The WD blog contains two paragraphs about performance:

“WD Red HDDs are ideal for home and small businesses using NAS systems. They are great for sharing and backing up files using one to eight drive bays and for a workload rate of 180 TB a year. We’ve rigorously tested this type of use and have been validated by the major NAS providers.”

The second paragraph explains: “The data intensity of typical small business/home NAS workloads is intermittent, leaving sufficient idle time for DMSMR drives to perform background data management tasks as needed and continue an optimal performance experience for users.”

WD suggests: “If you are encountering performance that is not what you expected, please consider our products designed for intensive workloads. These may include our WD Red Pro or WD Gold drives, or perhaps an Ultrastar drive. Our customer care team is ready to help and can also determine which product might be best for you.”

Defining moments

We think that the WD Red NAS SMR drives are not ideal for customers experiencing problems. The workload rate number – 180TB written per year – ignores the need for an intermittent workload leaving sufficient idle time for background data management.

WD shingled tracks diagram.

We also think that terms used by WD are not defined. For example:

  • What is data intensity?
  • What does a “typical small business/home NAS workload” mean, apart from a workload of up to 180TB/year?
  • What does “intermittent” mean? Does it mean X minutes active followed by Y minutes inactive? What are X and Y?
  • What does “sufficient idle time” mean? Does it mean Z minutes per hour? What is Z?

This woolliness makes it difficult to understand if a WD Red NAS SMR drive is suited to a particular workload or not.

The trade-off for HDD vendors

We asked Chris Evans, a data storage architect based in the UK, what he thought about WD’s blog. We publish his response below:

Chris Evans

With any persistent storage medium, we are at the mercy of how that technology is implemented. The trade-off for HDD vendors has been in making products capable of ever-increasing capacities while continuing to deliver reliability. Almost all the new techniques used in HDD capacity gains have a side effect. 

A few years ago, for example, HDDs started to get rate limits quoted – this wasn’t explicitly mentioned in product specifications, but obviously needed to be added as a warranty restriction because drives couldn’t write 24×7 with some of the latest technologies.

SMR represents a significant challenge (I wrote about it recently here – https://www.architecting.it/blog/managing-massive-media/) to the extent that WD’s own website (zonedstorage.io) references drive-managed SMR as having “highly unpredictable device performance”.  

That WD website, dated 2019, states: “Drive Managed disks are suitable for applications that have idle time for the drive to perform background tasks such as moving the data around. Examples of appropriate applications include client PC use and external backup HDDs in the client space.”

Evans continues: “I would expect in this circumstance that all HDD manufacturers explain when and how they are using SMR. It could be that SMR is used as a background task, so drives can cope with a limited amount of sustained write I/O, after which the performance cliff is hit and the drive has to drop to a consolidation mode to restack the SMR data. Customers would then at least know if they purchased SMR technology, that some degree of performance impact would be inevitable.

Whilst HDD vendors want to increase capacity and reduce costs (the $/GB equation is probably the only game in town for HDDs these days), a little transparency would be good. Tell us when impactful technology is being used so customers can anticipate the challenges – and of course appliance and SDS vendors can accommodate for this in their software updates.”

NetApp unveils Project Astra for Kubernetes love-in

NetApp today launched Project Astra, an initiative aimed at developing application data lifecycle management for Kubernetes-orchestrated containerised applications.

This is to be NetApp’s replacement for the now-cancelled NetApp Kubernetes Service (NKS), which did not support other Kubernetes distributions or provide data lifecycle services.

Anthony Lye, head of NetApp cloud data services, said: “Project Astra will provide a software-defined architecture and set of tools that can plug into any Kubernetes distribution and management environment.” 

That means containerised data creation, protection, re-use, archiving and deletion. Astra is based on the conviction that a stateful micro-services application and its data are a single entity and must be managed accordingly. For NetApp, container portability across environments really means container and data portability.

Astra is a work in progress and is conceived of as a cloud-delivered service. It has a managing element called the Astra control tower, which discovers applications and their data orchestrated by any Kubernetes distribution in public clouds or on-premises. 

The Astra control tower then optimises storage for performance and cost, unifies or binds the application with data management and provides backup and restore facilities for the containerised app and data entity.

The apps are conceived of as using data sources and generators such as Cassandra, Kafka, PostgreSQL and TensorFlow. Their data is stored on NetApp storage in AWS, Azure, GCP or on-premises ONTAP arrays. That means Cloud Volumes Service for AWS and GCP, and Azure NetApp Files. Astra provides authorisation and access control, storage provisioning, catalogs and app-data lifecycle tracking.

Astra’s control tower also handles portability, moving the app and its data between public clouds and the on-premises ONTAP world.

Project Astra sees NetApp collaborating with developers and operations managers to extend the capabilities of Kubernetes to stateful, data-rich workloads. NetApp intends to offer Astra as a service or as built-in code.

Eric Han.

Eric Han, NetApp’s Project Astra lead, was the first product manager for Kubernetes at Google in 2014. He said in today’s press release: “With Project Astra, NetApp is delivering on the true promise of portability that professionals working with Kubernetes require today and is working in parallel with the community and our customers to make all data managed, protected, and portable, wherever it exists.” 

Comment

NetApp is competing with Portworx, which aims to help Kubernetes manage containerised apps and infrastructure for all workloads. A containerised app lifecycle will be managed by Kubernetes with high-availability, disaster recovery, backup and compliance extensions. In a sense Portworx aims to be an orchestrator of storage services for containers while NetApp intends to be both an orchestrator and supplier of such storage services.

Quantum, a $400m t/o data storage vendor, nabs $10m small business PPP loan

COVID-19 virus image: CDC/Alissa Eckert, MS; Dan Higgins, MAM. From the CDC's Public Health Image Library (PHIL), ID #23312.

Updated 17.22 BST, April 22: Quantum statements added; NAICS classification corrected.

Quantum, the veteran data storage vendor, has received a $10m loan from the US PPP fund, which is designed to help small businesses weather the Covid-19 pandemic.

According to an SEC filing dated 16 April, Quantum has received a $10m loan – the maximum allowable under the US Paycheck Protection Program (PPP).

A PPP fact sheet says the loans are intended for small businesses and sole proprietorships. Quantum reported $402.7m in revenues in its fiscal 2019 – which is not exactly small.

The PPP loan is ‘forgivable’ – in other words, it is written off if the business uses the money to “cover payroll costs, and most mortgage interest, rent, and utility costs over the 8 week period after the loan is made [and] Employee and compensation levels are maintained”.

Payroll costs are capped at $100,000 per year per employee and loan payments are deferred for six months.

Although the loans are intended primarily for small businesses and sole proprietorships, all businesses “including nonprofits, veterans organisations, Tribal business concerns, sole proprietorships, self-employed individuals, and independent contractors – with 500 or fewer employees can apply.”

Quantum’s 2019 annual report states: “We had approximately 800 employees worldwide as of March 31, 2019.”

The PPP fact sheet states: “Businesses in certain industries can have more than 500 employees if they meet applicable SBA (Small Business Administration) employee-based size standards for those industries.”

Update. A Quantum spokesperson said: "The SBA (US Small Business Administration) sets its size standards for qualification based on the North American Industry Classification System (NAICS) industry code, and the size standard for the Computer Storage Device Manufacturing industry (NAICS code 334112) is 1,250 employees.

“Quantum qualifies for the PPP which allows businesses in the Computer Storage Device Manufacturing industry with fewer than 1,250 employees to obtain loans of up to $10 million to incentivize companies to maintain their workers as they manage the business disruptions caused by the COVID-19 pandemic. Quantum employs 550 in the U.S. and 800 worldwide.”

SBA affiliation standards are waived for small businesses (1) in the hotel and food services industries; or (2) that are franchises in the SBA’s Franchise Directory; or (3) that receive financial assistance from small business investment companies licensed by the SBA.

The spokesperson added: “The PPP loan is saving jobs at Quantum — without it we would most certainly be forced to reduce headcount. We owe it to our employees – who’ve stuck with us through a long and difficult turnaround – to do everything we can to save their jobs during this crisis.”

Hitachi Vantara launches all-NVMe E990 flash array

Hitachi Vantara has added a high performance, all-flash E990 array to the VSP storage line, filling a gap between the high-end 5000 Series and the mid-range F Series.

Brian Householder, president of digital infrastructure at Hitachi Vantara, said in a statement: “Our new VSP E990 with Hitachi Ops Center completes our portfolio for midsized enterprises, putting AIOps to work harder for our customers so they can work smarter for theirs.”  

Hitachi V’s VSP – Virtual Storage Platform – consists of three tiers.

  • Top-end 5000 Series multi-controller, all-flash NVMe and SAS drive arrays with up to 21 million IOPS and down to 70μs latency
  • Mid-range dual-controller, all-flash F Series with 600,000 to 4.8 million IOPS
  • Mid-range dual-controller, hybrid flash/disk G Series with up to 4.8 million IOPS

The E990 is more powerful than the F Series and the entry-level 5000 Series model, the 5100, with its 4.2 million IOPS, but it slots underneath the 5500, which delivers 21 million IOPS.

E990 hardware and software

The E990 is a dual active:active controller array with an internal PCIe fabric and global cache design, as used in the 5000 Series. Latency is down to 64μs and performance is up to 5.8 million IOPS.

E990 controller chassis.

Colin Gallagher, VP for infrastructure product marketing, told us the E990's performance is lower than the 5000's because its caching system is global between two controllers – not four, as with the 5000. Also, the system uses hardware-assisted direct memory access and "looks like a multi-controller architecture".

Raw capacity ranges from 6TB to 1.4PB in the 4U base enclosure. Always-on, adaptive data reduction pumps this up to a guaranteed 4:1 effective capacity, so a full 1.4PB base enclosure would present about 5.6PB. Commodity SSDs are used throughout, with 2U expansion cabs lifting capacity to the raw 287PB limit. Available SSDs have 1.9TB, 3.8TB, 7.6TB or 15TB capacities.

E990 rack with 2U expansion cabs.

The system's maximum bandwidth is 30GB/sec, which is faster than the 5100's 25GB/sec. There can be up to 80 x 32Gbit/s or 16Gbit/s Fibre Channel ports and 40 x 10Gbit/s iSCSI (Ethernet) ports.

The system is controlled by Hitachi’s Storage Virtualization Operating System (SVOS) RF, which runs the other VSP arrays.

Hitachi categorises Ops Center as an AIOps management system. It uses AI and machine learning techniques to simplify system management and provisioning for virtualised and containerised applications.

Like the 5000 Series, the E990 is ready to support storage-class memory and NVMe-over-Fabrics when customers demand them. Gallagher said polls of VSP customers indicate little or no demand for either technology at present.

The E990 has a 100 per cent data availability guarantee.

Hitachi EverFlex

Hitachi’s EverFlex offers consumption-based options that range from basic utility pricing through custom outcome-based services to storage-as-a-service.

The company claims the E990 offers the industry’s lowest-cost IOPS – as low as $0.03 per IOPS. That means a 5.8 million IOPS system could cost $174,000.

The VSP E990, Hitachi Ops Center and EverFlex are available globally from Hitachi Vantara and resellers today.

Mainframe demand (again) boosts IBM storage sales

IBM has reported good storage revenue growth in the first 2020 quarter as robust demand for the System z15 mainframe carried DS8900 array sales in its wake.

The Register has covered IBM’s overall results and we focus on the storage side here.

IBM introduced the z15 mainframe in September 2019 and its revenue impact was apparent in the final 2019 quarter. The uplift in high-end DS8900 shipments helped to edge storage sales up three per cent in that quarter and 18 per cent in Q1 2020.

IBM's Systems business unit reported $1.4bn in revenues, up 4 per cent, with system hardware climbing 10 per cent to $1bn. Mainframe revenues grew 61 per cent. However, the midrange POWER server line declined 32 per cent and operating system software revenue fell nine per cent to $400m.

Storage growth in Q1 2020 (blue) accelerated the trend in Q4 2019 (red).

Citing the Covid-19 pandemic, IBM said general sales fell in March and that this had affected sales of Power systems.

IBM does not break Systems revenues down by segment or product line but CFO Jim Kavanaugh said in prepared remarks that the DS8900, which is tightly integrated with the mainframe, had a good quarter “especially in support of mission-critical banking workloads”.

He also referred to IBM’s FlashSystem line as a “new and simplified distributed storage portfolio, which supports hybrid multi-cloud deployments”.

IBM said it is expanding the digital sales channel for the Storage and Power business and that it has a good pipeline in System Z and storage.

Lots of storage software

IBM CEO Arvind Krishna this week said the company’s main intention is to regain growth, with a focus on the hybrid cloud and AI. He said IBM will continue investing through acquisitions and may divest parts of the business that do not fit the new direction.

Blocks & Files anticipates that IBM will reorganise its overall storage portfolio in the next few quarters as Krishna's intentions are put into action.

With the July 2019 acquisition of Red Hat, IBM has two storage software product portfolios – the legacy Spectrum line plus Red Hat’s four storage products. These are:

  • OpenShift container storage
  • Ceph
  • Hyperconverged infrastructure
  • Gluster

We might expect these two portfolios to eventually converge.