Cash-rich Cohesity blames pandemic for job cuts

Cohesity today cited the covid-19 pandemic as the reason for laying off or furloughing “a small percentage” of its 1300-strong workforce. The data management startup is cutting jobs just one month after completing a $250m funding round.

The company declined to specify numbers, but said in a statement today that it “remains focused on spending investment dollars wisely to ensure fiscal responsibility and long-term success”.

Cohesity added: “To manage through this time of economic uncertainty and volatility, Cohesity has taken steps to reduce our operating expenses. Unfortunately, as part of that effort, a small percentage of employees have been furloughed or are no longer with the company. This is not a decision the company takes lightly. We value contributions from each and every employee and regret that the pandemic has created this challenging period.”

In semi-related news, Nutanix is to furlough 1465 staff – a quarter of the workforce – for two weeks, on a rolling basis between now and October.

Your occasional storage digest, featuring Pure Storage and others

FlashBlade gets file and object replication

Pure Storage has announced V3.0 of its Purity//FB FlashBlade operating system. FlashBlade is Pure’s unified, scale-out all-flash file and object storage system. New features include:

  • File Replication for disaster recovery of file systems. Read-only data in the target replication site enables data validation and DR testing.
  • Object Replication – replication of object data between two FlashBlades improves the experience for geographically distributed users by providing lower access latency and increasing read throughput. Replication of object data in native format from FlashBlade to Amazon S3 means customers can use the cloud for a secondary copy, or use public cloud services to access data generated on-premises.
  • S3 Fast Copy – S3 copy operations within the same FlashBlade bucket now use “reference-based copies”: data is not physically copied, so the process is faster (see the sketch after this list). Fast Copy does not apply to S3 uploads or to copies between different buckets.
  • Zero touch provisioning (ZTP) – After FlashBlade hardware is installed, ZTP completes the setup remotely; an IP address is obtained automatically over the management port via DHCP. A REST token (“PURESETUP”) grants access to the array through a set of released APIs to perform basic configuration and set up the static management network. When setup completes, the “PURESETUP” token becomes invalid and DHCP is terminated.
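
A same-bucket server-side copy is all a client needs to issue for Fast Copy; the array decides whether it can satisfy the request as a reference-based copy. A minimal boto3 sketch, with a hypothetical FlashBlade endpoint, credentials, bucket and object keys:

```python
import boto3

# Hypothetical endpoint and credentials for a FlashBlade S3 data VIP;
# substitute the values from your own array configuration.
s3 = boto3.client(
    "s3",
    endpoint_url="https://flashblade.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Server-side copy within the same bucket. On Purity//FB 3.0 this should
# complete as a reference-based (metadata-only) copy rather than a full
# physical rewrite of the object data.
s3.copy_object(
    Bucket="analytics",
    CopySource={"Bucket": "analytics", "Key": "datasets/input.parquet"},
    Key="datasets/input-copy.parquet",
)
```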

V3.0 also adds File System Rollback, a data protection feature enabling fast recovery of file systems from snapshots, plus NFS v4.1 Kerberos authentication. Audit log and SNMP support enhancements improve security, alerting and monitoring.

FlashBlade now has a peak backup speed of 90TB/hour and peak restore speed of 270TB/hour.

Public cloud disk drive and SSD ships

Wells Fargo analyst Aaron Rakers told subscribers that cloud-driven nearline HDD units are now approaching 70 per cent of total HDD industry capacity shipped, and account for more than 60 per cent of total HDD industry revenue.

Enterprise SSDs are also estimated to account for 20-25 per cent of total NAND flash industry bits shipped, with cloud accounting for 50-60 per cent or more of total bit consumption.

Shorts

DigiTimes has reported (paywalled) that Western Digital is increasing enterprise disk drive prices by up to 10 per cent due to pandemic-caused production and supply chain cost increases. A WD spokesperson told Blocks & Files the company does not comment on its pricing.

NetApp is partnering with Iguazio so that NetApp’s ONTAP AI on-premises storage and public cloud Cloud Volumes storage participate in Iguazio’s machine learning data pipeline software. Iguazio is compatible with KubeFlow 1.0 machine learning software.

Alluxio, which supplies open source cloud data orchestration software, announced an offering in collaboration with Intel to offer an in-memory acceleration layer with 2nd Gen Intel Xeon Scalable processors and Intel Optane persistent memory. Benchmarking results show 2.1x faster completion for decision support queries when adding Alluxio and PMem compared to only using disaggregated S3 object storage. An I/O intensive benchmark delivers a 3.4x speedup over disaggregated S3 object storage and a 1.3x speedup over a co-located compute and storage architecture.

Broadcom’s Emulex Fibre Channel host bus adapters (HBAs) support ESXi v7.0, and provide NVMe-oF over FC to/from ESXi v7.0 hosts. NetApp, Broadcom and VMware have a validated NVMe/FC server and storage SAN setup.

China’s CXMT (ChangXin Memory Technologies) has signed a DRAM patent license agreement with Rambus, strengthening its potential as a DRAM chip supplier.

FileShadow has announced an integration partnership with Fujitsu, allowing consumers to scan documents from Fujitsu scanners directly into their FileShadow Cloud Storage Vault. FileShadow collects the file, preserves it in its secure cloud vault and curates it with machine learning (ML)-generated tags for images and optical character recognition (OCR) of written text.

GigaSpaces, the provider of InsightEdge, an in-memory real-time analytics and data processing platform, has closed a $12m round of funding. Fortissimo Capital led the round, joined by existing investors Claridge Israel and BRM Group. Total funding is now $53m.

MemSQL has announced that it has been selected as a Red Hat Marketplace certified vendor.

Supermicro has introduced BigTwin SuperServers and Ultra systems validated to work with Red Hat’s hyperconverged infrastructure software.

Backblaze assails Big Three cloud download ‘tax’, slashes S3 prices

Backblaze, the cloud backup vendor, is picking a fight with Amazon in its own back yard by offering much cheaper S3-compatible cloud storage and quicker downloads.

The company has released S3-compatible APIs in a beta test to enable customers to redirect data workflows using S3 targets to Backblaze B2 Cloud Storage. It claims it offers infinitely scalable, durable offsite storage at a quarter of the price of other options, meaning Amazon S3, Azure, and Google Cloud Storage.
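
Because the APIs are S3-compatible, redirecting an existing workflow is largely a matter of pointing the S3 client at a different endpoint with B2 credentials. A minimal boto3 sketch, with placeholder endpoint, key and bucket names:

```python
import boto3

# Placeholder endpoint and credentials; use the S3-compatible endpoint and
# application key shown for your own B2 bucket.
b2 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-002.backblazeb2.com",
    aws_access_key_id="B2_KEY_ID",
    aws_secret_access_key="B2_APPLICATION_KEY",
)

# Existing S3-style calls carry over unchanged.
b2.upload_file("backup.tar.gz", "my-b2-bucket", "archives/backup.tar.gz")
print(b2.list_objects_v2(Bucket="my-b2-bucket")["KeyCount"])
```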

Backblaze storage pod

Blocks & Files asked a Backblaze spokesperson about price: “B2 Cloud Storage prices are not changing for people who want to use S3 APIs. There is one price for storage – $0.005 per GB per month… that’s one quarter of the price of S3, GCS, and Azure,” he said.

Gleb Budman talking about Backblaze in YouTube video

“On top of that, the Big Three have complicated tiered pricing that requires pricing tables to sort out. In addition, downloading data from B2 Cloud Storage is $0.01 per GB – one ninth of the price of S3, GCS, Azure. Again, the Big Three have complex pricing tables just for downloads.

“The tax that the Big Three charge for using your data in downloading is astounding and reflects the walled garden approach that Backblaze has disrupted.”

Blocks & Files asked how Backblaze B2 egress pricing and access times compare with AWS Glacier.

The spokesperson said: “Glacier is the closest in terms of pricing, but Glacier is not a good comparison. That is cold storage. But since Backblaze B2 Cloud Storage is hot, the performance is more appropriately compared to S3, Azure, and GCS. That’s the beauty of it, one quarter the cost, but near 1:1 performance. This is why media and entertainment companies love it for their workflows. They can use B2 as active archive, among countless other things.”

IBM Aspera, Veeam, Quantum, Igneous, LucidLink, Storage Made Easy and other suppliers said they will support B2 Cloud Storage as a destination for customers using their S3 workflows.

Backblaze passed the milestone of storing an exabyte of customer data in March, and has built its business since its founding in 2007 on just $3m in external funding.

Check out a Backblaze pricing calculator to find out more.
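
For a rough sense of what the quoted rates mean, the back-of-the-envelope sketch below plugs the figures above – $0.005 per GB per month for storage, $0.01 per GB for downloads, and Backblaze’s claimed 4x and 9x big-three multiples – into a monthly cost calculation for an illustrative workload.

```python
# Back-of-the-envelope comparison using the figures quoted above: B2 storage at
# $0.005/GB-month and downloads at $0.01/GB, versus roughly 4x and 9x those
# rates at the big three clouds (per Backblaze's claims; real tiered pricing varies).
def monthly_cost(stored_gb, downloaded_gb, storage_rate, egress_rate):
    return stored_gb * storage_rate + downloaded_gb * egress_rate

stored_gb, downloaded_gb = 50_000, 10_000   # illustrative: 50 TB stored, 10 TB downloaded

b2 = monthly_cost(stored_gb, downloaded_gb, 0.005, 0.01)
big_three = monthly_cost(stored_gb, downloaded_gb, 0.005 * 4, 0.01 * 9)

print(f"B2: ${b2:,.0f}/month, big three (approx): ${big_three:,.0f}/month")
# B2: $350/month, big three (approx): $1,900/month
```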

Clumio debuts ‘air-gapped’ backup for Microsoft 365

Clumio customers can now protect their Microsoft 365 workloads using the startup’s AWS-based data protection as a service. Backup account-separation is a key aspect of the new facility.

Clumio backs up Microsoft 365 user data in separate accounts and claims that this imposes an air-gap between the user’s account and the backup data. We think this stretches the meaning of ‘air-gap’, which generally describes offline tape cartridges that have no network connection, rather than two separate public cloud accounts.

Microsoft advises Microsoft 365 users to “regularly back up your content and data that you stored on the services or store using third-party apps and services.”  Clumio says it’s the best such third party backup service – competitors include Druva – citing superior ransomware protection.

A spokesperson told us: “Clumio’s service backs up the data outside the customer’s account in an immutable format to ensure that data cannot be compromised. This means that even when the bad guys get access to the customer’s network, they have no access to compromise Clumio’s data.”

He claimed: “Other solutions use the same security credentials for backup and production. Others keep backup copies, backup storage, or compute, in the customer’s production accounts leaving them exposed to ransomware or data loss if the account credentials are compromised.”
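
The account-separation idea can be illustrated with plain AWS building blocks. This is not Clumio’s code – just a hypothetical sketch in which backups are written using a dedicated backup account’s credentials into a bucket that account owns, with S3 Object Lock making the copies immutable for a retention period.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Hypothetical credentials belonging to a dedicated backup account, distinct
# from the production account's credentials.
backup_session = boto3.Session(
    aws_access_key_id="BACKUP_ACCOUNT_KEY",
    aws_secret_access_key="BACKUP_ACCOUNT_SECRET",
)
s3 = backup_session.client("s3")

# The target bucket is owned by the backup account and must have Object Lock
# enabled. Compliance-mode locking keeps the copy immutable for the retention
# period, even if the production credentials are later compromised.
with open("prod-db-2020-05-07.dump", "rb") as body:
    s3.put_object(
        Bucket="backup-account-vault",
        Key="prod-db/2020-05-07.dump",
        Body=body,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
    )
```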

Clumio’s Cloud Data Fabric backs up VMware virtual machines running vSphere on-premises or in the VMware Cloud, using AWS S3 object storage. Clumio also provides general SaaS Backup for AWS, backing up apps in AWS accounts that use EC2 and EBS, and storing them in a separate account.

Read a Clumio blog to find out more.

Strong flash performance underpins Western Digital Q3

Western Digital revenues climbed 14 per cent to $4.18bn in the third fiscal 2020 quarter ended April 3, on the back of record flash memory performance. The company generated $17m net income, a big improvement on the $581m loss for the same period last year.

David Goeckeler, WD’s new CEO, said in a statement: “While I couldn’t have anticipated the unprecedented series of events that have transpired, I’m very proud of how the company has responded to an extremely dynamic environment with dedicated focus both on our employees’ safety as well as delivering our market leading technology to our customers.”

Flash up, HDD down

WD’s HDD revenues fell 2.4 per cent in the quarter to $2.1bn. Total disk exabyte shipments fell six per cent Q/Q but capacity enterprise exabyte shipments grew 50 per cent Y/Y. WD said this means it maintained the leading position in the capacity enterprise drive category.

WD’s flash revenues jumped 28 per cent to $2.1bn, meaning the flash business now equals the HDD business in size.

Earnings call

In the earnings call Goeckeler said: “We encountered some supply disruptions in the quarter. However, due to the efforts of our operations team, we saw supply trends improve as the quarter progressed.” There were also “additional costs associated with logistics and other manufacturing activity.”

The disk issue

In the earnings call Wells Fargo analyst Aaron Rakers commented: “It looks like you definitely kind of underperformed some of your peers on nearline” – Seagate’s high-capacity enterprise disk drives, in other words.

Goeckeler replied: “On the nearline side, I mean, look, we’re happy with where the product performed. The 14-terabyte is still performing well. 18-terabyte shipped for revenue this quarter, as we talked about. That we made that commitment, we delivered on that… we’re happy with where the portfolio is.”

The problem, as we see it, is that Seagate is shipping a lot of 16TB drives, unlike WD which is focusing on 14TB drives. WD is pinning hopes on its 18TB drive doing well, while Seagate has a 20TB drive on its way.

Rakers’ chart showing WD’s loss of nearline disk exabytes shipped market share

In a mail to subscribers Rakers estimated WD has lost nearline drive market share to Seagate, with a nine per cent Q/Q drop to 45 per cent (see chart above.)

WD’s combined HDD and SSD client devices revenues grew 13 per cent in the quarter to $1.83bn. The company said pandemic-induced home working fuelled strong demand for notebook SSDs. Data Centre product revenues grew 2 per cent to $1.5bn and Client Solutions (consumer retail products) brought in $821m, up 2 per cent on the year, with retail sales affected by the pandemic.

The Q4 outlook is for revenues of $4.35bn at the mid-point of estimates, up 19.7 per cent, which would mean full fy2020 revenues of $16.86bn, up 1.6 per cent. CFO Bob Eulau anticipates fourth quarter client SSD revenues will grow strongly as working from home continues, and new games consoles will use more flash instead of disk storage.

WD has suspended dividend payments to conserve cash.

VAST Data adds SMB support with Universal Storage upgrade

VAST Data has added SMB support and replication in the V3 release of its Universal Storage array software.

Jeff Denworth, VP of products at VAST, claimed today: “Version 3 is the launch vehicle that brings Universal Storage to enterprises, government customers, and content organisations who have suffered under the weight of legacy storage and storage tiering.”

With Version 3, VAST has written its own SMB software stack so that Windows and MacOS applications can use VAST storage, with multi-protocol access across NFS and SMB. With the SMB server software, a client fails over to another VAST server if the server it is connected to fails.

VAST’s new Snap-to-Objects feature replicates a data snapshot to a second VAST array, an on-premises S3 target system, or the public cloud for archiving. It also enables disaster recovery for file and object data held in a VAST array.

The company now supports FIPS-class AES-256 encryption of data stored on its Optane and QLC flash SSDs and has improved its data reduction efficiency. Denworth told us the new release delivers a further 25 per cent gain in dedupe efficiency on average, although there is variance between different types of data.

The upgrade includes enhanced user behaviour monitoring, performance improvements and management features.

The VAST Data hardware array uses QLC flash for bulk data storage, with added 3D XPoint media to boost metadata handling, deduplication and other management functions. The richly funded startup announced the alleged disk drive array killer in February 2018.

DASEd but not confused

VAST’s disaggregated and shared-everything (DASE) architecture represents a generational change, according to Denworth, who explains his thinking in a company blog. If his assumptions are correct, the shared-nothing, Dell EMC Isilon-type architectures are toast. But don’t get out the butter and jam just yet. Generational changes take time to play out.

NetApp pushes ‘VDI at scale’ via CloudJumper takeover

NetApp CloudJumper

NetApp has bought a North Carolina VDI company called CloudJumper for an undisclosed sum. The storage giant will use CloudJumper’s technology to underpin the new NetApp Virtual Desktop Service, which provides virtual desktop infrastructure from the public cloud to work-from-home office staff.

With the surge in pandemic-induced work-from-home arrangements, this acquisition is a timely move to gain customers for NetApp storage in the cloud.

Anthony Lye, GM of NetApp’s cloud data services business unit, said in the press announcement: “The ability to provide a consistent virtual desktop experience at scale while keeping data available and secure without sacrificing performance has always been important and is especially critical in today’s unprecedented environment.

“NetApp and CloudJumper provides a simplified management platform for delivering virtual desktop infrastructure, storage and data management across Microsoft Azure, AWS and Google Cloud with best in class virtual desktop management combined with best in class storage and data services.”

NetApp VDS is available immediately on NetApp Central and is integrated with Azure NetApp Files and Cloud Volumes. The company said it will invest in the CloudJumper channel.

The customer message is that CloudJumper VDI gets backed up by NetApp’s enterprise-class storage and management facilities.

CloudJumper history

Established in 2016, the privately funded CloudJumper has developed what it calls a workspace-as-a-service (WaaS). The company’s Cloud Workspace Management Suite (CWMS) cloud-native software provides Windows virtual desktop infrastructure (VDI) services through the Azure cloud. Versions of CWMS are also available running in AWS, Google and regional public cloud suppliers. 

CWMS is used by thousands of business customers and competes with other VDI suppliers such as Citrix. The coronavirus pandemic has led the company to onboard thousands of new Windows Virtual Desktop users as desktop workspaces are needed for remote work-from-home office staff.

CloudJumper bought a small company called IndependenceIT in February 2018, and uses the technology acquired to deliver workspaces, applications and desktops-as-a-service.


Veeam upgrades Microsoft Azure backup

Veeam has released Veeam Backup for Microsoft Azure, extending its data protection embrace to Azure Blob storage and improving multi-cloud portability.

Previously the company offered only basic functionality, having introduced backup for Azure virtual machines in Veeam Backup & Replication v9.5.

Veeam global technologist David Hill has written an informative blog about the new Azure facilities.

Microsoft Veeam Backup for Microsoft Azure diagram

Although Microsoft deploys and protects the Azure infrastructure, Azure users are responsible for protecting their data and the applications they run in Azure virtual machines.

The Veeam Azure backup helps them do this. Security is built around Azure service accounts and Active Directory integration. Features include:

  • Integrated and agentless snapshot automation for frequent restore points 
  • Backup to, and long-term retention in, Azure Blob storage (see the sketch after this list) 
  • Full and file-level recoveries
  • Multi-cloud portability via Veeam’s portable backup format, which also supports on-premises Veeam sites
  • Azure backup cost calculator with guidance on snapshot, backup, traffic and transaction costs on a monthly basis
  • Cross-subscription and cross-region backup for added resilience
  • Multi-factor authentication
  • Turnkey deployment through the Azure Marketplace
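
The Blob retention bullet above rests on standard Azure Blob storage behaviour. The sketch below uses the azure-storage-blob SDK with a placeholder connection string, container and file names; it illustrates the underlying Azure capability – an upload parked in a long-term retention tier – rather than Veeam’s own backup format or workflow.

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string and names; not Veeam's internal workflow.
service = BlobServiceClient.from_connection_string("<storage-account-connection-string>")
container = service.get_container_client("veeam-backups")

# Push a restore point to Blob storage and park it in the Archive tier,
# the cheapest option for long-term retention.
with open("restorepoint-2020-05-07.vbk", "rb") as data:
    container.upload_blob(
        name="vm-finance-01/restorepoint-2020-05-07.vbk",
        data=data,
        standard_blob_tier="Archive",
        overwrite=False,
    )
```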

Veeam Backup for Microsoft Azure is available in free and paid editions. The free edition gives you backup for up to 10 Azure VMs, with no limitations on restores.

Rubrik delivers faster vSphere backups, quicker Oracle restores and stronger compliance

Rubrik is speeding up its software for Oracle and vSphere customers and has made compliance easier with Andes 5.2, the latest release of its cloud data management suite.

Rubrik president Dan Rogers said today: “We are introducing a significant performance boost for organisations that rely on monster VMware VMs. DBAs will also have access to new tooling that will make it easier to do their jobs, with more control to quickly deliver clones when needed.”

Andes 5.2 provides multi-node streaming for vSphere, as Rubrik blogger Roman Konarev explains: “Rubrik breaks down a large VMDK into smaller pieces called shards and processes them in parallel on multiple Rubrik nodes.” That can mean up to five times faster backups for multi-terabyte virtual machines and a 3x restore speed increase. Rubrik says the software has export resumability, making it more resistant to temporary network interruptions.
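
Rubrik has not published its implementation, but the shard-and-parallelise idea is easy to sketch generically: split a large disk image into fixed-size shards and ingest them concurrently across several workers. The Python below is purely illustrative – shard size, node count and the ingest step are placeholders.

```python
import os
from concurrent.futures import ThreadPoolExecutor

SHARD_SIZE = 512 * 1024 * 1024   # illustrative 512 MiB shards

def ingest_shard(path, offset, length, node_id):
    """Read one shard and hand it to a backup node (placeholder for the real ingest)."""
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read(length)
    # ... send `data` to backup node `node_id` here ...
    return node_id, offset, len(data)

def backup_vmdk(path, num_nodes=4):
    size = os.path.getsize(path)
    with ThreadPoolExecutor(max_workers=num_nodes) as pool:
        futures = [
            pool.submit(ingest_shard, path, off, min(SHARD_SIZE, size - off), i % num_nodes)
            for i, off in enumerate(range(0, size, SHARD_SIZE))
        ]
        return [f.result() for f in futures]

backup_vmdk("/vmfs/volumes/datastore1/monster-vm/monster-vm.vmdk")
```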

Oracle

Oracle database admins get faster database cloning for test and dev. A Rubrik blog by Saurav Das explains that the LiveMount feature can clone a production database onto an alternate host without having to provision storage there. No file copying is involved – the Oracle data files are mounted directly from the backups, using an NFS share on the target host.  

A damaged database can be restored to the most recent state before a failure and near-zero recovery times can be achieved – irrespective of the database size – by mounting Oracle database files on the Rubrik cluster via an NFS share. When the database is fully recovered, DBAs can migrate the files from the Rubrik cluster to the target host.
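
The mount-rather-than-copy idea is straightforward to sketch, although the export path, mount point and commands below are hypothetical rather than Rubrik’s actual LiveMount tooling: the backed-up datafiles are exposed as an NFS export and mounted on the target host, so a clone or recovery can open them in place before any files are migrated.

```python
import subprocess

# Hypothetical NFS export and mount point, for illustration only.
EXPORT = "rubrik-cluster.example.com:/exports/oracle/PRODDB/snap-1234"
MOUNTPOINT = "/mnt/proddb_clone"

subprocess.run(["mkdir", "-p", MOUNTPOINT], check=True)
subprocess.run(["mount", "-t", "nfs", EXPORT, MOUNTPOINT], check=True)

# The cloned or recovering instance opens the datafiles in place over NFS;
# once validated, the files can be migrated to the target host's own storage.
print(subprocess.run(["ls", MOUNTPOINT], capture_output=True, text=True).stdout)
```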

Andes 5.2 supports Oracle Database Appliance, Oracle Exadata, Oracle 12cR2, 18c, and 19c, Oracle RAC One Node, Oracle RAC, and Oracle Direct NFS.

Compliance

Compliance admins get a new set of pre-defined roles so they can grant safe access to sensitive data faster. Data snapshots can be placed on indefinite hold to meet legal and compliance needs. Data can be replicated to multiple targets instead of a single target, and stored data objects that fall out of compliance with prescribed service level agreement policies can be automatically identified.

Andes 5.2 enables SQL Server DBAs to download the snapshots and transaction log files that comprise a backup, making it easy to transfer and use the log files for audits.

Lastly, Pure’s FlashBlade can be used as an S3 archival target.

Toshiba publishes list of consumer HDDs that use shingled magnetic recording

Toshiba has revealed which of its desktop-type drives use SMR (shingled magnetic recording) technology, which can deliver slower performance under sustained random writes.

Some users have complained that desktop SMR drives can exhibit poor performance in certain instances, such as loading a large gaming application composed of a large number of files.

This is because when a file is read, the operating system’s access time metadata is updated and written back to the drive. Such access time collection and storage is a default element of file metadata in Windows and MacOS. Continuous access time updates in the OS are random disk writes and so fall into the SMR performance vulnerability zone.
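
A small experiment illustrates the mechanism. On a filesystem that records access times on every read (for example, Linux mounted with strictatime – the default relatime option may defer the update), merely reading a file changes its atime, and that metadata change must eventually be written back to the drive.

```python
import os
import time

path = "example.dat"
with open(path, "wb") as f:
    f.write(b"x" * 1024)

before = os.stat(path).st_atime
time.sleep(2)

with open(path, "rb") as f:   # a pure read...
    f.read()

after = os.stat(path).st_atime
# ...still dirties metadata: the new access time has to be written back to the
# disk eventually, which on an SMR drive is a random write into a shingled zone.
print(f"atime changed: {after != before} ({before} -> {after})")
```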

Toshiba uses SMR technology – previously undocumented – in several desktop drives and in some video surveillance HDDs:  P300 6TB, P300 4TB, DT02 6TB, DT02 4TB, DT02-V 6TB and DT02-V 4TB.

Certain notebook PC, game console, and external consumer drives also use SMR: L200 2TB, L200 1TB, MQ04 2TB and MQ04 1TB.

Toshiba said it “works extensively with notebook and desktop PC vendors on the selection of the appropriate storage media to help ensure the data integrity, reliability and planned lifetime requirements of the system”.

The company does not use SMR in the N300, a NAS drive intended for the consumer market – unlike Western Digital which uses SMR in some low-end WD Red NAS devices.

Micron reinvents storage IO stack for the solid state age

Micron has devised a modified storage IO stack for Linux that delivers lower latency, faster performance and longer life. The US chipmaker said the ‘heterogeneous-memory storage engine’ (HSE) is host-level software, not device-level.

HSE works with SSDs and storage class memory and is extensible to new interfaces and storage devices for applications across databases, IoT, 5G, AI, HPC and object storage.

The code optimises performance and endurance by orchestrating data placement across DRAM and multiple classes of SSDs or other solid-state storage devices. It implements a key:value store, and scales to terabytes of data and hundreds of billions of keys per store.

A storage engine connects an application, such as a database, to storage drives and their controllers, enabling the application to talk directly to the drives; it is not the drive controller itself. Micron’s HSE code sits in a host and replaces a standard or existing storage IO stack, so it needs to be integrated with the application. Micron has facilitated this integration by making HSE open source.

Micron has tested HSE-enabled workloads against the RocksDB storage engine, using YCSB (Yahoo! Cloud Serving Benchmark) workloads and four Micron 9300 SSDs. HSE improved performance throughput 6x, reduced latency 11x and lengthened flash endurance 7x. It achieved this by reducing write amplification.
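
The endurance gain follows from write amplification – bytes physically written to NAND divided by bytes written by the host. A quick worked example with illustrative numbers shows how a 7x reduction in write amplification lets the same NAND endurance budget absorb roughly 7x more host writes.

```python
# Write amplification = bytes physically written to NAND / bytes written by host.
# Illustrative numbers only: if a conventional engine writes 7 GB to flash for
# every 1 GB of host data and HSE writes about 1 GB, endurance improves ~7x.
def write_amplification(nand_gb, host_gb):
    return nand_gb / host_gb

baseline = write_amplification(7.0, 1.0)   # e.g. an unmodified key-value engine
hse      = write_amplification(1.0, 1.0)   # reduced amplification

print(f"WA baseline={baseline:.1f}, HSE={hse:.1f}, "
      f"relative endurance ~{baseline / hse:.0f}x")
```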

Micron has also integrated HSE with MongoDB and claims an 8x throughput improvement.

HSE is available on GitHub, where there is an HSE wiki resource. A blog by Larry Hart, Micron director of product marketing, provides additional information.

HSE uses

We think Micron’s HSE initiative is motivated in part by a desire to encourage third-party vendors to modify their applications to work with its upcoming 3D XPoint SSDs.

Ceph is a potential integration candidate for HSE, and Stefanie Chiras, VP and GM of Red Hat Enterprise Linux, said in the announcement release: “We see enormous potential in the technologies being introduced by Micron, especially as it takes an innovative approach in lowering the latency between compute, memory and storage resources.”

Scality, an object storage supplier, has also provided a supporting quote, courtesy of field CTO Brad King: “While our storage software can support ‘cheap and deep’ on the lowest-cost commodity hardware for the simplest workloads, it can also exploit the performance benefits of technologies like flash, storage class memory and SSDs for very demanding workloads.

“Micron’s HSE technology enhances our ability to continue optimising flash performance, latency and SSD endurance without trade-offs.”


Kioxia’s software-enabled flash could be a game changer in SSD management

Kioxia today introduced software-enabled flash (SEF), a radical development in solid state storage management that gives users the ability to optimise for specific workloads using flash ‘personalities’.

Currently, the standard SSD controller assumption is that one size fits all, apart from relatively crude read- or write-optimisations for specific products. Kioxia overturns this with a software-defined flash controller that enables dynamically reconfigurable flash at the SSD level or – for hyperscalers – a build-it-yourself flash storage pool.

Eric Ries, SVP, memory storage strategy division at KIOXIA America, said in a statement: “Our customers have been pushing for the ability to drive operational efficiency in the data centre programmatically, and SEF technology will meet this need by placing access and control of flash directly in the hands of hyperscale programmers.”

SEF virtualizes the dies and enables the operator to dynamically control how flash is optimised across thousands of dies to match it to specific workload needs. For instance, hyperscalers can gain better latency control, with host software managing tasks on the SSD through API access. This means background activities will not hinder latency-sensitive work.

As workload requirements change, a hyperscaler or large enterprise could reconfigure a population of SSDs and their dies to provide better performance and more cost-efficient use for the new workload.

The SEF and API scheme also means that host software can be used to manage the NAND dies across flash generation changes.

Virtual devices

The controller or SEF unit hardware is a system-on-chip (SoC) unit with a micro-controller and flash dies mounted on a printed circuit board.

The SoC has a PCIe interface. Sub-units handle NAND block and page program timing, read tasks with error correction, cell health, defect management, and endurance-extension algorithms. A DRAM controller sub-unit enables the optional addition of DRAM.

The SEF SoC sub-divides the NAND dies under its control into sub-domains or ‘virtual devices’. Each virtual device can have different characteristics, such as quality of service arrangements, and its own personality. These could include block device, Zoned Namespaces (ZNS), TRocksDB, Firecracker or a custom hyperscale flash translation layer (FTL). The host can control data placement using the virtual devices.

The virtual devices are dynamically reconfigurable through API access. Some or all of these personalities can operate in parallel on the same SSD, with the SEF software isolating the virtual device domains from each other. This capability may become more useful as SSD capacities rise. The API code is open source and gives access to the full capacity of the NAND dies.
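
Kioxia says the API code is open source, but the binding below is purely hypothetical – the class and method names are invented for illustration. It sketches the idea described above: carving a pool of NAND dies into isolated virtual devices, each with its own personality and quality-of-service setting.

```python
from dataclasses import dataclass

# Purely hypothetical model of an SEF unit; the real open source API differs.
@dataclass
class VirtualDevice:
    name: str
    dies: list          # physical dies backing this virtual device
    personality: str    # e.g. "block", "zns" or a custom FTL
    qos: str            # e.g. "low-latency" or "throughput"

class SefUnit:
    def __init__(self, total_dies):
        self.free_dies = list(range(total_dies))
        self.vdevs = {}

    def create_virtual_device(self, name, num_dies, personality, qos):
        if num_dies > len(self.free_dies):
            raise ValueError("not enough free dies")
        dies, self.free_dies = self.free_dies[:num_dies], self.free_dies[num_dies:]
        self.vdevs[name] = VirtualDevice(name, dies, personality, qos)
        return self.vdevs[name]

sef = SefUnit(total_dies=64)
sef.create_virtual_device("latency-tier", 16, personality="zns", qos="low-latency")
sef.create_virtual_device("bulk-tier", 48, personality="block", qos="throughput")
```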

Check out a Kioxia technical introduction to its software-defined flash controller and API.