
Your occasional enterprise storage digest featuring CTERA, Nutanix, OpenIO, StorONE and Veritas

This week’s digest covers file-sharing, flash, hyperconverged infrastructure, all-in-one storage, object storage, data protection and supplier responses to the Covid-19 pandemic. Dive straight in with Nutanix.

Nutanix China win

Nutanix software has been chosen to help run more than 60 Tsingtao breweries in China.

Tsingtao had a general wish to upgrade its IT infrastructure and a particular need to support an intelligent retail business model. It is using Nutanix AHV systems to do this, plus the Enterprise Cloud OS and Prism Pro management software.

The focus is on enterprise mobility management, risk management and financial accounting, content management systems, business process management systems and manufacturing enterprise systems.

Nutanix remote working help

Nutanix helped JM Finn, a UK investment firm, to support remote working for all employees in response to COVID-19 – and did it in about a week. Jon Cosson, head of IT at JM Finn, said: “Our infrastructure was already completely virtualised which made a big difference in enabling remote work. … Our Nutanix private cloud infrastructure, which powers all of our workloads including VDI, played an integral part in keeping our employees safe and productive while working remotely.”

OpenIO as fast as Minio, faster than HDFS

Object storage supplier OpenIO says it is as fast as competitor Minio on the TeraSort benchmark, and faster than HDFS.

It is on a par with Minio on the Random TextWriter and Wordcount benchmarks. Both outperform HDFS.

OpenIO vs Minio

HDFS is faster than OpenIO in the DFSIO benchmark, when using only a small number of small files. But, as the size of the datasets increases, OpenIO outperforms HDFS. This is especially true for very large datasets.

OpenIO claims these tests make it clear that S3 object storage is now a credible primary storage option for Hadoop. If your application manages a dataset of dozens of terabytes, as in Big Data use cases, you should consider OpenIO instead of Hadoop’s HDFS.

CTERA for DevOps

File-sharing cloud storage gateway supplier CTERA is supporting DevOps by making its products more manageable with a software development kit (SDK) for Python and Ansible automation.

CTERA’s software and devices enable global file sharing and access at endpoints ranging from single users to branch offices, via private or public cloud fabrics.

The CTERA SDK enables Python developers to create applications that use CTERA file storage. CTERA says these apps can scale to any size.

CTERA has made the Python facilities available so that an Ansible playbook can automate the provisioning of CTERA storage resources worldwide, across multiple cloud providers. It says the Ansible Collection embodies an infrastructure-as-code approach, meaning no scripting or other programming is needed.

The CTERA DevOps SDK and Ansible Collections are available on GitHub today under an open source license.
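
To make the infrastructure-as-code idea concrete, here is a minimal Python sketch of declarative, idempotent share provisioning of the sort an SDK plus an Ansible Collection enables. The GatewayClient class, its methods and the share parameters are hypothetical placeholders for illustration, not CTERA's actual SDK API – see the GitHub repositories for the real interfaces.

```python
# Hypothetical sketch of SDK-driven, declarative share provisioning.
# GatewayClient and its methods are illustrative placeholders, not the real CTERA SDK.

DESIRED_SHARES = [
    {"name": "engineering", "path": "/shares/engineering", "read_only": False},
    {"name": "finance-archive", "path": "/shares/finance", "read_only": True},
]

class GatewayClient:
    """Stand-in for an edge filer / gateway API client."""
    def __init__(self, host, user, password):
        self.host, self.user, self.password = host, user, password
        self._shares = {}  # pretend server-side state

    def list_shares(self):
        return dict(self._shares)

    def create_share(self, name, path, read_only=False):
        self._shares[name] = {"path": path, "read_only": read_only}

def reconcile(client, desired):
    """Create any declared share that is missing - idempotent, like an Ansible task."""
    existing = client.list_shares()
    for share in desired:
        if share["name"] not in existing:
            client.create_share(**share)

if __name__ == "__main__":
    client = GatewayClient("filer.example.com", "admin", "secret")
    reconcile(client, DESIRED_SHARES)   # running it twice changes nothing
    reconcile(client, DESIRED_SHARES)
    print(client.list_shares())
```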

StorONE

The latest S1 Enterprise Storage Platform release from all-in-one storage supplier StorONE adds:

  • S1:Tier: moves data across multiple tiers of storage, from high-performance Optane or NVMe flash, to high-density SAS flash such as QLC, to hard disk drives, and then to the cloud for long-term archive. There can be a separate resource pool for NVMe SSDs.
  • S1:Snap: zero-impact, unlimited snapshots can tier older snapshot data to less-expensive hard disk-based or cloud storage. This lessens the need for a separate backup system.
  • S1:Object: create a volume that supports object storage via the S3 protocol. A single S1-powered storage server can now support high-performance (1 million-plus IOPS) block storage over Fibre Channel or iSCSI, as well as cost-effective, high-capacity NAS or object storage via NFS, SMB or S3.
  • S1:Replicate: provides asynchronous, semi-synchronous and synchronous replication of data from one StorONE system to another. Asynchronous replication acknowledges a write once it completes locally and reaches the local TCP buffer. Semi-synchronous replication acknowledges once data is written locally and reaches the remote TCP buffer. Synchronous replication acknowledges once data is written locally and to the remote storage system.

With S1:Replicate, source and target storage clusters can have different drive redundancy settings, snapshot data retention policies and drive pool types. This means customers can run a less-expensive system at their disaster recovery site.
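
As a rough illustration of where each mode returns the write acknowledgement, here is a simplified Python sketch. It is generic pseudologic under our own assumptions, not StorONE's implementation.

```python
# Simplified illustration of the three acknowledgement points described above.
# Generic pseudologic under our own assumptions, not StorONE's implementation.
from enum import Enum, auto

class Mode(Enum):
    ASYNC = auto()      # ack once the write is on local media and in the local TCP buffer
    SEMI_SYNC = auto()  # ack once the write is on local media and in the remote TCP buffer
    SYNC = auto()       # ack once the write is on local media and on the remote system

def write_and_ack(block, mode, local_media, local_buf, remote_buf, remote_media):
    local_media.append(block)        # local persistence always happens first
    if mode is Mode.ASYNC:
        local_buf.append(block)      # handed to the local network stack for later send
    elif mode is Mode.SEMI_SYNC:
        remote_buf.append(block)     # received into the remote system's TCP buffer
    else:
        remote_media.append(block)   # fully persisted at the remote site
    return "ack"                     # the later the ack point, the stronger the guarantee

# Demo: synchronous mode only acknowledges after the remote copy is on stable storage.
local_media, local_buf, remote_buf, remote_media = [], [], [], []
write_and_ack(b"block-1", Mode.SYNC, local_media, local_buf, remote_buf, remote_media)
print(remote_media)  # [b'block-1']
```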

The company said last week it has the financial reserves to weather the COVID-19 pandemic.

Veritas

Veritas’ Enterprise Data Services Platform now includes:

  • APTARE IT Analytics 10.4 has new regulation support for public sector environments, and reporting engine upgrades. It enables new data collection from NetBackup Appliances, Dell EMC Avamar 19.1, Dell EMC Data Domain 6.2, Dell EMC NetWorker 9.2.1, HPE Nimble, NAKIVO 9.1.1, and VMware ESXi 6.5. APTARE IT Analytics 10.4.1 also features additional supported languages including French, Chinese, Korean and Japanese.
  • Backup Exec (BE) 21 has per-instance licensing, automated license updates and enhanced security to guard against ransomware. It has day-one support for vSphere 7.0 and vSAN 7.0, additional cloud regional support and broader physical support (CentOS 7.7 x64, Debian 10.0 x64, Oracle Linux 8 and 8.1, Red Hat Enterprise Linux 8 and 8.1).
  • Veritas SaaS backup adds support for Microsoft Dynamics 365 CRM, with protection for Azure, Dynamics 365 and Office 365.
  • eDiscovery Platform 9.5 (eDP 9.5) introduces support for all major Web browsers, with legal holds and security enhancements. It has support for Enterprise Vault 12.5, Exchange 2019 and SharePoint 2016.
  • Veritas EV.cloud now includes Veritas Advanced Supervision 2.0, bringing intelligence and analysis to data supervision for organisations targeting advanced cloud-based archiving with Microsoft Office 365 or Google Gmail for data governance. Updates allow for classification-driven sampling and searching to help customers restrict relevant content from view sets and ensure that content is included in classification.

Shorts

Amazon Web Services ECS (Elastic Container Service) now supports the Amazon Elastic File System (EFS) file system. Both containers running on ECS and AWS Fargate can use EFS. AWS says this will help customers containerize applications that require shared storage such as content management systems, internal DevOps tools, and machine learning frameworks.
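
As a rough illustration of what that wiring looks like, here is a minimal boto3 sketch that registers a Fargate task definition with an EFS volume. The file system ID, names and image are placeholders, and the parameter layout reflects the ECS API as we understand it – check the AWS documentation before relying on it.

```python
# Sketch: register an ECS/Fargate task definition that mounts an EFS file system.
# IDs, names and image are placeholders; verify parameters against AWS docs.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.register_task_definition(
    family="cms-with-shared-storage",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "web",
        "image": "wordpress:latest",
        "essential": True,
        "mountPoints": [{
            "sourceVolume": "shared-content",
            "containerPath": "/var/www/html",
        }],
    }],
    volumes=[{
        "name": "shared-content",
        "efsVolumeConfiguration": {          # EFS volume support for ECS and Fargate
            "fileSystemId": "fs-12345678",   # placeholder file system ID
            "rootDirectory": "/",
        },
    }],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```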

AWS is introducing a new Snowball management platform, new IAM capabilities, and support for task automation.

VMware has announced the integration of its Site Recovery Manager (SRM) with Pure Storage’s FlashArray products, using VMware vSphere Virtual Volumes (vVols).

Backup as a service startup Clumio has achieved Amazon Web Services (AWS) Storage Competency status for its enterprise backup service.

Forward Insights has ranked Kingston in first place in worldwide channel SSD shipments with 18.3 per cent market share, ahead of semiconductor manufacturers Western Digital and Samsung (16.5 per cent and 15.1 per cent, respectively). According to Forward Insights, almost 120 million SSDs were shipped in the channel in 2019.

ObjectiveFS 6.7 includes dynamic scaling of threads, a small-file performance speedup, faster performance when running with disk cache, EC2 Instance Metadata Service v2 (IMDSv2) support, S3 connection tuning and more – including 350MB/sec-plus read and write of large files.

Entertainment and media object storage supplier Object Matrix and Signiant have announced improved workflow compatibility between MatrixStore and Signiant Media Shuttle, a de facto industry standard for sending and sharing large files fast.

OwnBackup is providing assistance to pandemic-strained healthcare organisations with an OwnBackup Gratitude package providing backup and security services free of charge. It integrates with Salesforce Health Cloud.

Pavilion, an NVMe-oF array supplier, has gained VMware vSphere 7 certification for its Hyperparallel Flash Array.

HCI supplier Scale Computing said Q1 2020 revenue reached a record, growing more than 30 per cent. It reports growth from local government and education customers where IT demands have skyrocketed due to the pandemic – that’s led to work-from-home/teach-from-home requirements.

StorCentric’s Retrospect is offering free 90-day subscription licenses for every Retrospect Backup product, with no strings attached. Users can back up all of their data free for 90 days. If, at the end of the 90 days, they no longer wish to use Retrospect software, they can still access, retrieve and restore all of their backed-up data.

NVMe SSD tester SANBlaze has announced the availability of TCG Opal verification testing for NVMe SSDs.

SoftNAS has changed its name to Buurst. It intends to charge a price for its products that is not based on capacity. It announced $5m additional capital from its investor base, bringing total equity capital raised to $35 million. SoftNAS will remain a core product offering from Buurst and is available on the AWS Marketplace and Azure Marketplace.

SMART MIP

Smart Modular Technologies has announced a higher density 16GB DDR4 Module-in-a-Package (MIP). The MIP is a tiny form factor design targeted at uses in IIoT, embedded computing, broadcast video and mobile routers. It is available in two configurations, a standard 1G x64 version or a two-channel x32 configuration, to replace either soldered-down DRAMs or SO-DIMMs.

Replication supplier WANdisco is donating its software to help researchers share and analyse big data to develop potential treatments and cures for COVID-19.

Hybrid cloud data warehouse supplier Yellowbrick is providing free access to its cloud data warehouse to help aid researchers and companies actively working on a vaccine for COVID-19. Virtusa has teamed up with Yellowbrick to provide implementation consulting and access for its Life Sciences platform, vLife. Apply at www.yellowbrick.com/covid19/.

People

Acronis has hired Amy Luber as its Channel Chief Evangelist.

Quantum has hired James Mundle as global channel chief. He most recently served as VP of worldwide channel programs at Veeam. Before that he was VP of worldwide channel sales for Seagate’s Cloud Systems and Solutions business.

Renaud Perrier, formerly Google’s Head of Cloud ISV Partnerships, has become Senior Vice President of International Business Development and Operations at Virtru. The company created TDF (Trusted Data Format), privacy technology built on its data protection platform to govern access to data throughout its lifecycle – from creation to transmission, storage, analysis and sharing.

SMR in disk drives: PC vendors also need to be transparent

Western Digital late last week issued a statement in response to the revelation of the company’s undocumented use of SMR (shingled magnetic recording) in 2TB-6TB WD Red NAS drives.

Toshiba and Seagate confirmed to Blocks & Files that there is undocumented use of SMR technology in some of their drives. We think it is now time for the PC vendors to come clean.

Desktop and laptop system makers need to be explicit in data sheets and marketing literature when their disk drives use SMR. This will prevent avoidable mishaps of the WD Red NAS variety.

A senior industry source, who declined to be named, told us: “It’s actually not surprising that WD and Seagate offered to OEM out SMR HDDs for desktops – after all, they are cheaper per TB. And sadly, it is also not surprising that the desktop vendors such as Dell and HP integrated them into their machine without ‘telling’ their customers, the end-user consumer (and/or the business desktop buyer, usually a procurement agent)… So, I think the fault is spread around the supply chain – not just the HDD manufacturers.”

SMR is cheaper

In its statement (full text below), WD explains that certain sub-8TB WD Red SMR drive users could experience problems, and also that it uses conventional magnetic recording (CMR) technology in 8TB-14TB WD Red NAS drives.

So why did WD use SMR drives for the sub-8TB capacity points? Very simply, with fewer platters and read and write heads, SMR is a cheaper way to deliver the same capacity as CMR.

WD uses SMR in its 1TB, 2TB, 3TB, 4TB and 6TB Red drives and conventional recording in its 8TB, 10TB, 12TB and 14TB Red drives. We see here a split product line, with each half using a different disk recording technology under one brand.

And why did WD not use SMR in the 8TB and above drives, if it could deliver “an optimal performance experience for users”?

WD said in its statement: “In our testing of WD Red drives, we have not found RAID rebuild issues due to drive-managed SMR technology.”

However, users on the Reddit, Synology and smartmontools forums did find problems; for example with ZFS RAID set enlargements and with FreeNAS.

Alan Brown, a network manager at UCL Mullard Space Science Laboratory, who alerted us to the SMR issue, said: “These drives are not fit for purpose. In this case because they have a relatively provable and repeatable firmware bug which result in them throwing hard errors, but in more general purposes because SMR drives marketed as NAS/RAID drives have such appalling and variable throughput that they are unusable.”

“Even the people using Seagate SMR drives are reporting 10 second pauses in writes at times and those who had reasonable performance with SMR-from-start arrays have confirmed that resilvering a replacement drive in has turned out to be a major issue which they didn’t fully appreciate until they actually tried it.”

Western Digital statement

Shingled magnetic recording (SMR) is a hard drive technology that efficiently increases areal density and capacity for users managing increasing amounts of data, thus lowering users’ TCO. There are both device-managed and host-managed types, each for different use cases.

All our WD Red drives are designed to meet or exceed the performance requirements and specifications for common and intended small business/home NAS workloads. WD Red capacities 2TB-6TB currently employ device-managed shingled magnetic recording (DMSMR) to maximize areal density and capacity. WD Red 8-14TB drives use conventional magnetic recording (CMR). DMSMR should not be confused with host-managed SMR (HMSMR), which is designed for data center applications having respective workload requirements and host integration.

DMSMR is designed to manage intelligent data placement within the drive, rather than relying on the host, thus enabling a seamless integration for end users. The data intensity of typical small business/home NAS workloads is intermittent, leaving sufficient idle time for DMSMR drives to perform background data management tasks as needed and continue an optimal performance experience for users.

WD Red drives are designed and tested for an annualized workload rate up to 180TB. Western Digital has seen reports of WD Red use in workloads far exceeding our specs and recommendations. Should users’ use cases exceed intended workloads, we recommend WD Red Pro or Ultrastar data center drives.

Western Digital works extensively with customers and the NAS vendor and partner communities to continually optimize our technology and products for common use cases. In collaboration with major NAS providers, we work to ensure WD Red HDDs (and SSDs) at all capacities are compatible with a broad set of host systems. In our testing of WD Red drives, we have not found RAID rebuild issues due to DMSMR technology.

Our customers’ experience is important to us. We will continue listening to and collaborating with the broad customer and partner communities to innovate technologies that enable better experiences with, more efficient management of and faster decisions from data.

ScaleFlux adds hardware compression to computational storage

ScaleFlux has added hardware compression to its computational flash storage drive, effectively doubling capacity and increasing performance by 50 per cent.

JB Baker, senior director of product management for ScaleFlux, supplied a quote: “Experience gained from the global deployment of our previous drives have led us to significant enhancements in the CSD 2000. Customer feedback is showing that the simultaneous reduction in storage costs and improvements in application latency and performance … is a compelling value proposition.” 

Computational storage systems process data in the storage drive, thereby offloading the host server CPUs and improving overall performance.

The CSD 2000’s hardware engine has been updated with GZIP compression/decompression – which means no added latency. This doubles effective capacity – 4TB and 8TB raw capacity options increase to 8TB and 16TB. Application performance also improves.

  • Aerospike ACT 3.2 transactions per second (tps) increase 1.5x,
  • MySQL SysBench tps 1.5x,
  • PostgreSQL SysBench update_non_index 2.8x.

According to ScaleFlux, the CSD 2000 delivers 40 to 70 per cent more IOPS than NVMe SSDs on mixed read and write OLTP workloads. NVMe SSD performance typically drops off as the write proportion of any workload increases according to ScaleFlux, which claims the CSD 2000 maintains performance within a narrow band, regardless of the read and write mix.
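
As a back-of-the-envelope illustration of the effective-capacity arithmetic: assuming a 2:1 compression ratio (our assumption for this sketch; real ratios depend on the data), effective capacity is simply raw capacity multiplied by the ratio.

```python
# Effective capacity under inline compression: raw_capacity * compression_ratio.
# The 2:1 ratio here is an illustrative assumption; actual ratios depend on the data.
def effective_capacity_tb(raw_tb, compression_ratio=2.0):
    return raw_tb * compression_ratio

for raw in (4, 8):
    print(f"{raw}TB raw -> {effective_capacity_tb(raw):.0f}TB effective at 2:1")
# 4TB raw -> 8TB effective at 2:1
# 8TB raw -> 16TB effective at 2:1
```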

Alibaba has qualified ScaleFlux’s computational storage for use with the Chinese hyperscaler’s data centre infrastructure stack, specifically the POLARDB relational database.

ScaleFlux’s original CSS 1000 drive incorporates a Xilinx FPGA paired with 2TB to 8TB of 3D NAND flash. It uses off-the-shelf code packages to accelerate Aerospike, Apache HBase, Hadoop, MySQL, OpenZFS and Ceph.

The CSD 2000 comes in the 2.5-inch (U.2) form factor. A PCIe add-in card will be available in a few weeks.

Kaminario ports storage software to the public clouds

Kaminario has adapted its VisionOS storage software for AWS, Azure and Google Cloud Platform – and claims it offers cheaper storage and more services than the cloud vendors’ native offerings.

Kaminario is the first block access array vendor to port its storage array software to all three public clouds. The company said it provides a consistent storage facility covering on-premises all-flash array SANs and their equivalents on AWS, Azure and GCP.

Kaminario’s Flex container orchestration and information services run across these environments as well as its Clarity management and AIOps service.

CEO Dani Golan claimed in a press briefing this week that no other supplier has this level of private and public cloud orchestration. The service enables customers to avoid storage and storage service lock-in to any public cloud supplier, he said.

Kaminario signalled its hybrid multi-cloud intentions in December last year. At the time CTO Eyal David said: ”There needs to be a data plane which delivers a common set of shared services that enable companies to decouple the management and movement of data from the infrastructure it runs on.”

Flex and Clarity form that data plane.

Cost savings

Kaminario said it can provide a 30 per cent or greater cost-saving compared to the public cloud’s own block-access storage services. It suggests customers with 100TB storage or more in the public cloud could benefit from the service.

Derek Swanson, Kaminario field CTO, said VisionOS in the public cloud ‘thin-provisions’ storage – meaning you pay for what you use. In contrast, the cloud providers ‘thick-provision’ storage – i.e. you pay for what you allocate. Also snapshots in the public cloud are full copies whereas Kaminario snapshots are metadata-based and almost zero-space. This saves a huge amount of money compared to native public cloud snapshots, according to Swanson.
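
A rough sketch of the arithmetic behind that claim, using made-up prices and utilisation figures (neither Kaminario’s nor any cloud provider’s actual pricing):

```python
# Rough sketch of thin vs thick provisioning cost. The $/GB-month price and the
# utilisation figure are made-up assumptions for illustration, not real pricing.
PRICE_PER_GB_MONTH = 0.10   # assumed block storage price
allocated_gb = 100_000      # 100TB of allocated volumes
utilisation = 0.55          # assumed fraction of allocated space actually written

thick_cost = allocated_gb * PRICE_PER_GB_MONTH                 # pay for what you allocate
thin_cost = allocated_gb * utilisation * PRICE_PER_GB_MONTH    # pay for what you use

print(f"thick-provisioned: ${thick_cost:,.0f}/month")
print(f"thin-provisioned:  ${thin_cost:,.0f}/month "
      f"({(1 - thin_cost / thick_cost):.0%} saving at {utilisation:.0%} utilisation)")
```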

Storage performance in the public cloud typically rises with allocated capacity, he said. But Kaminario decouples storage from compute in the public cloud – so you could have high-performance and low-capacity Kaminario storage in the cloud.

The competition

Golan said Kaminario’s hybrid multi-cloud capability means it no longer competes for legacy SAN business with suppliers such as Dell EMC, NetApp or Pure Storage.

According to Swanson, Pure’s Cloud Block Store, with its active:passive controllers, is slower than Kaminario’s VisionOS in the public cloud and lacks data services. He also pointed out that Pure uses proprietary hardware for its on-premises arrays, which is not replicated in Cloud Block Store, again making it slower.

NetApp’s Cloud Volumes services were also limited compared to Kaminario’s offerings, Swanson argued. He said NetApp’s Cloud Volumes lacks active:active symmetric controllers, unlike Kaminario, and so is a slower performer than VisionOS.

Kaminario roadmap

Blocks & Files expects Kaminario to add support for tiering data off to public cloud archive services, such as Amazon’s Glacier, with an S3 interface. File-level access protocols might also be supported.

Swanson and Golan said other public clouds would be supported in the future.

Kaminario in brief

Kaminario was founded in 2008 and has taken in $218m in funding. The initial product was the scale-up and scale-out K2 all-flash array. The company separated itself from hardware manufacture in January 2018 with a deal for Tech Data to build certified appliance hardware.

Later that year it embraced Western Digital’s composable systems. The company began moving to a subscription-based business model in mid 2019 and now it is 100 per cent subscription-based and “cashflow-positive”, Golan said.

VAST Data scores $100m and transforms into a data storage unicorn

VAST Data has completed a $100m funding round during the Covid-19 pandemic, which values the all-flash array storage startup at $1.2bn.

The company will spend the new money on building sales teams and on research and development. This includes work on the next-generation product line, which is expected to launch in 2022 or 2023.

VAST Data publicly launched its first high-end array in February 2019. Deduplicated data is stored in QLC SSDs, referenced using metadata stored on Intel Optane drives, with NVMe-over-Fabrics access to the flash SSDs.

VAST sums of money

Renen Hallak

VAST claims its first-year sales were significantly higher than those of any other storage vendor in IT history, but it did not reveal numbers. Pure Storage reported $6m revenues in its first fiscal year – so that provides a base comparison. VAST’s average selling price is more than $1m.

VAST told us the sales momentum had prompted unsolicited funding approaches from new VCs. Due to covid-19 there were no face-to-face meetings with the investors, CEO Renen Hallak said. “It was all done through videoconferencing.”

VAST Data notes it has achieved $1bn unicorn status faster than any IT infrastructure startup to date, and has made a little graph to show this.

Total funding is now $180 million and the latest round includes cash from new investors Next47, Commonfund Capital and Mellanox Capital plus existing investors 83 North, Dell Technologies Capital, Goldman Sachs, Greenfield Partners and Norwest Venture Partners.

Hallak wants the world to know that the company is well-funded: “Considering that VAST has not even tapped into its $40m Series B financing, the company now has a $140m war chest to satisfy global customer demand for next-gen infrastructure, and to enable data driven applications through continued innovation.”

The pandemic has encouraged some customers, especially in the hedge fund and health sectors, to buy because they can converge other systems onto VAST and save money. They can also run and analyse more historic data than before, according to Hallak.

He anticipates VAST’s support of Intel Optane and container storage will fuel sales growth as both technologies are gaining traction.

File and object workloads

VAST Data rack

Hallak told Blocks & Files that the VAST array is used mostly by large enterprises for file and object workloads.

They like being able to store primary data on the array because of its speed, as well as secondary and tertiary data because of its cost-effectiveness.

This is valued by data-intensive customers such as hedge funds, which can run real-time analyses on more old data than with other arrays, according to Hallak.

He said the Dell EMC and NetApp scale-out file systems are typical competitors, adding that the company has also won AI deals against Dell.

VAST will make a major Universal Storage v3.0 software release in coming weeks. This may include support for SMB and S3, along with military grade encryption and cloud-based replication.

Data storage simplification

VAST Data claims that the data storage market has reached a tipping point and that simplified storage is the way forward. Certainly, the trend in the storage array business is for product line simplification.

For example, IBM has converged its midrange Storwize and FlashSystem lines into a single FlashSystem product. And Dell is preparing the imminent launch of MidRange.Next, which unifies the Unity, XtremIO and SC arrays.

Hitachi Vantara, like Pure Storage, has several hardware arrays running the same operating system.

Infinidat’s single-tier, high-end Infinibox system uses nearline disk drives for bulk data storage and DRAM caching for performance. Unlike VAST Data, the Infinibox is primarily used for block storage and the companies do not compete for business, Hallak told us.

NetApp focuses on AFF ONTAP but still sells E-Series and Solidfire all-flash arrays.

HPE has yet to simplify its array line-up, which features the XP8, Primera, 3PAR and Nimble products. Increasingly this seems like a matter of ‘when’, rather than ‘if’.

Toshiba desktop disk drives have shingles too

Toshiba told Blocks & Files yesterday that its P300 desktop disk drives use shingled magnetic recording technology (SMR), which can exhibit slow data write speeds. However, the company does not mention this in end user drive documentation.

All three disk drive manufacturers – Western Digital, Seagate and Toshiba – have now confirmed to Blocks & Files the undocumented use of SMR technology in desktop HDDs and, in WD’s case, WD Red consumer NAS drives. SMR has enabled the companies to reach higher capacity points than otherwise possible. But this has frustrated many users, who have speculated why their new drives are not working properly in their NAS set-ups.

According to the Geizhals price comparison website, Toshiba’s P300 desktop 4TB (HDWD240UZSVA) and 6TB (HDWD260UZSVA) SATA disk drives use SMR.

Toshiba P300 SMR results from Geizhals search

A P300 datasheet does not mention SMR. It states: “Toshiba’s 3.5-inch P300 Desktop PC Hard Drive delivers a high performance for professionals.” However, SMR drives can deliver a slow rewrite speed.

Blocks & Files asked Toshiba to confirm that the 4TB and 6TB P300 desktop drives use SMR and to clarify which drives in its portfolio use SMR.

A company spokesperson told us: “The Toshiba P300 Desktop PC Hard Drive Series includes the P300 4TB and 6TB, which utilise the drive-managed SMR (the base drives are DT02 generation 3.5-inch SMR desktop HDD).

“Models based on our MQ04 2.5-inch mobile generation all utilise drive-managed SMR, and include the L200 Laptop PC Hard Drive Series, 9.5mm 2TB and 7mm 1TB branded models.

“Models based on our DT02 3.5-inch desktop generation all utilise drive managed SMR, and include the 4/6TB models of the P300 Series branded consumer drives.”

The company also told us which other desktop drives did and did not use SMR:

  • MD07ACA – 7,200rpm – 12TB, 14TB CMR (base for X300 Performance Hard Drive Series branded models)
  • MD04 – 7,200rpm – 2, 4, 5, 6TB CMR (base for X300 Performance Hard Drive Series branded models)
  • DT02 – 5,400rpm – 4, 6TB SMR (base for P300 Desktop PC Hard Drive Series 4TB and 6TB branded models)
  • DT01 – 7,200rpm – 500GB, 1, 2, 3TB CMR (base for P300 Desktop PC Hard Drive Series 1/2/3TB branded models)

Why SMR is sub-optimal for write-intensive workloads

Shingled magnetic recording gets more data onto disk platters by partially overlapping write tracks, leaving the read track within them clear. Read IO speed is unaffected, but data rewrites require blocks of tracks to be read, edited with the new data, and rewritten as a new block. This lengthens data rewrite time substantially compared with conventionally recorded drives.

Write-intensive workloads are worse affected by SMR delays than read-intensive workloads. SMR drives are therefore typically used for archival-type applications and not for real-time mixed or write-intensive use cases.

Caching writes to a non-shingled zone of the drive and writing them out to the shingled sectors in idle time will hide the slow rewrite speed effectively – until the cache fills while rewrite IO requests are still coming in.

The cache is then flushed and all the data written to the shingled area of the drive, causing a pause of potentially many seconds while this is done.
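
The read-modify-write penalty can be made concrete with a toy model. The zone and sector sizes below are our own simplifying assumptions, and real drive-managed firmware is far more sophisticated, but the write amplification it illustrates is the root of the slow rewrites described above.

```python
# Toy model of the rewrite penalty on a shingled zone. Sizes are simplifying
# assumptions for illustration only, not any vendor's firmware behaviour.
SECTOR = 4096            # bytes
ZONE_SECTORS = 65536     # assume a 256MB shingled zone

def cmr_rewrite(sector_index, data):
    """CMR: overwrite one sector in place; cost is one sector write."""
    return 1

def smr_rewrite(zone, sector_index, data):
    """DM-SMR: read the whole zone, modify it in memory, rewrite it sequentially."""
    zone[sector_index] = data          # modify the in-memory copy of the zone
    return len(zone)                   # cost: every sector in the zone is rewritten

zone = [b"\0" * SECTOR] * ZONE_SECTORS
print("CMR sectors written:", cmr_rewrite(10, b"x" * SECTOR))          # 1
print("SMR sectors written:", smr_rewrite(zone, 10, b"x" * SECTOR))    # 65536
```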

Cohesity pumps out ransomware alerts 24×7 via Helios smartphone app

Cohesity has developed a smartphone app for its Helios management software, to better enable admin staff to respond to external threats in real time.

Cohesity smartphone Helios alert

For instance, the manager of a Cohesity infrastructure can now receive ransomware alerts 24×7 on their Apple or Android phone. They respond to the alert by logging in to Helios via a web browser to manage their Cohesity installation.

The manager could initiate system changes, stage a recovery or get to the workload where an anomaly had been spotted faster than simply relying on regular checks, Cohesity claims.

Vineet Abraham, Cohesity SVP of products and engineering, said in a statement: “Cybercriminals don’t just work during office hours and having a way to monitor the health of your data clusters from a mobile device 24 hours a day, seven days a week, and receive notifications  that could uncover a ransomware attack in action, could save your organisation millions of dollars and keep brand reputations intact.” 

Data change rates

Using Cohesity software, an organisation can converge all its secondary data into a single storage vault covering on-premises and public cloud environments. Data is backed up and protected with immutable snapshots, and the public cloud can be used as a file tier and to store backup data.

Helios can detect a ransomware attack by tracking data change rates and recognising any larger than normal daily change rates. Such a change could be the result of ransomware data encryption activity, general malware or people maliciously trying to modify data in the production IT environment. 

Helios also matches data stored and storage utilisation against historic patterns as well as data change rates.

This data state tracking uses machine learning models. Helios sends alerts to the Cohesity manager’s phone and also to the customer’s IT security facility so that any attack activity can be stopped and systems disinfected.
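
Cohesity has not published the details of its models, but the underlying idea – flag a day whose change rate sits far above its recent history – can be sketched with a simple z-score test. The threshold and sample figures below are illustrative assumptions, not Cohesity’s algorithm.

```python
# Minimal sketch of change-rate anomaly detection: flag a day whose data change
# rate sits far above its recent history. A plain z-score stands in for the
# machine-learning models mentioned above; this is not Cohesity's algorithm.
from statistics import mean, stdev

def is_anomalous(history_gb_per_day, today_gb, threshold=3.0):
    mu, sigma = mean(history_gb_per_day), stdev(history_gb_per_day)
    if sigma == 0:
        return today_gb > mu
    return (today_gb - mu) / sigma > threshold

history = [42, 38, 45, 40, 44, 39, 41]   # typical daily change, in GB
print(is_anomalous(history, 43))         # False: normal daily churn
print(is_anomalous(history, 900))        # True: possible mass encryption event
```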

Cohesity’s Helios mobile app also provides:

  • Support case status tracking across a customer’s entire Cohesity estate
  • Cohesity installation health
  • Protection status of virtual machines, databases, and applications
  • Storage utilisation and performance for backup, recovery, file services and object storage.

Cohesity last week announced the completion of a $250m capital raise which valued the company at $2.5bn.

Seagate ‘submarines’ SMR into 3 Barracuda drives and a Desktop HDD

Some Seagate Barracuda Compute and Desktop disk drives use shingled magnetic recording (SMR) technology which can exhibit slow data write speeds. But Seagate documentation does not spell this out.

Yesterday we reported Western Digital has submarined SMR drives into certain WD Red NAS drives. The company acknowledged this when we asked but it has not documented the use of SMR in the WD Red drives. This has left many users frustrated and speculating for the reason why the new drives are not working properly in their NAS set-ups. Since this article was first published Toshiba has also confirmed the undocumented use of SMR in some desktop hard drives.

Geizhals, a German-language price comparison website, lists seven Seagate SMR drives:

  • Barracuda 2TB – 7,200rpm – SATA 6Gbit/s – model number ST2000DM008
  • Barracuda 4TB – 5,400rpm – SATA 6Gbit/s – ST4000DM004
  • Barracuda 8TB – 5,400rpm – SATA 6Gbit/s – ST8000DM004
  • Desktop HDD 5TB – 5,900rpm – SATA 6Gbit/s – ST5000DM000
  • Exos 8TB – 5,900rpm – SATA 6Gbit/s – ST8000AS0003
  • Archive v2 6TB – 5,900rpm – SATA 6Gbit/s – ST6000AS0002
  • Archive v2 8TB – 5,900rpm – SATA 6Gbit/s – ST8000AS0002

Public Seagate documentation for these Barracudas and the Desktop HDD does not mention SMR.

The Archive drives are for archiving and Exos drives are optimised for maximum storage capacity and the highest rack-space efficiency. Seagate documentation for the Exos and Archive HDDs explicitly spells out that they use SMR.

Seagate markets the Barracuda Compute drives as fast and dependable. Yet it is the nature of SMR drives that data rewrites can be slow.

When we asked Seagate about the Barracudas and the Desktop HDD using SMR technology, a spokesperson told us: “I confirm all four products listed use SMR technology.”

In a follow-up question, we asked why this information is not made explicit in Seagate’s brochures, data sheets and product manuals – as it is for the Exos and Archive disk drives.

Seagate’s spokesperson said: “We provide technical information consistent with the positioning and intended workload for each drive.”

Update

Seagate issued this statement on April 21: “Seagate confirms that we do not utilize Shingled Magnetic Recording technology (SMR) in any IronWolf or IronWolf Pro drives – our NAS solutions family. Seagate does not market the BarraCuda family of products as being suitable for NAS applications, and does not recommend using BarraCuda drives for NAS applications. Seagate always recommends to use the right drive for the right application.”

Why SMR drives are sub-optimal for write-intensive workloads

Shingled magnetic recording gets more data onto disk platters by partially overlapping write tracks, leaving the read track within them clear. Read IO speed is unaffected, but data rewrites require blocks of tracks to be read, edited with the new data, and rewritten as a new block. This lengthens data rewrite time substantially compared with conventionally recorded drives.

Write-intensive workloads are worse affected by SMR delays than read-intensive workloads. SMR drives are therefore typically used for archival-type applications and not for real-time mixed or write-intensive use cases.

Caching writes to a non-shingled zone of the drive and writing them out to the shingled sectors in idle time will hide the slow rewrite speed effectively – until the cache fills while rewrite IO requests are still coming in.

The cache is then flushed and all the data written to the shingled area of the drive, causing a pause of potentially many seconds while this is done.

Shingled hard drives have non-shingled zones for caching writes

The revelation that some WD Red NAS drives are shipped with DM-SMR (Drive-Managed Shingled Magnetic Recording) prompted us to ask more detailed questions to the Blocks & Files reader who alerted us to the issue.

Alan Brown, a British university network manager, has a high degree of SMR smarts and we are publishing our interview with him to offer some pointers to home and small business NAS disk drive array builders.

Blocks & Files: Can you contrast CMR (Conventional Magnetic Recording) and SMR write processes?

Alan Brown: When you write to a CMR disk, it just writes to sectors. Some drives will try to optimise the order the sectors are written in, but they all write data to the exact sector you tell them to write it to.

When you write to a SMR disk it’s a bit like writing to SSD – no matter what sector you might THINK you’re writing to, the drive writes the data where it wants to, then makes an index pointing to it (indirect tables).

Blocks & Files shingling diagram

What’s below is for Drive-managed SMR drives.  Some of this is conjecture, I’ve been trying to piece it together from the literature.

Essentially, unlike conventional drives, a SMR drive puts a lot of logic and distance between the interface and the actual platter. It’s far more like a SSD in many ways (only much much slower).

SMR disks have multiple levels of caching – DRAM, then some CMR zones and finally shingled zones

In general, writes are to the CMR space and when the disk is idle the drive will rearrange itself in the background – tossing the CMR data onto shingled areas – there might be 10-200 shingled “zones”. They’re all “open”(appendable) like SSD blocks are. If a sector within a zone needs changing, the entire zone must be rewritten (in the same way as SSD blocks) and zones can be marked discarded (trimmed) in the same way SSD blocks are.

Blocks & Files: What happens if the CMR zone becomes full?

Alan Brown: When the CMR zone fills the drive may (or may not) start appending to a SMR zone – and in doing so it slows down dramatically.

If the drive stops to flush out the CMR zone, then the OS is going to see an almighty pause (ZFS reports dozens of delays exceeding 60 seconds – the limit it measures for – and I measured one pause at 3 minutes). This alone is going to upset a lot of RAID controllers/software. [A] WD40EFAX drive which I zero-filled averaged 40MB/sec end to end but started at 120MB/sec. (I didn’t watch the entire fill so I don’t know if it slowed down or paused).
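
To put rough numbers on that behaviour, here is a toy simulation of a sustained write against a drive-managed SMR disk with a fast CMR cache zone. The cache size and speeds are made-up assumptions loosely based on the figures Brown quotes, not measurements of any specific drive.

```python
# Toy simulation of a DM-SMR drive under sustained writes: fast while the CMR
# cache zone has room, then a long slowdown while it destages to shingled zones.
# All sizes and speeds are made-up assumptions for illustration.
CMR_CACHE_MB = 20_000    # assume roughly 20GB of CMR cache space
FAST_MB_S = 120          # write speed while the cache has room
DESTAGE_MB_S = 25        # effective speed once writes spill to shingled zones

def simulate(total_write_mb):
    """Return seconds to absorb a sustained write of total_write_mb megabytes."""
    fast_mb = min(total_write_mb, CMR_CACHE_MB)
    slow_mb = total_write_mb - fast_mb
    return fast_mb / FAST_MB_S + slow_mb / DESTAGE_MB_S

for gb in (10, 50, 200):
    print(f"{gb}GB sustained write -> {simulate(gb * 1000) / 60:.1f} minutes")
```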

Blocks & Files: Does resilvering (RAID array data rebalancing onto new drive in group) involve random IO?

Alan Brown: In the case of ZFS, resilvering isn’t a block level “end to end” scan/refill, but jumps all over the drive as every file’s parity is rebuilt. This seems to trigger a further problem on the WD40EFAXs where a query to check a sector that hasn’t been written to yet causes the drive to internally log a “Sector ID not found (IDNF)” error and throws a hard IO error from the interface to the host system.

RAID controllers (hardware or software, RAID5/6 or ZFS ) will quite sensibly decide the drive is bad after a few of these and kick the thing out of the array if it hasn’t already done so on the basis of a timeout.

[Things] seem to point to the CMR space being a few tens of GB, up to 100GB, on SMR drives. So, as long as people don’t continually write shitloads, they won’t really see the issue, which means for most people “resilvering is the first time you’ll notice something break.”

(It certainly matches what I noticed – which is that resilvering would run at about 100MB/s for about 40 minutes then the drives would “DIE” and repeatedly die if I attempted to restart the resilvering, however if I left it an hour or so, they’d run for 40 minutes again before playing up.)

Blocks & Files: What happens with virgin RAID arrays?

Alan Brown: When you build a virgin RAID array using SMR, it’s not getting all those writes at once. There are a lot of people claiming “I have SMR raid arrays, they work fine for me”.

Rather tellingly … so far none of them have come back to me when I’ve asked: “What happens if you remove a drive from the RAID set, erase it and then add it back in so it gets resilvered?”

Blocks & Files: Is this a common SMR issue?

Alan Brown: Because I don’t have any Seagate SMR drives, I can’t test the hypothesis that the IDNF issue is a WD firmware bug rather than a generic SMR issue. But throwing an error like that isn’t the kind of thing I’d associate with SMR as such – I’d simply expect throughput to turn to shit.

It’s more likely that WD simply never tested adding drives back to existing RAID sets, or what happens if an SMR drive is added to a CMR RAID set after a disk failure – something that’s bound to happen when they’re hiding the underlying technology – shipped a half-arsed firmware implementation and blamed users when they complained (there are multiple complaints about this behaviour. Everyone involved assumed they had “a bad drive”).

To make matters worse, the firmware update software for WD drives is only available for Windows and doesn’t detect these drives anyway.

The really annoying part is that the SMR drive was only a couple of pounds cheaper than the CMR drive, but, when I purchased these drives, the CMR drive wasn’t in stock anyway.

I just grabbed 3 WD Reds to replace 3 WD Reds in my home NAS (as you do…), noticed the drives had larger cache, compared the spec sheet, couldn’t see anything different (if you look at the EFRX vs EFAX specs on WD’s website you’ll see what I mean) and assumed it was just a normal incremental change in spec.

Blocks & Files: What about desktop drive SMR?

Alan Brown: The issue of SMR on desktop drives is another problem – I hadn’t even realised that this was happening, but we HAD noticed extremely high failure rates on recent 2-3TB drives we put in desktops for science work (scratchpad and NFS-cache drives). Once I realised that was going on with the Reds, I went back and checked model numbers on the failed drives. Sure enough, there’s a high preponderance of drive managed-SMR units and it also explains mysterious “hangs” in internal network transfers that we’ve been unable to account for up until now.

I raised the drives issue with iXsystems – our existing TrueNAS system is approaching EOL, so I need to replace it and had a horrible “oh shit, what if the system we’ve specified has hidden SMR in it?” sinking feeling. TrueNAS are _really_ good boxes. We’ve had some AWFUL NAS implementations in the past and the slightest hint of shitty performance would be politically dynamite on a number of levels so I needed to ensure we headed off any trouble at the pass.

It turns out iXsystems were unaware of the SMR issue in Reds – and they recommend/use them in the SOHO NASes. They also know how bad SMR drives can be (their stance is “SMR == STAY AWAY”) and my flagging this raised a lot of alarms.

Blocks & Files: Did you eventually solve the resilvering problem?

Alan Brown: I _was_ able to force the WD40EFAX to resilver – by switching off write caching and lookahead. This dropped the drive’s write speed to less than 6MB/sec and the resilvering took 8 days instead of the more usual 24 hours. More worryingly, once added, a ZFS SCRUB (RAID integrity check) has yet to successfully complete without that drive producing checksum errors, even after 5 days of trying.

I could afford to try that test because RAIDZ3 gives me 3 parity stripes, but it’s clear the drive is going to have to come out and the 3 WD Reds returned as unfit for the purpose for which they are marketed.

Western Digital admits 2TB-6TB WD Red NAS drives use shingled magnetic recording

Some users are experiencing problems adding the latest WD Red NAS drives to RAID arrays and suspect it is because they are actually shingled magnetic recording drives submarined into the channel.

Alan Brown, a network manager at UCL Mullard Space Science Laboratory, the UK’s largest university-based space research group, told us about his problems adding a new WD Red NAS drive to a RAID array at his home. Although it was sold as a RAID drive, the device “keep[s] getting kicked out of RAID arrays due to errors during resilvering,” he said.

Resilvering is a term for adding a fresh disk drive to an existing RAID array which then rebalances its data and metadata across the now larger RAID group.

Brown said: “It’s been a hot-button issue in the datahoarder Reddit for over a year. People are getting pretty peeved by it because SMR drives have ROTTEN performance for random write usage.”

SMR drives

Shingled magnetic recording (SMR) disk drives take advantage of disk write tracks being wider than read tracks to partially overlap write tracks, enabling more tracks to be written to a disk platter. This means more data can be stored on a shingled disk than on a conventional drive.

However, SMR drives are not intended for random write IO use cases because the write performance is much slower than with a non-SMR drive. Therefore they are not recommended for NAS use cases featuring significant random write workloads.

Smartmontools ticket

Brown noted: “There’s a smartmontools ticket in for this [issue] – with the official response from WDC in it – where they claim not to be shipping SMR drives despite it being trivial to prove otherwise.”

That ticket’s thread includes this note:

“WD and Seagate are _both_ shipping drive-managed SMR (DM-SMR) drives which don’t report themselves as SMR when questioned via conventional means. What’s worse, they’re shipping DM-SMR drives as “RAID” and “NAS” drives. This is causing MAJOR problems – such as the latest iteration of WD REDs (WDx0EFAX replacing WDx0EFRX) being unable to be used for rebuilding RAID[56] or ZFS RAIDZ sets: They rebuild for a while (1-2 hours), then throw errors and get kicked out of the set.”

(Since this article was published Seagate and Toshiba have also confirmed the undocumented use of shingled magnetic recording in some of their drives.)

The smartmontools ticket thread includes a March 30, 2020, mail from Yemi Elegunde, Western Digital UK enterprise and channel sales manager:

“Just a quick note. The only SMR drive that Western Digital will have in production is our 20TB hard enterprise hard drives and even these will not be rolled out into the channel. All of our current range of hard drives are based on CMR Conventional Magnetic Recording. [Blocks & Files emboldening.] With SMR Western Digital would make it very clear as that format of hard drive requires a lot of technological tweaks in customer systems.”

WD’s website says this about the WD Red 2TB to 12TB 6Gbit/s SATA disk drives: “With drives up to 14TB, the WD Red series offers a wide array of solutions for customers looking to build a high performing NAS storage solution. WD Red drives are built for up to 8-bay NAS systems.” The drives are suitable for RAID configurations.

Synology WD SMR issue

There is a similar problem mentioned on a Synology forum, where a user added a 6TB WD Red [WD60EFAX] drive to a RAID setup using three WD Red 6TB drives [WD60EFRX] in SHR1 mode. He added the fourth drive to convert to SHR2 but the conversion took two days and did not complete.

The hardware compatibility section on Synology’s website says the drive is an SMR drive.

The Synology forum poster said he called WD support to ask if the drive was an SMR or conventionally recorded drive: “Western Digital support has gotten back to me. They have advised me that they are not providing that information so they are unable to tell me if the drive is SMR or PMR. LOL. He said that my question would have to be escalated to a higher team to see if they can obtain that info for me. lol”

Also: “Well the higher team contacted me back and informed me that the information I requested about whether or not the WD60EFAX was a SMR or PMR would not be provided to me. They said that information is not disclosed to consumers. LOL. WOW.“

Price comparison

A search on Geizhals, a German-language price comparison site, shows various disk drives using shingled magnetic recording. It includes, for example, a listing of WD Red SATA HDDs that use SMR technology.

However, a WD Red datasheet does not mention SMR recording technology.

WD comment

We brought all these points to Western Digital’s attention and a spokesperson told us:

“All our WD Red drives are designed to meet or exceed the performance requirements and specifications for common small business/home NAS workloads. We work closely with major NAS providers to ensure WD Red HDDs (and SSDs) at all capacities have broad compatibility with host systems.

“Currently, Western Digital’s WD Red 2TB-6TB drives are device-managed SMR (DMSMR). WD Red 8TB-14TB drives are CMR-based.

“The information you shared from [Geizhals] appears to be inaccurate.

“You are correct that we do not specify recording technology in our WD Red HDD documentation.

“We strive to make the experience for our NAS customers seamless, and recording technology typically does not impact small business/home NAS-based use cases. In device-managed SMR HDDs, the drive does its internal data management during idle times. In a typical small business/home NAS environment, workloads tend to be bursty in nature, leaving sufficient idle time for garbage collection and other maintenance operations.

“In our testing of WD Red drives, we have not found RAID rebuild issues due to SMR technology.

“We would be happy to work with customers on experiences they may have, but would need further, detailed information for each individual situation.”

Comment

Contrary to what WD channel staff have said, the company is shipping WD Red drives using SMR technology. (Since publication of this article, Western Digital has published a statement about SMR use in 2TB-6TB WD Red NAS drives.)

WD told us: “In a typical small business/home NAS environment, workloads tend to be bursty in nature, leaving sufficient idle time for garbage collection and other maintenance operations.”

Not all such environments are typical and there may well not be “sufficient idle time for garbage collection and other maintenance operations”.

We recommend that posters on the Synology forum, the datahoarder Reddit and the smartmontools websites get back in touch with their WD contacts, apprise them of the information above, and let them know that WD is “happy to work with customers on experiences they may have”.

Nearline hard drive shipments carry on growing

Disk shipments in 2020’s first quarter show nearline drives taking up almost half of all HDD units and the majority of capacity and revenue. Shipments for all other types of hard disk drives declined in the quarter, according to the specialist market research firm Trendfocus.

This confirms the general trend for increased nearline drive unit sales, taking up more capacity and revenue share of the HDD market. Hyperscaler buyers such as the cloud service providers represent the main proportion of nearline disk buyers.

Trendfocus estimates somewhere between 66.5 million and 68.1 million disk drives shipped in the first quarter – 67.3 million at the mid-point, which is a 13 per cent decline year on year. Seagate had 42 per cent unit share, Western Digital took 37 per cent and Toshiba 21 per cent.

The 3.5-inch high-capacity nearline category totalled 15.7 million drives, up 43 per cent. This represents 23.3 per cent of all disk drive shipments.

Nearline drives accounted for about 150EB of capacity, up 65 per cent compared with +43 per cent and +89 per cent in Q3 2019 and Q4 2019. According to Wells Fargo analyst Aaron Rakers, nearline drives could account for 55 per cent of total Q1 2020 HDD industry revenue, up two per cent on 2019.

Some 24 million 2.5-inch mobile drives were shipped in Q1, which is down more than 45 per cent y/y, according to Rakers. He thinks some of the decline reflects the “supply chain disruption from covid-19”. Seagate and Western Digital each had about 35 per cent unit share and Toshiba shipped 30 per cent.

About 23 million 3.5-inch desktop drives shipped in the quarter, down 20 per cent on the previous quarter, Rakers noted, citing “seasonal declines, as well as reduced mid-quarter PC and surveillance production due to covid-19”. 

The last category is for 2.5-inch enterprise or mission-critical drives and just 4.2 million were shipped, down 12 per cent on the previous quarter. Rakers said: “Shipments continue to face increased SSD replacement.”


YMTC stakes claim for top table with 128-layer 1.33Tb QLC 3D NAND

YMTC Xtacking

China’s Yangtze Memory Technology Corporation (YMTC) has begun sampling what it claims is the world’s highest density and fastest bandwidth NAND flash memory.

YMTC has developed the X2-6070 chip with a 1.333Tb capacity and 1.6Gbit/s IO speed using 128-layer 3D NAND with QLC (4 bits per cell) format. The chipmaker has also launched the X2-9060 chip with 512Gbit capacity and a TLC (triple-level cell) format from its 128 layers.

YMTC 128-layer QLC NAND chips

Grace Gong, YMTC’s SVP of marketing and sales, said the company will target the new QLC product at consumer grade solid-state drives initially and then extend the range into enterprise-class servers and data centres.

QLC is coming

Gregory Wong, principal analyst of Forward Insights, said in the YMTC press release: “As client SSDs transition to 512GB and above, the vast majority will be QLC-based. The lower read latency of enterprise and datacenter QLC SSDs compared to hard drives will make it suitable for read-intensive applications in AI, machine learning and real-time analytics, and Big Data. In consumer storage, QLC will become prevalent in USB flash drives, flash memory cards, and external SSDs.”

YMTC said the X2-6070 has passed sample verification on SSD platforms through working with multiple controller partners. We can expect several QLC and TLC SSDs using the X2-6070 and X2-9060 chips to launch in China, and possibly in other countries, between now and mid-2021.

With this launch, YMTC, a self-acknowledged newcomer to the flash memory industry, has joined the mainstream NAND vendors in developing 100+ layer technology.

The net result of their efforts is that SSDs using 100+ layer 3D NAND in QLC format should ship from multiple sources in 2021.

A table shows how the vendors compare. 

Xtacking

YMTC’s new chips use a second generation of the company’s proprietary Xtacking technology. This separates the manufacture of the NAND chips and controller chips, with each on their own wafers. The chips are attached electrically through billions of metal VIAs (Vertical Interconnect Accesses). The VIAs are formed across each NAND and controller wafer in one process step and the chips are then cut out. NAND chips and controller chips are mated so that the VIAs line up (shown by the red pillars in the diagram below). 

Xtacking diagram with peripheral controller logic die placed above separate NAND die and the two electrically connected with VIAs (green pillars).

There are two benefits to this design, according to YMTC. First, the chip area is reduced by 20 to 30 per cent because the peripheral controller logic is not placed on one side of the chip, as is otherwise the case. Secondly, the controller logic can be developed in parallel with the NAND die and this saves three months of development time, according to YMTC.

YMTC’s current NAND chip has 64 layers and we understand the 128-layer chip is in effect two 64-layer stacks, one above the other, in a string-stacking scheme. The 64-layer chip also uses Xtacking to put the controller logic above the NAND cells; the 128-layer chip uses a second generation of the Xtacking technology.

Layer cake

As well as YMTC, other suppliers place the NAND die’s peripheral circuitry under or over the 3D NAND cells. For example, Samsung’s Cell over Peripheral architecture arranges the bit-storing cells on top of the CMOS logic needed to operate the die. Intel and Micron have similar CMOS-under-Array designs.

SK hynix is preparing the 16TB 128-layer TLC PE8111 SSD in the EDSFF 1U long ruler format, with sampling due in the second half of the year. A 32TB product will follow, but we don’t have a sampling date. Both drives use a 1Tbit die.

In January Western Digital and Kioxia announced 112-layer 3D NAND, with early shipping dates pegged at Q4 2020. BiCS5 technology will be used to build TLC and QLC NAND.

In December last year Micron said it was developing 128-layer chips with replacement gate (RG) technology. Gen 1 RG technology will start production in the second half of 2020, with slightly lower-cost chips. A gen 2 chip that brings in more layers, lower costs and a strong focus on QLC will appear in FY2021.

Last September Intel said it will move from 96-layer to 144-layer 3D NAND and will ship 144-layer SSDs in 2020. This is likely to be a pair of 72-layer string stacks.