
Ignore drive write messaging… SSDs are more reliable than HDDs, Period.

This blog post mostly pertains to SSDs in servers and storage arrays.

All NAND relies on electrical charges in silicon. Writing data involves a Program Erase (P/E) cycle, also known as a write cycle. And there are write cycle limits as to how many times a NAND cell can hold a charge:

  • SLC (1 bit/cell) – 100,000 write cycles
  • MLC (2 bits/cell) – 10,000 write cycles
  • TLC (3 bits/cell) – 3,000 write cycles
  • QLC (4 bits/cell) – 1,000 write cycles

This is a race to zero. Why? Because more bits per cell equals a lower cost/bit. QLC NAND is cheap compared to SLC or MLC NAND.

Use cheap NAND for cat videos or for USB thumb drives? Okay, fine.

But using cheap NAND for servers and storage arrays is a profoundly bad idea: high write workloads are a direct threat to SSD reliability, so the cheapest possible flash is exactly the wrong choice there.

I am not a fan of the SSD marketing teams who repeatedly led with the message “the SSD I want you to buy has 3,000 P/E cycles of write endurance… it will surely fail”. Yuck – when that message could have been “the SSDs I want you to buy are far more reliable than hard disk drives, which fail at a rate of about one per cent per year.”

SSDs are more reliable than hard drives, period.

But we still have to deal with the reality of NAND write endurance. We should rightfully focus on the device – the SSD – and not the NAND itself.

Server and storage SSDs added over-provisioning, with spare cells held in reserve to take over when working cells reach their wear limit. SSDs were then rated in Drive Writes Per Day (DWPD), and later in TBW (Terabytes Written), so that the wear rate can be tracked.

It is obvious, but bears repeating: a 4TB SSD has twice the TBW rating of a 2TB SSD. Buy bigger SSDs for write-heavy workloads (duh).
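The arithmetic connecting daily write ratings, capacity and total endurance is simple enough to sketch; the five-year warranty and 1 DWPD rating below are illustrative assumptions, not any vendor's spec:

```python
# Endurance bookkeeping for an SSD rating (illustrative numbers).
# Total terabytes written over the drive's life relates to DWPD via
# capacity and warranty period:
#   total_TB = DWPD * capacity_TB * 365 * warranty_years

def total_writes_tb(dwpd: float, capacity_tb: float,
                    warranty_years: float = 5) -> float:
    """Total terabytes the drive is rated to absorb over its warranty."""
    return dwpd * capacity_tb * 365 * warranty_years

# Same 1 DWPD rating: the 4TB drive absorbs twice the writes of the 2TB one.
print(total_writes_tb(1, 2))  # 3650.0 TB
print(total_writes_tb(1, 4))  # 7300.0 TB
```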

Our humble advice… Use SSDs. SSDs are more reliable than HDDs, period. And when the workload is write-heavy, simply buy bigger SSDs: spreading the same writes across more NAND means each cell wears more slowly.

The endurance problem can also be attacked another way: compression writes less data to the SSD, thereby preserving its available write endurance (P/E cycles). Today’s familiar tools are GZIP and its kin; these are antiquated and inefficient, so expect to be less than delighted. Keep an eye out for a new generation of “cloud-native” compression storage services and software.
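A quick sketch of why compression preserves endurance – zlib stands in here for whatever codec a storage layer would actually use, and real-world ratios depend entirely on the data:

```python
# Compression before write preserves P/E cycles: fewer bytes hit the NAND.
import zlib

# Highly repetitive data (log lines) compresses extremely well.
payload = b"timestamp=2019-10-01 level=INFO msg=request served\n" * 1000
compressed = zlib.compress(payload, level=6)

ratio = len(payload) / len(compressed)
print(f"{len(payload)} bytes -> {len(compressed)} bytes "
      f"({ratio:.0f}x less written to flash)")
```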

Note: Consultant and patent holder Hubbert Smith (LinkedIn) has held senior product and marketing roles with Toshiba Memory America, Samsung Semiconductor, NetApp, Western Digital and Intel. He is a published author and a past board member and workgroup chair of the Storage Networking Industry Association and has had a significant role in changing the storage industry with data centre SSDs and enterprise SATA disk drives.

FileCloud plugs data leaks with Smart DLP

FileCloud has added data loss prevention via the release of Smart DLP.

FileCloud is an enterprise file synchronisation and sharing (EFSS) supplier whose software runs on-premises or in the public cloud (AWS and Azure). It is an alternative to ownCloud, Box, Dropbox and Egnyte, and can be run as a self-hosted cloud or hybrid private/public cloud storage.

The intention of Smart DLP is to prevent data leaks and safeguard enterprise content. The software allows or bars file-level actions based on applied, pre-set rules that are checked in real time. In effect the access rules form a layer of custom metadata about the files managed by FileCloud, with Smart DLP as the gatekeeper.

We think the idea of using a repository’s file metadata as the basis for data loss prevention is a simple, smart and – with hindsight – obvious starting point. The concept has a lot of merit, subject to the admin overhead of setting up and applying the rules.

Governance

Smart DLP is claimed to simplify compliance with GDPR, HIPAA, ITAR and CCPA data access regulations by identifying and classifying data. The product supports external security information and event management (SIEM) integration and SMS two-factor authentication.

Smart DLP rule management screen. Applying rules like this to thousands or tens of thousands of files will need some kind of process.

Enterprise data leak prevention features include the automation of prevention policies, monitoring data movement and usage, and detecting security threats. An admin person is required to define and apply the rules to files in the FileCloud repository.

Smart DLP controls user actions, such as the ability to login, download and share files, based on IP range, user type, user group, email domain, folder path, document metadata and user access agents; web browsers and operating systems. It will allow or deny selected user actions and log rule violation reports for future auditing.
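The gatekeeping described above can be sketched as a rule table checked per request; the rule schema and conditions below are hypothetical illustrations, not FileCloud's actual rule language:

```python
# Sketch of metadata-driven DLP gating in the style the article describes.
# The rule structure is hypothetical -- FileCloud's real rule language differs.
from ipaddress import ip_address, ip_network

RULES = [
    # (verdict, condition) -- first matching deny wins, otherwise allow.
    ("deny", lambda req: req["action"] == "download"
                         and ip_address(req["ip"]) not in ip_network("10.0.0.0/8")),
    ("deny", lambda req: req["action"] == "share"
                         and not req["user_email"].endswith("@example.com")),
]

def evaluate(request: dict) -> str:
    for verdict, condition in RULES:
        if condition(request):
            return verdict          # a real product would log this for auditing
    return "allow"

print(evaluate({"action": "download", "ip": "203.0.113.9",
                "user_email": "alice@example.com"}))   # deny: off-network IP
print(evaluate({"action": "download", "ip": "10.1.2.3",
                "user_email": "alice@example.com"}))   # allow
```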

A download stopped by Smart DLP.

The product will also find personally identifiable information (PII), protected health information (PHI), payment card information (PCI) and other sensitive content across user databases and team folders. Users can deploy built-in search patterns to identify PII and create custom search patterns and metadata sets for vertical business content.
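Pattern-based PII discovery boils down to this kind of matching; the regexes below are simplified illustrations, not FileCloud's shipped search patterns:

```python
# Built-in search patterns for sensitive content are essentially named
# regexes run over file text. These are deliberately simplified examples.
import re

PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US Social Security
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # loose PCI match
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str) -> dict:
    """Return only the pattern names that matched, with their hits."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items()
            if pat.findall(text)}

hits = scan("Contact jo@example.com, SSN 123-45-6789 on file.")
print(hits)  # {'ssn': ['123-45-6789'], 'email': ['jo@example.com']}
```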

FileCloud is a product of CodeLathe, a privately held software company founded in 2016 and headquartered in Austin, TX. It claims 3,000-plus business customers for FileCloud and more than one million users.

Dell EMC loses vision but still leads 2019 files and object Magic Quadrant

Gartner has published the 2019 Magic Quadrant for distributed files and object storage, which places Dell EMC, IBM, Scality and Qumulo in the leaders’ box.

Here is the 2019 chart:

Sixteen suppliers feature – no vendors were dropped from last year and none added. But don’t expect Western Digital to feature in next year’s MQ. The company is included in the Challengers quadrant but is exiting data centre systems – including the proposed divestiture of ActiveScale object storage.

Seven suppliers, including top-ranked Dell EMC, scored lower on completeness of vision compared to the 2018 MQ.

In the chart below we have superimposed the 2019 MQ on the 2018 results and highlight major moves with yellow arrows, to show changes more clearly.

The 2018 MQ supplier positions and names are in light grey and the 2019 counterparts are in dark blue. We have added a low left to top right diagonal line to clarify supplier positioning.

Cloudian makes the most positive move, up and slightly right in the MQ ranks. SUSE, Dell EMC and Red Hat made big moves leftward, showing weakened vision ratings. Chinese suppliers Huawei and Inspur made significant improvements in their ability to execute, as did Caringo.

Qumulo’s Molly Presley said: “Qumulo moved to the lead position as the most top and right. Qumulo is the only company in the Leader Quadrant that improved our position this year.”

Five suppliers could break into the Leaders quadrant next year: Cloudian and Hitachi Vantara from the Challengers box, and Red Hat, NetApp and Pure Storage from the Visionaries.

Gartner has named some vendors to watch. They are Cohesity, MinIO, Nutanix, Quobyte, Rozo Systems, and WekaIO. 

Obligatory Magic Quadrant explanation

Gartner’s Magic Quadrant is a rectangular box presented as a two-axis chart divided into four equal-sized quarters. The vertical axis is a low to high ability to execute spectrum. The horizontal axis is a similar low to high spectrum rating completeness of vision. The top quarter squares are labelled Challengers on the left and Leaders on the right. The lower pair of squares are Niche Players on the left and Visionaries on the right.
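The quadrant assignment reduces to two thresholds; Gartner does not publish numeric scores, so the 0-100 scale and midpoint of 50 here are assumptions for illustration only:

```python
# Toy classifier for Magic Quadrant positions. The 0-100 axes and the
# midpoint threshold are illustrative assumptions, not Gartner's method.

def quadrant(ability_to_execute: float, completeness_of_vision: float) -> str:
    high_exec = ability_to_execute >= 50
    high_vision = completeness_of_vision >= 50
    if high_exec and high_vision:
        return "Leaders"
    if high_exec:
        return "Challengers"
    if high_vision:
        return "Visionaries"
    return "Niche Players"

print(quadrant(80, 75))  # Leaders
print(quadrant(70, 30))  # Challengers
print(quadrant(30, 70))  # Visionaries
print(quadrant(20, 20))  # Niche Players
```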

Interview with Veeam co-founder Ratmir Timashev

Veeam’s growth is extraordinary. Founded in 2006, the privately-owned backup vendor said it surpassed $1bn annual revenues in 2018. The company dominates the data protection of virtualized servers and is extending coverage to AIX, Solaris and other operating systems. 

But backup is changing shape as it intertwines with data security, takes on SaaS models, and protects data in the public clouds.

Can Veeam continue growing outside its home territory, the vast on-premises VMware server market? Blocks & Files discussed this topic with Ratmir Timashev, co-founder and head of sales and marketing at Veeam.

Ratmir Timashev.

Blocks & Files: How does Veeam see the general data protection environment and market?

Ratmir Timashev: Business people realise data is very important and they’re making big efforts to digitise their operations. Everyone has to have the right procedures and processes. A big milestone is complying with GDPR. Most companies are not ready to satisfy their GDPR requirements, and most are under-estimating the risk. We can help companies in relation to backup and GDPR.

Blocks & Files: How does Veeam view the rise of SaaS backup?

Timashev: We see the need and we’re responding to it based on our partners. We’re a 100 per cent channel business and the market is being driven by service providers. We have our VCSP – Veeam Cloud and Service Provider – program with over 20,000 partners. They provide Veeam software as a service.

We have had a very good offering for two to three years, Office 365 backup. It’s our fastest-growing product. We’ll do about $50m in subscription this year; it was around $20m last year. Version 4 is coming later this year. It’s a major improvement and adds backup to object storage. People will be able to backup O365 to AWS S3 and Azure Blob storage directly. It opens up lots of opportunity for VCSP partners.

Blocks & Files: Is there a need for cloud-native SaaS backup (the Clumio/Druva pitch)?

Timashev: There’s some merit but the architecture has to be hybrid, covering on-prem and the cloud, like we do. Just being pure cloud-based is not enough. Veeam’s is the right approach.

Blocks & Files: With the rise of niche backup approaches, can specialists keep generalists at bay? E.g. HYCU and Nutanix (and Google), and OwnBackup and Salesforce.

Timashev: Veeam had the same strategy 13 years ago, focusing on VMware. Fortunately VMware became a standard. If a niche grows like VMware, and becomes huge, then okay. If not then customers will want a heterogeneous solution covering bare metal, virtualized servers and multi-cloud.

Customers want to move around; from their data centre to the clouds and back. If you can’t do it then you’re limiting customers; repatriation is hindered. By supporting AWS and Azure, Veeam makes customers more agile. They can move from on-prem to the cloud and back.

Blocks & Files: Will public cloud service providers build or buy their own data protection products?

Timashev: AWS already provides some capability. Generally platform providers, from mainframes on, provide some basic capability. But they rely on third parties for advanced services. Backup is not a high priority for the cloud service providers; it doesn’t make or break the business. They don’t wake up on Monday morning worrying about backup. So it doesn’t get enough priority or investment.

Also, customers want to backup from AWS to on-prem or to Google or Azure. AWS will never do this. So Veeam provides a holistic solution in a hybrid, multi-cloud environment.

Blocks & Files: Can you say anything about Veeam’s roadmap?

Timashev: As I said already, V4 of O365 backup will come later this year and open up new opportunities. V10 of our Veeam Availability Suite is due in December and will have advanced backup for files and NAS devices. That will complete all the capabilities.

We’re working on providing cloud-native support for AWS and Azure. And we want to develop a comprehensive data management and protection capability for physical, virtual and the cloud environments.

Comment

Veeam’s argument is logical – namely, customers operating in hybrid environments want single, all-encompassing data protection. The company is well-positioned to expand its total addressable market if that facility also provides data management.

But competition is gathering apace, with Rubrik, Cohesity, Commvault, Veritas and others heading in the same direction. We look forward to the clash of the backup titans.

Cohesity debuts file-tiering storage space saver

Cohesity has announced a way of migrating files directly to its secondary data storage platform.

The company’s SmartFiles product extracts files from NAS systems and imports or tiers them into the Cohesity DataPlatform (CDP), leaving more space for primary files on the source filers. 

The goal is to stop the main filers getting clogged up with old, less-frequently accessed files. In CDP, files are deduplicated, compressed, and small file optimisation is applied. This provides more storage space which improves total cost of ownership.

A spokesperson said: “Cohesity DataPlatform is a software platform that provides the ability to create file shares that can be accessed via NFS or SMB/CIFS protocols. SmartFiles is a new feature within DataPlatform that goes beyond scale-out NAS. Backup customers want to use the same platform for data protection as they do for files and object services. We’re giving them that option today.”

David Noy, product management VP at Cohesity, gave out a prepared quote: “It is the first unstructured data solution that brings apps to the data, giving customers the ability to easily run antivirus, file audit, and content search natively on the platform.”

According to Cohesity, such apps cannot run on traditional NAS systems.

SmartFiles “includes multi tier data management and the ability to run file specific Cohesity Marketplace apps directly on the Cohesity DataPlatform… It automatically tiers cold data, be that files or objects, and it is moved by policy to a cost-effective storage tier, or to the cloud.”

This is a three-tier set up with primary files on the NAS; secondary or less frequently accessed files on Cohesity; and rarely accessed files in a cloud archive. 

Files stored in CDP are accessed via NFS, SMB, or as objects, via S3, with unified permissions across Windows and Linux. They can be found through the Helios search facility which looks across multiple sites and Cohesity instances in AWS and Azure.

A Cohesity spokesperson said objects could not be imported into SmartFiles as it is a “software solution that resides within Cohesity DataPlatform, software which runs on HPE, Cisco appliances and Cohesity’s own C6000 series appliance. Objects can, however, be imported into DataPlatform using S3 compatible REST APIs, and stored there.”

A Cohesity YouTube video, ‘SmartFiles: File Stubbing and Symlinks’, suggests stubs are left behind when files move off the primary NAS.

SmartFiles YouTube video

Until this launch, files would move onto CDP via its backup process, leaving source files untouched. With SmartFiles, Cohesity enables direct migration of old files from the NAS boxes into its scale-out, clustered system.

Files are selected for migration by age of last access, size and other parameters with these set as policies and a file migration process run at set times. Source filers include NetApp, Isilon and Pure Storage.
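Policy-driven selection by age and size can be sketched like this; the thresholds are made-up policy values, and the real SmartFiles implementation is not public:

```python
# Select migration candidates by last-access age and size, in the spirit
# of the policy-based file migration the article describes. Thresholds
# here are hypothetical policy settings.
import time
from pathlib import Path

AGE_DAYS = 180          # untouched for six months
MIN_SIZE = 1 << 20      # ignore files under 1 MiB

def migration_candidates(root: str):
    cutoff = time.time() - AGE_DAYS * 86400
    for path in Path(root).rglob("*"):
        if path.is_file():
            st = path.stat()
            if st.st_atime < cutoff and st.st_size >= MIN_SIZE:
                yield path, st.st_size

# A real tiering job would copy each candidate to the secondary tier and
# leave a stub or symlink behind, as the Cohesity video suggests.
```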

SmartFiles is available now for Cohesity DataPlatform customers. Pricing is not revealed.

Competition

File lifecycle management products include Komprise’s  Intelligent Data Management, Spectra Logic’s recently announced StorCycle and Data Dynamics’ long-established StorageX.

Komprise uses symbolic links so that file access by users and applications still works, and recently added an analytics function. StorCycle, announced last month, is new; Spectra pitches the software as affordable file management that stores files on Spectra Logic’s own Black Pearl and other systems.

StorageX is a mature, 16-year-old product used by 26 of the Fortune 100.


Sayonara, Toshiba Memory… Konnichiwa, KIOXIA

LogoWatch Today Toshiba Memory Holdings officially rebrands as KIOXIA and has unveiled the new logo. 

It’s… grey.

What do the marketing folks say? “The silver of Kioxia’s new logo will be the company’s official corporate colour, meant to represent the superior quality of its memory technology. In addition to silver, the company will have communication colours including, light blue, magenta, light green, orange, yellow, white and black.”

Kioxia is pronounced ‘KeeOxchia’ and should be printed in all-caps.

Naohisa Sano, Kioxia’s chief marketing officer, said: “Our full brand colour palette of bright, vibrant colors represents KIOXIA’s fun, future-driven culture, and passion for using memory to create new experiences and a colourful future for the world. Our new corporate logo and brand identity better reflect KIOXIA’s mission and vision to uplift the world with memory, using technological innovation to create new value for society.”

Kioxia’s logo is black on its website.

Consumer retail products such as solid state drives, SD cards and USB flash memory, will be released under the Toshiba Memory brand name until the end of the year. Kioxia kicks in in January 2020.

China’s DRAM push will hurt Micron, Samsung and SK hynix. Who will fold?

China’s entry into DRAM and NAND chipmaking will prolong the NAND downturn and will result in the acquisition or collapse of a DRAM supplier.

This is the nub of China’s Memory Ambitions, a report by semiconductor analyst Jim Handy of Objective Analysis.

China has begun a semiconductor self-sufficiency project because the cost of importing nearly all its needs is very high.

Long-term it will save money and become a wealthier country if it uses foreign trade earnings to build domestic semiconductor industries.

According to Nikkei Asian Review, China aims to produce “70% of its own chips by 2025. The current figure, thought to be between 10% to 30%, leaves Chinese companies reliant on foreign chipmakers”.

And vulnerable. The ongoing trade spat between the Trump administration and China has seen restrictions imposed by the US Department of Commerce on Huawei that bar US firms from trading with the Chinese tech giant. The US delayed the trade ban for 90 days in August 2019.

China’s plans

The two biggest types of semiconductors are DRAM and NAND.

The DRAM market has three suppliers: Samsung, SK Hynix and Micron. The combination of the cost of building fabs for the latest DRAM technology, lowish market demand growth and areal density increases means the industry cannot afford the entry of another supplier. If China Inc. enters the DRAM market it will be a fourth supplier. Handy’s assessment is that the DRAM industry will then contract back to three suppliers: Samsung, SK Hynix or Micron will exit the industry or be acquired.

Example chart from report.

Blocks & Files’ suggestion is that Micron will buy SK Hynix, adding DRAM and NAND capabilities in a single move.

The NAND industry has higher demand growth and its suppliers – Intel, Micron, Samsung, SK Hynix, Toshiba and Western Digital – can withstand the entry of China Inc. Handy thinks China’s YMTC will enter mass NAND wafer production in 2023. This will extend the current over-supply for a year or two and depress vendor earnings. 

The report is detailed and covers the effect of China’s entry into the DRAM and NAND markets on tool suppliers and OEM customers. Handy suggests Western suppliers will need to partner with Chinese companies to progress in the Chinese market.

Contact Objective Analysis to obtain a copy. 

Dell EMC’s ECS object store gains mid-range companion

Dell EMC has announced a mid-range ECS object store plus additional security software features to suit US DoD needs.

Effectively it has added higher-capacity drives to the entry-level chassis to create a bigger system that should cost much less than a high-end box.

Dell EMC positions ECS as building on the legacy of the Centera and Atmos object storage systems. The EX500 is a fourth-generation ECS appliance and fits mid-way between the gen 3 EX300 and EX3000 appliances.

  • EX300 – 12 disks/2U node – 1TB, 2TB, 4TB or 8TB drives – to 1.54PB/rack
  • EX500 – 12 or 24 drives/2U node – 8TB or 12TB drives – to 4.6PB/rack
  • EX3000 – 45 or 90 disks/4U node – 12TB drives – to 8.6PB/rack

Comparisons

For comparison, 1U and 4U Cloudian HyperStore appliances can use 14TB drives, with 12 or 70 per chassis respectively. The maximum capacity per chassis is 980TB. These Cloudian systems also store metadata in SSDs for faster access.

NetApp StorageGRID systems come in 2U x 12 drive, 4U x 60 drive and 5U x 58 drive chassis. With 12TB drives the maximum chassis capacity is 720TB. 

The ECS range

Back in the Dell EMC camp, the EX300 has 10GbitE front-end and back-end access whereas the EX500 and EX3000 use faster 25GbitE. There is one server per node in the EX300 and EX500 systems; the EX3000 can have one or two servers per node.

The 2U and 4U ECS chassis

The EX3000S has a single server per node, with a minimum of five nodes and a maximum of eight per rack. The EX3000D has two servers per chassis and between six and 16 nodes in a rack. Like the EX3000S, the EX300 and EX500 have a minimum of five nodes.

The EX3000S has either 45, 60 or 90 drives while the EX3000D supports 30 or 45 drives: that extra server takes up space in the chassis. Dell EMC said all nodes within an EX3000 rack must always have the same hard drive configuration.

EX300 nodes with differing drive sizes can be mixed if they are added in groups of four nodes at a time.

There are up to 16 EX500 appliances (nodes) per rack. All ECS systems scale out beyond a single rack.
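The quoted rack capacities check out against the drive counts; the calculation below assumes 16 nodes per rack for the EX300 as well, which the quoted 1.54PB figure implies:

```python
# Sanity-checking the quoted per-rack capacities from the node specs.
def rack_capacity_tb(drives_per_node: int, drive_tb: int,
                     nodes_per_rack: int) -> int:
    return drives_per_node * drive_tb * nodes_per_rack

# EX500: 24 x 12TB drives per node, 16 nodes per rack
print(rack_capacity_tb(24, 12, 16) / 1000, "PB")  # 4.608 ~ the quoted 4.6PB

# EX300: 12 x 8TB drives per node, assuming 16 nodes per rack
print(rack_capacity_tb(12, 8, 16) / 1000, "PB")   # 1.536 ~ the quoted 1.54PB
```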

ECS software

ECS software supports object (S3, Swift, Atmos), file (NFS v3) and HDFS access protocols.

V3.4 ECS software adds:

  • Advanced Security Technical Implementation Guide hardening, external key management support, custom alerts, and additional security features.
  • Improved monitoring metrics and native Grafana data visualisation capabilities to help make accurate forecasts and act on capacity alerts.
  • Reduced metadata overhead per object, to increase usable capacity.
  • Support for EX500.

ECS 3.4 and the EX500 appliance are available now although pricing has not been revealed. Check out an ECS architecture guide here.

Your occasional storage digest, including Dell, IDC on HCI, Intel Optane, Nebulon and more

There are several news items about Intel Optane in this week’s roundup, as Intel’s Optane software ecosystem building efforts pick up. But Cloudera, first.

Cloudera’s cloud data warehouse

Cloudera is now competing with Snowflake and Yellowbrick Data. The company has launched Cloudera Data Warehouse, a cloud-native service for self-service enterprise analytics on the new Cloudera Data Platform (CDP). It enables quick deployment and easy administration of cloud data warehousing, moving on-premises workloads to the cloud with consistent security and governance, and ingests data anywhere, at massive scale, from structured, unstructured and edge sources.

Anupam Singh, GM of data warehouse at Cloudera, said: “Hundreds of users can simply provision their own resources at the click of a button and analyse all data together, wherever it is on-premises or in the cloud, without breaking the bank, without breaking metadata and security, and lastly, without locking data into proprietary formats and silos.”  

Dell Analysts’ day

William Blair analyst Jason Ader attended a Dell Analysts’ Day last week and sent subscribers this summary:

  1. Dell believes that given increasing IT complexity, customers will look to consolidated strategic vendors, with Dell uniquely positioned to serve that role; 
  2. Dell’s focused execution in the storage market could allow the company to reclaim lost market share and increase competitive pressure on pure-play vendors like NetApp; 
  3. Continued integration of VMware’s portfolio positions Dell to be a leader in private/hybrid cloud infrastructure, though Dell will continue to support VMware’s open structure, allowing it to partner and integrate with many of Dell’s competitors; 
  4. Dell is betting big on AI/ML technologies and integrating them into its products, leveraging Dell Technologies Capital to gain exposure to emerging companies (e.g. it has invested in Noodle.ai and Graphcore);
  5. Dell remains focused on improving profitability and deleveraging (core debt remains at $36.4bn; leverage ratio of 3.6 times), which will limit Dell’s ability to use spending to spur growth and/or its balance sheet for major M&A activity.

Ader points out that Dell’s storage business has reclaimed 375 basis points of previously lost share. Dell believes it is well positioned to continue, targeting roughly two-thirds of previously lost share as feasible to reclaim.

Dell has made structural changes, including cutting down the number of products from 88 to 22, increasing sales-and-marketing spend, and reorganising R&D to be more collaborative and focus on newer products.

It has integrated VxRail with vSAN and put them on the same release schedule, which has simplified the company’s HCI offering and messaging. The stronger integration of Dell EMC and VMware could put more pressure on Nutanix.

Ader thinks that room remains for both Dell/VMware and Nutanix to be successful (currently a two-horse race competitively).

Hazelcast in-memory software and Optane

In-memory software business Hazelcast has agreed with Intel to use Optane for real-time applications, artificial intelligence and internet of things systems for enterprises. Its Project Veyron is focused on running Hazelcast technologies on Intel second generation Xeon Scalable processors and Optane DC Persistent Memory (DIMMs). Hazelcast said Project Veyron will accelerate the completion of parallel in-memory tasks, complex analyses for more sophisticated models and the use of structured and unstructured data sets.

Raker’s take on IDC HCI report

Wells Fargo senior analyst Aaron Rakers has added more intelligence to the bare-bones IDC storage tracker HCI report published last week. He told subscribers that IDC estimates total HCI market revenue at $1.825bn for 2Q19, a continued deceleration of y/y growth at +24 per cent, compared to +69 per cent, +57 per cent and +47 per cent y/y in 3Q18, 4Q18 and 1Q19.

Is HCI growth tailing off?

Rakers thinks the most notable takeaway is VMware’s continued strength with total vSAN + Ready Nodes revenue totaling $694M in C2Q19, +39 per cent y/y and equating to a ~38 per cent market share.

This compares to Nutanix‘s software + hardware revenue (including third party OEMs) which totaled ~$522M in C2Q19, +5 per cent y/y and implying a ~29 per cent share (vs. 34 per cent a year ago). NetApp’s HCI revenue was estimated at ~$39M, up 113 per cent y/y, but was down 15 per cent q/q.

Hmm. NetApp isn’t exactly experiencing soaring growth with its Elements HCI product.

HPE SimpliVity revenue grew 30 per cent y/y to $125M, while Cisco’s HyperFlex revenue stood at ~$114M, up 47 per cent y/y.

Blocks & Files notes that IDC put Cisco in the top 3 HCI supplier revenue space and not HPE. Weird.

Intel’s Stratix FPGAs and Optane

Intel has started shipping Stratix 10 DX field programmable gate arrays (FPGA). They support Intel Ultra Path Interconnect (UPI), PCI-Express (PCIe) Gen 4 x16 and a new controller for Intel Optane technology to provide high-performance acceleration.

Stratix supports select Intel Optane DC persistent memory dual in-line memory modules (DIMMs). They increase bandwidth and provide coherent memory expansion and hardware acceleration for upcoming select Xeon SP CPUs. 

Intel Stratix FPGA.

The Stratix memory controller supports up to eight Optane DC persistent memory modules – Intel doesn’t like using the DIMM word – per FPGA (up to 4TB of non-volatile memory).

Intel said the UPI interface in combination with future select Xeon SPs should deliver 37 per cent lower latency (than Xeons without UPI) and improve overall system performance via coherent data movement and a theoretical peak transfer rate of 28 GB/sec.

The PCIe Gen 4 x16 interface delivers peak data bandwidth of 32 GB/sec. Apps should realise about twice the throughput.
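The 32 GB/sec figure falls out of the PCIe Gen 4 lane arithmetic:

```python
# Where the 32 GB/sec PCIe Gen 4 x16 figure comes from:
# 16 GT/s per lane, 128b/130b encoding, 16 lanes, 8 bits per byte.
gt_per_s = 16            # giga-transfers per second per lane
encoding = 128 / 130     # usable payload fraction after line encoding
lanes = 16

gbytes_per_s = gt_per_s * encoding * lanes / 8
print(round(gbytes_per_s, 1))  # ~31.5 GB/s, rounded up to 32 in marketing
```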

VMware is one of many early access participants, and aims to develop coherent FPGA and CPU acceleration solutions.

Intel and penta level cell flash

At its Memory and Storage day in Seoul on September 26, Intel talked about penta level cell (PLC) flash (5 bits/cell). This is its slide.

The slide above shows PLC flash with 32 separate voltage levels that are detected by some read function in the chart on the right.

Intel is not committing to delivering a product but hints strongly at the possibility. A PLC SSD would have 25 per cent more capacity than QLC (4 bits/cell) flash and should have a lower cost/TB. With QLC flash having endurance around 1,000 write cycles, then PLC flash could be in the sub-500 range and be even more limited to read-centric workloads than QLC.
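The levels-versus-bits trade-off is simple exponential arithmetic: capacity per cell grows linearly while the number of voltage levels to distinguish doubles with each added bit.

```python
# Bits per cell vs. voltage levels: 2^bits levels must be distinguished.
for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC"), (5, "PLC")]:
    print(f"{name}: {bits} bits/cell, {2 ** bits} voltage levels")

# PLC vs QLC capacity gain: one extra bit on top of four.
gain = (5 - 4) / 4
print(f"{gain:.0%} more capacity per cell")  # 25%
```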

What Nebulon and its Nebunerds are building

In November 2018 Nebulon, a California startup founded by four 3PAR veterans, announced itself to the world as a ‘cloud-defined storage startup’. We covered the birth announcement at the time but no other details were forthcoming.

Since then we have gleaned some information about the company via Twitter and LinkedIn posts made by company execs, and from recruitment ads.

The company has secured $14.5m from unnamed sources and employs at least 85 people – who it calls Nebunerds. It aims to drastically simplify storage and to deliver secure, scalable and powerful data insights by building cloud-defined storage.

This is the company’s term for software-defined storage and boils down to building a cloud back-end enterprise-class store for IoT edge devices. The store will be a scalable, efficient, secure and highly available cloud data platform for data ingestion, transformation, storage and computation.

Nebulon’s software will provide real-time monitoring, analysis and automation of its operations and of the data inside it. It is cloud-native, built from micro-services, and will run on AWS in the first instance.

We don’t know if there will be Nebulon agent software in the edge devices or if they transmit data to the back end via the S3 protocol. We anticipate early product news in the second half of next year.

Packet, Formulus Black and Optane

Developers using Packet’s cloud platform will be able to test, validate, and optimise data-intensive and real-time application workloads on Forsa.

Made by Formulus Black, Forsa is an OS that runs applications in memory, with their data re-encoded into bit markers.

Formulus Black has devised algorithms that optimise I/O between Optane DC persistent memory and the CPU. Based on initial testing, the net result includes decreased CPU usage, more TPS/IOPS and lower latency under maximum load conditions.

Forsa can pool memory from multiple CPU sockets on a single system or across systems. For instance, on a four-socket server, memory on all four CPU sockets can be pooled together. It can stitch memory from CPU sockets on a second server, creating memory-based storage devices with 12-24TB of capacity (assuming 512GB x 24 Optane DC persistent memory modules per server). 
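The quoted 12-24TB range follows directly from the module arithmetic in the parentheses above:

```python
# Pooled persistent-memory capacity from the article's module counts.
module_tb = 0.512        # one 512GB Optane DC persistent memory module
modules_per_server = 24

one_server = module_tb * modules_per_server
two_servers = one_server * 2
print(one_server)   # ~12.3 TB on one four-socket server
print(two_servers)  # ~24.6 TB with memory stitched across two servers
```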

Forsa’s BLINK feature enables users to back up and restore data and system settings from Optane DIMMs to local SSDs or a network storage target.

Also Forsa can increase the size of persistent memory block devices that are at full capacity, create clones and snapshots, and manage persistent memory resources on multiple nodes from one management console.

You can sign up for a free trial of Forsa on Packet.

Quest updates NetVault

Quest Software’s NetVault Backup v12.4 adds support for SAP HANA and Nutanix’s AHV hypervisor, plus better support for Office 365 Exchange Online and OneDrive. HANA and AHV support is via plug-ins.

Office 365 users can back up and restore user mailboxes to any cloud, disk or tape-based storage. Restoration is quite granular, down to files and emails, which can be placed in particular Outlook folders. Users can also back up and restore data from OneDrive user and group files and folders. Office 365 Active Directory and SharePoint Online support should arrive by the end of the year.

Shorts

The bloggers at cloud storage service provider Backblaze are celebrating 10 years of operations with this post “Petabytes on a Budget: 10 Years and Counting.” The company has also blogged why raising prices is hard.

Microsoft ranks first as the most strategic vendor for customers, according to a double-blind market survey conducted by Dell in the first half of this year. Microsoft’s portfolio of Azure, Office 365, Teams, and DevOps is resonating with customers, who view it as a one-stop shop for many of the technologies that underpin their digital transformation.

In the Dell survey Microsoft garnered 20 per cent of votes for most strategic vendor, IBM took second place with 13 per cent and Dell placed third with 10 per cent (eight per cent attributed to Dell EMC and two per cent to VMware), William Blair analyst Jason Ader reported in a note to subscribers.

Tom Bish, an IBM Systems storage subject matter expert, introduces a YouTube video about “On Premise Storage Options for an OpenShift, Kubernetes environment.” He describes various block and file storage solutions and the tools available to manage them, along with backup and resiliency considerations. You can also learn how the Container Storage Interface provides a conduit between your clusters and the storage devices you’ve selected.

Micron’s Crucial unit has announced the X8 Portable SSD with 500GB and 1TB capacities, sequential read/write speeds up to 1050 MB/sec, and a 3-year limited warranty.

Serverless search and analytics company Rockset announced the capability to analyse raw events from Apache Kafka in real time. Rockset’s tech enables SQL on NoSQL data.

The National Library of Scotland is using Scality RING object storage software to preserve and protect collections. It is digitising 120 miles of shelving. There will be one copy in each of two on-premises RINGs, in its Edinburgh and Glasgow data centres, and a third copy in AWS Glacier Deep Archive.

Supermicro’s high-performance 4-Way MP SuperServer is available as an Intel select solution for SAP HANA. It enables customers to use Intel Optane DC persistent memory for SAP HANA scale-out servers.

Toshiba Memory America said v3.11 of its KumoScale storage software for NVM Express over Fabrics (NVMe-oF) now supports the Graphite and Prometheus telemetry frameworks. These integrations are built on the KumoScale REST API architecture, with adapters added for each new framework.

WANdisco has launched LiveAnalytics, a technology that makes migrated and migrating data immediately and continuously available for analysis. LiveAnalytics works in tandem with WANdisco’s LiveMigrator, so business operations and analytics can continue as data shifts to the cloud.

Zerto has announced the availability of Zerto 7.5 with expanded functionality with Azure, including support for Azure Managed Disks, scale-sets and Azure VMware Solution (AVS); integrations with HPE StoreOnce Catalyst; certification and support for VMware vSphere APIs for I/O Filtering (VAIO); and advanced analytics for reporting, planning and customisation of disaster recovery and long-term data retention.

People

Paul Forte

Actifio has a new sales boss: Paul Forte, its Chief Revenue Officer. He was CRO at ChannelAdvisor and, before that, an EVP for North American Sales at Monster. Actifio told Blocks & Files that Ranajit Nevatia, SVP & GM Global Sales for Actifio GO & cloud business development, continues in place, reporting to CEO Ash Ashutosh.

SaaS backup startup Clumio has poached Chad Kenny from Pure Storage and appointed him as VP and chief technologist. Kenny was VP for Products and Solutions at Pure. His responsibilities at Clumio will focus on building the Clumio technology partner ecosystem, guiding the longer-term product roadmap and promoting the right way to leverage the cloud for data protection.

Jai Menon, a former Dell and IBM exec, has joined Fungible as its chief scientist “to accelerate customers’ transition to a data-centric infrastructure.”

NetApp CMO Jean English left in August and is now CMO at Palo Alto Networks. Jeff McCullough, NetApp’s VP for the Americas channel, has also left, according to reports.

Nutanix has promoted Cyril VanAgt to lead channel and OEM activities in the Europe, Middle East and Africa region.

Tintri by DDN has hired Anand Ghatnekar as VP of engineering and Mario Blandini as chief marketing officer and chief evangelist.

Lorenzo Flores has been appointed Vice Chairman of Toshiba Memory Holdings, effective November 2019. TMH is to become Kioxia Holdings on October 1. Flores was Xilinx’s CFO and, some time before that, an Intel exec.


Why Micron fears Optane will eat its server DRAM lunch

The pricing of Optane memory, also called 3D XPoint, will cannibalise some DRAM sales and this could give Micron a big headache. 

“The revenues of 3D XPoint memory are likely to be matched by similar-sized sales declines in the server DRAM market,” writes Objective Analysis’s Jim Handy in a July 2019 update to his 3D XPoint Report.

On the other hand, the “SSD market for 3D XPoint will remain relatively small, with the new memory serving more to displace DRAM memory rather than NAND flash SSDs or HDDs,”  he adds.

Since Optane DIMMs only work with newer and more expensive Xeon SP processors, they will help Intel sell more of them – to the extent that customers buy into this scenario.

Intel will care a lot about this as processors are its main source of income. Intel does not make DRAM and so will not care if DRAM sales are affected by Optane DIMMs.

Why Optane is cheaper than DRAM

Intel positions Optane as less expensive and slower to access than DRAM but more expensive and faster to access than NAND SSDs. An Objective Analysis chart, plotting storage, memory and cache types against bandwidth and price per GB, shows this positioning:

Jim Handy’s memory-storage hierarchy chart

Handy writes: “3D XPoint Memory won’t fit into the memory/storage hierarchy… unless it is sold for a lower price than DRAM. If its price is higher than DRAM’s price it will not be adopted.” It has, in effect, a price ceiling.

Optane shoehorns its way into a memory hierarchy gap with SSDs below it and DRAM above it. Where will it gain most market share? Handy believes “that the 3D XPoint memory’s biggest impact will be to the server DRAM market.”

Customers get the highest application performance increase for their Optane cost that way.

Using Optane, servers can have a larger memory space, as Optane DIMMs have four times the density of DRAM DIMMs. They can accommodate more or larger applications, which then run faster.

That’s because the apps avoid storage IO and therefore delays from IO requests passing through the storage software stack.

When Optane SSDs are used instead of NAND SSDs, the performance increase is limited to 5x to 7x the SSD access speed because IOs still have to traverse the storage software stack. “For most applications the price premium is too high for the performance improvement,” Handy writes.

In other words, Optane SSD prices have a lot further to fall before they cannibalise standard SSD sales.

DRAM DIMM and Optane DIMM price relationship

Handy notes that Optane DIMM prices are held to a constant relationship to DRAM DIMM prices.

Optane DIMMs are priced at slightly more than half the cost of DRAM DIMMs of the same capacity. High-density DRAM DIMMs cost more than lower-density versions, and the same is true for Optane DIMMs. The logic is that Optane DIMMs are priced to replace DRAM.

When Optane is used instead of DRAM, applications (a) run a little more slowly because they have less DRAM, but (b) then run a heck of a lot faster because they have far less storage IO. Overall, the apps run much faster because (b) more than cancels out (a). That’s the theory.

Systems with Optane DIMMs must have some DRAM as well. Handy calculates that “At 6TB the memory in a DRAM plus Optane DIMM system costs 30 per cent less than a DRAM-only system.” With a 2X or more application speed up, depending on the Optane DIMM mode, that’s a worthwhile cost saving.
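Handy’s 30 per cent figure is easy to reproduce under hypothetical prices. The sketch below assumes DRAM at $10/GB, Optane at 60 per cent of DRAM’s per-GB price (“slightly more than half”), and a 6TB system that keeps a quarter of its capacity in DRAM; none of these exact inputs come from his report:

```python
# Hypothetical inputs - illustrative only, not figures from Handy's report.
DRAM_PER_GB = 10.0                 # assumed DRAM price, $/GB
OPTANE_PER_GB = 0.6 * DRAM_PER_GB  # "slightly more than half" DRAM's price
TOTAL_GB = 6 * 1024                # the 6TB system in Handy's example

def system_cost(dram_gb, optane_gb):
    """Total memory cost for a given DRAM/Optane capacity mix."""
    return dram_gb * DRAM_PER_GB + optane_gb * OPTANE_PER_GB

dram_only = system_cost(TOTAL_GB, 0)
mixed = system_cost(TOTAL_GB // 4, 3 * TOTAL_GB // 4)  # 25% DRAM, 75% Optane
saving = 1 - mixed / dram_only
print(f"{saving:.0%} cheaper than DRAM-only")  # 30% cheaper than DRAM-only
```

With those assumed inputs the mixed system comes out 30 per cent cheaper; different price ratios or DRAM/Optane splits shift the saving up or down.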

Why this is a problem for Micron

Micron makes and sells DRAM. Why should it displace a 256GB DRAM DIMM with a 256GB Optane DIMM when the DRAM DIMM brings in more revenue, and is likely profitable whereas Optane, being produced in relatively low volumes, is not?

We might expect Micron to focus on ways to sell Optane that don’t affect DRAM sales. For example, it may seek to displace Samsung or SK Hynix DRAM with its 3D XPoint DIMMs – branded QuantX.

Or, in the extreme case, it could abandon the Optane market altogether. Handy speculates this may be the case, with Micron having a “desire to extricate itself from the 3D XPoint business while still satisfying alternate-sourcing agreements made by the company prior to XPoint’s 2015 introduction”.

Using QLC for cold storage is a fool’s errand

Editor’s Note: This blog by storage industry veteran Hubbert Smith begins with a brief history of SLC-MLC-TLC-QLC and then gives a forecast of what to expect from QLC. It pertains mostly to server/cloud SSDs.

History

SLC – single level cell. It stores one bit per cell, using two charge levels: a full charge of 1.8V is a one and zero charge is a zero.

MLC – multi level cell. It stores two bits per cell, using four charge levels: 1.8V, 1.2V, 0.6V and 0V.

TLC – triple level cell. It stores three bits per cell, using eight charge levels stepped between 1.8V and 0V.

QLC – quad level cell. It stores four bits per cell, using sixteen charge levels… you get the idea. The voltage deltas between levels are even tighter and more error-prone.

Let’s look at the capacity and the value of the NAND and SSD:

  • Compared to SLC, MLC doubles capacity – 100 per cent more bits per cell.
  • Compared to MLC, TLC adds 50 per cent more bits per cell.
  • Compared to TLC, QLC adds only 33 per cent more bits per cell.
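The bullet points above follow directly from the bit counts: each added bit doubles the charge levels a cell must distinguish, while the relative capacity gain shrinks. A minimal sketch (the helper names are ours):

```python
# Bits per cell, the charge levels each cell must distinguish (2^bits),
# and the capacity gain over the previous generation.
CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def charge_levels(bits):
    return 2 ** bits

def capacity_gain_pct(bits):
    """Percentage capacity gain over a cell storing one fewer bit."""
    return round(100 * (bits / (bits - 1) - 1))

for name, bits in CELL_TYPES.items():
    gain = f"+{capacity_gain_pct(bits)}%" if bits > 1 else "baseline"
    print(f"{name}: {bits} bit(s), {charge_levels(bits)} levels, {gain}")
```

The gains fall as 100, 50, 33 per cent, while the level counts climb 2, 4, 8, 16 – the diminishing-returns trade discussed next.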

Diminishing returns 

QLC offers only 33 per cent more capacity than TLC, and with every generation the industry trades away more performance and write endurance. That is diminishing returns.

Additionally, NAND is ugly and QLC NAND is a whole new level of ugly. Here’s why.

NAND cells are less than perfect. Firmware goes through all sorts of contortions to identify and correct media errors. As mentioned earlier, SSD data retention relies on the electrical charge of a cell and given enough time these electrical charges will evaporate. Recharging cells is one of the many maintenance tasks handled by firmware. When an SSD is plugged in, the firmware will refresh cell charges every 30 to 60 days. 

What happens when an SSD is without a power source to refresh cell charges? No power means no cell recharge. Sooner or later the electrons drift away, the cell’s electrical charge evaporates, and data is lost.

QLC promise and reality

The promise of QLC is very different from the reality of QLC. Some systems vendors are using QLC to drive towards a lower cost/GB; within a system, this is likely workable.

Some memory vendors are pitching QLC for cold storage; these folks are naively over-selling QLC. Cold data can sit unpowered for long periods – and that is exactly when QLC’s fragile charge levels evaporate.

Humble advice 

  1. Stick to proven TLC. Let someone else save a nickel and learn hard lessons.
  2. Consider SLC for systems where the data sets are small but the over-writes are high. (Its endurance is far higher than that of MLC, TLC and QLC.)

Note: Consultant and patent holder Hubbert Smith (Linkedin) has held senior product and marketing roles with Toshiba Memory America, Samsung Semiconductor, NetApp, Western Digital and Intel. He is a published author and a past board member and workgroup chair of the Storage Networking Industry Association, and has had a significant role in changing the storage industry with data centre SSDs and Enterprise SATA disk drives.


Micron results suffer Huawei-sized hangover

Micron’s fourth quarter revenues were affected by the US trade dispute with China – and Huawei in particular. Quarterly revenues fell 42 per cent to $4.87bn, down from $8.44bn a year ago. Net income declined from $4.33bn to $561m.
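Those year-on-year falls are easy to cross-check from the quoted figures; a quick sketch (the net income percentage is our own arithmetic, not a figure from Micron):

```python
def yoy_decline_pct(prior, current):
    """Percentage fall from the prior-year figure, rounded to whole points."""
    return round(100 * (prior - current) / prior)

print(yoy_decline_pct(8.44, 4.87))   # Q4 revenue, $bn: 42, as reported
print(yoy_decline_pct(4.33, 0.561))  # Q4 net income, $bn: roughly 87
```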

Its outlook for the next quarter is $5.1bn at the mid-point, which contrasts with $7.9bn reported a year ago. 

Micron CEO Sanjay Mehrotra said: “We are encouraged by signs of improving industry demand, but are mindful of continued near-term macroeconomic and trade uncertainties.” 

You can be certain of that, with President Trump’s trade dispute with China nowhere near resolution. The Trump administration has barred sales of American software and componentry to firms on the so-called Entity list without an export licence.

Micron stated fourth quarter sales to Huawei – its biggest customer – were lower than anticipated. According to IHS Markit Huawei bought $1.7bn worth of DRAM and $1.1bn worth of NAND in 2018 – not just from Micron.

Micron has applied for licences with the US Department of Commerce to sell more products to Huawei. But the company said that if US restrictions against Huawei continue, it could see a worsening decline in sales to Huawei in coming quarters.

Q3 and Q4 fy19 revenue bars show slump is bottoming out.

Full fiscal 2019 revenues fell 23 per cent to $23.4bn and net income fell 55 per cent to $6.31bn.

DRAM sales accounted for 63 per cent of overall revenues and were down 48 per cent on the year. NAND sales, 31 per cent of overall revenues, declined 32 per cent.

Free cash flow was $263m for the quarter and $4.08bn for the full year. Micron has $9.3bn in cash, marketable investments and restricted cash. Making DRAM and NAND eats capital, and Micron expects capital expenditure in fiscal 2020 to be between $7bn and $8bn.

Micron Technology President and CEO Sanjay Mehrotra said: “Micron delivered fourth quarter results ahead of expectations, capping a fiscal 2019 in which we executed well in a challenging environment, significantly improved our competitive position, and returned cash to shareholders through share repurchases.” 

Q4 revenues for Micron’s four business units were:

  • Compute and Networking – $1.9bn revenues – down 56 per cent
  • Mobile – $1.4bn – down 26 per cent
  • Storage -$848m – down 32 per cent
  • Embedded – $705m – down 24 per cent

The DRAM and NAND downturns are ending, according to Micron. DRAM demand has bounced back as issues that affected the first half of 2019 dissipated, while NAND elasticity – i.e. lower prices – is driving robust demand. NAND bit shipments will be higher than DRAM bit shipments in the next quarter.

Micron chalked up record revenue and unit shipments in consumer SSDs and said it is positioned to gain share in the NVMe SSD market in fiscal 2020. It saw quarterly rises in demand for cloud and enterprise data centre products.

The company said it was on track to ship its first 3D XPoint products by the end of calendar 2019. That will signal the end of Intel’s monopoly in selling XPoint products.