
Storage news sprinkles to start the week

The amount of data to be stored, processed and protected grows incessantly. That’s fuelling product growth and partnerships between suppliers as they jostle to take advantage of the benign storage climate.

Cisco HyperFlex and Nexenta target VDI

Cisco is using Nexenta’s Virtual Storage Appliance (VSA) to provide NFS and SMB file services to its HyperFlex hyper-converged system. This SW-based NAS features inline data reduction, snapshots, data integrity, security, and domain control, with management through VMware’s vCenter plugin.

The two say HyperFlex plus NexentaStor VSA is good for VDI, with support for home directories, user profiles, and home shares.

It obviates the need for HyperFlex customers to add a separate NAS filer to support VDI.

The two suggest the combined system is suited for remote office/branch office (ROBO) and backup/disaster recovery (BDR), supporting legacy enterprise, new cloud-native, and 5G-driven telco cloud apps, in part because it eliminates the need for separate file servers.

Cray meets good weather in South Africa

A Cray XC30-AC supercomputer with a Lustre file system, used by the South African Weather Service (SAWS) since 2014, has been upgraded.

SAWS has doubled its compute capacity, tripled storage capacity and more than doubled bandwidth.

The upgrade involved:

  • Growing from 88 to 172 Ivy Bridge compute nodes
  • Upgrading processors from Ivy Bridge 2695v2 to Ivy Bridge 2697v2
  • Expanding from 1.5 cabinets to 3 cabinets (48 blades)
  • Replacing Sonexion 1600 arrays with 0.52PB capacity and 12 GB/sec bandwidth with ClusterStor L300 storage offering 1.8PB capacity and 27 GB/sec

Ilene Carpenter, earth sciences segment director at Cray, said: “With the Cray system upgrades in place, SAWS has the storage and compute resources needed to handle an increasing number of hydro-meteorological observations and to run higher fidelity weather and climate models.”

IDC’s Global StorageSphere

IDC’s inaugural Global StorageSphere forecast, a variation on the old Digital Universe reports, predicts the amount of stored data is going to grow; there’s a thing.

The installed base of storage capacity worldwide will more than double to 11.7 zettabytes (ZB) over the 2018-2023 forecast period. IDC says it measures the size of the worldwide installed base of storage capacity across six types of storage media, but doesn’t publicly identify the six media types. 

We asked IDC and spokesperson Mike Shirer tells us: “The six types of storage media in the report are:  HDD, Tape, Optical, SSD, and NVM-NAND / NVM-other.  Share is dominated by HDD, as you might expect.” Indeed.

The three main findings, according to IDC, are:

  • Only 1-2 per cent of the data created or replicated each year is stored for any period of time; the rest is either immediately analysed or delivered for consumption and is never saved to be accessed again. Suppliers added more than 700 EB of capacity across all media types to the worldwide installed base in 2018, generating over $88 billion in revenue.
  • The installed base of enterprise storage capacity is outpacing consumer storage capacity, largely because consumers increasingly rely on enterprises, especially cloud offerings, to manage their data. By 2023, IDC estimates that enterprises will be responsible for managing more than three quarters of the installed base of storage capacity.
  • The installed base of storage capacity is expanding across all regions, but faster in regions where cloud datacentres exist. The installed base of storage capacity is largest in the Asia/Pacific region and will expand to 39.5 per cent of the Global StorageSphere in 2023.

Mimecast’s textbook example of business growth

Mail archiver and protector Mimecast delivered yet another quarter of record growth.

Revenues for its third fiscal 2019 quarter were $87.6m, providing a $350m annual run rate and a 30 per cent increase on the year-ago quarter.

There’s not a trace of seasonality in this revenue growth curve.

There was a $500K profit; unusual, as Mimecast generally runs a quarterly loss to fuel its growth.

Mimecast closed a record number of six-figure transactions and a seven-figure deal that was the largest ever for the company. It was with a financial services customer working across the Middle East and Africa.

A thousand new customers were recruited in the quarter, more than last quarter’s 900 but fewer than the year-ago quarter’s 1,200. The total customer count is 33,000.

The outlook for fourth fiscal 2019 quarter revenue is a range of $90.6m to $91.5m. Full fiscal 2019 revenue is expected to be in the range of $338.7m to $339.7m, and full fiscal 2020 revenue in the range of $413m to $427m.

Mimecast bought Simply Migrate last month, with software technology to migrate mail archives into Mimecast’s Cloud Archive vaults. The price was not disclosed.

Seagate and the EMEA Datasphere

Another IDC report, commissioned by Seagate, looks at the data produced (not necessarily stored) in the EMEA region. It predicts the EMEA Datasphere will grow from 9.5ZB to 48.3ZB from 2018 to 2025, a growth rate slightly lower than the global average (CAGR of 26.1 per cent vs 27.2 per cent.)
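
As a quick arithmetic check (ours, not IDC’s), the quoted growth rate is consistent with those capacity figures:

    # Quick arithmetic check of the quoted EMEA growth figures (ours, not IDC's).
    start_zb, end_zb, years = 9.5, 48.3, 2025 - 2018   # 7-year forecast span

    cagr = (end_zb / start_zb) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")   # ~26 per cent, in line with the quoted 26.1 per cent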

The main findings:

  • AI, IoT and entertainment streaming services rank among the key drivers of data growth,
  • Streaming data is expected to grow by 7.1 times from 2015 to 2025; think YouTube, Netflix and Spotify,
  • AI data is growing at a 68 per cent CAGR, while IoT-produced data will grow from 2 per cent to a 19 per cent share of the EMEA datasphere by 2025,
  • The percentage of data created at the edge will nearly double, from 11 per cent to 21 per cent of the region’s datasphere,
  • China’s Datasphere is currently smaller than the EMEA Datasphere (7.6ZB vs 9.5ZB in 2018), but by 2025 it will have overtaken Europe to emerge as the largest Datasphere in the world, with 48.6ZB.

Prissily, Seagate and IDC advise that businesses need to see themselves as responsible stewards of data, taking proactive steps to protect consumer data while reaping the benefits of a global, cloud-based approach to data capitalisation, whatever data capitalisation means. Yeah, yeah.

Veeam did well – again

Backup and recovery supplier Veeam, which has enjoyed a succession of record-breaking quarters, has had another one, leading to a near-$1bn year. Total global bookings in 2018 were $963m, 16 per cent higher than 2017’s $827m.

The global customer count is over 330,000. At the end of 2017 it was 282,000, and Veeam says it has been acquiring new customers at a 4,000/month rate. More than 9,900 new EMEA customers were acquired in the fourth 2018 quarter; mind-boggling numbers.

Cloud has been the fastest-growing segment of Veeam’s business for the past 8 quarters. Veeam reported 46 per cent y-o-y growth in its overall cloud business for 2018. The Veeam Cloud & Service Provider (VCSP) segment grew 23 per cent y-o-y, and there are 21,700 Cloud Service Providers, 3,800 of them licensed to provide Cloud Backup & DRaaS using Veeam Cloud Connect.

Ratmir Timashev, co-founder and EVP of Sales & Marketing at Veeam, said: “We have solidified our position as the dominant leader in Intelligent Data Management and one of the largest private software companies in the world.”

Veeam recently took in a $500m investment to help drive its growth. Timashev said: “We are leading the industry by empowering businesses to do more with their data backups, providing new ways for organisations to generate value from their data, while solving other business opportunities.”

Blocks & Files thinks acquisitions are coming as Veeam strengthens its data management capabilities to compete more with Actifio, Cohesity and Rubrik. We reckon it will hit the $1bn mark by mid-year.

WANdisco gets cash and stronger AWS partnership

Replicator WANdisco has raised $17.5m through a share issue taken up by existing shareholders:

  • Merrill Lynch International
  • Ross Creek Capital Management, LLC
  • Global Frontier Partners, LP
  • Davis Partnership, LP
  • Acacia Institutional Partners LP, Conservation Fund LP and Conservation Master Fund (Offshore), LP

CEO and chairman Dave Richards said the cash will be used “to leverage a number of significant opportunities to expand our existing partner relationships.”

WANdisco has received Advanced Technology Partner status with Amazon Web Services, the highest tier for AWS Technology Partners. Richards said this new partner status “significantly expands our sales channel opportunities.”

Shorts

Amazon Elastic File System Infrequent Access (EFS IA) is now generally available. It is a new storage class for Amazon EFS designed for files accessed less frequently, enabling customers to reduce storage costs by up to 85 per cent compared to the EFS Standard storage class. With EFS IA, Amazon EFS customers simply enable Lifecycle Management, and any file not accessed for 30 days gets automatically moved to the EFS IA storage class.
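
For illustration, here is a minimal boto3 sketch of enabling that lifecycle policy; the file system ID is a placeholder and credentials are assumed to come from the usual AWS configuration:

    # Minimal sketch: enable EFS lifecycle management so files unused for 30 days
    # transition to the Infrequent Access storage class. The file system ID is a
    # placeholder; region and credentials come from the standard boto3 config.
    import boto3

    efs = boto3.client("efs")
    efs.put_lifecycle_configuration(
        FileSystemId="fs-12345678",                              # placeholder ID
        LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
    )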

A major US defence contractor is implementing an Axellio FabricXpress system using Napatech’s FPGA-based SmartNIC software and hardware. The Axellio/Napatech 100G capture and playback appliance will be used as a test and measurement system to capture and store 100 gigabits per second data-to-disk to allow precise replay. Napatech provides high-speed lossless data capture and replay.

Cisco’s second fiscal 2019 quarter results earnings call revealed this HyperFlex HCI snippet. CFO Kelly Kramer said: “We had a great Q2, we executed well with strong top-line growth and profitability.

“Total product revenue was up 9 per cent to $9.3bn. Infrastructure platforms grew 6 per cent. Switching saw double-digit growth in the campus. … Wireless also had double-digit growth … Routing declined due to weakness in service provider. We also saw decline in data centre servers partially offset by strength in hyperconverged.” So HyperFlex revenues grew, though UCS servers generally did not.

Disk array supplier INFINIDAT has been recognised as a February 2019 Gartner Peer Insights Customers’ Choice for General-Purpose Disk Arrays, based on reviews and ratings from end-users. So too have Dell EMC, HPE and NetApp.

Intel will build a new semiconductor fab in Hillsboro, Oregon, but hasn’t said what kind of chips it will make: CPUs, NAND or Optane. The new fab will be a third phase of the existing D1X manufacturing complex, and its exact size and timing haven’t been specified.

Mail manager and archiver Mimecast’s third fiscal 2019 quarter revenues were up 30 per cent y-o-y to $87.6m. It added 1,000 new customers in the quarter to reach a total of 33,300; impressive. A total of 41 per cent of Mimecast customers used it in conjunction with Microsoft Office 365 during the quarter compared to 29 per cent a year ago.

Scale-out filer startup Qumulo said that, in FY19:

  • Its partner ecosystem, including HPE, was responsible for 100 per cent of bookings and the source of about 40 per cent of its deals.
  • More than 65 per cent of new business was partner-initiated
  • It saw year-over-year triple-digit growth with HPE, with new customers across enterprise, commercial, State/Local government and Education (SLED), Federal government and small/medium business (SMB).

People

Rafe Brown has been appointed as Mimecast’s CFO. He was previously CFO of SevOne, and CFO of Pegasystems before that.

Leonard Iventosch has joined scale-out filer Qumulo as its Chief Channel Evangelist, coming from HPE’s Nimble unit, where he was VP for world-wide channels.

Model9’s mission to murder mainframe VTLs

“We love mainframe and we want to make it better,” says Model9 co-founder and CEO Gil Peleg.

He wants to demonstrate his love by blowing up mainframe Virtual Tape Libraries (VTLs) with a software explosive.

The mainframe market includes the largest companies in the world. It is a worthwhile market niche.

Mainframes have traditionally used tape drives and libraries for backup and longer term storage. These have largely been supplanted by disk arrays controlled by software presenting a virtual tape library interface to mainframe applications. These VTLs are much faster than tape drives and libraries at both writing and retrieving data.

However, in the x86 server world VTLs have in their turn been supplanted by deduplicating disk arrays acting as backup targets, by object storage arrays, and by the public cloud offering backup and archival storage based on object technology, such as Amazon’s S3 and Glacier.

Model9 saw this and reckons it can bring object storage goodness, on-premises and in the cloud, to mainframes, making their backup and allied operations less expensive, more scalable, more convenient, and free of vendor lock-in.

Its technology connects the mainframe directly over TCP/IP to any network-attached storage and enables the supplementing or complete elimination of the need for virtual tape libraries and physical tapes.

Gil Peleg

The product is called Model9 Backup and Recovery for z/OS.

Since its founding in 2016 the company has developed its software. Peleg tells us: “Model9 is a shipping product, with customers running in production and customers who have totally eliminated IBM tape hardware and replaced IBM and CA backup software with our solution.”

It supports the following targets:

  • Cloud: Amazon S3, Glacier, Azure, Google Cloud and IBM Cloud,
  • Object: Hitachi Content Platform (HCP), IBM Cloud Object Storage, EMC Elastic Cloud Storage, NetApp StorageGRID and Microsoft Azure Stack,
  • Immutable: Amazon Vault Lock, Hitachi HCP, NetApp StorageGRID,
  • Traditional: NAS, SAN, DASD (Direct Access Storage Device).

It co-exists with: 

  • Legacy Backup: IBM DFHSM, CA-DISK and FDR/ABR
  • Tape Management: IBM RMM, CA-1, CA-TLMS, BMC Control-M/Tape
  • Virtual Tape Software: IBM VTFM, CA-VTape
  • Security Software: RACF, CA-TSS, CA-ACF2

This is a heck of a lot of software functionality to develop in two years or so, particularly in the assembler-driven world of mainframe software development.

The software consists of a Backup Agent on z/OS and a Management Server on Linux or z/Linux, as shown in the diagram below.

Java vs. Assembler

The Israeli company was started in 2016 by four mainframe people: CEO Gil Peleg, CMO Motti Tal, VP Business Development Adi Pundak-Mintz, and departed CTO Yuval Kashtan.

It has had two smallish funding rounds, a pre-seed round and a seed round, both in 2017, with the amounts undisclosed.

How come Model9 has been able to do all this in a comparatively short time?

Peleg points to three key differentiators over other Mainframe vendors that “allow us to move very fast.”

“The mainframe side of the product is developed in Java, unlike any other infrastructure solution in the mainframe world that are developed in Assembler. Our patented technology is all about how to perform low-level mainframe I/O operations and invoke system services from Java. So developing the majority of the product in Java instead of Assembler allows us to use standard tools and practices and develop high quality software much faster than the norm in this market.”

Secondly: “We support S3 directly from z/OS which enables us to quickly develop and make adjustments between the different cloud storage player offerings and provide support for AWS, Azure, Google Cloud and IBM Cloud. In 2016 we also participated in IBM’s cloud accelerator called IBM AlphaZone and were the first company to connect the IBM mainframe to the IBM Cloud. We also certified with the major NAS/SAN players such as NetApp and Infinidat.”

And the third factor is this: “Our R&D team is composed of highly skilled mainframe software engineers, all under 40 with 15 years of experience in mainframe, storage, cyber security and enterprise software development. All of our R&D team are guys who started their professional career as mainframe engineers in the Israeli Army’s computer centre, at one of its elite computing units. This young and agile team with a startup mentality enables us to develop very fast compared to other companies in our market.”

Cascade Lake AP, Optane persistent memory and endurance

Intel’s new Cascade Lake AP processor supports Optane memory but the company is not revealing details about endurance. This raises doubts about its real-world use.

The AP (Advanced Performance) version of Intel’s Cascade Lake CPU adds Optane support to the x86 lineup. One reason for having direct support in the processor is that a server with both DRAM and Optane has two tiers of memory: fast DRAM and slower Optane memory, more formally called Optane DC Persistent Memory.

DDR4 DRAM latency is in the 14ns area while Optane DC Persistent Memory averages 350ns for reads. This is the classic NUMA (non-uniform memory access) issue seen in multi-processor servers, where near and far memory have different response times, but here it appears inside a single-processor server with two types of memory.

Cascade Lake AP differs from the SP (Scalable Processor) version by having an integrated memory controller which supports running both DRAM and Optane memory in the same memory structure.

Cascade Lake AP is a multi-chip package combining two 24-core processors, connected by a UPI link, into a single 48-core CPU.

A Cascade Lake AP has 12 memory channels, each capable of supporting two DDR4 DIMMs. These can be DRAM DIMMs, Optane DIMMs or one of each.

Optane DIMMs come in 128GB, 256GB or 512GB capacity points. There can be up to 3TiB of DRAM or 6TiB of Optane in a Cascade Lake AP system, or, more likely, a mix of DRAM and Optane with the DRAM acting as a cache.

Cascade Lake AP servers can be 2-socket systems, in which case they will support up to 12TiB of Optane memory.
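
A back-of-the-envelope reading of those capacity figures (our arithmetic, assuming one 512GB Optane DIMM per channel, not an Intel configuration guide):

    # Back-of-the-envelope Optane capacity arithmetic for Cascade Lake AP.
    # Assumption: one 512GB Optane DIMM per channel, with the channel's other
    # slot holding a DRAM DIMM acting as a cache.
    channels_per_socket = 12
    optane_dimm_gb = 512                      # largest Optane DIMM capacity point

    per_socket_tb = channels_per_socket * optane_dimm_gb / 1024
    two_socket_tb = 2 * per_socket_tb
    print(per_socket_tb, two_socket_tb)       # 6.0 and 12.0, matching the figures above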

An Optane SSD has over-provisioning (extra cells) to prolong its endurance (active life). With this, the 750GB P4800X has a 5-year warranty and a listed 41PB written endurance rating, equivalent to 30 DWPD (drive writes per day). That has gone up from the initial 20 DWPD number.
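
The 30 DWPD figure follows directly from the warranty terms:

    # How 41PB written over a 5-year warranty translates into ~30 drive writes
    # per day (DWPD) for a 750GB P4800X.
    capacity_gb = 750
    endurance_pb_written = 41
    warranty_days = 5 * 365

    dwpd = (endurance_pb_written * 1_000_000) / (capacity_gb * warranty_days)
    print(round(dwpd))                        # ~30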

Intel has not revealed the endurance of its Optane DIMMs. If they are used in direct access memory mode (treated as memory by applications, in other words, rather than as storage), we can reasonably expect a large number of writes to take place.

We are all waiting for Intel to release its Optane DC Persistent Memory endurance numbers; then we can get a good idea of real-world performance. Hopefully they will be good.

NetApp beats WekaIO

Well, this is fun. Two can play at the node addition game. NetApp has beaten WekaIO in the SPEC SFS 2014 software build benchmark.

WekaIO had cleanly swept the SPEC SFS 2014 results and continually topped the popular SW build test, in part by adding nodes to its Matrix scale-out parallel file system.

NetApp is using the same tactic, adding nodes to its all-flash A800 array, with a 12-node system scoring 6,200 SW builds at an overall response time of 0.83 msecs. That beats WekaIO’s 5,700 and its 0.26 msecs ORT, which had beaten an 8-node A800’s 4,200 SW builds at an 0.78 msecs ORT.

Blocks & Files would suggest that, in price/performance terms, WekaIO, with its Supermicro BigTwin server hardware, is much more cost-effective than NetApp. But NetApp has its mature software environment to offer in turn.

SPEC SFSing is turning into a duopoly game, dominated by NetApp and WekaIO. Both can keep on adding nodes to leapfrog each other.


NetApp building moat around ONTAP base

NetApp says flash and cloud integration enables it to build a moat around its core ONTAP business base, despite revenues being temporarily becalmed.

Its third quarter revenues of $1.56bn were up just 2 per cent annually, missing Wall Street estimates. The company forecasts no annual growth at the mid-point of its next quarter’s revenue guidance: $1.64bn.

CEO George Kurian told analysts in NetApp’s Q3 fy19 earnings call that he is confident NetApp will see growth resuming in 2020 because it has a three-way growth strategy based on flash arrays, HCI and multi (private + public) cloud offerings.

He said that the customer “requirement for hybrid multi-cloud capabilities is creating three significant market transitions: disk to flash, traditional IT to private cloud, and on-premises infrastructure to hybrid clouds.” NetApp has responded, and is responding, to each of these transitions.

All-flash arrays

First, the good news: NetApp is doing well in the fast-growing all-flash market, growing 19 per cent year-over-year to an annualised net revenue run rate of $2.4 billion. Kurian said: “With only 15 per cent of our installed base currently running all-flash arrays, the runway for this secular transition remains in the early innings.”

He noted a “lot of weak large players in disk-based systems that we will take share from; IBM, Hitachi, Fujitsu, Oracle, HPE, there’s a lot of them. And even Dell has a challenged mid-range portfolio and [high-end] portfolio. So, we feel good about our opportunities.”

Private cloud

Kurian stated: “The second major market transition we’re exploiting is the shift from traditional IT to private cloud. SolidFire and NetApp HCI are the building blocks for private cloud deployments, enabling customers to bring public cloud like experience and economics into their data centres.”

Kurian was asked by an analyst on the call whether he was prepared to provide a SolidFire stand-alone and HCI run rate, or a percentage of product revenue, that he thought was reasonable. He declined, saying the “momentum in our private cloud business that began in the October quarter [Q2] accelerated in Q3” and “We’ll tell you more as we head into fiscal ’20.”

NetApp is late to the hyperconverged market and has a mountain to climb here. Dell EMC and Nutanix appear to have sewn it up with dominating market shares.

Hybrid or multi-cloud

Kurian said the shift from on-premises infrastructure to hybrid clouds is “the third key market transition that we are taking advantage of to expand our business.”

He claimed: “Only NetApp is building a comprehensive set of cloud data services available across multiple clouds,” and: “Our cloud data services annualised recurring revenue is approximately $33m, up 22 per cent from Q2.”

“Roughly two-thirds of early Cloud Volumes Service customers are new to NetApp,” which means they could be sold other NetApp offerings.

The CEO said: “We are seeing accelerating momentum with our private cloud solutions, and our public cloud solutions are positioned to deliver strong growth in FY ’20.” 

Isn’t this a little late?

“I would say we’re a bit behind where we expected to be in terms of the operational readiness of our service offerings with our cloud providers. We are, as we said, generally available with AWS. We are, you know, in controlled pilot production projects with both Azure and Google, and we expect them to be available imminently.”

New growth angles

Kurian believes: ”Our private and public cloud solutions enable us to reach new buyers. Our flash hybrid cloud infrastructure and AI solutions are serving as pillars of customers’ new architectures and we are seeing adoption of our cloud offerings as part of our customers’ foundation for moving applications and data to the cloud.”

Business might be lacklustre now because of worries in larger enterprises about tensions with China and about the US public sector, but, hopefully, these issues are short-term and will get resolved. Then NetApp will surge ahead.

How to get a self-castrated bull: Dave Hitz retires from NetApp

It’s the end of an era; NetApp co-founder and EVP Dave Hitz is retiring.

The news came in a blog and signals the end of a remarkable near-27-year career showing that nice guys can get to the top.

He writes: “I think of NetApp as my child. In any parent’s life, there comes a time when you are proud of your kid, you love your kid—but you don’t want to see them every day! If you’ve done a good job as a parent, they will do well on their own.”

He becomes a Founder Emeritus with CEO George Kurian’s co-operation and so will retain a connection, although a much less active one, with the company he founded with the self-effacing James Lau and Michael Malcolm.

Lau retired in late 2015. Malcolm had resigned earlier, in 1994, after a disagreement. 

Left to right: James Lau, Dave Hitz, Byron Rakitzis (one of the founders at Igneous in Seattle), unknown, Michael Malcolm.

Hitz memorably co-wrote a book, “How to Castrate a Bull”, and has served as an evangelising EVP and a sort of keeper of the company’s conscience.

Now he is separating himself almost completely from NetApp and this link to NetApp’s past is stretched wafer thin. Anyone who has watched Hitz present or seen him draw diagrams on his iPad will remember an effervescent character brimming with ideas and the way to express them.

Dave Hitz is a legendary engineer and will be sorely missed.

Radian’s Co-operative Zones make SSDs last longer

Radian Memory Systems can make SSDs last longer and perform more predictably and consistently.

The company has released its branded Symphonic Cooperative Zones technology on its RMS-350 U.2 and RMS-325 edge card SSDs. This is how it works.

Radian sub-divides SSDs into zones that can be operated and managed co-operatively with the host. The outcome is fewer unexpected latency delays, increased parallelism (which means more performance), and better endurance.

Lost in Flash Translation

Currently, SSDs have a Flash Translation Layer and carry out garbage collection and wear-levelling themselves, asynchronously and with no reference to the host. These operations can interfere with and delay host-initiated read and write requests; the host sees an unexpected latency spike.

They can also cause more writes than anticipated, in a process known as write amplification. This reduces the working life, or endurance, of the SSD.
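
Write amplification is usually expressed as a simple ratio of NAND writes to host writes; a generic illustration (not Radian-specific):

    # Write amplification factor (WAF): bytes physically written to NAND divided
    # by the bytes the host asked to write. Generic illustration, not Radian-specific.
    def write_amplification(host_bytes: float, nand_bytes: float) -> float:
        return nand_bytes / host_bytes

    # e.g. garbage collection relocating valid pages turns 1TB of host writes
    # into 2.5TB of NAND writes, consuming endurance 2.5 times faster.
    print(write_amplification(1.0e12, 2.5e12))   # 2.5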

The newly-arrived QLC (4bits/cell) SSDs have intrinsically lower endurance than TLC (3bits/cell) SSDs and so write amplification is more undesirable. In turn TLC flash has lower endurance than MLC (2bits/cell) flash.

Radian sets up so-called co-operative zones that can be accessed in parallel. These consist of NAND blocks, or NAND Erase Units in Radian terminology. They are subsets of a physically separate region of the SSD, based on NAND dies and channels, which Radian calls an iso-box.

Zones are configurable in size and the host sees sequential-write zones of idealised flash; in effect, mini virtual SSDs. The co-operative zones appear as a range of contiguous logical block addresses (LBAs), accessible via conventional addressing through the NVMe command set.
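
To make the zone abstraction concrete, here is a generic sketch (ours, not Radian’s firmware or API): a zone is a contiguous LBA range that is written sequentially through a write pointer and reset as a unit.

    # Generic sketch of the zoning idea, not Radian's implementation: a zone is a
    # contiguous LBA range, configurable in size, written sequentially via a
    # write pointer and reset as a unit.
    from dataclasses import dataclass

    @dataclass
    class Zone:
        start_lba: int          # first logical block address in the zone
        length: int             # zone size in blocks
        write_pointer: int = 0  # next block to write, relative to start_lba

        def append(self, num_blocks: int) -> int:
            """Return the LBA to write at, enforcing sequential writes."""
            if self.write_pointer + num_blocks > self.length:
                raise ValueError("zone full; reset it or pick another zone")
            lba = self.start_lba + self.write_pointer
            self.write_pointer += num_blocks
            return lba

        def reset(self) -> None:
            """Model a zone reset: the whole zone becomes writable again."""
            self.write_pointer = 0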

Symphonic Cooperative Zones

Cooperative zones can be factory-configured to support different types of memory, ranging from NVRAM to SLC (1bit/cell) and TLC today, and SCM (Storage-Class Memory) and QLC in the future. Radian also says the technology will support specialised ultra-low-latency SLC variations. Blocks & Files understands this refers to Samsung’s Z-SSD and other possible products.

This feature can be used to tier data within an SSD to maximise performance, capacity and cost efficiencies. 

Routine wear levelling and NAND maintenance (data retention, scrubbing, error handling) are performed internally by the SSD. The device initiates co-operative requests to the host if required data movement could conflict with other (host-initiated) I/O access.

If additional wear levelling or other NAND maintenance is required that could conflict with host I/O access latencies, the host is also alerted. That means the host can respond by rescheduling IO requests.

As part of garbage collection, hosts are responsible for selecting valid data and a relocation destination on a different zone, either performing a copy/write operation directly or using Radian’s optional Delegated Move command (NVMe vendor extension) that delegates the data transfer to the SSD.

Zones are erased through the use of a ‘Zone Reset’ command (NVMe vendor extension) that is issued by the host to the SSD, or via a zone aligned NVMe deallocate command.

Radian emphasises no host library support is needed. The functionality is carried out entirely by SSD device firmware.

Write amplification can also be caused by mismatches between the host file system’s segment size and the ones used by the SSD. Radian has an Address Space Layout (ASL) configurator which enables users to configure the SSD zones to match the host file system’s segment size. This eliminates the write amplification that would otherwise occur if the host segments and SSD zones were not aligned.

Further, FTLs, if log-structured, can have their own internal garbage cleaning operations, and these can conflict with regular garbage collection, causing more write amplification and latency spiking. Because Radian’s technology eliminates the FTL, this additional write amplification and latency spiking are both eliminated.

Radian supplies its SSD technology and products to system OEMs and cloud providers, and licenses it to device manufacturers. The company says thousands of its NVMe storage devices have been deployed in data centre applications by some of the industry’s largest OEMs.


Dell EMC adds compression to Isilon all-flash filer

Dell EMC’s Isilon group has added compression to the F810 all-flash filer, guaranteeing logical capacity at twice raw capacity.

There is also ClarityNow software to locate, access and manage data across file and object storage on-premises and in the cloud.

In common with other systems in the range, the F810 can scale out to a 144-node cluster. The table below provides a snapshot of the range.

The F810 is roughly equivalent to an F800 with added in-line compression. Dell EMC says this can compress data up to 3x but guarantees at least 2x for one year, giving it a 2.2PB logical capacity per 4U, 60-drive chassis. A 144-node cluster of F810s would have up to 79.6PB of logical capacity.

The IOPS and throughput ratings of the F810 are identical to those of the F800 but the raw capacity range is different, at 230TB to 924TB, compared with the F800’s 96TB to 924TB. The F810 can be added to existing Isilon clusters.

More Clarity

ClarityNow provides a foundation on which Dell EMC can build secondary file data management facilities to compete with Cohesity and Rubrik. 

The unstructured data management package provides a unified global file system view across heterogeneous distributed storage and the public cloud. Users have self-service capabilities to index, find, use and move files anywhere within this file system.

The software can cope with billions of files, according to Dell EMC, and the heterogeneity appears to be a useful attribute.


WekaIO hits the SPEC SFS 2014 ball out of the park

WekaIO has made a clean sweep of all five SPEC SFS 2014 benchmark categories.

SPEC SFS 2014 is a file serving benchmark with five different workloads: software builds, video streaming (VDA), electronic design automation, virtual desktop infrastructure and database. Each category test results in a numerical score and an overall response time (ORT).

WekaIO’s Matrix software is a scale-out, parallel file system. It topped the software build category in January 2019 and has now achieved the top rank in outright performance and in response time for the other categories.

Here is a table it published:

A composite chart shows WekaIO’s overall position:

There are missing columns in the chart because not all vendors enter test results in each of the five categories.


According to this benchmark, WekaIO Matrix is the fastest file serving system available.

Our understanding is that WekaIO’s performance grows linearly as nodes are added. If a competing vendor beats WekaIO in any category, WekaIO can just add nodes and beat it in turn.

Such is the nature of scale-out file storage.

Its two software build results, with a score of 1,200 from a 4-node system and 5,700 from a 6-node system, bear this out.

Comment

If WekaIO can beat any other competitor via node addition, then it can be argued that it has effectively killed the SPEC SFS 2014 benchmark.

Blocks & Files contends that you can’t have a benchmark in which there is only ever one winner. This benchmark needs a price/performance aspect or some other attribute so that there is another way of ranking suppliers rather than sheer performance modified by overall response time.

Your occasional storage digest featuring Crossbar, CTERA, Snowflake and Pavilion Data

Some bits and bobs for your perusal. Let’s go.

Short items

ReRAM developer Crossbar Inc. is setting up a consortium of AI technology providers that are developing hardware and software products for edge and cloud applications. The consortium aims to accelerate development and time-to-market of ready-made, power-efficient, scalable AI computing platforms (using ReRAM).

CTERA Networks, which provides secure edge-to-centre file services, said it doubled its enterprise subscription revenues in 2018 compared to 2017.  It added more than 50 new large enterprise customers in the year and gained $30m in new funding. CEO Liran Eshel said: “The balance of enterprise storage investment has shifted away from NAS and traditional systems towards multi-cloud solutions that provide scale and cost savings.”

NVMe over Fabrics storage startup Pavilion Data has joined the STAC Benchmark Council. The STAC tests are a specialised set of benchmarks for the finance industry, focused on applications such as tick (time series) data analysis. It recently joined the Storage Networking Industry Association (SNIA) to help create standards around NVMe-oF for data management and security.

Data-warehouser-in-the-cloud Snowflake finished its fiscal 2019 at the end of January and wants us to know it has had a great year. There was the $450m funding round, plus it doubled its post-money valuation to $3.9 billion, placing the company among the top 25 most highly valued private US technology companies. It more than tripled revenue y-o-y, a 257 per cent increase (no actual revenue number provided), and the customer count tripled to surpass 1,400. Snowflake is present in nine countries and intends to expand to 20 by the end of 2019.

Customers

London-based financial services firm Institutional Protection has deployed Scale Computing’s hyperconverged HC3 product. Andrew Ingram, CIO, said: “One of our major challenges was the ability to add capacity as planning for future resources was challenging, but with Scale Computing we can simply scale-out as and when needed.”

People

Delphix has appointed Dan Rogers to its board. He is CMO at ServiceNow and his CV mentions stints at AWS, Salesforce, Symantec, and Microsoft. 

Infinidat has named Tony DeLisio as VP, enterprise financial business strategy and Pete Byrne as regional sales director for the South-Central US. DeLisio has spent time at HPE, Citi, EMC and Hitachi Data Systems, among others. Byrne comes from Pure Storage, where he was the regional sales director for TOLAM and the Southeast. Prior to this, Byrne was a sales director at Commvault, running the company’s OEM business in North America, and at EMC, where he served as area vice president of sales.

Jorge Titinger, ex-CEO of SGI, has joined the board of HPC storage software and system supplier Panasas.

Farzad Haghighi has joined Pavilion Data Systems as VP Global Support, coming from a similar position at Vexata. Mike Gustafson has joined Pavilion’s advisory board. His career includes CEO positions at Virident (bought by Western Digital) and BlueArc (bought by Hitachi Data Systems). He currently serves as executive chairman and member of the board of directors for Druva and as an independent member of the board of directors for Reltio, Everspin Technologies, Matterport and PDF Solutions.

Pure Storage has hired Matthew Oostveen as chief technology officer and VP of Asia Pacific and Japan. He joins Pure from Dell EMC.

Storage startup Datrium tipped for unicorn stardom

CB Insights, a financial research firm, has teamed up with the New York Times to publish a list of 50 startups on their way to unicorn status. We note the inclusion of Datrium, a privately held enterprise storage vendor, as a contender to hit the $1bn-or-greater valuation that makes a unicorn.

There are several existing unicorn status storage startups, including Actifio, Cohesity and Rubrik. What are Datrium’s chances of joining this select bunch?

HCI consolidation

Datrium makes a semi-hyperconverged infrastructure (HCI) system in which the compute nodes each run a storage agent/controller and access a shared storage array. This is an alternative to creating a virtual SAN using the local storage on each node.

Datrium is one of several HCI-class startups, but life is increasingly tough in a market consolidating around Nutanix, a startup that made it to IPO, and Dell EMC’s in-house VxRAIL system.

Two prominent startups were acquired: Cisco bought Springpath and HPE bought SimpliVity. NetApp has developed its own in-house system using SolidFire all-flash storage technology as a component. DataCore also has its own system.

Startups Pivot3 and Scale Computing are, like Datrium, growing their businesses, while Maxta has crashed. HiveIO bought the assets of Atlantis, another HCI failure.

Datrium was started in 2012 and has taken in $165m in funding through four funding rounds. Its valuation at the last round ($60m in 2018) was not revealed at the time.

IBM thinks analog chips can accelerate machine learning

IBM says machine learning could be accelerated by up to a thousand times by using analog chips based on Phase-Change Memory.

Phase-Change Memory is based on a chalcogenide glass material which changes its phase from crystalline to amorphous and back again when suitable electrical currents are applied. Each phase has a differing resistance level, which is stable until the phase is changed. The two resistance levels are the basis for binary one or zero values. 

PCM devices are non-volatile and access latency is at DRAM levels, making them an example of storage-class memory. The Intel-Micron 3D XPoint technology is based on PCM.

An IBM blog reveals the company is setting up a research centre to develop next-generation AI hardware and look at nanotechnology. Technology partners include New York State, the SUNY Polytechnic Institute, Samsung, Mellanox, and Synopsys.

New processing hardware

The blogger, Mukesh Khare, VP semiconductor and AI hardware at IBM Research, says current machine learning limitations can be overcome by using new processing hardware such as:

  • Digital AI Cores and approximate computing
  • In-memory computing with analog cores
  • Analog cores with optimised materials

He mentions mapping Deep Neural Networks (DNN) to analog cross-point arrays (Analog AI cores). These have non-volatile memory materials at the array crosspoints to store weights.

Numerical values in DNN calculations are weighted, and these weights are adjusted in the course of training to improve the accuracy of decisions.

These weighted values can be arrived at directly within a crosspoint PCM array, without needing host server CPU intervention, so providing in-memory computing with no need for data movement. This is an analog array, in contrast with digital counterparts such as Intel’s XPoint SSDs or DIMMs.

The PCM cells record synaptic weights along an 8-step gradient between the amorphous and crystalline states. The conductance or resistance of each of the steps can be altered with electrical pulses. These 8 steps provide 8-bit precision in the DNN calculations.

Crossbar arrays of non-volatile memories can accelerate the training of fully connected neural networks by performing computation at the location of the data.

Computation inside the analog memory chip

The IBM research paper states:

“Analog non-volatile memories (NVM) can efficiently accelerate the “backpropagation” algorithm at the heart of many recent AI advances. These memories allow the “multiply-accumulate” operations used throughout these algorithms to be parallelized in the analog domain, at the location of weight data, using underlying physics.

“Instead of large circuits to multiply and add digital numbers together, we simply pass a small current through a resistor into a wire, and then connect many such wires together to let the currents build up.  This lets us perform many calculations at the same time, rather than one after the other.  And instead of shipping digital data on long journeys between digital memory chips and processing chips, we can perform all the computation inside the analog memory chip.”
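
As a toy illustration of that idea (ours, not IBM’s code), a crossbar multiply-accumulate is just Ohm’s law plus Kirchhoff’s current law: each column current is the sum of conductance-times-voltage products down that column.

    # Toy model of an analog crossbar multiply-accumulate, our illustration only.
    # Row voltages encode the inputs, cell conductances encode the weights, and
    # each column wire sums its currents (Ohm's law plus Kirchhoff's current law),
    # so the column currents are the vector-matrix product computed in memory.
    import numpy as np

    voltages = np.array([0.2, 0.5, 0.1])      # input activations as row voltages (V)
    conductances = np.array([                 # weights as PCM cell conductances (S)
        [1.0e-6, 2.0e-6],
        [0.5e-6, 1.5e-6],
        [2.0e-6, 0.1e-6],
    ])

    column_currents = voltages @ conductances  # I_col = sum over rows of G * V
    print(column_currents)                     # one multiply-accumulate result per column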

Check out an IBM AI Research blog page for more articles exploring these ideas.