
Your occasional Storage Digest featuring Qumulo, Seagate, StorageCraft and more

Start reading already 🙂

Qumulo updates software

Scale-out filer supplier Qumulo has streamed out a set of software updates over the past quarter:

  • Alternate Data Streams – Increase data access across both Windows and Mac environments by enabling files to receive more than one stream of data.
  • Audit Tracking – View who has accessed, modified, or deleted files.
  • Enhanced Replication.
  • Mac OS X Finder Enhancements – Display performance is now 10 times faster than SMB display performance. File management is improved: “._hidden” files are no longer created and Finder tags are retained when files are copied.
  • IO Assurance – Balance performance during a drive rebuild.
  • C5n clusters now available in AWS.
  • Enhanced Real-Time Analytics – Visibility into capacity usage and trends for snapshots, metadata, and data independently. 
  • Refreshed Management Interface.
  • Upgrade Path Options – Move between bi-weekly and quarterly upgrade paths flexibly. 

Seagate LaCie Rugged SSDs

Seagate has launched a set of LaCie external SSDs for entertainment and media professionals.

The handheld Rugged SSD has a Seagate FireCuda NVMe SSD inside and Seagate Secure self-encrypting technology with password protection. Users get USB 3.1 Gen 2 speeds of up to 950MB/sec.

The palm-sized and faster Rugged SSD Pro is a scratch disk with a FireCuda NVMe SSD inside for Thunderbolt 3 speeds of up to 2800MB/sec. It’s suitable for up to 8K high-res and super slow motion footage and has the latest Thunderbolt 3 controller. USB 3.1 is also supported.

The Rugged BOSS SSD has 1TB of capacity with speeds of up to 430MB/sec and direct file transfers via the integrated SD card slot and USB port. A built-in status screen provides real-time updates on transfers, capacity, and battery life. LaCie BOSS apps on iOS and Android enable users to view, name and delete footage.

The Rugged SSD has MSRPs of £189.99 (500GB), £319.99 (1TB) and £529.99 (2TB) and is available from Amazon and other resellers.

The Rugged SSD Pro has MSRPs of £429.99 (1TB) and £749.99 (2TB), and is available from Amazon and the following resellers: Jigsaw; CCK; Protape; Wex.

The Rugged BOSS SSD has an MSRP of £499.99, and is available from Amazon.

StorageCraft

StorageCraft, the US-based data protection business, has published the findings of an independent global research study on experiences and attitudes of IT decision-makers around data management. 

Some 88 per cent of UK respondents believe data volume will increase 10-fold or more in the next five years. And 81 per cent expressed concern about risk and business impact when asked about the potential impact of this growth.

  • 61 per cent expect an increase in operational costs
  • 47 per cent foresee an inability to recover quickly enough in the event of a data outage
  • 46 per cent anticipate they will be more susceptible to security risks
  • 32 per cent envisage that strategic projects will fail
  • 29 per cent predict that revenue generation will lag

So they had better buy StorageCraft data protection.

Shorts

Architecting IT has published a user guide, ‘NVMe in the Data Centre’. It can be bought and downloaded here.

Cloudera results for the second quarter of fiscal year 2020, ended July 31, 2019, saw total revenue of $196.7m, subscription revenue of $164.1m and annualised recurring revenue growth of 16 per cent Y/Y. Loss from operations was $89.1m, compared to a $29.4m loss a year ago.

Object storage supplier Cloudian says it had record (but unquantified) revenue in the first half of its fiscal year ended July 31, 2019, with significant year-over-year growth in both quarters. Sales in North America for the period increased more than 65 per cent. 

Cloud-based data protection supplier Cobalt Iron has announced Cyber Shield, an extension of its Adaptive Data Protection SaaS product that locks down and shields its customers’ backup data from loss, corruption, or attack.

Enterprise storage software startup Datera claimed first-half business growth of over 500 per cent which included its largest quarterly billings, largest new deal and largest expansion deal. CEO Guy Churchward said: “It’s like a switch was thrown in the last six months.”

Dell EMC Isilon and NVIDIA’s joint reference architecture, featuring the all-flash Isilon F800 with the NVIDIA DGX-1 GPU server, is now commercially available in EMEA as an integrated turnkey AI system sold through joint strategic channel partners.

Research house Dell’Oro Group said the worldwide server and storage systems market declined eight per cent year-over-year in 2Q 2019. This was the first decline in eight quarters, with softness in the enterprise and cloud sectors.

Hitachi Vantara has launched Lumada Manufacturing Insights, a suite that uses AI and ML to eliminate data silos and help manufacturers optimise machine, production and quality outcomes.

HPE said it delivered robust performance in all-flash array revenue with 22.7 per cent year-over-year growth in the second 2019 quarter, according to IDC. It brags that the other two vendors in the top three – Dell and NetApp – posted year-over-year declines.

IBM Spectrum Scale can be used as platform storage for running containerised Hadoop/Spark workloads.

Kingston Technology Europe has been ranked the top DRAM module supplier in the world, according to the latest rankings by DRAMeXchange. 

South Africa’s Newzroom Afrika, a 24/7 news channel, has bought a 1PB cluster of MatrixStore object storage. It has access to content provided by Interconnect (for Avid Interplay) and the Vision application. The cluster will be installed in a facility in Johannesburg.

This is what 1PB of MatrixStore looks like.

Mellanox is on track to ship over one million ConnectX and BlueField Ethernet network adapters in Q3 2019, a new quarterly record.

Cloud file services supplier Nasuni has a new patent No. 10,311,153 for a “versioned file system with global lock” that covers Nasuni Global File Lock technology.

HPC storage supplier Panasas has published a case study featuring the marine survey activities of customer Magseis Fairfield, a Norway-based geophysics firm providing ocean bottom seismic acquisition services for exploration and production companies. 

Cloud file collaboration service provider Panzura has a signed partnership deal with Workspot. The latter’s Global Desktop Fabric for Cloud VDI enables Panzura customers to deploy cloud desktops and workstations in any Azure region around the globe. 

IDC lists UK-based Redstor in all five main categories of Market Glance: Data Protection as a Service, Q3 2019: backup as a service, archive as a service, disaster recovery as a service, workload migration and backup/recovery tools.

Seagate has sold a 140,000 square feet Cupertino office building to a realty investment firm, Rubicon Point Partners, for $107.5m.

VAST Data has joined the STAC (financial industry benchmarking) council.

IDC examines Western Digital’s ActiveScale product family. The report – ‘The Economic and Operational Benefits of Moving File Data to Object-Based Storage’ is available for download via WD.

People

Acronis has appointed Kirill Tatarinov as executive vice chairman “to help lead the company in its next phase of growth”. A board member since Dec 2018, Tatarinov previously held leadership positions at Citrix and Microsoft.

Veritas has appointed Mike Walkey as head of channel sales and the company’s alliances with Amazon Web Services, Google, Microsoft and other cloud providers. He was previously SVP of strategic partners and alliances at Hitachi Vantara, which he left, apparently to retire.

Cohesity, Veeam and Rubrik join Commvault in Forrester’s backup and recovery premier league

Forrester Research has demoted all the legacy vendors except Commvault in a new report that compares the top 10 storage backup and recovery suppliers.

Joining Commvault at the front of the pack are startups Rubrik, Cohesity and Veeam. Veritas has slipped slightly compared with the last outing of Forrester’s report in 2017 while HPE-Micro Focus, IBM and Dell EMC have fallen backwards – with Dell EMC faring worst of all.

The Forrester Wave: Data Resiliency Solutions, Q3 2019 report ranks Commvault, Rubrik, Cohesity, and Veeam as Leaders; Veritas, Druva and Actifio as Strong Performers; IBM and Micro Focus as Contenders; and Dell EMC as a Challenger. Download it here (registration needed).

Q3 2019 Forrester Data Resiliency Wave diagram.

Forrester produced a Q3 2017 edition of this report and we have summarised the changes in a table:

Commvault has strengthened its leadership since 2017 and new entrants Cohesity, Druva and Rubrik jump in with strong positions. 

This shows how far some legacy players have fallen behind. Forrester argues users need single-pane-of-glass management across all data sources, a comprehensive policy framework, recoverability, and security. The inference is that some legacy vendors are not providing this.

Compared to the Q3 2017 Wave, startup Actifio has a weaker placing in the Strong Performer group. The legacy data resiliency players Dell EMC, IBM and Veritas are also placed lower in the Q3 2019 chart. In fact, Forrester says Dell EMC declined to participate in the full Forrester Wave evaluation process.

Give us a Wave

The Forrester Wave is a graphical representation of suppliers in four categories: Challengers, Contenders, Strong Performers and Leaders. They are positioned in a 2D space with a weak-to-strong current offering vertical axis and a weak-to-strong strategy horizontal axis, resulting in four quarter circles. Conceptually it is like a Gartner Magic Quadrant; presentationally it is not.

The strongest positions in each segment are closest to the bottom left-to-top right diagonal and furthest along it. The chart is part of a report analysing the suppliers’ rankings, using 40 criteria.


VAST Data gets faster and safer

VAST Data, the storage startup that wants to kill hard drives, has released a software update for its Universal Storage platform that increases performance and system resiliency.

The company launched Universal Storage in February 2019 and it features the use of QLC flash, Optane media and NVMe over Fabrics. It is available running on VAST or third-party hardware or in software-only form. 

New features in Universal Storage v2.0 include continuous snapshots, faster NFS serving and asymmetric cluster support.

Snapshots do not involve copying data or metadata. The claim is users can take snapshots with a very fine degree of granularity without compromising performance, storage capacity or SSD wear.
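The idea can be sketched as a versioned store where taking a snapshot records only a version number, so no data or metadata is copied. This is a minimal illustration of the concept with invented names, not VAST’s implementation:

```python
class SnapshotStore:
    """Toy versioned store: writes append new versions; snapshots are O(1)."""

    def __init__(self):
        self.version = 0
        self.history = {}                 # key -> list of (version, value)

    def write(self, key, value):
        self.version += 1
        self.history.setdefault(key, []).append((self.version, value))

    def snapshot(self):
        return self.version               # nothing is copied, just a number

    def read(self, key, as_of=None):
        as_of = self.version if as_of is None else as_of
        # newest version that existed at the snapshot point wins
        for ver, value in reversed(self.history.get(key, [])):
            if ver <= as_of:
                return value
        return None
```

Because a snapshot is just a recorded version number, taking one is constant-time regardless of how much data the store holds, which is why fine-grained snapshots need not cost capacity or SSD wear.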

According to VAST product management VP Jeff Denworth, the faster NFS server provides the performance of a parallel file system, the IOPS and consistency of an all-flash array, and the deployment simplicity of a NAS appliance. This suits high-bandwidth machine learning and deep learning applications.

In a recent blog Denworth said NFS with RDMA can deliver up to 20.5GB/sec:

VAST NFS server speed table

The asymmetric expansion means VAST system clusters can feature independent expansion of stateless storage servers and NVMe storage enclosures. According to VAST, classic storage systems manage data and failures at the drive level. VAST’s architecture manages data at a flash erase block level, which is a smaller unit of data than a full drive. Data and data protection stripes are virtualized and written across a global pool of flash drives and can be moved to other drives without concern for physical locality. 

That means different NVMe storage enclosures can co-exist in a cluster. Servers in a cluster run VAST’s microservices, and these stateless containers access and operate on any flash device. A load scheduler operates across the cluster and is server resource-aware. This enables differently-powered and sized servers to co-exist in a VAST cluster. 
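As a rough sketch of why managing data below drive granularity permits this flexibility, consider a pool where each erase-block-sized unit records its own location and can be rewritten to any surviving drive on failure. This is our illustration of the general idea, with invented names, not VAST’s code:

```python
class BlockPool:
    """Toy global pool: data units are placed per block, not per drive."""

    def __init__(self, drives):
        self.drives = {d: {} for d in drives}   # drive -> {block_id: payload}
        self.location = {}                      # block_id -> current drive

    def write(self, block_id, payload):
        # place on the least-full drive; no physical locality constraint
        drive = min(self.drives, key=lambda d: len(self.drives[d]))
        self.drives[drive][block_id] = payload
        self.location[block_id] = drive

    def evacuate(self, failed_drive):
        # on failure, each unit is simply rewritten elsewhere in the pool
        for block_id, payload in self.drives.pop(failed_drive).items():
            self.write(block_id, payload)

    def read(self, block_id):
        return self.drives[self.location[block_id]][block_id]
```

Because placement is per block, drives (and, in VAST’s case, enclosures and servers) of different sizes can coexist: the scheduler just sees a pool of units.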

Business momentum

VAST said today it has delivered more than 50PB of capacity to its customers, which number in the dozens, with several – VAST’s term – making multi-petabyte orders. We take that to mean 36 or more customers.

The company said it is outselling other storage startups from the recent past and has issued a series of unverifiable claims:

  • VAST sold more in its first three months of GA than Pure Storage sold in its first year
  • VAST has sold more after its first full two quarters than Isilon’s first two years combined
  • VAST has sold more after its first full two quarters than DataDomain’s first two years combined

IBM touts super-fast storage array for z15 mainframe

IBM launched its latest z15 mainframe today and gave it a new high-end all-flash array to play with. This is the DS8900F and it holds more data with doubled Fibre Channel access speed compared with its predecessor, the hybrid flash/disk DS8880.

Sales of the DS8900F will be closely tied to those of the z15 as a new mainframe refresh cycle starts, just over two years after the z14 generation was launched.

There are two DS8900F models, the entry-level DS8910F and the larger and faster DS8950F. The DS8950F has seven nines availability, in common with the DS8880, and is intended to consolidate all mission-critical workloads for IBM Z, LinuxONE, Power Systems and distributed environments.

The upgrade is intended to keep IBM mainframe customers happily connecting DS8000 series arrays to their mainframes and keep Dell EMC, HPE and Infinidat out of this customer base. As well as all-flash technology, the storage array offers faster-than-NVMe-oF access speeds, cloud tiering, more secure data and AI-powered array monitoring delivered as a cloud service.

Speeds and feeds

The main point of contrast for the DS8900F is with the DS8880, which it replaces. This uses POWER8 CPUs in its controllers and has a maximum capacity of 4,608TB, made up of disk drives and SSDs, plus 614TB of flash cards.

Mainframe hosts access the DS8900F array via FICON while other hosts use Fibre Channel.

The DS8900F uses faster POWER9 CPUs and its top capacity is 5.9PB, almost 30 per cent more than the DS8880. The supported drive list runs from 800GB flash cards to 15.36TB – nearly ten times more than the DS8880’s 1.6TB top-end drive.

The big black DS8900F rack box

An EasyTier facility moves data between higher and lower performance flash drives based on the access frequency. Fibre Channel access is either 16Gbit/s or 32Gbit/s. There is no support for NVMe-over-Fabrics.
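Frequency-based tiering of this kind can be sketched as a counter per extent plus a periodic rebalance. This is a hedged illustration of the general technique; the threshold and names are ours, not IBM’s:

```python
class Tierer:
    """Toy access-frequency tiering: hot extents go to fast flash."""

    def __init__(self, hot_threshold=5):
        self.hot_threshold = hot_threshold
        self.access = {}                  # extent -> accesses this window
        self.tier = {}                    # extent -> "fast" or "capacity"

    def record_access(self, extent):
        self.access[extent] = self.access.get(extent, 0) + 1

    def rebalance(self):
        # move extents between tiers based on the last measurement window
        for extent, count in self.access.items():
            self.tier[extent] = "fast" if count >= self.hot_threshold else "capacity"
        self.access = {e: 0 for e in self.access}   # start a new window
        return self.tier
```

A real implementation would weight recency and move data asynchronously, but the decision logic is essentially this: measure, compare against a threshold, relocate.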

IBM is making great play of the DS8900F’s response times, saying latency is down to 18μs with mainframe zHyperLink technology and a 90μs minimum without it.

The new array delivers a 50 per cent reduction in transaction time for Db2 workloads, a 60 per cent increase in IOPS and a 150 per cent increase in sequential throughput. These figures are compared with unnamed arrays in IBM internal tests.

The zHyperLink tech is a point-to-point FICON connection limited to 150m distance. The mainframe-attached link provides up to 10 times lower latency than normal FICON.

Security, cloud tiering and AI insights

The DS8900F offers IBM Safeguarded Copy – hidden copies of data that provide immutable points of data recovery and are protected from modification or deletion due to user errors, malicious destruction, or ransomware attacks.

The array can shunt older data to IBM’s Cloud Object Storage, IBM Cloud, Amazon S3 and an IBM TS7700 tape system configured as an object storage target. This Transparent Cloud Tiering (TCT) is neat, incorporating Hierarchical Storage Management within the array. There is no need for additional hardware or software, such as Spectra Logic’s StorCycle.

Through integration with the mainframe z/OS, TCT provides up to 50 per cent savings in mainframe CPU utilisation when migrating large datasets, compared to other traditional archiving methods. That’s based on an internal IBM comparison using an EC12 mainframe.

As every mainstream storage array supplier is now doing, IBM is also giving the DS8900F a cloud-delivered and AI-powered system monitoring and support facility, which it calls Storage Insights.

This monitors health, capacity and performance, provides proactive best practices, and uses AI-based analytics to identify potential issues. Storage Insights simplifies ticket opening, automates log uploads to IBM, and provides configuration, capacity and performance information to IBM technicians.

Competition

The main competition for IBM’s new storage box will come from Dell EMC, Hitachi Vantara and Infinidat, aiming to sell to IBM accounts which have X86 and Power servers as well as mainframes.

The DS8900F integrates with z15 and LinuxONE mainframe systems and so will compete with Dell EMC’s PowerMax, which also supports FICON connectivity.

PowerMax delivers sub-200μs latency with NVMe flash SSDs but the DS8900F with zHyperLink offers 18μs. PowerMax tops out at 4PB.

Infinidat’s InfiniBox array has latencies of 32μs for reads and 38μs for writes when using NVMe-oF with Remote Direct Memory Access over converged Ethernet (RoCE).

The InfiniBox F6300 has a 4PB raw capacity maximum but this can be boosted to an effective 10PB through inline compression and space-efficient snapshots. There is no FICON connectivity for InfiniBox.

Hitachi Vantara’s VSP F1500 supports up to 34.6PB of raw capacity and offers FICON and Fibre Channel connectivity. Data reduction can boost effective capacity further.

Check out a DS8900F datasheet here. Apply to IBM for pricing details.

Nasuni buddies up with Cloudtenna to speed global file search

Nasuni has teamed up with Cloudtenna to improve the search function of Cloud File Services.

The pitch is that Nasuni’s Cloud File Services customers have large and distributed file data sets running into billions of files.

“File search infrastructure faces a unique set of requirements that goes beyond the footprint of traditional search infrastructure used for log-search and site-search,” Cloudtenna CEO Aaron Ganek said. “It has to be smart enough to reflect accurate file permissions. It has to be smart enough to derive context to boost search results and has to do all this in a fraction of a second.”

Cloudtenna’s DirectSearch is a recommendation engine that completes search queries on distributed billion-plus file datasets in less than a second. A recommendation engine is algorithmic code that predicts how a user would rate items in a list. 

Using DirectSearch, Nasuni customers “can always find exactly what they are looking for quickly and easily within the entirety of their global file share,” Will Hornkohl, VP of alliances, said, “all while using a single login for all file sources and while conducting intelligent searches that reflect personalised contextual insights that are modelled on each individual user’s file activity, history, teams and relationships.”

DirectSearch has connectors to various storage repositories and SaaS apps and search results are personalised for each user; ranked for relevancy based on per-user and per-team file activity history.

The software uses distributed data set crawlers to locate new files in near-real-time so that file search results are up to date. The file indexing uses image and media analysis with TensorFlow.
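A permission-aware, personalised ranking of the kind described can be sketched as follows. This is our illustration of the general idea, not Cloudtenna’s algorithm, and the file schema is invented:

```python
def search(query, files, user):
    """Toy file search: filter by permission, match by name, boost by
    the user's own activity history before ranking."""
    results = []
    for f in files:
        if user not in f["allowed"]:
            continue                      # results must respect permissions
        if query.lower() not in f["name"].lower():
            continue
        score = 1.0
        if user in f["recent_users"]:
            score += 1.0                  # boost files this user touched recently
        results.append((score, f["name"]))
    # highest score first
    return [name for score, name in sorted(results, reverse=True)]
```

The key property, as Ganek describes, is that permission filtering and contextual boosting happen inside the search path rather than as an afterthought.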


Western Digital buys NVMe-oF startup Kazan Networks

Western Digital has bought a small startup called Kazan Networks and its NVMe over Fabrics Ethernet connectivity products. Financial terms are undisclosed and the Kazan people are joining WD.

Kazan makes Onyx NVMe target bridge adaptors that enable NVMe JBOFs (Just a Bunch of Flash drives) to connect directly to NVMe-oF networks. The company claims Onyx has the world’s lowest-power, lowest-latency and highest performance for such a bridge.

Kazan’s Onyx bridge adapter

The company also produces a Fuji ASIC which can do the same job at system board level for smaller NVMe target systems.

WD can use Kazan technology in its IntelliFlash JBOF and OpenFlex composable disaggregated infrastructure (CDI) systems. 

Phil Bullinger, head of WD’s data center systems business, provided a quote confirming this: “The addition of Kazan Networks will further expand Western Digital’s leadership in disaggregated data infrastructure and accelerate the advancement of new, CDI-ready NVMe-oF platforms optimised for our customers’ next-generation hyperscale workloads.”

Kazan CEO Margie Evashenk

Kazan Networks CEO Margie Evashenk chimed in too: “The close integration of purpose-built controller and storage technology is pivotal to realising the full benefits of advanced next-generation fabric architectures, including lower power, higher performance and lower cost.”

Kazan Networks was founded in Northern California in 2014 and pulled in a $4.5m funding round in July 2016. WD was an investor.

Komprise Deep Analytics ‘finds the needle in the data haystack in minutes’

Komprise today announced the release of Komprise Deep Analytics as a plugin for Intelligent Data Management 2.11.

Komprise Deep Analytics creates a virtual data lake from various data sources and exports it for analysis, cutting analytics preparation time by a claimed 60 per cent.

Komprise’s starting point is that the source data may be unstructured and spread across many different repositories holding billions of files across millions of directories.

Komprise’s Intelligent Data Management 2.11 examines file usage rates and punts less-used files to lower cost storage. The software builds a distributed index of files with support for standard and extended metadata. This index and metadata identifies sets of files in different file stores across a customer’s IT estate and combines them in a so-called virtual data lake. 

The data retains all the permissions, access control, security and metadata when it is exported from the data lake to Komprise Deep Analytics. This enables the plug-in to handle the data as a discrete entity, as opposed to a disparate collection of data sets.
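The virtual data lake idea (a metadata index spanning many stores, queried as one entity) might be sketched like this. The structure is invented for illustration, not Komprise’s implementation:

```python
def build_index(shares):
    """Toy distributed index: gather metadata records from many file
    stores into one queryable list, tagging each with its source."""
    index = []
    for share, files in shares.items():
        for f in files:
            record = dict(f)          # standard + extended metadata
            record["source"] = share  # remember where the file lives
            index.append(record)
    return index


def select(index, **criteria):
    """Select a file set across all sources at once,
    e.g. select(index, owner='ex-employee')."""
    return [r for r in index if all(r.get(k) == v for k, v in criteria.items())]
```

A query such as “all files owned by a given user” then operates on files from every store as one entity, which is the behaviour the Northwestern quote describes.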

Komprise cited David-Kenneth Turner, IT manager at Northwestern University, who said the software is “like finding a needle in a haystack in minutes”. “Unstructured data is often made up of billions of files across millions of directories and finding the right data can be virtually impossible. With Komprise Deep Analytics, we can now find the data we need in minutes. For example, if we required all data created by ex-employees, or all files related to a specific project, we are able to operate on it as if it is a distinct entity, even if the data is residing in different storage solutions from multiple vendors and clouds.”

Deep Analytics is deployed in the cloud as a managed service or on-premises. Cloud functionality is available immediately, and the on-premises version is available later this year.


Caringo releases Swarm 11

Caringo is speeding up large video transfers with Swarm 11, an update for its video-centric object storage platform.

Swarm 11 integrates with on-demand workflows and provides:

  • Large file bulk upload in the content UI, 
  • Partial File Restore (video clipping), 
  • File sharing, 
  • Backup to any Amazon S3 region/device.

Any Swarm domain or an entire cluster can be backed up to Amazon S3, Glacier or an S3-compliant device or service via S3 Lifecycle rules.

Multi-gigabyte files are uploaded to the object store straight from a browser, using parallel ingest streams. There is no need for an intervening gateway or spooler device to ensure all file fragments are uploaded correctly.
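Parallel multipart ingest of this sort can be sketched generically: split the file into numbered parts, send them concurrently, and let the part numbers guarantee correct reassembly. This is a generic illustration of the technique, not Caringo’s API:

```python
from concurrent.futures import ThreadPoolExecutor


def parallel_ingest(data, part_size, upload_part):
    """Toy multipart ingest: parts are sent concurrently, and part
    numbers let the store reassemble them in order no matter when
    each stream finishes. upload_part(number, chunk) -> (number, chunk)."""
    parts = [(i, data[off:off + part_size])
             for i, off in enumerate(range(0, len(data), part_size))]
    with ThreadPoolExecutor(max_workers=4) as pool:
        received = list(pool.map(lambda p: upload_part(*p), parts))
    # the store orders parts by number before committing the object
    received.sort(key=lambda p: p[0])
    return b"".join(chunk for _, chunk in received)
```

Because ordering comes from part numbers rather than arrival time, no intermediate spooler is needed to guarantee the object is assembled correctly.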

Partial file restore – or clipping – downloads specific portions of a video, in contrast with tape video archives which transfer the entire video. This speeds editing, internal sharing and streaming. Broad CODEC support is available through an API for integration into asset management systems with MP4-based clipping available directly from the Swarm Content Management interface.

Authorised users can generate a streamable URL for any file from the Swarm Content Portal, using Swarm File Sharing. They can email, download or Slack the URL for secure internal or external sharing.

Caching vs fast-access object store

Caringo takes a different approach from CTERA, which launched its Media Filer yesterday.

Transferring large video files to and from the cloud – a remote object store – is time consuming. CTERA addresses this with the Media Filer box, which is a local cache, and instant streaming of cloud files to reduce time to the last byte.

Caringo prefers to offer partial downloads, letting users select short clips to access in a large file. It can also upload files with parallel IO, shortening the file movement time. This is not something that CTERA offers.

Swarm 11 is available today.

Spectra Logic’s StorCycle HSM shunts old data to lower cost storage

Spectra Logic, the tape systems vendor, is entering the storage management market, claiming it solves the problem that most data is stored on the wrong media, i.e. expensive and fast storage.

This is the world of HSM (Hierarchical Storage Management) or ILM (Information Lifecycle Management) software. But, Spectra Logic says, old, infrequently accessed data is often not moved off storage tiers best suited for more frequently accessed data, because the software products that manage this are too expensive. Typically they are priced by capacity and can cost hundreds of thousands to millions of dollars.

The answer is to automate the process with affordable software, and that’s what Spectra Logic’s new product StorCycle HSM is meant to do. The company said StorCycle HSM can be much cheaper than existing software and can free up a lot of costly primary storage capacity, enabling customers to avoid buying all-flash array capacity that is no longer needed, for example.

StorCycle classifies storage into two main tiers: primary and perpetual, and moves perpetual data to sub-tiers such as nearline disk, tape or the public cloud.

StorCycle diagram.

StorCycle can move files on a primary storage tier to the public cloud, a third-party NAS store or Black Pearl archival storage. Black Pearl is Spectra Logic’s hybrid flash/disk front-end cache. It stores files as objects on back-end tape devices, Arctic Blue nearline disk or the public cloud.

StorCycle’s overall four-stage job is to locate and identify file data’s state, migrate old stuff, protect it and enable access to it. It runs on Windows Server and builds a file catalogue. Data can be migrated semi-manually at project directory level in a one-time event or auto-migrated continuously. Auto-migration selects files by age and/or size and is directed by settable policies.

StorCycle media and entertainment use case.

Files can be moved in four ways:

  • Remove and leave nothing behind
  • Copy and leave original data in place
  • Leave a symbolic link behind if the file is moved to a NAS target, at the cost of extended access latency
  • Leave an HTML link for files moved to tape or cloud targets (.html is appended to the file name in the directory)
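The symbolic-link mode, for instance, can be sketched in a few lines. This is our illustration of the technique, not StorCycle code: the file is moved to the secondary tier and a symlink keeps the original path working.

```python
import os
import shutil


def migrate_with_symlink(path, archive_dir):
    """Toy stub-style migration: move the file to secondary storage and
    leave a symlink so applications keep using the original path."""
    target = os.path.join(archive_dir, os.path.basename(path))
    shutil.move(path, target)       # relocate to the archive tier
    os.symlink(target, path)        # transparent access at the old location
    return target
```

Reads through the old path now traverse the link to the slower tier, which is exactly the extended-access-latency trade-off the bullet above describes.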

App access means using symbolic links – an approach that Komprise, a Spectra partner and now competitor, also takes. HTML access enables the user to manually restore the target file – similar to the way files are restored from Amazon Glacier – and return it to primary storage.

StorCycle has a REST API interface and supports Amazon’s S3 protocol.

Offloaded archive data can be browsed and moved back, to a target NAS say, where it can be mounted and accessed.

With files moved from primary storage, backup processes take less time.

StorCycle has four pricing tiers, starting with three admin staff and unlimited usage for $18,000, and topping out with an enterprise ‘all-you-can-eat’ license for $144,000. Support costs are 15 per cent extra. There is no capacity charge if the customer buys Spectra storage and $50/TB surcharge if they use a third-party NAS.

Spectra says StorCycle is not an HSM

Spectra puts forward the view that StorCycle is not an HSM product. It says the concepts are similar, but HSMs were invented at a time when an entire storage infrastructure was installed onsite and consisted only of disk- and tape-based systems.

It says HSMs are expensive systems which sit directly within an organisation’s data path. As an HSM moves data from a primary system it leaves a stub file behind in place of the original file. StorCycle doesn’t do this and it does not need to sit in the primary data path.

Blocks & Files thinks that where ‘X’ is a thing that automatically moves data between high-cost and low-cost storage media, then X meets the generic definition of an HSM product. 


Dell EMC pumps up PowerMax performance

Dell EMC is ramping up performance for its high-end PowerMax arrays by adding support for FC-NVMe and Optane storage-class memory (SCM). Customers will get up to 50 per cent more read IOPS, lower latency and doubled throughput. PowerMax is also getting VMware, Ansible and Kubernetes integrations.

PowerMax already supports NVMe SSDs and this upgrade provides end-to-end NVMe support. FC-NVMe is the NVMe fabric protocol for storage array access implemented over Fibre Channel cabling, running at 32Gbit/s in this case. It delivers sub-200μs latency with NVMe flash SSDs.

The updated systems support 750GB and 1.5TB dual-port Optane DC 4800X SSDs which have an NVMe interface. Dell heralded Optane support in December 2018 and PowerMax is the first system to ship with these drives.

The PowerMax’s machine learning sub-system looks at IO profiles and uses predictive analytics and pattern recognition to automatically place data on the SCM and NVMe flash SSDs.

PowerMax iterations

The first generation PowerMax array was announced in May 2018, with two models; the 4U 2000 and the rack-level 8000:

  • PowerMax 2000 – to 1.7m random read IOPS, 300μs latency, and 1PB effective capacity. 
  • PowerMax 8000 – to 10m IOPS, 300μs latency, to 175GB/sec and 4PB effective capacity.

They scale up by adding PowerBrick controller/storage units; one to two for the 2000 and one to eight for the 8000. A brick includes a controller with two PowerMax directors, packaged software, cache, and 24-slot Drive Array Enclosures. The engines use Xeon CPUs: E5-2650 v4 for the 2000 and E5-2697 v4 for the 8000.

The updated PowerMax specs are:

  • PowerMax 2000 – to 7.5m IOPS, sub-100μs read latency and 1PB effective capacity. 
  • PowerMax 8000 – to 15m IOPS, sub-100μs read latency, 350GB/sec and 4PB effective capacity.

This is a performance upgrade, not a capacity boost.

PowerMax Competition

Infinidat claims its arrays are faster than PowerMax. A May 2019 update to its F6000 array provided up to 2m IOPS and 25GB/sec throughput. Infinidat claimed latency is typically less than 1 msec, down to under 50μs as measured from the host in one example. PowerMax with SCM and FC-NVMe has more IOPS and equivalent latency.

HPE’s Primera array puts out 2.3m IOPS and 75GB/sec of data with sub-millisecond latency. NVMe-oF support is baked into the Primera OS but is not yet available.

Primera is also said to be ready to support dual-port Optane drives and the system can handle the IO load of NVMe-oF and SCM.

PowerMax Operations

Pre-built modules for Red Hat Ansible, available on GitHub, enable customers to create Playbooks for storage provisioning, snapshots and data management workflows for automated operations.

Also available on GitHub, a Container Storage Interface (CSI) plug-in for PowerMax provisions and manages storage for Kubernetes workloads.

A VMware vRealize Orchestrator (vRO) plug-in enables customers to develop end-to-end automation routines for provisioning, data protection and host operations. These routines can be offered as self-service catalogue items on the vRealize Automation platform.

Dell EMC Cloud Storage services provide disaster recovery as a service using AWS, Azure or the Google Cloud Platform. This supports PowerMax.

There is a Dell Technologies validated design for PowerMax; and PowerMax is validated for VMware’s Cloud Foundation through Fibre Channel as primary storage. The hardware updates (NVMe, SCM) will be available on September 16 but everything else is available now.

Cohesity aims to kill copy data management sprawl

Cohesity has entered the copy data management market, farming out space-efficient clones of data sets to developers from its backup store.

The idea is to stop developers having to request data copies from the IT department and then neglecting to manually delete used copies that are no longer needed.

The use of copy data management (CDM) software to combat copy data sprawl was pioneered by Actifio and Delphix in 2012 and 2013. They provided virtual copies of master data for use by app development testers and others. Copies are managed and deleted when no longer needed, so the storage of multiple redundant copies of data is stopped in its tracks.

Cohesity’s new software is called Agile Development and Test (ADaT) and it makes zero-cost clones (virtual copies) of database backups stored in Cohesity’s Data Platform. These can be created in a self-service fashion by authorised developers who use such virtual instances to test their application code.

Copy data is scanned to detect and mask personally identifiable information such as social security and credit card numbers. Cohesity claims the resulting smaller number of copies reduces customer exposure to security attacks and also lowers overall storage costs.

The main benefit though is on-demand access to up-to-date test data by developers. Hence the product name.


CloudSpin vs Agile Development and Test

Cohesity had an earlier product, CloudSpin, which punts data from its Data Platform to test and dev people. Cohesity tells us that Agile Development and Test is a very different set of capabilities from CloudSpin.

CloudSpin is aimed at hybrid cloud mobility (take an on-premises workload and spin it up in the cloud) and the use cases are app migration to cloud, test and dev in the cloud, and DR to cloud.


Agile Dev and Test is specifically focused on Database backups (Oracle, SQL Server, etc.) and seamlessly extending these backups for development and testing use cases. 

ADaT is subscription-based and will be available in the fourth quarter. The self-service capability is currently beta and data masking will be available in the coming quarters.

Cisco gets ready for 64Gbit Fibre Channel

Cisco today announced MDS 9000 storage network director support for the coming 64Gbit/s Fibre Channel interface and NVMe-FC. It has also added Ansible automation and end-to-end analytics across the 64/32Gbit/s Fibre Channel portfolio.

Cisco’s MDS 9000 is a set of storage networking devices ranging from switches to directors – very large central switches that aggregate switch-level traffic. The products support 16 and 32Gbit/s Fibre Channel (FC), FICON mainframe access, iSCSI, FCoE across Ethernet, and FCIP (Fibre Channel over IP).

MDS 9000 product range.

Preparing for gen 7 FC

Fibre Channel speeds are expected to double in the very near future. Enabling technologies include all-flash arrays and NVMe over Fabrics (NVMe-oF) using Fibre Channel as the transport.

Cisco is announcing 64G-ready capabilities on the Cisco MDS 9700 platform, with support for NVMe and, through that, for all-flash arrays. The company will add 64Gbit/s FC line card support to the 9700s as a non-disruptive hardware upgrade, along with a software upgrade, and the products will then support 16, 32 and 64Gbit/s FC simultaneously.

The upgrade from gen 6 32Gbit/s to gen 7 64Gbit/s is imminent, and line cards, host bus adapters and edge switches that support the new Fibre Channel standard could land by the end of the year. At that point central storage networking switches and directors will need to support the standard to provide an end-to-end 64Gbit/s FC capability.

Each FC speed advance requires new hardware line cards and software added to the MDS products. Historically Cisco has been behind its main storage networking competitor Brocade, now owned by Broadcom, in introducing support for developing FC standards. It was about two years behind with 16Gbit/s FC and one year behind with 32Gbit/s FC. Now it is signalling it has caught up and will be ready with 64Gbit/s FC when the industry adopts it.

With this 9700 iteration it will be possible to run concurrent FC-SCSI and NVMe/FC workloads. Cisco believes NVMe/FC has lower hurdles for adoption than NVMe-oF using RoCE, which requires data centre-class Ethernet.

Analytics and automation

Cisco has supported generic Ansible automation modules since 2017 to automate aspects of MDS operation. It is now adding specific SAN provisioning automation modules for Ansible, with VSAN, device-alias and zoning configuration automation facilities.
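A zoning task in such a Playbook might look like the sketch below. The module and parameter names are hypothetical stand-ins for Cisco's MDS modules, and the WWNs are examples; consult Cisco's Ansible documentation for the real interface:

```yaml
# Illustrative sketch only: module and parameter names are hypothetical
# stand-ins for Cisco's MDS SAN provisioning modules.
- name: Configure zoning on an MDS switch
  hosts: mds_switches
  gather_facts: false
  tasks:
    - name: Create a zone in VSAN 100 with host and array members
      cisco.mds.mds_zone:                          # hypothetical module name
        vsan: 100
        zone:
          - name: host1_array1
            members:
              - pwwn: "10:00:00:00:c9:12:34:56"    # example host HBA WWN
              - pwwn: "50:00:09:73:00:12:34:56"    # example array port WWN
```

This replaces the equivalent manual CLI session of creating the zone, adding members and activating the zoneset, which is where the time savings Cisco cites come from.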

Ansible modules can be used to automate previously manual infrastructure procedures for provisioning compute, memory, network and storage arrays. This can reduce tasks that once took hours or even days to less than a minute.