
Pensando’s AWS Nitro-killer network card does storage too

Q&A How does Pensando Systems work its server speed-enhancing magic?

The venture-backed startup came out of stealth in late October and NetApp was revealed as a customer earlier this week.

At the time of de-cloaking to announce it had raised $145m in a C-series round, Pensando claimed its proprietary accelerator cards perform five to nine times better in productivity, performance and scale than AWS Nitro.

We emailed Pensando some questions to find out more about its Naples DSC (Distributed Services Card) and its replies show that this is in essence a network interface card that also performs some storage-related operations.

Blocks & Files: Does the DSC card fit between the host server and existing storage and networking resources?

Pensando: It’s a PCI device, so it’s actually in the server.

Naples DSC deployment models.

Blocks & Files: How many cards are supported by a host server?

Pensando: It depends on the server; typically servers can support multiple PCIe devices, but so far none of our customers has a defined use case that requires more than one DSC.

Blocks & Files: How does the card connect to a host server?

Pensando: We support PCIe (host mode) or bump in the wire deployments.

Blocks & Files: Which host server environments are supported?

Pensando: The DSC has a standard PCIe connector so it can technically be supported in any physical server, supporting bare metal, virtualised or containerised workloads. For example Linux, ESXi, KVM, BSD, Windows etc.

Pensando Naples DSC

Blocks & Files: How is the DSC card programmed?

Pensando: The card is highly customisable. It has a fully programmable Management and Control plane as well as the Data pipeline. The Management and Control plane support gRPC/REST APIs, whereas the data plane can be customised using the P4 programming language.

Blocks & Files: How does Pensando’s technology provide cloud service?

Pensando: There are two components in Pensando Technology to help provide cloud service:

a) Datapath is delivered via P4, which allows cloud providers to own the business function and the custom datapath. P4 offers the processing speeds of an ASIC, but flexibility beyond what an FPGA can offer, within a very small power envelope. P4 code is written like a high level language and iterated upon much faster than FPGA code.

At Pensando we have been able to implement new features in a matter of days, which would have required months if implemented in hardware. The P4 language syntax is similar to C, so it’s much more likely that typos and logic errors are caught by peer review. Another advantage of P4 is that statistics, instrumentation, and in-band telemetry are all software defined, just like any other feature.

b) Software Plane that runs on ARM cores (on the same chip) delivers gRPC APIs that implement various cloud features for networking and storage. This layer leverages Pensando’s P4 code and implements APIs to enable well understood industry standard cloud functions such as SDN, VPCs, Multi-tenancy, Routing, SecurityGroups, Network Load Balancing, NVMe Over Fabric, Encryption for Network/Data, NAT, VPN, RDMA, RoCEv2, etc.

The flexibility of the platform allows cloud vendors to change/iterate on the functions as their customer offering changes.
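To make that concrete, a management/control plane driven over gRPC or REST is typically programmed by pushing declarative policy objects at it. The sketch below is purely illustrative: the endpoint path, port and field names are our invention, not Pensando’s published API, and it simply shows the shape of such a call in Python.

```python
# Hypothetical illustration only: the management address, endpoint path and
# field names are invented for this sketch and are not Pensando's published API.
import requests

DSC_MGMT = "https://192.0.2.10:8443"   # assumed management address of a DSC

# Example policy object an operator might push: allow HTTPS within a tenant's VPC.
security_group_rule = {
    "tenant": "tenant-a",              # hypothetical multi-tenancy scope
    "vpc": "vpc-01",
    "direction": "ingress",
    "protocol": "tcp",
    "port": 443,
    "action": "allow",
}

# POST the rule to a (hypothetical) REST resource exposed by the management plane.
resp = requests.post(
    f"{DSC_MGMT}/api/v1/security-groups/web-tier/rules",
    json=security_group_rule,
    verify=False,                      # lab sketch; use proper certificates in practice
    timeout=10,
)
resp.raise_for_status()
print("rule accepted:", resp.json())
```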

Pensando DSC Hardware specs with our highlighting. PCIe Gen 4 will help make it fast.

Blocks & Files: Is Pensando a bump in the wire to existing network and storage resources? If it is which of each are supported?

Pensando: The Pensando DSC can be used as a bump-in-the-wire or directly as a PCIe device. The bump solution can be used for network functions quite easily; however, in order to leverage RDMA, SRIOV and NVMe, the Pensando DSC must be used as a PCIe device. The bump-in-the-wire mode can be used selectively, e.g. NVMe drivers on the host can use the card for storage whereas networking functions are handled as bump-in-the-wire.

The main attraction of using the Pensando DSC as a bump-in-the-wire is that it enables use of Pensando technology where no driver installation is required. Using bump-in-the-wire mode may preclude functions such as RDMA and SRIOV that require a Pensando driver to be installed on the OS/hypervisor. Note however that NVMe drivers are standardised, so NVMe works over PCIe without any additional driver installation.

Software definitions

It is not surprising that Pensando has taken a software-defined network interface card approach; its founders are a hot-shot ex-Cisco engineering team.

A Naples DSC product brief document states: “Just as cloud data centers have adopted a “scale out” approach for compute and storage systems, so too the networking and security elements should be implemented as a Scale-out Services Architecture.

“The ideal place to instantiate these services is the server edge (the border between the server and the network) where services such as overlay/underlay tunneling, security group enforcement and encryption termination can be delivered in a scalable manner.

“In fact, each server edge is tightly coupled to a single server and needs to be aware only of the policies related to that server and its users. It naturally scales, as more DSC services capabilities come with each new server that is added.”

Cray ClusterStor gets faster and fatter

HPE-owned Cray claims the new ClusterStor E1000 parallel storage system delivers up to 1.6 terabytes per second and 50 million IOPS per rack. According to the supercomputer maker, this is more than double that of unnamed competitors.

The ClusterStor E1000 starts at 60TB and scales to multiple tens of petabytes across hundreds of racks. What makes it so fast and able to hold such high capacity? Our guess is that the E1000’s PCIe gen 4 links to SSDs and its use of disks as capacious as 16TB contribute to its speed and the 6,784TB/chassis maximum capacity.
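As a back-of-envelope check on that guess (our arithmetic, not a Cray-published configuration), the quoted chassis capacity is consistent with a little over 400 of those 16TB drives per chassis:

```python
# Back-of-envelope check (our arithmetic, not a Cray-published configuration).
chassis_capacity_tb = 6784      # quoted maximum capacity per chassis
drive_capacity_tb = 16          # largest disk capacity mentioned

drives_per_chassis = chassis_capacity_tb / drive_capacity_tb
print(f"Implied drive count per chassis: {drives_per_chassis:.0f}")  # 424
```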

And Cray has some marquee customers lined up already with the E1000 selected as external storage for the first three US exascale supercomputing systems: Aurora for US Department of Energy (DOE) for use at the Argonne Leadership Computing Facility, Frontier at the Oak Ridge National Laboratory and El Capitan at the Lawrence Livermore National Laboratory.  

Lustre with added lustre

ClusterStor is a scale-out parallel filesystem array for high performance computing installations. It uses Cray-enhanced Lustre 2.12 software. The system features GridRAID with LDISKFS or ZFS (dRAID). 

Cray E1000 table

This E1000, presented as exascale storage, comes with three components: Scalable Storage units (SSUs) which are either flash-based or SAS disk-based with SSD caching, a Meta Data Unit and a System Management Unit. 

The system uses NVMe/PCIe gen 4-accessed SSDs that are either capacity-optimised (1DWPD) or set for a longer life (3DWPD). 

It has faster-than-usual host interconnect options: 200Gbit/s Cray Slingshot interconnect, InfiniBand EDR/HDR or 100/200 GbitE. The usual top speed is 100Gbit/s. The E1000 can connect to Cray supercomputers across Slingshot and to any other computer system using InfiniBand or Ethernet.

There is a management network utilising 10/1GbitE.

The ClusterStor E1000 filesystem can support two storage tiers in a filesystem – SSD and disk drives. There are three ways to move data between these tiers or pools:

  • Scripted data movement – the user directs data movement via Lustre commands (see the sketch after this list). Best for manual migration and pool data management.
  • Scheduled data movement – data movement is automated via the workload manager and job script. Best for time critical jobs and I/O acceleration of disk pool data.
  • Transparent data movement – read-through/write-back model supporting multiple file layouts of file. Best for general purpose pre/post processing.
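A minimal sketch of the scripted option, assuming hypothetical pool names and file paths: Lustre’s lfs migrate command rewrites a file’s data onto OSTs in a named pool, which is how a user would demote or promote data by hand.

```python
# Minimal sketch of scripted data movement between Lustre pools.
# Pool names and the file path are hypothetical; `lfs migrate -p <pool>`
# rewrites a file's data onto OSTs in the named pool. Requires a Lustre client.
import subprocess

def migrate_to_pool(path: str, pool: str) -> None:
    """Move one file's data to the given Lustre OST pool."""
    subprocess.run(["lfs", "migrate", "-p", pool, path], check=True)

# e.g. demote a cold result file from the flash pool to the disk pool
migrate_to_pool("/mnt/lustre/project/results/run42.h5", "hdd_pool")
```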

ClusterStor Data Services uses a relatively thin layer of SSDs (for caching) above a large disk pool to present the system as if it were mostly all-flash.

As yet there is no HPE InfoSight support – that may be something for the future as HPE only completed the acquisition of Cray in September 2019. Instead the E1000 offers HPC storage analytics on a job level with Cray View for ClusterStor via an optional software subscription.

The E1000 is a larger and faster array than Cray’s three existing ClusterStor systems: the L300F (flash), L300N hybrid flash/disk and L300 disk drive systems.

E1000 disk-based system (left) with flash cache, and all-flash system (right).

Competition

Cray identifies several competing arrays:

  • DDN EXAScaler – DDN ES400NV all-flash, DDN ES18K and ES7990 disk-based systems, and hybrid systems.
  • IBM Elastic Storage Server (ESS) GSxS all-flash and GLxC disk models. 
  • Lenovo Distributed Storage Solution for GPFS (DSS-G) with all-flash DSS-G20x and disk-based DSS-G2x0 arrays.

Cray says the E1000 has more throughput than all of these DDN, IBM and Lenovo systems and there is no automated scheduler-based data movement between their flash and disk pools. The IBM system requires Spectrum Scale software support while the E1000 includes its Lustre software.

Cray’s ClusterStor E1000 will be available starting in Q1 2020.

NetApp will adopt QLC flash in 2020

Las Vegas “NetApp will be integrating QLC enterprise NVMe SSDs across their product portfolio through 2020.”

So says Wells Fargo senior analyst Aaron Rakers, fresh from attending an analyst meeting this week at NetApp Insight in Las Vegas. He is referring to NetApp’s all-flash portfolio – mostly ONTAP and E-Series arrays. 

Rakers thinks the switch to QLC flash could have a detrimental effect on Seagate.

Intel QLC flash ruler format drive.

QLC (4bits/cell) flash is denser than the current TLC (3bits/cell) flash, but is slower and has a shorter working life (endurance). However, it is still many times faster than disk drives at data access, and a combination of over-provisioning and write management can extend endurance to an acceptable level.

Rakers suggests that wide QLC SSD adoption could affect shipments of 10K rpm disk drives. In a note to subscribers, he wrote NetApp “believes this could be a notable inflection point in the adoption of all-flash – believing that this will cannibalize the 10K RPM mission-critical HDDs.”

He cited estimates by TrendFocus, a market research firm, that 3.5 million such HDDs shipped in Q2 2019 – and Seagate shipped 60-65 per cent of them. That means 2.2 million drives at the mid-point. Western Digital is estimated to have less than one per cent shipment share. 

As an indication of QLC SSD cost versus TLC SSD cost, Pure Storage is using QLC flash in its FlashArray//C array, which is priced at a 40 per cent discount to its mainstream FlashArray//X array.

Micron, which makes QLC flash, is another believer in the idea that QLC flash will cannibalise disk drive sales.

Commvault forecasts return to growth

Commvault has stemmed its recent revenue decline in Q2, a useful first step in its quest for top-line growth. The data protection vendor anticipates revenue growth in the next two quarters.

Revenues for the second fiscal 2020 quarter ended September 30, 2019 were $167.6m, down 1 per cent. Net loss was $7.1m.

In the earnings call CFO Brian Carolan confirmed Q3 ’20 consensus forecasts for approximately $73m of software and products revenue are “within our expected range.”

He said revenue in the third fiscal 2020 quarter should be higher than Q2, and the company anticipates more growth in the fourth quarter.

Carolan said the company thinks Q1 ’20 “marked a baseline quarter” – a revenue trough, in other words.  “In Q2 we actually saw the best quarter ever in terms of our appliance sales for HyperScale.” The CEO said: “We closed our single largest appliance deal in the US this last quarter.” 

The CFO said fy2021 will be the first year that the company will “see some meaningful… renewals of the original subscription contracts that we sold in FY ’18. It’s going to create an opportunity for some upsell and cross-sell as well as some of our new technologies [Hedvig] that we recently acquired.”

Q2 2020 by the numbers

Software and products revenue was $68.6m, down one per cent y-o-y. Services revenue was $99.0m, also down one per cent. Subscription and utility annual contract value (ACV) grew 59 per cent to $121m. Total repeatable revenue (subscription software and maintenance services) was $121.8m, up one per cent. These repeatable revenue streams outperform Commvault’s non-repeatable revenue.

Operating cash flow was $24m compared to $17.8m a year ago. Free cash flow increased 35 per cent to $23.4m. Total cash and short-term investments were $475.2m at quarter end. Total operating expenses fell five per cent to about $110m.

Sanjay Mirchandani, Commvault CEO, who joined the company from Puppet Labs in February 2019, said in the earnings call: “We delivered results above expectations for both revenue and operating margins [14.8 per cent]. This performance, as well as our positive outlook, reflects the good headway we are making on the simplification, innovation and execution priorities we established at the start of the fiscal year. We have work to do but we are making real, measurable progress and we remain focused on setting achievable goals and meeting them.”

Return to growth?

The Q2 revenue number is an improvement on recent quarters, as this list of revenue changes and profits/losses shows:

  • Q2 fy19   +0.7 per cent and profit of $0.9m
  • Q3 fy19   +2 per cent and profit of $13.4m
  • Q4 fy19   -2 per cent and loss of $2.2m 
  • Q1 fy20   -8 per cent and loss of $6.8m
  • Q2 fy20 -1 per cent and loss of $7.1m <— revenue decline rate slowed

A chart of quarterly revenues by fiscal year shows the revenue flow changes.


Nasuni bolsters Cloud File Services with analytics support, migration tools


Nasuni has issued a major release of Cloud File Services, adding an analytics connector, migrators for AWS and Azure, plus support for Google Cloud Platform, NetApp StorageGRID and Nutanix Objects.

In effect Nasuni is constructing a global public cloud file fabric to replace on-premises filers. Its file services software unifies NAS filers into a global file system that can be used at a basic level to sync and share files across distributed sites through local cloud storage gateways. These are backed up by cloud file storage, using AWS and Azure, to provide more scale than on-premises NAS systems, with built-in backup, disaster recovery and other file services.

The Analytics Connector creates and exports a temporary second copy of customers’ file data for use with analytics software. This is similar in concept to Komprise’s virtual data lake. The copy can also be used for AI, machine learning and by data recognition tools such as AWS Rekognition and Macie.

The Connector supports search software, such as SharePoint Search, Acronis FilesConnect, and Cloudtenna.

Cloud Migrator for AWS sends Nasuni file data to AWS S3 and Cloud Migrator for Azure sends the data to Azure Blob Storage. There is no Cloud Migrator for GCP yet, but the announced support for GCP suggests one is on its way.

Nasuni’s new release also adds support for NetApp StorageGRID and Nutanix Objects as repositories for file data. All enhancements will be available by the end of the year.

NetApp steers MAX Data towards Elements HCI

Las Vegas NetApp has plunked MAX Data server data access speed-up technology on to the development road map for the company’s Elements HCI.

MAX Data is server-side software, installed and run on existing or new application servers to accelerate applications. Data is fetched from a connected NetApp array and loaded into the server’s storage class memory, meaning Optane currently.

MAX Data supports a memory tier of DRAM, NVDIMM and Optane DIMM with a storage tier of a NetApp AFF array. Server apps get faster access to data because it is served from an Optane cache that is fed by the ONTAP array and that also receives data to be stored.

MAX Data architecture scheme.

The same principle – using a MAX Data storage-class memory cache in servers that access a shared block storage resource – is applicable to the Elements hyperconverged infrastructure architecture.

The more IO-bound the application, the greater the server speed-up benefit from using MAX Data.

Adam Carter, chief product architect for HCI at NetApp, told Blocks & Files about the roadmap status of MAX Data for Elements HCI today. He thinks the addition is very sensible.

We think juicing Elements HCI with MAX Data would give NetApp a distinctive feature to differentiate it from near-HCI competitor Datrium. Also it could open a performance gap with rival HCI systems targeting the enterprise, such as Dell EMC’s VxRail and Nutanix.

Carter pointed to the ongoing migration of ONTAP data services to Elements HCI, with SnapMirror already moved across. This will add to Elements HCI’s enterprise credentials.

There is no committed delivery date for MAX Data on Elements HCI.

Pensando recruits NetApp as acceleration chip customer

Accelerator card startup Pensando Systems emerged from stealth this month, announcing it had raised $278m in three rounds since its inception in 2017. What has gotten investors so fired up?

One clue could be that NetApp has signed up as a customer.

NetApp’s A400 all-flash ONTAP array, announced today, uses a Pensando chip to offload some data reduction processing from the main controller CPUs.

The ASIC chip performs compression, decompression and checksum calculations for deduplication and is located on the A400’s cluster interconnect card, which uses 100GbitE. The actual deduplication is performed by WAFL functionality inside ONTAP.

A400 Cluster Interconnect card enclosure. Note the large heatsink

This is NetApp’s first step into such a hardware offload and it may spread across the AFF all-flash array range.

Pensando is pitching its technology as a faster, more scalable alternative to AWS Nitro. Last week the company said its proprietary accelerator card, which is still in development, was influenced by input from vendors such as NetApp, HPE and Equinix. The card is already used by multiple Fortune 500 customers such as Goldman Sachs, which is also an investor.

HPE was a lead investor in Pensando’s recent $145m C-series funding, so it is reasonable to infer that the accelerator technology will pop up in HPE gear in due course.

Western Digital goes purple in 14TB range rampage

Western Digital has added a 14TB video surveillance drive, accompanied by a 512GB microSD card, to the Purple line.

For WD, ‘Purple’ signifies video surveillance media and this is an update on the 12TB Purple helium-filled, 3.5-inch form factor drive announced in June 2018. It seems to be basically the 12TB drive with an extra platter, as the weight has gone up from that drive’s 1.46lb/0.66kg to the 14TB drive’s 1.52lb/0.69kg. We reckon it has nine platters – WD’s data sheet doesn’t reveal this information.

WD’s Purple-branded disk drives.

Buffer size has doubled to 512MB and sustained streaming bandwidth has increased from 245MB/sec to 255MB/sec. There is no increase in the number of cameras supported – 64 – and the three-year limited warranty remains in place.

WD has also announced the Purple SC QD101 microSD card, called an Ultra Endurance product. Capacities range from 32GB through 64GB, 128GB and 256GB to 512GB. It uses 96-layer 3D NAND and, we assume, TLC flash technology.

A 256GB Purple QD312 microSD card was launched in April this year, supporting up to 768TBW across its 64GB, 128GB and 256GB capacity points. The QD101, with its doubled maximum capacity, would seem to replace it.
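For scale (our arithmetic; the three-year usage period is an assumption, not a WD-published warranty term), 768TBW at the 256GB capacity point works out to roughly 2.7 drive writes per day:

```python
# Rough endurance conversion (our arithmetic; the three-year period is an
# assumption, not a WD-published warranty term for this card).
tbw = 768                      # quoted terabytes written for the QD312
capacity_tb = 0.256            # 256GB capacity point
years = 3                      # assumed usage period

full_card_writes = tbw / capacity_tb           # ≈ 3,000 complete overwrites
dwpd = full_card_writes / (years * 365)        # drive writes per day
print(f"{full_card_writes:.0f} full-card writes ≈ {dwpd:.1f} DWPD over {years} years")
```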

WD said the QD101 is an ultra-endurance drive without revealing a TBW number. At time of writing no data sheet is available on the company’s website.

The Purple 14TB is another iteration of WD’s 14TB nearline drive technology – the company launched the 14TB WD Red drive for NAS use a few days ago.

The 14TB WD Purple HDD is available now from select WD resellers. The Purple SC QD101 microSD card is expected to be available in the first quarter of 2020.

NetApp adds Keystone subscription billing

Las Vegas NetApp today introduced a storage subscription service called Keystone.

NetApp Keystone is an umbrella offering, covering on-premises and public cloud deployments, and both customer-managed, NetApp or partner-managed systems.

Keystone brings NetApp up to date with the big storage industry trend towards subscription billing. Prominent examples include Nutanix and HPE’s GreenLake service.

At first sight, GreenLake looks the most similar service in concept and scope to NetApp Keystone. How they compare depends on detailed pricing terms – and NetApp is not revealing these yet.

The company is also simplifying how customers deal with NetApp when obtaining its storage gear and services. Any NetApp service will be available under Keystone.

Keystone options

Under Keystone customers can buy NetApp products, subscribe to them or choose metered usage. Subscription and metered billing are called cloud-like in the slide above. Customers can mix and match purchases and subscriptions.

Keystone offers the ability to burst to the cloud from on-premises. It’s starting with NetApp’s new systems, the A400, FAS8300 and FAS8700.

NetApp services that are obtained in the public cloud are available on AWS, Azure and GCP. Utility pricing is offered with zero commitment.

NetApp emphasises the simpler process involved, with three steps: choose from three performance tiers, then from three storage types, and finish up with the management style, as in the slide above.

It says there are performance and efficiency guarantees available, plus flat and predictable support pricing. Customers can migrate workloads to NetApp environments in the public cloud under the Keystone umbrella, or even repatriate them.

NetApp launches all flash object storage and more active ONTAP arrays

Las Vegas NetApp today pumped out a big hardware release centred on flash object and SAN storage with hybrid disk array action.

At NetApp Insight, the company also announced NetApp Keystone, a subscription billing service, and lots of software simplification and cloud storage-related announcements. We’ll focus on the StorageGRID hardware in this story.

NetApp made three StorageGRID announcements today, along with three all-flash ONTAP arrays and two hybrid arrays.

NetApp’s hardware range is anchored on the ONTAP operating system arrays, either all-flash AFF systems or FAS hybrid disk/flash arrays. These are accompanied by E-Series arrays with a stripped down software environment; StorageGRID object storage systems; SolidFire flash arrays and Elements hyperconverged systems using SolidFire flash storage.

StorageGRID

Last month NetApp introduced the storage industry’s first all-flash classic object storage system for workloads needing high concurrent access to many small objects. (We wrote about it here.)

A manufacturing customer developed the prototype when it found a disk-based StorageGRID array was too slow to handle sensor data from its production equipment line. NetApp, in response to this use case, developed the SGF6024 in six months.

The StorageGRID range starts with an 8-CPU core SG571 with 144TB raw capacity from its 12 x 12TB disk drives, and continues with the SG5670 (720TB, 60 drives, 8-core CPU) and SG6060 (696TB, 58 disk drives, 2 caching SSDs, 40-core CPUs).

The new SGF6024 is built from two components: a 1U compute blade, and an EF570 storage shelf with 24 slots for 2.5-inch SSDs.

SGF6024 with compute blade on top.

All the EF570 SSDs are supported, with capacities from 800GB up to 15.36TB. These are mainstream TLC SSDs and not the new, denser QLC flash as used by VAST Data and Pure Storage.

Maximum raw capacity per chassis with 7.6TB drives is 182.4TB. The EF570 supports latencies of <300µs with up to 1,000,000 4K random read IOPS.

In a briefing with Blocks & Files, Duncan Moore, the head of NetApp’s StorageGRID software group, said the SGF6024 has about 50 per cent lower latency than SG6060. There is a moderate increase in data access speed – and room for more.

Moore said the StorageGRID software stack has been improved and there is scope for more improvement as it becomes more efficient. There was no need for such software efficiency before; the software had the luxury of operating in disk seek time periods.

Filers are an alternative option for Edge deployments such as manufacturing process line sensor handling. Moore said filers are better suited to transaction workloads with lots of file updates. There is no such thing as an update in the object world – all objects are unique. If the workload is not transaction-based then object storage is a good choice.

He said direct comparison between flash object storage and filer performance is not practical apart from comparing latencies. This is because filers are rated in MB/sec while object systems are measured in objects/sec, or Gets and Puts/sec.

AI workloads can feature concurrent access to small objects and NetApp says the SGF6024 is suited to this use case. It can be added to existing StorageGRID systems as a high-performance island.

Clearly the SGF6024 is faster than disk-based object storage systems and competing object storage hardware suppliers will look to develop their own all-flash object storage.

NetApp today also introduced SG1000 compute nodes providing high-availability load balancing and improved grid administration by hosting the admin node for the namespace on a single set of nodes.

 

SG1000 load balancer node. This lid-off shot shows its similarity to the SGF6024 compute unit.

The existing SG6060 gets expansion nodes and offers up to 400PB in a single namespace. This makes it suited for high density, high-capacity, large object workloads.

V11.3 of the StorageGRID software is needed for the new hardware, and this also provides the ability to tier off objects to Azure Blob Storage, supporting up to 10 Cloud Storage Pools per grid.

AFF ONTAP arrays get more active

NetApp’s ONTAP hardware range consists of all-flash FAS (AFF) arrays and hybrid flash/disk FAS arrays. There’s a range of AFF systems, from the new C190 entry level, through the A220, A300, A320 to the A700 top end model.

NetApp today announced an A400, a C190 and an All-Flash SAN, which seems like an oddity at first. It is an ONTAP system with a SAN-only (block) personality. All the file services have been made unavailable.

But it also has newly-developed active:active controller technology with automated failover. Until now NetApp ONTAP arrays have had active:passive controllers with a slower recovery from controller hardware failure. V9.7 ONTAP is required for this active:active capability.

The all-flash All SAN array (ASA) uses A220 or A700 hardware and a new A400 will be supported in coming months. Initially this All SAN system is not clusterable, as it is only a single dual-controller system.

NetApp positions this system for mission-critical applications running on databases such as Oracle and SQL Server.

Octavian Tanese, NetApp’s head of ONTAP software and systems group, told Blocks & Files that NetApp aimed to gain market share in SAN areas where customers don’t want to deal in file stuff.

A400

With a data acceleration capability that offloads storage efficiency processing, the A400 is a new departure in AFF arrays. NetApp has been tight-lipped about this feature but the implication is that the A400 uses a hardware assist for some or all of ONTAP’s compression, deduplication and compaction processes.

According to NetApp, the A400 incorporates end-to-end NVMe design and offers extremely low latency – no numbers supplied – at a mid-range price point for enterprise applications, data analytics, and artificial intelligence workloads.

The system has a dual-controller chassis and up to two NVMe-oF (100GbitE link) or SAS expansion shelves. The host connectors include 100GbitE and 32Gbit/s Fibre Channel with NVMe/FC supported. 

There are up to 480 NVMe or SAS SSDs per HA controller pair. The NVMe SSD capacities are 1.7TB, 3.8TB, 7.6TB and 15.3TB, making the maximum raw capacity 7.3PB. The SAS SSDs range in capacity from 960GB to 30.6TB, doubling the maximum raw capacity. NetApp provided no effective capacity measures before announcement time. 

The existing A320 supports 576 SSDs and has a maximum effective capacity (after data reduction) of 35PB, implying 60.8TB drives, which are not available. However, with a 4:1 data reduction ratio applied, we get to 15TB drives, which are available.

Applying a 4:1 data reduction ratio to the A400 gives it a maximum effective capacity of 29.4PB in NVMe guise and 58.8PB with SAS SSDs.
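Here is our working, for anyone who wants to check the arithmetic behind those estimates (the 4:1 data reduction ratio is our assumption, inferred from the A320 figures above, not a NetApp-published number):

```python
# Back-of-envelope effective capacity estimates (the 4:1 reduction ratio is
# our assumption inferred from the A320 figures, not a NetApp-quoted number).
drives_per_ha_pair = 480
nvme_max_tb = 15.3
sas_max_tb = 30.6
reduction_ratio = 4            # assumed data reduction ratio

nvme_raw_pb = drives_per_ha_pair * nvme_max_tb / 1000      # ≈ 7.3PB
sas_raw_pb = drives_per_ha_pair * sas_max_tb / 1000        # ≈ 14.7PB

print(f"NVMe: {nvme_raw_pb:.1f}PB raw, {nvme_raw_pb * reduction_ratio:.1f}PB effective")
print(f"SAS:  {sas_raw_pb:.1f}PB raw, {sas_raw_pb * reduction_ratio:.1f}PB effective")
```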

That 29.4 to 58.8PB straddles the A320’s capacity. NetApp told us that the A400 delivers up to 50 per cent higher performance than its predecessor, without the predecessor system being identified. It could be the A320.

AFF C190

AFF C190.

The C190 came into being because NetApp saw there was market room under the A220 for a less powerful system. The C190 is not expandable and has 24 x 990GB SAS SSDs in its 2U cabinet, giving 23.76TB raw capacity. After deduplication and compression, effective capacity is 50TB. That’s roughly a 2:1 data reduction ratio, which is different from the one we have estimated for the A400. We are asking NetApp to clarify this situation.

Two FAS additions

The FAS line starts with the FAS2720 and passes through the FAS2750 and FAS8200 on up to the FAS9000. There are two new models: the FAS8300 and the FAS8700.

NetApp has a table positioning these systems:

NetApp FAS positioning table.

The FAS8200 gets replaced by these systems, with the 8700 having twice the capacity of the 8300.

NetApp has not detailed any availability or pricing information.

Komprise opens doors to NetApp CloudVolumes

Komprise, the file lifecycle management startup, is adding Cloud Volumes ONTAP coverage to its NetApp support.

Komprise COO Krishna Subramanian said today in a prepared quote that customers can “gain performance improvement and cost reduction benefits from understanding how data is used and storing it in the appropriate place.”

Announcing the update at NetApp Insight 2019 in Las Vegas, Komprise also revealed it is beta testing Data Migration functionality for S3 files. The company currently supports NFS and SMB.

Data Migration S3 will enable Komprise customers to move files to NetApp’s StorageGRID object store, using the S3 protocol. The software will run multiple migrations in parallel and manage them centrally. It throttles the bandwidth used as needed, and retries on network and storage failures. When Komprise has S3 data movement in general availability, it will be able to move files from any S3 source to any S3 target.
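To give a flavour of the underlying plumbing, here is a generic illustration of copying objects between two S3 buckets with simple retries. It is not Komprise’s implementation, and the bucket names and backoff scheme are our own hypothetical stand-ins.

```python
# Generic S3-to-S3 copy sketch with simple retries; not Komprise's code.
# Bucket names are hypothetical; boto3 is the standard AWS SDK for Python.
import time
import boto3

s3 = boto3.client("s3")
SOURCE_BUCKET = "filer-archive-src"      # hypothetical source bucket
TARGET_BUCKET = "object-tier-dst"        # hypothetical target bucket

def copy_with_retries(key: str, attempts: int = 3) -> None:
    """Copy one object, retrying on transient network/storage failures."""
    for attempt in range(1, attempts + 1):
        try:
            s3.copy({"Bucket": SOURCE_BUCKET, "Key": key}, TARGET_BUCKET, key)
            return
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)     # crude backoff in place of real throttling

# Walk the source bucket and copy each object to the target.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SOURCE_BUCKET):
    for obj in page.get("Contents", []):
        copy_with_retries(obj["Key"])
```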

Last month this startup announced Deep Analytics, a feature that investigates file estates, such as ONTAP filers residing on-premises. It searches across storage silos and creates a virtual data lake of files to fit the customer’s search criteria. Users can view, find, and tag data in multiple file silos, and export this data set to any analytics application or destination, such as AWS Lambda.

The data set can be operated on as a discrete entity, as all the permissions, access control, security and metadata of the source file components are kept intact as this virtual data lake moves.


NetApp revenue dip means it has to raise its game

NetApp’s annual Insight conference opens in Las Vegas today, with the company having had two quarters of declining revenues and facing a third. It faces the prospect of reinventing itself as its business experiences a downturn.

The company is the industry’s largest standalone supplier of unified SAN and file arrays, the all-flash AFF and hybrid FAS arrays, both running the ONTAP operating system. Alongside these are the E-Series stripped-down arrays, the SolidFire all-flash arrays, the Elements near-hyper-converged systems, StorageGRID object storage, and its Data Fabric, with NetApp storage having fast connections to, or running in, the AWS and Azure public clouds.

A chart showing its quarterly revenues organised by fiscal year lays bare the problem the company is facing; customers aren’t buying enough of its equipment and services and revenues have dipped since the third fiscal 2019 quarter.

The Q2 fy2020 value is the company’s guidance, not an actual revenue number.

We can see a roller coaster pattern overall with revenues generally rising strongly from fy2011 to fy2012, then more gently to fy2013 after which there was a 3-year dip. George Kurian became NetApp’s CEO in June, 2015 and turned things around.

The revenue dip bottomed out in fy2017 and revenues started rising again in fy2018, only for growth to slacken off in fy2019, turn into a fall in the fourth quarter and then a steeper fall in the first fiscal 2020 quarter, with another decline expected in the second quarter.

The Q1 fy2020 revenue slump was attributed to an enterprise customer buying slowdown due to macro-economic factors such as the trade war between China and the USA. HPE also registered a buying slowdown. IBM shows a pattern of declining revenues but this has been ongoing for some years. Neither Dell-EMC nor Pure Storage experienced what NetApp is going through, which suggests that the macro-economic factors are not affecting these suppliers equally.

Is this a short-term dip for NetApp or is history repeating itself?

The chart above shows a general overall downward trend over a six-year period from the peak of fy2013. If NetApp can’t pull out of its three-quarter revenue decline then, yes, we might conclude it’s not a short-term issue. But, whatever the dip’s longevity, it needs to find ways to grow its revenues.

Where could NetApp turn for growth? The main growth areas Blocks & Files can see are hyper-converged systems, data protection and management, multi-cloud data access, and high-performance file storage.

HCI market facing duopoly

The hyper-converged infrastructure (HCI) appliance business is booming, as Dell-EMC and Nutanix revenue growth illustrates. NetApp was late to this party and joined in with a semi-HCI system, one sold as a single system but with servers getting storage from a shared Elements array rather than from an aggregated virtual SAN built from the HCI nodes’ direct-attached storage. The latter is the HCI model used by Dell EMC and Nutanix, and also by HPE and Cisco. NetApp’s quasi-HCI system is an outlier, like Datrium’s, and faces an uphill struggle because a lot of market education is needed to explain exactly what it is and how it does what it does.

Dell EMC and Nutanix own more than two thirds of the HCI market and the ability of NetApp to grow into a strong third player is, Blocks & Files thinks, questionable.

Object and multi-cloud storage

The object storage market is relatively slow-growth compared to the HCI market, and is anyway maturing into a way of storing files and commoditising into the storage of S3-accessed objects, limiting the opportunity for breakout growth.

Object storage is not fast growing, but there are no overwhelming incumbents as there are with HCI.

The hybrid multi-cloud data storage and access market is being pursued by all the main storage players. NetApp was early into this area with its Data Fabric approach but, so far, this has not enabled it to avoid its current predicament.

Subscriptions and the edge

Also, if its customers demand a general move to subscription revenues then NetApp may have to take a multi-quarter hit as perpetual licence income moves to lower, but longer-lived, subscription sales. There is no short-term growth opportunity here.

The Internet of Things and the general edge computing area is nascent. The AI area is a focus for every single one of its competitors and NetApp has no obvious edge here. These competitors are also all into storage for containers, NVMe over Fabrics, Optane SSDs and storage class memory. Although NetApp appears to have an advantage with its MAX Data technology, which significantly speeds up data access by servers, there is little evidence that it is making headway here.

Data Management and fast file access

The backup business is booming, witness Veeam passing a billion dollar run rate. And backup vendors are going into the public cloud and data management.

The data management area – aggregating and using unstructured data for test/dev, analytics, compliance and so forth – looks promising, with startups Actifio, Cohesity, Delphix and Rubrik established and growing. Blocks & Files would suggest that an acquisition here would bolster NetApp’s ability to enter this market. Acquisition could be a quick way to up its backup and data management game.

We would also suggest that very high-speed access to high-performance computing-type file data, as espoused by startups Qumulo and WekaIO, might be worth a look by NetApp’s potential acquisitions team.