
Fungible launches DPU-driven storage server to start data centre revolution

Fungible Inc. has launched the FS1600, a storage appliance driven by its DPU microprocessor, as the next step in its quest to achieve composable data centre domination.

Fungible’s DPU is a dedicated microprocessor that handles millions of repetitive IT infrastructure-related processing events every second. It does this many times faster than an x86 CPU or even a GPU.

The lavishly funded startup also intends to deploy DPUs in servers. Its goal is to link its products across a TrueFabric network – still in development – using its Composer control plane to build massively scalable and efficient data centres linking GPU and x86 servers. Fungible GPU server cards are expected by the end of 2021 and should deliver data from FS1600 storage servers faster than GPUDirect storage systems.

Pradeep Sindhu, CEO and founder of Fungible, said in a launch statement: “Today, we demonstrate how the breakthrough value of the Fungible DPU is realised in a storage product. The Fungible Storage Cluster is not only the fastest storage platform in the market today, it is also the most cost-effective, reliable, secure and easy to use.”

He added: “This is truly a significant milestone on our journey to realise the vision of Fungible Data Centres, where compute and storage resources are hyper-disaggregated and then composed on-demand to dynamically serve application requirements.

Fungible Storage Cluster

Fungible composable disaggregated data centre scheme.

The Fungible FS1600 storage server is a data plane system which is managed from a separate control plane running on a Fungible Composer standard x86 server.

Fungible Composer is an orchestration engine and provides services for storage, network, telemetry, NVMe discovery, system and device management. Storage, network and telemetry agents in the FS1600 storage server link to these services.

The FS1600 is a scale-out, block storage server in a 2RU, 24-slot NVMe SSD box with two F1 variant DPUs controlling its internal activities. It is accessed via NVMe-over-Fabrics across TCP.

Fungible FS1600 storage server

The FS1600 can be deployed as a single node or clustered, with and without node failure protection, out to hundreds or even thousands of nodes.

It differs from any other 24-slot, 2RU box filled with SSDs because it is seriously faster, offering 15 million IOPS compared to the more typical two to three million IOPS of other 24-slot boxes. That's with 110μs latency, 576TB capacity, and inline compression, encryption and erasure coding. It means a rack full of FS1600s could output 300 million IOPS.
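The rack-level figure falls out of simple multiplication. Here is a quick sketch, assuming twenty 2RU FS1600s in a standard 42U rack with 2U left over for a top-of-rack switch (our assumption, not a Fungible-stated configuration):

```python
# Back-of-the-envelope check on the "300 million IOPS per rack" figure.
# Assumes a 42U rack holding twenty 2RU FS1600s, leaving 2U for a switch.
iops_per_fs1600 = 15_000_000       # quoted per-node figure
nodes_per_rack = 20                # 20 x 2RU = 40U, with 2U spare in a 42U rack

rack_iops = nodes_per_rack * iops_per_fs1600
print(f"Rack-level IOPS: {rack_iops:,}")   # 300,000,000
```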

Fungible claims it offers a 5x media cost reduction versus hyperconverged infrastructure (HCI) in a 1PB effective capacity deployment. That's because it has better SSD utilisation, meaning less over-provisioning, and uses efficient erasure coding that delivers protection equivalent to 3-copy replication with far less capacity overhead.
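The erasure-coding part of that claim is easiest to see with some capacity arithmetic. The sketch below is our own illustration rather than Fungible's published sums, and assumes a hypothetical 8+2 erasure-code layout:

```python
# Raw capacity needed for 1PB of protected, usable data under two schemes.
# The 8+2 erasure-code layout is an illustrative assumption, not a Fungible spec.
effective_pb = 1.0

# 3-copy replication: every byte is stored three times.
raw_replication = effective_pb * 3

# 8+2 erasure coding: 2 parity strips per 8 data strips -> 1.25x overhead,
# yet the data still survives two device failures, like 3-copy replication.
data_strips, parity_strips = 8, 2
raw_erasure = effective_pb * (data_strips + parity_strips) / data_strips

print(f"Raw flash, 3-copy replication: {raw_replication:.2f} PB")
print(f"Raw flash, 8+2 erasure coding: {raw_erasure:.2f} PB")
print(f"Media saving: {raw_replication / raw_erasure:.1f}x less raw flash")
```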

According to Fungible, the FS1600 substantially outperforms competing systems and is much cheaper at petabyte scale because it uses the media less wastefully. A 2-node FS1600 cluster can deliver 18 million IOPS, which is about 10 times faster than an equivalent Dell EMC Unity-style system and 30 times faster than Ceph.

Under the hood view of FS1600.

There are three configurations:

  • Fast (7.6TB SSDs, 81 IOPS/GB, 326MB per sec/GB)
  • Super Fast (3.8TB SSDs, 163 IOPS/GB, 651MB per sec/GB)
  • Extreme (1.9TB SSDs, 326 IOPS/GB, 1,302MB per sec/GB)

The FS1600 will support 15TB SSDs at a later date, along with deduplication, snapshots, cloning and NVMe over RoCE.

Potential customers

Fungible is targeting tier 2 cloud service providers – it says its technology gives them four to five times more performance than AWS and Google Cloud.

The company cites Uber, Dropbox and Flipkart as examples of tier 2 CSPs. Fungible also hopes to sell DPU chips to hyperscalers of the AWS, Azure and Facebook class. It is staying away from the enterprise market, but we don't know if that is a for-now or forever thing.

FS1600 clusters are available today from Fungible and partners.

Nebulon: Here’s why we are better (and cheaper) than SANs and HCI

Analysis. Nebulon, a storage startup that came out of stealth in June, aims to compete with SAN arrays and HCI. It has built a product that fits into neither category. Let’s take a look.

A Nebulon array or nPod is a set of servers with locally-attached SSDs that are accessed via an interface card called a storage processing unit (SPU). The SPUs are networked and aggregated into a virtual SAN.

Think of the SPU as replacing the server RAID card or SAN Host Bus Adapter (HBA). The SPU uses dual Arm processors and various offload engines to perform SAN controller functions but at a lower hardware cost.

Nebulon SPU

Nebulon versus SAN

Nebulon has worked out some cost examples comparing nPods to a Pure FlashArray in various application scenarios: containerised NoSQL development, virtualization development, virtualization production, and virtualization stretch cluster. The company used public website pricing information for Pure Storage, Emulex HBAs, a Cisco MDS SAN switch, and Ethernet switches. Of course, many customers will have secured discounts off list price.

In each case the application servers were the same: Supermicro 2029U-TR4, Quad 1-GbE, 1000W, 2 x Intel 6242 16-Core 2.8GHz including 12 x 32GB PC4-23400 2933MHz DDR4 ECC RDIMM. The configurations supported 32 VMs or containers per server.

Nebulon SPU components.

In the NoSQL example there were 12 application servers. A Pure Storage FlashArray X50R3-FC-63TB-45/18-EMEZZ with 18 x 3.84TB SSDs was compared to each application server having a Nebulon SPU with six 960GB Intel SATA SSDs.

The protection overhead was 33 per cent for both Pure and Nebulon. Pure had a 4:1 data reduction ratio while Nebulon’s was 3.6:1. The effective capacity per server was 12.2TB – 0.38TB per container.

The total system cost for the Pure configuration was $324,736, compared to the Nebulon system cost of $230,403, giving Nebulon a 41 per cent cost advantage. The storage cost per server was $11,317 for Nebulon and $19,178 for Pure Storage.
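For readers wanting to check the arithmetic, the 41 per cent figure appears to be the Pure premium expressed relative to the Nebulon price. A quick sketch using the quoted totals:

```python
# How the "41 per cent cost advantage" appears to be derived from the
# quoted NoSQL-scenario totals (list prices, before any customer discounts).
pure_total = 324_736
nebulon_total = 230_403

premium_over_nebulon = (pure_total - nebulon_total) / nebulon_total
saving_vs_pure = (pure_total - nebulon_total) / pure_total

print(f"Pure costs {premium_over_nebulon:.0%} more than Nebulon")   # ~41%
print(f"Nebulon costs {saving_vs_pure:.0%} less than Pure")         # ~29%
```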

In the virtualization development scenario, with eight servers, Nebulon had a 49 per cent cost advantage over Pure, and its advantage in virtualization production and stretch clusters, both 216 servers, was 33 per cent and 63 per cent respectively.

Nebulon vs HCI

HCI is a challenge to Nebulon and the company has to convince customers to put their new applications onto its storage instead of existing HCI setups. Nebulon says its advantage comes from offloading storage-related processing from the host server CPUs. This is hard to quantify in cost terms because there are so many different Xeon configurations, core counts, and DRAM amounts.

So Nebulon approaches it indirectly, with the notion of offloading storage-related processing from the host server CPUs and thereby returning CPU cores to application use. On this view, one SPU is equivalent to four x86 cores.

The HCI CPUs' storage-related processing is an overhead. Nebulon estimates that vSAN adds a 10 per cent overhead to the HCI processors, encryption can add a further 5 to 15 per cent, as can dedupe and compression. Altogether there is 20 to 40 per cent storage-related overhead, with Nebulon CEO Siamak Nazari telling us: “We model 25 per cent conservatively.”

In other words, one in every four HCI CPUs is used up doing storage-related processing. In an 8-node HCI configuration, using identical server CPUs, two CPUs are used up doing storage, encryption, and data reduction. Run this processing on Nebulon’s SPUs and you get those two CPUs back, or you simply don’t need them. A 6-server Nebulon system can do the work of an 8-server HCI system.
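Here is a minimal sketch of that sizing argument, using Nebulon's 25 per cent modelling assumption:

```python
# Nebulon's sizing argument: if ~25% of each HCI node's CPU goes on storage,
# encryption and data reduction, offloading that work to SPUs frees it up.
hci_nodes = 8
storage_overhead = 0.25                                     # Nebulon's "conservative" model

cpu_lost_to_storage = hci_nodes * storage_overhead          # 2 nodes' worth of CPU
application_capacity = hci_nodes * (1 - storage_overhead)   # 6 nodes' worth

print(f"CPU consumed by storage work: {cpu_lost_to_storage:.0f} nodes' worth")
print(f"Application capacity of the 8-node HCI cluster: {application_capacity:.0f} nodes")
# i.e. a 6-server Nebulon system, with SPUs doing the storage work,
# matches the application capacity of an 8-server HCI cluster.
```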

Because Nebulon SPUs mean servers need fewer CPU cores, customers can save money on per-core software licensing schemes. Craig Nunes, Nebulon's COO, said: “Any core-based license scheme is tough for HCI as you have cores that run the HCI Infrastructure.”

Nebulon prefers to sell against HCI on management costs, licensing costs, host server OS flexibility, better protection against server faults, and support of both local and shared storage.

You can buy Nebulon as easily as HCI, Nunes says, because it’s sold through server vendors such as HPE and Supermicro.

VAST Data makes AI workloads ‘accessible to NFS users’

VAST Data “democratises HPC performance” and makes AI workloads accessible to NFS users – or so the company claims.

Using its LightSpeed technology, the high-end storage startup runs vanilla NFS storage much faster than most parallel file systems, reaching a data delivery speed of 92.6GB/s. This is a speed-up of some 50x, compared with usual NFS rates, enabling VAST data storage arrays to feed Nvidia GPUs at line rate.

VAST wanted its array to feed data to an Nvidia GPU server but didn’t have a parallel file system, Howard Marks, VAST Data’s technology evangelist, told us. However, it needed parallel performance to feed data simultaneously through the DGX-2’s 8 ports leading to its 16 Tesla GPUs.

The company appears to be in no hurry to buy or build a parallel file system of its own. Any software upgrades to the parallel file system have to be synchronised to Linux upgrades and this is a complex process, according to Marks. “One VAST customer likened this to a suicide mission.”

So what does VAST do, instead?

It starts with the VAST Data Universal Storage array and NFS, and adds RDMA, nconnect (NFS multi-pathing) and GPUDirect to this engine to make it run faster.

The VAST system is a scale-out cluster based on multiple compute nodes (CNodes), front-end controllers which access back-end data boxes (DNodes) across an internal NVMe fabric. The DNodes have a single QLC flash storage tier, with Optane SSDs used to store file metadata and for write-staging before writes are committed to the QLC flash. The scheme stripes data across the DNodes and their drives for faster access and resilience.

VAST Data storage diagram.

According to Marks, an NFS source system can deliver 2GB/sec from one mountpoint across a single TCP/IP network link without nconnect.

Linux has nconnect multi-pathing code to add multiple-connection support to NFS. VAST's system uses this and supports port failover to aid reliability.

NFS traditionally operates across the TCP protocol. This involves the operating system processing IO requests through its software stack and copying the data from storage into memory buffers before sending it to the target system and its memory. RDMA (Remote Direct Memory Access) speeds data access because no storage IO stack processing or copying into memory buffers is needed. Linux supports RDMA, so VAST uses NFS-over-RDMA instead of TCP to speed data transfer across the link.

The company supports Mellanox ConnectX network interface cards (NICs) because, Marks says, Mellanox’s RDMA implementation is the most mature and it dominates the market. These NICs support Ethernet and InfiniBand.

Marks said: “Our secret sauce is the NFS multi-path design with multiple TCP/IP sessions on separate NICs.” A VAST graphic shows the concept:

VAST Data’s GPU data feeding scheme

The green boxes are NICs. There are eight interface ports on a DGX GPU server and VAST can link each of them to its compute nodes.

NFS over RDMA with nconnect multi-pathing pumps data transfer speed up to around 32GB/sec of bandwidth. There is a bottleneck because DGX-2 server memory is involved in the transfer. VAST says there is still more network capacity available.

Nvidia’s GPUDirect technology bypasses the memory bottleneck by enabling DMA (direct memory access) between GPU memory and NVMe storage drives. It enables the storage system NIC to talk directly to the GPU, avoiding the DGX-2’s CPU and memory subsystem.

Blue arrows show normal data transfer steps. Orange line shows CPU/memory bypass effect of GPUDirect

Data transfer speed step summary

  • NFS over TCP reaches around 2GB/sec across a single connection
  • NFS over RDMA is capped at 10GB/sec across a single connection with a 100Gbit/s NIC
  • NFSoRDMA with multi-pathing achieves 33.3GB/sec to a DGX-2
  • NFS over RDMA with GPUDirect pumps this up to 92.6GB/sec

A VAST test chart compares basic NFS over TCP, NFS with RDMA and NFSoRDMA with GPUDirect, and indicates the amount of DGX-2 CPU utilisation in the data transfer. The NFSoRDMA throughput of 33.3GB/sec features 99 per cent DGX-2 CPU utilisation. Moving to the GPUDirect scheme drastically lowers this CPU utilisation to 16 per cent, showing the memory bypass effect, and boosting the data rate to 92.6GB/sec.

Marks says this is near the line-rate maximum of 8 x 100Gbit NICs, and no host server CPU is involved in the data transfer: “We're saturating the network; you can't go any faster.” To go faster you would need a faster network.
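The line-rate claim is easy to sanity-check. The sketch below uses the raw 8 x 100Gbit/s figure and ignores Ethernet/InfiniBand and NFS protocol overheads, so the true ceiling is a little lower than the 100GB/sec shown:

```python
# Back-of-the-envelope check on "near line rate" for 8 x 100Gbit/s NICs.
# Protocol overheads are ignored, so the real ceiling is slightly lower.
nics = 8
link_gbit_per_sec = 100

raw_ceiling_gb_per_sec = nics * link_gbit_per_sec / 8    # bits -> bytes: 100 GB/sec
achieved_gb_per_sec = 92.6                               # VAST's GPUDirect result

print(f"Raw ceiling: {raw_ceiling_gb_per_sec:.0f} GB/sec")
print(f"Achieved:    {achieved_gb_per_sec} GB/sec "
      f"({achieved_gb_per_sec / raw_ceiling_gb_per_sec:.0%} of raw line rate)")
```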

An Nvidia DGX A100 has 8 x 200Gbit/s NICs and VAST should be able to feed data to it even faster than to the DGX-2.

NetApp plays Spot the difference in cloud services build-out

NetApp today issued a fistful of public cloud announcements at the NetApp Insight conference, covering containerised app deployment, predictive monitoring and Windows VDI.

On the Spot

The new containerised app deployment service springs from NetApp’s acquisition of Spot, a cloud broker, in June. Today, the company introduces Spot Storage, a “storageless” service that works with Spot Ocean, an existing serverless facility. The launch means that Spot now abstracts server and storage details from Kubernetes-deployed containers.

Spot Ocean sets up serverless containerised app instances in the public cloud. “Serverless” means the cloud service provider (CSP) sets up the server instances needed to run the application containers.  The CSP also manages the details for Spot Storage, which means the user does not need to specify volume capacity, throughput, storage class and so forth.

B&F expects NetApp to provide integration between Spot and its Project Astra Kubernetes data lifecycle software.

Cloud Manager

NetApp today launched NetApp Cloud Manager, a public cloud-hosted service. This is a “new autonomous cloud volume platform, providing a single experience to manage NetApp hybrid, multi cloud storage and data services [with] full visibility and control across on premises, Azure, AWS and GCP storage.”

However, it is not entirely new – NetApp mentioned this in January 2018: “Hybrid Cloud Management with NetApp Cloud Manager (formerly OnCommand Cloud Manager).”

Cloud Manager in 2018 provided “a single-pane view of your storage system irrespective of whether your system is deploying in AWS, Azure, Google Cloud, or on-premises” – not a lot different from what NetApp is announcing today.

OnCommand provided a cloud storage service catalogue, facilities to analyse cloud storage delivery, automated data and virtual machine (VM) movement, and user self-service, as far back as 2011.

How does Cloud Manager differ from Cloud Insights? NetApp told us: “Cloud Manager is the global control plane which provides a single interface for the provisioning, deployment, and management of NetApp’s cloud storage services.

“Cloud Insights is an enterprise-class monitoring tool designed specifically for cloud-drive architectures, and providing visibility into both infrastructure (NetApp and other) and application resources. From a topological perspective, Cloud Insights sits ‘above’ the Cloud Manager-based resources, taking all of those into account along with the rest of the customer’s infrastructure.  Cloud Insights only monitors the infrastructure, it does not control it.”

Windows VDI deployment manager

NetApp’s new Windows VDI deployment service hails from the company’s CloudJumper acquisition in April.

The NetApp Virtual Desktop Management Service orchestrates Remote Desktop Services (RDS) in AWS, Azure and GCP as well as on private clouds. The fully managed, cloud-based service is accompanied by a validated hybrid cloud virtual desktop infrastructure (VDI) design.

VDS automates many tasks such as setting up SMB file shares (for user profiles, shared data, and the user home drive), enabling Windows features, application and agent installation, a firewall, and policies. The VDS offering specifically supports Windows Virtual Desktop (WVD) on Microsoft Azure.

The Long Goodbye. Why Intel NAND biz sale to SK hynix will take five years to get over the finish line

Intel is taking five years to sell its NAND business to SK hynix, because it needs to unravel commitments to its former partner, Micron.

“The unique structure of this deal is strictly a factor of existing commitments within our long-term agreements with Micron,” Intel CFO George Davis said in Intel’s earnings call. “We will continue to operate the factory for SK hynix until we can transfer the entirety of the business in 2025.” 

The $9bn deal, announced on October 20, is constructed in two phases. Phase one nets Intel $7bn as SK hynix buys the NAND SSD business and the Dalian, China, fab. SK hynix gets the Dalian facility and its assets. In 2025 – phase two – SK hynix will pay Intel an additional $2bn for IP related to the manufacture and design of NAND flash wafers, R&D employees, and the Dalian fab workforce.

Intel will get an immediate financial benefit. Davis said: “Capital spending for the NAND business will be shown in assets held for sale and excluded from free cash flow. This will reduce our forecasted capital spend for 2020 by approximately $300m and raise our free cash flow by a similar amount.”

He added: “SK hynix will commit the necessary investment to bring this business to scale, and Intel will dispose of a non-strategic asset to focus on our core opportunities ahead.”

Intel set up the 3D NAND facility at Dalian in China in late 2015 with production scheduled for 2016. Until then the company had made NAND in a joint venture with Micron, IM Flash Technologies. January 2018 saw Intel officially separate its 3D NAND technology development from Micron.

In October 2018 Micron said it would exercise its right to buy out Intel’s share of IM Flash Technologies. That deal cost Micron up to $1.5bn and the JV ended in late 2019.

How to get started with Intel Optane – and utilise your hardware to its full potential

Speedy Gonzales

Sponsored: If you take your data centre infrastructure seriously, you’ll have taken pains to construct a balanced architecture of compute, memory and storage precisely tuned to the needs of your most important applications.

You’ll have balanced the processing power per core with the appropriate amount of memory, and ensured that both are fully utilised by doing all you can to get data off your storage subsystems and to the CPU as quickly as possible.

Of course, you’ll have made compromises. Although the proliferation of cores in today’s processors puts an absurd amount of compute power at your disposal, DRAM is expensive, and can only scale so far. Likewise, in recent years you’ll have juiced up your storage with SSDs, possibly going all flash, but there are always going to be bottlenecks en route to those hungry processors. You might have stretched to some NVMe SSDs to get data into compute quicker, but even when we’re pushing against the laws of physics, we are still constrained by the laws of budgets. This is how it’s been for over half a century.

So, if someone told you that there was a technology that could offer the benefits of DRAM, but with persistence, and which was also cheaper than current options, your first response might be a quizzical, even sceptical, “really”. Then you might lean in, and ask “really?”

That is the promise of Intel® Optane™, which can act as memory or as storage, potentially offering massive price-performance boosts on both scores, and drastically improving the utilisation of those screamingly fast, and expensive, CPUs.

So, what is Optane™? And where does it fit into your corporate architecture?

Intel describes Optane™ as persistent memory, offering non-volatile high capacity with low latency at near DRAM performance. It’s based on the 3D XPoint™ technology developed by Intel and Micron Technology. It is byte and bit addressable, like DRAM. At the same time, it offers a non-volatile storage medium without the latency and endurance issues associated with regular flash. So, the same media is available in both SSDs, for use as storage on the NVMe bus, and as DIMMs for use as memory, with up to 512GB per module, double that of current conventional memory.

Platform

It’s also important to understand what Intel means when it talks about the Optane™ Technology platform. This encompasses both forms of Optane™ – memory and storage – together with the Intel® advanced memory controller and interface hardware and software IP. This opens up the possibility not just of speeding up hardware operations, but of optimising your software to make the most efficient use of the hardware benefits.

So where will Optane™ help you? Let’s assume that the raw compute issue is covered, given that today’s data centre is running CPUs with multiple cores. The problem is more about ensuring those cores are fully utilised. Invariably they are not, simply because the system cannot get data to them fast enough.

DRAM has not advanced at the same rate as processor technology, as Alex Segeda, Intel’s EMEA business development manager for memory and storage, explains, both in terms of capacity growth and in providing persistency. The semiconductor industry has pretty much exhausted every avenue available when it comes to improving price per GB. When it comes to the massive memory pools needed in powerful systems, he explains, “It’s pretty obvious that DRAM becomes the biggest contributor to the cost of the hardware…in the average server it’s already the biggest single component.”

Meanwhile, flash – specifically NAND – has become the default storage technology in enterprise servers, and manufacturers have tried everything they can to make it cheaper, denser and more affordable. Segeda compares today’s SSDs to tower blocks – great for storing something, whether data or people, but problems arise when you need to get a lot of whatever you’re storing in or out at the same time. While the cost of flash has gone down, its endurance and performance, especially on write operations, mean “it’s not fit for the purpose of solving the challenge of having a very fast, persistent storage layer”.

Moreover, Segeda maintains, many people are not actually aware of these issues. “They’re buying SSDs, often SAS SSDs, and they think it is fast enough. It’s not. You are most likely not utilising your hardware to the full potential. You paid a few thousand dollars for your compute, and you’re just not feeding it with data.”

To highlight where those chokepoints are in typical enterprise workloads, Intel has produced a number of worked examples.

For example, when a 375GB Optane™ SSD DC P4800X is substituted for a 2TB Intel® SSD DC P4500 as the storage tier for a MySQL installation running 80 virtual cores, CPU utilisation jumps from 20 per cent to 70 per cent, while transaction throughput per second is tripled, and latency drops from over 120ms to around 20ms.

This latency reduction, says Segeda, “is what matters if you’re doing things like ecommerce, high frequency trading.”

The same happens when running virtual machines, using Optane™ in the caching tier for the disk groups in a VMware vSAN cluster, says Segeda. “We’re getting half of the latency and we’re getting double the IO from storage. It means I can have more virtual machines accessing my storage at the same time. Right on the same hardware. Or maybe I can have less nodes in my cluster, just to deliver the same performance.”

A third example uses Intel® Optane™ DC Persistent memory as a system memory extension in a Redis installation. The demo compares a machine with total available memory of 1.5TB of DRAM and a machine using 192GB of DRAM and 1.5TB of DCPMM. The latter delivered the same degree of CPU utilization, with up to 90 per cent of the throughput efficiency of the DRAM only server.

Real time analytics

These improvements hold out the prospect of cramming more virtual machines or containers on the same server, says Segeda, or keeping more data closer to the CPU, to allow real-time analytics.

This is important because while modern applications generate more and more data, only a “small, small fraction” is currently meaningfully analysed, says Segeda. “If you’re not able to do that, and get that insight, what’s the point of capturing the data? For compliance?”

Clearly, compliance is important, but it doesn’t help companies monetise the data they’re generating or give them an edge over rivals.

The prospect of opening up storage and memory bottlenecks will obviously appeal, whether your infrastructure is already straining, or because while things are ticking over right this minute, you know that memory and storage demands are only likely to go in one direction in future.

So, how do you work out how and where Optane™ will deliver the most real benefit for your own infrastructure?

On a practical level, the first step is to identify where the problems are. Depending on your team’s engineering expertise, this could be something you can do inhouse, using your existing monitoring tools. Intel® also provides a utility called Storage Performance Snapshot to run traces on your infrastructure and visualise the data to highlight where data flow is being choked off.

Either way, you’ll want to ask yourself some fundamental questions, says Segeda: “What’s your network bandwidth? Is it holding you back? What’s your storage workload? What’s your CPU utilisation? Is the CPU waiting for storage? Is the CPU waiting for network? [Then] you can start making very meaningful assumptions.”

This should give you an indication of whether expanding the memory pool, or accelerating your storage, or both will help.

Next steps

As for practical next steps, Segeda suggests talking through options with your hardware suppliers, and Intel account manager if you have one, to take a holistic view of the problem.

Simply retrofitting your existing systems can be an option, he says. Add in an Optane™ SSD on NVMe, and you have a very fast storage device. Optane™ memory can be added to the general memory pool, giving memory expansion at relatively lower cost.

However, Segeda says, “You can have a better outcome if you do some reengineering, and explicit optimization.”

Using Optane™ as persistent memory requires significant modification to the memory controller, something that is currently offered in the Intel® Second Generation Xeon® Scalable Gold or Platinum processors. This enables the use of App Direct Mode, which allows suitably modified applications to be aware of memory persistence. So, for example, Segeda explains, this will allow an in-memory database like SAP HANA to exploit the persistence, meaning it does not have to constantly reload data.
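To make App Direct Mode concrete, here is a minimal sketch of what byte-addressable persistence looks like to an application, assuming a DAX-enabled filesystem has already been created on the Optane™ DIMMs and mounted at /mnt/pmem. The path and file name are illustrative assumptions, and production code would normally use Intel's PMDK libraries rather than raw mmap:

```python
# Minimal illustration of App Direct-style access: memory-map a file on an
# assumed DAX filesystem backed by persistent memory and update it with
# plain load/store operations. Real applications would typically use PMDK
# (libpmem/libpmemobj) and optimised cache-flush instructions instead.
import mmap
import os

PMEM_FILE = "/mnt/pmem/example.dat"   # assumed DAX mount on Optane PMem
SIZE = 4096

# Create and size the backing file once.
fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

# Map it into the address space; on a DAX mount this is direct access to
# the persistent media, with no page cache copy in between.
buf = mmap.mmap(fd, SIZE)
buf[0:13] = b"persisted!\x00\x00\x00"   # an ordinary in-memory store

# Ask the kernel to make the update durable before relying on it
# surviving a power loss.
buf.flush()
buf.close()
os.close(fd)
```

The point is simply that the data path is an ordinary memory access with no block IO in the middle; persistence-aware databases such as SAP HANA apply the same idea at much larger scale.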

Clearly, an all-new installation raises the option of a more efficient setup, with software optimised to take full advantage of the infrastructure, and with fewer but more compute-powerful nodes. All of which has the potential to save not just on DRAM and storage, but on electricity, real estate, and software licenses.

For years, infrastructure and software engineers and data centre architects have had to delicately balance compute, storage, memory, and network. With vast pools of persistent memory and faster storage now in reach, at lower cost, that juggling act may just be about to get much, much easier.

This article is sponsored by Intel.

Infinidat sued by employees for ‘secretly’ diluting shares

Infinidat, the high-end Israeli storage startup, is being sued by 29 current and former employees for wrongfully diluting their shares in the company.

The plaintiffs are holders of Infinidat Class B shares, which they had received as stock options, according to the Israeli news site Calcalist, which first reported the news.

Their lawsuit claims that the value of these shares was underpinned by company statute which said: “Class B shares will always receive the first 20 per cent of recompense and rights in the company in the case of an acquisition, without the possibility of dilution by anyone who isn’t an employee or advisor.”

Moshe Yanai

They claim a “secret” dilution has taken place, as the employee stock option holders were told in 2018 that Class B shares were worth $1,290.00 and the shares are now priced at a fraction of that.

Infinidat said it will defend the “baseless” lawsuit and seek damages and costs from the plaintiffs, who “have chosen to sue it with an idle claim, the goal of which is to cause the company damage and to place invalid pressure on it”. 

Infinidat in 2020

Infinidat produces high-end storage arrays that use disk drives for bulk storage and DRAM caching for high performance – faster than all-flash arrays, according to the company, and less expensive per TB of capacity. Earlier this year it said it had more than an exabyte of installed capacity.

The venture-backed company appears to have had a difficult time navigating its way through the pandemic. The company furloughed some 100 employees without pay in March 2020, some of whom were subsequently laid off.

In May 2020 Moshe Yanai, the founder, chairman and CEO of Infinidat, resigned as CEO and became its Chief Technology Evangelist. He was pushed out by the board because of Infinidat’s poor business performance, according to Calcalist sources. Two co-CEOs were appointed: Nir Simon, who runs R&D and Operations, and CFO Kariel Sandler.

In June 2020, the company raised an undisclosed sum in a D-series funding round, to add to the $325m-plus already raised. Existing investors, including TPG, Goldman Sachs, Claridge Israel, ICP, and Yanai, put in fresh capital. Accompanying the round was Yanai’s relinquishment of the chairman’s position and the appointment of outsider Boaz Chalamish as executive chairman.

Class B share shock

The Class B shareholders were then told that “whoever holds Class B shares and no longer works for the company will have their share diluted so that it would be worth one thousandth of its former value… and that current employees will be offered a new option program (combining a new class of shares with a new maturation process) that would be worth 6 per cent of the previous value of their holdings.”

The ex-employees’ Class B shares were now worth $1.29 and the current employees’ Class B stock was valued at $7.40. This suggests that the number of Class B shares was increased by a factor of between 16x and 1,000x.

The lawsuit alleges the registered capital of Class B shares was increased in the June 2020 funding round. There were two reasons for the increase: first, “to allow an anti-dilution mechanism for two companies (Claridge Israel and ICP), who probably invested in the company in return for Class B shares and are meant to hold 54 per cent of Class B shares in any situation,” and second, to issue thousands of new Class B shares for the company’s employees.

This dismayed the Class B shareholding employees and several dozen left, taking Infinidat’s headcount to less than 400 and more than 20 per cent down on its 2018-2019 era peak of 500 employees. Infinidat told Calcalist it is recruiting.

Now read our story Actifio squashes employee shareholders ‘like cockroaches‘.

Israel time change knocks out local NetApp storage

A switchover to winter time in Israel was followed by failures in NetApp storage arrays which in turn caused server failures.

Computers in the Ministry of Health, universities and businesses failed on Sunday October 25 when the change from daylight savings time to winter time occurred, Haaretz reports. “A crash of the storage [was] followed by a crash of additional servers, even after the original problem has been addressed,” an IT manager told Haaretz.

NetApp has fixed the problem and affected systems are restarting after many hours of being out of service. The company did not respond to Haaretz’s request for comment.

NetApp ONTAP 9.7 was involved in the error, according to a Blocks & Files source.

Your occasional storage digest featuring Diamanti, HYCU, TrueNAS and more

Two Kubernetes-related news stories in this week’s storage news round up. Diamanti has debuted a software-only offering and Robin.io has upgraded its core offering and introduced a full-featured but capacity-limited free-for-life software edition.

HYCU has found another great sub-niche for its data protection – SAP HANA on Google Cloud, and iXsystems has added two more products to its TrueNAS range.

Diamanti Ultima

Diamanti, the supplier of a hyperconverged system for running Kubernetes orchestration of containers, has announced a standalone software product called Ultima. This is an integrated networking and storage data plane system and extends across on-premises and cloud-based environments, supporting any Kubernetes cluster distribution.

Ultima includes multi-tenant L2 and overlay container networking capabilities, and container-aware data services like snapshots, backup, synchronous mirroring, and asynchronous replication.

A Diamanti spokesperson said: “This announcement continues Diamanti’s evolution to a software-focused company and puts us square in the middle of a space where there has been a lot of recent attention  and activity (i.e. Pure Storage’s acquisition of Portworx and Veeam’s acquisition of Kasten).” 

Tom Barton

Is this a move away from hardware? Diamanti CEO Tom Barton told us: “It is by no means a change in Diamanti’s technology focus. Our innovation has always been in the software layer. By untethering our data plane software from hardware, Diamanti is answering enterprise customer demands for easy ways to expand across Kubernetes services, distributions and infrastructure in a hybrid cloud. “

Barton said: “We are proud to launch the only comprehensive data plane solution for both networking and storage that delivers advanced data services and both CSI and CNI plugins while the rest of the industry addresses only one of these areas at a time.” 

HYCU DPaaS for SAP HANA on Google’s Cloud

HYCU is the first vendor to offer Data Protection as a Service (DPaaS) for SAP HANA running in Google’s Cloud.

The HYCU Backup for GCP product offers:

  • Automated Discovery, Deployment and Maintenance of Google’s SAP HANA BackInt Agent,
  • User Interface provides SAP Admins access via SAP HANA Studio and Backup Admins via HYCU Management UI,
  • Quick time to DR and Clone with 1-click simplicity for SAP HANA infrastructure level DR across Google regions, and also cloning,
  • Consistent, point-in-time recovery of a complete SAP HANA Database,
  • Cloud aware, compute-free at-source dedupe and auto-tiering based on data protection policies reduce cost,
  • Support for backup targets using Cloud Storage Bucket Lock (WORM),
  • Air-gapped backup targets.

Several cloud service providers around the world have deployed HYCU’s Google SAP HANA service. The HYCU Cloud Services Provider Program is a co-branded service that can deliver data migration, data protection, and disaster recovery as a service.

TrueNAS range bulks up

iXsystems has added the R-Series Storage systems and the SCALE Open Source HyperConverged Infrastructure (HCI) Platform to its TrueNAS portfolio.

TrueNAS R-Series.

There are four R-Series storage systems:

  • TrueNAS R50: a 4U, 48 x 3.5″ / 4 x 2.5″ NVMe Bay system which features up to 16 CPU cores and a maximum capacity of 890TB that starts at $9,990 MSRP. 
  • TrueNAS R40: a high-density 2U all-flash system with 48 x 2.5″ and 7mm Bays, up to 16 CPU cores and 360TB of Flash Storage with a starting price of $8,990 MSRP.
  • TrueNAS R20: a 2U system with 12 x 3.5″ and 2 x 2.5″ Bays, up to 16 CPU cores and 230TB capacity with pricing that begins at $4,990 MSRP. 
  • TrueNAS R10: a compact 1U all-flash system with 16 x 2.5″ and 7mm Bays, up to 10 CPU cores, and 120TB of capacity for a starting MSRP of $5,990.

TrueNAS SCALE is a scale-out HCI platform for converged compute and scale-out OpenZFS storage. It can scale from 1 to 100 nodes. SCALE is available as open source Debian Linux-based software or as an appliance-based solution using iXsystems’ X20, M40, M50, M60, and R-Series hardware systems.

Robin.io adds more to Kubernetes  

Robin.io has announced enhancements to Robin Cloud Native Storage for Kubernetes and the immediate availability of Robin Express, a full-featured, free-for-life edition.

Robin Express complements the company’s enterprise-focused offering, Robin Enterprise, which offers 24×7 enterprise support, unlimited node and storage capacity, and true “per-node-hour” consumption-based pricing. Robin Express itself is limited to 5 nodes and 5TB of capacity.

The enhancements to Robin Cloud Native Storage for Kubernetes include:

  • Data management for Helm charts: Helm is the most popular package manager and deployment mechanism on Kubernetes. With Robin, you can now easily snapshot, backup, and migrate an entire Helm release as a single entity.
  • Data locality (compute-storage affinity) for performance-sensitive workloads.
  • Affinity and anti-affinity policies to support the availability needs of stateful applications that rely on distributed databases and big data platforms.
  • Consumption-based pricing for Robin Enterprise: pay only for what you use.

Shorts

Acronis has integrated Acronis Cyber Protect with Citrix Workspace. The VB100-certified anti-malware solution secures endpoints with real-time protection that uses AI-based static and behavioural heuristics, on-demand antivirus, anti-ransomware, and anti-cryptojacking technologies to prevent direct attacks against the Citrix Workspace app.

Alluxio has announced the availability of its latest open source Data Orchestration platform with an expanded metadata service and a new management console for hybrid and multi-cloud deployments. Users can manage namespaces with billions of files without relying on third party systems. Use the management console to connect an analytics cluster, with engines such as Presto and Spark, with data sources across multiple clouds, single cloud or on-premises.

Grau Data has announced the general availability of Blocky for Veeam, which protects Veeam backups by denying any file access from unauthorised application processes. Blocky uses application whitelisting to identify and allow only authorised processes to access backup files. It creates a secure WORM (write-once, read-many) functionality.

HubStor has opened a private preview of HubStor backup for Azure VMs. Early access is open to a limited number of organisations running VMs on Microsoft Azure. The company will offer some preference for existing customers and partners, and space in the program is limited. 

Kingston Digital Europe has announced the DataTraveler Duo USB flash drive with dual USB Type-A and Type-C ports to share files between laptops, desktops and mobile devices. The drive comes in 32GB and 64GB capacities.

VMware has selected MinIO as a design partner for the launch of the new Data Persistence Platform, which is part of Cloud Foundation 4.1 and based on vSphere 7.0. You can launch multi-tenant object storage directly from vCenter.

VMware-MinIO diagram.

HCI vendor Scale Computing is offering Acronis Agentless Backup for its HC3 product, allowing for full virtual machine backup and fast recovery for HC3 clusters.

Ceph storage startup SoftIron has hired Greg Bruno as its Chief Architect. His pedigree includes Teradata and helping to develop the Rocks cluster toolkit at the San Diego Supercomputer Center. He was VP of Engineering and co-founder of StackIQ, which was acquired by Teradata in 2017.

Western Digital has announced the SanDisk Ixpand Wireless Charger Sync and SanDisk Ixpand Wireless Charger 15W. The Ixpand Wireless Charger Sync, with up to 256GB of local storage, provides wireless charging, while automatically backing up photos and videos and freeing up space on a mobile device. The SanDisk Ixpand Wireless Charger 15W does charging without the backup capability.

Consultancy ESG has published a tech validation report on Yellowbrick’s data warehouse which cites a potential return on investment (ROI) of up to 332 per cent. It compared Yellowbrick with legacy on-premises EDWs, cloud-only EDWs, and Apache Impala implementations for Hadoop-based data lakes. Get a copy from Yellowbrick’s website.

Seagate calls the bottom of (its) pandemic trough


Seagate’s first fiscal 2021 quarter revenues were depressed by the pandemic but the disk drive maker thinks things will start to pick up. In the earnings call, CEO Dave Mosley said: “We now believe the September quarter marks the bottom of the COVID-related demand disruptions. And we expect [a] gradual recovery from this point forward. 

“Demand for data continues to explode, even through this current period of market uncertainty,” Mosley said in a prepared statement.

“We see indications for enterprise demand to improve and we expect this to continue as the broader markets gradually recover, supporting our positive December quarter outlook and reinforcing our revenue expectations for the fiscal year.”

Revenues of $2.31bn in the quarter ended October 2 declined 10.4 per cent Y/Y, while net income climbed 11.4 per cent to $223m. That was helped by a year-on-year reduction in total operating expenses from $2.31bn to $2.1bn. The outlook for the next quarter is $2.55bn at the mid-point, down from the year-ago quarter’s $2.7bn.

“Seagate delivered solid September quarter results,” Mosley said, “supported by strong recovery in the video and image applications market and healthy cloud data centre demand, which drove double digit year-over-year revenue growth for our mass capacity storage solutions.” 

Quick summary

  • Gross margin of 25.8 per cent, down from the year-ago 26 per cent,
  • Diluted EPS of $0.86, up from year-ago’s $0.74,
  • Operating cash flow of $297m and free cash flow of $186m,
  • Cash and cash equivalents of $1.7bn at quarter-end.

Seagate earned $2.39bn from selling disk drives, the same as a year ago. Its other businesses – Enterprise Systems, Flash and Others – generated $188m in revenues, also flat Y/Y. The average capacity per disk drive was 4.4TB, well up from the year-ago 2.9TB.

CFO Gianluca Romano said enterprise (high-capacity) nearline drives represented 58 per cent of Seagate revenues, compared to the year-ago 47 per cent, and 63 per cent of disk drive revenues, up from 51 per cent a year ago.

Outside the nearline market there were Y/Y declines in enterprise mission-critical and PC drives, although consumer drive sales increased Q/Q.

Romano said sales of SSD products “trended lower due to challenging pricing environment.”

No HAMR blow, more of a gentle tap

Mosley confirmed Seagate is on track to ship its first 20TB HAMR drives for revenue in December, “with a path to deliver 50-terabyte HAMR drives forecast in 2026”.

However, Romano tempered any expectations of a swift transition to 20TB HAMR drives by saying: “Until we get to 24-terabyte there’s not really a compelling transition and so we have to keep continue working that drive and we will work that over the course of the next calendar year.”

Object storage

Seagate last month announced it was entering the object storage business with brand new CORTX software (see our story). An analyst on yesterday’s earning call asked the company if this would compete with – and alienate – existing storage vendors.

“I don’t look at it as alienating,” Mosley replied, “simply, because there really isn’t an object store that’s designed specifically for mass capacity.”

Mass capacity object storage is Seagate’s aim, he said. Seagate is “opening up the community, so other people can help us… we want partnerships… we’re open to any partnership with anyone, I don’t think it’s going to be a threat.”

Exclusive: HPE takes giant axe to sales teams

HPE is laying off at least 500 staff in a major sales re-organisation, affecting server, storage, networking and PointNext teams. Employees in this latest WorkForce Reduction (WFR) round include sales account managers, enterprise account managers and country account managers. They received notices to quit, starting October 19.

According to sources, HPE is laying off entire external sales teams in response to the Covid-19 pandemic, which has reduced the need for in-person sales calls. The company is reorganising to focus more on inside sales.

HPE declined to discuss specifics with Blocks & Files, but reminded us of the company’s May 21 announcement following a disastrous second quarter that it planned to make $1bn cost savings, which would include layoffs.

An HPE spokesperson said: “As we’ve previously announced, we are focusing our investments and realigning our workforce to critical core businesses and areas of growth that will accelerate our strategy. We are committed to making these necessary changes with empathy and transparency and ensuring impacted team members receive the support they need.

“These actions will enable us to become a more agile organization and advance our strategy to deliver everything as a service from edge to cloud so that we can help our customers and partners adapt to a new business environment and harness the power of their data wherever it lives.”

As-a-service

CEO Antonio Neri, in a briefing with financial analysts last week, said “across all of our businesses, we are making bold moves to drive our agility, strengthen our capabilities, simplify our processes, and enhance our execution.

“For instance, we are re-envisioning our go-to-market strategy to elevate the customer experience and accelerate our as-a-service mix, and will be making changes to these teams to provide a more seamless, holistic sales experience. Our goal is to partner with customers to help them achieve outcomes that drive their individual, unique digital transformations.”

NetApp’s flash upgrade for its AFF and FAS arrays


NetApp has bumped up performance for FAS and AFF arrays at the entry level, with end-to-end NVMe flash drive support, to “leave rivals in the dust”.

The refresh provides extra capacity and performance oomph to NetApp’s ONTAP products and enables the data storage veteran to compete more aggressively with new QLC-using suppliers like StorONE and VAST Data, as well as long-term rivals Dell and HPE.

NetApp sells two ONTAP external array product lines – the All-Flash FAS (AFF) and the Fabric Attached Storage (FAS) variants. Both are dual-controller external arrays used by ONTAP to present concurrent block and file access to data.

The FAS arrays are optimised for a mix of performance and capacity and, until now, have consisted of hybrid flash-disk systems, while the AFF line is optimised more for performance and uses all-flash media.

FAS500f

There are four FAS model groups: the FAS2600, FAS2700, FAS8200 and FAS9000 Series. NetApp has added the first all-flash product to this quartet with the FAS500f, which uses the slowest and cheapest SSDs – ones using QLC NAND. The existing FAS arrays use TLC SSDs, which are faster to access and have a longer life than QLC NAND.

NetApp suggests the FAS500f is positioned for workloads such as backup consolidation, and database and VM copies for testers and developers. Put another way, NetApp says the FAS500f is for customers with demanding application performance objectives in a cost-effective, high capacity system.

NetApp FAS datasheet model table.

The new array supports end-to-end NVMe access and will run faster than the hybrid FAS models, while remaining slower than the AFF series.

The FAS500f has a 2 rack unit chassis and supports 24 drives or 48 in a high-availability pair. It has a much lower maximum capacity, at 8.8PB, than the other FAS arrays with their 15PB to 176PB range. The FAS2750 and FAS2720 also have a 2RU x 24 slot controller chassis and support 144 drives per HA pair.

AFF refresh

The AFF systems use SAS and NVMe SSDs, and provide an end-to-end NVMe design for maximum in-array and to-array (NVMe-over FC) data access speed and the lowest latency. 

The new low-end A250 AFF, which will supersede the A220, delivers “45 per cent more performance and 33 per cent more storage efficiency” than the A220, according to NetApp product manager Saurabh Modh, “and it leaves our competition in the dust”. There is no price increase for the A250 over the A220.

Modh suggests using the A250 for “virtualization and consolidation of mainstream enterprise applications (such as Oracle, SAP HANA, and Microsoft SQL Server). It also provides an attractive starting price for emerging workloads such as artificial intelligence, machine learning, real-time analytics, and MongoDB.”

The A250 comes in a mini box, just 2RU in size, with 24 x 2.5-inch NVMe drive bays, and supports expansion to 576 SSDs.

This A250 is a lower-cost entry-level AFF array. It supports up to 24 x 32Gbit/s Fibre Channel ports, 4 x 100GbitE and 28 x 25GbitE ports plus a pair of 10GbitE management ports. The controllers use a 24-core Xeon processor and this controls access to the 15.3TB or 30.2TB SSDs.

The latest version of ONTAP, v9.8, is required for the A250.

A700 upgrade

With the latest refresh NetApp has added NVMe SSDs to the A700. The array already supports NVMe-oF, and the update gives it end-to-end NVMe capability for low latency and high speed data access.

NetApp senior product manager Mukesh Nigam writes in a blog: “In tests using a 100 per cent random read Oracle SLOB workload, an end-to-end NVMe AFF A700 system is 85 per cent faster at 500 microsecond latency than AFF A700 system with FCP/SAS storage.” That’s quite the performance jump.

In 2018 there were five AFF variants, starting with the A220 at the low end in a 2RU x 24 slot chassis, and ranging through the A300 (3RU chassis), A700 (8RU), A700s (4RU) and A800 (4RU).

There are now four AFF model groups listed by NetApp – the new low end A250, the mid-range A400 and A700, and the all-NVMe high-end A800. The A220 and A300 are absent from the current AFF datasheet, which indicates that they will soon move into end-of-life status.

AFF model summary details from NetApp data sheet

The A400 and A800 have 4RU controller enclosures while the A700 employs a larger 8RU chassis. The A800 scales to 2,880 drives and the A400 and A700 scale to 5,760 drives. This results in a huge difference in effective capacity terms: 35PB for the A250 vs 793.7PB for the A400 and A700 and 316.3PB for the A800.