
Loss-making Snowflake doubles down in dash for growth

It takes Snowflake Computing six months or more from signing up a big enterprise customer for the revenues to start rolling in. Onboarding migration delays are to blame. This was one of the key takeaways from this week’s Q4 earnings call, in which the cloud data warehouse firm revealed why it is losing so much money.

Revenue in the quarter was $190.5m, up 117 per cent Y/Y and beating forecasts. However, the net loss was $198.9m, 139 per cent worse Y/Y. Full year revenue was $592m, up 124 per cent, and the company posted a loss of $539.1m – 54.7 per cent worse Y/Y. (Read The Register’s report for more details.)

Earnings call

So what gives? The company is spending big in a dash for growth and operating expenses are outstripping revenues. Snowflake is on a hiring spree and doesn’t intend to slow down. It hired 800 people in fiscal 2021 and aims to hire even more in fiscal 2022. 

Frank Slootman.

CEO Frank Slootman said this in the earnings call: “We’re going to add 1,200 people next year. Actually, Q1 is a very, very big onboarding quarter. It will be probably the largest quarter of the year because we’re onboarding a lot of people in the sales and marketing organisation in advance of our sales kickoff that we just had. We are investing as quickly while being efficient in our business as we can.”

The revenue problem here is that Snowflake has focused on winning enterprise customers, and they take many months to migrate their existing on-premises data warehouses to Snowflake’s cloud.

Migration drag

CFO Mike Scarpelli said: “I want to stress, it takes customers, especially if you’re doing a legacy migration, it can take customers six months-plus before we start to recognise any consumption revenue from those customers because they’re doing the data migration. And what we find is – so they consume very little in the first six months and then in the remaining six months, they’ve consumed their entire contract they have.”

Scarpelli said: “We are landing more Fortune 500 customers. We talked about we landed 19 in the quarter, but those 19 we landed, just to reiterate, we’ve recognised virtually no revenue on those customers. That’s all in the RPO that will be in the next 12 months.”

RPO (remaining performance obligation) represents an amount of future revenue that has been contracted with customers, but is not yet recognised as revenue.
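
As a concrete illustration, here is a toy model of that split, with invented numbers rather than Snowflake's actual figures:

```python
# Toy model (invented numbers, not Snowflake's): under consumption pricing,
# revenue is recognised only as contracted capacity is used; the unconsumed
# remainder of the contract sits in RPO.
contract_value = 1_200_000    # a one-year consumption contract, in dollars
consumed_to_date = 200_000    # little consumption during the migration months

recognised_revenue = consumed_to_date
rpo = contract_value - recognised_revenue
print(f"Recognised: ${recognised_revenue:,}, RPO: ${rpo:,}")
# Recognised: $200,000, RPO: $1,000,000
```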

Engineering spend

Slootman revealed that Snowflake engineering is developing the product to work faster: “Historically, we have shared data through APIs and through file transfer processes, copying, and replicating. It’s been an enormous struggle. The opportunity with Snowflake is to make this zero latency, zero friction, completely seamless.”

The need is to reduce what Snowflake calls data latency. Slootman explained: “One of the areas that we are investing in where we have extraordinary talent that we have attracted [to] the company is where our event-driven architectures. Today, our event latency is sort of seconds and minutes, right? But you want to drive that down to sub-second and dramatically sub-second.”

He said: “That obviously, that requires tremendous optimisation on our part and we are working on that because we see that as a very, very critical part of the ongoing evolution of digital transformations. … we have to become much, much faster than what we’ve done so far. … this is going to expand the marketplace in places where these technologies historically have not been.”

By the numbers

A chart shows the dramatic increase in losses over the last two quarters – spot the two plunging red bars on the lower right.

Snowflake’s revenue growth is accelerating, as a second chart shows, with its steepening quarterly revenue lines.

Snowflake’s operating expenses have shot up faster still in the two most recent quarters.

Snowflake watchers will have to wait and see if its plans deliver increased growth in the next few quarters. Slootman is betting on it.

Seagate competes with its OEMs through StorONE

Seagate is selling its Exos AP drive arrays through StorONE, a software-defined storage startup. This takes the company into direct competition with OEMs such as Dell EMC, Hitachi Vantara, HPE, IBM and NetApp, which use Seagate disk drives in their arrays and filers.

Gal Naor

“Seagate’s Exos AP 5U84 Platform when combined with StorONE is an ideal solution for data centers of all sizes trying to consolidate workloads and reduce storage footprint,” said Gal Naor, CEO and co-founder of StorONE. “The Application Platform solution allows us to showcase our investments in the efficient utilisation of storage hardware and our next-generation hybrid technology.”

The Exos AP 5U84 is a Seagate-built 5 rack unit, dual-controller array with 84 x 3.5-inch drive bays. StorONE installs its S1: Enterprise Storage Platform software in this box to provide an active:active storage array delivering over 200,000 IOPS and more than 1PB of capacity for under $175,000. A 1.48PB configuration uses 70 x 18TB Exos disk drives for capacity, and 14 x 15.36TB SSDs for random IO performance heft, while a 1.4PB config uses 216TB Exos disks.

StorONE will also sell you a smaller, 106TB hybrid system, based on Seagate’s 2RU, 12-slot AP 2U12 array. This is priced at $36,000.
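
Some quick cost-per-gigabyte arithmetic on those two configurations (our calculation from the quoted prices and capacities, not a vendor figure):

```python
# Back-of-envelope $/GB for the two quoted StorONE-Seagate configurations.
configs = {
    "AP 5U84 hybrid": (175_000, 1_480_000),  # price ($), capacity (GB) - 1.48PB
    "AP 2U12 hybrid": (36_000, 106_000),     # price ($), capacity (GB) - 106TB
}
for name, (price, capacity_gb) in configs.items():
    print(f"{name}: ${price / capacity_gb:.3f}/GB")
# AP 5U84 hybrid: $0.118/GB
# AP 2U12 hybrid: $0.340/GB
```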

Seagate Exos AP 5U84.

Seagate is making quite the push behind the AP 5U84, last week announcing Lyve Cloud, an object storage service based on the system that resides in Equinix co-location centres in the USA.

The company is also keen to counter some analyst views that SSDs are taking over from nearline disk drives. The Seagate-StorONE announcement declares: “Mainstream data centres require affordable high-performance and high-capacity. Many storage vendors have left these organisations behind, even though they represent most data centres in the world.”

That’s ‘left behind’ in the sense that these storage vendors are moving to all-flash SSD arrays.

A disadvantage of disk drives is that high-capacity drives can take 24 hours or more to be rebuilt in a RAID scheme after a failure. StorONE’s vRAID technology rebuilds failed drives in under two hours, the companies say, without specifying a capacity.

In October 2020 StorONE said it could rebuild a 16TB drive in under five hours, and a failed SSD in three minutes.
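
The arithmetic behind such claims is simple: rebuild time is roughly capacity divided by sustained rebuild throughput. A sketch, with rebuild rates that are our assumptions rather than vendor-published figures:

```python
# Rebuild time ~= capacity / sustained rebuild rate (rates below are assumed).
def rebuild_hours(capacity_tb: float, rate_mb_per_s: float) -> float:
    return capacity_tb * 1_000_000 / rate_mb_per_s / 3600

print(f"{rebuild_hours(16, 200):.1f}h")   # ~22.2h at a conventional 200MB/s rebuild
print(f"{rebuild_hours(16, 1000):.1f}h")  # ~4.4h - a sub-five-hour rebuild implies ~1GB/s
```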

HPE sharpens SimpliVity edge backup

HPE has updated its SimpliVity HCI operating system to protect data better and provision storage to containerised apps.

SimpliVity 4.1.0 can back up to the public cloud with Cloud Volumes Backup and integrates more deeply with the StoreOnce deduplicating target backup appliance.

The SimpliVity hyperconverged systems are positioned as one-box, do-it-all, remote office and branch office IT systems. They are pitched at distributed enterprises such as fixed site bricks and mortar retail businesses, and mobile site ones like oil and gas drillers and racing car teams. These traditional ROBO locations are now characterised as “Edge computing” but they are still classic ROBO IT operations.

The SimpliVity hardware – for example, a 1RU ProLiant DL325 server – is installed in the remote/branch sites. It is mounted in a rack, two cables for power and networking are connected, and then it is switched on.

HPE DL325 1 RU server

Software is installed using VMware’s vCenter in the main data centre. Apps are installed as virtual machines in the same way.

Virtual machine backups are run at the remote site, with policies set centrally, and the data is sent to an HPE StoreOnce appliance in the data centre or to HPE’s Cloud Volumes Backup in a public cloud.

A Kubernetes CSI plug-in to the SimpliVity OS means that containerised apps can be pushed to the remote sites and use persistent volume storage in the SimpliVity HCI box. Their data is protected in the same way as VM data.
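
In practice that means a containerised app can claim SimpliVity-backed storage through a standard PersistentVolumeClaim. A minimal sketch using the official Kubernetes Python client is below; the `simplivity-csi` storage class name is our hypothetical stand-in for whatever class the plug-in actually registers:

```python
# Minimal sketch: request a persistent volume from a CSI-backed storage class.
# "simplivity-csi" is a hypothetical class name, for illustration only.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="simplivity-csi",   # assumed CSI storage class
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```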

Pivot3 and Scale Computing also position their HCI systems as Edge computing boxes. HPE’s own dHCI (disaggregated HCI) Nimble systems are marketed as data centre systems, not edge systems. Dell EMC supplies its VxRail systems for both edge and data centre use.

HPE SimpliVity 4.1.0 is available worldwide at no additional charge for customers with valid support contracts. All new features and capabilities announced are supported with VMware vSphere 7.0. HPE Cloud Volumes Backup support is available in the Americas, Europe and Asia.

Dell powers up hybrid cloud storage services with Faction

Dell Technologies has expanded its partnership with Faction, the US multi-cloud managed services provider, to offer two new storage and data protection services.

Announcing the offerings, Joe CaraDonna, the CTO of Public Cloud and APEX Offerings at Dell Technologies, wrote in a blog post yesterday: “What we do – in collaboration with Faction – is provide a consistent way to manage storage and data protection for multiple clouds from a single place with one uniform experience”.

The two companies go to market under the catchy moniker “Dell Technologies Cloud Storage for Multi-Cloud – Powered by Faction” and teamed up in 2019 to offer array and filer storage for cloud apps and services in the AWS, Azure, Google and Oracle clouds. They also offered Dell PowerProtect for MultiCloud. The services are fulfilled using Dell PowerScale, PowerStore, PowerMax and Unity XT arrays, and PowerProtect systems, installed in seven Faction data centres in the US.

Faction’s public cloud-adjacent data centre hubs provide a central data repository and protection vault for customers who wrangle multiple public cloud and on-premises IT environments. This can be cost-efficient and convenient.

Or, as CaraDonna puts it: “The future of IT is hybrid – a world that balances the right public cloud services with the right on-premises infrastructure to provide the performance, scale, functionality and control required of modern applications and development paradigms.”

The new Dell services are:

  • Superna Eyeglass DR Manager for PowerScale for Multi-Cloud – replicates file data to PowerScale filers with one-button failover, flexible SyncIQ scheduling, continuous readiness monitoring, DR testing, data loss exposure analysis and reporting, plus recovery to the Faction data centre or a selected public cloud.
  • PowerProtect Cyber Recovery – a PowerProtect repository for copied data with immutable storage, virtual air gaps, and malware-detecting analytics.

CaraDonna writes: “There is zero data gravity [with PowerScale for Multi-Cloud] as the data is not being transferred from one cloud environment to another, but simply accessed when needed. This approach to multi-cloud data access avoids the complexity, cost and time of managing multiple data copies.”

PowerScale and Faction.

These services are similar in concept to the Pure Storage-Equinix partnership and also to NetApp’s Azure NetApp Files scheme, although that is Azure-specific.

Equinix data centres offer Pure storage as a service

Pure Storage is opening a new front in the cloud edge storage market, with data centre operator Equinix offering Pure-as-a-Service to its customers.

Equinix has fitted out 18 data centres with Pure Storage hardware and software and is handling delivery through Equinix Metal. Customers can subscribe to Pure’s file, block and object storage, and to its Portworx Kubernetes storage services.

Jack Hogan, Pure VP of Technology Strategy, said: “Enterprises want full control over their environment, but they don’t want to operate their own data centres or be forced to fit into traditional cloud models – they want a cloud model that fits their business. By partnering with Equinix, we are eliminating management complexity and delivering the flexibility and controls that put organisations in charge of what their technology platforms can do for them.”

In effect Equinix, Pure and other Equinix partners are providing a virtual data centre, Platform Equinix, with public cloud-like economics.

The underlying technology for Equinix Pure-aaS includes Pure’s FlashArray//X and //C and FlashBlade arrays, plus Portworx storage. There is replication to a second hub, and connectivity to Pure’s Cloud Block Store and Portworx in the three main public clouds: AWS, Azure and GCP. ISPs can offer Pure storage services using these Equinix Metal centres.

Pure and Equinix have highlighted three use cases for the service:

  • Hybrid cloud and Data Recovery as a Service for virtual environments – on-premises VMs can be moved to the hosted Pure-Equinix environment using FlashArray
  • High-performance, near-edge cloud storage using FlashBlade for file and object workloads
  • Hybrid and edge cloud native container storage using Portworx, and claimed to be faster than the public cloud.

This Pure-Equinix deal provides competition for Zadara, whose service offers managed storage arrays on premises and in the public clouds.

The Pure-Equinix partnership differs from the Seagate Lyve Cloud Equinix object storage arrangement announced last week. The latter is Seagate-operated and available in only one US Equinix centre, with a target of four centres by the end of 2021.

Nu-Metal

Equinix Metal is a bare metal-as-a-service offering: Equinix sets up the interconnect plumbing and provides billing for compute, network and storage services. The Equinix Metal portfolio will include a range of services beyond Pure-aaS.

Metal is based on technology acquired via the 2020 takeover of Packet. Equinix is using Supermicro and Dell x86 servers, and 128-core Altra Arm servers from Ampere are on the way. The Dell servers are available through Dell’s APEX subscription plan, with Equinix offering Dell Bare Metal as a Service.

Metal also includes Cohesity’s Helios data platform software, and Mirantis Container Cloud supports Equinix Metal as a provider. Equinix is offering managed servers, storage arrays and data management software as services.

‘Leaner, better resourced’ HPE rides the post-pandemic waves

Antonio Neri

HPE CEO Antonio Neri yesterday hailed the company’s “strong Q1 performance…Our revenue exceeded our outlook and we significantly expanded our gross and operating margins to drive strong profitability across most of our businesses.”

Revenues for the quarter ended Jan 31 were almost back to pre-pandemic levels, down 1.7 per cent Y/Y to $6.8bn. Net income fell 33 per cent to $223m.


Neri summed HPE’s position up like this: “As a result of our cost optimisation and resource allocation program, we are emerging from an unprecedented crisis as a different company, one that is much leaner, better resourced and positioned to capitalise on the gradual economic recovery currently at play.”

Free cash flow in Q1 was a record and the company’s ongoing transition to a subscription-based business saw a 27 per cent increase in ARR (annualised revenue run rate) to $649m.

However, all of its business segments, with the exception of the Intelligent Edge, were flat or reported decreased revenues in Q1.

  • Intelligent Edge (mostly Aruba) – $806m – up 12 per cent Y/Y
  • Compute (servers) – $2.98bn – down 1 per cent
  • High Performance Compute & Mission-Critical Solutions – $762m – down 9 per cent
  • Storage – $1.19bn – down 5 per cent
  • Corporate Investments, etc. – $321m – down 4 per cent
  • Financial Services – $860m – flat Y/Y

The High Performance Compute business head, Peter Ungaro, ex-CEO of the acquired Cray, is leaving in April and will consult for HPE for six months. HPE said this business suffers from lumpy or uneven revenues, but it remains confident in both the near-term and long-term outlook. The backlog of awarded but not yet recognised business exceeds $2bn, including exascale systems and many multi-million dollar deals.

Storage

In storage, HPE continues to “see strong revenue growth in our own IP software-defined portfolio where we have been investing.” Specifically, the Primera array business grew triple digits Y/Y and is expected to exceed 3PAR revenues – possibly next quarter. Neri said this is the fastest revenue ramp in HPE’s storage portfolio. Primera launched in June 2019.

The overall HPE all-flash array business grew five per cent Y/Y, with Nimble and Primera arrays identified as driving this. The overall Nimble business was up 31 per cent Y/Y.

Neri said HPE’s hyperconverged strategy “continued to gain traction” but didn’t provide any numbers. He also said more storage operational services are being sold with the arrays, again without revealing numbers.

HPE has recorded storage revenue declines for five successive quarters, and Neri didn’t predict when storage growth might occur.

Outlook

HPE expects “to see gradual improvement in customer spending as we progress through fiscal year ’21 giving us the confidence in our ability to deliver on our long-term revenue growth guidance.” HPE is guiding 30-40 per cent CAGR from fy19 to fy22 and is raising its free cash flow outlook for fy2021.

Neri said: “In the core businesses of Compute and Storage, our strategy to grow in profitable segments and pivot to more as-a-service solutions is paying off.”

The outlook for the next quarter is for double-digit percentage revenue growth Y/Y from last year’s $6bn and mid single-digit decline from the current quarter’s $6.8bn. We calculate that to mean a rough range of $6.6bn to $6.7bn.

Financial Summary

  • Gross margin – 33.7 per cent, up 0.3 per cent Y/Y
  • Compute operating margin – 11.5 per cent, up 0.8 per cent Y/Y
  • Free cash flow – $563m, a record for HPE
  • Cash flow from operations – $1bn
  • Cash on hand at quarter-end – $4.2bn
HPE revenues by quarter, by fiscal year.

NetApp embraces K8s for hybrid cloud apps management, cans HCI appliance

NetApp is waving goodbye to the NetApp HCI appliance, launched only in 2017. The data storage vendor wants customers instead to join it in the hybrid cloud, using its software-only Project Astra. The company said it will reveal more details about the Kubernetes-focused project in coming weeks.

Eric Han

Eric Han, a product management VP at NetApp, told us: “The key thing here is we’ve seen HCI as important in the market when we started, but it’s a piece in the market that customers no longer need because it was meant to move people to the [hybrid] cloud and customers can do that without an appliance now.”


NetApp HCI end-of-life timelines

HCI and hybrid cloud

Han said the main focus for NetApp HCI is to help customers move applications to the hybrid cloud. But it has become clear that the application movement will take place using containers and Kubernetes on commodity hardware. “Application data management is the key next step in the Kubernetes evolution.” And this is why Astra is to become NetApp’s hybrid cloud enablement product, with Han saying the company will “double down on the investment.” We can expect this software-defined Astra story to be filled out with future infrastructure and platforms.

Application data management for hybrid clouds entails running apps in, and moving them between, private (on-premises) clouds and multiple public clouds, making the boundary porous to containerised apps. As we see it, the application data management layer provisions apps with compute, storage and networking resources, and also protection services.

B&F diagram illustrating migration from on-premises and separate public clouds to a unified public-private hybrid cloud

Han explained: “With containers, customers can now build with one operational playbook. And they can run in one public cloud, a second public cloud, and they can run on premises. That’s something that we haven’t had in the industry. It’s what Kubernetes gives us.”

It’s also an opportunity to bring NetApp’s data management to more customers through an Astra-managed Kubernetes portal.

We see the NetApp Data Fabric concept evolving towards a view of a unified hybrid cloud, a single app enablement, execution and protection space with containerised apps and their resources moving freely between private and public clouds.

NetApp HCI

NetApp HCI (hyperconverged infrastructure) launched in June 2017 as a set of compute servers and separate SolidFire Element all-flash storage nodes. This was the disaggregated HCI style, also espoused by HPE with its Nimble dHCI product, and by Datrium, since bought by VMware for its disaster recovery tech. The disaggregated HCI competitive advantage was the ability to scale compute and storage capacity independently, thus better fitting application workloads.

This did not become mainstream. Instead, disaggregated HCI became an adjunct to the massively bigger pure HCI market, which is dominated by VMware vSAN systems such as Dell EMC’s VxRail, and by Nutanix.

The SolidFire Element software was made available separately from NetApp hardware in October last year. This paved the way for the HCI appliance hardware to enter end-of-life.

The SolidFire arrays continue unchanged with support for third party servers and the existing long-term roadmap.

A year from now, the NetApp HCI appliance will no longer be available. During this year, critical new functionality and critical fixes will still be implemented. From year-end, customers will get three years’ software maintenance and five years’ hardware maintenance.

Your occasional storage digest with Arcserve, StorageCraft, DDN, and more

Old Arcserve and young StorageCraft – both second or third-tier data protection companies – are merging. Are they huddling together for warmth or is it a growth play? Two newer Kubernetes-focused startups are getting funding: StorageOS and Platform9.

Arcserve and StorageCraft merge

Veteran data protector Arcserve and relative newbie StorageCraft are merging. Terms were undisclosed. The two companies are both backed by private equity owners and say they will form a single entity providing workload protection throughout the data centre, in the cloud, for SaaS applications, and at the edge.

Arcserve UDP scheme.

Arcserve CEO Tom Signorello will run the combined company, with StorageCraft branded ‘StorageCraft, an Arcserve Company’. Matt Medeiros, StorageCraft’s CEO, will depart. Doug Brockett, its president, will stay in that role and report to Signorello.

Signorello said: “This merger will place us at the forefront of filling a massive market gap by supporting all workloads in every environment with one ecosystem. No longer will organisations require ad-hoc solutions that only add to the complexity they are trying to solve.”

Arcserve has UDP appliances that integrate backup, disaster recovery and backend cloud storage, and it partners with Sophos for security. It also has cloud and SaaS offerings. The smaller StorageCraft covers a larger geographic area with data protection, including backup for Office 365, G Suite and other SaaS options, and OneXafe, a converged, scale-out storage product.

The two will continue to support and invest in their existing products while looking to combine their IP and develop new functionality and services. They say this will enable a seamless evolution from current to next-generation infrastructures and data workloads, including hyper-converged, multi-cloud, containers, edge infrastructures, and next-generation cloud data centres.

N.B. Arcserve has announced UDP (Unified Data Protection) 8.0. It is designed to protect organisations’ entire infrastructure, including hyperconverged systems, from data loss, cybercriminals, and persistent threats like ransomware, using Sophos technology and AWS S3 immutable storage. It protects Nutanix Files and can use Nutanix Objects as a target. UDP 8.0 also protects Oracle database backups using RMAN.

StorageOS gets $10m B-round

StorageOS, a UK startup that provides virtual SANs for Kubernetes-orchestrated containers, has raised $10m in a B-series round. Total funding now stands at $20m. The investment will fund go-to-market activities and build out sales and other customer-facing teams.

The funding round was led by Downing Ventures, with current investors Bain Capital Ventures, Uncorrelated Ventures, MMC Ventures and new investor Chestnut Street Ventures all chipping in.

Alex Chircop, StorageOS Founder and CEO, said: “Securing a further significant round of funding is an important step in the development of the business and a huge vote of confidence in our team, technology and achievements to date.”

Salil Deshpande, General Partner at the Bain-backed Uncorrelated Ventures, said: “Storage subsystems need to be containerized – it’s the future – but not if it compromises performance. That was the real test: could the StorageOS container-based storage engine run and support very high performance workloads such as transactional databases?”

“Once I saw it in action, delivering stunning results, and consistently blowing away all the competition in independent benchmarks, I knew it was a winner.” 

Platform9 gets funding for Kubernetes offering

Managed Kubernetes provider startup Platform9 has raised $12.5m in D-round funding, taking the total raised to $37.5m.

Sirish Raghuram, Platform9 co-founder and CEO, said in a statement: “Kubernetes has become the de-facto standard for building out hybrid and edge applications. However, the journey to cloud native is fraught with complexity: developers need to understand micro-services, platform engineers need to operationalise Kubernetes, and ongoing upkeep of cloud native applications is extremely difficult.”

Platform9 thinks it can fix that and reported 145 per cent growth in ARR (Annual Recurring Revenue) in its fiscal 2020, and a 340 per cent increase in its customer base. Current Platform9 clients include Kingfisher Retail, Redfin, Cloudera, Yext, and Juniper Networks.

Shorts

DDN has announced a record $400m in annual revenues for 2020, along with its highest-ever profitability. The company said it delivered 52 per cent revenue growth from 2018 to 2020 under its DDN and Tintri brands. 2020 was the fifth consecutive year of customer expansion, revenue and profitability growth for DDN. It said it has an installed base of more than 11,000 customers.

HPE has acquired CloudPhysics, the developer of SaaS technology that monitors and analyses IT infrastructures, estimates the costs and viability of cloud migrations, and models customers’ IT infrastructure as a virtual environment. Potential infrastructure purchases can be checked for positive or negative ROI.

Kaseya has acquired RocketCyber and its cloud-agent security operations centre.

DataStax has announced Astra serverless, an open source, multi-cloud serverless database-as-a-service (DBaaS). It claims Astra delivers total cost of ownership (TCO) savings of up to 3-5 times over non-serverless database workloads.

Infinidat has expanded its alliance with VMware with new support for vSphere Virtual Volumes (vVols). InfiniBox is the first petabyte-scale storage platform available in the VMware Cloud Solutions Lab. 

TrueNAS SCALE 21.02 is now available from iXsystems. The software starts from the TrueNAS 12.0 base, which includes OpenZFS 2.0, all the file, block, and object storage services, the middleware to coordinate these, and the web UI.

Robin.io and IBM said Robin Cloud-Native Storage (Robin CNS) platform for Kubernetes is now part of IBM Cloud Satellite. Cloud Satellite lets users run IBM Cloud services on IBM Cloud, on-premises, or at the edge, with everything delivered as a service. 

Intel has released the 670p QLC 3D NAND SSD for client devices. It runs faster and lasts longer than its immediate predecessor and is available in capacities of up to 2TB. Apparently, this makes it the ‘ideal storage solution for thin-and-light laptops and also desktop PCs’.

Intel 670P.

The 670p has 512GB, 1TB and 2TB capacities and a PCIe 3.0 4-lane interface. It delivers up to 310,000/340,000 random read/write IOPS, and 3.5GB/sec sequential read and 2.7GB/sec sequential write throughput, using a dynamic SLC cache in all cases. Endurance is a nominal 0.2 drive writes per day and 740TB written at the 2TB capacity level.
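
The two endurance figures are mutually consistent, assuming the usual five-year client-SSD warranty window (our assumption), as a quick check shows:

```python
# DWPD-to-TBW sanity check, assuming a five-year warranty period (our assumption).
capacity_tb = 2.0               # 2TB model
dwpd = 0.2                      # rated drive writes per day
warranty_days = 5 * 365
tbw = dwpd * capacity_tb * warranty_days
print(f"{tbw:.0f} TB written")  # ~730TB, in line with the rated 740TB at 2TB
```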

Samsung has begun mass production of its PM9A3 E1.S format SSD, the ‘ruler’ replacement for M.2 gumstick-style SSDs. It has a PCIe gen 4 interface to its 128-layer TLC flash and delivers up to 750,000/160,000 random read/write IOPS and up to 3GB/sec sequential write bandwidth. Samsung isn’t revealing available capacity levels.

Microsoft has announced Surface Removable SSDs (rSSDs), enabling users and technicians to replace the SSDs on a Surface Pro 7+ laptop. SSD kits consist of a single certified refurbished SSD, plus an SSD screw. All volume sizes are available: 128GB, 256GB, 512GB or 1TB. At this time, the kits cover the Surface Pro 7+ only and will not work for the Surface Pro X or Surface Laptop Go. For now, the product is available in the US only, but Microsoft plans to ‘gradually roll out the kits to all Surface regions’.

Samsung PM9A3 E1.S SSD.

Kioxia says its lineup of CM6 and CD6 Series PCIe 4.0 NVMe enterprise and data centre SSDs have gained compatibility approval with Supermicro rackmount systems such as Ultra, WIO, BigTwin, FatTwin, SuperBlade, 1U/2U NVMe all flash arrays, GPU accelerated systems, and Super Workstations.

BT has added Acronis’s small business and small office/home office protection to its security services portfolio.

People

HPC storage supplier Panasas has hired Tod Ruff as VP Marketing, Brian Reed as VP Product and Alliances, and Richio Aikawa as Director of Vertical Solutions. They report to COO Brian Peterson. Tom Shea was promoted to Panasas CEO in September last year, and he hired Brian Peterson in November. The Panasas exec ranks are changing.

Sorin Faibish, a senior distinguished engineer in the Midrange Storage division at Dell EMC (meaning PowerStore), has been recruited by Cirrus Data Solutions as Director of Product Solutions.

Data warehouser Databricks has recruited Vinod Marur as SVP of Engineering to lead the global engineering team. Marur was SVP Engineering at Rubrik and an engineering VP at Google before that.

VAST Data has hired Sven Breuner as its Field CTO International. He was the field CTO for Excelero from Feb 2019 to Jan 2021. Breuner has been founder, CTO, CEO and board member at ThinkParQ from Oct 2013 to the present (board member only now).

Screaming AI: NetApp joins DDN, WekaIO and pals with Nvidia-validated setup

NetApp and Nvidia have developed an all-flash ONTAP array/DGX A100 GPU server AI reference architecture, pumping out up to 300GB/sec and joining DDN, Pavilion Data, VAST Data and WekaIO – which also have Nvidia-validated storage architectures.

The new ONTAP AI products come in three pre-configured sizes with 2, 4 and 8-node DGX A100 configurations. These are pre-tested and validated with AI/ML software from Domino Data Lab, Iguazio and other suppliers.

A NetApp post by Jason Blosil, a senior product marketing manager for cloud solutions, said of the move: “NetApp and Nvidia are bringing to market a new integrated solution that’s based on the field-proven NetApp ONTAP AI reference architecture. … It’s powered by NVIDIA DGX A100 systems that use second-generation AMD EPYC processors, NetApp AFF A-Series all flash storage, NVIDIA networking, and advanced software tools. You get new levels of performance and simplicity to help you more quickly deploy and operationalise AI.”

The AFF A800 array comes as a base unit, with an expansion chassis added for the 4-node config and a second for the 8-node config, according to a NetApp diagram.

NetApp diagram

The main components are:

  • NetApp AFF A800 all-flash array in a high-availability pair configuration, with 48 x 1.92TB NVMe SSDs
  • ONTAP v9
  • DGX A100 GPU server with 8 x A100 Tensor Core GPUs and 2 x AMD EPYC processors
  • Mellanox ConnectX-6 adapter interconnects – 100/200Gb Ethernet- and InfiniBand-capable
  • Mellanox Spectrum and Quantum switches, and
  • separate fabrics for compute-cluster interconnect and storage access.

This reference architecture does not use GPUDirect, the technology that bypasses the server’s CPU and DRAM.

A NetApp ONTAP AI document describing the DGX-A100 reference architecture says: “With NetApp all flash storage, you can expect to get more than 2GB/sec of sustained throughput (5GB/sec peak) with well under 1 millisecond of latency, while the GPUs operate at over 95 per cent utilisation. A single NetApp AFF A800 system supports throughput of 25GB/sec for sequential reads and 1 million IOPS for small random reads, at latencies of less than 500 microseconds for NAS workloads.”

DDN, WekaIO and VAST Data also have Nvidia-validated storage architectures for DGX-A100 servers. Pavilion Data does not offer an Nvidia DGX POD reference architecture, but it does support GPUDirect and the DGX A100. These four suppliers’ announced throughput numbers are, respectively:

  • DDN – 173.9GB/sec with an A1400X all-NVMe SSD system and the Lustre parallel file system, to a DGX-A100
  • Pavilion Data – 182GB/sec to a single DGX-A100
  • WekaIO – 163.2GB/sec to one DGX-A100 via 12 x HPE DL325 servers
  • VAST Data – 173.9GB/sec to four DGX-A100s

An Nvidia reference architecture document says: “With the FlexGroup technology validated in this [NetApp] solution, a 24-node cluster can provide over 20PB and up to 300GB/sec throughput in a single volume.”

That is – seemingly – comfortably more than the maximum bandwidth numbers announced by DDN, WekaIO and VAST Data.

In other words, ramp up the number of A800 arrays to lift throughput much higher. Confusingly, the actual reference architecture refers to a maximum of 8 x DGX-A100s and a single A800 high-availability pair, not two dozen A800 nodes. We’ve asked NetApp and Nvidia to clarify this point.
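
One plausible reconciliation, assuming throughput scales linearly across high-availability pairs (our assumption, not NetApp’s statement), is that the 300GB/sec figure is simply the single-pair number multiplied out:

```python
# Linear-scaling sanity check (our assumption, not NetApp's statement):
# each AFF A800 HA pair is quoted at 25GB/sec of sequential read.
per_ha_pair_gbs = 25
nodes = 24
ha_pairs = nodes // 2                        # one HA pair = two controller nodes
print(ha_pairs * per_ha_pair_gbs, "GB/sec")  # 300 GB/sec - the quoted cluster figure
```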

It is difficult to make exact comparisons between the NetApp, DDN, Pavilion, WekaIO and VAST Data systems, as the number of all-flash arrays and DGX-A100 servers differs between the various suppliers’ reference architecture configurations and explanatory documents.

Potential buyers would be well advised to extract such configuration details from the suppliers so as to make an informed choice.

More information is available in a blog from Nvidia and a very nicely detailed Nvidia reference architecture document (registration required).

Clumio lays off more staff as it shifts focus to cloud

Backup-as-a-Service startup Clumio has laid off more staff as it focuses on public cloud backup. 

Sources told Blocks and Files that two-thirds of its sales force have been laid off and job offers withdrawn from applicants. We understand that Clumio is rebalancing its business.

Poojan Kumar.

Poojan Kumar, Clumio co-founder and CEO, told us: “The time to shift our focus to the public cloud is now, and as CEO, I had to tighten in areas that were no longer a strategic focus, to do the hard, right thing for the company to ensure we can serve our customers in the long run.”

The SaaS data protection market is contested by several strong suppliers all seeking to capitalise on the move to the public cloud. These include Acronis, Cohesity, Commvault (Metallic), Druva, HYCU, Rubrik and Veeam (Kasten), among others.

Kumar told us: “Today starts a new chapter for Clumio with a mission to simplify data protection in the public cloud. Data protection in the public cloud is broken. Businesses are struggling with expensive, complex, and cobbled together technology. Operators lack the confidence in their systems to quickly restore from a major event, to know whether or not they are in compliance, or that their data is safe from bad actors.

“Clumio will not only focus on solving these problems, but we will also set forth on our goal to become the best ‘Public Cloud Data Protection as a Service’ platform on the planet.”

As a SaaS startup, Clumio protects data such as AWS EBS volumes, Microsoft 365 email and VMware VMs running in the public cloud against data loss and ransomware with a virtual air-gap technology. Originally it protected both private (on-premises) and public cloud applications.

Clumio revealed in January that: “We are amplifying our cloud efforts and restructuring a very small part of our sales team. We’ve actively communicated this with our internal team, and we look forward to delivering a customer-proven, cloud-agnostic SaaS solution for tomorrow’s cloud enterprise.”

HPE: No need to swap disk for SSDs in the enterprise

Some storage suppliers and analysts – Wikibon, VAST Data and StorONE, for example – are saying QLC (4bits/cell) flash drives can replace disk drives for enterprise data storage. Others say that such SSDs can replace disk drives in nearline storage arrays; an example of this viewpoint is Pure Storage, with its FlashArray//C arrays.

HPE provides both all-flash arrays and hybrid flash/disk arrays. We asked the firm’s Omer Asad, VP & GM, HCI, Primary Storage and Data Management Services, about HPE’s view on SSD and HDD use, and whether one would replace the other in enterprise storage.

Blocks and Files: How does HPE see the use of SSDs and HDDs evolving in enterprises? 

Omer Asad.

Omer Asad: SSDs are important in the primary storage space and whilst SSDs are becoming cheaper and cheaper, from growing benefits in production efficiencies, they are still more expensive on a $/GB basis when compared with HDDs, and therefore we expect to continue to have hybrid models with HDDs and SSDs for some time. Although it has been predicted for many years that SSDs will eventually replace HDDs, we still see a lot of value coming from the hybrid model.

Clearly a lot depends on the workload and specific use case. For example, online transaction processing workloads with higher performance requirements have already moved to all-flash and are not going back to HDDs, whereas HDD technology remains most cost-effective for long-term storage.

HDDs will continue to play an important role in data centres for secondary and tertiary, price-sensitive application environments.

Pure HDD-based systems in primary storage are on their way out, but HDDs will usually be found in most enterprises in a hybrid model together with SSDs – until price-performance parity is realised by lower cost/endurance flash media. HPE Nimble Storage delivers the performance of all-flash at the economics of HDD and is a classic example of how general purpose workloads are likely to employ a mix of SSDs and HDDs.

Blocks and Files: Will SSDs replace nearline HDDs? 

Omer Asad: HPE Nimble Storage delivers the performance of all-flash at the economics of HDD and therefore the price-performance of Nimble Hybrid Flash far exceeds Pure Storage’s QLC solution. 

Using QLC/PLC (5bits/cell) SSD drives does not make sense when you have such a market-leading hybrid architecture. The advantages that HPE has with Nimble Storage uniquely shape our answers to these questions. Today, TLC or even QLC drives do not offer a similar level of $/GB to nearline drives. Therefore, a combination of SSDs plus nearline HDDs will always present a more compelling option to customers running their general purpose workloads than an all-flash solution, even if they employ lower-cost SSD alternatives such as QLC and/or PLC SSDs.

Single-tier all-flash arrays are a feature of HPE’s current storage line-up, but because different secondary and tertiary workloads have different requirements, we believe a hybrid architecture of HDDs/SSDs will be the future, and HDDs will continue to play an important role in data centres for secondary and tertiary, price-sensitive application environments for the foreseeable future.

Blocks and Files: Will penta-level cell flash play a role in enterprise SSDs? 

Omer Asad: We have seen lots of companies launching QLC SSDs and discussions are growing around penta-level cell (PLC – 5bits/cell) technologies. Thus far there is no mainstream vendor offering PLC SSDs, although they are intended to be a cheaper SSD technology.

We are finding that the costs of QLC SSDs are not as low as we’d like them to be – at the moment! When QLC SSD prices drop low enough to create a compelling business case for us, opportunities for penta-level flash may open up at that point. 

While QLC intends to offer lower cost points … it comes with caveats around lower performance and lower endurance. So the use cases for QLC SSDs are somewhat limited and in many cases resemble the use cases for HDD-based systems. Given the use cases are similar when employing QLC and HDD, cost now becomes a major point of differentiation and, in that respect, QLC SSDs are still way more expensive than HDDs.

Comment

HPE’s position is unambiguous. Disk drives are cheaper than SSDs, even QLC SSDs, and it’s also selling Nimble’s storage array architecture, which delivers the performance of all-flash at the economics of HDD, or at least closer to it. Ergo, from its point of view, there is no need to consider exchanging disk drives for SSDs unless the SSD price falls to or below the cost of disk.
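
To put rough numbers on that argument, here is an illustrative comparison using assumed street prices; the figures are our assumptions, not HPE’s:

```python
# Illustrative $/GB gap between nearline disk and QLC flash (assumed prices).
nearline_hdd_per_gb = 0.025   # assumed ~$25/TB for nearline disk
qlc_ssd_per_gb = 0.10         # assumed ~$100/TB for QLC flash
print(f"QLC premium: {qlc_ssd_per_gb / nearline_hdd_per_gb:.1f}x per GB")  # 4.0x
```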

Quantum beats WekaIO in SPEC video benchmark

Quantum StorNext is faster than WekaIO in the SPEC SFS video workload benchmark, recording the top numbers for video stream processing, response time and bandwidth.

Brian Pawlowski

Brian Pawlowski, Quantum’s chief development officer, said the test results “clearly demonstrate that StorNext is the fastest file system on the planet for video workloads. And thanks to the architecture of the StorNext File System, it achieved these record-breaking results with substantially less hardware than the nearest competitor.” 

StorNext is a scale-out, multi-tiered file data management product and parallel access file system designed for entertainment and media applications. SPEC is the Standard Performance Evaluation Corporation, a non-profit body formed to establish, maintain and endorse standardised benchmarks and tools to evaluate performance and energy efficiency.

The SPEC SFS 2014 benchmark tests four aspects of a filer’s performance:

  • Number of simultaneous builds that can be done (software builds)
  • Number of video streams that can be captured (VDA)
  • Number of simultaneous databases that can be sustained
  • Number of virtual desktops that can be maintained (VDI)

As detailed by a SPEC document, the StorNext V7.0.1 parallel file system software accessed files on ten Quantum all-flash F-Series systems. These each had ten Micron 15.36TB 9300 NVMe SSDs, 153.6TB per node, for a total of approximately 1.5PB of flash.

There were 14 client systems running Quantum Xcellis Workflow software and a separate metadata-controlling server. A 32-port 100GbitE Arista switch connected the storage and the accessing servers. Clients and the StorNext system were connected via iSER (iSCSI Extensions for RDMA).

We have included the Quantum results in a table listing other suppliers’ VDA numbers.

ORT is Overall Response Time and is a measure of latency. Streams refers to the number of video streams that can be processed. A chart compares the MB/s numbers from each supplier and shows how WekaIO and Quantum have surged ahead of the competition.

Scale-out parallel file systems and NVMe SSDs provide a winning combination at this benchmark and, by extension, in video workflow applications.