
Storage news roundup – March 1

Newspaper sellers on Brooklyn Bridge

Avery Design Systems has announced a validation suite supporting the Compute Express Link (CXL) industry-standard interconnect. It enables rapid and thorough interoperability validation and performance benchmarking of systems targeting the full range of CXL standard versions, including 1.1, 2.0 and 3.0. The suite covers both pre-silicon virtual and post-silicon system platforms.

Avery CXL Validation Stack.

Data lake supplier Dremio has rolled out new features in Dremio Cloud and Software. Expanded Apache Iceberg functionality includes copying data into Apache Iceberg tables, optimizing those tables, and table rollback. Customers can create data lakehouses by loading data into Apache Iceberg tables at speed, and can query and federate across more data sources with Dremio Sonar. Dremio is also accelerating its data-as-code management capabilities with Dremio Arctic – a data lakehouse management service that features a lakehouse catalog and automatic data optimization to make it easier to manage large volumes of structured and unstructured data on Amazon S3.

Sheila Rohra.

Hitachi Vantara has appointed Sheila Rohra as its Chief Business Strategy Officer, reporting to CEO Gajen Kandiah and sitting on the company’s exec committee. CBSO is a novel title. Kandiah said: “Sheila has repeatedly demonstrated her ability to identify what’s next and create and execute a transformative strategy with great success. With her industry expertise and technical understanding of the many elements of our business – from infrastructure to cloud, everything as a service (XaaS), and differentiated services offerings – I believe Sheila can help us design a unified corporate strategy that will address emerging customer needs and deliver high-impact outcomes in the future.” Rohra joins from HPE, where she was SVP and GM of its data infrastructure business, focused on providing primary storage with cloud-native data infrastructure and hyperconverged infrastructure to Fortune 500 companies.

Huawei has launched several storage products and capabilities at MWC Barcelona. They include a Blu-ray system for low-cost archiving; OceanDisk, “the industry’s first professional storage for diskless architecture with decoupled storage and compute and data reduction coding technologies, reducing space and energy consumption by 40 percent”; four-layer data protection policies with ransomware detection, data anti-tamper, an air-gap isolation zone, and end-to-end data breach prevention; and a multi-cloud storage solution, which supports intelligent cross-cloud data tiering and a unified cross-cloud data view. OceanDisk refers to two OceanStor Micro 1300 and 1500 2RU chassis holding 25 or 36 NVMe SSDs respectively, with NVMe-oF access. We’ve asked for more information about the other items.

Data migrator Interlock says it is able to migrate data from any storage (file/block) to any storage, any destination and for any reason. Unlike competitors, Interlock can migrate data from disparate vendors as well as across protocols (NAS to S3). It is able to perform data transformation necessary to translate data formats and structures of one vendor/protocol to another. Interlock says it can extract data from an application if given access to storage. This allows Interlock to migrate data at the storage layer, which is faster than through the application.

Interlock migrates compliance data with auditability and can “migrate” previously applied retention settings. Typically, when migrating data across different storage systems, built-in data protections like snapshots are lost. With Interlock, snapshots and labels, for example, may be migrated along with the data. Migrations are also complicated by resource constraints such as bandwidth and CPU/memory bottlenecks. Interlock tracks utilization (when the system is busy and so on) and adjusts the number of migration threads accordingly, which also helps reduce the required cutover time.
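Interlock has not described its scheduler in detail, so the following is only a rough sketch of the idea – throttle the number of copy threads on host utilization – with illustrative thresholds and pool sizes rather than Interlock's real parameters:

```python
import threading
import time

import psutil  # third-party dependency: pip install psutil

MAX_WORKERS = 16
MIN_WORKERS = 2
CPU_HIGH = 85.0  # percent; back off above this
CPU_LOW = 50.0   # percent; ramp up below this

def choose_worker_count(current: int) -> int:
    """Shrink or grow the copy-thread pool based on host CPU utilization."""
    cpu = psutil.cpu_percent(interval=1.0)
    if cpu > CPU_HIGH and current > MIN_WORKERS:
        return current - 1
    if cpu < CPU_LOW and current < MAX_WORKERS:
        return current + 1
    return current

def migrate(files, copy_one):
    """Drain the migration queue with a pool whose size tracks system load."""
    pending = list(files)
    workers = MIN_WORKERS
    while pending:
        workers = choose_worker_count(workers)
        batch, pending = pending[:workers], pending[workers:]
        threads = [threading.Thread(target=copy_one, args=(f,)) for f in batch]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

if __name__ == "__main__":
    migrate([f"file-{i}" for i in range(10)],
            lambda name: time.sleep(0.1))  # stand-in for an actual copy routine
```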

Nyriad, which supplies disk drive-based UltraIO storage arrays with GPU controllers, is partnering with systems integrator DigitalGlue, whose creative.space platform aims to make enterprise storage simple to use and manage. Sean Busby, DigitalGlue’s President, said: “DigitalGlue’s creative.space software coupled with Nyriad’s UltraIO storage system offers high performance, unbeatable data protection, and unmatched value at scale.” Derek Dicker, CEO, Nyriad, said: “Performance rivals flash-based systems, the efficiency and resiliency are equal to or better than the top-tier storage platforms on the market – and the ease with which users can manage multiple petabytes of data is extraordinary.” The UltraIO system can reportedly withstand up to 20 drives failing simultaneously with no data loss while maintaining 95 percent of its maximum throughput.

Veeam SI Mirazon has selected Object First and its Ootbi object storage-based backup appliance as the only solution that met all its needs. Built on immutable object storage technology, Ootbi was racked, stacked, and powered up in 15 minutes. Mirazon says that, with Object First, it can shield its customers’ data against ransomware attacks and malicious encryption while eliminating the unpredictable and variable costs of the cloud.

Data integrator and manager Talend has updated its Talend Data Fabric, adding more AI-powered automation to its Smart Services to simplify task scheduling and orchestration of cloud jobs. The new release brings certified connectors for SAP S/4HANA, and SAP Business Warehouse on HANA, enabling organizations to shift critical workloads to these modern SAP data platforms. The release supports ad platforms such as TikTok, Snapchat, and Twitter, and modern cloud databases, including Amazon Keyspaces (for Apache Cassandra), Azure SQL Database, Google Bigtable, and Neo4j Aura Cloud. The addition of data observability enables data professionals to automatically and proactively monitor the quality of their data over time and provide trusted data for self-service data access. More info here.

Veeam has launched an updated SaaS offering – Veeam Backup for Microsoft 365 v7 – enabling immutability, delivering advanced monitoring and analytics across the backup infrastructure environment, along with increased control for BaaS (backup as a service) through a deeper integration with Veeam Service Provider Console. It covers Exchange Online, SharePoint Online, OneDrive for Business and Microsoft Teams. Immutable copies can be stored on any object storage repository, including Microsoft Azure Blob/Archive, Amazon S3/Glacier and S3-compatible storage with support for S3 Object Lock. Tenants have more self-service backup, monitoring and restore options to address more day-to-day requirements. Veeam Backup for Microsoft 365 v7 is available now and may be added to the new Veeam Data Platform Advanced or Premium Editions as a platform extension or operate as a standalone offering.
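Veeam manages this through its own backup repositories, but the S3 Object Lock mechanism that underpins such immutability is easy to see in a minimal boto3 sketch (the bucket name, object key and 30-day retention below are illustrative assumptions; the bucket must have been created with Object Lock enabled):

```python
from datetime import datetime, timedelta, timezone

import boto3  # third-party dependency: pip install boto3

# Hypothetical names, not Veeam's; works against AWS or any S3-compatible endpoint.
BUCKET = "m365-backups"
KEY = "exchange/2023-03-01.vbk"

s3 = boto3.client("s3")  # pass endpoint_url=... for non-AWS S3-compatible storage

retain_until = datetime.now(timezone.utc) + timedelta(days=30)

s3.put_object(
    Bucket=BUCKET,
    Key=KEY,
    Body=b"backup bytes would go here",
    ObjectLockMode="COMPLIANCE",             # WORM: retention cannot be shortened or removed
    ObjectLockRetainUntilDate=retain_until,  # object is immutable until this timestamp
)
```

Until the retention date passes, delete and overwrite requests against that object version are rejected, which is what makes the backup copy ransomware-resistant.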

AIOps supplier Virtana has announced a Capacity Planning offering as part of the infrastructure performance management (IPM) capabilities of the Virtana Platform. Companies get access to real-time data for highly accurate and reliable forecasts. Jon Cyr, VP of product at Virtana, said: “You’ll never be surprised by on-prem or cloud costs again.”

A Wasabi Vanson Bourne survey found 87 percent of EMEA respondents migrated storage from on-premises to public cloud in 2022, and 83 percent expect the amount of data they store in the cloud to increase in 2023. Some 52 percent of EMEA organizations surveyed reported going over budget on public cloud storage spending over the last year. Top reasons for EMEA orgs exceeding budget included: storage usage was higher than anticipated (39%); data operations fees were higher than forecast (37%); additional applications were migrated to the cloud (37%); storage list prices increased (37%); and data retrieval (35%), API call (31%), egress (26%) and data deletion (26%) fees were higher than expected. Overall, EMEA respondents indicate that, on average, 48 percent of their cloud storage bill is allocated to fees and 51 percent to storage capacity.

WCKD RZR intros silo-busting data platform

UK startup WCKD RZR has unveiled Data Now software at Mobile World Congress in Barcelona that it says catalogs different databases anywhere in a customer’s network and gives them instant access to all their databases’ content.

A book library has a catalog – a single source of truth about the books it stores. It has a few buildings, one type of thing to catalog, and a homogeneous user population. A multinational organization like a bank is light years away from that happy state. WCKD RZR wants to move it from silo proliferation and data ignorance to the comfort of data asset knowledge and governed access.

Chuck Teixeira, founder and CEO of WCKD RZR, explained in a statement: “In Data Now, we’ve created the universal ‘master key’ for data discovery. Our goal is to revolutionize the way businesses manage and access their data, and our solution does just that. It’s truly disruptive and can benefit every large organization on the planet. Whether it’s multinational banks, government institutions or regional retailers, Data Now acts as their supercharged data connector and access accelerator.”

Data Now provides a central location for businesses to see, search and access data across an entire organization. Once discovered, the software enables users to view and download data from multiple databases, in any environment around the world, seamlessly and in full compliance with all global data regulations.

Why is this a big deal?

A bank like HSBC, Barclays or Citibank has hundreds of separate data silos holding data for specific applications that the bank has developed in specific geographies, each with its own regulations and data access rules. It can be dealing with multinational customers whose myriad operations have a presence in some or many of these silos.

If we ask the question “What data does the bank hold on that customer?” the typical answer is: “It doesn’t know” – because it can’t find out. Each data silo is its own micro-universe with its own access rules, data element names, data formats, storage system and its own management.

John Farina and Chuck Teixeira, WCKD RZR

The WCKD RZR story began when HSBC’s UK and US operations entered a deferred prosecution agreement with the US Justice Department in 2012 because it had failed to maintain an effective anti-money laundering program and to conduct appropriate due diligence on its foreign account holders. 

It forfeited $1.256 billion, paid $665 million in civil penalties, and had to update its records so it could monitor potential money laundering attempts. CTO Jon Farina told us: “You want to make sure that you can monitor transactions, credit card payments and flows of cash across our internal systems to make sure that something nefarious is not being done.”

This involved collating some 10PB of data covering 1.6 million clients in 65 legal jurisdictions. Teixeira and Farina, who were working at HSBC at the time, had the job of combing through the many data silos involved and creating, as it were, a single and accessible source of truth.

It was as if they were standing on top of Nelson’s Column in London’s Trafalgar Square, surveying hundreds of different buildings, and saying they had to get into each and every one and find the data inside.

They built a Hadoop system on-premises at HSBC with tens of thousands of tables and machine learning software to detect transactions across different clients to spot potential financial crimes. This was an absolutely massive extract, transform and load (ETL) operation, and they wanted it automated. But there was no software to do that. They realized that it was, in fact, a general problem, not one unique to HSBC.

They also thought it could be automated if connectors were built to the underlying silos and their contents cataloged and indexed, with their access routes and restrictions discovered as well. All this metadata could be entered into a single database – an abstraction layer through which the underlying data silos could be virtualized into a single entity without their data contents being moved or migrated anywhere.

This realization triggered Teixeira into starting WCKD RZR – named after his pet bulldog – in 2020, with Farina joining as CTO in May last year, when the company raised $1.2 million in a pre-seed round.

Farina briefed us on this background and on WCKD RZR’s software development. We constructed a diagram showing the basic structure of WCKD RZR’s Data Watchdog technology:

Blocks & Files’ Data Watchdog/Data Now diagram

Clients access the catalog, search for data, then request it. Their access status is checked and, if valid, Data Watchdog will fetch the data from the underlying sources and deliver it to them.

There are three aspects to this: find, govern and access. Data Watchdog enables customers to find and govern their data in each country, in real time, and be fully compliant with relevant data sharing, privacy and governance rules. It spiders through the underlying data sources – in minutes it’s claimed – and adds them to the catalog, without touching, transforming or duplicating the original data sources. The Data Now software provides access to the data located by Data Watchdog and can mask sensitive information such as debit card numbers.
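WCKD RZR has not published its API, so the following is only a toy sketch of the three steps as described – catalog lookup, policy check, then masked access – with invented silo names, roles and a crude card-number pattern:

```python
import re

# Toy metadata catalog: silo name -> fields and access policy (all names are hypothetical).
CATALOG = {
    "uk_payments": {"fields": ["customer_id", "debit_card", "amount"],
                    "allowed_roles": {"uk_analyst"}},
    "us_lending":  {"fields": ["customer_id", "loan_id", "balance"],
                    "allowed_roles": {"us_analyst", "uk_analyst"}},
}

MASK = re.compile(r"\b\d{12,19}\b")  # crude debit/credit card number pattern

def find(field: str):
    """'Find': which silos hold a given field, answered from the catalog alone."""
    return [name for name, meta in CATALOG.items() if field in meta["fields"]]

def fetch(silo: str, role: str, rows, mask=True):
    """'Govern' then 'access': enforce the silo policy, then return masked rows."""
    if role not in CATALOG[silo]["allowed_roles"]:
        raise PermissionError(f"{role} may not read {silo}")
    if not mask:
        return rows
    return [{k: MASK.sub("****", str(v)) for k, v in row.items()} for row in rows]

# Example: locate customer data, then read it with card numbers masked.
print(find("debit_card"))                     # ['uk_payments']
print(fetch("uk_payments", "uk_analyst",
            [{"customer_id": 42, "debit_card": "4000123412341234", "amount": 9.99}]))
```

The point of the architecture is that only the metadata lives centrally; the rows themselves stay in their source silos until a governed request fetches them.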

Farina said: “Data Now is a full-service data access accelerator. We are revolutionizing the way organizations can search multiple databases for information that they hold. Now they can search it, see it, find it, use it, monetize it. Mobile phones were transformed by the iPhone, video rentals were redefined by Netflix, and data access is now being revolutionized by Data Now.”

There is no need to migrate data from multiple sources into a single mega-database in a digital transformation project. There are aspects of this which are similar to the data orchestration provided by Hammerspace, but WCKD RZR is focused more on source databases rather than raw file or object data storage systems.

Pinecone: Long-term memory for AI

Pine cones (Wikipedia public domain image)

Startup Pinecone, which provides a vector database that acts as the long-term memory for AI applications, has hired former Couchbase CEO and executive chairman Bob Wiederhold as president and COO after 15 months of acting as advisor and board member.

A long-term memory for AI apps sounds significant, but is it? Why do such apps need a special storage technology? Pinecone supplies a vector database, and vector databases are used in AI and ML applications such as semantic search and chatbots, product search and recommendations, cybersecurity threat detection, and so forth. After one year of general availability Pinecone says it has 200 paying customers, thousands of developers, and millions of dollars in annual recurring revenue (ARR).

Edo Liberty, Pinecone founder and CEO, said in a statement: “To maintain and even accelerate our breakneck growth, we need to be just as ambitious and innovative with our business as we are with our technology. Over the past 15 months I’ve come to know Bob as one of the very few people in the world who can help us do that.”

The key Pinecone technology is indexing for a vector database.

A vector database has to be stored and indexed somewhere, with the index updated each time the data is changed. The index needs to be searchable and help retrieve similar items from the search; a computationally intensive activity, particularly with real-time constraints. That indicates the database needs to run on a distributed compute system. Finally, this entire system needs to be monitored and maintained.

Edo Liberty, Pinecone

Liberty wrote: “There are many solutions that do this for columnar, JSON, document, and other kinds of data, but not for the dense, high-dimensional vectors used in ML and especially in Deep Learning.” The vector database index – the reason he founded Pinecone was to create an indexing facility – needed to be built in a way that was generally applicable and facilitated real-time search and retrieval.

When AI/ML apps deal with objects such as words, sentences, multimedia text, images, video and audio sequences, they represent them with arrays of numeric values that capture attributes of a complex data object, such as color, physical size, surface light characteristics, audio spectrum at various frequency levels and so on.

These object descriptions are called vector embeddings and are stored in a vector database, where they are indexed so that similar objects can be found through index searching. A search is not run on direct user-input data such as keywords or metadata classifications for the stored objects. Instead, we understand, the search term is processed into a vector using the same AI/ML system used to create the object vector embeddings. A search can then look for identical and similar objects.
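Pinecone's index is proprietary, but the search pattern described above – encode the query with the same model, then rank stored vectors by similarity – can be shown with a brute-force cosine-similarity sketch (the embeddings and dimensions are made up; production systems use approximate nearest-neighbour indexes instead of exhaustive comparison):

```python
import numpy as np

# Toy vector store: in practice these embeddings would come from the same
# model that encodes the query. Values here are invented for illustration.
ids = ["doc-a", "doc-b", "doc-c"]
embeddings = np.array([
    [0.12, 0.88, 0.05, 0.40],
    [0.90, 0.10, 0.30, 0.02],
    [0.15, 0.80, 0.10, 0.35],
])

def top_k(query: np.ndarray, k: int = 2):
    """Exact cosine-similarity search over every stored vector."""
    norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query)
    scores = embeddings @ query / norms
    best = np.argsort(scores)[::-1][:k]
    return [(ids[i], float(scores[i])) for i in best]

query_vector = np.array([0.10, 0.85, 0.07, 0.38])  # embedding of the search term
print(top_k(query_vector))  # doc-a and doc-c are the nearest neighbours
```

Exhaustive comparison like this scales linearly with the number of vectors, which is why dedicated vector databases rely on approximate nearest-neighbour indexing to keep search latency low at billion-vector scale.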

Pinecone was founded in 2019 by Liberty, an ex-AWS director of research and one-time head of the AI Labs whose work led to the creation of Amazon SageMaker. He spent just over two and a half years at AWS after almost seven years at Yahoo! as a research scientist and senior research director. Pinecone raised $10 million in seed funding in 2021 and $28 million in an A-round in 2022.

In a 2019 blog, Liberty wrote: “Machine Learning (ML) represents everything as vectors, from documents, to videos, to user behaviors. This representation makes it possible to accurately search, retrieve, rank, and classify different items by similarity and relevance. This is useful in many applications such as product recommendations, semantic search, image search, anomaly detection, fraud detection, face recognition, and many more.”

Pinecone’s indexing uses a proprietary nearest-neighbor search algorithm that is claimed to be faster and more accurate than any open source library. Its design is also claimed to provide consistent performance regardless of scale, with dynamic load balancing, replication, name-spacing, sharding, and more.

Bob Wiederhold, Pinecone

Vector databases are attracting a lot of attention. Zilliz raised $60 million for its cloud vector database technology in August last year. And we wrote about Nuclia, the search-as-a-service company in December. Wiederhold’s transition from advisor and board member to a full-on operational COO role indicates he shares that excitement.

He said: “There is incredibly rapid growth across all business metrics, from market awareness to developer adoption to paying customers using Pinecone in mission-critical applications. I am ecstatic to join such an elite company operating in such a critical and growing market.”

WekaIO’s stance on sustainable AI puts down roots

WekaIO wants us to be aware of datacenter carbon emissions caused by workloads using its software technology – AI, machine learning and HPC – and aims to counter those emissions with a sustainable AI initiative.

It says that although these technologies can power research, discoveries, and innovation, their use is also contributing to the acceleration of the world’s climate and energy crises. WekaIO wants to collaborate with leaders in the political, scientific, business, and technology communities worldwide to promote more efficient and sustainable use of AI. What it is actually doing now, as a first step, is planting 20,000 trees in 2023 and committing to plant ten trees for every petabyte of storage capacity it sells annually in the future, by partnering with the One Tree Planted organization.

Weka president Jonathan Martin said: “Our planet is experiencing severe distress. If we do not quickly find ways to tame AI’s insatiable energy demands and rein in its rapidly expanding carbon footprint, it will only accelerate and intensify the very problems we hoped it would help us solve.”

Is WekaIO just greenwashing – putting an environmentally aware marketing coat around more or less unchanged carbon-emitting activities?

Weka’s software indirectly contributes to global warming through the carbon emissions of the servers on which it runs, which use electricity for power and cooling. How much carbon do they emit?

One estimate by goclimate.com says emissions from a Nordic on-premises or datacenter server are 975kg CO2-eq/year, assuming the servers don’t use green electricity from wind farms and solar power. How many trees are needed to absorb that?

A tree absorbs an average of 10kg, or 22lb, of carbon dioxide per year for its first 20 years, according to One Tree Planted. So absorbing one server’s annual emissions would take 97.5 trees. If a petabyte of Weka-managed storage runs across eight servers, emitting 7,800kg of carbon per year, WekaIO would need to plant 780 trees per petabyte per year.

But WekaIO is planting 20,000 trees in 2023, which would absorb 200,000kg of CO2 per year – a real contribution rather than a token gesture.
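The arithmetic is simple enough to check (all inputs are the estimates quoted above, not audited figures):

```python
# Reproducing the back-of-envelope numbers above.
kg_per_server_per_year = 975   # goclimate.com estimate for a Nordic datacenter server
kg_per_tree_per_year = 10      # One Tree Planted absorption figure (first 20 years)
servers_per_petabyte = 8       # assumption used in the text

trees_per_server = kg_per_server_per_year / kg_per_tree_per_year     # 97.5
kg_per_petabyte = kg_per_server_per_year * servers_per_petabyte      # 7,800
trees_per_petabyte = kg_per_petabyte / kg_per_tree_per_year          # 780
absorbed_by_pledge = 20_000 * kg_per_tree_per_year                   # 200,000 kg/year

print(trees_per_server, trees_per_petabyte, absorbed_by_pledge)
```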

Object storage supplier Scality has previously got involved in reforestation. It seems a good idea and perhaps there’s scope here for organized storage industry action.

Martin said: “We also recognize that reforestation is only one piece of the decarbonization puzzle. It would be tempting to stop there, but there is much more work to be done. The business, technology, and scientific communities must work together to find ways to make AI and the entire enterprise data stack more sustainable. Weka is committed to doing its part to help make that happen. Watch this space.”

Learn more about Weka’s thinking here and here.

Comment

The Storage Networking Industry Association has its Emerald initiative to reduce storage-caused datacenter carbon emissions. Pure Storage also has a strong emphasis on carbon emission reduction via flash drive use, although its all-flash product sales are obviously improved if storage customers buy fewer disk drives.

Weka’s green stance is not compromised by such concerns. Balancing storage supplier business interests and environmental concerns, without demonizing particular technologies, is going to be a hard nut to crack and getting a storage industry-wide consensus may be impossible. But a storage or IT industry-wide commitment to reforestation, via perhaps an agreed levy on revenues, might be feasible. Let’s watch this space, as Martin suggests.

Storage news ticker – February 27

Data catalog and intelligence tech supplier Alation has announced Alation Marketplaces, a new product offering third-party data sets to augment existing data in the Alation Data Catalog for richer analysis. Additionally, the company expanded Alation Anywhere to Microsoft Teams and Alation Connected Sheets to Microsoft Excel to help users access contextual information from the catalog directly within their tool of choice.

SaaS data protector Druva has been awarded a patent, US11221919B2, for its smart folder scan technology. It assumes users have multiple snapshots – AWS EBS volumes, for example – and need to restore files from these snapshots. Instead of going over each point-in-time copy (snapshot) and checking for a file, a more efficient approach is to create a filename-based index of the files post-snapshot creation. A lookup of the index should help identify which snapshot contains the file (or associated versions). Index creation could further be optimized for resource consumption by performing a smart scan instead of a full scan to identify the changed files in incremental snapshots, we’re told.
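The patent's core idea – index once, look up many times – can be illustrated with a toy filename index (the snapshot IDs and paths below are invented):

```python
from collections import defaultdict

# Hypothetical snapshot listings: snapshot id -> files it contains.
# In practice only the *changed* files in an incremental snapshot would be scanned.
snapshots = {
    "snap-001": ["/var/app/config.yml", "/var/app/data.db"],
    "snap-002": ["/var/app/config.yml", "/var/app/data.db", "/var/app/report.csv"],
    "snap-003": ["/var/app/data.db"],
}

# Build the filename -> snapshots index once, after snapshot creation...
index = defaultdict(list)
for snap_id, files in snapshots.items():
    for path in files:
        index[path].append(snap_id)

# ...so a restore becomes an index lookup instead of a scan of every snapshot.
print(index["/var/app/report.csv"])  # ['snap-002']
```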

An ESG survey has found that data loss from the public cloud is common due to a multitude of causes. The biggest culprit was SaaS applications: 42 percent of respondents experienced sensitive data loss from their SaaS platforms. The most common contributors to cloud-resident sensitive data loss were misconfigurations of services, policy violations, and access controls/credentials issues. Download the report here.

Storage CEO album

ExaGrid CEO Bill Andrews has released a free-to-stream easy-listening acoustic rock album, Warriors in the Woods, with him singing and playing guitar. Who would have thought it!

High-end enterprise array supplier Infinidat has promoted Steve Sullivan, EVP and GM Americas, to chief revenue officer. CEO Phil Bullinger said: “Steve brings to the CRO role a track record of building and leading high-performing teams that have set a very high standard for customer success and delivering technical and business value in enterprise storage. His leadership in the expanded CRO role will accelerate our growth as we scale our global account and partner relationships through collaborative and cohesive go-to-market strategies and programs.”

TrueNAS company iXsystems has announced its Mini R array, offering 12 lockable and hot-swappable 3.5” drive bays providing more than 200TB of capacity when fully populated, plus the option of 2.5” SATA SSDs for more than 90TB of flash storage. This larger Mini R joins the Mini X, X+, and XL+ systems in the entry-level TrueNAS Mini series. It is geared for small and home offices and can also serve in parts of enterprise deployments for remote sites, backup, and non-critical departmental applications. It is priced at just under $2,000. The latest BlueFin version 22.12.1 of its TrueNAS Scale Linux OS is now generally available for free download here and comes pre-installed on TrueNAS Enterprise appliances with support.

iXsystems Mini R storage chassis

Privately owned Komprise says subscription revenues in 2022 more than doubled for a third consecutive year. There was 50 percent growth in the Komprise Global File Index, which consists of hundreds of billions of files, providing customers a Google-like search across their entire data estate to find, tag and mobilize unstructured data. Some 30 percent of revenues came from expansions, indicating strong customer satisfaction and loyalty, and Komprise has a 120 percent net dollar retention rate. There was 200 percent growth in the number of organizations using Komprise for data migrations to a new NAS or for cloud data migrations. More than 100 Microsoft customers migrated data to Azure using Komprise during the first 12 months of the Azure File Migration program, we’re told, which funds customer use of Komprise.

Data protection supplier N-able revenues for 2022’s fourth quarter showed a 7 percent year-on-year rise to $95.8 million, with a $7 million net profit versus $2.06 million. Full year revenues were $371.8 million, up 7 percent on 2021’s $346.5 million, with a net profit of $16.7 million, up from $113,000. William Blair analyst Jason Ader commented: “Management noted broad-based demand across its portfolio as MSPs and SMBs continue to prioritize investments in data protection, security, and device monitoring, even in a tougher macro environment.”

NetApp has effectively stopped further development of its SolidFire all-flash arrays. CEO George Kurian said in its latest results that the company had decided to reduce investment in products with smaller revenue potential like Astra Data Store and SolidFire. He added: “We had a small business in SolidFire that we continue to sustain, but we don’t plan to grow going forward.” NetApp bought startup SolidFire for $870 million in cash in 2015. Its technology was eclipsed by NetApp’s internally developed ONTAP all-flash arrays, which currently have a $2.8 billion annual run rate.

HPC scale-out and parallel file system supplier Panasas has run a survey by Vanson Bourne of hundreds of IT decision makers working at enterprises with more than 1,000 employees across the US, UK, and Germany. It looked at the problems enterprises face in building and managing storage infrastructure for high-performance applications. Over half of respondents (52 percent) cited specialty knowledge as the top challenge. Other problems were high acquisition costs (45 percent) and maintenance costs (43 percent).

The answer? Buy Panasas software. Jeff Whitaker, VP of Product Strategy and Marketing, said: “Our PanFS software suite demonstrates our commitment to delivering simple, reliable solutions that support multiple HPC and AI/ML applications from a single storage platform.”

We asked Katie McCullough, CISO at Panzura, how the security thinking of Chris Hetner, who sits on its Customer Security Advisory Council, will be made available to Panzura customers. She said: “He will have quarterly sessions with 2-3 of our customers’ CISOs from different industries. Our sales team is working with our customers to announce the launch of the Customer Security Advisory Council in March. We will be sharing opportunities to join the council on a quarterly basis.”

“During the quarterly session, Chris will offer an assessment of the financial, operational, and legal ramifications and costs associated with a ransomware incident. Additionally, there will be a Q&A portion of the session where customers can share their pain points and seek Chris’s company-specific strategic advice. Panzura will also be seeking customers’ feedback on solutions we are working on to address the data resilience challenges customers face. Chris will also be sharing his insights, in coordination with Panzura security, product and services leadership, through regular blogs and webinars.”

Cloud data warehouser Snowflake has announced a Telecom Data Cloud, which offers a single, fully managed, secure platform for multi-cloud data consolidation with unified governance and elastic performance. Snowflake and Snowpark enable telcos to analyze machine-generated data in near real time, using ML models to predict faults, schedule maintenance ahead of time, and reduce operational downtime. Initial users include AT&T, OneWeb and Singapore’s M1. Read more about Snowflake for Telecom here.

Web3 storage supplier Storj has released its Storj Next update, providing up to a 280 percent increase in file upload and download speeds. It says it delivers enterprise-grade storage for 10 percent of the price of providers like AWS, Microsoft and Google. Storj has a network of more than 20,000 nodes, up from 13,000 a year ago. CEO Ben Golub said: “Storj gets faster as we add more nodes due to parallelism for downloads. We eliminate dependency on a single data hub, instead using a distributed network of underutilized storage capacity on existing hardware, which allows for less latency and reduced data transfer times.”

A Storj spokesperson told us: “We’ve rolled out many optimizations in our metadata layer that keeps track of the split file locations [and] eliminated bottlenecks in our parallelization, [made] optimizations in our client, uplink. Eliminated unnecessary round-trips in our protocol: leveraging Noise protocol to reduce round trips, which is used by wireguard (https://noiseprotocol.org/) and optimized our gateway scalability and throughput.”

Data protector Veeam has a new CFO, Dustin Driggs. He succeeds Chuck Garner, who led Veeam’s finance, strategy and operations functions for more than four years and is leaving for parts unknown. Driggs joins Veeam from Barracuda Networks, where he worked for over 16 years, most recently as CFO and senior vice president leading the finance and accounting function.

Kioxia in spaaace… as part of Spaceborne Computer-2 project

Kioxia today announced that HPE servers fitted with Kioxia SSDs are being used on the International Space Station (ISS) as part of the NASA and HPE Spaceborne Computer-2 (SBC-2) program.

The idea was to put ordinary commercial servers aboard the ISS to do as much compute as possible at the space station rather than always beaming data back to Earth for processing or physically returning disk drives. The program started in 2017 and SBC-1 was sent to the ISS in 2018 aboard a SpaceX Dragon spacecraft. It returned in 2019.

SBC-1 used two HPE Apollo 40 servers with a water-cooled enclosure, and it ran for a year, simulating, HPE said, the amount of time it would take to travel to Mars. The system was ruggedized to withstand radiation, solar flares, subatomic particles, micrometeoroids, unstable electrical power, and irregular cooling. 

Sending data to Earth for processing on such a trip “could mean it would take up to 20 minutes for communications to reach Earth and then another 20 minutes for responses to reach astronauts.” That means the compute really has to be done on board a space vessel. SBC-1 demonstrated it could provide a teraflop of compute capacity.

The follow-on SBC-2 mission was launched two years ago to demonstrate that ordinary servers could run data-intensive computational loads such as real-time image processing, deep learning, and scientific simulations during space travel. It doubled the compute power of SBC-1 and would run aboard the ISS for two to three years, ingesting data from a range of devices, including satellites and cameras, and processing it in real-time. The system was installed and powered up in May 2021.

Scott Nelson, EVP and CMO for Kioxia America, bigged up the company’s participation, saying: “Proving that datacenter-level compute processing can successfully operate in the harsh conditions of space will truly take something special. The synergies that exist when Kioxia and HPE collaborate to leverage our respective technologies, allows us to explore and study at the very edge of scientific discovery. We can’t wait to see where the HPE Spaceborne Computer journey takes us.”

HPE CMO Jim Jackson added: “By bringing Kioxia’s expertise and its SSDs, one of the industry’s leading NAND flash capabilities, with HPE Spaceborne Computer-2, together, we are pushing the boundaries of scientific discovery and innovation at the most extreme edge.”

The two HPE SBC-2 lockers, featuring Kioxia storage, aboard the ISS

SBC-2 uses COTS (commercial off-the-shelf) servers: ruggedized Edgeline EL4000 and ProLiant DL360 Gen 10 (gen 2 Cascade Lake CPU) servers running Red Hat Linux, with one of each fitted in each of two lockers. Due to the small footprint available aboard the ISS, there is no shared storage and no traditional SAN. But SBC-2 is fitted with GPUs to process image-intensive data requiring higher resolutions, such as shots of polar ice caps on Earth or medical X-rays, and to support specific projects using AI and machine learning techniques.

The AI capability could be used for checking astronaut space walk wear.

Dr Mark Fernandez, solution architect, Converged Edge Systems at HPE, said: “The most important benefit to delivering reliable in-space computing with Spaceborne Computer-2 is making real-time insights a reality. Space explorers can now transform how they conduct research based on readily available data and improve decision-making.”

An SBC-2 test involved DNA sequence data. Previously, 1.8GB of raw DNA sequence data took an average time of 12.2 hours to download to Earth for initial processing. With SBC-2, researchers on the space station processed the same data in six minutes to gather insights. They then compressed it to 92 KB and sent it to Earth in two seconds, representing a 20,000x speed-up, HPE said.
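The 20,000x figure quoted is essentially the ratio of bytes that no longer need to be transmitted:

```python
raw_bytes = 1.8e9        # 1.8 GB of raw DNA sequence data
processed_bytes = 92e3   # 92 KB of results after onboard processing

reduction = raw_bytes / processed_bytes
print(f"{reduction:,.0f}x less data to send")  # ≈ 19,565x, i.e. roughly 20,000x
```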

Two types of Kioxia’s SSD are being used: the 2.5-inch format RM enterprise SAS SSD and M.2 gumstick card XG NVMe SSD. We are not being given the precise model generation numbers or capacities.

Why is Kioxia announcing its presence in the SBC-2 kit now, on February 27? After all, the drives it sent up were 2021 vintage. It may have been trying to catch a marketing ride on today’s scheduled Crew-6 mission to the ISS. Unfortunately, that ride was called off two minutes before launch due to an issue with the rocket’s ignition fluid. The next launch attempt is in March.

Scale adds easy-bake Edge HCI install

Scale Computing has introduced a zero (technical) touch provisioning (ZTP) feature. It says edge computing sites now need only a body to unbox the kit and hook it up to power and the internet; installation is then automatic, with no need for any technical bods on site.

When powered up, the hyperconverged infrastructure (HCI) Scale kit automatically connects to Scale’s SC//Fleet Manager internet site. It then downloads, installs and executes a pre-written set of configuration instructions to set the bare metal up into a fully-functioning system with all hardware resources configured and system and application software installed and operational.
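Scale has not published the ZTP protocol, but a generic phone-home bootstrap of this kind might look like the following sketch (the endpoint, serial number and configuration format are hypothetical, not Scale's API):

```python
import json
import subprocess
import urllib.request

FLEET_MANAGER = "https://fleet.example.com/api/ztp"  # hypothetical endpoint, not Scale's real API
NODE_SERIAL = "SC-EDGE-0042"                          # would be read from the appliance in practice

def phone_home() -> dict:
    """Fetch the pre-written configuration assigned to this node's serial number."""
    with urllib.request.urlopen(f"{FLEET_MANAGER}/{NODE_SERIAL}") as resp:
        return json.load(resp)

def apply(config: dict) -> None:
    """Run each provisioning step; a real implementation would verify signatures and report progress."""
    for step in config["steps"]:
        subprocess.run(step["command"], shell=True, check=True)

if __name__ == "__main__":
    apply(phone_home())
```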

When an organization has hundreds or thousands of edge sites to deploy or update, Scale says this removes the dependency on having technical staff physically present, saving hundreds of person-hours and the associated cost.

Scott Loughmiller, chief product officer and co-founder of Scale Computing, said: “Edge infrastructure shouldn’t require hands-on initialization,” and the new SC//Fleet Manager does away with that need.

He said: “Our new SC//Fleet Manager gives users the ability to see and manage their entire fleet from an intuitive cloud-based console at fleet.scalecomputing.com.”

Scale has also integrated its edge OS, SC//HyperCore, with the Red Hat Ansible Automation Platform. Ansible is an open source, command-line automation suite – infrastructure as code – that can configure systems, deploy software, set up cloud usage and orchestrate workflows: pretty much what SC//Fleet Manager does.

Scale Computing ZTP intro video

We think that Scale’s SC//Fleet Manager remote deployment functionality overlaps with Nebulon’s server card-based cloud management of servers.  

Scale’s David Demlow, VP of product strategy, disagreed: “There are actually very few similarities in that Nebulon is focused on the management of physical server fleets and deploying 3rd party OS and software stacks (like VMware, etc) if customers want to ‘roll their own’ software stack.”

Well, colour me difficult, but there doesn’t seem to be a lot of difference between fleets of clustered HCI server kit and fleets of servers. It’s all about remote management of servers at the end of the day.

Scale ZTP graphic

Demlow expanded further: “Scale Computing provides a complete turnkey application solution that combines hardware management with our pre-integrated and managed HyperCore operating system, making the entire solution ready to deploy, run and manage applications out of the box … just bring your workloads.”

Nebulon, with its SmartEdge product, uses similar turnkey language, as in a podcast entitled “A turnkey edge data center when and where you need it.” Think of ZTP as overlapping Nebulon’s capability, but minus Nebulon’s add-in card and therefore less costly.

ZTP comes in two versions: one managed from SC//Fleet Manager, and the other a centralized pre-configuration facility for air-gapped edge sites, meaning no Internet connection. This latter version is for highly secured sites for customers unable to work with cloud-based Fleet Manager. This is called a “One-Touch Provisioning (OTP)” option. OTP is available via USB.

SC//Fleet Manager, now including ZTP, is priced based on the number of clusters under management.

VAST Data rolling out biggest ever software release

Ceres data enclosure in front of the Ceres dwarf planet
Ceres public domain image from Wikipedia

Single QLC flash tier storage upstart VAST Data is making a song and dance about its biggest software release ever, saying it will aid VAST’s larger data platform agenda.

A VAST blog by CMO Jeff Denworth introduces and reviews combined v4.6 and v4.7 VAST software releases and promises more instalments over the next five weeks. The starting point is the development of a data catalog.

Denworth says: “The IT industry has been accustomed to the idea that structured, unstructured and semi-structured data stores should all be distinct only because no single system has been designed to achieve true data synthesis, until now.”

VAST stores its data in its scale-out, global namespace Element Store: a single tier of QLC SSDs accessed across an NVMe fabric by stateless controllers, which can see all the drives and refer to metadata held in storage class memory – originally Optane, now SLC-class fast SSD equivalents.

B&F Vast Data hardware graphic

The new releases add this data catalog plus virtual tiering, key management, remote cluster snapshots, capacity usage predictions, zero-trust features and Kubernetes storage classes.

VAST Catalog

It’s now adding a catalog of the data it holds, detailing “each and every file and object written into an extensible tabular format.” This “enables data users to … query upon massive datasets at any level of scale,” and “is never out of sync with your datastore.” There are aspects of Hammerspace’s Global Data Environment here. How much the two approaches overlap will become clearer as VAST’s software details are released.

We can envisage a VAST cluster as a datalake or lakehouse, with the queryable catalog a means of finding and selecting data according to some criteria. But then what can we do with the selection? 

We can take a step forward and imagine running an analysis process on it. Were VAST to set up integrations with providers of analytics software, machine learning training or general AI software, then this catalog capability becomes a data extract process. It could also transform the data, but there would be no need to load it into anything, as the processing routine could use it directly on the VAST storage system – ETL without the L.

Denworth says the catalog will be used for capacity management and chargeback and provide a faster way for backup and archiving processes to run. He also claims that, with the catalog, “Applications can replace POSIX functions with SQL statements to see rapid accelerations for mundane POSIX operations… why find when you can select 1,000 times faster?”

The POSIX find function looks through the hierarchically organised nodes in a directory tree to locate files. A SQL select statement retrieves records (rows) from tables in a SQL database.
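Put concretely, a catalog query replaces a tree walk. Here is a toy illustration using an in-memory SQLite table standing in for the catalog (the schema and rows are invented; VAST has not published its table format):

```python
import sqlite3

# A miniature stand-in for a file catalog held as a queryable table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (path TEXT, size_bytes INTEGER, owner TEXT, mtime TEXT)")
db.executemany("INSERT INTO files VALUES (?, ?, ?, ?)", [
    ("/proj/a/run1.out", 4_200_000_000, "alice", "2023-02-20"),
    ("/proj/a/run2.out",   900_000_000, "alice", "2023-02-27"),
    ("/proj/b/notes.txt",        8_192, "bob",   "2023-01-05"),
])

# Instead of walking a directory tree with find(1), ask the catalog:
rows = db.execute(
    "SELECT path, size_bytes FROM files WHERE owner = ? AND size_bytes > ?",
    ("alice", 1_000_000_000),
).fetchall()
print(rows)  # [('/proj/a/run1.out', 4200000000)]
```

The same query against a real catalog never touches the filesystem tree at all, which is where the claimed speed-up over a directory walk comes from.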

Tiering and clustering

The virtual tiering concept sets up quality of service plans (bandwidth and IOPS minima and maxima) per user or per share, export or bucket. Thus there could be low, medium and high QoS plans, with each representing a tier of service applied to the underlying single-tier Element Store.
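VAST has not published its QoS parameter names, but the shape of such virtual tiers can be sketched as a set of named plans mapped onto shares or buckets (all values here are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QosPlan:
    """A virtual tier: service limits applied to one share, export or bucket."""
    name: str
    min_mbps: int
    max_mbps: int
    min_iops: int
    max_iops: int

# Illustrative plans only, not VAST's actual parameters.
PLANS = {
    "low":    QosPlan("low",    100,   500,  1_000,  10_000),
    "medium": QosPlan("medium", 500, 2_000, 10_000,  50_000),
    "high":   QosPlan("high", 2_000, 8_000, 50_000, 200_000),
}

share_assignments = {"/exports/scratch": "low", "/exports/analytics": "high"}
print(PLANS[share_assignments["/exports/analytics"]])
```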

The expanded clustering features start with a remote snapshot idea: “VAST clusters now also support the ability to share and extend snapshots to multiple remote clusters.  Each remote site can mount another site’s snapshots and even make clones of this data to turn a snapshot into a read/write View. This capability lays the foundation for other work we will unveil, relating to building a global namespace from edge to cloud.”

If “cloud” means public cloud, then that means the VAST OS will be ported there and customers will operate their VAST storage and associated applications in a VAST data universe that extends from globally-distributed on-premises edge sites through data centres to the public cloud.

Comment

Our understanding is that VAST Data’s capabilities and growth are attracting close attention from established and mainstream enterprise storage providers – meaning Dell, Hitachi Vantara, HPE, Huawei, IBM, NetApp, and Pure Storage – ambitious newcomers such as Qumulo and WekaIO, and cloud file services providers (CTERA, Egnyte, Nasuni, Panzura, and LucidLink).

Disruption is afoot in the storage world and it is centered on how best to provide global access and orchestration to petabyte-exabyte scale file, object and block data across a hybrid on-premises and multi-cloud IT environment. The innovation technology flags are being flown highest here by Hammerspace and VAST Data, and maybe WekaIO, with every other player we can see operating in wait-and-see or catch-up mode. 

Spectrum no more: IBM drops brand name for storage products

IBM Storage is becoming a brand, with Big Blue dropping the Spectrum prefix for its storage products.

The Spectrum prefix washed over IBM’s storage software products in 2015, when it said it planned to invest more than $1 billion in the portfolio over the subsequent five years. The brand name applied to a whole, um, spectrum of products.

For example:

  • Spectrum Connect – orchestrates IBM storage in containerized, VMware, and PowerShell setups
  • Spectrum Elastic Storage System (ESS) – software-defined storage for AI and big data
  • Spectrum Discover – file cataloging and indexing product
  • Spectrum Fusion – containerized derivative of Spectrum Scale plus Spectrum Protect data protection
  • Spectrum Protect – data protection
  • Spectrum Scale – scale-out, parallel file system software (the prior GPFS)
  • Spectrum Virtualize – operating, management, and virtualization software for the Storwize and FlashSystem arrays and SAN Volume Controller
  • Spectrum Virtualize for Public Cloud (SVPC) – available for the IBM public cloud, AWS, and Azure

There was a Spectrum NAS once, based on Compuverde software. It quietly went away when Pure Storage bought Compuverde.

IBM’s storage business unit gained some of Red Hat’s storage products in October 2022, after it had bought the open-source software company in 2018. These were Red Hat Ceph Storage, Red Hat OpenShift Data Foundation (ODF), Rook, and NooBaa. The base for Spectrum Fusion became ODF. But Ceph did not become Spectrum Ceph, the first sign, had we realized, that the wall-to-wall coverage of the Spectrum brand across IBM’s storage software offerings was coming to an end.

Now the Spectrum brand prefix is on the way out and we will have an IBM Storage prefix replacing it. 

We recognized the first signs of this a couple of days ago when told that IBM documentation referred to Storage Scale and Storage Fusion, instead of Spectrum Scale and Spectrum Fusion. The makeover was inconsistent, as the long Storage Scale datasheet still refers to Spectrum Scale while a shorter Storage Fusion solution brief document refers to IBM Storage Fusion.

An IBM spokesperson told us: “At IBM, we are constantly seeking out ways to enhance and simplify our customer experience. To that end, IBM Storage is reimagining its portfolio and aligning the offerings with what we’ve heard our clients need most now. We’ll be providing more information on this in the following weeks.”

Panasas’s halfway-house HPC approach to the public cloud

HPC parallel scale-out file system software supplier Panasas has decided that full-scale adoption of the public cloud, with its PanFS software ported there, is not needed by its customers. The cloud can be a destination for backed-up PanFS-held data or a cache to be fed – but that is all.

Update: GPUDirect clarification points added. 3 March 2023.

Jeff Whitaker.

An A3 TechLive audience of analysts and hacks (reporters) was told by Panasas’s Jeff Whitaker, VP for product management and marketing, that the AI use case has become as important and compute-intensive as HPC, and is seen in the cloud, but: “We see smaller datasets in the cloud – 50TB or less. We’re not seeing large AI HPC workloads going to cloud.”

Data, he says, has gravity, and compute needs to be where the data is. That means, for Panasas, HPC datacenters such as the one at Rutherford Appleton Labs in the UK, one of its largest customers. Also, in its HPC workload area, compute, storage, networking hardware and software have to be closely integrated. That is not so feasible in the public cloud where any supplier’s software executes on the cloud vendor’s instances.

He says Panasas data sets need to be shared between HPC and AI/ML, and the Panasas ActiveStor/PanFS systems need to be able to serve data to both workloads. Its latest ActiveStor Ultra hardware can store metadata in fast NVMe SSDs, small files in not quite so fast SATA SSDs, and large files in disk drives, which can stream them out at a high rate – supporting both small-file-centric AI and large-file-centric HPC.

This is tiering by file type and not – as most others do it – tiering by file age and access likelihood.
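Panasas has not published its placement logic, but tiering by object type rather than by age reduces to a decision like this sketch (the small-file cutoff is an illustrative assumption, not Panasas's published value):

```python
NVME_METADATA = "nvme_metadata"
SATA_SSD_SMALL = "sata_ssd"
HDD_LARGE = "hdd"

SMALL_FILE_CUTOFF = 1_500_000  # bytes; illustrative threshold only

def place(object_kind: str, size_bytes: int) -> str:
    """Decide placement by what the object *is*, not how recently it was used."""
    if object_kind == "metadata":
        return NVME_METADATA
    if size_bytes <= SMALL_FILE_CUTOFF:
        return SATA_SSD_SMALL
    return HDD_LARGE

print(place("metadata", 512))          # nvme_metadata
print(place("file", 64_000))           # sata_ssd
print(place("file", 40_000_000_000))   # hdd
```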

Panasas can support cloud AI by being able to move small chunks of data, up to 2TB or so, from on-premises PanFS systems to load cloud file caches in, for example, AWS and Azure.

Panasas also supports the movement of older HPC datasets to the cloud for backup purposes. Its PanMove tool allows end-users to seamlessly copy, move, and sync data between all Panasas ActiveStor platforms and AWS, Azure, and Google Cloud object storage.

Whitaker said Panasas saw on-premises AI workloads as important, but it has no plans in place to support Nvidia’s GPUDirect fast file data feed to its GPUs – the GPUs that run many AI applications. This is in spite of GPUDirect support being in place at competing suppliers such as WekaIO, Pure Storage and IBM (Spectrum Scale). He suggests they are not actually selling much GPUDirect-supporting product.

Wanting to clarify this point, Whitaker told us: “We are very much selling into GPU environments, but our customers aren’t asking for GPUDirect. The performance they are achieving from our DirectFlow protocol is giving them a significant boost over standard file protocols without requiring an RDMA-based protocol such as GPUDirect. Due to this, we are meeting their performance and scale demands. GPUDirect is absolutely in-plan for Panasas. We just are finishing other engineering projects before completing GPUDirect.”

When Panasas needs to move GPUDirect from a roadmap possibility to a delivered capability, it could do so in less than 12 months. What Panasas intends to deliver later this year includes S3 primary support in its systems, Kubernetes support, and the upload of data to public cloud file caches from Panasas’s on-prem systems – for example, to AWS’s file cache with its pre-warming capability and other functionality to keep data access rates high.

But there is no intent to feed data to the file caches from backed-up Panasas data in cloud object stores. Our intuition, nothing more, says this restriction will not hold, and that Panasas will have to have a closer relationship with the public cloud vendors. 

A final point. Currently Panasas ActiveStor hardware is made by Supermicro. Whitaker said Panasas is talking about hardware system manufacturing possibilities with large solution vendors. We suggested Lenovo, but he wasn’t keen on that idea, mentioning Lenovo’s IBM (Spectrum Scale) connection. That leaves, we think, Dell and HPE. But surely not HPE, with its Cray supercomputer stuff coming down market. So, we suggest, a Dell-Panasas relationship might be on the cards.

Whichever large solution vendor is chosen, it could provide an extra channel to market for Panasas’s software. It could also help in large deals where the customer has an existing server supplier and would like to use that supplier’s kit for its HPC workloads as well.

Hammerspace: Other vendors are trying to copy us

Hammerspace

Hammerspace says it is making such good progress that other vendors are trying to replicate its data orchestration technology.

Hammerspace’s Global Data Environment (GDE) provides a global namespace within which all of an organization’s file and data assets can be found, accessed and moved as required. It provides fast access to data. The GDE is not a storage repository, but a metadata-driven data catalog and orchestration layer of software that sits above file and object storage systems – both on-premises and in the public clouds.

Molly Presley, Hammerspace

Molly Presley, SVP marketing at Hammerspace, told an A3 TechLive audience that she couldn’t identify competing vendors. “We don’t really have a direct competitor. It’s a new concept.” She admitted that, as a marketeer, that’s what she would say, but it’s true all the same.

Customers are realizing that they need some way of cataloging and orchestrating data when they have distributed and different data silos and need to tie them together. For example, they may need to move data from silo to silo between stages in a distributed workflow involving remote offices and have no easy means of doing that, apart from building their own software.
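Hammerspace has not published its policy engine, but the orchestration idea – metadata decides where a file should live, and a mover reconciles that with where it actually is – can be sketched as follows (the silo names and workflow stages are invented for illustration):

```python
# Toy orchestration rule: when a file's workflow stage changes, schedule a move
# to the silo that stage needs. Names are hypothetical, not Hammerspace's.
STAGE_TO_SILO = {
    "ingest":  "edge-nas-london",
    "render":  "gpu-farm-frankfurt",
    "archive": "s3-deep-archive",
}

catalog = [
    {"path": "/projects/ad42/shot_001.exr", "stage": "render", "silo": "edge-nas-london"},
    {"path": "/projects/ad42/shot_002.exr", "stage": "ingest", "silo": "edge-nas-london"},
]

def plan_moves(entries):
    """Compare where metadata says a file *should* live with where it *does* live."""
    return [(e["path"], e["silo"], STAGE_TO_SILO[e["stage"]])
            for e in entries if STAGE_TO_SILO[e["stage"]] != e["silo"]]

for path, src, dst in plan_moves(catalog):
    print(f"move {path}: {src} -> {dst}")
```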

The whole data orchestration area is new. “There is no Gartner MQ for data orchestration,” she said, and GigaOm is one of the few analyst groups looking at it.

Several vendors have file fabrics that link remote and central offices and the public cloud, such as NetApp’s data fabric and the cloud file services vendors like CTERA, Nasuni and Panzura, but they all have their own preferred silos. Whereas Hammerspace, not being a storage vendor, doesn’t have such a bias.

She said other suppliers draw Hammerspace into deals. “Azure pulled us in to several customers in Europe – to unify data across cloud regions, which it cannot do. We have strong local partnerships with Pure to unify data across multiple sites, which Pure does not have.” This is helping Hammerspace revenues. “We’re close to being cash-flow positive at this point. We’re in a very strong position with regard to growth.”

Some vendors recognize that Hammerspace’s technology is relevant, according to Presley. “We know for a fact other vendors are trying to build a technology like this … Weka is working on technologies like this [and] VAST is working on something like this.”

Jeff Denworth, VAST Data

Which is true as far as VAST is concerned. It has just published a blog by CMO Jeff Denworth entitled Our Biggest Software Release, Ever. This covers new releases of VAST software that include a Catalog feature: “The VAST Catalog is an extension of the Element Store which now makes it possible for VAST clusters to catalog each and every file and object written into an extensible tabular format.”

This is needed because “the IT industry has been accustomed to the idea that structured, unstructured and semi-structured data stores should all be distinct only because no single system has been designed to achieve true data synthesis, until now. … Now users and administrators can tap into a powerful tool that provides global insight into their vast reserves of data with a fully synthesized and synchronized data catalog that requires no integration and where your catalog is never out of sync with your datastore.”

Then Denworth gets to the heart of its Hammerspace-like concept: “VAST clusters now also support the ability to share and extend snapshots to multiple remote clusters. Each remote site can mount another site’s snapshots and even make clones of this data to turn a snapshot into a read/write View. This capability lays the foundation for other work we will unveil, relating to building a global namespace from edge to cloud.”

Hammerspace would say it already has a global namespace from edge to cloud, and one that is not limited to VAST storage clusters.

Global data catalogs and orchestration are becoming a pair of capabilities that large and distributed organizations are going to need more and more. Cloud file services players and hierarchical storage management suppliers are going to get pulled in that direction by their customers.

NetApp woes look set to get worse

Now we know why NetApp implemented a hiring freeze and spending cuts at the end of last quarter and an 8 percent job cut in January: it has just confirmed a 5 percent dip in revenue for its latest quarter, and next quarter’s outlook points to a deeper decline.

Revenues were $1.53 billion in its third fiscal 2023 quarter ended January 27, compared to $1.61 billion a year ago, with the fall terminating 10 successive growth quarters. There was a profit of just $65 million, compared to $252 million a year ago, a 74 percent drop, but a profit nonetheless, helped by the overheads reductions.

CEO George Kurian said in a statement: “In Q3, we executed well on the elements under our control in the face of a weakening IT spending environment and continued cloud cost optimization. … Building on that solid foundation, we are sharpening our execution to accelerate near-term results while strengthening our position when the spending environment rebounds.”

Financial summary

  • Gross margin: 65.6 percent; 66.5 percent a year ago
  • Return to shareholders (dividends + share repurchases): $308 million
  • Billings: $1.57 billion, down 10.5 percent y/y
  • Operating cash flow: $377 million; $260 million a year ago
  • Deferred revenue: $4.2 billion; up 6 percent y/y
  • EPS: $0.30 compared to $1.10 a year ago
  • Cash, cash equivalents and investments: $3.1 billion

The company is still paying a dividend to stockholders on April 26 of $0.50 per share.

NetApp revenue – ten successive growth quarters come to a halt

Public cloud revenues grew 36 percent to $150 million but that total and its growth rate were eclipsed by hybrid cloud revenues of $1.38 billion, down 8.3 percent year-on-year. NetApp’s annual run rate for all-flash arrays dropped 12 percent to $2.8 billion from the year-ago $3.2 billion as customers bought less on-premises kit.

NetApp cloud revenue

That’s the problem in a nutshell: customers are spending less. Product revenues in the quarter were $682 million, down 19.4 percent.

Kurian said: “We continued to see increased budget scrutiny, requiring higher level approvals, which resulted in smaller deal sizes, longer selling cycles, and some deals pushing out. We are feeling this most acutely in large enterprise and the Americas tech and service provider sectors.

“Customers are looking to stretch their budget dollars, sweating assets, shifting spend to hybrid flash and capacity flash arrays from higher-cost performance flash arrays and, as our cloud partners have described, optimizing cloud spending.”

The flash array business was hit, with Kurian saying: “Our hybrid flash and QLC-based All-flash arrays continue to perform well, benefiting from customers’ price sensitivity in this challenging macro. The shift from high-performance All-flash arrays to lower cost solutions, coupled with the lower spending environment, especially among large enterprise, and US tech and service provider customers who are large consumers of flash, created headwinds to our product and All-flash array revenues.” 

The cloud cost factor affected NetApp, despite its Spot optimization capability. Kurian said: “Public Cloud ARR of $605 million did not meet our expectations, driven by a shortfall in cloud storage as a result of the same factors we experienced last quarter. Spending optimization and the winding down of project-based workloads like chip design, EDA, and HPC were headwinds again in Q3. We have a sizable base of public cloud customers, with a number of large customers who have grown rapidly over the past year and are now optimizing.

“Overall, the Cloud Ops portfolio performed to plan. Cloud Insights has stabilized, and Spot continues to grow nicely, benefiting from the cost optimization trend. Our dollar-based net revenue retention rate decreased to 120 percent but is still within healthy industry norms.” 

NetApp has three measures to deal with these problems: tightly manage the business’s spend, reinvigorate efforts to support the storage business, and build a more focused Public Cloud business.

The cost control side is reflected in its decisions to reduce investment in products with smaller revenue potential, like Astra Data Store and SolidFire. Storage reinvigoration is needed because NetApp, as it moved rapidly to embrace the cloud, lost momentum in its Hybrid Cloud business.

Kurian admitted: “We were slow to fully embrace the customer desire for lower-cost, capacity-oriented All-flash systems. At the start of Q4, we rectified that situation with the introduction of the AFF C-series,” with its lower cost quad-level cell (QLC) flash.

NetApp lost AFA market share, and Kurian said: “We are rebalancing our sales and marketing efforts to better address the significant storage market opportunity, including aligning compensation plans to drive sales of our reinvigorated storage portfolio. We believe that these actions will enable us to drive product revenue growth and regain share in the all-flash array market.” 

CFO Mike Berry added: “Our large enterprise and US tech and service provider customers have continued to reduce capex spend as they right-size their spending envelopes. These customers are the most forward leaning technology adopters and the biggest consumers of All-flash systems in the economy, and their pause in capex spending has had a material impact on our total revenue, All-flash mix and product margins.” 

The public cloud business needs to grow. Kurian commented: “We believe strongly that Public Cloud services can be a multibillion-dollar ARR business for us. However, achieving that target will take longer than we initially planned due to the industry-wide slowdown in cloud spending and our recent performance.” 

All in all there are quite a few mea culpa comments by NetApp. It took its eye off the QLC AFA ball, letting competitors like Pure Storage and VAST Data have more of a free run. William Blair analyst Jason Ader told his subscribers: “NetApp was negatively impacted by three main factors: 1) concentration among large enterprise customers (particularly in the technology and service provider verticals), which have substantially slowed down their spending due to the macro backdrop – this impacted both the cloud and hybrid cloud segments; 2) NetApp’s lack of a low-end, capacity-optimized all flash array product, which led to underperformance (and share loss) in traditional storage; and 3) poor sales execution (GTM misalignment), which stemmed from an overemphasis on cloud products in sales comp plans in fiscal 2023.” 

It’s going to get worse, as fourth quarter revenues are being guided to between $1.475 billion and $1.625 billion. Revenue of $1.5 billion, in the lower half of that range, would be a 10.7 percent decline on a year ago. This would mean a $6.28 billion full year, 0.6 percent less than fiscal 2022’s $6.32 billion.

That means 2024’s first fiscal quarter is the first one that could show an upturn and return to growth. NetApp execs had better show some positive results by then or analysts may be calling for management changes.