
Bookings up over 800% at backup appliance firm Object First

Object First, which has developed a “ransomware-proof” backup storage appliance purpose-built for Veeam workloads, reports that its bookings for the first quarter of 2024 grew 822 percent worldwide. The company, which isn’t listed and thus doesn’t publicly report its results, didn’t drill into what the base figures were.

That is compared with the same quarter in 2023. North America alone saw growth of 560 percent, while annual growth in EMEA, now a strategic focus for Object First, was 400 percent, we’re told.

Ootbi appliance

The figures are slightly skewed by the fact the company’s Ootbi (Out-of-the-Box-Immutability) appliance was only launched in February 2023, but increased sales generation is gaining momentum in the market overall, according to Object First.

The company says it completed 57 Ootbi deployments in Q1 2024, a 714 percent increase over Q1 2023. Of those, three brought in revenue over six figures in dollars. It recorded a 700 percent increase in deals that included two or more Ootbi appliances, and Object First reckons demand for the larger capacity 128 TB units grew 1,050 percent year-over-year.

To address demand from customers to store ever-growing quantities of backup data securely, Object First announced a 192 TB appliance earlier this month. This version unlocks up to 768 TB of usable immutable backup storage per Ootbi cluster. It integrates with Veeam’s 12.1.2 release, facilitating backup storage capacities exceeding 3 petabytes, as part of a Veeam backup repository.

Customer numbers grew 300 percent worldwide, we’re told, and there was third-party supplier growth of 637 percent, Object First says, again without giving base or final numbers. Recently signed partners include Insight, Sentinel Technologies, IT2Grow, Viyu Network Solutions, Copaco, Pedab, and V-Valley.

“Building on a strong 2023, we accelerated our bookings growth in Q1 2024, anticipating the need for ransomware-proof immutable backup storage that is secure, simple and powerful,” said David Bennett, CEO at Object First. “The strong growth in our partner and customer base, especially among those leveraging our larger capacity appliances, reflects the need to secure growing volumes of backup data, and ensure rapid and complete recovery when disasters occur.”

Object First was launched in 2022 by Veeam co-founders Ratmir Timashev and Andrei Baronov, who had since departed that company.

Scality tops GigaOm’s enterprise object storage Radar


Research house GigaOm has placed Scality as leader ahead of Pure Storage, VAST Data, NetApp and others in the latest enterprise object storage Radar report.

GigaOm’s Radar evaluates supplier products in a technology area and locates them in a 2D circular space, a quasi-radar display featuring concentric rings – leader, challenger, and new entrant. Those set closer to the center are judged to be of higher overall value. Each vendor is characterized on two axes – Maturity versus Innovation, and Feature Play versus Platform Play – with an arrow indicating the supplier’s expected development speed over the coming 12 to 18 months: Forward Mover, Fast Mover, or Outperformer.

This fifth annual Radar report evaluates 18 suppliers, grouping them into eight leaders, six challengers and four entrants, as the diagram illustrates: 

Scality central in GigaOm radar

Apart from Scality, the leaders fall into two groups – the mature platform players Pure, Dell, NetApp, and Cloudian, and the innovative platform players VAST, MinIO, and Weka. IBM, Hitachi Vantara, and DDN are a trio of challenging mature platform suppliers. Nutanix is an innovative platform company, DataCore is an innovative feature player, and OSNexus is more of a mature feature player. Scattered around the new entrants rim are Quantum, Quobyte, SoftIron, and Zadara.

GigaOm rates suppliers on their key features, and Scality had the top averaged score of 5.0, with Pure Storage and VAST next at 4.6 and WEKA at 4.3. Scality also distinguished itself in the emerging features comparison, where it scored 4.5, with Nutanix, OSNexus, and Pure given 4.0, NetApp at 3.5, Dell 3.0, and Quobyte and VAST at 2.5.

Scality also led the business criteria comparison with its score of 4.8 ahead of Cloudian, Dell, and MinIO’s 4.7.

Scality has unsurprisingly made this report publicly available via a registration form. Read it for a full description of the GigaOm criteria and a review of each supplier’s products, listing strengths and weaknesses.

Avaneidi secures €8M to enhance SSD endurance, security

Italian startup Avaneidi has received €8 million ($8.6 million) in Series A funding, and is planning to use this to extend SSD endurance by 1.9x, increase security, and lower power consumption, using drive-specific flash translation layer (FTL) software.

The company was formally founded in April by Milan-based CEO Dr Rino Micheloni, an ex-Microsemi VP and Fellow in its Flash Signal Processing labs. He left in March 2021, spent a year as a research fellow at the University of Ferrara, then went freelance as a senior tech exec before starting Avaneidi. Dr Lorenzo Zuolo is Avaneidi’s CTO and also an ex-employee of Microsemi’s Flash Signal Processing Labs and University of Ferrara alumnus. Microsemi is now called Microchip.

Dr Rino Micheloni, Avaneidi

Micheloni said: “Our mission at Avaneidi is to pave the way for more secure, efficient, and sustainable data storage solutions. This funding will keep us at the forefront of the market, enabling us to accelerate the development of our enterprise SSDs and all-inclusive storage appliances. Unlike off-the-shelf products, our solutions address cyber security and data governance issues by leveraging a tight hardware-software co-design while offering extensive customization options.”

Avaneidi is developing enterprise storage systems based on a multi-level “security by design” approach, claiming to look after cybersecurity, protection, and data reliability for enterprise-grade applications. It claims its storage tech can boost performance and security, and reduce energy consumption.

The company uses tailor-made chips and “advanced algorithms,” providing customers with a bespoke offering it says is optimized for performance and cybersecurity applications. It claims its storage appliances offer “a cost-effective, highly efficient alternative to traditional storage solutions, featuring extended drive lifetime, improved security and significant energy savings.”

We asked a few questions to find out more and discovered it is using drive-specific flash translation layers (FTLs).

Blocks & Files: How much will an Avaneidi SSD boost performance over a mainstream manufacturer’s TLC SSD at any capacity level? How much will it reduce energy consumption?

Avaneidi: Thanks to our proprietary error correction code, we can extend the lifespan of the drive by 1.9x compared to off-the-shelf drives. Power saving per IOPS is 25 percent.

Blocks & Files: What is the capacity range of an Avaneidi SSD product? What will its sequential read/write bandwidth and random read/write IOPS be? 

Avaneidi: Our current implementation is 8 TB. As of today, we don’t plan to sell the drive in the open market. Our drive will be used in our Storage Appliance. Thanks to our proprietary drive management system, we can squeeze higher performance (even from off-the-shelf SSDs) by leveraging a workload-friendly architecture. This is the main reason why we won’t publish data about bandwidth and IOPS. The comparison with other vendors wouldn’t be fair.

Blocks & Files: How will it boost security over a standard enterprise SSD?

Avaneidi: We act at two levels to boost security. Each SSD (or group of SSDs) can have a different FTL. In other words, even if a hacker found a way to access one SSD, they wouldn’t be able to use the same recipe on another SSD. As far as we know, nobody offers a drive-specific FTL.
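As a rough illustration of the idea (not Avaneidi’s actual implementation), a drive-specific FTL can be thought of as a logical-to-physical mapping derived from a per-drive secret, so a layout learned on one drive tells an attacker nothing about another. The secrets and block count below are invented for the sketch:

```python
import hashlib

def build_mapping(drive_secret: bytes, num_blocks: int) -> list[int]:
    # Deterministic permutation keyed by a per-drive secret: sorting block
    # numbers by a keyed hash yields a layout unique to that secret.
    # (Illustrative toy only: a real FTL also handles wear leveling and
    # garbage collection.)
    return sorted(
        range(num_blocks),
        key=lambda lba: hashlib.sha256(
            drive_secret + lba.to_bytes(8, "big")
        ).digest(),
    )

# mapping[logical_block] -> physical block, different per drive
drive_a = build_mapping(b"secret-A", 16)
drive_b = build_mapping(b"secret-B", 16)
assert sorted(drive_a) == sorted(drive_b) == list(range(16))  # both valid layouts
assert drive_a != drive_b  # but one drive's "recipe" doesn't fit the other
```

Each mapping is a valid permutation of the same physical blocks, which is what makes the per-drive differentiation free from a capacity standpoint.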

Blocks & Files: Will Avaneidi build appliances as well as drives?

Avaneidi: Yes. Our main focus is the Storage Appliance. We call it “S4S,” which stands for Security-for-Storage and Storage-for-Security, [and] our SSD, together with many other security features, is the core, again, thanks to its drive-specific FTL.  

Avaneidi says it is targeting “organizations and industries that are highly sensitive to data governance and security,” particularly in the field of AI applications, naming the finance, defense, automotive and healthcare industries. It claims it has “attracted the attention of major industry players. Negotiations and preliminary agreements are in place to validate and expand the market reach of its innovative products.”

The funding came from United Ventures. Managing partner Massimiliano Magrini said: “By channeling resources into AI infrastructure like Avaneidi’s, we aim to facilitate the development of technologies that will redefine industries and transform tomorrow’s society.”

Pure delivers first AI copilot for storage

Pure Storage has extended the software environment of its all-flash arrays, with fleet management and a storage Copilot, and says this has enhanced their ability to detect and recover from malicious attacks. 

These additions were announced at its June 18-21 Pure Accelerate conference in Las Vegas, which overlaps with HPE’s June 17-20 Discover conference in the same city. Pure made no new hardware announcements, nor any eye-catching Nvidia deals along the lines of HPE’s Nvidia AI Computing by HPE. Instead, it provided solid incremental advances for its users, enhancing array fleet management and improving array SLAs and cyber resilience.

Pure chairman and CEO Charles Giancarlo said: “Pure is redefining enterprise storage with a single, unified data storage platform that can address virtually all enterprise storage needs including the most pressing challenges and opportunities IT leaders face today, like AI and cyber resilience. The Pure Storage platform delivers unparalleled consistency, resilience, and SLA-guaranteed data storage services, reducing costs and uncertainty in an increasingly complex business landscape.”

The Pure Fusion offering, which automates orchestration and workload placement, is now embedded into the Purity operating environment. It unifies arrays and optimizes storage pools on the fly across structured and unstructured data, on-premises, and in the cloud. Pure Fusion will be available across the entire Pure Storage platform to all global customers. 

Pure Fusion

AI

The company is introducing a GenAI Copilot for storage, saying it “represents a radically new way to manage and protect data using natural language.” It has been trained on data from tens of thousands of Pure Storage customers, and can guide storage teams through complex performance investigation steps, management, and security issues.

Pure now provides secure application workspaces with fine-grained access controls. This combines Kubernetes container management, secure multi-tenancy, and policy governance tools to enable data integrations between enterprise mission-critical data and AI clusters. It is claimed to make storage infrastructure transparent to application owners, who gain fully automated access to AI features without sacrificing security, independence, or control.

Pure AI copilot

Pure Storage already boasts Nvidia DGX BasePOD and OVX validation. It now expects to be certified storage for Nvidia’s SuperPOD by the end of the year. It claims its “industry-leading energy efficiency, consuming up to 85 percent less energy than alternative all-flash storage,” can help customers overcome data center power constraints in large AI clusters.

More Evergreen

Pure announced three Evergreen//One Storage as-a-Service (STaaS) service-level agreements (SLAs), covering an AI STaaS guarantee, cyber recovery and resilience, and site rebalance features, along with a new security assessment and data access anomaly detection improved by using AI.

Pure Evergreen//One for AI

The Evergreen//One AI Storage as-a-Service SLA guarantees storage performance for GPUs to support training, inference, and HPC workloads, and introduces the ability to purchase based on dynamic performance and throughput needs. Pure claims to deliver the required performance, thereby eliminating the need for excess planning and expenditure.

The Evergreen//One consumption-based storage array arrangement has an enhanced Cyber Recovery and Resilience SLA that includes disaster recovery. Pure will deliver a customized recovery plan, and, if a disaster occurs, ship clean service infrastructure within a defined SLA, provide onsite installation, and supply additional professional services for data transfer.

As part of this, Pure will work with customers “to build and maintain a comprehensive cybersecurity strategy.” This will include quarterly reviews to help ensure best practices, ongoing risk assessments, and operational security remediation.

Pure’s new security assessment provides visibility into fleet-level security risks, with actionable recommendations to maximize cyber resilience across a customer’s entire storage fleet. The assessment is based on aggregated intelligence across 10,000-plus Pure arrays. The assessment provides best practices to align with NIST 2.0 standards, meet regulatory compliance rules, remediate potential security anomalies, and restore operations if a security-related event occurs.

Also, the AI copilot uses this security assessment to provide CISOs with information to compare their organization’s own security posture against that of other Pure Storage customers.

The Site Rebalance SLA enables customers to adjust existing reserve commitments as storage requirements evolve. A reserve commitment is a chosen hardware configuration’s minimum storage capacity level, billed in advance at the corresponding reserve rate.

For example, if capacity needs at one site go down, a datacenter is being reduced in size or consolidated, or there is excess capacity at some site for whatever reason, the Site Rebalance SLA means customers can rebalance reserve commitments once every 12 months for each Evergreen//One subscription.
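As a hypothetical illustration of how such a rebalance works (the site names and capacities here are invented, not from Pure), the total reserve commitment stays the same while its distribution across sites changes:

```python
def rebalance(commitments: dict[str, int], src: str, dst: str, tib: int) -> dict[str, int]:
    """Move reserve commitment (in TiB) from one site to another.

    Toy model of a once-per-12-months rebalance: the total committed
    capacity is conserved; only its split across sites changes.
    """
    if tib > commitments[src]:
        raise ValueError("cannot move more than the source site's commitment")
    updated = dict(commitments)  # leave the input untouched
    updated[src] -= tib
    updated[dst] += tib
    return updated

sites = {"london": 500, "frankfurt": 300}
after = rebalance(sites, "london", "frankfurt", 200)
print(after)  # {'london': 300, 'frankfurt': 500}
```

The conservation property is the point: a shrinking datacenter’s commitment isn’t forfeited, it is redeployed where the capacity is needed.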

Pure already has ransomware detection based on data reduction anomalies. It has now strengthened its ability to detect unusual data access behavior, malicious access, and denial of service attacks by having multiple machine learning models running to identify anomalous behavior.

These models analyze customer environments with historical data for anomalous patterns based on heuristics of performance as well as user context on how storage is used. Pure says customers can identify a last known good snapshot copy to reduce the impact of anomalous data access by identifying recovery point targets to restore data, reducing risk and guesswork. 
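The “last known good snapshot” idea can be sketched as a simple scan over per-snapshot anomaly scores. This is an illustrative toy, not Pure’s implementation, and the 0.5 threshold is an arbitrary assumption:

```python
def last_known_good(snapshots, threshold=0.5):
    """Return the newest snapshot timestamp before the first anomalous one.

    snapshots: (timestamp, anomaly_score) pairs, oldest first. A score
    above `threshold` marks where the models first flagged unusual data
    access; the snapshot just before it bounds the recovery point.
    """
    good = None
    for ts, score in snapshots:
        if score > threshold:
            break
        good = ts
    return good

history = [
    ("2024-06-01T00:00", 0.10),  # normal
    ("2024-06-02T00:00", 0.20),  # normal: last known good
    ("2024-06-03T00:00", 0.90),  # anomalous: possible mass encryption
    ("2024-06-04T00:00", 0.95),  # anomalous
]
print(last_known_good(history))  # 2024-06-02T00:00
```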

You can read more about how Pure says it’s tackling AI and cyber resiliency in a blog.

Kioxia plots tech route to 1,000 layer 3D NAND by 2027

Kioxia outlined a tech roadmap to 1,000-layer 3D NAND at the IMW 2024 conference in Seoul, against a backdrop of statements from NAND fab joint venture partner Western Digital that manufacturing costs are rising and returns on investment are falling.

Update, 2 July 2024: Kioxia responded to the suggestion of dissonance between the Kioxia and Western Digital views on the pace of NAND technology development, saying that technology development is separate from product development cadence, and implying the two companies agree on the need for business returns from 3D NAND node levels.

The Japanese PC Watch media outlet covered Kioxia’s pitch, which predicted that NAND die density would reach 100 Gbit/mm2 with 1,000 wordline (memory cell) layers by 2027 by extrapolating past trends and improving NAND cell technology.

Kioxia graph

The number of 3D NAND layers generally increased from 24 in 2014 to 238 in 2022, a tenfold rise in eight years. Kioxia said a 1,000-layer level would be achievable by 2027 at this 1.33x/year rate of increase.
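The extrapolation is straightforward compound growth; a quick check of the arithmetic using the 24-layer (2014) and 238-layer (2022) figures shows how the 2027 target falls out:

```python
# 24 layers in 2014 grew to 238 by 2022, fixing the yearly growth rate;
# extrapolate five more years to 2027.
growth = (238 / 24) ** (1 / 8)
print(round(growth, 2))  # 1.33, the ~1.33x/year rate Kioxia cites

layers_2027 = 238 * growth ** 5
print(round(layers_2027))  # 998, in line with the 1,000-layer target
```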

Increasing density in a 3D NAND die isn’t just a case of stacking more layers on the die, because an edge of each layer needs exposing for wordline electrical connectivity. This gives the die a staircase-like profile and, as the layer count increases, so too does the area of the die needed for the staircase.

See staircase on left side of die

That means that density must be increased by also shrinking the cell size vertically and laterally, and increasing the bit level from today’s TLC (3bits/cell) to QLC (4 bits/cell).

All these scaling vectors – layer counts, vertical cell size reduction, lateral cell size reduction, and cell bit level increases – bring their own technology problems. Having more layers means that etching the vertical connecting holes (through-silicon vias or TSVs) becomes more difficult as the TSV dimensions can be warped and the channel material layers deformed. A hole that is 0.1 μm in diameter and 5 μm or more deep has an aspect ratio of 50. As the hole gets deeper, the normal reactive ion etching (RIE) rate decreases and different low-temperature RIE has to be used to counteract that.

Deeper vias lead to higher channel resistance and the existing polycrystalline silicon (polysilicon) material will need modifying to single-crystalline silicon by being heat-treated in a metal induced lateral crystallization (MILC) process. 

The staircase area of a die can be reduced by moving to two and four-lane wordlines per staircase level from today’s mono lane approach. Electrical connectivity within the stacked layers can be harmed as well by having longer paths for the electricity to transit, allowing resistance to affect the current. Kioxia is considering changing the wordline material from tungsten to molybdenum to decrease the current resistance and associated delay time.

All the problems it identifies are solved with ideas such as wordline metal changes and via etch technology advancements, but there is an issue that Kioxia’s technologists did not address: manufacturing capital costs, and the return on that expenditure from selling chips and SSDs using the fabbed NAND dies.

Western Digital EVP Robert Soderbery addressed this directly at a June 10 investor conference, talking about a new era of NAND. He made the point that 3D NAND costs more to make than 2D NAND: the 3D era requires higher capital intensity but delivers a lower cost reduction as bit density increases. Western Digital, which didn’t speak directly about its Kioxia hookup, calls the situation the “end of the layers race.”

Western Digital investor conference slide

Soderbery said there would be a slowing of NAND layer count jumps to optimize capital deployment. Notably, he declared: “We’re no longer on a hamster wheel of nodal migration.” 3D NAND layer count nodes must be long-lasting, feature-rich, and future-proofed. 

In other words, the lifetime of any particular node will be extended and Western Digital will look to maximize the return on its capital expenditure for a node level. That means, Soderbery said, its strategy will be to supply premium nodes for premium use cases with stronger Western Digital customer relationships. The large customers will give demand information to Western Digital, which will commit to supplying that demand with manufacturing arrangements.

Western Digital investor conference slide

Western Digital and Kioxia have announced their BiCS 8 generation 3D NAND product technology at a 218-layer level. BiCS 9 and BiCS 10 generations have been mentioned with 300 and 400-plus layer counts. These are a long way from a 1,000-layer node. One could imagine, given Western Digital’s views on capital expenditure and return on that investment, that the company might be unwilling to join Kioxia in making the multiple node level jumps necessary to reach 1,000 layers by 2027. Judging by that presentation, it wants to slow the nodal level increase rate, not maintain or increase it.

Kioxia is in a race with industry leader Samsung to reach flagship NAND density levels to retain and grow its market share. But we can imagine it soon having some intense discussions with Western Digital about the rate and timing of NAND node level increases. We have asked both for comment.

A Kioxia spokesperson replied: “IEEE forums like the IMW Workshop are gatherings of industry insiders to discuss technology enablement, advancements, and possibilities – the presentation that was delivered was not a product development statement or product announcement. The Kioxia presentation predicted that 100Gb/mm2 could be achieved by 2027 and we believe the technology, various design, and cell characteristics could be there.

“For Kioxia, the number of layers is not as important as the lateral scaling to minimize the cost and meet the performance goals of our customers for their applications. Kioxia has a longstanding viewpoint on this subject and has produced a range of materials (that we have made available on our website, blog and social media over the years) that speak to our opinion on number of layers and the costs that come with going higher vs. the benefits of lateral scaling. It is our stance that the cost-effective solution that meets performance and density requirements, regardless of number of layers, is the best solution.”

Hence there is no dissonance between the Kioxia and WD views at a product level.

Don’t mind if I DPU: Pliops and Kalray continue merger talks as former shows off Hammerspace integration

Two DPU startups, Kalray and Pliops, have been working on a merger for several months, with Pliops recently announcing an integration with data orchestrator Hammerspace.

Pliops is an Israeli data processing unit (DPU) startup that, like Nebulon, has had to differentiate itself from the generalized storage and network accelerating DPUs popularized by Nvidia (BlueField), Intel (IPU), Pensando, and Fungible. The Pliops XDP (Extreme Data Processor), with key:value store technology, offloads and accelerates low-level storage stack processing from a host x86 server for applications like RocksDB and RAID.

France-based Kalray is another DPU developer, with its Massively Parallel Processing Array (MPPA) accelerator technology. It was started in 2008 as a fabless semiconductor business spun off from CEA, the French Atomic Energy Commission, and has developed MPPA chips and cards based on them, such as its K200-LP storage accelerator card. It has a partnership with Arm for CPUs.

Kalray CEO Eric Baissus said in a prepared statement: “This proposed merger with Pliops represents a major strategic opportunity. By combining our strengths, we aim to become the global leader in data acceleration solutions for storage and AI GPUs.”

Pliops chairman Eyal Waldman added: “By combining Pliops and Kalray’s exceptional assets, we are poised to enhance business opportunities for both companies.”

Pliops CEO Ido Bukspan said: “The potential in this proposed merger between our two companies is tremendous. Combining our technological expertise, teams, and products to make this new entity a global leader will significantly accelerate our time to market with a novel storage paradigm for AI data acceleration solutions.”

Kalray ran an IPO on the Euronext Paris Stock Market in 2018, raising €47.7 million ($51 million). It achieved break-even EBITDA by mid-2023, won an FMS award for its DPU tech that year, and also exhibited at Dell Technologies World as a Dell Technologies ETC (Extended Technology Complete) partner. That means it can combine its products with Dell’s servers, storage, and networking gear.

This parallels Pliops’ partnership with HPE. It has collaborated with HPE to validate its XDP with HPE ProLiant servers. Pliops will be present at, and a sponsor of, this week’s HPE Discover event where it will preview an upcoming offering that tackles the issues surrounding large language models (LLMs) inference optimization. Pliops is also collaborating with scaleout software provider Hammerspace and will introduce a new offering integrating its XDP PCIe gen 5 card with Hammerspace’s Global Data Environment. 

Pliops integrates with Hammerspace
Hammerspace Global Data Environment with participating Pliops XDP-front-ended NVMe storage systems

AI has become a common factor in both Kalray’s and Pliops’ technology development. Kalray announced its Ngenea for AI data acceleration platform in May this year. Google, Lenovo, and Pliops collaborated in April to develop AlloyDB Omni, an integrated offering designed to accelerate traditional database and generative AI applications.

DPU startups have generally had a hard time gaining traction in the face of competition from Nvidia and Intel. Pensando was acquired by AMD for $1.9 billion in 2022 in the most successful DPU startup exit to date.

Microsoft bought Fungible for a reported $190 million in 2023, less than the $300 million in funding Fungible had raised. Nebulon was seemingly absorbed by Nvidia earlier this year in a messy transaction that neither company will confirm, with many Nebulon staff hobbled by NDAs, although others put an Nvidia move or acquisition on their LinkedIn profiles. For example, Laleh Rongere, Director of People and Resources, left in March 2024; her LinkedIn profile says “Director of People and Resources at Nebulon (Acquired by Nvidia).”

Now Pliops and Kalray are negotiating with the aim of joining forces. The former was founded in 2017 by then-CEO Uri Beitler, CTO Moshe Twitto, and chairman Aryeh Mergi, and has raised $215 million in VC funding to date. Ex-Nvidia chip design SVP Ido Bukspan became Pliops’ CEO in June last year, with Beitler becoming chief strategy and business development officer. Mellanox founder Eyal Waldman joined the board in late 2020 and became chairman earlier this year.

The merger proposal envisages the combined business, with some 120 Pliops employees, being owned 65 percent by Kalray shareholders and 35 percent by the Pliops ones, with Kalray issuing new shares to the Pliops stockholders. The 35 percent stake could rise to 40 percent if certain business milestones are met, according to Israel’s Calcalist media outlet. Kalray is capitalized at €140 million ($150 million) and the new entity will be worth €240 million ($257.3 million). This implies a Pliops valuation of between €84 million and €96 million ($90-103 million), less than the $215 million it raised and much lower than the $650-700 million valuation at the time of its last VC round in August 2022.

The enlarged Kalray business, strengthened by Pliops technology and business relationships, can look forward to a storage access acceleration product combining Kalray and Pliops IP built for GenAI applications. Kalray could also inherit the Pliops-Hammerspace partnership.

Storage news ticker – 18 June 2024

Data orchestrator and manager Arcitecta announced that Powerhouse, a branch of Australia’s largest museum group, has selected its Mediaflux data management platform as its new digital asset management solution (DAMS). Mediaflux is tailored to the museum’s extensive, varied, and valuable collections, marking a substantial recalibration of operations and a significant shift in the museum’s curatorial processes and user experiences. With a foothold in higher education, government, media and entertainment, and life sciences, Arcitecta now gets its data management technology into the museum/gallery/cultural asset market.

Nothing has been announced by ceramic tablet storage startup Cerabyte or European atom smashing institute CERN but Cerabyte is listed as a partner and participant in CERN’s OpenLab.

Data streamer Confluent announced Build with Confluent, a new partner programme that helps system integrators (SIs) speed up the development of data streaming use cases and quickly get them in front of the right audiences. It includes specialised software bundles for developing joint solutions faster, support from data streaming experts to certify offerings, and access to Confluent’s Go-To-Market (GTM) teams to amplify their offerings in the market. Build with Confluent aims to monetise real-time solutions built on Confluent’s data streaming platform.

William Blair analyst Jason Ader reckons Databricks is benefiting from its positioning as a consolidating platform for the fragmented analytics estate and from building AI tailwinds as enterprises start to push projects into production. It has, he tells subscribers, sidestepped budget headwinds that have afflicted other vendors and saw an acceleration in first-quarter revenue growth to the mid-60 percent range, compared with 50 percent growth at the end of fiscal 2024 (January year-end). As of the end of July 2024, Databricks expects to have a roughly $2.4 billion revenue run-rate and NRR above 140 percent. This strong growth is in part a function of Databricks’ expanding portfolio, with the addition of traditional data warehousing capabilities (SQL Warehouse), today a $400 million business, as well as new AI capabilities (MosaicML, Notebooks, and the newly announced AI/BI and LakeFlow).

Databricks is becoming a platform of choice for enterprises to manage all of their analytics and AI data needs and should continue to benefit from a number of growth drivers across its business, including successful open-source Spark conversions, added support for Iceberg tables, the early opportunity for its new serverless offering, and the continued cross-sell of new products.

IBM announced Storage Ceph v7.1 with support for NVMe over TCP, a VMware vCenter plug-in, and NFS v3 for CephFS with metrics. Storage Ceph is focused on the following use cases:

  • Object storage as a service
  • AI/ML analytics data lake especially for open-source analytics like Presto, Spark, Hadoop
  • Cloud-native data pipeline S3 store, especially for open-source Storm analytics
  • VMware storage
  • File storage as a service

Data integrity checker and deep content inspector Index Engines announced a 99.99 percent service level agreement (SLA) for CyberSense to accurately detect corruption due to ransomware. CyberSense feeds its AI engine with a machine learning process that uses thousands of actual ransomware variants, sophisticated intermittent-encryption variants, and tens of millions of data sets and backup data sets. Index Engines says traditional data protection methods fall short because they primarily focus on obvious indicators of compromise, such as unusual changes in compression, metadata, and thresholds, which can be easily bypassed by sophisticated ransomware.

Jim McGann, VP of Strategic Partnerships at Index Engines, said: “Our strategic partners, including Dell and Infinidat, benefit from our new SLA to provide their customers with enhanced resiliency, minimizing the impact bad actors have on their business operations.” CyberSense is deployed at thousands of organizations worldwide and sold through strategic partnerships including Dell PowerProtect Cyber Recovery, Infinidat InfiniSafe with Cyber Detection, and IBM Storage Sentinel.

The Nikkei reports that Kioxia has reversed 20 months of NAND production cuts at its Yokkaichi and Kitakami plants because the market is recovering. It reported a 10.3 billion yen profit for the first calendar 2024 quarter, after six successive quarters of losses. Banks financing Kioxia have rearranged 540 billion yen ($3.43 billion) in loans that would have become repayable this month, and set up a new 210 billion yen credit line. New Kioxia spending to increase production at Kitakami has been delayed until at least 2025.

Australia’s Macquarie Cloud Services, part of Macquarie Technology Group, has leveraged strategic relationships with Microsoft and Dell Technologies to launch Macquarie Flex. It says this is the first Australian hybrid system powered by Microsoft Azure Stack HCI and Dell’s APEX Cloud Platform for Azure. This will provide workload flexibility, one management plane, 24×7 mission-critical support, and compliance across public, private, and hybrid cloud environments. Macquarie Cloud Services is the first Dell Technologies partner offering Azure Stack HCI in Australia. This announcement follows its recent launch of Macquarie Guard, a full turnkey SaaS offering that automates practical guardrails into Azure services.

NVIDIA released its MLPerf v4.0 Training results, saying it set and maintained records across all categories. Highlights include:

  • Achieving new LLM training performance and scale records on industry benchmarks with 11,616 Hopper GPUs.
  • Tripling training performance on GPT3-175B benchmark in just one year with near-perfect scaling.
  • Increasing Hopper submission speeds by nearly 30% from new software.

It says these results reinforce its record of demonstrating performance leadership in AI training and inference in every round since the launch of the MLPerf benchmarks in 2018.

Omdia’s latest analysis reveals that the global cloud storage services market generated $57 billion in 2023, with Amazon Web Services (AWS) leading with a 30 percent market share. Microsoft and Google follow, while Alibaba and China Telecom are key players in China. The market is projected to reach $128 billion by 2028, driven by digital transformation, remote work, and IoT growth. Notably, file storage is set to grow at a 21 percent CAGR, becoming crucial for AI workloads. The analysis forms part of Omdia’s 2024 Storage Data Services Report, offering an in-depth breakdown of the cloud data storage services market.

Omdia forecasts robust growth for the global cloud data storage services market, projecting it to reach $128 billion by 2028, with a CAGR of 17 percent over the next five years. In 2023, the total storage capacity sold in cloud storage services amounted to 2,100 exabytes. Amazon led in terms of global storage capacity consumed, accounting for approximately 38 percent of the market. Object storage, often referred to as cloud storage, dominated the services capacity sold, making up 70 percent of the total storage services capacity.

Dennis Hahn, Omdia Principal Analyst, said: “File storage is poised to be the fastest-growing segment, with an expected CAGR of 21 percent through 2028. This growth is largely attributed to the increasing use of file storage as high-performance storage in AI workloads. Despite object storage leading in capacity, storage services revenues are more evenly distributed among object, block, and file storage. This is due to higher per-capacity service charges for block and file storage from most vendors.”

Wedbush analyst Matt Bryson suggests to subscribers that Pure Storage could announce 150 TB Direct Flash Modules, its proprietary SSDs, at its Accelerate conference starting today (June 18). Then again, it might not.

Veritas, which is being bought by Cohesity, announced the latest Data Insight release, now available as a SaaS offering. Veritas Data Insight enables customers to assess and mitigate unstructured and sensitive data compliance and cyber resilience risks across multi-cloud infrastructures. Customers can consume it from the cloud as a multi-tenant, Veritas-managed SaaS application. Data indexing now requires up to 50 percent less disk space, expedited data classification focuses better on relevant content, and improved indexing and targeted classification result in more comprehensive compliance. Veritas Data Insight is available as part of all three Veritas data compliance and governance service offerings, and we understand it will sit in the DataCo part of Veritas left behind after the Cohesity acquisition.

Western Digital previewed a 2Tb QLC (4 bits/cell) NAND chip at a June 10 investor conference. The die uses BiCS8 node 218-layer NAND and a CBA (CMOS bonded to the Array) manufacturing technology. EVP Robert Soderbery talked about the end of the NAND layers race, with a slowing down of NAND layer count jumps to optimise capital deployment. He said “We’re no longer on a hamster wheel of nodal migration.” Nodes must be long-lasting, feature-rich and future-proofed. WD will be supplying premium nodes for premium use cases with stronger WD-customer relationships.

Quantum confirms dramatic sales plunge for FY 2024

Quantum has finally announced its delayed full fiscal 2024 profit and loss accounts, and they don’t make for pretty reading for employees, investors or customers.

Revenues for FY 2024, ended March 31, fell 26 percent year-over-year to $311.6 million, its lowest annual revenue in more than 30 years. It reported a net loss of $41.3 million compared to a net loss of $16.4 million in the prior year.

The top and bottom lines for FY 2022 and 2023 have been restated after an accounting investigation concerning standalone pricing of product components used in product bundles. FY 2022 revenues of $372.8 million were restated as $383.4 million and the previous net loss of $32.3 million has become a profit of $38.4 million. Revenues in FY 2023, which were $412.8 million, are now $422.1 million with the previously reported loss of $37.9 million now restated as an $18.4 million profit.

However, the good news ends there, given the revenue plunge and mounting losses in the last full financial year.

Quantum revenue, net income

Chairman and CEO Jamie Lerner said: ”Our full year 2024 results reflect a significant reduction of revenue from our largest hyperscale customer, which we had expected would scale down over time but instead stopped placing orders at the end of fiscal Q1 2024.” We understand that the hyperscaler stopped buying tape libraries. Tape media and product royalty revenues also declined.

CFO Ken Gianella said on the earnings call: “The hyperscaler wanted a custom solution that didn’t align with our business model. They sought a contract manufacturing model with 3-5 percent margins and required us to contribute our intellectual property, which we declined.”

This was Quantum’s largest customer by revenue and the effect of the buying pause was dramatic – more than $100 million went south. Product sales of $174.9 million in FY 2024 were down 36.4 percent year-on-year. Quantum’s revenues for the second, third, and fourth FY 2024 quarters totaled $219.8 million, 30.1 percent lower than the equivalent nine months in FY 2023.

Lerner said: “While extremely disappointed with the impact from significantly lower revenue year-over-year, we have been proactively accelerating our business transformation. During this time, our team continues to focus on improving the Company’s capital structure as well as optimizing our overall business operations.”

Jamie Lerner, Quantum
Jamie Lerner

The accounting investigation ”found no evidence of intentional misconduct.”

Financial summary for FY24

  • ARR: $145 million
  • Subscription ARR: $17.8 million, up 33 percent year-over-year
  • Cash, cash equivalents & restricted cash at quarter end: $25.9 million vs $26.2 million a year ago

Gianella’s presentation stated that there were “aggressive cost reduction actions taken through FY 2024” and they’re continuing in the current financial year. He said there are more than “$33 million of annualized operational efficiency and self help cost actions through FY 2025.” 

Lerner said Quantum was entering turnkey and manufacturing partnerships, and pursuing paths to monetize non-strategic assets. It sold its service inventory assets for $15 million in April.

He mentioned senior leadership and sales team changes in his results presentation. CRO John Hurley resigned and was replaced by VP EMEA Henk Jan Spanjaard in April. SVP and chief customer officer Rich Valentine left in March.

Lerner said: “Looking ahead, we remain committed to getting back to profitability as well as stabilizing and improving the performance of our legacy Automation and StorNext solutions. Quantum remains dedicated to use cases for Media & Entertainment, Life Sciences, Industrial Technology, and Federal while improving our position to address the prevailing industry trends around Artificial Intelligence across the multiple verticals we serve.”

The Nasdaq delisting threat (due to stock price woes) was put on hold after a hearing on May 14 in which the panel “granted the Company’s request for relief that the Company meet the minimum bid price requirement by September 16, 2024, and that its Quarterly Reports on Form 10-Q for the fiscal quarters ending September 30, 2023 and December 31, 2023 be filed by July 1, 2024.”

The delay to the quarterly reports is due to ongoing audit work.

Quantum’s product focus is on “rationalized platforms and portfolio.” It will look for resellers with reach in AI and life sciences.

The sales focus from now on is on the ActiveScale and Myriad products “serving use cases that drive higher recurring revenue, with improved margins, in faster growing market segments.” Lerner reckons: “Execution of our strategy to advance our operating model, combined with improving our capital structure, will drive step-change improvements to Quantum in fiscal 2025.”

He added: “ActiveScale is currently the fastest-growing product, and Myriad is expected to follow closely. We see high demand and significant attach rates between Myriad and ActiveScale, leading to higher ASPs and margins.”

The outlook for the first fiscal 2025 quarter is $72 million +/- $2 million revenues, a 22.7 percent fall at the midpoint. The full FY 2025 revenue outlook is guided at $310 million +/- $10 million, essentially flat at the midpoint.

The hyperscaler headwind, Lerner said, “will be felt in Q1, but we expect a slight uptick in Q2. The decline will be offset by growth in primary storage, Myriad, and ActiveScale, along with stabilizing legacy business.”

Quantum expects “product margins to improve as we focus on subscription-based and service revenue. The shift to subscription models impacts product revenue but enhances overall profitability.”

Datadobi brings mapping order to unstructured data chaos

As unstructured data stores face rampant growth, Datadobi has released its latest StorageMAP version, 7.0, claiming it will help data managers get a grip on proliferating data formats, sources, locations, and users, while helping owners control costs and create virtual datasets for application use.

StorageMAP technology scans and lists a customer’s unstructured data silos, including its file and object storage estates. Once identified and mapped, the storage used for this data can be optimized, with old and cold data moved to lower-cost archival storage, for example, and dead data deleted. Warmer – more frequently accessed – data can be tagged for use in AI training and inference work, and migrated to a public cloud for access by GPUs there. Version 7.0 adds custom dashboards and an analysis module to make this work more efficient.
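The scan-then-tier workflow described above can be sketched in a few lines. This is an illustrative approximation, not StorageMAP's actual logic, and the age thresholds are invented for the example:

```python
from datetime import datetime, timedelta

def classify_tier(last_accessed: datetime,
                  now: datetime,
                  warm_days: int = 90,
                  cold_days: int = 365) -> str:
    """Map a file's last-access time to a storage tier.
    Thresholds are illustrative, not StorageMAP defaults."""
    age = now - last_accessed
    if age <= timedelta(days=warm_days):
        return "hot"   # keep on primary storage
    if age <= timedelta(days=cold_days):
        return "warm"  # candidate for tagging/AI dataset use
    return "cold"      # candidate for low-cost archive or deletion review
```

In practice, a product like this would weigh many metadata fields (ownership, tags, sensitivity) alongside access time, not a single timestamp.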

Carl D'Halluin, Datadobi
Carl D’Halluin

CTO Carl D’Halluin said: “No other vendor on the market even comes close to what we now deliver. Datadobi is the future of unstructured data management – unrivaled, unparalleled, unmatched.”

Dave Pearson, IDC’s Infrastructure Research VP, added: “We are regularly hearing from clients that unstructured data is a double-edged sword – they recognize that it represents a potential wealth of information and a source of value creation within their organizations, but also that the massive capacity growth, a lack of enterprise-wide visibility, and difficulty breaking down silos of file and object data is a major management problem in a cost and resource-constrained time.”

When you have billions of files spread across thousands of locations, having specifically tuned reporting dashboards can make management more effective. Custom dashboards use metadata fields and StorageMAP tags to visualize, organize, and monitor the data in a single pane of glass. Elements include point-in-time and series charts, and lists. Files and objects can be categorized by data ownership, age, last accessed time, or user-defined tags such as criticality, sensitivity, and usefulness.

Data managers can use the analysis module to explore and analyze trends in an enterprise’s unstructured data.

It provides multiple layers of filters and classifications that create virtual datasets matching criteria of interest. These datasets can be used to create charts, tabular output, and other reports that can be included in the custom dashboards and used as input for actions such as migration, replication, pipelining etc. carried out within StorageMAP.

StorageMAP 7.0 also supports WORM migrations from IBM COS and Hitachi HCP Object systems to any S3 system supporting the S3 Object Lock API. This enables data managers to migrate such data while retaining legal hold status and retention dates. IBM COS and Hitachi HCP implemented their own proprietary WORM API protocols before AWS added WORM functionality to the S3 API. Now these non-S3-standard WORM vaults can be brought into the S3 environment.
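A hedged sketch of what landing such objects on an S3 Object Lock target involves – the helper below is an invented illustration, not Datadobi's implementation, but the dictionary keys it returns are the real Object Lock parameters that boto3's put_object accepts on a lock-enabled bucket:

```python
from datetime import datetime

def to_s3_object_lock_args(retain_until: datetime,
                           legal_hold: bool) -> dict:
    """Translate retention metadata read from a proprietary WORM
    system (e.g. IBM COS or Hitachi HCP) into S3 Object Lock
    parameters for put_object. Illustrative helper only."""
    args = {
        "ObjectLockMode": "COMPLIANCE",          # or "GOVERNANCE"
        "ObjectLockRetainUntilDate": retain_until,
    }
    if legal_hold:
        args["ObjectLockLegalHoldStatus"] = "ON"
    return args
```

The migration engine would merge these arguments into each object write, so the retention date and legal hold survive the move.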

General availability for StorageMAP 7.0 is planned for July 2024.

Qumulo edges out WEKA in Azure cloud performance benchmark

Scale-out file system supplier Qumulo has beaten WEKA in the Azure cloud using an AI-related benchmark.

The SPECstorage Solution 2020 benchmark has an AI Image subset alongside four other workload scenarios: Electronic Design Automation (EDA), Genomics, Software Builds, and Video Data Acquisition (VDA). The Azure Native Qumulo (ANQ) offering achieved 704 AI Image jobs with an overall response time (ORT) of 0.84 ms, delivering 68,849 MB/sec.

Qumulo claimed that this is both the industry’s fastest and most cost-effective cloud-native storage offering as its Azure run cost “~$400 list pricing” for a five-hour burst period. The software used a SaaS PAYG (pay as you go) model, in which metering stops when performance isn’t needed.

It said that deploying cost-effective AI training infrastructure in the public cloud requires transferring data from inexpensive and scalable object storage to limited and expensive file caches. ANQ acts as an intelligent caching data accelerator for the Azure object store, executing parallelized, pre-fetched reads, served directly from the Azure primitive infrastructure via the Qumulo file system to GPUs running AI training models.
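The parallelized, pre-fetched read pattern described can be sketched generically. This is a toy read-ahead illustration, not Qumulo's code; `fetch_chunk` stands in for an object-store range GET:

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor

def prefetch_read(fetch_chunk, num_chunks: int, window: int = 8):
    """Yield chunks in order while keeping up to `window` fetches
    in flight ahead of the consumer, hiding object-store latency.
    fetch_chunk(i) stands in for a range GET; illustrative only."""
    with ThreadPoolExecutor(max_workers=window) as pool:
        pending = deque()
        nxt = 0
        while nxt < num_chunks or pending:
            # Keep the read-ahead window full.
            while nxt < num_chunks and len(pending) < window:
                pending.append(pool.submit(fetch_chunk, nxt))
                nxt += 1
            # Hand the next chunk to the consumer in order.
            yield pending.popleft().result()
```

The point of the pattern is that while the consumer (here, a GPU feeding pipeline) processes one chunk, the next several are already being fetched from the object store.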

WEKA recorded the highest results in four of the benchmark’s categories in January 2022, including AI. It reported 1,400 AI Image jobs with an ORT of 0.84 ms using Samsung SSDs. A separate run with WEKA software running in the Azure public cloud recorded 700 AI Image jobs with a 0.85 ms ORT and 68,318 MB/sec. Qumulo has just beaten this by four jobs, 0.01 ms, and 531 MB/sec – a narrow margin.

A chart shows the differing vendor product results:

Qumulo has a lower ORT than WEKA in the AWS public cloud but a far lower AI Image job count. 

A Qumulo blog claims that its Azure PAYG pricing model is disruptive. It argues that “most other vendors, including a previous submission at 700 jobs at 0.85ms ORT, do not communicate costs transparently.”

The blog authors further state: “They include a large, non-elastic deployment of over-sized VMs that you would have to keep running, even after deployment, in order to maintain your data set. They require a 1–3 year software subscription, costing hundreds of thousands of dollars, on a software entitlement vs having a PAYG consumption model.”

Eminent Sun alumnus says NFS must die

Tom “Pugs” Lyon, Sun’s eighth employee and NFS architect, argues that NFS must die and give way to a block-based protocol for large dataset sharing in the cloud.

He gave a talk at the 2024 Netherlands Local Unix User Group conference in Utrecht in February, which can be watched as a YouTube video starting from the 6:24:00 point.

Lyon first established his credentials with impressive stints at Sun, Ipsilon Networks, Netillion, Nuova Systems, and DriveScale.

His presentation’s starting point is that “file systems are great for files but they don’t really give a shit about datasets.” Datasets are collections of files. A dynamically mountable file system in Linux can be considered a dataset, and datasets as used in AI training can be made up of billions of files consumed by thousands of GPUs, while also being updated by a few hundred agents working on a few thousand files at a time.

NFS architect Tom Lyon
Tom Lyon

NFS works and is popular, with its “killer feature that you can rapidly access arbitrary datasets of any size.” Its purpose is to provide a shared mutable data space, giving network-unaware applications access to large datasets without first copying them. But, Lyon argues, shared mutable data is basically a bad idea in concurrent and distributed programming, and file sharing is not appropriate in the cloud.

It’s a bad idea because it’s error-prone, and developers are forced to use complex synchronization techniques, such as locks and semaphores, to ensure data consistency. If we share immutable data instead, making copies of it, then there is no need for synchronization, and data consistency can be guaranteed.

In the cloud, he says, sharing layered immutable data is the right way to do it, citing Git, Docker files, Delta Lake, Pachyderm, and LakeFS as successful examples. You can cache or replicate the data and it won’t change. But this can involve too much copying.
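The layered-immutable model Lyon cites can be illustrated with a tiny content-addressed layer store in the style of Git objects or Docker image layers (a toy sketch, not his proposed system):

```python
import hashlib

class LayerStore:
    """Toy content-addressed store: each layer is an immutable
    mapping of path -> bytes, identified by a hash of its contents,
    so layers can be cached or replicated freely and never change."""

    def __init__(self):
        self._layers = {}

    def add_layer(self, files: dict) -> str:
        """Store a layer; identical content yields the same ID."""
        digest = hashlib.sha256(
            repr(sorted(files.items())).encode()
        ).hexdigest()
        self._layers[digest] = dict(files)
        return digest

    def resolve(self, layer_ids: list, path: str) -> bytes:
        """Overlay semantics: the topmost (last) layer wins."""
        for lid in reversed(layer_ids):
            if path in self._layers[lid]:
                return self._layers[lid][path]
        raise FileNotFoundError(path)
```

Because a layer's identity is its content hash, two readers holding the same ID are guaranteed to see the same bytes – which is exactly why no locking is needed when sharing it.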

Admins should think about using NVMe over Fabrics and block storage providers to get over the copying issue. NVMe-oF provides “crazy fast remote block device access.” Block semantics are well defined and “aggressively cached by the OS.”

The POSIX-like, distributed file system BeyondFS “allows many different block storage providers” such as Amazon EBS, Azure, and GCP equivalents, OpenEBS, Dell EMC, NetApp, and Pure Storage.

There is much more in his presentation, which you can check out here.

Overall, Lyon proposes a new approach to achieve fast, highly scalable, and consistent access to dynamic datasets by using existing file systems, OverlayFS, and NVMe-oF. And he wants collaborators to join him in the effort of getting a new open source project started to achieve it. He can be reached at pugs@lyon-about.com or on Mastodon @aka_pugs@mastodon.social.

Nutanix to hand Bain Capital $1.7B amid continued growth

Bain Capital is to receive an $817 million investment payback from Nutanix, as well as almost 17 million shares worth $900 million.

Update: Rajiv Ramaswami quote counters Chris Evans’ view that the Bain payment may restrict Nutanix’ ability to invest in its business. 24 June 2024.

Nutanix went public back in 2016. In September 2020, it arranged a $750 million investment from Bain Capital Private Equity’s BCPW Nucleon unit via a 2.5 percent convertible senior notes deal due in 2026. These notes are a form of debt security that can be converted into cash and equity (shares) at a later date.

Dheeraj Pandey, Nutanix co-founder, chairman, and CEO at the time, said: “Bain Capital Private Equity has deep technology investing experience and a strong track record of helping companies scale. Bain Capital Private Equity’s investment represents a strong vote of confidence in our position as a leader in the hybrid cloud infrastructure (HCI) market and our profound culture of customer delight.”

Pandey retired shortly thereafter.

The deal is governed by an indenture which specifies that Nutanix is required to settle the conversion by paying $817.6 million in cash and delivering approximately 16.9 million shares of Class A common stock. Nutanix plans to use its existing balance sheet liquidity to settle the cash portion of the conversion and should deliver the shares in late July 2024 following regulatory approval. 

When Nutanix took in the Bain investment, its revenues were $314 million to $328 million a quarter and it was making $185 million to $241 million losses per quarter. It finally became profitable in February this year and, as of its latest quarter end in June, had $1.651 billion in cash, cash equivalents, and short-term investments.

Rajiv Ramaswami, Nutanix
Rajiv Ramaswami

Nutanix CEO and president Rajiv Ramaswami said: “We appreciate the support, guidance, and counsel that Bain has provided us over the past several years and are pleased with their sustained endorsement.”

Analyst Chris Evans pointed out: “Effectively, for the issuing of $750 million, the company ‘pays back’ $1.73 billion (pays back in quotes as some of this is share issuance).

“It will be interesting to see the effect on the share price, but also on Nutanix’s ability to compete with the likes of VMware, when a big chunk of cash comes out of the company.

“Possibly the worst time for this to happen as it lets VMware off the hook somewhat. Nutanix might have to scale back sales and marketing spend, and maybe R&D, as these are currently the areas where the most money goes out the door. As reported in the third quarter 2024, the company only has about $1.6 billion in cash and short-term investments, so this payout will be quite a blow.”

However, Ramaswami said: “Paying Bain the cash portion of the settlement of their recent note conversion does not mean we have to scale back investments in our business. We fund our investments in sales and marketing and R&D from the gross profit generated by our business, not from cash on our balance sheet. In the first nine months of our fiscal 2024, after covering all of our planned expenses, including sales & marketing and R&D, we generated over $370 million of free cash flow.”

Max de Groen and David Humphrey, partners at Bain Capital, will continue to serve as members of Nutanix’s Board of Directors. Humphrey said: “We have been really pleased with our partnership with Nutanix over the last several years, particularly during its transformation from a pioneer of hyperconverged infrastructure into a much broader hybrid multi-cloud platform provider. We continue to believe in the future of Nutanix. Their innovative technology, market position and operational discipline are enviable.”

De Groen said Bain had no plans to sell its Nutanix shares. The share price is up 3.63 percent to $54.01 over the past five days.