
Users complain Apple’s MacOS Sonoma screwed up their exFAT external drive access

Apple’s Sonoma macOS interferes with non-Apple-format external drives, blocking access to their contents, users have complained, saying the problem has persisted for months and claiming the US tech giant has ignored it.

Mac users sometimes need external disks or SSDs to move files between Macs and other systems, such as Windows PCs and servers. The drives obviously have to be in a format accessible by both macOS and Windows. One such format is exFAT, the Extensible File Allocation Table, developed by Microsoft in 2006 as a licensable item and opened up for general use in 2019.

exFAT is now pretty much a standard for moving files on disks, SSDs, and SD cards between Windows, Linux, and smart devices. Users are complaining that upgrading from macOS Ventura to Apple’s current macOS 14 (Sonoma) causes problems with exFAT drives.

An Apple customer told us: “MacOS Sonoma messed up an exFAT formatted SD card containing dashcam footage. Literally all files vanished and Spotlight Search made sure recovery was going to be fruitless by stamping its mark on the filesystem although TestDisk/PhotoRec got some files back but mostly truncated MP4.” 

“In the process Sonoma corrupted two Kingston SSDs. These appear to work fine on my Windows NUC but don’t want to risk plugging them in to my MacBook Air M1. They would literally not appear in Finder and I couldn’t figure out what MacOS was doing. Windows came to the rescue here.”

Apple’s community forum has a lengthy thread on the issue. 

Poster BungalowBill92 is one of many reporting the issue, and in October 2023 he said: “Since I updated to macOS Sonoma, I’ve been experiencing issues with my external drives (in ExFAT). I can’t access them, nor can I access Disk Utility (which continuously displays a “Loading disks” message). Only when I use the Disk Arbitrator app to prevent the drives from mounting, it allows me to run Disk Utility and “First Aid,” enabling access to the external drive. However, the folder icon images are missing, and in the Sharing & Permissions info, it reads “You have custom access.” Is anyone else experiencing similar problems, or does anyone know how to resolve this?”

Poster feketegy reported experiencing an identical issue: “Having the same issue with external USB drives, USB sticks and external keyboards that are connected through USB on macOS Sonoma 14.2 and MacBook M3 Pro.”

There are 20 pages on this thread in the forum with users still experiencing the problem seven months later. Poster earz said this month:

“In all frankness I have to wonder just how much longer this travesty is going to continue. So many folks cannot USE their Apple product due to an operating system that seems to not be of any concern to Apple, and it looks like the crippled businesses who try to use the current OS and are not happy are not a concern either.

“I truly do not know of ANY OTHER scenario where the product remains ‘messy’ and ‘somewhat crippling to its users’ for so long. I truly believe Apple could care less about remedying Sonoma. Of course, they just might be sitting on a solution they will release with a price tag on it …some day ..maybe…IF they can fix the OS at all…”

Several posters suggest reverting to macOS 13 (Ventura) to get their exFAT drives working again.

Elsewhere on the internet there are many similar forum threads: Reddit’s r/MacOS, Apple Insider, MacRumors Forums, Mac Help Forums, and iBoysoft, for example.

In the MacOS Sonoma 14 release notes, Apple mentions a “New Feature” under the topic of File System. It states: ”The implementations of the exfat and msdos file systems on macOS have changed; these file systems are now provided by services running in user-space instead of by kernel extensions. If the application has explicit checks or support for either the exfat or msdos file systems, validate the applications with those file systems and report any issues. (110421802).”

The “user-space” term refers to everything within MacOS that isn’t in the kernel of the OS. Anything running in user space is subject to user ID access rules. These can limit file use and function depending on how they are set up. Kernel file access services don’t have any such file access issues as they run with root privileges instead. 
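As a rough way to see this change in practice, the mount table and process list can be checked from a terminal. The Python sketch below is a diagnostic aid only, assuming a Mac with an exFAT volume attached; Apple does not document the name of the user-space service, so the process-name match is simply a guess at anything mentioning exFAT.

```python
import subprocess

# List mounted volumes and keep only exFAT entries; mount reports the filesystem type.
mounts = subprocess.run(["mount"], capture_output=True, text=True).stdout
for line in mounts.splitlines():
    if "exfat" in line.lower():
        print("exFAT mount:", line)

# Look for user-space processes whose command name mentions exFAT. On pre-Sonoma
# releases the exfat driver was a kernel extension, so nothing comparable would
# appear here; the name pattern is only a guess at the new service.
procs = subprocess.run(["ps", "-axo", "pid,comm"], capture_output=True, text=True).stdout
for line in procs.splitlines():
    if "exfat" in line.lower():
        print("user-space process:", line)
```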

It is not known if this is a contributor to the exFAT drive access problem or not – since Apple has not said anything about it – but the timing of the change is suggestive.

We contacted Apple’s media relations team on May 20, relaying the issues above and asking what Apple is doing to fix the problem and whether it will be resolved in the forthcoming macOS 15.

There has been no reply.

Comment

It appears that Apple released MacOS Sonoma with inadequate testing of external drive connectivity and access. Four Sonoma point releases have not fixed the problem and Apple has issued no public statement recognizing the issue or committing to fix it.

Infinidat unveils EPYC 4th gen InfiniBox arrays

Infinidat has upgraded its InfiniBox arrays to fourth-generation hardware with a higher level of performance and added cyber-protection, Azure support, a controller upgrade program, and increased service offerings.

The company sells high-end and scale-up enterprise arrays with three controllers for reliability and Neural Cache-branded memory caching for very low latency storage request responses down to 35μs. InfuzeOS controls the arrays and also runs in the AWS cloud, providing an InfiniBox environment there. Infinidat has all-flash (SSA) and hybrid flash/disc versions of its array, both with memory caching. There is an InfiniGuard cyber-protection and backup system, and the InfiniVerse cloud-based monitoring system receiving telemetry from InfiniBox arrays.

Phil Bullinger, Infinidat CEO

Infinidat CEO Phil Bullinger said: “We’re excited to announce the new InfiniBox G4 systems and the many new enhancements that expand our InfiniVerse platform and STaaS (Storage-as-a-Service) initiatives, cybersecurity capabilities, infrastructure lifecycle management, and hybrid multi-cloud support, culminating significant product development efforts and field engagement with our partners and customers.”

Hardware

The InfiniBox SSA F1400T arrays move away from Intel Xeons to controllers based on AMD EPYC 9554P single socket 64-core CPUs. This gives them 31 percent more CPU capability and a 20 percent power reduction on a per core basis compared to the existing F4304T and F4308T systems.

They are fitted with a PCIe gen 5 bus, replacing the prior gen 3 PCIe interconnect, and DDR5 DRAM, enabling them to deliver up to twice the performance of the current (G3) generation of InfiniBox and InfiniBox SSA II arrays. 

There are four models in the SSA range – F1404T, F1408T, F1416T and F1432T – and they have the same active-active-active controller setup but different capacities:

Infinidat InfiniBox capacities

The capacities are expressed as percentages of a petabyte. The F1432T will be available in a few months. The F1404T requires 14 RU of rack space at 155TB usable capacity, and the F1408T and F1416T also take up 14RU, as will the F1432T. You will still be able to buy them from Infinidat in rack form, as well as in 14RU enclosures that fit in a standard rack. It will not be possible to upgrade from one F1400T model to the next.

Infinidat has also extended its hybrid InfiniBox 4400 range upward with a new F4420 model fitted with 20 TB disk drives, giving it 3.17 PB usable capacity, 55 percent more than the prior top end F4412:

Infinidat InfiniBox capacities

The F4408T and F4416T will be available in a flexible storage architecture with 60, 80 and 100 percent capacity populated forms.

Infinidat is joining Pure Storage’s Evergreen, IBM’s Storage Assure, and HPE’s Timeless initiatives with its own Mobius-branded, non-disruptive controller upgrade program for the gen 4 arrays over their life cycle.

InfuzeOS

The array’s InfuzeOS is now available in the Azure cloud as the InfuzeOS Cloud Edition product, as well as in AWS. This gives Infinidat customers multi-public-cloud capability for running Infinidat storage facilities, with replication to and between AWS and Azure from on-premises Infinidat systems for test/dev, DR, backup, and business continuity. The performance of the AWS and Azure InfuzeOS instances will depend upon the underlying infrastructure the CSP uses.

Cyber-protection and InfiniVerse

Infinidat has developed Automated Cyber Protection (ACP), where API calls from syslog, Security Information and Event Management (SIEM), and Security Orchestration, Automation, and Response (SOAR) tools can trigger automatic snapshotting of all or selected parts of the array’s data volumes. That means a ransomware or other attack, when detected by the tools, can have its threat window drastically reduced because critical data is immediately copied into an immutable InfiniSafe snapshot. The snaps can then be inspected with InfiniSafe Cyber Detection scanning, which uses AI and ML to check the content for integrity.
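To illustrate the pattern rather than Infinidat’s actual API (which this article does not document), a SOAR playbook step reacting to a SIEM alert might request an immutable snapshot over REST along the lines of the Python sketch below; the endpoint path, payload fields, and token handling are all assumptions.

```python
import requests

ARRAY_API = "https://infinibox.example.com/api"   # hypothetical base URL
API_TOKEN = "REPLACE_ME"                          # hypothetical auth token

def on_siem_alert(alert: dict) -> None:
    """Called by a SOAR playbook when the SIEM flags suspected ransomware activity."""
    # Hypothetical endpoint: ask the array for an immutable snapshot of the
    # volumes named in the alert, with a fixed retention lock.
    resp = requests.post(
        f"{ARRAY_API}/snapshots",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "volumes": alert.get("affected_volumes", []),
            "immutable": True,
            "retention_days": 14,
            "reason": alert.get("rule_name", "siem-triggered"),
        },
        timeout=30,
    )
    resp.raise_for_status()

# Example of how a playbook might invoke it:
# on_siem_alert({"rule_name": "mass-file-rename", "affected_volumes": ["finance-vol01"]})
```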

A blog by Bill Bassinas, Infinidat’s Senior Director of Product Marketing, says: “InfiniSafe ACP is a simple concept – when you see something, do something! … Without thinking, it automatically triggers a protection scheme to create immutable snapshots of any data within your InfiniBox SSA and InfiniBox platforms. Why do it? Why not? It costs you nothing! … and can save you millions!”

Infinidat has extended its InfiniSafe Cyber Detection capabilities to VMware environments. Bassinas says: “Volumes or file systems that are used for VMware datastores can now be specifically scanned with the same accuracy as standard data volumes and file systems. VMs are reported on with the same accuracy and high levels of granularity as volumes, files, databases, etc.”

It will extend its coverage to its InfiniGuard purpose-built backup appliance in the second half of 2024. InfiniSafe core functionality and InfiniSafe ACP are included at no cost with all Infinidat arrays.

Infinidat InfiniVerse

The InfiniVerse cloud monitoring facility has been upgraded to a so-called Platform-Native Architecture. It now includes Cyber Resilience Services, Consumption Services, Lifecycle Management Services, Data Services, and Manage and Monitor control plane services.

Infinidat systems can be purchased as Capex, FLX consumption-based, pay-as-you-grow STaaS or in a COD (Capacity On Demand) scheme. InfiniVerse Mobius applies to the Capex purchase scheme.

Comment

Customers should welcome these substantial improvements of Infinidat’s on-premises arrays and their software and services, with extended cyber-protection, controller upgrades, and hybrid cloud facilities. The F1404T, with its 155 TB capacity, provides a lower-cost entry point than before and should extend Infinidat’s appeal to new customers.

HYCU gets a Dell boost by adopting DD Boost

HYCU has announced API-driven integration with Dell PowerProtect/Data Domain and DD Boost software to speed the writing of HYCU backups to PowerProtect/Data Domain deduplication target appliances.

PowerProtect is Dell’s brand for its Data Domain backup appliances. DD (Data Domain) Boost is source-side deduplication software: the backup source partly deduplicates the data stream and the target appliance completes the dedupe work. Sharing the dedupe compute burden this way increases backup data ingest speed.
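The general idea behind source-side deduplication can be shown with a conceptual sketch; DD Boost itself is a proprietary Dell protocol, so the chunking, fingerprinting, and index exchange below are illustrative only.

```python
import hashlib

# Fingerprints of chunks the target appliance already holds. In a real system the
# client asks the target which fingerprints are new; a set stands in for that exchange.
target_index = set()

def backup(data: bytes, chunk_size: int = 4096) -> None:
    sent = skipped = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()   # fingerprint computed at the source
        if fp in target_index:
            skipped += 1                         # duplicate: only a reference is sent
        else:
            target_index.add(fp)                 # new chunk: the payload crosses the wire
            sent += 1
    print(f"sent {sent} chunks, skipped {skipped} duplicates")

backup(b"A" * 20000 + b"B" * 20000)   # first pass sends every chunk
backup(b"A" * 20000 + b"C" * 20000)   # second pass resends only the chunks that changed
```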

HYCU says this Dell-HYCU deal “significantly expands the reach and applicability of our solutions to thousands of new customers globally!”

Simon Taylor

Simon Taylor, HYCU founder and CEO, says in a statement: “This extraordinary release significantly improves data transfer and efficiency between the backup service and storage systems to enable Dell PowerProtect Data Domain customers to have faster backups, improved security, and reduced network traffic.”

HYCU has been supporting physical and virtual PowerProtect appliances since its beginning, and backs up data from Dell PowerScale, ECS Enterprise Object Storage, PowerEdge servers, and the XC Series to PowerProtect appliances.

David Noy, Dell’s VP of Data Protection Product Management, said: “The new HYCU integration with DD Boost software allows customers to more efficiently protect heterogeneous on-premises, multi-cloud, and SaaS workloads.”

HYCU is also announcing native support for Azure Stack HCI so that it can protect and run on Dell Azure Stack HCI and Azure Stack Hub.

HYCU’s R-Cloud SaaS data protection scheme covers more than 200 SaaS apps. It says enterprises can maintain control over their SaaS data, whether stored on-premises or with an MSP using Dell ECS Enterprise Object Storage.

More HYCU-Dell updates are on the way, and they are currently in controlled release with select partners and customers. The capabilities above will be generally available before the end of June 2024.

Komprise automates dataset-to-AI process workflows

Komprise has a Smart Data Workflow Manager which it says will enable users to build workflows connecting datasets with AI services.

The VC-backed company specializes in data management and orchestration, with software that tracks unstructured data usage and moves it to an appropriate storage tier on-premises or in the public cloud, depending on its access frequency. Its software also includes data movement and data identification capabilities.

AI applications such as large language models (LLMs) respond to users’ conversational questions and requests with output, reports, code, and so on. To do this they need access to datasets. These datasets need maintaining and refining to keep them up to date and valid, and to keep sensitive information masked or hidden where needed. Komprise’s metadata can be used to tag and track AI preparedness.

Komprise identifies two major issues: Efficiently discovering and feeding the right data to an AI platform, and enriching data sets for AI processing. It says: “Both processes are highly manual, laborious tasks that are error-prone and require meticulous data governance.” Its Smart Data Workflow Manager can fix these problems, it says.

Kumar Goswami, Komprise co-founder and CEO, said: “Leveraging AI responsibly and efficiently is a priority. We are targeting common use cases that many of our customers have brought to us as a first step and we will continue to expand the Smart Data Workflow ecosystem to encompass any AI service.”

Komprise customer Rob Behary, who is the head of systems and scholarly communications in the Gumberg Library at Duquesne University, said: “With Komprise, our librarians can process more images than ever and at faster speeds by leveraging AI to systematically tag all our digital collections.”

The Smart Data Workflow Manager provides:

  • Data Workflow Wizard – Intuitive point-and-click UI wizard to set up an AI data workflow, from searching for the right data set, to configuring and tuning the AI service, to defining the tags and how frequently the workflow should run (a hypothetical sketch of such a workflow follows this list);
  • Global Search and Analytics – Use Komprise Deep Analytics to search across an entire data estate, on-premises and in the cloud, and define the precise data set needed to feed a compute-intensive AI app;
  • Automated Workflows – Automatically run workflows and repeat the process as new data becomes available, saving time and effort and ensuring continuous data enrichment;
  • A catalog of pre-built integrations – Use with Azure, AWS and other AI services for personal identifiable information (PII) data detection, Chatbot augmentation and image detection;
  • Monitor hundreds of workflows from a single interface – See their status, how many files were processed, the runtime, the next scheduled run and any actionable errors;
  • Enrich data with tags in the Komprise Global File Index – The tags are file characteristics you can query and take action on; they are stored in the Global File Index and do not change file attributes in any way;
  • Data governance and auditing with Komprise’s software maintaining logs of details such as what data is fed, when, and to which service.
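Purely to illustrate the moving parts listed above, the Python sketch below models a workflow definition and one scheduled run. Komprise has not published a workflow definition format, so every field name here is invented.

```python
from dataclasses import dataclass, field

@dataclass
class AIDataWorkflow:
    # All field names are hypothetical; they mirror the wizard steps described above.
    name: str
    query: dict                        # Deep Analytics-style search defining the data set
    ai_service: str                    # e.g. a PII-detection or image-tagging endpoint
    tags_to_apply: list = field(default_factory=list)
    schedule: str = "daily"

def run(workflow: AIDataWorkflow, files: list) -> None:
    """Simulate one scheduled run: send matching files to the AI service and tag results."""
    matching = [f for f in files if f.endswith(workflow.query["extension"])]
    for f in matching:
        # A real run would call the configured AI service here and record its output as
        # tags in the Global File Index, leaving the file's own attributes untouched.
        print(f"{f}: tagged {workflow.tags_to_apply}")

wf = AIDataWorkflow(
    name="flag-pii-in-scans",
    query={"extension": ".pdf"},
    ai_service="pii-detector",
    tags_to_apply=["contains_pii=review"],
)
run(wf, ["scan1.pdf", "notes.txt", "scan2.pdf"])
```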

Komprise Smart Data Workflow Manager is available today as an early access program to customers. It is included in the Komprise Intelligent Data Management platform. Learn more here and attend a 45-minute webinar on June 6 at 15:00 UTC, registering here.

Why GenAI has IT in the catbird seat

COMMISSIONED: In baseball, “sitting in the catbird seat” means holding a position of strategic advantage. Like a batter with no strikes and three balls.

The legendary commentator for the Dodgers, Red Barber, borrowed this phrase from the behavior of the gray catbird in the southern United States. Sitting high above the ground, the catbird is ready to capitalize on opportunities. IT administrators are in the same spot, ready to steer their organization to the forefront of Generative AI (GenAI).

Research suggests 82 percent of business leaders believe GenAI will significantly transform their business, using their organization’s data to create new content, automate processes, and deliver insights with speed and efficiency.

However, the deployment and management of GenAI technologies demands a nuanced understanding of data management, network requirements, and infrastructure scalability. These are domains where Infrastructure Technology teams excel.

IT admins: strategic players in AI initiatives

IT teams are poised to lead the adoption and integration of GenAI technologies across the business for three key reasons:

IT knows how to get to the data

At the heart of GenAI initiatives is the ability to access and manage large volumes of data. Organization-specific knowledge holds incredible potential. What begins as “just” data has the potential to become insights, innovation, and intelligence unique to each business.

Storage admins have long been at home in this complex landscape. They have spent years managing structured and unstructured data. They have learned to excel in a multicloud world, providing access to both on-premises and cloud-based data.

The success of GenAI projects hinges on the efficient retrieval and processing of data. New requirements like real-time access to large volumes of data, and architectures optimized for speed and flexibility will drive the need for new approaches to storage management. Here, storage administrators can spearhead the adoption of innovative tools like the Dell Data Lakehouse to ensure that data in any format is accessible, preprocessed, and primed for effective AI training.

IT knows how to navigate the network

Network administrators have played a crucial role in establishing and maintaining the connectivity framework within organizations since the inception of IT teams. They laid foundational digital highways, enabled seamless communication, data transfer, and access to resources.

If data is the lifeblood of GenAI, then networking is its backbone. GenAI initiatives and the exponential growth of data will drive even more complex and powerful network infrastructure. Software-defined networking, orchestration, and advances in congestion control and adaptive routing mechanisms will continue to help fuel this rapid growth. While InfiniBand is most frequently used, Ethernet technologies continue to advance. Analysts forecast that the need for non-proprietary, cost-effective solutions will drive a 50 percent expansion in the datacenter switch market, with 800Gbps making up most AI back-end networks by 2025. Network admins can ensure their organizations are ready for AI infrastructure by strategically learning and deploying the right solutions to meet the growing demands of their organization.

IT knows how to scale and virtualize compute

IT teams excel at scaling technology to meet the expanding needs of the business. When management complexity expanded, they drove containerization and virtualization. When their organizations grew, they deployed scalable infrastructure and implemented cloud services.

As GenAI projects find success, the ability to quickly scale is paramount. Transitioning from proof of concept to full-scale production while quickly showing ROI can present a significant challenge. The key for IT teams will be to start with low hanging, high probability use cases. Simultaneously they will need to anticipate growth trajectories and prepare infrastructure to support expansion. Leveraging an agile framework like the Dell AI Factory for NVIDIA provides a highly scalable infrastructure with a flexible microservices architecture.

A call to action

It is time for IT teams to take advantage of their unique position, not as participants, but as leaders. This journey calls for a deepening of skill sets. From mastering data processing for GenAI, to understanding the demands of high-bandwidth GenAI infrastructure, to looking at the datacenter in an entirely new light. The opportunity is ripe for IT professionals to build upon their established expertise, driving not only their careers forward but also positioning their organizations at the forefront of the AI revolution. IT teams do not have to do this on their own. Dell Technologies is also driving these initiatives with education, services, and great solutions. Visit us at Dell.com/AI to learn more.

Brought to you by Dell Technologies.

Nutanix and Cisco grease migration for VMware customers

The second part of today’s Nutanix barrage is aimed at the installed base of Cisco UCS servers and VMware vSAN systems, with the first part here and the third story here.

Software-defined Nutanix needs servers on which to run its HCI software and AHV hypervisor. Partner Cisco has thousands of installed UCS servers, both blades and rack shelves. Broadcom’s acquisition of VMware has raised concerns among customers about the direction of VMware’s software development and changes in business practices. Nutanix aims to make it easier for installed base VMware vSAN customers to migrate their hardware to Nutanix.

Andrew Phan, CIO at Treasure Island and Circus Circus Hotel Casino, said: “We made the decision to move all of our workloads, including our mission-critical 24×7 environment, entirely to Nutanix when we learned our existing hypervisor pricing would more than double. Moving to Nutanix was one of the fastest and smoothest migrations we’ve ever had.”

Nutanix is working jointly with Cisco to certify Cisco’s UCS blade servers. An aim is to enable enterprises to repurpose existing deployed UCS servers, including blade servers, to run the Nutanix AHV hypervisor. We could see UCS blade, rack, and X-Series compute-only nodes connected to Nutanix HCI or storage-only nodes.

The AHV company also supports repurposing many vSAN ReadyNode configurations to help customers simplify migration to Nutanix’s Cloud Infrastructure/AHV offering by enabling reuse of existing hardware. This and the UCS support will help lower the TCO for customers looking to, Nutanix says, “modernize their infrastructure.”

Thomas Cornely, SVP of Product Management, said: “We are excited to work with our partners to expand the reach of Nutanix AHV to compute-only servers beyond traditional hyperconverged servers, further accelerating its adoption by enterprise customers to simplify operations and increase cyber-resilience.”

The cyber-resilience angle was reinforced by Nutanix enhancing its Secure Snapshot capability with a new multi-party approval control for privileged operations such as snapshot changes to protect against malicious actors and ransomware.

Nutanix is strengthening its AHV Metro offering with support for multi-site disaster recovery (DR) to help customers more quickly recover from two simultaneous site failures. This may be helpful for customers subject to the Digital Operational Resilience Act (DORA), which goes into effect for financial services organizations in the European Union in 2025.

AHV is getting Automatic Cluster Selection to intelligently place newly created virtual machines (VMs) across a set of clusters, balancing resource utilization without admin involvement and simplifying self-service application provisioning. AHV’s live migration is being accelerated by managing the way memory is replicated to the destination host in a more intelligent manner. This will particularly help the migration of large and highly active VMs.

New AHV server capabilities and AHV Metro multi-site DR are under development. Support for vSAN ReadyNode, Secure Snapshot, and Automatic Cluster Selection in AHV are available to customers. More information here.

Nutanix expands AI initiatives with new partnerships

Nutanix is building on its Cisco deal, Broadcom’s VMware acquisition, the GenAI boom, containerization, PostgreSQL interest, and green initiatives with a slew of announcements at its .NEXT conference in Barcelona. 

The announcements concern GPT-in-a-Box, AI inferencing and training, adoption of in-place Cisco UCS servers and vSAN nodes, a PostgreSQL partnership with EDB, expanded Kubernetes support, and electricity power monitoring.

We’ll cover the AI angles here, look at UCS server support and vSAN node adoption in a second article, with EDB, Kubernetes, and power monitoring combined in a third story.

Nutanix and AI

“Enterprise is the new frontier for GenAI,” said Thomas Cornely, Nutanix SVP of Product Management, “and we’re excited to work with our fast growing ecosystem of partners to make it as simple as possible to run GenAI applications on premises at scale while maintaining control on privacy and cost.”

Nutanix has upgraded its GPT-in-a-Box offering to integrate with Nvidia’s NIM microservices and the Hugging Face large language model (LLM) library, and intends to add support for Nvidia’s GPUDirect file delivery protocol so Nutanix’s HCI systems can pump file data to Nvidia GPU farms for AI model training.

Joint customers will be able to use GPT-in-a-Box 2.0 to consume validated LLMs from Hugging Face and execute them. Nutanix and Hugging Face will develop a custom integration with Text Generation Inference, the Hugging Face open source library for production LLM deployment, and enable text-generation models available on the Hugging Face Hub within GPT-in-a-Box 2.0.
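For context on what Text Generation Inference provides, the hedged Python sketch below calls a running TGI endpoint via the Hugging Face hub client. The endpoint URL is a placeholder, and the Nutanix-specific GPT-in-a-Box integration is not shown here.

```python
from huggingface_hub import InferenceClient

# Point the client at a running Text Generation Inference endpoint; the URL is a
# placeholder for wherever the deployed model is exposed.
client = InferenceClient(model="http://tgi.example.internal:8080")

reply = client.text_generation(
    "Summarize our Q3 storage capacity plan in two sentences.",
    max_new_tokens=120,
    temperature=0.2,
)
print(reply)
```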

There will be a jointly validated and supported workflow for Hugging Face libraries and LLMs, ensuring customers have a single point of management for consistent model inference.

Nutanix says many enterprises find it challenging to make all the decisions needed to set up AI apps, such as choosing among hundreds of thousands of models, serving engines, and supporting infrastructure. They lack “the new skill sets needed to deliver GenAI solutions to their customers.” Its collaboration with Nvidia and GPT-in-a-Box v2 enhancements help simplify the customer experience. 

Tarkan Maner, chief commercial officer at Nutanix, said: “Enterprises are looking to simplify GenAI adoption, and Nutanix enables customers to move to production more easily while maintaining control, privacy, and cost.”

Nvidia NIM

NIM microservices, running on top of the Nutanix Cloud Platform, enable retrieval-augmented generation (RAG), whereby generally trained GenAI LLMs get access to proprietary and private user data: spreadsheets, presentations, documents, emails, POs, whitepapers, and so on. They enable AI inferencing on a wide range of models, including open source community models, Nvidia AI Foundation models, and custom models, leveraging industry-standard APIs.
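In outline, RAG embeds private documents as vectors, retrieves the documents closest to a query, and prepends them to the model prompt. The sketch below is generic Python rather than anything NIM-specific: the crude bag-of-words embedding stands in for a neural embedding model, and the final prompt would be sent to whatever inference endpoint (such as a NIM microservice) an organization deploys.

```python
import numpy as np

documents = [
    "Q3 revenue grew 12 percent, driven by the storage division.",
    "The PTO policy allows 25 days of annual leave.",
    "Datacenter power draw peaked at 1.2 MW in January.",
]

# Crude bag-of-words embedding; a production pipeline would use a neural embedding model.
vocab = sorted({w.lower().strip(".,?") for d in documents for w in d.split()})

def embed(text: str) -> np.ndarray:
    words = [w.lower().strip(".,?") for w in text.split()]
    v = np.array([words.count(t) for t in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(question: str, k: int = 1) -> list:
    scores = doc_vectors @ embed(question)        # cosine similarity of unit vectors
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "How much did revenue grow in Q3"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# The prompt would now be sent to the LLM inference endpoint.
print(prompt)
```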

Users will be able to build scalable and secure NIM-enhanced GenAI apps by accessing the catalog of NIM microservices.

Nutanix announced certification for the Nvidia AI Enterprise 5.0 software platform for streamlining the development and deployment of production-grade AI, including Nvidia NIM. 

Manuvir Das, Nvidia VP of Enterprise Computing, said: “The integration of Nvidia NIM into Nutanix GPT-in-a-Box gives enterprises an AI-ready solution for rapidly deploying optimized models in production.”

GPT-in-a-Box 2.0 will include:

  • Unified user interface for foundation model management, API endpoint creation, end-user access key management
  • Point-and-click user interface to deploy and configure Nvidia NIM
  • Integrated Nutanix Files and Objects, plus Nvidia Tensor Core GPUs
  • Automated deployment and running of inference endpoints for a range of AI models and secure access to the model using fine-grained access control and auditing
  • Support for AI-optimized GPUs, including Nvidia’s L40S, H100, L40, and L4
  • Support for density-optimized GPU systems from Dell, HPE, and Lenovo to help lower TCO by allowing customers to deploy a smaller number of systems to meet workload demands

NUS, Data Lens, MGX and partners

Nutanix enhanced its unstructured data platform for AI/ML and other applications. Nutanix Unified Storage (NUS) now supports a new dense, low-cost, all-NVMe platform of more than 550 terabytes and up to 10 GBps sequential read throughput from a single node (close to line speed for a 100GbE port), enabling faster data reads and more efficient use of GPU resources.

Nutanix’s Data Lens cloud-based data governance and ransomware hunting service has extended its cyber resilience to Objects in addition to Files data. A new Data Lens point of presence in Frankfurt enables broader adoption for EU customers, meeting their own compliance needs.

Nutanix has planned support for NX-9151, which is based on the Nvidia MGX modular server reference architecture. This allows for different configurations of GPUs, CPUs, and DPUs – including Nvidia Grace, x86 or other Arm CPU servers, and OVX systems.

There is a new Nutanix AI Partner Program providing customers with simplified access to an expanded ecosystem of AI partners. Partners will help customers build, run, manage, and secure third-party and homegrown GenAI apps on top of Nutanix Cloud Platform and GPT-in-a-Box, targeted at prominent AI use cases. Initial partners include Codeium, DataRobot, DKube, Instabase, Lamini, Neural Magic, Robust Intelligence, RunAI, and UbiOps.

Nutanix GPT-in-a-Box 2.0 is expected to be available in the second half of 2024. More information can be found here. Support for Nvidia GPUDirect and NX-9151 are under development. Additional features announced in NUS as well as Data Lens are available. More information here.

Nutanix partners with EDB to fit database service for AI

Nutanix is partnering with EDB to integrate EDB’s PostgreSQL software into the Nutanix Database Service, with an AI aspect to the deal. It also announced the Nutanix Kubernetes Platform (NKP), which simplifies management of container-based apps, along with a power monitor that displays Nutanix software’s electricity consumption.

This is our third story about Nutanix’ .NEXT 2024 announcements with the first part here and the second part here.

PostgreSQL is an open source, extensible relational database management system (RDBMS) that’s said to be enterprise-grade. VC-funded EDB (EnterpriseDB) provides PostgreSQL software, services, and support. Its commercially supported EDB Postgres Advanced Server adds Oracle compatibility along with features, tools, and certifications to make it even more enterprise-ready. The EDB offering now becomes an officially supported database of the Nutanix Database Service.

Nutanix chief commercial officer Tarkan Maner said: “Nutanix Database Service automates provisioning, patching, cloning, and data protection to accelerate deployment, support day two operations, maintain compliance, and manage databases at scale. Our collaboration with EDB allows customers to deploy PostgreSQL in the most demanding enterprise environments while simultaneously increasing productivity for developers building applications on PostgreSQL.”

EDB CEO Kevin Dallas echoed the sentiments about AI, stating: “The expanded partnership between Nutanix and EDB promises a seamless path to migration from legacy systems and provides a competitive edge for the AI generation of applications with support for transactional, analytical, and AI workloads.

”EDB’s future data and AI platform will catapult PostgreSQL into the world of data analytics and AI, providing businesses with a PostgreSQL-enabled, comprehensive data ecosystem.”

Nutanix EDB diagram with Postgres AI layered onto Nutanix’s Cloud Infrastructure software layer

This refers to EDB adding vector support to PostgreSQL, with the pgvector extension providing vector data types and functions that aid semantic or similarity searching. The platform aims to add vector embedding capabilities, enhance vector search, and incorporate retrieval-augmented generation (RAG) workflows. Data scientists will be able to develop and execute machine learning models inside the PostgreSQL ecosystem, coding in Python, R, and Rust.
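For illustration, this is roughly what similarity search with the pgvector extension looks like at the SQL level, driven from Python with psycopg2. The connection string, table name, and tiny vector dimension are placeholders, and EDB’s forthcoming Postgres AI layers beyond pgvector are not shown.

```python
import psycopg2

conn = psycopg2.connect("dbname=demo user=postgres")   # placeholder connection string
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id serial PRIMARY KEY,
        content text,
        embedding vector(3)   -- toy dimension; real embeddings are hundreds of values wide
    );
""")
cur.execute(
    "INSERT INTO docs (content, embedding) VALUES (%s, %s::vector), (%s, %s::vector);",
    ("invoice terms", "[0.9,0.1,0.0]", "holiday policy", "[0.0,0.2,0.9]"),
)

# Nearest-neighbour search: '<->' is pgvector's Euclidean distance operator.
cur.execute(
    "SELECT content FROM docs ORDER BY embedding <-> %s::vector LIMIT 1;",
    ("[0.8,0.2,0.1]",),
)
print(cur.fetchone())   # ('invoice terms',)

conn.commit()
cur.close()
conn.close()
```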

In summary, EDB is extending its PostgreSQL software into Postgres AI, a combined database, data lake, data warehouse, and AI/ML vector datastore in a single software stack, and now has Nutanix cooperation, adoption, and support for its Postgres AI to do so.

Nutanix Kubernetes Platform

Nutanix has supported Kubernetes in its Cloud Platform software for some time. NKP extends this support by building upon the Kubernetes management technologies of the D2iQ Kubernetes Platform, which Nutanix acquired in 2023.

It is also part of Nutanix’s Project Beacon to enable the decoupling of apps and data from the underlying infrastructure so developers can build applications once and run them anywhere.

NKP has a complete, CNCF-compliant cloud native stack providing a consistent operating model for managing Kubernetes clusters across on-premises, hybrid, and multi-cloud environments. It can manage clusters running in non-Nutanix environments, including popular public cloud Kubernetes services, as well as both connected and air-gapped environments.

Customers can manage both on-premises Nutanix containers and clusters running in the public cloud through a single interface, reducing “complexity and operating costs.” 

Tobi Knaup, general manager of Cloud Native at Nutanix, said: “One of the biggest challenges organizations face with cloud native applications is deploying, securing and managing the rapidly expanding fleets of Kubernetes clusters being deployed on premises and in public clouds and NKP simplifies this.”

NKP comes in three tiers:

  • NKP Starter is included in Nutanix Cloud Infrastructure and effectively replaces the existing Nutanix Kubernetes Engine, delivering turnkey clusters
  • NKP Pro adds a suite of cloud native projects to help securely run and operate individual clusters, including built-in Nutanix Data services for Kubernetes when deployed on Nutanix Cloud Infrastructure
  • NKP Ultimate brings fleet management capabilities, including the ability to install, run and monitor clusters in the public cloud

Power monitor

Nutanix’s Cloud Platform software is getting an electrical power consumption monitor, based on measurements from the hardware in use, updated in near-real time. Customers will be able to visualize power metrics in their Prism Central dashboard, report historical data, and better understand energy utilization across their Nutanix environment.

It is included in Nutanix Cloud Infrastructure (NCI), and builds upon recently released capabilities in Nutanix’s X-Ray benchmarking tool, providing power and energy information for comparison alongside other performance metrics (CPU, Memory, IOPS, etc.) for real-world scenarios. This can help customers better understand the power and energy usage for specific simulated workloads.

Nutanix says that customers who shared their experiences using the NCI product, its hyperconverged infrastructure-based building block, reported on average more than a 70 percent decrease in physical footprint and a 50 percent reduction in energy consumption versus their legacy systems.

It claims that this is the first step in providing Nutanix Cloud Platform users with information and tools to better support their sustainable IT initiatives.

Availability

EDB on NDB is available now. Learn more here. NKP is expected to be available in the summer of 2024. More information can be found here. The power consumption dashboard is under development. More information here.

Dell unveils PowerStore Prime with enhanced NVMe performance and scalability

Dell is upgrading its PowerStore software to PowerStore Prime at Dell Technologies World 2024 in Las Vegas, as well as introducing a QLC (4 bits/cell) model to increase storage density and lower cost per TB.

PowerStore is Dell’s mid-range, dual-controller, unified file and block storage array. Version 3.0 of the PowerStore OS was announced in May 2022. The current gen 2 PowerStore hardware employs Cascade Lake Xeon CPU controllers and a new “no compromises” QLC system – the PowerStore 3200Q – is being announced. The container-based PowerStore v3.6.1.1 OS is upgraded to PowerStore v4.0 in a PowerStore Prime initiative.

Dell’s SVP for product management, Travis Vigil, blogs: “Dell Technologies is pleased to announce PowerStore Prime, an integrated offering that combines all-flash storage advancements with new strategic business advantages to help our customers compete in a rapidly changing world.” 

There are six PowerStore models: the all-flash 500T, 1200T, 3200T, 5200T, and 9200T, all using TLC (3 bits/cell) SSDs in 2RU x 24 drive chassis, with the new 3200Q being a QLC (4 bits/cell) flash variant using 15.36TB NVMe drives. It can expand from 11 to 93 drives in single drive increments, reaching more than 24PB effective in a 4-way cluster. The 3200Q with the v4 OS supports block, file and vVol storage protocols.

PowerStore model specifications.

V4 PowerStore OS increases end-to-end NVMe performance and scalability, with federated clusters able to reach 23PB effective capacity. A 2RU x 24-drive slot node can scale up to 1PB of raw capacity and a cluster can scale out to eight active-active nodes.

TLC (3 bits/cell) or QLC (4 bits/cell, 3200Q only) NVMe SSDs are available, as well as NVMe/TCP and NVMe/FC over external networks. Dell says: “This end-to-end NVMe ecosystem delivers extremely high IOPS value and sub-millisecond latency.” SmartFabric storage networking software provides what Dell calls the industry’s first fully automated end-to-end NVMe deployment, meaning no manual networking admin effort is needed.

Dell PowerStore Prime
The software performance boost compares PowerStore OS v4.0 with a 70/30 read/write mix and 128K block size over Fibre Channel, with PowerStore 5200’s peak IOPS running PowerStore OS v3.6

There is a guaranteed 5:1 data reduction level from always-on deduplication and compression. Dell claims this “keeps capacity and power costs perpetually low, reducing the data footprint in the background with no performance trade-offs.”

When volumes need to be rebalanced across PowerStore appliances, a machine learning engine is used, with Dell claiming a 99 percent reduction in admin time to accomplish rebalancing.

Dell is now offering options to move live PowerStore workloads to and from APEX Block Storage.

Data resilience

Dell has five types of data protection available, including new synchronous replication for block and file, in a single workflow. A patented Dynamic Resiliency Engine (DRE) protects mission-critical data, using virtualization methods to safeguard against simultaneous drive failures.

PowerStore Prime performance, efficiency, resilience and multi-cloud features slide.

In multi-appliance environments, native file, block, and vVol replication provides secure, immutable, policy-based snapshot data mobility and protection for all workloads. Over shorter distances (up to 60 miles), native Metro Synchronous Replication provides a high availability feature with zero RTO/RPOs. It is software-only, configurable in six clicks, and comes at no extra charge.

PowerStore v4.0 users can back up, restore, and migrate to the public cloud via APEX offerings. An integration with PowerProtect DD enables users to configure and manage remote or multicloud backups directly from the PowerStore Manager UI, using an included Instant Access capability for simple, granular restores.

Admin features and integrations

There is a PowerStore AIOps application – Dell APEX AIOps – accessible in the cloud from any mobile device, which “reduces time to resolution” with a new generative AI assistant. It mitigates cyber security risks, improves staff productivity, and forecasts future storage needs. Dell ISG VP of Technologists Itzik Reich blogs: “The release is supported by … a new Gen AI-powered version of our APEX AIOPs Infrastructure Observability tool (formerly CloudIQ) that adds natural language query support to deliver instantaneous high-quality answers to any question about Dell’s infrastructure products. Essentially, you’ll be able to ‘talk’ to CloudIQ, saving hours of research, resolving issues and optimizing your infrastructure faster!”

PowerStore Prime future-proofing slide.

PowerStore services can be provisioned at the VM-level directly from vSphere. VMware integrations include vRO, VAAI, and VASA support with VSI plugins, native blocks, vVols and file datastores, vVols-over-NVMe networking, and native vVol replication.

Storage workflows can be automated via a REST API and integrations with several orchestration frameworks. DevOps users can reduce deployments from days to seconds by provisioning PowerStore directly from Kubernetes using Ansible, Terraform integrations, and the platform’s CSI (Container Storage Interface) plugin. Dell Container Storage Modules (CSM) bring enterprise storage capabilities to Kubernetes to facilitate cloud-native workloads. Amazon EKS support enables users to run container orchestration across the public clouds and on-premises environments.
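To illustrate what provisioning storage directly from Kubernetes means in practice, the sketch below uses the standard Kubernetes Python client to request a volume through a StorageClass. This is the generic CSI pattern rather than a Dell-documented recipe, and the StorageClass name is a placeholder that would be defined when the array’s CSI driver is installed.

```python
from kubernetes import client, config

config.load_kube_config()   # use the current kubectl context

# Request a 50 GiB block volume from a StorageClass backed by the array's CSI driver.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="powerstore-block",   # placeholder StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
# The CSI driver now carves out the volume on the array and binds it to the claim;
# a pod mounts it by referencing the claim name, with no storage-admin ticket involved.
```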

PowerStore has non-disruptive, data-in-place controller upgrades. Dell says all array software is included in the hardware purchase – both the initial OS release and continuous performance and feature upgrades. There is no licensing for purchase or maintenance, and all software enhancements are provided at no charge for the life of the product.

Dell says partners can increase PowerStore sales with competitively priced product bundles and deliver expanded use cases for joint customers. Partners can also streamline sales motions when selling PowerStore and PowerProtect offers together.

Availability

PowerStore software enhancements will be globally available in late May. PowerStore QLC model and data-in-place higher-model appliance upgrades for Gen 2 customers will be globally available in July. PowerStore multi-cloud data mobility will be globally available in the second quarter of 2024. APEX AIOps Infrastructure Observability and Incident Management are available now. Application Observability availability is planned for October 2024.

Download a PowerStoreOS v4 datasheet here. Check out a PowerStore Prime blog here. Look at PowerStore model details here.

Vawlt CEO thinks single cloud dependence is a mistake

Interview. Our UniSuper Google Cloud disaster story spurred a discussion with Ricardo Mendes, CEO and co-founder of Portugal-based Vawlt, a software startup focused on moving data securely to the cloud. Its software orchestrates multiple cloud providers for disaster recovery, backup, and data agility, and his views raised interesting points.

Blocks & Files: What do you think of UniSuper’s Google Cloud disaster?

Ricardo Mendes, Vawlt CEO and co-founder

Ricardo Mendes: This situation illustrates a critical point we’ve been advocating for years: All suppliers, including cloud giants, can fail, and cloud-managed geo-replication isn’t synonymous with comprehensive fault tolerance or resiliency. Many of our customers often ask about the difference between spreading data across multiple clouds and regions versus relying on a single provider’s geo-replication. 

The analogy we frequently use is that clouds are very good and reliable “supercomputers” with all the challenges such an approach brings – single point of failure and dependency, to name just those related to the incident you mentioned. So geo-replication provides resiliency against events like natural disasters but doesn’t safeguard against cloud-level incidents.

Blocks & Files: How do you view customers depending on a single cloud?

Ricardo Mendes: Blind trust in single cloud providers means abdicating control over critical disaster recovery strategies and activities, weakening an organization’s ability to respond effectively to incidents. As you stated in your article, this dependency isn’t just operational, since companies depend on their cloud providers’ responses and transparency when something goes wrong (and I agree with you that, probably, it tends to be increasingly worse the smaller a company is). Relying solely on a single cloud provider compromises business continuity, independence, and organizational sovereignty (and, as a side note, precludes economic and operational efficiencies).

For these reasons, cloud independence should be a more frequently discussed topic. A world where this independence is the standard is, in my opinion, the way to go. And let me be clear: Vawlt does not fully solve the issue, but one thing I know is that it helps. I believe this event is a crucial lesson for the industry on the importance of this matter and the existence of the space for solutions (and services) to solve it.

Blocks & Files: Could a CSP use logical air-gapping to separate out an internal-to-their-cloud disaster recovery facility such that a user subscription cancellation would not automatically cancel all the customer’s cloud infrastructure? The DR site would need multi-factor authentication for delete changes or could have an immutability characteristic with set-in-stone retention periods.

Ricardo Mendes: From the CSP’s point of view, they can (and should) have the maximum measures in place to segregate different internal systems, making each individual failure independent. A solution like the one you suggest could solve the specific problem of UniSuper, but I think the issue is much broader. 

No matter how many measures CSPs take internally, given that they are a single organization, they could simply be inoperable as a whole – not just due to technical issues related to unwanted incidents, but also due to the CSP’s decisions (discontinuing a particular service, deciding to change prices). 

My stance is that the measures to ensure the independence, sovereignty, and business continuity of companies should be taken by those companies themselves, completely independently of the guarantees provided by CSPs. 

The concept of Supercloud that is starting to be discussed, although there is still no consensus on its definition, is fundamentally about creating an “abstract cloud” that abstracts away from the CSPs it is built upon. Aside from another potential buzzword, I advocate that there is room for the creation of software layers that leverage resources provided by clouds, ensuring vendor independence (and, by the way, with other benefits such as cost reduction). 

Using Vawlt as an example, what we do with data storage can and should be done for other areas of infrastructure and cloud services in general. Aviatrix, for example, operates in this space with cloud network solutions distributed with fault tolerance at the CSP level.

Blocks & Files: Would a user having two CSP subscriptions, with subscription 2 being the DR facility for subscription 1, work such that it would prevent a UniSuper-type disaster?

Ricardo Mendes: Operationally, yes. However, in my view, I would add the concern of how the DR scheme is set up – the client should have control. 

An approach where the client uses a software layer to replicate their DR capability between two CSPs entirely independently of each of them is, in my opinion, the way forward. 

I am talking about operating this software layer to respond to disasters (and the migration process) to avoid the operational unavailability that occurred in the UniSuper case, but also to ensure that the response to the incident is entirely under the organization’s control, not the CSP’s, and thus not dependent on the CSP’s support response time. 

Here, I should add that there is a market space exactly for solutions that offer this kind of guarantees in a very simple way by design – there are technologies that allow things like what I am referring to when combined, but they tend to be extremely complex and expensive, which means they are only within reach of large players.

Blocks & Files: Could two CSPs set up a mutual cross-cloud DR facility for customers such that AWS would have a GCP agreement for an AWS customer to have a DR facility in GCP and vice versa? My simple mind says this is theoretically possible but likely to be commercially impossible.

Ricardo Mendes: Technically speaking, it seems to me that there are already solutions on the market that would allow the technical implementation of such a solution. And yes, I think the immediate problem would be commercial and the message it would send to the market by the CSPs. 

However, as I mentioned, I think a problem would remain: control being on the CSPs’ side in operation, configuration, and response to potential incidents. 

In a way, taking it to the limit, this would be a new service with a sort of consortium geo-replication. There would certainly be greater segregation between systems and everything else, but to what extent is this very different from geo-replication of a single CSP, considering that a rather deep integration between the two CSPs would be needed? 

Would it not be a new abstraction of a single CSP that could then fail like a single CSP? I always return to the initial point when these questions arise: taking advantage of the resources of the multi-cloud world, but independence and incident response should be ensured by the client, who should maintain the maximum control over these mechanisms and processes.

Dell PowerScale storage upgraded in grab for AI model training

Dell has announced a PowerScale F910 system with a parallel file system.

PowerScale is Dell’s name for the acquired EMC Isilon scale-out filers. Up until now there were five all-flash PowerScale models: F200, F210, F600, F710, and F900, with the F210 and F710, announced in February, using a PCIe gen 5 bus and Intel Sapphire Rapids CPUs. These are all PowerEdge servers with directly attached storage, running the OneFS operating system, and they can be clustered from 3 to 252 nodes.

The F910, like the F900, comes in a 2RU chassis with 24 NVMe drives. It can hold up to 1.87 PB of capacity per node, meaning it uses 61 TB SSDs, QLC ones from Solidigm we think. An F910 blog by Dell’s Tom Wilson, a senior product manager in the Unstructured Data Solutions (UDS) group, says the F910 is “offering 20 percent more density per RU compared to the earlier released F710.”

The F910 is essentially the F900 upgraded from Cascade Lake to Sapphire Rapids CPUs and from Gen 3 PCIe to the Gen 5 bus. It also requires OneFS v9.8 compared to the F210 and F710’s OneFS v9.7.

Dell PowerScale F910 slide

The F910 is available on-premises with its OneFS v9.8 OS being available in the public cloud as APEX File Storage (AWS and Azure). Dell says the F910 has 127 percent more streaming performance than the F900 and is up to six times faster than the Azure NetApp Files offering. It is, Dell says, the first Ethernet storage system for Nvidia’s DGX SuperPOD.

Wilson blogs: “It accelerates the model checkpointing and training phases of the AI pipeline and keeps GPUs utilized with up to 300 PBs of storage per cluster.” He adds it: “controls storage costs and optimizes storage utilization by delivering up to 2x performance per watt over the previous generation,” meaning the F900 running OneFS 9.5. 

OneFS 9.8 provides RDMA for NFS v4.1, APEX File Storage for Azure, and source-based routing for IPv6 networks. The PowerScale OS is claimed to protect AI data against poisoning and also against model inversion, in which an attacker trains their own machine learning model on the outputs of the target model and so can predict the target model’s input data from its outputs. This is akin to reverse engineering using a kind of AI model digital twin. A Defense.AI blog can tell you more. How OneFS provides a defense against model inversion is not disclosed.
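For readers unfamiliar with the attack, the mechanism can be shown with a deliberately tiny toy in scikit-learn: a one-dimensional target model is probed by an attacker who then fits an inverse mapping from outputs back to inputs. This says nothing about how OneFS actually defends against it.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Stand-in "target" model, trained by its owner on private one-dimensional data.
x_private = rng.uniform(0, 10, size=(200, 1))
y_private = 3.0 * x_private[:, 0] + rng.normal(0, 0.1, 200)
target = LinearRegression().fit(x_private, y_private)

# The attacker has only query access: they probe the target with inputs they choose...
x_probe = rng.uniform(0, 10, size=(200, 1))
y_probe = target.predict(x_probe)

# ...and train an "inverse" model mapping observed outputs back to inputs.
inverse = LinearRegression().fit(y_probe.reshape(-1, 1), x_probe[:, 0])

# Seeing an output of 15, the attacker infers the input that produced it (about 5).
print(inverse.predict(np.array([[15.0]])))
```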

Varun Chhabra, Dell’s SVP for ISG Marketing, said in a briefing: “We’re excited to announce Project Lightning which will deliver a parallel file system for unstructured data in PowerScale. Project Lightning will bring extreme performance and unparalleled efficiency with near line rate efficiency – 97 percent network utilisation and the ability to saturate 1000s of data hungry GPUs.”

“Lightning will deliver up to 20x greater performance than traditional all-flash, scale-out NAS vendors, making PowerScale the perfect platform for the most advanced AI workloads as well.”

Dell’s Project Lightning has a history. Back in 2010, this project was about PCIe/flash-based server cache technology. It has progressed to enable the PowerScale cluster nodes to perform I/O in parallel. Dell has not revealed any details of how the F910’s software has changed to add parallel file system access. The OneFS 9.8 release notes, for example, do not mention parallel access.

PowerScale model characteristics.

We are not told whether the parallel file system support extends to the other all-flash PowerScale products. Dell has been asked about these points.

Chhabra added some networking points: “GPUs are getting larger and more demanding. So networking also has to keep up with the amount of data flowing from GPU to GPU, and from server to storage. Networking is massive. We’ve partnered therefore with Broadcom to have some really big announcements to help customers with their AI network fabric to make sure that they’re getting the maximum performance out of their infrastructure. We have a comprehensive portfolio of Ethernet-based NICs, switches, and networking fabric, all of which we’re making advancements on. Starting with a brand new PowerSwitch that’s based on Broadcom Tomahawk 5, which will support 400 G and 500 G switching.”

Wilson said: “We will be announcing further enhancements coming up in the second half of this year.” These are:

  • 61TB QLC drives that will double storage capacity and data center density to accommodate large data sets required for training complex AI models.
  • Options for 200GbE Ethernet and HDR 200G InfiniBand connectivity, via Nvidia Spectrum-4 and Quantum QM8790 switches, for faster data access and even more seamless cluster scaling.

The PowerScale F910 will be available globally starting May 21, 2024. You can find more information on Dell’s AI-optimized PowerScale nodes on the spec sheet here and on its PowerScale website.

A Dell spokesperson told us: “The new parallel file system will be available at a later date, we’re not disclosing availability today.”

PowerScale market position

Dell’s parallel filesystem IO feat positions PowerScale as a competitor to Lustre, IBM’s Spectrum Scale, VAST Data, WEKA, and other parallel access file system storage players. It instantly upgrades PowerScale to be a serious contender as storage for AI model training, as all the fastest Nvidia GPUDirect-qualified file systems are parallel, not sequential, in nature.

On February 22, Michael Dell tweeted: “A GPU from @nvidia will often sit idle if the storage can’t feed it the data fast enough. This is why we created PowerScale, the fastest AI storage in the world.” That comment did not stack up against GPUDirect supplier stats, which showed the then sequential IO PowerScale as the laggard compared to parallel systems from DDN, Huawei, IBM, NetApp with BeeGFS, VAST, and WEKA.

Nvidia GPUDirect bandwidth

Now it should be a different story, and we look forward to seeing newer PowerScale GPUDirect performance numbers.

By adopting parallel access, PowerScale is now differentiated from NetApp, whose ONTAP file system offering is scale-out and not parallel in nature, and also from Qumulo for the same reason.

Nasuni: Copilot has won the Windows Gen AI chatbot race

Analysis: Nasuni execs reveal that they think Microsoft has already effectively won the Gen AI race for Windows users, closing the door on storage companies building their own vectorization facilities to make RAG content available to their own Gen AI chatbots. Nasuni will use a co-pilot in its AIOps facilities.

I talked with three Nasuni execs at a Nasuni user group meeting in London, UK: chief evangelist Russ Kennedy; Jim Liddle, chief innovation officer for data intelligence and AI; and chief marketing officer Asim Zaheer. The background is that Nasuni’s File Data Platform provides file access to distributed users from a central, synchronized, object storage-based cloud repository with added services including ransomware detection and recovery.

Nasuni wants to use AI to help with automatically analyzing, indexing, and tagging file data. In April it announced that customers could integrate their data stores and workflows with customized Microsoft Copilot assistants.

But would it go further? I suggested that, with large customer file data estates held in the Nasuni cloud, it could build its own Copilot chatbot facility to analyse them for responses to customer user inquiries and requests. It could also vectorize this stored data, and use its tagging facility to keep track of data which had been vectorized and data which had not.

A NetApp exec has suggested that such vectorization and indexing could be carried out at the storage layer in the IT stack.

Liddle disagreed with these ideas. In his view, developing a smart chatbot, like GPT-4, costs millions of dollars. It would be hugely expensive and, even if Nasuni were to develop its own chatbot, customers may well prefer not to use it. He said, and Kennedy and Zaheer agreed, that Nasuni customers getting aboard the Gen AI train were using Microsoft’s Copilot because it’s already working across Microsoft 365 components, Windows 10 and 11, and Outlook. 

“Customers will only want to use one Copilot,” Liddle said, and not have to change to a separate one for different system software environments. They’ll want a single Gen AI chatbot lens through which to look into their data.

Copilot uses Microsoft’s Semantic Index. This, Microsoft says, “sits on top of the Microsoft Graph, which interprets user queries to produce contextually relevant responses that help you to be more productive. It allows organizations to search through billions of vectors (mathematical representations of features or attributes) and return related results.”

These Nasuni execs believe customers want a single Gen AI co-pilot facility, and that in Windows-land Microsoft’s Copilot and Semantic Index are already effectively in-place and incumbent. 

My thinking is that several things follow from this. First, Nasuni will not build its own Copilot-like chatbot. That game, for its Windows users, has already been won by Microsoft.

Second, Nasuni will not build its own vectorization facility and vector data store into its File Data Platform. Microsoft’s Copilot uses vectors in the Semantic Index and wouldn’t understand vectors provided by Nasuni from its own vector database. Kennedy observed that there is no industry-standard vector format.

Again, the vectorization and indexing game for Windows users has already been won – by Microsoft Copilot and its Semantic Index. 

Nasuni’s – and its customers’ – best interests are served by making Nasuni-held data available to Copilot and Semantic Index.

A third thought occurred from this as well. Nasuni has customers who use Nutanix HCI and not Microsoft’s Hyper-V server virtualization. It is most unlikely Nutanix will build its own chatbot, or its own vectorizing and vector storage facilities, for the same reasons as Nasuni. That means Nutanix will have to support external chatbots – and that’s what it’s doing with its GPT-in-a-Box offering.

Liddle said Nasuni will use Gen AI large language models to refine and optimize its own infrastructure for its customers. It tracks every I/O, not the I/O content, and this provides the base data for an LLM to optimize, for example, data placement across a customer’s File Data Platform deployment sites, and further optimize the overall efficiency and cost of that multi-site deployment.

All of the above leads me to think that running vectorization and vector indexing in an external storage filer or SAN – or object system – is a misplaced idea in Windows environments, and will not happen.