
Cerabyte demos ceramic-coated glass storage system

Startup Cerabyte, which specializes in ceramic-coated glass storage, has built a demo workflow system and is looking for funding and partners to take the concept further.

The German company has technology to create microscopic holes in a layer of a ceramic substance coating a square glass carrier. Holes are created with femtosecond laser pulses and laid down in a format similar to quick response (QR) codes with sequences of holes and no-holes representing binary zeroes and ones.
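To make the scheme concrete, here is a minimal sketch – our illustration, not Cerabyte's published format, which includes error correction and addressing we omit – of how a byte stream maps onto a two-dimensional grid of hole/no-hole cells:

```python
# Illustrative sketch only: maps bytes onto a 2D hole/no-hole grid,
# the way a QR-like ceramic data block might be laid out.
# Cerabyte's actual block format, ECC, and addressing are not public.

def bytes_to_grid(data: bytes, width: int = 16) -> list[list[int]]:
    """Turn a byte string into rows of bits: 1 = fire the laser (hole), 0 = leave intact."""
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    bits += [0] * (-len(bits) % width)  # pad the final row
    return [bits[i:i + width] for i in range(0, len(bits), width)]

def grid_to_bytes(grid: list[list[int]]) -> bytes:
    """Decode the grid back into bytes (the microscope camera's job, in essence)."""
    bits = [bit for row in grid for bit in row]
    return bytes(
        int("".join(map(str, bits[i:i + 8])), 2)
        for i in range(0, len(bits) - 7, 8)
    )

grid = bytes_to_grid(b"Cerabyte")
assert grid_to_bytes(grid).startswith(b"Cerabyte")  # round-trips cleanly
```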

Cerabyte glass carrier
QR pattern data blocks written in ceramic layer on Cerabyte’s glass carrier

The glass carrier can be coated on both sides to increase data capacity, and several can be stacked inside a data cartridge. Cerabyte envisages a storage system composed of library racks and a write-read rack, with robotics used to transport the cartridges from the library racks to the write-read rack.

It could be likened to a tape library with the tape drives replaced by glass cartridge writer/reader devices, or an optical storage jukebox.

The demo Cerabyte system uses a datacenter rack form factor, and is shown in a video. The commentary explains that once a cartridge is brought to the write-read device, a carrier is extracted and placed on a platform or stage. This can move forward or backward underneath laser and reading camera components. These are focused on the glass carrier using mirrors and lenses.

Cerabyte laser writing
Video screen grab showing laser writing

The video commentary explains: “During the forward movement of the stage, a line of QR patterns is written, which are read back and verified during the backwards movement by a microscope camera. When a data carrier is fully written, it is returned into the cartridge which is then moved back to library.”

“The read workflow is similar whereby only the microscope camera is engaged and data is read in both directions of the stage movement. Error Correction and file allocation will work similar to other mainstream data storage technologies.”

We can readily understand that write speed is governed by two stage movements – forward to write a line of patterns, backward to verify it – while pure reads use both directions of the stage's travel.

Write and read speeds will depend upon the QR pattern density. Cerabyte has mentioned 1 PB cartridges as a potential initial capacity, with development through 10 PB and out to 100 PB by using smaller bit sizes (holes) in the ceramic recording medium. It has discussed a progression from 100 nm down to 3 nm bit sizes.
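A quick scaling check, on the naive assumption that cartridge capacity grows with the inverse square of bit size (real gains would be trimmed by write margins, error correction, and formatting overhead):

```python
# Capacity scaling if areal density goes as 1 / bit_size^2 - our assumption.
base_capacity_pb = 1.0   # mooted initial cartridge capacity at 100 nm bits
base_bit_nm = 100

for bit_nm in (100, 30, 10, 3):
    scale = (base_bit_nm / bit_nm) ** 2
    print(f"{bit_nm:>3} nm bits -> ~{base_capacity_pb * scale:,.0f} PB per cartridge")

# 100 nm ->     ~1 PB
#  30 nm ->    ~11 PB
#  10 nm ->   ~100 PB
#   3 nm -> ~1,111 PB
```

On this arithmetic the 100 PB target is reached at around 10 nm bits, so 3 nm bits would leave considerable headroom.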

This ceramic medium has an initial thickness of 50 to 100 atoms and is said to be resistant to fire, cold, water, radiation, electricity, and other effects that can destroy data on tapes, disk drives, and SSDs. The glass carriers effectively last forever – we’re told – and, once in their cartridges, need no electrical energy for cooling or data maintenance. Moreover, unlike archival tape cartridges that require re-silvering, these carriers do not need any form of refreshing.

Cerabyte says its demo system was manufactured using commercial off-the-shelf components. Interested potential manufacturing partners, enterprises, research institutions, and institutional investors should contact Cerabyte by email at office@cerabyte.com.

Comment

The obvious comparisons we make with Cerabyte’s technology are Blu-ray optical disks and holographic storage technologies. Blu-ray writing creates pits in a recording medium which can be multi-layered. The spot size for the blue laser used is 580 nm with a 150 nm pit size. The pits are laid down in circular tracks on disks, whereas Cerabyte uses QR blocks on square glass carriers and has a smaller pit (hole) size of 100 nm. Holographic storage technology has never resulted in a commercial product due to manufacturing difficulties around the highly precise mirrors and lenses required.

Neither the data capacity nor the physical size of Cerabyte's demonstration glass carriers has been disclosed. The video suggests they are about 2-3 inches square:

Cerabyte glass carrier

Microsoft has a Project Silica research effort looking at a glass-based archival storage medium and system for its Azure public cloud.

LucidLink secures $75M to boost remote collaboration

Remote large file collaboration startup LucidLink has raised $75 million to engineer software for distributed creative professionals to work faster together.

Update. CTERA CTO comments added; 27 November 2023.

LucidLink's FileSpaces product streams file sections directly from a central cloud repository rather than synchronizing and sharing files between remote worker locations – the sync 'n' share approach it says characterizes the services offered by CTERA, Egnyte, Nasuni, and Panzura. LucidLink says its software provides fast access to large files for customers such as Adobe, A&E Networks, Whirlpool, Shopify, Buzzfeed, Spotify, various Hollywood studios, major broadcasters, digital ad agencies, architectural firms, and gaming companies.

Peter Thompson, LucidLink
Peter Thompson

Peter Thompson, LucidLink co-founder and CEO, said: “Legacy collaboration and storage solutions are not designed for this new hybrid workplace reality, and LucidLink is becoming the go-to solution for companies looking to future-proof their businesses. Our customers are reaching 5x in productivity gains on previously impossible workflows, and we are excited to see how they continue to unlock new possibilities as we help to accelerate the future of collaborative work.”

LucidLink was founded in 2016 and the latest C-round funding follows a $20 million B-round last year, taking the total raised to about $115 million. The round was led by growth stage investor Brighton Park Capital. Several existing investors – including Headline, Baseline Ventures, and Adobe Ventures – also participated. The new capital will be put into product and engineering development, customer acquisition efforts, and expansion into new verticals and geographies.

Co-founder and CTO George Dochev said: “With this Series C investment, LucidLink will accelerate its most ambitious product updates in the Company’s history to expand our technology leadership position, open up new customer use cases, and create more personalized product experiences that enable creative professionals to work more efficiently and effectively.”

The startup has grown its annual recurring revenue (ARR) by nearly 5x and the number of users on its platform by more than 4x in the past two years. LucidLink says that, for creative industries that work with complex files and applications, real-time collaboration across a hybrid and remote employee base has become a major concern. It cites as an example that three-quarters of creative collaboration now happens remotely, with the average creative review process taking eight days and more than three versions to receive sign-off, according to a Filestage report.

LucidLink says that IDC predicts that investment in cloud infrastructure and services will grow to $1.2 trillion by 2027 as the need for businesses to prepare for a hybrid workforce grows more urgent.

It is uncertain at what point file collaboration services such as those from CTERA, Egnyte, Nasuni, and Panzura no longer meet customer needs and LucidLink's technology becomes the more suitable option. All LucidLink says is that its creative professional customers get faster access to large files – but what do "faster" and "large" mean?

GigaOm analysts said in September 2021: “LucidLink focuses on globally, instantly accessible data with one particularity – data is streamed as it is read. Streaming makes the solution especially well suited for industries and use cases that rely on remote access to massive, multi-terabyte files, such as the media and entertainment industry. LucidLink’s capabilities in this area are unmatched.”

A blogpost by Thompson and Dochev discusses the new funding and what it means.

Update

CTERA CTO Aron Brand suggests LucidLink is incorrect in its claim to be the only player in the market to have this streaming support technology. “CTERA’s streaming technology, especially the dedicated integrations with the Adobe suite including CTERA for Adobe Premiere, Illustrator, Photoshop, and InDesign,” he says, “has been a game-changer for its clients.” There are video demonstrations here.

Brand tells us some of CTERA's largest clients, such as WPP and Publicis, benefit from CTERA's offerings. Its edge filers provide more than an agent-based solution (which is what CTERA believes LucidLink offers), with features such as up to 256 TB of fast local cache – useful for 8K video production – "enabling professional level of operation that is simply impossible with agent-based solutions." For WPP, CTERA Edge Filers delivered gains in productivity, collaboration, and cost savings.

Additionally, the CTERA Mac Assist enhances the user experience of Mac designers interacting with the global filesystem.  

Arcitecta upgrades Mediaflux to add streamlined data management

Arcitecta has upgraded its Mediaflux software to become a Universal Data System orchestrating, managing, and storing geo-distributed unstructured data across its entire lifecycle.

Mediaflux is an unstructured data silo-aggregating software abstraction layer with a single namespace that can store files and objects in on-premises SSDs, disk or tape, or the public cloud, with a database holding compressed metadata. There is a fast Mediaflux data mover to put files in the appropriate storage tier.

Mark Nossokoff, Cloud & Storage lead analyst at Hyperion Research, said in a statement: “By working to converge data management, orchestration, and storage onto a single unified platform, Arcitecta is aiming to boost users’ data accessibility, manageability, and scalability. And with pricing based on concurrent users rather than on capacity-based data volumes being managed, and eliminating the need for third-party software and file systems, Arcitecta is also seeking to significantly lower their customers’ costs.”

Until now, enterprise and most organizational files have been stored in clustered systems using dual-controller nodes, with objects stored in scale-out systems. High-performance computing (HPC) systems have used parallel file systems such as Lustre and Storage Scale (GPFS). As file and object storage capacity needs have grown from terabytes to petabytes and on to exabytes, a need arose for lifecycle management software to analyze an organization's unstructured data estate and move less-accessed data from fast, expensive storage to cheaper but slower tiers – from SSD to disk, then on to tape or object storage, on-premises or in the public clouds. Komprise is an active supplier in this area.

Public cloud object stores have been used as a basis on which to provide file collaboration between remote users. Suppliers such as CTERA, Egnyte, Nasuni and Panzura are specialists in this sphere.

Another need is now growing: the ability to navigate an unstructured data estate composed of billions of files and objects that can be global in scope and used by distributed and remote offices. This has given rise to data orchestrators, predominantly Hammerspace, which manage sophisticated metadata repositories and enable access to globally distributed files as if they were local.

Now along comes Arcitecta saying you don’t need these extra software layers. Its Mediaflux metadata-driven software can do it all. 

Arcitecta graphic
Arcitecta graphic modified for readability

It claims its Mediaflux Universal Data System can provide:

  • Converged data management, orchestration, and storage within a single platform, allowing customers to access, manage, and use data assets more effectively. 
  • Management of every aspect of the data lifecycle, both on-premises and in the public cloud, with globally distributed access – cataloging, transformation, dissemination, preservation, and eventual storage – streamlining processes as data moves through its lifecycle.
  • Multi-protocol access and support with NFS, SMB, S3, SFTP, and DICOM, providing flexible access and interoperability. Global distributed access ensures data can be retrieved from any location, facilitating international collaboration among data-intensive organizations such as research facilities, universities, entertainment studios, and government institutions.
  • Scalability, as Mediaflux licensing is based on the number of concurrent users and decoupled from the volume of data stored, so organizations can affordably scale to hundreds of petabytes, accommodating hundreds of billions of files without the financial strain typically associated with such vast capacities.
  • Clustered storage capabilities without the need for third-party software; whether a business uses block storage from one vendor or several, Mediaflux can integrate and manage all the data and storage in the environment.
  • Cost savings by eliminating the need for third-party software, storage fabrics, and volume-based pricing. Mediaflux's intelligent data placement feature optimizes storage efficiency by automatically tiering data based on usage and access patterns.
  • Multi-vendor storage support, allowing customers to choose best-of-breed hardware; the storage underlying the Mediaflux Universal Data System can come from any vendor or combination of vendors.
  • Fast file transfer via integrated high-speed WAN transfer, with throughput of up to 95 percent of the available bandwidth on networks of 100 GbE or more (a quick conversion follows this list). Metadata and adaptive compression capabilities increase speed by eliminating redundant file and data transfers for optimized data movement. 
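Taking the WAN transfer claim at face value, 95 percent of a 100 GbE link works out to just under 12 GB/sec of file payload:

```python
# Effective file-transfer rate implied by "95 percent of available bandwidth".
link_gbps = 100        # 100 GbE link, in gigabits per second
efficiency = 0.95      # Arcitecta's claimed utilization
payload_gbytes = link_gbps * efficiency / 8
print(f"~{payload_gbytes:.2f} GB/sec")  # ~11.88 GB/sec
```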

Arcitecta also claims that, despite having fast file transfer, Mediaflux can take compute operations directly to the data without moving the data over a network. With a direct path approach, users can collaborate more effectively with a system that distributes compute algorithms to where the data resides. This approach is important since transmitting data over large distances is expensive and inefficient.

Hammerspace also recognizes that transmitting data over large distances is expensive and time-consuming and has a partnership with Vcinity to deal with the problem.

Jason Lohrey, Arcitecta
Jason Lohrey

The Mediaflux Universal Data System is available immediately. There is a Solution Brief here and some webpages on the Mediaflux Universal Data System here if you would like to find out more.

Jason Lohrey, CEO and Arcitecta founder, commented: “Mediaflux Universal Data System is the culmination of Arcitecta’s vision for the future of data management. By merging world-class data management, orchestration, multi-protocol access, and storage into one cohesive platform, we aim to set a new industry standard that moves beyond data storage and makes data more accessible, manageable and valuable than ever before.”

Accelerating High-Bandwidth Memory to light speed

Accelerated processors like GPUs could get faster memory access by using light-based data transfer and by directly mounting High Bandwidth Memory (HBM) on a processor die.

HBM came into being to provide more memory bandwidth to GPUs and other processors than the standard x86 socket interface could support. But GPUs are getting more powerful and need data accessed from memory even faster in order to shorten application processing times. Large Language Models (LLMs), for example, can involve repeated access to billions if not trillions of parameters in machine learning training runs that take hours or days to complete.
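For a sense of the scale involved, here is a rough, illustrative calculation – our numbers, not any vendor's – of the time needed just to stream one copy of a model's FP16 weights through a single HBM3e-class stack. Training touches the weights many times over, multiplying this cost:

```python
# Illustrative only: time to read one full copy of model weights at a given bandwidth.
def sweep_time_s(params: float, bytes_per_param: int, bandwidth_tb_s: float) -> float:
    return params * bytes_per_param / (bandwidth_tb_s * 1e12)

for params in (1e9, 1e12):  # one billion and one trillion parameters
    t = sweep_time_s(params, 2, 1.2)  # FP16 weights, 1.2 TB/sec HBM3e-class stack
    print(f"{params:.0e} params: {t * 1e3:,.1f} ms per full weight sweep")

# 1e+09 params: 1.7 ms; 1e+12 params: 1,666.7 ms
```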

Current HBM follows a fairly standard design: a stack of HBM memory dies is connected, via microbumps linked to Through Silicon Vias (TSVs, or connecting holes) in the stack, to an interposer placed on a base package layer. The interposer also carries the processor and provides the HBM-to-processor connectivity.

HBM suppliers and the HBM standards body are looking at ways to speed up HBM-to-processor access by using technologies such as photonics, or by mounting the HBM directly on the processor die. The suppliers are setting the HBM bandwidth and capacity pace – seemingly faster than the JEDEC standards body can keep up with.

The current standard is called HBM3e, and there are mooted HBM4 and HBM4e follow-on standards.

Samsung

Samsung is investigating the use of photonics in the interposer, with photons flowing across the links faster than bits encoded as electrons, and using less power. The photonic link could operate at femtosecond speeds. That means a 10⁻¹⁵ unit of time – one quadrillionth (one millionth of one billionth) of a second. The Korean behemoth’s Advanced Packaging Team, featuring principal engineer Yan Li, presented on this topic at the recent Open Compute Project (OCP) summit event. 

Samsung presentation at OCP Global Summit 2023. See the slide deck here.

An alternative to using a photonics interposer is to link the HBM stacks more directly to the processor ("logic" in the Samsung diagram above). This will involve careful thermal management to prevent overheating. It could also allow HBM stacks to be upgraded over time to provide more capacity, but an industry standard covering that area would be needed for this to be possible.

SK hynix

SK hynix is also working on a direct HBM-logic connection concept, according to a report in Korean media outlet JoongAng. This notion has the GPU die or chip manufactured with the HBM chip in a mixed-use semiconductor. The chip shop views this as an HBM4 technology and is talking with Nvidia and other logic semiconductor suppliers. The idea involves the memory and logic manufacturers co-designing the chip, which is then built by a fab operator such as TSMC.

SK hynix interposer (top) and combined HBM+GPU chip (bottom) HBM concepts.

This is somewhat similar to the Processing-in-Memory (PIM) idea and, unless safeguarded by industry standards, will be proprietary, with supplier lock-in prospects.

Together, Samsung and SK hynix account for more than 90 percent of the global HBM market.

Micron

Tom’s Hardware reports that Micron – the rest of the market – has HBM4 and HBM4e activities. It is currently making HBM3e gen-2 memory with 24GB chips using an 8-high stack. Micron’s 12-high stack with 36GB capacity will begin sampling in the first quarter of 2024. It is working with semiconductor foundry operator TSMC to get its gen-2 HBM3e used in AI and HPC design applications.

Micron says its current product is power-efficient: for an installation of ten million GPUs, every five watts of power savings per HBM cube (stack) is estimated to save operational expenses of up to $550 million over five years compared with alternative HBM products. These strike us as somewhat fanciful numbers.
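Working the claim backwards shows why we say that (the one-cube-per-GPU and 24/7 duty-cycle assumptions are ours):

```python
# Back-of-envelope check on Micron's savings claim; assumptions are ours.
gpus = 10_000_000
watts_saved_per_gpu = 5           # assumes one HBM cube per GPU
hours = 24 * 365 * 5              # five years of 24/7 operation
kwh_saved = gpus * watts_saved_per_gpu * hours / 1000
implied_price = 550_000_000 / kwh_saved
print(f"{kwh_saved:,.0f} kWh saved; implied ${implied_price:.2f}/kWh")
# ~2.19 billion kWh; implied ~$0.25/kWh - high unless cooling overhead (PUE) is folded in
```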

The HBM4 standard should arrive by 2026 with a double-width interface of 2,048 bits compared to HBM3e's 1,024 bits, and a per-stack bandwidth of more than 1.5 TB/sec. HBM3e products operate between 1.15 and 1.2 TB/sec. Micron expects 36 GB 12-high stack HBM4 capacities as well as 48 GB 16-high stacks.
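Per-stack bandwidth is essentially interface width multiplied by per-pin data rate. A sketch, assuming roughly 9.2 Gbps pins for HBM3e; the HBM4 pin rate is not settled, so the 6.4 Gbps figure below is purely our assumption:

```python
# Per-stack bandwidth = interface width (bits) x per-pin rate (Gbps) / 8 bits-per-byte.
def stack_bandwidth_tb_s(width_bits: int, pin_gbps: float) -> float:
    return width_bits * pin_gbps / 8 / 1000

print(f"HBM3e: {stack_bandwidth_tb_s(1024, 9.2):.2f} TB/sec")  # ~1.18 TB/sec
print(f"HBM4:  {stack_bandwidth_tb_s(2048, 6.4):.2f} TB/sec")  # ~1.64 TB/sec, assumed pin rate
```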

The table below adds Micron’s HBM4 and follow-on HBM4e (extended) numbers to the existing HBM – HBM3e numbers that we have.

B&F table. HBM4 and HBM4e entries are in italics because they are not official JEDEC standards.

Comment

Micron, unlike Samsung and SK hynix, is not talking about combining HBM and logic in a single die. Those two will be telling the GPU suppliers – AMD, Intel, and Nvidia – that they can get faster memory access with a combined HBM-GPU chip, while the GPU suppliers will be well aware of the proprietary lock-in and single-source dangers.

As ML training models get larger and training times lengthen, the pressure to cut run times by speeding up memory access and increasing per-GPU memory capacity will grow in lockstep. Throwing away the competitive supply advantages of standardized DRAM to get locked-in combined HBM-GPU chip designs – albeit with better speed and capacity – may not be the right way forward.

Dell places multi-cloud storage navigation in its APEX

Dell is providing automated deployment and monitoring of its public cloud block storage with the APEX Navigator for Multicloud Storage, and has file storage support coming.

APEX is a set of services from Dell under which it supplies its compute, storage, and networking gear through a public cloud-like subscription model. Navigator is a software agent – an automation engine – for setting up things like a VMware cluster or multi-cloud storage, which can then be deployed and managed automatically. 

Magi Kapoor, director, Product Management at Dell Technologies, said in a blog that B&F saw pre-publication: “This is just the beginning. We’re also previewing the integration of Dell APEX Navigator with Dell APEX File Storage for AWS (integration availability expected in the first half of 2024). Dell is committed to expanding our support for more storage offerings, more public cloud providers and more regions.”

APEX Navigator for Multicloud Storage is a SaaS tool providing centralized management of Dell storage software across multiple public clouds. It enables ITOps and storage admins to deploy, configure, and manage Dell storage in public clouds, with monitoring and data mobility across on-premises and public clouds.

Dell says deployment is a four-stage process with simple configuration and automated provisioning of underlying public cloud resources, and automated deployment of its storage software.

The multicloud storage Navigator has a zero trust-based security approach with role-based access control (RBAC), single sign-on (SSO) and federated identity, featuring control over roles, permissions, groups, certificates, and keys. It provides APIs so it can integrate with other automation tools such as Terraform. The software uses Dell’s CloudIQ to monitor storage health, with a traffic light status display.

APEX Navigator for Multicloud Storage health metrics screenshot

Data placement within the on-premises and multi-cloud environment can be adjusted as needs demand.

APEX Navigator for MultiCloud Storage is initially available with APEX Block Storage for AWS with a 90-day evaluation. It will be available for quoting on November 30 and will be generally available in December in the US. Dell will show it off at the AWS re:Invent conference starting on November 27 in Las Vegas.

Dell AWS file storage

Dell APEX File Storage for AWS, based on Dell’s PowerScale scale-out OneFS software, has a Dec 13 update coming, with:

  • Increased storage capacity within a single namespace, from 1 PiB (1.126 PB) up to 1.6 PiB (1.8 PB) of hot data per cluster; a quick unit conversion follows this list. This expanded capacity supports workloads such as AI and analytics, cloud burst, file sharing, disaster recovery, and more.
  • Expanded geo availability with additional AWS EC2 compute instance types available in more AWS regions.
  • Added support for the Hadoop Distributed File System (HDFS) protocol, enabling the expansion of analytics use cases in AWS, plus quality of service (QoS) support.
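The PiB/PB pairs above follow from the binary-versus-decimal unit definitions:

```python
# Binary (PiB) to decimal (PB) conversion behind the capacity figures above.
PIB = 2 ** 50   # bytes in a pebibyte
PB = 10 ** 15   # bytes in a petabyte
print(f"1 PiB   = {PIB / PB:.3f} PB")        # 1.126 PB
print(f"1.6 PiB = {1.6 * PIB / PB:.3f} PB")  # 1.801 PB
```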

The main competition for APEX File Storage for AWS comes from NetApp and Qumulo, with APEX Navigator providing functionality that overlaps with NetApp's BlueXP and the Qumulo One control plane. HPE, which OEMs VAST Data software, has multi-cloud file storage, and HPE GreenLake will no doubt face up to APEX Navigator in the multicloud file storage management area as well.

Storage news ticker – November 20

Niraj Tolia

Dr. Niraj Tolia, Alcion's CEO and co-founder, was awarded the Distinguished Alumni Award by Carnegie Mellon's Parallel Data Lab (PDL). The PDL, founded in 1992, looks at new data system architectures, technologies, and design methodologies. The award, previously given only five times in the PDL's history, recognizes Dr. Tolia's seminal work and industry impact in storage and distributed systems, with his academic work having been cited more than 3,900 times. His industry impact can be seen in his work as CEO and co-founder at Kasten (acquired by Veeam Software), which provided Kubernetes data protection, and his work on object storage-based file systems at Maginatics (acquired by Dell EMC).

Cloud and backup storage services supplier Backblaze announced another edition of its regular disk drive statistics blog looking at annualized failure rates (AFR). There were no surprises. But 354 drives exceeded their rated maximum temperature in Q3, with 2 failing and 352 continuing, apparently OK. It said: “Beginning in Q4, we will remove the 352 drives from the regular Drive Stats AFR calculations and create a separate cohort of drives to track that we’ll name Hot Drives. This will allow us to track the drives which exceeded their maximum temperature and compare their failure rates to those drives which operated within the manufacturer’s specifications. While there are a limited number of drives in the Hot Drives cohort, it could give us some insight into whether drives being exposed to high temperatures could cause a drive to fail more often.” It will also monitor drive AFRs by datacenter, cluster, and storage pod (overall enclosure).

… 

Cohesity is working with Microsoft to deliver improved backup and recovery performance for Microsoft 365 environments via the integration of native APIs of Microsoft 365 Backup Storage with Cohesity DataProtect.

Commvault has announced its participation in the Microsoft Security Copilot Partner Private Preview.  It is working with Microsoft product teams to help shape Security Copilot product development in several ways, including: 

  • Validation and refinement of new and upcoming scenarios 
  • Providing feedback on product development and operations 
  • Validation and feedback of APIs to assist with Security Copilot extensibility 

There’s more information in a Microsoft blog.

Cloudera announced support for Nvidia multigenerational GPU capabilities for data engineering, ML and AI in both public and private clouds to help customers build applications for AI. This will involve accelerating AI and ML workloads in Cloudera on Public Cloud and on-premises using Nvidia GPUs and Cloudera Machine Learning, and accelerating data pipelines with GPUs in Cloudera Private Cloud using Cloudera Data Engineering.

CrashPlan announced new packages for small businesses and general consumers. They provide tiered offerings and include automatic backup of every file version across Windows, macOS, and Linux devices. There are two flavors: 

  • CrashPlan Essential: A cost-effective offering that includes fully encrypted backup for individual users. For $2.99 a month, users can protect up to 200GB of data with automatic cloud backups, encryption, unlimited versioning and file restore workflows, with the flexibility to add storage for $1 for every 100GB as needs grow.
  • CrashPlan Professional: An endpoint backup package with unlimited versioning, and per-user encryption for small businesses and creatives. Users get the same backup functionality, with unlimited cloud storage capacity and file versioning for $8 a month. It supports multiple users and adds admin features optimized for small-to-medium businesses including secure service access (SSA).

The unlimited versioning enables faster and more granular file recovery. CrashPlan provides 256-bit AES encryption for data in transit and at rest.

The CXL Consortium announced the release of the Compute Express Link (CXL) 3.1 specification with improved fabric manageability to take CXL beyond the rack and enable disaggregated systems. It features:

  • CXL Fabric Improvements/Extensions – scale-out of CXL fabrics using PBR (Port Based Routing) supporting tree, mesh, ring, star, butterfly and multi-dimensional topologies 
  • Host-to-Host communication with Global Integrated Memory (GIM) concept
  • Trusted-Execution-Environment Security Protocol (TSP) – allows for Virtualization-based Trusted Execution Environments (TEEs) to host Confidential Computing Workloads 
  • Memory Expander Improvements – up to 34-bit of metadata and RAS capability enhancements

Get an evaluation copy of the CXL 3.1 spec here.

DDN and Tel Aviv-based NextSilicon announced an optimized end-to-end compute-to-network-to-storage system for better datacenter I/O performance. DDN's AI400NVX2 storage appliance is connected simultaneously and directly to high-speed InfiniBand and Ethernet networks and to NextSilicon's Maverick processor. The setup uses RDMA to bypass CPU bottlenecks and pass data directly from the DDN storage to the accelerated NextSilicon processing unit – in other words, GPUDirect-like access to a non-Nvidia accelerator. The NextSilicon software suite adapts the Maverick processor into a workload-specific ASIC at runtime, and NextSilicon is developing software algorithms that reconfigure its hardware, improving performance based on runtime telemetry. The hardware has distributed HBM memory access.

NextSilicon Maverick board with accelerator chip

DDN says it and Sandia National Laboratories have partnered over the past 3.5-plus years to design a next-generation parallel storage system to meet the needs of current and future scientific workloads running on large-scale High Performance Computing (HPC) systems. The tie-up has resulted in extensions for DDN's new Infinia product, which targets HPC storage workloads and, DDN says, will bring the world's fastest object store to bear on HPC's hardest storage problems. Matthew Curry, Principal Member of Technical Staff at Sandia, said: “We have found that by allowing DDN and a diverse representation of subject matter experts from DOE laboratories to jointly participate in design activities, Infinia’s architecture can be broadened to handle the gamut of DOE workloads and beyond, contributing to mission success for DOE and a wider market for DDN.”

Dropbox and Nvidia announced a collaboration to improve productivity for Dropbox customers by using AI. Dropbox plans to use Nvidia’s AI Foundation Models, AI Enterprise software and GPU-accelerated computing to enhance Dropbox Dash, universal search that connects apps, tools, and content in a single search bar; and Dropbox AI, a tool that allows customers to ask questions and get summaries on large files across their entire Dropbox estate. “The arc of AI is expanding from cloud services into enterprise generative AI assistants that will drive the most significant transition in the computing industry to date,” said Jensen Huang, founder and CEO of Nvidia. “Together, Nvidia and Dropbox will pave the way for millions of Dropbox customers to accelerate their work with customized generative AI applications.”

Automated data movement pipeline supplier Fivetran announced support for Microsoft OneLake through integration with Microsoft Fabric as a new data lake destination. Fivetran has been named a Microsoft Fabric Interoperability Partner. Together with support for Delta Lake on Azure Data Lake Storage (ADLS) Gen2, also announced today, Fivetran customers now have two Microsoft data lake destinations on which to consolidate their data workloads with any of Fivetran's 400-plus pre-built, fully managed data pipelines. Fivetran claims that, because it automates data extraction, cleansing, conforming, and conversion to Delta Lake format, customers can move faster in developing AI and generative AI projects.

GigaIO has enabled a single SuperNODE x86 server to support 32 GPUs, twice as many as Liqid, using its FabreX dynamic memory fabric over a shared PCIe bus. The GigaIO SuperNODE is shipping now, available directly from Dell, Supermicro, and selected channel partners. FabreX provides the ability to create composable systems that allocate PCIe devices across multiple servers. GigaIO’s SuperNODE system was tested, using Hashcat and Resnet 50, with 32 AMD Instinct MI210 accelerators on a Supermicro 1U server powered by dual 3rd Gen AMD EPYC processors.

It has also announced a SuperDuperNODE supporting up to 64 GPUs.

Huawei’s OceanStor Pacific object storage system ranked highest in five out of seven use cases in the latest Gartner Critical Capabilities for Distributed File Systems and Object Storage report.

Lenovo announced Q2 FY2024 revenues of $14.4 billion, 16 percent lower than a year ago. There was a $273 million profit, 54 percent down on the year. But Lenovo revenues have now increased for two successive quarters and it sees clear signs of recovery across the technology sector.

The segment revenues were:

  • Intelligent Devices Group (PCs) – $11.5 billion, down 16 percent y/y
  • Infrastructure Solutions Group – $2 billion, down 23 percent
  • Solutions and Services Group – $1.9 billion, up 11.8 percent

NetApp has claimed the #1 spot in the SPECstorage Solution 2020 EDA Blended benchmark. NetApp reckons the top position in high-performance data storage for electronic design automation (EDA) affirms the effectiveness of its all-flash NetApp AFF storage systems, which excel in scalability and low latency, meeting the ever-growing demands of EDA workloads. 

It’s hard to see how much use this benchmark has when so few suppliers appear to have submitted results. Also, is it valid to compare NetApp’s AFF A900 ONTAP with Oracle ZFS in SW Builds?

Panmnesia presented its CXL 3.0 All-in-One Framework powered by its proprietary CXL IP. This empowers customers to develop their own CXL 3.0 software/hardware capabilities, eliminating the need to start from scratch. A video shows a demo of this. You can download this document to gain an understanding of the framework.

 …

Rubrik has been named a Leader in the IDC Marketscape for worldwide cyber-recovery vendor assessment. Other players in the Leaders quadrant are Veritas, Cohesity, Druva and Acronis. The second-ranking Major Players section of the chart includes Veeam, Commvault, Dell, Zerto and Quest. A Contenders section has two suppliers mentioned: IBM and Arcserve.

IDC’s Marketscape is a 2D square chart with two axes: Capabilities from low to high on the vertical axis and Strategies (low to high) on the horizontal axis. Suppliers are placed in sections called Leaders, with the highest ratings in capabilities and strategies, Major Players with the next-highest ratings, Contenders with the next highest, and Participants which have the lowest ratings. A supplier’s spot or bubble on the chart increases in size with their revenue.

SK hynix has started supplying global smartphone makers, such as Vivo, with 16 gigabyte (GB) packages of Low Power Double Data Rate 5 Turbo (LPDDR5T). It says this is the fastest mobile DRAM available today, transferring data at 9.6 gigabits per second (Gbps) per pin. The LPDDR5T 16 GB package operates in the ultra-low voltage range of 1.01 to 1.12 V set by the Joint Electron Device Engineering Council (JEDEC), and can process 77 GB of data per second – equivalent to transferring 15 full high-definition (FHD) movies in one second.
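The 77 GB/sec package figure follows from the per-pin rate and the package interface width – 64 bits is the usual LPDDR package width, and our assumption here:

```python
# Package bandwidth = per-pin rate x interface width / 8 bits-per-byte.
pin_gbps = 9.6   # LPDDR5T per-pin data rate
bus_bits = 64    # assumed package interface width
print(f"{pin_gbps * bus_bits / 8:.1f} GB/sec")  # 76.8 GB/sec, i.e. ~77 GB/sec
```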

The SNIA announced its 2023-2024 Board of Directors and Technical Council members. The organization has more than 200 industry-leading organizations, 2,500+ active members, and more than 50,000 IT end users and IT professional members around the world.

Board of Directors – Executive Committee:

  • Chair: Dr. J Metz, AMD
  • Vice Chair: Richelle Ahlvers, Intel Corporation
  • Secretary: Chris Lionetti, Hewlett Packard Enterprise
  • Treasurer: Sue Amarin, Industry Consultant
  • Member: Scott Shadley, Solidigm Technology
  • Chair Emeritus: Wayne Adams, Industry Consultant

Board Members:

  • Peter Corbett, Dell Technologies
  • John Geldman, KIOXIA America  
  • Roger Hathorn, IBM
  • Jonathan Hinkle, Micron Technology
  • Dave Landsman, Western Digital
  • David McIntyre, Samsung Corporation
  • George Pamboris, NetApp

Technical Council co-chairs and members:

  • Co-Chair: Bill Martin, Samsung Corporation
  • Co-Chair: Jason Molgaard, Solidigm Technology
  • Curtis Ballard, Hewlett Packard Enterprise
  • Anthony Constantine, Intel Corporation
  • Dan Hubbard, Micron Technology
  • Shyam Iyer, Dell Technologies
  • Fred Knight, KIOXIA America
  • David Peterson, Broadcom
  • Leah Schoeb, AMD

Hammerspace hammering out AI ref architecture

Data orchestrator Hammerspace has produced an AI reference architecture (RA) and is testing its GPUDirect compliance.

Hammerspace provides a global parallel filesystem enabling data in globally distributed and disparate sites to be located, orchestrated, and accessed as if it were local. Its RA covers a data architecture for Large Language Model (LLM) training and inference within hyperscale environments. GPUDirect is an Nvidia data access protocol enabling NVMe storage to send data to and receive data from Nvidia's GPUs without a host server being involved and delaying the transfer. A Hammerspace front-ended file storage system can now be used to feed data to LLM training systems with hundreds of billions of parameters and tens of thousands of GPUs.

David Flynn, Hammerspace Founder and CEO, said in a statement: “The most powerful AI initiatives will incorporate data from everywhere. A high-performance data environment is critical to the success of initial AI model training. But even more important, it provides the ability to orchestrate the data from multiple sources for continuous learning. Hammerspace has set the gold standard for AI architectures at scale.”

Hammerspace’s software uses parallel NFS. It says a parallel file system architecture is critical for training AI as countless processes or nodes need to access the same data simultaneously. Its orchestration software provides an AI training data pipeline that can deliver the necessary data to very large numbers of GPUs.

The AI RA involves several components:

  • GPUs
  • Client servers
  • DSX (Data Storage eXtension) storage nodes with NVMe SSDs
  • Anvil Metadata service nodes
  • NFS v3 data path between storage nodes, client servers and GPUs
  • pNFS v4.2 control path between metadata servers and client servers

The DSX storage nodes can be bare metal, virtual, or containerized software, and provide parallel, linearly scalable performance from any block storage.

A diagram, based on an actual Hammerspace installation at a customer site, shows how they are related:

The Hammerspace parallel file system client is an NFS4.2 client built into Linux, using Hammerspace’s FlexFiles (Flexible File Layout) software in the Linux distribution. This enables standard Linux client servers to achieve direct, high-performance access to data via Hammerspace’s software. 

The storage nodes can be existing commodity white box Linux servers, Open Compute Project (OCP) hardware, Supermicro servers or NAS filers. Hammerspace says that, by using training data wherever it’s stored, it streamlines AI workloads by minimizing the need to copy and move files into a consolidated new repository. 

At the application level, data is accessed through a standard NFS file interface to ensure direct access to files in the standard format applications are typically designed for. There is no need for sophisticated parallel file systems such as Lustre or Storage Scale (GPFS as was). A second diagram shows the possible data sources to the AI pipeline:

An IEEE article, “Overcoming Performance Bottlenecks With a Network File System in Solid State Drives” by David Flynn and Thomas Coughlin, describes how the Hammerspace software provides a high-speed data path. It discusses the idea of a decentralized storage architecture with elements of file systems embedded in solid state drives.

The authors note that: “With Network File System (NFS) v4.2, the Linux community introduced a standards-based software solution to further drive speed and efficiency. NFSv4.2 allows workloads to remove the file server and directory mapping from the data path, which enables the NFSv3 data path to have uninterrupted connection to the storage.” This is partly because the metadata server is out of the data path.

It then says: “Yet another data path efficiency is now available leveraging GPUDirect architectures. This bypasses a single PCIe hop and copy through the host CPU and memory, offering similar data path efficiencies to those of NVMe-oF.”

GPUDirect

Hammerspace presented at a TechLive event in London on November 16 and B&F asked if Hammerspace had ideas about actually supporting GPUDirect itself. Molly Presley, SVP Marketing for the company, said: “Hammerspace is testing Nvidia GPUDirect now.”

Since a Hammerspace installation is a front end to existing filers, that implies it could provide a GPUDirect connection to those filers, whether they natively supported GPUDirect or not. Was this the case? Presley said: “Yes.”

Hammerspace can provide GPUDirect access to files stored in a shared environment.

If this is the case then, as long as AMD and Intel GPUs support RDMA, Hammerspace could theoretically provide GPUDirect-level data access speeds to those GPU systems. No other supplier, as far as we know, can so far do this.

Huawei gives OceanStor Dorado array all-flash teeth

Huawei has launched a fresh OceanStor Dorado all-flash array.

Huawei’s arrays are sold outside the USA, and the company ranks second to Dell in the worldwide all-flash array market, according to Gartner’s figures. The Dorado products feature 2-controller array designs and complement Huawei’s scale-out OceanStor Pacific products. The all-flash 9920 model was announced in September.

Huawei envisions an all-flash future for all data storage scenarios. At the Huawei Connect 2023 event in Paris, Yang Chaobin, Director of the Board and President of ICT Products & Solutions at Huawei, stated: “Ubiquitous data services require more performant, reliable, and energy-efficient data storage solutions.”

Huawei OceanStor Dorado 2100

Huawei claims all-flash arrays are faster, more reliable, and consume less electrical energy than disk-based arrays.

The Dorado 2100 boasts an active-active (A-A) NAS architecture, which Huawei claims to be an industry first for SMB arrays. It provides continuous availability with a 99.9999 percent data reliability guarantee. The array can house up to 8 controllers, offers 512 GB of cache – 64 GB per controller – and supports up to 400 SSDs.

Huawei OceanStor Dorado 2100 specs

Huawei emphasizes the ease of deploying the 2100 by simply scanning a QR code and highlights its support for remote mobile operation and management. The power consumption of the 2100, assisted by data reduction techniques, is 1.04 W per TB, contributing to customers’ ESG (Environmental, Social, and Governance) goals.

The 2100 is designed for SMB customers and is suitable for use in small to medium-sized hospitals with PACS workloads, manufacturing file-sharing, and secure email sharing. It supports the usual file access protocols.

The array is purported to have up to 30 percent lower total cost of ownership compared to peer vendors. Huawei believes it will assist SMBs in addressing data challenges such as managing a high proportion of unstructured data, overcoming slow read/write speeds from disk-based arrays, and providing multi-tenancy sharing with isolation.

NetApp has also announced an entry-level all-flash array, the AFF A150, and has established a reselling agreement for it with Fujitsu. This sets the stage for intensified competition with Huawei.

HYCU reaches 50 SaaS app connector landmark

HYCU now has 50 connectors for SaaS applications to integrate with its SaaS backup services.

SaaS apps store customer data, and customers need to protect that data as SaaS app suppliers only protect their own infrastructure. There are tens of thousands of SaaS apps, and a two-way obstacle to getting customer data protected. No backup supplier can add support for thousands of SaaS applications; they tend to support the main ones, like Salesforce, Microsoft 365, and ServiceNow. Conversely, the 30,000-plus SaaS application providers cannot integrate their apps with every data protection supplier's products and services. HYCU identified a need for a massive increase in SaaS app integration with its backup services and developed its R-Cloud platform and an API for SaaS app vendors to use.

Simon Taylor, HYCU
Simon Taylor

HYCU CEO and co-founder Simon Taylor has written a book about this. He said in a statement: “Introducing R-Cloud unlocked the ability to visualize an entire data estate, learn what data was protected and unprotected, and with 1-click simplicity, turn on protection when no other way was possible. This offered companies a simple yet powerful way to protect their entire data landscape. Today’s milestone marks not just an achievement, but a promise of our platform’s transformative power.”

The R-Cloud service enables a customer to understand which SaaS apps they use. The API, plus a low-code development platform, enables a SaaS app developer to integrate with – connect to – HYCU so its customers can get their data protected.
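HYCU has not published the R-Cloud API in this form, so what follows is a purely hypothetical sketch – every name in it is invented for illustration – of the small, uniform surface a SaaS connector plausibly has to expose to a backup platform:

```python
# Hypothetical sketch only - not HYCU's actual R-Cloud API, whose details are not public.
from abc import ABC, abstractmethod
from typing import Iterator

class SaaSConnector(ABC):
    """Uniform surface a SaaS app module might expose to a backup platform."""

    @abstractmethod
    def list_objects(self) -> Iterator[str]:
        """Enumerate protectable objects (records, boards, documents...)."""

    @abstractmethod
    def export_object(self, object_id: str) -> bytes:
        """Serialize one object for backup."""

    @abstractmethod
    def restore_object(self, object_id: str, payload: bytes) -> None:
        """Write a backed-up object back into the SaaS app."""

class ExampleTicketingConnector(SaaSConnector):
    """Stubbed connector for an imaginary ticketing app."""

    def list_objects(self) -> Iterator[str]:
        yield from ("TICKET-1", "TICKET-2")  # real code would page through the vendor's REST API

    def export_object(self, object_id: str) -> bytes:
        return f'{{"ticket": "{object_id}"}}'.encode()  # stub serialization

    def restore_object(self, object_id: str, payload: bytes) -> None:
        pass  # real code would POST the payload back via the vendor's API
```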

Jerome Wendt, founder and principal of industry analyst firm DCIG, said: “SaaS providers largely expect their clients to take responsibility for protecting the data they store with them. At the same time, SaaS providers offer no easy or scalable options for organizations to back up data in their platforms. HYCU has changed this conversation with the introduction of R-Cloud. It provides a process for providers to develop and embed a backup module in their SaaS application.”

HYCU already had a set of supported SaaS apps and has added more than 40 new SaaS app partners over the last few months to accelerate the development of the modules and deliver integrations to protect user data. It has also trained staff within each of these partner organizations to help augment their existing capabilities and services to add as-a-Service protection. 

The list of HYCU protected cloud services includes:

  • Compute and storage services: This includes AWS services like Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), and Amazon Simple Storage Service (S3), Google services like GCE, GCS, and GKE, VMware and Nutanix running on Public Clouds, Dell PowerScale family, NetApp. 
  • Database-as-a-Service: Including Amazon Relational Database Service (RDS), Google AlloyDB, Google BigQuery, Google CloudSQL, and Amazon DynamoDB.  
  • Platform services: Including AWS Identity and Access Management, AWS CloudFormation,  Google Cloud Artifacts, Google Cloud AppEngine, Nutanix Cloud Clusters on AWS, Nutanix Cloud Clusters on Azure, Okta CIAM, Okta Workforce Identity Cloud. 
  • SaaS applications: Including Asana, Atlassian Confluence, Atlassian Jira Management, ClickUp, Google Workspace, Microsoft 365, Miro, Notion, Salesforce, Typeform, and Terraform. 

HYCU says it aims to protect the app ecosystem from build to runtime. It now claims to protect more SaaS apps and services than any other supplier and is on its way to hit its 100 SaaS app protection target by the end of the year. We expect other backup suppliers to follow suit in 2024.

Western Digital shipping 24 TB and 28 TB nearline drives

Seagate’s 24/28 TB lead lasted just four weeks because Western Digital is now also shipping 24 TB CMR and 28 TB SMR disk drives.

Update: Data102 chassis capacity with HC580 drives corrected to 2.448 PB raw capacity, meaning 612 TB per rack unit; 27 November 2023.

The new products are the Ultrastar DC HC580 conventional magnetic recording (CMR) and DC HC680 shingled magnetic recording (SMR) drives. They are an evolution of the existing DC HC570 22 TB CMR and DC HC670 26 TB SMR products announced in May 2022. They are comparable to Seagate’s Exos X24 CMR and Exos X28 SMR products.

Western Digital’s Ashley Gorakhpurwalla said in a statement: “New and existing endpoints from industries, connected devices, digital platforms, AI innovations, autonomous machines and more create a staggering amount of data each day. This relentless creation of data ultimately finds its way to the cloud, which is underpinned by our continued advancements in HDDs.”

Cloud providers and other hyperscalers are Western Digital’s natural market for the HC680.

IDC storage analyst Ed Burns commented: “We are seeing strong momentum for Western Digital’s SMR HDDs and believe that SMR adoption will continue to grow as their new 28 TB SMR HDD offers the next compelling TCO value proposition that cloud customers cannot ignore.” Western Digital said its 26 TB SMR HDD (DC HC670) exabyte shipments reached nearly half of its datacenter exabytes shipped in the first quarter of fiscal 2024.

Western Digital is using its ePMR (energy-assisted recording), OptiNAND (flash-enhanced controller), triple-stage actuator (TSA), and UltraSMR (shingling) technologies in these new drives.

Both the HC580 and HC680 have helium-filled enclosures, spin at 7,200 rpm, and have 6 Gbps SATA or 12 Gbps SAS interfaces. The HC680 SMR drive has a 512 MB cache and delivers up to 265 MBps sustained transfer bandwidth. It is targeted at environments such as bulk storage, online backup and archive, video surveillance, cloud storage, regulatory compliance, big data storage, and other applications where data may be infrequently accessed. The HC580 is faster, pumping out up to 298 MBps, again with a 512 MB cache. It's aimed at customers in the same markets who need faster write access to data than shingled disks can provide.

Western Digital is pushing its green credentials, saying the new drives are built with 40 percent (by weight) recycled content and are more than 10 percent more energy-efficient per terabyte than the previous generation 22/26 TB drives.

The company is shipping a Gold 24 TB CMR drive, using DC HC580 technology with a 6 Gbps SATA interface, to system integrators and resellers selling to enterprises and SMBs. This drive spins at 7,200 rpm, delivers up to 298 MBps sustained data transfer, and has a 512 MB cache. We understand that DC HC580 performance is pretty similar. The Gold 24 supports a 550 TB/year workload, has a projected 2.5 million hours MTBF rating, and comes with a five-year warranty. The Gold family's capacity points range from 1 TB to 24 TB.

Western Digital is also integrating the new drives into its Ultrastar Data60 and Data102 JBOD storage products. The HC580 can provide up to 2.448 PB raw capacity in the 4RU Data102 chassis, meaning 612 TB per rack unit.
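The chassis arithmetic is straightforward – the 4RU Data102 holds 102 drives:

```python
# Ultrastar Data102 raw capacity with 24 TB HC580 drives.
drives, drive_tb, rack_units = 102, 24, 4
raw_pb = drives * drive_tb / 1000
print(f"{raw_pb:.3f} PB raw, {drives * drive_tb / rack_units:.0f} TB per rack unit")
# 2.448 PB raw, 612 TB per rack unit
```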

The Ultrastar DC HC680 and HC580 HDDs are being qualified by select hyperscalers, CSPs, and OEM customers, and are now available for large enterprise customers. They are only available with SATA interfaces and the SAS versions of the DC HC680 and HC580 HDD will be available in the first quarter of 2024.

Western Digital denies hardware flaws in SanDisk SSDs

Western Digital has rolled out a firmware update to address some failing SanDisk SSDs, but a data recovery specialist reckons there are underlying hardware problems.

SanDisk customers started reporting SSD data loss earlier this year and lawsuits were launched against parent company Western Digital. The affected SSDs identified in the lawsuits include SanDisk Extreme Portable 4 TB, Extreme Pro Portable 1 TB, 2 TB, and 4 TB, and Western Digital My Passport 4 TB.

Western Digital issued a firmware update, telling Austrian media outlet Futurezone (translated from German): “We recognized a problem with the SanDisk Extreme, the SanDisk Extreme Pro and WD My Passport in the spring and released a firmware update with which we fixed the problem.”

Affected SanDisk circuit board. Image from Attingo Datarecovery.

But some disagree that an update is enough. Markus Häfele, managing director at Attingo, a data recovery company, claimed the issue goes deeper than firmware (auto-translated from German): “It’s definitely a hardware problem. It is a design and construction weakness. The entire soldering process of the SSD is a problem … The soldering material used, i.e. the solder, creates bubbles and therefore breaks more easily … In addition, the components used are far too large for the layout intended on the board. As a result, the components are a little higher than the board and the contact with the intended pads is weaker. All it takes is a little something for solder joints to suddenly break.”

Affected SanDisk circuit board. Image from Attingo Datarecovery.

In Häfele’s view, components that are too large for the circuit board will not stay securely in place over the long term and start to loosen, providing intermittent contact. This is generally a precursor to a complete failure, he added. The reason for the alleged solder bubbling is not known but Häfele said he suspects it is related to humidity and temperature issues.

Häfele reports that Attingo has observed SanDisk SSDs where epoxy resin is used to secure the components in place. “It is reasonable to suspect that Western Digital has recognized the problem and wants to make the parts more durable,” he claimed. “They want to offer an additional safety factor, but these models also end up with us.”

Western Digital told PetaPixel and others: “While we are working to gather more information, at this time we do not believe hardware issues played a role in the product concerns that we successfully addressed with the firmware update.”

We have asked WD for further comment.

Storage news ticker – November 15


Aerospike released v7.0 of its real-time, multi-model database with a new unified storage format and other in-memory database enhancements. The unified storage format provides the flexibility to choose the right storage engine for different kinds of workloads, even within the same cluster. Developers no longer need to understand the intricacies of in-memory, hybrid-memory, and all-flash storage models, it claims. The vendor said in-memory deployments gain fast restarts for enterprise-grade resiliency and compression that shrinks the memory footprint of real-time applications.

Grafana Labs has acquired Asserts.ai, whose technology will help Grafana Cloud users better understand their observability data and find issues more quickly, from the infrastructure to the application layer. Asserts.ai provides out-of-the-box insights into relationships over time among various system components, enabling users to better understand and navigate their applications and infrastructure. Asserts.ai serves as a contextual layer for Prometheus metrics and provides an opinionated set of alerts and dashboards so that users can more efficiently perform root cause analysis and resolve issues more quickly. Asserts in Grafana Cloud will be demoed during the keynote at ObservabilityCON 2023, and available in private preview soon.

GPU-powered RAID card provider Graid has a strategic partnership with ThinkParQ and its BeeGFS parallel file system. The union aims to address the increasing demands in High-Performance Computing (HPC), Artificial Intelligence (AI) and Deep Learning, Media and Entertainment, Life Sciences, and Oil and Gas industries. Graid president and CEO Leander Yu said: “By joining forces with ThinkParQ, we aim to redefine the benchmarks in high-performance computing, AI, and diverse industries reliant on data-intensive operations. The collaboration between SupremeRAID and BeeGFS creates an unparalleled synergy that not only enhances performance but also ensures comprehensive data protection, scalability, and flexibility.”

Graid has a strong focus on competing with software RAID supplier Xinnor (see here) and tells us Xinnor’s benchmark test notes say: “Applying the following kernel parameters will lead to a security risk. While they improve performance, they also reduce system security. Please use these parameters at your own responsibility.” Also: “It’s important to note that disabling or modifying these security mitigations can potentially expose your system to security vulnerabilities. These parameters are typically used for debugging or performance testing purposes and are not recommended for regular usage, especially in production environments.” Graid is saying that the performance numbers that you might get from Xinnor benchmarks are totally done for show and could never be achieved in a production environment. “No IT manager anywhere would compromise security for performance.”

HPE has a partnership with StorMagic and now includes StorMagic’s SvSAN as a new way to store backup data using HPE GreenLake for Backup and Recovery and HPE StoreOnce. HPE GreenLake for Backup and Recovery is a one-stop solution as HPE creates and maintains the backup environment as a service. HPE StoreOnce is a purpose-built, high-performance backup appliance designed to store copies of backup data in deduplicated form, allowing customers to reduce backup footprints by as much as 95 percent. StorMagic SvSAN is a simple, flexible, reliable HCI software solution that transforms any two x86 servers into a highly available shared storage cluster.

We wondered if HYCU, which is building SaaS app connectors, was thinking about developing a copilot like Cohesity, Commvault, Druva, and Rubrik. A source familiar with HYCU told us: “You can imagine that once you start to deliver all these new integrations that you’ll need a way to have customers understand the particular language of each SaaS application and/or what you are protecting and making available to recover. And you can imagine if you were to do this as more than just a simplistic Q&A tool, but to be meaningful to the customer so they have a deep understanding of the solutions they have available for items like, what would be the best place to keep this data, what ways can we be more efficient, etc. that makes for a powerful AI-based solution. For now, it’s safe to say it’s with early customers now and should officially come in calendar Q1 next year.”

iXsystems announced GA of its new performance flagship series of storage appliances, the TrueNAS Enterprise F-Series, along with the latest release of TrueNAS SCALE (23.10) software. These all-NVMe models were designed for maximum performance, reliability, and density to serve the most demanding workloads, while offering significant reductions in power, space, and TCO. With 30 TB NVMe drives, a single 2U system supports 720 TB of highly available storage – 24 drives in all.

iXsystems TrueNAS F-Series

The F-Series features: 

  • Two models to choose from: the F100 with 30 GBps bandwidth per node and the F60 with 20 GBps bandwidth per node
  • NVMe-powered storage for low latency and bandwidth greater than 20 GBps
  • Scales from a few terabytes to 720 TB in only 2U, meeting a wide range of user requirements; both scale-up and scale-out technologies can be used to increase capacity
  • Dual-controller architecture provides continuous accessibility, preventing data downtime
  • Vast connectivity options to maximize interoperability

France-based Kalray announced its NG-Box, a disaggregated NVMe storage array based on Dell PowerEdge servers combined with Kalray DPU-based storage acceleration cards. NG-Box is designed for unstructured data workloads, offering reliable, fast, automated, and scalable on-premises Tier 0 storage for data-intensive workflows, which are increasingly AI-focused. The NG-Box delivers over 80 GBps per server. Industry tests such as IOzone and fio show doubled performance compared to non-DPU-accelerated versions of the server, as well as reduced transaction latency. NG-Box is part of the Kalray NGenea data management platform, which also includes NG-Stor and NG-Hub. It’s showcasing the NG-Box at SC23.
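
For context, per-server GBps figures like the one above are typically produced with large-block sequential reads at high queue depth. A minimal, illustrative sketch of such a fio run driven from Python; the target device, block size, and queue depth are assumptions for illustration, not Kalray’s published test parameters:

```python
# Minimal sketch: run a large-block sequential-read fio job and report GBps.
# Target, block size, and queue depth are illustrative assumptions only.
# Requires fio installed on a Linux host.
import json
import subprocess

def seq_read_gbps(target: str, runtime_s: int = 30) -> float:
    """Run a direct-I/O sequential read against `target` and return GBps."""
    cmd = [
        "fio", "--name=seqread", f"--filename={target}",
        "--rw=read", "--bs=1M", "--iodepth=32", "--ioengine=libaio",
        "--direct=1", "--time_based", f"--runtime={runtime_s}",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    bw_kib_s = json.loads(out)["jobs"][0]["read"]["bw"]  # fio reports KiB/s
    return bw_kib_s * 1024 / 1e9

if __name__ == "__main__":
    print(f"{seq_read_gbps('/dev/nvme0n1'):.1f} GBps")
```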

XConn Technologies, an HPC interconnect supplier, announced a strategic partnership with composable systems supplier Liqid to deliver composable memory based on the Compute Express Link (CXL) 2.0 protocol. Enabling the orchestration of disaggregated devices connected via CXL fabric, Liqid Matrix software now supports the XConn Apollo switch, a hybrid CXL 2.0 and PCIe Gen 5 interconnect system. XConn says the Apollo, built on a single 256-lane SoC, offers the industry’s lowest port-to-port latency and lowest power consumption per port in a single chip, at a low TCO. Liqid Matrix composes CXL memory device endpoints to CXL-connected hosts, disaggregating memory from the host server CPU complex so that applications and workloads access DRAM directly over CXL with much greater efficiency. The XConn/Liqid demonstration will be featured during SC23, November 12-17, in Denver in both XConn’s booth #1301 and Liqid’s booth #1219.
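
From the host’s point of view, CXL-attached memory composed in this way typically appears to Linux as a CPU-less NUMA node. A minimal sketch that lists memory nodes and flags CPU-less ones; the interpretation that a CPU-less node is CXL memory is an assumption that holds on current CXL-enabled systems but not universally:

```python
# Minimal sketch: list NUMA memory nodes on a Linux host. CXL-attached
# memory composed to a host typically shows up as a node with no CPUs.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()
    # First meminfo line reads "Node N MemTotal: <kB> kB"
    total_kb = (node / "meminfo").read_text().splitlines()[0].split()[-2]
    label = "CPU-less (possibly CXL)" if not cpus else f"CPUs {cpus}"
    print(f"{node.name}: {total_kb} kB total, {label}")
```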

MSP backup service supplier N-able announced Q3 calendar 2023 revenues of $107.6 million, up 15 percent year-over-year, with a profit of $6 million. Subscription revenue was $105.2 million, up 15.3 percent. As of September 30, 2023, total cash and cash equivalents were $127.4 million, and total debt, net of debt issuance costs, was $335.5 million. The Q4 outlook is for revenues in the range of $106.5 million to $107.0 million, representing approximately 11 to 12 percent year-over-year growth.

N-able storage revenues

Cloud file data services supplier Nasuni and energy systems supplier Cegal are announcing a strategic partnership. The two will provide fast file access from anywhere, full cyber-resiliency, and facilitation of hybrid cloud infrastructure. In effect, Cegal becomes a channel partner for Nasuni. Nasuni is also seeing strong momentum in the industry, with a 235 percent year-over-year increase in data managed for enterprise energy customers; organizations including Fugro, Geoactive, Ithaca Energy, and ENGIE are replacing legacy file storage and enhancing data protection with the Nasuni File Data Platform.

Boston Igloo AI+ powered by PEAK:AIO is being unveiled at SC23 this week, designed to transform HPC and AI workloads by providing ultra-fast AI storage to feed technologies such as Nvidia GPUs. The key features of Boston Igloo AI+ are:

  • File and block access
  • Up to 80 GBps-plus throughput
  • PEAK:PROTECT RAID 1, 10, 5, and 6
  • Nvidia GPUDirect Storage (GDS) RDMA support
  • Up to 737 TB capacity in 2RU
  • Lowers the cost barrier

This is claimed to outperform multi-node HPC storage solutions while remaining cost-effective. It’s available now.

HPC filesystem supplier Quobyte has a partnership with Hitachi Vantara to provide a turnkey, fully integrated solution of hardware, software, and support. The Hitachi Vantara storage hardware product is not named. Quobyte has also added RDMA (Remote Direct Memory Access) capability to its parallel distributed file system. Admins can now take advantage of Quobyte’s ability to automatically switch between IP and RDMA to optimize performance. By offering mixed-mode operation, Quobyte allows admins to run it over traditional IP networks and RDMA-capable fabrics such as InfiniBand in the same cluster or infrastructure.

Interesting chart from Storage Newsletter comparing analyst firms’ HDD shipment numbers. They are pretty consistent:

Storage Newsletter chart

Top500.org has released its latest TOP500 supercomputer list. The Frontier system retains the top spot and is still the only exascale machine on the list. There are five new or upgraded systems in the Top 10, including Aurora, the Azure cloud’s Eagle, Fugaku, and Lumi. Aurora is still being commissioned and will reportedly exceed Frontier with a peak performance of 2 exaFLOPS when finished.

Veeam has launched Veeam Backup for Salesforce v2, available on Salesforce AppExchange. It extends support for multiple clouds, adds single sign-on (SSO) and multifactor authentication (MFA) for greater security, and provides a safe environment for testing and development via sandbox seeding. Organizations can deploy on-premises or in the cloud, recover exactly what they need when they need it, and get backup that is custom-engineered for Salesforce data and metadata.

VergeIO is attacking VMware, unveiling what it claims is a risk-free VMware conversion service. Powered by IOmigrate, VergeIO says it can transition VMware virtual machines to VergeOS with just a few clicks. In light of Broadcom’s pending acquisition of VMware and growing user concerns about the state of the virtualization software and the company behind it – including rising licensing costs, ransomware vulnerabilities, and diminishing quality of support – VergeIO is offering complimentary professional services for VMware customers seeking an exit.

Virtuozzo announced the appointment of Sergey Dobrovolsky as CTO. Dobrovolsky joins Virtuozzo from Acronis, where he was VP Cyber Infrastructure. Virtuozzo provides open source-based hyperconverged cloud technology which it claims enables MSPs to reduce costs for customers by as much as 25 percent. Its HCI cloud is used by 600+ service providers, hundreds of thousands of businesses, and millions of end users across the world. 

WekaIO is working with Applied Digital Corp, a designer, builder, and operator of digital infrastructure for high-performance computing (HPC) applications and workloads, to provide the accelerated software architecture underpinning Applied Digital’s recently launched AI Cloud Service. Applied Digital is developing a turnkey offering that allows enterprise customers to purchase and use GPU resources on demand, at scale, across its datacenters for AI model training and inference in virtualized, bare metal, and containerized environments. Applied Digital’s customers use some of the world’s largest GPU clusters, with plans to scale to 10,000 GPUs or more per cluster, requiring data access of one terabyte per second or more.

Weka says its customers can achieve 1.8 terabytes per second of bandwidth in a single rack, lowering infrastructure costs and energy usage, improving GPU utilization by up to 20 times, and reducing training time by 10-100 times. Applied Digital has partnered with Weka to improve GPU utilization and efficiency, deliver high-performance data storage and management capabilities, simplify data onboarding, and allow customers to seamlessly scale deployments ranging from a fraction of a GPU to tens of thousands of GPU servers. 

Chinese 3D NAND supplier YMTC is suing Micron in the USA for infringing eight of its patents: United States Patent Nos. 10,950,623 (building 3D NAND), 11,501,822 (non-volatile storage device and control method), 10,658,378 (pass-through array of three-dimensional memory device), 10,937,806 (control of Through Array Contact, TAC, for 3D NAND memory device), 10,861,872 (3D NAND devices and methods of manufacturing), 11,468,957 (architecture and methods for NAND operations), 11,600,342 (reading method of 3D NAND), and 10,868,031 (multi-layer stacked 3D NAND device and its manufacturing). The allegedly infringing products are Micron’s 96L, 128L, 176L, and 232L NAND. YMTC wants royalty payments.