Dell has added Intel Gaudi3 GPU support to its XE9680 server and ported APEX File Storage to Azure in support of AI workloads.
The XE9680 server was announced in January 2023 and has gen 4 Xeon processors (up to 56 cores), a PCIe 5.0 bus, and support for up to eight Nvidia GPUs. By October, it had become the fastest ramping server in Dell’s history. As of March this year, it supports Nvidia’s H200 GPU, plus the air-cooled B100 and the liquid-cooled HGX B200. Intel’s Gaudi3 accelerator (GPU) has two linked compute dies with a combined eight matrix math engines, 64 tensor cores, 96 MB of SRAM cache, 16 lanes of PCIe 5.0, 24 x 200GbE links, 128 GB of HBM2e memory, and 3.7 TBps of memory bandwidth.
Now the XE9680 is adding Gaudi3 AI accelerator support. The Gaudi3 XE9680 version has up to 32 DDR5 memory DIMM slots, 16 EDSFF3 flash drives, eight PCIe 5.0 slots, and six OSFP 800GbE ports. It’s an on-premises AI processing beast.
Deania Davidson
A Dell blog written by Deania Davidson, Director AI Compute Product Planning & Management, says: “The Gaudi3’s open ecosystem is optimized through partnerships and supported by a robust framework of model libraries. Its development tools simplify the transition for existing codebases, reducing migration to a mere handful of code lines.”
The OSFP links allow for direct connections to an external accelerator fabric without the need for external NICs to be placed in the system. Davidson says: “Dell has partnered with Intel to allow select customers to begin testing Intel’s accelerators via their Intel Developer Cloud solution.” Learn more about that here.
APEX File Storage for Azure
Dell launched its APEX File Storage for AWS, based on PowerScale scale-out OneFS software, in May last year. Now it has added APEX File Storage for Microsoft Azure, complementing its existing APEX Block Storage for Azure. In a blog, Principal Product Manager Kshitij Tambe claims APEX File Storage for Azure is “a game-changing innovation that bridges the gap between cloud storage and AI-driven insights.”
Kshitij Tambe
The Azure APEX File Storage provides high-performance and scalable multi-cloud file storage for AI use cases. Tambe says customers can “move data from on-premises to the cloud using advanced native replication without having to refactor your storage architecture. And once in the cloud, you can use all enterprise-grade PowerScale OneFS features. With scale-out architecture to support up to 18 nodes and 5.6 PiB in a single namespace, APEX File Storage for Azure offers scalability and flexibility without sacrificing ease of management.”
He says it has the highest performance at scale for AI, based on maximum throughput performance and namespace capacity. We’ve asked NetApp what it thinks about these claims and a spokesperson said: “Azure NetApp Files is a high-performance enterprise file storage service that is a managed native service from Microsoft. While it is based on NetApp’s leading storage OS, the team at Microsoft will be better able to answer your questions about the specifics of performance.”
COMMISSIONED: The decision by the three largest U.S. public cloud providers to waive data transfer fees is a predictable response to the European Data Act’s move to eradicate contractual terms that stifle competition.
A quick recap: Google made waves in January when it cut its data transfer fee, the money customers pay to move data from cloud platforms. Amazon Web Services and Microsoft followed suit in March. The particulars of each move vary, forcing customers to read the fine print closely.
Regardless, the moves offer customers another opportunity to rethink where they’re running application workloads. This phenomenon, which often involves repatriation to on-premises environments, has gained steam in recent years as IT has become more decentralized.
The roll-back may gain more momentum as organizations decide to create new AI workloads, such as generative AI chatbots and other applications, and run them in house or in other locations that enable them to retain control over their data.
To the cloud and back
Just a decade ago, organizations pondered whether they should migrate workloads to the public cloud. Then the trend became cloud-first, and everywhere else second.
Computing trends have shifted again as organizations seek to optimize workloads.
Some organizations clawed back apps they’d lifted and shifted to the cloud after finding them difficult to run there. Others found the operational costs too steep or failed to consider performance requirements. Still others stumbled upon security and governance issues that they either hadn’t accounted for or had to reconcile to meet local compliance laws.
“Ultimately, they didn’t consider everything that was included in the cost of maintaining these systems, moving these systems and modernizing these systems in the cloud environment and they balked and hit the reset button,” said David Linthicum, a cloud analyst at SiliconANGLE.
Much ado about egress fees
Adding to organizations’ frustration with cloud software are the vendors’ egress fees. Such fees can range from 5 cents to 9 cents per gigabyte, which can grow to tens of thousands of dollars for organizations working with petabytes. Generally, fees vary based on where data is being transferred to and from, as well as how it is moved.
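For a rough sense of how those per-gigabyte rates add up, here is the arithmetic (the data volumes are illustrative, not taken from any provider’s bill):

```python
# Rough illustration of egress charges at the $0.05-$0.09/GB rates cited above.
# The data volumes are hypothetical; billing uses decimal units (1 TB = 1,000 GB).

def egress_cost(terabytes: float, rate_per_gb: float) -> float:
    return terabytes * 1_000 * rate_per_gb

for tb in (100, 1_000, 5_000):  # 100 TB, 1 PB, 5 PB
    print(f"{tb:>5} TB: ${egress_cost(tb, 0.05):>9,.0f} to ${egress_cost(tb, 0.09):>9,.0f}")

# Output (approximate alignment):
#   100 TB: $    5,000 to $    9,000
#  1000 TB: $   50,000 to $   90,000   <- "tens of thousands of dollars" per petabyte
#  5000 TB: $  250,000 to $  450,000
```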
Regulators fear the switching costs will keep customers locked into the platform hosting their data, thus reducing choice and hindering innovation. Customers see these fees and other surcharges as part of a broader strategy to squeeze them for fatter margins.
This takes the form of technically cumbersome and siloed solutions (proprietary and costly to connect to rivals’ services), as well as steep financial discounts that result in the customer purchasing additional software they may or may not need. Never mind that consuming more services – and thus generating even more data – makes it more challenging and costly to move. Data gravity weighs on IT organizations’ decisions to move workloads.
In that vein, the hyperscalers’ preemptive play is designed to get ahead of Europe’s pending regulations, which commence in September 2025. Call it what you want – just don’t call it philanthropy.
The egress fee cancellation adds another consideration for IT leaders mulling a move to the cloud. Emerging technology trends, including a broadening of workload locations, are other factors.
AI and the expanding multicloud ecosystem
While public cloud software remains a $100 billion-plus market, the computing landscape has expanded, as noted earlier.
Evolving employee and customer requirements that accelerated during the pandemic have helped diversify workload allocation. Data requirements have also become more decentralized, as applications are increasingly served by on-premises systems, multiple public clouds, edge networks, colo facilities and other environments.
The proliferation of AI technologies is busting datacenter boundaries, as running data close to compute and storage capabilities often offers the best outcomes. No workload embodies this more than GenAI, whose large language models (LLMs) require large amounts of compute processing.
While it may make sense to run some GenAI workloads in public clouds – particularly for speedy proofs of concept – organizations also recognize that their corporate data is one of their key competitive differentiators. As such, organizations using their corporate IP to fuel and augment their models may opt to keep their data in house – or bring their AI to their data – to maintain control.
The on-premises approach may also offer a better hedge against the risks of shadow AI, in which employees’ unintentional gaffes may lead to data leakage that harms their brands’ reputation. Fifty-five percent of organizations feel preventing exposure of sensitive and critical data is a top concern, according to Technalysis Research.
With application workloads becoming more distributed to maximize performance, it may make sense to build, augment, or train models in house and run the resulting application in multiple locations. This is an acceptable option, assuming the corporate governance and guardrails are respected.
Ultimately, whether organizations choose to run their GenAI workloads on premises or in multiple other locations, they must weigh the options that will afford them the best performance and control.
Panzura has launched AI-boosted Detect and Rescue software for near-real-time ransomware attack recognition and recovery.
The cloud file services supplier notes there was a 95 percent increase in ransomware attacks between 2022 and 2023, quoting a Q3 2023 Global Ransomware Report from Corvus Insurance. It cites Sophos estimates that two thirds of organizations globally are attacked by ransomware each year and almost a third (32 percent) pay ransoms to recover their data. On average, recovery from an attack takes 21 days.
Dan Waldschmidt
Panzura’s Detect and Rescue functionality enables suspicious behavior and I/O patterns to trigger automated alerts and interdiction within minutes, meaning less time for files to be copied, encrypted, or corrupted.
CEO Dan Waldschmidt explained in a statement: “When it comes to ransomware recovery, time really is money … Panzura Detect and Rescue applies machine learning to take roughly 12 billion data snapshots on any given day, giving us an incredibly accurate benchmark from which to identify anomalous patterns and potential threats.”
The active Detect and Rescue feature can recover files from Panzura’s passive immutable storage snapshot-based protection in its CloudFS service. Waldschmidt continued: “Organizations must utilize both robust, immutable protection, as well as added layers of resilience like Panzura Detect and Rescue, which improves ransomware monitoring, delivers real-time alerts, and can dramatically reduce any downtime resulting from the threat of ransomware.”
Chief customer officer Don Foster added: “To date, no organization that followed Panzura’s best practices guidance has lost data to ransomware while using Panzura CloudFS.”
Find out more about Panzura’s Detect and Rescue here.
Other file system suppliers are also using AI to speed ransomware attack detection. NetApp provides real-time ransomware detection with AI/ML features in its ONTAP arrays, while CTERA, another cloud file services supplier, added AI-based Ransom Protect near-real-time attack detection last year, looking for indicators such as a sudden spike in encrypted data writes.
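Neither vendor publishes its detection code, but the heuristic both describe (flagging a sudden run of writes that look encrypted) can be sketched generically. Everything below, from the entropy threshold to the class name, is a hypothetical illustration rather than any vendor’s implementation:

```python
import math
from collections import deque

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed data sits close to 8.0."""
    if not data:
        return 0.0
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

class WriteSpikeDetector:
    """Hypothetical sliding-window check for a burst of high-entropy writes."""
    def __init__(self, window=1000, entropy_threshold=7.5, spike_ratio=0.6):
        self.recent = deque(maxlen=window)
        self.entropy_threshold = entropy_threshold
        self.spike_ratio = spike_ratio   # fraction of recent writes that look encrypted

    def observe(self, payload: bytes) -> bool:
        self.recent.append(shannon_entropy(payload) >= self.entropy_threshold)
        if len(self.recent) < self.recent.maxlen:
            return False                 # not enough history yet to judge
        return sum(self.recent) / len(self.recent) >= self.spike_ratio

# Upstream, an alert would feed snapshot rollback or quarantine, e.g.:
# detector = WriteSpikeDetector()
# if detector.observe(written_block):
#     raise_alert()
```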
Spectra Logic has announced a new Cube tape library, LumOS tape library management software, and a TFinity Plus version of its enterprise tape library.
There are three existing Spectra Logic library products: the Stack entry-level, T950 mid-range, and ExaScale high-end enterprise system. The single rack Stack scales from one 6RU module to seven, with a control module and up to six tape modules, each holding 80 LTO tapes and up to six half-height drives. The T950 scales from 50 to 10,250 LTO slots, with loading via 10-cartridge TeraPack trays, and has up to 120 tape drives. The ExaScale has from 3 to 45 frames (specialized racks), supports LTO, IBM TS1170 and Oracle T10000 tapes, and is in a different league from the other two altogether.
Matt Ninesling, Spectra Logic senior director of tape portfolio management, said: “The escalating costs of public cloud storage have forced a reckoning, leading to significant interest in moving data to more economical locations including on-prem clouds and hybrid clouds. Compared to typical public cloud options, Spectra Cube solutions can cut the costs of cold storage by half or more, while providing better data control and protection from existential threats like ransomware.”
Spectra Logic’s Cube comes in a 42RU standalone frame measuring 79.4 x 35.9 x 45.4 inches. A standard IT rack is smaller, at 73.5 inches high, 19 inches wide, and 36 inches deep.
Spectra Logic Cube library with top loft holding a BlackPearl system
The Cube has up to 30 PB of native capacity (75 PB compressed) across up to 1,670 slots, compared to the Stack’s maximum of 10.1 PB. Up to 16 full-height or 30 half-height LTO tape drives may be intermixed, delivering a max native throughput of 32 TB/hour. Marketwise, it’s positioned above the Stack, which has a maximum native throughput of 30 TB/hour, and below the T950 libraries.
It supports LTO-6 through 9 format tapes and future generations as they arrive. It also supports up to 16 logical partitions for multi-tenant environments and a 10-cartridge TeraPack access port. Drive interfaces include Fibre Channel and SAS, and optional Ethernet-to-SAS bridges eliminate the need for dedicated SAN connections.
The Cube can be dynamically scaled and deployed in minutes. It’s serviceable without tools or downtime. Spectra Logic’s LumOS is the management software, which integrates with Spectra Logic’s BlackPearl object storage to provide a backing tape tier with Amazon S3 and S3 Glacier compatibility.
There is a Cube capacity-on-demand expansion model. Additional tape slots can be dynamically enabled via software while additional tape drives can be added to scale performance, all without downtime, tools or service calls.
Spectra Logic says that Cube environmental costs for power and cooling are substantially lower than object-based disk or flash systems, and it has a storage density of more than 2.5 PB/sq ft of datacenter floor space. An optional loft enclosure can accommodate rackmount devices, including Spectra Logic BlackPearl storage, industry-standard servers and more, simplifying connectivity and increasing floor space efficiency.
LumOS management software
This is a new version of Spectra Logic’s tape library management software. It’s said to have an intuitive, feature-rich interface with secure local and remote access, and is multi-threaded and extensible. Spectra Logic claims it’s up to 20x faster than the previous generation software. The features include:
REST API for automation of all library functions including software upgrades
Integrated partitioning for shared or multi-tenant environments
Capacity-on-Demand (CoD) for dynamic addition of media slots
AES-256 encryption and key management for LTO drive-based encryption
Library and drive lifecycle management to predict and help prevent failures
Automatic drive cleaning for error reduction and extended drive life
Media lifecycle management and data integrity verification
Proactive monitoring and diagnostics with email notification and automatic support ticketing
TFinity Plus
This enhanced TFinity library model supports up to 168 drives, a 24-drive increase over the TFinity ExaScale library. That means its throughput is higher than the TFinity ExaScale’s, although its maximum capacity stays the same.
Spectra Logic says the TFinity Plus delivers the highest tape move and exchange rate of any available library due to the LumOS software, TeraPack magazines, and internal robotic transporters.
Any existing TFinity or TFinity ExaScale library can be field upgraded to support LumOS library management software, and will then be functionally identical to the new TFinity Plus enterprise library.
Spectra Logic will demonstrate the new Cube library at the 2024 NAB Show, to be held April 14-17 at the Las Vegas Convention Center, NV, and at ISC High Performance 2024, May 13-15 at the Congress Center, Hamburg, Germany. Cube libraries are available to order now with a 30-day lead time. Configuration and pricing information is available upon request.
Apache released Airflow 2.9, the latest version of the popular open source project, a data orchestration tool with more than 12 million monthly downloads and 33,000 GitHub stars. Airflow 2.9 brings more than 35 new features and over 70 improvements including enhancements to data-aware scheduling and dynamic task mapping. According to the 2023 State of Airflow Survey, these features are key for common use cases like ETL and MLOps.
…
Arcitecta will be in the Dell Technologies booth #SL8065 at NAB 2024 discussing how Dell PowerScale and Arcitecta Mediaflux provide elite data orchestration, workflow automation, and edge caching, all monitorable from a single global namespace.
…
CAST AI, a Kubernetes automation platform, announced AI Optimizer, a new service that automatically reduces the cost of deploying large language models (LLMs). It integrates with any OpenAI-compatible API endpoint and automatically identifies the LLM across commercial vendors and open source that offers the most optimal performance and the lowest inference costs. It then deploys the LLM on CAST AI-optimized Kubernetes clusters, unlocking generative AI cost savings. AI Optimizer comes with insights into model usage, fine-tuning costs, and explainability in optimization decisions – including model selection.
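CAST AI has not published its selection logic, but the idea it describes (route each workload to whichever model clears a quality and latency bar at the lowest inference price) can be sketched in a few lines. The model names, prices, and scores below are made-up placeholders rather than CAST AI data:

```python
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    cost_per_m_tokens: float   # USD per million tokens (hypothetical)
    quality_score: float       # 0-1 from offline evals (hypothetical)
    p95_latency_ms: int

CANDIDATES = [
    ModelOption("commercial-large", 15.00, 0.92, 900),
    ModelOption("commercial-small",  0.50, 0.78, 300),
    ModelOption("open-source-13b",   0.20, 0.74, 450),
]

def pick_model(min_quality: float, max_latency_ms: int) -> ModelOption:
    """Cheapest candidate that satisfies the quality and latency constraints."""
    eligible = [m for m in CANDIDATES
                if m.quality_score >= min_quality and m.p95_latency_ms <= max_latency_ms]
    if not eligible:
        raise ValueError("no model satisfies the constraints")
    return min(eligible, key=lambda m: m.cost_per_m_tokens)

print(pick_model(min_quality=0.70, max_latency_ms=500).name)  # -> open-source-13b
```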
…
Data connectivity supplier CData Software has acquired Data Virtuality to provide customers with a suite of data replication and live data access tools compatible with on-premises, cloud, or hybrid environments. Data Virtuality delivers live connectivity for direct system-to-system data access without relocation, as well as data virtualization solutions for a unified, governed semantic layer. It readily complements and extends CData’s existing Drivers products, which provide point-to-point system connectivity, and Connect Cloud, which enables multi-point connectivity across cloud applications. CData claims it’s the only data management vendor to bring a bi-modal integration stack to market, allowing customers to leverage both data replication and live data access within one common connectivity platform.
…
Cloud connectivity supplier Cloudflare says developers can now deploy AI applications on Cloudflare’s global network in one click directly from Hugging Face, without the need to manage infrastructure or incur costs for unused compute capacity. It adds that Workers AI, an at-scale platform for running AI inference at the edge, is now GA with GPUs deployed in more than 150 cities, and Python support now available in open beta. Cloudflare also announced its instant serverless relational database, D1, is now generally available.
On top of this Cloudflare announced updates for R2, its globally distributed object storage:
Event Notifications: R2 can now automatically trigger Workers to take action when data in an R2 bucket changes
Super Slurper for Google Cloud Storage: Developers can now easily migrate data from Google Cloud to Cloudflare R2
Infrequent Access (private beta): Developers will now pay less to store data that isn’t accessed frequently
For more information, head to Cloudflare’s website.
…
Data management supplier Denodo has a partnership with Google that integrates the Denodo Platform with Google Cloud’s Vertex AI, combining logical data management capabilities with GenAI services and access to state-of-the-art LLMs. Narayan Sundar, Senior Director, Strategic Alliances, said: “Using Denodo’s logical/semantic-model approach, enterprises across industries can integrate and manage data and apply Retrieval Augmented Generation (RAG) techniques that combine the capabilities of a pre-trained large language model with external data sources.” There’s more information here.
…
Deduping backup target system supplier ExaGrid has appointed Sam Elbeck as its VP of Americas Sales and Channel Partners. He comes from being VP of Sales and Partnerships Americas for Arcserve, and, prior to that, Senior Global Director of Sales for Digital Compliance and SaaS Backup at Veritas. He was also VP of Sales and Business Development for Globanet, and held sales and technical leadership roles at both Symantec and IBM.
…
ExaGrid announced continuing growth with its strongest first quarter (Q1) in the company’s history, for the quarter ending March 31, 2024. It is profitable and cashflow-positive, and has been for 13 consecutive quarters. The customer count has risen from over 4,100 last quarter to more than 4,200 at the end of Q1. Some 75 percent of new bookings are for 6- and 7-figure deals and its competitive win rate is around 70 percent.
…
Data orchestrator Hammerspace has joined the STAC Benchmark Council. STAC benchmarks are well respected in the financial services arena.
…
Dr Stephen Weston
In-memory data grid supplier Hazelcast appointed Dr Stephen Weston as its chief scientist. He will lead its AI research and development, and be instrumental in guiding the company’s strategy and application of AI within the Hazelcast Platform and the product roadmap. Previously, Weston was managing director in global credit trading at JPMorgan Chase. Most recently, as a partner in the Risk Advisory practice at Deloitte LLP, he managed the transition to machine learning and agent-based models for risk management, model development, and model validation. In addition to his position as chief scientist at Hazelcast, Weston supervises academic research in the department of computing at Imperial College London, where the research group develops models that integrate technology, science and finance.
…
SaaS data protection supplier Keepit has signed Ingram Micro as a US distributor in a strategic go-to-market deal. Ingram Micro will help market, sell, and support the Keepit portfolio.
…
Composable systems supplier Liqid has an UltraStack system that can transparently connect up to 20 GPUs to a standard server. It has published an UltraStack Performance Whitepaper with UltraStack connected to Dell R760 and R7625 servers. The purpose of one set of MLPerf 3.1 benchmark tests was to prove the Nvidia L40S could outperform the well-regarded A100 PCIe GPU in certain inference workloads. They were conducted on two servers with GPUs composed by Liqid Matrix. One server contained 4 x L40S PCIe GPUs and the other 4 x A100 PCIe GPUs. Results indicate that the L40S GPUs not only matched but in certain aspects surpassed the performance of the Nvidia A100 PCIe GPUs. Download the white paper here.
…
Micron is sampling its automotive-grade 4150AT SSD, the first quad-port SSD, capable of interfacing with up to four systems-on-chips (SoCs) to centralize storage for software-defined intelligent vehicles. It supports single-root input/output virtualization (SR-IOV), a PCIe Gen 4 interface and ruggedized automotive design. Random read/write IOPS are up to >600,000/>100,000 respectively. The 4150AT can handle data streams from multiple SoCs at once, making it suited for vehicles that must multitask to handle diverse systems, from advanced driver-assistance systems (ADAS) to in-vehicle infotainment (IVI) and AI-enabled cabin experiences.
…
Storage array supplier Nexsan is celebrating its 25th anniversary since being founded in 1999. It has more than 2,600 systems with over 624 PB actively deployed and under maintenance (2023).
…
Objective Analysis analyst Jim Handy has collated semiconductor fab responses to the April 3 earthquake in Taiwan.
Micron said all of its team members have been accounted for and reported to be safe. It’s evaluating the impact to its operations and supply chain, and will communicate changes to delivery commitments after this evaluation is completed.
TSMC stated that over 70 percent of its tools had recovered within 10 hours of the earthquake, and that tool recovery at its most advanced factory, Fab 18 in Tainan, had surpassed 80 percent.
Winbond stated that there were no injuries to personnel. Some machinery at the Taichung and Kaohsiung fabs activated self-protection mechanisms due to the earthquake, but overall there was no significant impact on the company’s operations and finances.
UMC announced that the earthquake had no material impact on its operations. All personnel are safe, but automatic safety measures at the company’s fabs in Hsinchu and Fab 12A in Tainan were triggered, and some wafers in the production line were affected. Currently, operations and wafer shipment are resuming as normal, and there will be no meaningful impact on UMC’s finances and business.
…
Researchers at Dongguk University in Korea have devised an opto-electronic memory (OEM) device with a floating gate built from a 2D van der Waals heterostructure (vdWh) composed of ordered layers of rhenium disulfide (ReS2)/hexagonal boron nitride (hBN)/tellurene (2D Te). The ReS2/hBN/2D Te vdWh device exhibits high long-term stability (>1,000 cycles), a high on/off switching ratio of the order of 10⁶, and impressive data retention (>10⁴), owing to the opto-electrical properties of ReS2 and 2D Te. The non-volatile device can support multi-bit storage states by varying the gate voltage, input laser wavelength, and laser power, enabling complex data patterns and high data capacities. It can perform fundamental OR and AND Boolean logic gate operations by combining electrical and optical inputs.
SpaceBilt Inc., which develops reusable spacecraft, has partnered with SSD and controller supplier Phison to fly the Large in Space Server (LiSS), a 100 TB-plus storage and edge compute system, to the International Space Station (ISS) in 2025. The LiSS server system is powered by a Microchip Polarfire SoC along with an Nvidia Jetson Orin AI computer for rapid access to a string of X1 enterprise level PCIe-based solid state drives (SSDs) provided by Phison. SpaceBilt and partner Novium will provide demonstrations of a mockup space station at the Space Symposium, April 8-11, 2024 in Colorado Springs. Phison’s 8 TB M.2 2280 SSD system was previously selected to take part in Lonestar Data Holdings’ first lunar datacenter mission.
Spacebilt satellite image
…
Vector database supplier Pinecone has launched a Pinecone Partner Program. The new program provides select partners with benefits to deliver competitive AI-building capabilities to their customers. They include streamlined integration options for a seamless user experience, usage reporting for greater visibility, and sales, marketing, and technical support to drive successful adoption. The program is launching with industry-leading and rapidly rising companies as launch partners, including Anyscale, Confluent, LangChain, Mistral, Monte Carlo, Nexla, Pulumi, Qwak, Together.ai, Vectorize, and Unstructured, with more to be announced.
…
Clayton Dubilier & Rice (CD&R) has entered into a definitive agreement under which its funds will acquire a majority ownership position in Presidio from BC Partners, which will retain a minority interest. Presidio has more than 6,600 customers. Among its portfolio is PRISM, which offers surveys into a customer’s AWS RI usage and presents a savings model based on its data. If the customer signs a PRISM deal, Presidio buys AWS services upfront on the customer’s behalf, freeing up cash and capital costs. BC Partners acquired Presidio in 2019, delisting the company from Nasdaq in a $2.1 billion take-private transaction. The transaction is expected to close in the second quarter of 2024, subject to customary closing conditions.
…
Data management supplier Reltio announced its Customer 360 Data Product, a next-generation AI cloud offering that provides a comprehensive view of customer data in milliseconds. It provides a comprehensive view across multiple first and third-party sources for domains, including connected customer, product, supplier, location, and more. Reltio’s Flexible Entity Resolution Networks (FERN) for rule-free matching takes automation in data unification to the next level. Using LLM-powered pre-trained ML models, this approach to entity resolution enables rule-free matching with high match accuracy across industries and with minimal effort. A Reltio Intelligent Assistant (RIA), using GenAI and natural language technology, makes it easier for users to search digital content. More information here.
…
Seagate has quietly added a BarraCuda 530 PC SSD to its US website’s product lineup section. It is an M.2 2280 format drive, with a PCIe Gen 4 x4 interface, and 512 GB, 1 TB, and 2 TB capacities. It looks like a successor to the 2020-era BarraCuda 520, which supported NVMe v1.4. The 530 updates this to NVMe 2.0 and also extends the warranty period from three to five years. It has greatly improved performance compared to the 520, with the max sequential read speed up to 7.4 GBps from 5 GBps and sequential write speed up to 6.4 GBps from 4.2 GBps. As with the 520, no IOPS numbers are available. The 520 capacity range extends to 4 TB so, possibly, the 530 could get a 4 TB model as well. No NAND details are provided and we assume, for now, it’s 3D NAND with TLC format cells.
…
High-availability supplier SIOS has joined the Nutanix Elevate Partner Program and gained Nutanix Ready validation designation for its LifeKeeper and DataKeeper products. SIOS has more than 80,000 licenses installed globally.
…
NAND and DRAM fabricator SK hynix will invest an estimated $3.87 billion in West Lafayette, Indiana, to build an advanced chiplet packaging fabrication and R&D facility for next-generation HBM4 products. It could bring 1,000 jobs to the region. SK hynix plans to begin mass production in the second half of 2028, while the new facility will also develop future generations of chips and house an advanced packaging R&D line. SK hynix has risen past a $100 billion stock market capitalization on the back of booming HBM demand. It has 90 percent of the HBM market.
…
Research house TrendForce says burgeoning demand in the AI market since early 2023 has sparked a surge in high-capacity HDD products. A supply shortage for large-capacity HDD products will persist throughout this quarter and possibly extend for an entire year. HDD prices are expected to continue rising in the second quarter of 2024, with a potential increase of 5-10 percent. HDD prices have been static amid SSD competition.
…
Veeam has added KVM support to its backup and recovery offerings, with Oracle Linux KVM and Red Hat Virtualization protected. More information here.
Researchers from Japan and Seagate have demonstrated a multi-layer disk drive recording method that could double or triple disk drive capacity in a proof-of-concept study.
Hard disk drive platters have a surface layer of magnetic recording material on which recorded bits are written and read in concentric tracks. The reading and writing is carried out by an actuator device that moves across the disk tracks and operates on the bit areas as the disk spins beneath it. Seagate’s latest Heat-Assisted Magnetic Recording (HAMR) technology writes small bits, with the bit area temporarily raised to a high temperature to permit writing (setting the magnetic polarity direction) then cooling to room temperature for long-term magnetic stability.
The paper – “Dual-layer FePt-C granular media for multi-level heat-assisted magnetic recording” – describes the recording material built by the researchers, its layering, and the theoretical testing of its capabilities. It was written by scientists from Japan’s NIMS and Tohoku University, and engineers from Seagate Technology in Fremont, and is available behind a $43.14 Elsevier paywall.
The authors explain conventional HAMR recording, with one layer and two levels (magnetic north or south corresponding to binary one or zero), and their multi-level, two-layer recording, with two binary levels per layer giving four states in total.
Disk drive capacity has been increased by shrinking the size of the bit areas with their magnetic nano-grains. The existing perpendicular magnetic recording (PMR) technology, which operates at room temperature, runs into a grain size limit at roughly 1 Tbit/in², beyond which the grains’ magnetic polarity is not stable, meaning the disk drive cannot store information reliably. HAMR technology uses a different magnetic recording medium, which is stable at ambient or room temperature, but needs heating to allow its magnetic polarity to be changed. This has enabled the bit areas to shrink further, taking areal density from around 1.5 Tbit/in² up to 4 Tbit/in². Beyond that they become unstable as well.
They say that “increasing the areal density of future generations of HDD recording media requires a new recording concept that is not reliant on grain size miniaturization. One solution is bit-patterned media (1 bit = 1 grain) instead of granular media (1 bit = multiple grains). Numerical calculations predicted that bit-patterned media can achieve an areal density of 10 Tbit/in². However, bit-patterned media cannot be mass produced because of their costly nanofabrication process.”
Their idea is to use a two-pass approach with two layers of HAMR recording material, each with different Curie temperatures, at which their magnetic direction can be changed. The laser heats the recording medium’s surface and the surface layer heat penetrates to the lower layer, enabling both layers to be written to the same value. After cooling the surface is heated again but to a lower temperature, which enables the upper layer’s value to be written but not alter the lower layer’s value.
The researchers have demonstrated this two-layer recording material concept, with the HAMR heating laser and write head tuned to be able to operate on the lower layer first and then write the upper layer bit value. This provides four theoretical bit values for any pair of bits in the upper and lower layers: 1-1, 1-0, 0-1, and 0-0.
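To make the two-pass sequence concrete, here is a toy model of the write logic as described above. The Curie and spot temperatures are arbitrary placeholders rather than the paper’s values; the only behavior encoded is that the hotter first pass switches both layers while the cooler second pass switches only the top layer:

```python
# Toy model of the two-pass, dual-layer HAMR write described above. All numbers
# are arbitrary placeholders; the behavior encoded is the article's description:
# the hotter first pass exceeds both layers' Curie temperatures and writes them
# to one value, and the cooler second pass exceeds only the top layer's Curie
# temperature, rewriting it while leaving the bottom layer untouched.

TC_TOP, TC_BOTTOM = 500.0, 600.0   # hypothetical Curie temperatures (arbitrary units)

def heat_pass(state, value, temp_at_top, temp_at_bottom):
    """Write `value` into whichever layers the heat spot takes above their Curie point."""
    top, bottom = state
    if temp_at_top > TC_TOP:
        top = value
    if temp_at_bottom > TC_BOTTOM:
        bottom = value
    return top, bottom

def write_bit_area(top_value, bottom_value):
    state = (None, None)
    state = heat_pass(state, bottom_value, temp_at_top=750, temp_at_bottom=650)  # pass 1: hot
    state = heat_pass(state, top_value,    temp_at_top=550, temp_at_bottom=450)  # pass 2: cooler
    return state

# All four paired states (1-1, 1-0, 0-1, 0-0) are reachable:
for top in (1, 0):
    for bottom in (1, 0):
        print(f"wanted top={top}, bottom={bottom} -> stored {write_bit_area(top, bottom)}")
```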
They started with the FePt-C (iron, platinum, and carbon) recording layers, which are approximately 5 to 6 nm thick and have a Ru-C (ruthenium and carbon) break layer between them that is half that thickness. A carbon capping layer was applied to prevent surface oxidation.
Electron microscopy photos were taken to inspect the grain structure and the separation between the layers. Finite element model analyses were run to verify the operation of their concept. In other words, the researchers did not build an actual read-write head and test its operation on a spinning platter with their recording medium layers.
In this modelling, “the heat spot was moved along the medium at a velocity of 4 m/s.” The researchers noted “written track widths of ≈ 60 nm in the bottom layer and ≈ 100 nm in the top layer, owing to the different Tc (Tc1<Tc2)” where Tc is the Curie temperature of each layer.
Although there are four potential paired layer bit states, a read-write head can only detect three, 1-1, 1-0 or 0-1, and 0-0, providing three-level recording from the two layers. Development work could enhance read-write head tech to distinguish between 1-0 and 0-1, enabling four-level recording from each bit area, as the authors say: “The magnetization corresponding to ↑↓ and ↓↑ was M ≠ 0, and should allow 4-level recording as the antiparallel states were easily distinguishable.”
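The three-versus-four-level point follows from what the read head can separate: it senses the combined signal of the stacked pair, and if both layers contribute equally the two antiparallel states look identical. A minimal sketch, assuming unit moments per layer for illustration:

```python
# Why a stacked pair reads back as three levels by default: the head senses the
# sum of the two layers' signals, and with equal contributions the antiparallel
# states (1-0 and 0-1) are indistinguishable. Unit moments are an assumption.

states = {"1-1": (+1, +1), "1-0": (+1, -1), "0-1": (-1, +1), "0-0": (-1, -1)}

readback = {name: top + bottom for name, (top, bottom) in states.items()}
print(readback)                        # {'1-1': 2, '1-0': 0, '0-1': 0, '0-0': -2}
print(sorted(set(readback.values())))  # [-2, 0, 2] -> three distinguishable levels

# If the two layers contributed measurably different signals (say 1.0 and 0.6),
# the four sums would separate (+1.6, +0.4, -0.4, -1.6), which is the property
# the authors point to as enabling 4-level recording.
```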
Comment
The effective bit areas are the same in the upper and lower layers meaning that, for example, a 20 TB single-layer HAMR disk would become a 40 TB dual-layer HAMR disk. With three-level recording, the 20 TB HAMR disk would effectively become a 60TB drive, and, with 4-level recording, an 80 TB drive. This would change the relationship between HDDs and SSDs in $/TB and TB/RU terms.
Writing a bit area value is a two-pass operation, with the lower and upper layer areas written first, and the upper layer area written separately second. Thus it would take longer than today’s single pass writing operation. We understand the same issue would apply to reading the bit areas. This means that the disk’s IO speed would be slower than today’s single-pass HAMR drives. Its IO density, how much performance can be delivered from its storage capacity, would worsen, both from this and because the read-write head is the single channel to much higher capacity. Dual actuators might be needed to overcome this.
Considering that the disk is spinning underneath the read-write head during this two-pass operation, the second pass bit area is behind the first pass bit area to some degree. The authors say that they used a 4 meters per second (4 m/s, or 4,000 mm/s) recording medium speed in their modeling.
We understand that the average track length of a 3.5-inch HDD is approximately 269.44 mm. It may be that a two-layer disk’s spin speed might need adjusting to reduce the offset between the first and second layer bit areas due to this movement.
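As a back-of-envelope illustration of how that velocity and track length relate, treating the 4 m/s modeling figure as if it were a drive’s linear track speed (an assumption; real 7,200 rpm drives run their media past the head far faster):

```python
# Back-of-envelope only: revolution time implied by the ~269.44 mm average track
# length at various linear speeds. 4 m/s is the paper's modeling velocity; the
# higher figures are illustrative of real 7,200 rpm drives.

track_length_mm = 269.44

for v_m_per_s in (4, 20, 32):
    rev_time_ms = track_length_mm / (v_m_per_s * 1_000) * 1_000
    rpm = 60_000 / rev_time_ms
    print(f"{v_m_per_s:>2} m/s -> one revolution ~{rev_time_ms:5.1f} ms (~{rpm:,.0f} rpm)")

# 4 m/s works out to ~67.4 ms per revolution (~891 rpm); ~32 m/s is roughly the
# linear speed of a 269 mm track at 7,200 rpm.
```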
Bootnote
The formal citation for the paper is: P. Tozman, S. Isogami, I. Suzuki, A. Bolyachkin, H. Sepehri-Amin, S.J. Greaves, H. Suto, Y. Sasaki, T.Y. Chang, Y. Kubota, P. Steiner, P.-W. Huang, K. Hono, Y.K. Takahashi, “Dual-layer FePt-C granular media for multi-level heat-assisted magnetic recording.”
While the total global datacenter infrastructure market for 2023 managed some growth, the data storage and server segments showed major declines, according to a report from analyst house Dell’Oro Group.
It said total global capital expenditure growth crept up 4 percent year-on-year to $260 billion, with servers leading all technology areas in revenue. However, this growth rate marked a slowdown from the double-digit growth seen in the previous year.
Reduced investments in general purpose servers and storage systems were attributed to supply issues that occurred in 2022, which prompted enterprise customers and resellers to place excess orders, leading to inventory surges and subsequent corrections. Consequently, server shipments declined by 8 percent in 2023.
The demand for general purpose server and storage system components, such as CPUs, memory, storage drives, and NICs, saw a “sharp decline” in 2023, said Dell’Oro, as the major cloud service providers and server and storage system OEMs reduced component purchases in anticipation of weak system demand.
In contrast, there was a shift in capex towards accelerated computing. Spending on accelerators, such as GPUs and other custom accelerators, more than tripled in 2023, as the major cloud service providers raced to deploy accelerated computing infrastructure that is optimized for AI use cases. Accelerated servers, although comprising a small share of total server volume, command a high average selling price premium, contributing significantly to total market revenue.
The storage system market witnessed a 7 percent decline in revenue in 2023, with Dell leading in revenue share, followed by Huawei and NetApp. But Huawei was the only major vendor to achieve growth, driven by enterprise customers adopting its latest all-flash arrays.
Dell again led in server revenue share, followed by HPE and IEIT Systems. Excluding white box server vendors, revenue for original equipment manufacturers declined by 10 percent in 2023, with lower server unit volumes attributed to economic uncertainties and excess channel inventory. However, some vendors experienced revenue growth through shifts in product mix towards accelerated platforms or general-purpose servers with the latest CPUs from Intel and AMD.
In the cloud service provider market, Microsoft and Google increased total datacenter investments, particularly in AI infrastructure, while Amazon underwent a “digestion cycle”, said the analyst, following pandemic-driven expansion. In contrast, the major Chinese providers experienced declines in datacenter capex due to “economic, regulatory, and demand challenges.” Enterprise datacenter spending also declined “modestly” in 2023, “reflecting weakening demand amid economic uncertainties.”
Looking ahead to 2024, Dell’Oro forecasts a double-digit increase in total worldwide datacenter capex, driven by increased server demand and higher average selling prices. Accelerated computing adoption is expected to continue, supported by new GPU platform releases from Nvidia, AMD, and Intel.
It adds that “recent recovery” in server and storage component markets for CPUs, memory and storage drives is “signalling the potential” for increased system demand later this year.
According to Marketresearch.biz, the global datacenter storage market is expected to be worth around $159.7 billion by 2032, a rise from $50.2 billion in 2022, growing at a CAGR (compound annual growth rate) of 12.6 percent.
That compares with Data Bridge Market Research, which says the worldwide market will reach $132 billion by 2031, up from $55.53 billion in 2023, growing at a CAGR of 11.49 percent.
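As a quick consistency check, both forecasts can be replayed from their start values and CAGRs; the small gaps are down to rounding in the published figures:

```python
# Sanity check: end_value ≈ start_value * (1 + CAGR) ** years for the forecasts above.

def project(start_billion: float, cagr: float, years: int) -> float:
    return start_billion * (1 + cagr) ** years

print(f"{project(50.2, 0.126, 10):.1f}")    # 2022 -> 2032: prints ~164.5, vs the quoted $159.7B
print(f"{project(55.53, 0.1149, 8):.1f}")   # 2023 -> 2031: prints ~132.6, vs the quoted $132B
```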
TrueNAS producer iXsystems has encountered some user turbulence concerning a shift from a FreeBSD focus to Debian Linux.
The two main products supplied by iXsystems, TrueNAS CORE and TrueNAS SCALE, are both open source. CORE is based on BSD Unix and is characterized as a scale-up product whereas the newer SCALE is based on Debian Linux, termed a scale-out product, and supports Docker Containers, Kubernetes, KVM, Gluster, and a wider range of hardware than CORE. It’s reckoned that the more mature CORE has better performance than SCALE and needs less CPU power and memory.
From now on iXsystems will develop SCALE faster than CORE, with CORE becoming more of a maintenance product. Why shift the development center of gravity from BSD to Linux? It’s to do with software developments upstream of the base operating system.
Kris Moore
Kris Moore, SVP for engineering, told us: “Upstream has shifted. So first of all, ZFS, that’s kind of the heart and soul of TrueNAS and was for FreeNAS as well. Most of that [development] work takes place on Linux these days; features testing, all that happens on Linux. FreeBSD is the thing you port to and you’re done. So that momentum has moved.”
“Again, drivers, features, all these things are natively developed on Linux first. These are day one, and then FreeBSD is if we feel like it or a community person ports something over.”
“We had a huge chunk of our engineering staff spending time improving FreeBSD as opposed to working on features and functionalities. What’s happened now with the transition to having a Debian basis, the people I used to have 90 percent of their time working on FreeBSD, they’re working on ZFS features now … That’s what I want to see; value add for everybody versus sitting around, implementing something Linux had years ago. And trying to maintain or backport, or just deal with something that you just didn’t get out of box on FreeBSD.”
“It’s not knocking against FreeBSD. We love it. That’s our heritage. That’s our roots, I was on the CORE team elected twice. So believe me, if I felt like I could have stayed on FreeBSD for the next 20 years, I would have absolutely preferred to do that … But at some point, you gotta read the writing on the wall and say, well, all the vendor-supported innovations are happening on the Linux side these days.”
Brett Davis
Brett Davis, iXsystems EVP, added: “Our heritage as a company is we actually spawned from BSD. It’s near and dear to our hearts, and has been for a long time. And we probably stuck to it longer than we should. We probably should have made the decision earlier.”
“If we had our druthers we probably would have stuck with FreeBSD. But the reality of our focus is making a great product that people love, and not maintaining an operating system. That’s just the reality.”
BSD aficionados don’t like this change. Moore said: “Talk is cheap and complaints are free. You know, everyone loves to complain about it. But … if people wanted to push FreeBSD forward for the last 15 years, they would have.”
“It’s not like iX has gone to the dark side. We’re an open source company still. Guess what, Linux is open source. This is something I’ve had to argue with the BSD guys, like you’ve betrayed us. And I was, like, the source code is all still there. It’s all free. It’s all still available.”
Davis suggested: “There’s actually three camps of users. There’s the FreeBSD user. There’s a Linux user, and actually, there’s a larger group of users that don’t care. They just want a product that works. They want a NAS product. They don’t care what operating system is underneath it. I mean, do you care what operating system runs your microwave?”
“They just want to be able to install something. They want to make sure that they’re not frustrated by incompatibilities when drivers aren’t there for the hardware they’re trying to build. And most of these users aren’t giving us any money.”
It seems to us here at B&F that the two parallel rivers of open source Unix development, BSD and Linux, are changing in relative importance, with Linux taking priority in many developers’ minds over BSD. It’s not iXsystems’ responsibility to tell these Linux supporters that they are wrong. FreeBSD is, unfortunately, not so important anymore.
TrueNAS CORE releases are tied to BSD releases, and Davis admitted: “To be totally fair, we kind of scored an own goal by making a decision to change the name of the [next CORE] release from 13.1 to 13.3 … so that it could align with that version of FreeBSD.”
It’s been said that TrueNAS CORE VMs won’t migrate to SCALE. Moore said this “is just patently false, they migrate clean one to one.” It’s also been suggested that CORE is dead and has no future. Moore said: “We’re getting ready to release 13.3. The next update is coming out in the next few months, and we have to support it for years, no matter what.”
Davis added: “We have enterprise customers with entitlements dating out seven years. So we’ve got a lot of engineering ahead of us.”
Summing up, Davis said: “We want to make sure that the users are taken care of, and whether or not they pay us, even though we are a business, making sure that we have a product that’s freely available in open source is important to us.”
“The other thing I think that’s sort of missed here is that no one’s marooned. You have the ability to migrate from CORE to SCALE at any point. No one’s forcing that at any point. If we’re going to maintain it for years, and you have a system that’s running fine, [then] just leave it, no problem.”
Zluri has introduced its “privacy vault” to protect personal identifiable information (PII) at companies, as part of its iPaaS (integration platform as-a-service) product that supports on-premise and cloud data.
The SaaS operations management platform is designed to give organizations full visibility and control over their SaaS applications, access, and costs. It promises to enable IT and security teams to “effortlessly discover 100 percent of shadow IT”, map the SaaS apps used, optimize cost efficiencies, and govern all user access.
The patented AuthKnox engine establishes a unified data fabric for SaaS across on-premise and cloud data locations, complemented by built-in iPaaS and workflow automation.
To enhance its data security offer, the firm has now unveiled its encrypted “privacy vault” for PII to “ensure compliance with the highest data protection standards”. It makes sensitive data significantly harder for malicious actors to access, by isolating it within a zero-trust secure environment, meaning only select company nominees have access to it.
The vault supports “bring your own key” (BYOK), allowing Zluri customers to use and manage their own encryption keys, adding an additional layer of data ownership and security.
As for “your right to be forgotten” legislation and regulations, we’re told the vault centralizes sensitive data, enabling companies to maintain an accurate inventory to ensure the total deletion of any sensitive or personal data upon request, in compliance with data protection rules.
Zluri’s vault also supports data residency requirements, allowing customers to choose the geographic location for storing their sensitive information, ensuring compliance with regional data protection regulations.
Strict role-based access controls govern access to the vault, limiting it to authorized personnel, and logging and monitoring capabilities track access and modifications within the vault, facilitating prompt identification of any suspicious activity. Zluri says it conducts regular security audits, including vulnerability assessments and penetration testing, to ensure the resilience of the vault against evolving threats.
“Enhancing security posture in modern SaaS platforms is more critical than ever before,” said Zluri’s Chief Technology Officer and co-founder Chaithanya Yambari. “By introducing our privacy vault, we are not only meeting but exceeding industry standards, safeguarding customer data, and staying ahead of evolving threats.”
Zluri founders from left: Chaithanya Yambari, Ritish Reddy and Sethu Meenakshisundaram.
Zluri was founded in 2020 in Bengaluru, India, by Yambari, Ritish Reddy and Sethu Meenakshisundaram. It is now headquartered in Milpitas, California, and is financially backed by Lightspeed, MassMutual, Endiya, and Kalaari Capital. The company has so far raised a total of $32 million. Last July, it raised $20 million in a series B round, led by Lightspeed, with participation from existing investors MassMutual, Endiya, and Kalaari. The series A round took place in January 2022, raising $10 million.
Cirata, the successor to crashed WANdisco, has reported decreased revenues and increased losses for 2023.
WANdisco supplied replication-based Data Integration (DI) and DevOps/Application Lifecycle Management (DevOps) software. According to WANdisco’s previous filings with the authorities, a senior sales rep falsified sales reports and an inadequate sales management and monitoring system did not prevent this. Management thought the company was growing at a high rate – until auditors burst that bubble in March 2023. Sweeping executive changes followed, as well as a temporary ejection from the UK’s AIM stock market, layoffs, equity-based fundraising, readmission to AIM, and a name change to Cirata.
Stephen Kelly
CEO Stephen Kelly, who was brought in to find a way forward for the business, today issued 2023 profit and loss accounts which show a deterioration in trading performance. “The speed of the recovery is slower than we anticipated,” Kelly said.
According to the update, 2023 revenues fell 30 percent year-on-year to $6.7 million, and the pre-tax loss widened to $36.5 million versus $29.6 million in 2022. Bookings of $7.2 million were down 37.5 percent.
Kelly said: “The first half of FY23 revealed a business at a standstill. Internally, the post 9 March 2023 announcement (the “Irregularities”) discovery period extending into late 2023 resembled the laborious task of Sisyphus. Reactive surprises, rear-guard activities and unexpected challenges occupied late nights and weekends. The situation demanded continuous firefighting. We were experiencing a seemingly endless series of ‘whack-a-mole’ challenges.”
Cirata fiscal and calendar years are identical
“Soon after 9 March 2023, some customers and partners placed the Company on their ‘watchlist’, leading to a pause in activities and the then embryonic sales pipeline coming to a standstill. For a period, the only substantial executive interaction with certain customers and partners involved reassuring their compliance teams. It wasn’t until post-October 2023 that any semblance of normality returned, with Q4 2023 providing an opportunity for management to proactively plan for FY24.”
As Cirata’s annual results reveal, the sales management system was even more broken than the new management first thought. Kelly said: “The reality is that, since its IPO in 2012, the Company has raised $270 million but without delivering consistent sales momentum. Several fundamental elements of a scalable growth company seemed to be lacking.”
“By mid Q2 FY23, there were no sales compensation plans, territory plans, or account reviews, which are key for a professional sales organisation.”
Initial projections in March 2023 suggested a significant 12-month pipeline. “However, upon closer scrutiny the reality emerged. The reassessed pipeline was around 20 percent of the original figure and some of the ‘deal values’ overestimated. This reality within the GTM presented a scenario akin to starting from scratch.”
“Sadly, over the preceding 12 months, a significant portion of the engineering schedule and product roadmap was anchored in customer requirements that did not exist.”
“Company-wide, essential elements of governance, training and certification were missing.”
“The working culture, mainly characterised by a 4-day week, unlimited vacation, and working-from-home, failed to align with the operational reality of a loss-making business.”
The new management team discovered that some customers and strategic partners had legacy contracts featuring uncapped licencing and partner agreements with unconsumed “pre-paids.”
An unanticipated $8 million cost for advisory fees resulted in pleas to the firms involved to reduce their fees. Some did. Some did not.
Kelly said management has struggled to deal with all these issues but the sales close cycle is still problematic. “Notably, although it is fair to represent that DI customers remain in the pipeline, the predictability of customer deal closure has been challenging, with a tendency for slippage from quarter to quarter. DI solutions are sold into large, complex enterprises and the sales cycle can be longer and unpredictable. A key focus of the new management team is to enhance the pipeline, improve predictability, and elevate overall sales performance.”
“Challenges and uncertainty remain. FY24 represents a transition year to growth and our path to cash-flow positive in FY25.”
Overall, Kelly said of 2023: “A necessary cost realignment, a capital raise and a ‘root and branch’ restructuring and refocusing of the Company sees the business exiting 2023 with its customers and partners re-engaging … FY24 needs to evidence a transition to growth.”
The 2024 bookings outlook is for between $13 million and $15 million: 81 percent bookings growth at the low end and 108 percent at the high end. There is no revenue forecast. Cirata wants to exit 2024 at cash flow break-even.
Cirata will issue a trading update on the first 2024 quarter later this month and has noted that deal slippage is still a problem. It is hoping that sales booking and revenue momentum will pick up during the year to deliver the growth it needs in what is looking like a make-or-break year.
Precisely says its Precisely Connect data integration solution now brings together Amazon Relational Database Service (Amazon RDS) and IBM Db2 (Database 2) database workloads. This is intended to simplify migration from customers’ Db2 relational databases to Amazon Web Services (AWS), in theory helping organizations achieve greater scale and gain new insights from analytic workloads in the cloud.
News of the integration follows Precisely Connect’s Amazon RDS Ready Partner designation, and the recent expansion of the AWS Mainframe Modernization Data Replication with Precisely service.
Db2 products include operational databases, data warehouses, and data lakes. Their SQL support allows for flexible data table manipulation, letting multiple users insert, delete, and update records simultaneously. Db2’s security has also helped make it a popular database.
AWS announced general availability of Amazon RDS for Db2 at the end of last year, a service that’s pitched at making it easier to set up, operate, and scale Db2 databases in the cloud.
Precisely allows Amazon RDS for Db2 to be a target for IBM Z mainframe and i Series (AS/400) replication, enabling customers to move current Db2 data and workloads to AWS. As a result, organizations can derive maximum value from existing infrastructure investments.
In the mainframe to cloud data connectivity market, Precisely competes against a number of players, including the likes of BMC (which recently acquired specialist Model9), Hitachi Vantara, WEKA, and, indeed, IBM. It’s a growing market too: while the global number of mainframes has gone down, the amount of data being stored on them has gone up.
“Digital transformation and IT infrastructure modernization initiatives look different for every company, but the one common denominator across all industries is the need for fast and reliable access to trusted data,” said Eric Yau, chief operating officer at Precisely. “Our work and expertise with AWS allows us to support customers with the flexibility and agility needed to align real-time data delivery with changing business demands.” Precisely says it has 12,000 customers in more than 100 countries, including 99 of the Fortune 100.
Distributed storage startup Vawlt has announced a major upgrade to its “supercloud” offering. Vawlt 3.0 promises to address escalating customer demands for data protection, cyber security, and operational efficiency, helped by added ransomware protection.
With Vawlt, data is distributed in multiple clouds or on-premise nodes, creating what it calls a “supercloud” – essentially a cloud of clouds – that enables customers to take advantage of multiple storage environments through a single pane of glass. All data is encrypted at the client side, and it never goes through Vawlt’s servers. It travels directly between the users’ machines and the storage clouds. The platform enables channel partners to tailor and optimize storage to their customers’ needs and most common use cases – whether that be to support their hot or cold data.
The advanced ransomware protection offer is built in and introduces policy-based immutability for both file system and S3 API access, with continuous snapshotting to allow system-wide data rollbacks to any specified point in time, based on individual customer configurations.
There is also a new web-based user interface to support intuitive configuration, improve monitoring, and simplify often complex data management tasks.
In addition, there are channel partner-focused tools for managed service providers, valued-added resellers, and system integrators to streamline their operations, improve their customer support, and aid business development.
Ricardo Mendes
“With Vawlt 3.0 we are catalyzing a transformation in the data storage market in a hybrid multi-cloud world. It is not simply a re-invention of our solution but, much more than that, the latest release signposts the innovations we will continue to deliver,” explained CEO Ricardo Mendes. “Our goal is to equip organizations with the means to assert their data sovereignty and data independence.”
Vawlt was founded in 2018 by researchers from the LASIGE research and development group within the University of Lisbon.
Last month, Vawlt secured an additional €2.15 million ($2.34 million) in funding to help widen the reach of its storage system. The previous round of funding was in 2021, and the total now stands at €3 million ($3.26 million).
Three new investors made up the latest round, including round leader Lince Capital, along with Basinghall and Beta Capital. There was also participation from existing investors Armilar Venture Partners and Shilling VC.
With the extra cash, the biz promised further product development, wider channel support, and an expansion in its headcount across business and product development.