
Huawei developing SSD-tape hybrid amid US tech restrictions

Huawei’s in-house development of Magneto-Electric Disk (MED) archive storage technology combines an SSD with a Huawei-developed tape drive to provide warm (nearline) and cold data storage.

MED technology was first revealed back in March. We were told that, facing potential disk supply disruption due to US technology export restrictions, Huawei was working on its own warm and cold data storage device by combining an SSD, tape cartridge, and drive in a single enclosure. Its storage portfolio could then run from fast (SSD) for hot data and MED for warm and cold data, skipping disk drives entirely.

Presentation images of the MED now show a seven-inch device:

Huawei presentation slide

The MED is a sealed unit presenting a disk-like, block storage interface to the outside world, not a streaming tape interface. Inside the enclosure there are two separate storage media devices: a solid-state drive with NAND, and a tape system, including a tape motor for moving the tape ribbon, a read-write head, and tape spools. 

This is unlike current tape cartridges, which contain a single reel of tape, approximately 1,000 meters long, and have to be loaded into a separate drive for the tape to be read and written. A tape autoloader contains the drive, with its motor and take-up reel, plus a set of cartridges that a robotic mover loads into the drive. Much bigger tape libraries also have robotics to select cartridges from the hundreds or thousands stored inside them, and transport them to and from the tape drives.

The MED contains an internal motor to move the tape and an empty take-up reel that winds up the tape as it is pulled off the full reel and drawn past the read-write heads. A conceptual diagram of the device illustrates its design:

Diagram of magneto-electric drive

The MED contains a full reel of tape, about half the length of an LTO tape, a motor, read-write heads, and an empty reel to hold the used tape. Huawei engineers could choose to have the tape ribbon positioned by default with half on one reel and half on the other, so that the read-write heads sit at the midpoint of the ribbon, halving the worst-case time to reach either end of the tape.
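As a rough illustration of why midpoint parking helps, here is a back-of-the-envelope sketch in Python; the 500-meter ribbon length and the shuttle speed are assumptions made purely for the arithmetic, not Huawei figures:

```python
# Rough, illustrative arithmetic only: the 500 m ribbon length and 5 m/s
# shuttle speed are assumptions, not Huawei figures.
TAPE_LENGTH_M = 500.0      # assumed ribbon length, about half an LTO tape
SHUTTLE_SPEED_MPS = 5.0    # assumed tape transport speed

def worst_case_seek_seconds(park_position_m: float) -> float:
    """Worst-case travel time from the parked position to any point on the ribbon."""
    farthest_m = max(park_position_m, TAPE_LENGTH_M - park_position_m)
    return farthest_m / SHUTTLE_SPEED_MPS

print(worst_case_seek_seconds(0.0))                 # parked at one end: 100.0 s
print(worst_case_seek_seconds(TAPE_LENGTH_M / 2))   # parked at the midpoint: 50.0 s
```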

The system is designed to be a combined archive for cold data and nearline store for warm data. Data flows into the MED through the SSD at NAND speed, from where it is written to the tape in sequentially streamed blocks. Warm data can be read from the SSD at NAND speed. Cold data is read from the MED more slowly as it has to be located on the tape and the tape ribbon moved to the right position before reading can begin. This can take up to two minutes.

The MED has a disk-like block interface. Logically, the SSD has a flash translation layer (FTL) in its controller that receives incoming data and stores it in NAND cell blocks. From there, a second, tape translation layer assembles those blocks into a sequential stream and writes them to the tape.

When the MED receives a data read request, the controller system locates the requisite blocks using a metadata map, stored and maintained in the NAND, and then fetches the data either from the NAND, or from the tape, streaming it out through the MED’s IO ports.
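A minimal sketch of that read/write dispatch, assuming a per-block metadata map that records whether each logical block currently lives in flash or on tape; the class and method names are invented for illustration and do not describe Huawei's actual firmware:

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    NAND = "nand"
    TAPE = "tape"

@dataclass
class BlockLocation:
    tier: Tier
    offset: int          # NAND page address or position on the tape ribbon

class MedController:
    """Illustrative model of the MED's block interface, not Huawei's implementation."""

    def __init__(self, nand, tape):
        self.nand = nand                 # assumed flash back-end object
        self.tape = tape                 # assumed tape transport object
        self.block_map = {}              # lba -> BlockLocation, kept in NAND

    def write(self, lba: int, data: bytes) -> None:
        # All writes land in NAND first, at flash speed ...
        offset = self.nand.write(data)
        self.block_map[lba] = BlockLocation(Tier.NAND, offset)

    def destage(self, lbas: list[int]) -> None:
        # ... and are later streamed out to tape as one sequential run.
        for lba in lbas:
            loc = self.block_map[lba]
            tape_offset = self.tape.append(self.nand.read(loc.offset))
            self.block_map[lba] = BlockLocation(Tier.TAPE, tape_offset)

    def read(self, lba: int) -> bytes:
        loc = self.block_map[lba]
        if loc.tier is Tier.NAND:
            return self.nand.read(loc.offset)   # warm data: served at flash speed
        self.tape.seek(loc.offset)              # cold data: tape must be positioned first
        return self.tape.read_block()
```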

Huawei and its Chinese suppliers have developed their own tape media and read-write technology, without using IBM LTO tape drive technology or LTO tape media, which is made by Fujifilm and Sony. The tape media ribbon is about half the length of an LTO tape and has a much higher areal density. The MED NAND is produced in China as well. Huawei is open to using NAND from other suppliers should US technology export restrictions allow it.

The MED system and its components are protected by patents. The first-generation MED should arrive sometime in 2025. A second-generation MED, with a 3.5-inch disk bay slot size and a shorter, much higher density tape ribbon, has a 2026/2027 position on the MED roadmap:

  • A gen 1 MED will store 72 TB, and draw just 10 percent of the electricity needed by a disk drive. 
  • It should have a 20 percent lower total cost of ownership than an equivalent capacity tape system.
  • A gen 1 MED rack will deliver 8 GBps, hold more than 10 PB, and need less than 2 kW of electricity (see the rough arithmetic after this list).
  • We don’t know if the 72 TB capacity is based on raw or compressed data. 
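Taking those roadmap numbers at face value, and assuming the 10 PB rack is populated with 72 TB gen 1 devices and that capacities are raw, decimal units, the implied rack density works out roughly as follows:

```python
# Back-of-the-envelope check only; assumes the 10 PB rack figure is built
# from 72 TB gen 1 MEDs and that all capacities are decimal (raw) units.
MED_CAPACITY_TB = 72
RACK_CAPACITY_PB = 10
RACK_POWER_KW = 2

meds_per_rack = (RACK_CAPACITY_PB * 1000) / MED_CAPACITY_TB
watts_per_slot = (RACK_POWER_KW * 1000) / meds_per_rack

print(f"{meds_per_rack:.0f} MEDs per rack")   # ~139 devices
print(f"{watts_per_slot:.1f} W per slot")     # ~14 W, including chassis, fan, and network overhead
```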

The MEDs won’t run hot as they store mostly archive data. A MED chassis has no need of robots and can be filled with MEDs like a dense JBOD. It will function like a better-than-tape archive system, providing much faster data access for both reads and writes, drawing less electricity, and occupying less datacenter rack space.

It is simple to envisage future MED variants with more or less NAND, pitched at applications needing a higher ratio of warm storage to cold, archival data storage, squeezing the disk market somewhat. In effect, Huawei is compressing the storage hierarchy from three elements to two: from “SSD-to-HDD-to-Tape” to “SSD-to-MED.”

Such two-element hierarchies could be easier to manage, more power efficient, and able to provide faster cold data access. They could become popular in regions where disk supply is constrained by US restrictions, and elsewhere as well, because they would make on-premises datacenter and tier 1, 2, and 3 public cloud archival storage more practicable. Chinese public cloud suppliers are having conversations with Huawei about using the technology, we’re told.

It is possible that MEDs could have a profound effect on the markets for robot-driven tape autoloaders and library systems, prompting suppliers of such systems to look at developing their own MED-like technology. MEDs might also add to the pressure NAND already puts on disk drives by taking over some nearline data, squeezing the disk drive market from two sides.

It’s notable that Huawei developed its MED technology only because of US disk tech export restrictions, and that this inventive response could end up threatening Western Digital and Seagate.

Bootnote

Huawei is said to be developing its own 60 TB capacity SSD, using QLC NAND with an SLC cache.

Tintri opens lid on Kubernetes container storage interface for streamlined management

DDN enterprise storage subsidiary Tintri is releasing data management features for Kubernetes environments, with its new VMstore Container Storage Interface (CSI) driver.

The VMstore platform provides visibility into performance, data protection, and management for virtual machine workloads. The new CSI driver provides VMstore customers with that same insight within Kubernetes, using a single interface.

With cloud-native application support, VMstore can efficiently manage data for microservices-based deployments.

The driver allows admins to manage all data using familiar Tintri interfaces and tools to reduce complexity in hybrid VM/container environments, said the provider. The driver enables dynamic provisioning and automatic attachment and detachment of volumes to containers.
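For context, dynamic provisioning through any CSI driver is consumed from Kubernetes by pointing a PersistentVolumeClaim at a StorageClass backed by that driver. The sketch below uses the official Kubernetes Python client; the “tintri-vmstore” StorageClass name is a hypothetical placeholder, not a name documented by Tintri:

```python
# Generic CSI dynamic-provisioning sketch using the official Kubernetes
# Python client; the "tintri-vmstore" StorageClass name is hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="tintri-vmstore",   # hypothetical class backed by the CSI driver
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
# The CSI driver provisions a volume on the backing array and attaches it
# automatically when a pod mounts the claim.
```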

Brock Mowry, Tintri

“This IO-aware CSI driver is the most adaptable data management platform for Kubernetes, transforming how IT administrators handle Kubernetes environments in both cloud and on-prem,” said Brock Mowry, CTO at Tintri. “The driver empowers administrators, regardless of their Kubernetes expertise, with the essential tools to efficiently manage and optimize data across physical and virtual clusters.”

The driver also enables the easy management of workload transitions between cloud environments, enhancing operational efficiency through automated performance tuning. In addition, ETPH analytics provide insight to optimize cloud storage costs.

The driver leans on Tintri’s TxOS performance, analysis, and optimization capabilities, allowing admins to dynamically manage container performance and autonomously prioritize application workloads in real time, we are told. With Tintri Global Center (TGC), admins can manage multiple VMstores serving Kubernetes clusters, either globally or locally, through a single pane of glass.

Through the VMstore TxOS integration, Tintri also brings data protection and disaster recovery to Kubernetes environments, including snapshots and cloning of persistent volumes or large data sets, ensuring consistent storage, secure data management, and efficient recoverability, according to the company.

Tim Averill, US CTO at IT infrastructure and managed security service provider Silicon Sky, said: “We are leveraging the Tintri CSI driver within our datacenters, both in the cloud and on-premises. By providing primary storage, disaster recovery and data protection in one solution, we are simplifying and enhancing our IT operations.”

In August, Tintri said it was developing a disaster recovery feature with autonomous detection and alerting to combat ransomware attacks.

Lightbits backs up Crusoe’s sustainable AI cloud scale-up

Lightbits Labs says its block storage is supporting the expansion of Crusoe Energy Systems’ sustainable AI cloud service. Crusoe powers its datacenters with a combination of wasted, stranded, and clean energy resources to lower the cost and environmental impact of AI cloud computing.

“Stranded” energy includes methane that would otherwise be flared and excess production from clean and renewable sources. Crusoe, which has dual headquarters in Denver and San Francisco, currently operates in seven countries with around 200 MW of total datacenter power capacity at its disposal, some owned by the company and some at shared datacenter sites.

In September, Crusoe said it was collaborating with VAST Data to offer its customers VAST’s Shared Disks technology, which is another high-performance storage product for AI workloads.

Lightbits uses NVMe/TCP to enable direct access to NVMe storage over standard Ethernet TCP/IP networks. This architecture is designed to significantly reduce latency and maximize throughput, making it ideal for demanding AI and ML workloads, according to Lightbits.
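For readers unfamiliar with NVMe/TCP, attaching a target from a Linux client uses the standard nvme-cli discover/connect flow. The sketch below wraps that flow in Python purely for illustration; the address and NQN are placeholders and nothing here is Lightbits-specific:

```python
# Standard nvme-cli discovery/connect flow for an NVMe/TCP target, wrapped in
# Python for illustration; the address and NQN below are placeholders.
import subprocess

TARGET_ADDR = "10.0.0.10"                     # placeholder target IP
TARGET_PORT = "4420"                          # IANA-assigned NVMe-oF port
SUBSYS_NQN = "nqn.2016-01.com.example:vol0"   # placeholder subsystem NQN

# List the subsystems the target exposes.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Attach one subsystem; it then appears as a local /dev/nvmeXnY block device.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT, "-n", SUBSYS_NQN],
    check=True,
)
```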

Patrick McGregor, Crusoe

Lightbits scales IOPS with increased load while consistently maintaining latencies under 500 μs. The clustered architecture provides up to three replicas per volume across multiple availability zones for high availability.

“Lightbits’ suite of enterprise-grade functionality has been instrumental in helping us build a high-performance, climate-aligned AI cloud platform, addressing performance and operational gaps that other block storage solutions struggle with,” said Patrick McGregor, Crusoe chief product officer.

“From data preprocessing to real-time inference, the advantages of lower and more consistent latency, higher throughput, and linear scalability make Lightbits high-performance block storage an excellent offering to our customers to optimize their AI workflows.”

Kam Eshghi, Lightbits

Users can resize their VMs and consume high-performance storage in the form of persistent disks on demand, while leveraging the OS images pipeline to generate their workload-specific images, such as LLM training with Jax or generative AI with Stable Diffusion. Lightbits’ technology is integrated with Kubernetes, OpenStack, and VMware to support modern cloud-native apps and traditional virtualized apps.

Kam Eshghi, Lightbits co-founder and chief strategy officer, added: “This expanded partnership reflects the tangible results Crusoe has seen and demonstrates our crucial role in shaping the future of AI cloud technology.”

Earlier this week, it was announced that Lightbits’ cloud virtual SAN software is now available in Oracle Cloud Infrastructure (OCI), following availability in the AWS and Azure clouds.

Nutanix tightens ties with AWS on hybrid cloud solutions

Nutanix is getting closer to AWS, with on-prem/public cloud hybridity front and center, both to ease app migration to AWS and to use AWS as an extension of on-prem environments.

It’s doing this through an upgrade to Nutanix Cloud Clusters (NC2), which run both on-premises and in the AWS cloud. Nutanix says NC2 operates as an extension of on-prem datacenters spanning private and public clouds, managed as a single cloud with a unified management console.

The idea is that NC2 on AWS provides disaster recovery, datacenter extension, and application migration facilities for an on-prem Nutanix deployment. The expanded partnership will enable customers “to seamlessly extend their on-premises Nutanix environment to AWS.” 

Tarkan Maner, Nutanix

Tarkan Maner, chief commercial officer, stated: “Our expanded strategic partnership with AWS is a win-win-win for both companies and our customers, as it will help simplify their cloud migration journeys, accelerate their adoption of AWS using NC2, and open the door to hybrid cloud and on-prem Nutanix opportunities.”

NC2 on AWS places the complete Nutanix hyperconverged infrastructure (HCI) stack directly on a bare-metal instance in Amazon Elastic Compute Cloud (EC2). It runs AOS and AHV on the AWS instances and packages the same CLI, GUI, and APIs that cloud operators use in their on-prem environments. Nutanix provisions the full bare-metal host for your use, and the bare-metal hosts are not shared by multiple customers.

On-prem workloads can be migrated to AWS with the Nutanix Move migration tool, without refactoring. Customers get access to AWS services such as databases, S3, and AI and ML services. The elasticity of AWS can be used to manage expected and unexpected capacity demands. Procurement is simplified by using AWS Marketplace for all Nutanix software licensing needs.

Customers can also get access to promotional credits for migrating VMware on AWS workloads to NC2 on AWS through an AWS VMware Migration Accelerator offering. If migrating workloads from other clouds or on-premises, they will also have access to the AWS Migration Acceleration Program benefits, including free proof-of-concept trials, migration assessment, and support with AWS credits, as well as Nutanix licensing pricing promotions.

Nutanix cloud platform diagram. Nutanix Cloud Clusters also run on Azure

Dave Pearson, an IDC Research VP, said: “The partnership between Nutanix and AWS emerges as a strategic solution to enable more seamless migrations to Nutanix Cloud Clusters on AWS.” You can obtain more information on the AWS partnership here.

LucidLink offers global teams tools for real-time, secure file access

LucidLink is providing unified real-time file-based collaboration among distributed teams working on massive projects, with instant, secure file access across desktop, web, and soon mobile.

Startup LucidLink sells file collaboration services to distributed users. Its Filespaces product streams parts of files from a central cloud repository, providing fast access to large files, protected by zero-knowledge encryption. All the locally cached data and metadata on the client devices are stored encrypted on the local disk. This sub-file streaming approach contrasts with the full file sync ‘n’ share approach which, it says, characterizes the services offered by CTERA, Egnyte, Nasuni, and Panzura. LucidLink’s software is used by entertainment and media companies, digital ad agencies, architectural firms, and gaming companies.

Peter Thompson, LucidLink

Peter Thompson, co-founder and CEO of LucidLink, stated: “The new LucidLink is both an evolution of everything we’ve built so far and a revolution in how teams collaborate globally. For the first time, teams can collaborate instantly on projects of any size from desktop, browser or mobile, all while ensuring their data is secure.

“This milestone release marks a new chapter in our mission to make data instantly and securely accessible from anywhere and from any touchpoint. As we introduce more new features in the coming months, our focus remains on empowering teams to collaborate seamlessly, wherever they are.”

The real-time mobile collaboration capabilities will actually arrive in the first quarter of next year.

LucidLink says the latest software involves no downloading, syncing, or transferring data. It has a new desktop and web interface, streamlined onboarding, and flexible pricing for teams of all sizes, from freelancers to large enterprises, working from home, in datacenters or the public cloud.

LucidLink is providing a new desktop interface and a global user concept in which users can join multiple Filespaces across desktop, web, and soon mobile devices. There is a faster and smoother installation process for macOS users which “eliminates reboots or security changes.”

There is cloud flexibility as users can choose LucidLink’s bundled, egress-free AWS storage options or bring their own cloud storage provider.

The new LucidLink PC/notebook interface

There are more features scheduled for early 2025:

  • Mobile apps for Android and iOS: Full-featured mobile apps will give users immediate access to data.
  • External link sharing: Users can share content with external collaborators without needing the desktop application.
  • Browser-based upload: Users can drag and drop files directly from their browser for seamless collaboration.
  • Multi-Factor Authentication (MFA) and SAML-based SSO: Enhanced security options for all users.
  • Guest links: Teams can collaborate securely without requiring full user accounts.

An upcoming Filespaces upgrade tool will provide a smooth path to the new LucidLink for existing customers.

LucidLink says Spotify, Paramount, Adobe, and other creative teams worldwide have used LucidLink to increase productivity fivefold, access global talent, and “free their people to focus on creating.” 

We note that CTERA says its technology also offers “direct read/write access from the cloud, allowing desktop and server applications to handle large files effortlessly, without the need to upload or download them in their entirety. The data is streamed on-demand, allowing tools like Adobe Premiere or DaVinci Resolve to function smoothly and quickly, no different than if you were using a local disk.”

Bootnote

LucidLink’s Filespaces have a split-plane architecture in which data and metadata planes are managed separately. The metadata is synchronized through a central metadata service provided by LucidLink, while the data is streamed directly to and from the cloud or an on-premises object store.
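A conceptual sketch of that split follows; the MetadataService and ObjectStore classes and the chunk-naming scheme are invented stand-ins to show the shape of the design, not LucidLink's actual API:

```python
# Conceptual illustration of a split-plane write path; the MetadataService and
# ObjectStore classes are invented stand-ins, not LucidLink's actual API.
class MetadataService:
    """Central service that synchronizes the namespace: names, sizes, chunk lists."""
    def record_chunk(self, path: str, chunk_id: str, offset: int) -> None: ...

class ObjectStore:
    """Cloud or on-premises object store that holds the (client-encrypted) chunk data."""
    def put(self, chunk_id: str, data: bytes) -> None: ...

def write_range(path: str, offset: int, data: bytes,
                meta: MetadataService, store: ObjectStore) -> None:
    chunk_id = f"{path}:{offset}"              # simplistic chunk naming for illustration
    store.put(chunk_id, data)                  # data plane: streamed straight to the store
    meta.record_chunk(path, chunk_id, offset)  # metadata plane: synced via the central service
```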

Datadobi boosts StorageMAP with faster data mobility

Having pushed out the seventh major version of its unstructured data charting and moving tool in June, Datadobi says it has made StorageMAP faster, more scalable, and better able to deal with the now end-of-life Hitachi Data Ingestor (HDI).

StorageMAP software scans and lists (maps) a customer’s file and object storage estates. It can then optimize storage use by migrating old and cold data to lower-cost archival storage, for example, and delete dead data. Datadobi says warmer – more frequently accessed – data could be tagged, for example, for use in AI training and inference work. Warm data could also be migrated to a public cloud for access by compute instances there.

Carl D'Halluin, Datadobi

v7.0 added custom dashboards and an analysis module. According to Datadobi CTO Carl D’Halluin: “StorageMAP 7.1 takes it a step further and solves some focused challenges facing our customers globally, including offering an innovative HDI Archive Appliance Bypass feature, example dashboards, and the most important one, improvements to scalability and performance.”

StorageMAP has a uDME feature, an unstructured Data Mobility Engine. This moves, copies, replicates, and verifies large and complex unstructured datasets based on trends and characteristics derived from the metadata intelligence stored in the StorageMAP metadata scanning engine’s catalog. 

Datadobi says the uDME has been made faster and more scalable, capable of handling greater capacities and larger numbers of files and objects.

An HDI Archive Appliance Bypass feature – we’re told – gets data faster from the primary NAS and archive (HCP) sides of an HDI installation, HDI being a file storage system that can move data off a primary NAS to a backend HCP vault for cheaper, long-term storage. With HDI now defunct, customers may need to migrate their data to actively supported NAS and backend stores, but the HDI software impedes data migration.

D’Halluin says it has “significant performance limitations that make migrating all active and archived data an extremely slow process typically riddled with errors.”

StorageMAP has a bypass that “involves using multiple StorageMAP connections to the storage systems – one connection to the primary storage system and a second connection to the archive storage system. These connections effectively bypass the middleware HDI archiving appliance, which is responsible for both relocating data to the archive storage system and retrieving it when a client application requests archived data.”
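A conceptual sketch of that dual-connection idea is below; the client objects and attribute names are invented for illustration, and this is not Datadobi's implementation:

```python
# Conceptual dual-connection migration loop; the primary_nas/hcp_archive/target
# helpers and their attributes are invented for illustration only.
def migrate_share(primary_nas, hcp_archive, target):
    for entry in primary_nas.walk("/share"):
        if entry.is_stub:
            # Archived file: fetch the body straight from HCP over the second
            # connection, bypassing the HDI appliance's slow recall path.
            data = hcp_archive.get(entry.archive_reference)
        else:
            # Active file: read it directly from the primary NAS connection.
            data = primary_nas.read(entry.path)
        target.write(entry.path, data, metadata=entry.metadata)
```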

This is an alternative to the Hitachi Vantara-CTERA deal for moving data off HDI.

Lastly, Datadobi has added example dashboards “that a customer can refer to for ideas to include in their own custom dashboards,” helping customers take advantage of v7.0’s custom dashboard feature.

Check out StorageMAP here.

NAKIVO adds Microsoft 365, Proxmox, and Spanish language support

NAKIVO has boosted its backup offering with additional VM support, Microsoft 365 protection, Spanish language adoption, and extended cybersecurity.

Sparks, Nevada-based NAKIVO was founded in 2011, five years after industry leader Veeam, to provide virtual machine and then physical server backup to small and medium enterprises. It says it has more than 29,000 customers spread across 183 countries who buy from more than 300 MSPs and 8,600 partners. That customer count is well short of Veeam’s 450,000-plus but is plenty high enough to give NAKIVO a viable business.

Bruce Talley, NAKIVO

CEO Bruce Talley is a co-founder; his founding partners are Ukraine-based VP of Software Nail Abdalla and Turkey-based VP of Product Management Sergei Serdyuk. Talley said of the latest Backup & Replication v11 release: “With v11, we’re introducing features that align with today’s demands for flexible data protection, increased security, and multilingual support. Our goal with this release is to provide a comprehensive solution that supports data resilience for businesses worldwide.”

There is added support for open source, KVM-based Proxmox VE, which “has become a mainstream virtualization solution,” reflecting the move away from Broadcom-acquired VMware by some customers. Both Veeam and Rubrik have added Proxmox VE support in recent months. NAKIVO provides agentless VM backups, incremental backups, multiple backup targets, as well as encryption and immutability for backups in both local and cloud repositories.

v11 adds Microsoft 365 backup to the cloud, including Amazon S3, Wasabi, Azure Blob, Backblaze B2, and other S3-compatible storage targets. The Backup Copy feature means customers can create multiple backup copies and store them in various locations – on tape, in the cloud, on S3-compatible storage, or on network shares – which strengthens disaster recovery capabilities.

Adding Spanish language support, as Rubrik has done, means customers can operate and manage NAKIVO’s software using Spanish, and also access its website, educational content, and user documentation in Spanish.

v11 supports NAS (network-attached storage) backup, source-side backup encryption, which is integrated with the AWS Key Management Service (KMS), and NetApp FAS and AFF storage array snapshots. Customers can back up their VMware VMs stored on these devices this way. Supported storage devices now include HPE 3PAR, Nimble Storage, Primera, and Alletra, as well as the NetApp arrays.

It also introduces a Federated Repository feature. This allows customers to create a scalable storage pool from multiple repositories, or “members,” which automatically work together to ensure continuous operation. If a repository reaches capacity or becomes inaccessible, backups are seamlessly redirected to available members, ensuring uninterrupted protection and access to data.
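A minimal sketch of how that kind of redirection can work, assuming each member reports reachability and free space; the selection logic below is illustrative only, not NAKIVO's code:

```python
# Illustrative member-selection logic for a federated backup repository;
# not NAKIVO's implementation.
def pick_member(members, backup_size_bytes):
    """Return the healthy member with the most free space that can hold the backup."""
    candidates = [
        m for m in members
        if m["reachable"] and m["free_bytes"] >= backup_size_bytes
    ]
    # Preferring the emptiest member helps balance fill levels across the pool.
    return max(candidates, key=lambda m: m["free_bytes"], default=None)

members = [
    {"name": "repo-a", "reachable": True,  "free_bytes": 2 * 2**40},
    {"name": "repo-b", "reachable": False, "free_bytes": 8 * 2**40},  # offline member is skipped
    {"name": "repo-c", "reachable": True,  "free_bytes": 5 * 2**40},
]
print(pick_member(members, 500 * 2**30)["name"])   # -> repo-c
```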

Customers can scale storage capacity by adding or removing members as needs change, optimizing resource use without unnecessary costs. For MSPs, and in addition to the existing MSP Console, v11 introduces the Tenant Overview Dashboard, a centralized tool designed for MSPs to monitor and manage all tenants in one place.

Other additions include the extension of Real-Time Replication (Beta) for VMware functionality to cover vSphere 8.0. Customers can create replicas of vSphere 8 VMs and keep them updated as changes are made, as frequently as once per second. They can also now enable immutability for backups stored on NEC HydraStor systems.

NAKIVO Backup & Replication v11 is available for download, with a datasheet accessible here. Customers can either update their version of the solution or install the 15-day Free Trial to check how the new features work.

Toshiba drives power CERN’s data demands at the LHC

CERN, with more than 120,000 disk drives storing in excess of an exabyte of data, is probably Toshiba’s largest end-user customer in Europe. Toshiba has released a video talking about how its drives are used in making Large Hadron Collider (LHC) data available to hundreds of physicists around the world who are investigating the structure of matter.

The Toshiba drives are packaged inside a Promise Technology JBOD (just a bunch of drives) chassis. CERN has been a long-term customer, starting with Promise’s 24-bay VTrak 5800 JBOD and Toshiba’s 4 TB Enterprise Capacity drives; drive capacities have since increased, reaching the 18 TB MG09 series.

When the LHC smashes particles into each other, the collision products are spun off and detected. The collision detectors operate 24/7, and as the LHC breaks matter up into myriad component particles, masses of data are generated – around 1 TB per minute, 60 TB per hour, 1.44 PB per day, and 10.1 PB per week.
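Those rates are internally consistent; scaling the 1 TB-per-minute figure up, assuming continuous operation:

```python
# Quick consistency check on the quoted rates, assuming continuous 24/7 operation.
tb_per_minute = 1
tb_per_hour = tb_per_minute * 60          # 60 TB/hour
pb_per_day = tb_per_hour * 24 / 1000      # 1.44 PB/day
pb_per_week = pb_per_day * 7              # 10.08 PB/week, rounded to 10.1 in the article

print(tb_per_hour, pb_per_day, round(pb_per_week, 2))
```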

The data is organized and accessed within CERN’s EOS in-house file system. It currently looks after more than 4,000 Promise JBODs and the aforementioned 120,000-plus drives.

Toshiba is now testing 20 TB MG10 series drives in a Promise 60-bay, 4RU, VTrak 5960 SAS chassis, which has so-called GreenBoost technology. This is based on intelligent power management, which Promise says can deliver “energy savings of up to 30 percent when compared to competing enclosures.”

Promise CMO Alice Chang said: “The energy crisis is now a real challenge to all enterprises, including CERN. The VTrak J5960 offers a well-rounded solution to solve this dilemma, and we are confident that Toshiba’s Enterprise Capacity HDDs, installed and operated in this JBOD, will support CERN’s future need for growing data storage capacity in a reliable and energy-efficient way.”

Rainer Kaese, Senior Manager Business Development, Storage Products Division at Toshiba, said: “We continue to develop higher capacities, up to 30 TB and beyond, as HDDs are and will remain essential for storing the exabytes of data that CERN and the entire world produce in a cost-effective and energy-efficient manner.” 

That’s a sideswipe at the idea that SSDs will replace disk drives for mass capacity online data storage.

Veeam partners with Continuity Software to fend off ransomware attacks

Backup vendor Veeam is increasing its data security capabilities via an anti-ransomware partnership with Continuity Software to boost customer cyber-resiliency.

Continuity Software’s StorageGuard solution analyzes the security configuration of storage and backup systems. It says it scans, detects, and fixes security misconfigurations and vulnerabilities across hundreds of storage, backup, and data protection systems – including Dell, NetApp, Hitachi Vantara, Pure, Rubrik, Commvault, Veritas, HPE, Brocade, Cisco, Cohesity, IBM, Infinidat, VMware, AWS, Azure, and now Veeam. 

Andreas Neufert, Veeam

This Continuity collaboration follows a Veeam-Palo Alto deal in which apps are being integrated with Palo Alto’s Cortex XSIAM and Cortex XSOAR systems for better cyber incident detection and response.

Veeam’s Andreas Neufert, VP of Product Management, Alliances, stated: “Partnering with Continuity is an additional step towards helping our customers maintain a safer security posture in compliance with specific regulations including CIS Control, NIST, and ISO throughout their Veeam Data Platform life cycles. The partnership helps to ensure our industry-leading technology, [and] also the surrounding environment, is continuously checked for misconfigurations and vulnerabilities to withstand cyberattacks, as well as adhering to ransomware protection best practices.”

Gil Hecht, Continuity Software

Continuity becomes a Veeam Technology Alliance Partner (TAP), and the two companies say StorageGuard will provide automatic security hardening for environments to improve customers’ security posture, comply with industry and security standards, and meet IT audit requirements.

We’re told StorageGuard is a complementary offering to the Veeam Data Platform, enabling customers to automatically assess the security configuration of their environment, while validating the security of all backup targets, including disk storage systems, network-attached storage (NAS), cloud, and tape that connect to customers’ environments.

StorageGuard can prove audit compliance with various security and industry standards, such as ISO, NIST, PCI, CIS Controls, DORA, and so on.

Continuity CEO Gil Hecht said: “The partnership with Veeam is a testament to the powerful value proposition StorageGuard delivers. Veeam customers can get complete visibility of security risks across all their backup and data protection environments, while ensuring their Veeam and backup storage systems are continuously hardened to withstand cyberattacks.”

Pure Storage intros on-prem VMware migration service to Azure

On-prem VMware users with external block storage can face problems moving to the Azure cloud, and Pure Storage is hoping to attract customers who have those issues with its product for AVS (Azure VMware Solution).

The problems center on reproducing in the Azure cloud the external block storage facilities a customer has on-premises. For example, they may be using vSphere Storage APIs for Array Integration (VAAI) and vSphere Virtual Volumes (vVols) in their VMware environment, and support for these is lacking in Azure, according to Pure. They may also find it difficult to separate compute and storage instances in Azure for their vSphere environment, having to pay for combined instances rather than scaling storage and compute independently. Pure says it can fix these refactoring issues with its fully managed block Storage-as-a-Service (STaaS) for AVS.

Shawn Hansen

Pure’s Shawn Hansen, GM for its Core Platform, stated: “Enterprises have struggled for years with the inefficiencies and high costs tied to migrating VMware workloads to the cloud. [AVS] eliminates these obstacles by providing seamless, scalable storage as-a-service that scales efficiently and independently with business needs.”

Scott Hunter, VP Microsoft Developer Division, said: “Through this collaboration, Pure and Microsoft can better serve customer needs by enabling them to provision, use and manage Pure Storage on Azure just like other Azure services.”

The service decouples Pure’s block storage, Azure Cloud Block Store, from compute in the Azure cloud. It provides an external storage option for organizations needing to migrate storage volumes and VMs to Azure, giving VMs running in the Azure cloud the same block storage experience they had on-premises. VAAI and vVols are supported.

The service optimizes Azure storage instances, with Pure claiming customers can save up to 40 percent on their Azure VMware Solution costs when using it. It says data protection is built in with Pure’s SafeMode Snapshots, enabling systems to be back up and running in minutes when data needs to be restored.

Because the storage environment, as seen from VMware, is the same on-premises and in Azure, a single hybrid data plane is in operation. Pure says IT teams can centrally manage their storage and monitor usage without having two separate silos to look after.

The service, a development of Pure’s Azure Native Integrations offering, is being announced ahead of its preview stage, which Pure says it will enter soon.

Lightbits brings high-performance block storage to Oracle cloud

The Lightbits cloud virtual SAN software has been ported to Oracle Cloud Infrastructure (OCI), where it delivers fast, low-latency block storage. 

Lightbits block storage software has, until now, run in the AWS and Azure clouds, using ephemeral storage instances to provide block storage that is both faster and cheaper than the standard cloud offerings. It creates a linearly scalable virtual SAN by clustering virtual machines via NVMe over TCP, and can deliver up to 1 million IOPS per volume with consistent latency down to 190 microseconds.

Lightbits, with certification on OCI, claims it enables organizations to run their most demanding, latency-sensitive workloads with sub-millisecond tail latencies, perfect for AI/ML, latency-sensitive databases, and real-time analytics workloads.

Kam Eshghi, Lightbits

Kam Eshghi, co-founder and chief strategy officer of Lightbits, stated: “Certification on OCI marks a major step forward for Lightbits. We’re delivering a breakthrough in block storage performance, giving organizations the tools they need to migrate their most demanding applications to OCI and achieve faster, more reliable, and more efficient cloud services.”

OCI FIO benchmark runs, conducted with BM.DenseIO.E5.128 bare-metal OCI Compute shapes, supported by two BM.Standard.E5.192 shapes as clients, running Lightbits software on Oracle Linux 9.4, revealed:

  • 3 million 4K random read IOPS and 830K 4K random write IOPS per client with a replication factor of three, saturating the 100GbE network card configuration on BM.Standard.E5.192 servers.
  • Sub-300 microsecond latencies for both 4K random read and write operations, and 1ms latency when fully utilizing the clients for both random reads and writes – delivering fast performance even under heavy loads. 
  • In a mixed workload scenario (70 percent random reads, 30 percent random writes), each client achieved a combined 1.8 million IOPS, “setting a new benchmark for efficiency at scale.”

The Lightbits software on OCI can scale dynamically without downtime and has “seamless integration” with Kubernetes, OpenStack, and VMware environments. There is built-in high resiliency and availability, with snapshots, clones, and distributed management to prevent single points of failure.

Cameron Bahar, OCI

Cameron Bahar, OCI SVP for Storage and Data Management, said: “Our collaboration with Lightbits and its certification on OCI delivers a modern approach to cloud storage with the performance and efficiency that enables our customers to bring their latency demanding enterprise workloads to OCI.”

Coincidentally, Lightbits competitor Volumez is also available in OCI, having been present in the Oracle Cloud Marketplace since September, claiming it provides 2.83 million IOPS, 135 microseconds ultra-low latency, and 16 GBps throughput per volume. It says users can harness Volumez’s SaaS services to create direct Linux-based data paths using a simple interface for Oracle Cloud Infrastructure (OCI) Compute VMs.

Find out more about Lightbits on OCI here and about OCI itself here.

Seagate developing ruggedized SSD for orbiting satellites

Seagate is developing an SSD that can operate in the vacuum of space aboard low Earth orbit satellites, providing mass storage for satellite applications.

It is testing this concept by having its Seagate Federal unit ship an SSD up to the International Space Station (ISS) as part of a BAE Systems Space & Mission Systems payload. This environmental monitoring payload was ferried to the ISS by NASA, where astronauts installed it.

The payload included a processor and real-time Linux plus containerized applications, and Microsoft Azure Space was involved in this aspect of the mission. Microsoft is developing an Orbital Space SDK and views satellites as a remote sensing and satellite communications edge computing location. Naturally, such a system needs mass storage.

The mission is scheduled to last for one year. At that point, the payload will be uninstalled and returned to Earth. BAE Systems engineers and scientists will analyze it to assess its impact from the space environment. Seagate’s engineers will check the SSD and its telemetry to see how well it withstood the rigors of space.

Unlike the space environment outside the ISS, the interior has atmospheric pressure, temperature control, and insulation from solar radiation. The SSD must operate while withstanding stronger solar radiation, extreme cold, and the vacuum of space.

Last year, HPE servers equipped with Kioxia SSDs were used on the ISS as part of NASA and HPE’s Spaceborne Computer-2 (SBC-2) program. We asked if this BAE program is doing pretty much the same thing as far as the storage drive is concerned.

Seagate said: “The main difference here is that we are being used outside the ISS, rather than inside it. The interior of the ISS is scientifically engineered to be a pristine environment, as it needs to protect its human inhabitants. It has a lot of shielding and air conditioning, which actually makes it a more desirable location than most places on Earth. We’re working to design technology that can survive without these advantages and function under higher levels of radiation, where there is no monitored climate or temperature, and in the vacuum of space – outside, in low Earth orbit (LEO).”

Seagate told us this task was undertaken to determine if technology could enhance LEO data storage capabilities. If successful, it could aid in extending content delivery networks (CDNs) for new AI-powered workflows. Satellites already provide the last mile connection to areas without fiber and cell connectivity, and with storage as part of the equation, AI inferencing could then occur in more places.

The SSD’s design “was based around Seagate SSD technology … however, our design is a 3U-VPX form factor, completely different than typical SSDs.” The form factor comes from the avionics world and measures 100 x 160 mm. VPX, defined by the VMEbus International Trade Association (VITA) working group, specifies how computer boards connect across a high-speed backplane and is a successor to the VME bus standard.

A 4 TB SSD is being used. We’re told: “For most of the mission the drive is being used as a general-purpose storage device to store mission data. Seagate SSD drives support FIPS140-2 data encryption, but that was not used in this mission. In addition to general-purpose storage, we ran special stress tests on the drive during parts of the mission and collected telemetry. Those tests revealed that many SSDs are susceptible to certain levels of radiation, and many are corrupted at the same exposure level. So, we did a lot of ‘failure testing’ to reverse engineer ways to make them more resistant and robust. On top of radiation soft errors under stress, we were also interested in measuring temperature and current.”

We wondered what this could mean for consumers and enterprises in the future. A Seagate spokesperson said: “We intend to make this storage device available for purchase to both commercial and military aerospace customers. Right now, the market consists of off-the-shelf drives as a low-cost option, which, as you would expect, have a handful of faults for these applications. However, the opposite end is expensive military-grade hardware. We’re aiming to bridge the gap between the two and make the technology more accessible to consumers and enterprises. We are also looking at whether this sort of ruggedized solution might be useful for terrestrial applications.”