
Nasuni gets into bed with Microsoft’s Copilot

Nasuni customers can now integrate their data stores and workflows with customized Copilot assistants.

Microsoft’s Copilot is a GenAI chatbot integrated with Microsoft 365 apps such as Teams. Users can build their own customized Copilots using a Copilot Studio facility, which will work with their Microsoft app data. Nasuni provides distributed cloud-based File Data Platform services and stores its customers’ unstructured data in an Amazon S3 object store. It has developed a way for this data to be made available to customized Copilot chatbots, improving and broadening a chatbot’s ability to answer requests using Nasuni-stored unstructured data.

Jim Liddle, Nasuni’s chief innovation officer, said: “While Microsoft Copilot is an incredible general-purpose AI assistant, its true enterprise value is realized when it is infused with an organization’s domain-specific data. File data is typically locked up in siloed environments, making AI impossible. With Nasuni, customers can consolidate their data in the cloud and then leverage AI.”

To demonstrate this, Nasuni developed a version of Copilot for internal use. “We created our own Copilot chatbot that leverages our Nasuni data, called ‘Ask Nasuni,’ that is deployed within the Microsoft Teams environment for our employees to interact with. It only makes sense that our customers would want to do something similar to leverage their own corporate information.” 

The Copilot Studio facility enables customers to produce their own specific versions of the chatbot. Nasuni has a white paper guide to giving such customized Copilot chatbots access to Nasuni data.

It suggests such Copilots “work particularly well for static data sets that change infrequently” and typical use cases include: 

  • Domain-specific assistance: Create Copilots specialized in specific domains (e.g., healthcare, legal, finance) to provide accurate and relevant information
  • Custom FAQs: Build Copilots that answer frequently asked questions, reducing the load on human support teams
  • Content recommendations: Develop Copilots that recommend relevant articles, products, or services based on user queries
  • Process automation: Copilot Studio can guide users through complex processes or workflows
  • Personalized conversations: Customize Copilots to engage in natural conversations with users, enhancing user experience

Nasuni wants clients who are also Microsoft customers adopting Copilot chatbot technology to have Copilot-mediated natural language interaction with data in Nasuni-stored documents and files. 

Comment

This seems like a “no-brainer” move. We could see all unstructured data storage suppliers adopting a similar approach, ensuring that data held in their stores is made available to their Microsoft-using customers’ Copilot-based chatbots.

Qumulo launches Cloud Native file system on AWS

Scale-out parallel file system supplier Qumulo now runs natively in AWS. The company has announced the initial private availability of Qumulo Cloud Native file system in the AWS cloud.

Qumulo’s Core file system software was available on AWS back in 2021, but this latest addition is different.

Kiran Bhageshpur, Qumulo

Kiran Bhageshpur, CTO at Qumulo, told B&F: “What we had announced back in 2021 was what I would call our ‘Gen 1’ offering, i.e. a ‘lift and shift’ from our on-premises node-based architecture. A node on-premises became an EC2 instance, drives on-premises became EBS volumes, and you could tie together 4-265 nodes to deliver an ‘on-premises’-like file service. Even that was less expensive in terms of TCO as compared to alternatives like FSx-ONTAP while being more scalable and more performant.”

“What we announced as being in private availability to a select number of customers is our ‘cloud native architecture,’ similar to the ‘engine’ behind our Azure Native Qumulo offering we released in November 2023.”

Qumulo Cloud Native file system uses S3 to store file data. This file system layer is disaggregated from object storage, but the two work together to deliver high throughput and transactional performance. The company says this disaggregation allows 90-99 percent of all transactions to complete in the file system layer, reducing costs. Customers can deploy a Qumulo file system on AWS in minutes and scale from 4 GBps to hundreds of GBps while using AWS’s Elastic Compute Cloud (Amazon EC2) and Simple Storage Service (Amazon S3) infrastructure.

Bhageshpur said: “It leverages AWS S3 for persistent storage. This is eleven nines (99.999999999 percent) durability and is resilient to zonal faults. We use compute (EC2) to run file and data services, i.e. it’s just code. We leverage node local storage (instance-attached NVMe and EBS) explicitly for caching with no long term persistence at this layer.”
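
To make the disaggregation concrete, here is a minimal Python sketch of a read-through cache in front of S3, in the spirit of the architecture Bhageshpur describes. The cache path, bucket layout, and function are our own invention for illustration, not Qumulo code.

```python
# Hypothetical sketch of the disaggregated read path: node-local NVMe/EBS
# cache in front of durable S3. Not Qumulo code.
import os
import boto3

CACHE_DIR = "/mnt/nvme-cache"  # instance-attached NVMe; no long-term persistence
s3 = boto3.client("s3")

def read_block(bucket: str, key: str) -> bytes:
    """Serve a block from the local cache, falling back to S3 on a miss."""
    cache_path = os.path.join(CACHE_DIR, key.replace("/", "_"))
    if os.path.exists(cache_path):  # cache hit - the 90-99 percent case
        with open(cache_path, "rb") as f:
            return f.read()
    data = s3.get_object(Bucket=bucket, Key=key)["Body"].read()  # durable read
    with open(cache_path, "wb") as f:  # repopulate cache; S3 stays the source of truth
        f.write(data)
    return data
```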

Qumulo Cloud Native’s scalability is only limited by the scale of S3. It’s elastic in that compute and cache can be dynamically increased and reduced based on the workload need. Bhageshpur told us it brings to customers “the same file and data services that they know and love, whether they were on-prem or using our ‘Gen 1’ offering in AWS.”

He claims it’s also “around 80 percent less expensive than all other alternatives.” 

Additionally, Qumulo has a Global Namespace (GNS), which lets customers consolidate all their Qumulo instances – edge, core, and cloud – into a unified data plane that enables local-like access to remote files. It is based on an AI-driven algorithm that pre-fetches data from any Qumulo instance anywhere before it is needed. GNS lets users define virtual paths to data, effectively freeing it from physical location constraints, so all of a customer’s workflows can use the same path to access the same data no matter where it is physically located.

Qumulo has been using and developing this algorithm for over ten years. Customer Bardel Entertainment plans to use this capability to centralize data and make it accessible to artists in remote locations without sacrificing performance, or so we’re told.

The company said that, combined with its Cloud Native file system, the GNS offering allows creative or other teams to collaborate from geographically dispersed locations, from any device connected to AWS infrastructure. This allows members of a global organization to work together, without sacrificing file system performance or cost.

Bardel technology VP Arash Roudafshan said: “A project that would have been region-bound is now global and can go to any resource at any time, anywhere. By being more agile, we give that opportunity back to our clients, and they can be more creative and agile in their thinking about how they want to run production.”

At present, Qumulo’s Cloud Native file system on AWS is offered only through private availability. Contact Qumulo to inquire about eligibility for private availability.

Comment

This remote-access-made-to-appear-local is a feature of the Hammerspace Global Data Environment. Does this position Qumulo for further encroachments onto the Hammerspace turf? We think Qumulo may announce a cloud-native version of its file system software on the Google Cloud Platform as well.

Kioxia plans IPO to tackle $5.8B debt amid industry shakeups

NAND and SSD manufacturer Kioxia is preparing an IPO to recapitalize itself as payment of a ¥900 billion ($5.8 billion) loan is due in June.

The company is 56.24 percent owned by a Bain Capital-led private equity consortium and 40.64 percent owned by Toshiba. The Bain consortium bought its stake from Toshiba in 2017 for $18 billion. Kioxia operates NAND foundries in a joint venture with Western Digital, with both WD and Kioxia making SSDs based on the foundry-produced chips. A merger proposal between a spun-off Western Digital NAND/SSD business and Kioxia was thwarted when Bain consortium member and Kioxia shareholder SK hynix objected late last year.

The spun-off Western Digital company is coming into being because activist investor Elliott Management has convinced Western Digital’s board that its NAND/SSD business unit is undervalued by investors, which drags Western Digital’s stock price down. Splitting the company into separate and publicly listed disk drive and NAND/SSD businesses would drive their combined stock price above Western Digital’s current price, enabling Elliott and other shareholders to profit from the rise in value.

A merged Kioxia-Western Digital NAND/SSD business would take first or second place in the global market and be able to make more profit than the two separate businesses due to foundry and SSD production and supply efficiencies.

Now Reuters and Bloomberg are reporting that Kioxia needs to be recapitalized because the syndicated $5.8 billion loan has to be repaid or renegotiated. Kioxia’s revenues have been depressed by the multi-quarter NAND slump. It incurred a loss in its previous financial year and is likely to have done so again in the one that closed in March 2024. This could reduce its value below the ¥500 billion (approximately $3.2 billion) required by the syndicated loan terms, necessitating a renegotiation.
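
For reference, the reported dollar figures imply an exchange rate of roughly ¥155 to the dollar; a quick back-of-envelope check (the rate is our assumption, and actual rates fluctuate):

```python
# Back-of-envelope check of the reported yen-to-dollar conversions.
RATE = 155                # JPY per USD, assumed from the implied conversions
loan_jpy = 900e9          # the ¥900 billion syndicated loan
floor_jpy = 500e9         # the ¥500 billion valuation floor in the loan terms

print(f"loan  ≈ ${loan_jpy / RATE / 1e9:.1f}B")   # ≈ $5.8B
print(f"floor ≈ ${floor_jpy / RATE / 1e9:.1f}B")  # ≈ $3.2B
```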

Kioxia expects to return to profitability in its next financial year, ending March 2025, as the NAND market picks up.

Bain has suggested that Kioxia pursue an IPO to facilitate recapitalization, using the funds to repay the loan, and has been holding talks with the Sumitomo Mitsui, Mizuho, and Mitsubishi UFJ banks about this.

Such an IPO could pave the way to reopened merger talks with Western Digital’s NAND/SSD business unit. No one from Kioxia, Bain, or the banks involved responded to inquiries from Reuters and Bloomberg about the latest IPO talks.

Hitachi Vantara brings VSP One hybrid cloud storage to AWS

Hitachi Vantara’s Virtual Storage Platform One (VSP One), a unified hybrid cloud storage product, has moved into the realm of the public cloud with AWS.

Update: SDS File and object info added. 18 April 2024.

The high-end and mid-range VSP arrays were built on proprietary hardware until a few years ago, when Hitachi Vantara added software-defined features and support for commodity x86-based hardware. It then announced Virtual Storage Software Block, layered on top of its SVOS (Storage Virtualization Operating System) and presenting a single data plane across Hitachi Vantara’s mid-range, enterprise, and software-defined storage portfolio.

Hitachi Vantara said it would eventually extend into the public cloud. In February this year, all the storage products were brought together under a hybrid VSP (Virtual Storage Platform) One brand. VSP One running on AWS now fulfills that aim of extending into the public cloud.

Octavian Tanase, Hitachi Vantara

Octavian Tanase, chief product officer at Hitachi Vantara, said: “Virtual Storage Platform One is transformational in the storage landscape because it unifies data and provides flexibility regardless of whether your data is in an on-premises, cloud, or software-defined environment.” 

“Additionally, the platform is built with resiliency in mind, guaranteeing 100 percent data availability, modern storage assurance, and effective capacity across all its solutions, providing organizations with simplicity at scale and an unbreakable data foundation for hybrid cloud.”

There is a single control plane, data plane, and data fabric with VSP One, and three products available initially:

  • Virtual Storage Platform One SDS Block
  • Virtual Storage Platform One SDS Cloud – cloud-native SVOS in AWS
  • Virtual Storage Platform One File

We were told by a Hitachi V spokesperson: “The Virtual Storage Platform One File is an appliance. We plan to offer SDS File in 2025.” Note that Hitachi Content Software for File is based on an OEM relationship whereas: “Virtual Storage Platform One is home to our own IP only.” Also: “Longer term we will be offering Virtual Storage Platform One Object that will integrate file services as will our block offerings. In 2025 we will be launching the Virtual Storage Platform One Community that will be the home of our OEM and 3rd party offerings to build out a data platform into a custom solution.”

Hitachi Vantara says VSP One features include:

  • VSP One SDS Cloud is available in the AWS Marketplace
  • A single data plane runs across the VSP products on-premises, in colos, and in the AWS cloud
  • VSP One SDS Block and SDS Cloud can be automated with Ansible playbooks available on GitHub from launch; VSP One File cannot
  • Cloud observability via a Hitachi Ops Center Clear Sight dashboard covering the whole portfolio
  • VSP One File is simplified and accelerated, and carries a 100 percent data availability guarantee

    Dan McConnell, Hitachi Vantara SVP for product management, said in a blog late last year: “This announcement signals a major strategic direction for our company. Imagine a single data plane that spreads neatly across your organization’s structured and unstructured data, from traditional hardware optimized arrays to scalable software defined, to cloud-hosted.”

    The unstructured data includes files and also objects and mainframe data, according to an eBook.

    McConnell says VSP One will “be infused with Hitachi Vantara machine learning models that enable administrators to not only query and pull insights from the infrastructure but to automate and augment processes, such as determining the best deployment architecture for an application’s data.”

    Additional VSP One products will be available later this year. Various links off the Hitachi Vantara VSP One web page tell you more.

    Comment

    Sheila Rohra’s Hitachi Vantara is catching up with Dell, HPE, and NetApp as a long-term incumbent storage supplier embracing software-defined storage, commodity hardware, unified block, file and object storage, hybrid on-premises and public cloud availability, control planes, and a cloud-like operating model. We can expect VSP One to appear in the Azure and Google clouds and to support GenAI and retrieval-augmented generation.

    Commvault acquires cloud resiliency startup Appranix

    Commvault is increasing its cloud resiliency credentials by buying Appranix – a seven-year-old cloud app resiliency startup.

    Appranix has developed software to protect and recover applications running in the public clouds, using public cloud storage and services such as snapshots. It was founded in late 2016 in Boston by CEO Govind Rangasamy. The biz is self-funded and has around 70 employees. Commvault is buying it to help its customers “get up and running even faster after an outage or cyber attack.”

Sanjay Mirchandani, Commvault

    Sanjay Mirchandani, Commvault president and CEO, explained: “We are taking resilience to the next level by marrying Commvault’s extensive risk, readiness, and recovery capabilities with Appranix’s next-generation cloud-native rebuild capabilities.”

    Commvault wants to be able to offer cloud app recovery as well as its existing data loss recovery and protection in the cloud and on-premises – a combination of data and application resiliency.

    If an app running in the AWS, Azure, or Google cloud fails due to a malware attack or an outage, recovery dependencies include cloud networking, DNS configuration, application load balancing, security group access, and more. Appranix automates this infrastructure reset, and can reduce the time it takes to rebuild from days or weeks to – in some cases – hours or minutes. 
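
To give a flavor of what such an infrastructure reset involves, here is a hedged boto3 sketch that re-applies two common recovery dependencies – a security group rule and a DNS record. All IDs and names are hypothetical, and Appranix’s actual rebuild automation goes far beyond this:

```python
# Minimal sketch of re-applying recovery dependencies on AWS. Hypothetical
# IDs/names; not Appranix code.
import boto3

ec2 = boto3.client("ec2")
route53 = boto3.client("route53")

# Restore a security group ingress rule recorded before the outage
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# Point the app's DNS name at the rebuilt load balancer
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",
    ChangeBatch={"Changes": [{"Action": "UPSERT",
        "ResourceRecordSet": {"Name": "app.example.com", "Type": "CNAME",
                              "TTL": 60,
                              "ResourceRecords": [{"Value": "rebuilt-alb.example.com"}]}}]},
)
```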

Govind Rangasamy, Appranix

    A Commvault blog written by Rangasamy declares: “Our customers will have a singular solution that makes recovered data instantly accessible, available, and secure with full cloud application recovery and environment rebuilding.”

Rangasamy added: “Joining the Commvault family is a thrilling and natural next step for Appranix as we jointly change the market. We share a common vision to go beyond traditional backups and disaster recovery. Our combined technologies will offer comprehensive, unmatched resilience capabilities for businesses globally.”

    The Appranix team is expected to join Commvault soon, with the integration of Appranix’s technology into Commvault’s portfolio anticipated by this northern hemisphere summer. Customers can access Appranix for their cloud application discovery and rebuild requirements via the AWS, Google Cloud, and Microsoft Azure marketplaces.

    Samsung reportedly plans to leapfrog to 430-layer NAND in 2025

    Samsung is planning to bypass the 300-layer 3D NAND level and go straight to 430-layer flash after a 290-layer product planned for next year.

    According to Korea’s Hankyung media outlet, Samsung will start producing 290-layer NAND later this month. Up until now, Samsung’s highest layer count was 236 in its version 8 V-NAND technology. 

    The report, translated from Korean, tells us that the 290-layer product will be version 9 of Samsung’s technology and utilize two stacks of strings of NAND to reach the 290-layer level. The report doesn’t reveal the layers in each string but, generally, a stringstack of two components has equal layer counts – suggesting two 145-layer strings will be used.

    The need for string stacking arises because, as the layer count in a string increases, the difficulty of etching vertical holes through the many layers increases – to the point that the holes become malformed and don’t function as they should.

    Updated 3D NAND layer technology table of the NAND suppliers

    Samsung will then move to a 430-layer triple-stringstacked technology with its v10 V-NAND, which we think will involve 3x 145 layers as the starting point. Samsung is moving straight to 430 layers from 290 because it thinks that there will be a need for AI inferencing workloads to take place at edge IT sites. These will need fast access to large data sets and 430-layer technology will enable flash drives with more capacity than are available now in less physical space.

    Competitors have generic plans to move up to the 400- and 500-layer levels, but nothing specific.

Kioxia/Western Digital is producing 218-layer BiCS8 product with the intention of reaching the 300-layer level next year. Micron is at the 232-layer level now, also intending to get to the 300 level next year. SK hynix will introduce a 321-layer product next year, while the plans of its Solidigm subsidiary, currently at 192 layers, are not known. YMTC, battling US export restrictions, is at the 232-layer level now, using a twin stringstack. Hankyung reports that YMTC intends to produce a 300-layer product in the second half of this year, which would make it the first NAND fabricator to cross the 300-layer level.

    CTERA adds data theft honeypot decoy

    CTERA has added a decoy file and attack detection facility to its Ransom Protect offering.

The enterprise developer provides geo-distributed global cloud file and object data services, enabling distributed users to access shared and synchronized unstructured data. Its Ransom Protect feature uses machine learning (ML) models to detect anomalous user or app behavior – such as a spike in encrypted writes – and apply preventative measures at once. CTERA has added a decoy files facility to this, so that data exfiltration attempts by insiders or external malware attackers can be detected in real time and countermeasures triggered.

    CTERA CEO Oded Nagel explained: “Data exfiltration poses a severe risk to organizations, as threat actors can leverage stolen sensitive information for extortion, causing immense financial and reputational damage. With our new honeypot functionality as part of CTERA Ransom Protect, we are providing our customers robust active defense against these pernicious attacks, ensuring the protection of their valuable data assets.” 

    Ransom Protect deploys decoy files within a customer’s file system. A blog by CTERA CTO Aron Brand explains: “Any attempt to access them enables CTERA’s software to identify and stop unauthorized access or attempts at data theft, effectively neutralizing threats before significant damage can occur.”
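
CTERA hasn’t published its detection mechanics, but the decoy principle can be sketched in a few lines of Python: watch the access times of planted decoy files and raise an alert the moment one is read. This is a conceptual sketch only – on volumes mounted with noatime/relatime an audit log would be needed instead, and the decoy path is hypothetical:

```python
# Conceptual decoy-file watcher: alert when a planted decoy is read.
import os
import time

DECOYS = ["/shares/finance/payroll_backup.xlsx"]  # hypothetical decoy paths

def alert(path):
    print(f"ALERT: decoy {path} accessed - possible exfiltration attempt")

def watch(decoys, interval=5):
    baseline = {p: os.stat(p).st_atime for p in decoys}
    while True:
        time.sleep(interval)
        for path in decoys:
            atime = os.stat(path).st_atime
            if atime != baseline[path]:  # the decoy was read
                alert(path)              # a real product would also block the user
                baseline[path] = atime

# watch(DECOYS)  # run inside a monitoring service
```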

    This enables Ransom Protect to defend customers against double extortion – an attack combining data exfiltration and encryption which has become a widespread cyber criminal attack method. Attackers first exfiltrate sensitive information from their targets before launching the ransomware encryption routine. They then demand attacked customers make a ransom payment to regain access to their encrypted data, threatening to expose the stolen data if the ransomware demand is not met. 

    The Ransom Protect product provides:

  • Data exfiltration prevention: Decoy files enable real-time detection and blocking of data exfiltration attacks
  • Real-time AI detection: Machine learning algorithms identify behavioral anomalies suggesting fraudulent file activity, and block offending users within seconds
  • Zero-day protection: Does not rely on traditional signature update services
  • Incident management: Admin dashboard for real-time attack monitoring, incident evidence logging, and post-attack forensics
  • Instant recovery: Near-instant recovery of any affected files from snapshots securely stored in air-gapped, immutable cloud object storage
  • One-click deployment: Single-click feature activation on CTERA Edge Filers with the latest version release

    Read more in a CTERA blog.

    Bootnote

    Commvault added ThreatWise honeypot malware deception and detection technology to its Metallic SaaS product in late 2022. Catalogic also introduced equivalent technology that year, with version 4.9 of its DPX product.

    Leil Storage trumpets green ‘hyperscaler’ backup for on-prem environments

    Leil Storage, a startup owned by Estonian storage systems builder DIAWAY, is extending its sustainability credentials in the second half of this year by introducing much lower power backup and archiving systems.

    Earlier this year, Leil Storage introduced backup and archiving platforms benefiting from the operational advantages offered by Host-Managed Shingled Magnetic Recording (HM-SMR) drives. A Leil pitch says: “At Leil Storage, we combined development expertise from Google with support from Western Digital and created a platform that allows more people to store more data at a lower cost while using less energy and making the world a greener place.” 

Aleksandr Ragel, Leil Storage

HM-SMR was previously a format mainly used by hyperscalers. Leil has now developed it to target enterprises and service providers that want lower energy consumption, a lower cost per terabyte, and better performance than legacy storage systems. HM-SMR suits sequential workloads, Leil says, not random IO.

    On the Leil Backup and Leil Archive systems, CEO Aleksandr Ragel told this week’s IT Press Tour in Rome: “We are bringing hyperscaler data storage to on-premises environments. We focus on data storage that is greener than the competition, and which uses the latest technologies to safeguard the data and protect investments.”

At launch, the systems’ HM-SMR disks promised 18 percent lower power usage than non-SMR drives. In the second half of 2024, the same systems – which come with Leil’s software – will gain the Power Disable feature, which promises to cut storage power draw by “25 percent or more” in what Leil calls its Infinite Cold Engine (ICE), a variation on Massive Array of Idle Drives (MAID) technology, and its Arctic Forest products. ICE is phased (a spin-down sketch follows the list):

  • Phase 1: HM-SMR support, 18 percent total energy savings
  • Phase 2: Write-group conception, 2024, 43 percent projected total energy savings
  • Phase 3: Popular Data Concentration (PDC), 2025, 50 percent projected total energy savings
  • Phase 4: AI-driven background service, 2026, 70 percent projected total energy savings
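
The MAID idea underlying ICE can be illustrated with a short, hypothetical Python sketch: spin down any drive that has been idle past a threshold, accepting a spin-up latency penalty on the next access. Device names and the threshold are invented, and Leil’s Power Disable feature operates at the drive power level – the hdparm standby call here is just a stand-in for illustration:

```python
# MAID-style sketch: put idle drives into standby. Illustration only.
import subprocess
import time

IDLE_SECONDS = 1800                  # spin down after 30 idle minutes (assumption)
last_io = {"/dev/sdb": time.time()}  # updated elsewhere on every I/O to the drive

def spin_down_idle_drives():
    now = time.time()
    for dev, last in last_io.items():
        if now - last > IDLE_SECONDS:
            # `hdparm -y` places the drive in standby (spun down); it spins up
            # again automatically on the next access, at a latency cost
            subprocess.run(["hdparm", "-y", dev], check=True)
```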

The Arctic Forest concept adds an immutability layer, built on SaunaFS copy-on-write snapshots, to enhance the existing immutability features – a groundbreaking and exclusive market offering, Ragel says.

    The AI-driven background service will:

  • Gather data on power use, user behavior, and system changes for energy savings
  • Test data classification parameters through simulation and experimentation with combinations
  • Compare actual versus simulated pre-installation energy use and provide recommendations to align with theoretical outcomes

Leil Storage is a commercial closed-source product that extends the open source, POSIX-compliant SaunaFS distributed file system with proprietary green features. Leil Storage effectively equals SaunaFS plus ICE plus the Arctic Forest concept.

Leil Storage architecture slide showing an eight-node rack

    Its storage architecture is rack-based with eight nodes (JBODs) providing 15 PB usable capacity in a rack with a 6+2 erasure coding scheme. Total JBOD capacity is 2.6 PB with 102 x 28 TB HM-SMR drives (Western Digital UltraSMR). These have 256 MiB zones mapped into streamed raw data chunks. A node has 4 x NVMe SSDs, each with a 1.6 TB Drive Writes per Day rating.
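
A quick back-of-envelope check on those numbers: a 6+2 erasure coding scheme stores six data fragments for every two parity fragments, so usable capacity is 6/8 of raw. The gap between that result and the quoted 15 PB presumably goes to filesystem overhead and spare capacity (our assumption):

```python
# Rack arithmetic under a 6+2 erasure coding scheme (usable = 6/8 of raw).
drives_per_jbod = 102
drive_tb = 28
jbods = 8

raw_pb = jbods * drives_per_jbod * drive_tb / 1000
usable_pb = raw_pb * 6 / (6 + 2)
print(f"raw {raw_pb:.1f} PB, usable {usable_pb:.1f} PB")
# raw 22.8 PB, usable 17.1 PB - versus the quoted 15 PB after filesystem
# overhead and spares (our assumption)
```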

Leil says its systems are already backup and archiving targets for cloud data management players including Veeam, Acronis, Cohesity, and Rubrik, with Veritas in progress; all are technology partners of the company.

Ragel identifies competing technologies as tape libraries, high-capacity HDD arrays, object storage systems (using the high-capacity drives), and general software-defined storage when used for nearline, backup target, and archival storage. He says hyperscalers such as Dropbox and Wasabi use HM-SMR drives, and the Ceph and Btrfs storage software products support HM-SMR.

    Ragel maintains that HM-SMR was always equipped with the technology to deliver better and greener performance, but, until now, the software wasn’t being developed to deliver it outside of hyperscalers. Now Leil is making the tech available for general enterprise use.

Leil Storage’s pricing scheme is pay-upfront for the capacity you need (including hardware, software, and support), with options for a three or five-year term. For SaunaFS, customers choose monthly, yearly, or one-time payments for the software license and support, based on raw capacity, also available for three or five-year terms. Hardware is optional and sold separately if needed.

    “We are the bread for large-scale data storage, not the butter, not the caviar. We provide simple products, not monstrous ‘everything for everyone’ systems that cost more,” added Ragel. “We may go into butter space at some point.”

    Bootnote

    Supplier Disk Archive Corporation has spin-down disk technology in its product.

    Micron unveils QLC NAND SSD with more than 200 layers

    Micron has launched a 2500 QLC NAND internal fit client product that’s faster than its 2550 TLC predecessor, despite TLC NAND typically being quicker than QLC NAND.

TLC (triple level cell) NAND has 3 bits per cell while QLC (quad level cell) has 4 bits per cell, meaning the SSD’s controller has to negotiate QLC’s 16 voltage states per cell instead of TLC’s 8. It takes longer to read and write individual cells unless the controller firmware gets clever with such things as a very fast SLC (1 bit/cell) cache and parallel data plane access.
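
The arithmetic behind that statement is simply that each added bit per cell doubles the number of voltage states to distinguish:

```python
# Each extra bit per cell doubles the voltage states a cell must hold, and
# distinguishing 2^n states takes up to 2^n - 1 sensing comparisons.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    states = 2 ** bits
    print(f"{name}: {bits} bits/cell -> {states} states, {states - 1} sense levels")
```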

    Micron’s 2500 is claimed to be the first 200-plus layer QLC drive for OEM desktop and notebook (client) computers. It uses 232-layer QLC NAND, and this gives it 30 percent more density, Micron says, than its 2400 176-layer QLC product, and 31 percent more than its 232-layer TLC NAND, as used in the 2550 M.2 format SSD. This was launched in December 2022 so it has taken 16 months to add the extra bit to the 232-layer NAND cells used in the 2500.

The 2500 is also quicker than the 2450 176-layer TLC product from 2021, with 24 percent faster reads and 31 percent faster writes. Overall, Micron claims the 2500 has best-in-class “performance that beats competitive TLC and QLC-based SSDs,” meaning better than its top five client OEM competitors’ products.

    A table compares the performance of Micron’s recent M.2 format SSDs:

Micron SSD specs table. Micron’s 2400, 2450, and 3400 products are no longer current; they’re included for reference purposes.

    This shows that the 2500 effectively replaces the slower 2550, providing higher capacity, more IOPS, and better bandwidth. It’s even nudging up against the high-end 3500, which also uses 232-layer flash, and in fact provides faster sequential reads – 7.1 GBps versus the 3500’s 7 GBps.

Micron says the 2500 has an up to 45 percent better PCMark 10 benchmark score than three competing TLC products.

The 2500 is available in small (22 x 30 mm), medium (22 x 42 mm), and large (22 x 80 mm) M.2 formats. It is a PCIe Gen 4 x4 drive like the other Micron M.2 drives mentioned, but uses the later NVMe v1.4c interface variant rather than v1.4. The drive uses host-memory buffer technology, meaning there is no DRAM in the drive itself, like its predecessor.

The drive’s endurance (TBW, meaning terabytes written) varies with capacity over the five-year warranty period, with a 2 million hours MTTF rating; a drive-writes-per-day conversion follows the list:

    • 512 GB – 200 TBW
    • 1 TB – 300 TBW
    • 2 TB – 600 TBW
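
Converting those TBW figures to drive writes per day (DWPD) over the five-year warranty confirms this is a light-duty client drive:

```python
# TBW to drive-writes-per-day over the five-year (1,825-day) warranty.
for capacity_tb, tbw in [(0.512, 200), (1, 300), (2, 600)]:
    dwpd = tbw / (capacity_tb * 1825)
    print(f"{capacity_tb} TB: {tbw} TBW -> {dwpd:.2f} DWPD")
# ~0.16-0.21 DWPD: typical light-duty client drive territory
```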

It sips electricity, using less than 2.5 mW in its sleep power state, less than 150 mW in its active idle power state, and less than 6,300 mW (6.3 W) in its active power state.

Micron classifies the 3500 as a performant drive for data-intensive workloads, the 2550 as a mainstream workload drive, and the 2500 as a “value QLC SSD at TLC performance” drive. We think it could replace the 2550 and overlap with the 3500.

    Hammerspace erasure codes Global Data Environment

    Hammerspace has added erasure coding to its data orchestrating Global Data Environment software, enabling it to use less overhead when storing and protecting data.

    The company supplies parallel NFS-based software to manage data in globally distributed and disparate sites, in file (NFS) and object storage using SSDs, disk drives, public cloud services (AWS, Google Cloud, Azure, and Seagate Lyve Cloud), and tape libraries. This enables it to be located, orchestrated, placed, and accessed as if it were local. Hammerspace bought RozoFS, a French startup developing its Mojette Transform erasure coding technology, in May 2023 to protect against data loss with less overhead and faster recovery than RAID.

    David Flynn, Hammerspace founder and CEO, said: “We started this year by adding support for data on tape, then pioneered the Hyperscale NAS architecture for AI and GPU computing, and now we are further expanding our data storage services with high-performance erasure coding. This gives customers even more choice and flexibility when it comes to their storage infrastructure.”

Erasure coding (EC), like RAID, involves mathematically computed codes – parity values in RAID – derived from source data. If some of the source data is lost, it can be recalculated from the surviving fragments and the codes. The codes represent a storage overhead above that of the raw data. In general, a scheme that reduces this overhead and completes recovery in less time is preferable to less efficient schemes; the sketch below illustrates the principle with a single XOR parity.
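
A single XOR parity fragment is the simplest instance of the idea – enough to survive one loss, where schemes like Reed-Solomon or the Mojette Transform tolerate several. A minimal Python sketch (not Hammerspace code):

```python
# Single-parity erasure coding demo: XOR k data fragments into one parity
# fragment, then rebuild any single lost fragment from the survivors.
def encode(fragments: list[bytes]) -> bytes:
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return bytes(parity)

def rebuild(survivors: list[bytes]) -> bytes:
    # XOR of all surviving fragments (including parity) recovers the lost one
    return encode(survivors)

data = [b"AAAA", b"BBBB", b"CCCC"]   # k = 3 data fragments
parity = encode(data)                # 33 percent overhead at 3+1
lost = data[1]
recovered = rebuild([data[0], data[2], parity])
assert recovered == lost
```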

Hammerspace Mojette Transform diagram

    EC generally requires less overhead than RAID. The Mojette Transform is, Hammerspace says, an “extremely CPU-efficient erasure code” that bypasses the complex mathematical calculations of usual erasure codes, leaving more compute power available to applications. The software’s basic algorithm is described here.

    RozoFS’s technology is, Hammerspace claims, “up to 2x faster than traditional erasure coding schemes.” As a result of its simpler mathematical structure, the Mojette Transform needs comparatively less storage space than other EC schemes, such as Reed-Solomon, and is scalable out to billions of files.

    Hammerspace says that this provides a new option for building high-performance storage using its technology and commodity hardware. Examples are when building scratch space for HPC and research environments, using Open Compute Project (OCP) hardware in hyperscale environments, or for building all-flash storage environments for AI initiatives to augment or replace legacy storage.

    Pierre Evenou, Hammerspace VP of Advanced Technology and founding engineer of Mojette Transform, said: “Our goal in developing the Mojette Transform erasure code was to deliver the highest reliability in data protection, coupled with extreme performance, leveraging commodity hardware. The result is delivering close to native performance of the underlying storage hardware to the application and compute environment without sacrificing data protection.”

    Seagate, Storj, Tuxera and Western Digital get NABbed

    It’s time for the National Association of Broadcasters show – NAB 2024 – at the Las Vegas Convention Center, and storage suppliers are vying to see who has the fastest and most efficient storage on offer to store media files and ship them around to apps and users.

    The show runs from April 14 to 17 and is the pre-eminent exhibition and conference for the media and entertainment industry, drawing thousands of content professionals from all corners of the broadcast, media and entertainment ecosystem – including storage.

    The storage suppliers exhibiting include:

    Seagate

    Seagate is showing its LaCie range of external storage products, which get its 24TB disk drives – up from the previous 20TB HDDs. They include:

  • LaCie 1big Dock 24TB – $1,049.00
  • LaCie 2big Dock 48TB – $1,899.00
  • LaCie d2 Professional 24TB – $749.99
  • LaCie 2big RAID 48TB – $1,699.00

    Seagate said the LaCie 1big Dock and 2big Dock simplify editing workflows by enabling users to simultaneously store files, connect peripherals, and charge devices in one hub. The 1big Dock 24TB and 2big Dock 48TB deliver 20 percent more capacity than their predecessors. The LaCie d2 Professional 24TB and 2big RAID 48TB also received boosts of 20 percent in capacity. 

    The company suggests that, for hobbyists or studio experts who are looking at the latest production tools, the 48TB products are suitable for generative AI applications. We understand that to mean Gen AI inferencing, not training.

    The 1big Dock, 2big Dock and d2 Professional products are available now, while the 2big RAID 48TB will ship in May.

    Storj

    Decentralized storage supplier Storj, which says it’s now ten years old, is present and has just appointed its first CMO, Trisha Winter. The startup claims it provides up to 90 percent lower storage costs and significant carbon reduction compared to the public cloud.

    CEO Ben Golub’s prepared quote read: “Storj has grown from a ground-breaking Web3 tech innovation to a proven enterprise solution, giving cloud storage hyperscalers a run for their money. Differentiators that make us fast, secure and highly performant also ensure our product is the greenest cloud object storage on the planet. Our organization is stronger than ever in our tenth year and poised for continued rapid growth, especially with the addition of Trisha Winter to our executive team.”

    Starting this month, Storj’s user interface and invoices share data on each customer’s carbon emissions as a result of their use of Storj, compared with cloud hyperscalers, such as AWS, Azure and Google. There is a Storj whitepaper looking at emissions and savings, and its methodology.

    Tuxera

The Finnish storage and network software developer is exhibiting its Fusion File Share (FFS) product and also has a presentation session with Toast Post Production and Pixitmedia. FFS replaces standard or default SMB software stacks, such as Samba, with a much faster one.

It offers up to 60x higher throughput and as much as 500 percent better scalability than Samba SMB. FFS also offers RDMA capability and is compatible with the SMB protocol up to version 3.1.1. Its scale-out feature enables a parallel, scalable multi-SMB cluster service, providing faster throughput with low CPU and memory usage.

    Veikko Ruuskanen, CEO of Toast Post Production Oy in Helsinki, Finland, explained: “We always prefer to play back raw material on our workstations in real time, regardless of the resolution or file format. Unfortunately, that was not always doable with equipment running Samba SMB server software. At times, we needed to create local caches which slowed down the workflow, frustrating both our artists and our clients. With the Fusion File Share software, we found that we could instantly manage real-time transfers of the most demanding file types. The change was immediate.”

    Western Digital

    WD is showing its SanDisk SD-format flash card products, G-Drive external disk drive models, and Ultrastar Transporter and JBOD offerings. The SD flash cards include the SanDisk SD Express (128GB/256GB) and microSD Express (128GB/256GB) products. But there are new memory cards as well:

    • 2TB SanDisk Extreme PRO SDXC UHS-I memory card – World’s first 2TB UHS-I SDXC card
    • 2TB SanDisk Extreme PRO microSDXC UHS-I memory card – Western Digital’s first and world’s fastest 2TB UHS-I microSD card
    • 4TB SanDisk Extreme PRO SDUC UHS-I memory card – the world’s first 4TB UHS-I SD card

    Desktop external 7,200rpm Ultrastar disk drives with 24TB capacities:

    • 24TB G-DRIVE ($699.99) – Ultra-reliable storage supporting USB-C (10Gbit/sec) for fast backup
    • 24TB G-DRIVE PROJECT ($929.99) – Backup and save project work. Compatible with Thunderbolt 3 and USB-C (10Gbit/sec). Also features a PRO-BLADE SSD Mag slot for modular SSD performance in sharing and editing across devices
    • 48TB G-RAID MIRROR ($1,599.99) – Ships in RAID 1 (Mirroring) to automatically create a duplicate of your working files for data redundancy. Compatible with Thunderbolt 3 and USB-C (10Gbit/sec). Also features a PRO-BLADE SSD Mag slot for modular SSD performance in sharing and editing across devices
    • 96TB G-RAID SHUTTLE 4 ($4,499.99) – Transportable four-bay hardware RAID solution allows for super-fast access and real-time video editing. This device ships in RAID 5, and supports RAID 0, 1 and 10 configurations
    • 192TB G-RAID SHUTTLE 8 ($7,499.99) – Transportable eight-bay hardware RAID product for consolidated backup, whether on location or in the studio. It supports RAID 0, 1, 5, 6, 10, 50, and 60 configurations, and provides transfer rates up to 1690MB/sec read and 1490MB/sec write in default RAID 5.

    Data transport and JBOD devices:

  • Ultrastar Transporter – Offers up to 368TB of fast NVMe SSD performance and dual-port 200GbE connectivity for storing, editing, and physically transporting dailies and massive files from one location to the next, including the cloud. Weighs less than 30lb (13.6kg), features a durable chassis design, and includes a transport case. Adds data security with a tamper-evident case and is designed for FIPS 140-2 Level 2 compliance with Trusted Platform Module (TPM) version 2
  • Ultrastar Data102 JBOD Platform – External storage platform for mass storage, backup, online accessible archive, and nearline content. Supports up to 24Gb SAS to the host and up to 2.65PB in a 4U enclosure with 24TB Ultrastar HDDs.

Visit the Western Digital booth #SL5041 in South Hall Lower. The 2TB memory cards are expected to be available at authorized retailers, e-tailers, and the Western Digital store this summer. The 4TB card is expected to be available in 2025.

    Backblaze introduces Event Notifications for enhanced workflow automation

    Backblaze has added Event Notification data change alerts to its cloud storage so that such events can be dealt with faster by triggering automated workflows.

The fast-growing B2 Cloud Storage provides S3-compliant storage for less money than Amazon S3 and with no egress charges. AWS offers a Simple Queue Service (SQS) designed for microservices, distributed systems, and serverless applications, enabling customers to connect components together using message queues. An S3 storage bucket can be configured to send notifications for specific events, such as object creation, to SQS and on to SQS queue-reading services, which in turn can inform upstream applications to trigger processing.
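
On AWS, that wiring looks roughly like the following boto3 sketch, which sends object-created events from a bucket into an SQS queue that a worker then polls. The bucket name and queue ARN are hypothetical:

```python
# Configure S3 event notifications into SQS, then poll the queue.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="media-ingest",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:ingest-events",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)

# A worker polls the queue and triggers downstream processing
sqs = boto3.client("sqs")
messages = sqs.receive_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/ingest-events",
    WaitTimeSeconds=20,
)
```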

    Gleb Budman, Backblaze CEO and chairperson, said: “Companies increasingly want to leverage best-of-breed providers to grow their business, versus being locked into the traditional closed cloud providers. Our new Event Notifications service unlocks the freedom for our customers to build their cloud workflows in whatever way they prefer.”

    This statement was a direct shot at AWS, as evidenced by an allied customer quote from Oleh Aleynik, senior software engineer and co-founder at CloudSpot, who said: “With Event Notifications, we can eliminate the final AWS component, Simple Queue Service (SQS), from our infrastructure. This completes our transition to a more streamlined and cost-effective tech stack.”

    Event Notifications can be triggered by data upload, update, or deletion, with alerts sent to users or external cloud services. Backblaze says this supports the expanding use of serverless architecture and specialized microservices across clouds, not just its own.

    It can trigger services such as provisioning cloud resources or automating transcoding and compute instances in response to data changes. This can accelerate content delivery and responsiveness to customer demand with automated asset tracking and streamlined media production. It also helps IT security teams monitor and respond to changes, with real-time notifications about changes to important data assets.

    Event Notifications is now available in private preview with general availability planned later this year. Interested parties can join a private preview waiting list.