
LucidLink offers global teams tools for real-time, secure file access

LucidLink is providing unified real-time file-based collaboration among distributed teams working on massive projects, with instant, secure file access across desktop, web, and soon mobile.

Startup LucidLink sells file collaboration services to distributed users. Its Filespaces product streams parts of files from a central cloud repository, providing fast access to large files, protected by zero-knowledge encryption. All locally cached data and metadata on client devices are stored encrypted on the local disk. This sub-file streaming contrasts with the full-file sync ‘n’ share approach that, LucidLink says, characterizes the services offered by CTERA, Egnyte, Nasuni, and Panzura. LucidLink’s software is used by entertainment and media companies, digital ad agencies, architectural firms, and gaming companies.

Peter Thompson, LucidLink co-founder and CEO

Peter Thompson, co-founder and CEO of LucidLink, stated: “The new LucidLink is both an evolution of everything we’ve built so far and a revolution in how teams collaborate globally. For the first time, teams can collaborate instantly on projects of any size from desktop, browser or mobile, all while ensuring their data is secure.

“This milestone release marks a new chapter in our mission to make data instantly and securely accessible from anywhere and from any touchpoint. As we introduce more new features in the coming months, our focus remains on empowering teams to collaborate seamlessly, wherever they are.”

The real-time mobile collaboration capabilities will arrive in the first quarter of next year.

LucidLink says the latest software involves no downloading, syncing, or transferring data. It has a new desktop and web interface, streamlined onboarding, and flexible pricing for teams of all sizes, from freelancers to large enterprises, working from home, in datacenters or the public cloud.

LucidLink is providing a new desktop interface and a global user concept in which users can join multiple Filespaces across desktop, web, and soon mobile devices. There is a faster and smoother installation process for macOS users, which “eliminates reboots or security changes.”

There is cloud flexibility as users can choose LucidLink’s bundled, egress-free AWS storage options or bring their own cloud storage provider.

The new LucidLink PC/notebook interface

There are more features scheduled for early 2025:

  • Mobile apps for Android and iOS: Full-featured mobile apps will give users immediate access to data.
  • External link sharing: Users can share content with external collaborators without needing the desktop application.
  • Browser-based upload: Users can drag and drop files directly from their browser for seamless collaboration.
  • Multi-Factor Authentication (MFA) and SAML-based SSO: Enhanced security options for all users.
  • Guest links: Teams can collaborate securely without requiring full user accounts.

An upcoming Filespaces upgrade tool will provide a smooth path to the new LucidLink for existing customers.

LucidLink says Spotify, Paramount, Adobe, and other creative teams worldwide have used LucidLink to increase productivity fivefold, access global talent, and “free their people to focus on creating.” 

We note that CTERA says its technology also offers “direct read/write access from the cloud, allowing desktop and server applications to handle large files effortlessly, without the need to upload or download them in their entirety. The data is streamed on-demand, allowing tools like Adobe Premiere or DaVinci Resolve to function smoothly and quickly, no different than if you were using a local disk.”

Bootnote

LucidLink’s Filespaces have a split-plane architecture in which data and metadata planes are managed separately. The metadata is synchronized through a central metadata service provided by LucidLink, while the data is streamed directly to and from the cloud or an on-premises object store.
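As a rough illustration of that split, here is a minimal Python sketch – hypothetical names, not LucidLink’s actual interfaces – in which a metadata lookup returns the extents that make up a file and only those byte ranges are then streamed from the object store.

```python
# Conceptual sketch of a split-plane design: metadata is resolved through a
# central service while file data is range-read directly from an object store.
# All names are hypothetical and do not reflect LucidLink's actual interfaces.

class MetadataService:
    """Metadata plane: tracks which object extents hold which file ranges."""
    def __init__(self):
        self.extent_map = {}  # path -> list of (object_key, offset, length)

    def lookup(self, path):
        return self.extent_map.get(path, [])


class ObjectStoreClient:
    """Data plane: range-reads objects from cloud or on-prem object storage."""
    def __init__(self, store):
        self.store = store  # object_key -> bytes, standing in for a real store

    def read_range(self, object_key, offset, length):
        return self.store[object_key][offset:offset + length]


def read_file(path, metadata, data_plane):
    """Stream only the extents that make up the requested file."""
    return b"".join(
        data_plane.read_range(key, off, length)
        for key, off, length in metadata.lookup(path)
    )
```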

Datadobi boosts StorageMAP with faster data mobility

Having pushed out the seventh major version of its unstructured data charting and moving tool in June, Datadobi says it has made StorageMAP faster, more scalable, and better able to deal with the now end-of-life Hitachi Data Ingestor (HDI).

StorageMAP software scans and lists (maps) a customer’s file and object storage estates. It can then optimize storage use by migrating old and cold data to lower-cost archival storage, for example, and deleting dead data. Datadobi says warmer – more frequently accessed – data could be tagged, for example, for use in AI training and inference work. Warm data could also be migrated to a public cloud for access by compute instances there.
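As an illustration of that kind of metadata-driven decision, the sketch below – hypothetical Python, not Datadobi code – buckets files by last-access age so cold data can be queued for archival migration and warm data tagged for reuse.

```python
# Hypothetical sketch of metadata-driven tiering: bucket files by last-access
# age so cold data can be queued for archive migration and warm data tagged.
# This is an illustration, not Datadobi's implementation.
import time

DAY = 86400  # seconds

def classify(last_access_epoch, now=None):
    age_days = ((now or time.time()) - last_access_epoch) / DAY
    if age_days > 365:
        return "cold"   # candidate for low-cost archival storage
    if age_days > 90:
        return "warm"   # candidate for tagging, e.g. AI training sets
    return "hot"        # leave on primary storage

def plan_actions(files):
    """files: iterable of (path, last_access_epoch) tuples."""
    plan = {"cold": [], "warm": [], "hot": []}
    for path, atime in files:
        plan[classify(atime)].append(path)
    return plan
```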

Carl D’Halluin, Datadobi CTO

v7.0 added custom dashboards and an analysis module. According to Datadobi CTO Carl D’Halluin: “StorageMAP 7.1 takes it a step further and solves some focused challenges facing our customers globally, including offering an innovative HDI Archive Appliance Bypass feature, example dashboards, and the most important one, improvements to scalability and performance.”

StorageMAP has a uDME feature, an unstructured Data Mobility Engine. This moves, copies, replicates, and verifies large and complex unstructured datasets based on trends and characteristics derived from the metadata intelligence stored in the StorageMAP metadata scanning engine’s catalog. 

Datadobi says the uDME has been made faster and more scalable, capable of handling greater capacities and larger numbers of files and objects.

An HDI Archive Appliance Bypass feature – we’re told – gets data faster from the primary NAS and archive (HCP) sides of an HDI installation, HDI being a file storage system that can move data off a primary NAS to a backend HCP vault for cheaper, long-term storage. With HDI now defunct, customers may need to migrate their data to actively supported NAS and backend stores, but the HDI software impedes data migration.

D’Halluin says it has “significant performance limitations that make migrating all active and archived data an extremely slow process typically riddled with errors.”

StorageMAP has a bypass that “involves using multiple StorageMAP connections to the storage systems – one connection to the primary storage system and a second connection to the archive storage system. These connections effectively bypass the middleware HDI archiving appliance, which is responsible for both relocating data to the archive storage system and retrieving it when a client application requests archived data.”
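A hedged sketch of that dual-connection idea, in hypothetical Python rather than Datadobi’s implementation: one connection reads live files from the primary NAS, the other pulls archived content for stubbed files straight from the archive store, and the HDI middleware is never in the data path.

```python
# Hypothetical illustration of the dual-connection bypass quoted above; the
# primary/archive/target objects are placeholders, not Datadobi software.

def migrate(primary, archive, target, file_list):
    """primary, archive, target expose is_stub()/read()/write() for this sketch."""
    for path in file_list:
        if primary.is_stub(path):
            data = archive.read(path)   # second connection: archived body from HCP
        else:
            data = primary.read(path)   # first connection: live file from the NAS
        target.write(path, data)        # land on the actively supported platform
```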

This is an alternative to the Hitachi Vantara-CTERA deal for moving data off HDI.

Lastly, Datadobi has added example dashboards to help customers take advantage of v7.0’s custom dashboard feature – examples “that a customer can refer to for ideas to include in their own custom dashboards.”

Check out StorageMAP here.

NAKIVO adds Microsoft 365, Proxmox, and Spanish language support

NAKIVO has boosted its backup offering with additional VM support, Microsoft 365 protection, Spanish language adoption, and extended cybersecurity.

Sparks, Nevada-based NAKIVO was founded in 2011, five years after industry leader Veeam, to provide virtual machine and then physical server backup to small and medium enterprises. It says it has more than 29,000 customers spread across 183 countries who buy from more than 300 MSPs and 8,600 partners. That customer count is well short of Veeam’s 450,000-plus but is plenty high enough to give NAKIVO a viable business.

Bruce Talley, NAKIVO co-founder and CEO

CEO Bruce Talley is the co-founder and his founding partners are Ukraine-based VP of Software Nail Abdalla and Turkey-based VP of Product Management Sergei Serdyuk. Talley said of the latest Backup & Replication v11 release: “With v11, we’re introducing features that align with today’s demands for flexible data protection, increased security, and multilingual support. Our goal with this release is to provide a comprehensive solution that supports data resilience for businesses worldwide.”

There is added support for open source, KVM-based Proxmox VE, which “has become a mainstream virtualization solution,” reflecting the move away from Broadcom-acquired VMware by some customers. Both Veeam and Rubrik have added Proxmox VE support in recent months. NAKIVO provides agentless VM backups, incremental backups, multiple backup targets, as well as encryption and immutability for backups in both local and cloud repositories.

v11 adds Microsoft 365 backup to the cloud, including Amazon S3, Wasabi, Azure Blob, Backblaze B2, and other S3-compatible storage targets. The Backup Copy feature means customers can create multiple backup copies and store them in various locations – on tape, in the cloud, on S3-compatible storage, or on network shares – which strengthens disaster recovery capabilities.

Adding Spanish language support, as Rubrik has done, means customers can operate and manage NAKIVO’s software using Spanish, and also access its website, educational content, and user documentation in Spanish.

v11 supports NAS (network-attached storage) backup, source-side backup encryption, which is integrated with the AWS Key Management Service (KMS), and NetApp FAS and AFF storage array snapshots. Customers can back up their VMware VMs stored on these devices this way. Supported storage devices now include HPE 3PAR, Nimble Storage, Primera, and Alletra, as well as the NetApp arrays.

It also introduces a Federated Repository feature. This allows customers to create a scalable storage pool from multiple repositories, or “members,” which automatically work together to ensure continuous operation. If a repository reaches capacity or becomes inaccessible, backups are seamlessly redirected to available members, ensuring uninterrupted protection and access to data.
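The sketch below illustrates that redirect behavior in hypothetical Python – a backup lands on the first member that is online and has space – and is not NAKIVO’s implementation.

```python
# Hypothetical sketch of a federated repository: write a backup to the first
# member that is reachable and has free capacity. Illustration only.

class Member:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb
        self.online = True

class FederatedRepository:
    def __init__(self, members):
        self.members = members

    def write_backup(self, backup_id, size_gb):
        for m in self.members:
            if m.online and m.free_gb >= size_gb:
                m.free_gb -= size_gb          # place the backup on this member
                return f"{backup_id} stored on {m.name}"
        raise RuntimeError("no member can accept this backup")
```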

Customers can scale storage capacity by adding or removing members as needs change, optimizing resource use without unnecessary costs. For MSPs, and in addition to the existing MSP Console, v11 introduces the Tenant Overview Dashboard, a centralized tool designed for MSPs to monitor and manage all tenants in one place.

Other additions include the extension of Real-Time Replication (Beta) for VMware functionality to cover vSphere 8.0. Customers can create replicas of vSphere 8 VMs and keep them updated as changes are made, as frequently as once per second. They can also now enable immutability for backups stored on NEC HydraStor systems.

NAKIVO Backup & Replication v11 is available for download, with a datasheet accessible here. Customers can either update their version of the solution or install the 15-day Free Trial to check how the new features work.

Toshiba drives power CERN’s data demands at the LHC

CERN, with more than 120,000 disk drives storing in excess of an exabyte of data, is probably Toshiba’s largest end-user customer in Europe. Toshiba has released a video about how its drives are used to make Large Hadron Collider (LHC) data available to hundreds of physicists around the world who are looking into how atoms are constructed.

The Toshiba drives are packaged inside a Promise Technology JBOD (just a bunch of drives) chassis, and CERN has been a long-term customer, starting with Promise’s 24-bay VTrak 5800 JBOD and Toshiba’s 4 TB Enterprise Capacity drives. Drive capacities have increased over time, up to the 18 TB MG09 series.

When the LHC smashes atoms into each other, component particles are spun off and detected. Its collision detectors operate around the clock, and as the LHC breaks atoms up into myriad component particles, masses of data are generated – around 1 TB per minute, 60 TB per hour, 1.44 PB per day, and 10.1 PB per week.
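Those figures are consistent with each other, as a quick check shows (decimal units):

```python
# Quick arithmetic check of the quoted data rates (decimal units).
tb_per_min = 1
tb_per_hour = tb_per_min * 60            # 60 TB/hour
pb_per_day = tb_per_hour * 24 / 1000     # 1.44 PB/day
pb_per_week = pb_per_day * 7             # ~10.1 PB/week
print(tb_per_hour, pb_per_day, round(pb_per_week, 2))  # 60 1.44 10.08
```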

The data is organized and accessed within CERN’s EOS in-house file system. It currently looks after more than 4,000 Promise JBODs and the aforementioned 120,000-plus drives.

Toshiba is now testing 20 TB MG10 series drives in a Promise 60-bay, 4RU, VTrak 5960 SAS chassis, which has so-called GreenBoost technology. This is based on intelligent power management, which Promise says can deliver “energy savings of up to 30 percent when compared to competing enclosures.”

Promise CMO Alice Chang said: “The energy crisis is now a real challenge to all enterprises, including CERN. The VTrak J5960 offers a well-rounded solution to solve this dilemma, and we are confident that Toshiba’s Enterprise Capacity HDDs, installed and operated in this JBOD, will support CERN’s future need for growing data storage capacity in a reliable and energy-efficient way.”

Rainer Kaese, Senior Manager Business Development, Storage Products Division at Toshiba, said: “We continue to develop higher capacities, up to 30 TB and beyond, as HDDs are and will remain essential for storing the exabytes of data that CERN and the entire world produce in a cost-effective and energy-efficient manner.” 

That’s a sideswipe at the idea that SSDs will replace disk drives for mass capacity online data storage.

Veeam partners with Continuity Software to fend off ransomware attacks

Backup vendor Veeam is increasing its data security capabilities via an anti-ransomware partnership with Continuity Software to boost customer cyber-resiliency.

Continuity Software’s StorageGuard solution analyzes the security configuration of storage and backup systems. It says it scans, detects, and fixes security misconfigurations and vulnerabilities across hundreds of storage, backup, and data protection systems – including Dell, NetApp, Hitachi Vantara, Pure, Rubrik, Commvault, Veritas, HPE, Brocade, Cisco, Cohesity, IBM, Infinidat, VMware, AWS, Azure, and now Veeam. 

Andreas Neufert, Veeam VP of Product Management, Alliances

This Continuity collaboration follows a Veeam-Palo Alto deal in which apps are being integrated with Palo Alto’s Cortex XSIAM and Cortex XSOAR systems for better cyber incident detection and response.

Veeam’s Andreas Neufert, VP of Product Management, Alliances, stated: “Partnering with Continuity is an additional step towards helping our customers maintain a safer security posture in compliance with specific regulations including CIS Control, NIST, and ISO throughout their Veeam Data Platform life cycles. The partnership helps to ensure our industry-leading technology, [and] also the surrounding environment, is continuously checked for misconfigurations and vulnerabilities to withstand cyberattacks, as well as adhering to ransomware protection best practices.”

Gil Hecht, Continuity Software CEO

Continuity becomes a Veeam Technology Partner (TAP), and the two companies say Continuity’s StorageGuard will provide automatic security hardening for environments to improve customers’ security posture, comply with industry and security standards, and meet IT audit requirements.

We’re told StorageGuard is a complementary offering to the Veeam Data Platform, enabling customers to automatically assess the security configuration of their environment, while validating the security of all backup targets, including disk storage systems, network-attached storage (NAS), cloud, and tape that connect to customers’ environments.

StorageGuard can prove audit compliance with various security and industry standards, such as ISO, NIST, PCI, CIS Controls, DORA, and so on.

Continuity CEO Gil Hecht said: “The partnership with Veeam is a testament to the powerful value proposition StorageGuard delivers. Veeam customers can get complete visibility of security risks across all their backup and data protection environments, while ensuring their Veeam and backup storage systems are continuously hardened to withstand cyberattacks.”

Pure Storage intros on-prem VMware migration service to Azure

On-prem VMware users with external block storage can face problems moving to the Azure cloud, and Pure Storage is hoping its AVS (Azure VMware Solution) product will attract customers with those issues.

The problems faced by organizations center on reproducing in the Azure cloud the external block storage facilities they have on-premises. For example, they may be using vSphere Storage APIs for Array Integration (VAAI) and vSphere Virtual Volumes (vVols) in their VMware environment, and support for them is lacking in Azure, according to Pure. They may also find it difficult to separate compute and storage instances in Azure for their vSphere environment, ending up paying for storage or compute through combined instances. Pure says it can fix these refactoring issues with its AVS fully managed block Storage-as-a-Service (STaaS).

Shawn Hansen

Pure’s Shawn Hansen, GM for its Core Platform, stated: ”Enterprises have struggled for years with the inefficiencies and high costs tied to migrating VMware workloads to the cloud. [AVS] eliminates these obstacles by providing seamless, scalable storage as-a-service that scales efficiently and independently with business needs.”

Scott Hunter, VP Microsoft Developer Division, said: “Through this collaboration, Pure and Microsoft can better serve customer needs by enabling them to provision, use and manage Pure Storage on Azure just like other Azure services.”

AVS decouples Pure’s block storage, Azure Cloud Block Store, and compute in the Azure Cloud. It provides an external storage option for organizations needing to migrate storage volumes and VMs to Azure, providing the same on-prem block storage experience for VMs running in the Azure cloud. VAAI and vVols are supported.

AVS optimizes Azure storage instances, with Pure claiming customers can save up to 40 percent on their Azure VMware Solution costs when using it. It says data protection is built in with Pure’s SafeMode Snapshots, enabling systems to be back up and running in minutes when data needs to be restored.

Because the storage environment, as seen from VMware, is the same on-premises and in Azure, a single hybrid data plane is in operation. Pure says IT teams can centrally manage their storage and monitor usage without having two separate silos to look after.

AVS, a development of Pure’s Azure Native Integrations service, is being announced ahead of its preview stage, which Pure says will begin soon.

Lightbits brings high-performance block storage to Oracle cloud

The Lightbits cloud virtual SAN software has been ported to Oracle Cloud Infrastructure (OCI), where it delivers fast, low-latency block storage. 

Lightbits block storage software has, until now, run in the AWS and Azure clouds, using ephemeral storage instances to deliver block storage that is faster than standard cloud block storage and also costs less. It creates a linearly scalable virtual SAN by clustering virtual machines via NVMe over TCP, and can deliver up to 1 million IOPS per volume with consistent latency down to 190 microseconds.
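For context, a Linux client typically attaches to an NVMe/TCP target with the standard nvme-cli tool, roughly as sketched below; the address, port, and NQN are placeholders, and Lightbits deployments may automate or wrap this step.

```python
# Sketch of attaching a Linux host to an NVMe/TCP target via nvme-cli.
# The address and NQN are placeholders; real deployments may automate this.
import subprocess

def connect_nvme_tcp(traddr, nqn, trsvcid="4420"):
    subprocess.run(
        ["nvme", "connect",
         "-t", "tcp",      # transport type
         "-a", traddr,     # target IP address
         "-s", trsvcid,    # NVMe/TCP service port (4420 by convention)
         "-n", nqn],       # NVMe Qualified Name of the target subsystem
        check=True,
    )

# Example (placeholder values):
# connect_nvme_tcp("10.0.0.10", "nqn.2016-01.com.example:subsys-1")
```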

Lightbits, with certification on OCI, claims it enables organizations to run their most demanding, latency-sensitive workloads with sub-millisecond tail latencies, perfect for AI/ML, latency-sensitive databases, and real-time analytics workloads.

Kam Eshghi, Lightbits co-founder and chief strategy officer

Kam Eshghi, co-founder and chief strategy officer of Lightbits, stated: “Certification on OCI marks a major step forward for Lightbits. We’re delivering a breakthrough in block storage performance, giving organizations the tools they need to migrate their most demanding applications to OCI and achieve faster, more reliable, and more efficient cloud services.”

OCI FIO benchmark runs, conducted with BM.DenseIO.E5.128 bare-metal OCI Compute shapes, supported by two BM.Standard.E5.192 shapes as clients, running Lightbits software on Oracle Linux 9.4, revealed:

  • 3 million 4K random read IOPS and 830K 4K random write IOPS per client with a replication factor of three, saturating the 100GbE network card configuration on BM.Standard.E5.192 servers.
  • Sub-300 microsecond latencies for both 4K random read and write operations, and 1ms latency when fully utilizing the clients for both random reads and writes – delivering fast performance even under heavy loads. 
  • In a mixed workload scenario (70 percent random reads, 30 percent random writes), each client achieved a combined 1.8 million IOPS, “setting a new benchmark for efficiency at scale.”

The Lightbits software on OCI can scale dynamically without downtime and has “seamless integration” with Kubernetes, OpenStack, and VMware environments. There is built-in high resiliency and availability, with snapshots, clones, and distributed management to prevent single points of failure.

Cameron Bahar, OCI SVP for Storage and Data Management

Cameron Bahar, OCI SVP for Storage and Data Management, said: “Our collaboration with Lightbits and its certification on OCI delivers a modern approach to cloud storage with the performance and efficiency that enables our customers to bring their latency demanding enterprise workloads to OCI.”

Coincidentally, Lightbits competitor Volumez is also available in OCI, having been present in the Oracle Cloud Marketplace since September, claiming it provides 2.83 million IOPS, 135 microseconds ultra-low latency, and 16 GBps throughput per volume. It says users can harness Volumez’s SaaS services to create direct Linux-based data paths using a simple interface for Oracle Cloud Infrastructure (OCI) Compute VMs.

Find out more about Lightbits on OCI here and about OCI itself here.

Seagate developing ruggedized SSD for orbiting satellites

Seagate is developing an SSD that can operate in the vacuum of space aboard low Earth orbit satellites, providing mass storage for satellite applications.

It is testing this concept by having its Seagate Federal unit ship an SSD up to the International Space Station (ISS) as part of a BAE Systems Space & Mission Systems payload. This environmental monitoring payload was ferried to the ISS by NASA, where astronauts installed it.

The payload included a processor and real-time Linux plus containerized applications, and Microsoft Azure Space was involved in this aspect of the mission. Microsoft is developing an Orbital Space SDK and views satellites as a remote sensing and satellite communications edge computing location. Naturally, such a system needs mass storage.

The mission is scheduled to last for one year. At that point, the payload will be uninstalled and returned to Earth. BAE Systems engineers and scientists will analyze it to assess its impact from the space environment. Seagate’s engineers will check the SSD and its telemetry to see how well it withstood the rigors of space.

Unlike the space environment outside the ISS, the interior has atmospheric pressure, temperature control, and insulation from solar radiation. The SSD must operate while withstanding stronger solar radiation, extreme cold, and the vacuum of space.

Last year, HPE servers equipped with Kioxia SSDs were used on the ISS as part of NASA and HPE’s Spaceborne Computer-2 (SBC-2) program. We asked if this BAE program is doing pretty much the same thing as far as the storage drive is concerned.

Seagate said: “The main difference here is that we are being used outside the ISS, rather than inside it. The interior of the ISS is scientifically engineered to be a pristine environment, as it needs to protect its human inhabitants. It has a lot of shielding and air conditioning, which actually makes it a more desirable location than most places on Earth. We’re working to design technology that can survive without these advantages and function under higher levels of radiation, where there is no monitored climate or temperature, and in the vacuum of space – outside, in low Earth orbit (LEO).”

Seagate told us this task was undertaken to determine if technology could enhance LEO data storage capabilities. If successful, it could aid in extending content delivery networks (CDNs) for new AI-powered workflows. Satellites already provide the last mile connection to areas without fiber and cell connectivity, and with storage as part of the equation, AI inferencing could then occur in more places.

The SSD’s design “was based around Seagate SSD technology … however, our design is a 3U-VPX form factor, completely different than typical SSDs.” The form factor comes from the avionics world and has a size of 100 x 160 mm. VPX is used to specify how computer components connect across a VME bus and has been defined by the VMEbus International Trade Association (VITA) working group.

A 4 TB SSD is being used. We’re told: “For most of the mission the drive is being used as a general-purpose storage device to store mission data. Seagate SSD drives support FIPS140-2 data encryption, but that was not used in this mission. In addition to general-purpose storage, we ran special stress tests on the drive during parts of the mission and collected telemetry. Those tests revealed that many SSDs are susceptible to certain levels of radiation, and many are corrupted at the same exposure level. So, we did a lot of ‘failure testing’ to reverse engineer ways to make them more resistant and robust. On top of radiation soft errors under stress, we were also interested in measuring temperature and current.”

We wondered what this could mean for consumers and enterprises in the future. A Seagate spokesperson said: ”We intend to make this storage device available for purchase to both commercial and military aerospace customers. Right now, the market consists of off-the-shelf drives as a low-cost option, which, as you would expect, have a handful of faults for these applications. However, the opposite end is expensive military-grade hardware. We’re aiming to bridge the gap between the two and make the technology more accessible to consumers and enterprises. We are also looking whether this sort of ruggedized solution might be useful for terrestrial applications.”

Qumulo CEO charts path to tackle hybrid cloud and AI markets

Profile. Qumulo brought in Doug Gourlay in July, appointing him as president and chief executive. Gourlay also joined Qumulo’s board of directors. The business has an on-premises scale-out file system and cloud-native product that can be used in the general file-focused unstructured primary data market.

Qumulo has more than 1,000 customers in 56 countries, with several adjacent and overlapping markets. Segments it serves include cloud file services; public cloud unstructured data; high-performance computing (HPC); file lifecycle management and orchestration; and GenAI training and inferencing, for example. Gourlay’s stated mission is to accelerate the business’s growth, but he says there is a constant tension as Qumulo and its leadership consider its markets. Which are worth pursuing and which are niche – for now, the mid-term, or the long term?

Gourlay tells Blocks & Files he has to consider things including the company’s product set, the core culture of the company, and market developments before he can steer Qumulo in the right direction.

Doug Gourlay

He says: “There was an obvious gap that the company did have, and that gap was it didn’t have a good story, it didn’t have a good talk track of where it was and where it was going. And a lot of our customer base wanted to know that the boat had a rudder and which way it was pointing.”

There are advantages to being a niche product, Gourlay acknowledges, though he doesn’t see Qumulo that way: “It’s not a niche product, it’s an enterprise-wide primary storage offering. And the downside of that is the niche offerings command significant price premiums because they do something in that niche that somebody thinks is valuable. Their go-to-market organization knows exactly what to call on, and they talk to this person, that person goes, wow, that’s amazing. That solves so much for me that I’m willing to pay you a metric ton of cash for.”

The more general-purpose products have lower prices. Should Qumulo move to these higher-priced niches? Gourlay says: “If I turned to just doing a specialty offering, we would lose the top-line revenue that the company depends on. If I just rotate it tomorrow to only do cloud, even though the unit economics are better, the margins are better, the deal velocity is better, I kill the company.”

Cloud and hybrid cloud

Blocks & Files sees Qumulo as a hybrid – on-premises and public cloud – company. Gourlay responds: ”Our customers are hybrid. And our cloud story is very simple. I can state in less than 30 seconds and probably one sentence: our job is to align our technologies with our customers’ priorities. Our job in cloud, therefore, is to give our customers the ability to make a business or economic decision about where their data and workloads reside and to eliminate all the technology barriers to that decision.”

He doesn’t buy into simple cloud repatriation stats, saying the situation is much more nuanced than we might think: “Michael Dell … posted to LinkedIn … four months or four weeks ago, which was 85 percent of the customers we polled are repatriating from the cloud. They’re going back on-prem.”

“OK, I have a book called How to Lie with Statistics. All right, if I tell you that data, it might be 100 percent accurate, but I’m not telling you how many workloads are moving. Somebody moving one workload? Check! You’re repatriating from the cloud.”

But it’s only a single workload, out of tens or hundreds. “There’s workloads that make sense in the cloud, and there’s workloads that don’t. Now, if I’m a cloud-only company, I’m going to tell you a completely different story. If the whole world is going cloud then, if you’re not going cloud, you’re a Luddite. If the answer isn’t cloud, the question is wrong. It’s cloud-first. It’s cloud-only.”

For now, Gourlay says: “Cloud, this quarter, is probably a fifth of our revenue contribution. It’s about 20 to 28 percent.”

It’s likely growing and the need for a hybrid data fabric with good on-prem-to-cloud data movement (and no doubt the other way) is too. He asks rhetorically: ”Can we invest in advanced technologies that allow rapid data replication between the cloud and the on-prem that accelerated and overcome some of the bandwidth delay product issues? So we’re testing those right now, with tremendous early results and further layering on capability that differentiates and maps these use cases.”

He sees similar one-dimensional – and wrong – thinking elsewhere: “If I’m Jensen (Huang), it’s all about the AI center, the datacenter is the AI center, the future is all GPUs and all our type of computing.”

This is half-blind thinking, he says: “These are wonderful stories from myopic points of view. Our customer base has a mosaic point of view, not a myopic one. And they want a system that works for their AI workloads, for their own workloads and their cloud workloads. And they want one that is consistent in operation and capability and capacity, regardless of which modality they’re executing against. That’s our job. That’s our job more than anything.”

He wants Qumulo and its people to have “organizational alignment around a common vision and strategy. A common technology evolution that addresses key priorities within our customers, whether those are AI, whether those are cloud adoption, whether those are a shift to different virtual machine types inside their datacenter, whether these are this ever expanding data set that our customers are getting weighed under.”

“Historically, they were deploying different file systems and different storage systems for each application. One of the great things about what we do is we can consolidate those. One of the great things about the way that we move the data inside of the system is we can start collapsing tiers very cost effectively.”

Having a global namespace (GNS) as an integral part of the product is a vital facility for Gourlay. He is not a believer in adding an external GNS, which is one way he views Hammerspace: “The unfortunate problem of an overlay GNS on third-party systems is no system has the verbs in place to give them an assurance that the data has been durably written to multiple targets before they acknowledge the data. And in that scenario, you end up with a tremendous risk of data loss, which our customers have experienced, multi petabyte data losses.”

“It’s funny that I looked throughout all of our marketing material, and that never came out to me. I see customers drawing up a four tier storage system. I’m like, but guys, I could put a lot of QLCs in here and a caching layer, and I could do hybrid over there. But if I just increase the cache sizes and the SSDs, [then] haven’t I really collapsed two or three of those tiers together, at a similar price point, with a larger storage cluster?”

Gourlay discusses a customer example: “I have a customer who does rocket launches. We store the telemetry for it. They may have had data loss because of organizations that didn’t have the ability to guarantee writes – so strict consistency matters when the data matters. Our customer base has data that matters.”

AI training and inference

What about AI where Qumulo, unlike NetApp, Pure Storage, and other competitors, does not support Nvidia’s GPUDirect protocol for feeding data to GPUs for AI work? He makes a distinction between AI model training, which does need GPUDirect, and AI model inference, which does not.

The Qumulo CEO thinks that AI training is a highly niche market. The 15th largest GPU cluster in the world has, he says, just 256 GPUs. “Numbers one through 14 are larger and number 16 onward are smaller. Why do I want to chase such a small market?”

He’s emphatic about this point: “Why do you want me to sit there and compete with four other companies, all chasing 14 companies that are spending enough to be worth calling them? Do you realize that the largest financial institution in the US has eight DGXs and doesn’t know what to do with them. Number one, the largest bank in America, has eight DGXs. Why should I bother with a tiny market?”

There are two successful storage suppliers to his knowledge, but hyperscaler customers are ruthless: “There’s two storage suppliers [that] are being pretty successful, right? One of them is getting kicked out of the largest cluster because they’re building their own. Now, that’s the other problem. These are hyperscalers. Yes, the largest AI cluster in the world is moving from an open storage environment, consuming a commercial product, to them building their own.”

This is a risky sales approach: “If I’m a company that has a 50 percent revenue concentration in one customer, that’s huge risk. I have 1,000 customers. I have over five exabytes to date under storage. I don’t have a single customer worth more than 2 percent revenue to me. I don’t have risk. They do. They have a customer concentration risk with customers who actively want them out of their system.”

”Hyperscalers either want to ram your margin down or get you out and replace you with something they can build themselves. I want customers who love me, who want to keep us in because they love what we do, and who aren’t capable of investing hundreds of engineers of effort in getting us out of their networks and systems.”

This is the situation now. It could change: “My statement wasn’t never. My statement was, I’m not going to chase it now.”

“I need to do this for a different reason. I need to do this because I have a substantial number of the largest ADAS (Automated Driver Assistance Systems) clusters in the world. I’m in seven of the eight largest autonomous driving clusters in the world, two largest research grant recipients in North America. We’re in the largest pediatric critical care facilities … we have customer demand now to start building it for things that are being delivered in the next two to four years, [the] cycle of our customers adopting next generation technologies. So it’s not a never, it’s at a right time.”

“I want to be right-timed. I want to hit a market inflection. I want to hit it when it scales out. To go for the broad customer adoption, for the base we have. I don’t want to build customer concentration risk in a market that wants to evict me. I want it with ones that embrace me. So I’m not saying never, definitely not saying never.”

If not never, when? “I think if you and I are sitting here next year having this conversation, I will happily have product to market that addresses some, if not all of what you and I are discussing.”

A last question from us: “NetApp is building a third ONTAP infrastructure to cope with supplying systems for AI, a disaggregated infrastructure. Dell is aiming to stick a parallel interface on top of PowerScale. Is Qumulo thinking about being able to respond already to these moves; developing something like a parallel architecture for itself?”

Gourlay exudes positivity about this: “I think if you take a look at the architecture that we’re using in our cloud-native offering, one that runs as a series of EC2 instances, backed with EBS and local NVMe, but then using parallel access to object storage in the back end, called S3 in the cloud. I think you see the exact architecture you’re describing in production today.”

****

A CEO like Gourlay can encapsulate key aspects of a company’s offerings and approach, then express them compellingly, not shying away from debate with opposing views, and winning doubters over with strongly argued and logical views. It’s a formidable talent and, when coupled with a clear view of a market and its realities, should enable a business to do very well indeed. 

Qumulo is being re-energized and carving out its own messaging under Gourlay. Its competitors are going to face a tougher fight when meeting it in customer bids, and he’ll relish that.

Panzura intros CloudFS 8.4, with improved access security

Panzura has released version 8.4 of its CloudFS core technology, claiming the latest update increases access security and lowers cloud storage costs.

CloudFS, billed by Panzura as being built on the world’s fastest global file system, has a global namespace and distributed file locking supported by a global metadata store. It is the underlying technology used by Panzura Data Services, which provide cloud file services, analytics, and governance, and also support the new Symphony feature for on-premises, private, public, and hybrid cloud object storage workloads managed through a single pane of glass.

Sundar Kanthadai, Panzura CTO

Panzura CTO Sundar Kanthadai blogs that CloudFS v8.4 “introduces important enhancements for cloud and on-premises storage. CloudFS 8.4 makes it easier to get into the cloud, reduce total cost of ownership (TCO), and improve user command and control over the unstructured data.”

Kanthadai claims the software provides “finely tuned, granular Role-Based Access Control (RBAC). This allows for precise user permissions within the CloudFS WebUI, ensuring compliance with internal and external controls.” It integrates with Active Directory (Entra) with “tailored access for various roles and enables even more compliance options.“

Competitors CTERA and Nasuni have RBAC support and Panzura has now caught up with them in this regard.

CloudFS v8.4 supports more S3 tiers through S3 Intelligent Tiering and also Glacier Instant Retrieval:

  • Frequent access for freshly uploaded data,
  • Infrequent access for data with no accesses in 30 days,
  • Archive Instant Access for data with no accesses within 90 days,
  • Glacier Instant Retrieval with millisecond access, faster than Glacier Standard or Deep Archive with their retrieval times in minutes or hours

Kanthadai writes: “CloudFS caches frequently used data at the edge, enabling local-feeling performance and virtually eliminating the need to egress data from the cloud to support active file operations. As such, Glacier Instant Retrieval may be used with CloudFS to offer performant retrieval on the occasions it is required, while offering substantial overall storage savings” of up to 68 percent.

Panzura admins can now directly assign “objects to their desired storage class upon upload, eliminating the need for separate lifecycle policies and reducing application programming interface (API) calls.”
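As a generic illustration of that mechanism – an S3 client setting the storage class at upload time so no separate lifecycle rule is needed – consider the snippet below; the bucket and key names are placeholders and this is not Panzura’s own code.

```python
# Generic example of assigning an S3 storage class at upload time, avoiding a
# separate lifecycle policy. Bucket and key names are placeholders; this is
# not Panzura code.
import boto3

s3 = boto3.client("s3")
with open("render-0001.bin", "rb") as body:
    s3.put_object(
        Bucket="example-filespace-bucket",
        Key="projects/archive/render-0001.bin",
        Body=body,
        StorageClass="GLACIER_IR",  # or "INTELLIGENT_TIERING" for auto-tiering
    )
```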

CloudFS already supports VMware and Hyper-V on-premises edge nodes and now adds Red Hat Enterprise Linux (v9.4) KVM support.

CloudFS already supports cloud mirroring, keeping identical copies of data in two separate object stores, and v8.4 “now accelerates synchronization of data changes made any time the primary object store was unavailable” by up to 10x. Also, “the synchronization itself serves to dramatically reduce egress charges.”

More speed improvements come from “file operations for extremely large files and folders [being] faster with this release, which improves file and folder renaming as well as changes to file and folder permissions, and some file write operations.” 

SK hynix unveils 16-Hi HBM3e chips, sampling set for 2025

SK hynix has added another four layers to its 12-Hi HBM3e memory chips to increase capacity from 36 GB to 48 GB and is set to sample this 16-Hi product in 2025.

Up until now, all HBM3e chips have had a maximum of 12 layers, with 16-layer HBM understood to be arriving with the HBM4 standard in the next year or two. The 16-Hi technology was revealed by SK hynix CEO Kwak Noh-Jung during a keynote speech at the SK AI Summit in Seoul.

SK hynix CEO Kwak Noh-Jung presenting the 16-Hi HBM3e technology at the SK AI Summit in Seoul

High Bandwidth Memory (HBM) stacks memory dice and connects them to a processor via an interposer rather than a socket system, as a way of increasing memory-to-processor bandwidth. The latest generation of this standard is extended HBM3 (HBM3e).

The coming HBM4 standard differs from HBM3e by having an expected 10-plus Gbps per pin versus HBM3e’s maximum of 9.2 Gbps per pin. This would mean a stack bandwidth of around 1.5 TBps compared to HBM3e’s 1.2-plus TBps. HBM4 will likely support higher capacities than HBM3e, possibly up to 64 GB, and also have lower latency. A report says Nvidia’s Jensen Huang has asked SK hynix to deliver HBM4 chips six months earlier than planned. SK Group Chairman Chey Tae-won said the chips would be delivered in the second half of 2025.
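For a rough sense of the per-pin to per-stack conversion, the sketch below assumes the 1,024-bit stack interface used by HBM3e; HBM4 is expected to widen the interface, so treat the bus width as an assumption rather than a specification.

```python
# Per-pin to per-stack bandwidth, assuming a 1,024-bit stack interface (an
# HBM3e-era assumption; HBM4 is expected to widen the bus).
def stack_bandwidth_tbps(gbps_per_pin, bus_width_bits=1024):
    return gbps_per_pin * bus_width_bits / 8 / 1000  # Gbit/s per pin -> TB/s

print(stack_bandwidth_tbps(9.2))   # ~1.18 TBps, around the quoted 1.2 TBps class
print(stack_bandwidth_tbps(11.7))  # ~1.5 TBps needs ~11.7 Gbps/pin at 1,024 bits;
                                   # a wider HBM4 interface lowers that requirement
```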

The 16-Hi HBM3e chips have generated performance improvements of 18 percent in GenAI training and 32 percent in inference against 12-Hi products, according to SK hynix’s in-house testing.

SK hynix’s 16-Hi product is fabricated using a MR-MUF (mass reflow-molded underfill) technology. This combines a reflow and molding process, attaching semiconductor chips to circuits by melting the bumps between chips, and filling the space between chips and the bump gap with a material called liquid epoxy molding compound (EMC). This increases stack durability and heat dissipation.

The SK hynix CEO spoke of more memory developments by the company:

  • LPCAMM2 module for PCs
  • 1c nm-based LPDDR5 and LPDDR6 memory
  • PCIe gen 6 SSD
  • High-capacity QLC enterprise SSD
  • UFS 5.0 memory
  • HBM4 chips with logic process on the base die
  • CXL fabrics for external memory
  • Processing near Memory (PNM), Processing in Memory (PIM), and Computational Storage product technologies

All of these are being developed by SK hynix in the face of what it sees as a serious and sustained increase in memory demand from AI workloads. We can expect its competitors, Samsung and Micron, to develop similar capacity HBM3e technology.

Backblaze partners with Opti9 to launch Canadian datacenter region

Cloud storage player Backblaze is spreading the reach of managed hybrid cloud solutions firm Opti9 through a new partnership. As part of the alliance, Backblaze will open a new datacenter region in Canada, and Opti9 will be the exclusive Canadian channel for Backblaze B2 Reserve and the Powered by Backblaze program.

Opti9 delivers managed cloud services, application development and modernization, backup and disaster recovery, security, and compliance solutions to businesses around the world. B2 Cloud Storage promises secure, compliance-ready, “always-hot” object storage that is “one-fifth the price” of traditional cloud storage providers. B2 can be used in any of the solutions Opti9 provides.

Gleb Budman, Backblaze CEO

Increasingly, say the new partners, companies seeking managed services support are demanding solutions made up of “best-of-breed providers”. While traditional cloud platforms “work against this principle,” Backblaze and Opti9 are committed to delivering cloud solutions without the “limitations, complexity, and high pricing” that are “holding customers back.”

The new Canadian data region gives businesses the freedom to access Backblaze’s offering, while still allowing them to benefit from local storage and compliance. Located in Toronto, Ontario, the datacenter complies with SOC 1 Type 2, SOC 2 Type 2, ISO 27001, PCI DSS, and HIPAA. The region will be available to customers in the first quarter of 2025.

Jim Stechyson, Opti9 president

“Being able to integrate the high performance and low total cost of ownership of Backblaze’s object storage into our set of solutions will greatly enhance our ability to drive success for our customers,” said Jim Stechyson, president of Opti9. 

“Businesses want modern storage solutions that serve their needs without worrying about out-of-control fees, complexity, or other limits,” added Gleb Budman, CEO of Backblaze. “Coming together means we can unlock growth for even more businesses around the world.”

Opti9 has multiple offices in North America and has datacenter space in the US, Europe, and the APAC region. It is an AWS Advanced Consulting Partner, Platinum Veeam Cloud & Service Provider, and Zerto Alliance Partner.

Earlier this year, Backblaze took the wraps off Backblaze B2 Live Read, giving customers the ability to access, edit, and transform content while it’s still being uploaded into B2 Cloud Storage.