NetApp has refreshed the low end of its all-flash A-Series AFF arrays following the May midrange and high-end upgrade. It has also updated the capacity-optimized C-Series hardware and added functionality to StorageGRID object storage software.
Customers can now access its storage “at more accessible entry points, making it easier to scale up from a smaller starting point or expand their capabilities to remote and branch locations,” NetApp said in a statement.
Sandeep Singh
Sandeep Singh, Enterprise Storage SVP and GM at NetApp, described the updated A-Series as “more powerful, intelligent, and secure”, and said the C-Series is more “scalable, efficient, and secure.”
A-Series
Until May, NetApp’s ONTAP-powered A-Series comprised the A150, A250, A400, A800, and A900. The bigger the number, the more powerful the system, with generally faster controllers and higher capacity as we move up the range. That naming was discontinued for certain systems in May when the company added three new models: A70, A90, and A1K (not A1000), which refreshed the A400, A800, and A900 – the midrange and high-end A-Series. No end of availability was announced for the A400, A800, and A900.
A major hardware change included the move to Intel’s Sapphire Rapids gen 4 Xeon SP processors.
NetApp AFF A20, A30 and A50.
Now NetApp has attended to the low end (A20, A30, and A50 systems), saying they have “sub-millisecond latency with up to 2.5x better performance over their predecessors.” That implies they get Sapphire Rapids CPUs as well. No end of availability has been announced for the existing low-end A150 and A250 arrays either.
The AFF A20 starts at 15.35 TB. The AFF A30 can scale to more than 1 PB of raw storage. The AFF A50, we’re told, delivers twice the performance of its predecessor in a third of the rack space.
Logic would suggest the A20 is the new low-end model, with the A30 positioned to replace the A150, and the A250 giving way to the A50. The implied new A-Series range, once the prior models are declared end-of-life, will be the A20, A30, A50, A70, A90, and A1K. NetApp has provided tech specs for the new systems.
Competitor Dell upgraded its PowerStore unified file and block storage arrays to gen 4 Xeon processors in May, with the range starting at the 500T and extending through the 1200T, 3200T, and 3200Q systems to the high-end 5200T and 9200T.
C-Series
NetApp AFF C-Series C30, C60 and C80.
Compared to the A-Series, NetApp’s C-Series are higher-capacity and lower cost all-flash ONTAP arrays using QLC (4bits/cell) SSDs. In February last year, the range consisted of three products – the C250, C400, and C800 – which all scaled out to 24-node clusters. They use NVMe drives, and the smallest, the C250 in a 2RU chassis, had 1.5 PB of effective capacity.
NetApp says there are new AFF C30, C60, and C80 systems with “an industry-leading 1.5 PB of storage capacity in two-rack [unit] deployments.” No change there, then. We’re told by Singh they are “scalable, efficient, and secure.”
StorageGRID is an on-premises scale-out, S3-compatible object storage system. The existing high-end SGF6112 product now supports 60 TB SSDs, “doubling the density of object deployments,” NetApp said in a statement. The previous maximum raw SSD capacity was 368.4 TB using 12 x 30.7 TB drives in the 1RU chassis. These drives were introduced last May. Doubling that would imply 736.8 TB from 12 x 61.4 TB drives. Both Samsung and Solidigm supply 61.44 TB QLC SSDs.
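As a back-of-the-envelope check on the stated doubling, using the drive counts and capacities quoted above (a minimal sketch; the 61.44 TB figure is the vendor drive capacity, which rounds to the 736.8 TB quoted when taken as 61.4 TB):

```python
# Sanity-check the SGF6112 raw SSD capacity doubling described above.
bays = 12

old_drive_tb = 30.7   # previous-generation SSD capacity per drive
new_drive_tb = 61.44  # 61.44 TB QLC SSDs (Samsung, Solidigm)

old_raw_tb = bays * old_drive_tb   # previous maximum raw capacity
new_raw_tb = bays * new_drive_tb   # new maximum raw capacity

print(old_raw_tb, new_raw_tb, new_raw_tb / old_raw_tb)
# 368.4 TB before, ~737 TB after: roughly a 2x density increase per 1RU chassis
```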
The StorageGRID object storage software has been upgraded to v11.9 “with increased bucket counts.” It can now have “metadata-only and data-only nodes for increased performance with small object workloads and mixed-media grids.”
50 percent will move to cloud – this is split evenly across different cloud deployment options
24 percent will go to hybrid cloud, 24 percent will use multi-cloud, and 24 percent will use Cassandra-as-a-Service
37 percent will run their environments in private datacenter environments
Cassandra users are predominantly running their own environments
94 percent run their own self-managed deployments
14 percent use service providers to manage Cassandra for them
Self-managed deployments include more complex deployments, both multi-cloud (20 percent) and hybrid cloud (16 percent)
84 percent of community members are planning a migration
23 percent will move in the next 6 months
48 percent will move in 6-12 months
13 percent will move, but in more than a year
16 percent do not plan to migrate to new versions
The Cassandra community is shifting from experimentation to production around AI
More than 50 percent of respondents have one or more GenAI use cases in production
28 percent of users have two use cases in production
17 percent of users have three or more use cases in production
…
Q3 2024 revenues for cloud storage provider Backblaze were up 29 percent to $32.6 million with a GAAP loss of $12.8 million, better than the year-ago $16.1 million loss. The B2 Cloud storage segment grew 39 percent to $16.2 million while the original computer backup business grew 20 percent to $16.4 million.
It looks set to be overtaken by B2 Cloud storage revenues next quarter, but analyst Jason Ader reports: “The 17 percent growth rate expected in the fourth quarter is a sharp deceleration from the 29 percent rate reported in the third quarter, and primarily reflects a slowdown in the B2 Cloud business where growth is expected to drop below 25 percent due mainly to higher churn and a go-to-market reset.”
CEO Gleb Budman said: “I’m excited that we have kicked off a go-to-market transformation and continue to build our upmarket momentum with two multi-year deals, each totaling approximately $1 million. We are also aggressively executing cost efficiencies throughout the organization to accelerate being adjusted free cash flow positive by Q4 2025.”
Ader said about the sales reset: “Under new CRO Jason Wakeam, Backblaze is in the process of overhauling its go-to-market strategy, which is focused on three major areas: 1) upskilling, … 2) prioritizing partners in the channel … and 3) aligning the sales and marketing teams on a core set of sales plays.
“Backblaze is increasingly focused on driving cost efficiencies and improving its operating leverage. To this end, the company announced a 12 percent reduction in workforce, effective immediately, largely targeted at headcount reduction within marketing. In addition, Backblaze kicked off a zero-based budgeting approach, which now sees the company putting its existing vendors out to bid and negotiating all contracts. These initiatives should lead to an $8 million year-over-year decrease in run-rate operating expenses, helping the company achieve its target of 20 percent adjusted EBITDA margin in the fourth quarter of 2025.”
…
Cerabyte, which is developing ceramic-based data storage systems, announced its participation at SuperComputing 2024 (SC24), taking place at the Georgia World Congress Center from November 17-22. It will present in sessions at the Arcitecta booth; Arcitecta and Cerabyte are both members of the Active Archive Alliance. Arcitecta has invited Cerabyte and other vendors to present at its booth as part of the Arcitecta Co-LAB at SC24. Arcitecta says the Co-LAB “is an exciting joint endeavor with the company’s customers, partners and friends. The lab offers a unique opportunity to engage with HPC thought leaders, including Cerabyte, Princeton University, the University of Melbourne and many others, who will ‘take over’ the Arcitecta booth. They will delve into forward-thinking ideas, share insights and experiences, present groundbreaking research, and discuss topics ranging from the future of big data to strategies for resilience against loss.”
…
Analyst house DCIG announced availability of the 2025-26 DCIG TOP 5 Modern SDS Block Storage Solutions Report. The top 5 are:
Data archiver FalconStor reported Q3 2024 revenues of $2.9 million, down 12 percent year-over-year, with a GAAP loss of $680,000, better than the year-ago $840,000 loss. CEO Todd Brooks said: “Our Q3 results highlight the ongoing strength of our Hybrid Cloud ARR run-rate growth across Cloud, On-Premises, and MSP segments, reflecting the effectiveness of our strategy and solutions.” If you say so.
…
SaaS data protector HYCU says Box users can use the latest HYCU R-Cloud integration, in addition to existing Box data protection capabilities, to safeguard data and recover from any data loss scenario. Valiantys, an Atlassian global consulting and services provider and Box partner, helped develop the R-Cloud integration for Box to provide these additional backup and recovery capabilities. HYCU is showcasing the R-Cloud data protection for Box capabilities at BoxWorks (Booth #1), Nov 12 – 13, Pier 27 in San Francisco.
…
Reuters reports Kioxia expects NAND demand to rise 2.7x between now and 2028. It’s readying a major capacity expansion at its new fab at Kitakami in Iwate prefecture, north of Tokyo. The Japanese government is providing up to $1.64 billion to Kioxia and Western Digital to expand capacity at Yokkaichi and Kitakami.
…
Kioxia is getting involved in CXL. It says its CXL memory proposal has been adopted by Japan’s national research and development agency, the New Energy and Industrial Technology Development Organization (NEDO), under the Development of Manufacturing Technology for Innovative Memory program, which aims to enhance post-5G information and communication system infrastructure. The objective is to develop memory that offers lower power consumption and higher bit density than DRAM, and faster read speed than flash memory. Kioxia does not make DRAM. A Kioxia diagram illustrates its view (right).
…
Kioxia, Reuters reports, has made a fresh filing for an IPO, partly to finance the ramp for its Kitakami 2 fab and its BiCS8 technology, as well as return cash to its private equity owners. The latest IPO intention is to go for a listing between December 2024 and June 2025. Kioxia anticipates receiving approval from the Tokyo Stock Exchange in late November, with the indicative price of the IPO disclosed at that time.
…
Micron said its low-power double data rate 5X (LPDDR5X) memory and universal flash storage 4.0 (UFS 4.0) are validated for use with, and featured in the reference designs of, the latest mobile platform from Qualcomm Technologies for flagship smartphones, the Snapdragon 8 Elite Mobile Platform.
…
MSP backup storage supplier N-able reported Q3 2024 revenues up 8.3 percent year-over-year to $116.4 million, but down from last quarter’s $119.4 million. There was a $10.8 million GAAP profit, up 80 percent year-over-year, and subscription revenue rose 9.3 percent year-over-year to $115 million, again down sequentially from the prior quarter’s $117.4 million.
…
Cloud file services supplier Nasuni is working with major brands in the consumer and retail market (such as Mattel, Crate & Barrel, Williams Sonoma, BJ’s Wholesale Club, Barnes & Noble, and Patagonia) to optimize their cloud infrastructure, streamline collaboration, and reduce costs ahead of the holiday season. Other Nasuni customers include Pernod Ricard, Peter Millar, Dyson, SharkNinja, and Tommy Bahama.
Kelly Wells
…
Object First, which supplies a Veeam backup target appliance, promoted Kelly Wells to COO. She had been overseeing the Global Operations organization since joining last year, coming from being VP Customer Success & Enablement at Axcient.
…
OWC (Other World Computing) announced that 100 percent of its Thunderbolt docks and hubs, USB-C docks, storage, and Thunderbolt (USB-C) cables are fully compatible with Apple’s latest iMac with M4 release. Its USB-C Dual HDMI 4K Display Adapter offers seamless compatibility with this model.
…
PNY today announced specs and availability of two new high-performance DDR5 desktop memory products engineered for PC gamers and enthusiasts. The XLR8 Gaming DDR5 and XLR8 Gaming EPIC DDR5 modules, both with a heat spreader design, offer dual support for Intel XMP 3.0 and AMD EXPO, and will be available as a 32 GB kit (2 x 16 GB) with speeds ranging from 5,600 MHz to 6,400 MHz with a CAS latency of 36.
…
Solix Technologies unveiled its SOLIXCloud Enterprise Data Lake that enables organizations to develop applications within their AI infrastructure, integrating governance elements like metadata and cataloging for robust data management. It provides a comprehensive framework for high-performance data fabric for streaming transactions, and ensures compliance with complex data regulations. Solix Enterprise Data Lake supports structured, unstructured, and semi-structured data across a wide range of open table and open file formats including Apache Hudi, as well as Parquet, CSV, Postgres, and Oracle. It also incorporates cloud-native object storage tiers and enables auto-scaling of compute engines including Apache Spark. More information here.
…
TrendForce reports that HBM vendors are considering whether to adopt hybrid bonding – which does not require microbumps between HBM DRAM layers – for HBM4 16hi stack products and have confirmed plans to implement this technology in the HBM5 20hi stack generation.
…
Veeam announced the free addition of the Recon Scanner, a new lightweight software agent designed to proactively scan Veeam backup servers, to the Veeam Data Platform. From now on, it’s available to all Veeam Data Platform Premium customers. The Recon Scanner recognizes suspicious activity in backup servers and maps it to adversary tactics, techniques, and procedures (TTPs) so that organizations can take defensive and mitigative actions.
…
UK-based Predatar’s Cyber Recovery Orchestration has achieved Veeam Ready Security status in the Security category, and is the only Veeam Ready solution that alerts Veeam users of confirmed malware that has been ingested into their backups using Veeam’s Incident API. Predatar uses AI-powered threat detection to identify signs of hidden malware that has been ingested into its customers’ storage environments. It then automates recovery tests and malware verification on suspect workloads. Predatar will also run a search to check if viruses have spread.
…
Virtuozzo Hybrid Infrastructure 6.3 has been released with new features to enhance disaster recovery, streamline backup management, improve storage capabilities, and optimize Kubernetes networking.
Seamless backup with Commvault integration
Integration with Storware and Cloudbase Coriolis disaster recovery solutions
Broadcom announced that VMware Live Recovery will support Google Cloud VMware Engine (GCVE) as a target Isolated Recovery Environment (IRE) for VCF workloads, in addition to VMware Cloud on AWS and on-premises IRE, for both cyber and disaster recovery. This builds on VMware Live Recovery’s existing protection of GCVE sites as a source, and enables a consistent, secure, and simplified experience for those looking to protect VMware workloads running on-premises or in the cloud to GCVE. VMware vDefend Advanced Service for VCF now offers GenAI-based intelligent assistance to help IT security teams proactively triage sophisticated threat campaigns and recommend remediation options. For more details, read these blogs.
NetApp is pushing out its storage solutions through the cybersecurity channel after signing a distribution deal with global cyber player Exclusive Networks. The partners reckon that combining security with storage is a canny offer in the market as ransomware tries to take hold, and protection for mushrooming AI data becomes increasingly important.
Rob Tomlin
Initially, the pair said they will provide “highly secure, data-centric solutions” to channel partners in the UK and Ireland as data-centric approaches are “no longer an option but a requirement” for ransomware protection and data breach prevention.
As a result of the deal, NetApp will become Exclusive’s primary data infrastructure vendor in the UK and Ireland. There are currently no confirmed plans to extend the deal to other territories.
“We see data security and resilience as significant growth sectors and a key part of our cybersecurity strategy,” said Rob Tomlin, managing director of Exclusive Networks UK & Ireland. “NetApp became the obvious partner due to their best-in-class technology, channel-first approach, and strong technical integrations with many of our core cybersecurity vendors. NetApp’s intelligent data infrastructure solutions are a strategic addition to our cybersecurity portfolio.”
Sonya Mathieu
Sonya Mathieu, partner lead for NetApp UK & Ireland, added: “This leverages Exclusive Networks’ specialist cyber expertise and partner ecosystems, and will introduce NetApp’s intelligent data infrastructure to new customers, empowering businesses to navigate today’s complex security landscape with unmatched data protection and recovery capabilities.”
For the half-year, Exclusive reported revenues of €723 million ($779 million), which was an annual drop of 7 percent, largely due to lower hardware sales. By bringing NetApp into the fold, Exclusive will be generating extra hardware sales.
Earlier this week, Exclusive posted a 10 percent increase in third-quarter sales. The Euronext-listed company has been in talks since July with equity investment firms to take it private in a deal valued at €2.2 billion ($2.4 billion).
Huawei’s in-house development of Magneto-Electric Disk (MED) archive storage technology combines an SSD with a Huawei-developed tape drive to provide warm (nearline) and cold data storage.
MED technology was first revealed back in March. We were told that, facing potential disk supply disruption due to US technology export restrictions, Huawei was working on its own warm and cold data storage device by combining an SSD, tape cartridge, and drive in a single enclosure. Its storage portfolio could then run from fast (SSD) for hot data and MED for warm and cold data, skipping disk drives entirely.
Presentation images of the MED now show a seven-inch device:
The MED is a sealed unit presenting a disk-like, block storage interface to the outside world, not a streaming tape interface. Inside the enclosure there are two separate storage media devices: a solid-state drive with NAND, and a tape system, including a tape motor for moving the tape ribbon, a read-write head, and tape spools.
This is unlike current tape cartridges, which contain a single reel of tape, approximately 1,000 meters long, and have to be loaded into a separate drive for the tape to be read and have data written to it. A tape autoloader contains the motor and spare reel with tape cartridges loaded into it and moved to the drive by a robotic mover. Much bigger tape libraries also have robotics to select cartridges from the hundreds or thousands stored inside them, and transport them to and from the tape drives.
The MED contains an internal motor to move the tape and an empty reel on which to rewind the tape from the full reel after it is pulled out and moved through the read and write heads. A conceptual diagram of the device illustrates its design:
The MED contains a full reel of tape (about half the length of an LTO tape), a motor, read-write heads, and an empty reel to hold the used tape. Huawei engineers could choose to have the tape ribbon positioned by default with half on one reel and half on the other, so that the read-write heads sit at the midpoint of the ribbon, shortening the time to get to either end of the tape.
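The benefit of parking at the midpoint is easy to quantify: the worst-case seek distance halves. A toy calculation (the ribbon length is the approximate figure above; the transport speed is purely illustrative, not a Huawei figure):

```python
# Toy seek-time comparison for the midpoint-parking idea described above.
ribbon_m = 500.0      # assumed: roughly half an LTO tape's ~1,000 m
speed_m_per_s = 5.0   # illustrative tape transport speed, not a vendor figure

# Parked at one end, the worst case is traversing the whole ribbon;
# parked at the midpoint, it is only half the ribbon in either direction.
worst_from_end_s = ribbon_m / speed_m_per_s
worst_from_mid_s = (ribbon_m / 2) / speed_m_per_s

print(worst_from_end_s, worst_from_mid_s)
```

Whatever the real transport speed, the midpoint default halves the worst-case reposition time at no hardware cost.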
The system is designed to be a combined archive for cold data and nearline store for warm data. Data flows into the MED through the SSD at NAND speed, from where it is written to the tape in sequentially streamed blocks. Warm data can be read from the SSD at NAND speed. Cold data is read from the MED more slowly as it has to be located on the tape and the tape ribbon moved to the right position before reading can begin. This can take up to two minutes.
The MED has a disk-like, block interface, with the SSD logically having a flash translation layer (FTL) in its controller that takes in incoming data and stores it in NAND cell blocks. From there, a logical second tape translation layer assembles them into a sequential stream and writes them to the tape.
When the MED receives a data read request, the controller system locates the requisite blocks using a metadata map, stored and maintained in the NAND, and then fetches the data either from the NAND, or from the tape, streaming it out through the MED’s IO ports.
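The read path described above can be sketched as follows. This is a hypothetical illustration based on the description, not Huawei code; the class and attribute names (`MedDevice`, `metadata_map`, `destage`) are invented for the sketch:

```python
# Hypothetical sketch of the MED's two-tier read/write path: a metadata map,
# held in NAND, tells the controller whether a block lives on the SSD tier
# (warm data) or on the tape tier (cold data). Names are illustrative.
class MedDevice:
    def __init__(self):
        self.nand = {}          # block_id -> data on the SSD tier
        self.tape = {}          # block_id -> data streamed out to tape
        self.metadata_map = {}  # block_id -> "nand" or "tape"

    def write(self, block_id, data):
        # Writes land in NAND first (flash translation layer), then are
        # destaged to tape as a sequential stream.
        self.nand[block_id] = data
        self.metadata_map[block_id] = "nand"

    def destage(self, block_id):
        # Tape translation layer: move a block from NAND to the tape stream.
        self.tape[block_id] = self.nand.pop(block_id)
        self.metadata_map[block_id] = "tape"

    def read(self, block_id):
        if self.metadata_map[block_id] == "nand":
            return self.nand[block_id]  # NAND-speed read
        # Cold read: reposition the tape ribbon first (up to ~2 minutes),
        # then stream the block out through the MED's IO ports.
        return self.tape[block_id]

med = MedDevice()
med.write(7, b"archive block")
med.destage(7)
print(med.read(7))  # served from the tape tier after destaging
```

The block interface hides which tier serves a request; only latency differs, which is what lets the MED present itself to hosts as a disk-like device.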
Huawei and its Chinese suppliers have developed their tape media and the read-write technology, not using IBM LTO tape drive technology or LTO tape media, which is made by Fujifilm and Sony. The tape media ribbon is about half the length of an LTO tape and has a much higher areal density. The MED NAND is produced in China as well. Huawei is open to using NAND from other suppliers should US technology export restrictions allow it.
The MED system and its components are protected by patents. The first-generation MED should arrive sometime in 2025. A second-generation MED, sized for a 3.5-inch disk bay slot and using a shorter, much higher-density tape ribbon, has a 2026/2027 position on the MED roadmap:
A gen 1 MED will store 72 TB, and draw just 10 percent of the electricity needed by a disk drive.
It should have a 20 percent lower total cost of ownership than an equivalent capacity tape system.
A gen 1 MED rack will deliver 8 GBps, hold more than 10 PB, and need less than 2 kW of electricity.
We don’t know if the 72 TB capacity is based on raw or compressed data.
The MEDs won’t run hot as they store mostly archive data. A MED chassis has no need of robots and can be filled with MEDs like a dense JBOD. It will function like a better-than-tape archive system, providing much faster data access for both reads and writes, drawing less electricity, and occupying less datacenter rack space.
It is simple to envisage future MED variants with more or less NAND storage, pitched at applications needing more warm storage relative to cold, archival data storage, squeezing the disk market somewhat. In effect, Huawei is compressing the storage hierarchy from three elements to two: from “SSD-to-HDD-to-Tape” to “SSD-to-MED.”
Such two-element hierarchies could be easier to manage, more power efficient and enable faster cold data access. They could become popular in regions with constrained disk supply through US restrictions, and elsewhere as well, because they will make on-premises datacenter and tier 1, 2, and 3 public cloud archival storage more practicable. Chinese public cloud suppliers are having conversations with Huawei about using the technology, we’re told.
It is possible that MEDs could have a profound effect on the robotics-using tape autoloader and library systems markets, prompting suppliers of such systems to look at developing their own MED-like technology. MEDs might also add to the pressure on disk drives from NAND by moving some nearline data to MEDs, squeezing the disk drive market from two sides.
It’s notable that Huawei has only developed its MED technology because of US disk tech export restrictions, and that MED technology could end up threatening Western Digital and Seagate because of Huawei’s inventive response to those restrictions.
Bootnote
Huawei is said to be developing its own 60 TB capacity SSD, using QLC NAND with an SLC cache.
DDN enterprise storage subsidiary Tintri is releasing data management features for Kubernetes environments, with its new VMstore Container Storage Interface (CSI) driver.
The VMstore platform provides visibility into performance, data protection, and management for virtual machine workloads. The new CSI driver provides VMstore customers with that same insight within Kubernetes, using a single interface.
With cloud-native application support, VMstore can efficiently manage data for microservices-based deployments.
The driver allows admins to manage all data using familiar Tintri interfaces and tools to reduce complexity in hybrid VM/container environments, said the provider. The driver enables dynamic provisioning and automatic attachment and detachment of volumes to containers.
Brock Mowry
“This IO-aware CSI driver is the most adaptable data management platform for Kubernetes, transforming how IT administrators handle Kubernetes environments in both cloud and on-prem,” said Brock Mowry, CTO at Tintri. “The driver empowers administrators, regardless of their Kubernetes expertise, with the essential tools to efficiently manage and optimize data across physical and virtual clusters.”
The driver also enables the easy management of workload transitions between cloud environments, enhancing operational efficiency through automated performance tuning. In addition, ETPH analytics provide insight to optimize cloud storage costs.
The driver leans on Tintri’s TxOS performance, analysis, and optimization capabilities, allowing admins to dynamically manage container performance and autonomously prioritize application workloads in real time, we are told. With Tintri Global Center (TGC), admins can manage multiple VMstores serving as Kubernetes clusters, either globally or locally, through a single pane of glass.
Through the VMstore TxOS integration, Tintri also brings data protection and disaster recovery to Kubernetes environments, including snapshots and cloning of persistent volumes or large data sets, ensuring consistent storage, secure data management, and efficient recoverability, according to the company.
Tim Averill, US CTO at IT infrastructure and managed security service provider Silicon Sky, said: “We are leveraging the Tintri CSI driver within our datacenters, both in the cloud and on-premises. By providing primary storage, disaster recovery, and data protection in one solution, we are simplifying and enhancing our IT operations.”
In August, Tintri said it was developing a disaster recovery feature with autonomous detection and alerting to combat ransomware attacks.
Lightbits Labs says its block storage is supporting the expansion of Crusoe Energy Systems’ sustainable AI cloud service. Crusoe powers its datacenters with a combination of wasted, stranded, and clean energy resources to lower the cost and environmental impact of AI cloud computing.
“Stranded” energy is methane being flared or excess production from clean and renewable sources. Crusoe, which has dual headquarters in Denver and San Francisco, currently operates in seven countries with around 200 MW of total datacenter power capacity at its disposal, some owned by the company and some at shared datacenter sites.
In September, Crusoe said it was collaborating with VAST Data to offer its customers VAST’s Shared Disks technology, which is another high-performance storage product for AI workloads.
Lightbits uses NVMe/TCP to enable direct access to NVMe storage over standard Ethernet TCP/IP networks. This architecture is designed to significantly reduce latency and maximize throughput, making it ideal for demanding AI and ML workloads, according to Lightbits.
Patrick McGregor
Lightbits scales IOPS with increased load while consistently maintaining latencies under 500 μs. The clustered architecture provides up to three replicas per volume across multiple availability zones for high availability.
“Lightbits’ suite of enterprise-grade functionality has been instrumental in helping us build a high-performance, climate-aligned AI cloud platform, addressing performance and operational gaps that other block storage solutions struggle with,” said Patrick McGregor, Crusoe chief product officer.
“From data preprocessing to real-time inference, the advantages of lower and more consistent latency, higher throughput, and linear scalability make Lightbits high-performance block storage an excellent offering to our customers to optimize their AI workflows.”
Kam Eshghi
Users can resize their VMs and consume high-performance storage in the form of persistent disks on demand, while leveraging the OS images pipeline to generate their workload-specific images, such as LLM training with Jax or generative AI with Stable Diffusion. Lightbits’ technology is integrated with Kubernetes, OpenStack, and VMware to support modern cloud-native apps and traditional virtualized apps.
Kam Eshghi, Lightbits co-founder and chief strategy officer, added: “This expanded partnership reflects the tangible results Crusoe has seen and demonstrates our crucial role in shaping the future of AI cloud technology.”
Earlier this week, it was announced that Lightbits’ cloud virtual SAN software is now available in Oracle Cloud Infrastructure (OCI), following availability in the AWS and Azure clouds.
Nutanix is getting closer to AWS, with on-prem/public cloud hybridity front and center, to both ease app migration to AWS and use AWS for on-prem extension.
It’s doing this through an upgrade to Nutanix Cloud Clusters (NC2) that run both on-premises and in the AWS cloud. Nutanix says NC2 operates as an extension of on-prem datacenters that span private and public clouds. It’s operated as a single cloud with a unified management console.
The idea is that NC2 on AWS provides disaster recovery, datacenter extension, and application migration facilities for an on-prem Nutanix deployment. The expanded partnership will enable customers “to seamlessly extend their on-premises Nutanix environment to AWS.”
Tarkan Maner
Tarkan Maner, chief commercial officer, stated: “Our expanded strategic partnership with AWS is a win-win-win for both companies and our customers, as it will help simplify their cloud migration journeys, accelerate their adoption of AWS using NC2, and open the door to hybrid cloud and on-prem Nutanix opportunities.”
NC2 on AWS places the complete Nutanix hyperconverged infrastructure (HCI) stack directly on a bare-metal instance in Amazon Elastic Compute Cloud (EC2). It runs AOS and AHV on the AWS instances and packages the same CLI, GUI, and APIs that cloud operators use in their on-prem environments. Nutanix provisions the full bare-metal host for your use, and the bare-metal hosts are not shared by multiple customers.
On-prem workloads can be migrated to AWS with the Nutanix Move migration tool, without refactoring. Customers get access to AWS services, including databases, S3, and AI and ML services. The elasticity of AWS can be used to manage expected and unexpected capacity demands. Procurement is simplified by using AWS Marketplace for all Nutanix software licensing needs.
Customers can also get access to promotional credits for migrating VMware on AWS workloads to NC2 on AWS through an AWS VMware Migration Accelerator offering. If migrating workloads from other clouds or on-premises they will also have access to the AWS Migration Acceleration Program benefits, including free proof-of-concept trials, migration assessment, and support with AWS credits, as well as Nutanix licensing pricing promotions.
Nutanix cloud platform diagram. Nutanix Cloud Clusters also run on Azure
Dave Pearson, an IDC Research VP, said: “The partnership between Nutanix and AWS emerges as a strategic solution to enable more seamless migrations to Nutanix Cloud Clusters on AWS.” You can obtain more information on the AWS partnership here.
LucidLink is providing unified real-time file-based collaboration among distributed teams working on massive projects, with instant, secure file access across desktop, web, and soon mobile.
Startup LucidLink sells file collaboration services to distributed users. Its Filespaces product streams parts of files from a central cloud repository, providing fast access to large files, protected by zero-knowledge encryption. All the locally cached data and metadata on client devices are stored encrypted on the local disk. This sub-file streaming approach contrasts with the full file sync ‘n’ share approach which, it says, characterizes the services offered by CTERA, Egnyte, Nasuni, and Panzura. LucidLink’s software is used by entertainment and media companies, digital ad agencies, architectural firms, and gaming companies.
Peter Thompson
Peter Thompson, co-founder and CEO of LucidLink, stated: “The new LucidLink is both an evolution of everything we’ve built so far and a revolution in how teams collaborate globally. For the first time, teams can collaborate instantly on projects of any size from desktop, browser or mobile, all while ensuring their data is secure.
“This milestone release marks a new chapter in our mission to make data instantly and securely accessible from anywhere and from any touchpoint. As we introduce more new features in the coming months, our focus remains on empowering teams to collaborate seamlessly, wherever they are.”
The real-time mobile collaboration capabilities will arrive in the first quarter of next year.
LucidLink says the latest software involves no downloading, syncing, or transferring data. It has a new desktop and web interface, streamlined onboarding, and flexible pricing for teams of all sizes, from freelancers to large enterprises, working from home, in datacenters or the public cloud.
LucidLink is providing a new desktop interface and a global user concept in which users can join multiple Filespaces across desktop, web, and soon mobile devices. There is a faster and smoother installation process for macOS users which “eliminates reboots or security changes.”
There is cloud flexibility as users can choose LucidLink’s bundled, egress-free AWS storage options or bring their own cloud storage provider.
The new LucidLink PC/notebook interface
There are more features scheduled for early 2025:
Mobile apps for Android and iOS: Full-featured mobile apps will give users immediate access to data.
External link sharing: Users can share content with external collaborators without needing the desktop application.
Browser-based upload: Users can drag and drop files directly from their browser for seamless collaboration.
Multi-Factor Authentication (MFA) and SAML-based SSO: Enhanced security options for all users.
Guest links: Teams can collaborate securely without requiring full user accounts.
An upcoming Filespaces upgrade tool will provide a smooth path to the new LucidLink for existing customers.
LucidLink says Spotify, Paramount, Adobe, and other creative teams worldwide have used LucidLink to increase productivity fivefold, access global talent, and “free their people to focus on creating.”
We note that CTERA says its technology also offers “direct read/write access from the cloud, allowing desktop and server applications to handle large files effortlessly, without the need to upload or download them in their entirety. The data is streamed on-demand, allowing tools like Adobe Premiere or DaVinci Resolve to function smoothly and quickly, no different than if you were using a local disk.”
Bootnote
LucidLink’s Filespaces have a split-plane architecture in which data and metadata planes are managed separately. The metadata is synchronized through a central metadata service provided by LucidLink, while the data is streamed directly to and from the cloud or an on-premises object store.
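The split-plane idea can be illustrated with a short sketch: a central metadata service tracks which blocks make up each file, while block data is streamed directly from an object store, so a client fetches only the ranges it needs rather than whole files. All class and key names here are hypothetical, not LucidLink's actual API.

```python
# Illustrative sketch of a split-plane design: metadata is synchronized
# through a central service, while file data is streamed in ranges directly
# from an object store. All names are invented for illustration.

class MetadataService:
    """Central metadata plane: records which blocks make up each file."""
    def __init__(self):
        self.files = {}  # path -> ordered list of block keys

    def publish(self, path, block_keys):
        self.files[path] = list(block_keys)

    def lookup(self, path):
        return self.files[path]

class ObjectStore:
    """Stand-in for a cloud or on-prem object store holding the data plane."""
    def __init__(self):
        self.blocks = {}

    def put(self, key, data):
        self.blocks[key] = data

    def get(self, key):
        return self.blocks[key]

def open_and_stream(meta, store, path, block_index):
    """Resolve the file layout via metadata, then stream only the
    requested block directly from the data plane."""
    keys = meta.lookup(path)
    return store.get(keys[block_index])

meta, store = MetadataService(), ObjectStore()
store.put("b0", b"title frame")
store.put("b1", b"scene 42")
meta.publish("/project/edit.mov", ["b0", "b1"])
# Only the needed block crosses the wire, not the whole file:
print(open_and_stream(meta, store, "/project/edit.mov", 1))  # b'scene 42'
```

The point of the split is that the metadata service never sits in the data path, which is why reads can go straight to the cloud or on-prem store.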
Having pushed out the seventh major version of its unstructured data charting and moving tool in June, Datadobi says it has made StorageMAP faster, more scalable, and better able to deal with the now end-of-life Hitachi Data Ingestor (HDI).
StorageMAP software scans and lists (maps) a customer’s file and object storage estates. It can then optimize storage use by migrating old and cold data to lower-cost archival storage, for example, and delete dead data. Datadobi says warmer – more frequently accessed – data could be tagged, for example, for use in AI training and inference work. Warm data could also be migrated to a public cloud for access by compute instances there.
Carl D’Halluin
v7.0 added custom dashboards and an analysis module. According to Datadobi CTO Carl D’Halluin: “StorageMAP 7.1 takes it a step further and solves some focused challenges facing our customers globally, including offering an innovative HDI Archive Appliance Bypass feature, example dashboards, and the most important one, improvements to scalability and performance.”
StorageMAP has a uDME feature, an unstructured Data Mobility Engine. This moves, copies, replicates, and verifies large and complex unstructured datasets based on trends and characteristics derived from the metadata intelligence stored in the StorageMAP metadata scanning engine’s catalog.
Datadobi says the uDME has been made faster and more scalable, capable of handling greater capacities and larger numbers of files and objects.
An HDI Archive Appliance Bypass feature – we’re told – gets data faster from the primary NAS and archive (HCP) sides of an HDI installation, HDI being a file storage system that can move data off a primary NAS to a backend HCP vault for cheaper, long-term storage. With HDI now defunct, customers may need to migrate their data to actively supported NAS and backend stores, but the HDI software impedes data migration.
D’Halluin says it has “significant performance limitations that make migrating all active and archived data an extremely slow process typically riddled with errors.”
StorageMAP has a bypass that “involves using multiple StorageMAP connections to the storage systems – one connection to the primary storage system and a second connection to the archive storage system. These connections effectively bypass the middleware HDI archiving appliance, which is responsible for both relocating data to the archive storage system and retrieving it when a client application requests archived data.”
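The dual-connection scheme can be sketched as follows: one connection reads live files from the primary NAS, while stub entries pointing at archived data are followed directly to the archive (HCP-style) store over a second connection, never routing reads through the HDI appliance. The data structures and names below are invented for illustration, not Datadobi's actual interfaces.

```python
# Hypothetical sketch of the two-connection bypass: live data is read from
# primary storage, while archive stubs are resolved straight against the
# archive tier, skipping the middleware appliance entirely.

PRIMARY = {"report.docx": b"live content", "old.tiff": "stub->arc/0001"}
ARCHIVE = {"arc/0001": b"archived content"}

def read_via_bypass(name):
    """Return file contents: real data comes from the primary connection;
    stubs are followed directly to the archive store."""
    entry = PRIMARY[name]
    if isinstance(entry, str) and entry.startswith("stub->"):
        return ARCHIVE[entry[len("stub->"):]]  # second, direct archive connection
    return entry

assert read_via_bypass("report.docx") == b"live content"
assert read_via_bypass("old.tiff") == b"archived content"
```

The migration engine thus sees both tiers at full speed, which is the stated reason the bypass avoids the appliance's performance limits.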
This is an alternative to the Hitachi Vantara-CTERA deal for moving data off HDI.
Lastly, Datadobi has added example dashboards to help customers take advantage of v7.0’s custom dashboard feature – dashboards “that a customer can refer to for ideas to include in their own custom dashboards.”
NAKIVO has boosted its backup offering with additional VM support, Microsoft 365 protection, Spanish language adoption, and extended cybersecurity.
Sparks, Nevada-based NAKIVO was founded in 2011, five years after industry leader Veeam, to provide virtual machine and then physical server backup to small and medium enterprises. It says it has more than 29,000 customers spread across 183 countries who buy from more than 300 MSPs and 8,600 partners. That customer count is well short of Veeam’s 450,000-plus but is plenty high enough to give NAKIVO a viable business.
Bruce Talley
CEO Bruce Talley is the co-founder and his founding partners are Ukraine-based VP of Software Nail Abdalla and Turkey-based VP of Product Management Sergei Serdyuk. Talley said of the latest Backup & Replication v11 release: “With v11, we’re introducing features that align with today’s demands for flexible data protection, increased security, and multilingual support. Our goal with this release is to provide a comprehensive solution that supports data resilience for businesses worldwide.”
There is added support for open source, KVM-based Proxmox VE, which “has become a mainstream virtualization solution,” reflecting the move away from Broadcom-acquired VMware by some customers. Both Veeam and Rubrik have added Proxmox VE support in recent months. NAKIVO provides agentless VM backups, incremental backups, multiple backup targets, as well as encryption and immutability for backups in both local and cloud repositories.
v11 adds Microsoft 365 backup to the cloud, including Amazon S3, Wasabi, Azure Blob, Backblaze B2, and other S3-compatible storage targets. The Backup Copy feature means customers can create multiple backup copies and store them in various locations – on tape, in the cloud, on S3-compatible storage, or on network shares – which strengthens disaster recovery capabilities.
Adding Spanish language support, as Rubrik has done, means customers can operate and manage NAKIVO’s software using Spanish, and also access its website, educational content, and user documentation in Spanish.
v11 supports NAS (network-attached storage) backup, source-side backup encryption integrated with the AWS Key Management Service (KMS), and NetApp FAS and AFF storage array snapshots, letting customers back up VMware VMs stored on those arrays. Supported storage devices now include HPE 3PAR, Nimble Storage, Primera, and Alletra, as well as the NetApp arrays.
It also introduces a Federated Repository feature. This allows customers to create a scalable storage pool from multiple repositories, or “members,” which automatically work together to ensure continuous operation. If a repository reaches capacity or becomes inaccessible, backups are seamlessly redirected to available members, ensuring uninterrupted protection and access to data.
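The federated-pool behavior described above can be sketched simply: a backup is written to the first member repository that is reachable and has capacity, so a full or offline member does not interrupt protection. Class and member names here are invented for illustration, not NAKIVO's actual API.

```python
# Minimal sketch of a federated repository pool: writes are redirected to
# the first member that is online and has free capacity.

class Repository:
    def __init__(self, name, capacity, online=True):
        self.name, self.capacity, self.online = name, capacity, online
        self.used = 0

    def can_accept(self, size):
        return self.online and self.used + size <= self.capacity

    def store(self, size):
        self.used += size

class FederatedRepository:
    def __init__(self, members):
        self.members = members

    def write_backup(self, size):
        """Place the backup on the first member able to take it."""
        for m in self.members:
            if m.can_accept(size):
                m.store(size)
                return m.name
        raise RuntimeError("no member can accept this backup")

pool = FederatedRepository([
    Repository("repo-a", capacity=100),
    Repository("repo-b", capacity=500),
])
pool.members[0].online = False            # repo-a becomes inaccessible...
assert pool.write_backup(80) == "repo-b"  # ...so the backup lands on repo-b
```

Scaling the pool is then just adding or removing members, matching the elasticity the feature description claims.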
Customers can scale storage capacity by adding or removing members as needs change, optimizing resource use without unnecessary costs. For MSPs, and in addition to the existing MSP Console, v11 introduces the Tenant Overview Dashboard, a centralized tool designed for MSPs to monitor and manage all tenants in one place.
Other additions include the extension of Real-Time Replication (Beta) for VMware functionality to cover vSphere 8.0. Customers can create replicas of vSphere 8 VMs and keep them updated as changes are made, as frequently as once per second. They can also now enable immutability for backups stored on NEC HydraStor systems.
NAKIVO Backup & Replication v11 is available for download, with a datasheet accessible here. Customers can either update their version of the solution or install the 15-day Free Trial to check how the new features work.
CERN, with more than 120,000 disk drives storing in excess of an exabyte of data, is probably Toshiba’s largest end-user customer in Europe. Toshiba has released a video talking about how its drives are used in making Large Hadron Collider (LHC) data available to hundreds of physicists around the world who are looking into how atoms are constructed.
The Toshiba drives are packaged inside a Promise Technology JBOD (just a bunch of drives) chassis, and CERN has been a long-term customer, starting with Promise’s 24-bay VTrak 5800 JBOD and Toshiba’s 4 TB Enterprise Capacity drives. Drive capacities have since increased, reaching the 18 TB MG09 series.
When the LHC smashes atoms into each other, component particles are spun off and detected. The LHC’s collision detectors operate 24/7. As the LHC breaks atoms up into myriad component particles, masses of data are generated – around 1 TB/minute, or 60 TB/hour, 1.44 PB/day, and 10.1 PB/week.
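The quoted figures are consistent with one another, scaling up from the 1 TB/minute rate:

```python
# Scaling the quoted 1 TB/minute rate to hourly, daily, and weekly figures.
tb_per_minute = 1
tb_per_hour = tb_per_minute * 60          # 60 TB/hour
pb_per_day = tb_per_hour * 24 / 1000      # 1.44 PB/day
pb_per_week = pb_per_day * 7              # 10.08 PB/week, ~10.1 as quoted
print(tb_per_hour, pb_per_day, round(pb_per_week, 1))
```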
The data is organized and accessed within CERN’s EOS in-house file system. It currently looks after more than 4,000 Promise JBODs and the aforementioned 120,000-plus drives.
Toshiba is now testing 20 TB MG10 series drives in a Promise 60-bay, 4RU, VTrak 5960 SAS chassis, which has so-called GreenBoost technology. This is based on intelligent power management, which Promise says can deliver “energy savings of up to 30 percent when compared to competing enclosures.”
Promise CMO Alice Chang said: “The energy crisis is now a real challenge to all enterprises, including CERN. The VTrak J5960 offers a well-rounded solution to solve this dilemma, and we are confident that Toshiba’s Enterprise Capacity HDDs, installed and operated in this JBOD, will support CERN’s future need for growing data storage capacity in a reliable and energy-efficient way.”
Rainer Kaese, Senior Manager Business Development, Storage Products Division at Toshiba, said: “We continue to develop higher capacities, up to 30 TB and beyond, as HDDs are and will remain essential for storing the exabytes of data that CERN and the entire world produce in a cost-effective and energy-efficient manner.”
That’s a sideswipe at the idea that SSDs will replace disk drives for mass capacity online data storage.
Backup vendor Veeam is increasing its data security capabilities via an anti-ransomware partnership with Continuity Software to boost customer cyber-resiliency.
Continuity Software’s StorageGuard solution analyzes the security configuration of storage and backup systems. It says it scans, detects, and fixes security misconfigurations and vulnerabilities across hundreds of storage, backup, and data protection systems – including Dell, NetApp, Hitachi Vantara, Pure, Rubrik, Commvault, Veritas, HPE, Brocade, Cisco, Cohesity, IBM, Infinidat, VMware, AWS, Azure, and now Veeam.
Andreas Neufert
This Continuity collaboration follows a Veeam-Palo Alto deal in which apps are being integrated with Palo Alto’s Cortex XSIAM and Cortex XSOAR systems for better cyber incident detection and response.
Veeam’s Andreas Neufert, VP of Product Management, Alliances, stated: “Partnering with Continuity is an additional step towards helping our customers maintain a safer security posture in compliance with specific regulations including CIS Control, NIST, and ISO throughout their Veeam Data Platform life cycles. The partnership helps to ensure our industry-leading technology, [and] also the surrounding environment, is continuously checked for misconfigurations and vulnerabilities to withstand cyberattacks, as well as adhering to ransomware protection best practices.”
Gil Hecht
Continuity becomes a Veeam Technology Alliance Partner (TAP), and the two companies say StorageGuard will provide automatic security hardening for environments to improve customers’ security posture, comply with industry and security standards, and meet IT audit requirements.
We’re told StorageGuard is a complementary offering to the Veeam Data Platform, enabling customers to automatically assess the security configuration of their environment, while validating the security of all backup targets, including disk storage systems, network-attached storage (NAS), cloud, and tape that connect to customers’ environments.
StorageGuard can prove audit compliance with various security and industry standards, such as ISO, NIST, PCI, CIS Controls, DORA, and so on.
Continuity CEO Gil Hecht said: “The partnership with Veeam is a testament to the powerful value proposition StorageGuard delivers. Veeam customers can get complete visibility of security risks across all their backup and data protection environments, while ensuring their Veeam and backup storage systems are continuously hardened to withstand cyberattacks.”