
NetApp, SpectraLogic link up for on-prem archival storage

SpectraLogic is providing validated on-premises object archival storage on tape for NetApp StorageGRID systems as an affordable alternative to S3-based public cloud long-term retention.

StorageGRID is NetApp’s S3-based object storage product with its software running in a virtual machine, bare metal server or NetApp appliance, and using SSD or disk-based storage. It can store billions of objects in a single namespace and scale up to 16 datacenters worldwide. Spectra’s on-prem archive pairs a tape library system and its BlackPearl object interface and storage device to provide an S3-to-tape front end to Spectra tape library systems. The key point here is that an On-Prem Glacier archive can provide both faster access and lower costs than an AWS S3 archive store.

Vishnu Vardhan, Director of Product Management for Object Storage at NetApp, said: “The Spectra On-Prem Glacier solution provides NetApp StorageGRID customers with the ability to add a Glacier tier configured with object-based tape to extend the capacity and reduce the costs of their long-term on-prem object storage.

Spectra diagram

“The validated integration gives organizations more flexibility in how they store their data, especially archives and backups that are not in active use. By combining both technologies, our customers can get both the agility of flash and the longevity of tape to help ensure their data is always ready when they need it.”

Spectra and NetApp say that on-prem object stores can fill up with old and cold data. If that data is tiered or mirrored to a backend long-term retention store based on tape, it can offer faster retrieval than Amazon’s S3 Standard and S3 Glacier storage tiers, and up to two-thirds lower cost because there are no egress fees. Of course, you’ll need a relatively large amount of data in an on-prem Glacier for the cost savings to appear, as the necessary hardware and software must be purchased and maintained.
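The egress-fee arithmetic is easy to sketch. The per-GB rates below are illustrative assumptions, not published AWS or Spectra pricing:

```python
# Back-of-envelope comparison of archive retrieval fees.
# All per-GB rates are illustrative assumptions, not quoted prices.

def cloud_retrieval_cost(tb_retrieved, egress_per_gb=0.09, retrieval_per_gb=0.01):
    """Cloud archive: per-GB retrieval fee plus egress back to on-prem."""
    gb = tb_retrieved * 1024
    return gb * (egress_per_gb + retrieval_per_gb)

def onprem_retrieval_cost(tb_retrieved):
    """On-prem tape: no per-retrieval or egress fees once hardware is owned."""
    return 0.0

print(f"Cloud:   ${cloud_retrieval_cost(100):,.2f}")
print(f"On-prem: ${onprem_retrieval_cost(100):,.2f}")
```

At these assumed rates, retrieving 100 TB from a cloud archive incurs roughly $10,000 in fees alone, while the on-prem tape archive charges nothing per retrieval – the trade-off is the upfront hardware and maintenance spend.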

There are three tiers of Spectra’s On-Prem Glacier with varying retrieval speeds – Instant Retrieval, Eco, and Archive – as a table indicates: 

NetApp, SpectraLogic table

The On-Prem Glacier provides air-gapped tape and object lock to secure the data and provides an on-premises hybrid cloud. It can be used as a replication endpoint for the StorageGRID CloudMirror service to create a protected copy of the objects.

Chris Bukowski, Senior Manager, Product Marketing, Spectra Logic, said: “We’re excited that NetApp has validated Spectra On-Prem Glacier for use by StorageGRID customers … We recognize the significance of the increasing costs of storing data in the cloud. This integration provides a valuable new option for reducing cloud data retrieval and accompanying egress fees, while scaling to near-limitless capacity with less complexity.”

Access a NetApp solution brief document here. A short NetApp blog provides that company’s perspective on the partnership.

Comment

Object storage tends to grow and grow. Moving stale objects to background storage as a way of saving space and cost on the primary object store can make sense once the cost savings are greater than the cost of the backend storage itself and the value of faster data access is factored into the equation.

Bootnote

Spectra tells us: “The BlackPearl appliance comes in two physically separate versions, BlackPearl NAS and BlackPearl S3. Only the BlackPearl S3 includes S3 Glacier and an object-based interface to tape.”

QiStor secures funding for key-value data store acceleration

Startup QiStor has scored pre-seed funding to develop hardware acceleration for key-value data store access.

Generalized server CPUs have been augmented by specialized hardware accelerators for some time now, ranging from RAID controllers through SmartNICs (Bluefield), DPUs, and SQL accelerators (Pliops) to separate GPUs. The idea is to build focused hardware that runs a particular kind of processing faster than an x86 CPU, freeing the CPU to run more app code. Key-value stores hold variable-length data addressed by a data string or key, and underpin low-level storage engines such as RocksDB, used by Redis on Flash and MongoDB.
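As a minimal illustration of the access pattern involved, a key-value store maps arbitrary string keys to variable-length values. The sketch below shows the interface only; production engines like RocksDB add LSM-tree persistence, compaction, and crash recovery, which is where hardware acceleration can pay off:

```python
from typing import Optional

# Minimal in-memory key-value store: variable-length values addressed by a
# string key. Interface sketch only - real engines such as RocksDB persist
# data in sorted on-disk structures (LSM trees) with compaction and recovery.

class KVStore:
    def __init__(self):
        self._data = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[bytes]:
        return self._data.get(key)

    def delete(self, key: str) -> None:
        self._data.pop(key, None)

store = KVStore()
store.put("user:1001", b'{"name": "Ada"}')
print(store.get("user:1001"))  # b'{"name": "Ada"}'
```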

Silicon Valley-based QiStor’s founders have set out to run key-value store (KVS) data access operations faster and in a Platform-as-a-Service (PaaS) business model.

Founding CEO Andy Tomlin stated: “Our revolutionary service reduces [compute] power by 10x, enabling us to offer our customers high-performance solutions more economically and with reduced environmental impact … Just as the GPU is essential for AI, our technology will play a similar role for key-value.”

Tomlin was a fellow at Kioxia from 2020 to 2022, a CTO at devicepros LLC, and VP Engineering at Samsung’s closed-down startup subsidiary Stellus, which launched a key-value store-based all-flash array with NVMe-oF access in May 2020.

QiStor’s other founders are architect Justin Jones and design lead Chris Brewer. Its board includes John Scaramuzzo. Jones, who has authored about 30 storage patents, was a principal engineer at Stellus and, before that, at Samsung Electronics America. Brewer was an ASIC architect engineer at Toshiba America and, prior to that, a principal engineer for ASIC design at SandForce, LSI, and Seagate.

Scaramuzzo has 30-plus years’ storage industry experience, holding CRO and CEO advisor roles at troubled Nyriad, and executive positions with Western Digital, SanDisk, Seagate, and Maxtor. He founded, led, and sold SMART Storage Systems to SanDisk for $307 million.

The funding round was led by datacenter expert Samir Raizada, who joins QiStor’s board. He said: “Our experience in the datacenter shows that QiStor is addressing an important customer need in a fast-growing market, especially with increasing AI demand. The QiStor team experience and technology really impressed the investors.”

Now that it has scored initial funding, QiStor plans to develop its technology further and expand its reach. We asked how its hardware acceleration unit would connect to a host server. Andy Tomlin told us: “QiStor’s algorithms run on FPGAs which connect via PCI and are directly provisioned in existing cloud platforms. No ASICs or custom add-in cards are needed.”

The company says key-value data stores enable every modern app in the social media, mobile, web, AI, and gaming spaces to store data at scale. Tomlin thinks that a value proposition of QiStor is hardware acceleration in a Platform-as-a-Service (PaaS) operation versus needing to develop and sell an expensive ASIC and card. This will be “much easier and cost efficient for a customer to deploy.”

This positions QiStor against Pliops, which has developed its XDP add-in card to provide a key-value store interface and functionality to an NVMe SSD to speed up applications such as relational and NoSQL databases, and stores such as MySQL, MongoDB, and Ceph.

Komprise unveils elastic replication for non-critical data

Komprise has pushed out elastic replication that it says provides more affordable disaster recovery for non-mission-critical file data at sub-volume level.

Komprise’s Intelligent Data Management product can tier data between fast, costly storage and less expensive but slower-access storage on-premises or in the public cloud. It says that disaster recovery of mission-critical file data is traditionally provided by mirroring NAS systems to a remote site, which synchronously replicates data at the volume level to an identical target NAS system using the same infrastructure. If the source NAS is affected by ransomware, that gets replicated as well.

Kumar Goswami

Komprise CEO Kumar Goswami states: “Our customers are uneasy about not having disaster recovery plans for all unstructured data, but as unstructured data volumes continue to balloon, the one-size-fits-all mirroring approach is too expensive for most.”

The underlying assumption is that customers also wish to secure their non-mission-critical data against disasters, but find it prohibitively expensive. It’s important to note that unstructured (file) data varies in criticality, ranging from “must not lose” to “doesn’t really matter.”

Komprise is providing so-called elastic replication at the sub-volume level so that shares or directories can be asynchronously replicated to a target file or object store that could be in the public cloud. It implies that this provides time for ransomware affecting such files to be detected and cleaned before the replication, and hence is safer than sync replication, which is nearly instantaneous. The replication can be to an immutable, object-locked target to provide resistance to ransomware attacks against the target systems.

Its replication is based on snapshots and the file metadata, including versioning, is retained if the files are sent to object targets, meaning that files can be restored from them with no loss of fidelity. The replication schedule can be based on user-set policies.
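Komprise has not published its policy engine’s internals. As a hedged sketch under assumed field names, sub-volume selection amounts to filtering a file catalog by directory prefix and age before queuing the matches for asynchronous copy:

```python
import time

# Hedged sketch of sub-volume, policy-driven selection for async replication.
# Field names and thresholds are illustrative assumptions, not Komprise's API.

def select_for_replication(files, directory_prefix, min_age_days):
    """Pick files under one directory (sub-volume scope) older than a cutoff."""
    cutoff = time.time() - min_age_days * 86400
    return [f["path"] for f in files
            if f["path"].startswith(directory_prefix) and f["mtime"] < cutoff]

now = time.time()
catalog = [
    {"path": "/share/projects/old.dat", "mtime": now - 90 * 86400},
    {"path": "/share/projects/new.dat", "mtime": now - 1 * 86400},
    {"path": "/share/scratch/tmp.dat",  "mtime": now - 400 * 86400},
]
# Replicate only /share/projects files untouched for 30+ days:
print(select_for_replication(catalog, "/share/projects", 30))
# ['/share/projects/old.dat']
```

Because the copy is asynchronous, the replication window leaves time for ransomware scanning before anything lands on the (ideally object-locked) target.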

Komprise elastic disaster recovery

Komprise also claims that it can cut the cost of disaster recovery by 70 percent or more from the synchronous NAS-to-NAS schemes, with the source for this claim described in a blog going live later today.

Goswami said: ”We are excited to help organizations customize disaster recovery so they can afford the protection they need for all of their data, within tight budgets.”

Peer Software also provides asynchronous replication for Windows Server, as does Dell with its PowerFlex and StorONE for its S1 systems. However, these solutions are system-specific, whereas Komprise is not.

Elastic replication is provided in a winter release of the Intelligent Data Management software. This release also allows users to save custom report configurations and maintain multiple versions of any type of report, and supports tiering from Pure Storage FlashBlade//S to an on-premises FlashBlade//E target. As we wrote in March, the FlashBlade//E evolved from the FlashBlade//S and adds storage-only blades to the existing compute+storage blades to increase capacity and lower cost. Moving rarely accessed files and objects from the //S to the //E makes space on the //S for more important data. Pure resells Komprise software.

Clean sweep for NetApp, Pure and Infinidat in GigaOm’s mid-size primary storage Radar

measuring tape

As in the enterprise storage Radar, NetApp, Pure, and Infinidat are the clear and only Leaders and Outperformers in GigaOm’s mid-size enterprise storage Radar report and chart.

The Radar report looks at suppliers’ products for a market sector based on key and emerging functional features beyond table stakes, plus business criteria such as upgradability and efficiency, from a value point of view. The Radar diagram features a series of concentric rings, with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes – balancing Maturity versus Innovation and Feature Play versus Platform Play – with a prediction of the supplier’s development speed over the coming 12 to 18 months, from Forward Mover through Fast Mover to Outperformer.

A companion Key Criteria report looks at functional items in more detail. This Radar report also includes consideration of departmental needs in large enterprises. This year’s edition of the primary storage for mid-size enterprises Radar examines 14 vendors and 15 products – both IntelliFlash and Tintri arrays from DDN are evaluated. Last year’s edition looked at 13 vendors.

Overall, the analysts, Max Mortillaro and Arjan Timmerman, note that AI-based analytics, helping to defend against ransomware attacks, and STaaS are development areas in this mature mid-size enterprise primary storage market. Electricity price increases have caused a renewed emphasis on energy efficiency and carbon footprints.

Compared to last year’s report, Synology has entered the arena, and Dell, Hitachi Vantara, and IBM have all been demoted from the Leaders circle to the Challenger ring. The bulk of the suppliers – 11 of the 13 – are in the bottom half circle in three quite tight groups. The other three are classed as more mature and spread around the outer entrants’ ring or very close to it.

StorOne, StorPool and Lightbits Labs (NVMe/TCP-based storage) are classed as Challengers in the innovation-feature play quadrant, with Synology rated as a new entrant. Dell (PowerStore), Hitachi Vantara (VSP E-Series), HPE (Alletra 5000 and 6000) and DataCore (SANsymphony) are Challengers in the Innovative-Platform Play quadrant. All eight are fast movers – not developing as fast as the three Leaders. The report authors call out Lightbits’s blisteringly fast technology.

DDN (IntelliFlash), IBM (Spectrum Virtualize, FlashSystem 7300), and iXSystems (TrueNAS) are placed in the Maturity-Platform Play quadrant and judged to be forward movers. DDN (Tintri) is an outlier, a forward mover, all on its own in the Maturity-Feature Play quadrant.

The report authors point out that DDN’s NexentaStor, evaluated in the 2023 edition, is no longer actively developed – existing customers get only maintenance releases. DDN’s Tintri is now very mature and stable and there has been little development from its core design center of presenting VMware-specific storage features with no file or block abstractions. The authors observe that it was architected to solve a niche use case and has little potential for evolution.

A glance at the 2023 edition’s Radar chart shows a more evenly distributed set of suppliers in the Challengers ring. There has been a substantial rise in innovation since then, causing the clumping into two supplier groups in the bottom half of the 2024 chart.

We are left with the impression that this mature market has seen renewed development because of ransomware encouraging AI analytics, and a cost efficiency focus helping STaaS offers proliferate. That has also meant green agenda items – such as energy efficiency and a lower carbon footprint – gain a stronger emphasis.

It may be that DDN’s Tintri and IntelliFlash products will enjoy renewed development this year, as might IBM’s FlashSystem/Spectrum Virtualize product, and not be outliers in the 2025 edition of this report. GigaOm subscribers receive a copy of the report. If you are not a subscriber then its detailed contents are not available.

NetApp and VAST Data take rivalry to the F1 racetrack

NetApp has partnered with the Aston Martin Aramco Racing team and VAST Data is linking up with the Williams Racing team as the two file storage suppliers duke it out in Formula One.

Formula 1 sponsorships are big business in the world of motor sports. Businesses that sponsor F1 teams include financial, telecom, tech, and consumer brands wanting exposure to F1’s global audience. Examples include Petronas (Mercedes), Aramco (Aston Martin), and Oracle (Red Bull). Many sponsorship contracts are valued in the tens of millions on an annual basis. Title and principal sponsorships can be $50 million-plus per year with top teams. NetApp and the Aston Martin Aramco team have renewed a three-year agreement where NetApp is the team’s Global Data Infrastructure Partner.

Clare Lansley and George Kurian, NetApp CEO

Clare Lansley, Aston Martin Aramco CIO, said: “NetApp has been with us all the way on this journey together and are fundamental to our trackside operations and at our Headquarters at the AMRTC in Silverstone. We use data to improve our performance and go faster and NetApp’s work with the team is vital to this success.”

We’re told that the Aston Martin Aramco team collects data from hundreds of sensors, including real-time performance statistics such as track temperature, tire degradation, and aerodynamics. Instant access to that data enables the team to adapt its race strategy in real time. Sharing data between the track and team headquarters is a mission-critical function.

Aston Martin Aramco reached the podium eight times, scored 280 points, and finished fifth in the Constructors’ Championship in the 2023 Formula One season, partly by using the data NetApp stores and manages. NetApp provides its FlexPod, Cloud Volumes ONTAP, Cloud Insights, and Storage Workload Security products and services to the Aston Martin team.

We’re told the sheer speed at which Aston Martin Aramco can harness its stored data gives it a competitive advantage. NetApp CMO Gabie Boko said: “When Aston Martin returned to the Formula One circuit after more than 60 years away, they needed a technology partner to help them rise to every moment, both on and off the track. NetApp provides Aston Martin Aramco with an intelligent data infrastructure that runs at the speed of Formula One.” 

VAST and Williams

VAST Data has joined Williams as an Official Partner and technology vendor for the 2024 season and beyond. Its logo will be plastered on driver overalls and the FW46 cars driven by Alex Albon and Logan Sargeant in the upcoming Formula One season.

The yet-to-be revealed FW46 car for the 2024 season

In a typical race weekend, the hundreds of sensors on a Williams F1 car will generate 1 TB of data, and there are two cars per race. Williams said designing and simulation testing the car generates hundreds more terabytes. Understanding the in-race and design-and-test data is critical to on-track success, and VAST Data’s skills in managing and processing large datasets can help optimize the team’s performance, we’re told.

Peter Gadd, VP International at VAST Data, said in a statement: “VAST Data is thrilled to be an Official Partner of Williams Racing. This partnership symbolises our commitment to pushing the boundaries of technology and performance. Williams Racing’s legacy of innovation and excellence in Formula 1 aligns perfectly with our vision of revolutionising data-driven insights in high-stakes environments.

“By bringing our advanced data management capabilities to the forefront of motor racing, we are not just sponsoring a team; we are driving a new era of technological synergy between data science and the pinnacle of motorsport.”

Pat Fry, Williams Racing CTO, said in his statement: “F1 teams generate enormous amounts of data every day, so we’re privileged to partner with Vast Data whose expertise in managing and processing large datasets will play a crucial role in optimizing our performance. The collaboration will allow us to harness the full potential of our data and help move us up the grid.”

The Williams team has faded since the glory days of the ’90s when its drivers won world championships and Williams itself won Constructors’ championships. The team was bought by private equity business Dorilton Ventures for around $200 million in 2021 and aims to revive its racing fortunes.

Not every sponsorship is a direct cash payment. Some sponsorships involve companies providing technical support, R&D partnerships, or supplying components to the team. Exact details of the sponsorship arrangements for VAST Data and NetApp are confidential.

The two storage suppliers are fighting a marketing war using Formula One racing team proxies.

Rubrik planning IPO after US fraud investigation completes

Rubrik is planning an IPO in April when a fraud investigation in the US should be finished, according to a Reuters report.

Bipul Sinha.

The firm began its startup life as a data protector with backup and restore software, and has since moved into cyber security and resilience. It was founded in 2014 by ex-venture capitalist and CEO Bipul Sinha, CTO Arvind ‘Nytro’ Nithrakashyap, VP engineering Arvind Jain and Soham Mazumdar. It has raised in excess of $550 million with an E-round for $260 million in 2019 followed by a Microsoft equity investment in 2021, at a valuation of around $4 billion. The business has grown quickly, having gained >5,000 customers and $600 million annual recurring revenue. It hired bankers – Goldman Sachs, Barclays and Citigroup – to work on an IPO in June last year according to an earlier Reuters report and has filed IPO papers with the SEC.

Bloomberg suggested in September last year that it could IPO by the end of 2023 and Pitchbook notes Rubrik has raised about $1 billion in funding.

Sinha has fought hard to keep Rubrik on top of emerging trends in the market. It announced a Ruby Gen AI copilot in November last year, adopted zero trust principles in its product, provided a ransomware guarantee, moved into SaaS app data protection, set up an MSP business, and acquired Laminar for its data security posture management software.

Rubrik competes with legacy data protection vendors, such as Commvault, Dell and Veritas, relative monster newcomer Veeam, Druva, fellow startup Cohesity – which has evolved into a data management company as well as espousing cyber resilience – and a mass of smaller suppliers such as HYCU and Asigra. Cohesity made an IPO filing at the end of 2021 but no IPO has yet taken place.

Reuters based its report on people familiar with the matter, and said the US Department of Justice is investigating a former sales division employee and the diversion of an undisclosed amount of funds from 110 US government contracts, worth $46 million, into a separate business vehicle. If the investigation finishes in March then Rubrik, which is co-operating with the Justice Department, could run its IPO in April.

Neither Rubrik, Microsoft nor the US Department of Justice commented on the Reuters story. When we asked Rubrik a spokesperson replied: “We decline to comment.”

Data steward Rimage pivoting from optical disk publishing to data lifecycle management

Established optical disc-based archiver Rimage is pivoting to data management, announcing an AI-powered digital asset metadata extraction and search product, with advanced optical storage on its roadmap.

Rimage describes itself as a global leader in on-demand enterprise and consumer digital publishing to CD, DVD, and Blu-ray discs and USB sticks, with 194 channel partners and over 20,000 systems installed by more than 3,000 customers in law enforcement, government, manufacturing, health, and finance. It has just launched its SOPHIA digital lifecycle management system and has an Electronic Laser Storage (ELS) initiative underway focused on emerging optical disk technology. Rimage presented its SOPHIA product and ELS plans to an IT Press Tour in Silicon Valley in January. Before we examine these, it is instructive to understand Rimage’s history and grasp a key takeaway.

Rimage history

It was founded in 1987 as IXI Inc. in Minnesota, its founders having bought floppy disk manufacturing assets from the previous owner. This evolved into CD-ROM printing, with colored cartridge labels and direct thermal printing on the disks, and CD-R and DVD-R duplication equipment. Rimage went public in 1992. The diskette business became loss-making after 1995 and the company put its main focus on the duplication systems and full-color images on CDs. DVDs came along in 2000 and Rimage produced DVD burners and photo-quality printing systems for optical disk labels. High-definition Blu-ray disk support was added in 2007. It acquired Qumu in 2011 for $57 million to enter the enterprise video content industry. Rimage changed its name to Qumu in 2013 but then ran into problems with its declining disk publishing business, while the video content business had healthy prospects.

Rick Bunce.

Qumu Corp sold its Rimage assets to private equity business Equus Holdings in 2014 for $23 million, with Equus resurrecting the Rimage name. It did not prosper as well as hoped, and Christopher Rence was appointed Rimage CEO in 2020 to continue an ongoing transformation. The pivot to digital data asset management is taking place as part of Rence’s strategy and product line restructuring. COO and CRO Rick Bunce told the Press Tour audience that, like Smith Corona with its typewriters, Rimage had to change. It has been retreating from some verticals – e.g. finance – as its product offer stagnated, and needs new products.

The takeaway to bear in mind is this: Rimage has always been about the production of disks – from floppy disks through CDs and DVDs to Blu-ray disks. Although it uses optical disks for archiving data, it has not been involved with archiving per se, specifically with archiving software and archiving data to tape. Its DNA is centered on disks, not ribbons of digital tape. SOPHIA could change that.

The pivot

Bunce told the Press Tour that AI is a new trend, some of it LLM (Large Language Model)-based and some purpose-built. He identified AI that is used for storage, and storage that is used for AI, and asserted that purpose-built AI is what’s going to matter and that machine learning is key.

Rimage’s broader strategy has hardware (Enterprise LaserStorage) and software (Data Lifecycle Management) elements that help customers to manage data from birth to destruction. Rimage now sees itself as a digital steward and thinks the data lifecycle management (DLM)/document asset management (DAM) market is fragmented with many suppliers and no one dominant vendor. That means a new entrant can prosper.

It has two product categories in its data stewardship portfolio: DLM and ELS. SOPHIA is the first product in the DLM area and is AI-powered.

SOPHIA

SOPHIA is from the Greek for knowledge and wisdom. Rimage’s product is the first of a set of modules, software abstractions above storage hardware, and is concerned with Digital Asset Management (DAM).

Rimage claims it seamlessly automates data ingestion from any source and builds a global repository to organize and optimize digital assets in a streamlined and secure way. The product uses AI to help it extract and generate contextual metadata from ingested digital assets and users can also customize filter fields, upload requirements and workflow controls to make its operation more efficient. Search is AI-powered with metadata filtering capabilities. There are permissions controls to help with security and governance features to aid compliance.
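Rimage has not detailed SOPHIA’s ingestion pipeline. As a rough, stdlib-only illustration (field names are assumptions), metadata extraction at ingest might begin with basic attributes like these, with AI-generated contextual tags layered on top:

```python
import hashlib
import mimetypes
import os

# Rough sketch of metadata extraction at asset ingest. SOPHIA's actual
# pipeline is not public; field names here are assumptions. An AI layer
# (e.g. image labeling) would add contextual tags on top of these basics.

def extract_metadata(path: str) -> dict:
    stat = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    mime, _ = mimetypes.guess_type(path)
    return {
        "path": path,
        "size_bytes": stat.st_size,
        "mime_type": mime or "application/octet-stream",
        "sha256": digest,  # content fingerprint for integrity and dedupe
    }

# Index a small sample asset.
with open("asset.txt", "w") as f:
    f.write("hello")
print(extract_metadata("asset.txt")["mime_type"])  # text/plain
```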

SOPHIA has file synchronization features that make files available at any location, whatever the network connection, and sync all changes back to the SOPHIA library. It works with any native Windows or macOS application, eliminating any need for plug-ins or third-party applications. Linux support is coming.

SOPHIA integrates with all of the Rimage family of products and supports AI-driven data management.

The software is not entirely home-grown. Bunce explained: “We use a number of partners bringing infrastructural software into Sophia. We buy or license it and build on top of that. … We will slot the new software into our existing management layer.” One of its partners is Germany-based PoINT Software, which produces archive software, including software that moves archive data to object storage on tape.

SOPHIA covers part of the DLM feature set. Rosen said Destruct (data destruction) is coming soon and Rimage is working hard on developing deeper analytics.

The product uses Google Vision AI and Rimage has written some of its own AI software.

SOPHIA is data agnostic, supporting file, block, object, structured, and unstructured data. It has customizable AI features, an API, and flexible deployment, either on-premises, as SaaS, or in hybrid form. SaaS SOPHIA can use Azure or AWS, or horizontal/vertical MSPs.

ELS

The term ELS, Electronic Laser Storage, refers back to Rimage’s CD/DVD/Blu-ray heritage as all these optical disk drives use lasers for reading and writing. Their use in archiving is falling away because of capacity limitations. Enterprise sales director Jeff Rosen noted: “Blu-ray has a maximum of 200GB/disk. It needs increasing and the density will scale.”

Optical disks can last for 50 years and complement LTO tape by having better (faster) random data access. Tape is not actually physically immutable, Rosen explained, being subject to EMPs (Electro-Magnetic Pulses). Optical disk is a safer medium in this regard. He also observed: “China has a mandate that all of its archive data will be optical disk in five years and not tape [and] China will effectively restart an optical disk industry. A Chinese company is building what we’re doing.”

Rosen and Bunce also revealed that multi-layered optical disk density improvements are coming from suppliers such as Sony, Folio Photonics, and Pioneer. In other words, optical disk is going to make a comeback. But these products don’t exist yet.

He dismissed Microsoft’s Project Silica though, saying: “Microsoft has abandoned glass – it’s buried maybe in R&D.”

Comment

The ELS product, when it arrives, will be optical disk-based, providing an upgrade path for Rimage’s installed base of more than 3,000 customers with 20,000 or so optical disk printing and publishing machines.

The general theme of DLM and archiving software is that it is media-agnostic. Digital archives can live in tape libraries, on optical disk, even on spun-down disk, or on whatever devices are used by the public clouds underneath their S3 or Blob storage abstractions – be they disk or tape. 

Based on this thought, Rimage could embrace LTO tape technology, also object-to-tape technology, and set up alliances with tape library system vendors such as Quantum and SpectraLogic. It should perhaps join the Active Archive Alliance and learn what the archive software players are doing. 

Yes, the data lifecycle management market is fragmented, with players including Komprise, Datadobi, Data Dynamics, SpectraLogic, Arcitecta, Quantum, and StorMagic. But it is also quite mature in a software sense. We can expect all the existing archive software system players to add AI features to their products, with AI becoming table stakes. It is already happening in the data protection market. On the hardware side, new glass-based media technologies will likely come along, but the archive storage software layer will adapt to them and not be revolutionized or replaced.

If Rimage now sees itself as a data lifecycle management steward then turning its face away from the tape archive system market may seem like it is deliberately missing an opportunity.

Bootnote

Rimage has a range of additional products, which include:

  • Rimage Protection Shield for cyber security;
  • Rimage Data Solutions to gather and ingest digital assets from various devices onto multiple storage platforms;
  • Rimage Data Preservation for long-term retention.

Storage news ticker – February 4

Data protector Acronis has become a member of the Microsoft Intelligent Security Association (MISA), an ecosystem of independent software vendors (ISV) and managed security service providers (MSSP). MISA members have integrated their products with Microsoft security technology to build a better defense against increasing cybersecurity threats.

Dell’Oro Group predicts that the SmartNIC market will exceed $5 billion by 2028. Accelerated computing will continue to push the boundaries in server connectivity, demanding port speeds of 400 Gbps and higher.

  • The total Ethernet Controller and Adapter market, excluding the AI backend network market, is forecast to exceed $8 billion by 2028.
  • The majority of accelerated servers will have server access speeds of 400 Gbps and higher by 2028.
  • SmartNICs are expected to cannibalize standard NICs during the forecast period.
Jim O’Dorisio, HPE storage exec

HPE has hired Jim O’Dorisio as its SVP and GM for storage, replacing the promoted Tom Black. O’Dorisio comes from being SVP and GM at Iron Mountain. He was VP and COO of business and technology at EMC before that, in the period before Dell bought EMC, spending 14 years at EMC altogether. He’ll report to Fidelma Russo, EVP of Hybrid Cloud at HPE, as does Black. O’Dorisio and Russo overlapped at Iron Mountain and EMC.

Storage industry analyst firm DCIG has named Infinidat‘s InfiniBox and InfiniGuard solutions among the top five cyber secure backup targets. DCIG reviewed 27 different 2 PB+ cyber secure backup targets as part of its independent research into the enterprise market, where ransomware and malware attack backup targets first to hinder an enterprise’s recovery from a cyberattack. DCIG opted to focus its report solely on cyber secure backup targets that support NAS interfaces. The top five suppliers/products were ExaGrid EX189, Huawei OceanProtect X9000, Infinidat InfiniBox/InfiniGuard, Nexsan Unity NV10000, and VAST Data’s Data Platform. The “2024-25 DCIG TOP 5 2PB+ Cyber Secure Backup Targets Global Edition” report is now available here.

Neurelo emerged from stealth, introducing an extensible data access platform designed to improve developer productivity by simplifying and accelerating the way developers build and run applications with databases. The Neurelo Cloud Data API Platform is generally available as of today, providing auto-generated, purpose-built REST and GraphQL APIs, AI-generated custom-query endpoints, deep query observability, and Schema as Code. The company has secured $5 million in seed funding led by Foundation Capital with participation from Cortical Ventures, Secure Octane Investments, and Aviso Ventures.

Torsten Volk, managing research director, Enterprise Management Associates (EMA), said: “Neurelo turns data sources into centrally controlled and governed APIs that developers can simply consume instead of having to worry about the intricacies of the specific type of database. The ability for DBAs, cloud admins, IAM admins, and other relevant roles to ensure consistency and compliance of individual databases organization-wide instead of chasing after each data source separately is a significant upside of the Neurelo platform.”

Belgian company MT-C S.A., which produces the Nodeum HPC data mover product, is renaming itself Nodeum.

Own Company, a SaaS data protection supplier, launched a global Channel Partner Program so resellers and system integrators can prevent their customers from losing mission-critical data and metadata. With automated backups and rapid recovery, Own partners will be equipped with the resources, skills, and support necessary to generate new lines of business and increase profit margins.

Scale-out filer Qumulo announced the availability of Superna Data Security Edition for Qumulo and its Ransomware Defender, which automates real-time detection of malicious behaviors and other events consistent with ransomware access patterns for both SMB and NFS files. Superna Data Security Edition for Qumulo detects malicious behavior at the onset (often referred to as the “burrowing event”) and automates access lockout. Superna’s native integrations with SIEM, SOAR, and ticketing platforms automatically alert administrators and other users involved in incident response, providing them with the information required to accurately prioritize the incident. Get more information here.

Wells Fargo analyst Aaron Rakers met Seagate CFO Gianluca Romano and told subscribers: “Seagate is not focused on, nor does it expect, its leadership position in HAMR-based HDDs to drive nearline market share expansion.” No need for Western Digital and Toshiba to worry then.

The Financial Times reports that SK hynix will build a plant in Indiana to stack DRAM chips into HBM units for packaging with Nvidia GPUs in other, possibly TSMC, plants in the USA.

The SNIA’s Storage Management Initiative (SMI) announced that the Swordfish v1.2.6 bundle is now available for public review. Swordfish provides a standardized approach to manage storage and servers in hyperscale and cloud infrastructure environments. Swordfish v1.2.6 key features include:

  • New “NVMe-oF and Swordfish” white paper, which discusses how NVMe over Fabrics (NVMe-oF) configurations are represented in Swordfish Application Programming Interface (API).
  • New metrics for FileSystem, StoragePool, StorageService, and enhancements to VolumeMetrics.
  • New mapping and masking models using Connections in the Fabric model, deprecating StorageGroups.
  • Support for new volume properties: ProvidingStoragePool, ChangeStripSize, and Asymmetric Logical Unit Access (ALUA) to manage reservations.
  • Enhancements to NVMe Domain Management, including ALUA support.
  • Updates to NVMe namespaces, such as simplified Logical Block Address (LBA) Format representation and multiple namespace management.
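
Swordfish builds on DMTF Redfish, so resources such as the new VolumeMetrics are plain JSON payloads addressed by `@odata.id` paths. As a rough sketch of what a client sees (the path and property names below are illustrative, not copied from the v1.2.6 schema bundle):

```python
import json

# Illustrative Swordfish-style VolumeMetrics payload; real property names
# come from the published Swordfish schema bundle, not this example.
sample = """
{
  "@odata.id": "/redfish/v1/Storage/1/Volumes/1/Metrics",
  "@odata.type": "#VolumeMetrics.v1_0_0.VolumeMetrics",
  "Id": "Metrics",
  "ReadIOKiBytes": 1048576,
  "WriteIOKiBytes": 524288
}
"""

metrics = json.loads(sample)
total_kib = metrics["ReadIOKiBytes"] + metrics["WriteIOKiBytes"]
print(metrics["@odata.id"], total_kib)
```

A real client would issue an authenticated HTTP GET against a Swordfish service endpoint and parse the response body in exactly this way.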

Startup Zilliz, which supplies the open source Milvus vector database, has a forthcoming Zilliz Cloud service update targeting RAG/GenAI, recommendation systems, and cybersecurity/fraud detection. RAG/GenAI capabilities will power autonomous agents that replace human support agents. Recommendation systems will power ads, product, and news recommendations based on user preferences and actions taken. The cybersecurity features will be applied in banking transactions and antivirus systems to quickly identify anomalies in new data by finding similarities within a short time frame.
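
The similarity matching behind these use cases can be illustrated in a few lines: embed items as vectors, then flag a new item whose embedding sits close to known examples. A minimal, dependency-free Python sketch (the vectors and threshold are invented for illustration; a production system would run the search inside Milvus or Zilliz Cloud over millions of vectors):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings of previously confirmed fraudulent transactions
known_fraud = [[0.9, 0.1, 0.2], [0.8, 0.3, 0.1]]

# Embedding of an incoming transaction
new_txn = [0.85, 0.2, 0.15]

# Flag the transaction if it is very similar to any known fraud vector
score = max(cosine(new_txn, f) for f in known_fraud)
is_suspicious = score > 0.95
```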

Weka bags another GPU-as-a-Service farm customer

UK-based NexGen Cloud has signed up Weka to provide parallel access filesystem software for its GPU-as-a-Service customers, becoming Weka’s third GPU cloud farm customer.

GPU-as-a-Service (GPUaaS) operations are spinning up in response to the generative AI training and inferencing boom. They buy thousands of Nvidia GPUs and rent them out on a pay-per-use basis. Enterprises buying smaller numbers of GPUs or GPU servers are finding that the supply is constrained because Nvidia can’t get enough of them built to meet the demand. NexGen Cloud, started up in 2020, has one of the largest GPU fleets in Europe – including H100 GPUs – and it’s powered by renewable energy sources. NexGen says its services are up to 75 percent more cost-effective than legacy cloud providers. Weka’s Data Platform is a fast scale-out and parallel filesystem with integrated data services.

Weka president Jonathan Martin explained in a statement: “GPU cloud providers like NexGen Cloud will play a critical role in accelerating the next wave of AI innovation. The Weka Data Platform helps GPUs to run at peak performance and efficiency, reducing energy consumption and giving customers a much more sustainable way to run enterprise AI workloads – even at extreme scale.”

Weka NexGen Cloud video screengrab.

NexGen has an existing HyperStack offering and is developing an AI Supercloud. Both use Weka’s filesystem to provide file storage for customer data. NexGen plans to invest $1 billion to build its AI Supercloud in Europe, with $576 million already committed in hardware orders with suppliers. Deployment in European datacenters began late last year.

Chris Starkey, cofounder and CEO at NexGen Cloud, recalled: “When we started building our AI Supercloud solution, we looked at several data platforms and parallel filesystem solutions. The environment’s extreme scale and performance demands quickly removed other vendors from consideration.”

Weka being software-defined and hardware-agnostic was a characteristic that resonated with Starkey. “The Weka Data Platform immediately stood out, not only for its exceptional performance and low latency but also for its ability to maximize the efficiency of our GPU cloud with a hardware-agnostic, innovative software solution. It enabled us to leverage existing hardware investments and power all of our cloud services as efficiently and sustainably as possible, which is core to our mission.”

Weka competitor VAST Data has benefited from GPU cloud farm adoption, counting CoreWeave, Lambda Labs and Genesis Cloud as customers. Its pitch is that standard NFS is easier to use than parallel filesystems, like Weka’s Data Platform. Weka’s pitch is that its software provides very high performance, is hardware-agnostic, and has lots of data services.

Lambda Labs also has a storage deal with DDN.

Weka has existing filesystem supply deals with two North American GPUaaS providers: Applied Digital (under Sai Computing) and Iris Energy – a Canadian bitcoin miner and GPUaaS operator with datacenters in Canada, the US and Australia. Incidentally, Iris Energy also uses renewable energy sources.

Find out more about NexGen Cloud’s GPU cloud offerings powered by Weka here.

Ex-Veeam CTO set to rock up at Snyk

Veeam CTO Danny Allan

Danny Allan, the now-former chief technology officer at Veeam, is moving to developer code security business Snyk.

Snyk’s CEO, Peter Mackay, posted a comment on Allan’s LinkedIn post about his departure from Veeam, talking up the “next chapter” of Allan’s career “with his friends at Snyk”.

Mackay was co-CEO and President at Veeam from July 2016 to November 2018 and worked at VMware, Desktone, IBM and Watchfire before that. Allan was also at Veeam during the same period, and has spent time at VMware, Desktone, IBM and Watchfire too. The pair of execs appear to go way back.

Danny Allan.

Adi Sharabani, who also used to work at Watchfire and IBM, became a Snyk investor and advisor in November 2018 and remains in those roles, according to his LinkedIn profile. He was also appointed Snyk’s CTO in May 2022 and left that role in June last year. So Snyk has a CTO vacancy, and Allan is moving into the position.

A Snyk spokesperson told B&F: “Danny is joining Snyk as its new CTO on Tuesday.” That will be February 6, and Snyk will make a formal announcement then.

Snyk was founded in Tel Aviv and London in 2015 by Assaf Hefetz, Danny Grander, Guy Podjarny, and Jacob Tarango. Podjarny was the founding CEO and he gave way to Peter Mackay, who was an investor in the company, in June 2019. The headquarters are located in Boston.

The business has raised $1.2 billion in funding, according to Crunchbase, with the last G-round in 2022 pulling in $196.5 million to give Snyk a $7.4 billion valuation. However, an earlier $304 million F-round in 2021 was at a higher valuation of $8.5 billion. There was a $25 million corporate round last year with ServiceNow the investor.

Privately-owned Snyk reportedly achieved a 153 percent revenue increase from 2021 to 2022 to $147 million, when it had in excess of 2,300 customers. It had 1,135 employees at the end of 2022 but laid off 128 of them in April 2023.

Researcher proposes DNA-based computing platform

A researcher at the Rochester Institute of Technology (RIT) has devised a microfluidic lab on a chip that can perform artificial neural network (ANN) computation on data stored in DNA.

DNA storage relies on data being stored as specific combinations of the four nucleobases – cytosine (C), guanine (G), adenine (A), and thymine (T) – found in the double helix formation in the DNA biopolymer molecule. One method has pairs of these nucleobases symbolizing binary ones and zeros. Amlan Ganguly, computer engineering department head in RIT’s Kate Gleason College of Engineering, co-authored a scientific paper in which he envisions “a computing platform using DNA molecules that are capable of computation in-situ without the need for domain conversion of information from DNA to electronics.”
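
One widely used scheme gives each nucleobase a two-bit value (A=00, C=01, G=10, T=11); schemes that use base pairs to symbolize single bits work along the same lines. A toy Python codec, ignoring the error correction and homopolymer-run constraints a real DNA codec needs:

```python
# Two bits per base; the mapping is illustrative, not taken from the RIT paper.
B2BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE2B = {base: bits for bits, base in B2BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a string of nucleobase symbols."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(B2BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand string."""
    bits = "".join(BASE2B[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")   # an 8-base strand encoding two bytes
```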

Ganguly states: “DNA is excellent at storing information, in fact, it is much better than the electronic modes of memory because it is about 3-to-6 orders of magnitude more compact than most memory hardware that we have; it is also much more reliable and durable.”

He adds: “We proposed to represent numbers through concentrations of solutions containing specifically manipulated DNA molecules and computing operations as manipulation of DNA molecules – operations like addition and multiplication and other non-linear functions necessary for network computations can be performed. That is the bridge from storage to computation and using DNA as a vehicle to do the computation.” 

DNA storage diagram
Diagram from Ganguly’s paper

The paper states: “While biochemical reaction representing computations using DNA molecules are several orders of magnitude slower than electronic gates, their data density is 3 orders of magnitude higher and 8 orders of magnitude lower in energy consumption than solid state memory.”

The RIT integrated circuit (IC) based on microfluidics is suited for “highly dense, throughput-demanding bio-compatible applications such as an intelligent Organ-on-Chip or other biomedical applications that may not be latency-critical.”

The research paper abstract says: “It computes entirely in the molecular domain without converting data to electrical form, making it a form of in-memory computing on DNA. The computation is achieved by topologically modifying DNA strands through the use of enzymes called nickases.”

Data is represented stochastically through the concentration of the DNA molecules that are nicked at specific sites. A stochastic process has random or probabilistic dynamics over time, and the randomness is modeled mathematically to study the statistical properties of the process. The probabilities of different outcomes can be analyzed.
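
As a rough illustration, a value in [0, 1] can be represented by the fraction of molecules in a pool that carry a nick, and read back by estimating that fraction. A deterministic Python toy (the pool size and value are invented; the real readout is a noisy biochemical measurement):

```python
# Encode a number as the concentration of nicked strands in a pool.
value = 0.75                       # number to represent, in [0, 1]
pool_size = 1000                   # molecules in the pool
nicked = int(value * pool_size)    # strands nicked at the target site

# 1 = nicked strand, 0 = intact strand
pool = [1] * nicked + [0] * (pool_size - nicked)

# Reading back = estimating the nicked-strand concentration
readout = sum(pool) / len(pool)
```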

Ganguly says DNA computation and storage uses less energy than electronic storage and computation: “We are in the age of big data that needs to be stored somewhere. We don’t think that more datacenters are the answer, or even the best answer. Each datacenter requires the equivalent of a city block of power. Building, maintaining, and operating more traditional datacenters is not sustainable.”

Comment

DNA storage has little chance of replacing electronic storage media such as disk and solid state drives, which offer millisecond-class data access, while DRAM and CPU/GPU computation operate at microsecond-class speeds or faster. This is because reading and writing DNA, meaning sequencing and synthesizing DNA molecules, is slow: it takes hours. Ganguly admits that computation using biochemical reactions is several orders of magnitude slower than using a CPU or GPU.

VCs embrace continuation funds for longstanding startups

Startups that are over ten years old often disappoint their early venture capitalists (VCs) due to prolonged timelines for yielding returns. This locks up the VCs’ invested cash, making it unavailable for other ventures, such as startups with quicker exit potential.

Silicon Valley VCs are taking a leaf out of the private equity playbook by setting up continuation or secondary funds. These funds purchase investments in mature startups that have not yet exited through acquisition or IPO. The cash is used by the VC to return money to its investment partners, the LPs (Limited Partners), while the continuation fund retains the VC’s holdings in the startup.

According to the UK’s Financial Times, a VC startup investment can have a ten-year run time with a possible two-year extension. After that, and if there is no return prospect, they can shut the startup down or sell their holding at a discount.

Here is a table, not an exhaustive one, listing some long-life storage startups:

Many appear to be growing, such as ExaGrid, which claims to achieve record revenues virtually every quarter, and yet do not IPO.

Ditto the two object storage suppliers, Cloudian and Scality. They’re growing, but competing with ten-year-old MinIO and its $126 million in funding. Meanwhile, all the main IT storage suppliers, except HPE, have their own object storage tech, meaning an acquisition looks unlikely.

Many storage startups are thus delaying an IPO. Some, due to high valuations, face problematic acquisition exits – as in, who can afford them? Others face an unattractive IPO landscape as investor interests have shifted from the areas popular during their founding days to newer fields like big data analytics and AI. The same can be true of acquisitive businesses that may, these days, also want to invest in generative AI technology businesses and not technology that was a good idea ten years ago.

The continuation fund concept provides a way for worn-out VC investors to get some money back from their holdings in these companies, and thus provide liquidity for new investments.

Insight Partners has set up a continuation fund and moved 32 of the companies in which it had investments into the new fund. Insight’s LPs received $1.3 billion as a result. Potential LPs for a VC will look at the capital it has distributed to LPs in the past and, if this has shrunk, they’ll look elsewhere for a dependable return on their cash.

In effect, the continuation fund is a way for VCs to transition towards becoming private equity players.

The FT also reports that Lightspeed Venture Partners is talking to investors about setting up a $1 billion continuation fund for ten of its holdings. It has $25 billion in invested assets, including holdings in Nest, Snap, and, of particular relevance to our storage focus, Rubrik. We are not saying that any one of these is heading for continuation fund status. Rubrik seems more likely to IPO than not.

The continuation fund notion raises two questions in our mind. First, will the valuation of VC-held companies change if they are moved into continuation funds?

Secondly, will the continuation fund VCs, adopting more of a private equity mindset, start involving themselves assertively in the management of such startups, and restructure them to become profitable and acquisition-ready? Long-life startups cannot be VC-funded businesses forever. It’s an IPO, acquisition exit, or private equity management via the halfway continuation fund house for them.

Bootnote

The Institutional Limited Partners Association has various continuation fund resources for investors, both limited and general partners in VC funds.