
Dremio paddles harder in the growing lakehouse market with product improvements

Unified data lakehouse platform Dremio has unveiled new capabilities that are designed to help its Apache Iceberg-driven lakehouse continue to challenge in the data analytics market.

Dremio's global customers include Maersk, Amazon, Regeneron, NetApp, and S&P Global. Customers use the technology to control and power their data mesh, data warehouse migration, data virtualization, and unified data access projects and workloads.

Using open source Apache Iceberg and Apache Arrow, Dremio claims it can provide an open lakehouse architecture that delivers platform flexibility and “fastest time to insight” at a “fraction of the cost” compared to rivals.

The vendor has just announced partner-driven collaborations that bring the lakehouse to “every environment” for all of a customer’s data. Additionally, Dremio has detailed new capabilities that enhance its “price-performant” SQL data engine, making it “even faster,” “easier to use,” and more broadly integrated across the analytics and AI ecosystem.

“Dremio is committed to enhancing our customers’ analytical capabilities, no matter where their data resides,” said Sendur Sellakumar, CEO of Dremio. “Teams need easy-to-use, self-service tools to swiftly extract value from their data. We’re continually advancing our core capabilities and innovating to help global customers meet critical business objectives and stay ahead of the competition.”

As part of the delivery effort, Dremio has signed two new go-to-market partnerships. Cloud data storage player VAST Data is building up its existing partnership with Dremio, with the introduction of its Zero Trust Lakehouse Platform with Dremio offering. And STACKIT, which offers a lakehouse platform service to European enterprises to meet stringent data residency requirements, is now aligned with Dremio’s technology too.

On the technical enhancement side, Dremio’s SQL engine has been sped up further with Reflections Query Acceleration technology. Improvements include recommendations for optimal query acceleration usage, scheduled Reflections for up-to-date data access, and incremental updates for streamlined management and cost reduction, “advancing towards autonomous Reflections,” said Dremio.

Boosting its open source credentials, the supplier has now incorporated Nessie into its software. These capabilities simplify data engineering with Git-like workflows on lakehouse data, enabling users to run production workloads with end-to-end Dremio support for a Nessie-native Apache Iceberg catalog.
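Nessie's Git-like model can be pictured as branches pointing at immutable snapshots of table state. A minimal Python sketch of the concept follows – illustrative only, not Nessie's or Dremio's actual API; the class and method names are invented:

```python
# Illustrative sketch of Git-like catalog branching (invented names; not the
# real Nessie or Dremio API). A branch is a named pointer to a snapshot of
# table -> version mappings, so ETL work can be tested in isolation and then
# published to "main" in one step.
class Catalog:
    def __init__(self):
        self.branches = {"main": {}}  # branch name -> {table: version}

    def create_branch(self, name, from_branch="main"):
        # New branch starts as a copy of the source branch's snapshot
        self.branches[name] = dict(self.branches[from_branch])

    def commit(self, branch, table, version):
        self.branches[branch][table] = version

    def merge(self, source, target="main"):
        # Fast-forward style merge: target adopts the source snapshot
        self.branches[target].update(self.branches[source])

cat = Catalog()
cat.commit("main", "orders", "v1")
cat.create_branch("etl")                # experiment without touching production
cat.commit("etl", "orders", "v2")
print(cat.branches["main"]["orders"])   # -> v1 (main is unaffected)
cat.merge("etl")                        # publish the branch
print(cat.branches["main"]["orders"])   # -> v2
```

The point of the model is that production readers on "main" never see in-flight ETL changes until the merge makes them visible atomically.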

In addition, more GenAI capabilities for “faster insight” have been added to the platform. GenAI Text-to-SQL enables intuitive querying through natural language, while advanced GenAI-driven data descriptions and labelling facilitate “fast, accurate data discovery and curation.”
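Text-to-SQL features of this kind typically pair the user's natural-language question with the table schema in an LLM prompt. A generic sketch of that prompt assembly – not Dremio's implementation; the function, schema, and wording here are hypothetical:

```python
# Generic text-to-SQL prompt assembly (hypothetical sketch; not Dremio's API).
# The model is given the schema as DDL and asked to translate the question.
def build_prompt(question: str, schema: dict) -> str:
    ddl = "\n".join(
        f"CREATE TABLE {table} ({', '.join(cols)});"
        for table, cols in schema.items()
    )
    return (
        "Given these tables:\n"
        f"{ddl}\n"
        f"Write one SQL query answering: {question}\n"
        "Return only SQL."
    )

schema = {"orders": ["id INT", "region VARCHAR", "total DECIMAL"]}
prompt = build_prompt("What are total sales by region?", schema)
print("CREATE TABLE orders" in prompt)  # -> True
```

Grounding the prompt in the actual schema is what lets the model emit column and table names that exist, which is the hard part of making natural-language querying reliable.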

The product and service updates come after Dremio hired ex-Splunk chief cloud officer Sendur Sellakumar as its CEO and president last year.

Everspin’s MRAM revenues spin down a tad

MRAM supplier Everspin posted a loss in its first FY2024 quarter after 11 consecutive profitable quarters.

Everspin’s magnetoresistive RAM (MRAM) is non-volatile memory with DRAM-level speed, and comes in two formats: Toggle MRAM, which combines the non-volatility of flash with the speed and endurance of SRAM, and spin-transfer torque MRAM (STT-MRAM), which needs less switching energy than Toggle MRAM and enables higher densities and lower costs. Like Optane, MRAM has not been able to make much progress outside specialist low-volume markets.

Revenues of $14.4 million in the quarter ended March 31 were 3 percent down on a year ago but towards the high end of Everspin’s forecast. However, a loss of $200,000 was not forecast. It was the company’s first quarter-on-quarter revenue decline since FY2022.

President and CEO Sanjeev Aggarwal said: “Our first quarter revenue came in near the high end of our expectations while our GAAP net income came in below our expectations.”

On the bright side: “We are very pleased with some of our recent wins, most notably with IBM for our PERSYST STT-MRAM solution, which will be used in their FCM4 FlashCore Module, and we are entering into an agreement with a commercial customer to provide foundry services. Looking ahead, we expect our Toggle and STT-MRAM design wins to ramp in the second half of 2024.”

CFO Anuj Aggarwal had said at the previous quarter close that “Profitability [was] a key focus for the company.”  This time he said: “We are pleased to end the quarter with a strong balance sheet and solid gross margin. We are encouraged by the traction our products have had, as evidenced by our recent design wins, and we remain confident in our ability to scale the business and convert those design wins to revenue.”

Next quarter’s outlook is for revenues between $10 million and $11 million, which would represent a drop of around a third on the year-ago Q2, and another loss. CEO Aggarwal said in the earnings call: “Looking ahead, we expect to see flattish product revenue in the second quarter compared to Q1 due to continued weakness in Asia Pacific and in industrial, consumer and auto end markets.”

With FY2024 revenues weighted to the second half, there is doubt that FY2024 will be a growth year.

CFO Aggarwal said: “We anticipate revenue for the first half of 2024 to be lower than our typical seasonality. This has proven true for our first quarter and we expect our second quarter to be down from the first quarter, reflecting flattish Toggle revenue and lower RAD-Hard revenue.”

RAD-Hard refers to a strategic radiation-hardened FPGA and a project renewal has been delayed because of changes in a US government agency’s funding schedule. CEO Aggarwal said: “We met all our deliverables in Q1, so we fully expect to get that project going sometime in Q2 or as soon as the funding comes through.”

As for the outlook for all of 2024, CEO Aggarwal said: “We expect the year to be weighted more heavily towards the second half as we continue to experience a slower start to the year. This slower start can be attributed to continued economic weakness in Asia Pacific, as well as higher interest rates which have driven customers to focus on lean inventory practices along with shifting project schedules for some government contracts.”

The IBM FlashCore Module win is excellent for Everspin but it could really do with more sales in this area.

Storage Ticker – May 3, 2024

Storage news

Dremio is partnering with STACKIT, a data-sovereign cloud provider in Europe, to provide European organizations with the first fully managed, cloud-based lakehouse offering capable of meeting today’s data residency requirements. STACKIT is the Schwarz Group’s cloud and colocation provider and belongs to the Schwarz Digits IT and digital division.

Hyve Solutions has become a design partner for the Nvidia HGX platform, helping its focus on accelerating datacenter artificial intelligence (AI) architectures. 

Infinidat and Index Engines are announcing support for VMware datastores in InfiniSafe Cyber Detection. Infinidat’s InfiniSafe Cyber Detection is powered by Index Engines’ CyberSense for the InfiniBox SSA and InfiniBox. The two have been partnering for a year. The newest InfiniSafe Cyber Detection release enables deep forensic scanning of VMware datastores operating on an InfiniBox or InfiniBox SSA platform and protected via guaranteed immutable snapshots provided within InfuzeOS – the platform’s data services layer. When an attack occurs, CyberSense generates comprehensive forensic reports detailing the scope, pinpointing locations, and providing insights into the most recent clean versions of the affected files, facilitating swift recovery through InfiniBox.

Micron says it’s the first to ship monolithic 32Gb DRAM die-based 128GB DDR5 RDIMM memory at speeds up to 5,600 MT/s on all leading server platforms. Praveen Vaidyanathan, vice president and general manager of Micron’s Compute Products Group, boasted: “AI servers will now be configured with Micron’s 24GB 8-high HBM3E for GPU-attached memory and Micron’s 128 GB RDIMMs for CPU-attached memory to deliver the capacity, bandwidth and power-optimized infrastructure required for memory intensive workloads.”

MongoDB announced new capabilities for MongoDB Atlas that make it faster and easier to build, deploy, and run apps with the performance and scale organizations require. Now generally available, MongoDB Atlas Stream Processing enables developers to take advantage of data in motion and data at rest to power event-driven applications  that can respond to changing conditions. MongoDB Atlas Search Nodes – generally available on AWS and Google Cloud, and now in preview on Microsoft Azure – provide dedicated infrastructure for generative AI and relevance-based search workloads that use MongoDB Atlas Vector Search and MongoDB Atlas Search. Now available in public preview, MongoDB Atlas Edge Server, a local instance of MongoDB, gives developers the capability to deploy and operate distributed applications in the cloud and at the edge.

Data protector NAKIVO reports 10 percent revenue growth in the EMEA market in Q1 2024. Of the total revenue, 60 percent came from the EMEA region, 29 percent from the Americas, and 11 percent from the Asia-Pacific region. The highest-growing producers of revenue in Q1 2024 for NAKIVO were Panama and Thailand. NAKIVO’s revenue grew more than 100 percent quarter-on-quarter in Bulgaria, Croatia, the Dominican Republic, Egypt, El Salvador, Hungary, India, New Zealand, Morocco, Norway, Panama, Philippines, South Africa, and the United Arab Emirates, among others. NAKIVO has over 28,000 paid customers in 183 countries. The customer base grew by 12 percent in Q1 2024 vs Q1 2023. 

Pure Storage has a 150TB Direct Flash Module (DFM) coming later this year. Bill Cerreta, GM for Pure’s hyperscale business, posted an image of it on LinkedIn. He said Pure has shipped almost 800,000 DFMs in the seven years since its first shipment. “We’re shipping 75TB modules at volume now, and customers are telling us these mega-sized drives are exactly what’s needed for AI.” But: “In 2024, we’ve got something larger brewing in the lab, and I thought I’d give you a peek.” What a tease!

… 

Rambus announced availability of its family of DDR5 server Power Management ICs (PMICs), including an extreme current device for high-performance applications. It says it offers module manufacturers a complete DDR5 RDIMM memory interface chipset supporting a broad range of datacenter use cases.

IT infrastructure services provider Kyndryl is working with Rubrik in a global strategic alliance to build cyber resilient environments for its customers worldwide. As part of the strategic alliance, Rubrik collaborated with Kyndryl to co-develop and launch Kyndryl Incident Recovery with Rubrik – a fully managed as-a-service offering providing customers with data protection and cyber incident recovery, backup, and disaster recovery for cloud and on-premises workloads. 

Taiwan-based research house TrendForce has published QLC NAND market numbers showing Samsung (45 percent share) and SK hynix subsidiary Solidigm (32 percent) in the lead.

It predicts shipments of QLC enterprise SSD bits will reach 30 exabytes in 2024 – a fourfold volume increase from 2023 – with AI inference servers a key market.

SK hynix says its HBM output for 2024 is already sold out, while 2025 output is almost sold out. It’s planning to provide samples of 12-high HBM3E with industry-best performance in May, and to start mass production in Q3. It says the total volume of data generated globally in the AI era is forecast to jump to 660 zettabytes in 2030 from 15 ZB in 2014. It’s planning to introduce new memory such as HBM4, HBM4E, and LPDDR6, as well as a 300 TB SSD, CXL Pooled Memory Solutions, and Processing-In-Memory products. MR-MUF is a core technology for HBM packaging. It plans to adopt Advanced MR-MUF to realize 16-high HBM4, while preemptively reviewing Hybrid Bonding technology.

Starburst announced that Carl Steinbach – an Apache Iceberg Project Management Committee (PMC) member since 2018 and a Tabular cofounder – has joined Starburst’s R&D team, bringing deep expertise in lakehouse technology. Additionally, Starburst is announcing more support for Iceberg on the heels of three major product capabilities announced last month: streaming ingestion, managed Iceberg tables, and near-real-time data pipelines. That should make deploying and operating an Icehouse architecture easier than ever.

Quantum takes wraps off subscription private cloud storage

Nasdaq-listed Quantum Corporation may be having financial reporting problems – facing a stock market de-listing as a result – but it has not gone quiet on the product front.

It has just launched Quantum GO – a private cloud subscription model – to “meet growing data demands and cost objectives.” The launch follows last month’s unveiling of two all-flash appliances as part of its deduping backup target product set.

Public cloud storage can generate high and unpredictable costs, and it can be difficult for organizations to estimate and purchase years of storage requirements in advance. The vendor claims Quantum GO offers customers a private cloud experience with a low initial entry point, and low fixed monthly payment options.

Quantum GO is said to be especially beneficial for customers who need long-term archiving, the ability to create massive data lakes for AI/ML, or who want to build private clouds using the supplier’s ActiveScale object storage software.

Each Quantum GO implementation is managed and monitored by Quantum support. The offering is available in flexible durations, with either monthly, quarterly, or annual payments. There are also options for deferred initial costs and payments, to help customers secure the subscription that meets their needs.

Key GO benefits marketed by Quantum include a public cloud experience in customers’ datacenters, a cloud-like support experience where the infrastructure is deployed and managed for them, and the ability to store and access data for optimal workflow without worrying about unpredictable egress or other access fees.

Better control and security over data is also being trumpeted, with the solution installed in customers’ own datacenters or third-party data spaces as a secure private cloud solution. Always up-to-date technology through full solution management and monitoring is cited too.

“Quantum GO provides our customers with seamless access to our end-to-end data management solutions,” declared Ross Fujii, chief customer officer at Quantum. “By offering flexible payment options, we aim to alleviate the financial burden associated with initial purchases, upgrading or expanding data management platforms, allowing businesses to focus on their core objectives.”

The data firm’s reporting problems came to light at the beginning of the year. They relate to an accounting issue connected to component pricing in product bundles. This has affected the firm’s ability to file SEC reports for its second and third quarters of fiscal 2024.

HPE launches cost-effective storage system for HPC and AI

HPE has built a downsized ClusterStor supercomputer storage array for entry-level and mid-range HPC and AI compute clusters.

Update, 7 May 2024: The C500 does support Nvidia’s GPUDirect protocol.

The ClusterStor line, acquired by HPE when it bought Cray in 2019, has a parallel architecture using SSDs and HDDs with Lustre file system software. Its XE E1000 model scales from 60 TB to tens of petabytes across hundreds of racks, each with up to 6.8 PB of capacity. It delivers up to 1.6 TBps and 50 million IOPS/rack. HPE positions ClusterStor as storage for exascale (Frontier, Aurora, El Capitan), pre-exascale (LUMI, Perlmutter, Adastra), and national AI supercomputers (Isambard-AI, Alps, Shaheen III) running Cray EX supercomputers.

Ulrich Plechschmidt, HPE

Ulrich Plechschmidt, HPE Product Marketing for parallel HPC and AI storage, says the new Cray Storage Systems C500 will “provide [E1000] leadership-class storage technologies at a fraction of the entry-price point and with increased ease-of-use.”

It’s based on the E1000 and intended for use by customers running modeling, simulation, and AI workloads on smaller compute clusters, often built, Plechschmidt says, with Cray XD systems.

The Cray EX system is a liquid-cooled, rack-scale, high-end supercomputer, while the lesser XD can be air- or liquid-cooled and comes in a 2RU chassis. Both the EX and XD support AMD and Intel x86 CPUs and Nvidia Hopper GPUs.

HPE Cray XD systems

The mid-range XD665 supports Slingshot 11, Infiniband NDR, and Ethernet networking, and provides direct switchable connections between its high-speed fabric, GPUs, NVMe drives, and CPUs. It supports Nvidia’s GPUDirect protocol.

Bearing in mind its open source-based Lustre support, Plechschmidt declares that a C500 buyer can “feel secure in the fact that your valuable data sits in a file system that is owned by a vibrant community and not by a single company.” Such as, we might think, the one run by Jensen Huang.

The C500 runs the same Lustre software as the E1000, with the same 2RU x 24 drive storage controllers and 5RU x 84 HDD enclosures in a converged and less expensive design.

Entry-level HPE C500 with controller and storage chassis

C500 details:

  • A cheaper ProLiant DL325 server in place of the E1000’s System Management Unit (SMU) storage controller
  • Combined Metadata Unit (MDU) and Scalable Storage Unit Flash (SSU-F) chassis holding 2RU x 24 NVMe SSDs
  • Support for half and fully populated storage enclosures in specific configurations
  • C500 expansion chassis with 2RU x 24 NVMe drives or 5RU x 84 HDDs can increase the usable file system capacity to 2.6 PB all-flash or 4 PB hybrid (SSD/HDD) capacity

The entry-level C500 provides between 22 TB and 513 TB usable capacity from 24 NVMe SSDs, delivering up to 80 GBps aggregate read and 60 GBps write performance to the compute nodes. In comparison, IBM’s GPUDirect-supporting ES3500 delivers 126 GBps read and 60 GBps write bandwidth to Nvidia GPUs using the Storage Scale parallel file system. DDN’s Lustre-using A1400X2 Turbo provides 120 GBps read and 75 GBps write bandwidth to these same GPUs.

Plechschmidt says HPE is “rolling out major software improvements and new functionalities that make the storage systems easier to deploy and easier to manage.” Bizarrely, the details are hidden behind an HPE QuickSpecs webpage requiring an authorized partner or HPE employee login. We ordinary folks don’t get to see them.

HPE QuickSpecs webpage

HPE fixed the problem, saying: “There was a disconnect internally on when the QuickSpecs doc would go live, but it was not yet live today which is why you got that message.” The doc is now publicly available for download.

Disk failure rates in datacenters are falling, says Backblaze

Cloud storage and data backup firm Backblaze has published the latest annual failure rates (AFRs) of the disk drives it uses to supply services to customers in its datacenters.

As of the end of Q1 2024, Backblaze was monitoring 279,572 drives in its cloud storage servers, located around the world. 

In this group the company identified 275 individual drives which exceeded their manufacturer’s temperature specification at some point in their operational life. As such, these drives were removed from AFR calculations.

The remaining 279,297 drives were divided into two groups. The primary group consists of the drive models which had at least 100 drives in operation, as of the end of the quarter, and which accumulated over 10,000 drive days during the same quarter. This group consists of 278,656 drives grouped into 29 drive models. 

The secondary group contained the remaining 641 drives that did not meet the criteria noted. They were not measured for AFR.

The AFR for Q1 2024 was 1.41 percent. That’s down from Q4 2023 at 1.53 percent, and also down from one year ago (Q1 2023) at 1.54 percent. The continuing process of replacing older 4 TB drives is a “primary driver” of this decrease, said Backblaze, as the Q1 2024 AFR (1.36 percent) for the 4 TB drive cohort is down from a high of 2.33 percent in Q2 2023.
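Backblaze computes AFR as failures per drive-year, expressed as a percent; the arithmetic behind the quarterly figures looks like this (the cohort numbers below are hypothetical):

```python
# Backblaze-style annualized failure rate: failures per drive-year, as a percent.
def afr(failures: int, drive_days: int) -> float:
    drive_years = drive_days / 365
    return failures / drive_years * 100

# Hypothetical cohort: 1,000 drives running a full 91-day quarter, 4 failures
print(round(afr(failures=4, drive_days=1_000 * 91), 2))  # -> 1.6
```

Counting drive days rather than drives means drives added or removed mid-quarter are weighted by how long they actually ran.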

In Q1 2024, three Seagate drive models had zero failures:

  • 16 TB Seagate (model: ST16000NM002J)
    • Q1 2024 drive days: 42,133
    • Lifetime drive days: 216,019
    • Lifetime AFR: 0.68 percent
    • Lifetime confidence interval: 1.4 percent
  • 8 TB Seagate (model: ST8000NM000A)
    • Q1 2024 drive days: 19,684
    • Lifetime drive days: 106,759
    • Lifetime AFR: 0.00 percent
    • Lifetime confidence interval: 1.9 percent
  • 6 TB Seagate (model: ST6000DX000)
    • Q1 2024 drive days: 80,262
    • Lifetime drive days: 4,268,373
    • Lifetime AFR: 0.86 percent
    • Lifetime confidence interval: 0.3 percent

“All three of these drives have a lifetime AFR of less than 1 percent, but in the case of the 8 TB and 16 TB drive models the confidence interval is still too high,” said Backblaze. “While it is possible the two drives models will continue to perform well, we’d like to see the confidence interval below 1 percent, and preferably below 0.5 percent, before we can trust the lifetime AFR.”
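Backblaze does not spell out here how it computes these intervals, but a standard normal-approximation Poisson interval on the implied failure count roughly reproduces the quoted width for the 16 TB model; this is a sanity-check sketch, not Backblaze's exact method:

```python
import math

# Figures from the table above: 16 TB Seagate ST16000NM002J
drive_days = 216_019   # lifetime drive days
afr_pct = 0.68         # lifetime AFR, percent

drive_years = drive_days / 365
failures = afr_pct / 100 * drive_years            # implied lifetime failure count
half_width = 1.96 * math.sqrt(failures) / drive_years * 100  # 95% normal approx

print(round(failures))           # -> 4 failures implied
print(round(2 * half_width, 1))  # -> 1.3, close to the quoted 1.4 percent
```

With only about four lifetime failures, the statistical uncertainty swamps the point estimate, which is why Backblaze wants more drive days before trusting these AFRs.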

Backblaze also looked at the average age at which a drive fails. Over the past year, it recorded 4,406 failed drives. The average age of failure across all of the Backblaze drive models went up from 2 years and 6 months to 2 years and 10 months.

Among the retired drive models, three – totaling 196 drives – moved from active to retired between Q1 2023 and Q1 2024. The average age of failure for the retired drive cohort remained at 2 years and 7 months over the year.

For the active drive models, the average age of failure for each drive went up from 2 years and 6 months to 2 years and 11 months.

“This suggests that once retired drive models are removed from service, the average age of failure for the retired drive models will increase,” said Backblaze. It thinks the average age of failure for its retired drive models will eventually exceed 2 years and 10 months. Further, it predicts the average age of failure will reach closer to 4 years for the retired drive models, once its 4 TB drive models are removed from service.

For lifetime reviews, a drive model was required to have 500 or more drives as of the end of Q1 2024, and have over 100,000 accumulated drive days during their lifetime. This resulted in 277,910 drives grouped into 26 models.

With three exceptions, the confidence interval for each drive model is 0.5 percent or less at 95 percent certainty. For the three exceptions – the 10 TB Seagate, the 14 TB Seagate, and the 14 TB Toshiba models – the occurrence of drive failure from quarter to quarter was “too variable” over their lifetime. This volatility has a negative effect on the confidence interval.

“The combination of a low lifetime AFR and a small confidence interval is helpful in identifying the drive models which work well in our environment,” Backblaze said.

It also sought to find out its best 12, 14, and 16 TB performers when it came to lifetime AFR. These are:

  • 12 TB drive models – The three 12 TB HGST models are “great performers”, but are hard to find new. Western Digital, which purchased the HGST drive business in 2012, is using its own model numbers for these drives, “so it can be confusing”. “If you do find an original HGST, make sure it is new,” said Backblaze.
  • 14 TB drive models – “These models look to be solid”: the WDC (WUH721414ALE6L4), the Toshiba (MG07ACA14TA), and the Seagate (ST14000NM001G).
  • 16 TB drive models – All six drive models are “performing well to this point”, although the “WDC models are the best of the best to date”.

Arcserve backers bring in new CEO

Arcserve is coming out of a challenging few years with a new CEO and fresh funding.

Arcserve provides UDP backup software, 9000 series UDP-powered appliances, SaaS backup of Microsoft 365, Google Workspace, and Salesforce, DRaaS, and OneXafe, a file-accessed, scale-out, immutable object store and backup target offering. It has embraced cyber resilience – as have virtually all backup suppliers – and is a small player in a large, fragmented market. Arcserve is owned by the private equity firm Marlin Equity Partners, which acquired it in 2014 and remains the majority owner after a merger with StorageCraft in 2021. It also has long-term financial backing from H.I.G. WhiteHorse Capital and Monroe Capital.

Prior Arcserve CEO Brannon Lacey left in March, becoming interim and then full-time CEO at software tester Worksoft. Just one month later, Chris Babel was appointed in his stead, along with a significant infusion of funding from WhiteHorse and Monroe Capital.

Mark Bernier, managing director at H.I.G. WhiteHorse, declared: “We are thrilled to welcome Chris Babel as the new CEO of Arcserve. Chris is a transformative leader and industry visionary who brings significant experience in establishing businesses as global leaders in the security arena. Our customers face an expanding threat landscape, and we are confident that Chris is the right person to lead the company as  it continues its mission to keep global businesses and their data secure.”  

Matt London, managing director at Monroe Capital, added: “Arcserve’s track record of innovation and customer commitment positions it at the forefront of the rapidly evolving data resilience market.

“Our investment will allow Arcserve to continue developing customer-centric innovative products and solutions globally. We  look forward to supporting Chris and the leadership team as they enable more organizations to protect, access, and leverage their data through any challenge.” 

Under Lacey, appointed in October 2021, Arcserve made a couple of mis-steps. March 2022 saw its StorageCraft Cloud Services, providing DR as a service, suffer extended unavailability issues due to metadata servers being improperly decommissioned. In other words, a self-initiated disaster struck a DRaaS supplier.

In February this year, it suddenly stopped selling its OneXafe Solo and Arcserve Cloud Services, leaving its MSPs to find replacements from other suppliers. OneXafe Solo, an appliance that streamed backup data to cloud services, was discontinued as Arcserve considered replacing its in-house services with external cloud suppliers. Upsetting channel partners, on whom you depend for business, is rarely a good tactic.

Babel joins Arcserve from VC and private equity-owned TrustArc, where he was CEO from 2010 to 2023, and is said to have turned the company into a global leader in privacy compliance and risk management. He explained: “Every day, the volume and value of data grows for businesses. Data has truly become business-critical and the lifeblood of organizations, making  it imperative for businesses to ensure its availability.

“With this investment, we will accelerate Arcserve’s development of innovative offerings and help businesses address the crucial challenge of protecting and leveraging  their data in the face of increasing cyber threats and AI opportunities.”

Arcserve claims it has 150,000 end-customers, and 30,000 partners. They’ll all be keen to see how he deals with product development and channel relationships.

Seagate pushes back against SSD dominance claims

HDD maker Seagate wants us to understand three truths about the myth of SSDs replacing disk drives: SSD prices will not match spinning disk prices, SSD fab capacity won’t match HDD fab capacity, and SSDs are a bad fit for nearline disk workloads.

The points are made in a Seagate presentation deck that is effectively a response to Pure Storage CEO Charlie Giancarlo’s assertion that “there won’t be any new disk systems sold in five years,” meaning by the end of 2028. In other words, disk and hybrid array customers could still be buying disk drives after that to replenish existing HDD storage but new storage systems will be flash-based.

We note that Pure is not a commercial SSD supplier, buying in raw NAND chips and building its own Direct Flash Module (DFM) drives. SSD suppliers and NAND manufacturers are not supporting Pure Storage in its claims, at least not publicly.

The disk drive manufacturers – Western Digital, Toshiba, and Seagate – think this is wrong. Although SSDs are replacing disk drives in notebooks and desktop computers, and also in the enterprise 10K rpm 2.5-inch market, they are not replacing high-capacity, 7,200 rpm nearline drives in the enterprise and hyperscaler markets. That’s because the total cost of ownership of SSDs is significantly higher than that of HDDs and will remain so.

Seagate’s pitch deck explains why it thinks this is true. It identifies three claims:

  • SSD pricing will soon match the pricing of disk drives
  • NAND supply can increase to replace all disk drive exabytes
  • Only all-flash arrays (AFAs) can meet modern enterprise workload performance needs

The Seagate slide deck then rebuts each argument.

Price point

Seagate believes that disk drives will retain a greater than 6:1 $/TB advantage over SSDs through to 2027. The average for the period is 6.6:1, with occasional dips below that before the price differential recovers.

This is based on its analysis and three reports:

  • Forward Insights Q323 SSD Insights, August 2023
  • IDC Worldwide Hard Disk Drive Forecast 2022-2027, April 2023, Doc. #US50568323
  • TrendFocus SDAS Long-Term Forecast, August 2023

Partly this is based on Seagate extrapolating disk capacity growth, and that depends upon the HDD makers being able to increase areal density. Equally it depends upon NAND suppliers increasing 3D NAND layer counts and manufacturing capacity. Here’s the chart from Seagate’s deck:

Seagate chart

The TCO of HDDs and SSDs is composed of acquisition costs and then running costs, basically meaning power for operation and cooling, and other minor costs. Seagate asserts that SSD TCO is greater than HDD in $/TB terms over the product’s lifetime, saying the disk “price advantage is magnified at scale, where device acquisition cost is by far the most significant element of TCO.” 
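As a toy illustration of why acquisition cost dominates TCO at scale, the sketch below compares per-TB lifetime cost under stated assumptions; every figure in it is an assumption chosen for illustration, not a number from Seagate's deck:

```python
# Toy TCO per TB over a five-year service life. Every number here is an
# assumption chosen for illustration, not a figure from Seagate's deck.
def tco_per_tb(price_per_tb, watts_per_tb, years=5,
               usd_per_kwh=0.12, cooling_overhead=1.5):
    hours = 24 * 365 * years
    energy_kwh = watts_per_tb * hours / 1000 * cooling_overhead
    return price_per_tb + energy_kwh * usd_per_kwh

hdd = tco_per_tb(price_per_tb=15, watts_per_tb=0.4)    # assumed nearline HDD
ssd = tco_per_tb(price_per_tb=100, watts_per_tb=0.25)  # assumed QLC SSD
# Even after five years of power and cooling, the ratio stays near the
# acquisition-price ratio, because the device price dominates.
print(round(hdd, 1), round(ssd, 1), round(ssd / hdd, 1))  # -> 18.2 102.0 5.6
```

Under these assumptions the SSD's lower power draw barely dents the gap, which is the shape of Seagate's argument: at fleet scale, $/TB of the device is almost the whole bill.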

Seagate chart

NAND manufacturing capacity

A NAND fab costs a great deal of money. For comparison, SK hynix’s coming M15X DRAM fab in Korea will cost ₩5.3 trillion ($3.86 billion) and be ready in November 2025. There were 333 EB of NAND manufactured in 2023. TrendForce and IDC analyses predict that 3,686 EB of combined NAND and HDD capacity will be needed in 2027. The NAND industry could build 963 EB of that, with HDDs contributing 2,723 EB.

Were that disk contribution to be replaced by NAND, the projected cost would be $206 billion, and Seagate says this makes SSD replacement of HDDs cost-prohibitive, as its chart indicates:

Seagate chart

But Pure is not saying no more HDDs will be sold after 2028. Its pitch is that “there won’t be any new disk systems sold in five years,” which is different. There is no distinction in Seagate’s argument between new disk-based storage systems and replacement disk storage systems.

This gives Pure wiggle room for its claim as all it needs to show is that new systems are using SSDs instead of disk, and it will be darn near impossible for analysts and research houses to differentiate between old and new storage systems.

For example, if a hyperscaler expands capacity in an existing HDD-based system in 2029, adding 1,000 EB of disk to a storage tier, is that a new system sale? Clearly not, in the strict sense of a “new system.” Yet disk capacity ships will go up that year, apparently disproving Pure’s claim.

Seeing disk capacity shipments or, even better, unit shipments go down from 2029 onward would be the clearest indication that Pure’s claim is correct. Seagate might say that the gulf between projected NAND manufacturing capacity in 2027 and the overall SSD+HDD exabyte need is so vast that distinctions like this don’t matter.

Also, NAND manufacturing capacity limitations are just one of three planks in Seagate’s argument, and the third plank makes the case even stronger.

Enterprise workload performance needs

A Seagate analysis of IDC’s May 2023 Global DataSphere report superimposes streaming and real-time data on IDC’s numbers, mapping enterprise workloads to EBs stored: 

It shows three types of enterprise storage workload – nominal-time (disk response time), real-time, and ultra real-time (SSD response time) – and plots their scope against a low-to-high capacity axis. The bulk of the workloads, 90 percent, sits in the medium-to-high capacity, nominal-transfer-time area, meaning disks are suitable and SSDs are not, because SSDs are over-performant for these workloads and their capacity costs too much.

Ten percent of the workloads are low capacity and need real-time data transfer with 1 percent being even lower capacity and needing ultra real-time transfer, meaning SSDs are best for both.

In this view, Seagate is saying that HDDs and SSDs are not mutually exclusive. They are additive.

Thus, because SSDs have a 6:1 price premium over HDDs, NAND manufacturing capacity is limited, and the bulk of enterprise workloads don’t need SSD speed, there will be no wholesale replacement of HDDs by SSDs.

This leads to its final slide in the deck, the three Seagate truths:

Seagate chart

Comment

It seems to B&F that, were SSD prices to tumble, such that the SSD premium reduced to 4:1 or less, and were SSD capacity supply not to be as limited as Seagate supposes, then enterprises and hyperscalers would adopt SSDs in preference to disk. After all, who does not want access to their data faster given the opportunity?

We think Seagate and the HDD industry’s pitch rests mainly on SSDs carrying a 6:1 price premium over HDDs, and on the claim that, even were that premium diminished, NAND fabs would be unable to make enough exabytes of flash after 2028 for HDDs to start being replaced.

If the NAND fabbers saw a clear and stable market need for more NAND, due perhaps to the price premium dropping, then they would build more fabs, as night follows day. It would take time, probably extending into the 2030s and beyond. 

In our view, price is the key to this. Adding another bit to flash cells with PLC (5 bits/cell) would boost capacity beyond QLC (4 bits/cell) flash but the technology looks unproven and impractical. Adding more flash layers is another capacity increase factor, but unless the HDD manufacturers stumble with HAMR and fail to get to the 4- or 5TB/platter areas, it looks as if we are facing a continuing status quo – and Pure Storage is wrong in its assertion.

Baffle enables computation on encrypted Amazon RDS and Aurora data

Baffle is providing enterprise-grade data security for Amazon’s RDS and Aurora databases, preventing malware actors from accessing any raw data.

Baffle’s technology encrypts data in the public cloud so that databases store encrypted data with customers able to bring their own keys. It’s based on secure multiparty computation, which allows apps to process encrypted data without needing to decrypt it. It supports masking, tokenization, and encryption with role-based access control at any level in the logical database. The data can be searched, sorted, and analyzed in encrypted form, enabling support of data sensitivity compliance needs such as GDPR, HIPAA, and PCI DSS v4.

Ameesh Divatia, co-founder and CEO of Baffle, said: “This breakthrough technology enables SQL queries on data that is always encrypted in a PostgreSQL database at rest and in use, allowing data owners to implement the Shared Responsibility Model with their cloud service providers and control their data even on infrastructure that they don’t manage.”

Baffle founders

According to the IEEE, secure multiparty computation is based on secret sharing, “a cryptography algorithm where a private key is divided into shares. These shares are distributed to different parties so each party possesses only part of the secret, ensuring no one has the entire secret. The secret is obtainable only through recombination of shares. However, computations can still take place on the shares. More importantly, the output of those computations is still correct, and the data is still a secret.”
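The IEEE description above can be illustrated with a minimal additive secret-sharing sketch – a toy model of the principle, not Baffle's actual implementation:

```python
import random

P = 2**61 - 1  # a large prime modulus; all arithmetic is done mod P

def share(secret, n):
    """Split `secret` into n random shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares to recover the secret."""
    return sum(shares) % P

# No single share reveals anything about the secret, yet parties can
# compute on shares: adding shares of x and y pairwise yields valid
# shares of x + y, without either value ever being reconstructed.
x_shares = share(123, 3)
y_shares = share(456, 3)
sum_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]
result = reconstruct(sum_shares)  # 579, computed without exposing x or y
```

Production systems layer key management, masking, and role-based access control on top, but the underlying property – correct outputs from computation on shares – is the one the IEEE describes.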

Baffle’s software goes beyond Transparent Data Encryption (TDE) by protecting data in PostgreSQL databases at the application tier, enabling full compliance with PCI DSS v4.

By working with AWS’s Trusted Language Extensions for PostgreSQL, it’s possible to run SQL queries on encrypted data stored within Amazon RDS and Aurora, making them, Baffle says, the only Postgres Database As A Service (DBaaS) offerings with this functionality.

Amazon RDS and Aurora with Baffle feature:

  • Field, row, and column-level anonymization of sensitive data with no application code changes 
  • Prevention of database administrators and “superusers,” including those from the cloud service provider, from accessing private data
  • Access/authorization controls to regulated sensitive data to meet compliance requirements (including AWS cloud and database administrators)
  • Support for SQL queries on sensitive data that is in the database memory or storage in encrypted or tokenized form – a claimed industry first
  • Support for commercial-off-the-shelf (COTS) applications, such as Tableau or PowerBI, to query encrypted data from Amazon RDS or Amazon Aurora PostgreSQL databases

Baffle was founded in 2015 by CEO Ameesh Divatia and CTO Priyadarshan Kolte. Former Emulex exec Divatia was president and CEO of photonics supplier Lightwire, which was acquired by Cisco in 2012. Ex-PMC-Sierra software guy Kolte was a Principal Scientist at Texas Multicore Technologies before joining Divatia to start Baffle.

They have raised $36.5 million across seed, A, and B-rounds of VC funding, the most recent raising $20 million in August 2021. Investors include Celesta Venture Capital, National Grid Partners, Lytical Ventures, Nepenthe Capital, True Ventures, Greenspring Associates, Clearvision Ventures, Engineering Capital, Triphammer Venture, ServiceNow Ventures, Thomvest Ventures, and Industry Ventures.

Baffle has offices in Santa Clara and Kundalahalli, Bengaluru.

Storage news ticker – May 1

SaaS backup startup Alcion founder and CEO Niraj Tolia tweets: “Let me be blunt. On-prem backup is an evolutionary dead end, especially for SaaS services like Microsoft 365.”

AWS announced GA of Amazon Q, a generative AI-powered assistant for accelerating software development and leveraging companies’ internal data. It generates highly accurate code, tests, debugs, and has multi-step planning and reasoning capabilities that can transform (e.g. perform Java version upgrades) and implement new code generated from developer requests. It also makes it easier for employees to get answers to questions across business data such as company policies, product information, business results, code base, employees, and many other topics by connecting to enterprise data repositories to summarize the data logically, analyze trends, and engage in dialog about the data. 

AWS is also introducing Amazon Q Apps, which enable employees to build generative AI apps using their company’s data. Employees simply describe the type of app they want, in natural language, and Q Apps will quickly generate an app that accomplishes their desired task, helping them streamline and automate their daily work with ease and efficiency. Learn more about Amazon Q here.

Data encryption company Baffle announced capabilities that allow organizations to secure structured and unstructured data when moving to and across the cloud. It protects information at the field level for unstructured, semi-structured, and structured files. Access control actively enforces compliance with privacy regulations at the endpoint of the receiving system. Baffle can be customized to protect information in non-standard file formats including PDF, CSV, Parquet, JSON objects, and more.

CData’s new ETL/ELT SaaS tool, CData Sync Cloud, is now live. It brings CData Sync’s 300-plus standardized data drivers to the cloud, embedding directly into companies’ products to power data integrations.

Data engine supplier Cribl has signed a Strategic Collaboration Agreement (SCA) with Amazon Web Services to make it easier for customers to manage the growing influx of IT and security data. The SCA enables:

• Further development of Cribl.Cloud on AWS, accelerating customers’ use of Cribl’s vendor-agnostic solutions to collect, process, route, and analyze IT and security data with their sources and destinations of choice.
• Go-to-market expansion with AWS to deliver solutions that address growing data challenges and drive greater efficiency.
• Global growth initiatives to support the expansion of Cribl into new regions, including Europe, Australia, and New Zealand.
• Continued innovation with AWS to give customers freedom through its integration with Amazon Security Lake and showcasing Cribl Search capabilities.

Quantitative trading firm Jump Trading is using DDN SFA 400NVX2 arrays with NVMe QLC flash storage in its HPC infrastructure to support its AI and ML capabilities for competitive advantage in global financial markets. Watch a video about it here.

Everspin’s PERSYST EMD4E001G 1 Gb STT-MRAM is used by IBM in its 4th generation FlashCore Modules. It ensures data integrity during power loss. With a DDR4 interface, it delivers 2.7 Gbps of both read and write bandwidth, coupled with instant non-volatility.

…

An HPE report, Architect an AI Advantage, which surveyed more than 2,000 IT leaders from 14 countries, found that while global commitment to AI shows growing investment, businesses are overlooking key areas that will have a bearing on their ability to deliver successful AI outcomes – including low data maturity levels, possible deficiencies in their networking and compute provisioning, and vital ethics and compliance considerations. The report also uncovered significant disconnects in both strategy and understanding that could adversely affect future return on investment.

IBM has released Storage Scale v5.2.0, enhancing its hybrid cloud credentials by adding cloudkit, an interactive guided CLI for multi-cloud provisioning. Cloudkit offers a composable deployment architecture. Check here for a summary of changes.

Nasuni says analyst firm DCIG has named the Nasuni File Data Platform a “Top Five Performer” in two enterprise reports: Enterprise Hybrid Cloud SDS NAS Solutions and Enterprise Multi-site File Collaboration Solutions. They highlight Nasuni’s native hybrid cloud architecture, fast file and object access, data security and ransomware protection, and its add-on services. Nasuni says it has had a year of strong momentum, including 46 percent growth in new customers and 120-plus new large enterprise customers in FY23.

NetApp released its second annual Cloud Complexity Report, which found a clear divide between AI leaders and laggards across several areas including:

• Regions: 60 percent of AI-leading countries (India, Singapore, UK, USA) have AI projects up and running or in pilot, in stark contrast to 36 percent in AI-lagging countries (Spain, Australia/New Zealand, Germany, Japan).
• Industries: Technology leads with 70 percent of AI projects up and running or in pilot, while Banking & Financial Services and Manufacturing follow with 55 percent and 50 percent respectively. However, Healthcare (38 percent) and Media & Entertainment (25 percent) are trailing.
• Company size: Larger companies (with more than 250 employees) are more likely to have AI projects in motion, with 62 percent reporting projects up and running or in pilot, versus 36 percent of smaller companies (with fewer than 250 employees).

“The rise of AI is ushering in a new disrupt-or-die era,” said Gabie Boko, NetApp CMO. “Data-ready enterprises that connect and unify broad structured and unstructured data sets into an intelligent data infrastructure are best positioned to win in the age of AI.”

Fred Chen tweets that Nuocun Microelectronics (NORMEM), a semiconductor company based in Suzhou, China, has developed the first 3D NOR flash memory prototype chip. It uses layering processes like 3D NAND and is patented as CN105870121A. It provides random access to data, is claimed to cost a third of the price of planar NOR, and can compete with NAND.

HCI software vendor Nutanix’s sixth annual global Public Sector Enterprise Cloud Index (ECI) survey and research report showed that public sector IT leaders expect substantial near-term adoption of multiple IT operating models (87 percent), yet current usage (57 percent) is slightly behind the average for other industries (60 percent). The use of hybrid multicloud models in the industry is forecast to quadruple over the next one to three years as IT decision-makers at public sector organizations look to modernize datacenters into private clouds and preserve choice in public cloud deployments.

Panmnesia is going to demonstrate the interoperability of its CXL IP with CXL-enabled CPUs from major CPU vendors at the inaugural CXL DevCon next week. It’s also planning to attend an official CXL compliance test event in the near future to prove the functionality of its CXL IP product, which supports all the features of the latest CXL specification and maintains backward compatibility with previous versions. Panmnesia’s Chief Strategy Officer (CSO), Miryeong Kwon, will present state-of-the-art CXL switch-based solutions for next-generation datacenters. After that, Kwon will introduce another CXL switch technology at the OCP (Open Compute Project) meeting hosted at Meta’s Sunnyvale campus.

Panmnesia has designed its CXL switch to incorporate sophisticated software known as the fabric manager, enabling configurable connectivity that allows users to scale their CXL systems effortlessly. Kwon said: “Panmnesia’s CXL switch technology will be crucial for optimizing the efficiency of big data services such as generative AI.”

Open source database supplier Percona announced Liz Warner as CTO. She was previously CTO at Kubernetes management company Weaveworks, and has held tech leadership positions with Nationwide, 10x Future, and Toyota Connected Europe. She has also led development and technology teams at Mastercard and Apple. Warner’s appointment comes as Percona co-founder and long-time CTO Vadim Tkachenko steps away from the executive position to assume a new set of duties as Technology Fellow at Percona. He will focus on Percona’s involvement in the Linux Foundation’s Valkey Community, the growth of vector databases, and other emerging technologies.

Quantum is expanding its global partnership model across South Korea, Japan, Australia, and New Zealand, building on existing coverage of other Asia-Pacific markets such as China, India, and Singapore. It has entered into exclusive distributor agreements with TS Line Systems for Korea, ACA Pacific for Australia and New Zealand, and NGC for Japan. Each already has a foundation with Quantum and knowledge of its products, with existing joint customers today.

Rubrik’s Zero Labs has published a report, The State of Data Security: Measuring Your Data’s Risk, which shows healthcare organizations experienced 50 percent more encryption events than the global average across 2023. Cloud continues to drive inherent risk and security blind spots, as 70 percent of all data is typically not machine readable by security appliances. Leadership changes following cyberattacks are on the rise, with major personnel changes reported by 44 percent of organizations – up from 36 percent in 2022.

App HA and DR supplier SIOS Technology announced the availability of its SIOS LifeKeeper for Linux Admin training on Udemy, an online skills marketplace and learning platform.

Snowflake launched Snowflake Arctic, a new enterprise-grade large language model (LLM) with a Mixture-of-Experts (MoE) architecture. It features:

• Performance: Outperforms leading open models including DBRX, Mixtral-8x7B, and more in coding (HumanEval+, MBPP+) and SQL generation (Spider), while simultaneously providing leading performance in general language understanding (MMLU).
• Efficiency: Mixture-of-Experts (MoE) architecture that activates 50 percent or fewer parameters than comparably sized models.
• Open: Unhindered access and customization with open weights and detailed training recipes under an Apache 2.0 license.

Enterprise data management company Syniti announced record-breaking Q1 2024 results, setting the stage for a strong year. The company reported its highest revenue in company history and saw significant year-over-year increases in both software revenue and annual recurring revenue (ARR). Service bookings were the highest recorded for Q1, up 65 percent over Q1 2023.

Data warehouser Teradata announced an open and connected approach to supporting the open table formats (OTFs) Apache Iceberg and Linux Foundation Delta Lake, embracing the industry pivot toward open source technologies and offering customer choice in data management. It adds a new dimension to Teradata VantageCloud Lake, its cloud-native analytics and data platform for AI, as well as Teradata AI Unlimited, an on-demand and cloud-native AI/ML engine – which will move to public preview on both the AWS and Azure Marketplaces beginning in Q2 2024. It says OTFs represent a significant change from proprietary data storage to more flexible storage that can be used across platforms. The vision is greater interoperability, cost efficiency, and choice.

“In today’s data landscape, we’re seeing wide adoption of open table formats, with 51 percent of organizations actively adopting Delta tables and 27 percent adopting Apache Iceberg. This trend reflects the industry’s focus on a single source of data and the ability to leverage multiple engines against that data,” said David Menninger, Executive Director at Ventana Research, part of ISG.

TerraMaster released its D8 Hybrid, saying it’s the industry’s first hybrid HDD/NVMe SSD enclosure. It can hold 4x SATA HDDs/SSDs and 4x M.2 2280 NVMe SSDs, with a capacity of 24 TB per HDD and 8 TB per M.2 SSD, providing users with up to 128 TB of storage space. Think of it as storage expansion for Windows, Mac, or Linux computers. Hot data stored on high-speed SSDs can be quickly accessed, while cold data is cost-effectively stored on HDDs. TerraMaster TPC Backupper simplifies Windows PC backups. Users can schedule folder or disk partition backups to TerraMaster USB HDD storage or NAS servers effortlessly. TPC Backupper supports incremental and differential backup strategies and is compatible with Windows 8/8.1/10/11.

The D8 Hybrid adopts the USB 3.2 Gen2 protocol for high-speed data transmission up to 10 Gbps. With a single M.2 SSD, the read/write speed can reach up to 980 MBps. With 2x SSDs in RAID 0, the read speed can reach 960 MBps. With 2x hard drives in RAID 0, the read/write speed can reach up to 521 MBps.
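Those RAID 0 numbers are consistent with the USB link, not the drives, being the bottleneck. A quick check of the interface ceiling (assuming 128b/132b line encoding and ignoring further protocol overhead):

```python
# USB 3.2 Gen2: 10 Gbps raw line rate with 128b/132b encoding.
raw_gbps = 10
payload_ceiling_mbps = raw_gbps * 1000 / 8 * (128 / 132)  # ~1,212 MBps theoretical

# A single SSD at 980 MBps is already near that ceiling once real-world
# protocol overhead is added, so striping two SSDs in RAID 0 over the
# same link cannot come close to doubling throughput.
```

That is why the two-SSD RAID 0 read figure (960 MBps) is no higher than the single-SSD figure.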

According to the Storage Newsletter, research house Trendfocus says preliminary calendar Q1 2024 HDD supplier shipment numbers were:

• Seagate: 11.5-12 million with a 39.2-39.5 percent market share, down 17.6-21 percent year-over-year
• Toshiba: 6.5-7 million with a 22.3-22.9 percent market share, up 2-9.9 percent year-over-year
• Western Digital: 11.1-11.6 million with a 37.9-38.1 percent market share, down 8.4-12.3 percent year-over-year

But Western Digital earned more revenues from HDDs in its calendar Q1 2024 (its Q3 FY24) than Seagate.

….

Research house TrendForce says North American customers are increasing their orders for storage products as energy efficiency becomes a key priority for AI inference servers. This is driving up demand for QLC enterprise SSDs. Currently, only Solidigm and Samsung have certified QLC products, with Solidigm actively promoting its QLC products and standing to benefit the most from this surge in demand. It predicts shipments of QLC enterprise SSD bits will reach 30 exabytes in 2024 – a fourfold increase in volume from 2023.

The Stack newsletter says United Healthcare paid a $22 million ransom to BlackCat/ALPHV malware attackers of its Change Healthcare subsidiary, and that “the cyberattack, which took place in February 2024, halted medical payments and prescriptions across the US.” United Healthcare said the attack had cost it “$872 million in unfavorable cyberattack effects” in the first 2024 quarter. Total losses from the attack are expected to be around $1.6 billion. The attackers used stolen credentials, not protected by multi-factor authentication (MFA), to gain access to United Healthcare’s system via a Citrix portal for remote desktop access, then encrypted data after exfiltrating it. Read more in the United Healthcare CEO’s testimony to a US House of Representatives committee.

VergeIO announced that MSP and CSP CenterGrid has implemented VergeIO’s Ultraconverged Infrastructure (UCI) to advance CenterGrid’s service offerings, particularly in the Media and Entertainment (M&E) sector and GPU-powered workloads such as VFX and AI. VergeOS was chosen for its per-server licensing model, comprehensive networking, virtualization, storage integration, and migration capabilities from legacy infrastructure software systems.

Software RAID supplier Xinnor is partnering with CineSys LLC, a broadcast and media SI in North America. CineSys is offering two xiRAID-based products. First, NVMe workstations for Autodesk Flame, BMD Resolve, and Adobe. VFX artists and editors will be able to write data to and read data from the CineSys NVMe workstation at 50 GBps writes and 70 GBps reads. The deployment aims to replace traditional local uncompressed framestores, enhancing efficiency and performance for content creators. Second, SAN file systems using target nodes. This setup features an iSCSI over RDMA SAN comprising two target nodes running on Supermicro 1U AMD Epyc-based PCIe 5 NVMe storage. Each node is equipped with 16x 7.68 TB drives, delivering 107 TB of usable space per node. With 4x 100 Gbps network links per node, the cluster achieves a performance of 50 GBps read and write per node, totaling 100 GBps read and write for the entire cluster.

Commvault surpasses forecasts with strong subscriber growth

Solid customer and subscription growth sent Commvault quarterly revenues past forecasts and set the scene for continued growth.

Revenues in the quarter, ended March 31, were $232.3 million – 9.7 percent higher than a year ago, and soundly beating the $214 million high end of its guidance range. There was a profit of $126.1 million – a huge 56.5 percent of revenue, and a massive turnaround from the year-ago $43.5 million loss. The large profit rise was due to an income tax benefit of $103.1 million. Excluding that fillip, profits were $23 million.

Full fiscal 2024 revenues were up 7 percent year-over-year to $839.2 million, with profits of $168.9 million compared to FY 2023’s $35.8 million loss.

Sanjay Mirchandani, Commvault CEO, said: “We had an outstanding quarter and a breakout year, highlighted by 10 percent total revenue growth and 15 percent total ARR growth in the fourth quarter.” These results are “setting the stage for FY 2025 and beyond.”

Commvault revenue by quarter
A flat Q1 2024 has been followed by three growth quarters

Quarter financial summary:

• Gross margin – 83.2 percent
• Operating cash flow – $80 million
• Free cash flow – $79.1 million, up 18 percent
• EPS – $0.79
• Share repurchases – $50.4 million compared to none a year ago

Subscription-based revenue growth was solidly positive:

• ARR – $770 million, up 15 percent annually
• Subscription ARR – $120 million, up 27 percent
• SaaS ARR – $168 million, up 65 percent, and termed “explosive” growth

Commvault ARR
A smoothly rising growth curve

Commvault has 9,300 subscription base customers – up 26 percent annually and 5.7 percent sequentially. There are 5,000 SaaS customers, and the SaaS net dollar retention rate was 123 percent, down slightly from 125 percent a year ago. CFO Gary Merrill said: “The penetration we saw in the Americas on new customers was one of the strongest quarters we’ve had in quite some time.”

Mirchandani said Commvault moved its focus from data protection to cyber resilience last November, introducing a Commvault Cloud cyber resilience offering. Announcements like the Appranix acquisition and Cleanroom Recovery strengthened that focus. In the earnings call he pointed out: “At the heart, cyber resilience is a customer’s ability to recover in the face of an attack.”

He added: “We continue to flesh out our capabilities and we don’t separate out artificially, like some of our competitors do, on-premises workloads from cloud workloads because, eventually, in the hybrid world, workloads will move and they will live in different places at different times and in different clouds.”

The outlook for next quarter is for revenues between $213 million and $216 million, a rise of 8.3 percent at the midpoint. Its full FY 2025 outlook is for $904 million to $914 million in revenues, with the $909 million midpoint 8.3 percent higher than the FY 2024 revenue number.

This does rather depend on Commvault recruiting a new sales boss. The previous CRO, Riccardo di Blasio, went to NetApp in January to become its SVP of North America sales.

Commvault presented its aspirations for fiscal 2026: $1 billion in ARR, a 14 percent CAGR from FY 2024, and $310 million to $330 million in SaaS ARR, around 40 percent CAGR from FY 2024.

Tiger Tech gets a head transplant

Tiger Technology has a new CEO and full-time COO as its founder looks to expand the company across four continents: Asia, Africa, North America, and Europe.

Tiger Technology provides a hybrid multi-cloud file namespace for Windows Servers. It has several products: Tiger Store on-premises file sharing; Tiger Pool, which combines multiple volumes into a single pool; Tiger Spaces file sharing among work group members; and the Tiger Bridge cloud storage gateway, syncing, and tiering product. The latter enables file tiering from on-premises servers to cloud-based file and object stores.

The company was founded in Bulgaria in 2003 by Alexander Lefterov, who was its CEO up until now. Lefterov had previously founded over ten high-tech companies and sold several of them. There are some 70-80 employees, and it sells into the general file sharing and tiering markets and specialized ones such as digital pathology and surveillance, with surveillance files tiered to the public cloud. There are more than 11,000 Tiger customers.

Iravan Hira (left) and Alexander Lefterov (right)

The new CEO is Iravan Hira, a 21-year HP and HPE veteran who held managerial roles at both, including MD in Bulgaria, and who has also worked at Karoll Capital Management. He was most recently the managing editor for the Bulgarian division of the Financial Times. A supplied Hira quote said: “I have been following Tiger Technology for several years, and I believe that together with Alex, we will elevate the company to new heights, meeting the exponentially growing demand for our innovative data storage and management solutions in a hybrid environment.”

He inherits a business organization with CEOs for two subsidiary units: Lance Kelson in the USA, who runs Tiger Surveillance, and Nikola Apostolov, who leads Tiger Health Technology in Bulgaria. Lefterov’s new role has not been revealed, but we understand it to be a CTO-like role, with him being chairman as well.

Mila Petrova

Lefterov thinks that Tiger can expand within an AI-influenced market because customers need their locally stored and managed data used by AI processes in the cloud. Tiger’s tech can be used to make that locally held data available to cloud AI. He said: “I’ve been observing the changes in the world as well as technological needs, and I see we are on the threshold of a new era. I want to participate in this transformation, and I understand that I can do it much more effectively if I focus on technology.”

He needed a replacement CEO. In his view: “Since technology and business are inextricably linked, I’ve realized the need to invite a visionary in business on this journey. In Iravan, I find the experience, the skills, and that flair for expansion that will unleash our potential to bring value by solving global problems in digital transformation.”

To help scale the business, Mila Petrova becomes Tiger’s full-time COO, having been part-time since January. She was previously a commercial director at Shell, and subsequently its Digital Development Manager for Central and Eastern Europe.