
SQream screeches to $111M funding mark in big GPU data processing play

Ami Gal, co-founder and CEO at SQream Technologies

Data acceleration firm SQream has completed a $45 million Series C funding round to help it win more business in the big data analytics and artificial intelligence/machine learning (AI/ML) workload markets.

According to Blocks & Files record-keeping, this round completion takes the company’s publicly declared total funding past the $111 million mark since 2012. The last raise, in 2020, totaled over $39 million. The firm was founded in Tel Aviv, Israel, in 2010.

The latest round was led by World Trade Ventures, with participation from new and current investors, including Schusterman Investments, George Kaiser Foundation (Atento), Icon Continuity Fund, Blumberg Capital, and Freddy & Helen Holdings.

SQream said the new money will be used to further expand its footprint in North America, extend its strategic partnerships, and further develop its capabilities in enterprise AI/ML.

The firm’s relational database management system (RDBMS) runs on third-party graphics processing units (GPUs), with the aim of getting the most out of that hardware – reducing the amount of kit that has to be deployed, processing data faster, and delivering quicker analytics results to business users.

“By utilising the massive and parallel processing capabilities of GPUs, SQream’s solution allows companies to process extremely large and complex datasets faster, more cost-efficiently, with a smaller carbon footprint, using less hardware, and consuming less energy than conventional big data solutions that rely strictly on CPUs (central processing units),” the vendor said.

Ami Gal, chief executive officer of SQream, said of the cash injection: “Companies are very focused on driving analytics maturity right now, and this funding round is another step in our mission to better equip our customers with cutting-edge data analytics and processing solutions, that empower them to derive meaningful insights from their vast datasets, and drive growth in ways previously thought impossible.”

He acknowledged the cash had been raised in a “tight funding climate.”

Abraham Schwartz, partner at World Trade Ventures, added: “We are pleased to contribute to SQream’s ascent towards reshaping the big data and AI landscape, and look forward to seeing the significant impact they will make in the market, especially in North America, to meet this increasing demand.”

SQream recently appointed Deborah Leff, the former global head of business analytics sales at IBM, as chief revenue officer as part of its effort to bump up US sales.

The firm’s international customers include Samsung, LiveAction, Sinch, Orange, AIS, Vodafone, and LG. Last year, data systems provider Hitachi Vantara announced a go-to-market alliance with SQream.

Still no dancing at the ailing data management artist formerly known as WANdisco

Edge and cloud data migration and management specialist WANdisco has reported more financial pain as it goes through what it terms its “transitional” year. After previous misreporting of sales, the company’s shares on AIM – the London Stock Exchange sub-market – were suspended at one point this March. The firm is now to be rebranded as Cirata.

For the half-year ended 30 June, 2023, the company saw sales slump to only $3 million, compared to $5.8 million generated in the same period last year. The firm’s bookings in the period also crashed to $2.8 million from $7.3 million last year, and the adjusted EBITDA loss widened to $14.8 million (it was $14.1 million last time).

The statutory loss from operations was $18.8 million, up from $17.2 million.

“The discovery of the irregularities had a significant impact on prospective customers, strategic partners, the pipeline and the overall business,” said the firm in its financial statement this morning. “Not only did the company suffer interruption to normal commercial activities, but a review of pipeline qualification was also a necessary step in the instigation of the turnaround plan to set a realistic baseline.”

The pipeline, it reckons, has now been “appropriately cleansed and qualified”, and management are confident that what remains is “robust and of high quality.” However, “that pipeline continues to be in the early stages of a rebuild.” The company is predicting a return to growth next year.

After the March share suspension, WANdisco revealed the extent of the problem: “The board now expects that anticipated FY22 revenue could be as low as $9 million and not $24 million as previously reported.”

And in 2021, the company ran at an operating loss of almost $40 million, on sales of just $7.3 million.

Though WANdisco has never made a profit since its formation in 2005, it was valued at an estimated $1 billion before the share suspension.

As previously announced, the firm is to be rebranded as Cirata, and over the coming weeks there will be a “rolling programme of brand introduction and delivery” across the company’s operations.

The company’s share ticker on AIM is expected to be changed from “WAND” to “CRTA” by “early October 2023”.

Stephen Kelly, the former chief executive officer of accountancy software firm Sage, was brought in as CEO to turn the company around after the sales reporting fiasco, and recently helped seal a $30 million equity fundraise from shareholders as part of the company’s recovery efforts, which also included being relisted on AIM.

Kelly said this morning management had conducted a “root and branch” review of the company, and admitted that “sadly, very little from the past deserves preservation.” He added: “We are building from the ground up.”

Increased cloud costs pull focus of some SMEs to on-site storage

A snapshot survey just published indicates smaller companies may well baulk at paying rapidly increasing cloud storage costs, with a number instead intending to move some of their data onto on-premises storage systems to mitigate the increases.

All the hyperscale cloud storage suppliers have announced price increases recently, and last week, IBM announced its own raft of more expensive services, taking effect from January 2024.

The research suggests that almost two-thirds (62 percent) of UK SMEs that use cloud computing services will take steps to combat their rising cost by the end of 2023.

The report, commissioned by business internet service provider Beaming, questioned the leaders of 500 UK companies employing fewer than 250 people, and found that a third (33 percent) of them planned to reduce the amount of data they stored in the cloud. In addition, a quarter (24 percent) were going to reduce the number of cloud services they used.

Although more than a quarter (27 percent) of SMEs said they initially adopted cloud to reduce computing expenditure, they now expect a 10 percent increase in the cost of cloud services this year. Several major cloud providers have introduced double-digit price increases for services used by SMEs already.

Just one in five (20 percent) of SMEs say they will absorb the extra costs, however.

In addition to reducing the amount of data stored in the cloud and cutting the number of cloud services used, 21 percent of firms say they will downgrade some subscriptions to basic services. A further 17 percent plan to move data or applications from the cloud to on-premises systems instead.

“While the cloud has delivered many benefits to businesses, the cost of cloud has been creeping up for some time now, and that creep is starting to look unjustified to businesses dealing with wider inflationary pressures,” said Sonia Blizzard, managing director at Beaming. “Many SMEs, some of which rushed to the cloud to support remote working during the pandemic, are questioning the value of these services for the first time, and are now taking action to get on top of cost increases.”

Uber takes the fast lane with Alluxio

Uber is using Alluxio’s virtual distributed file system to speed its Hadoop-based analytics processing by caching hot read data.

Alluxio has developed open source data store virtualization software, a virtual distributed file system with multi-tier caching. Analytics apps can access data stores in a single way instead of using unique access routes to each storage silo. This speeds up data silo access and includes an HDFS driver and interface, which proved useful to Uber.

Alluxio graphic

The ride-hailing company stores exabytes of data across Hadoop Distributed File System (HDFS) clusters based on 4TB disk drives. It’s moving to 16TB drives and has a disk IO problem – capacity per drive has gone up but bandwidth has not, causing slow reads from data nodes.

It analyzed traffic on the production cluster and found “that the majority of I/O operations usually came from the read traffic and targeted only a small portion of the data stored on each DataNode.” A table shows some of the statistics gathered from ~20 hours of read/write traces from DataNodes across multiple hot clusters. 

Uber IO operations

The table above shows that the majority of the traffic is read IO and that less than 20 percent of the data blocks were read in a 16-to-20-hour window. In one hour, the top 10,000 blocks were responsible for 90 percent or more of the read traffic.

Based on this, the IT people at Uber reckoned that caching the hot read data would speed overall IO. They thought a 4TB SSD cache should be able to store 10,000 blocks based on the average block size in hot clusters.

They used Alluxio to have a read-only DataNode local cache for the hot data alongside the 525TB disk capacity using 16TB HDDs. The DataNode local SSD cache is situated within the DataNode process, and remains entirely transparent to HDFS NameNode and clients. It resulted in “nearly 2x faster read performance, as well as reducing the chance of process blocking on read by about one-third.” 

Uber blog diagram.

Uber experienced a 99.5 percent cache hit ratio, meaning the overwhelming majority of reads were served from the SSD cache. Cached DataNode read throughput was found to be significantly higher – nearly twice that of non-cached DataNode reads.

Uber concluded: “Overall, the local caching solution has proven to be effective in improving HDFS’s efficiency and enhancing the overall performance.”

An Uber blog contains much more information about the Alluxio caching arrangements.

Druva flying higher in SaaS data protection

SaaS backup supplier Druva has passed $200 million in annual recurring revenue, with co-founder and CEO Jaspreet Singh claiming it’s “the first (and only) 100 percent SaaS data resiliency platform to reach this milestone.” [See bootnote below.]

Update: N-able and Druva subscription differences explained by Druva. 8 Sep 2023.

The company’s Data Resiliency Cloud is a cloud-native backup-as-a-service offering based on AWS’s cloud infrastructure. It provides protection for SaaS applications, public and private cloud environments, and end-user devices. Druva sells its product itself and through Dell EMC as PowerProtect and in APEX Backup Services. It announced a $10 million SaaS data protection service warranty in August last year. The company has more than 5,000 customers, growing by 30 percent in the last year.

Jaspreet Singh.

Singh writes: “In an era where evolving cyber threats shape the global IT landscape, Druva has proven itself as a disruptive force driving industry change and a more secure future.”

He says SaaS backup is better than purchased backup software because “Druva customers do not have to think about procuring backup hardware and managing supply chain issues. They do not have to spend weekends ensuring their hundreds of appliance nodes are patched with security updates, and they don’t have to worry about scaling their infrastructure to protect new applications.”

Commvault’s Metallic SaaS data protection service had a $113 million ARR last month from almost 4,000 customers – $28,250 per customer compared to Druva’s $40,000 per customer. 

Competitor HYCU reported having more than 3,000 customers at the start of 2022, and passed the 3,200 mark in June of that year. Its ARR number hasn’t been revealed.

Druva has launched an MSP program, which Singh says “has now grown by nearly 300 percent annually.”

We note that N-able, which also supplies backup-as-a-service through MSPs, recorded $106.1 million revenue in the second 2023 quarter, giving it a $424 million annual revenue run rate. Druva’s $200 million number is for its annual recurring revenue. [See below for a discussion of Druva and N-able’s subscription revenues.]

Druva intends to harness AI to “enable users to improve backup operations with automation, efficiency, and simplicity while reducing threat response and recovery time through trend analysis and orchestration.” CTO Stephen Manley writes: “We will further incorporate AI into Druva’s platform to take the next step towards autonomous backups, which will deliver the simplicity, efficiency and proactive security that will revolutionize data protection.”

The company is also intending to expand from AWS with Azure protection. Azure and AI announcements are said to be coming soon.

Druva was founded in 2008 and has raised $475 million through eight funding rounds, the latest one for $147 million in 2021. It made the shift from selling backup software licenses to a SaaS business model in 2013 and now carries out more than 6 billion backups annually, managing more than 275PB of data. 

Update – Druva and N-able

We asked Druva some questions about its positioning versus N-able, and Druva CFO Mahesh Patel provided an explanation of relevant terms and phrases in Druva’s blog and release.

First and Only 100% SaaS Data Resiliency Platform

N-able’s primary business is MSP Remote Monitoring and Management, and they also offer security with EDR and DNS filtering. They do not disclose the breakdown of their revenue across these services and backup but, of note, there is no mention of storage in any financial filings, and cloud hosting costs are their second highest COGS item behind technical support personnel. From our view of SaaS data protection, customer storage/cloud hosting costs would be the highest component of COGS, but it is not for N-able. Our read is that MSP Remote Monitoring and Management is a dominant portion of their ARR, which validates our position.

SaaS versus Subscription

We believe SaaS is materially different from subscription ARR. SaaS in our view is outcome driven, which means we provide a turnkey solution that includes the storage as well as the automation (patching, upgrades, etc.) at a predictable price. Providing software for a customer to deploy on existing hardware is subscription, but not SaaS. In these cases, when the hardware depreciates or fails, the software gets replaced.

Please note the highlighted language from N-able’s SEC-filed 10-K below, which states that a portion of revenue is “self-managed” by their customers. While they recognize this revenue upfront in the “subscription revenue” line item, they ratably recognize the maintenance portion for these customers in subscription revenues. SaaS companies do not have a separate maintenance component. Also, below is a picture from N-able’s website offering customers an option to deploy on existing storage. Based on the notes in the financials, a minimum of 20 percent of their revenue is “self-managed”.

N-able note

“Subscription Revenue. We primarily derive subscription revenue from the sale of subscriptions to the SaaS solutions that we host and manage on our platform. Our subscriptions provide access to the latest versions of our software platform, technical support and unspecified software upgrades and updates. Subscription revenue for our SaaS solutions is generally recognized ratably over the subscription term once the service is made available to the MSP partner or when we have the right to invoice for services performed. In addition, our subscription revenue includes sales of our self-managed solutions, which are hosted and managed by our MSP partners. Subscriptions of our self-managed solutions include term licenses, technical support and unspecified software upgrades. Revenue from the license performance obligation of our self-managed solutions is recognized at a point in time upon delivery of the access to the licenses and revenue from the performance obligation related to the technical support and unspecified software upgrades of our subscription-based license arrangements is recognized ratably over the agreement period. We generally invoice subscription agreements monthly based on usage or in advance over the subscription period on either a monthly or annual basis.”
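
To make the recognition difference concrete, here is a minimal sketch with invented contract values (the $12,000 price and the 70/30 license/support split are assumptions, not either vendor’s actual terms) showing how the same one-year deal is recognized under pure-SaaS ratable recognition versus a self-managed term license:

```python
# Hypothetical $12,000 one-year contract, recognized monthly.
CONTRACT, MONTHS = 12_000, 12

# Pure SaaS: the whole contract is recognized ratably over the term.
saas = [CONTRACT / MONTHS] * MONTHS                    # $1,000 each month

# Self-managed term license, per the 10-K language above: the license
# portion is recognized at a point in time on delivery, and only the
# support/upgrades portion ratably. The 70/30 split is an assumption.
license_part, support_part = 0.7 * CONTRACT, 0.3 * CONTRACT
self_managed = [license_part + support_part / MONTHS] + \
               [support_part / MONTHS] * (MONTHS - 1)

print(f"month 1 - SaaS: ${saas[0]:,.0f}, self-managed: ${self_managed[0]:,.0f}")
# Both total $12,000, but the self-managed model front-loads revenue,
# which is why the two 'subscription revenue' lines are not comparable.
assert round(sum(saas)) == round(sum(self_managed)) == CONTRACT
```

The front-loading is Druva’s point: an upfront-recognized license inflates the subscription line in the period of sale relative to a purely ratable SaaS contract of the same value.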

Bootnote

A Druva spokesperson told us: “Our ‘first and only’ was specifically intended to describe our ‘100% SaaS Data Resiliency platform’, not the $200 million ARR.”

Five emerging memory technologies, with MRAM in pole position

Two semiconductor analysts have written an Emerging Memories report suggesting MRAM has good prospects for replacing SRAM and NOR flash in edge computing devices that need to process and analyze data in real time.

Tom Coughlin of Coughlin Associates and Objective Analysis’ Jim Handy have produced a 272-page, 30-table analysis of the prospects for five emerging memory technologies: MRAM, phase-change memory (PCM), ferroelectric RAM (FeRAM), resistive RAM (ReRAM), and NRAM/UltraRAM. These are typically non-volatile memories with DRAM-class data access speeds, and roadmaps to increased densities that promise to go beyond the scaling limits of NAND and NOR while using less electricity than constantly refreshed DRAM and SRAM.

Emerging memories report graphic.

Coughlin writes: “As the Internet of Things builds out, and as a growing number of devices make new measurements of data that was previously unmeasured, the world’s data processing needs will grow exponentially, including AI training and inference using data from these devices. This growth will not be matched with increases in communication bandwidth, despite the adoption of new wireless standards like 5G.”

That means the data will need to be processed where it is generated and where analytics results have to be applied – in IoT edge locations. Communication links between edge sites and remote datacenters will be too slow, Coughlin writes, for remote datacenters to do the work within the real-time and near-real-time limits needed.

Intel made the most determined attempt to popularise an emerging memory technology with its Optane products, based on 3D XPoint PCM technology. This failed because of the complexity of its programming needs and its cost. The cost of producing the Optane chips was basically the production equipment and materials cost divided by the number of chips output.

It is inherent in semiconductor fabrication that the higher the output volume, the lower the cost.

With Optane, Intel could price its chips as if output were much higher in order to sell more of them, but it then had to book the difference as a loss – and that cost was astronomically high. This is the emerging memory production cost trap. High sales volume requires low prices, based on high-volume output from a costly plant. But the supplier starts out with low production volume, meaning high chip costs, and has to bear the loss until sales volume ramps up high enough to justify production volume increases, which lower per-chip costs.
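
A toy model makes the trap visible. The numbers below are invented for illustration – Intel’s actual fab costs are not public – but the shape of the curve is the point:

```python
# Emerging-memory cost trap, with invented illustrative numbers.
FIXED_COST = 500_000_000   # annual fab equipment/amortization ($, assumed)
VARIABLE_COST = 2.0        # materials cost per chip ($, assumed)
PRICE = 12.0               # competitive selling price per chip ($, assumed)

for volume in (1_000_000, 10_000_000, 100_000_000):
    cost_per_chip = FIXED_COST / volume + VARIABLE_COST
    margin = PRICE - cost_per_chip
    print(f"{volume:>11,} chips/yr -> ${cost_per_chip:,.2f}/chip, "
          f"margin ${margin:,.2f}")
# At 1M chips/yr each chip costs ~$502 to make but must sell near $12;
# only around 100M chips/yr does per-chip cost fall below the price.
```

Until volume reaches the crossover point, every chip sold at the market price deepens the loss – which is the gamble described next.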

It is not guaranteed that sales volumes will ramp up enough, so it is a gamble that the manufacturer can sell enough chips over time to justify production increases. This did not happen with Optane, and Intel and its manufacturing partner Micron pulled the plug and killed the product.

None of the five emerging memory technologies identified by Coughlin and Handy is new in a time sense, but none of them has yet crossed the chasm into mass production. The two think MRAM and its close cousin STT-RAM have the leading chance of becoming widely used, with MRAM having the better odds of the pair.

They estimate that 133TB of MRAM was produced in 2022, generating $118 million in revenues, and this could rise to 4.56EB and $980 million in revenues by 2033. At that time DRAM and NAND revenues would be vastly higher, as a chart they use (above) indicates, with DRAM looking likely to approach $100 billion.
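
Those figures imply very different growth rates for bits and dollars. A quick back-of-the-envelope check on the analysts’ numbers:

```python
# Implied compound annual growth rates from the report's 2022 and 2033 figures.
years = 2033 - 2022                   # 11 years
capacity_ratio = 4_560_000 / 133      # 4.56EB vs 133TB, both in TB
revenue_ratio = 980 / 118             # $980M vs $118M

capacity_cagr = capacity_ratio ** (1 / years) - 1
revenue_cagr = revenue_ratio ** (1 / years) - 1
print(f"capacity CAGR: {capacity_cagr:.0%}")   # ~158% per year
print(f"revenue CAGR:  {revenue_cagr:.0%}")    # ~21% per year
```

Shipped bits growing roughly 2.6x a year while revenue grows about 21 percent a year implies a steep fall in dollars per terabyte as MRAM volumes ramp – exactly the volume-driven cost decline the production trap demands.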

The Emerging Memories Branch Out report is available for $7,500 from Coughlin Associates with a substantial brochure freely available here.

Bootnote

MRAM manufacturer Everspin was profiled here. NRAM is Nantero’s Nano-RAM. UltraRAM is a charge-based memory.

Gartner unveils hottest storage trends for 2023

Gartner reckons the top enterprise storage trends for 2023 include cloud operating models, new SSD technologies, cyber security against ransomware, and better use of unstructured data for analytical insights.

We get to see its document thanks to cyber storage and protection supplier RackTop Systems, which has made it available.

The Gartner analysts start with a set of strategic planning assumptions, and then lay out the nine top trends in three categories:

Gartner enterprise storage trends

None of these topics will be news to B&F readers. Each trend is anchored by a strategic planning assumption (SPA), such as “By 2028, consumption-based STaaS will replace over 35 percent of enterprise storage capex, up from less than 10 percent in 2023.”

Gartner then renames the three headings, from left to right, as Common, Unstructured Data Storage, and Block Storage, and has an analyst discuss each of the nine trend topics.

Storage-as-a-Service – Managed STaaS is powered by a storage vendor’s software and/or appliances, which bring the enterprise feature set, availability, and performance while delivering a cloud-like consumption model.

Cyber storage – By 2028, 100 percent of storage products will include cyber storage capabilities focused on active defense beyond recovery from cyber events, up from 10 percent in early 2023. Most major storage vendors are actively working on cyber storage capabilities, which might be included with storage systems or enabled as a separate product. Innovative startups are releasing products supporting heterogeneous capabilities to protect enterprise data across block, file, and object storage.

QLC flash – By 2027, enterprises will use QLC in 25 percent of their SSD flash media, up from five percent in late 2022. The cost advantages of QLC versus triple-level cell (TLC)-based arrays, coupled with enhanced durability and performance, provide enterprises with sufficient long-term benefits (e.g. the rapid restoration of backup data during recovery from a ransomware event).

Single file and object storage platform – By 2028, 70 percent of file and object data will be deployed on a consolidated unstructured data storage platform, up from 35 percent in early 2023. A single platform for file and object data enables consolidation of all unstructured data workloads, which not only simplifies storage operations but also storage sourcing activities.

Data storage management services – By 2027, at least 40 percent of organizations will deploy data storage management solutions for classification, insights, and optimization, up from 15 percent in early 2023.

Hybrid cloud file data services – By 2027, 60 percent of infrastructure and operations leaders will implement hybrid cloud file deployments, up from 20 percent in early 2023. This will consolidate unstructured data to a single copy, enabling centralized management around the protection and security of the underlying data, thereby simplifying operations while consolidating use cases. Typical outcomes include cost optimization to align the cost of the storage with the value of the data; data governance to ensure sensitive data has the right protection and retention policies applied; data security to enable the right level of permission and access controls; and enhanced analytics workflows that leverage data classification and optionally tag the data with custom metadata.

NVMe over Fabric – By 2027, 25 percent of enterprise organizations will deploy NVMe-oF as a storage network protocol, up from less than 10 percent in mid-2023. Of the NVMe-oF options, NVMe-TCP has the most to gain on-premises, where Ethernet’s cost and simplicity can rival both iSCSI and low-end Fibre Channel SANs at bandwidth requirements at or below 16Gbit/sec. Further, NVMe-oF can scale out to high capacity levels with high-availability features and be managed from a central location, serving dozens of compute clients.

Container-native storage – By 2027, 80 percent of Kubernetes deployments will require advanced features for persistent container storage, compared to 30 percent in early 2023. While containers are built to be stateless, the volume of deployments requiring persistent data for stateful applications is increasing. A good starting point for an initial Kubernetes deployment is traditional storage approaches, where the Container Storage Interface (CSI) abstracts the underlying storage platform.

Captive NVMe SSD – “Captive” is Gartner’s way of describing computational storage SSDs doing onboard compression and the like. By 2026, captive NVMe SSDs will replace over 30 percent of deployed on-premises capacity, up from less than five percent in mid-2023. Use of captive NVMe SSDs provides myriad benefits, ranging from enhanced storage operations and cost savings to a more resilient and intelligent data storage services environment.

Gartner says: “The challenges arise from managing the explosive growth of enterprise data, the business need for an on-premises cloud-like consumption model to mitigate cyber attack risks, or leveraging latest storage technologies like QLC or NVMe over fabric for better cost and/or performance.”

IT leaders can, it asserts, proactively respond to business demands by using these technology trends to build flexible and agile storage platforms.

B&F thinks fast technology followers will be doing this already. Gartner’s doc reassures the slower and less certain that they should be doing it too.

Storage news ticker – September 7

Storage news

Airbyte has announced an integration with Datadog, the monitoring and security platform for cloud applications. It gives customers the ability to monitor and analyze data pipelines with reporting of nearly 50 metrics – at no additional cost. Users get an overview of the overall performance of Airbyte data pipelines from a centralized location, detection and instant alerts on failing syncs or connections, and notifications on long-running jobs, which could indicate latency issues. The integration is available immediately. To begin using it, existing Datadog customers can configure their Airbyte deployments to send metrics to Datadog. Users not already on Datadog can sign up and get started with a free trial. Users not already on Airbyte can sign up for free.

Backup software supplier Bacula has revealed its product roadmap. It aims to:

  • Further heighten security levels via smart alerts on suspicious patterns
  • Detect Data Poisoning
  • Monitor and detect anomalies in your data
  • Get more insights on how your data evolves and changes
  • Get details about infected, corrupted or error files in your infrastructure
  • Improve security levels of your environment with personalized recommendations
  • Detect any insecure or inadvisable configuration
  • Plus possible OpenStack support

CData has achieved Google Cloud Ready – Cloud SQL Designation for Cloud SQL, Google Cloud’s fully managed relational database service for MySQL, PostgreSQL, and SQL Server. We’re told that this designation enables organizations to simplify data connectivity in the cloud, eliminate data silos, and break down barriers to insights. CData says it provides enterprise data connectivity offerings that ingest live data from more than 250 applications, systems, and data sources directly into Cloud SQL to support analytics, reporting, and other business initiatives.

Ian Wood

Data protector Commvault has hired Ian Wood as its Senior Sales Engineering Director for the UK & Ireland. He spent 23 years at Veritas, where he was most recently Senior Sales Engineering Director for the UK & Ireland.

Cloud database supplier Couchbase reported revenues of $43.1 million, up 8 percent annually, for its Q2 ended July 31. Total ARR as of that date was $180.7 million, up 24 percent. Couchbase says it “is not able, at this time, to provide GAAP targets for operating loss for the third quarter or full year of fiscal 2024 because of the difficulty of estimating certain items excluded from non-GAAP operating loss that cannot be reasonably predicted, such as charges related to stock-based compensation expense. The effect of these excluded items may be significant.” Operating loss was $21.9 million versus an operating loss of $15.2 million a year earlier.

Databricks has revealed an extensive product roadmap for its Azure version, with more than 20 bullet point items. Here’s a sample: 

Databricks Azure roadmap

GRAX, a supplier of Salesforce data management and protection, has released GRAX Lite for free Salesforce backup and recovery. It enables backups into AWS or Azure cloud storage environments and runs entirely in the customer-owned cloud. There are automated incremental backups of Salesforce data, including files and attachments, with granular control; users can restore individual records. More info here.

SaaS backup supplier HYCU’s CEO and co-founder, Simon Taylor, has a book coming out: Averting the SaaS Data Apocalypse. It will be launched at the Empire Boston in Boston, MA, on September 12. Attendees are promised “a captivating presentation by Simon offering insights into the challenges and solutions presented in the book.”

IBM’s public cloud has raised its prices outside the USA, effective January 1, 2024. The rises are listed here. They are presented as uplifts from US base prices and vary from 2.9 to 7.5 percent.

IBM cloud prices

InfluxData, which built the time series platform InfluxDB, has announced InfluxDB Clustered, a self-managed time series database for on-prem or private cloud deployments, deployed natively in Kubernetes. It says this completes its commercial product line developed on InfluxDB 3.0, its rebuilt database engine optimized for real-time analytics with higher performance, unlimited cardinality, and SQL support. InfluxDB Clustered has 100x faster queries on high-cardinality data, 45x faster ingest, and 90 percent storage reduction cost compared to the preceding InfluxDB Enterprise. It has encryption at rest and in transit, single sign-on, attribute-based access control, and air-gap support. InfluxData also recently announced the availability of InfluxDB Cloud Dedicated, a fully managed and scalable single-tenant InfluxDB cluster based on the InfluxDB 3.0 architecture and intended for large-scale time series workloads.

Kingston XS1000

Kingston has a new key fob-sized, all-black external SSD, the XS1000, with USB 3.2 Gen 2 performance and 1TB or 2TB capacity. It sits alongside the existing XS2000 with its up to 4TB capacity. Both weigh less than 29 grams. The XS1000 provides read speeds up to 1,050MBps and write speeds up to 1,000MBps, and comes with a USB-C to USB-A cable.

Samsung has doubled the capacity of its 16Gb DDR5 chip to 32Gb, using 12nm-class process technology and the same package size – but the die size is probably twice as big. It says this enables the production of 128GB DRAM modules – a 32Gb die supplies 4GB, so 32 of them make a 128GB module – and it can see a route to producing 1TB DRAM modules with this technology. Development has been fast, as 16Gb DDR5 DRAM die production only started in May. The 32Gb die does not use Through Silicon Via (TSV) technology (electrically conducting holes through the die), unlike the existing 16Gb die. Specifically, “by using Samsung’s 32Gb DRAM, the 128GB module can now be produced without using the TSV process, while reducing power consumption by approximately 10 percent compared to 128GB modules with 16Gb DRAM.” Mass production of the new 12nm-class 32Gb DDR5 DRAM is scheduled to begin by year-end. What Samsung uses instead of TSVs is not revealed.

Samsung 32Gb DDR5 DRAM die images

SK hynix is investigating how its NAND and LPDDR5 memory products are being used in Huawei’s Mate 60 Pro smartphone in contravention of US tech export restrictions. Bloomberg reported on the phone including SK hynix components based on a teardown exercise.

The SNIA’s Storage Developer Conference (SDC23) runs from September 18 to 21 in Fremont, CA. Presentation tracks include Magic Memory Access, Data Security, Emerging Technologies, Cloud Storage, DNA Data & Archival Storage, DPUs, AI/ML, and Data Placement. SNIA members get a $300 discount. View the sessions here. Register attendance here.

Storage Developer Conference flyer

SAN supplier StorPool has announced its integration with Proxmox Virtual Environment (VE), a hyperconverged infrastructure platform hosting the KVM hypervisor that can run operating systems like Linux and Windows on x64 hardware. Users get a block storage data platform, running on standard servers, available as a fully managed storage-as-a-service (STaaS) bundle. StorPool says its Proxmox VE integration provides millions of IOPS for the most demanding applications.

Everspin MRAM revenue up by single digits

Everspin, a pioneer in MRAM, recently released its earnings report, shedding light on the progress of the technology.

The company supplies Spin-Transfer Torque Magneto-resistive RAM (STT-MRAM) non-volatile memory devices. Thanks to their DRAM-class speed, they function as storage-class memories. However, like Intel’s Optane, the technology has failed to break through the NAND-DRAM gap to become a mainstream technology. Factors such as high chip prices have affected sales volume, while low manufacturing volume has kept costs elevated.

Everspin also supplies Toggle MRAM, a variant of STT-MRAM with lower density.

For Q2 of the company’s fiscal 2023 ended June 30, Everspin reported revenues of $15.7 million, surpassing its guidance and equating to year-on-year growth of six percent.

Actual MRAM product sales in the quarter were $13.4 million, up from the $13.22 million recorded a year earlier. Licensing, royalties, patents, and other revenue was $2.3 million, $857,000 higher than in the first quarter, mostly due to Radiation Hard license deals. Radiation Hard MRAM is used to provide memory for FPGA devices that are exposed to high amounts of radiation and used in defense and space applications.

The resulting net profit came in at $3.9 million, up from $1.671 million in the same quarter of 2022.

The company has sustained profitability for nine consecutive quarters. This suggests it is being run as a mature entity with a focus on profitability rather than as a loss-making business with high-growth turnover ambitions. Everspin certainly started out as a VC-funded operation, raising $80.3 million across five funding events, with a five-phase B-round closing in 2015, culminating in a successful IPO in 2016 under president and CEO Phil LoPresti. It has operated independently since then.

LoPresti resigned in September 2017, having, as his LinkedIn profile says, “created vision, mission, strategy and business plan to address a $2B market opportunity with a new product roadmap. Identified staffing requirements and expanded staff to deliver and support $100M+ revenue.” Despite these ambitious goals, six years later, the company is tracking to a $60 million annual run rate.

Board member Kevin Conley succeeded LoPresti as CEO in September 2017 but stepped down in January 2021. Darin Billerbeck, ex-CEO of Lattice Semiconductor and Zilog, joined Everspin’s board in 2018. He became Everspin’s executive chairman in March 2019 then interim CEO when Conley quit following reduced earnings. Sanjeev Aggarwal, previously VP R&D and CTO, became president and CEO in March 2022, with Billerbeck retaining his exec chairman role.

Commenting on the latest results, Aggarwal said: “Everspin [managed] through the supply chain constraints for our Toggle business and delivering a strong performance on our Radiation Hard programs. We continue to exceed expectations on our Radiation Hard programs to deliver STT-MRAM based solutions for a ‘high-density memory array’ and a ‘distributed configuration memory’ for instant-on FPGAs with multiple time programmability. Healthy backlog in our industrial and automotive markets indicate strength in our core businesses.”

Everspin revenue

A look at Everspin’s revenue history shows a rise after the first 2021 quarter to a peak in the fourth quarter of that fiscal year, caused by a patent license agreement. But then the wind went out of its sales, so to speak, and it has been generating around $15 million per quarter since then.

Everspin revenue

Charting its revenues by fiscal year (above), we can see it passed through a memory market downturn in fiscal 2019, grew somewhat patchily through 2020 and 2021, and is now poised to continue its relatively slow growth.

A cynic would say MRAM is not an emerging technology at all. It is a niche market centered on FPGAs.

Everspin expects next quarter’s revenues to be $15.9 million +/- $500K, and the company warned: “This outlook is dependent on Everspin’s current expectations, which may be impacted by, among other things, evolving external conditions, such as the resurgence of COVID-19 and its variants, local safety guidelines, worsening impacts due to supply chain constraints or interruptions, including due to the military conflict in Ukraine and recent market volatility, semiconductor downturn and the other risk factors.”

Nutanix’s looming profitability helped by Cisco deal, Broadcom-VMware concerns, and Nvidia support

Nutanix logo

Interview. Hyper-converged infrastructure supplier Nutanix has good prospects with a newly minted Cisco partnership, a current Nvidia relationship, and looming profitability.

Update: Nutanix confirmed it doesn’t support GPUDirect but is considering future support. 8 Sept 2023.

The firm’s software-defined infrastructure virtualizes on-prem servers and their storage and network connections, as well as running in the public clouds. It creates a hybrid and multi-cloud data platform on which to run applications, and is the main alternative to VMware and its vSAN and VMware Cloud Foundation offerings.

Broadcom’s pending acquisition of VMware has raised doubts about VMware’s future development strategy and general situation. Nutanix is poised to capitalize on such customer concerns, has set up a new route to market with Cisco, and has a strong relationship with Nvidia that’s relevant to customers looking to develop their AI/ML capabilities. These three factors, combined with improvements in operating efficiency and direction, are bringing the company profitability and maturity.

We asked Nutanix CEO Rajiv Ramswami some questions about these topics, and edited the overall conversation for readability.

Rajiv Ramswami.

Blocks & Files: I wondered whether you thought Nutanix and Dell were both capitalizing or benefiting from doubts over Broadcom’s acquisition of VMware?

Rajiv Ramswami: I would say it’s still early days for us. We’ve certainly seen Dell change their positioning from how they used to lead with VxRail before. Not anymore. Now, it’s much more PowerFlex. And we’ve seen that certainly happen over the last year.

Now, I think, we certainly are seeing a lot of interest from customers. There’s no doubt about that. And we’ve seen some deals starting to close as well – and probably some large ones; you know, a seven-figure ACV deal with a Fortune 500 company this last quarter. But what remains to be seen here is how many of these engagements actually result in a significant transaction for us [versus] using us as leverage to just try to extract more from VMware when it comes to a price negotiation.

Also, many of these customers have signed multi-year deals with VMware to protect themselves prior to the Broadcom deal closing. I think long term this is definitely going to be in our favor. We will see more opportunities as a result of this.

Blocks & Files: How do you see Nutanix competing with external storage vendors? My suspicion is that customers decide whether they’re going to use hyperconverged infrastructure, or not, and then acquire separate compute, separate networking, and separate storage. And you come in after they’ve made that decision. Is that right?

Rajiv Ramswami: So I would agree on the first part of what you said, but not on the second part. The customers have those two choices. They can stick with traditional three tier systems – separate compute, storage, and network – or they can go with hyperconverged, but we’re a big factor in helping them influence that decision. 

It’s not like they make that decision up front and then just say, OK, I decided to go HCI and then we’ll look at Nutanix. We are an integral part of saying: look at the benefits of one versus the other. In fact, part of our selling motion is: “Hey, we can do this better than three tier. And here’s why: we can produce a better total cost of ownership, we can get comparable, if not better, performance than many of these storage arrays, and we can also provide a platform for hybrid cloud.”

So we go into that motion as actually a core selling motion. We influence a customer’s choice of whether they go with a traditional array, or they try and come to HCI.

Blocks & Files: Do subscription deals like HPE’s GreenLake change this?

Rajiv Ramswami: We have actually had deals through HPE and GreenLake, where we are part of that solution as well with hyperconverged. Now, GreenLake to me doesn’t change the dynamic of HCI [versus] separate management, compute, storage, and network. It’s just putting a subscription overlay on top of that. It doesn’t really fundamentally change the dynamic of whether you go three tier or whether you go hyperconverged. In either scenario, you can put an overlay on top of it to consume it as a subscription, pay as you go, monthly or annually.

Blocks & Files: Would you be able to position Cisco and Nutanix versus Cisco’s HyperFlex offering?

Rajiv Ramswami: We’re the market leader when it comes to hyperconverged. And Cisco has tried with their own solutions for quite a while, many years, and the market share data clearly indicates that we are by far the leader.

The one thing about Cisco, and I spent many years of my life there, is that they understand what could have been in the market, and they want to be a market leader. They don’t want to be a market follower. 

So, I think that they made the right decision by saying, look, if we do this, and partner with Nutanix, we can make a lot more in the market, with our customers. And it’s the right thing for the customers because they are really likely to choose Nutanix. They’re winning, so why not? Let’s make that easier. Let’s offer that as a solution. And that’s what drove this relationship. 

It’s good for the customer, because now they get to buy a complete solution from Cisco.

Cisco is perfectly complementary to us. They don’t have their own storage arrays and stuff like that, right? … You can take your best in class from us and best in class from them, put it together and really get a winning solution in the market. So it makes sense for the customer, it makes sense for Cisco, it makes sense for us.

Blocks & Files: Could you give your view on Nvidia GPUs in Nutanix’s GPT-in-a-Box offering?

Rajiv Ramswami: GPT-in-a-box runs on top of our standard qualified hardware platforms, which are servers with GPUs – Nvidia GPUs. As part of this we are virtualizing the GPUs. We are making them accessible. We also support the full immediate GPU asset. And, after getting certified as a partner for Nvidia, it’s very much an integral part of the offering.

Blocks & Files: When do you see Nutanix becoming profitable in a GAAP sense? In the next 12 months?

Rajiv Ramswami: We are certainly profitable on a non-GAAP basis. And we’re generating good free cash flow, ten times more free cash flow this year. The next milestone for us clearly is GAAP profitability. And if you look at the primary difference for us between non-GAAP and GAAP, it is stock-based compensation. And we have been working over the last several years to bring down stock-based compensation as a function of revenue.

We ask you to hold that question till our next Investor Day. We have one coming up next month. And that’s when we plan to give our investors longer-term views of what the outlook looks like, including GAAP profitability.

Comment

Nutanix is in a favorable market situation. The Cisco partnership should bring in a substantial amount of extra business. Broadcom-VMware worries should also increase Nutanix sales over the next year or so. Its Nvidia partnership should help it ride the AI interest wave and pick up deals from that.

We predict Nutanix will be profitable in the next quarter or two.

Bootnote: Nvidia GPUDirect support

Nutanix supports Nvidia’s vGPU (virtual GPU) software, which virtualizes a GPU, creating virtual GPUs that can be shared across multiple virtual machines (VMs) and accessed by any device, anywhere. The vGPU software enables multiple VMs to have simultaneous, direct access to a single physical GPU, using the same Nvidia graphics drivers that are deployed on non-virtualized operating systems.

Nvidia’s GPUDirect enables network adapters and storage drives to directly read and write to/from GPU memory, without passing through a server host’s CPU and memory as is the case with traditional storage I/O. This speeds data transfer between a server’s storage (DAS or external SAN/filer) and a GPU’s memory.

GPUDirect component technologies – GPUDirect Storage, GPUDirect Remote Direct Memory Access (RDMA), GPUDirect Peer to Peer (P2P), and GPUDirect Video – are accessed through a set of APIs. The GPUDirect Storage facility enables a direct data path between local or remote storage – such as NVMe or NVMe over Fabrics (NVMe-oF) – and GPU memory. It avoids extra, time-consuming copies made by the host CPU into a so-called bounce buffer in the CPU’s memory, letting a direct memory access (DMA) engine near the NIC or storage move data on a direct path into or out of GPU memory.
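
As a concrete illustration of that storage-to-GPU path, here is a minimal sketch using kvikio, RAPIDS’ Python binding to the cuFile/GPUDirect Storage API (the file path is hypothetical, and on systems without GPUDirect support kvikio falls back to a CPU-mediated compatibility path):

```python
import cupy
import kvikio

# Allocate a 16MB buffer in GPU memory and read file data straight into it.
# With GPUDirect Storage available, the transfer bypasses the host CPU's
# bounce buffer and DMAs from NVMe into GPU memory.
buf = cupy.empty(16 * 1024 * 1024, dtype=cupy.uint8)
f = kvikio.CuFile("/data/sample.bin", "r")   # hypothetical file path
future = f.pread(buf)                        # asynchronous read into GPU memory
nbytes = future.get()                        # block until the read completes
f.close()
print(f"read {nbytes} bytes into GPU memory")
```

The same call works either way; the win with GPUDirect is that the data never has to be staged in host memory on its way to the GPU.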

VMware’s recently announced Private AI Foundation with Nvidia includes VMware’s vSAN Express Storage Architecture, which will provide NVMe storage and supports GPUDirect storage over RDMA, allowing for direct – and so faster – I/O transfer from storage to GPUs without CPU involvement. Nvidia GPUDirect partners with systems in production are DDN, Dell EMC, HPE, Hitachi, IBM, Kioxia, Liqid, Micron, NetApp, Samsung, ScaleFlux, SuperMicro, VAST Data and Weka. Pure Storage is not listed and neither is Nutanix.

We think that a GPUDirect support capability will be announced by Nutanix in the future. A company spokesperson said: “Nutanix is currently evaluating future support for Nvidia’s GPUDirect storage access protocol.”

In conversation with new Panasas CEO Ken Claffey

HPC parallel file storage supplier Panasas has hired Ken Claffey as its new CEO, replacing Tom Shea, who has returned to his COO role.

Panasas was founded in 2000 and has raised around $155 million in funding. It provides PanFS software, with NFS and SMB/CIFS support, and Supermicro-built ActiveStor scale-out hardware with a mix of all-flash, hybrid flash+disk and disk storage capacity. The recent GPT-led surge in AI interest is expanding its market.

Ken Claffey

Claffey said: “I am honored to join Panasas at this exciting juncture. The company’s unique core parallel file system technology is central to enabling the secular growth of HPC and AI. These areas are driving the increasing need for the performance, scalability, and ease of use that is uniquely inherent to the PanFS software platform.

“I am committed to capitalizing on this momentum by spearheading a strategic agenda that improves software portability and enhances performance and ecosystem integration, all of which will fuel revenue growth. I am confident that Panasas will emerge as a preeminent leader in the HPC and AI storage realm.”

Claffey joins Panasas from Seagate where he was SVP and GM of its enterprise business, which encompassed its Exos RAID array and Corvault JBOD products. He was involved with the high-performance computing (HPC) business in the past, having spent seven years at HPC storage supplier Xyratex, finishing as SVP and GM when it was acquired by Seagate at the end of 2013.

Shea was promoted to Panasas president and CEO in September 2020 after being COO for six years, and now returns to that position, providing continuity for Claffey. Brian Peterson was hired as COO in November 2020 but he left the company at the end of 2022, we’re told.

Panasas competes with DDN on hardware and software; with IBM’s Storage Scale, Weka, and BeeGFS in the parallel file system area; and with VAST Data in the HPC file storage market.

HPC customers tend to be sticky and Panasas has a customer base including Airbus, Harvard Medical School, NASA, Northrop Grumman, and many universities. In the last three to five years, Weka and VAST have emerged as fast-growing HPC and file software suppliers, giving Panasas more competition than before.

On top of that, in the scale-out filer area, Dell has rejuvenated Isilon with PowerScale, Qumulo is a fast-growing business, and NetApp continues its market strength.

Q&A

With that background, we asked Claffey some questions about his appointment. His answers have been lightly edited for brevity.

Blocks and Files: Why join Panasas and return to HPC?

Ken Claffey: I think it’s a great community, great market, and being on the cutting edge of technology is really exciting. And then, when I was looking for an opportunity to be CEO, I wanted to make sure that I joined a company that was the right opportunity for me, but also where I was the right person to make a really positive impact on where the company was at that time.

Why this company? Look, first principles thinking: doing a parallel file system is really hard. The world of HPC, and indeed AI, needs fast storage, ergo needs a parallel file system. There are not many companies that have done it and are doing it well… We can argue there are three to four companies that have a true parallel file system today. And Panasas is one of them.

Why wasn’t it even more successful than it’s been? It was tied to a legacy hardware architecture that really underserved and did not fully exploit the value of the underlying software technology. When you look at the work the team has done over the last few years, they’ve moved off that proprietary hardware to a software-defined architecture based on standard Linux that uses the cutting-edge Btrfs file system, as well as made some major enhancements, like adding erasure coding into a parallel file system.

So you put all that together, and, again, first principles, talking markets, their technology is really good. It’s robust, it works, right? This is a great opportunity for me to come in and help scale this business.

Blocks and Files: Why is Tom Shea returning to the COO role?

Ken Claffey: From my conversations with Tom, I really wanted him to continue. Yes, he’s got a lot of tribal knowledge… going back maybe 15 years. Frankly, having gone through a lot of leadership changes myself through my career, I see mistakes that happen. The new person comes in and gets rid of all the old team, and all that tribal knowledge walks out the door. That was a mistake that I was very anxious not to make. And, frankly, Tom, having been in the CEO role, and having had good conversations with me about where I see the company going, it was like, ‘hey, I can be a partner, I want to help, and I’m very committed to this company’. And I [thought], hey, you’ve got a lot of knowledge, and I would love to learn from that knowledge and make sure I don’t make any… newbie mistakes. And that’s literally the conversation that we’ve had.

Blocks & Files: Are you coming into a company where the employees have a strong sense of commitment to it?

Ken Claffey: When you walk in anywhere and take over any team, whether it’s a kid’s baseball team or soccer team, like a coach, what you look for is commitment from the players, right? Are people engaged? Are people motivated? Do people care? And that came across loud and clear. As the new CEO, if that’s not there, it’s very difficult. [But] if that’s there, then well, what a great advantage that is.

Blocks and Files:  Since 2020, Qumulo has grown and grown, Dell has rejuvenated Isilon with PowerScale, and VAST Data and WekaIO have sprung upon the scene as well. How do you see Panasas going forward against that background?

Ken Claffey: The first thing I’ve got to say is, why are all these companies investing in this space? So back to first principles, it must be a really good space to be in. Yes, all these new companies are coming along, but the reality is, Panasas has been here, has been executing and building its capabilities for much longer. So that was the attraction piece. And then, you know, you understand and respect those competitors tremendously, but also recognize that there are some inherent [Panasas] strengths and capabilities that have been built on. And, frankly, we were held back by being tied to that legacy architecture.

Now that we too have taken [our] history of robustness, scalability, but moved to a software-defined architecture, then that creates some really interesting options for us from a technology roadmap and even business model perspective.

Blocks and Files: Will you be doing more to link with the public cloud and porting PanFS to operate on the three main public clouds?

Ken Claffey:  If you look at us, we are a software company, right? It may sometimes not have felt that way to people looking on the outside. Today we deliver our software value to our customers through an appliance. This model will certainly continue to be used to do that. But you mentioned many competitors in this space. I think when you look at many of the competitors, they have an omni-channel model that includes public-private hybrid architectures. That’s certainly something that will be part of our vision.

Blocks and Files: Panasas doesn’t yet support Nvidia’s GPUDirect. Is that likely to be something on your roadmap?

Ken Claffey: That is likely to be something on our roadmap. When you look at the attractiveness and the need for our parallel file system, whether it’s a traditional x86, your supercluster, your supercomputer, or you’ve got an Nvidia SuperPod, you need a parallel file system. And … we may just have the best one for that application in particular. 

Blocks and Files: What’s the appeal of Panasas’s parallel file system compared to the other suppliers?

Ken Claffey: I do think if there’s one thing that we all know, from our history of supercomputing, there is a very good reason why you use parallel file systems when you get to scale them… a true parallel file system that is robust and enterprise class like PanFS… If you’re spending huge amounts of money on a super cluster, a SuperPod, you must have the fast [speed] but you want something that’s robust and reliable and easy.

You want to spend time building your inference models. You do not want to spend time administering your storage, or worrying about your storage… When you look at the actual architecture of PanFS versus other parallel file systems, you’ll see that robustness was designed in from the ground up. And that is extremely attractive to enterprise-class customers. 

Why are customers so passionate about trying PanFS and buying Panasas? It’s because of that ease of use; you just set it and forget it. It just works. And it’s super, super reliable.

Blocks and Files: Suppose I say that parallel file system software is very, very sticky. It’s like a backup application: customers really don’t want to change it once they’ve embraced it. But that means your ability to convert other people to your parallel file system, and away from Storage Scale, say, is very limited, because they will not want to move. So, to grow your market, does that mean you have to find greenfield customers? Or greenfield HPC applications? Or greenfield application locations such as edge sites? How is Panasas going to grow?

Ken Claffey: I’m not sure I agree with you. I respectfully agree and disagree. So one of the things that I’ve seen, even in my short time here, is the growing customer list. So this company has been growing for the last couple of years. That was awesome. One of the things that was attractive for me was, when you look at the customers that are converting from other parallel file systems… if it’s not working, and the customer is not happy. It’s not very sticky, right?

What we’re seeing already is customers converting from other parallel file systems… It appears that Panasas’s reputation for being easy to use, and reliable at scale, is maybe not something that’s matched by some of the other alternatives out there.

Blocks and Files: Would you want to see Panasas have close relationships with some processor, hardware and system vendors to differentiate yourself from being just another piece of software that runs on x86 servers?

Ken Claffey: I think there’s a tremendous amount of untapped opportunity across the entire ecosystem of devices. When you’re a software company, everyone thinks it’s all about software. When you’re a hardware company, it’s all about hardware. And the reality is for customers, no, irrespective of your deployment model, it’s about all the above and having worked together, no matter what level of virtualization you put in, understanding the underlying fundamental devices and their architecture and our capabilities is really important. And I think there’s a lot of opportunity there… I think we can have the balance of being software defined, but still take advantage of some of the inherent capabilities that are there at a device level.

Nasuni working with Presidio to manage AWS costs

Cloud file data services supplier Nasuni is extending its Presidio partnership to include AWS public cloud control through a managed service offering.

Nasuni says it’s optimizing AWS cloud use and reducing opex with Presidio’s Proactive Recapture into Savings Management (PRISM) program. It’s said to use an AI/ML engine to save customers from overpaying for cloud instances. Presidio – a US-based consulting, service, and IT systems supplier – says it will “assume all of the overpayment risk of buying, selling, operationalizing and day-to-day managing of RIs (Reserved Instances).” 

David Grant, Nasuni President, said: “Combining Presidio’s global reach, comprehensive technology portfolio, and digital transformation expertise with the Nasuni File Data Platform in AWS enables customers to accelerate their cloud journey at any stage.” The journey is away from legacy file infrastructures, meaning on-premises filers.

Presidio’s PRISM surveys a customer’s AWS RI usage and presents a savings model based on its data. If the customer then signs a PRISM deal, Presidio buys AWS services upfront on the customer’s behalf, freeing up cash and capital costs. Presidio is a premier consulting partner within the Amazon Partner Network, which allows customers to make purchases through the AWS Marketplace and benefit from the AWS Enterprise Discount Program (EDP).
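
As a rough illustration of the arithmetic behind such a savings model – all rates invented for illustration, since AWS pricing varies by instance type, region, and commitment term:

```python
# Toy Reserved Instance break-even model with invented example rates.
ON_DEMAND_RATE = 0.192     # $/hour, assumed on-demand price
RI_RATE = 0.120            # $/hour, assumed 1-year RI effective price
HOURS_PER_YEAR = 8760

on_demand_annual = ON_DEMAND_RATE * HOURS_PER_YEAR
ri_annual = RI_RATE * HOURS_PER_YEAR
print(f"on-demand ${on_demand_annual:,.0f}/yr vs RI ${ri_annual:,.0f}/yr: "
      f"saving {1 - ri_annual / on_demand_annual:.0%}")

# The commitment only pays off if the instance runs enough hours.
break_even = ri_annual / ON_DEMAND_RATE / HOURS_PER_YEAR
print(f"break-even utilization: {break_even:.0%}")   # ~63% in this example
# Below that utilization the RI loses money versus on-demand - the
# overpayment risk Presidio says PRISM assumes on the customer's behalf.
```

Managing that utilization risk across a fleet of instances is the part PRISM’s AI/ML engine is pitched at.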

This is similar in one way to NetApp’s BlueXP CloudOps service, which, with its Spot-based service, dynamically scales cloud instances up or down as applications require, to avoid customers paying for unneeded instance resources. By enlarging the scope of its Presidio partnership, Nasuni has a competitive answer to part of NetApp’s BlueXP offering.

Nasuni has signed a multi-year business agreement with Presidio to simplify how companies store, protect, and manage file data in hybrid cloud environments. It says that with Presidio’s managed services taking care of operational management of Nasuni’s file data cloud environment, the Nasuni team is saving time and able to focus on enhancing its product and developing new features.

Raphael Meyerowitz, Engineering VP, Technology Solutions and Strategic Partnerships at Presidio, said: “Through our partnership, our customers can significantly improve file management and access while reducing infrastructure and gaining an important strategic advantage by being able to better tap into data.”

Presidio will be speaking at Nasuni’s virtual annual conference, Nasuni CloudBound23, on November 1-2, 2023.