
Cool it: Optane needs a DRAM fix, but legal restrictions may have hindered partner progress

A claim that Optane cells can’t be immediately read because they need to cool after being written has been rebutted by Intel, but the company declined to comment on suggestions that contractual restrictions hindered Optane’s adoption by third parties.

The suggestion that cells need time to cool before they can be read emerged after we talked to sources who were testing early 3D XPoint technology, and who had in turn spoken with both Micron and Intel Optane product managers and engineers.

Optane is Intel’s 3D XPoint technology based on phase-change memory cells. The chips were manufactured by Micron in its Lehi fab, until Micron pulled the plug and walked away from 3D XPoint sales, marketing and manufacturing.

Intel graphic.

The Intel technology pitch around Optane was that it is cheaper than DRAM but not much slower, while being costlier than NAND but much faster. However, it has struggled to find its niche in the market, and a new reason has been suggested for that: the phase-change memory cell inside it needs to be heated during the write process and has to cool down afterwards. That would mean content, once written, can’t be immediately read.

Therefore, Intel had to add DRAM cache to an Optane DIMM/SSD to fix the issue, raising its cost and complexity and making its price/performance ratio increasingly problematic. 

Intel, via a spokesperson, said “The assertion that Optane 3D XPoint memory needs to cool down after content has been written is incorrect. Optane memory can be read immediately after it has been written, and this does not drive any special DRAM caching circuitry.”

The reason for the DRAM caching was due to the DRAM-Optane speed difference, according to Intel. “The access latency of Optane 3D XPoint memory is inherently longer than DRAM and to mitigate this impact, Optane is utilised in ‘Memory Mode’. In this mode, a smaller set of DRAM DIMMs are configured as a cache for the larger Optane memory space. The net performance, at a solution level, is 90–100 per cent of the performance of DRAM-only implementation.”
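Conceptually, Memory Mode behaves like any small, fast cache placed in front of a larger, slower tier: reads that hit the DRAM cache are served at DRAM speed, while misses fall through to the larger Optane space and then populate the cache. Below is a toy Python sketch of that behaviour – purely illustrative, with made-up sizes and names, not Intel’s implementation.

```python
# Toy model of "Memory Mode": a small, fast cache (DRAM) in front of a
# larger, slower memory space (Optane). Purely illustrative, not Intel's code.
from collections import OrderedDict

class TwoLevelMemory:
    def __init__(self, dram_lines=4):
        self.optane = {}              # large, slower backing store
        self.dram = OrderedDict()     # small LRU cache standing in for DRAM
        self.dram_lines = dram_lines
        self.hits = self.misses = 0

    def write(self, addr, value):
        self.optane[addr] = value     # data always lands in the large tier
        self._cache(addr, value)

    def read(self, addr):
        if addr in self.dram:         # cache hit: served at "DRAM speed"
            self.hits += 1
            self.dram.move_to_end(addr)
            return self.dram[addr]
        self.misses += 1              # cache miss: slower Optane access, then cached
        value = self.optane[addr]
        self._cache(addr, value)
        return value

    def _cache(self, addr, value):
        self.dram[addr] = value
        self.dram.move_to_end(addr)
        if len(self.dram) > self.dram_lines:
            self.dram.popitem(last=False)   # evict the least recently used line

mem = TwoLevelMemory()
for addr in range(8):
    mem.write(addr, f"data-{addr}")
for addr in [7, 6, 0, 7]:
    mem.read(addr)
print(mem.hits, mem.misses)           # 3 hits, 1 miss for this access pattern
```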

Legal restrictions

We also heard that lawyers protecting Intel and Micron commercial interests put obstacles in the way of third parties needing to talk to their engineers and product people when wanting to use 3D XPoint technology in their own systems. One person told us his people could talk to Micron engineers about 3D XPoint but couldn’t tell Intel they were talking to Micron. They could also talk to Intel XPoint engineers but couldn’t tell Micron about that.

That meant that (a) they couldn’t talk to the Intel and Micron engineers in one room at the same time, and (b) they received mixed messages. For example, Intel product managers said XPoint development was progressing well while Micron engineers said it was meeting lots of problems. Our source wondered if Intel’s XPoint commercial management people knew of this, and whether messages were going upwards as they should.

This made them seriously doubt XPoint’s prospects. If this description of the situation is accurate it represents, in B&F‘s view, product development lunacy. The engineers/product managers were in a product development race but had to try to run while using crutches.

The industry consultant told us “Micron never liked 3DXP. It doesn’t solve any Micron issues, it is very complex to bring a new memory to market and Intel, Micron, Numonyx had been working on it for 20 years.

“Intel liked it for system reasons. Intel needs to get large amounts of memory, preferably non-volatile, to systems. This helps keep the storage component from slowing down compute. Two-level memory has been a plan from Intel for 10+ years. Intel can solve a lot of system problems with this.

“As a result of the above, Intel pushed Micron into doing 3DXP and come up with a Micron can’t lose finance model. Basically, Intel would pay for cost plus large overhead for bits shipped.”

“Micron wanted to convert the factory to DRAM/3D NAND … Intel said ‘No’, wanted to keep it for possible ramp up to support needs. Intel did a weird loan to IMFT to pay for upgrades and capital.

“Micron annexed the fab as allowed by agreement with the idea they could sell bits to Intel at a high price and still be able to sell SSDs. Intel’s ramp plans failed and they didn’t need the bits so they cancelled orders. Micron found the fast SSD business is not big enough. At this point, the two companies’ relationship was falling apart and they each blamed the other for financial issues.

“Intel had data embargoes, refused to let companies have samples, made them work at Intel sites, etc.”

He said “At one point, there were no Micron engineers working on 3DXP at all in development. Intel was supplying all the work at the Micron plant.”

The Intel spokesperson told us: “We aren’t going to comment on rumour or speculation.”

Micron’s XPoint SSD-only restriction

Intel and Micron had, we’re told, a contractual agreement that Intel would sell both Optane SSDs and DIMMs but Micron could only sell SSDs. As we know, with hindsight, Micron never took advantage of this to properly push its own QuantX brand XPoint SSDs into the market.

The industry consultant said “Intel only strategically wanted DIMMs. The market for SSD was and is limited. DIMMs potentially could be a multi-billion dollar business. They had some agreements on where to focus and to allow Intel to drive the DIMM market. This limited Micron’s ability to see any large market. Again, the fast SSD market is very small and takes a lot of work. That’s why Micron gave up.

“The only ‘big win’ for Micron would be when the DIMM market took off, and Micron became second source and no one else could source it. Intel’s plans were that 50 per cent of the server boards had 3D XPoint. This never got above two per cent … and Micron couldn’t sell any SSDs, and Intel didn’t want any bits from Micron, so they took a huge underload charge on their part of the fab. At that point Micron realised there was no success ever for the market or fab.”

The spokesperson said: “We are not providing additional details about specific business agreements.”

Optane and 3D XPoint’s market success

This background conflict/stand-off between Intel and Micron, together with the need to add DRAM to Optane drives (possibly to overcome a write cooling problem), slowed Optane’s market penetration to the point where, today, its niche in the memory-storage hierarchy has shrunk to a very small size. Within that niche it is still important, but the question is if the niche is big enough to sustain a commercial Optane manufacturing and development operation.

We’d love to know if Intel offered the Optane business to SK hynix, along with its SSD operation, and if SK hynix declined. 

An industry consultant we contacted, who wishes to remain anonymous, said “I have heard rumours that Intel offered it to SK hynix. The issue is that Intel wants to be all-controlling, so they said ‘you take it, we have all control, we have [right of first refusal] on all bits, you can’t license it… etc.’ 

“Hynix refused. Classic Intel: they think they have invented the greatest thing ever and try to control everyone. The end result is a lack of partnership, which kills them.”

In his view, “I have said the best option by far is for Intel to sell Optane to a Chinese company. Tsinghua, or another one who needs memory – let them throw capital at it. You can structure it so that the US government has no say in it. The issue is that Intel can’t control them and it will kill sales since China has political issues now.

“The end result is that the technology is a niche (big niche). Much slower than DRAM, much more expensive than NAND. It is not a game changer. Intel says it’s like when SSDs came out but the fact is that SSDs didn’t become ubiquitous until Samsung, Toshiba, etc sold them to everyone.”

Once more the Intel spokesperson said: “We aren’t going to comment on rumour or speculation.”

In the future we’ll no doubt get to read “spill the beans” stories about XPoint and Optane development. I for one can’t wait.

Ondat backs Trousseau secrets manager for Kubernetes as open source project goes live

Kubernetes stateful app platform supplier Ondat is helping to protect sensitive data in containerised environments with the open source Trousseau product safeguarding the keys needed to access the data.

There is no standard way in the Kubernetes world to protect access to sensitive (secret) data, with the result that many have composed their own ways. With enterprises using Kubernetes to run more and more stateful applications, safeguarding sensitive data is becoming more important.

The project lead for Trousseau is Ondat principal cloud architect Romuald Vandepoel, who said in a statement: “There have been previous projects that attempted to solve this problem, but they required adding lots of components. Naturally, security teams didn’t like that approach because it introduced additional complexity making security more difficult. Secrets management has always been one of the most difficult issues in Kubernetes and Trousseau Vault integration provides the long-sought answer to that problem.”

Ondat diagram

Kubernetes uses etcd to store API object definitions and state, and Trousseau works with that store. Kubernetes secrets are shipped into the etcd key-value database using an in-flight envelope encryption scheme, with a remote transit key saved in a key management system (KMS).

With Trousseau and its native Kubernetes integration, secrets can be protected and encrypted through a connected KMS – whether they hold database credentials, a configuration file, or a Transport Layer Security (TLS) certificate containing critical information – while remaining accessible to applications through the standard Kubernetes API primitives.

Any user or workload can leverage the native Kubernetes way of storing and accessing secrets safely by plugging into any KMS provider – such as Hashicorp Vault (Community and Enterprise editions) – via the Kubernetes KMS provider framework. Users can move between Kubernetes platforms using the consistent Kubernetes API.
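To illustrate the envelope encryption pattern described above – a per-secret data encryption key (DEK) protects the payload, while a key encryption key (KEK) held in the remote KMS wraps the DEK, so only wrapped material lands in etcd – here is a minimal Python sketch using the cryptography library. It is conceptual only: the function names are ours and it does not call Trousseau’s or Vault’s actual APIs.

```python
# Conceptual sketch of envelope encryption (not Trousseau's or Vault's actual code).
# A fresh data encryption key (DEK) encrypts each secret; a key encryption key (KEK),
# which in a real deployment never leaves the KMS, wraps the DEK.
from cryptography.fernet import Fernet

kek = Fernet(Fernet.generate_key())         # stands in for the remote KMS transit key

def seal(secret: bytes) -> tuple[bytes, bytes]:
    """Encrypt a secret with a fresh DEK, then wrap the DEK with the KEK."""
    dek_key = Fernet.generate_key()
    ciphertext = Fernet(dek_key).encrypt(secret)
    wrapped_dek = kek.encrypt(dek_key)      # in practice, a call to the KMS
    return ciphertext, wrapped_dek          # both are safe to persist in etcd

def unseal(ciphertext: bytes, wrapped_dek: bytes) -> bytes:
    """Unwrap the DEK via the KEK, then decrypt the secret."""
    dek_key = kek.decrypt(wrapped_dek)      # in practice, a call to the KMS
    return Fernet(dek_key).decrypt(ciphertext)

sealed, wrapped = seal(b"db-password=hunter2")
assert unseal(sealed, wrapped) == b"db-password=hunter2"
```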

Trousseau is currently being rolled out in a production customer implementation on Suse Rancher Kubernetes Engine 2, leveraging Ondat as the data management platform, along with Hashicorp Vault. 

For more information, read How to keep a secret secret within Kubernetes, and join the Data on Kubernetes Meetup Unravel the Key to Kubernetes Secrets workshop on February 16.

The project is maintained by Trousseau-io.

NetApp beefs up its Astra Control plane for Kubernetes apps

NetApp has added a host of features to Astra Control – its control plane for managing K8S apps – supporting more distributions and cloud block stores, adding Operator support, and improving data protection.

Astra Control is an app-aware control plane that protects, recovers, and moves data-rich Kubernetes workloads in both public clouds and on-premises. It provides Kubernetes workload data protection, disaster recovery and migration using NetApp ONTAP array snapshot, backup, replication, and cloning technology.

There are two versions:

  • Astra Control Service – a fully managed service operated by NetApp;
  • Astra Control Center – a customer-managed software suite with the same rich functionality.  

Update details

Extended cloud storage – Astra Control now supports Azure Disk and Google Persistent Disk cloud block stores in addition to Azure NetApp Files and Cloud Volumes Service in Azure and Google Cloud. You can protect, recover, and move existing and new applications backed by Azure Disks and Google Persistent Disks, as long as they are accessed using the respective CSI drivers with support for snapshots and cloning functionality.

Support for these two block storage providers also enables complete application-data management for your applications that use a combination of file and block storage, like Azure NetApp Files/Cloud Volumes Service and Azure Disks/Google Persistent Disks.

Data management Operators – Users can protect and move apps that are deployed and managed using Operators, in addition to Helm and Labels. Astra Control automatically discovers Operator-deployed applications, with their custom resources and the associated controllers. You can then do application-aware snapshots, backups, restores, and clones, as you can with any other application.

Extended K8S support – The Rancher Kubernetes Engine (RKE) and community Kubernetes platforms are supported, in addition to OpenShift Container Platform (OCP). You can protect, recover, and move your K8s apps running on RKE and community Kubernetes with NetApp ONTAP as the storage provider.  

OpenShift – The Astra Control Center Operator is now certified with Red Hat’s OpenShift. It is jointly supported by Red Hat and NetApp, and is monitored and updated automatically to reduce interoperability failure and security risks.

Execution hooks feature – Astra Control provides automatic freeze/thaw logic for popular apps like PostgreSQL, MariaDB, MySQL, and Jenkins, which it discovers and protects automatically. It supports crash-consistent snapshots for all other applications. Execution hooks enable Astra Control to provide custom freeze/thaw logic for in-house developed apps before and after taking snapshots to ensure application consistency. Astra Control can take app-consistent snapshots and backups of any application by using this execution hooks feature.

Restore in place – Astra Control can restore K8s apps in place from an existing snapshot or a backup within the original namespace within the same cluster where the application resides. This means you can recover from service disruption scenarios like accidental or malicious data corruption or deletion, a failed application upgrade, and other similar issues. It allows you to replace your existing app and associated data with a previous instance of the same app and its associated data that you can select from one or more previously recorded application-aware snapshots or backups. 

NetApp says that a preview release of its Astra Data Store (ADS) – a Kubernetes-native, shared file, unified data store for containers and VMs – is now available. ADS was announced in October last year.

This video walks you through the end-to-end procedure to deploy Astra Data Store preview in your Kubernetes cluster.

Free trials are available for both Astra Control Service and Astra Control Center.

Azure offers free inward migration with Data Dynamics and Komprise

Microsoft’s Azure public cloud is providing free inward data migration courtesy of deals with Data Dynamics and Komprise.

An Azure Storage Blog by Karl Rautenstrauch, Microsoft principal program manager for Storage Partners, says these deals with “unstructured data management partners … help you migrate your file data to Azure Storage at no cost!” It’s recommended for use by customers with 50TB or more of data to migrate. Users with less data can use tools such as AzCopy, rsync, or Azure Storage Explorer.

He adds: “We intend this new program to help our customers and partners migrate from on-premises and non-Azure deployments of Windows or Linux File Servers, Network Attached Storage, and S3-compliant object stores to Azure Blob Storage, Azure Files, or Azure NetApp Files.”

This program “is a complement to the Azure Migrate portfolio which many Azure customers have used to automate and orchestrate the migration of servers, desktops, databases, web applications, and more to Azure.”

Customers who take up the program “will be given an onboarding session to learn how to use the software and will receive access to the support knowledgebase and email support for the chosen ISV and up to two support phone calls. We have also co-authored ‘Getting Started Guides’ and our ISVs have created ‘How-To’ videos to help you quickly begin your migration,” writes Rautenstrauch. The program “does not include professional services to help you configure the software beyond the onboarding session, Getting Started Guides and How-To videos.”

Data Dynamics uses its StorageX product while Komprise supplies its Elastic Data Migration (EDM) offering. EDM was launched in March 2020 and takes NFS/SMB/CIFS file data and moves it across a network to a target NAS system, or via S3 to object storage systems or the public cloud.

B&F diagram.

Komprise says EDM eliminates cost and complexity in managing file data by providing analytics-driven data migration to Azure:

  • It provides analytics across existing NAS (e.g. NetApp, Dell, Windows) to identify which data sets to migrate and to which tier of Azure;
  • Systematically migrates files and scales elastically according to the distribution of the shares, directories and files;
  • Ensures data integrity by migrating all file attributes and permissions with full MD5 checksums on every file.
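To illustrate that last point – verifying every migrated file with a checksum – here is a conceptual Python sketch of checksum-verified copying. It is not Komprise’s code; the paths and function names are purely illustrative.

```python
# Conceptual sketch of checksum-verified file migration (not Komprise's code).
# Paths and function names are illustrative only.
import hashlib
import shutil
from pathlib import Path

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through MD5 so large files never need to fit in memory."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def migrate_file(src: Path, dst: Path) -> None:
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)                  # copy2 also preserves timestamps/permissions
    if md5sum(src) != md5sum(dst):          # verify integrity after the copy
        raise IOError(f"checksum mismatch migrating {src} -> {dst}")

migrate_file(Path("/mnt/source_nas/report.csv"), Path("/mnt/azure_target/report.csv"))
```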

Customers can upgrade to the full product, Komprise Intelligent Data Management, which means they can transparently tier across Azure Storage platforms, cutting up to 70 per cent of cloud costs.

Find out more about the Azure and Komprise migration deal here. Let’s see if Amazon Web Services and the Google Cloud Platform follow in Azure’s footsteps.

Storage news ticker – February 7


Computational storage startup Eideticom has been assigned two patents: 10,996,892 for “Apparatus and Method for Controlling Data Acceleration” and 11,231,868 for “System and Method for Performing Computational Storage Utilizing a Hardware Accelerator.”

Lee Caswell.

Lee Caswell (Mr. HCI at VMware) has joined Nutanix as SVP for Product Marketing. He resigned from VMware as its VP for Marketing in December 2021 to begin a “new adventure.” He reports to ex-VMware guy Rajiv Ramaswami, Nutanix’ CEO, who left VMware in December 2020.

The latest Toshiba re-organisation plan envisages a two-way split. Japan Semiconductor and Toshiba Electronic Devices and Storage would be spun-off into one entity. A second Toshiba entity would own Toshiba’s 40.6 per cent stake in NAND flash and SSD maker Kioxia, which is destined for an IPO. If the plan gains board approval it will go to a June 23 AGM for shareholder approval.

UK-based replicator WANdisco has adopted a four-day working week with the weekend now consisting of Friday, Saturday and Sunday. According to the Financial Times story, CEO and co-founder David Richards said that staff productivity had become higher during the COVID-19 pandemic, as a result of them working from home. Staff can choose an alternative weekday off and their salaries will not be affected by the move.

Tier-2 public cloud object storage provider Wasabi announced a partnership with Vultr, a cloud infrastructure vendor, to deliver an infrastructure-as-a-service (IaaS) offering claimed to be easier to use and cheaper than hyperscale cloud providers’ services. Customers can use Vultr compute with Wasabi storage to run web apps, big data analytics, media processing and other data-intensive workloads with highly predictable and simplified pricing, meaning no hidden fees. Customers can transfer data at no cost between their Wasabi storage and Vultr virtual machines and bare metal servers.

Should Western Digital acquire Kioxia? Objective Analysis consultant Jim Handy sees downsides in his latest blog. WD, if it acquired Kioxia, would then have to bear the cost of excess NAND chip production from the Kioxia-WD joint venture-owned foundries, which would affect its profitability.

100 per cent of VAST Data‘s customers have a positive perception of VAST. This is shown by a Gartner Peer Insights report and is said to be the first time this has happened to a file and object storage supplier in such a report.

WEKA and Cisco funding contribution: Now you see it, now you don’t


On January 4, WEKA announced Cisco was among contributors to its $73 million funding round. Now Cisco’s name is off the list – for the latest round, at least.

The original release read: “WEKA, the data platform for AI, today announced that Hitachi Ventures led its recent round raising $73 million in funding, which brings the total amount raised to $140 million. Other investors participating in this round were strategic investors, including Hewlett Packard Enterprise, Nvidia, Micron, and Cisco, and financial investors including MoreTech Ventures, Ibex Investors, and Key 1 Capital.”

Spot the difference with a corrected version issued on February 5: “WEKA, the data platform for AI, today announced that Hitachi Ventures led its recent round raising $73 million in funding, which brings the total amount raised to $140 million. Other investors participating in this round were strategic investors, including Hewlett Packard Enterprise, Nvidia, Micron, and Digital Alpha, and financial investors including MoreTech Ventures, Ibex Investors, and Key 1 Capital.”

Enter Digital Alpha as a strategic investor, which now contributes a quote: “As Digital Infrastructure investors, we see Enterprise AI as a highly attractive segment,” says Rick Shrotri, managing partner at Digital Alpha, “and we are delighted to be investors in WEKA’s market-leading AI data platform.” 

What is Digital Alpha?

It describes itself on its website as a premier alternative asset manager focused on digital infrastructure and lists Cisco as a key partner.

Digital Alpha website strategy graphic.

Rick Shrotri is the founder and managing partner at Digital Alpha, and says on LinkedIn that he is the “leader of a private investment fund targeting digital infrastructure worldwide, with proprietary access to deal flow and a knowledge base backed by Cisco Systems, Inc. and other corporate partners.”

He founded Digital Alpha in Feb 2017. Before that he spent almost ten years at Cisco, finishing up as managing director and global head of business acceleration. In that role he was said to have “created and led a global initiative to bring third party equity capital to invest in attractive opportunities from Cisco’s most coveted customers.”

In a WeLink press release Digital Alpha Advisors describes itself as an investment firm focused on digital infrastructure and services required by the digital economy, with a strategic collaboration agreement with Cisco Systems. As part of this agreement, Digital Alpha has preferred access to Cisco’s “pipeline of commercial opportunities requiring equity financing.” This is not mentioned on the revised WEKA release.

In April last year, Digital Alpha announced the closing of Digital Alpha Fund II, over its initial hard cap with over $1 billion in commitments. The fund was oversubscribed.

Digital Alpha is, effectively, a Cisco investment business. That’s why it is classed as a strategic investor, not just as another contributing VC.

Fractal Index Tree

Fractal Index Tree – a data structure used for indexing in databases, which combines elements of traditional tree-based indexing methods like B-trees with concepts from fractal geometry to enhance performance, especially in large-scale, multidimensional databases. Here’s how it generally works:

Key Characteristics:

  1. Fractal Nature:
    • The term “fractal” here refers to the self-similar structure of the tree. Each node in the tree can be seen as a smaller version of the entire tree, allowing for local decisions that mimic global behaviors. This self-similarity helps in scaling the structure efficiently across different levels of data granularity.
  2. Adaptive Indexing:
    • Unlike traditional static trees where the structure is fixed once data is inserted, fractal index trees adapt dynamically. They can reorganize themselves based on data distribution, query patterns, and updates, leading to better performance over time.
  3. Search Efficiency:
    • The tree structure is designed to reduce the average path length to find data, which is particularly useful in handling large datasets or high-dimensional data where traditional methods might perform poorly. The structure might involve:
      • Variable node sizes: Nodes can have different capacities based on the data they hold or the level in the tree they are at.
      • Multiple paths: Unlike a strict B-tree where each node has one parent, fractal trees might allow for multiple paths or connections between levels for quicker access.
  4. Cache Efficiency:
    • Fractal trees are designed with modern hardware in mind, particularly cache hierarchies. They are structured to minimize cache misses by keeping frequently accessed data closer to the root or in more cache-friendly patterns.
  5. Concurrency and Transactions:
    • These trees often include mechanisms to handle concurrent accesses better than traditional trees. They might use techniques like copy-on-write for updates, which allows for more efficient locking strategies or even lock-free operations in some scenarios.

Practical Applications:

  • Databases: Particularly in scenarios where data is frequently updated, or where queries span multiple dimensions or very large datasets. Systems like TokuDB (now part of Percona Server for MySQL) have implemented fractal tree indexing.
  • Big Data: For managing indexes in big data platforms where scalability, speed, and the ability to handle vast amounts of data are crucial.
  • Geospatial Indexing: Where the spatial nature of data can be better managed through fractal-like structures, enhancing query performance across spatial dimensions.

Limitations:

  • Complexity: The adaptive nature and complex structure mean that implementing and maintaining fractal index trees can be more challenging than simpler tree structures.
  • Overhead: There can be additional overhead in terms of memory usage and processing for managing the tree’s dynamic nature.

In essence, a fractal index tree employs principles of fractals to create a more adaptive, efficient, and scalable indexing mechanism for database queries, particularly in environments where data size, dimensionality, and update frequency challenge traditional indexing approaches.
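As a rough illustration of the batched, cache-friendly update behaviour described above – the approach popularised by TokuDB-style fractal tree indexes, where writes are buffered at inner nodes and flushed to children in batches instead of walking the whole tree on every insert – here is a toy Python sketch. The class and constants are ours, and it omits node splitting, balancing and persistence.

```python
# Toy sketch of the buffered-insert idea behind fractal tree indexes.
# Inserts land in an inner node's buffer; when the buffer fills, the pending
# messages are flushed to the right children in one batch, amortising I/O.
BUFFER_SIZE = 4

class Node:
    def __init__(self, pivots=None, children=None):
        self.pivots = pivots or []      # routing keys (internal nodes only)
        self.children = children or []  # empty for leaf nodes
        self.buffer = []                # pending (key, value) messages
        self.records = []               # stored (key, value) pairs (leaves only)

    def is_leaf(self):
        return not self.children

def insert(node, key, value):
    if node.is_leaf():
        node.records.append((key, value))
        node.records.sort()
        return
    node.buffer.append((key, value))    # cheap: no full tree walk on the hot path
    if len(node.buffer) >= BUFFER_SIZE:
        flush(node)

def flush(node):
    """Push all buffered messages down one level in a single batch."""
    for key, value in node.buffer:
        idx = next((i for i, pivot in enumerate(node.pivots) if key < pivot),
                   len(node.children) - 1)
        insert(node.children[idx], key, value)
    node.buffer = []

# Two leaves under one root, split at pivot key 50.
root = Node(pivots=[50], children=[Node(), Node()])
for k in [7, 63, 12, 88, 3, 51]:
    insert(root, k, f"value-{k}")
```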

vLLM

vLLM – virtual Large Language Model (LLM). The vLLM technology was developed at UC Berkeley as “an open source library for fast LLM inference and serving” and is now an open source project. According to Red Hat, it “is an inference server that speeds up the output of generative AI applications by making better use of the GPU memory.”

Red Hat says: “Essentially, vLLM works as a set of instructions that encourage the KV (KeyValue) cache to create shortcuts by continuously ‘batching’ user responses.” The KV cache is a “short-term memory of an LLM [which] shrinks and grows during throughput.”
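A minimal usage sketch, based on the project’s published offline-inference quickstart (the model name and prompts are illustrative): vLLM loads a model once, then batches and serves generation requests from Python.

```python
# Minimal vLLM offline-inference sketch; the model name and prompts are illustrative.
from vllm import LLM, SamplingParams

prompts = [
    "Explain the KV cache in one sentence:",
    "What does an inference server do?",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# vLLM loads the model once and batches the prompts, managing GPU memory for the KV cache.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```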

REST

REST – Representational State Transfer. A REST API (Representational State Transfer Application Programming Interface) is an application programming interface (API) in the form of a web service that follows REST principles to enable communication between clients and servers over the internet. REST APIs allow applications to request and exchange data using standard HTTP methods.

REST API features

  1. Stateless – Each request from a client to the server must contain all necessary information, and the server does not store client state.
  2. Resource-Based – Resources (e.g., users, products, orders) are identified by URLs (Uniform Resource Locators).
  3. Standard HTTP Methods:
    • GET – Retrieve data from the server.
    • POST – Create new resources.
    • PUT – Update existing resources.
    • DELETE – Remove resources.

REST APIs typically exchange data in JSON (JavaScript Object Notation) or XML (Extensible Markup Language), with JSON being the most common. They also have a uniform interface and follow consistent patterns in API design, making it easy for developers to use.
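As a simple illustration of those methods in practice – the base URL and resource names are hypothetical – here is how a client might issue a GET and a POST with Python’s requests library, exchanging JSON.

```python
# Simple REST client sketch using the requests library; the API URL is hypothetical.
import requests

BASE = "https://api.example.com"

# GET: retrieve a resource identified by its URL.
resp = requests.get(f"{BASE}/users/42")
resp.raise_for_status()
user = resp.json()                    # JSON body decoded into a Python dict
print(user.get("name"))

# POST: create a new resource; the payload is sent as JSON.
resp = requests.post(f"{BASE}/orders", json={"user_id": 42, "item": "ssd", "qty": 2})
resp.raise_for_status()
print(resp.status_code, resp.json())  # typically 201 Created plus the new resource
```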

Optane’s 2020 half billion dollar operating loss

Intel Optane

Intel made a greater than half-billion dollar loss on its Optane 3D XPoint business in 2020.

Update. Intel’s Optane head leaves and $500 million revenue number claim added; 9 Feb 2022.

Some gory financial details are laid bare in Intel’s annual 10K SEC filing for 2021, which states in the Non-Volatile Solutions Group section (page 33): “Revenue decreased $1.1 billion, driven by $712 million lower ASPs due to market softness and pricing pressure and $392 million due to the transfer of the Intel Optane memory business to DCG.”

It also says “Operating income also benefited from the transfer of the Intel Optane memory business from 2021 NSG results (a loss of $576 million in 2020).”

Q3 2021 Optane numbers

We were able to show this chart in November last year, based on Intel’s 10-Q SEC filing for the third 2021 quarter, and showing some Optane revenues and operating income numbers to Q3 in Intel’s fiscal 2021:

The revenues for Q3 2021 – $188 million – are considerably higher (up 118.6 per cent) than the Q3 2020 revenues of $86 million.

With the recent Q4 2021 10K statement we can add Q4 revenue and full-year 2020 operating income numbers, and back-calculate others (in italics in the table):

It’s still a partial picture but we can see that Intel made a $576 million operating loss on Optane products in its 2020 year. The nine-month number for 2021, $271 million, was 9.1 per cent lower than the nine-month number for 2020, $298 million, and the Q4 2021 revenue number of $121 million was sequentially down on Q3’s $188 million.

Did Intel’s Optane business make an operating loss in 2021? We don’t know, but suspect it did because the 2020 operating losses ran at 134.9 per cent (Q3) to 158.7 per cent (Q1+Q2+Q3) of revenues. Applying the lower ratio to the $392 million full 2021 year Optane revenues ($392 million × 1.349) we would expect a roughly $529 million operating loss in 2021.

A final thought: if Intel gains CPU performance superiority over AMD it won’t need to use Optane as a loss-leading way for Xeon servers to get a performance advantage over AMD servers. Then it could get rid of a substantial chunk of operating losses by exiting the XPoint business.

Update

CRN revealed that Alper Ilkbahar, VP and GM for Intel’s Data Centre Memory and Storage Solutions, who runs its Optane business, is resigning for personal reasons. David Tuhy takes over as Optane Group GM. A memo from Sandra Rivera, EVP and GM of Intel’s Data Centre and AI group, seen by CRN said: “Optane revenues have grown to $500 million in fewer than 3 years.”

This $500 million does not accord with Intel’s 10-Q filing figure of $392 million. Intel wouldn’t comment on the Rivera memo’s $500 million number.

ACID

ACID – an acronym referring to Atomicity, Consistency, Isolation and Durability as applied to database transactions. ACID transactions guarantee that each read, write, or modification of a database table has the following properties:

  • Atomicity – each read, write, update or delete statement in a transaction is treated as a single unit and either the entire statement is executed, or none of it is. This prevents data loss and corruption from occurring if a transaction fails midway.
  • Consistency – transactions only make changes to tables in predefined, predictable ways so that data corruption or errors don’t create unintended consequences for the integrity of your table.
  • Isolation – when multiple users are reading and writing from the same table all at once, isolation of their transactions ensures that the concurrent transactions don’t interfere with or affect one another. Each request can occur as though they were occurring one by one, even though they’re actually occurring simultaneously.
  • Durability – ensures that changes to your data made by successfully executed transactions will be saved, even in the event of system failure.

In summary, an ACID-compliant database transaction is any operation that is treated as a single unit of work, which either completes fully or does not complete at all, and leaves the storage system in a consistent state. ACID properties ensure that a set of database operations (grouped together in a transaction) leave the database in a valid state even in the event of unexpected errors. A common example of a single transaction is the withdrawal of money from an ATM. Find out more here.
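To make the ATM example concrete, here is a minimal Python sketch using sqlite3: the two balance updates either both commit or both roll back (atomicity), and the CHECK constraint keeps balances valid (consistency). The table and column names are illustrative.

```python
# Minimal sketch of an atomic, ATM-style transfer using sqlite3.
# Both UPDATEs commit together or roll back together (atomicity);
# the CHECK constraint keeps balances valid (consistency).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
             "balance INTEGER CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])
conn.commit()

try:
    with conn:  # opens a transaction: commits on success, rolls back on any exception
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE id = 2")
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 1")  # fails
except sqlite3.IntegrityError:
    print("transfer rejected - no partial update was applied")

# The failed transaction left both balances exactly as they were.
print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# -> [(1, 100), (2, 0)]
```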

Extreme inbetweeners: The rise of long-life no-IPO, no-acquisition storage startups

Analysis: As mainstream storage suppliers have adopted and acquired HCI, NVMeoF, and object storage technologies and others, a set of startups have found themselves in a well-funded and long-life state in which they appear to have no short-term need of an IPO or acquisition – contravening conventional financial mores.

They appear in no danger of failing and are examples of companies defying typical startup wisdom that you need to IPO or get acquired within six years or so after receiving the first venture capital investment.

According to Statista, in 2020, VC-backed companies went public approximately 5.3 years after securing their first VC investment.

Let’s look at three possible outcomes for a startup:

  1. IPO or positive acquisition – for example Snowflake IPO and Cleversafe acquisition.
  2. Crash, burn or distress acquisition – eg Pivot3.
  3. In-between state – for example Panasas and many others.
Blocks & Files chart

Above, we have devised a startup progress chart with these three possible outcomes plotted.

Chart explainer

The chart puts startups’ progress in a time versus valuation 2D space. The coloured bars are venture capital (VC) funding rounds and we show, as an example, A-, B- and C-rounds with the height of the bar representing total funding so far. The dashed line represents the total VC funds invested in the startup.

The solid line above this represents the company’s VC-calculated valuation at each round. The red curve represents a theoretical startup’s progress as its trading value – what an acquirer would pay – grows over time, surpasses the total invested amount and then rises further, past the theoretical valuation, putting the company in IPO territory as it approaches and passes that valuation. It has a positive trading value gap in our chart’s terminology.

At this point the VCs would get a good return on their investment if the company had an IPO or was acquired.

The blue line shows an under-performing startup whose trading value never rises above the total amount invested and eventually drops below it. There is an extreme negative value gap between the two, and the VCs exit the company because it crashes – Coho Data and Tintri for example – or gets acquired as a distress purchase before bankruptcy, with Pivot3 as our cited example.

The middle, dotted, medium trading value line represents companies which don’t attain a trading value above their theoretical valuation but neither does their trading value drop below the total invested, giving the VCs hope that they will be able to achieve a good exit, eventually. We say these inbetweener companies have a negative value gap.

VC theoretical valuations 

Niraj Tolia

How are VC valuations worked out? Kasten co-founder Niraj Tolia said “There is the theory that valuations should be based on discounted cash flows in the future but usually the reality is a lot more imprecise.”

Sometimes, he said, sane valuations for later stage companies will be based on “forward multiples” – say, a multiple of the next 12 months’ projected revenue. That is harder to do for an earlier stage company, where investors will look at TAM and other market comps, founder histories, potential for exit and more.

“And, finally, there are some valuations that are simply hard to justify from the outside looking in. Might be a hot deal that got competitive and all metrics got thrown out of the window (like last year in a LOT of deals). Sometimes it’s what a founder can command based on a large vision. 

“Basically, more art than science.” 

Inbetweeners

Such companies can’t raise any additional funding, and they either motor along in good medium-trading-value shape with adequate profitability, limp along with low profitability unless and until their VC backers want to exit via a closure or firesale, or simply burn through their remaining cash. Perhaps a good example of this negative trading value state is Pivot3, a hyper-converged startup. It was started up in 2003 and took in a lot of money – $247 million across 12 funding events. Seventeen years after being founded it was bought by Quantum for $8.9 million, meaning a loss of $238 million for its backers.

What happened? The mainstream vendors built or acquired their HCI products and Pivot3 was not acquired. It then found that its effective addressable market shrank because the big beasts – Cisco (Springpath), Dell EMC (VxRAIL, vSAN), HPE (Nimble dHCI, Simplivity), NetApp (Elements HCI) and Nutanix – were roaming around and mopping up customers. Pivot3’s growth tailed off, faltered and stopped.

The COVID pandemic didn’t help and Pivot3’s board decided to get out of the HCI business.

Inbetweener status can be attained by any startup, whatever its funding amount. The tension between the aims of the VC backers (and most probably part-owners) – meaning a good exit – and the trading value the company’s executive leadership can deliver is greatest in the well-funded inbetweeners. There is simply more VC money at risk.

Identifying inbetweeners

How might we identify long-life, well-funded inbetweener storage companies? Try this: we’ll look for VC-backed, $100 million-plus funded, post-startup suppliers, meaning more than five years since being funded, who have not significantly pivoted in the last few years. Here’s a starting list of $100 million-plus funded, “inbetweener” suppliers:

  • Databricks – $3.6 billion
  • Fivetran – $730 million
  • Cohesity – $600 million (IPO filed)
  • Rubrik – $552 million plus
  • OwnBackup – $507 million
  • Druva – $475 million
  • Dremio – $410 million
  • Qumulo – $351 million
  • Redis Labs – $347 million
  • Infinidat – $325 million
  • Silk – $313 million
  • Pensando – $313 million
  • Fungible – $311 million
  • Wasabi – $284 million
  • Firebolt – $269 million
  • SingleStore – $264 million
  • VAST Data – $263 million
  • Yellowbrick Data – $248 million
  • Clumio – $186 million
  • Cloudian – $173 million
  • Scality – $172 million
  • Nasuni – $167 million
  • Spin Memory – $166 million
  • Virtana – $165 million
  • Panasas – $155 Million
  • Liqid – $150 million
  • WEKA – $140 million
  • Egnyte – $138 million
  • MinIO – $126 million
  • Delphix – $124 million
  • Kyligence – $118 million
  • Datameer – $117 million
  • Pliops – $115 million
  • Pavilion Data Systems – $107 million
  • ExaGrid – $107 million
  • Scale Computing – $104 million
  • Elastic Search – $104 million
  • GoodData – $101 million
  • CTERA – $100 million

Now we’ll separate out those that are seven years old or older, classifying them as startups still:

  • Databricks – $3.6 billion and in 9th year
  • Fivetran – $730 million and in 10th year
  • Cohesity – $600 million and in 9th year (IPO filed)
  • Rubrik – $552 million+ and in 8th year
  • OwnBackup – $507 million and in 10th year
  • Druva – $475 million and in 14th year
  • Qumulo – $351 million and in 10th year
  • Redis Labs – $347 million and in 11th year
  • Infinidat – $325 million and in 12th year
  • Silk – $313 million and in 14th year (including Kaminario period)
  • SingleStore – $264 million and in 11th year
  • Yellowbrick Data – $248 million and in eighth year
  • Cloudian – $173 million and in 11th year
  • Scality – $172 million and in 13th year
  • Nasuni – $167 million and in 14th year
  • Spin Memory – $166 million and in 15th year
  • Virtana – $165 million and in 14th year (including Virtual Instruments period)
  • Panasas – $155 million and in 22nd year
  • WEKA – $140 million and in 9th year
  • Egnyte – $138 million and in 15th year
  • MinIO – $126 million and in 8th year
  • Delphix – $124 million and in 14th year
  • Nantero – $120 million and in 10th year
  • Datameer – $117 million and in 13th year
  • Pavilion Data Systems – $107 million and in 8th year
  • ExaGrid – $107 million and in 15th year
  • Scale Computing – $104 million and in 15th year
  • Elastic Search – $104 million and in 10th year
  • GoodData – $101 million and in 15th year
  • CTERA – $100 million and in 14th year

This gives 30 inbetweener companies. Let’s find the longest-lived and best-funded ones by putting them in age order, adding in the time since the last funding event, and striking out companies less than ten years old, or with two years or less since their last round.

Extreme inbetweeners

We now have nine extreme “inbetweener” companies: ten years or more old, with $100 million or more in funding, ten years or more since their first VC round, and three or more years since their last funding round. If asked, all would say they are growing and in good shape. None of them are closing offices or laying off staff. They exhibit no signs of financial stress as far as we can see. Indeed some are visibly and publicly growing, such as Egnyte and ExaGrid.

The current storage market supports – indeed seems to welcome – these players, as they continue to develop and grow their businesses. Long may it continue, and let’s hope that their VC backers find some way to crystallise their investments without the companies getting into dire straits. There needs to be, we might say, some way for the holdings of the relatively short-termist VC investors to be turned into the holdings of more benevolent, longer-term investors.

Perhaps a private equity acquisition taken with a long-term view – not a slash-and-burn reorganising one – is one possible good outcome.