Who knew cloud backup was so hot? Storage conglomerate Kaseya is buying Datto out of the hands of private equity for a phenomenal $6.2 billion.
Datto IPO’d in October 2020 with private equity business Vista Equity Partners holding 69 percent of the shares. Datto signalled in March that it was looking for a takeover or another private equity buyer, and a month later it succumbed to Kaseya’s advance.
Fred Voccola, Kaseya’s CEO, said in a statement: “Kaseya is known for our outstanding track record of retaining the brands and cultures of the companies we acquire and supercharging product quality. We couldn’t be more excited about what lies before us – Kaseya and Datto will be better together to serve our customers.”
The all-cash transaction will be funded by an equity consortium led by Insight Partners, with significant investment from TPG and Temasek, and participation from other investors including Sixth Street. Under the terms of the agreement, Datto stockholders will receive $35.50 per share in a transaction that values Datto at approximately $6.2 billion. The offer represents a 52 per cent premium to Datto’s stock price of $23.37 on March 16, 2022.
Datto’s current stock price is $34.70, reflecting the Kaseya bid, and its market capitalisation is $5.7 billion.
Kaseya, which was hit by a ransomware attack last year, is a portfolio business selling infrastructure management, data protection and security products to SMBs through MSPs. It includes the Unitrends backup business and the Spanning in-cloud backup offering, both of which promise integration opportunities with Datto.
Datto CEO Tim Weller said: “I’m encouraged by the continued investment in the rapidly-expanding global MSP community, and this transaction is another important validation of the channel.”
The purchase is currently expected to conclude in the second half of 2022, subject to customary closing conditions.
Researchers at TrendFocus reckon that disk drive shipments declined sequentially across the board in Q1, with nearline the only category growing year-on-year, though declining slightly quarter-on-quarter, and all other categories seeing double-digit percentage quarter-on-quarter declines.
Total disk ships in the quarter ranged from 53.0 million to 55.2 million – 54.1 million at the mid-point – which would be down 16 per cent year-on-year.
There were an estimated 17.5 million to 18 million nearline HDDs shipped in the quarter – 17.75 million at the mid-point – which compares to 16.5 million a year ago, 18 million last quarter (Q4 calendar 2021), and 19.75 million in the quarter before that (Q3 2021). Nearline unit shipments have declined over the past two quarters, although the latest quarter is still higher than the year-ago 16.5 million.
Wells Fargo analyst Aaron Rakers told subscribers: “Assuming an ongoing increased average TB/drive (estimated 13.5TB/drive vs 13.1TB/drive in Q421), we would estimate that nearline capacity ship growth slowed to the mid/high-20 per cent year-over-year range vs +60 per cent year-over-year in Q421.” He estimates that nearline HDD revenue in the quarter was more than $3.5 billion, up over 12 per cent year-on-year.
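As a rough, back-of-the-envelope illustration of how the capacity figures fall out of those unit and TB/drive estimates (the mid-point numbers are estimates quoted above, not hard data):

```python
# Back-of-the-envelope check of nearline capacity shipped, using the
# mid-point unit estimates and average TB/drive figures quoted above.
units_q1_2022 = 17.75e6     # nearline drives shipped, Q1 2022 mid-point estimate
tb_per_drive_q1 = 13.5      # estimated average capacity per drive (TB)

units_q4_2021 = 18.0e6      # Q4 2021 nearline drives shipped
tb_per_drive_q4 = 13.1

eb_q1 = units_q1_2022 * tb_per_drive_q1 / 1e6   # 1EB = 1,000,000TB
eb_q4 = units_q4_2021 * tb_per_drive_q4 / 1e6

print(f"Q1 2022 nearline capacity shipped: ~{eb_q1:.0f} EB")   # ~240EB
print(f"Q4 2021 nearline capacity shipped: ~{eb_q4:.0f} EB")   # ~236EB
```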
The other HDD categories:
Mission-critical HDD shipments were less than 2.9 million, falling nearly 20 per cent quarter-on-quarter;
2.5-inch mobile and consumer electronics (CE) HDD ships were under 16 million, declining more than 15 per cent quarter-on-quarter;
3.5-inch mobile and CE HDD ships were about 18 million, down over 10 per cent quarter-on-quarter.
Mission-critical disk drives are being replaced by faster SSDs. Rakers points out there was a measurable decline in notebook demand during the quarter. People bought fewer notebooks, rather than buying the same number of notebooks as before but with SSDs fitted instead of disk drives. That sent the notebook HDD segment downwards. The desktop drive decline was attributed to seasonal weakness and worries about the economy.
TrendFocus also reckons Western Digital grew its HDD unit ship share by 200 basis points quarter-on-quarter – two percentage points – to about a 37 per cent share. That compares with Seagate staying flat at around 44 per cent and Toshiba declining around two points to about 19 per cent. Rakers thinks that WD’s share gain comes from increased nearline disk shipments.
We have charted TrendFocus disk shipments over the past few quarters:
Blocks & Files chart using TrendFocus numbers.
The TrendFocus numbers for a quarter are estimated and often get revised in subsequent quarters, so this chart is only a rough indication of what’s going on. Most nearline drives are bought by hyperscalers these days and their buying patterns can be lumpy. Having said that, it will be interesting to see if growth resumes over the next two or three quarters, or stays flat.
Intel will use existing gen 2 3D XPoint media in its third generation Optane SSDs and Persistent Memory products, and gen 4 Optane devices will likely be freed from Xeon dependency.
These points were revealed by Kristie Mann, VP and GM at Intel’s Datacenter and AI (DCAI) group, and other presenters at a Tech Field Day Session on April 8. Optane is Intel’s storage-class memory using 3D XPoint media – it is faster than NAND but slower than DRAM.
Gen 1 Optane SSDs and Persistent Memory DIMMs were introduced in 2018 with gen 2 devices (P5800X SSD and PMem 200 Series DIMMs) coming in 2020. These used gen 2 3D XPoint media with four layers of cells rather than gen 1’s two.
Mann showed an Optane roadmap slide and said: “Coming near the end of this year, we’re going to have our 300 series.”
Kristie Mann’s Optane roadmap slide
The roadmap links Optane Persistent Memory (PMem) and Xeon CPU generations, with the current PMem 200 series synced with Cooper Lake and Ice Lake gen 3 Xeon processors.
The future Optane PMem 300 series will be twinned with Sapphire Rapids Xeons, Intel’s fourth generation of its scalable processor design. Sapphire Rapids will support CXL v1.1, an interim specification before CXL 2.0, which introduces memory pooling – the ability to have pools of memory accessed remotely over the CXL bus.
Mann confirmed the PMem 300 series will use the same second-generation Optane media as the PMem 200 series: “It’s using second generation Optane media.”
That means Intel, as yet, has no public plans to develop third-generation 3D XPoint media with, for example, eight decks or layers, compared with the four decks of the second-generation XPoint media.
CXL 2.0
Some of an Optane device’s controller functions are carried out by a Xeon host CPU.
Mann said: “With Optane, we actually put part of the memory controller in the Xeon processor. So we’ve come out with all sorts of memory control optimizations right in the Xeon processor itself.”
That means Optane can only be used in servers with Xeon CPUs.
The coming third-generation Optane SSD, code-named Lake Stream by Intel, will be developed and arrive in the Sapphire Rapids era and overlap with Granite Rapids. The Optane SSDs are not closely tied to Xeon generations, whereas the Optane PMem products have been synchronised with Xeon CPU families. Mann said this could end: “Once we’re a fully compliant CXL persistent memory type of device, we may also have the flexibility to decouple from Xeon.”
The Granite Rapids iteration of the Xeon processor will support CXL 2.0, at least that’s Intel’s intention, and a future Optane PMem product – logically the PMem 400 Series – will support CXL 2.0 as well. Mann said: “And at that point, that’s when we want to have a CXL type of device for persistent memory. In that timeframe.”
In more detail she said: “So let’s put it this way: this CXL standard is still in development. So I would say we have very limited visibility into what types of devices and functionality are being built into all of the various … processors and controllers out there.
“Our approach right now, just like it has been with starting with a minimum viable product in the first generation and moving forward, is that we’re doing all of the design, the validation, the optimization around Xeon. And we’re not actively going out and trying to connect to other types of processors, but it is going to be a CXL-compliant device. And so depending on how everybody implements these things, there could be compatibility. Just we’re not designing for, or guaranteeing it at this point in time,” Mann said.
Three other points. First, Intel is going to make it possible to address smaller portions of Optane memory. Intel Fellow and Senior Software Engineer Andy Rudoff said: “The things you see listed here, like optimising the power utilisation generation over generation, or making finer granularity access to our media, that’s the cache line access size.”
Andy Rudoff’s summary slide
Secondly, CXL 2.0 should see performance increases: “Optane and CXL, I gotta say, are like made for each other … So the DRAM has the DDR bus all to itself. Optane has CXL and now we get concurrency, those two things operating at the same time that we didn’t have before. So we should see some great improved performance by splitting that apart.”
Lastly, when Optane is available on CXL 2.0, the existing Optane applications should still work, said Intel.
KVCache – Key-Value Cache is a mechanism used to store the activations (keys and values) produced by a generative AI large language model’s (LLM’s) attention layers during the inference phase. It allows LLMs to bypass recomputation of these activations, improving performance. The cache serves as a repository to “remember” previous information – the pre-computed key and value pairs – reducing the need to reprocess entire sequences repeatedly. The memory-based KVCache applies to the attention mechanism of a transformer model (LLM). This attention layer computes relationships between input tokens (words or subwords) using three items: queries (Q), keys (K), and values (V). During text generation, the model processes one token at a time, predicting the next token based on all previous ones. Without caching, it would need to recompute the keys and values for all prior tokens at every step, which is inefficient.
Keys (K) represent the “context” or features of each token in the sequence. Values (V) hold the actual content or information tied to those tokens.
By maintaining a KVCache, the model only computes K and V for the new token at each step, appending them to the cache, and then uses the full set of cached K-V pairs to compute attention scores with the current query (Q). This drastically reduces computational overhead.
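A minimal sketch of the idea in Python, using NumPy and toy dimensions rather than any real framework’s API – at each generation step only the new token’s key and value are computed and appended to the cache, and attention then runs over the whole cached history:

```python
import numpy as np

# Toy single-head attention with a KV cache. Sizes, weights and token
# embeddings are random placeholders, not a real model.
d_model = 8
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))

def attend(q, K, V):
    """Scaled dot-product attention: one query against all cached keys/values."""
    scores = q @ K.T / np.sqrt(d_model)     # (1, t) relevance of each past token
    weights = np.exp(scores - scores.max()) # softmax over the cached tokens
    weights /= weights.sum()
    return weights @ V                      # (1, d_model) context vector

K_cache = np.empty((0, d_model))            # grows by one row per generated token
V_cache = np.empty((0, d_model))

for step in range(5):                       # pretend to generate five tokens
    x = rng.standard_normal((1, d_model))   # embedding of the newest token only
    q, k, v = x @ Wq, x @ Wk, x @ Wv        # K and V computed once for this token...
    K_cache = np.vstack([K_cache, k])       # ...then appended to the cache
    V_cache = np.vstack([V_cache, v])
    context = attend(q, K_cache, V_cache)   # no recomputation of earlier K/V pairs
    print(f"step {step}: cache holds {len(K_cache)} K/V pairs")
```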
OEM – Opto-Electronic Memory. These are devices that use light – visible, ultra-violet or infra-red – signals and properties to store and retrieve data. They feature optical-to-electrical and electrical-to-optical transduction.
OEM more generally stands for Original Equipment Manufacturer – a company that manufactures and supplies IT systems and components that other suppliers incorporate into, or rebrand as, their own products. The Opto-Electronic Memory use of OEM is specific to the memory area.
CTERA is leaving its storage roots behind and becoming a cloud-based data services business selling to large enterprises.
Update. Nasuni comment added 12 April 2022.
That was the message presented to IT Press Tour attendees in Tel-Aviv by co-founder and CEO Liran Eshel and his team.
CTERA is a 14-year-old company, VC-funded, with its last round worth $30 million in 2018. It has raised $100 million in total and is getting close to being profitable and thus self-funding. CTERA was founded by Eshel and VP for R&D Zohar Kaufman after they ran an appliance company in the SMB network security area. They built a cloud file storage gateway appliance and software providing file sync and share, and security.
CTERA now presents itself as a Cloud Data Services supplier with a layer of services above its edge filers and edge-to-core-to-cloud global file system:
Programmable deployment is based on a DevOps SDK, which enables developers to automate system rollout. CTERA says an edge filer can be deployed in 90 seconds using the SDK. Discovery and migration supports customer file and object discovery across their IT estate. It can then migrate files to the cloud, using network transmission for SMB customers and Amazon Snowball for larger projects.
Eshel says CTERA thought about working with Datadobi on migration but realized it could do things better by taking advantage of its edge filers, which deduplicate and compress every block sent up to the cloud.
Liran Eshel
CTERA Insight provides information about files in CTERA’s filesystem and their content. No-code data pipelines can be set up to filter and extract data sets, then feed them to analytics applications in the cloud or to Lambda functions. Data is extracted from the compressed and deduplicated store and written to a new dataset for this purpose.
Cloud Analytics can extract data from CTERA’s immutable S3 buckets using filters, rehydrate it, and send it to analytics applications using an S3 data export microservice. CTERA also provides multi-layer virus scanning and malware protection combining on-access edge scanning with a cloud-based ICAP service for later detected threats. The ransomware protection involves continuous replication into immutable S3 buckets, zero-day detection, and instant recovery using rollback.
CTERA competition
CTERA sees itself mostly competing with NetApp, aiming to replace its filers. But it also competes with Nasuni and Panzura, fellow companies in the cloud file services and collaboration space.
Eshel thinks CTERA can scale much farther than Nasuni, with support for thousands of endpoints. He suggested Nasuni installations typically amounted to 30 or so endpoints and thinks CTERA has superior deployment and automation capabilities to Nasuni.
In general, we can assume it is easier for CTERA, or Nasuni and Panzura, to sell to customers who have on-premises filers and no connected public cloud file system or collaboration resources than to try and replace each other. Greenfield should be an easier sell than brownfield.
NetApp is the giant to beat for CTERA, as for Nasuni and Panzura. Nasuni has raised around $229 million, with a $60 million round earlier this year. Our thinking is that it is second in revenue behind CTERA in this distributed cloud file system services market, with Panzura in third place. (Update; see the Nasuni comment below.)
Veterans Affairs
In December 2021 CTERA won a place in Peraton’s Department of Veterans Affairs bid for a $497 million contract to provide infrastructure-as-a-managed-service (IaaMS) for storage and computing facilities across the US and globally. CTERA, which provides the only global filesystem included on the Department of Defense Information Network (DoDIN) Approved Products List (APL), will deliver file services for mission-critical workloads, connecting up to 300 distributed sites (hospitals) to the VA Enterprise Cloud powered by AWS GovCloud (US).
This involved 220PB of data, from business operations to medical imaging. It uses edge filers based on Cisco HyperFlex hyperconverged systems, with CTERA-supported 80TB Amazon Snowball boxes used to ship data up to the AWS GovCloud for S3 storage. This cloud-based storage replaced NetApp storage systems and HPE servers.
HPE and NetApp bid a system using 3PAR tier 1 and NetApp StorageGRID tier 2 storage through system integrator Thundercat. It failed to win, even after Thundercat protested the contract being awarded to Peraton.
Eshel thinks CTERA is taking share from NetApp because it is growing faster. As a supplier of caching, cloud-connected edge filers, CTERA says it is about much more than storage, with cloud-based data services being an essential part of its deployments. CTERA is a cloud-centric data services business whereas, in our view, NetApp is an on-premises storage supplier with a very good cloud connectivity story.
Eshel said he has no plans to raise more VC money. He wouldn’t reveal his run rate. The Veterans Affairs deal was a huge win for CTERA, and other wins – such as Iron Mountain (hundreds of endpoints worldwide) and the Thales-led London Underground digital modernization – suggest to us it has attained a run rate of multiple tens of millions of dollars a year. Data services is bringing in the dollars.
Nasuni
On seeing this article Nasuni told me:
On market and growth aspects, Nasuni can point to evidence in the public domain (such as number of LinkedIn employees) which would suggest that it is substantially larger in terms of revenue, employees and customers than CTERA
Nasuni is nearing a revenue milestone that will see it surpass CTERA and Panzura substantially
Nasuni’s last funding round added to the company’s cash balance of $100 million which suggests there has been little cash burn from operations over the last 2-3 years
On Nasuni scalability, Nasuni customers AECOM (the world’s biggest engineering and design firm) and Omnicom (the world’s largest advertising company) can confirm that Nasuni Edge instances are running in hundreds of sites that are all managed through Nasuni’s central management console
Regarding endpoints, CTERA appear to mean their desktop sync client running on home user desktops, rather than caching instances serving up files in branch or remote office locations – so they seem to be comparing apples with oranges somewhat there
Nasuni competes with NetApp in the enterprise 95 percent of the time and Panzura and CTERA only in SMB.
NetApp has bought nine companies in two years as Anthony Lye, EVP and GM of the Cloud Data Services Business Unit, builds up a cloud data operations facility.
The aim is to provide customers with the means of using various cloud resources optimally and cost-efficiently. They don’t have to get down and dirty in the weeds of evaluating, for example, which of 475 AWS compute instances to use, how to reserve instances, or optimize Spark.
None of this has much to do with NetApp’s traditional core focus on on-premises storage, and adjacent focus on providing its storage facilities in the AWS, Azure, and Google clouds.
The Cloud Business Unit was initially incubated and run by Jonathan Kissane, SVP and general manager. Lye came on board in March 2017 to make NetApp’s Cloud Business Unit profitable. When Lye was hired, Kissane became NetApp’s Chief Strategy Officer but left eight months later. Who was Lye and where did he come from?
Anthony Lye
It’s no Lye
Lye was a product marketing manager at Tivoli in the early ’90s and then became a senior director/ major accounts sales rep at Remedy Corp for four years from 1994. Then came a big move to president and CEO of SaaS company ePeople from March 1999.
In late 2005 he became Group VP and GM at Siebel Systems, which was acquired by Oracle, with Lye moving on to become SVP and GM of Customer Relationship Management. He managed a unit of 3,000 people and acquired 10 companies for more than $2.5 billion.
In 2012 he joined Publicis as Global President for Digital Platforms and Product, but spent just nine months at the French multinational advertising and public relations company. He moved to become Chief Product Officer at HotSchedules, a provider of workforce and inventory management services for the restaurant industry, and became its President and CEO, leaving in late 2016 to be EVP and Chief Cloud Officer at Guidewire Software.
Guidewire supplies an industry platform-as-a-service for property and casualty insurance carriers. Lye quit after six months, jumping ship to NetApp. With hindsight, it looks like he was asked to replicate the things he did for Siebel/Oracle at NetApp in the cloud area.
This is a seasoned SaaS business executive, with experience building SaaS business units and acquiring companies to help with that.
Buying time
In January 2018 NetApp’s Cloud BU became the Cloud Data Services BU and Lye set out on the acquisition trail.
The first was Talon Software, which provided its FAST software-defined storage, enabling global enterprises to centralize and consolidate IT storage infrastructure in the public clouds. NetApp said at the time that the combination of its Cloud Volumes technology and Talon FAST software meant enterprises could centralize data in the cloud while still maintaining a consistent branch office experience.
Lye said at the time: “As we grow our cloud data services offerings with solutions like Cloud Volumes ONTAP, Cloud Volumes Service, Azure NetApp Files and Cloud Insights, we are excited about the potential that lies in front of this new combined team to deliver complete solutions for primary workloads. We share the same vision as the team did at Talon – a unified footprint of unstructured data that all users access seamlessly, regardless of where in the world they are, as if all users and data were in the same physical location.”
This acquisition was a cloud product-as-a-service deal, not a cloud service operations deal. That came the next month with CloudJumper, a cloud VDI player and the first of eight such acquisitions.
NetApp has bought rapidly growing companies in what is still a fast-expanding market. These companies had great prospects and NetApp will have paid a good price for them, as the $450 million reported for Spot indicates – roughly 8.6x Spot’s $52.6 million funding total.
We have collated, in the table below, information on the nine acquired companies, the total funding (where it’s known), as well as the reported acquisition cost.
We must emphasize that this is highly speculative, but we applied what we felt was a more conservative 5x multiple to the total known funding of the acquired companies – $148.65 million – to arrive at a guess of $743.25 million for NetApp’s spending. There was no funding amount for four of the acquisitions, so the table assumes NetApp spent $10 million to buy each of them, giving a total potential cost of around $783 million.
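Spelled out, the guess works like this (the 5x multiple and the $10 million per undisclosed deal are our assumptions, not reported figures):

```python
# Our speculative estimate of NetApp's total spend, spelled out.
known_funding_total_m = 148.65   # $M, combined funding of the acquisitions with known rounds
assumed_multiple = 5             # conservative price-to-funding multiple (our assumption)
undisclosed_deals = 4            # acquisitions with no disclosed funding
assumed_price_each_m = 10        # $M assumed per undisclosed deal (our assumption)

estimate_m = known_funding_total_m * assumed_multiple + undisclosed_deals * assumed_price_each_m
print(f"Estimated total spend: ~${estimate_m:,.2f} million")   # ~$783.25 million
```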
Even if the number turns out to be smaller, a nine-firm buy is a huge bet by NetApp, and is notable because the acquisitions are in a completely new enterprise market that is quite a long way from its traditional and core storage on-premises market and growing adjacent public cloud storage market. Signalling this separation, Lye’s CloudOps software products were kept out of Chief Product Officer Harvinder Bhela’s control when he was hired by NetApp in December 2021.
Lye’s BU is doing well. In March we wrote NetApp’s “Cloud Operations (CloudOps) business looks so good for NetApp that the company has increased its overall revenue targets.”
These were the numbers behind that: “NetApp thinks its public cloud annual recurring revenue (ARR) will be $2 billion by its fiscal year 2026, up from $1 billion in 2025. NetApp should now reach its $1 billion public cloud target by 2024 – a year sooner than predicted.”
These numbers show why NetApp thinks it’s worthwhile to spend hundreds of millions in its rush to build a cloud data services business before anybody else. Public CloudOps is going to become, NetApp hopes, its second goldmine after storage.
Open-source ETLer Airbyte is joining forces with Grouparoo, which has built a reverse-ETL open source product already used by hundreds of users. Grouparoo will play an important role in moving data out of data warehouses into systems of action. It was always part of Airbyte’s plans to add reverse ETL and now this accelerates that timeline. A blog says more.
…
Data automation cloud startup Ascend.io announced a $31 million Series B funding round led by Tiger Global with participation from Shasta Ventures and existing investor Accel. The additional capital will be used by Ascend.io to scale go-to-market efforts and expansion into new geographies, as well as extend Ascend’s Data Automation Cloud to support full multi-cloud data mesh automation.
…
Rackspace announced a strategic partnership with data manager Cohesity to deliver multicloud managed backup and recovery solutions for Rackspace customers globally. Rackspace Technology will offer customers Rackspace Data Protection, a high-performance, software-defined Cohesity-Powered backup and recovery service that delivers cyber resilient managed backup and recovery across VMware-based clouds. Rackspace Data Protection includes backup and recovery for VMware workloads and options such as advisory services and ransomware anomaly detection and remediation services.
…
Delphix, the DevOps Test Data Management (TDM) supplier, has announced the appointments (promotions) of Tammi Warfield to Chief Customer Officer and Alex Hesterberg to Chief Strategy Officer. Tammi will lead onboarding, professional services, customer success, and support for Delphix worldwide with a focus on building and delivering customer services. Alex Hesterberg is to lead strategic partnerships, OEMs, channels, solutions and systems engineering teams supporting the company’s technology innovation, corporate development and go-to-market efforts.
…
Scality CEO Jérôme Lecat is leading a charity non-fungible token (NFT) sale with #TogetherUkr, an NFT collection created by renowned artists to help the people of Ukraine. Launching on Friday, 15th April on www.togetherukr.com, the initiative will be selling 25,000 NFTs to raise funds for the humanitarian actions led by non-profit organisation #EnsembleUkraine, in partnership with the Paris Blockchain Summit, and to help Ukrainian artists.
Parquet – A columnar data table format optimized for use with big data processing frameworks such as Apache Hadoop, Apache Spark, and others, and designed to allow complex data processing operations to be performed quickly. The main features:
Columnar Storage: Unlike row-based formats (like CSV), Parquet stores data column by column, so individual columns can be read and analysed on their own without scanning whole rows.
Compression: Compression techniques (like Snappy, Gzip, or LZO) reduce file size more effectively than row-based formats.
Data Encoding: It employs encoding schemes (e.g., dictionary encoding, run-length encoding) to further optimize storage and speed up queries by reducing data redundancy.
Metadata: Parquet files include embedded metadata that describes the schema, column statistics (like min/max values), and other details, enabling query engines to skip irrelevant data, improving performance.
Predicate Pushdown: Because of its metadata and columnar structure, Parquet supports predicate pushdown, where filters (e.g., “WHERE age > 30”) are applied at the storage level, reducing the amount of data scanned.
Compatibility: It’s widely supported across data processing ecosystems, making it a common choice for data lakes and large-scale analytics.
Parquet is meant for use with large datasets used in data warehouses and machine learning pipelines, not for small, simple datasets or data tables featuring frequent row-level updates. It’s optimized for batch processing rather than transactional workloads.
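A minimal sketch of how these features show up in practice, using pandas and PyArrow – the file name, columns and filter are illustrative only:

```python
import pandas as pd
import pyarrow.parquet as pq

df = pd.DataFrame({
    "name": ["Ana", "Ben", "Cara"],
    "age":  [34, 28, 41],
    "city": ["Lisbon", "Leeds", "Lyon"],
})

# Columnar storage with compression (Snappy here) and embedded metadata.
df.to_parquet("people.parquet", engine="pyarrow", compression="snappy")

# Column pruning: only the requested column is read from disk.
ages = pd.read_parquet("people.parquet", columns=["age"])

# Predicate pushdown: row groups whose min/max statistics rule out the
# filter can be skipped rather than scanned.
over_30 = pq.read_table("people.parquet", filters=[("age", ">", 30)]).to_pandas()
print(over_30)
```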
Infinidat will strengthen its products with access to a wider range of client protocols, as it sticks to its on-premises high-end storage focus – but not to the extent of adding a mainframe connector or porting its software to the public cloud.
Update. NVMe-oF support timescales added 13 April 2022.
The company presented its roadmap at an IT Press Tour event in Tel Aviv. A large element of its strategy is to add extra front-end protocol access methods to widen the pool of client systems that can use the InfiniBox array. SMB access is being developed for servers using that protocol. NFS version 4 will be added, and so too will S3, AWS’s object storage protocol.
Phil Bullinger
CEO Phil Bullinger said: “We now have a laser focus on selling the right product to the right customer.”
Bullinger became CEO in January 2021 and set about replenishing the executive team with a focus on execution rather than changing the world through product development, but retaining a strong focus on engineering.
The results have been dramatic – with increased growth as the disk-based array’s bullet-proof reliability and consistent high performance through memory caching have been extended to an all-flash product and a purpose-built backup appliance with cyber-resiliency. Highly appreciative customers have helped spread the word as well.
LTV = Lifetime value
The stronger growth means Infinidat is profitable and reinvesting in the business.
With the extended access protocol options, Infinidat will offer concurrent block, file, and object storage access via S3. Bullinger believes that performant object storage will become much more important – that object will become primary storage for some applications in the high-end enterprise storage space.
The block access area will be enhanced further by adding both iSCSI and NVMe-oF. NVMe-oF/RoCE and NVMe-oF/FC will be added to the existing NVMe/TCP. With NVMe access added inside the array, Infinidat will have end-to-end NVMe capability.
Timetable:
NVMe/TCP – now
NVMe-oF/RoCE – Q3/2023
NVMe-oF/FC – Q3/2023
Infinidat will look to add more AI and ML features to its AIOps management and to the ransomware attack detection area.
What’s not on the roadmap
Infinidat does not plan to port its InfiniBox OS to the public cloud, nor to provide mainframe connectivity. Replicating Infinidat’s memory caching would be an issue, as available public cloud instances don’t support the concept.
Mainframe connectivity was attempted when Moshe Yanai was CEO, but it foundered because of cost and time issues – far more technology was needed than just adding a FICON network pipe to the array. This sucked up engineering dollars and, at the same time, the problem of getting IBM certification – a mandatory requirement – was hindered by bureaucracy, with a year-plus timescale being mentioned.
The overall development cost became too high and Infinidat abandoned the attempt.
Future
Bullinger’s view is that there are three doors ahead of Infinidat: IPO, acquisition, or funding for more growth. On joining, he oversaw a 40 per cent bookings growth from 2020 to 2021 and wants to do better still.
In his view, Infinidat has plenty of potential growth in the high-performance, enterprise, mission-critical storage array market without having to enter the mainframe market or port the software to the three main public clouds. Infinidat can stick to its knitting with growth possibilities enhanced because its main competitors – Dell EMC, Hitachi Vantara, HPE, and NetApp – have taken their eyes off the ball in the Infinidat market.
PowerMax is not as successful as Dell might have hoped. Hitachi Vantara is like a flywheel that has lost its motive power – still spinning but slowing down. This is so even though GigaOm rates its SAN storage highly. A recent refresh could change its market situation.
HPE storage is, market-wise, weak. It has its XP, with Hitachi Japan as an OEM, but we hear less and less about it. It is architecturally different from the 3PAR-Primera-Alletra 9000 line which is not setting the world on fire and is, again, architecturally different from the Nimble-Alletra 6000 line. NetApp is perceived as not really having PowerMax, DS8000, VSP, or XP-class machines. Newcomer Pure is not in this space either.
So Infinidat has this particular market wide open to its persuasive sales reps and partners.
With that basic growth prospect, Bullinger might pursue funding to gain technology through an acquisition/acquihire exercise – but only to strengthen the core InfiniBox line, not to enter a completely new market. He might ask for funding to build out Infinidat’s business infrastructure and have more presence in the market so that Infinidat could grow even faster.
Our feeling is that he will do just that: request funding for a growth-related initiative.
Blistering. That’s the best way to describe the pace of business development at Kubernetes troubleshooting startup Komodor.
Kubernetes growth is exploding on the back of DevOps. Stateful container storage players, working with CSI plugins or storage containers, can find diagnosing and fixing Kubernetes infrastructure problems daunting – multiple separate tools, each with its own interface, are needed to look into aspects of the infrastructure. The challenge is to organise the different pieces of information into a coherent picture, check that out, then work out how to fix the problem. DevOps people using Kubernetes need an integrated master tool to help them diagnose and fix faster.
This is what Komodor is developing.
Komodor marketing VP Igal Zeifman told a Tel Aviv IT Press Tour audience: “The Kubernetes ecosystem is uncharted. Kubernetes troubleshooting usually involves many tools, support tickets and escalation. Komodor brings the tools together and enables DIY troubleshooting.”
The company was started in June 2020 in Tel Aviv by CEO Ben Ofiri – a former Google project manager and engineer – along with CTO Itiel Shwartz – a software engineer with a stint at eBay in his CV.
Itiel Shwartz (left) and Ben Ofiri
Their ideas convinced angel investors to stump up $4 million and the Komodorans set about writing code for benevolent agent software – a data collection engine that would sit in a customer’s Kubernetes cluster and map the local environment and its history of changes to the microservices within. They also produced integration code – an automation engine that would call the various diagnostic and monitoring tools in the Kubernetes ecosystem, like DataDog and Grafana, interpret their output, and present the results to the user through a smart dashboard along with recommendations for what to do next.
The intent was for their app to review all the data, understand the context, and enable developers to troubleshoot. Often when a problem occurs in a running environment it has been directly or indirectly triggered by one or more changes in that environment. Komodor’s app helps a developer understand which elements in a microservices environment have changed over time and relate them to problematic issues. Then the developer can find out what the changes involved and tease out the root cause.
Komodor dashboard showing event stream
This then facilitates remediation, saving developers time – 30 hours per week for one customer. Early results were encouraging and customer numbers grew. It became clear to Ofiri and Shwartz that they had a facility that was in demand. Although other suppliers like Sysdig offered similar features at first glance, they were not exclusively focused on Kubernetes, and Komodor was of greater help to developers. Gartner recognized the company as a Cool Vendor in 2021.
In early 2021 it became clear that they needed more funds to hire engineers, get a larger office, and grow. Talks with VCs and industry figures resulted in a $21 million A-round in May 2021, just 11 months after starting the company. The VCs were Accel, Pitango, and NFX. Komodor now has 18 employees and dozens of customers in North America, South America, and Europe, including Intel, Lacework, and Varonis.
Basic pricing is $10/node/month for up to 250 nodes (servers). Assume 36 customers each with 50 nodes and that works out to 36 x 50 x $10 x 12 = $216,000 a year in revenue, a little under 24 months after starting operations. Asked about the size of Komodor’s total addressable market, Ofiri said it was as big as the DevOps community, as big as the cloud, with every company needing to have modern software features. “It’s incalculable,” were his words.
Komodor is not producing hardware or even monolithic software. Its own DevOps people use its software to speed in-house development – eating their own dog food. It can scale very quickly.
Equinix is selling Dell managed storage, hyperconverged and data protection products as-a-service from its datacenters.
The products include Dell PowerStore (all-flash), VxRail and PowerProtect Data Domain Virtual Edition (DDVE). PowerStore is Dell’s mid-range unified file and block storage array, VxRail is its hyperconverged system running VMware, and DDVE is a software-only storage appliance, which can be either on-premises or cloud deployed. Equinix offers bare metal as-a-service in 18 data centers worldwide.
Zac Smith, Managing Director of Equinix Metal, said in a blog: “Our job is to stay humble and enable customers’ opinions to shine through by offering choice, such as the Dell PowerStore, VxRail and DDVE or Pure Storage solutions we’re now operating as a service, or our new Workload Optimized server lineup.”
Equinix Metal data centres (red dots)
The Dell products will be sold as a fully operated service. This includes Equinix procuring, installing and maintaining the hardware, and also managing the colocation, power, top of rack and distribution networking throughout the contract term. Dell says VxRail on Equinix Metal provides a “soup to nuts” data center as-a-service experience that includes compute, storage, networking, and integrated VMware. DDVE on Equinix Metal can be connected to private environments as well as to any cloud platform through Equinix onramps.
In Smith’s view: “Just about every technology player is moving as a Service, and as the world’s digital infrastructure company, we’re focused on helping to make them successful.”
He said: “Equinix’s digital services are foundational building blocks that customers and partners can assemble… The big unlock that we’re providing is an API interface for customers to invest in digital transformation at the most fundamental, physical level.”
Smith said he is keen on partnering. “We believe that our customers are best served by choosing from best of breed solutions that are part of our ecosystem.” He outlined several Equinix Metal partnerships:
Nutanix Cloud and Dell VxRail for a hybrid multi-cloud application experience,
Dell PowerStore and Pure Storage for scalable storage,
Cohesity Helios or Dell PowerProtect DDVE for data resiliency,
NVIDIA Launchpad for machine learning.
Equinix added that it’s “happy to be valued for the as-a-Service plumbing that it provides.”
Equinix diagram
The Equinix Metal cloud sits in close proximity to the public clouds and acts as the bridge to end users, enabling interconnections between them. Equinix says it sees itself starring in a colocation comeback story, with on-demand features and presence in worldwide geos. The company wants to establish Equinix Metal as a serious player in the infrastructure marketplace – a one-stop shop for hybrid and hyperscale infrastructure in a “rent-the-runway” model – and the center of customers’ private clouds.