Solidigm is touting a PCIe gen 4 QLC flash SSD offering TLC-class read performance and has appointed a pair of co-CEOs.
QLC (4 bits/cell) NAND provides less expensive SSD capacity than TLC (3 bits/cell) NAND but has generally lower performance and a shorter working life. Solidigm is making a big deal about its new optimized QLC drive, which it says can cost-effectively replace both a TLC flash and a hybrid disk/SSD setup in a 7PB object storage array.
Greg Matson, VP of Solidigm’s datacenter group, played the sustainability card: “Datacenters need to store and analyze massive amounts of data with cost-effective and sustainable solutions. Solidigm’s D5-P5430 drives are ideal for this purpose, delivering high density, reduced TCO, and ‘just right’ performance for mainstream and read-intensive workloads.”
Solidigm says the D5-P5430 is a drop-in replacement for TLC NAND-based PCIe gen 4 SSDs. It is claimed to reduce TCO by up to 27 percent for a typical object storage solution, with a 1.5x increase in storage density and 18 percent lower energy cost. And it can deliver up to 14 percent higher lifetime writes than competing TLC SSDs.
From left: D5-P5430 in U.2 (15mm), E1.S (9.5mm) and E3.S (7.5mm) formats
The P5430 uses 192-layer 3D NAND with QLC cells and comes in three formats: U.2, E1.S and E3.S. The capacities are 3.84TB, 7.68TB, 15.36TB, and 30.72TB, with the physically smaller E1.S model limited to a 15.36TB maximum. The drive does up to 971,000/120,000 random read/write IOPS, and its sequential read and write bandwidth numbers are up to 7GBps and 3GBps respectively.
Solidigm says the new SSD's read performance is optimized both for mainstream workloads – such as email/unified communications, decision support systems, and fast object storage – and for read-intensive workloads like content delivery networks, data lakes/pipelines, and video-on-demand. These have an 80 percent or higher read IO component.
How does the performance stack up versus competing drives? Solidigm has a table showing this:
The comparison ratings are normalized to Micron's 7450 Pro and they do look good. The P5430's endurance is limited, though, with Solidigm providing two values dependent upon workload type – up to 0.58 DWPD for random writes and up to 1.83 DWPD for sequential ones. It is a read-optimized drive, after all.
Solidigm wants us to know that it has up to 90 percent IOPS consistency and ~6 percent variability over the drive’s life, and it supports massive petabytes written (PBW) totals of up to 32PB. Kioxia’s CD6-R goes up to 28PBW.
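For readers wanting to tie the two endurance metrics together: DWPD and PBW are linked by drive capacity and warranty length. Here is a minimal sketch, assuming a five-year warranty period (our assumption – Solidigm does not state the warranty term in this announcement), showing how the 30.72TB model's ratings translate into lifetime writes:

```python
# Minimal sketch relating DWPD (drive writes per day) to lifetime petabytes
# written (PBW). Assumption: a five-year warranty period, typical for
# datacenter SSDs but not stated in the announcement.

WARRANTY_DAYS = 5 * 365  # 1,825 days

def lifetime_pbw(capacity_tb: float, dwpd: float) -> float:
    """Total writes over the warranty period, in decimal petabytes."""
    return capacity_tb * dwpd * WARRANTY_DAYS / 1_000

print(lifetime_pbw(30.72, 0.58))  # ~32.5 PB at the random-workload rating
print(lifetime_pbw(30.72, 1.83))  # ~102.6 PB at the sequential-workload rating
```

Under that assumption the random-workload rating lands close to the quoted ~32PB figure; the sequential rating would work out considerably higher.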
It says a 7PB object storage system using 1,667 x 18TB disk drives with 152 TLC NAND cache drives will cost $395,944/year to run. A 7PB alternative, using 480 x 30.72TB P5430s, will cost $242,863/year – 39 percent less.
Solidigm ran the same comparison against an all-TLC SSD 7PB object storage array and says its kit costs $257,791/year, 27 percent less than the TLC system's $334,593/year. The TLC NAND system uses 15.36TB drives while Solidigm's P5430-based box uses its 30.72TB drives, giving it a smaller rack footprint.
The D5-P5430 SSDs are available now, but the maximum-capacity 30.72TB versions won't arrive until later this year.
The CEOs
Solidigm’s board has appointed two co-CEOs. The original CEO, Intel veteran Rob Crooke, left abruptly in November last year. The co-CEO of SK hynix, Noh-Jung Kwak, was put in place as interim CEO. Now two execs, SK hynix president Kevin Noh and David Dixon, ex-VP and GM for Data Center at Solidigm, are sharing responsibility.
David Dixon and Kevin Noh
Noh was previously chief business officer for Solidigm, joining in January this year. He has a 20-year history as an SK Telecom and SK hynix exec. Dixon was a near-28-year Intel vet before moving to Solidigm when Intel's NAND business was sold to SK hynix and rebranded.
Bootnote
Here are the calculations Solidigm supplied for its comparison between a hybrid HDD/SSD and all-P5430 7PB object storage array and all-TLC array:
Micron has built a TLC SSD with 232-layer tech that’s faster and more efficient than Solidigm’s lower cost QLC drives. It’s also launched a fast SLC SSD for caching.
SLC (1 bit/cell) NAND is the fastest flash with the longest endurance. TLC (3 bits/cell) makes for higher-capacity drives using lower cost NAND but with slower speed and shorter working life. QLC (4 bits/cell) is lower cost NAND again but natively has slower speeds and less endurance. 3D NAND has layers of cells stacked in a die – the more layers, the more capacity in the die and, generally, the lower the manufacturing cost. Solidigm and other NAND suppliers are shipping flash with fewer than 200 layers while Micron has jumped to 232 layers.
Alvaro Toledo
Alvaro Toledo, Micron VP and GM for datacenter storage, told us: “Very clearly, we’re going after QLC drives in the market like the [Solidigm] P5316. And what we can say is this drive will match that on price, but beats it on value by a mile and a half. We have 56 percent better power efficiency, at the same time giving you 62 percent more random reads.”
The power efficiency claim is based on the P5316 providing 32,000 IOPS/watt versus Micron's 6500 ION delivering 50,000 IOPS/watt.
Micron provides a set of performance comparison charts versus the P5316:
Solidigm coincidentally launched an updated QLC SSD, the P5430, today. Micron will have to rerun its tests and redraw its charts.
We have crafted a table showing the main speeds and feeds of the two Solidigm drives and the 6500 ION – all PCIe 4 drives – for a quick comparison:
The 6500 ION has a single 30.72TB capacity point and beats both Solidigm QLC drives with its up to 1 million/200K random read/write IOPS performance, loses out on sequential read bandwidth of 6.8GB/sec vs Solidigm’s 7GB/sec, and regains top spot with a 5GB/sec sequential write speed, soundly beating Solidigm.
Toledo points out that the 6500 ION supports 4K writes with no large indirection unit, while Solidigm's P5316 "drive writes in 64K chunks." This, Toledo claims, incurs extra read-modify-write cycles as smaller host writes have to be folded into the drive's 64K chunks.
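To make the read-modify-write point concrete, here is a simplified sketch – our own illustration, not Micron's or Solidigm's arithmetic – of how a large indirection unit (IU) inflates the NAND traffic generated by a small host write:

```python
# Simplified model (our illustration) of indirection unit (IU) overhead:
# a host write smaller than the IU forces the drive to read the whole IU,
# merge in the new data, and write the whole IU back to NAND.

def nand_traffic_kib(host_write_kib: int, iu_kib: int) -> tuple[int, int]:
    """Return (NAND KiB read, NAND KiB written) for one host write."""
    ius_touched = -(-host_write_kib // iu_kib)  # ceiling division
    touched_kib = ius_touched * iu_kib
    nand_read = 0 if iu_kib <= host_write_kib else touched_kib
    return nand_read, touched_kib

for iu in (4, 64):
    read_kib, written_kib = nand_traffic_kib(4, iu)
    print(f"4KiB host write, {iu}KiB IU: read {read_kib}KiB, "
          f"wrote {written_kib}KiB (~{written_kib // 4}x amplification)")
```

In this toy model a 4KiB update under a 64KiB IU reads 64KiB and writes 64KiB back – roughly 16x write amplification compared with a drive that maps at 4KiB granularity.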
Micron 6500 ION
Its capacity and cost mean that “by lowering the barriers of entry, we see that the consolidation play will make a lot more sense right now. You can store one petabyte per rack unit, which gives you up to 35 petabytes per rack.”
Toledo says the 6500 ION is much better in terms of price/performance than QLC drives: “Just the amount of value that we’re creating here is gigantic.” You can use it to feed AI processors operating on data lakes with 9x better power efficiency than disk drives in his view. And it’s better than QLC drives too: “This ION drive is just absolutely the sweet spot, the Goldilocks spot in the middle, where for about 1.2, 1.3 watts per terabyte, you can get all the high capacity that you need fast enough to feed that [AI] beast with a very low power utilization.”
The XTR is positioned as an affordable single-port caching SSD. Micron compares it to Intel's discontinued Optane P5800X series storage-class memory (SCM) drive, saying it has up to 44 percent less power consumption, 20 percent more usable capacity, and up to 35 percent of the P5800X's endurance at 20 percent of the cost.
Micron XTR.
It suggests using the XTR as a caching drive paired with its 6500 ION, claiming this provides query performance identical to that of an Optane SSD cache. Toledo said: “We are addressing the storage-class memory workload that requires high endurance; this is not a low latency drive.”
Kioxia also has a drive it positions as an Optane-type device, the FL6, and Micron’s XTR doesn’t fare that well against it in random IO but does better in sequential reads, as a table shows:
Toledo says the FL6 is going after lower latency workloads than the XTR, but: “If you need to strive for endurance, the XTR can go toe to toe with a storage-class memory solution.”
Micron says the XTR has good security ratings – better than the Optane P5800X products – such as FIPS 140-3 L2 certification at the ASIC level, and provides up to 35 DWPD endurance for random workloads (60 DWPD for sequential ones).
NetApp has doubled down on its all-flash systems with a new SAN array, the ASA A-Series, and a new StorageGRID object storage system. It has also added a ransomware recovery guarantee and made its ONTAP One data services software suite available to all ONTAP users at no added charge.
Update: Infinidat cyber resilience guarantee information added. May 18, 2023.
NetApp supplies unified block and file storage arrays running its ONTAP operating system. These arrays include the FAS hybrid flash+disk arrays and the AFA (All-Flash Array) systems. There’s the TLC flash-based A-Series and lower cost, capacity-focused C-Series, which use QLC flash. The SAN-only ASA product line, introduced in October 2019, is powered by a block-only access version of ONTAP and is based on AFA A-Series hardware.
Sandeep Singh
NetApp SVP and GM of Enterprise Storage Sandeep Singh told us: “What it means for customers is that NetApp has already been a leader in NAS. NetApp has been the leader in unified, including with file, block and object capabilities. And now NetApp is becoming a leader in SAN.”
He justified this by saying: “We are building on a very, very strong base of SAN workload deployments. It turns out that over 20,000 customers already trust NetApp with their SAN workloads. And out of those 20,000, 5,000 customers deploy that for their SAN-only workloads.”
ASA
The new ASA systems are for customers who separate their SAN workloads, such as SAP and Oracle databases, from unstructured data access. They keep data available during unplanned outages with a symmetric, active-active controller architecture. Product marketing VP Jeff Baxter told us: “Symmetric active-active multipathing … is typically reserved only for high-end frame arrays.”
NetApp ASA systems
The new ASAs use NVMe SSDs and support both NVMe/TCP and NVMe/FC access. NetApp offers a six nines availability guarantee, with remediation available if downtime exceeds 31.56 seconds a year, plus a storage efficiency guarantee of a minimum 4:1 data reduction based on in-line compression, deduplication and compaction.
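That 31.56-second threshold is simply the arithmetic of six-nines availability applied to a year, as this quick check shows:

```python
# Quick check on the six-nines remediation threshold: 99.9999 percent
# availability over a year leaves roughly 31.6 seconds of allowable downtime.

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~31,557,600 seconds
allowed_downtime = SECONDS_PER_YEAR * (1 - 0.999999)

print(round(allowed_downtime, 2))  # 31.56
```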
There is a five-member product range running from the entry-level A150 and A250 through the mid-range A400 and on up to the A800 and range-topping A900. A data sheet table provides speeds and feeds:
NetApp claims the ASA delivers up to 50 percent lower power consumption and associated carbon emissions than competitive offerings but without supplying comparative numbers – you’ll need to check.
StorageGRID
NetApp offers low-end SG100 and SG1000 object storage nodes, mainstream SG5000 series cost-optimized boxes, and the high-end SG6000 series. These comprise the SG6060 and SG6060-Expansion systems for transactional small object and large scale data lake deployments, plus the all-flash SGF6024 for primary object-based workloads needing more performance. That system is now surpassed by the new SGF6112 StorageGRID system running v11.7 of the StorageGRID operating system.
The SGF6112 supports 1.9TB, 3.8TB, or 15.3TB SED (self-encrypting drives) or non-SED drives. NetApp blogger Tudor Pascu writes: “As an all-flash platform, the SGF6112 hits a sweet spot for workloads with small object ingest profiles. The main difference between the SGF6112 and the previous-generation all-flash appliance is the fact that the new appliance no longer leverages the EF disks and controllers.”
It uses a software-based RAID 5 (2 x 5+1) configuration for node-level data redundancy. NetApp says the SGF6112 has improved performance and density. We don’t have any performance or capacity comparison between the SGF6024 and the latest SGF6112 to back this up.
v11.7 adds cross-grid replication, which replicates objects from one grid to another, physically separate grid, clones tenant information, and supports bi-directional replication. This improves on the existing CloudMirror object replication because it supports disaster recovery.
It also has an improved UI showing capacity utilization and top tenants by logical space. There is also a software wizard to set up tiering from ONTAP to StorageGRID.
Ransomware
NetApp is offering a ransomware recovery guarantee. Singh said: “What NetApp uniquely delivers is providing customers this flexibility to have the ability to protect, detect, and recover in the event of a ransomware attack … In the event that they are unable to recover their data then NetApp will offer them compensation.”
This is based on ONTAP automatically blocking known malicious file types, rogue admins, and malicious users with multi-admin verification, and tamper-proof snapshots that can’t be deleted, not even by a storage admin.
It also looks at IO patterns. Baxter said: “Our autonomous ransomware protocol, when it’s enabled, goes into a learning period. It learns what’s the IO rate look like? What’s the entropy, the change rate percentage? What’s the throughput of the volumes in question? And then, once it’s learned enough, it shifts into an active mode. And it does so automatically in our new version of ONTAP that just shipped.”
Singh added: “When ONTAP detects a ransomware attack, it automatically creates a tamper proof snapshot as a recovery point and notifies the administrators and then customers were able to recover literally within seconds to minutes their data from the data copies that are available to them. This we believe is industry leading and uniquely available from NetApp.”
Data protection vendors Druva and Rubrik offer ransomware recovery guarantees. Infinidat has a cyber storage resilience guarantee for its InfiniSafe offering. This protects the backup repository on Infinidat's InfiniGuard data protection arrays. Infinidat also has a cyber storage resilience guarantee on InfiniBox and InfiniBox SSA II for production storage, based on extending its InfiniGuard protection to these arrays. NetApp, like Infinidat, is offering its guarantee on a production storage system, but it is not the first to do so.
ONTAP One
The ONTAP One Storage software suite, including all available NetApp ONTAP software, was announced for NetApp’s AFA C-Series in March. It is now available for all AFF, ASA, and FAS systems. ONTAP One is also available to existing NetApp deployed systems under support.
NetApp says it has also expanded its Advance set of buyer programs and guarantees.
Scott Sinclair, Practice Director at the Enterprise Strategy Group, offered this thought: “This announcement is strategic for NetApp, but also aligns to what ESG is finding within our research, which is that the datacenter isn’t going away.”
One theme of this set of announcements is increased value for money and that could help NetApp sustain its revenues or even grow them in its current downturn.
Commissioned: The current economic uncertainty has cautious CIOs tucking in for tough times.
Although still important for brand viability and competitiveness, potentially transformative digital projects and so-called moonshots appear to be giving way to workstreams that bolster organizational resiliency.
Those tasks? System uptime and cybersecurity and, naturally, cost optimization.
Companies are seeking IT leaders who can be surgical about cutting costs and other critical measures for building business resiliency during a downturn, executive recruiters recently told CIO Journal. Also, financial leaders for leading technology providers noted in earnings calls that customers are optimizing their consumption of cloud software in the face of macroeconomic headwinds.
Their common keywords and phrases? Slower growth of consumption. Economic uncertainty. Tough economic conditions. Optimize their cloud spending. New workload trends.
New workload trends. This means IT leaders are rethinking where and how their workloads are running. They are focused on workload optimization and placement, or rightsizing and reallocating applications across a variety of enterprise locations.
How IT leaders got here
Consider this new focus a correction to trends that date back more than a decade and accelerated in recent years.
As organizations expanded their IT capabilities to meet new business demands they allocated more assets outside their datacenters, widening their technology footprints. For instance, many CIOs launched new ecommerce capabilities and built mobile applications and services.
The sentiment was: Build it and fast, test it quickly and ship it. We’ll worry about the technical debt and other code consequences later.
From remote call centers and analytics tools to mobile payment and curbside pickup services, IT leaders built solutions to strengthen connections between their companies and the employees and customers they serve.
Collectively, these new digital services hastened an already proliferating sprawl of workloads across public and private clouds, colocation facilities and even edge environments.
Fast forward to today and businesses are grappling with these over-rotations. They’ve built an array of systems that are multicloud-by-default, which is inefficient and clunky, with data latency and performance taxes, not to mention unwieldy security profiles.
A playbook for building business value
Fortunately, IT leaders have at their disposal a playbook that enables them to optimize the placement of their workloads across a multicloud IT estate.
This multicloud-by-design approach brings management consistency to storing, protecting and securing data in multicloud environments. Delivered as-a-Service and via a pay-per-use model, this cloud-like strategy helps IT leaders provide a cloud experience, while hardening the business and reining in costs.
The multicloud-by-design strategy helps build business value in four ways:
Cost optimization. High CapEx costs are a burden, but IT teams can operate like the public cloud while retaining their assets on-premises.
A pay-per-use consumption model allows IT leaders to align infrastructure costs with use. This can help reduce overprovisioning by up to 42 percent and save up to 39 percent in costs over a three-year operating period, according to IDC research commissioned by Dell.
Productivity. The talent crunch is real, but IT can reduce the reliance on reskilling by placing workloads in their optimal location, which will help mitigate risk and control costs. For instance, reducing time and effort IT staff spend on patching, monitoring and troubleshooting, among other routine tasks, is a key issue for talent-strapped IT teams. Such actions can help make infrastructure teams up to 38 percent more efficient, says IDC.
Digital resiliency. Recovering from outages and other incidents is challenging regardless of the operating model. Subscription-based offerings help reduce risks associated with unplanned downtime and costs, with as much as 46 percent faster time to recovery, IDC data shows.
Business acceleration. Optimizing workload placement with pay-per-use and subscription models helps shorten cycles for procuring and deploying new compute, storage or data protection capacity by as much as 60 percent, IDC says.
This helps businesses boost time to value compared with existing on-premises environments that do not leverage flexible consumption models.
The takeaway
In a tight economy, IT leaders must boost business value as they deploy their IT solutions. Yet they still struggle with unpredictable and higher than expected costs, performance and latency concerns, as well as security and data locality issues.
Sound daunting? It absolutely is. But running a more efficient IT shop shouldn’t be a moonshot.
Our Dell APEX as-a-Service suite of solutions can help IT departments exercise intelligent workload placement and provision infrastructure resources quickly. Dell APEX will also help IT teams improve interoperability across IT environments while enabling teams to focus on high value tasks. All while protecting corporate data and keeping it compliant with regulations.
Ultimately, organizations that can react nimbly to changing business requirements with resiliency are more likely to prosper. But this requires new ways of operating IT – and the right partner to help execute.
The whole area of IT storage is buzzing with innovation, as startups and incumbents race to provide the capacity needed in a world exploding with unstructured data. Chatbot technology, the I/O speed to get data into memory, the memory capacity needed, and the software to provide, manage, and analyze vast lakes of data all create massive opportunities.
Growing amounts of data need storing, managing, and analyzing, and that is driving technology developments at all levels of the IT storage stack. We’ve tried to put this idea in a diagram, looking at the storage stack as a spectrum running from semiconductor technology at the left (yellow boxes) to data analytics technology at the right (green boxes):
Talk about a busy slide! It starts with big blue and pink on-premises and public cloud rectangles. Below this is a string of ten boxes, stretching from the smallest hardware to the largest software. The first six have sample technology types below them. A set of nine blue bubbles contain example suppliers who are constantly bringing out new technology. A tenth such bubble can be found to the top left – building public cloud structures on-premises.
This diagram is not meant to be an authoritative and comprehensive view of the storage scene – think of it as a representative sample. There isn’t an exhaustive list of innovating suppliers either, just examples. Exclusion does not mean a supplier isn’t busy developing new technology – they all are. You can’t survive in the IT storage world unless you are constantly innovating – both incrementally with existing technology and through new means.
Although innovation is ongoing, the amount of money and number of funding events for storage startups has slowed this year. So far this year we have recorded:
Cloudian – $60 million
Impossible Cloud – $7 million
Intrinsic Semiconductor – $9.73 million
Komprise – $37 million
Pinecone – $100 million
Volumez – $20 million
Weebit Nano – $40 million share placement
That’s a total of $273.7 million – not a lot compared to 2022, when we saw $3.1 billion in total funding events for storage startups.
Near the midpoint this year there have been just five acquisitions, and no IPOs:
Iguazio bought by McKinsey & Co.
Model9 bought by BMC
Ondat acquired by Akamai
Databricks bought Okera
Serene bought crashed Storcentric and its Nexsan products
In 2022 Backblaze IPO’d, Datto was bought for $1.6 billion, Fungible was acquired by Microsoft, MariaDB had a SPAC exit, AMD bought Pensando, Rakuten bought Robin, Hammerspace acquired Rozo, and Nasuni bought Storage Made Easy – it was a much busier year.
There could be more this year as DRAM and NAND foundries buy CXL technology, and data warehouse and lakehouse suppliers acquire AI technologies.
We have, we think, at least three IPOs pending: Cohesity, Rubrik and VAST Data. Possibly MinIO as well, and we might see private equity takeouts for other companies.
The world is bristling with new technology developments. Think increased layer counts in the NAND area, memory pooling and sharing with CXL, large language model interfaces to analytical data stores, disk drive recording (HAMR/MAMR), cloud file services, tier-2 CSP changes, incumbent supplier public cloud-like services for on-premises products, SaaS application backup, and Web3 storage advances.
Keeping up with all this is getting to be a full-time job.
Scandal-hit WANdisco wants to raise $30 million in equity to avoid running out of working capital by mid-July as the business embarks on a “deep transformation recovery program”, including sustained efforts to reduce expenses and bolster trade.
The data replicator biz risks running out of cash in the wake of falsified sales reporting that was revealed on March 9. The AIM-listed company reported revenues of $24 million for 2022 when they should have been $9.7 million, and sales bookings were also grossly inflated at $127 million rather than the actual $11.4 million. An investigation is ongoing, but so far WANdisco has laid the blame for the fake sales at the feet of one unnamed salesperson.
Share trading in WANdisco stock was immediately suspended following the discovery of the incorrect sales data. Co-founder, CEO and chairman Dave Richards subsequently quit along with CFO Erik Miller, and new C-suite hires were made including a board chairman, interim CEO and CFO.
Ken Lever
Interim chair Ken Lever said: “Having now been in the business for some six weeks, there is no doubt in my mind that the company should have a very bright future given its differentiated technology. However, improvements across sales and marketing need to be made to properly take advantage of the opportunity.
“To do this, the business needs to be urgently properly capitalized and so today we are announcing our desire to raise $30 million towards the end of June.“
He said the decision to raise equity “is a direct result of the issues that led to our announcement on 9 March.” The company is cutting costs with 30 percent of its staff leaving, and trying to reduce its annualized cost base from $41 million to around $25 million, but it only has $8.1 million of cash left in the bank. It could run out of working capital by mid-July unless the finances are shored up, WANdisco said.
A Proposed Fundraise is deemed the most suitable way to rebuild the balance sheet and fund operations, yet given the uncertainty of the share price, WANdisco might have insufficient shareholder authorities to issue the required number of new Ordinary Shares, it said.
“The Board strongly believes there are significant benefits in asking for shareholder authority to issue shares for the Proposed Fundraise in advance, rather than following the announcement of the Proposed Fundraise with the admission of the New Ordinary Shares subject to approval. This is because the Board cannot realistically launch the Proposed Fundraise until it is confident that the suspension in the Company’s shares will be lifted at the point in time the New Ordinary Shares are issued, or shortly thereafter,” the company said.
Trading in WANdisco stock won't resume on AIM until after the audit of the 2022 company accounts is concluded, which is expected at the end of next month. Execs will, as such, seek approval from the requisite shareholder authorities before launching the Proposed Fundraise.
Management said it intends to try to issue the New Ordinary Shares at a price that “minimises dilution for existing shareholders whilst also ensuring the Company raises sufficient capital”. Pricing the new shares will be done by contacting potential investors and asking them how much they’d be prepared to pay; this is called a bookbuild process.
Potential investors will need to think the company has a future, and Interim CEO Stephen Kelly is developing a turnaround plan with six elements:
Go-to-market structure – this will include sales, marketing, pipeline creation and partnerships to build the foundations towards consistent sales execution.
Enhanced board and management to run the company properly.
Better investor engagement with improved disclosures and transparency.
Headcount and organization cost reductions to achieve the milestone of cash flow break-even with progress to sustainable profitable growth.
Market validation with a realistic view of the obtainable market based on product/market fit, competitive differentiation, proof of value, commercial pricing, and branding.
Excellence in the Company’s Governance and Control environment – meaning no more incorrect sales reporting.
Product market strategy
The existing strategy – replicating live data from edge and core data centers up to the public cloud – will be given added data migrator capabilities, and there will be a new target market: Application Lifecycle Management (ALM). This means selling WANdisco products, services, and SaaS to help distributed software development organizations collaborate more efficiently. It relies on using WANdisco's replication and load-balancing technology to provide a globally distributed active-active configuration across wide area networks.
Enhancing data migrator sales means adding support for more targets via agreements with Microsoft, Google, AWS, IBM and Oracle, plus integration with cloud-centric data analytic platforms including Databricks and Snowflake. The product will be enhanced with more performance, scale, and ease of use, we’re told.
WANdisco’s new management says its Data Migrator technology lies in the Data Integration Software tools segment of Gartner’s Data Management market. The total addressable market (TAM) for such software tools is $4.4 billion in 2023 with a forecast average annual growth rate of 8.7 percent taking it to a $6.3 billion TAM in 2027.
We note that WANdisco competitors include Cirrus Data (block) and Atempo, Datadobi, Data Dynamics and Komprise in the file moving area.
Time is tight and WANdisco’s runway limited. Tom Kennedy, analyst at Megabuyte, said: “This certainly looks like a final throw of the dice for WANdisco, and the odds do not look in its favour.”
“For one, we struggle to believe it can complete a funding round in a matter of weeks, particularly with the added complexity of its share suspension. While, even if it does manage to organise the round, we struggle to see how shareholders have even a remotely positive reaction to being asked for money yet again given its dismal track record – it has already almost entirely wasted the proceeds from its IPO in 2012 and follow-on placings in 2013, 2015, 2016, 2017, 2019 (twice), 2020, 2021, and 2022.
“Moreover, even if it miraculously manages to pull it off, its new annualised $25.0m cost base will quickly burn a hole into the new funds, and it’s limited in further cost-cutting as additional headcount reductions will come at a significant price. Frankly, this looks like a last-ditch attempt to extend its life long enough to complete an asset / IP sale process and return some value back to shareholders, albeit at pennies on the pound, but we just can’t see it happening,” added Kennedy.
Analyst house GigaOm reckons Cohesity is the leading supplier of infrastructure unstructured data management, having upped its game with its DataHawk threat intelligence and data classification suite.
GigaOm puts out two unstructured data management (UDM) reports, this one focused on infrastructure UDM and the second looking at business-oriented UDM. Business UDM products cover compliance, security, data governance, big data analytics, e-discovery and the like. Infrastructure UDM products feature automatic tiering, basic information lifecycle management, data copy management, analytics, indexing, and search.
The report authors write: “As you can see in the Radar chart [below] vendors are spread across an arc that lies primarily in the lower half of the Radar, denoting a market that is particularly innovation-driven.”
There are 12 vendors included, with four Leaders: Cohesity, followed by NetApp, Hitachi Vantara and Komprise. Three of them – Cohesity, NetApp and Komprise – are fast movers.
No other suppliers are expected to reach the Leader ring any time soon.
There are five Challengers: CTERA, Druva, Arcitecta, Datadobi (StorageMap) and Dell (DataIQ). Panzura and Atempo (Miria) are moving from the new entrants' ring into the Challengers' ring, and Data Dynamics is moving towards it.
Five suppliers are placed in the lower-right quadrant – the innovation/platform-play area – with three more set to join them: CTERA, Datadobi and Hitachi Vantara. Product tech is developing fast, and general platform appeal is more important than having extra features.
Hitachi Vantara has made the report available to interested readers, saying the report highlights the exceptional performance of HCP (Hitachi Content Platform) across a wide range of metrics and criteria. You can get a copy here.
Absentees
We asked ourselves why Nasuni and Hammerspace were not included. We were curious about Nasuni because competitors CTERA and Panzura were included – ditto Hammerspace, as Komprise was included. Did they not meet the criteria for entry?
So we asked GigaOm, and Arjan Timmerman, one of the report authors, told us: “We have an Infrastructure as well as a Business-oriented report [and] we made the following decisions.”
“Hammerspace does not have a UDM solution that would fit in either report. Nasuni has a decent business-orientated offering, but that does not really fit on the infra side.”
“CTERA is working hard on their Unstructured Data Management solution and [we included] Panzura although it didn’t brief us or provide a questionnaire response. But it was in both reports last year so we decided to add them in both reports as well. We reached out to them on multiple occasions.”
Analysts rely on vendor co-operation and analytical life gets harder if vendors decide not to play nice.
By the fours
The GigaOm Radar Screen is a four-circle, four-axis, four-quadrant diagram. The circles form concentric rings, for new entrant, challenger, or leader, and mature tech (inner white ring). A supplier’s status is indicated by ring placement and product/service type relates to axis placement.
The four axes are maturity, horizontal platform play, innovation and feature play.
There is a depiction of supplier progression, with new entrants growing to become challengers and then, if all goes well, leaders. The speed and direction of progression is shown by a shorter or longer arrow, indicating slow, fast and out-performing vendors.
Supplier placement does not take into account vendor market shares.
A new chapter is opening in the all-flash storage array (AFA) world as QLC flash enables closer cost comparisons with hybrid flash/disk and disk drive arrays and filers.
There are now 14 AFA players, some dedicated and others with hybrid array/filer alternatives. They can be placed in three groups. The eight incumbents, long-term enterprise storage array and filer suppliers, are: Dell, DDN, Hitachi Vantara, HPE, Huawei, IBM, NetApp and Pure Storage. Pure is the newest incumbent and classed as such by us because of its public ownership status and sustained growth rate. It is not a legacy player, though, as it was only founded in 2009, just 14 years ago.
An updated AFA history graphic – we first used this graphic in 2019 – shows how these incumbents have adopted AFA technology both by developing it in-house and acquiring AFA startups. We have added Huawei to the chart as an incumbent; it is a substantial AFA supplier globally.
A second wave of hybrid array startups adopting all-flash technology – Infinidat, Nimble, Tegile and Tintri – have mostly been acquired, HPE buying Nimble, Western Digital buying Tegile, and DDN ending up with Tintri. But Infinidat has grown and grown and added an all-flash SSA model to its InfiniBox product line.
Infinidat is of a similar age to Pure, being founded in 2010, and has unique Neural Cache memory caching technology which it uses to build high-end enterprise arrays competing with Dell EMC’s PowerMax and similar products. It has successfully completed a CEO transition from founder Moshe Yanai to ex-Western Digital business line exec Phil Bullinger, and has been growing strongly for three years. It’s positioned to become a new incumbent.
The third wave was formed of six NVMe-focused startups. They have all gone now as well, either acquired or crashed and burned. NVMe storage and NVMe-oF access proved to be technology features and not products as all the incumbents adopted them and basically blew this group of startups out of the water.
The fourth wave of AFA startups has three startup members and two existing players moving into the AFA space. All five are software-defined, use commodity hardware, and are different from each other.
VAST Data has a reinvented filer using a single tier of QLC flash positioned as suitable for performance workloads, using SCM-based caching and metadata storage, and parallel access to scale-out storage controllers and nodes, with data reduction making its QLC flash good for capacity data storage as well. Its brand-new partnership with HPE gives it access to the mid-range enterprise market while it concentrates its direct sales on high-end customers.
StorONE is a more general-purpose supplier, with a rewritten and highly efficient storage software stack and a completely different philosophy about market and sales growth from VAST Data. VAST has raised $263 million and is prioritizing growth and more growth, while StorONE has raised around $30 million and is focused on profitable growth.
Lightbits is different again, providing a block storage array accessed by NVMe/TCP. It is relatively new, being started up in 2015.
Kioxia is included in this group because it has steadily developed its Kumoscale JBOF software capabilities. The system supports OpenStack and uses Kioxia's NVMe SSDs. Kioxia does not release any data about its Kumoscale sales or market progress but has kept on developing the software without creating much marketing noise about it.
Lastly, Quantum has joined this fourth wave group because of its Myriad software announcement. This provides a unified, scale-out file and object storage software stack. Its development was led by Brian Pawlowski, who has deep experience of NetApp's FlashRay development and Pure Storage AFA technologies. He characterizes Quantum as a late mover in AFA software technology, aware of the properties and limitations of existing AFA tech, and crafting all-new software to fix them.
We have not included suppliers such as Panasas, Qumulo and Weka in our list. Panasas has an all-flash system but is a relatively small player with an HPC focus. Scale-out filesystem supplier Qumulo also supports all-flash hardware but, in our view, is predominantly a hybrid multi-cloud software supplier. Weka, too, is a software-focused supplier.
Two object storage suppliers support all-flash hardware – Cloudian and Scality – but the majority of their sales are on disk-based hardware. Scality CEO Jerome Lecat tells us he believes the market is not really there for all-flash object storage. These two players are not included in our AFA suppliers’ list as a result.
The main focus in the AFA market is on taking share from all-disk and hybrid flash/disk suppliers in the nearline bulk storage space. In general, the AFA suppliers think that SSD capacities will continue to exceed HDD capacities – 60TB drives are coming before the end of the year – and they are confident they will continue to grow their businesses at the expense of the hybrid array suppliers. Some, like Pure Storage, are even predicting a disk drive wipeout. That prediction may come back to haunt them – or they could be laughing all the way to the QLC-powered flash bank.
Airbyte, which supplies an open source data integration platform, today announced its first premium support offering for Airbyte Open Source. Until now, Airbyte provided support to its users through its community Slack and Discourse platforms. The premium support plan offers one business day response time for Severity 0 and 1 issues, two business days response time for Severity 2 and 3, one week response time for pull request reviews, and an ability to request a Zoom call.
…
Data intelligence supplier Alation has hired former Peloton CFO Jill Woodworth as its CFO, and David Chao, formerly VP and Head of Product Marketing at DataDog, joins as Chief Marketing Officer. Alation has opened new offices in London, UK, and Chennai, India. The new Chennai office will host more than 160 employees from across engineering, product, finance, and HR teams.
…
Alluxio has published a Presto Optimization Handbook, downloadable here; Presto being a distributed query engine for data analytics. For customers using Trino (formerly PrestoSQL), check out The Trino Optimization Handbook here.
…
CTERA has expanded its relationship with Hitachi Vantara. This concerns Hitachi’s announcement of Hitachi Data Ingestor (HDI) reaching end-of-life. CTERA provides a migration path by introducing CTERA Migrate for HDI, a turnkey solution that replaces HDI and preserves the existing storage repository investment. It’s part of the CTERA Enterprise Files Services platform. There’s more info in this blog.
…
Data lakehouse supplier Databricks is buying Okera, an AI-centric data governance platform. Databricks says Okera addresses data privacy and governance challenges across the spectrum of data and AI. It simplifies data visibility and transparency, helping organizations understand their data – necessary in the age of LLMs – and address concerns about their biases. Okera offers an AI-powered interface and self-service portal to automatically discover, classify, and tag sensitive data such as personally identifiable information (PII). Okera has been developing a new isolation technology that can support arbitrary workloads while enforcing governance control without sacrificing performance.
Nong Li, Okera co-founder and CEO, is known for creating Apache Parquet, the open source standard storage format that Databricks and the rest of the industry builds on.
…
DDN says it’s sold more AI storage appliances, like the A1400X2, in the first four months of 2023 than it had for all of 2022, partly due to the broad enthusiasm for generative AI. Dr James Coomer, SVP of Products, said: “The trillions of data objects and parameters required by generative AI cannot be fulfilled without an extremely scalable and high-performance data storage system. DDN has been the solution of choice for thousands of deployments for organizations such as NASA, University of Florida, and Naver.”
…
Helmholtz Munich, part of Germany's largest research organization, the Helmholtz Association, is a DDN customer. It has four fully populated SFA ES7990X systems that span a global namespace, and an SFA NVMe ES400NVX system with GPU integration for faster data throughput, with direct datapaths between storage and GPU for its intensive AI applications. Case study here.
…
Professor Tom de Greef of the Eindhoven University of Technology expects the first DNA datacenter to be up and running within five to 10 years. New files will be encoded via DNA synthesis in one part of the datacenter; another part will contain large fields of capsules, each capsule packed with a file. A robotic arm will remove a capsule, read its contents and place it back. De Greef's research group developed a microcapsule of proteins and a polymer and then anchored one file per capsule. The capsules seal themselves above 50 degrees Celsius, allowing the PCR (Polymerase Chain Reaction) read process to take place separately in each capsule. In the lab, it has so far managed to read 25 files simultaneously without significant error.
Each file is given a fluorescent label and each capsule its own color. A device can then recognize the colors and separate them from one another. A robotic arm can then select the desired file from the pool of capsules in the future.
IBM Storage Fusion HCI System caching accelerates watsonx.data queries, watsonx being the upcoming enterprise-ready AI and data platform designed to apply AI across a business. IBM watsonx.data is delivered on-prem with an appliance-like experience. Watch a video about it here.
…
Web3 storage supplier Impossible Cloud has certified its first German datacenter, located in Frankfurt, with plans to handle client data beginning June 1.
…
Informatica has announced the launch of its Intelligent Data Management Cloud (IDMC) on Google Cloud in Europe to address data sovereignty and localization concerns. New IDMC capabilities include security features to control access to security assets and master data management enhancements for financial services and ESG compliance. Informatica also has deeper integration with Amazon Redshift: its no-code/no-setup software-as-a-service (SaaS) AI-powered cloud data integration is available free directly from the Amazon Redshift console, plus industry certifications for financial services, healthcare and life sciences.
…
MSP-focused data protector N-able announced revenues of $99.8 million for Q1 this year, up 9 percent year over year, with a $3.5 million profit, lower than last year’s $5.1 million. Its subscription revenues were $97.4 million, another 9 percent rise. This is becoming a fairly predictable business revenue-wise.
…
Serene Investment Management-owned Nexsan has surpassed its aggressive Q1 earnings target for its SAN and file storage product line. This includes E-Series, Unity, and the Assureon Data Vault. We don't know what the target was. Serene bought Storcentric, and thus Nexsan, in February this year for $5 million via a DIP loan, retaining key employees and leaders as it restructured the business. Prior Nexsan owner Storcentric filed for Chapter 11 bankruptcy protection in July 2022 and looked for a buyer.
Update: Storcentric's Drobo operation closed down at the end of January, as its website indicates.
…
Scale-out filesystem supplier Qumulo has announced integration with the Varonis Data Security Platform and introduced its new Snapshot-Locking capability to protect customers against ransomware. The Varonis Data Security Platform provides real-time visibility and control over cloud and on-premises data and automatically remediates risk. Its behavior-based threat models detect abnormal activity proactively and can stop threats to data before they become breaches. Qumulo’s Snapshot-Locking feature uses cryptographic protection, where only the customer has access to the cryptographic key-pair required to unlock the snapshot.
…
Rakuten Symphony is partnering with Google Cloud to provide its Symcloud SDS K8s data management and persistent storage for Google Anthos Distributed Cloud offerings. Symcloud SDS releases will be aligned with GDC’s Anthos releases. Symcloud SDS is available through the Google Marketplace.
…
Pre-registration has opened for the SmartNICs Summit for its second annual event. It will occur on June 13-15 at the San Jose Doubletree Hotel. “We are now seeing the full impact of SmartNICs. They offload overhead from CPUs and make solutions more scalable,” said Chuck Sobey, Summit General Chair. “Distributed compute power is essential to handle the demands of incredibly fast emerging applications such as ChatGPT. SmartNICs Summit will help designers select the right architectures.”
…
Storage analyst firm DCIG has named StorMagic SvSAN a global top five HCI solution. The "2023-24 DCIG TOP 5 Rising Vendors HCI Software Solutions" report evaluates HCI software offerings from 11 rising vendors to provide IT decision makers with a succinct analysis of the market's HCI solutions.
…
Synology tells us the BC500 is available for sale in the US as of May 10. The TC500 is still slated for June 14, and MSRP is $219.99 for both models. The BC500 should be available from retailers soon, and also just launched on its web store.
Data orchestrator Hammerspace has quietly bought French startup RozoFS for its transformational erasure coding technology, the Mojette transform.
RozoFS was started up in 2010 by CEO Pierre Evenou in Nantes, France, and has raised €700,000 in what funding it has made public, $764,000 in today’s money. Evenou was an academic researcher at the Institut de Recherche en Communications et Cybernétique de Nantes (IRCCyN) which worked on Mojette transform mathematics in discrete geometry. He took these ideas and set up RozoFS to commercialize them by building a NAS software system using patented Mojette transform erasure coding technology. Evenou moved to Silicon Valley in 2015 to set up RozoFS in the USA.
Tony Asaro.
Tony Asaro, SVP of Business Development for Hammerspace, was – we’re told – instrumental in making the acquisition happen, and said in a statement: “Rozo’s best-in-class erasure coding provides the right balance of price, performance, capacity efficiency, resiliency and availability. This is essential for organizations with massive amounts of data for multi-site and hybrid cloud environments.”
We understand the acquisition actually took place in late 2022 but was only disclosed today. The acquisition price was not confirmed.
Evenou is now VP Advanced Technology at Hammerspace and said: “Organizations need performance throughout their workflows. The integration of Rozo’s technology into the Hammerspace Data Orchestration System will help organizations get the most out of their expensive data creation instruments and compute clusters while also accelerating data analytics and collaboration.”
In 2018 RozoFS provided a high-performance file system on AWS in which fast metadata services provided asynchronous incremental replication between an on-premises storage system and an AWS-based copy. Incremental changes in the on-prem file system could be quickly computed. Using this information, the source cluster uses all its nodes to parallelize synchronization of the on-prem and AWS clusters. The cloud copy can be automatically updated as frequently as needed without impacting application performance. It reduced production dead time because there was no need for lengthy data synchronization.
The software speeds data transfers by reducing the amount of data needed. It can deliver more than 1Tbps with only eight commodity servers working in parallel and connected on a 200GbE network, we're told. Used by Hammerspace, it allows customers to move files directly to the compute, application, or user at peak performance, nearly saturating the capabilities of their infrastructure.
Two RozoFS engineers have joined Hammerspace: CTO Didier Féron and VP Engineering Jean-Pierre Monchanin are new members of the Hammerspace development team as senior software engineers.
Background
Why “mojette”? It’s a French word meaning a white (haricot) bean – the beans used in baked beans – and such beans (sans the sauce) have been used in French schools to teach addition and subtraction. The usage is reflected in the term for accountants: beancounters. The transform only uses addition and subtraction, hence its name.
What’s the big beancounting deal for Hammerspace?
As we wrote several years ago: Mojette transform erasure coding starts from the concept of a grid of numbers. Imagine a 4×4 grid. We can draw straight lines along the rows, up and down the columns, and diagonally through the grid cells to the left and right. Figure 1 in the diagram below shows this.
The lines are extended outside the grid. For each line, the values in the intersected grid cells can be added or subtracted and written at the end of the line. In fig. 1 the value b19 is the sum of the values in cells p1, p6, p11 and p16.
The line of values from b22 to b16 is a kind of projection of the source grid along a particular dimension (diagonally from lower right to top left). The grid values can be viewed as being transformed into the projected values.
Figure 2 shows four such projections with the grid cells identified by Cartesian coordinates, such as cell 0,0; the bottom left cell. Figure 3 shows the projection direction in colour, blue, red, green and black.
If original data is lost somehow when the source data grid is read, then the projected values can be used to reconstruct a missing cell value, with two or more projections intersecting the missing cell such that its value can be re-computed.
Mojette transform erasure coding is quicker than other forms of erasure coding, such as Reed-Solomon, and needs comparatively less storage space. It also scales out to billions of files.
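To make the projection idea concrete, here is a toy sketch – our own illustration, not RozoFS production code – that computes additive projections of a 4x4 grid along three example directions (columns, rows and one diagonal):

```python
# Toy sketch (our illustration, not RozoFS production code) of Mojette-style
# projections: each projection adds up the grid values lying on parallel
# lines in a chosen direction (p, q). Only additions are needed.

GRID = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]

def projection(grid, p, q):
    """Bin index b = q*x - p*y is constant along the direction (p, q)."""
    bins = {}
    for y, row in enumerate(grid):
        for x, value in enumerate(row):
            b = q * x - p * y
            bins[b] = bins.get(b, 0) + value
    return dict(sorted(bins.items()))

# Column sums, row sums and one diagonal.
for direction in ((0, 1), (1, 0), (1, 1)):
    print(direction, projection(GRID, *direction))
```

Reconstruction runs in reverse: any projection line that crosses only one missing cell yields that cell's value by subtracting the known cells on the line from the stored sum, and repeating this fills in the rest of the grid.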
A PDF doc, The Mojette Erasure Code, Benoît Parrein, Univ-Nantes/IRCCyN, Journée inter GDR ISIS/SoCSiP, 4/11/2014, Brest, provides more information on the basic maths and operations behind it.
IBM has unveiled its watsonx.data datastore, using lakehouse underpinnings to run multiple query engines for AI and analytics workloads, and claims this can cut data warehouse costs by up to 50 percent.
This datastore is the core part of an overall watsonx AI and data platform launched at the IBM Think event, which also includes an AI development studio. IBM says watsonx.data's lakehouse bridges the gap between data warehouses and data lakes, offering the flexibility of a data lake with the performance and structure of a data warehouse.
The announcement comes bundled with quotes from Intel and Cloudera but not, oddly, IBM. This is a partnership-focused release.
Das Kamhout, VP and Senior Principal Engineer of the Cloud and Enterprise Solutions Group at Intel, said: “We recognize the importance of watsonx.data and the development of the open-source components that it’s built upon. We look forward to partnering with IBM to optimize the watsonx.data stack, achieving breakthrough performance through our joint technological contributions to the Presto open-source community.”
watsonx.data runs on-premises or in public clouds like AWS. The lakehouse underneath can contain both structured and unstructured data. It can support open data formats, such as Apache Parquet and Avro, and table formats like Apache Iceberg. This is open source software for enabling SQL commands to work on petabyte-scale analytic tables. Underneath this can be object storage.
The platform is intended to be a single point of entry to the lakehouse and provide access to multiple query engines such as Presto, Spark and Meta’s Velox open source unified execution engine acceleration library.
Presto, the in-memory distributed SQL datalake query engine, has a starring role here, building on IBM’s acquisition of Ahana in April.
IBM says watsonx.data offers built-in governance, automation, observability and integrations with an organization’s existing databases and tools to simplify setup and user experience. It is engineered to use Intel’s built-in accelerators on Intel’s new 4th Gen Xeon SP CPUs.
IBM’s tech partners are at the fore here. Paul Codding, EVP of Product Management of Cloudera, said: “IBM and Cloudera customers will benefit from a truly open and interoperable hybrid data platform that fuels and accelerates the adoption of AI across an ever-increasing range of use cases and business processes.”
Soo Lee, Director Worldwide Strategic Alliances at AWS, said: “Making watsonx.data available as a service in AWS Marketplace further supports our customers’ increasing needs around hybrid cloud – giving them greater flexibility to run their business processes wherever they are, while providing choice of a wide range of AWS services and IBM cloud native software attuned to their unique requirements.”
But watsonx.data is not yet available in the AWS Marketplace. We checked:
IBM says watsonx.data integrates with StepZen, Databand.ai, IBM Watson Knowledge Catalog, IBM zSystems, IBM Watson Studio, and IBM Cognos Analytics with Watson. IBM says these integrations enable watsonx.data users to implement various data catalog, lineage, governance, and observability offerings across their data ecosystems.
The watsonx.data roadmap includes incorporating the latest performance enhancements to Presto via Velox and Ahana. It will also incorporate IBM’s Storage Fusion technology to enhance data caching across remote sources as well as semantic automation capabilities built on IBM Research’s foundation models to automate data discovery, exploration, and enrichment through conversational user experiences.
A diagram in an IBM watsonx.data ebook shows multiple query engines accessing a metadata store, underneath which is an object store with links to structured, data warehouse, semi-structured, unstructured and data lake data.
There is no mention from IBM about the types of datalakes and lakehouses that are supported; Dremio is not identified, for example.
IBM claims watsonx.data will extend its market leadership in data and AI, but there is no word in IBM’s announcement of using ChatGPT-like large language models.
watsonx.data is in a closed beta phase and expected to be generally available in July 2023. Download an ebook here. It won’t tell you much more but you’ll get a flavor of IBM’s thinking.
AWS has announced general availability of new storage-optimized Amazon EC2 I4g instances featuring AWS-designed Graviton processors and AWS Nitro SSDs. With up to 64 vCPUs, 512 GiB of memory, and 15 TB of NVMe storage, they deliver up to 15% better compute performance than other AWS storage-optimized instances, we’re told. There are 6 sizes:
Storage volumes built from the Nitro NVMe SSDs deliver:
Up to 800K random write IOPS
Up to 1 million random read IOPS
Up to 5600 MB/second of sequential writes
Up to 8000 MB/second of sequential reads
(All measured using 4 KiB blocks.)
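As a rough cross-check – our arithmetic, not an AWS figure – random IOPS at a 4 KiB block size convert into considerably less throughput than the sequential ratings, which is why the sequential figures look so much larger:

```python
# Rough cross-check (our arithmetic, not an AWS figure): random IOPS at a
# 4 KiB block size translate into much less throughput than the sequential
# ratings quoted above.

def iops_to_mb_per_sec(iops: int, block_kib: int = 4) -> float:
    return iops * block_kib * 1024 / 1_000_000  # decimal MB/s

print(iops_to_mb_per_sec(800_000))    # ~3277 MB/s for random writes
print(iops_to_mb_per_sec(1_000_000))  # ~4096 MB/s for random reads
```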
Torn Write Protection is supported for 4 KiB, 8 KiB, and 16 KiB blocks. Target storage-intensive workloads include relational and non-relational databases, search engines, file systems, in-memory analytics, batch processing, streaming, and so forth. These workloads are generally very sensitive to I/O latency, and require plenty of random read/write IOPS along with high CPU performance.
AWS server chassis containing an AWS Nitro SSD.
EC2 I4g instances are available today in the US East (Ohio, N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions in On-Demand, Spot, Reserved Instance, and Savings Plan form.
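For anyone wanting to try the new instance type, an I4g instance launches like any other EC2 instance; a minimal boto3 sketch follows, with the AMI ID and key pair as placeholders. The Nitro SSD volumes appear as local NVMe block devices on the instance and, being instance store, are ephemeral.

```python
# Hypothetical example: launching an i4g.4xlarge instance with boto3 in the
# US East (Ohio) Region. The AMI ID and key pair name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Amazon Linux AMI
    InstanceType="i4g.4xlarge",
    KeyName="my-key-pair",            # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```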
…
Microsoft Azure Ebsv5 and Ebdsv5 VM series instances are the first Azure VM series to support the NVMe storage protocol. NVMe support enables them to achieve the highest Disk Storage IOPS and throughput of any Azure VMs to date, we’re told. More info here.
…
Backblaze has announced Q1 cy2023 results for its cloud-based backup and general storage services. Revenues of $23.4 million were up 20% with a loss of $17.1 million vs the year-ago $12.1 million loss. Computer backup revenues rose 8% to $13.4 million but B2 cloud storage rose 42% to $10 million. B2 revenues could eclipse backup revenues next quarter. CEO Gleb Budman said: “We were pleased to deliver 42% year-on-year revenue growth for B2 Cloud Storage in Q1— well above that of our competitors like Amazon Web Services (AWS). Additionally, our Q1 results support our goal to approach Adjusted EBITDA breakeven in Q4 of 2023.”
Backblaze expects Q2 revenues to be between $24.1 million and $24.5 million.
…
Block data migrator Cirrus has announced the availability of Cirrus Migrate Cloud with a Cloud Recommendation Wizard in the Azure Marketplace. The pitch is that Cirrus Migrate Cloud (CMC) enables organizations to migrate block-level storage data from any source to Azure, and between Microsoft Azure Disk Storage tiers, with near-zero downtime.
…
DataStax has launched Luna ML, a new open source support service for machine learning and AI. It is aimed at companies that want to use their data for ML projects based on Kaskada, the newly open-sourced real-time event processing engine that supports real-time AI. The service should make it easier to get started with these kinds of projects, as well as to combine Kaskada with other open source projects for larger ML initiatives, such as Cassandra for running a feature store or Pulsar for data streaming.
…
Grafana Labs has announced updates to its fully managed Grafana Cloud observability platform; specifically a new Adaptive Metrics feature, which enables users to aggregate unused and partially used time series data to lower costs. It’s now available for broader public access. Read a blog about it.
…
Michael Hay has rejoined Hitachi Vantara as VP Technology and Research, coming from Teradata. He was previously VP and Chief Engineer at Hitachi Vantara from 2013 to 2018.
…
Data security provider Immuta has announced a strategic investment from Databricks Ventures, the investment arm of data lakehouse supplier Databricks. It comes after a year of growth in which Immuta reported a 200% increase in Annual Recurring Revenue (ARR) for its Data Security Platform SaaS offering as it expanded into EMEA and APAC. The investment will go towards product innovation to strengthen the integration between the two platforms and new go-to-market initiatives to increase enterprise adoption.
…
At OpenWorks 2023, MariaDB unveiled its vision to bring the power of distributed SQL through its Xpand database to both the MariaDB/MySQL and PostgreSQL communities. CEO Michael Howard said the intent is “to take databases to new heights of scale and resilience, in any cloud at a fraction of the cost of competitors.”
…
Large (>20GB) file transfer provider MASV has revealed lower prices, new enterprise plans and compliance with SOC2 Type II. MASV previously announced ISO27001 and TPN, with HIPAA coming soon. Its new MASV Professional membership cuts its customers’ costs by 20 percent. Egress data over 200GB each month is billed on a flexible usage-based discounted plan of $0.20/GB. All MASV Professional plans include unlimited users at no additional cost. User plans are available from massive.io or on AWS Marketplace.
…
MSP-focused data protector N-able has expanded the Continuity features in Cove Data Protection with Standby Image recovery in Microsoft Azure. This delivers smarter disaster recovery as a service (DRaaS), helping MSPs and IT professionals provide a full range of recovery services to end users, now including recovery in Azure.
…
The latest release of NetApp’s ONTAP has increased the performance of NFS over RDMA and GDS, and customers can get more than 171GiBps from an ONTAP storage cluster to a single NVIDIA DGX A100 compute node. If your existing ONTAP systems have the appropriate network adapters, you can add this level of performance with a free upgrade to ONTAP 9.12.1 or later. Read test config details and more in a blog.
…
Nutanix announced three data services-oriented offerings at its .NEXT conference.
Nutanix Central provides a single cloud-delivered console for visibility, monitoring and management across public cloud, on-premises, hosted or Edge infrastructure.
Unified data services across hybrid multicloud environments, enabling integrated data management of containerized and virtualized applications on-premises, on public cloud and at the Edge, with comprehensive data services for Kubernetes applications as well as cross-cloud data mobility. Multicloud Snapshot Technology (MST) enables snapshots directly to cloud native object stores, starting with the AWS S3 object storage service.
Project Beacon is a multi-year effort to deliver a portfolio of data-centric Platform as a Service (PaaS) services available natively on Nutanix or public clouds. This is an upgrade from the NDS managed Database-as-a-Service offering. The vision is of decoupling apps and data from the underlying infrastructure so developers can build applications once and run them anywhere.
Nutanix also rediscovered converged infrastructure by deciding to offer separate compute and storage nodes. This matches what Dell is doing with Dynamic AppsON, the linking of PowerStore storage arrays with VxRail HCI dynamic (compute-only) nodes to separately scale compute and storage in an HCI environment. HPE has had this for some time with its Nimble dHCI offering (now Alletra).
Nutanix Objects Storage now integrates with Snowflake, so customers can use the Snowflake Data Cloud to analyze data directly on Nutanix Objects Storage, keeping the data local.
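Nutanix has not spelled out the integration steps in the announcement, but Snowflake already supports external stages on S3-compatible object stores, so the wiring would presumably look something like this speculative sketch, driven here from the snowflake-connector-python package; the endpoint, bucket, and credentials are placeholders:

```python
# Speculative example: registering an S3-compatible object store as a Snowflake
# external stage. The endpoint, bucket, and credentials are placeholders, not
# Nutanix-published values.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myaccount", user="analyst", password="***", warehouse="COMPUTE_WH"
)
conn.cursor().execute("""
    CREATE STAGE IF NOT EXISTS nutanix_objects_stage
      URL = 's3compat://analytics-bucket/events/'
      ENDPOINT = 'objects.nutanix.example.com'
      CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
""")
```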
…
HPC storage supplier Panasas announced 50 percent Y/Y growth in total partners since the launch of a new channel strategy a year ago. In addition, more than half of Panasas’ total revenue in the past year was partner-driven, with nearly a fifth of that revenue coming from net new logos.
…
Cloud file services supplier Panzura has been named to Inc. magazine’s annual Best Workplaces list in the May/June 2023 issue. This is its second such ranking in 2023. Panzura has achieved three-year revenue growth of 485%, and in the past year has joined Inc.’s 5000 list of Fastest Growing Companies in the US.
…
Pure Storage announced the results of a new end user survey, “IT Leader Insights: The State of IT Modernization Priorities and Challenges Amid Economic Headwinds.” Ninety percent of IT buyers state that the pressure of their digital transformation agenda led them to buy technology their infrastructure could not support. Access the report here.
…
Real-time database supplier Redis announced the appointment of Spencer Tuttle as its Chief Revenue Officer (CRO). He joins from business intelligence analytics company ThoughtSpot, bringing more than 15 years of tech industry experience in growing revenues and expanding operations.
…
Seagate announced the launch of its cloud import service in the UK, as part of Lyve data transfer solutions. With this new service, customers can upload large data sets to any major public cloud destination, including Amazon S3, Google Cloud Platform, Azure, IBM Cloud Object Storage, OVHcloud, Wasabi, and Lyve Cloud.
…
Snowflake is opening a new UK office and Customer Experience Centre (CEC) in London, providing a “workplace experience” for employees and an area for collaboration with prospective clients. Snowflake’s fy2023 product revenue in EMEA grew 72 percent YoY. It also expanded its team in EMEA by 68 percent, reaching 1,289 in total as of January 31, 2023. In fy2023 it continued to shift the EMEA sales team to a vertical-focused model.
…
Research house TrendForce’s latest research indicates that, as production cuts to DRAM and NAND Flash have not kept pace with weakening demand, the ASP of some products is expected to decline further in 2Q23. DRAM prices are projected to fall 13-18 percent, and NAND Flash prices 8-13 percent.
…
Cloud storage provider Wasabi strengthened its EMEA presence with the appointment of Jon Howes as VP and GM for EMEA. He will build on Wasabi’s growth in the region, highlighted by a flagship deal with Liverpool Football Club, 1,800 new partners, and a nearly 90 percent ARR growth rate. A full region go-to-market team has been established to support the growth of cloud storage, with Eric Peters as Country Manager of Benelux & Southern Europe based in France; Daniel Arabié, Country Manager of Central Europe/DACH based in Germany, and Kevin Dunn, Country Manager for the UK/I & Nordics based in the UK.