
Storage news ticker – December 14

Ascend.io says Snowflake users can put workloads on autopilot with Ascend's Data Automation Cloud. This automates data ingestion, transformation, delivery, orchestration, and observability using Ascend's DataAware intelligence. The Ascend Data Automation Cloud analyses and monitors end-to-end workflows, tracking and optimising the movement of up to trillions of records, and dynamically responds to changes in data, schema, and code within seconds, giving Snowflake customers an advanced autopilot system for their data and analytics workloads.

Cloudcasa wants people to know it’s not affected by the Apache Log4j vulnerability.

Cobalt Iron’s Enterprise Object Search (EOS) can search the entire enterprise backup landscape at the object level — paths, directories, and filenames — to identify and locate files. It allows users to determine if and when an object was backed up, the types of objects that exist, and where in the backup landscape they are stored. The result is improved efficiency when locating objects within backup repositories and validating files in the servicing of requests for data file restores.

Codenotary's open-source immudb tamper-proof database, built on a zero-trust basis, can serve as the main transactional database for enterprises. It offers full ACID transactional compliance, and version 1.2 adds the ability to roll back changes and to have data expire. The immudb database is now compliant with the General Data Protection Regulation (GDPR), including its "right to be forgotten" requirements. Data in immudb comes with cryptographic verification at every transaction to ensure no tampering is possible. There have been more than 12 million downloads of immudb and more information can be found here.

Commvault says the UK’s North Bristol NHS Trust (NBT) is protecting its Office 365 data with Metallic’s SaaS backup for Office 365 running three times a day.

Continuity Software has published a list of storage products whose software uses vulnerable Apache Log4j functionality. Log4j is an open-source logging library written in Java that is widely used in software packages and online systems. If unpatched, it can be exploited to run attacker-supplied code remotely, for example to turn affected devices into cryptocurrency miners. Continuity's list can be found here.

Seagate has updated its Exos AP controllers, which power its AP-2U12, AP 2U24 and AP 5U84 storage arrays, with Gen 2 AMD EPYC processors with core counts of 8, 12, or 16 for varying levels of performance. There are dedicated PCIe 4 lanes, delivering 200GbitE network connectivity, and high bandwidth to SAS controllers for faster HDD and SSD response. The system supports 25GbE on the motherboard providing base I/O — often an added cost on competitor platforms.

Seagate Exos AP 5U84 chassis.

Seagate says the new AP-BV-1 controller offers exceptional compute and storage performance in a single chassis. With dual EPYC processor-based controllers, the system delivers high availability or controller partitioning, with the flexibility of a common controller slot allowing connection to additional Exos E SAS expansion units in matched chassis.  The architecture is perfectly balanced for current and future CPUs and drive capacities. The Exos AP options with the new AP-BV-1 controller are available this month. More info here.

Storage Made Easy expands across the Middle East and Africa markets through a strategic distribution agreement for its Enterprise File Fabric software with SecureNet, a Value Added Distributor based in Dubai. The focus is on providing multi-cloud content collaboration with strong cybersecurity and compliance across a company's entire data pool.

StorCentric announced version 7.0 of its Nexsan Unity software, which includes enhancements for security, compliance and ransomware protection. Version 7.0 supports the Object (S3) protocol, immutable snapshots and object locking, and provides up to a 40 per cent performance increase over v6.0 Nexsan Unity. Total throughput has increased to up to 13GB/sec on existing platforms. There is also an up to 50 per cent increase in the Unity-to-Assureon backup system ingest rate. Unity now supports Block (iSCSI, FC), File (NFS, CIFS/SMB) and Object (S3) protocols. It also supports pool-scrubbing to detect and remediate bit rot and so avoid data corruption.

Teradata says its data warehouse has plugins for Dataiku to enable analytics and data science teams that use Dataiku to implement analytic functions within the Teradata Vantage data and analytics platform. Dataiku end users now have an easy-to-use interface for Vantage analytic functions and can tie them directly into the data science workflow. The plugins take the Dataiku end user configuration and send this back to the Vantage system for processing analytics at scale.

Kaiserslautern-based ThinkParQ, the company behind the BeeGFS parallel file system, announced a collaboration with Huawei to deliver Arm compatibility and support. Huawei’s Kunpeng server processor is based on the Arm AArch64 architecture and provides multi-core processing, abundant I/O, PCIe 4.0/CCIX and hardware acceleration capabilities. BeeGFS is able to fully utilise the hardware and saturate the network on a setup with four server machines and six client machines, all on a 100Gbit/sec InfiniBand connection — for a maximum network bandwidth of 48GB/sec. ThinkParQ measured actual throughput of 47.7GB/sec for reads and 46.3GB/sec for writes.
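As a rough sanity check on that ceiling (our arithmetic, assuming roughly 12GB/sec of usable bandwidth per 100Gbit/sec InfiniBand link):

\[
4 \times 12\,\text{GB/s} = 48\,\text{GB/s}, \qquad \frac{47.7\,\text{GB/s}}{48\,\text{GB/s}} \approx 0.99
\]

In other words, the measured read throughput sits at about 99 per cent of the theoretical network maximum.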

Veritone, which ships aiWARE, a hyper-expansive enterprise AI platform, announced a partnership with cloud data warehouser Snowflake. Veritone customers can access Snowflake’s Data Cloud and partner ecosystem within the aiWARE application. Snowflake customers can now query and analyse all data within Snowflake’s platform or within aiWARE’s cognitively-enabled applications, including hard-to-reach data that is typically inaccessible without human intervention.

Rockport gets co-CEOs and $48 million extra funding

Datacentre networking upstart Rockport Networks has taken in $48 million in new but non-VC funding, and recruited a second CEO to work alongside existing CEO and co-founder Doug Carwardine.

Rockport came out of stealth mode in October, with end-point network cards that incorporate switch functionality, theoretically rendering traditional and current network switches redundant. The new investment was led by Toronto-based Northern Private Capital with participation from current investors. It will accelerate Rockport’s go-to-market plan and expand its sales and marketing efforts.

Marc Sultzbaugh.

Marc Sultzbaugh has been appointed as the co-CEO, and he said: “The switching network in today’s datacentres is fundamentally broken. … We’re applying new thinking to the systemic issues of network congestion and performance that plague advanced computing workloads so that customers can expect more predictable network performance and much greater utilisation of their compute and storage resources.”

Carwardine said: "The network market is ripe for change and we're experiencing tremendous momentum since the launch of our switchless network solution a few short weeks ago. With the addition of Marc and this latest round of funding, we're building an even deeper bench to work hand-in-hand with our customers and ecosystem partners."

Rockport has eschewed VC funding, preferring founder funding, grants and private investments, which prevent any loss of board control to VCs impatient to scale the business quickly for an exit. Its last funding event was in 2019, with $2.9 million of debt financing and a $12 million grant. Carwardine tells us: "We've never taken any venture money. … We started taking institutional money last fall [and] we've raised just under 100 million so far," in total funding.

Compared to VC funding, “this is a more friendly, flexible patient route. … It can be pretty difficult to operate within the restrictions of some of the timelines that you get forced upon [when] you go [the VC] way.”

Doug Carwardine.

Sultzbaugh actually joined Rockport’s board a year ago after resigning/retiring from Mellanox, where he spent more than 19 years — mainly focussed on sales, and finishing up as SVP for sales and marketing. He started out at Bell Labs back in the ’80s, in semiconductor process engineering. Then he moved over to the business side, telling us he was “part of the team that launched AT&T Microelectronics, which was really selling its semiconductor capability to the open market for the first time.” 

This was followed by a stint at fabless semiconductor company Birchtree in the ’90s, before he joined Mellanox in early 2000. He said Mellanox “was at a very similar stage to where Rockport is today. I had an amazing 20 years around there. I got to experience all the ups and downs of being a disrupter and trying to introduce a new capability into the industry and a successful IPO and then ultimately grew that business into a successful acquisition by Nvidia.” 

Why did Carwardine need a co-CEO? "I've been sort of poking at him in the chest for six months or so. Can you please help me? And thankfully, he's agreed to participate."

Carwardine said there's "a lot going on, as you can imagine, financing this and running the business. So Marc has a great skillset that I do not and he's going to handle the growth. Essentially the business sales and marketing [and] perspective of product management. I'll take the finance and the R&D component of the business and continue to move that along. I think it's a very good complement and Marc's obviously got the skillset from Mellanox."

Sultzbaugh concurred. "We felt like this was the right thing for the company. And … we're really equal, really balanced, we were involved in all the decisions, but we definitely have our areas of expertise and so we lead in that way."

Rockport now has around 170 employees and its technology is in use with multiple customers including Frontera, the number one academic supercomputer, located at the Texas Advanced Computing Center at the University of Texas at Austin.

NetApp’s all-flash range-topping A900: new stuff in an old box

NetApp has a new high-end all-flash array, the A900, which more than doubles the capacity of the previous top-end A800 system and provides a performance upgrade for the existing A700.

NetApp A700 chassis.

The details, such as they are, can be seen on a NetApp datasheet. The company says its AFF A900 offers organisations the highest data storage performance to accelerate their business-critical enterprise database and application needs, the security and reliability to keep customer data highly available and secure, and the simplicity and flexibility that agile organisations require.

Here is a table comparing the current focus AFF arrays — A250, A400, A800 and the new A900 — plus a comparison column showing the A700. This is the A700 as it stood before its October 2020 upgrade, which gave it NVMe SSD and NVMe-over-Fabrics support.

We say “focus” AFF arrays as there have been other AFF arrays — such as the A300, A320, and A700s — which are not shown in this version of NetApp’s AFF range spec sheet.

The main differences in the table between the A900 and A700 are eight more 100GbitE target ports, and NVMe/TCP and NVMe/FC support. NetApp does not reveal controller CPU details but we think the A900 gets gen 3 Xeon controllers; it is a dual-controller array like all the other AFF models.

In March 2017 the A700 provided up to 7 million IOPS per NAS cluster (24 nodes), and was rated at 9.2 million IOPS in June 2018. The A900 delivers up to 14.4 million IOPS in a NAS cluster — a 5.2 million IOPS difference, which is near enough to NetApp's claim of 50 per cent more performance and clearly points to a controller CPU upgrade. The A900's 300GB/sec cluster throughput is the same as the A700's and A800's.
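To show the working (our arithmetic, based on the figures above):

\[
14.4 - 9.2 = 5.2, \qquad \frac{5.2}{9.2} \approx 0.57
\]

So the A900's cluster rating is roughly 57 per cent higher than the A700's June 2018 figure, comfortably clearing the 50 per cent claim.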

NetApp suggests the A900 matches the needs of high-end enterprise Oracle, Microsoft SQL Server, MongoDB databases, VDI, and server virtualization workloads. It comes with ONTAP Enterprise Edition data management software, which is pre-packaged with unified support for SAN, NAS, NVMe-oF, and S3, plus built-in data protection with optional anti-ransomware capabilities.

A blog by NetApp’s Cheryl George, a technical marketing engineer, indicates the A900 has a “High-resilience design with … isolation of controllers from I/O modules, carrier-grade chassis, and hot pluggable cards.”

Pure Storage announced its new high-end FlashArray//XL product earlier this month. This supports PCIe 4, a feature which is not highlighted by NetApp with the A900.

The //XLs also support 64Gbit/sec Fibre Channel while the A900 is limited to 32Gbit/sec — half the speed, but the focus is increasingly on NVMe connectivity so the FC difference could be moot. Pure says the //XLs are for the most demanding, mission-critical workloads, like SAP HANA, Oracle, SQL Server and VMware — the same as the A900.

Without detailed configuration information we cannot compare the two systems in any realistic way at all.

The AFF A900 is available as a non-disruptive, in-chassis, upgrade to existing A700 customers, and ONTAP software automatically applies firmware updates. 

Backblaze revenues surge 25 per cent as mid-market strategy pays off


Cloud storage and backup supplier Backblaze recorded 25 per cent revenue growth in its third calendar 2021 quarter to  $17.3 million.

This was the company’s first post-IPO set of results and there was a net loss of $6 million, which compares to a $1.9 million loss a year ago. Its recently-launched B2 Cloud storage business saw revenues grow to $6 million, while the established Computer Backup business earned $11.2 million in revenues.

CEO Gleb Budman’s results statement reads: “We delivered continued strong Q3 growth overall, led by rapid 59 per cent revenue growth in B2 Cloud Storage and consistent double-digit growth of 13 per cent in Computer Backup.”

He pointed out: “Our successful IPO in November was an important milestone for the company and a recognition by the markets of the mid-market public cloud storage opportunity. We believe the future is being built on independent clouds, and we plan to use our IPO proceeds to help accelerate future growth in this large and fast-growing market.”

Although the loss deepened, there was good news on the gross profits front. It saw $8.8 million — 51 per cent of revenue — compared to $6.7 million and 49 per cent of revenue in Q3 2020. Cash and cash equivalents were $4.7 million at the end of the quarter, and this sum does not include net cash proceeds of $103 million from the November 2021 IPO.

William Blair analyst Jason Ader summed up Backblaze like this: “As the B2 [Cloud Storage] mix rises over the next two years, and as the bootstrapped firm deploys its new growth capital toward sales and marketing, we expect revenue to accelerate (we model growth of 24 per cent, 26 per cent, and 31 per cent in fiscal 2021, 2022, and 2023, respectively). Our view is that Backblaze is a pure-play on SMB cloud adoption, with investors getting a high-growth, independent cloud storage platform supported by a cash-cow computer backup business that can also drive cross-selling opportunities.”

Backblaze expects revenues of between $17.7 million and $18.2 million for the fourth and final 2021 quarter. That would mean full-2021 revenues of $67.1 million at the mid-point.
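Showing the working (our arithmetic from the company's figures):

\[
\frac{17.7 + 18.2}{2} = 17.95, \qquad 67.1 - 17.95 = 49.15
\]

In other words the Q4 guidance mid-point is $17.95 million, implying roughly $49.2 million of revenue across the first three quarters of 2021, including the $17.3 million Q3 just reported.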

Wikibon’s estimated enterprise storage supplier revenues for 2021.

The company is minute in size compared to cloud service-providing hyperscalers, such as Amazon Web Services, which earned $16.11 billion in its third quarter — a figure which rose 39 per cent year-on-year. Analyst company Wikibon estimated that AWS storage revenues for all of 2021 will be $9 billion, and wrote: "We believe AWS storage revenue will surpass $11 billion in 2022 and continue to outpace on-prem storage growth by more than 1,000 basis points for the next three to four years."

Backblaze may be sized like a pilot fish circling the Amazon shark, but its successful IPO and growth show that there is cloud storage life outside AWS and its hyperscaler competitors Azure, GCP and Alibaba. Competitors like Backblaze can thrive and prosper because AWS and its ilk price their storage services high enough to give Backblaze profitable pricing headroom — so to speak. For example, Ader points out that Backblaze's B2 Cloud Storage "is one-quarter the price of AWS's S3 service."

Predictions for 2022

’Tis the season for predictions — usually vendor predictions — and they always seem to predict, astoundingly, that demand for the vendor’s products will rise because the vendor’s strategic vision is correct. We thought we would try to make our own predictions.

But first a warning. I bought shares in Violin Systems in one of its incarnations — being impressed by the then-CEO, Kevin DeNuccio. Violin tanked and so did the shares, and so did my trust in my own judgement over such things. So take these predictions as a bit of fun.

1. There will be IPOs for Cohesity, Databricks, Druva, Rubrik and WekaIO.

Data protection-stroke-management has enabled Cohesity, started up in 2013, and Rubrik, founded in 2014, to grow very quickly indeed, and investing VCs must be baying for exit blood. They have invested more than $550 million in Rubrik and $660 million in Cohesity. Rubrik has been strengthening its board — an encouraging sign. 

Data protector Druva, founded in 2008, has grown impressively too and has adapted well to the move to SaaS delivery. There is $475 million of invested VC capital in Druva, waiting to be set free in a glorious exit.

Seven-year-old Databricks is part of the data analytics bandwagon, with the golden example of Snowflake’s IPO out there to emulate. Its total invested VC capital is an awesome $3.6 billion, making Cohesity, Druva and Rubrik look like VC-backed also-rans.

WekaIO is a file system phenomenon. It’s another 2013-era startup and has taken in just $67 million — a comparative minnow. It hired an exec chairman and also a president in August and has become the file system to beat in benchmarks such as STAC, and has won an eight-figure deal in the last couple of weeks.

2. 22TB HDDs and 24–25TB SMR HDDs appear as the incremental disk drive development flywheel just carries on spinning.

The disk drive manufacturing triopoly members have all made announcements about making stepwise capacity advances towards a 30TB or so gateway to HAMR technology. 

Western Digital is talking about 10- and even 11-platter drives, which would add around 2.2TB per platter to current 9-platter 20TB drives. It has its OptiNAND controller flash and firmware technology and Toshiba has its MAS-MAMR providing rungs in the capacity ladder reaching upwards to 30TB.

Seagate is actively developing HAMR but keeping capacity pace with Western Digital in conventionally recorded drives.

It is a golden era in hard disk drive developments.

2a) Toshiba will introduce a 20TB disk drive.

The company is at the 18TB level while both Seagate and Western Digital have 20TB offerings. Toshiba has its MAS-MAMR technology and we are convinced a 20TB drive will be announced by the middle of 2022.

3. Ruler format SSDs appear in servers and arrays.

This is a little behind the curve, as they have already started appearing — for example Inspur has launched a ruler-supporting reference server system with Samsung. DDN has also said it will support EDSFF drives. These drives enable more flash capacity in standard chassis than the classic 2.5-inch (U.2) and M.2 drive formats. 

4. 3D NAND layer counts creep past 200 as new technology levels in NAND manufacturing are reached.

SK Hynix introduced 176 layers this year while Kioxia and Western Digital reached 162 with their BiCS 6 technology. We think we may hear about BiCS 7 and, say, 212 layers, in 2022. 

Samsung reached 176 layers this year with its V7 tech, and V8 with 200-plus layers beckons. Micron reached the 176-layer count in November 2020, so its next step, to a new generation of its 3D NAND, will probably break past the 200-layer level as well.

5. One or more of AWS, Azure and the Google Cloud Platform will provide their own in-cloud backup services.

Several suppliers already provide backup services for applications running in the public cloud, such as Clumio. Cohesity has its offering and so too does Druva.

It seems to us that the big three cloud vendors — AWS, Azure and GCP — are leaving money on the table. It’s not inconceivable that one of them may buy a data-protection vendor to get the technology they need. After all, GCP bought Elastifile to get the in-cloud filesystem technology it needed.

6. VCs will pump money into data lake and allied analytics software.

The example of Snowflake's IPO and Databricks' incredible $3.6 billion funding will make any software startup with technology to access, store, manage and analyse data better look attractive to VCs. Blinded by the Snowflake/Databricks light, they will pour money into the likes of Airbyte, Ahana, and Dremio.

7. Both Infinidat and VAST Data will emerge as major high-end storage array and filer suppliers.

Infinidat has weathered the CEO handover from Moshe Yanai to Phil Bullinger. The new CEO is emphasising the channel more and Infinidat’s memory-caching technology works with all-flash capacity storage as well as all-HDD. Bullinger has extended Infinidat’s exec team and hired IBM’s chief storage marketeer, Eric Herzog, to do the same job for Infinidat. As Infinidat’s high-end block access array fortunes rise, so too will IBM’s storage hardware sales fall, until a new mainframe gets the DS8000 moving again.

VAST Data is a phenomenon. It's notching up huge sales wins for its all-flash file storage, like seven-figure wins at the Department of Defense, and was valued at $3.7 billion in an $83 million fourth round of funding in May.

Back then we said: “It launched its first product in 2019 and ended its second fiscal year with nearly 100 customers and a near-$100 million run rate of annualised software revenues, quadruple that of 2020. It was also cash flow-positive.” VAST has exited the hardware business and launched an initiative to go into the fast restore business — the turf that Pure’s FlashBlade has had all to itself up until now.

We think that both Infinidat and VAST Data will be thinking about their IPO prospects. We also think other companies will be looking at them as possible acquisition targets.

8. Either Komprise or Hammerspace may be acquired.

The reasoning here is that unstructured data amounts are growing and growing and growing. The ability to keep track of huge populations of files — think trillions — and huge multi-exabyte-plus capacities while using tiering to manage costs will become more and more important. Ocient in the structured data space is similarly well-positioned though at an earlier stage of corporate development.

9. Cisco will exit the server market.

It is no longer a large enough portion of its business to worry about. The UCS presence in the hyper-converged market is nearly nonexistent. Cisco's UCS products are getting squeezed between Dell EMC and HPE on the one hand, and Supermicro and ODM manufacturers on the other. Sell it off, Mr Robbins, and use the capital more productively elsewhere.

10. The flash/disk crossover will not happen.

Flash costs per TB are not declining fast enough to overtake declining HDD cost per TB. Both Seagate and Western Digital are confident that any crossover, with flash becoming less expensive in cost per TB terms, is at least a decade away. The flash industry is not making positive noises about penta-level cell flash (5 bits/cell), and the marginal gains from 3D NAND layer count increases appear to be diminishing.

11. PCIe 4 will become mainstream and PCIe 5 will start making inroads.

This will pave the way for CXL and server CPU-memory disaggregation. Exactly how this will play out is unclear.

12. Computational storage and DNA storage will make incremental progress, but neither will experience a breakout.

The former because it doesn’t yet have killer use cases and the latter because it is simply too slow, no matter its incredible density. 

Nvidia BlueField 2 SmartNIC.

SmartNICs and DPUs will also make incremental progress, as will composable systems software. Other minor predictions are that Intel will re-commit to Optane and introduce a third-generation technology with an increased layer count. No container storage startup will make a breakthrough, as all the major storage suppliers have their own Kubernetes technology.

That’s it. Shoot them down if you wish. There’s a target on my back ready and waiting.

Storage news ticker – December 13


Cloud-native data streaming platform Confluent announced the availability of Confluent Data Streaming Service on Alibaba Cloud. That means customers in mainland China now have a data-streaming platform to harness the flow of real-time data across entire organisations. It's available in the Alibaba Marketplace.

A Microsoft Windows 11 update, KB5007262, "Addresses an issue that affects the performance of all disks (NVMe, SSD, hard disk) on Windows 11 by performing unnecessary actions each time a write operation occurs. This issue occurs only when the NTFS USN journal is enabled. Note, the USN journal is always enabled on the C: disk." That means both hard disk drives and SSDs are affected. There is background information on this slow drive write speed issue at MSPoweruser.

TrendForce expects server DRAM prices to decrease by about 8–13 per cent in the first calendar quarter of 2022 due to a slowdown in procurement activities. It expects this to be the most severe decline of any quarter in 2022.

VMware has announced Amazon FSx with NetApp ONTAP Integration for VMware Cloud on AWS. With this capability, you can attach a fully-managed NFS datastore built on NetApp’s ONTAP file system to the VMware Cloud on AWS SDDC and scale the storage environment as needed without the need to purchase additional host instances. Amazon FSx for NetApp ONTAP offers file storage with compression and deduplication to help reduce storage costs. It provides ONTAP’s data management capabilities, like snapshots, clones, and replication across a hybrid cloud environment that will, VMware says, improve staff productivity and responsiveness.

Active data replicator WANdisco announced a contract with a top-five UK banking group to migrate an initial 500TB to AWS using WANdisco's LiveData Migrator platform. The bank will maintain a hybrid cloud between on-premises and the cloud for several years, and this is the first of four identified use cases — representing in aggregate more than 3PB of data — that have been highlighted by the bank. The initial contract will be a subscription licence for 500TB of data, with a view to moving to a 'commit to consume' relationship in the future as further use cases are introduced.

Learning from history: QNAP’s ULINK AI tool predicts SATA drive failures

Taipei-based QNAP has a machine learning-based SATA disk and SSD drive failure predictor for its NAS systems that means you can replace the drives before they fail.

It has a partnership with ULINK Technology — a supplier of IT storage interface test tools — which enabled QNAP to launch the DA Drive Analyzer. This is a cloud-based tool with a user portal providing access to statistics generated from historical usage data of millions of drives provided by users. It is able to find drive failure events that wouldn't be flagged by traditional diagnostics tools that rely on SMART.

Joseph Chen, CEO of ULINK Technology, issued a statement: “Artificial Intelligence is a new technology that has tackled many real-life problems. By applying this technology to disk failure prediction, ULINK can actively and continuously monitor drives, detect problems, predict failures, and notify end users with our unique cloud-based data processing system.”

QNAP product manager Tim Lin said: "QNAP is acutely aware that potential server down time is a critical concern for QNAP NAS users and sudden drive failure is one of its primary causes. We are honoured to have the chance to partner with ULINK to develop the DA Drive Analyzer to help users, especially IT staff who must manage large numbers of NAS devices."

QNAP says its NAS systems track raw drive health data, record system events such as temperature and IOPS test results, and upload this data to the cloud AI engine daily. After collecting at least 14 days of usage data within the last 20 days, the AI will have sufficient data to analyse drive failure possibility in a newly-installed or monitored NAS system.

It will then provide recommendations based on an AI trained with over a million drives’ analysis data.
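For illustration only, and not QNAP's or ULINK's actual code, the eligibility rule described above ("at least 14 days of usage data within the last 20 days") boils down to a check along these lines; the function name and inputs are a hypothetical sketch in Python:

```python
from datetime import date, timedelta

def has_enough_history(upload_dates, today=None, window_days=20, required_days=14):
    """Return True when there are at least `required_days` distinct days of
    uploaded usage data within the trailing `window_days`-day window,
    mirroring the 14-of-the-last-20-days rule QNAP describes."""
    today = today or date.today()
    window_start = today - timedelta(days=window_days)
    # Count distinct days with an upload inside the window.
    recent_days = {d for d in upload_dates if window_start < d <= today}
    return len(recent_days) >= required_days

# Example: a NAS that has uploaded health data on each of the last 15 days qualifies.
uploads = [date.today() - timedelta(days=i) for i in range(15)]
print(has_enough_history(uploads))  # True
```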

The DA Drive Analyzer’s UI is much better than general interfaces to drive SMART (Self-Monitoring, Analysis and Reporting Technology) data, as a screenshot shows:

Drives are given four ratings: normal, warning, faulty and critical. ULINK warns that the drive health information displayed by DA Drive Analyzer may be one or two days old due to the time it takes to process and transfer data.

All QNAP NAS with QTS 5.0/QuTS hero h5.0 (or later) are supported and all QNAP expansion units (except the TR series) are also supported. Only SATA drives, both disk and solid state, are supported — but some drives may not be supported due to firmware or manufacturer settings. Full ULINK lists are available for disks and SSDs. To be clear, SAS and NVMe drives are not currently supported.

A Windows application, the DA Desktop Suite, enables admin staff to monitor several devices for multiple users, which can be a time saver.

The DA Drive Analyzer can be downloaded from the QNAP App Center. A free trial of DA Drive Analyzer is available (until March 5, 2022) for those who sign up for an annual subscription.

Comment

This is one of those developments that gets you thinking: “Of course! Why hasn’t it been done already?” Such a tool should be added to all drive array manufacturers’ support systems. In fact, they could go further and have hot spares already installed in the arrays with software to migrate data before the failing drive fails.

A second DA Drive Analyzer screenshot.

Recovery after drive failure is generally supported. NetApp supports hot spares, for example, in its E-Series arrays, and these receive reconstructed data from failed drives, not about-to-fail drives. Infinidat has an Infinispares feature to enable its arrays to keep running in a situation where dozens of disks have failed across a long period of time.

QNAP’s ULINK-based advance is to enable recovery before drive failure. Surely all drive array manufacturers can follow its lead. We might also expect disk drive and SSD manufacturers to co-operate with storage array suppliers in developing AI-based predictive drive failure and recovery systems.

Storage news ticker – December 10

Broadcom revenues for its fourth fiscal 2021 quarter were $7.4 billion, up 15 per cent year-on-year, with $1.99 billion net income compared to $1.32 billion a year ago — a rise of 50.8 per cent. Full year revenues were $27.45 billion, up 15 per cent again, and $6.74 billion of profit — a 150 per cent year-on-year increase. It expects revenues to be in the $7.6 billion area next quarter. The infrastructure vendor pulled in $815 million in server storage connectivity revenue in the quarter, a 21 per cent year-on-year increase due to strong enterprise demand for HBAs and controllers. The earnings call revealed that its nearline storage business has a near-billion-dollar run rate. It says its next-generation storage connectivity portfolio (SAS 4, PCI Express 5 and NVMe) should help it "accelerate growth in our server storage connectivity revenue in Q1 to approximately 30 per cent year-on-year."

Total Couchbase revenues in its Q3 FY2022 were $30.8 million — up 20 per cent year-on-year. There was a loss of $15.9 million, deeper than the year-ago $10.15 million loss. Annual recurring revenue (ARR) was $122.3 million, an increase of 21 per cent year-over-year. It expects revenue between $33.9 million and $34.1 million next quarter. Matt Cain, president and CEO of Couchbase, said: “Our strong third quarter performance was driven by ongoing large deal momentum, including some significant expansions, as well as acceleration of our cloud business. We also delivered solid top line growth … We continue to see demand for our modern database as digital transformation remains a priority across industries, and are excited about the market opportunity for Capella which makes it faster and easier to consume Couchbase in the cloud.”

Cloud backup provider Datto hosted an investor day. Jason Ader, a William Blair analyst, reported: “Datto sees potential for reacceleration to sustainable 20 per cent revenue growth and is targeting $1 billion-plus in revenue by full year 2024. With the core backup/recovery (Continuity) business growing steadily in the midteens (and representing 60 per cent of subscription revenue), the main growth engine is expected to come from user-based offerings, which include SaaS Protection, RMM, and security. This segment, which is growing today in the low-40 per cent range, should account for an increasing mix of Datto’s revenue over time.”

Digistor is rolling out commercial-class PCIe 4 NVMe SSDs in the M.2 form factor with 500GB to 4TB capacities. They have up to 7.2GB/sec read and 6.85GB/sec write bandwidth. Non-encrypted commercial PCIe 4 SSDs are available now, with self-encrypting TCG Opal 2.0 versions available in Q1 2022.

Exascend, a provider of industrial, enterprise and cinematography storage, announced its PI4 series of PCIe 4 NVMe SSDs, available in the M.2, U.2 and E1.S form factors with Kioxia and Micron 3D TLC NAND. It says the PI4 series features ultra-high capacity (960GB to 7.68TB) and best-in-class energy efficiency. The drives deliver sustained transfer speeds of up to 3.5GB/sec and 4K random speeds of up to 600,000 IOPS, come in M.2 2280 and U.2 form factors, and can operate in temperatures from -40 to 185°F (-40 to 85°C). Exascend suggests they are a good fit for rugged edge servers, autonomous driving systems, surveillance, and big data logging systems.

IDC has issued a semi-annual Software-Defined Infrastructure Tracker that, no doubt, contains interesting supplier/product level detail information in its three categories: software-defined compute (54 per cent of total market value), storage controller software, and networking software. IDC actually uses the terms software-defined storage controller software (35 per cent of total market value), and software-defined network virtualization and SDN controller software (10 per cent of total market value), which seem excessively and redundantly wordy. Whatever, the publicly released information is frustratingly top-level and not really informative — the worldwide SDI market reached $6.5 billion during the first half of 2021, which was a 10.7 per cent increase year-on-year. Well, golly gee. 

Mike Shapiro and Jeff Bonwick, formerly of DSSD and Sun/Oracle, are involved in a startup called Iodyne, which is building a consumer storage product packed with NVMe technology. They have just announced a Pro Data SSD product featuring eight Thunderbolt ports and 12x NVMe SSDs (totalling 12 or 24TB) outputting 5GB/sec. They say: "Pro Data is the fastest Thunderbolt storage for M1 Macs, and the fastest Thunderbolt RAID array." You can daisy-chain up to six Pro Data devices per host port for up to a whopping 576TB of solid-state awesomeness. See the video below for more info.

NAS supplier iXsystems has appointed ex-DDN Tintri CMO Mario Blandini as its VP of Marketing. He’s previously worked at DZSi, SwiftStack (acquired by Nvidia), Drobo, Brocade, Rhapsody Networks, and SANrise. Blandini will support the adoption of TrueNAS storage as iXsystems expands deployments in SMB and enterprise IT environments. 

MongoDB revenues in its Q3 FY2022, ending October 31, were $226.9 million, up 50 per cent year-on-year. Subscription revenue was $217.9 million, an increase of 51 per cent year-over-year, and services revenue was $9.0 million, an increase of 35 per cent year-over-year. There was a net loss of $81.3 million, slightly worse than the year-ago loss of $72.7 million. It expects Q4 revenues to be between $239 million and $242 million. William Blair analyst Jason Ader said these numbers were "blow-out results and guidance." He said: "Top-line growth of 51 per cent was the fastest growth in over two years, with both Enterprise Advanced and Atlas seeing year-over-year acceleration (EA up 19 per cent and Atlas up 85 per cent, its fourth consecutive quarter of accelerating growth)." He added that "large enterprises are now standardising on MongoDB for an increasing number of workloads and viewing the company as a long-term strategic partner."

A NetApp Future is Hybrid survey looked at 83 global customers representing organisations across multiple industries that have deployed hybrid cloud architectures. It found 77 per cent of respondents indicate they plan to operate as a hybrid cloud for the foreseeable future. A majority believe the hybrid cloud model improves infrastructure flexibility and scale. A majority also use hybrid cloud for protection of their data. They start either by moving secondary storage to the cloud for use cases such as data protection or by moving less business-critical workloads such as application development and DevOps to the cloud. They then move on to more critical workloads. A majority also thought hybrid cloud flexibility enables them to innovate faster.

Oracle’s Q2 FY2022 revenues were $10.4 billion, up 6 per cent year-on-year. Total Cloud Revenue (IaaS plus SaaS) was up 22 per cent to  $2.7 billion. Cloud services and license support revenues were up 6 per cent to $7.6 billion. Cloud license and on-premise license revenues were up 13 per cent to $1.2 billion. But Oracle’s Q2 GAAP results were adversely impacted by the payment of a judgment related to a ten-year-old dispute surrounding former CEO Mark Hurd’s employment. There was a loss of $1.26 billion compared to a profit of $2.44 billion a year ago.

William Blair analyst Jason Ader mentioned: “longer-term competitive positioning concerns. For example, despite the early success of Oracle’s highly touted autonomous database, Oracle remains a database market share donor (7 points of share loss between 2018 and 2020, according to IDC, with continued share loss expected in 2021). Moreover, while OCI is experiencing strong growth, Oracle remains far behind the big three vendors in terms of IaaS market share and the feature richness of its cloud services portfolio.”

Dario Zamarian is on the hiring trail: Pavilion Data has hired Anil Virmani as its SVP for software engineering. Previously he was VP of engineering and product management for GreenLake Cloud Services at HPE, with stints at ADARA, VMware, and Juniper Networks before that. Chethan Bachamada is appointed as Pavilion's VP of operations, leaving Jabil, where he was a business unit director, after a 20-year-plus period at the company. Pavilion has also appointed Lynn Orlando as its VP of marketing. She comes from being head of content marketing at WekaIO, and head of marketing at Stellus before that.

Pavilion hires left to right: Anil Virmani, Chethan Bachamada and Lynn Orlando.

The SNIA has formed a Zoned Storage technical workgroup and the charter members are Western Digital, Samsung, NetApp, and Microsoft. It says command interfaces for Zoned Storage have been standardised (ZAC/ZBC for SMR HDDs and ZNS for NVMe SSDs), but the specs leave flexibility in how host software interacts with the Zoned Storage IO stack and Zoned Storage devices, resulting in different application best practices depending on the use case. The Zoned Storage ecosystem will benefit from the description of common use cases, and a nomenclature around which corresponding host/device models can be described. The new workgroup will facilitate a common industry understanding of Zoned Storage use cases and create a host/device architecture and programming model, providing a framework for Zoned Storage design and enabling the development of a robust Zoned Storage solutions ecosystem.

Singapore-based digital travel platform Agoda, a Booking Holdings subsidiary, has selected VAST Data's all-flash Universal Storage as the data science backbone for its big data and machine learning environment. Agoda supports thousands of users simultaneously searching the website to find the best deals on hotels, flights and other travel reservations. It will build a cost-effective, private cloud computing environment for Apache Spark and Apache Impala processing with real-time access to VAST's S3-compatible fast object storage.

Show us what you got: Showa Denko opens Toshiba door to 30TB+ drives

Showa Denko (SDK) has developed a hard disk drive recording medium that will support Toshiba in producing 30TB-plus drives using Microwave Assisted Switching-Microwave Assisted Magnetic Recording (MAS-MAMR) technology suggested by Toshiba researchers.

Toshiba briefed Blocks & Files on its MAS-MAMR concept in June. MAS-MAMR can be used to narrow the recording tracks on a disk platter, so increasing the disk’s areal density and thus capacity. Toshiba uses Flux Control-MAMR in its current MG09 and MN09 18TB drives. 

MAS-MAMR is the use of resonance-enhanced magnetic oscillations between a spin-torque oscillator (STO) in the disk drive’s read/write head and the recording medium. The stronger oscillations facilitate writing data in narrower tracks in the media. 

Blocks & Files diagram.

Toshiba says that there has been a joint development program in which SDK, Toshiba, and TDK have proved that a combination of a TDK-developed read/write head equipped with dual spin-injection-layer, and disk recording media equipped with a new SDK-developed magnetic layer, “can substantially increase HDDs’ data-storage capacity through manifestations of the MAS effect.”

SDK will now accelerate development of disk media supporting MAS-MAMR so that Toshiba, using TDK heads, can develop disk drives up to and exceeding 30TB in capacity. Toshiba calls this second-generation MAMR technology.

SDK will also work on developing HAMR (heat assisted magnetic recording) media. Its MAS-MAMR announcement does not say that HAMR is a follow-on technology to MAS-MAMR but we can certainly infer that.

Comment

The road now seems open to Toshiba producing 20 to 30TB and higher capacity drives in the future, enabling it to keep up with Seagate and Western Digital as they use intermediate technologies on their own transition to HAMR drives. Both are already at the 20TB capacity level and Seagate is shipping first-generation HAMR drives, with the second generation in development. We expect Toshiba to introduce its own 20TB drive in the relatively near future.

Definitions

Resonance — the strengthening or amplification of some periodically applied force to an element when the frequency of the applied force is equal, or close to, the natural frequency of the medium containing the element, such as a bit area in a magnetic medium.

Spin Torque Oscillator — this device, referred to as an STO, produces microwaves radiating outwards. Electrons in a magnetised area have a spin state, tending to spin one way or another. By applying microwaves at the right frequency a resonance effect can alter the spin state.
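Put as a condition (our shorthand for the definitions above, not a Toshiba formula), the MAS effect kicks in when the STO's microwave frequency sits at or close to the natural resonant frequency of the bit area being written:

\[
f_{\text{STO}} \approx f_{\text{natural}}
\]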

Venn diagram-inspired data sharer gets great revenue growth

Vendia today announced 900 per cent year-on-year revenue growth, a pay-as-you-go self-service pricing plan, and the beta launch of Azure support in Vendia Share — the world’s first serverless service platform for Web3 applications.

Who and what is Vendia and Vendia Share?

The company name comes from the Venn diagram, the mathematical figure popularised (but not invented) in the 1880s by John Venn. Venn diagrams illustrate the logical relationships between sets of data. The name is intended to reflect the company's core mission: helping customers share code and data across companies, clouds, and technology stacks.

Shruthi Rao.

Shruthi Rao, CBO and co-founder of Vendia, told us Vendia helps people securely share data across companies and clouds in real time. Enterprises rely on Vendia for data integration, financial settlement, ML training, transaction processing, supply chain solutions and more.

Vendia Share was launched in 2020. BMW uses Vendia's service to share information with partners across its supply chain about cars being built and shipped. Vendia built a system in which inspection information, photos, and videos are collected via a mobile application at each juncture in the supply chain and stored immutably on Vendia's multi-party database. BMW can invite partners to connect, regardless of what cloud they use, without worrying about infrastructure. Partners can easily integrate with the system without a great deal of IT support.

Vendia is different from traditional approaches because it combines serverless technology with a distributed ledger to allow companies to share data between organisations and their partners, across companies, clouds, regions and different tech stacks.  

Traditionally, companies used MuleSoft or other APIs to share real-time data but it was a bilateral mechanism and could only support two parties at a time. If three or more partners were involved — such as BMW and two of its logistics partners all needing to see the latest status of a part — then you would need three MuleSoft pings. Scale this to hundreds of partners and the apparatus quickly gets complicated. Now you need a host of people to build, manage and fine-tune it. Moreover, companies cannot share files — pictures, videos, PDFs, geolocations — with their partners, rendering a complex system like this half-baked. 
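To put a number on how quickly that gets complicated (our arithmetic, assuming every pair of partners needs its own bilateral integration), the number of point-to-point links for n parties is:

\[
\binom{n}{2} = \frac{n(n-1)}{2}
\]

That works out to three links for BMW plus two logistics partners, but 4,950 links for 100 partners.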

With Vendia, BMW can create a simple data model for parts, invite the partners with whom it wants to share data and specify fine-grained parameters for each of the partners to control access, visibility and ensure compliance. This entire backend can be set up in minutes once there is a data model defined. 

Read more about the news in a Vendia blog.

Note: Before co-founding Vendia in March 2020, Shruthi Rao was running business development for AWS’s blockchain division — including Amazon Managed Blockchain (a productisation of the open source Hyperledger Fabric code) and QLDB (a centralised single-owner database with ledger-like capabilities).

Western Digital spinner boss drops tape archive invasion bomb

Blocks & Files was briefed by Ashley Gorakhpurwalla, EVP and GM of Western Digital’s hard disk drive (HDD) business, who suggested WD could move into the tape archive market with HDD technology.

He also talked about the possible flash/disk cost crossover, HAMR, multi-actuator drives, 11-platter drives and SMR (Shingled Magnetic Recording) as well — but the archive disk drive idea was the briefing bombshell as far as we were concerned.

Ashley Gorakhpurwalla

Much as CEO David Goeckeler said earlier this month, Gorakhpurwalla doesn't see a flash/disk crossover happening any time soon. HAMR drives will happen around the 30TB mark or so, and ePMR-type technologies such as OptiNAND will drive capacities up stepwise for another 10TB or so. He also said multi-actuator drives would be developed by Western Digital. So let's wave goodbye to these topics and head over into the archive space, where we'll meet 11 platters and SMR as well.

Gorakhpurwalla said: “The greatest percentage of your data is really not being accessed as much. And it starts to get colder and colder over time. … Many studies have shown that … the real challenge for the industry moving forward is to add yet another tier, to allow colder data and archive data to exist.” 

“I think the future of that is the hard drive as well, perhaps in a slightly different form or capability than today. But solving the archive problem is part of our mission.” 

If he is talking about this archival data disk drive concept to us — hacks towards the lower end of the hierarchy of who suppliers talk to about new concepts — then it must have had a fair amount of air time inside Western Digital and between WD and its largest customers already.

Here is a memory-to-tape hierarchy diagram, presented as a triangle to show that the amount of data grows as you move down-hierarchy, with access latency increasing and cost/bit decreasing in lockstep as you descend this ladder of tiers.

Gorakhpurwalla said: "I think even if you go beyond those tiers, all the way down into very little access, maybe write once read never, you start to get into a medium that still exists as we go forward in paper or other forms, perhaps even optical."

He added: “Think of a hard drive in a traditional sense, you know, the three and a half inch form factor with 9 or 10 platters [and] in the future 11 platters and … the [kind of] head stack that we have today. That’s a … combination of technologies and capabilities. Utilising … that toolbox then to go and be able to deliver a solution for different tiers in the datacentre … is part of our roadmap at Western Digital.”

This is the first time we have heard of an 11-platter disk drive possibility and it would provide a 2+ TB boost to drive capacity.

Back to the archive drive idea. "It's not something that … we're gonna launch next quarter, but over time we'll be able to do what's really important here.

"As you move forward, then I think utilising our technology and sort of co-designing and partnering with our largest customers, [at] the systems infrastructure and software level, we can then start to move into a colder, more archive space going forward, utilising the same kind of technologies that make up hard drives."

OK, let's explore Gorakhpurwalla's idea a little. This is a capacity-optimised play needing careful price/performance positioning versus tape. Let's envisage a 5.25-inch form factor disk drive with ten platters, each holding twice the capacity of an equivalent 3.5-inch platter, and assume nearline 3.5-inch drives are shipping with 26TB of non-shingled capacity.

That gives us a 52TB conventionally recorded drive. Now let’s apply shingling, because write speed is not a critical factor here, and increase capacity by a shade under 17 per cent to reach 60TB. We could play with numbers more and start out with an 11-platter drive and so arrive at a 70TB destination disk. Whichever, we could call this a Coldline drive (or Farline — make up your own term) and place it in our memory-to-tape hierarchy like this:
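Showing the working behind those numbers (our assumptions, not Western Digital's, including ten platters in the 26TB 3.5-inch drive):

\[
\frac{26\,\text{TB}}{10} = 2.6\,\text{TB/platter}, \qquad 10 \times (2 \times 2.6\,\text{TB}) = 52\,\text{TB}, \qquad 52\,\text{TB} \times 1.15\text{–}1.17 \approx 60\text{–}61\,\text{TB}
\]

An eleventh platter adds another 5.2TB of conventional capacity, roughly 6TB once shingled, which gets the total into the high 60s and close to that 70TB destination disk.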

Would cloud service providers and enterprise hyperscalers be willing at this capacity level to alter their software stacks so as to support shingled disk media and have a rough 10ms random access latency to archived data rather than the two-minute-plus latency for offline tape data? Gorakhpurwalla thinks they might.

He’s bringing the idea out into the media light of day and so, we conclude, WD’s engineers, strategists and enough of its large customers think there is potential as well. It is early days but maybe we’ll see or hear something more as we progress through 2022, and perhaps product will start getting discussed in 2023.

The old Flash-and-Trash concept and Wikibon’s Wright’s Law-based flash/disk cost crossover ideas will be somewhat delayed — by ten years at least it seems. Until then the flash, disk and tape trio rules — the stable, long-lived, three-legged stool of storage media.

NetApp appoints chief product officer who doesn’t look after all products

NetApp has appointed its first chief product officer, Harvinder (Harv) Bhela, who will look after most — but not all — of NetApp’s products.

Harvinder Bhela.

Bhela starts in January 2022, and will report to CEO George Kurian. His job will be to “accelerate the ongoing transformation of the company into a multi-cloud, storage, and data services leader.” He joins after nearly 25 years at Microsoft, where he held multiple executive leadership positions — most recently “as corporate VP of the Microsoft 365 Security, Compliance and Management business, which he grew to more than $10 billion in annual revenue, making Microsoft the largest security company in the world.” 

NetApp’s announcement is careful to point out that “Anthony Lye, EVP and GM of NetApp’s Cloud Management and Platform Services Business Unit, will continue to lead the company’s rapid growth and innovation in emerging technologies, furthering its leadership position in Cloud Operations (CloudOps), also reporting to George Kurian.”

Why?

Anthony Lye.

A Kurian statement reads: “Under Anthony’s leadership we have successfully incubated a cloud storage and data services business and an emerging cloud operations business. Looking ahead, we have the opportunity to accelerate the expansion of our cloud storage and data services business, while continuing a focused approach to scaling our CloudOps business.”

Lye will run the CloudOps business while Bhela will be responsible for the cloud storage, data services and hybrid cloud businesses. This, NetApp believes, “will further accelerate the pivot to cloud of its storage business while maintaining leadership in flash and object storage,” while Lye will scale the CloudOps business.

We asked NetApp if Anthony Lye is responsible for the Spot series of products (such as Ocean) while Bhela will look after ONTAP, StorageGRID, the AFF, FAS, E-Series and SolidFire arrays, but not the Spot products. The reply was: “Yes, this is accurate.”

So the Spot portfolio is not included in Bhela’s CPO responsibilities. 

The appointment of Bhela follows the retirement of Brad Anderson, the General Manager of NetApp's Hybrid Cloud Business, at the end of FY22, which was announced in June. Bhela's LinkedIn profile says he is an incoming NetApp EVP and CPO, giving him the same EVP rank as Lye.

Perhaps it would have been less confusing to give Bhela the GM Hybrid Cloud Business title rather than the slightly inaccurate chief product officer role. Perhaps also Lye did not want his reporting line direct to Kurian broken by his having to report to Bhela?

Bhela, by the way, is masterly at describing himself. Here’s part of his LinkedIn profile: “I have led multiple products that went on to become #1 in their category, became 10b$+ businesses, are used by 100s of millions of people around the world, have become de-facto industry standards and have won several industry awards. I help build a differentiated strategy that gets better with time given industry trends. I help teams move at lightning speed and yet have the patience to execute relentlessly across many years. It takes bold strategy, amazing people, disciplined execution, and patience to build truly great things for customers and thereby disrupt competition. Embrace disruption like you have nothing to lose – don’t become a victim of it!”

Lye apparently didn’t feel he had anything to lose.

Bhela carries on in this vein: “But the real source of all differentiation is great people and culture! A great team is like a legendary music band. We are obsessed about making great music to delight our audience. We strive life-long to be world-class at the instrument we play. We are stronger because different people in the team are excellent at different instruments and have diverse performance styles. We trust, respect, listen and build upon each other. We generate positive energy that fuels ourselves and others and we have fun together. Then and only then can we make great music together. But legendary bands don’t just make music, they create ‘feelings’ and that high bar, lofty goal, think big is what we shoot for.”

Feel the force!