Startup Nebulon has announced the first customer win for its storage processing unit-based smartInfrastructure. The client – South African MSP SYSDBA – said choosing Nebulon was a “no-brainer.”
The Service Processing Unit (SPU – originally called a Storage Processing Unit) is a PCIe card added to a server and controlled through the public cloud to provide server infrastructure management, offloading mundane infrastructure management from the server’s CPU cores. This means the server can run more application code and the infrastructure housing it can be managed more efficiently; that’s the Nebulon pitch for its smartInfrastructure-branded products.
The Nebulon SPU.
Marc Pratt
Marc Pratt, Strategic Alliances Manager at SYSDBA, provided a canned quote: “The fact that Nebulon smartInfrastructure can be provisioned, managed and maintained from anywhere gives significant flexibility in terms of control. What made Nebulon even more attractive, however, was the fact that we cut our costs in half versus purchasing disaggregated storage and compute solutions.”
He added: “Because the solution doesn’t consume any server CPU, memory, or networking resources like hyper-converged infrastructure alternatives, we are able to use 100 per cent of our server for the applications we run. Based on this alone, choosing Nebulon was a no-brainer.”
SYSDBA has deployed a bunch of HPE ProLiant DL380 Gen10 servers with Nebulon smartInfrastructure to host its customer environments and internal operations. It says it can now control its environment from anywhere and reduce infrastructure maintenance effort through behind-the-scenes software updates delivered from the cloud. These features can be critical for certain customers in Africa who may lack the skilled resources to manage large infrastructures, as well as for others located in areas with difficult or limited access.
In essence Nebulon has a new-product-category positioning problem, so this first public customer win is a big deal.
Nebulon co-founder and CEO Siamak Nazari said: “Service providers like SYSDBA rely on solutions to help them address key time-to-value and management challenges they experience in their core and hosted data centres. With Nebulon smartInfrastructure, not only can they address these challenges, but they can save infrastructure costs doing it.”
Seagate has consolidated its place at the top of the global HDD supplier rankings against a backdrop of stagnating disk drive areal density.
Data from TrendForce and Tom Coughlin’s Digital Storage Technology Newsletter enabled us to chart a 3-year picture of disk drive revenue market share. It shows a transfer of market leadership from Western Digital to Seagate, and Toshiba gaining and holding a 21 per cent share of all sales.
Blocks & Files chart using IDC, Coughlin and TrendForce numbers.
More of the disk drive market is switching to mass-capacity drives, particularly for nearline storage applications. Capacity is becoming the most important aspect of disk drive technology, as customers favour SSDs in the speed stakes.
Areal density stagnation
High-capacity disk drives currently store 2TB per platter. Increasing that means storing more bits on each platter – an areal density increase. Coughlin’s newsletter said: “There was no growth in HDD areal density in C1Q 2021 and the growth of capacity-oriented Nearline drives for enterprise and hyperscale applications will result in more components per drive out to 2026.”
That’s because the only way to get capacity increases, if the areal density (capacity per platter) is static, is to add more platters and heads.
Coughlin’s newsletter added: “The industry is in a period of extended product and laboratory areal density stagnation, exceeding the length of prior stagnations.”
The problem is that a technology transition from perpendicular magnetic recording (PMR), which has reached a limit in terms of decreasing bit area, to energy-assisted technologies – which support smaller bit areas – has stalled.
The two alternatives, HAMR (Heat-Assisted Magnetic Recording) and MAMR (Microwave-Assisted Magnetic Recording), both require new recording medium formulations and additional components on the read-write heads to generate the heat or microwave energy required. That means extra cost. So far none of the three suppliers – Seagate (HAMR), Toshiba and Western Digital (both MAMR) – has been confident enough in the characteristics of its technology to make the switch from PMR across its product range.
Indeed, Western Digital appears to be sitting on the fence: it introduced 18TB and 20TB partial-MAMR drives in September 2019, and in July 2020 it launched an 18TB Gold-brand PMR drive and a 20TB Ultrastar DC HC650 drive that uses shingling (overlapped write tracks) to reach 20TB without MAMR at all.
Wikibon analyst David Floyer said: “HDD vendors of HAMR and MAMR are unlikely to drive down the costs below those of the current PMR HDD technology.”
Floyer added: “Investments in HAMR and MAMR are not the HDD vendors’ main focus. Executives are placing significant emphasis on production efficiency, lower sales and distribution costs, and are extracting good profits in a declining market. Wikibon would expect further consolidation of vendors and production facilities as part of this focus on cost reduction.”
LucidLink, whose Filespaces software presents public cloud object storage as a local filer, has impressed venture capitalists enough to land a $12m A-round of funding.
The company was launched in 2016, and seed funding prior to this latest cash injection totalled $7.1m. Its Filespaces technology uses a software agent that provides on-premises metadata processing and a small cache area in local storage. Applications make file requests, which are sent to the public cloud object store and serviced from there, using parallel links, compression, streaming and pre-fetching to deliver local access speed.
Conrad Chu, a partner at venture capital fund Headline, said in a statement: “We discovered Filespaces as a user and immediately recognised that there is nothing else like it out there. With incredible traction in the Media and Entertainment industry as well as Architecture, Engineering, and Construction, LucidLink is hitting it out of the park with this next-generation cloud file system.”
LucidLink diagram.
Adobe invested alongside Headline, as did Baseline Ventures and Bright Cap Ventures. Chu is joining LucidLink’s board.
LucidLink co-founder and CEO Peter Thompson said in a statement: “Our partnership with Adobe presents a perfect opportunity to easily enable remote production teams with the entire suite of Adobe Creative Cloud products.”
Filespaces can be combined with an on-premises Cloudian HyperStore back-end platform. Local applications see Filespaces as a NAS mountpoint; Filespaces translates their file access requests into S3 requests and feeds them to the Cloudian system. The point is that a Cloudian HyperStore system is generally more affordable than a NAS filer and, courtesy of Filespaces, provides equivalent access speed.
In February LucidLink announced a bundling of IBM Cloud Object Storage with Filespaces. Other supported object storage includes AWS S3, Azure Blob, Google Cloud Platform, MinIO, Nutanix Objects, Scality, Wasabi and Zadara.
The prospect is that multiple remote users, and hybrid businesses with edge computing sites connected like spokes to a public cloud hub, can all benefit from cheap and scalable public cloud storage while getting local filer access speeds, using Filespaces software to bridge the on-premises and cloud environments.
There are several other suppliers offering similar public cloud-to-edge data access and file collaboration, such as CTERA, Egnyte, InfiniteIO, Nasuni, and Panzura. They all share a basic public cloud-to-local site data access capability, implemented in their own way, while adding their own set of services on top.
Quantum will likely see LucidLink as a competitor in the entertainment and media industry, as Quantum’s StorNext file system also supports object storage back ends.
Never mind the GPUs used for processing-intensive Bitcoin mining: new cryptocoin Chia is driving up hard disk drive prices through its own resource needs, which are based on storage capacity rather than compute.
Chiacoin was launched this month. It is a bitcoin-like currency, and it needs so-called proof-of-space-time to be mined, or rather farmed, using hard disk drives and SSDs. That requires less processing than bitcoin, positioning Chiacoin as a “greener” cryptocurrency. It was devised by Bram Cohen, the American programmer who came up with the BitTorrent peer-to-peer network protocol. Chia Network is the company backing Chiacoin.
Bitcoins are mined using a so-called proof-of-work which involves vast numbers of CPU cycles and hence electricity. Chiacoin farming requires much less processing.
Chia is based on a proof-of-space-time consensus algorithm and a blockchain network. It involves a so-called Chia farmer or “prover” sending such a proof to a verifier. The proof is that some amount of disk or SSD space is actually used to store some specific data.
A Chia Business White Paper explains the initial concept. It states: “Proof of Space leverages the over-provisioned exabytes of disk space that already exist in the world today.”
A Chia FAQ supplies more information. Going deeper than this involves a great degree of mathematical complexity in a Chia Green Paper:
Chia Green Paper extract
A Chia farmer loads cryptographic numbers onto their disk or SSD into a portion of the capacity known as a “plot.” A plot is associated with blocks from a blockchain, the number of which depends upon the percentage of the total network space the farmer has allocated. Each plot has a hash and, as the Chia blockchain extends, each farmer’s system can check, using background processing, whether its hashes match the next blockchain step (or challenge).
A VDF (Verifiable Delay Function) proof-of-time “Timelord” server verifies each block and awards an XCH token to the farmer for each verified block (coin).
To find out what’s really going on, study the Chia Green Paper and its referenced documentation.
Getting back to known ground, Chia farming requires disk and/or SSD space and so wannabe “farmers” are buying up disk drives and SSDs.
A PC Gamer report claimed Chia farming could wear out a 512GB SSD in 40 days and a 2TB drive in 160 days; plotting involves so much writing that SSDs exhaust their endurance. That encourages the use of disk drives instead of SSDs.
A Tom’s Hardware report said Chia farming had increased demand for high-capacity disk drives, and retail/etail prices rose by $100 to $300 in the first half of May.
For example, a Toshiba X300 12TB drive cost $320 in April and now costs between $498 and $506. A Western Digital Gold 12TB cost about $340 in March; WD sells it now for $416 and it’s $440 on Amazon. A Gold 14TB drive cost $410 in March and is now priced at $527 to $630 on Amazon.
Shares in both Seagate and Western Digital have risen recently. On May 12 Seagate shares were priced at $84.05; they are now valued at $102.29, a 21.7 per cent rise in six days. May 12 saw Western Digital shares priced at $64.38; the current price is $74.96, a 16.4 per cent increase over the same period.
What will happen now? Are we in a Chiacoin bubble? Does this thing have wings? Heck if we know: it may be better for the environment, but it’s definitely not great news for those who need to buy disk drives.
Western Digital has launched a Pro version of its Purple surveillance system drive with a longer warranty and enhanced workload rating.
The aim is to support the more powerful video camera, network video recorder and video analytics systems now being produced to process an ever-increasing amount of video surveillance imagery.
Brian Mallari, director of marketing for Smart Video at Western Digital, said: “With the addition of WD Purple Pro, our full portfolio of smart video products covers customers’ needs, from dedicated WD Purple microSD cards for cameras to WD Purple hard drives for mainstream NVRs and the new generation of smart video architectures.”
WD Purple Pro 18TB drive.
He also claimed it could help “original equipment manufacturers and integrators to evolve their systems for emerging AI workloads.”
Capacities
The all-SATA Purple HDD range divides into low-end 1TB to 6TB drives, which spin at 5,400–5,700rpm, support 64 cameras and have a 180TB/year workload rating, and 8TB–18TB Purple drives, which spin faster at 7,200rpm and support 64 cameras plus 32 video streams for deep-learning analytics. The latter carry a higher 360TB/year workload rating.
The Purple Pros have the same 8TB–18TB capacity and 7,200rpm spin speed as the upper-end Purple HDDs but a still higher 550TB/year workload rating. They also have a five-year warranty, compared to the Purple drives’ three years. The 8TB and 10TB products are air-filled drives, while the 12TB, 14TB and 18TB products have sealed, helium-filled enclosures. The helium-filled drives have a 2.5 million hours MTBF rating; the air-filled drives, 2 million hours.
Purple Pros have a WD Device Analytics (WDDA) feature that provides device parametric operational and diagnostic data. Algorithms interpret the data and direct the host system to alert admin staff with specific recommendations to address potential issues. The idea is that WD OEMs, system integrators, and IT pros can better monitor and manage supported storage devices using the drives.
Seagate has a competing Skyhawk 18TB drive for the video surveillance market. It supports the same number of camera and analytics streams as the Purple Pro and comes with an identical 550TB/year workload rating. However it only has a 3-year warranty. Toshiba’s MD06 surveillance drives have a 10TB capacity maximum and are not in the Purple Pro capacity class.
WD Purple Pro drives will be available this quarter from Western Digital resellers. We understand a 10TB model will cost $337.99 but may be available for as little as $264.34. No datasheet was available at publication time.
Data protection firm Rubrik has pushed out a Cloud Data Management software release that automates the process of application recovery from ransomware attacks, and also protects Kubernetes-orchestrated containers.
The product was rolled out at Rubrik’s Forward virtual conference this week, which features presentations from high-profile people including tennis champ Venus Williams. Snowflake CEO Frank Slootman, NetApp CEO George Kurian, and VC Vinod Khosla will also be speaking. Rubrik signed up NetApp as a reseller in April.
Khosla has been a Rubrik investor since 2016 via his Khosla Ventures operation. Slootman ran Snowflake’s impressively successful IPO, perhaps an inspiration to Rubrik CEO Bipul Sinha. Williams is a former world number one tennis player in both singles and doubles and is still in the top 100 on the WTA circuit.
From left to right: Venus Williams, Frank Slootman, George Kurian and Vinod Khosla
Rubrik president Dan Rogers said: “There has never been a greater need to protect and quickly recover from rising cyber threats like ransomware, which are devastating businesses on a daily basis. Rubrik … [allows] for quick recovery from attacks and protection of precious IP, no matter where the data is stored.”
AppFlows automated ransomware recovery
The new software release adds a Polaris AppFlows disaster recovery (DR) facility for on-premises VMware environments, recovering to a second site or to VMware Cloud on AWS. Rubrik says it enables IT teams to use existing backup systems they’ve already paid for, obviating the need to deploy and manage separate DR infrastructure. AppFlows uses application blueprints that specify app resource mapping and workload dependencies to enable failover in the event of a data centre outage.
Rubrik AppFlows diagram
If a ransomware attack encrypts a VMware application, a Radar facility identifies the impacted VMs within blueprints. It then presents a way to recover entire applications or business services to a point in time before the encryption event, using an AppFlows-enhanced workflow. Because this presents a global view of the customer’s data estate, there is no need to triage and remediate individual VMs; the idea is that IT teams can identify affected workloads and files in a single process.
Failover and failback processes are fully automated. Rubrik will add local, in-place recovery for ransomware attacks. Of course, this is only useful if you actually switch it on, as the University of California San Francisco discovered last year.
Other additions
The new software release also adds:
Integration with automation frameworks such as Palo Alto Networks Cortex XSOAR and ServiceNow Incident Response
Two-factor authentication
Data risk management with Sonar user behaviour analysis to determine who is accessing, modifying or adding files
Data Management for Kubernetes to protect applications and stateful data through the Polaris SaaS platform
Rubrik-hosted backup and recovery for Microsoft 365 applications, including SharePoint and Teams
New Network Attached Storage (NAS) Cloud Direct, built on Rubrik’s acquisition of Igneous IP and technology, protecting and rapidly recovering petabyte-scale NAS systems
Automated discovery and backup for SAP HANA on public cloud and on-premises
Intelligent cloud archival to optimise costs of public cloud object-based storage
Backup from NetApp SnapMirror
Consistent point-in-time backups for Nutanix Files and Live Mount for Nutanix AHV backups
Offloading of backups to Oracle Data Guard standby databases with failover/switchover awareness
Delivery of policy-driven backups for Cassandra databases via Polaris
Incremental backups for vSphere Metro Storage Cluster (vMSC)
Hitachi Vantara has a “strategic partnership” with Buurst, the rebranded SoftNAS, and has adopted Buurst’s Fuusion data mover for its Hitachi Virtual Storage as a Service (VSaaS) and Hitachi Kubernetes Service offerings.
The pair spoke about the deal in a May 14 Hitachi Vantara and Buurst webinar (see here). The two companies hosted a second webinar on Monday, May 17, in which they talked about hybrid cloud acceleration, unifying compute and storage with Kubernetes.
Hitachi Vantara and Buurst May 14 webinar tweet
Buurst built Fuusion on the open source Apache NiFi project. It is a bi-directional data pipeline connecting edge and cloud. You can deploy it on bare metal, in virtual machines or, in the future, containers. It supports AWS, Azure, GCP, and VMware, and operates across any network link.
The Fuusion software accelerates network traffic and ensures no loss of data, even if the network goes down for a period of time. It provides a chain of custody for migrated data, and management is centralised.
The VSaaS webinar registration page says: “Buurst Fuusion in [a] strategic partnership with Hitachi Virtual Storage as a Service delivers the best in breed Data Transfer Management and Data operations in the Industry. Hitachi Virtual Storage as a Service with Buurst Fuusion ensures data delivery in Hybrid Cloud deployments will be bullet proof.”
Data highway
As part of the deal, the Hitachi Kubernetes Service will use Buurst Fuusion to deliver centralised data management and bi-directional data operations transfer across the hybrid cloud.
Fuusion is something like a data highway, spanning on-premises data centres, colocation sites such as Equinix, and public clouds. It competes with data-moving technologies from Bridgeworks, Cirrus Data Solutions, Datadobi and WANdisco.
A slide from the VSaaS webinar shows deployment of Buurst software on-premises, in Equinix colos, and in the public cloud:
VSaaS webinar slide
Buurst
CEO Vic Mahadevan runs the company. He became board chairman in August 2019, at which point Garry Olah took over as CEO, and then replaced Olah as CEO in November 2020. The firm rebranded SoftNAS as Buurst during Olah’s tenure.
Buurst Fuusion diagram
Marc Palombo, Buurst’s chief revenue officer, told us: “Garry was a first-time CEO and it showed in some initiatives. As a result, the board voted to move Vic into the CEO role in the fall in order to get the company back on track to its forecasted deliverables. Buurst is now poised to drive significant growth in 2021 with its new key strategic global partnership with Hitachi around data transfer management and more significant news to come.”
Disk drive and SSD supplier Western Digital has spoken about SSD-based computational storage use cases and the need for a standard drive-host interface for computational storage drives (CSDs), indicating it sees their mainstream adoption as a question of when, not if.
Computational storage aims to overcome the time cost of moving data to compute – reading data from storage into a host server’s memory, for example – by moving a slice of compute to the data instead. There are two main ways to do this: at the array level and at the drive level.
Array level
It has been tried at the array level by Coho Data, and that attempt failed as there simply weren’t enough use cases to justify the product; the company folded in 2017.
Dell EMC’s PowerStore has an AppsOn feature to enable virtual machines to run in the array’s controllers. In theory it can be used to run any workload on the array. We don’t yet know how popular this idea is.
One issue with array-level computation is that the data still has to be read into main memory, controller main memory in this case. There is no need for a network hop to a host server, but data movement distance has only been shortened, not abolished.
Drive level
Drive-level or in-situ computational storage carries out the processing inside a drive’s chassis, with an embedded processor operating almost directly on the data. There are four startups active in this area: Eideticom, NGD, Nyriad, and ScaleFlux. Seagate may also be active.
ScaleFlux computational storage drive
Typically that adds an Arm processor, memory and IO to a card inside the drive enclosure. Seagate is basing its efforts on the open source RISC-V processor, as is Western Digital. But WD has also invested in NGD, giving it two CPU irons in the computational storage fire, so to speak.
Richard New
Although WD would not be drawn on any moves of its own to manufacture CSDs, the potential extent of its computational storage ambitions became evident in a briefing with Richard New, VP of Research at the company and the person who runs WD Labs.
Use cases
New described the characteristics of SSD-based computational storage use cases:
The application is IO-bound and that needs alleviating.
No need for significant computation, only a fairly simple compute problem that needs to be done quickly – encryption, for example.
Data needs filtering with a search algorithm. The IOs sent to the host after filtering represent a fraction of the overall data set.
A streaming data application that needs to touch every byte that is written to the storage device, like encryption and compression.
Other potential applications include image manipulation and database acceleration. New suggested video transcoding is a less obvious application.
In these use cases, unless the in-situ processing is transparent to the host server, the host needs to know when drive-level processing starts and stops. That requires a host-CSD interface.
NGD Newport CSD.
Host-drive interface
New said there needs to be a standard drive-host interface, and the NVMe standards organisation is working on one. In fact the NVMe standard is properly known as the Non-Volatile Memory Host Controller Interface Specification (NVMHCIS).
There is an NVMe Computational Storage Task Group with three chairpersons: Eideticom’s Stephen Bates, Intel’s Kim Malone, and Samsung’s Bill Martin. The task group’s scope of work encompasses discovery, configuration and use of computational storage features inside a vendor-neutral NVM Express framework. It has more than 75 members, over 25 of them suppliers.
The point of having a standard would be to enable a host server to interact with a CSD: the host system would then know the status of a CSD and the work it could do. New suggested a general-purpose script file could be used, and said: “We need such a standard” to bring about broad adoption of CSDs.
There is also an SNIA workgroup looking into computational storage.
RISC-V
The CSD processor has to be purpose-built and New said: “RISC-V gives Western Digital more flexibility in doing this. You can target specific computational problem types.”
For example you might combine RISC-V cores with ASICs and develop orchestrating software. In New’s view: “You need to optimise for a particular use case.”
We note that Arm processors have to use IP licensed from Arm, whereas RISC-V is open source and free of such restrictions.
Challenges
New mentioned four challenges he thinks stand in the way of broader CSD adoption:
How do you pass context to the device?
A file system: how is it implemented on the device?
How can a key be passed to the device to enable it to decrypt already-encrypted data?
How can it cope with data striped across multiple drives?
We would add the challenge of programming the device. Does it run firmware? Does it have an OS? How is application code developed, compiled or assembled, and loaded onto the device?
New is confident CSD will be adopted: “We believe it will happen but it will take some time for the use cases to be narrowed down and standards to be set.”
ScaleFlux CSD componentry
Comment
It seems to us at Blocks & Files that a use case justifying CSD development and deployment will have to involve tens of thousands of drives, if not more. Without that, CSDs will never progress beyond being highly customised products sold in low numbers into small application niches.
This use case will involve, say, hundreds of thousands of files or records needing to be processed in pretty much the same way. Instead of a host CPU processing one million records read in from storage, which takes 2 minutes, you would have 1,000 CSDs each processing 1,000 records. The latter would complete in 20 seconds – because there is no data movement. Each CSD tells the host server that it has completed its work and then the host CPU can kick off the next stage of the workflow. It’s this kind of calculation that could drive CSD adoption – if such use cases exist in the real world. Perhaps the Internet of Things sector will help generate them.
If such use cases do materialise, Western Digital could sell tens of thousands of computational storage SSDs – and we think it would want to sell hundreds of thousands. B&F does not think that WD will build its own RISC-V-powered CSDs and hope the customers will come. It will surely want to see solid signs of sustained customer adoption of CSDs and an ecosystem of CSD developers before it starts building its own branded CSDs for sale through such an ecosystem.
We are all, like WD, watching the CSD startups to see if they gain traction. They can absorb most of the “build it and they will come” risk and pain, with WD and pals stepping in after that traction has been demonstrated.
NAND and SSD supplier Kioxia is reporting a rise in revenues and underlying profitability as market demand strengthens, but there is no news on either a resumed IPO or an acquisition.
The company filed for an IPO in August last year but the US-China trade dispute derailed it. Market speculation about Western Digital and Micron acquisition bids for Kioxia surfaced in April.
Kioxia’s fourth fy21 quarter, ended 31 March, showed a 10.2 per cent Y/Y revenue rise to ¥294.7bn ($2.7bn), with a loss of ¥21bn ($190m), contrasting with the year-ago ¥9.8bn ($90m) profit. Kioxia sold more SSDs into data centres and for use as desktop/notebook drives, which drove the topline rise despite a seasonal decrease in smartphone NAND sales.
Full year revenues rose 19 per cent, in line with the market, for fiscal 2021 ended 31 March, hitting ¥1,178.5bn (c $10.79bn) as the NAND glut eased throughout the year.
The full year net loss was ¥24.5bn ($224m), a great improvement on the ¥166.7bn ($1.53bn) prior-year loss. A company statement said profitability improved significantly, with a return to positive operating income due to cost reductions from OPEX management. There was also a move to 96-layer 3D NAND (BiCS 4) production, with a lower cost/TB than the previous 64-layer product.
Shipments and builds
Kioxia saw high single-digit per cent average selling price (ASP) declines in both Q3 and Q4 of fy21. However, quarter-on-quarter bit shipments have grown, from a low single-digit increase in Q3 to a mid single-digit increase in Q4. If the ASP decline can be slowed or halted, and bit shipments rise at the same or a higher rate, Kioxia can move into profit.
The company is expanding its NAND manufacturing capacity by building a seventh fab at its Yokkaichi facility.
Kioxia is developing its sixth generation (BiCS 6) 162-layer 3D NAND products to lower manufacturing cost further. It sees data centre and client SSD demand staying strong. It predicts smartphone NAND demand will rise as 5G models become popular. Kioxia will be hoping this means more NAND and SSD sales with stronger pricing, enabling the firm to make a profit.
IPO, acquisition and amalgamation
A report in Japanese business daily the Asahi Shimbun said Kioxia’s majority shareholder, Bain Capital Private Equity, has no plans to sell its holding. Yuji Sugimoto, who heads Bain’s operations in Japan, said Kioxia’s IPO will be brought forward as soon as possible.
He believes there will be an amalgamation of NAND producers. He also said governments will have to be involved to help make it happen.
There are six major NAND producers: Intel, Kioxia, Micron, Samsung (the industry leader), SK Hynix, and Western Digital. SK Hynix is buying Intel’s NAND fab and SSD operations, which will bring the number down to five.
Kioxia and Western Digital have a joint flash foundry venture. Potential Kioxia buyers appear to be Micron, Western Digital and Samsung, as SK Hynix already has Intel’s NAND and SSD interests in its grasp. It may be easier for Japan, where Kioxia is based, to agree an acquisition by a US company than a Korean one, partly due to historical enmity between Japan and Korea rooted in events around the Second World War.
Bain Capital may be calculating that open-market bidding for Kioxia following an IPO could generate a better price than private sale bids before any IPO takes place.
It’s time to visit old friends this week, as Fibre Channel lives on at higher speeds, DRAM demand increases, and backup revenues rise nicely – for data protection both in the cloud and on-premises. The constant increase in data generation and movement is providing a steady tailwind for the storage business.
For once AI, machine learning and Kubernetes take a back seat.
Broadcom’s 64gig HBA
It has only taken 8 months. Broadcom, which launched its 64Gbit/s (gen 7) Fibre Channel switch last September, has made available its matching 64Gbit/s HBA: the Emulex Gen 7 LPe36000-series Host Bus Adapters.
Emulex LPe36000.
Broadcom claims it’s the world’s first 64G Fibre Channel HBA and enables an end-to-end 64G data path. In Tolly Group testing the HBA:
Reduced Oracle data warehousing runtime by 87 per cent compared to 32G FC
Improved application performance by up to 63 per cent for dual-port 64G HBAs in a PCIe 4.0 server compared to a PCIe 3.0 server
Cut storage migration times by up to 38 per cent
Reduced VM boot storm times by up to a half
Kevin Tolly, founder of The Tolly Group, said: “The combination of new PCIe 4.0 servers and all-flash arrays demonstrate such high performance margins that they are now capable of consuming all the bandwidth that current storage networks can deliver.”
That’s why, he says, “all-flash storage arrays should be paired with 64G technology such as Emulex Gen 7 HBAs and Brocade Gen 7 switching.”
DRAM shipments and prices rose in Q1
Research house TrendForce says all DRAM suppliers posted revenue growth in 1Q21, and overall DRAM revenue for the quarter reached $19.2bn, 8.7 per cent growth QoQ.
This was mostly due to higher demand for notebook memory resulting from remote working during the pandemic. The researchers also cite increased demand from Chinese smartphone manufacturers – OPPO, Vivo and Xiaomi – competing for market share after Huawei was placed on the US Entity List.
These two things led to higher-than-expected shipments from various DRAM suppliers while DRAM prices rose as TrendForce had predicted.
It says some server manufacturers have started a new round of procurement as they expect a persistent increase in DRAM prices.
Datto drives revenues higher
Cloud backup service provider Datto saw Q1 revenues rise to $144.9m, up 16 per cent on last year’s $116m and beating estimates. There was a $15.3m profit (GAAP net income), up a massive 1,030 per cent from the year-ago $1.4m.
Subscription revenues rose 17 per cent Y/Y to $135.6m and ARR (Annual Run-rate Revenue) rose 15 per cent to $572.5m.
It increased its MSP partner count by 300 in the quarter, to 17,300. Guidance for next quarter is $147m +/- $1m in revenues.
William Blair analyst Jason Ader told subscribers: “The beat was driven by rebounding demand for the firm’s core continuity (backup/disaster recovery) solutions, which management attributed to the combination of economic reopening tailwinds (though still early and uneven) and SMB focus on ransomware protection.”
He noted that net new ARR, after adjusting for currency, was $26m – “a significant acceleration from the prior three quarters and the second-highest net new ARR in company history.”
Ader thinks Datto will achieve mid-to-high teens growth after facing increased customer churn during the pandemic.
Veeam announces yet another double-digit quarter
Data protector Veeam has announced an annual recurring revenue (ARR) increase of 25 per cent Y/Y for Q1’21.
CEO and Board Chairman William Largent talked of double-digit YoY growth across all geos and said: “To see such increases globally is a tremendous achievement in such a challenging environment.”
Veeam has now recorded 13 consecutive quarters of growth greater than 10 per cent and the customer count has gone past 400,000.
It overtook Veritas in revenue terms in the quarter, and IDC ranks it as the second largest data protection software vendor worldwide, after Dell EMC.
Danny Allan, CTO and SVP of Product Strategy at Veeam, said: “Our product roadmap for 2021 will further expand our offerings with the top cloud providers – AWS, Microsoft Azure and Google – and Kubernetes.”
Shorts
Aunalytics is partnering with Stonebridge Consulting. The combination of Aunalytics Aunsight Golden Record as a Service and Stonebridge’s EnerHub data management product provides customers in the oil and gas market with Universal Data Access and dynamic data cleansing and governance.
ChaosSearch’s multi-model and multi-cloud Data Lake Platform product now supports SQL on the AWS and GCP clouds. It indexes data as-is within cloud environments, recognising native schemas, rendering it fully searchable, and exposing open APIs. That enables log and BI analytics with existing tools such as Tableau, Looker, Kibana, Grafana or the Elastic API.
Cisco is end-of-lifing its UCS-E VSAN ready node. The last day to order the product is November 5, 2021.
The ClearDATA Healthcare Security and Compliance Platform can automatically detect protected health information (PHI) in multi-cloud storage buckets. It ensures compliance with applicable privacy regulations that include HIPAA and GDPR.
Commvault’s Metallic SaaS data protection is out in 20 countries in the EMEA region after six months of availability. Shai Nuni has been appointed Vice President of Metallic in EMEA. Customer wins include Aliscargo Airlines (Italy), Evolutio Cloud Provider (Spain), Sithabile Technology Group (South Africa), and Keshet Broadcasting (Israel).
The Japan Aerospace Exploration Agency (JAXA) has selected DDN‘s SFA200NVXE and SFA7990XE modular storage systems as infrastructure components for its new 19.4 PFLOPS Arm-based FX1000 supercomputer system TOKI-SORA, which went into operation in December 2020. The DDN systems will provide over 50PB of usable SSD and HDD storage capacity at a combined peak throughput of up to 1TB/sec.
UK-based data protector Databarracks has bought 4sl for an undisclosed sum to create a combined company with 75 staff, including 50 data protection experts.
Barnaby Mote, CEO and founder of 4sl, said: “Databarracks is now the UK’s largest Commvault Managed Service Provider.”
Delphix has announced new data compliance capabilities for Salesforce customers, protecting personally identifiable information.
Flash chip fabber and SSD maker Kioxia announced a ¥20bn ($183m) investment to expand its Technology Development Building at its Yokohama Technology Campus and to establish a new Shin-Koyasu Advanced Research Center. The new facilities are expected to be operational by 2023.
Pavilion Data is supplying its HyperParallel Data Platform array to the Cyber Bytes Foundation (CBF), which showcases technologies in its Research and Innovation Labs located at the Quantico Cyber Hub (QCH). These labs support a Cooperative Research and Development Agreement (CRADA) with Marine Corps System Command and Marine Corps Forces Cyberspace Command.
Open source database software and services supplier Percona announced a preview of its fully open source Database as a Service (DBaaS), which eliminates vendor lock-in and supports Percona open source versions of MySQL, MongoDB and PostgreSQL. By using Percona Kubernetes Operators it’s possible to configure a database once, and deploy it anywhere – on-premises, in the cloud, or in a hybrid environment.
Korean memory maker SK hynix said it is considering a plan to double its foundry capacity. Co-CEO and Vice Chairman Park Jung-ho said it will look into several options, such as equipment expansion at domestic sites and M&A.
BeeGFS parallel file system company ThinkParQ has promoted channel partner Advanced Clustering Technologies from Gold to Platinum status, meaning it can provide 1st and 2nd level support.
DDN’s Tintri operation paid for an ESG study that said VDI customers got value from their Tintri VMstore storage with cost savings and easier VDI admin.
Cloud storage provider Wasabi has announced support for S3 Object Lock, meaning stored objects can be made immutable for a specific period of time and so protected against ransomware. Of course you need backup apps that support it too, such as Veeam Backup & Replication.
Analysis: We have more details of HPE Alletra features and performance, after analysing a post by Dimitris Krekoukias. The Nimble exec provided a more informed comparison with competing arrays and with HPE’s own Primera arrays, plus a view of the 9000’s branding.
The new Alletra 9000 details we now have are:
Active Peer Persistence allows “a LUN to be simultaneously read from and written to from two sites synchronously replicating”
Multiple parallelised ASICs per controller help out the CPUs with various aspects of I/O handling for the all-NVMe SSDs
The vast majority of I/O happens well within 250 microseconds of latency
The array OS determines which workloads to auto-prioritise
Alletra 9000 and competing arrays
Krekoukias charts the 9000 (and 6000) on the basis of SAP HANA nodes supported, against competing arrays from Hitachi (VSP), Dell EMC (PowerMax), IBM (DS8950, FlashSystem 9200), Pure Storage (FlashArray//X90), and NetApp (AFF A800).
Krekoukias makes much of the 9000’s ability to deliver this performance from its single 4U enclosure. He believes it “makes it the most performance-dense full-feature Tier 0 system in the world (by far).”
He says of the HDS VSP 5500: “The physically smallest possible HDS 5500 shown for comparison would need 18U to achieve 74 nodes. So, the Alletra 9000 can do 30 per cent more speed in 4.5 times less rack space.”
That means it beats HPE’s own XP8, which is an OEM’d HDS VSP 5100/5500 array.
As for Dell EMC’s PowerMax: “A PowerMax 8000 2-Brick (4 controllers) needs 22U and only does 54 nodes. A 3-brick system (6 controllers) can do 80 nodes and takes almost a whole rack (32U). So even with more controllers, a PowerMax needs 8x more rack space to provide less performance than an Alletra 9000!”
There’s more of the same regarding Pure Storage and NetApp.
We were interested in using the data to compare HPE’s Primera arrays with the Alletra 9000. The Primera’s architecture and OS are the basis of the 9000.
Alletra 9000 and Primera
A quick Primera range recap: the current line comprises three all-flash models – the 24-slot x 2U x 2-node A630, the 48-slot x 4U x 2-4-node A650, and the A670 – and three hybrid ones – the C630, C650 and C670 – with the same slot, chassis and node details. A node means a controller. The A630 and C630 have a single ASIC per node, while the A650/C650 and A670/C670 systems have four ASICs per node.
The Alletra 9000 has a 4U chassis like the Primera A and C 650 and 670 arrays.
The ASICs handle zero detect, SHA-256, XOR, cluster communications, and data movement functions.
Krekoukias writes: “The main difference is how the internal PCI architecture is laid out, and how PCI switches are used. In addition, all media is now using the NVMe protocol. These optimisations have enabled a sizeable performance increase in real-world workloads.”
The blog reveals the number of SAP HANA nodes each array supports. We can chart the Alletra 9000 and Primera array performance on that basis:
This allows us to directly compare the 9000 models to the equivalent Primera models and work out the performance increase:
The 4-node (controller) models gain a 33.3 per cent speed boost; the 2-node models see smaller increases.
Alletra 9000 performance characteristics
As we understand it, an Alletra 9000 system, like a Primera multi-node system, is a cluster in a box. You cannot cluster separate Alletra 9000s together, unlike the Nimble-based Alletra 6000s.
In theory, the only way to scale up Alletra 9000 performance further would be to add more controllers inside a chassis, or to provide some form of interconnect to link separate Alletra 9000s together. Both cases would require hardware and software engineering by HPE.
Without this, having only four controllers in its chassis limits the Alletra 9000’s top-end performance, as with Primera. Somewhat embarrassingly, it also uses PCIe gen 3 rather than the twice-as-fast PCIe gen 4 bus found in the Alletra 6000.
The Alletra 9000s get more performance, per chassis, than the 6000s, even with the slower PCIe Gen 3 bus, as their ASIC hardware accelerates their performance. But cluster the slower 6000 boxes together and they outrun the 9000, reaching 216 SAP HANA nodes supported.
Speeds and feeds
It is a bit of a mystery why the Alletra 9000 didn’t move to AMD processors and the PCIe gen 4 bus, like the 6000, and gain a greater performance boost over the Primera arrays. That said, the engineering burden would have been greater and taken longer: HPE would have needed to tune and tweak the ASICs for the new CPUs, and re-engineer the passive backplane to support PCIe gen 4.
In our view there is an implicit roadmap to a second generation Alletra 9000, using AMD processors and PCIe gen 4. Whether that roadmap contains a larger 9000 chassis to accommodate more nodes, six or eight, is a moot point. So is the addition of a clustering capability, like that of the Alletra 6000.
Without these, faster Dell EMC PowerMax and clustered NetApp AFF systems, as well as clustered 6000s, will be able to outgun the top-end Alletra 9000.
Branding conundrum
It’s clear that, underneath the umbrella Alletra brand, the 9000 and 6000 arrays are different hardware systems with different OS software as well. They are unified by the branding and by the shared management console and ownership/usage experience.
As we understand it, a migration from the 6000 to the 9000 would be a fork-lift upgrade. We have asked if there is an HPE strategy to move to a common hardware/software architecture for the Alletra products.
From a public cloud point of view, where customers order storage with particular service levels and characteristics – think S3 variations – Alletra, like S3, would be an umbrella brand signifying different storage service types available through a unified and consistent front end, with the actual hardware/software product details abstracted away beneath the customer experience.
Seen through this lens, the Alletra branding makes good sense.
The Chief Revenue Officer of high-speed file system startup WekaIO has resigned.
Ken Grohe
Ken Grohe became Weka’s president and CRO in June 2020, and lasted 11 months at his post. Carol Platz, a senior director in Weka’s Global Corporate Marketing organisation, also resigned this month.
Grohe’s LinkedIn entry states he joined Commvault’s software-as-a-service advisory board in a part-time role in April.
We asked Weka about Grohe and Carol Platz’s departures. CEO and cofounder Liran Zvibel told us: “Ken Grohe has decided to spend time pursuing his passion, which is advising startup companies, especially those in the SaaS space. He will remain an advisor to WekaIO as he uses his years of experience to help companies reach their goals. Ken remains a true believer in Weka’s mission, its ability to grow, and its Limitless Data Platform’s ability to dominate the market.
Carol Platz
“Regarding Carol Platz, after helping to launch and scale Weka, Carol was offered a leadership position in marketing at another company, which has been her personal goal, and we are very happy for her.”
She has in fact joined NVMe-over-TCP outfit Lightbits Labs as its global marketing veep.
Zvibel said this of Grohe and Platz: “While we will miss them both, they remain friends of Weka.”