
NetApp stacks DataStax onto its Astra Kubernetes storage

DevOps people working with DataStax Cassandra databases can now use NetApp storage through NetApp’s Astra Kubernetes storage service. Astra has been integrated with DataStax’s Enterprise product as well as with open source Cassandra clusters.

Enterprise is DataStax’s scale-out, cloud-native Cassandra NoSQL database. DataStax was founded in 2010, has taken in $190m in funding, and claims $150m+ in annual recurring revenues. An IPO is rumoured.

Astra is NetApp’s SaaS data management suite for Kubernetes-orchestrated workloads, aimed at protecting, recovering and moving containerised applications.

Eric Han.

Eric Han, VP for product management for public cloud at NetApp, said: “Working with DataStax, we have made it easier for enterprises to adopt and manage high scale, cloud native data.”

Ed Anuff, chief product officer at DataStax, provided a corresponding comment: “When companies want to adopt Kubernetes and create modern data applications, developers and IT operations teams have to think about how they will manage the data their applications will create … Our partnership with NetApp makes it easier to manage storage resources and speed up deployments.”

Specs

NetApp and DataStax say that, with this integration, customers can automate the implementation of Cassandra clusters. They will also be able to simplify operations and lifecycle management processes around applications, data and container images on Kubernetes.

In detail they can have:

  • Automatic storage provisioning and storage class setup processes,
  • Cloning and migration of application clusters for app testing,
  • Data protection, disaster recovery, point-in-time copy recovery, active cloning, activity log, and other data management services for Cassandra clusters,
  • Portability and migration for Cassandra clusters, moving Kubernetes workloads and data between cloud locations,
  • One user interface, instead of two, and visualisations of data protection status.

The integrated DataStax NetApp software is available now. NetApp Astra is completely separate from DataStax’s multi-cloud database-as-a-Service (DBaaS) product, which is also, and confusingly, called Astra.

On a roll: Wasabi gets $25m dollop for cloud storage three weeks after fishing out $122m

A drawing of a wasabi plant, published in 1828 by Iwasaki Kanen

Less than a month after pulling in $122m in C-round funding, object storage firm Wasabi has topped up its coffers, taking in an extra $25m from two strategic investors to build out the business.

Wasabi offers a single tier of S3-compatible public cloud object storage. It can be viewed as a tier-2 public cloud provider, with AWS, Azure and Google being the tier 1 trio.

Daniel Flynn, president and treasurer at WD’s investment arm, one of the two strategic funders, said: “In the future, most of the world’s data will live in the cloud. We’ve partnered and invested with Wasabi because its mission – to store data in the cloud – aligns with our strategy to partner with cloud customers to provide the foundational technologies underpinning the global data infrastructure.”

Wasabi CEO David Friend

David Friend, Wasabi’s CEO and co-founder, said: “We have been using Western Digital disk drives since the founding of the company. Their investment in Wasabi reflects the fact that data storage in the cloud is accelerating and that there is a growing interdependence between our companies.”

The other strategic investor is Aramco Ventures via its Prosperity7 Ventures growth fund.

Friend said: “Companies like Aramco are sitting on mountains of exploration and operational data. Energy, medical imaging and diagnostics, genomics, surveillance and finance are among many industries profiting from the use of AI. The thing to remember, however, is that the value of AI is completely dependent on having a rich source of data. That’s why a company like Wasabi is a natural fit with a company like Aramco.”

The C-round now stands at $137m, with equity funding amounting to $244m. The total funding we have recorded, including debt financing, is $284.2m.

Wasabi has reported threefold year-over-year growth, and has 23,000 customers and 5,000 channel Partners and Technology Alliance Partners. The new cash will fund Wasabi’s worldwide roll-out of data centres, grow its distribution channels and partner network, and build out its management team.

Comment

We are seeing an accelerating rush to build out the infrastructure needed to capture private and public sector organisations’ storage business as they move to the cloud. The hope is that, once captured, customers stay put and Wasabi gets regular subscription revenues. “Build it (data centre and channel infrastructure) and they will come” seems a crude way of describing Wasabi’s strategy. It sees an ongoing migration of storage to the cloud and wants to be right in customers’ sights as they look for a cloud vendor.

Infinite.IO domain name up for grabs as co-founder confirms company ‘shut down’

Startup InfiniteIO’s domain name is up for sale: $75,000 and it’s yours. This comes two months after its server kit was listed for sale at a Texas auction. The company has closed down, a co-founder confirmed to Blocks and Files.

Update: CEO note added. 21 May 2021.

Infinite IO domain name sale notice.

InfiniteIO offered accelerated access to files: its NAS metadata server sped up file/folder and other file metadata operations, which in turn speeded access to the actual file data. It developed this into tiering software, running in an appliance, that moved cold files into cloud object storage while still allowing users to access them quickly.

Mark Cree

It was founded in Austin, Texas, in 2012 by CEO Mark Cree, VP Operations David Sommers, VP Engineering Jay Rolette and principal engineer Chris Richards. Jay Rolette said today: “Unfortunately, InfiniteIO shut down about 6 months ago.”

InfiniteIO gained $3.4m in A-round funding in 2015, and $10.3m in B-round cash in 2018: a total of $13.7m. We are told that InfiniteIO could not develop a virtual machine version of its software, which made customers perceive its incorporation into a hybrid cloud model as difficult.

Cree told us in post-publication mail: “We landed a HUGE government opportunity that was put on indefinite hold due to Covid that would have set the company on a course to quickly ramp and add funding.  Selling an on-premise appliance was like being in the cruise-ship business during Covid.  And our virtual appliance that was under development could not be completed before the money ran out.  Timing is everything in the startup business.”

Server equipment from InfiniteIO was auctioned off on March 26 according to an Auction Factory Texas note.

All-in-all it’s a sad end for a company that started out with high hopes and good technology.

Nebulon reveals first public customer win for service processing unit tech

Startup Nebulon has announced its first customer win, for its service processing unit-based smartCore infrastructure. The client – South African MSP SYSDBA – said choosing Nebulon was a “no-brainer.”

The Service Processing Unit (SPU – originally called a Storage Processing Unit) is a PCIe card added to a server and controlled through the public cloud to provide server infrastructure management, offloading mundane infrastructure management from the server’s CPU cores. This means the server can run more application code and the infrastructure housing it can be managed more efficiently; that’s the Nebulon pitch for its smartInfrastructure-branded products.

The Nebulon SPU.
Marc Pratt

Marc Pratt, Strategic Alliances Manager at SYSDBA, provided a canned quote: “The fact that Nebulon smartInfrastructure can be provisioned, managed and maintained from anywhere gives significant flexibility in terms of control. What made Nebulon even more attractive, however, was the fact that we cut our costs in half versus purchasing disaggregated storage and compute solutions.”

He added: “Because the solution doesn’t consume any server CPU, memory, or networking resources like hyper-converged infrastructure alternatives, we are able to use 100 per cent of our server for the applications we run. Based on this alone, choosing Nebulon was a no-brainer.”

SYSDBA has deployed a bunch of HPE ProLiant DL380 Gen10 servers with Nebulon smartInfrastructure to host its customer environments and internal operations. It says it can now control its environment from anywhere and reduce infrastructure maintenance efforts through behind-the-scenes (delivered from the cloud in the background) software updates. These features can be critical for certain customers in Africa who may lack the skilled resources to manage large infrastructures, as well as for others located in areas with difficult or limited access.

Nebulon started up in 2018 and announced its first product in June last year. There is no other equipment quite like it, and it partially overlaps the hyper-converged infrastructure (HCI), SmartNIC and Data Processing Unit (DPU) markets as well as rendering storage arrays redundant.

In essence it has a new product category positioning problem, and so the first public customer win is a big deal.

Nebulon co-founder and CEO Siamak Nazari said: “Service providers like SYSDBA rely on solutions to help them address key time-to-value and management challenges they experience in their core and hosted data centres. With Nebulon smartInfrastructure, not only can they address these challenges, but they can save infrastructure costs doing it.”

Seagate solidifies HDD market top spot as areal density growth stalls

Seagate has consolidated its place at the top of the global HDD supplier rankings against a backdrop of stagnating disk drive areal density.

Data from TrendForce and Tom Coughlin’s Digital Storage Technology Newsletter enabled us to chart a 3-year picture of disk drive revenue market share. It shows a transfer of market leadership from Western Digital to Seagate, and Toshiba gaining and holding a 21 per cent share of all sales.

Blocks & Files chart using IDC, Coughlin and TrendForce numbers.

More of the disk drive market is switching to mass-capacity drives, particularly for nearline storage applications. Capacity is becoming the most important aspect of disk drive technology, as customers favour SSDs in the speed stakes.

Areal density stagnation

High-capacity disk drives currently provide 2TB per platter. Increasing that means storing more bits on the platters: an areal density increase. Coughlin’s newsletter said: “There was no growth in HDD areal density in C1Q 2021 and the growth of capacity-oriented Nearline drives for enterprise and hyperscale applications will result in more components per drive out to 2026.”

That’s because the only way to increase capacity, if areal density (capacity per platter) is static, is to add more platters and heads.
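As a back-of-envelope illustration (the platter counts below are hypothetical, not vendor roadmap figures), drive capacity at fixed areal density is simply platters multiplied by capacity per platter:

    # Sketch: with areal density stuck at roughly 2TB per platter, capacity
    # only grows by adding platters (and the heads to read and write them).
    TB_PER_PLATTER = 2                 # current high-capacity drives, per the text
    for platters in (8, 9, 10):        # hypothetical platter counts
        print(f"{platters} platters x {TB_PER_PLATTER}TB/platter = {platters * TB_PER_PLATTER}TB drive")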

Coughlin’s newsletter added: “The industry is in a period of extended product and laboratory areal density stagnation, exceeding the length of prior stagnations.”

The problem is that a technology transition from perpendicular magnetic recording (PMR), which has reached a limit in terms of decreasing bit area, to energy-assisted technologies – which support smaller bit areas – has stalled.

The two alternatives, HAMR (Heat-Assisted Magnetic Recording) and MAMR (Microwave-Assisted Magnetic Recording), both require new recording medium formulations and additional components on the read-write heads to generate the heat or microwave energy required. That means extra cost. So far none of the three suppliers – Seagate (HAMR), Toshiba and Western Digital (both MAMR) – has been confident enough in its technology to make the switch from PMR across its product range.

Indeed, Western Digital appears to be sitting on the fence: it introduced 18TB and 20TB partial MAMR drives in September 2019, and in July 2020 it launched an 18TB Gold-brand PMR drive and a 20TB Ultrastar DC HC650 drive using shingling (overlapped write tracks) to reach 20TB without using MAMR at all.

Wikibon analyst David Floyer said: “HDD vendors of HAMR and MAMR are unlikely to drive down the costs below those of the current PMR HDD technology.”

Due to this: “Investments in HAMR and MAMR are not the HDD vendors’ main focus. Executives are placing significant emphasis on production efficiency, lower sales and distribution costs, and are extracting good profits in a declining market. Wikibon would expect further consolidation of vendors and production facilities as part of this focus on cost reduction.”

Cloud file streaming startup LucidLink grabs $12m in latest fund raising

LucidLink, whose Filespaces software presents public cloud object storage as a local filer, has impressed venture capitalists enough to secure a $12m A-round of funding.

The company was launched in 2016 and seed funding prior to this latest cash injection totalled $7.1m. Its Filespaces technology has a software agent, and provides on-premises metadata processing and a small cache area in the local storage. Applications make file requests which are sent to the public cloud object store and serviced from there using parallel links, compression, streaming and pre-fetching to give local access speed.

Conrad Chu, a partner at venture cap fund Headline, said in a statement:  “We discovered Filespaces as a user and immediately recognised that there is nothing else like it out there. With incredible traction in the Media and Entertainment industry as well as Architecture, Engineering, and Construction, LucidLink is hitting it out of the park with this next-generation cloud file system.”

LucidLink diagram.

Adobe invested alongside Headline, as did Baseline Ventures and Bright Cap Ventures. Chu is joining LucidLink’s board.

LucidLink co-founder and CEO Peter Thompson said in a statement: “Our partnership with Adobe presents a perfect opportunity to easily enable remote production teams with the entire suite of Adobe Creative Cloud products.”

Filespaces can be combined with an on-premises Cloudian HyperStore back-end platform. Local applications see Filespaces as a NAS mountpoint; Filespaces translates their file access requests into S3 requests and feeds them to the Cloudian system. The point is that the Cloudian HyperStore system is generally more affordable than a NAS filer and, courtesy of Filespaces, provides equivalent access speed.
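To picture the general mechanism, here is a minimal sketch of the kind of partial, range-based object read such a file-streaming layer relies on, using boto3 against an S3-compatible endpoint. The endpoint, bucket and key names are made up for illustration, and this is not LucidLink’s actual implementation:

    # Illustrative only: fetch a single byte range of a large object from an
    # S3-compatible store, the way a cloud file system might service a partial
    # file read without pulling down the whole object. Assumes credentials are
    # already configured for the client.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://hyperstore.example.local",   # hypothetical S3-compatible endpoint
    )

    resp = s3.get_object(
        Bucket="filespace-demo",                 # hypothetical bucket
        Key="projects/video/raw_take_01.mov",    # hypothetical object key
        Range="bytes=1048576-2097151",           # read 1MiB starting at a 1MiB offset
    )
    chunk = resp["Body"].read()
    print(len(chunk), "bytes fetched")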

In February LucidLink announced a bundling of IBM Cloud Object Storage with Filespaces. Other supported object storage includes AWS S3, Azure Blob, Google Cloud Platform, MinIO, Nutanix Objects, Scality, Wasabi and Zadara.

The prospect is that multiple remote users and hybrid businesses with edge computing sites, connected like spokes to a public cloud hub, can all benefit from cheap and scalable public cloud storage while getting local filer access speeds by using Filespaces software to bridge the on-premises and cloud environments.

There are several other suppliers offering similar public cloud-to-edge data access and file collaboration, such as CTERA, Egnyte, InfiniteIO, Nasuni, and Panzura. They all share a basic public cloud-to-local site data access capability, implemented in their own way, while adding their own set of services on top.

Quantum will likely see LucidLink as a competitor in the entertainment and media industry, as Quantum’s StorNext file system also supports object storage back ends.

Just two weeks after launch, storage-based cryptocurrency Chiacoin drives up disk prices

Never mind the GPUs used for processing-intensive mining of Bitcoin: new cryptocoin Chia is driving up hard disk drive prices through its own resource needs, which are based on storage capacity rather than compute.

Chiacoin was launched this month. It is a bitcoin-like currency, and it needs so-called proof-of-space-time to be mined, or rather farmed, using hard disk drives and SSDs. That requires less processing than bitcoin, positioning Chiacoin as a “greener” cryptocurrency. It was devised by Bram Cohen, the American programmer who came up with the BitTorrent peer-to-peer network protocol. Chia Network is the company backing Chiacoin.

Bitcoins are mined using a so-called proof-of-work which involves vast numbers of CPU cycles and hence electricity. Chiacoin farming requires much less processing.

Chia is based on a proof-of-space-time consensus algorithm and a blockchain network. It involves a so-called Chia farmer or “prover” sending such a proof to a verifier. The proof is that some amount of disk or SSD space is actually used to store some specific data.

A Chia Business White Paper explains the initial concept. It states: “Proof of Space leverages the over-provisioned exabytes of disk space that already exist in the world today.”

A Chia FAQ supplies more information. Going deeper than this involves a great degree of mathematical complexity, covered in a Chia Green Paper.

Chia Green Paper extract

A Chia farmer loads cryptographic numbers onto their disk or SSD into a portion of the capacity known as a “plot”. A plot is associated with blocks from a blockchain; the number of blocks depends upon the percentage of the total network space the farmer has allocated. Each plot has a hash and, as the Chia blockchain extends, each farmer’s system can see, using background processing, if their hashes are a match for the next blockchain step (or challenge).

A VDF (Verifiable Delay Function) proof-of-time “Timelord” server verifies each block and awards an XCH token to the farmer for each verified block (coin). 
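For readers who want the flavour without the maths, here is a deliberately toy sketch of the “pre-computed hashes matched against a challenge” idea. It is not the real Chia construction, which uses far more elaborate cryptography, and the names and sizes are invented:

    # Toy proof-of-space lottery: a "farmer" pre-computes and stores hashes (a
    # "plot"); when a challenge arrives, whoever holds a close-enough stored
    # hash has the best claim. Bigger plots mean better odds.
    import hashlib, os

    def make_plot(entries):
        # The expensive, disk-hungry step: generate and keep lots of hashes.
        return [hashlib.sha256(os.urandom(32)).digest() for _ in range(entries)]

    def best_response(plot, challenge, nbytes=2):
        # Score the stored hashes against a challenge; lower distance is better.
        target = int.from_bytes(hashlib.sha256(challenge).digest()[:nbytes], "big")
        return min(abs(int.from_bytes(h[:nbytes], "big") - target) for h in plot)

    plot = make_plot(100_000)
    print(best_response(plot, b"block-challenge-42"))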

To find out what’s really going on, study the Chia Green Paper and its referenced documentation.

Getting back to known ground, Chia farming requires disk and/or SSD space and so wannabe “farmers” are buying up disk drives and SSDs.

A PC Gamer report claimed a 512GB SSD could be worn out by Chia farming in 40 days, and a 2TB drive in 160 days. So much writing is involved that the SSDs wear out, which encourages the use of disk drives instead of SSDs.
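The arithmetic behind claims like that is simple endurance maths. The endurance (TBW) ratings and the sustained plotting write rate below are illustrative assumptions, not figures from the PC Gamer report:

    # Days until an SSD's rated endurance (TBW, terabytes written) is exhausted
    # at a given sustained write rate. Values are illustrative assumptions.
    def days_to_wear_out(tbw_rating_tb, writes_per_day_tb):
        return tbw_rating_tb / writes_per_day_tb

    print(days_to_wear_out(tbw_rating_tb=300, writes_per_day_tb=7.5))    # ~40 days (512GB-class drive)
    print(days_to_wear_out(tbw_rating_tb=1200, writes_per_day_tb=7.5))   # ~160 days (2TB-class drive)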

A Tom’s Hardware report said Chia farming had increased demand for high-capacity disk drives, and retail/etail prices rose by $100 to $300 in the first half of May.

For example, a Toshiba X300 12TB drive cost $320 in April and now costs between $498 and $506. A Western Digital Gold 12TB cost about $340 in March; WD sells it now for $416 and it’s $440 on Amazon. A 14TB Gold drive cost $410 in March and is now priced at $527 to $630 on Amazon.

Shares in both Seagate and Western Digital have risen recently. On May 12 Seagate shares were priced at $84.05. They are now valued at $102.29, a 21.8 per cent rise in six days. May 12 saw Western Digital shares priced at $64.38.  The current price is $74.96, meaning there has been a 16.4 per cent increase in six days.

What will happen now? Are we in a Chiacoin bubble? Does this thing have wings? The heck we know: it may be better for the environment, but it’s definitely not great news for those who need to buy disk drives.

I always feel like somebody’s watching me: WD releases higher-rated Pro version of Purple surveillance HDD

Western Digital has launched a Pro version of its Purple surveillance system drive with a longer warranty and enhanced workload rating.

The aim is to support the more powerful video camera, network video recorder and video analytics systems now being produced to process an ever-increasing amount of video surveillance imagery.

Brian Mallari, director of marketing for Smart Video at Western Digital, said: “With the addition of WD Purple Pro, our full portfolio of smart video products covers customers’ needs from dedicated WD Purple microSD cards for cameras to WD Purple hard drives for mainstream NVRs and the new generation of smart video architectures.”

WD Purple Pro 18TB drive.

He also claimed it could help “original equipment manufacturers and integrators to evolve their systems for emerging AI workloads.”

Capacities

The all-SATA Purple HDD range is divided into low-end 1TB to 6TB drives, which spin at 5,400 to 5,700rpm, support 64 cameras and have a 180TB/year workload rating, and 8TB to 18TB drives, which spin faster at 7,200rpm and support 64 cameras plus 32 video streams for deep learning analytics. These have a higher 360TB/year workload rating.

The Purple Pros have the same 8TB to 18TB capacity range and 7,200rpm spin speed as the upper-end Purple HDDs, but a still higher 550TB/year workload rating. They also have a five-year warranty compared to the Purple drives’ three-year warranty. The 8TB and 10TB products are air-filled drives, while the 12TB, 14TB and 18TB products have sealed, helium-filled enclosures. The helium-filled drives have a 2.5 million hours MTBF rating; the air-filled drives are rated at 2 million hours.

Purple Pros have a WD Device Analytics (WDDA) feature that provides device parametric operational and diagnostic data. Algorithms interpret the data and direct the host system to alert admin staff with specific recommendations to address potential issues. The idea is that WD OEMs, system integrators, and IT pros can better monitor and manage supported storage devices using the drives.  

Seagate has a competing SkyHawk 18TB drive for the video surveillance market. It supports the same number of camera and analytics streams as the Purple Pro and comes with an identical 550TB/year workload rating, but it has only a three-year warranty. Toshiba’s MD06 surveillance drives top out at 10TB and are not in the Purple Pro capacity class.

WD Purple Pro drives will be available this quarter from Western Digital resellers. We understand a 10TB model will cost $337.99 but may be available for as little as $264.34. No datasheet was available at publication time.

Rubrik aims auto-VMware app recovery software at orgs smacked by file-encrypting ransomware

Data protection firm Rubrik has pushed out a Cloud Data Management software release that automates the process of application recovery from ransomware attacks, and also protects Kubernetes-orchestrated containers.

The product was rolled out at Rubrik’s Forward virtual conference this week, which features presentations from high-profile people including tennis champ Venus Williams. Snowflake CEO Frank Slootman, NetApp CEO George Kurian, and VC Vinod Khosla will also be speaking. Rubrik signed up NetApp as a reseller in April.

Khosla has been a Rubrik investor since 2016 via his Khosla Ventures operation. Slootman ran Snowflake’s impressively successful IPO, perhaps an inspiration to Rubrik CEO Bipul Sinha. Williams is a former world number one tennis player in both singles and doubles and is still in the top 100 on the WTA circuit.

From left to right: Venus Williams, Frank Slootman, George Kurian and Vinod Khosla

Rubrik president Dan Rogers said: “There has never been a greater need to protect and quickly recover from rising cyber threats like ransomware, which are devastating businesses on a daily basis. Rubrik … [allows] for quick recovery from attacks and protection of precious IP, no matter where the data is stored.”

AppFlows automated ransomware recovery

The new software release adds a Polaris AppFlows disaster recovery (DR) facility for on-premises VMware environments to a second site or VMware Cloud on AWS. Rubrik says it enables IT teams to use existing backup systems they’ve already paid for. This obviates the need to deploy and manage separate DR infrastructure. AppFlows uses application blueprints that specify app resource mapping and workload dependencies to enable failover in the event of a data centre outage. 

Rubrik AppFlows diagram

If a ransomware attack encrypts VMware applications, a Radar facility will identify the impacted VMs within blueprints. It will then present a way for entire applications or business services to be recovered to a point in time before the encryption event. The whole process uses an AppFlows-enhanced workflow which presents a global view of the customer’s data estate, so there is no need to triage and remediate individual VMs. The idea is that IT teams can identify affected workloads and files in a single process.

Failover and failback processes are fully automated, and Rubrik will add local, in-place recovery for ransomware attacks. Of course, this is only useful if you actually switch it on, as the University of California San Francisco discovered last year.

Other additions

The new software release also adds:

  • Integration with automation frameworks, such as Palo Alto Networks Cortex XSOAR and ServiceNow Incident Response
  • Two-factor authentication
  • Data risk management with Sonar user behaviour analysis to determine who is accessing, modifying or adding files
  • Data Management for Kubernetes to protect applications and stateful data through the Polaris SaaS platform
  • Rubrik-hosted backup and recovery for Microsoft 365 applications, including SharePoint and Teams
  • New Network Attached Storage (NAS) Cloud Direct built on Rubrik’s acquisition of Igneous IP and technology, protecting and rapidly recovering petabyte-scale NAS systems
  • Automated discovery and backup for SAP HANA on public cloud and on-premises
  • Intelligent cloud archival to optimise costs of public cloud object-based storage
  • Backup from NetApp SnapMirror
  • Consistent point-in-time backups for Nutanix Files and Live Mount for Nutanix AHV backups
  • Offload backups to Oracle Data Guard standby databases with failover/switchover awareness
  • Delivery of policy-driven backups for Cassandra databases via Polaris
  • Incremental backups for vSphere Metro Storage Cluster (vMSC) 

Read more about AppFlows in a datasheet.

I like to move it: Hitachi Vantara embraces Buurst’s Fuusion data mover

Hitachi Vantara has formed a “strategic partnership” with Buurst, the rebranded SoftNAS, and has adopted its Fuusion data mover product for the Hitachi Virtual Storage as a Service (VSTaaS) and Hitachi Kubernetes Service offerings.

The pair spoke of the deal in a May 14 Hitachi Vantara and Buurst webinar. The two companies also hosted a webinar on Monday, May 17, in which they talked about hybrid cloud acceleration, unifying compute and storage with Kubernetes for the hybrid cloud.

Hitachi Vantara and Buurst May 14 webinar tweet

Buurst built Fuusion on the open source Apache NiFi project. It is a bi-directional data pipeline connecting edge and cloud. You can deploy it on bare metal, in virtual machines or, in the future, containers. It supports AWS, Azure, GCP, and VMware, and operates across any network link.

The Fuusion software accelerates network traffic and ensures no loss of data, even if the network goes down for a period of time. It provides a chain of custody for migrated data. Management is central.

The VSTaaS webinar registration page says: “Buurst Fuusion in [a] strategic partnership with Hitachi Virtual Storage as a Service delivers the best in breed Data Transfer Management and Data operations in the Industry. Hitachi Virtual Storage as a Service with Buurst Fuusion ensures data delivery in Hybrid Cloud deployments will be bullet proof.”

Data highway

As part of the deal, the Hitachi Kubernetes Service will use Buurst Fuusion to deliver centralised data management and bi-directional data operations transfer across the hybrid cloud.

Fuusion is something like a data highway and spans on-premises data centre, co-location sites like Equinix, and public clouds. It competes with data moving technologies from Bridgeworks, Cirrus Data Solutions, Datadobi and WANdisco.

A slide from the VSTaaS webinar shows deployment of Buurst software on-premises, in Equinix colos, and in the public cloud:

VSTaaS webinar slide

Buurst

CEO Vic Mahadevan runs the company. He was board chairman from August 2019 and became CEO in November 2020, replacing Garry Olah, who had taken the CEO role when Mahadevan became chair. The firm rebranded SoftNAS as Buurst during Olah’s tenure.

Buurst Fuusion diagram

Marc Palombo, Buurst’s chief revenue officer, told us: “Garry was a first-time CEO and it showed in some initiatives. As a result, the board voted to move Vic into the CEO role in the fall in order to get the company back on track to its forecasted deliverables. Buurst is now poised to drive significant growth in 2021 with its new key strategic global partnership with Hitachi around data transfer management and more significant news to come.”

Western Digital and computational storage: ‘We believe it will happen’

Disk drive and SSD supplier Western Digital has spoken about computational storage use cases for SSDs and the need for a standard drive-host interface for computational storage drives (CSDs), indicating that it is not a question of if but when they become mainstream.

Computational storage aims to overcome the time issue created when data is moved to computation by moving a slice of compute to the data instead. One example of such a time issue would be when reading data from storage into a host server’s memory. There are two main ways to move compute to the data: at the array level and at the drive level.

Array level

Computation at the array level was tried by Coho Data, and that attempt failed in 2017 because there simply weren’t enough use cases to justify the product.

Dell EMC’s PowerStore has an AppsOn feature to enable virtual machines to run in the array’s controllers. In theory it can be used to run any workload on the array. We don’t yet know how popular this idea is.

One issue with array-level computation is that the data still has to be read into main memory, controller main memory in this case. There is no need for a network hop to a host server, but data movement distance has only been shortened, not abolished.

Drive level

Drive-level or in-situ computational storage carries out the processing inside a drive’s chassis, with an embedded processor operating almost directly on the data. There are four startups active in this area: Eideticom, NGD, Nyriad and ScaleFlux. Seagate may also be active.

ScaleFlux computational storage drive

Typically that adds an Arm processor, memory and IO to a card inside the drive enclosure. Seagate is basing its efforts on the open source RISC-V processor, as is Western Digital. But WD has also invested in NGD, having two CPU irons in the computational storage fire so to speak.

Richard New

Although WD would not be drawn on any moves on its own part to manufacture CSDs, the potential extent of Western Digital’s computational storage ambitions became evident in a briefing with Richard New. New is VP of Research at the company, and the person who runs WD Labs. 

Use cases

New described the features of computational storage use cases using SSDs:

  • The application is IO-bound and that needs alleviating.
  • No need for significant computation, only a fairly simple compute problem that needs to be done quickly, encryption for example.
  • Data needs filtering with a search algorithm. The IOs sent to the host after filtering represent a fraction of the overall data set.
  • A streaming data application that needs to touch every byte that is written to the storage device, like encryption and compression.

Other potential applications include image manipulation and database acceleration. New suggested video transcoding is a less obvious application.

In these use cases, unless the in-situ processing is transparent to the host server, the host needs to know when drive-level processing starts and stops. That requires a host-CSD interface.

NGD Newport CSD.

Host-drive interface

New said there needs to be a standard drive-host interface, and the NVMe standards organisation is working on one. In fact the NVMe standard is properly known as the Non-Volatile Memory Host Controller Interface Specification (NVMHCIS).

There is an NVMe Computational Storage Task Group with three group chairpersons: Eideticom’s Stephen Bates, Intel’s Kim Malone, and Samsung’s Bill Martin. The task group’s scope of work encompasses discovery, configuration and use of computational storage features inside a vendor-neutral NVM Express framework. It has more than 75 members, over 25 of them suppliers.

The point of having a standard would be to enable a host server to interact with a CSD. The host system would then know about the status of a CSD and the work it could do. New suggested a general purpose script file could be used. He said: “We need such a standard” to bring about broad adoption of CSDs.

There is also an SNIA workgroup looking into computational storage.

RISC-V

The CSD processor has to be purpose-built, and New said: “RISC-V gives Western Digital more flexibility in doing this. You can target specific computational problem types.”

For example you might combine RISC-V cores with ASICs and develop orchestrating software. In New’s view: “You need to optimise for a particular use case.”

We note that Arm processors have to use IP licensed from Arm, whereas RISC-V is open source and free of such restrictions.

Challenges

New mentioned four challenges he thinks stand in the way of broader CSD adoption:

  • How do you pass context to the device?
  • A file system: how is it implemented on the device?
  • How can a key be passed to the device to enable it to decrypt already-encrypted data?
  • How can it cope with data striped across multiple drives?

We would add the challenge of programming the device. Does it run firmware? Does it have an OS? How is application code developed, compiled or assembled, and loaded onto the device?

New is confident CSD will be adopted: “We believe it will happen but it will take some time for the use cases to be narrowed down and standards to be set.”

ScaleFlux CSD componentry

Comment

It seems to us at Blocks & Files that a use case justifying CSD development and deployment will have to involve tens of thousands of drives, if not more. Without that, CSDs will never progress beyond being highly customised products sold in low numbers into small application niches.

This use case will involve, say, hundreds of thousands of files or records needing to be processed in pretty much the same way. Instead of a host CPU processing one million records read in from storage, which takes 2 minutes, you would have 1,000 CSDs each processing 1,000 records. The latter would complete in 20 seconds – because there is no data movement. Each CSD tells the host server that it has completed its work and then the host CPU can kick off the next stage of the workflow. It’s this kind of calculation that could drive CSD adoption – if such use cases exist in the real world. Perhaps the Internet of Things sector will help generate them.
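The back-of-envelope comparison in that paragraph works out as follows; the per-drive processing rate is an assumption chosen to match the illustrative 20-second figure:

    # Host-side: one CPU pipeline reads in and processes 1,000,000 records serially.
    # CSD-side: 1,000 drives each process their local 1,000 records in parallel,
    # so elapsed time is set by one drive's share of the work, not the whole set.
    RECORDS = 1_000_000
    HOST_SECONDS = 120                    # 2 minutes for the host to do everything

    CSD_COUNT = 1_000
    CSD_RECORDS_PER_SECOND = 50           # assumed per-drive rate (far slower than the host CPU)
    csd_seconds = (RECORDS / CSD_COUNT) / CSD_RECORDS_PER_SECOND

    print(f"host: {HOST_SECONDS}s, CSDs in parallel: {csd_seconds:.0f}s")   # 120s vs 20s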

If they did, Western Digital could sell tens of thousands of computational storage SSDs. We think it would want to sell hundreds of thousands. B&F does not think that WD will build its own RISC-V-powered CSDs and hope the customers will come. It will surely want to see solid signs of sustained customer adoption of CSDs and an ecosystem of CSD developers before it will start building its own branded CSDs for sale through such an ecosystem. 

We are all, like WD, watching the CSD startups to see if they gain traction. They can take on most of the “build it and they will come” effort and pain, with WD and its peers stepping in after that traction has been demonstrated.

Kioxia revenues rising, demand wind stronger: No news yet on IPO or acquisition

NAND and SSD supplier Kioxia is reporting a rise in revenues and underlying profitability as market demand strengthens, but there is no news on either a resumed IPO or an acquisition.

The company filed for an IPO in August last year but the US-China trade dispute derailed it. Market speculation about Western Digital and Micron acquisition bids for Kioxia surfaced in April.

Kioxia’s fourth quarter of fy21, ended 31 March, showed a 10.2 per cent Y/Y revenue rise to ¥294.7bn ($2.7bn), with a loss of ¥21bn ($190m), contrasting with the year-ago ¥9.8bn ($90m) profit. Kioxia sold more SSDs into data centres and for use as desktop/notebook drives, which drove the topline rise despite a seasonal decrease in smartphone NAND sales.

Full year revenues rose 19 per cent, in line with the market, for fiscal 2021 ended 31 March, hitting ¥1,178.5bn (c $10.79bn) as the NAND glut eased throughout the year.

Full year net income was a loss of ¥24.5bn (c $220m), a great improvement on the ¥166.7bn ($1.53bn) prior-year loss. A company statement said profitability improved significantly, with a return to positive operating income due to cost reductions from OPEX management. There was also a move to 96-layer 3D NAND (BiCS 4) production, with a lower cost/TB than the previous 64-layer product.

Shipments and builds

Kioxia saw high single-digit per cent average selling price (ASP) declines in both Q3 and Q4 of the fiscal year. Quarter-on-quarter bit shipments, however, have grown from a low single-digit increase in Q3 to a mid single-digit increase in Q4. If the ASP decline can be reduced further, or halted, and bit shipments rise at the same or a higher rate, Kioxia can move into profit.
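The mechanics are simple: revenue change is roughly bit-shipment growth multiplied by ASP change. The percentages below are hypothetical examples in the ranges described, not Kioxia’s actual figures:

    # Approximate quarter-on-quarter revenue change from bit growth and ASP change.
    # Input percentages are illustrative, not Kioxia's reported numbers.
    def revenue_change(bit_growth_pct, asp_change_pct):
        return ((1 + bit_growth_pct / 100) * (1 + asp_change_pct / 100) - 1) * 100

    print(f"{revenue_change(5, -8):+.1f}%")   # mid single-digit bit growth, high single-digit ASP decline
    print(f"{revenue_change(5, -2):+.1f}%")   # same bit growth with the ASP decline easing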

The company is expanding its NAND manufacturing capacity by building a seventh fab at its Yokkaichi facility.

Kioxia is developing its sixth generation (BiCS 6) 162-layer 3D NAND products to lower manufacturing cost further. It sees data centre and client SSD demand staying strong. It predicts smartphone NAND demand will rise as 5G models become popular. Kioxia will be hoping this means more NAND and SSD sales with stronger pricing, enabling the firm to make a profit.

IPO, acquisition and amalgamation

A report in Japanese biz daily the Asahi Shimbun said Kioxia’s majority shareholder, Bain Capital Private Equity, had no plans to sell its holding. Yuji Sugimoto, who heads Bain ops in Japan, said Kioxia’s IPO will be brought forward ASAP.

He believes there will be an amalgamation of NAND producers. He also said governments will have to be involved to help make it happen.

There are six major NAND producers: Intel, Kioxia, Micron, Samsung (the industry leader), SK Hynix, and Western Digital. SK Hynix is buying Intel’s NAND foundry and SSD operations, which will bring the number down to five.

Kioxia and Western Digital have a joint flash foundry venture. Potential Kioxia buyers appear to be Micron, Western Digital and Samsung, as SK Hynix already has Intel’s NAND and SSD interests in its grasp. It may be easier for Japan, where Kioxia is based, to agree an acquisition by a US company than a Korean one, partly due to historical enmity between Japan and Korea based on events in the Second World War.

Bain Capital may be thinking that bidding for Kioxia in the open market following an IPO could generate a better price than private sale bids before any IPO takes place.