
High-capacity disk shipments drive Western Digital revenues to three-year high

18TB nearline disk drives led Western Digital’s revenue charge in its beat-and-raise fourth fiscal 2021 quarter, with gross margin improvements helping profit more than triple year-on-year.

Revenues were $4.9 billion in the quarter ended 31 July — up 15 per cent compared to last year — with profits 216 per cent higher than a year ago at $622 million. Full-year revenues were $19.9 billion — an increase of just one solitary per cent over fiscal 2020. That did produce profits of $821 million — a fine turnaround from last year’s $250 million loss.

CEO David Goeckeler enthused in his statement: “I am extremely proud of the outstanding execution our team exhibited as we achieved another quarter of strong revenue, gross margin and EPS results above expectations.” He also mentioned WD’s “unique ability to address two very large and growing markets” — referring to flash and disk drives.

In fact, believe it or not, disk drive revenues overtook flash revenues this quarter: 

Sector views

The increase was led by WD’s data centre business which, at $1.7 billion, grew faster than its client solutions ($977M) and client devices ($2.1B) businesses, as a second chart clearly shows:

The client devices sector experienced broad-based strength across nearly every product category, with better than expected demand for notebook and desktop HDDs, as well as flash-based solutions. It also saw robust demand for gaming, smart video, automotive, and industrial applications.

In the data center devices & solutions area, WD achieved a record shipment of over 104EB in enterprise capacity hard drives — a 49 per cent sequential increase. The 18TB energy-assisted hard drive was the leading capacity point, accounting for nearly half of its capacity enterprise shipments. Enterprise SSD demand strengthened, with stronger than expected sales of NVMe SSD, as WD completed a qualification at another cloud titan.

Client solutions experienced greater than seasonal demand, resulting in sequential growth for both HDD and flash-based solutions.

WD had ceded nearline high-capacity disk market share to Seagate over the last few quarters, and it now appears to be regaining the lost ground. Wells Fargo analyst Aaron Rakers said Seagate shipped 101.4EB of nearline capacity and Toshiba around 34.2EB, so WD was the lead nearline shipper in the quarter.

Financial summary:

  • Gross margin — 31.8 per cent vs 26.4 per cent in prior quarter;
  • EPS — $1.97 vs $0.63 in prior quarter;
  • Operating expenses — $790M vs $713M in prior year;
  • Operating activity cash flow — $994M vs $172M a year ago;
  • Free cash flow — $792M vs $261M a year ago;
  • Cash and cash equivalents — $3.4 billion.

Earnings call

In the earnings call Goeckeler confirmed the high-cap disk drive recovery: “The [revenue] upside was primarily driven by record demand for our capacity enterprise hard drives.”

He amplified this by saying: “We had our highest organic sequential revenue growth in the last decade, driven by the successful ramp of our 18-terabyte energy-assisted hard drive, growing cloud demand, a recovery in enterprise spending and, to a lesser extent, cryptocurrency driven by Chia.”

The gross margin increase was helped by lower costs. CFO Bob Eulau said: “We had very good cost takedowns in Q4 on both the hard drive side and on the flash side.”

The guidance for the next quarter is affected by concerns about the COVID pandemic’s effect on supply chains in Asia, so the sequential rise is lowish. WD expects revenues between $4.9 billion and $5.1 billion. A mid-point of $5 billion compares to $3.9 billion a year ago — a 28 per cent rise. Let the year-on-year compare good times roll.
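For readers who like to check the arithmetic, the year-on-year figure falls straight out of the guidance mid-point. A trivial Python sketch, using only the dollar figures quoted above:

    # Checks the year-on-year growth figure quoted above, using the guidance range.
    def yoy_growth_pct(current_bn: float, prior_bn: float) -> float:
        return (current_bn / prior_bn - 1) * 100

    midpoint_bn = (4.9 + 5.1) / 2                       # guidance mid-point: $5.0bn
    print(f"{yoy_growth_pct(midpoint_bn, 3.9):.0f}%")   # ~28% year-on-year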

Pensando pulls in more DPU development cash

Data Processing Unit chip startup Pensando has had another $35 million invested in it, completing a C-round fund raise which had previously attracted $145 million in October 2019. 

Pensando has developed its Arm-powered Distributed Services Card which connects to a server across a PCIe interface. The Data Processing Unit (DPU) card offloads and accelerates networking, storage and management tasks from its host, freeing up the host CPU to run application workloads instead of infrastructure-focused tasks. 

The latest investors were Ericsson, Qualcomm and Liberty Global Ventures. Total funding now stands at $313 million.

Pensando’s marketing head, Christopher Ratcliffe, told us: “These three investors made up the majority of the investment. Since we emerged from stealth we’ve been working closely with a broad range of service providers to expand the capabilities of the Pensando platform for the 5G market.”

This is an important market sector for Pensando, and Ratcliffe said: “As more vendors take full advantage of the 5G NR (New Radio), more and more services are being pushed to the edge and deployed in a virtualised and scale-out manner similar to that employed by public cloud providers. Given the capabilities of our platform, this presents an interesting growth opportunity for us. In addition to helping us scale our business, these investments bring us partners with a deep understanding of 5G technologies and requirements, as well as strong ecosystem relationships in every major market from healthcare to retail, automotive and manufacturing.”

We asked him about Pensando’s headcount and revenue growth, and he replied: “In terms of employees and revenues we’re currently at over 300 employees and continue to hire across a range of roles in our HQ locations in Silicon Valley and Bangalore as well as various remote locations. 2020 was our first full year out of stealth and we exceeded our revenue expectations despite the global pandemic. 2021 is looking very good at this point.”

How about the status of customers and partners?

“We have a broad range of Fortune customers in deployment at this point. We’ve publicly discussed our relationship with Goldman Sachs among a number of other financial institutions and are very engaged via our partnerships with HPE, Dell, NetApp and VMware. We also have a number of major cloud providers deploying at scale that we should be able to talk about in the near future.”

We think that Pensando’s growth results in 2020 triggered the late C-round investments, and look forward to hearing more about its 2021 activities and growth later this year.

Lenovo unveils new NetApp entry-level all-flash arrays

Lenovo has announced two entry-level all-flash arrays, one with NVMe drives and the other SAS, with a sub-$15,000 starting price.

Update: 6 Aug 2021. Justification for Lenovo’s first-to-market claim ahead of NetApp added. Efficiency point explained.

The DM5000F is a capacity-optimised 2Ux24 small form factor slot box scaling up to 2.2PB of raw capacity using 144x 15.36TB SAS SSDs, spread across a base enclosure and 5x 2Ux24 expansion cabinets. The faster but lower-capacity DM5100F uses NVMe SSDs, supports NVMe/FC, and can only have a single expansion cab. And it has faster Ethernet or Fibre Channel ports, as a basic speeds’n’feeds table shows:
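As a quick sanity check on the 2.2PB raw capacity claim, here is a minimal Python sketch using the drive count and per-drive capacity quoted above (decimal terabytes assumed):

    # Sanity check of the DM5000F raw capacity figure; decimal units assumed (1PB = 1,000TB).
    drives_per_enclosure = 24
    enclosures = 1 + 5                  # base enclosure plus five expansion cabinets
    drive_capacity_tb = 15.36           # 15.36TB SAS SSDs

    total_drives = drives_per_enclosure * enclosures            # 144
    raw_capacity_pb = total_drives * drive_capacity_tb / 1000   # ~2.21PB

    print(f"{total_drives} drives -> {raw_capacity_pb:.2f}PB raw")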

Marco Pozzoni, Director, EMEA Storage Sales at Lenovo ISG, said: “Traditional storage solution packages come with all capabilities built-in, raising the costs and making storage expensive for all businesses. We’ve decided to address this issue and offer solutions that help companies, irrespective of the growth stage they are at.”

The systems include deduplication and compression, and Pozzoni explained: “Our tests have achieved efficiency rates of up to 20:1 when running virtual machines, and up to 3:1 in other more data-intensive environments. These ratios dramatically lower the total cost of ownership of data storage.” (See bootnote below.)

Lenovo DM5000F.

We think the two Lenovo arrays are based on OEM NetApp AFF A250 hardware and ONTAP software.

First to market

The arrays can be purchased as block-access-only and then upgraded to add file and object (S3) access, with Lenovo claiming it is “the first to market with complete upgrade paths from block-only to Unified (block, file and object) storage”.

Lenovo told us: “The Lenovo DM storage system is based on the NetApp ONTAP operating system. NetApp AFF and FAS models are only offered using a unified bundle that includes SAN and NAS functions, like Lenovo’s premium bundle. NetApp do also offer an ASA (All SAN Array), but do not support upgrading ASA models to include file services.

“The Lenovo SAN-only software offering is not the same as NetApp’s ASA, which uses a modified version of ONTAP that can’t be upgraded. So the Lenovo SAN-only version is unique to Lenovo, and unique in its ability to be upgraded. Whilst Lenovo utilise ONTAP, our SAN-only and fundamentals solutions are not available to NetApp customers.”

Primary features

Lenovo says the primary capabilities of its new arrays include:

  • Performance: consistent low-latency controls via adaptive Quality of Service (QoS) and service-level provisioning to cater for additional workloads like AI and data analytics;
  • Cloud integration: facilities for backup, disaster recovery (DR), automated data tiering and burst workload management;
  • ThinkSystem Intelligent Monitoring: predictive analytics and machine learning algorithms to uncover risk factors and opportunities to improve system health, availability and security;
  • Data protection: built-in backup/restore and disaster recovery, integrated with third-party software including Veeam and Commvault;
  • SnapMirror Business Continuity: non-disruptive failover for active-active cross-site clusters, ensuring data continuity;
  • System availability: six nines or better, including during planned activities and unplanned events;
  • Security: regulatory compliance and protection against unauthorised data access, including in-flight and at-rest encryption.

Lenovo’s channel now has a neat pair of entry-level all-flash arrays designed for capacity or performance, and can pitch datasheets at prospective customers. Read the DM5000F one here and the DM5100F one here.

Bootnote.

On the efficiency point, Lenovo said the 20:1 ratio is typically seen in environments like virtual desktops. It added: “Unlike all of our closest competitors, the Lenovo 3:1 All Flash Storage Efficiency Guarantee does not include all of our storage efficiency software, so customers can expect to see vastly improved efficiencies if they use them. 3:1 is guaranteed for the actual data savings in a virtual server environment.

“An example would be that if we add Thin Provisioning, this doubles the efficiency, so 3:1 becomes 6:1. If we then include our Snapshots, this would increase the efficiency by a factor of ten, so 60:1. Lenovo’s 3:1 all-flash guarantee can be offered in writing, so that if the sold system falls below this level, Lenovo will give the customer the additional required capacity free of charge.”
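To make the arithmetic in that example concrete, here is a minimal Python sketch of how the quoted multipliers compound. The 2x and 10x factors come from Lenovo’s example above, not from any measured workload:

    # Illustrative only: compounds the efficiency multipliers quoted in Lenovo's example.
    base_ratio = 3             # 3:1 guaranteed data-reduction ratio
    thin_provisioning = 2      # thin provisioning is said to double efficiency -> 6:1
    snapshots = 10             # snapshots are said to add another 10x -> 60:1

    effective_ratio = base_ratio * thin_provisioning * snapshots
    print(f"Effective efficiency: {effective_ratio}:1")     # 60:1

    # In capacity terms: 10TB of raw flash at 60:1 would present ~600TB of effective capacity.
    raw_tb = 10
    print(f"{raw_tb}TB raw -> {raw_tb * effective_ratio}TB effective")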

Two WekaIO hires: an exec chairman and a president

Furiously fast filesystem startup WekaIO has hired Amit Pandey as Executive Chairman and Jonathan Martin as President, following the abrupt departure of President and Chief Revenue Officer Ken Grohe in May.

Grohe left after less than 12 months in the post, and has been taken on as an SVP and Chief Revenue Officer at IBM-acquired Taos — a consultancy and managed services company. He works in the cloud adoption and DevOps areas.

WekaIO CEO and Co-Founder Liran Zvibel issued a statement saying: “Amit’s proven track record of leading highly innovative organisations through hypergrowth is perfect for our next stage, and Jonathan’s intimate understanding of customer needs at the most successful companies in this space are the perfect combination to ready us for Weka’s next phase of growth. I welcome Jonathan to the presidency and look forward to working with him and Amit to accelerate Weka’s future endeavors.”

Amit Pandey.

Pandey was the CEO of Avi Networks from September 2015 to June 2019, when VMware bought the company. He then served as VP of NSX Services in VMware’s Network and Security Business Unit.

Before that he was CEO at Zenprise from April 2012 to March 2014, when Zenprise was acquired by Citrix. Prior to that he was CEO at Terracotta from July 2006 to November 2011, when Terracotta was bought by Software AG.

There’s something of a pattern here — three CEO roles leading to three acquisitions. It stands out, doesn’t it?

WekaIO says he will be leading it through its next phase of growth and will oversee administrative and operational functions, but we are not told which ones. Pandey has not become Weka’s CEO — that post is held by Liran Zvibel — but he is Executive Chairman, which is not a hands-off role and is sometimes associated with a coming CEO transition. Is some form of transition in the future?

A Pandey statement said: “The beauty of WekaFS is that it offers the simplicity of NAS, the performance of SAN or DAS, and the scale of object storage … With its unique software-defined architecture, customers can run on-premises, natively in the cloud, or orchestrate their data effortlessly between the two.” 

Jonathan Martin.

The other exec-level hire, Jonathan Martin, was the Chief Marketing Officer at Hitachi Vantara from March 2019 to May 2021, the CMO at Pure Storage from July 2015 to July 2017, and the CMO at EMC from March 2014 to June 2015. This guy is a CMO through and through, but he does not have the CMO title at Weka. 

His responsibilities will be overseeing Weka’s marketing, customer success, and global go-to-market initiatives — a CMO role in all but name. A statement by Martin said: “The company has enjoyed tremendous success over the years with plenty of avenues still left to explore. Weka has a data platform that delivers the simplicity, speed, and scale that modern enterprises need to achieve positive business outcomes. The combination of this leading technology with a successful partner infrastructure in place makes for an ideal situation for further success.”

There is no Chief Revenue Officer at Weka. The nearest position to that would seem to be Sales VP Andrew Perry.

Weka’s announcement said it saw “significant market momentum in 2020 and has achieved significant growth fuelled by its new customer adoption in the US and across Europe and Asia.” If we were picky, we might wonder why nothing was said about market momentum in the first half of 2021. Perhaps that was a factor in Grohe’s departure.

Best wishes to both new execs — it will be interesting to see if the company makes any changes to its business, product and marketing strategies as a result of having these two on board.

Cohesity fiscal quarter and year blowout — great work, Cohesians

Cohesity CEO Mohit Aron celebrated a standout quarter and fiscal year end in a tweet, but details are nowhere to be found. Cohesity’s PR people are keeping shtum.

Aron sent out a tweet about the quarter:

We read this and assumed it meant the fiscal fourth quarter and fiscal year revenues were nicely high, and that Cohesity had won some impressive new customers. It must have been a good quarter, because the third quarter was a record one. That meant a phone call to Cohesity’s marketing communications/PR person and a request to say more.

Answer came there none. There was nothing to say beyond what the CEO had tweeted. No details. No hints.

I mean come on, give me a break, talk about Fortune 500 customer take-up? Nope. Tell me about Cohesity’s growth rate? Can’t do that. Has your customer count increased? Can’t say. Has your head count risen? No comment. Is Cohesity profitable? We don’t reveal things like that. 

Will Cohesity put out a release? Maybe. Possibly. Probably. 

Given what Cohesity released about its third quarter in June, after the quarter ended in May, we do expect a detailed release in the next few weeks.

The “Cohesians” term is a nice touch, emphasising the company-as-team idea. Cloudera has Clouderans. Commvault has CommVaulters. Google has Googlers. Nebulon has Nebunerds. Pure Storage has Puritans. VAST Data has Vastronauts. Veeam has Veeamers.

But the close-mouthed Cohesian I spoke to clammed up. And that’s what we have ended up writing about, because there’s this absolutely blow-out Cohesity quarter and we can’t say a single thing about it.

Note. Fortunately Snowflake does not have Snowflakes and nor does Amazon have Amazonians. Box does not employ Boxers — but Cloudian may well employ Cloudians.

Gartner: customers need to prioritise ransomware, and Commvault rules the roost

Gartner’s latest Critical Capabilities for Enterprise Backup and Recovery Software Solutions report [purchase required] looks at a dozen vendors and says customers should prioritise recovery from ransomware attacks, protecting data stored on-premises, in SaaS applications, and in public cloud IaaS services. Commvault is the top vendor.

The report recommends that backup software should support replicating on-premises backup data to the public cloud and automatically tiering it there to lower-cost archival storage. Backup software should provide automated disaster recovery — particularly large-scale ransomware attack recovery — and it should also protect edge location data.

Ranga Rajagopalan, VP of Products at Commvault, issued a quote: “We’re thrilled to be the only vendor with the highest scores across the use cases of data centre, cloud and edge environments, for the second year in a row, after having been named a Leader in the Gartner Magic Quadrant for Enterprise Backup and Recovery Software Solutions for the 10th time.”

Gartner’s analysts looked at three location-based use cases: data centres, public cloud (SaaS, IaaS and PaaS), and edge locations. They judged vendors’ suitability across 13 separate capabilities, including scalability, ecosystem integration, ransomware and DR orchestration.

The data centre use case vendors in product score order are: Commvault, Veritas, Rubrik, Cohesity, Veeam, Dell EMC, IBM, Druva, Zerto (HPE), Unitrends, Acronis and Micro Focus.

The cloud environment use case vendors in product score order are: Commvault, Veritas, Rubrik, Cohesity, Veeam, Druva, IBM, Unitrends, Acronis, Dell EMC, Micro Focus and then Zerto.

The edge environment use case vendors in product score order are: Commvault, Rubrik, Cohesity, Veritas, Veeam, Dell EMC, Druva, IBM, Zerto, Unitrends, Acronis and Micro Focus.

That’s like a grand slam for Commvault, with consistently strong placings for Rubrik, Cohesity, Veritas and Veeam as well.

Gartner points out that Acronis, Dell EMC PowerProtect Data Manager, Micro Focus Data Protector, Unitrends and the Zerto Platform all performed below average in all three use cases, with IBM Spectrum Protect Plus just below average in the three use cases.

NVIDIA/VMware/Dell Project Monterey server offload early access program

NVIDIA, Dell and VMware have started a Project Monterey Early Access Program so customers can explore whether servers offloaded with NVIDIA’s BlueField-2 SmartNIC can run applications faster. If this goes well, other hypervisor and server suppliers will look to start Monterey-me-too programs.

The EAP is based on Dell R750 PowerEdge servers fitted with BlueField-2 SmartNICs running VMware’s ESXi hypervisor on their Arm CPU cores. The idea is that low-level data-centric tasks to do with hypervisors, networking, security and storage are executed on the BlueField-2 card.

A blog by Motti Beck, Senior Director of Enterprise Market Development at NVIDIA Networking Mellanox, announced the EAP: “AI and other compute-intensive workloads require real-time data streaming analysis, which, along with growing security threats, puts a heavy load on server CPUs. The increased load significantly increases the percentage of processing power required to run tasks that aren’t an integral part of application workloads. This reduces data center efficiency and can prevent IT from meeting its service-level agreements.”

He provided a diagram showing a straightforward transfer of infrastructure management and SW-defined security, storage and networking to BlueField-2, called a DPU (Data Processing Unit):

BlueField-2 is a Mellanox system-on-chip (SoC) card featuring an array of eight 64-bit Arm cores, a ConnectX-6 Dx ASIC network adapter, a PCIe Gen-4 x16 lane switch, and two 25/50/100GbitE ports or one 200GbitE port. Its acceleration engine hardware provides a crypto engine for IPsec and TLS cryptography, integrated RDMA and NVMe-oF acceleration, and data deduplication and compression.

BlueField-2 card.

EAP customers are being invited to reinvent a software-defined data centre architecture based around BlueField-2 and VMware.

When it was announced in September last year, four application areas were mentioned:

  • Virtualizing disaggregated remote storage and presenting it as local composable storage pools;
  • Provisioning bare metal servers for cloud service providers;
  • End-point application isolation using micro-segmentation;
  • Multiple application-specific firewalls for enhanced security.

Beck’s blog cites four general and somewhat vague benefits:

  • Improved performance for application and infrastructure services;
  • Enhanced visibility, application security and observability;
  • Offloaded firewall capabilities;
  • Improved data center efficiency and cost for enterprise, edge and cloud, with reduced deployment downtime.

VMware’s role in this is partly to be a data and function on-ramp for NVIDIA’s GPUs and BlueField-2, as an NVIDIA graphic shows:

Interested parties can apply to join this EAP technical preview program on the NVIDIA Project Monterey web site.

Comment

It will be interesting to see if VMware starts working with Fungible’s DPUs in a similar way. It will also be most interesting to see if other hypervisor suppliers check out if BlueField has possibilities for them too — with the obvious examples being Nutanix with AHV, KVM and Red Hat Virtualization. If the offloading of server-based infrastructure management, networking, security and storage tasks to a SmartNIC-cum-DPU works for VMware, Dell and NVIDIA, then it should equally well work for other hypervisors and other server suppliers. Not to mention other SmartNIC/DPU suppliers — such as Fungible and Pensando.

A problem area is getting software loaded onto the SmartNIC and having the SmartNIC interoperate with host servers. An API library is probably being developed by NVIDIA and VMware to accomplish this.

It’s taken two years but Nutanix has added Commvault to Mine portfolio

Two years and three months after saying it would happen, Nutanix has added Commvault Backup and Recovery to its turnkey Mine backup appliance. We’re still waiting for Unitrends and Veritas to be added.

Mine was launched in May 2019 as a commodity X86 server running core Nutanix HCI software along with third-party backup software — initially from Veeam and then with HYCU following in November that year. The pitch was that customers could manage their primary and secondary (backup + archive) data through a single Prism console. Commvault, Unitrends and Veritas were identified in the May announcement as future Mine partners.

Tuhina Goel, Nutanix’s Director of Product Marketing, blogs that this “most recently engineered solution provides a simple yet intelligent data protection platform that delivers efficient backup and recovery services no matter where your data resides so that you can keep your business up and running under all circumstances.”

Nutanix Mine with Commvault diagram.

John Tavares, VP Global Channel and Alliances at Commvault, provided a matching quote: “Nutanix Mine with Commvault takes our partnership to a new level, providing customers with the speed, scalability, and flexibility they need to modernise their data center. Together we are offering the answer to today’s very real data concerns — Commvault’s trusted intelligent data services and Nutanix’s simplified storage in one turnkey solution.”

That’s all very well, but why has it taken so long? Commvault added support for Nutanix AHV in 2015, and subsequently added support for Nutanix snapshots, replication, Nutanix Files, Nutanix Objects, and now Nutanix Mine. Obviously Commvault is a committed Nutanix partner.

Customers buy a Mine system through Nutanix’s channel. Nutanix Mine with Veeam and HYCU is available on a range of HPE ProLiant servers. We anticipate the ProLiant/Mine boxes could host the Commvault software as well, and the sourcing Nutanix channel partner can advise on server hardware box size and capability.

The bullet point summary about the Mine + Commvault combo’s benefits is this:

  • Meet the most stringent RTO/RPO SLAs with strict consistency;
  • Converge backup, recovery, and archival in a single turnkey solution;
  • Seamlessly connect to public cloud for long term data retention — AWS, Azure and GCP;
  • Start small and scale as your data footprint grows;
  • Improve security posture by combining Nutanix’s hardened platform security with Commvault’s built in AI/ML driven anomaly detection, air-gapping, and data validation;
  • Protect against ransomware by combining the Nutanix Mine WORM capability for immutable backups with Commvault’s multiple layers of protection control;
  • Streamline the customer support experience.

The basic point here is that Nutanix has, finally, been able to add Commvault to its other two Mine partners: Veeam and HYCU. We are still waiting for Unitrends and Veritas. As for Acronis, Atempo, Cohesity, Rubrik and the many other backup suppliers — who knows?

You might think that for Nutanix, having just three partners for its Mine appliance 27 months after product launch is not that many. We wouldn’t like to comment.

VAST Data gets vast orders from vast US federal sector

In the same week that it announced on Twitter that two customers had ordered $20 million worth of its software, VAST Data announced $10 million in US Department of Defense orders, as its VAST Federal subsidiary plugs into the US federal market.

The orders for VAST Data’s storage software include the storage of data used in synthetic aperture radar (SAR) and AI-driven packet capture applications. VAST stores data in a single-tier, all-flash array using QLC (four bits/cell) flash, which it calls Universal Storage.

Randy Hayes, VP of VAST Federal, said in a statement: “For customers such as the US Department of Defense, all data is equally critical and our mission is to ensure that data age never defines the time to data access.

“Universal Storage offers the DoD a secure and unified approach to being able to process any data asset in real time by eliminating storage tiering from the mission agenda and taking our customers to a cost-effective all-flash end state that they were never before able to afford.”

SAR involves the use of millimetre-wave radar beams pulsed from a synthetic aperture array to build a 3D model of a ground image, visible day or night and through cloud cover. Images can be compared over time to spot changes, such as mobile missile launchers appearing at a location or ships leaving a harbour. The ability to compare a new image with a stored image is clearly useful here.
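As a loose illustration of that change-detection idea (and only an illustration, not how an operational SAR pipeline works), comparing two co-registered images and flagging the pixels that changed can be sketched in a few lines of Python:

    # Toy change detection between two co-registered images of the same scene.
    # Purely illustrative: real SAR change detection involves calibration,
    # speckle filtering and far more sophisticated statistics.
    import numpy as np

    rng = np.random.default_rng(0)
    before = rng.random((512, 512))        # stand-in for an earlier image of the scene
    after = before.copy()
    after[100:120, 200:230] += 0.8         # simulate a new object appearing

    difference = np.abs(after - before)
    changed = difference > 0.5             # threshold the per-pixel difference

    print(f"{int(changed.sum())} pixels flagged as changed")   # 20 x 30 = 600 pixels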

ICEYE SAR image of Rotterdam Harbour showing ships, oil tanks and containers.

AI packet capture refers to capturing the data packets crossing a network and using them to reconstruct network activity, showing what has happened. For example, a malware attack can be tracked as it progresses.

Both applications rely on massive amounts of stored data used to compute SAR image comparisons and AI-supported network packet analytics. VAST says its Universal Storage system combines the speed of an all-flash array with the affordability of an archive to analyse and respond to national security situations in real time.

ICEYE glacier image.

VAST Data’s federal customer roster includes the US Department of Energy, National Institutes of Health, National Oceanic and Atmospheric Administration, NASA, and the Department of Veterans Affairs. The VAST Federal business unit was launched in April. In July former CIA Chief Technology Officer Gus Hunt joined the VAST Federal board.

Nuke checker Los Alamos Labs investigating NGD computational storage

US nuclear weapon stockpile health checker Los Alamos Labs is looking to see if computational storage can speed up simulation modelling runs on its supercomputers.

Los Alamos Labs also works in the energy, environment, infrastructure, health, and global security areas, doing strategic work for the US government. It operates in the ultrascale high-performance computing environment and has set up an Efficient Mission-Centric Computing Consortium (EMC3) to develop more efficient computing architectures, system components, and environments for its mix of workloads. Computational storage developer NGD Systems is an EMC3 member, along with nearly 20 other organisations.

The oft-quoted Gary Grider, HPC division leader at Los Alamos, provided a statement about this: “Los Alamos is happy to see the evolution of computational offloads towards standards-based computational storage technology, and is hopeful explorations into use cases for this technology will bear fruit for the HPC and at-scale computing environments.”

NGD CTO Vladimir Alves explained Los Alamos’s interest: “NGD’s … computational storage platform makes it easy to try new concepts for offloading functions to near storage.”

What role does near storage play here? Brad Settlemyer, a senior scientist in Los Alamos’s HPC Design Group, explained: “Computational storage devices become a key source of acceleration when we are able to directly interpret the data within the storage device. With that component in place, near-storage analytics unleashes massive speedups via in-device reduction and fewer roundtrips between the device and host processors.”

NGD 12TB Newport ruler drive.

NGD builds its Newport line of NVMe flash storage drives, with up to 64TB of capacity, using on-board ASICs and Arm cores to process data stored in the drive. This means that Los Alamos HPC processors could have some of their work — repetitive processing of stored data such as transcoding — offloaded to computational storage drives, freeing more compute cores to work on the overall problem. It’s all about accelerating HPC application runs.
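The offload idea can be pictured with a toy Python simulation: the point is that the “drive” returns a reduced result rather than raw data, so far less crosses the bus. This is purely conceptual and does not represent NGD’s actual programming or device-management model:

    # Conceptual simulation of near-storage reduction: the "drive" filters records
    # itself and returns only the matches, instead of shipping everything to the host.
    # Not NGD's API; just an illustration of why in-device reduction cuts data movement.

    RECORDS = [{"id": i, "temp": i % 100} for i in range(100_000)]   # data "on the drive"

    def host_side_filter(records, threshold):
        # Traditional path: every record crosses the bus, then the host filters.
        transferred = len(records)
        hits = [r for r in records if r["temp"] > threshold]
        return hits, transferred

    def in_device_filter(records, threshold):
        # Computational-storage path: on-drive cores filter first; only hits move.
        hits = [r for r in records if r["temp"] > threshold]
        return hits, len(hits)

    hits_a, moved_a = host_side_filter(RECORDS, 95)
    hits_b, moved_b = in_device_filter(RECORDS, 95)
    assert hits_a == hits_b
    print(f"host-side filtering moved {moved_a:,} records; in-device reduction moved {moved_b:,}")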

We are told computational offloads using both in-network processing and near-storage compute are becoming an important part of both scale-up and scale-out computing, with future scaling requirements virtually requiring programmable elements along the data path to achieve performance efficiency goals.

Alves said: “By offering an OS-based storage device, with on-board applications processors in our NVMe SSD ASIC solution, we offer partners like Los Alamos the ability to try many different paths to a more complete solution with a simple and seamless programming and device management model.”

As a by-product of the EMC3 work, Los Alamos has partnered with NGD Systems to build a curriculum for a set of summer internship programs in using NGD’s Newport Computational Storage Drives to accelerate data analytics.

Comment

You can’t view NGD Systems Newport drives as drop-in components in a computing data path. They have to be programmed — and that means an application’s code or a host server OS has to recognise and manage where processing is being carried out. That said, the potential benefits of having, say, a hundred Newport drives pre-process hundreds of terabytes of data before it is sent to expensive and dedicated HPC system cores for further processing can be huge. This would be particularly so with repetitive HPC runs, where the drive-level processors load pre-written code.

This would be equivalent to loading factory production-line robot welding machines with new programs to build car bodies instead of having a single central computer look after all the robots. The robots do the grunt work themselves, offloading the central system.

Fire hoser Vcinity creates global LAN from WAN using RDMA over IP

Vcinity’s Data Access Platform (VDAP) defeats remote data gravity, providing local disk access speed to data at inter-continental distances.

It goes against the received wisdom by not moving entire datasets to compute. Instead it shifts chunks of data using remote direct memory access (RDMA) inside IP packets across global distances, enabling real-time compute on data thousands of miles away. It doesn’t cache and store the compressed and deduped distant data locally, like cloud file gateways such as Egnyte, Nasuni and Panzura. It doesn’t physically move disk drives, like Seagate’s Lyve Mobile or AWS Snowball.

We wrote about the privately-owned company in March two years ago. In April it presented at Tech Field Day. In June it hired an SVP for Worldwide Sales and also a Chief Strategy Officer — both moves signalling its intention to bulk up its presence in the market. How does it do what it does, and why — if the technology is so good — isn’t it more popular?

Gateway devices

Vcinity provides two gateway devices, one at either end of a wide area link. (“Gateway” is our term, by the way, not one used by Vcinity.) These devices can be appliances or virtual machines. They run on Linux and implement IBM’s Spectrum Scale parallel file system. A client system mounts its Vcinity device as an NFS or SMB file system and reads or writes data to/from the remote file share via its inline Vcinity device.

There is no traditional WAN optimisation. The data is not compressed or deduplicated, nor cached or stored at the target end of the link. From the outside, a Vcinity link is a drop-in and dumb but high-speed pipe that needs no change in applications accessing NFS or SMB file data.

There can be from three to eight network links between the two Vcinity devices, with data striped across them. (Each link can have a separate encryption method, if so desired, to increase security.) When data is requested or sent, an RDMA protocol is used between the two Vcinity gateways, but the data is split into chunks and sent as IP packets with IP headers — so that standard IP networks can handle the traffic flow control and provide lost packet recovery. VDAP is a lossless protocol.
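A rough way to picture the chunk-and-stripe behaviour is round-robin distribution of fixed-size chunks across the available links. The Python sketch below is purely illustrative; the chunk size and link count are made-up values, not Vcinity parameters:

    # Illustrative round-robin striping of a payload across several network links.
    # Chunk size and link count are assumptions for the example, not Vcinity's figures.
    CHUNK_SIZE = 64 * 1024           # 64KiB chunks (assumption)
    NUM_LINKS = 4                    # the article says three to eight links are possible

    def stripe(payload: bytes, chunk_size: int, num_links: int):
        """Split the payload into chunks and assign them round-robin to links."""
        links = [[] for _ in range(num_links)]
        for n, offset in enumerate(range(0, len(payload), chunk_size)):
            links[n % num_links].append(payload[offset:offset + chunk_size])
        return links

    data = bytes(1_000_000)          # 1MB dummy payload
    for n, link in enumerate(stripe(data, CHUNK_SIZE, NUM_LINKS)):
        print(f"link {n}: {len(link)} chunks, {sum(len(c) for c in link):,} bytes")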

ESG Vcinity diagram.

The RDMA can run across RoCE or InfiniBand. Because TCP/IP is not used as the transport, all of its chattiness (network overhead) is sidestepped, and up to 95 per cent of a link’s bandwidth can be used and sustained by Vcinity. When a chunk of data is sent across the link, the time to the first bit can be 70ms, according to an ESG validation report. But after that, data just streams across the link, as if coming through a fire hose.

Test results

ESG’s testers compared Vcinity’s data movement with TCP/IP and found a substantial improvement.

For example: “Vcinity was 94 per cent faster than TCP/IP when accessing a data set that was 2,405 miles away (29 seconds compared with 7 minutes and 42 seconds),” the ESG people said. “As … seismic data scientists had reported, accessing data located thousands of miles away was, for all intents and purposes, as fast as accessing data stored locally.”

In another test, they “compared the time that it took to open and render a 15MB AutoDesk rendering of a ship design in a local directory to the time that it took to open and render the same file located more than 2,000 miles away, using both Vcinity and traditional TCP/IP … the elapsed time using Vcinity 1000V was 95 per cent faster than the TCP/IP method (12 seconds versus 3 minutes and 36 seconds) and only 2 seconds slower than the local method.”
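The percentages quoted are easy to verify from the elapsed times, as this small Python sketch shows; both work out to roughly 94 per cent, in line with ESG’s rounded figures:

    # Reproduces the "per cent faster" figures from the elapsed times ESG quotes.
    def percent_faster(fast_seconds: float, slow_seconds: float) -> float:
        return (1 - fast_seconds / slow_seconds) * 100

    # Seismic data set: 29 seconds vs 7 minutes 42 seconds over TCP/IP
    print(f"{percent_faster(29, 7 * 60 + 42):.1f}% faster")    # 93.7% faster

    # AutoDesk ship design: 12 seconds vs 3 minutes 36 seconds over TCP/IP
    print(f"{percent_faster(12, 3 * 60 + 36):.1f}% faster")    # 94.4% faster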

ESG Vcinity Autodesk file access chart.

In ESG’s view the “Vcinity ULT X [device] turns your WAN into a global LAN with RDMA over WAN technology that skips layers of the TCP/IP stack and delivers near real-time access to industry standard NFS and SMB file shares over long distances.”

Comment

Vcinity provides the equivalent of LAN access to disk drive data across WAN distances. It does not provide the equivalent of NVMe-over-Fabrics access to data stored on NVMe SSDs. We think that Vcinity is chasing down opportunities where real-time access is needed to large files stored remotely, and it is not inexpensive. However it is cost-justified when expensive technical staff can get to work on data days faster than before and keep expensive remote data-generating assets, like oil drilling ships, operating more effectively.

Vcinity VDAP video at Tech Field Day.

Read a Vcinity data sheet here. Watch a Vcinity Tech Field Day video about VDAP by Vcinity CTO Stephen Wallo here. He gives a good explanation and answers lots of questions from the delegates.

Perhaps you will find that you don’t have to move terabytes of data long distances to have your technical staff and applications access it. Instead keep a single, golden copy, and have them access it remotely, as if they were in its vicinity. (Apologies for awful pun — ed.)

Gartner: NAND TAM second only to DRAM

A Gartner semiconductor revenue forecast out to 2025 shows that DRAM has the largest total addressable market (TAM), followed by NAND and then microprocessors.

Wells Fargo analyst Aaron Rakers told subscribers about Gartner’s second 2021 quarter global semiconductor revenue forecast out to 2025, which has bigger numbers than the first 2021 quarter forecast document. This was due to Gartner increasing its calendar 2021 and 2022 forecasts by $25.7 billion (+4.7 per cent vs prior) and $25.3 billion (+4.2 per cent) respectively.

The prime driver is a larger forecast for DRAM and NAND revenues due to DRAM under-supply and NAND price rises. A chart showing semiconductor revenue by device type caught our eye as it showed that microprocessor, DRAM and NAND revenues were the three largest categories.

Judging by the chart circle positions, the microprocessor total addressable market (TAM) will be about $60 billion, NAND’s $90 billion and DRAM’s $100 billion. The microprocessor TAM growth out to 2025 is negative. DRAM has a >6 per cent TAM CAGR, with NAND’s CAGR being more than ten per cent.
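For anyone wanting to translate those CAGRs into dollars, this is how a compound annual growth rate projects a TAM forward. The starting values in the Python sketch below are placeholders for illustration, not Gartner’s figures:

    # Illustrative CAGR projection; the starting TAM values are placeholders,
    # not figures taken from Gartner's forecast.
    def project_tam(start_bn: float, cagr: float, years: int) -> float:
        """Compound a starting TAM forward by `years` at `cagr` (0.10 = 10 per cent)."""
        return start_bn * (1 + cagr) ** years

    # A hypothetical $60bn NAND TAM growing at 10 per cent a year for four years:
    print(f"${project_tam(60, 0.10, 4):.1f}bn")    # ~$87.8bn

    # A hypothetical $80bn DRAM TAM growing at 6 per cent a year for four years:
    print(f"${project_tam(80, 0.06, 4):.1f}bn")    # ~$101.0bn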

Yangtze Memory Technologies Co

Separately, Rakers told subscribers that Chinese media outlet Global Times reports China’s state NAND champion, Yangtze Memory Technologies Co (YMTC), is now mass-producing its 128-layer 3D NAND. Shenzhen-based Powev Electronic Technology has Asgard-brand SSDs on sale using the YMTC flash.

Intel, Kioxia, Micron, Samsung, SK hynix and Western Digital account for 98 per cent of the NAND market, according to TrendForce, so YMTC has a mountain to climb to reach a significant presence in the global NAND market.

YMTC’s product development may be affected by parent Tsinghua Unigroup’s bankruptcy problems.