
The RoCE storage road to Graphcore’s AI servers

AI server developer Graphcore is setting up a reference architecture with four external storage suppliers to get data into its AI processors faster.

DDN, Pure Storage, Vast Data, and WekaIO will provide Graphcore-qualified datacenter reference architectures. Graphcore produces IPU (Intelligence Processing Unit) AI compute chips which power its IPU-Machine M2000 server and IPU-POD multi-server systems. These are claimed to offer better price/performance than Nvidia GPUs at running AI applications such as model training.

Quad-processor M2000 with massive cooling block at the rear.

The M2000 is basically a compute and DRAM box and needs fuelling with data to process. It has four IPUs and a PCIe Gen-4 RoCEv2 NIC/SmartNIC Interface for host server communication. A dual 100Gbit RNIC (RDMA-aware Network Interface Card) is fitted. Stored data will be shipped to the IPU-M2000 from the host server across the Ethernet link.

The system is controlled by Poplar software, which has a PCIe driver. As the system supports RoCE (RDMA over Converged Ethernet), NVMe-over-Fabrics interconnection to external storage arrays is possible.

Vanilla quotes

The four announcement quotes from DDN, Pure, VAST and Weka are pretty vanilla in flavour — none reveals much about how data will actually be sent to the M2000s.

“DDN and Graphcore share a common goal of driving the highest levels of innovation in artificial intelligence. DDN’s industry-leading AI storage, combined with the strength of the Graphcore IPU, brings a powerful new solution to organisations looking for a whole AI infrastructure and data management solution with outstanding performance and unlimited scalability,” said James Coomer, SVP for Products at DDN.

“Turning unstructured data into insight lives at the core of an organisation’s effort to accelerate every aspect of AI workflows and data analytic pipelines. Pure FlashBlade, the leading unified fast file and object (UFFO) storage platform was built to meet the demands of AI. It is purpose-built to blend bigger, faster analytics capabilities,” commented Michael Sotnick, VP, Global Alliances, Pure Storage. “Customers want efficient, reliable infrastructure solutions which enable data analytics for AI to deliver faster innovation and extend competitive advantage.”

“The Graphcore IPU, coupled with VAST’s Universal Storage, will help customers achieve unprecedented accelerations for large and complex machine learning models, furthering adoption of AI across the enterprise data centre,” said Jeff Denworth, Co-Founder and CMO at VAST Data. “VAST Data is breaking the tradeoff between performance and capacity by rethinking flash economics and scale, making it possible to afford flash for the entirety of a customer’s dataset. This is especially important in the new era of machine intelligence where fast and easy access to all data delivers the greatest pipeline efficiency and investment return.”

“WekaIO’s approach to modern data architecture helps equip AI users to build and scale highly performant, flexible, secure and cost-effective datacentre systems. Working with Graphcore allows us to extend next-generation technologies to our customers, keeping them at the forefront of innovation,” said Shailesh Manjrekar, Head of AI and Strategic Alliances at WekaIO.

Comment

Nvidia has its GPUDirect program to encourage storage suppliers to ship data as quickly as possible to its GPU servers. That’s accomplished through bypassing the host server CPU, its DRAM and storage IO software stack, and setting up a direct connection between the external storage system and the GPU server. 

The four storage suppliers partnering with Graphcore here are all supporting GPUDirect. We would hope, and expect, a similar program for the Graphcore system — IPUDirect if you like — to appear. Asked about this, a Graphcore spokesperson said: “We probably need to wait for the first reference architectures to be released to answer that. We’ve said that this will be happening throughout the remainder of 2021.”

That was unexpected. We thought Graphcore would be driving this.

Kioxia’s software-defined flash tech enables hyperscalers to drive SSDs their own way

Kioxia has been talking about its software-defined flash technology at this week’s SNIA Storage Developer Conference and introduced a new Software-Enabled Flash SDK to speed the development of software drivers.

The SSD market is seeing the development of application-specific flash drives, separate from the base standard SSD with its block interface. They include key:value store SSDs, zoned namespace SSDs, and even custom Flash Translation Layer SSDs for hyperscalers. The problem Kioxia highlights is a hyperscaler one, because such customers have large populations of flash drives, making change-overs of drive technology and supplier problematic.

They can end up with sub-populations of drives of a particular sort, facing costly and inconvenient migrations to adopt new technology. The trouble is that drive format types — such as key:value stores and zoned namespaces — are hard-coded into the drives and also require host software control, locking these drives into specific application use and preventing their re-use for different applications. It would be good for hyperscalers if their key:value store drives could also be used or re-used for zoned namespace applications and block applications.

That’s what Kioxia is doing: creating a flash drive API-based interface that enables SSD operators to reprogram their drives and thereby create a base population of SSDs which can be purposed for specific applications under software control, and then repurposed for different applications.

The drives have a controller and software on them that accept API-delivered instructions, giving access to SSD features down to the die level. New drive types — say QLC drives replacing TLC drives — can have the same design, enabling the host software to drive them in the same way and speed their adoption.
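As a rough sketch of the idea, a base population of identical drives that can be purposed and then repurposed under software control might look like this. This is illustrative only; every class and method name below is hypothetical and is not Kioxia's actual Software-Enabled Flash API:

```python
# Toy model of software-defined flash: the drive's behaviour (block,
# zoned namespace, key:value) is a software-set "personality", not
# hard-coded at manufacture. Names are hypothetical, not Kioxia's API.

class SoftwareDefinedDrive:
    """A drive whose format type is set and reset by software."""

    PERSONALITIES = {"block", "zns", "kv"}

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.personality = None  # undifferentiated until provisioned

    def provision(self, personality):
        # Reprogram the drive for a specific application type.
        if personality not in self.PERSONALITIES:
            raise ValueError(f"unknown personality: {personality}")
        self.personality = personality

# A hyperscaler's base population of identical drives...
pool = [SoftwareDefinedDrive(capacity_gb=7680) for _ in range(4)]

# ...purposed for one application under software control...
for drive in pool:
    drive.provision("kv")

# ...and later repurposed for another, with no hardware swap-out.
for drive in pool[:2]:
    drive.provision("zns")

print([d.personality for d in pool])  # ['zns', 'zns', 'kv', 'kv']
```

The point of the sketch is the last step: the same physical drives serve key:value and zoned namespace applications in turn, which is exactly the lock-in problem Kioxia says it is removing.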

A Kioxia tech brief describes this concept and explains that Kioxia’s software-defined flash tech is open-source and SNIA support is being sought. 

The test of whether this idea is generally adoptable will be to see if other SSD suppliers line up and support it, such as Micron, Samsung, SK hynix and Kioxia partner Western Digital — a supporter of the zoned namespace idea. Check out a section of Kioxia’s web site to explore the concept in more detail.

Double-helix data storage developer Catalog gets funding boost

Spurred on by the development of its DNA storage Shannon system, Catalog has taken in $35 million in B-round funding to help devise a storage and computation system based on DNA.

The round was led by Hanwha Impact and the money will also help create an ecosystem of Catalog collaborators, partners, and users of DNA-based computing. Korea-based Hanwha Impact is the rebranded Hanwha General Chemical. DNA is a double-helix molecule present in the cells of all living organisms. It carries genetic instructions for the development, everyday functioning, and reproduction of cells — the base coding foundation for living creatures.

Hyunjun Park, Catalog’s founding CEO and an MIT researcher, said in a statement: “Simply preserving data in DNA is not our end goal. Catalog will fundamentally change the economics of data by enabling enterprises to analyse and generate business value securely from data that previously would have been thrown away or archived in cold storage. The possibility of a highly scalable, low energy, and potentially portable DNA computing platform is within our reach.”

Catalog’s head of molecular biology, Tracy Kambara, prepares Shannon, the first commercially viable automated DNA storage and computation platform for enterprise use. 

Catalog’s DNA storage technology involves creating quasi-letters which can represent binary data, and “writing” the resulting DNA sequences to fluid or dry pellets for later retrieval. Such stored DNA is orders of magnitude denser than flash or tape-based storage, air-gapped from online systems like tape, and can last for, it is claimed, thousands of years.
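Catalog's proprietary quasi-letter scheme has not been disclosed, but the underlying principle of representing binary data in DNA can be illustrated with the textbook two-bits-per-base mapping — four bases, so each base carries two bits:

```python
# Generic illustration of binary-to-DNA encoding (2 bits per base).
# This is NOT Catalog's proprietary quasi-letter scheme; it simply
# shows how bits map onto the four-letter DNA alphabet.

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand, 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)                   # CAGACGGC
assert decode(strand) == b"Hi"  # lossless round trip
```

Even this naive scheme stores a byte in four molecules a few nanometres across, which is where the orders-of-magnitude density advantage over flash and tape comes from; real systems add error-correction and addressing on top.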

It is possible, Catalog says, to compute on the data stored in DNA molecules. This amounts to a compute-in-storage product, and its programming and other details have yet to be revealed. 

Catalog does say that, by incorporating DNA into algorithms and applications there could be “potential widespread commercial use through its proprietary data encoding scheme and novel approach to automation. Expected areas of early application are artificial intelligence, machine learning, data analytics, and secure computing. In addition, initial use cases are expected to include fraud detection in financial services, image processing for defect discovery in manufacturing, and digital signal processing in the energy sector.”

The firm says that its coming data and compute platform will be more energy efficient, affordably scalable, and highly secure compared to conventional electronic platforms.

Catalog was founded in Boston in 2016 and has taken in a total of $44.3 million, according to our records. We think we’ll hear about more progress in 2022.

Speedata outta stealth with $70m in funding and speedy analytics processing chip


Israeli startup Speedata emerged from stealth today, showing off its big data analytics processing chip development and a $55 million A-round from VCs who clearly think it’s a good bet.

Jonathan Friedmann.

The big bet here is that data analytics workloads can run one to two orders of magnitude faster on the Speedata analytics processing unit (APU) chip than on x86 processors. The APU does for analytics what GPUs do for graphics processing.

A statement from Speedata’s co-founder and CEO, Jonathan Friedmann, said: “Analytics and database processing represents an even bigger workload than AI with regard to dollars spent. That’s why industries are anxiously seeking solutions for accelerating database analytics, which can serve as a huge competitive advantage for cloud providers, enterprises, datacentres, and more. …

“Our amazing team of academic and industry leaders has built a dedicated accelerator that will change the way datacenter analytics are processed — transforming the way we utilise data for years to come.”

Pedal to the metal

Speedata has designed a dedicated accelerator chip for these workloads and claims a server with this APU will replace multiple racks of CPUs, dramatically reducing cost, electricity usage and floor space.

The chip is said to address the main bottlenecks of analytics, including I/O, compute, and memory, effectively accelerating all three. It is compatible with all legacy software so workloads can execute on it with no code changes.

High tech electronic PCB (Printed circuit board) with processor and microchips.

The illustration shows the chip fastened to a board. How does it get its data? How is it linked to a host system? We asked Friedmann some questions to find out more.

Blocks & Files: How does it get the data it needs to work?

Friedmann: Speedata’s APU connects via PCIe to a NIC or SmartNIC and/or to local storage. The data flows from local storage via the PCIe bus, and/or from remote storage via Ethernet to the NIC and the connected APU.

Does it have its own pool of DRAM with a PCIe bus linking it to storage resources?

Yes. The APU does have its own pool of DRAM and a PCIe bus linking it to storage resource. Since the APU will do all the data processing, this will dramatically reduce the amount of DRAM needed next to the CPU.

How does it hook up to normal servers?

The APU connects to normal servers via a standard PCIe card. The PCIe card contains an APU and is inserted into a standard server. The APU PCIe card is thus hooked up within normal servers in the same way that a GPU PCIe card is hooked up within normal servers.

Any information on its size and number of processing elements and types?

The size and number of processing elements is similar to a GPU. It is important to note that Speedata’s APU elements are optimized for Analytics and Databases, while the GPU elements are optimized for Graphics and AI.

How would the performance of an on-premises APU system running database analytics software compare to the same software running in AWS, Azure, etc?

Speedata’s APU performance will be 20x to 100x more powerful than a CPU when running database analytics. The APU will work equally well on-premises and in public cloud systems.

How does it compare to Snowflake and similar public cloud data warehouses?

The boost in performance may be utilised by multiple types of analytic tools and data warehouses. Snowflake and other similar public cloud data warehouses can use Speedata’s APU to benefit from this improvement in their own data warehouses.

Funding and founding

Speedata has pulled in $55 million of A-round funding from VCs led by Walden Catalyst Ventures, 83North, and Koch Disruptive Technologies (KDT), with participation from existing investors Pitango First, Viola Ventures and prominent individual investors including Eyal Waldman, Co-Founder and former CEO of Mellanox Technologies.  Waldman has joined Speedata’s board.

There was a previous undisclosed seed round of $15 million from a group of investors led by Viola and Pitango. This took place in 2019, the year Speedata was founded. Speedata’s total funding is $70 million.

There were six founders: Friedmann, CTO Yoav Etsion, Chief Architect Rafi Shalom, VP System Engineering Dani Voitsechov,  Chairman Dan Charash, and Itai Incze, Chief Software Architect. 

Friedmann was CEO and founder of processor developer Centipede Semi, and before that a COO at Provigent, a fabless developer of broadband wireless SoCs. Charash was the CEO of Provigent. Etsion is an associate professor at Technion, the Israel Institute of Technology. Shalom was a chief architect at storage networking NIC supplier QLogic. Voitsechov was a post-doctoral researcher into massively parallel computer systems architecture at Technion. Interestingly, Voitsechov and Etsion have partnered to write research papers, for example Inter-thread Communication in Multithreaded, Reconfigurable Coarse-grain Arrays.

We think Speedata will get the APU chip into test use at sample customers and we’ll hear more about it next year.

Kasten Kubernetes backup gets closer to Veeam mainstream container dream

Veeam’s Kasten business unit has upgraded its K10 Kubernetes backup product to extend its coverage to more applications and to edge environments, and to support Veeam’s own Backup and Replication, bringing the two closer together.

Veeam CTO Danny Allan said: “With Veeam Backup and Replication (VBR) data path integration added to Kasten K10 4.5, VBR customers have a pathway to extend their investments into their Kubernetes environments. Users will have better visibility and easier access to a common repository that includes containerised Kubernetes apps, access to Veeam’s portable backup file format for volume content, and more capabilities for enriching their Kubernetes deployments.”

Version 4.5 of the K10 product adds coverage for Kafka, Apache Cassandra, K8ssandra, and Amazon RDS. It supports the K3s and EKS Anywhere Kubernetes distributions, extending K10 coverage to edge environments and applications such as video streaming.

Kasten K10 dashboard.

Kasten says that K10 now has an improved out-of-the-box experience through “integration with a complete set of tools to effectively deploy, manage, monitor and troubleshoot Kubernetes environments.”

Kasten is steadily becoming Veeam for containers.

At your service: Clumio Protect protects Amazon’s S3 buckets

Clumio has introduced a managed service to protect object data in AWS customers’ S3 buckets, transforming the clunky and limited versioning and replication-based AWS facilities into a single centralised and more capable operation.

Clumio Protect is available for the protection of Amazon EBS, EC2, RDS, Microsoft 365, and VMware Cloud on AWS. It has been extended with Clumio Protect for S3 to cover S3 buckets and their contents in all of a customer’s S3 accounts and regions.

Chadd Kenney.

Chadd Kenney, VP of Product at Clumio, provided an announcement quote: “S3 is massive and requires a cloud-native data protection solution that is built from the cloud up, to deliver all the scale and efficiencies that existing data protection products cannot provide at a reasonable cost.”

The thinking is that AWS S3 storage is now used for increasing amounts of increasingly critical data, for such things as in-cloud analytics using data lakes. AWS’s own S3 protection involves versioning and replication, both creating extra copies of data which has to be stored at extra cost. That is better than nothing, but far from optimal.

Clumio Protect for S3 protects buckets and prefixes (object groups defined by object name string) across all of a customer’s AWS accounts and regions by moving the data into its own Secure Vault outside of a customer’s region. There is no need for 1-to-1 object mapping, and a Protection Group facility can apply protection polices to multiple objects. Such policies can define periods and which objects and S3 tiers to include or exclude.

A big selling point of Clumio Protect for S3 is cost. In-house AWS protection for S3 is based on replication with a second copy of the data created in the lower-cost S3 Infrequent Access (IA) tier. The Clumio service combines smaller objects into larger ones and so saves cost. Clumio has modelled the monthly replication costs for 1000 files (objects) in S3 and claims a greater than 10x reduction as its chart shows:

The Clumio service stores a customer’s data in its Secure Vault in a separate out-of-region AWS Clumio account, thus providing a logical air-gap defence against ransomware. Data is encrypted (customers bring their own keys) and immutable, and access involves multi-factor authentication.

A management plane provides protected object information across all of a customer’s S3 buckets, accounts and regions, in a single dashboard. This makes compliance reporting much easier than fetching data yourself from a bucket survey. 

Restoration is to a point in time, with global search and cross-account browsing capabilities. It operates at varying granularity (bucket level, object level or object prefix) and can restore to any bucket, prefix, S3 tier or AWS account. That makes recovery from data loss or corruption much faster, potentially cutting it from hours to minutes.

Comment

At first glance an AWS S3 data protection service would seem redundant, as several backup suppliers store their backup data in S3 because it’s cheaper than on-premises storage and the various S3 tiers provide lower costs for ageing data. Clumio makes the point that S3 is being used for online access to data in data lake/warehouse analytics scenarios. That data is growing rapidly and its protection is necessary. 

AWS’s own facilities are clunky, not optimised for cost as much as they could be, and limited in management reporting and restoration flexibility. Step forward Clumio with a slicker and more comprehensive service that can cost much less.

This is the first such AWS S3 protection service and we expect other SaaS backup service providers protecting data in AWS to follow in Clumio’s footsteps.

AWS has introduced cross-account and cross-region protection for files with CRAB for FSx, and could well upgrade its S3 protection facilities in the future.

Clumio Protect for Amazon S3 is expected to be available for early access by late October and generally available by December 2021.

Lightbits gets NVMe/TCP certification, enters enterprise software-defined storage ring

Startup Lightbits has gained NVMe/TCP certification from VMware with vSphere 7 Update 3 for its LightOS software and is now competing head-on with other NVMe/TCP storage suppliers.

LightOS is a storage array controller product, featuring NVMe/TCP, and provides independent scaling of compute and storage on commodity hardware. It supports Intel’s Gen-3 Xeon SP processors, Optane persistent memory, 100Gbit/sec Ethernet NICs, and QLC SSDs. A single LightOS cluster can deliver over 40 million random read IOPS and 10PB of user capacity, with less than 200μs latency.

Kam Eshghi.

Kam Eshghi, Chief Strategy Officer at Lightbits, provided a quote: “We are super-excited to have a high performance, highly available storage solution also for VMware users, with in box support for NVMe/TCP. Organisations with private clouds and hybrid clouds, as well as cloud service providers and financial service providers, can now realise the performance, scalability, and cost-efficiency benefits of a combined solution from VMware, Lightbits, and Intel.”

Performance and cost

Basically, the Lightbits message is equivalent or better data services plus faster storage access at lower cost. We understand that, in general, Lightbits NVMe/TCP scales linearly, delivering over 6x more IOPS than iSCSI at the same thread counts while attaining as much as 4x lower latency (on the same hardware).

It will be revealing more performance data this week at a VMworld event.

The company tells us it reduces a customer’s storage TCO through:

  • QLC SSDs, at ~30 per cent lower cost than TLC SSDs;
  • Higher density per storage node, amortising the fixed cost of the storage server over a larger capacity;
  • Compression, which reduces flash cost (workload-dependent);
  • No hypervisor required on the storage node (no licence fee).

Comment

Since NVMe/TCP is becoming table stakes in storage networking — witness the array of vendors supporting it — Lightbits is competing on the level playing field of the software-defined storage market, looking for greenfield wins and iSCSI upgraders. 

It is in competition with other NVMe-oF suppliers such as Excelero, as well as QLC flash and Optane-supporting suppliers such as StorONE and VAST Data. On the NVMe/TCP front it is facing up to Dell, NetApp, Infinidat and Pavilion Data, and may find attacking HPE accounts fertile ground, as that company does not have NVMe/TCP support — yet. Ditto IBM. The iSCSI upgrade market in those two installed bases could be a happy hunting ground.

Just being a fast access NVMe/TCP target is not enough. Lightbits has to match competing suppliers with their various data services. LightOS supports multi-tenancy, thin-provisioning, snapshots, clones, remote monitoring, dynamic rebalancing, SSD-level Elastic RAID, per-volume replication, cloud-native applications, Kubernetes orchestration, and more. Its SSD management improves flash endurance by up to 20x, which is good as it supports low-endurance QLC flash drives. 

LightOS is listed in the VMware compatibility guide and LightOS software-defined storage with Intel high-performance hardware for VMware ESXi 7.0U3 is now generally available.

Fungible joins the NVMe/TCP party with a go-faster card

Data processing unit (DPU) startup Fungible has launched a Storage Initiator card supporting NVMe/TCP and says it makes deploying NVMe/TCP effortless in existing datacentres.

The card is the latest member of its Fungible Storage Cluster product range and is claimed to deliver the world’s fastest and most efficient implementation of NVMe/TCP. It uses Fungible’s S1 DPU chip and, Fungible says, enables the benefits of pooled storage without sacrificing performance.

Eric Hayes, Fungible CEO, said: “With our high-performance and low-latency implementation, Fungible’s disaggregated NVMe/TCP solution becomes a game changer. Over the last five years, we have designed our products to support NVMe/TCP natively to revolutionise the economics of deploying flash storage in scale-out implementations.”

The company says it offers technology, managed by Fungible’s Composer, to unlock the capacity stranded in silos by disaggregating these resources into pools, and composing them on-demand to meet the dynamic resourcing needs of modern applications. 

There are FC200, FC100 and FC50 storage initiator cards, and a single FC200 card is capable of delivering 2.5 million IOPS to its host. The SI cards are available in a standard PCIe form factor, manage all NVMe/TCP communication for the host, and in turn present native NVMe devices to the host operating system using standard NVMe drivers. The cards offload the processing of NVMe/TCP from the host, freeing up approximately 30 per cent of the general-purpose CPU cores to run applications.

This approach enables interoperability with operating systems that do not natively support NVMe/TCP, such as Windows, older Linux kernels and macOS.

Fungible claims that, when paired with a Fungible FS1600 storage server node or other non-Fungible NVMe/TCP storage targets, the SI cards enhance the performance, security and efficiency of those environments, as well as providing the world’s highest-performance implementation of standards-based NVMe/TCP.  

It also says these cards allow datacentre compute servers to get rid of all local storage — even boot drives — allowing the complete disaggregation of storage from servers.

In the last few days NetApp announced NVMe/TCP, followed yesterday by VMware and Dell, and now today Lightbits and Fungible. It’s becoming an open house party.

Micron coining money hand over fist; DRAM and NAND demand growth is just great

Businesses just keep on needing more DRAM and NAND chips. Quarterly revenues at memory and flash chip maker Micron climbed to the second highest level ever in its latest quarter, with record flash revenues. The demand environment is tremendous and the outlook rosy. Happy days.

Revenues in Micron’s Q4 FY2021, ended September 2, were $8.27 billion, up 36.6 per cent on a year ago, with profits of $2.7 billion, up a thumping 175 per cent on the previous year. Full FY2021 revenues were $27.7 billion, 29.3 per cent higher than FY2020 revenues, with profits of $5.86 billion, 118 per cent more than the prior year.

Micron CEO and President Sanjay Mehrotra’s results statement read: “Micron’s outstanding fourth quarter execution capped a year of several key milestones. In fiscal 2021, we established DRAM and NAND technology leadership, drove record revenues across multiple markets, and initiated a quarterly dividend. The demand outlook for 2022 is strong, and Micron is delivering innovative solutions to our customers, fueling our long-term growth.”

Micron’s revenue history shows the potential to exceed the previous record of Q4 FY2018. How far can this DRAM+NAND up cycle go?

DRAM represented 74 per cent of Micron’s revenues in the quarter, up 39.9 per cent annually. NAND was 24 per cent of total revenues and increased at a lower rate: 29 per cent year-on-year. However there was record NAND revenue in the quarter.

Micron has four business units, one of which, the embedded unit, saw stellar growth of 108 per cent year-on-year as this table shows:

In his prepared remarks Mehrotra said: “We achieved our highest-ever mobile revenue, driven by all-time-high managed NAND revenue and multichip package (MCP) mix. Our embedded business had a tremendous record-breaking year, with auto and industrial businesses both at substantial new highs. And our Crucial-branded consumer business and overall QLC mix in NAND all hit records in fiscal 2021.”

Financial summary for the quarter:

  • Gross margin — 47.9 per cent
  • Cash flow from operations — $3.9B
  • Free cash flow — $1.9B
  • Liquidity — $13B
  • Cash minus debt — $3.7B (there’s a rock solid balance sheet here)
  • Diluted earnings per share — $2.42 vs $1.08 a year ago

The DRAM and NAND technology leadership claims refer to Micron’s 1α (1-alpha) DRAM and 176-layer NAND being the industry’s most advanced nodes in high-volume production. Mehrotra said: “We believe we are several quarters ahead of the industry in deployment of these process technologies.”

On the SSD front: “We are … enhancing our NVMe SSD portfolio and will soon introduce PCIe Gen-4 datacentre SSDs with Micron-designed controllers and leveraging the full benefit of vertical integration.” It has already qualified 176-layer Gen-4 NVMe client SSDs with several PC OEMs.

In general: “Datacentre has become the largest market for memory and storage, driven by the rapid growth in cloud.” The other end-user markets — PC, graphics, mobile, auto and industrial — all exhibited revenue growth for Micron.

Mehrotra made general comments about technology developments: “We … expect to increase FY22 R&D investment by approximately 15 per cent from FY2021 to deliver bold product and technology innovations designed to fuel the data economy, as well as to expand our portfolio to capitalise on opportunities such as high-bandwidth memory and Compute Express Link (CXL) solutions.”

Micron withdrew from the 3D XPoint storage-class memory market earlier this year, saying that it was interested in developing new storage-class memory technologies accessed across the CXL interconnect. Perhaps this is an area for bold product innovation.

Outlook

The guidance for its next quarter, Q1 in FY2022, is for revenues of $7.65 billion plus/minus $200 million. This would be a 32.5 per cent rise year-on-year at the mid-point and represent Micron’s highest-ever quarterly revenue.

Micron is also planning for record revenues in the full fiscal 2022 year, as there is strong demand across the board for its products.

Cohesity hires Pure Sales VP as its Chief Revenue Officer

Data protector and manager Cohesity has hired Kevin Delane as its Chief Revenue Officer, replacing the departed Michael Cremen.

Cremen resigned in August to go to another company, and Delane has been hired quite quickly. He joins from all-flash array supplier Pure Storage, a strategic Cohesity partner, where he was worldwide sales VP. Like Cohesity, Pure is embarking on a transition to SaaS-based services, so Delane will be able to hit the ground running from that point of view.

Kevin Delane.

Mohit Aron, Cohesity founder and CEO, said in a statement: “Kevin has proven success in advancing global go-to-market strategies and consistently delivering strong results. With an extensive knowledge of the technology industry, strong partner and international expertise, and a management approach that puts culture first, Kevin will play a key role in accelerating our global momentum. We are thrilled to welcome Kevin to the Cohesity executive leadership team at this exhilarating time.”

Exhilarating? Yes, because Cohesity has just announced a knock-out quarter.

We understand Delane was recommended by some Cohesity board members — for example Carl Eschenbach, who was previously hired by Kevin to work at EMC. It’s a small world.

Cohesity’s announcement points out that, at Pure, Delane was responsible for the company’s global go-to-market strategy, including sales, field sales, sales operations, and systems engineering, and he was also instrumental in preparing the company for its IPO. (We like this point, being sure that Cohesity has an IPO in its near-term plans. And that could mean stock options for Delane.)

Delane spent nearly eight years at Pure, having previously spent almost 19 years at EMC, then Dell EMC, finishing up as SVP business operations for Isilon.

His comment about joining Cohesity? “This was an opportunity I simply could not pass up.”

More power for PowerStore: Dell adds NVMe/TCP networking to unified file+block array

Taking advantage of VMware’s NVMe/TCP support, Dell is adding it to the PowerStore array product line via a SmartFabric Storage Software (SFSS) feature.

Dell says it is the first supplier to support VMware’s Update 3 to vSphere 7, and there will be an NVMe IP SAN portfolio across Dell Technologies’ storage, networking and compute products.

SFSS automates storage connectivity for NVMe IP SAN. It will allow host and storage interfaces to register with a Centralized Discovery Controller that can notify hosts of new storage resources. In full the additions are:

  • PowerStore NVMe/TCP protocol and SFSS integration: Dell EMC’s PowerStore storage array can use the NVMe/TCP protocol and will support both the Direct Discovery and Centralized Discovery management models.
  • VMware ESXi 7.0u3 NVMe/TCP protocol and SFSS integration: Dell partnered with VMware to add support for the NVMe/TCP protocol and the ability for each ESX server interface to explicitly register discovery information with SFSS via the push registration technique.
  • PowerSwitch and SmartFabric Services (SFS): Dell EMC PowerSwitch and SmartFabric Services can be used to automate the configuration of the switches that make up customers’ NVMe IP SAN.
  • PowerMax and PowerFlex plans: Dell is planning support for NVMe/TCP in both its Dell EMC PowerMax all-flash enterprise data storage and Dell EMC PowerFlex software-defined storage product lines.

That last point emphasises Dell’s commitment to NVMe/TCP. ESXi 7.0u3 using NVMe/TCP can now be used on Dell EMC PowerEdge servers.
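The Centralized Discovery idea described above, in which storage interfaces register with a central controller and registered hosts are notified when new storage appears, can be sketched as a toy model. All class names and NQN strings here are hypothetical; this is not Dell's SFSS API:

```python
# Toy model of centralized discovery for an NVMe IP SAN: hosts push-register
# with a central controller; storage subsystems register too, and every
# known host learns about them. Names are illustrative, not Dell's SFSS.

class DiscoveryController:
    def __init__(self):
        self.hosts = {}        # host NQN -> list of subsystem NQNs it knows
        self.subsystems = []   # registered storage subsystem NQNs

    def register_host(self, host_nqn):
        # Push registration: the host announces itself and receives the
        # current catalogue of storage subsystems.
        self.hosts[host_nqn] = list(self.subsystems)

    def register_subsystem(self, subsys_nqn):
        # A storage interface registers; notify every registered host.
        self.subsystems.append(subsys_nqn)
        for known in self.hosts.values():
            known.append(subsys_nqn)

cdc = DiscoveryController()
cdc.register_host("nqn.2021-10.org.example:esx1")
cdc.register_subsystem("nqn.2021-10.org.example:powerstore1")
print(cdc.hosts["nqn.2021-10.org.example:esx1"])
# ['nqn.2021-10.org.example:powerstore1']
```

The value of the pattern is that hosts discover new storage from one well-known point instead of each host being configured with every target, which is what makes NVMe IP SAN administration feel more like Fibre Channel name services than hand-built iSCSI.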

Industry NVMe/TCP support is strengthening significantly.

If we see HPE and IBM getting on board this bandwagon then NVMe/TCP will become the natural upgrade for iSCSI users.

The NVMe/TCP support for PowerStore will be available in November.

Full speed ahead — VMware adds support for NVMe/TCP and Optane persistent memory snapshots

VMware has updated vSphere and vSAN to provide faster storage networking, better Optane support, more cloud-native features, simplified operations and better vSAN resilience and security in an Update 3 release.

vSphere now supports NVMe-over-Fabrics networking across a TCP/IP link (NVMe/TCP), providing fast access to external NVMe storage drives. Unlike NVMe-oF RoCE, which needs costly lossless Ethernet links, NVMe/TCP uses everyday Ethernet cabling: a tad slower, but less expensive and an easy migration for iSCSI users. 

A VMware blog explains: “NVMe/TCP allows vSphere customers a fast, simple and cost-effective way to get the most out of their existing storage investments,” and provides more information. 

Dell Technologies, VMware’s owner and partner, is introducing SmartFabric Storage Software with support for NVMe/TCP in vSphere.

VMware’s vSphere and vSAN support the snapshotting of Optane persistent memory (DIMMs) in App Direct (DAX) mode. This is another helpful brick in the wall of Intel’s expanding Optane ecosystem.

Update 3 to vSphere 7 and vSAN 7 also provides enhanced cloud-native developer and deployment support with:

  • vSphere VM Service support for vGPUs: Using the VM Service through the Kubernetes API, developers can provision VMs that leverage underlying GPU hardware resources.
  • Simplified setup of vSphere with Tanzu: Faster and easier setup of networking, with fewer steps and inputs needed.
  • vSAN Stretched Clusters for Kubernetes Workloads: Users can extend a vSAN cluster from a single site to two sites for a higher level of availability and intersite load balancing. VMware now supports Kubernetes workloads in a stretched cluster deployment.
  • Kubernetes Topology Support in vSAN: Enables Kubernetes to see the underlying topology and manage data placement across availability zones (AZs) to ensure availability in case of a planned or unplanned outage.

The update also provides simplified operations and better vSAN resilience and security. Read more in another VMware blog by senior product marketing manager Glen Simon. 

Both vSphere 7 Update 3 and vSAN 7 Update 3 will become available by the end of VMware’s Q3 FY22 (October 29, 2021).