Seagate thinks it will gain a HAMR-based competitive advantage over Western Digital – because its arch-rival is delaying a move to the HDD technology. It also thinks Toshiba will lose out on sales because it lacks in-house HAMR drive manufacturing.
Given Seagate’s roadmap, evidence of any such competitive advantage could emerge in 2022.
Disk drives using conventional magnetic storage technologies are approaching capacity limits. HAMR (heat-assisted magnetic recording) enables much higher disk drive capacities by using smaller recording bits. However, the disk drive makers have had to overcome many technological development barriers to get HAMR disks into production.
Microwave-assisted magnetic recording (MAMR) is an alternative method to push up disk drive capacity. The disk drive makers can implement it with relatively incremental advances, but the capacities it delivers are lower than HAMR can achieve. Accordingly, MAMR may be considered an interim magnetic storage technology.
Seagate CFO Gianluca Romano pointed to WD’s emphasis on MAMR, in a recent conference call hosted by Wells Fargo. He thinks this could result in one to two years of lost HAMR development time for WD, giving Seagate potential advantages in cost per TB and areal density.
WD told Blocks & Files in December that it probably will move to HAMR as it increases capacity per drive beyond 24TB. (Read our article discussing WD’s disk drive roadmap.)
A glimpse of Seagate's roadmap
Wells Fargo analyst Aaron Rakers told his subscribers that Romano expressed confidence in Seagate's HAMR positioning and also in demand for public cloud-driven nearline, high-capacity HDDs.
During the conference call, Romano discussed Seagate’s near-term HDD roadmap and said the current 16TB conventional magnetic recording (CMR) drive is selling well. An 18TB CMR drive released by mid-2020 is the next step up the capacity ladder, and this drive will use 9 platters and 18 heads, in common with the 16TB HDD.
The qualification period for the 18TB drive means Seagate's 16TB model will be its most popular drive in 2020. Rakers notes that WD expects revenue this quarter from its recently announced 16TB/18TB CMR-based and 20TB SMR-based drives. (SMR, or shingled magnetic recording to give it its full name, squeezes more tracks onto a drive by partially overlapping the write tracks, leaving narrower read tracks that can still be read.)
At this point in Seagate's roadmap there is a transition to HAMR tech, with a 20TB HAMR drive released before the end of 2020 and revenue also anticipated by the end of the year. This is expected to be a non-shingled drive and will also use 9 platters and 18 heads.
Next up are 22TB and 24TB capacities, using a second generation HAMR technology. The schedule outlined by Romano has enabled Blocks & Files to craft a Seagate HDD roadmap chart.
HAMR time
Visualising Seagate conference call content
We have positioned Seagate’s 22TB-24TB drives on the above chart about six months after the 20TB HAMR drive, because Romano said Seagate will focus on transitioning quickly to the 2nd-generation as a more cost-optimised platform at those capacities. In our view, ‘quickly’ means less than a year.
The 32TB HAMR drive positioning is derived from a Seagate cost-reduction chart shown below. We envisage 26TB, 28TB and 30TB drives will be on Seagate's roadmap in 2022-2023.
The second-generation HAMR drives will still use 18 heads and 9 platters, meaning an effective lowering of the cost per TB. Seagate provided a chart to show this:
LMR means Longitudinal Magnetic Recording. PMR stands for Perpendicular Magnetic Recording
This shows 32TB HAMR tech drives with a 45 per cent cost advantage over 16TB CMR drives, in Seagate’s fiscal 2024.
Intel is exploring its options on self-manufacturing NAND flash memory and may buy in from third party suppliers.
The revelation came from CFO George Davis, speaking last week at a Morgan Stanley Analyst Conference (recording, here). Intel makes 3D NAND chips in Dalian, China but has been unable to sell enough SSDs, using those chips, to generate profits.
Intel's options include stopping operating its own NAND foundry and buying in chips, or even sourcing complete SSDs from a third party. Alternatively, it could sell chips to third parties. But there is another dimension to Intel's NAND puzzle – Optane.
Swan’s way
Davis told the assembled analysts that NAND flash is an Intel big bet, with NAND in the data centre becoming more and more important. But “we have to have profitability, long term profitability and attractive returns… we haven’t been able to generate the profits out of that to get the kind of returns that we would like to see.”
This pretty much repeats what his boss, CEO Bob Swan, said in April last year, when he discussed weakish Q1fy19 results. Swan said Intel was evaluating NAND manufacturing operations because the business was unprofitable. “We [have] got to generate more attractive returns on the NAND side of the business… And to the extent there is a partnership out there that’s going to increase the likelihood and/or accelerate the pace, we’re going to evaluate those partnerships.”
Eleven months later and Davis is singing off the same hymn sheet. At the Morgan Stanley bash he said a NAND glut and price cuts had made selling SSDs in 2019 more difficult. But things have now changed: “When we look at the demand picture, there’s still critical shortages on NAND. So I think, you know, that that’s got a good tail wind with it.”
He added: “We’re going to look at ways of improving profitability, not only in terms of how we manage the business every day, but also in looking at partnerships and other things where we can perhaps improve the overall economics of the investment.”
3D NAND
Blocks and Files thinks Intel setting up its own 3D NAND manufacturing operation was a mistake, and not securing an in-house supply of XPoint chips is also a mistake. In our view Intel should convert the Dalian plant to make 3D XPoint chips and buy in NAND chips from Micron, or another supplier.
The situation is: Micron wants to use XPoint chips in its own storage-class memory product and Intel has no XPoint fab of its own – yet it has a 3D NAND fab in Dalian producing unprofitable chips.
Intel has tweaked its CPUs, and only its CPUs, to support 3D XPoint memory products. AMD processors don't have this integration, and nor do IBM POWER CPUs or ARM processors. Intel has a near-captive market and cross-selling opportunity for its Optane brand memory. That could lead to profitability once it sells enough of the chips.
To explain the background for our thinking, let's revisit IM Flash Technologies, the joint venture between the two companies to make flash memory, which was dissolved in October last year.
IM Flash Technologies
Intel and Micron formed a joint venture to make flash memory in 2005. IM Flash Technologies built flash chips only for Intel and Micron, which sold their own SSDs using the chips. At its peak the JV managed three fabs, two in the USA and one in Singapore.
But in 2013 Intel dropped a bombshell when it said it would sell its stake in two of the fabs to Micron. Also, it said the companies would revise their agreement so that both could work on 3D XPoint. This is the faster-than-NAND, slower-than-DRAM technology (known as storage-class memory) that Intel puts into its Optane brand products.
So Intel built more NAND chips than it needed, sold flash fab capacity back to Micron to reduce its excess supply, and looked to 3D XPoint for future business.
Enter 3D XPoint
The XPoint product was launched in July 2015, with IM Flash Technologies building the chips and Intel taking all the output. IM Flash Technologies was also making 3D NAND chips by that time.
Later that year Intel dropped bombshell number two, by announcing it would build its own 3D NAND chips at a new fab in Dalian, with initial output slated for the second half of 2016.
The third bombshell landed in October 2018, this time from Micron, which declared its intent to buy out Intel’s share in IM Flash Technologies. The acquisition was completed a year later and the agreement entailed a contractual obligation by Micron to supply XPoint chips to Intel.
HPE's generally available Container Platform software, launched this week, runs containers on non-virtualised data centre servers or the public cloud – and supports virtualised servers too, if customers prefer.
Kumar Sreekanti, CTO for Hybrid IT at HPE, provided a quote: “Customers benefit from greater cost efficiency by running containers on bare metal, with the flexibility to run on VMs or in a cloud environment.”
The HPE Container Platform contrasts with VMware’s launch of vSphere 7, which enables a single software layer to support virtual machines (VMs) and containers.
Tax-free compute?
HPE said running container services on bare metal servers avoids paying a virtualisation software tax – no need to buy VMware, for example – and a server resource tax, as servers run more efficiently. According to HPE, with containers on bare metal:
There’s no need to start up the guest operating system (OS) of the VM, including a full boot process; this speeds development, operations, and time-to-market.
Since each VM has its own guest OS, eliminating it reduces the RAM, storage and CPU resources – and the associated data centre costs – required to sustain it.
There’s no need to have a management framework for a virtualised environment and a Kubernetes orchestration environment for containers (a reference to vSphere 7’s Kubernetes support).
More containers can run on a given physical host than VMs, because multiple copies of guest OSes and their requirements for CPU, memory, and storage are eliminated.
Analytics and artificial intelligence (AI) workloads with machine learning (ML) algorithms require heavy computation to train the ML models; these applications will deliver faster results and higher throughput on bare metal.
Source technology comes from HPE’s recent acquisitions of BlueData, giving it a container management control plane, and MapR. The latter provides a distributed file system that functions as a unified data fabric for persistent storage.
Supported storage services include HPE Cloud Volumes, and the Container Platform will work with storage devices that support the Kubernetes Container Storage Interface (CSI). For now, that means Nimble arrays and Cloud Volumes but a January HPE Community statement said of v1.0.0 of the HPE CSI Driver: “HPE only supports HPE Nimble Storage but more products are on the roadmap, such as HPE 3PAR, HPE Primera and HPE Cloud Volumes. Stay tuned for future updates!”
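For readers who want to see what consuming that CSI-backed storage looks like from the Kubernetes side, here is a minimal sketch using the official Kubernetes Python client. The "hpe-standard" storage class name, namespace and capacity are hypothetical placeholders rather than HPE-documented values; today such a claim would be satisfied by a Nimble array, with 3PAR, Primera and Cloud Volumes to follow per the roadmap statement above.

```python
# Minimal sketch: request a persistent volume from a CSI-backed storage class.
# Assumes a working kubeconfig and an installed CSI driver (e.g. the HPE CSI
# Driver); the "hpe-standard" class name is a hypothetical placeholder.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig for cluster access

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="hpe-standard",   # hypothetical CSI-backed class
        resources=client.V1ResourceRequirements(
            requests={"storage": "100Gi"}    # capacity to provision
        ),
    ),
)

# Submitting the claim triggers dynamic provisioning via the CSI driver.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```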
HPE is also introducing new Pointnext professional services and reference configurations for data-intensive application workloads such as AI, machine learning, data analytics, edge computing, and Internet of Things (IoT).
Red Hat has made Ceph faster, enabled it to scale out to a billion-plus objects, and added more automation for admins.
Red Hat Ceph Storage 4 provides a 2x acceleration of write-intensive object storage workloads, plus lower latency. Object Storage Daemons (OSDs) now write directly to disk and gain a faster RocksDB-based metadata store and a write-ahead log, which together improve bandwidth and IO throughput.
Open-source Ceph provides block, file and object storage functions from a single distributed pool of storage, and keeps three copies of data for reliability.
Ceph Storage 4 features include:
Simplified installation process, using Ansible playbooks, with standard installations completed in under 10 minutes
New management dashboard with a “heads up” view of operations to help admin staff deal with problems faster
Quality of service monitoring feature to help verify application QoS in a multi-tenant hosted cloud environment and control noisy neighbour apps
Integrated bucket notifications as a Tech Preview to support Kubernetes-native serverless architectures, enabling automated data pipelines
Ceph File System (CephFS) supports taking snapshots as a Tech Preview
Added support for S3-compatible storage classes to better control data placement (see the sketch after this list)
Improved scalability with starting cluster size of 3 nodes and support for 1 billion-plus objects.
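To give a flavour of the storage class support mentioned above, here is a rough sketch using the standard boto3 S3 client against a Ceph RADOS Gateway. The endpoint, credentials, bucket and the "COLD" class name are hypothetical; storage classes and the pools behind them are whatever the Ceph administrator has defined.

```python
# Rough sketch: writing an object to a named S3 storage class on Ceph RGW.
# Endpoint, credentials, bucket and the "COLD" class name are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",  # Ceph RADOS Gateway endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(
    Bucket="analytics",
    Key="logs/2020-03-05.json",
    Body=b'{"example": true}',
    StorageClass="COLD",  # directs placement to the pool mapped to this class
)

# Reading it back works as normal; the class shows up in the object metadata.
head = s3.head_object(Bucket="analytics", Key="logs/2020-03-05.json")
print(head.get("StorageClass", "STANDARD"))
```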
Hitachi Vantara has bought the assets of Containership, a tiny, collapsed six-year-old startup based in Pittsburgh. This gives the data storage giant the means to supply storage to Kubernetes-orchestrated containers.
Hitachi Vantara will integrate Containership technology across its product portfolio in the next few quarters. It will then be able to help its customers deploy and manage Kubernetes clusters and containerised applications in their public cloud and on-premises environments.
The company reckons that "deploying containers at scale across multiple cloud environments brings management and orchestration complexity that most IT teams do not have the skills to address".
Containership's Kubernetes Engine is a CNCF-certified Kubernetes distribution that enables the provisioning and ongoing maintenance of Kubernetes clusters on any major cloud provider.
Containership crashed and died in September last year. The company started out as Docker-based and pivoted to Kubernetes technology in early 2018 with a Containership Cloud product. In a September 2019 blog, since deleted, CTO Norman Joyner announced the company’s closure: “Ultimately, we came up short … we have failed to monetize Containership Cloud in such a way that we could build a sustainable business.”
Physicists at the École polytechnique fédérale de Lausanne (EPFL) in Switzerland have altered a magnetic bit’s polarity with light, potentially opening the way to denser and faster disk drives using magneto-optical technology.
Researchers László Forró, Bálint Náfrádi and Endre Horváth suggest magneto-optical drives using this method could be physically smaller, faster and cheaper than today’s disk drives. They also say it is an alternative to heat-assisted magnetic recording (HAMR).
They are now seeking investors to back a patent application and industrial partners to productise the proof of concept demonstration.
Crystal clear
The EPFL team used visible light waves to write and re-write magnetic bits at room temperature. Previously it had only been possible by cooling a magnet to –180 degrees Celsius.
The researchers used a halide perovskite/oxide perovskite heterostructure, and the abstract of their scientific paper states that they "demonstrate that photo-induced charge carriers from the CH3NH3PbI3 photovoltaic perovskite efficiently dope the thin La0.7Sr0.3MnO3 film and decrease the magnetization of the ferromagnetic state, allowing rapid rewriting of the magnetic bit."
A photovoltaic substance converts light into electricity. The word 'perovskite' refers to a calcium titanium oxide mineral (CaTiO3), and the term is also used for a class of compounds that have the same type of crystal structure. Perovskites are used in solar cells.
In a heterostructure the chemical composition changes as the position within the structure changes. For example, and simplistically, a semiconductor could have dissimilar crystalline regions on either side of the interface between them.
The Swiss researchers placed a thin film of perovskite material on top of a magnetic substrate. Their paper states: “We use a sandwich of a highly light sensitive (MAPbI3) and a ferromagnetic material (LSMO), where illumination of MAPbI3 drives charge carriers into LSMO and decreases its magnetism.”
MAPbI3 is methylammonium lead iodide and LSMO is lanthanum strontium manganite; both are perovskites.
The full EPFL paper is published behind a paywall by PNAS (Proceedings of the National Academy of Sciences of the USA).
Infinidat is offering unlimited capacity on demand and temporary swing capacity free of charge for 30 days, in response to a spike in customer demand fuelled by the novel coronavirus epidemic.
An Infinidat insider told us the company’s rationale in going public is “to generate awareness and challenge everybody to ignore quarterly earnings and think about how this industry can do something meaningful. Pure, Dell, Netapp, all have similar programs and can join; let’s start a movement”.
In a statement today, Moshe Yanai, CEO of the disk-based storage array vendor, said Infinidat started increasing inventory levels at the end of 2019, was confident of fulfilling all orders booked in the first half of 2020 and did not anticipate any disruption in its supply chain.
However, he noted that in the last 30 days the company had experienced a "20% increase in installation delays caused by limited availability of on-premises data center staff", a "65% increase in unplanned Capacity-on-Demand (COD) activations", and "numerous customer requests for temporary 'swing' capacity".
Coronavirus and the supply chain
Blocks & Files polled several leading storage vendors today about the effects of Covid-19 on their supply chains. At time of writing, no supplier is reporting difficulties or staff shortages.
But Jerome Lecat, CEO of Scality, an object storage vendor, noted: “For the next few months, storage server and drive supply is a concern. While we don’t sell hardware, our customers do need hardware on which to install or expand storage systems based on Scality RING.
“As of now, we are not seeing shortages, and we do not yet see a need to adjust forecasts in anticipation of shortages. We do take this seriously, however, and are monitoring, with concern both for the health of our employees and associates, and for the health of our business.”
Storage vendors are possibly unaware of the impact of coronavirus on the suppliers of their suppliers. For instance, Resilinc, a US supply chain data firm, is forecasting shortages for capacitors and resistors, used in printed circuit boards. Bindiya Vakil, CEO, told the Financial Times: “The scariest thing we see is the highest numbers of parts [made in and around Hubei] are caps and resistors – tiny things nobody cares about – plus thermal components, plastics and resins, and sheet metals.”
Infinidat CEO Moshe Yanai's statement in full
Dear Customers,
I’m sure you are following the frequent updates concerning the COVID-19 response around the world. While the wellbeing of family and employees are at the front of your mind, I know everyone is also focused on minimizing risks to their business due to the potential disruption of global IT supply chains.
Infinidat began increasing inventory levels in December, 2019, and we are highly confident in our ability to deliver all 1H20 systems within 14 days of ordering. While we are not anticipating any disruption whatsoever in our manufacturing supply chain, we are, however, seeing significant increases in three related areas over the past 30 days:
20% increase in installation delays caused by limited availability of on-premises data center staff
65% increase in unplanned Capacity-on-Demand (COD) activations
Numerous customer requests for temporary “swing” capacity
In response to this situation, effective immediately, we are enabling unlimited COD and FLX consumption at no charge for a minimum of 30 days, which can also be extended based on the evolving COVID-19 situation and individual customer needs.
COD (Capacity on Demand) is reserve, unpurchased storage capacity on existing Infinidat CapEx systems. FLX is Infinidat OpEx storage capacity, billed monthly.
Up to 100% of installed capacity can be used for 30 days at no additional charge.
After 30 days we will work with you to accommodate your expected storage growth for the remainder of the year.
The best response to any time of need is when communities come together in response to a challenge. In that spirit, we are here to help – please reach out to me or your local team to let us know how we can be of further assistance.
VMware has added Kubernetes support to run containers and virtual machines simultaneously in the new vSphere release. The virtualization giant can now also offer a single management domain that covers containers and VMs in the hybrid cloud.
vSphere 7, launched today, represents the first fruits of the company's Project Pacific. Project Pacific is in turn a component of VMware's wider Tanzu initiative to enable its overall product set to build, run, manage, connect and protect containerised workloads alongside virtual machine workloads. (Read more about Tanzu deliverables in a Dell blog.)
Deepak Patil, SVP and GM for cloud platforms and solutions at Dell Technologies, provided a quote: "As organisations look to solve for managing their private clouds seamlessly with multiple public clouds, we're now able to extend our capabilities to both VMs and containers with a single hybrid cloud platform."
VMware Cloud Foundation V4
VMware today also announced a new release of VMware Cloud Foundation, a software stack that combines vSphere, the vSAN virtual SAN and NSX networking, which runs on premises and in the public cloud. The latest V4 release includes vSphere 7.0 and so can run VMs and containers at scale, according to VMware.
Dell has built a Cloud Platform system that incorporates VMware Cloud Foundation and Dell EMC’s VxRail hyperconverged hardware. It now supports running simultaneous VMs and containers on Dell EMC’s PowerEdge servers and some storage systems, including the Unity XT mid-range block and file arrays and the high-end PowerMax arrays. They can now provide storage for containers running in vSphere 7.0. Dell EMC’s PowerProtect Data Manager for Kubernetes extends PowerProtect data protection from virtual machines to K8s-orchestrated containers.
Dell’s Cloud-Validated Designs cover Unity XT and PowerMax in the Dell Cloud. The company said it can qualify external NFS and Fibre Channel (FC) storage systems for VMware Cloud Foundation but has not revealed details at time of publication.
Customers can run Kubernetes on the Dell Technologies Cloud Platform within vSphere 7.0 within 30 days of vSphere 7.0's general availability. Subscription pricing is available for the cloud platform systems.
Virtualization and containerisation
VMware traditionally virtualizes servers such that a hypervisor runs the physical server and controls the execution of virtual machines using its hardware. These virtual machines (VMs) contain an operating system and applications.
With containerisation, a controlling software entity provides the operating system and its facilities, while applications are built as a set of microservices running in containers. These containers share that single set of operating system facilities and so virtualize the server more efficiently, because operating system instances are not duplicated.
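To make the shared-operating-system point concrete, here is a minimal sketch using the Docker SDK for Python. It is purely illustrative and independent of vSphere: on a Linux host, a container reports the same kernel as the host because no guest operating system is booted.

```python
# Minimal sketch: containers share the host kernel rather than booting a guest OS.
# Requires a local Docker daemon and the "docker" Python package.
import platform
import docker

client = docker.from_env()

# Run a throwaway Alpine container and ask it which kernel it is using.
container_kernel = client.containers.run(
    "alpine:3.11", "uname -r", remove=True
).decode().strip()

print("Host kernel:     ", platform.release())
print("Container kernel:", container_kernel)
# On a Linux host both lines print the same kernel release: the container is an
# isolated process group on the host, not a separate virtual machine.
```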
The containers are scheduled to run via an orchestration service, and Google's Kubernetes (K8s) is becoming the dominant orchestrator.
Containerisation is becoming popular as a way of writing applications to run in the public cloud, so much so that they are called cloud native. As enterprises with on-premises data centres want to have a common environment for their applications across their own data centres and the public cloud they are beginning to embrace cloud-native application development.
This is at odds with the predominant on-premises application style which is to use virtualized servers, particularly with VMware vSphere.
VMware has extended vSphere to the public cloud with VMware Cloud Foundation to provide a single hybrid environment. Despite this, many customers are adopting cloud-native applications – and they want a common cloud-native environment to cover their hybrid resources.
VMware has shown it can bring K8s into its hypervisor. Nutanix AHV (Acropolis HyperVisor) has its Acropolis Container Services and Karbon front end wrapper for Kubernetes. Other hypervisors, such as Red Hat’s KVM and Microsoft’s Hyper-V will surely follow suit. This will help their owners defend their virtual server base against containerisation encroachment and can be presented as helping customers embrace containerisation.
In this week's data storage roundup, Infinidat has put things in place to support vSphere disaster recovery failover across metro distances, while Retrospect makes it simple to add systems throughout an enterprise IT estate to its backup regime. Meanwhile, Toshiba has published a little update about the disk drives that store Large Hadron Collider data at CERN.
We also have a bunch of shorts on Datrium, Formulus Black, Kingston, Pivot3 and others.
Infinidat delivers vSphere Metro Cluster support
Infinidat, the high-end storage array vendor, has announced reference architecture support for VMware vSphere Metro Storage Cluster (vMSC) with its InfiniBox Active-Active Replication.
vMSC enables a single cluster of physical host resources to operate across geographically separate data centres. InfiniBox Active-Active replication implements vMSC and achieves zero RPO and zero RTO. In other words, business services can keep operating through a complete site failure.
Pete Chargin, senior director for the vSphere platform at VMware, said in a quote: “Our customers… utilize vMSC to reduce downtime while enabling high performance. Infinidat’s vMSC reference architecture is one of the solutions that fills that need effectively for multi-petabyte enterprise environments, optimising the value of vMSC for the higher end of the enterprise market.”
Infinidat claims lower latency than other vendors that offer active-active synchronous replication capability.
Retrospect builds automatic onboarding
Retrospect has announced Retrospect Backup 17 and Retrospect Virtual 2020, plus updates for the Retrospect Management Console. The backup supplier said its product suite now provides a hosted management service for automatic onboarding of physical and virtual backup instances.
With automatic onboarding, administrators can share a single URL with their entire company, and each employee can download the client for their platform, pre-packaged with a public key for authentication. There are no user passwords, and Retrospect Backup 17 automatically adds the new clients and starts protecting them.
Retrospect Virtual 2020 is a management facility that enables customers to monitor physical and virtual backup infrastructure via the Retrospect Management Console.
Mihir Shah, CEO of StorCentric, parent company of Retrospect, had an announcement quote: “Retrospect enables any business to backup their entire infrastructure and restore a file or a system to a single point in time–days, months, or years in the past. The ubiquity of ransomware means businesses need a data protection strategy, with on-site backups for fast restore and an off-site location. Retrospect makes it a click away.”
Retrospect Backup 17 is certified for Nexsan E-Series and Nexsan Unity storage devices – Nexsan is another StorCentric sub.
Toshiba loves Large Hadron collisions
Toshiba said the CERN Large Hadron Collider has used three generations of its disk drives since 2014 to store experimental data.
The current storage setup at CERN consists of HDD buffers with 3,200 JBODs carrying 100,000 hard disk drives, providing a total of 350PB.
CERN's 3-phase Toshiba disk buying timeline
Toshiba’s upcoming launches for CMR (conventional magnetic recording) and SMR (shingled magnetic recording) drives will give CERN access to 16TB and 18TB drives. This will add 432 TB of new capacity per JBOD.
Toshiba is also developing next-generation microwave-assisted magnetic recording (MAMR) technology to extend capacities to 20TB per HDD and beyond in coming years.
Shorts
Datrium has been awarded five new US patents for data resiliency and durability; enhanced storage performance; advancements in server-powered deduplication, encryption and compression; and data path monitoring for improved network resilience.
Formulus Black has released a Tech Brief entitled: In-Memory Storage and Formulus Black’s FORSA. This “details the unique characteristics of NAND, DRAM, and SCM, as well as discusses what makes SCM so exciting. Find out how we have changed the storage game by incorporating SCM and DRAM technologies to create a high-performance and ultra low latency in-memory storage system.” You can download a copy.
Kingston Technology has released the DC1000M, a U.2 data centre NVMe PCIe SSD. It delivers up to 540K IOPS of random read performance and more than 3GB/sec throughput. The drive is hot pluggable, has end-to-end data path protection, power-loss protection and telemetry monitoring. It is available in 960GB, 1.92TB, 3.84TB and 7.68TB capacities, and has a limited five-year warranty.
MEMXPRO has introduced extra-long-life TLC SSDs for Edge AI and 5G use. They are the PT31 series with a SATA 3 interface and 10,000 P/E cycles, and the PC32 NVMe SSD series with 40,000 P/E cycles. MEMXPRO said the 10K-endurance Micron B17A TLC lasts 3x longer, and the 40K-endurance Micron B17A TLC in SLC (1 bit/cell) mode 13x longer, than MLC (2 bits/cell) or other industrial TLC NAND.
Hyperconverged infrastructure supplier Pivot3 has a partnership with UK-based AIMES, a cloud and data centre service provider. AIMES’ Health Cloud will use Pivot3 kit for university and hospital researchers to collect and analyse massive amounts of sensitive data.
Houston-based OvationData stores and processes datasets from oil and gas companies in 50 countries worldwide. Oil and gas companies "chunk" their largest files and send them to OvationData, and projects with 25TB-plus files were becoming frequent. It is using Qumulo's file system as a single workspace to concatenate these large file chunks from customers into single files for processing or transport.
Rambus has developed an HBM2E interface offering: a co-verified PHY and memory controller operating at a top speed of 3.2 Gbit/s over a 1,024-bit wide interface. That works out to an aggregate bandwidth of roughly 410 GB/sec (3.2 Gbit/s × 1,024 bits ÷ 8 = 409.6 GB/sec) from a single HBM2E DRAM stack. Find out more in a downloadable HBM2E and GDDR6: Memory Solutions for AI white paper.
StorMagic has joined the HPE Complete program as a replacement for the StoreVirtual offering that was discontinued through HPE at the end of 2019. StorMagic’s virtual SAN SvSAN software will be available worldwide immediately through HPE’s global reseller and distribution network.
New CPU developer Tachyum has become a member of two JEDEC committees developing standards for solid state memories and DRAM modules.
FPGA supplier Xilinx has announced Alveo-U25 brand Smart NICs with network, storage and compute acceleration functions for cloud service providers, telcos, and private cloud data centre operators. Suggested workloads are SDN, virtual switching, NFV, NVMe-oF, electronic trading, AI inference, video transcoding, and data analytics. Xilinx says it provides higher throughput and a more adaptable engine than SoC-based NICs.
People
With CFO Ron Pasek retiring, NetApp has appointed Mike Berry as EVP and CFO. He joins from McAfee where he was also EVP and CFO.
Nutanix has promoted Sylvain Siou to VP Systems Engineering, for Europe, Middle East & Africa (EMEA). Siou will have a role supporting Nutanix’s expansion in EMEA, while maintaining overall responsibility for the company’s team of systems engineers in the region.
Pliops, the developer of a dedicated storage processor, has announced that Carnegie Mellon University assistant professor David Nagle has joined its advisory board. According to Pliops, Nagle is a key influencer in the creation of exabyte-scale databases and storage offerings for the hyperscale world. Nagle has had senior positions with Facebook, Google and Panasas.
NetApp has bought Talon Storage, a supplier of caching software that connects branch offices to file servers in the public cloud. Terms were undisclosed.
Talon, headquartered in New Jersey, was founded by CEO Shirish Phatak in 2010. It has offices in the Bay Area, the UK, the Netherlands and India, and claims more than 400 enterprise-class customers.
Shirish Phatak
NetApp told us it “is retaining Talon team members responsible for the company’s product innovations and customer sales and support”.
Phatak said in a prepared statement: “We’re excited about the potential for our customers to be brought more completely into the NetApp customer family, with all of their resources and vision. This is a true ‘win-win’ for Talon’s stakeholders, employees, and customers, as both NetApp and Talon share a common vision for the power of the cloud as the consolidated repository for all types of unstructured data.”
Anthony Lye, head of NetApp’s cloud data services unit, also provided a quote: “We share the same vision as the team did at Talon – a unified footprint of unstructured data that all users access seamlessly, regardless of where in the world they are, as if all users and data were in the same physical location. And to do this without impacting workflow, user experience – and at a lower cost.”
NetApp’s got Talon
Talon’s FAST (file acceleration and storage-caching technology) is a VM-based caching appliance for remote and branch offices. This gets its data from a file server in the public cloud and enables ROBO sites to do without an on-premises file server. It supports NetApp cloud services such as Cloud Volumes OnTap in AWS, Cloud Volumes Service and Azure NetApp Files. The Google Cloud Platform is also supported.
NetApp and Talon say customers can seamlessly centralise data in the cloud while still maintaining a consistent branch office experience using Cloud Volumes combined with Talon FAST software. Branch offices won't have an on-premises NetApp filer, so Talon gives NetApp an indirect ROBO presence.
Talon FAST supports file and block protocols. The company said late last year it would add object storage protocol support. Blocks & Files supposes that NetApp's StorageGRID object storage product will now benefit from this.
Talon spotting
There is no recorded venture capital funding for Talon. Phatak had previously started Tacit Networks, a wide area file systems supplier, in 2000 and this was bought by WAN optimisation supplier Packeteer in May 2006 for $78m. He stayed at Packeteer for two years and then joined Bluecoat when this WAN application delivery company bought Packeteer for $268m in June 2008.
He left Bluecoat in 2010 and then, after starting up Talon, founded Velocious Networks in late 2011 to provide quality of service (QoS) technology for optimising application traffic across enterprise networks. Akamai bought the latter company in November 2013.
GigaOm has published supplier comparisons for object storage and Kubernetes storage, using its Radar marketscape methodology.
The top object storage suppliers in the leaders' area are Hitachi Vantara, Scality, NetApp and Minio, followed by Cloudian and Caringo. Red Hat and SwiftStack (soon to be acquired by NVIDIA) are poised to enter the leaders' area.
New entrant OpenIO is about to enter the challengers’ area, which is occupied by slow-moving IBM, Dell EMC and Quantum.
In his enterprise object storage report, analyst Enrico Signoretti writes: “In the past, organizations perceived object storage as a tactical way to save money with a secondary or tertiary storage tier.” This has changed and “for some organizations, object-stores can be considered primary storage similar to a block storage array, if it is used by business and mission-critical applications.”
Kubernetes storage
Kubernetes container storage is a new and rapidly evolving market. “In most cases, Kubernetes infrastructures are still relatively small,” Signoretti notes, “and applications running on them are fairly simple, with limited data storage needs.”
GigaOm classifies Portworx as the runaway (our term) leader, with Diamanti as the only other supplier in the leaders' area. Challengers include MayaData, NetApp and Pure Storage, which are each positioned to move into the leaders' area, and, to a lesser extent, StorageOS, DataCore and Datera.
Infinidat and Red Hat are the remaining challengers. Dell EMC and Hitachi Vantara are classed as new entrants and slow movers.
Signoretti’s reports are available to subscribers but you can read the introductory paragraphs and inspect the diagrams by visiting GigaOm’s website.
Western Digital has appointed David Goeckeler, the head of Cisco's networking and security business, as its new CEO.
Goeckeler starts on March 9 and outgoing CEO Steve Milligan stays on as an adviser until September to smooth the transition. Milligan announced his intention to retire in November last year.
Matthew Massengill, chairman of WD’s board, issued a quote: “David is a transformative leader with an exceptional track record of driving highly profitable, core businesses at scale while innovating successful business strategies that expanded into new markets and generated new revenue sources.”
David Goeckeler.
He is “the right person to lead Western Digital in a world increasingly driven by applications and data.”
Wells Fargo analyst Aaron Rakers offered this thought to subscribers: “While we think this news is a bit surprising, we positively view the appointment of David Goeckeler as we believe his outsider perspective and broader system experience can bring value to Western Digital.”
Goeckeler is a 19-year Cisco veteran, responsible for more than $34 billion of Cisco's technology franchise and leading a global team of more than 25,000 engineers.
He said in a prepared statement: “The industry is facing an exciting inflection point where customers of every size, vertical and geography are deploying business infrastructure that is software-driven, enabled by data and powered by the cloud. This megatrend has only just now reached an initial stage of adoption and will drive a massive wave of new opportunity.”
“In this IT landscape, the explosive growth of connected devices will continue fueling an ever-increasing demand for access to data. With large-scale hard disk drive and semiconductor memory franchises, Western Digital is strongly positioned to capitalize on this emerging opportunity and push the boundaries of both software and physical hardware innovation within an extremely important layer of the technology stack.”
Comment
Goeckeler’s experience will help WD as it pushes the idea of zoned systems – host software controlling disk and flash media at a granular level to extend working life and enhance performance.
However, the sideways step by Western Digital into data centre systems under Milligan's leadership looks dead and buried. This led to acquisitions of companies like Tegile and Amplidata, and then the sale of the resulting data centre systems assets: IntelliFlash went to DDN and ActiveScale went to Quantum.
Steve Milligan
We also doubt that Goeckeler will repeat Milligan's tactic of pushing the NAND fab partnership with Toshiba almost to breaking point when Toshiba was in a survival-threatening financial crisis and wanting to sell its NAND fab business unit. That partnership, inherited when the Milligan-led WD bought SanDisk in October 2015, is now with Toshiba Memory's successor business, Kioxia. It must be seen as one of WD's most prized assets.