
Cambridge University opts for NetApp HCI in cloud consolidation

Cambridge University

The University of Cambridge has plumped for NetApp HCI in an IT infrastructure overhaul that underpins online teaching and learning.

The Covid-19-ready solution sees Citrix Workspace desktop software running on NetApp hardware. Citrix Consulting helped to install the NetApp HCI hardware, and Citrix Workspace is now available to thousands of users.

NetApp HCI enables Cambridge University Information Services (UIS) to take advantage of the cloud, scale storage and compute, and reduce software licensing costs.

NetApp will help UIS to centralise, consolidate, and migrate the IT portfolio to a private cloud environment. The company says it will help develop an IT billing system for use across the university. 

UIS said it “consolidated multiple storage systems and services offered by the university. We then searched for efficiencies by automating processes, improving scalability and eventually moving some of our larger workloads to the cloud. Our partnership with NetApp allows us to consolidate many of our services, saving much-needed resources while delivering on staff and students’ needs in the ‘New Normal’.”

UIS has selected NetApp HCI, which is based on the SolidFire all-flash arrays running the Element OS. NetApp HCI comprises software running on H-series hardware nodes, as this datasheet illustration depicts:

NetApp HCI nodes

NetApp says that the UIS move to the cloud has enabled students and staff to work and lecture remotely, while also meeting demanding performance and reliability service-level agreements and objectives for the nuclear physics simulations and big data analysis involved in the university’s research programmes.

Cambridge University’s Department of Computer Science and Technology has run NetApp file servers in a system called Elmer since 2002.

Optane PMem ‘dramatically outperforms’ rival SCMs, says Hazelcast

Interview: Storage IO slows down applications, and there is nothing you can do about it except avoid it altogether. In-memory computing accelerates applications to sprint speed by having them run entirely in memory. This eliminates storage IO, except when the application is first loaded into memory.

However, memory is expensive, and capacity per server is limited by the number of CPU sockets and memory channels to around 1.5TB. If an application’s code and data are larger than 1.5TB, it can’t all run in memory. Some storage IO is necessary.

The use of Optane persistent memory increases memory capacity to a maximum of 4.5TB per server, greatly increasing the space available for in-memory applications. Newer memory technologies such as DDR5 and HBM should enable in-memory applications to run even faster. But how much faster?

John DesJardins, CTO of Hazelcast, the developer of an in-memory computing platform, has shared his thoughts with us about Intel Optane PMem, HBM and DDR5 DRAM. Check out our interview below.

Blocks & Files: DRAM is expensive so an in-memory app needs costly memory. Is storage-class memory, like Optane, worth the money?

John DesJardins: Hazelcast has benchmarked with Intel Optane and found it performs well, offering an attractive, lower cost alternative to DRAM.

John DesJardins

Blocks & Files: How does Hazelcast extend memory by using persistent memory like Optane, or fast SSDs such as Samsung’s Z-SSD or Kioxia’s XL-Flash drives?

DesJardins: Hazelcast is optimised as a pure in-memory platform, meaning that our data, indexes and metadata are all retained in-memory for ultra-low-latency performance. So, we don’t use SSDs or other storage to extend memory. We do offer persistence features for ultra-fast restarts or optionally for data structures such as within our CP Subsystem. These persistence features can leverage either Optane PMem or SSDs including Intel Optane SSD, OCZ Octane SSD, Samsung Z-SSD, etc.

Blocks & Files: How does Hazelcast compare and contrast these different storage-class memory devices?

DesJardins: Intel Optane PMem dramatically outperforms other storage class memory in our benchmarks. We see nearly 3X faster restart times with Optane PMem, for example.

Optane Persistent Memory.


Blocks & Files: How does it use Optane PMem such that no app code changes are needed?

DesJardins: Hazelcast provides advanced Native Memory features that extend beyond Java to fully leverage memory and storage interactions available from the Operating System level, and this has been extended to leverage Intel libraries for Optane PMem. These capabilities are all abstracted and managed via simple configurations within our platform, eliminating any need for application code changes to leverage Optane PMem.
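
To make that configuration-only point concrete, here is a minimal sketch using the Hazelcast Python client, with placeholder cluster and map names, assuming a locally reachable cluster. The application code below stays identical whether the cluster members keep data in DRAM or in Optane PMem, because Native Memory is a server-side configuration rather than something the application enables:

```python
# pip install hazelcast-python-client
import hazelcast

# Connect to a Hazelcast cluster. Whether that cluster stores map data in
# DRAM or in Optane PMem is decided by server-side Native Memory settings;
# nothing in this client code changes either way.
client = hazelcast.HazelcastClient(cluster_name="dev")  # placeholder cluster

# Use a distributed map like any in-memory key-value store.
orders = client.get_map("orders").blocking()
orders.put("order-1001", {"sku": "ABC-42", "qty": 3})
print(orders.get("order-1001"))

client.shutdown()
```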

Blocks & Files: Can Hazelcast show price performance data to justify PMem use?

DesJardins: We are working with Intel on a paper that will include the price/performance TCO figures to justify Optane PMem. We expect to release this paper within the next few weeks and are happy to share it.

Blocks & Files: What does Hazelcast think of DDR5?

DesJardins: Hazelcast tracks and evaluates all advances in hardware technology to assess their impact and the benefits to our customers. As they become available on the market, we will do more detailed benchmarks. We see good potential in DDR5 to improve performance.

However, as the standard for DDR5 DIMMs was only released on July 14, 2020, we have not yet done benchmarks to know what that level of impact will be. Until now, DDR5 was only available integrated into GPUs and other specialised processors.

High Bandwidth Memory scheme

Blocks & Files: What does Hazelcast think of High Bandwidth Memory technology?

DesJardins: We are tracking High Bandwidth Memory technology but have not yet seen it made available in server-class hardware or from Cloud Service Providers outside of being integrated with GPUs or other specialised processors. The first availability of HBM in general purpose DIMMs seems to be with DDR5. We will continue to track demand and availability, as well as assess how these technologies apply to our customers’ use-cases. As technology is available, we will benchmark with it, and where it makes sense, optimise our platform to leverage it.

Blocks & Files: Does in-memory processing benefit containerised apps? Can Hazelcast work with Kubernetes?

DesJardins: Yes, in-memory platforms work very well within containerised applications. Hazelcast is Cloud-Native and supports Kubernetes, as well as Red Hat OpenShift, VMware Tanzu and Kubernetes on major Cloud Service Providers.

Comment

The net-net here is that in-memory applications will get larger and execute in less time. They have an inherent advantage over traditional applications that can’t fit into memory and have to use storage IO.

But in-memory compute will still be available to only a minority of applications until memory capacity limits imposed by CPU sockets and memory channels are swept away, and until DRAM becomes more affordable.

Exclusive: Igneous lays off staff, blames economy

Igneous, the Seattle-based data management startup, has laid off an unspecified number of staff, citing a “difficult economic environment”.

A company spokesperson confirmed the layoffs, adding: “We continue to serve our customers and work with our partners to deliver value.”

According to LinkedIn, Igneous employs 51-200 people. Pitchbook estimates 75 staff are on the books.

Igneous has developed a UDMaaS (Unstructured Data Management-as-a-Service) that provides a petabyte-scale unstructured data backup, archive and storage system with a public cloud backend. At the end of 2019, the company said it had 40-60 customers.

The company was founded in 2013 to develop software to manage massive file populations (read our profile). It has raised $66.7m in venture funding, including $25m in the most recent round in March 2019.

Founding CEO Kiran Bhageshpur resigned in December last year, while retaining a seat on the board. Dean Darwin, the incoming CEO and President, joined the Igneous board in June 2019. His career includes senior roles at Palo Alto Networks and F5 Networks.

Kioxia integrates KumoScale with Kubernetes

Kioxia, the memory chip maker, has integrated its KumoScale flash storage array into the Kubernetes world.

KumoScale is Kioxia’s software for operating a box of flash drives; it presents their capacity across an NVMe-oF link as virtual storage volumes to host server applications. The software includes thin provisioning, snapshots, clones, drive-to-drive migration, and TCP/IP and RoCE transport support.

KumoScale can now serve as a container host, providing managed NVMe volumes to storage applications such as distributed file systems and object stores running locally on KumoScale storage nodes. These capabilities and applications, along with other KumoScale control plane services, are deployed and managed via a Kubernetes “micro-cluster”. 

The company has added a raft of Kubernetes-related features.

  • The KumoScale CSI driver makes KumoScale volumes appear to host containers as fast local NVMe drives (a sketch of how this looks from the Kubernetes side follows this list). The driver is available in the Cloud Native Computing Foundation repository.
  • Containerised KumoScale Management Cluster control services are installed and managed by a private, dedicated Kubernetes “micro-cluster” that manages dozens or hundreds of storage nodes.
  • A single KumoScale cluster can simultaneously serve many client clusters, applying differing controls while maintaining isolation and storage quotas for each cluster.
  • Policy isolation between tenants.
  • KumoScale storage nodes can host container-based storage interfaces, such as file and object storage services, which are automatically installed and monitored by the KumoScale Management Cluster.
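
Here is that sketch: a hedged example using the official Kubernetes Python client to request a volume through a KumoScale-backed storage class. The storage class name kumoscale-csi is a placeholder for illustration, not a documented value:

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context
core_v1 = client.CoreV1Api()

# Request a 100GiB volume from a hypothetical KumoScale-backed storage class.
# The CSI driver would satisfy the claim with an NVMe-oF volume that the pod
# then sees as a fast local NVMe drive.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="kumoscale-csi",  # placeholder class name
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```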

KumoScale can serve block, file, and object storage simultaneously to Kubernetes, OpenStack, and bare metal clients. 

Kioxia NVMe-oF array difference

To date, rival all-flash array storage suppliers have integrated their arrays with Kubernetes via CSI (Container Storage Interface) plugins. They also support, or will support, NVMe-oF access. They have control plane software running in the array storage controllers.

In contrast, Kioxia KumoScale’s array has no controllers to run its control plane software. The equivalent software, KumoScale Management Cluster control services, runs as containers in a connected Kubernetes host cluster. According to Kioxia, KumoScale orchestrates storage volumes at data centre scale, like Kubernetes orchestrates containers. 

Kioxia is targeting service providers and large private cloud deployments with KumoScale. A free two-week trial is on offer, hosted on Kioxia cloud infrastructure.

Cisco clambers onto containers with two Kubernetes acqui-hires

Banzai Tree

Cisco is beefing up its Kubernetes capability with an acqui-hire, its second in recent weeks.

The networking giant said yesterday it is acquiring most of the assets of Banzai Cloud, an early stage startup based in Budapest. It gains cloud-native app development, deployment, runtime and security workflow technology, plus a dev team. Last month, the company bought Portshift, an Israeli developer of agentless Kubernetes container security.

The Banzai team – there are 37 employees at the company, according to Hungarian reports – joins Cisco’s Emerging Technologies and Incubation group, and will work on cloud-native networking, security and edge computing environments. Portshift devs also work in this business unit.

Liz Centoni.

Liz Centoni, Cisco SVP for Emerging Technologies and Incubation, said yesterday: “As modern cloud-native applications become more pervasive, the environments in which these applications run are becoming thinner (containers, microservices, functions), increasingly distributed and more geographically diverse.”

So how does this mesh with Cisco and its IT network piping? According to Centoni, the “cloud-native application relies on the network to provide application and API connectivity and a runtime platform for an ever-changing cloud topology.” These two acquisitions “underscore our commitment to hybrid, multi-cloud application-first infrastructure as the de facto mode of operating IT.”

Not only that, but also a cloud-native, containerised infrastructure.

Banzai screen shot

In Cisco’s recent earnings call, CEO Chuck Robbins outlined six focus areas for the company.

  1. Focus on the application experience for customer applications, with insights, visibility, security, and even networking capabilities.
  2. Deliver core networking capabilities as a service, with cloud-delivered software and on-premises hardware as a service.
  3. Focus on communication providers – the telecom providers, the service providers, the cable providers and the cloud companies – and look at big architectural transitions such as 400G, flattening of the networks, and 5G.
  4. The future of work, with remote work being more than collaboration.
  5. End-to-end security.
  6. Deliver Cisco asset capabilities to the edge (carrier, cloud, and enterprise edge).

Western Digital, Microsoft and all that DNA storage jazz

Western Digital, Microsoft, Twist Bioscience and Illumina have set up the DNA Data Storage Alliance to develop a commercial DNA archival storage ecosystem.

They will devise an industry roadmap, build use cases for various markets and industries, and promote and educate the larger storage community to advance DNA storage adoption.

The founder members say that 30 per cent of digital businesses will mandate DNA storage trials by 2024, addressing an exponential growth of data that threatens to overwhelm existing storage technology.

Steffen Hellmold, Western Digital VP for corporate initiatives, said there is an “unmet need for a new long-term archival storage medium that keeps up with the rate of digital data growth. We estimate that almost half of the data storage solutions shipped in 2030 will be used to archive data as the overall temperature of data is cooling down. We are committed to providing a full portfolio of storage solutions addressing the demand for hot, warm and cold storage.”

Dr. Emily Leproust, CEO and co-founder of Twist Bioscience, said in a press statement: “DNA is an incredible molecule that, by its very nature, provides ultra-high-density storage for thousands of years. By joining with other technology leaders to develop a common framework for commercial implementation, we drive a shared vision to build this new market solution for digital storage.”

Emily Leproust in video.

Twist Bioscience provides DNA fragments and data-writing capabilities. Illumina has DNA sequencing and genotyping technology. Microsoft has been involved in DNA storage research projects, and Western Digital thinks the field is interesting.

The four alliance founders claim DNA data storage has the potential to deliver a low-cost archival data storage technology, with 10 full-length digital movies fitting into a volume the size of a single grain of salt.

DNA data storage encodes binary data (base 2 numbering scheme) into a 4-element coding scheme using the four DNA nucleic acid bases: adenine (A), guanine (G), cytosine (C) and thymine (T). For example, 00 = A, 01 = C, 10 = G and 11 = T. The transformed data is encoded into short DNA fragments and packed inside some kind of container, such as a glass bead, for preservation. Such fragments are tiny and can theoretically last for an extraordinarily long time. They can be read back in a DNA sequencing operation.
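
As a toy illustration of that two-bits-per-base mapping, the Python below encodes and decodes bytes using the example scheme above. Real encoding pipelines add error correction and avoid troublesome sequences such as long runs of the same base; this sketch shows only the base-2 to base-4 transformation:

```python
# Map each 2-bit value to a DNA base, per the example mapping above.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Encode bytes as a DNA base string, four bases per byte."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # most significant bit-pair first
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(strand: str) -> bytes:
    """Reverse the encoding: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

strand = encode(b"hi")
print(strand)                  # CGGACGGC ('h' = 0x68, 'i' = 0x69)
assert decode(strand) == b"hi"
```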

Twist and Microsoft have said that, theoretically, one gram of DNA can store almost a zettabyte of digital data – one trillion gigabytes. Fewer than twenty grams of DNA could store all the digital data in the world. 

That is hugely impressive but a tad misleading. The salt-grain-size glass beads are in turn stored in cylindrical phials the size of spectacle cases.

They also say DNA enables cost-effective and rapid duplication. That’s not rapid on the same timescale as SSD accesses, though. The Microsoft and University of Washington demo system had a write-to-read latency of approximately 21 hours for its 5-byte data payload. The researchers wrote: “While 5 bytes in 21 hours is not yet commercially viable, there is precedent for many orders of magnitude improvement in data storage.”

Karin Strauss

Dr. Karin Strauss, senior principal research manager at Microsoft, issued a quote: “We’re encouraged by the potential for more sustainable data storage with DNA and look forward to collaborating with others in the industry to explore early commercialisation of this technology.”

Ten other organisations have joined the alliance: Ansa Biotechnologies, CATALOG, The Claude Nobs Foundation, DNA Script, EPFL, ETH Zurich, imec, Iridia, Molecular Assemblies, and the Molecular Information Systems Lab at the University of Washington.

Claude Nobs? He created the Montreux Jazz Festival in 1967 and the foundation is investigating DNA storage of more than 14,000 tape reels of live jazz recordings. 

Trilio covers more K8s variants and controls with enhanced console

Shipping container

Trilio has introduced a management console and extended coverage for its Kubernetes data protection service.

With TrilioVault for Kubernetes 2.0, the company aims to provide an all-in-one SaaS, covering all the distributions and public clouds with a central facility that discovers K8s-built apps and applies protection policies. The management console means fewer command line and scripting operations.

The idea is to keep up with expanding enterprise deployments of applications built from Kubernetes-orchestrated containers. These deployments are taking multiple K8s distributions and clusters in multiple locations on board, as well as using on-premises and various public clouds for application deployment.

Murali Balcha, Trilio founder and CTO, said in a statement: “The cloud and cloud-native applications are presenting new data management challenges around scale, performance and mobility that legacy backup products simply cannot support.” 

TrilioVault for Kubernetes is now validated for Rancher Server with RKE (Rancher Kubernetes Engine), adding to existing Amazon EKS, Google Kubernetes Engine (GKE), and Red Hat OpenShift certifications. Also, Trilio has certified Cassandra, Mongo and MySQL databases for app-consistent backups and restores. The software is available as a free trial or enterprise edition at trilio.io, Red Hat Marketplace and with IBM’s Cloud Paks.

Trilio is a seven-year-old startup which bagged a single $5m A-series round in 2017. Nevertheless, the cloud-native software company is classed as a leader in GigaOm’s Radar for Kubernetes Data Protection report, published last week. The company said it experienced 300 per cent revenue growth in the first half of 2020 as enterprise adoption of Kubernetes accelerated.

Hitachi Vantara fires up second round of layoffs in a year. Third round is in the wings

Hitachi Vantara, the data storage vendor, has implemented a big round of layoffs, its second in 2020. Up to 1,000 staff were laid off in the week beginning 19 October, according to our sources. Company insiders are bracing themselves for another round early next year.

A Hitachi Vantara spokesperson confirmed the job cuts had affected “various departments and locations around the world” but said the numbers affected were “much smaller than the number you mentioned”.

The spokesperson declined to reveal the actual number, adding: “Of course, no matter how many were impacted, these were difficult decisions for the company, and we are doing everything we can to make the transition as easy as possible for the affected employees and teams.”

Last week we reported the departure of CMO Jonathan Martin and Digital Solutions President Brad Surak. We have since learned that CFO Catriona Fallon and Chief Product Officer Sanjay Chikarmane have also left the company.

Hitachi Vantara’s two business units, Digital Infrastructure and Digital Solutions, were formed via the merger of Hitachi’s storage subsidiary Hitachi Vantara with Hitachi Consulting. This reorg completed in January and led to a big round of layoffs affecting up to 1500 staff.

New CEO Gajen Kandiah, who joined the company from Cognizant in July, has outlined his strategic priorities for Hitachi Vantara in this blog.

Veeam and Pure snapped up the top two Kubernetes DP suppliers

Pure Storage bought Portworx in September and Veeam bought Kasten last month, to establish an early lead in Kubernetes data protection, according to GigaOm.

Enrico Signoretti, author of The GigaOm Radar for Kubernetes Data Protection, published this week, has reviewed a dozen vendors, classifying products into three categories: traditional products that now support Kubernetes; cloud-native storage with data protection capabilities; and cloud-native data protection. The report examines how these can be deployed in hybrid and multi-clouds or consumed as software-as-a-service (SaaS).

The Radar diagram above shows Robin joining Portworx and Kasten as the top 3 leaders. Signoretti said the report illustrates the typical characteristics of a new market, with a number of startups leading the pack. “Established vendors are still far from the bull’s-eye but are working quickly to bridge the gap with the leaders.”

Traditional vendors adapting existing products to support Kubernetes protection include Commvault and IBM. Cloud-native storage suppliers include Arrikto, MayaData, Portworx and Robin. Suppliers that offer cloud-native data protection include Arrikto, Commvault, Druva, HYCU, Veeam-Kasten, MayaData, Portworx, Trilio, Velero and Zerto.

“Containers and Kubernetes are dramatically different from legacy technologies, such as virtual machines and hypervisors, and traditional data protection solutions aren’t up to the task,” Signoretti writes.

That task is more than basic backup, with products needing to provide “application and data mobility, improve security, and simplify DevOps processes with copy data management.”

Data management is becoming an important feature in this market, particularly for sophisticated users, he writes. They wish to replicate applications to remote sites or from on-premises to a public cloud, and also make application copies and clones.

This is a fast-developing market and a future edition of the report will likely cover many more suppliers. For instance, traditional and important data protection vendors such as Dell EMC and Veritas are not included. However, Dell EMC’s PowerProtect recently added Kubernetes capabilities, and Veritas NetBackup, Catalogic CloudCasa, and Cohesity can also protect containers.

Cisco should junk UCS servers, says analyst. And HCI too?

money pouring into black hole

Cisco’s latest earnings (report) were dragged down by performance of the infrastructure platforms business. What gives?

In yesterday’s earnings call, Bank of America Merrill Lynch analyst Tal Liani asked the data networking giant: “Infrastructure Platforms are down 16 per cent year over year. And it’s worse than all of your competitors. If I just look at switching and routing in Juniper and Arista on a global basis, without getting into details of the composition, you’re down more than they are. And the question is, why is it down so much versus competition?”

Cisco chairman and CEO Chuck Robbins replied: “If you look at what really drove that, it was compute, and a lot of it is sort of the pricing that came through compute, which neither of those competitors you mentioned have.”

That means Cisco UCS servers and the HyperFlex HCI systems based around them.

William Blair analyst Jason Ader told his subscribers: “While Cisco has been attempting to pivot its business toward a greater mix of software and recurring revenue, we believe the pandemic has spotlighted the firm’s product deficiencies (especially in the cloud), nonstrategic assets (e.g., Cisco’s compute portfolio), and competitive challenges (best-of-breed competition chipping away in multiple product areas).”

We asked Ader if he thought Cisco should dispose of its UCS and HyperFlex product lines. He told us: “Yes on UCS. Especially if they want to be more of a software play.”

Cisco’s Manish Agarwal, director of product management for HyperFlex, blogged earlier this month that Cisco will introduce a software-only version that can run on UCS and third-party x86 servers – with a “Cisco Validated” stamp – and in the public cloud. This frees HyperFlex from UCS dependency and launches some time in 2021. But why on earth is Cisco, a networking equipment vendor, offering an HCI product at all, if not to sell its UCS servers?

This week in storage with Kioxia and Elastic

In this week’s storage news roundup, there are signs of a coming together of Kioxia and Western Digital over zoned SSDs. Also, Elastic is adding searchable snapshots stored in public cloud object stores.

Kioxia adds QLC support to software-enabled flash

Kioxia has added support for QLC flash and weighted fair queueing to its Software-Enabled Flash (SEF) technology.

SEF enables a big data centre operator to closely manage the details of data placement on SSDs to optimise capacity, efficiency, performance and endurance across a large fleet. APIs provide for the creation of virtual devices on SSDs, similar to Western Digital’s zoned SSD concept. Kioxia is active in the NVMe technical working group for ZNS (Zoned Namespace) SSDs.

Weighted fair queueing (WFQ) is a packet-based network scheduling algorithm that allows different bandwidth priorities to be given to different types of traffic, with each flow type given a different weight in scheduling decisions. Different weights could be given, for example, to traffic flows to the different virtual devices set up on SSDs by SEF. This would enable varying levels of service to be met, as the sketch below illustrates.

Kioxia claims this allows the user to define and control latency live as applications are running.
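
For readers unfamiliar with the algorithm, here is the core of weighted fair queueing in schematic Python: each flow’s packets get a virtual finish time that grows by size divided by weight, and the scheduler always services the packet with the earliest finish time. This is a generic illustration of WFQ, not Kioxia’s implementation:

```python
import heapq

class WFQScheduler:
    """Schematic weighted fair queueing: higher weight, bigger bandwidth share."""

    def __init__(self):
        self.virtual_time = 0.0  # scheduler-wide virtual clock
        self.last_finish = {}    # per-flow last virtual finish time
        self.queue = []          # min-heap ordered by virtual finish time

    def enqueue(self, flow, size, weight):
        # A packet's virtual finish time advances by size/weight, so a flow
        # with four times the weight accumulates virtual time four times
        # more slowly and is serviced correspondingly more often.
        start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
        finish = start + size / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, flow, size))

    def dequeue(self):
        # Service the packet with the earliest virtual finish time.
        finish, flow, size = heapq.heappop(self.queue)
        self.virtual_time = finish
        return flow, size

sched = WFQScheduler()
for _ in range(3):
    sched.enqueue("bulk", size=4096, weight=1)     # low-priority flow
    sched.enqueue("latency", size=4096, weight=4)  # 4x the bandwidth share
print([sched.dequeue()[0] for _ in range(6)])
# ['latency', 'latency', 'latency', 'bulk', 'bulk', 'bulk']
```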

Searchable Snapshots

Elastic, which supplies Elasticsearch and the Elastic Stack, has announced the beta of searchable snapshots. The snapshots can be searched by analytical routines to get insights to drive operational efficiencies.

Elastic will initially support a new lower-cost cold tier of storage, which offloads redundant copies of data to object stores to drive savings. The redundant copies are searchable snapshots of data in primary storage tiers, placed in low-cost object stores such as Amazon S3, Azure Storage, and Google Cloud Storage. This frees up primary storage.

Elastic claims formalised data tier definitions with built-in data transition rules and integrated index lifecycle management will make it easier for customers to manage the full lifecycle of their data automatically.
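
As a hedged illustration, an index lifecycle management policy of roughly the following shape moves an index to the cold tier as a searchable snapshot after 30 days. This follows the Elasticsearch 7.10-era ILM API; the endpoint, policy name and snapshot repository are placeholders:

```python
# pip install requests
import requests

ES = "http://localhost:9200"  # placeholder cluster endpoint

policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {"rollover": {"max_size": "50gb", "max_age": "7d"}}
            },
            "cold": {
                "min_age": "30d",
                "actions": {
                    # Mount the index from a snapshot held in a low-cost
                    # object store (e.g. an S3-backed repository), keeping
                    # it searchable while freeing primary storage.
                    "searchable_snapshot": {"snapshot_repository": "my-s3-repo"}
                },
            },
        }
    }
}

resp = requests.put(f"{ES}/_ilm/policy/logs-cold-tier", json=policy)
resp.raise_for_status()
print(resp.json())
```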

In a future release, Elastic customers will be able to use a frozen tier of storage, such as AWS Glacier, where all data can be kept in lower-cost object stores. Read more in an Elastic blog.

Flash Memory Summit 2020 awards

The virtual Flash Memory Summit has concluded and attendees have voted for 18 examples of outstanding technology in various categories.

One category was Most Innovative Flash Memory Technology, and the awards were given to:

  • SSD – Intel’s Optane Persistent Memory
  • Controller and System – Marvell and HPE NVMe RAID Accelerator for HPE OS Boot Device
  • All-flash array – Pure Storage QLC-based FlashArray//C
  • Hardware and Software Architecture – VAST Data
  • Industry Standards – Kioxia EDSFF E3.S SSD and Storage System
  • Industry Standards – NVM Express and Zoned Namespaces (ZNS)

Kioxia EDSFF E3.S SSDs in Dell PowerEdge chassis

One award winner was Pavilion Data, for its HyperParallel Flash Platform in massive deployments across tens of petabytes. It achieved 120GB/sec, 20 million IOPS, and latency of only 25µs across the fabric to meet the facial recognition needs of a US federal government agency.

Other winners included Pliops (compute on storage drive), Fungible (Data Processing Unit FS1600), WekaIO (Weka Accelerated DataOps for AI) and IBM with its ESS3000 all-flash array.

Shorts

Caringo has announced Swarm 12 object storage, which enables more flexible distributed protection and immediate global content use across geographically dispersed sites. The software integrates with single sign-on (SSO) platforms and has a simplified web-based UI. There is support for S3 Glacier and Glacier Deep Archive and capacity and performance optimisations for dense storage nodes and flash.

CRU Data Security Group (CDSG), a provider of data security, data transport and disaster-proof data storage devices, has taken over fellow US company Digistor, a manufacturer of secure solid-state drives and removable storage products. The transaction is described as a merger.

The CXL Consortium has released the 2.0 version of the Compute Express Link specification. CXL interface tech is for connecting data centre host processors to accelerators (graphics chips and FPGAs), memory buffers, and smart network interface controllers (NICs). CXL 2.0 is backwards compatible with CXL 1.1 and 1.0 and adds memory pooling, connecting many devices across one CXL Link, standardised persistent memory management and a CXL fabric manager.

Infinidat is marketing an edge security appliance that combines the company’s InfiniBox storage array; VMware Validated Designs (VVD) for SDDC implementation; and Splunk SIEM for realtime security monitoring, threat detection, forensics, and incident management.

Inspur Information and Samsung Electronics announced at OCP Tech Week a jointly developed 1U server-based open all-flash storage resource pooling solution. It is designed for local large-capacity/high-performance storage, and the product architecture is to be made open source. The server uses the new E1.S “ruler” SSD format, holds up to 256TB in 1U (1PB in 40U), and supports remote sharing via NVMe-oF.

Kioxia wants people to stop using the “ruler” SSD moniker. It tells us: “It’s not a big deal, but we wanted to provide some background on the ruler naming. Kioxia is in the SFF groups that are defining EDSFF E3.x and E1.x, and ruler is not officially mentioned in either specification. The term ruler was used before E1.L was actually named and … stuck around as a nickname. E3.x is more like a traditional 2.5” form factor, so that is certainly not like a ruler. Also, Kioxia is co-leading the SNIA SSD SIG that markets EDSFF and we are not using ruler when discussing E1.S.”

Palo Alto Networks has introduced Enterprise Data Loss Prevention (DLP), a cloud-delivered service to prevent data breaches by automatically identifying confidential intellectual property and personally identifiable information (PII).

Panasas has appointed Brian Peterson as Chief Operating Officer. He has held roles as SVP of sales and marketing, vice president of business development, and vice president of international operations. In the latter role, he ran EMEA for Emulex.

Phison PS5018-E18 controller.

Phison has built the PS5018-E18, a second generation PCIe Gen 4 NVMe controller. It has 4 PCIe lanes and uses a distributed architecture with many small cores to perform the workload in parallel. It delivers a record 7GB/sec on both read and write access. 

Pliops is developing its Storage Processor and showed it at the Flash Memory Summit. It is a PCIe card which accelerates storage processing and uses Intel QLC (4bits/cell) 3D NAND flash. The card offers thin provisioning, reduced write amplification, and a 5-year warranty. It provides RAID 5-like protection at twice RAID 0 speed, and better-than-TLC SSD performance. The product is sampling with GA in early 2021.

Redis Labs has announced that the new fully integrated tiers of Redis Enterprise on Microsoft Azure Cache for Redis are available now in public preview. These new tiers bring search, time series and Bloom probabilistic data structures to Azure Cache. Azure customers can consume Redis Enterprise v6.0 like any other Azure Cache tier and use their existing commitment to Azure.
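
As a hedged sketch of what those module capabilities look like from client code, the snippet below uses redis-py to exercise a time series command and a Bloom filter command. The host, port and access key are placeholders, and it assumes a tier with the RedisTimeSeries and RedisBloom modules enabled:

```python
# pip install redis
import redis

# Placeholder endpoint and credentials; substitute the host, port and
# access key shown for your own cache instance.
r = redis.Redis(
    host="mycache.redis.cache.windows.net",  # placeholder host
    port=6380,                               # placeholder port
    password="<access-key>",
    ssl=True,
)

# RedisTimeSeries: record a metric sample and read the latest value back.
r.execute_command("TS.ADD", "temperature", "*", 21.5)
print(r.execute_command("TS.GET", "temperature"))

# RedisBloom: probabilistic set-membership test.
r.execute_command("BF.ADD", "seen-urls", "https://example.com")
print(r.execute_command("BF.EXISTS", "seen-urls", "https://example.com"))  # 1
```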

RSC Data Tornado product.

At FMS 2020, the RSC Group announced that its RSC Data Storage-on-Demand solutions support Intel’s DAOS (Distributed Asynchronous Object Storage) open-source cluster file system. It enables multi-layered storage based on the Lustre file system in disaggregated composable infrastructure, plus flexible management of NVMe disk pools. The RSC BasIS platform enables a composable approach to DAOS management, combining servers with PMem and servers with NVMe devices in pools connected by a fast network fabric.

RSC presented an improved RSC Tornado AFS intelligent data storage-on-demand system with up to 1PB capacity per node, enabled by 32 Intel NVMe SSDs in the EDSFF E1.L format. The node also includes two 2nd-generation Xeon Scalable processors, Optane SSDs and Optane DC Persistent Memory modules. RSC Tornado AFS nodes use hot-water liquid cooling with a record low PUE of 1.04.


SoftIron is joining the iRODS Consortium after certifying that its HyperDrive storage appliances are fully compatible with the iRODS architecture. iRODS develops free open source software for data discovery, workflow automation, secure collaboration, and data virtualization that is used globally by research, commercial and governmental organisations.

ThinkParQ, the company behind the BeeGFS parallel file system, has appointed Dr. Peter Braam as CTO. He has held exec roles at public companies, has founded or co-founded six startups, and has been working with the University of Oxford and the University of Cambridge on radio telescopes, machine learning and data-intensive computing. Braam will lead the R&D team and guide BeeGFS’s architectural and product development and its strategic roadmap.

Google: Let the great cloud database migration begin

Wildebeest migration.

Google is previewing a serverless Database Migration Service (DMS) to move SQL databases to its Cloud SQL. The serverless aspect means customers do not have to set up server instances to run the migration – DMS takes care of the underlying server infrastructure provisioning and operation.

DMS minimises downtime, Google says, and protects data during migration with support for multiple secure, private connectivity configurations. This isn’t a general database migration service as DMS is a SQL-to-SQL move and uses the source database’s own replication facilities. That feature helps to guarantee data and metadata fidelity. Supported databases are MySQL, PostgreSQL, SQL Server and AWS RDS.

Andi Gutmans

Andi Gutmans, Google Cloud engineering VP and GM for databases, blows the trumpet in the press statement:

“Database migration is a complex process for most businesses. With Database Migration Service, we’re delivering a simplified and highly compatible product experience so that, no matter where our customers are starting from, they have an easy, and secure way to migrate their databases to Cloud SQL.”

And John Santaferraro, research director at Enterprise Management Associates, chips in: “We have seen an acceleration in migrations across the board, including a wave of customers leaving AWS for other cloud service providers. This makes it vital for providers like Google Cloud to provide tools to streamline these migrations.”

Google cites a Gartner study forecasting that 75 per cent of all databases will be on a cloud platform by 2023. This figure is surprisingly high and, if the call is correct, it is bad news for hybrid IT vendors, as the on-premises side of any setup would be quite small.

Customers can migrate with DMS at no additional charge for native like-to-like migrations to Cloud SQL. Support for PostgreSQL is currently available for limited customers in preview, with SQL Server coming soon. A blog by Gutmans provides background information.