Acronis has a fresh release of its True Image PC-tablet disk image backup and cloning software, with faster performance and better anti-malware defences.
It can stop ransomware and crypto-jacking attacks in real time, automatically restoring any affected files, Acronis claimed.
True Image supports Windows, macOS, iOS and Android operating systems. The latest release, True Image 2020, enables users to automatically replicate local backups in the cloud, so they have a local copy of the disk image as well as a further backup in the cloud for added protection.
There are more than 100 point enhancements in the update, which is presented under a cyber protection banner and combines data protection and security attributes.
True Image 2020 has a new backup format that delivers faster backup and recovery speeds. It also enables users to browse files in their cloud backups more quickly.
True Image 2020 desktop screen.
Other enhancements include Wi-Fi network selection and the prevention of backups when a device is operating on battery power. There are also improved machine learning models for detecting malware attacks.
There are three versions: Standard, Advanced and Premium. Pricing starts at $49.99 for the Standard and Advanced editions and $99.99 for the Premium version.
Persistent memory has the potential to return computing to the pre-DRAM era, but with vastly more powerful NVRAM servers and with external storage relegated to being a reference data store.
Or so says Steven Sicola, the storage sage whose 40-year career includes senior technical roles at Compaq, Seagate, X-IO, SanDisk, Western Digital and Formulus Black.
Server starvation NVRAM fix
In his presentation at the 2019 Flash Memory Summit this month, he argued that modern IT suffers from looming server starvation because data growth is accelerating and server processors and networks cannot keep up. By 2020 a tipping point will be reached, and server scaling will be inadequate in the face of data in the tens of zettabytes.
The memory-storage IO bottleneck is the fundamental flaw afflicting today's servers. How to overcome this? Sicola's answer is to re-engineer servers with non-volatile RAM (NVRAM) instead of DRAM.
Sicola presentation deck slide.
Such an NVRAM server would hold all its primary data in memory and share it with other NVRAM servers, speeding up distributed computing by orders of magnitude with no external storage getting in the way. External storage would be relegated to secondary, sequentially written and read reference/archive data.
Tipping point
Sicola reckons: “NVRAM based servers, accompanied by an NVRAM-optimised Operating System for transparent paradigm shift are just about here…this is a big tipping point.”
He thinks the flawed Von Neumann era could be about to end, with NVRAM servers: “We now see the re-birth of NVRAM with Intel’s Optane, WD’s MRAM, etc.”
NVRAM servers will be able to handle many more virtual machines and much larger working data sets, especially for Big Data, AI and all the other applications begging for more server power: databases, mail, VDI, financial software and, as Sicola says, you name it.
Memory-storage bottleneck
For Sicola, the big bottleneck now is between memory and storage, where there is an IO chokepoint. That’s because storage access is so much slower than DRAM access. Despite the use of SSDs, CPU cycles are still wasted waiting for storage IO to complete.
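To put rough, illustrative numbers on that gap (these are order-of-magnitude assumptions, not figures from Sicola's deck): a DRAM access takes on the order of 100 nanoseconds while an NVMe flash read takes on the order of 100 microseconds, so a core waiting on storage stalls for roughly a thousand times as many cycles. A minimal sketch of the arithmetic:

```python
# Rough arithmetic on the memory-vs-storage gap.
# Latencies are illustrative, order-of-magnitude assumptions,
# not figures taken from Sicola's FMS presentation.
DRAM_LATENCY_S = 100e-9   # ~100 ns DRAM access
NVME_LATENCY_S = 100e-6   # ~100 us NVMe flash read
CPU_CLOCK_HZ = 3e9        # a 3 GHz core

dram_cycles = DRAM_LATENCY_S * CPU_CLOCK_HZ   # ~300 cycles
nvme_cycles = NVME_LATENCY_S * CPU_CLOCK_HZ   # ~300,000 cycles

print(f"Cycles stalled on a DRAM access: {dram_cycles:,.0f}")
print(f"Cycles stalled on an NVMe read:  {nvme_cycles:,.0f}")
print(f"The storage wait is ~{nvme_cycles / dram_cycles:,.0f}x longer")
```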
A slide in his FMS 2019 presentation declared: "It has been the 'Holy Grail' for all software engineers and those making servers to have NVRAM instead of cache (DRAM), but have been living with a flawed Von Neumann architecture for almost 40 years!!"
This bottleneck has come about because the Von Neumann computing architecture, seen in the first mainframes, has been modified and become flawed.
The original mainframes did not have dynamic random access memory (DRAM), the volatile stuff. Instead they used NVRAM – remember ferrite cores? These machines were examples of Von Neumann architecture, which was first described in 1945. Minicomputers also used Von Neumann architecture.
Von Neumann architecture
In these systems a central processing unit accessed instructions and data from a memory unit. That memory was non-volatile, because dynamic (volatile) random-access memory (DRAM) wasn't commercially available until the 1970s. Instructions and data were loaded from an input peripheral unit, and results were sent to an output peripheral unit.
As systems developed, a bottleneck was identified between the CPU and the memory unit, because instructions and data were read into the CPU over a single data path. The later Harvard architecture uses separate instruction and data paths, enabling the system to operate faster.
A modified Harvard architecture machine combines the data and instruction memory units with a single address space, but has separate instruction and data pathways between it and discrete instruction and data caches. The caches are accessed by the CPU.
This architecture is used by x86, ARM and Power ISA processors. The memory units use DRAM technology, dating from the 70s, with x86 processors appearing from 1978 onwards. DRAM usage meant persistent storage was separate from memory, with memory functioning as a cache.
Since then CPUs have gained faster clock rates and more cores, developed a hierarchy of ever-faster caches, and sped up their internal data paths, with today's PCIe 3.0 making way for its successor, PCIe 4.0.
But not today, not yet
However NVRAM servers are not yet ready to go mainstream, according to Sicola. Today’s hybrid RRAM/flash NVDIMMs are too early as they are too expensive, do not have enough capacity and are over-complex with firmware in memory.
He thinks network bandwidth could rise markedly, with today’s switch-based networking changing over to memory-like hops with torus networks, such as those from Rockport Networks. Memory fabrics like Gen Z could also speed data transfer between NVRAM servers, with PCIe 4.0 accelerating internal traffic.
It’s a glorious future but will it come to pass? Will 40 years of x86 servers using DRAM give way to servers based around persistent memory? Some might think that’s a big ask.
Innodisk, the Taiwanese storage vendor, has built an industrial SSD with Microsoft's Azure Sphere providing software updates, device-level analytics, data security, and remote monitoring and control through the Azure cloud.
The InnoAGE SSD will be embedded on an edge device. Its firmware receives commands from Azure Sphere which in turn connects to customers’ Microsoft Azure Cloud deployments. The SSD can collect data and issue requests for management through the cloud.
Example application areas are manufacturing, surveillance, unmanned devices, vending machines and digital signage.
A Microsoft FAQ says: "You can connect to other clouds for app data while running Azure Sphere – or optimise efficiencies by using Azure Sphere alongside Visual Studio and Azure IoT."
This reminds Blocks & Files of HPE’s cloud-based InfoSight device monitoring facility, which is applied to storage arrays, servers and network switches. It seems entirely logical to deploy the same facility at storage device level, particularly in non-data centre, industrial site locations.
Azure Sphere
The InnoAGE SSD comes in 2.5-inch and M.2 form factors with a SATA interface and an Azure Sphere system chip affixed to it. This constitutes the Azure Sphere microcontroller unit (MCU), operating system and security service.
InnoAGE SSD with highlighted on-board Azure Sphere MCU chip.
MCU chips are built by certified Microsoft partners. Customers buy devices using Azure Sphere, and that price entitles them to device-lifetime access to the Azure Sphere Linux-based OS, updates to it, and the security service.
The 2.5-inch version has a 1TB maximum capacity while the M.2 variant holds up to 512GB. Both use Toshiba 64-layer 3D NAND with up to 3,000 write/erase cycles. Availability is slated for the end of September, and a development road map is planned by the end of the year.
There is as yet no information available from Innodisk about the device’s performance or pricing. Check out a brief InnoAGE datasheet to find out a little more.
Here are some storage news stories to round off the week.
Dell EMC storage comes closer to Cloudera
Dell EMC and Cloudera have strengthened their partnership whereby Dell EMC storage is used with Cloudera Big Data and analytics software.
Cloudera has a Quality Assurance Test Suite (QATS) process for certifying both the Hortonworks Data Platform (HDP) and Cloudera's CDH Hadoop distribution with hardware vendors.
To date, Dell EMC Isilon has been validated with CDH v5.14 and HDP v3.1. Over the next few months, Dell EMC will work with Cloudera to get Isilon certified through QATS as the primary HDFS (Hadoop Distributed File System) store for both CDH v6.3.1 and HDP v3.1. It also plans to get Dell ECS certified as the S3 object store for CDH and HDP.
Dell EMC also plans to launch new joint Hadoop Tiered Storage solutions that enable customers to use direct-attached storage (DAS) for hot data and shared HDFS storage for warm/cold data within the same logical Hadoop cluster. This delivers more performance and more economical scaling.
It is working with Cloudera to align the Isilon and ECS roadmaps with Cloudera’s strategy for Cloudera Data Platform (CDP), the new Hadoop distribution that combines the best of breed components from CDH and HDP.
Dell EMC will also offer phased migration services from CDH or HDP to CDP. These migration services will be launched as CDP becomes available for on-premises deployment.
Background details can be read in a Direct2DellEMC blog by John Shirley, VP of product management.
InfiniteIO chums up with Cloudian
InfiniteIO file access acceleration can now work its metadata magic to speed access to files stored on a Cloudian object storage system.
The jointly validated system combines Cloudian HyperStore object storage with InfiniteIO Hybrid Cloud Tiering, with no changes to users, applications or systems. HyperStore can include objects stored in the public cloud.
A combined InfiniteIO-Cloudian system helps ensure data is properly placed across primary, secondary storage and public cloud storage, potentially saving millions of dollars in primary and backup storage costs.
The companies said customers can migrate hundreds of petabytes of inactive data from on-premises NAS systems to the exabyte-scalable Cloudian object storage system, with no downtime or disruption to existing IT environments.
Customers can also attain highly available enterprise-class storage with the performance of all-flash NAS across all storage tiers.
Shorts
ATTO Technology announced that its Xstream CORE ET8200 network interface has been adopted by Spectra Logic as a component of its Spectra Swarm system, which adds Ethernet connectivity to Spectra LTO tape libraries.
Backblaze's B2 Copy File APIs have come out of beta and are now public and ready to use.
The UK’s University of Leicester has deployed Cloudian’s HyperStore object storage system as the foundation of a revamped backup platform.
Everspin Technologies, the developer and manufacturer of Magnetoresistive RAM (MRAM) persistent memory products, has announced an IP cross-licensing agreement with Seagate Technology. Everspin gets access to Seagate’s MRAM patents, while Seagate gets access to Tunneling Magnetoresistance (TMR) patents owned by Everspin.
The US DOD has bought a $12m, 6 petaflop IBM supercomputer housed in a mobile shipping container and using the Spectrum Scale parallel access file system. It’s designed to be deployed to the tactical edge.
Intel told an analyst briefing group about Optane DC Persistent Memory progress: "To date we are seeing good traction within the market. We currently have over 200 POCs in the works with customers." It commented: "As with any new disruptive technology, broad customer adoption takes time. The value is there." That remains to be seen.
Intel Optane DC Persistent Memory status.
Micron is the first memory company to begin mass production of 16Gbit DDR4 products using 1z nm process technology. This delivers substantially higher bit density, as well as significant performance enhancements and lower cost compared to the previous-generation 1y nm node.
Micron unveiled the industry’s highest-capacity monolithic 16Gbit low-power double data rate 4X (LPDDR4X) DRAM. It’s capable of delivering up to 16GB of low-power DRAM (LPDRAM) in a single smartphone.
Object Matrix announced that MatrixStore, its object storage product, is now integrated with VSN’s media management and workflow automation platform, VSNExplorer.
Data protection vendor Overland Tandberg, freshly spun out of Sphere 3D, has reported $67m in revenues, according to Black Enterprise.
Storage Architects, a Dutch consultancy specialising in digital data storage, has chosen Qumulo’s distributed file system to serve the needs of its enterprise clients.
Storage Made Easy has a new release of its Enterprise File Fabric product, featuring a Content Intelligence capability hooked into its search. This provides on-demand checksums, media info and AI/machine learning, with Google Vision support first and Amazon Rekognition and IBM Watson integrations to follow. The release also includes in-browser media playback from any storage with video scrubbing, plus thumbnails and previews for an extended range of image formats such as RAW.
SwiftStack has announced its new Technology Partner Program and the “Works with SwiftStack” designation, to provide customers with the confidence that validated integrations have passed testing to ensure compatibility with the company’s object-based cloud storage offering.
Processor developer Tachyum has joined the Peripheral Component Interconnect Special Interest Group (PCI-SIG), a 700+ member association committed to advancing non-proprietary PCI technology to yield a reliable, scalable solution for high-speed I/O in numerous market applications. It is working with PCIe 4.0 and 5.0.
People
Nancy Hurley has been appointed Consultant, Team Lead VxRail Product Marketing, at Dell EMC. She had been acting CMO at Bridgeplex.
Several execs have resigned from NetApp. Joel Reich, EVP Products and Operations, has retired from running NetApp’s Storage, Systems and Software business unit.
Brad Anderson has been promoted from running NetApp’s Cloud business to EVP and GM for NetApp’s Cloud Infrastructure and Storage, Systems, and Software business units.
Object storage software house MinIO has demonstrated its storage can run up to 93 per cent faster than a Hadoop system.
In its latest benchmarks, published this week, MinIO was faster than a Hadoop file system (HDFS) configuration. In the test set-up both systems were run in the Amazon public cloud. There was an initial data generation procedure and then three Hadoop process execution times were examined – Sort, Terasort and Wordcount – first using HDFS and then MinIO software.
MinIO was slower than Hadoop running the data generation process but faster with the Sort, Terasort and Wordcount tests. It was also faster overall, based on summing the time taken for the data generation and test runs.
In a blog post announcing the results, MinIO CMO Jonathan Symonds exclaimed: “Basically, modern, high performance object storage has the flexibility, scalability, price, APIs, and performance. With the exception of a few corner cases, object storage will win all of the on-prem workloads.”
In other words, “Goodbye, NAS and SAN.”
Here’s the table of test results:
The benchmarks are charted below:
MinIO’s best result was 93 per cent faster at the Sort run.
Adding the data generation times to the test run times, MinIO (3,700 secs) was faster overall than HDFS (4,337 secs). You can check out the MinIO HDFS benchmark details.
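As a sanity check on those totals, here is a minimal sketch of the overall comparison using only the two figures quoted above (the per-test 93 per cent number comes from MinIO's own results table, which is not reproduced here):

```python
# Overall wall-clock totals reported above (data generation plus the
# Sort, Terasort and Wordcount runs), in seconds.
hdfs_total_s = 4337
minio_total_s = 3700

time_saved_pct = (hdfs_total_s - minio_total_s) / hdfs_total_s * 100
speedup = hdfs_total_s / minio_total_s

print(f"MinIO finished in {time_saved_pct:.1f}% less time overall")  # ~14.7%
print(f"i.e. a {speedup:.2f}x overall speedup over HDFS")            # ~1.17x
```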
Hadoop systems rely on multiple compute + storage nodes, each handling a subset of the overall data set. It involves three copies of the raw data, for reliability, and large numbers of nodes as the data sets increase in size. This means hundreds of servers, potentially.
An object storage system is inherently more reliable at holding data than a Hadoop system and does not need to make three copies. The amount of compute resource to run an analysis can be tailored to the workload instead of being drawn from the HDFS nodes.
MinIO said analytics using its object storage software can typically run on fewer servers than an HDFS system, and needs less disk or SSD capacity to hold the data. This saves time and money.
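A rough capacity comparison illustrates the point. HDFS defaults to three full copies of every block, whereas MinIO protects data with erasure coding; the sketch below assumes an illustrative 12-data/4-parity erasure set, so the exact saving depends on the configuration chosen:

```python
# Raw capacity needed to hold 1 PB of usable data: HDFS triple replication
# versus an assumed 12+4 erasure-coding layout (illustrative, not a MinIO default).
usable_pb = 1.0

hdfs_replication_factor = 3                       # HDFS default block replication
raw_hdfs_pb = usable_pb * hdfs_replication_factor

data_shards, parity_shards = 12, 4                # assumed erasure-coding set
ec_overhead = (data_shards + parity_shards) / data_shards
raw_object_pb = usable_pb * ec_overhead

print(f"HDFS (3x replication):       {raw_hdfs_pb:.2f} PB raw")   # 3.00 PB
print(f"Erasure-coded object store:  {raw_object_pb:.2f} PB raw") # 1.33 PB
```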
MinIO’s software can run on-premises or in the public cloud.
Oracle is shuttering its flash storage division and laying off at least 300 employees, according to various sources.
Employees were told of the layoffs on Thursday, August 15 by Mike Workman, SVP of flash storage systems at Oracle, via a conference call.
An outgoing Oracle staffer who attended the conference call told Blocks & Files: "Today Larry's Band of Storage Misfits, aka Pillar Data, was quietly let go and the product discontinued. A small number of people were kept to take the product to the grave via sustaining mode."
He said: “The layoff estimate is approximately 300 people.”
The job cuts were also reported in two anonymous posts on Thelayoff.com yesterday. One poster wrote: “All but 12 people in Mike Workman’s org were laid off … essentially the entire Flash Storage division. 12 were retained. Over 400 jobs cut…
Oracle is focusing heavily on its public cloud business, with less emphasis for on-premises deployments of its software, and hence hardware. It now looks as if Oracle is stopping building its own flash storage hardware and software.
We asked Oracle for comment and a spokesperson’s response was: “As our cloud business grows, we will continually balance our resources and restructure our development group to help ensure we have the right people delivering the best cloud products to our customers around the world.”
Background
Mike Workman founded Pillar Data in 2001 with then Oracle CEO Larry Ellison. The company was acquired by Oracle in 2011 and subsequently became part of the Oracle Flash Storage division, based at Broomfield in Colorado. Oracle employed 2,000 people at Broomfield last year.
Oracle flash storage products included the FS1, a redesigned Pillar Axiom array.
They were integrated into the Oracle engineered systems, the components of which are architected, co-engineered with Oracle software, integrated, tested, and optimized to work together for better Oracle Database performance.
Engineered systems include the Exadata Database Machine, Big Data Appliance, Zero Data Loss Recovery Appliance, Private Cloud Appliance and Database Appliance.
The Exadata Database Machine X8 was announced in June 2019. Gurmeet Goindi, master product manager for Exadata at Oracle, blogged about the Exadata X8: "The X8 hardware release updates 2- and 8-socket database (compute) servers and intelligent storage servers with the latest Intel chips and significant improvements in storage. The X8 hardware release also adds a new lower-cost storage server that extends Exadata benefits to low-use data."
NetApp is working on running ONTAP, its storage array operating system, on IoT edge devices and other small things.
The game was given away by a tweet today from John Martin, NetApp’s APAC director of strategy and technology, and in a subsequent exchange with Simon Sharwood, the Oz journalist (and former Register APAC editor).
Here's the tweet stream:
Our take? We think NetApp engineers have installed ONTAP on an Arm-class or RISC-V class CPU system. That means it could run on devices with embedded ARM systems, such as smart NICs and assorted IoT edge devices. They could integrate with larger NetApp systems, on-premises or in the cloud or both.
Octavian Tanase, NetApp's SVP for ONTAP, commented on Twitter:
We are seeing computational SSDs – from NGD, for example – and we think these could also run ONTAP on their embedded processors. Whether it would be useful to have a single drive ONTAP system is… another question.
We have asked NetApp for comment and a spokesperson said: “We don’t have anything we’re announcing publicly at this time regarding this. It is not a solution that is officially in development, but should that change we will definitely have something for you.”
Four software upgrades from Rubrik today, including no-gap protection added to Rubrik Cloud Data Management, two new apps for its Polaris governance product, and a fresh lick of paint and an overhaul for Datos IO.
Rubrik Cloud Data Management (RCDM) gets continuous data protection (CDP) added in a v5.1 release. This supports vSphere virtual machines with CDP switched on and off by policy. It is certified as VMware Ready by VMware.
Polaris GPS is a cloud-resident SaaS data management tool intended to provide a unified system of record across apps and data for security, compliance, and governance. It scans on-premises, AWS and other cloud systems’ enterprise applications such as Cassandra, SAP, MySQL, Oracle and Rubrik’s RCDM data protection products, and builds up metadata about them.
Polaris’s new Sonar app provides a data classification service. This uses machine learning to look in RCDM data to find, classify and report on sensitive data, such as personally identifiable information (PII). A customer, the City of Sioux Falls, said it eliminated work done by third-party auditors and several software engineers, reducing time spent to complete hundreds of search queries from two weeks to one hour. People were freed up to do other work.
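Rubrik has not detailed how Sonar's models work. As a rough, hypothetical illustration of what "find and classify sensitive data" means in practice, the sketch below uses simple pattern matching; a production classifier such as Sonar layers machine learning and validation on top of this kind of scan:

```python
import re

# Hypothetical, simplified PII patterns - illustrative only, not Rubrik's models.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_text(text: str) -> dict:
    """Count occurrences of each sensitive-data category in a chunk of backup data."""
    return {name: len(pattern.findall(text)) for name, pattern in PII_PATTERNS.items()}

sample = "Contact jane.doe@example.com, SSN 123-45-6789, card 4111 1111 1111 1111"
print(classify_text(sample))  # {'email': 1, 'us_ssn': 1, 'card_number': 1}
```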
Polaris also gets AppFlows to orchestrate DR. It converts VM snapshots into Amazon Elastic Compute Cloud (Amazon EC2) instances on an Amazon Virtual Private Cloud (Amazon VPC), and can deliver RTOs of minutes.
A sharp slowdown in enterprise customers’ all-flash array purchases has sucker-punched NetApp, and though it is hiring more sales heads to fix this worry, things aren’t forecast to get better anytime soon.
NetApp warned investors two weeks ago that things had gone awry but the actual decline for its Q1 fiscal ’20 ended 26 July (PDF) was less bad than the company detailed: revenue dropped 15.6 per cent year-on-year to $1.236bn.
Product sales were where the pain was felt most, sliding 26.4 per cent to $644m, and hardware maintenance didn't help, dipping 7.5 per cent to $342m. Only software maintenance moved in the right direction, rising to $250m from $229m.
Two-thirds of the slide was attributed by senior management to macroeconomic concerns, such as the US-China trade dispute, and a third to sales execution issues.
CEO George Kurian said he was “clearly disappointed” in the top-line figure and claimed work to improve its gross margin and “cost structure” in recent years “enables us to navigate the ongoing macroeconomic headwinds”.
“We have further analysed the dynamics of what happened in the first quarter and they confirm that we are seeing a combination of slowdown related to overall macro conditions and company specific go-to-market execution issues,” said the CEO.
“We continue to see pressure on deal sizes, longer sales cycles and deferral of transaction… our underperformance is not across the board. Our APAC, Europe and US Public Sector geographies are mostly on track.”
He used the example of two major clients that are “exposed to the China tariff situation” that had slashed their capital spending by “north of 30 per cent year-on-year”.
The top global customer accounts and corporate business in the Americas proved to be NetApp’s soft underbelly, Kurian added, saying it needed to “expand our share of wallet” with some of the bigger businesses.
“We have deep relationships with too few of these customers, which increases our susceptibility to a slowdown in spending related to the macro.”
The all-flash array run rate slipped 24 per cent to $1.7bn.
In the earnings call, Kurian said:
“What we saw with our AFA business is that customers bought more of the mid-range configurations and bought the capacity that they needed for the next year, rather than rightsizing the equipment for a three-year outlook.”
The fix? Getting in front of more buyers by investing in “sales capacity without increasing total operating expenses, by continuing to make disciplined trade offs in our spending priorities”. NetApp saved money by moving 15 EMEA countries from direct NetApp coverage to being looked after by channel middlemen.
Some 200 sales people will be bolted onto the business with a specific target for the Americas, winning new accounts and cross-selling to existing punters. Thomas Stanley, the senior veep in charge of the Americas, has resigned.
The second part of the get-better proposal was to market the ass out of its all-flash gear in areas where customers are still spending.
“We expect that this, combined with additional sales capacity will return us to a position of growth in the all-flash market,” said the boss man.
NetApp has made its services available in Azure and is adding Google to the list – Kurian’s twin brother Thomas heads up Google’s Cloud biz. No surprises here, but NetApp forecast revenue acceleration in its Cloud Data Services in the public and private cloud.
Kurian said Cloud Data Services sales are “also helping us now with new footprint on-premises, as customers now say, listen, I discovered NetApp in the public cloud. I want to give them a bigger footprint on-prem.”
Expenses for the quarter were marginally up, but the sales hit fed into the profit drop, with net earnings down 63 per cent to $103m.
Sombre outlook
The macroeconomic situation behind most of the decline is not going to get better quickly, Kurian warned.
“With regard to our outlook for the rest of the year, we are not expecting a rapid resolution of either the macro or some of the uncertainty around trade… we’re not expecting some miraculous rebound in terms of the macro environment.”
The full 2020 outlook is for revenues to be down 5 to 10 per cent on the prior year.
Meanwhile, Cisco’s latest results also showed a marked slowdown in enterprise purchases, and it too called out the trade war with China as the dampening factor. “We’re being uninvited to bid. We’re not being allowed to even participate anymore,” Cisco CEO Chuck Robbins told analysts last night.
Wells Fargo senior analyst Aaron Rakers wrote: “With increased macro and geopolitical uncertainties (most notably seen in weakened enterprise results and very weak service provider trends), Cisco is guiding F1Q20 (Oct ’19) revenue to grow 0 to 2 per cent y/y.”
This supports NetApp's view that the macroeconomic situation will affect all suppliers of IT to large enterprises. We'll see if this is confirmed when Pure Storage and Dell Technologies report their quarterly results later this month.
Virtual Instruments has bought Metricly, the developer of a cloud management tool, for an undisclosed sum.
The company sells VirtualWisdom, an infrastructure performance monitoring application. With this acquisition, Virtual Instruments said it can deliver end-to-end application and infrastructure performance monitoring in hybrid multi-cloud environments.
VI integration
Metricly works to bring performance, capacity, and public cloud cost analysis together. It learns the behaviour and workload patterns of a customer’s environment to optimise cloud resource utilisation, reduce cloud spending, and identify performance anomalies.
VI will integrate Metricly with VirtualWisdom. This will add more than 50 integrations for open source DevOps technologies covering databases, messaging platforms, microservices, and containers. There is support for cloud infrastructure services such as AWS Lambda, EC2, ECS, ASG, EMR, Microsoft Azure VMs, and Load Balancer.
The Metricly acquisition will make VI a stronger player. Where next? Competitor Dynatrace completed a $554m IPO earlier this month. Shares kicked off at $16 and are now trading at $23.60. Cue IPO thoughts in the VI boardroom?
Metricly metrics
Metricly’s roots lie in a company called Netuitive, founded in 2002 by CEO Bob Farzani as a predictive analytics company using machine learning and AI.
Metricly was set up around the rebranded Netuitive business in July 2017 by Farzani and colleagues. It has raised $11m in three tranches, including a $9m A-round in August 2018.
It has almost 100 customers and claims 100 per cent growth in recurring subscription revenue in 2018.
Toshiba and Samsung are pushing the idea of SSDs directly accessed over Ethernet as a way of simplifying the storage access stack.
This idea of directly accessing storage drives across Ethernet first surfaced with Seagate and its Kinetic disk drives. Kinetic drives implement a key:value store instead of the traditional file, block or object storage mediated through a controller manipulating the drive’s data blocks.
Samsung supports the key:value drive store idea but Toshiba opposes it.
Kinetic disk drives
Seagate was a prominent champion of Kinetic drives but the technology appears to have fallen by the wayside in 2015 or 2016.
Kinetic disk drives had an on-board NIC plus a small processor implementing a key:value store. The drives were directly accessed over Ethernet, with the host server operating the drive as a key:value store – as opposed to a block or file storage device.
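Conceptually, the difference lies in the interface the drive exposes. A block device presents numbered sectors that host software must map files or objects onto, while a Kinetic-style drive speaks a key:value protocol directly over Ethernet. The sketch below is a hypothetical, simplified Python illustration of that contrast, not Seagate's actual Kinetic API:

```python
from dataclasses import dataclass, field

@dataclass
class KeyValueDrive:
    """Hypothetical stand-in for a key:value drive: the firmware decides placement."""
    store: dict = field(default_factory=dict)

    def put(self, key: bytes, value: bytes) -> None:
        self.store[key] = value

    def get(self, key: bytes) -> bytes:
        return self.store[key]

    def delete(self, key: bytes) -> None:
        del self.store[key]

@dataclass
class BlockDrive:
    """Conventional block device: the host must track which LBAs hold which data."""
    sectors: dict = field(default_factory=dict)

    def write(self, lba: int, data: bytes) -> None:
        self.sectors[lba] = data

    def read(self, lba: int) -> bytes:
        return self.sectors[lba]

kv = KeyValueDrive()
kv.put(b"photos/cat-001.jpg", b"...jpeg bytes...")
print(kv.get(b"photos/cat-001.jpg"))
```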
Writing host software to handle the Kinetic drives involved complexity, and there was no significant benefit compared with existing ways of accessing disk drives. The upshot was that customers had little appetite for Kinetic drives.
Direct Ethernet access SSDs
Toshiba proposed an Ethernet-addressed SSD at the 2018 Flash Memory Summit (FMS), with a drive supporting the NVMe over Fabrics (NVMe-oF) protocol.
NVMe-oF uses the NVMe protocol across an Ethernet or Fibre Channel network to move data to and from a storage target, addressed at drive level. Data is pumped back and forth at remote direct memory access (RDMA) speeds, meaning a few microseconds of latency.
Typically, a smart NIC intercepts the NVMe-oF data packets, analyses them and passes them on to a drive using the NVMe protocol.
At FMS 2018, Toshiba put 24 of its SSDs inside an Aupera JBOF (Just a Bunch of Flash drives) chassis. They were interfaced to a server host via a Marvell 88SN2400 NVMe-oF SSD converter controller, with dual-port 25Gbit/s Ethernet connectivity.
The chassis achieved 16 million 4K random read IOPS from its 24 drives – claimed at the time to be the fastest random read IOPS rate recorded by an all-flash array. Each drive was rated at 666,666 IOPS.
Samsung KV-SSD
In November 2018 Samsung revealed it was working on a similar Ethernet-addressed drive, the Z-SSD. The underlying device was a PM983 SSD with an NVMe connection. Unlike Toshiba's NVMe-oF SSD, it had an on-board key:value store, making it a KV-SSD.
Samsung said it would eliminate block storage inefficiency and reduce latency.
Toshiba’s NVMe-oF all-flash JBOF
At FMS 2019 Toshiba has gone one step further, giving its SSD a direct NVMe-oF connection. The demonstration SSD uses Toshiba’s 96-layer 3D NAND and has an on-board Marvell 88SS5000 converter controller. This has 8 NAND channels and up to 8GB DRAM, and can talk to Marvell data centre Ethernet switches and so link to servers.
Toshiba said the 88SS5000-SSD combination delivers up to 3GB/sec of throughput and up to 650k random IOPS. This is a tad slower than the FMS 2018 system’s SSDs.
Marvell partners
In an FMS 2019 press release, Marvell said the idea of a direct-to-Ethernet SSD is "being advanced by multiple storage end users, server and storage system OEMs and SSD makers." That makes Toshiba the first to go public with what Marvell says is a market-ready product.
Marvell cited Alvaro Toledo, VP for SSD marketing at Toshiba Memory America, who talked of an SSD demonstration – with no commitment to launch product yet. Another Toshiba exec, Hiroo Ohta, technology executive at Toshiba Memory Corporation, was quoted: "The combination of our products will help illustrate the significant value proposition of NVMe-oF Ethernet SSDs for data centres."
Blocks & Files thinks Toshiba and Marvell could usefully demonstrate some big-name software product running faster and cheaper on their direct-to-Ethernet SSDs.
It remains to be seen if Samsung will pick up the Marvell 88SS5000 converter controller for its KV-SSD. It will have a tougher job marketing the KV-SSD than Toshiba with its NVMe-oF SSD, because the key:value store idea adds another dimension to the sale for customers to take on board.
The composable systems connection
Western Digital, Toshiba Memory Corp’s flash foundry joint-venture partner, has a flash JBOF product: the Ultrastar Serv24-HA.
The obvious next step for Western Digital is to bring out its own direct-to-Ethernet SSD for the Serv24-HA chassis, and also to use it in the OpenFlex system.
Toshiba supports the DriveScale Composable Infrastructure, and another obvious possibility is for DriveScale to support Toshiba’s direct-to-Ethernet SSD.
Clumio, a data protection as a service startup, came out of stealth today. It is touting a cloud-native way of simplifying disaster recovery and contrasts this approach with on-premises rivals and their legacy baggage.
“It’s a huge market we are disrupting. This hasn’t been done before. It’s a hard problem to solve,” CEO Poojan Kumar told us in a telephone briefing.
Clumio was set up in 2017 and has raised $51m to date in two funding rounds. The company installed v1.0 product in the first customer sites in May 2019.
The founders are three veterans of PernixData, the developer of a hypervisor memory-based caching scheme, which Nutanix bought in August 2016. They are CEO Poojan Kumar, CTO Woon Ho Jung and engineering VP Kaustubh Patil.
Clumio co-founders. From left; Engineering VP Kaustubh Patil, CEO Poojan Kumar and CTO Woon Ho Jung.
May the SaaS force be with you
The attraction of DPaaS, Clumio says, stems from the complexity of on-premises backup, which involves backup servers, software and storage products, along with replication and secondary backup storage for disaster recovery, plus coverage of on-premises and public cloud-based servers. These can include bare metal, virtualised, hyperconverged and containerised servers.
Sweep it all away and run a single DPaaS service that covers all the bases with central management, removing the need to provision, operate or manage your own hardware and software infrastructure.
Clumio said its service scales on demand, has predictable costs, is simpler to manage than the on-premises muddle and has policies set for security and compliance.
Cloud Data Fabric
Clumio is based on a Cloud Data Fabric hosted on Amazon S3 object storage, and backs up AWS and on-premises VMware virtual machines. No doubt it will extend coverage to Azure and Google, possibly Oracle too, and server environments beyond VMware, such as KVM.
Customers connect to clumio.com to activate the service. Payment is based on the number of protected virtual machines. Clumio says they can start backing up their first vSphere workload in less than 30 minutes. The customer deals with Clumio only.
An on-premises agent, running as a virtual appliance, selects, dedupes, compresses, and encrypts data before moving it up to AWS. The agent’s operation is controlled by an AWS-based scheduling policy. The Cloud Data Fabric holds the dedupe fingerprints and a data catalog in the AWS cloud. It also provides multi-tenant user and encryption key management.
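Clumio has not published how its agent and fingerprint catalog work internally. As a minimal sketch of fingerprint-based deduplication under assumed details (fixed-size chunks, SHA-256 fingerprints, a set standing in for the cloud-side catalog, compression before a final encryption step), it could look something like this:

```python
import hashlib
import zlib

CHUNK_SIZE = 4 * 1024 * 1024  # assumed 4 MiB fixed-size chunks - illustrative only

def backup_stream(data: bytes, catalog: set) -> list:
    """Split data into chunks, skip chunks whose fingerprint the catalog already
    holds, and return the compressed chunks that still need uploading."""
    to_upload = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint in catalog:
            continue                      # duplicate: only a reference is recorded
        catalog.add(fingerprint)
        to_upload.append((fingerprint, zlib.compress(chunk)))  # encryption would follow
    return to_upload

catalog = set()                           # stands in for the cloud-side fingerprint catalog
first = backup_stream(b"A" * 10_000_000, catalog)
second = backup_stream(b"A" * 10_000_000, catalog)  # unchanged data: nothing to upload
print(len(first), len(second))            # 2 0 (the two identical full chunks dedupe)
```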
Restoration is based on the customer running a Google-like search for backups, which looks at VM backup metadata. There is a calendar view and customers can select whole VMs or particular files, such as a financial spreadsheet, to restore.
Internally, Clumio uses just one tier of S3 storage. Shane Jackson told us: “We will add multiple tiers over time.”
Competition
Clumio sees scope for SaaS-ifying data protection (DPaaS), its initial focus, as well as copy data management, analytics and log management.
Competitors such as Acronis, Cohesity, Rubrik and Veeam are all based on on-premises software and aim to move into the cloud. Jackson said the true competition is the status quo, i.e. the current way of backing up data with on-premises software.
Blocks & Files asked Kumar if Clumio saw Druva, also based in AWS, as competition. He said: “It still fits in the previous (on-premises) category. The focus was endpoint protection. Druva is taking all that legacy and trying to pivot into this [the public cloud]”.
W. Curtis Preston, Druva’s Chief Technologist, said: “Clumio’s information is years old, and the product to which they refer is no longer available. Druva has been a cloud-native and DPaaS offering for several years. We protect datacenters running VMware, Hyper-V, Linux, Windows, SQL Server and Oracle; cloud workloads like AWS and VMC; SaaS offerings like Salesforce, Office 365, and G Suite – as well as protecting laptops and mobile devices. As to focus, most of our company’s growth in the last few years has come from datacentre, cloud, and SaaS workloads. We wish Clumio the best of luck in a space we pioneered.”
There are other cloud-based data protection suppliers, such as Carbonite. Our take is that Clumio will focus on enterprises more than the small and medium business market where Carbonite operates.
These PernixData veterans have form. That and a $51m war chest could enable Clumio to hit the ground running. Let’s see how far and how fast they go.