
A dash of storage confetti to spray over Sunday and Monday

Here’s another great big batch of storage news items, two dozen or so of them, to give you a flavour of what’s going on apart from the headline items.

Delphix and 451 DataOps report

A 451 DataOps report found;

  • 13 per cent of enterprises report data growth in excess of 2TB per day. 
  • 47 per cent say it takes 4-5 days to provision a new data environment. 
  • 66 per cent cite compliance and improved security as the number one business benefit of DataOps

The report was commissioned by Delphix, a DataOps evangelist, and we thought this could be a pay-for-play job but, we’re told, 451 was exploring DataOps adoption before Delphix got involved, starting the research a year ago and concluding this January.

Researcher Matt Aslett was, we’re also told, genuinely surprised by an incredibly strong response from the surveyed users.

Get the report here (registration required.)

Fujitsu’s SAP-happy HCI

Fujitsu says its PRIMEFLEX for VMware vSAN hyper-converged system, optimised for SAP applications, uses a PRIMERGY RX4770 M4 server. This is a 2U 4-socket system with Xeon SP Silver, Gold or Platinum processors (up to 28 cores each) and up to 6,144 GB of 2,666MHz DDR4 memory (48 DIMM slots). There are 8x PCIe Gen 3 slots, and up to 16x 2.5-inch HDD/SSD + 1x ODD or up to 12x PCIe 2.5-inch SFF SSD.

The vSAN storage for SAP HANA is based on NVMe disks for cache and SATA SSDs for data.

The HCI software technology stack comes from VMware. All software currently supported by VMware vSAN can also be used with PRIMEFLEX for VMware vSAN. Network virtualisation can optionally be integrated with VMware NSX.

Quobyte’s HPC storage gets into African pest control

HPC storage software producer Quobyte has teamed up with the Centre for Agriculture and Bioscience International (CABI) to help African farms fight pest infestations better.

The Quobyte software runs on the UK’s JASMIN supercomputer and contributes to the PRISE (Pest Risk Information Service) project via a big data analytics process which helps small farms in Ghana, Kenya, and Zambia.

JASMIN

CABI is an international nonprofit that uses science and technology to solve some of the world’s problems in agriculture and the environment.

PRISE uses earth observation technology, satellite positioning, weather, and pest lifecycle information to forecast the risk of pest outbreaks and so enable farmers to take preventative steps to reduce crop losses and ensure better yields. The farmers can, for example, identify a disease, spray a pesticide, adjust irrigation, and so forth.

The project is able to issue forecasts a week or more in advance, and will roll out in more countries early next year. There are plans to bolster the service by collecting data from additional countries, including crowdsourced data from the fields, supplied by farmers, to increase output and improve prediction modelling accuracy.

Contributors to PRISE include King’s College London, UK Space Agency, Swiss Agency for Development and Cooperation, Australian Centre for International Agricultural Research, and the UK Science and Technology Facilities Council (STFC) Centre for Environmental Data Analysis. 

Shorts

Atto Technology will exhibit its XstreamCORE 8200 at the HPA Tech Retreat, Feb. 11-15, Palm Desert, CA. It is a hardware-accelerated protocol bridge with storage controller features that allows multiple workstations to share a pool of SAS storage over an iSCSI or iSER network. The product features two 40Gb Ethernet input connectors and four x4 12Gbit/s Mini-SAS HD connectors. It achieves up to 1.2 million 4K IOPS and 6.4GB/sec throughput per controller with only two microseconds of added latency.

Atto says that, with XstreamCORE 8200 you can easily configure and manage pooled block storage using an interface less complex than traditional server interfaces. The pooled storage can be assembled using new or re-purposed compatible storage hardware.

A Backblaze blog discusses how hot and cold in data storage describe the availability of the stored data and how often it’s typically accessed. It takes a look at the differences between hot and cold storage, the advent of hot cloud storage, and which temperature is right for your data storage strategy.

Caveonix announced a collaborative relationship with Dell EMC to offer a hardened risk and compliance management solution for services providers based on its RiskForesight product. It’s designed to enable service providers to offer continuous cyber and compliance risk management for customers in hybrid and multi-cloud deployments on dedicated or multi-tenant configurations.

Analytics startup Databricks banks big bucks. It’s raised a massive $250m in an E-round, taking its total raised to $498.5m and its valuation to $2.75bn. Not bad for a company founded in 2014. Multi-round investor Andreessen Horowitz led the round, with existing investors New Enterprise Associates, Battery Ventures and Green Bay Ventures pumping cash in, along with Microsoft and the Coatue Management hedge fund.

DDN (DataDirect Networks) says HPC storage provider RAID Inc. is a preferred reseller of DDN’s distribution of the Lustre scalable parallel file system software.

Dell EMC will ship a ruggedised version of Azure Stack starting this quarter. It includes server, storage, networking and Azure Stack, which should be suited for edge use cases in the military, energy and mining industries.

Cloud scale-out filesystem SW startup Elastifile now has support for dynamic provisioning of container volumes via a new software driver compliant with the Container Storage Interface (CSI) specification. The CSI driver integrates the Elastifile Cloud Filesystem’s NFS capability with Kubernetes and other container orchestration products.  

The FCIA (Fibre Channel Industry Association) has a FICON webcast on Feb. 20, 10am PT / 1pm ET. It says FICON (Fibre Channel Connection) is an upper-level protocol supported by mainframe servers and attached enterprise-class storage controllers that utilise Fibre Channel as the underlying transport. The webcast will describe some of the key characteristics of the mainframe and how FICON satisfies the demands placed on mainframes for reliable and efficient access to data. FCIA experts will give a brief introduction to the layers of architecture (system/device and link) that the FICON protocol bridges.

MapR Ecosystem Pack (MEP) 6.1 gives developers and data scientists new language support for the MapR document database and support for the Container Storage Interface (CSI). The CSI support provides persistent storage for compute running in Kubernetes-managed containers. MapR says that, unlike the Kubernetes Flexvol-based volume plugin, storage is no longer tightly coupled to or dependent on Kubernetes releases. An Apache Kafka Schema Registry allows the structure of streams data to be formally defined and stored, letting data consumers better understand data producers. More info here.

Pavilion Data Systems, an NVMe-oF storage technology business, has joined the Storage Networking Industry Association (SNIA) to help create standards around NVMe-oF for data management and security, adding to its ongoing efforts as a contributor to NVM Express.

Quest has joined the Veeam Alliance Partner Program. Its Quest QoreStor software-defined secondary storage with deduplication has been verified as a Veeam Ready Repository.  With it, Quest says, customers can accelerate their Veeam backups by more than 300 per cent, and reduce backup storage requirements by more than 95 per cent.

Rambus announced the tapeout of its GDDR6 PHY on TSMC 7nm FinFET process technology. This provides the industry’s highest speed of up to 16 Gbit/s, providing a maximum bandwidth of up to 512 Gbit/s. GDDR6 is applicable, it says, to a broad range of high-performance applications including networking, data centre, advanced driver assistance systems (ADAS), machine learning and artificial intelligence (AI.) It’s available from Rambus for licensing. 

MRAM developer Spin Memory, previously known as Spin Transfer Technologies, says an additional investor, Abies Ventures of Tokyo, Japan, has joined its Series B funding round. It joins existing investors Applied Ventures LLC, Arm, Allied Minds, Woodford Investment Management and Invesco Asset Management. Spin Memory intends its MRAM to replace embedded SRAM and a range of other non-volatile memories.

SwiftStack has signed Carahsoft as a reseller, with Carahsoft acting as SwiftStack’s Federal Distributor and Master Aggregator, making the company’s industry-leading multi-cloud storage products available to the public sector and Carahsoft’s reseller partners via the company’s NASA Solutions for Enterprise-Wide Procurement (SEWP) V Contract. SwiftStack’s storage products are available immediately via Carahsoft’s SEWP V contracts NNG15SC03B and NNG15SC27B.

Container system monitoring startup Sysdig says that, in 2018, it more than tripled the number of Fortune 500 customer deployments, with over 40 per cent of deployments using the Sysdig platform for both security and visibility use cases – up from 5 per cent this time last year. It nearly doubled its global head count and grew to over 30 locations world-wide. Downloads of Falco, the company’s open source container runtime security offering which is now a Cloud Native Computing Foundation Sandbox project, grew six times over in 2018.

Veritas says its top backup products, NetBackup and Backup Exec, have attained Amazon Web Services (AWS) Storage Competency status. Veritas is an AWS Partner Network (APN) Advanced Technology Partner offering solutions validated by the AWS Storage Competency. NetBackup and Backup Exec support multiple AWS storage classes, including Simple Storage Service (S3), S3 Standard-Infrequent Access (S3 Standard-IA), and Glacier.

Virtual Instruments, involved in hybrid infrastructure performance monitoring and management, says its VirtualWisdom Platform Appliance v5.7 has completed the Common Criteria certification process. The completion of the Common Criteria certification under the NIAP-approved collaborative Protection Profile for Network Devices (NDcPP) gives governments and end users confidence that the VirtualWisdom Platform Appliance has passed documentation and testing requirements.

Distributor Exertis Hammer has expanded its Western Digital offering to include rights to distribute IntelliFlash across EEA, South Africa and Israel. It can now distribute the entire Western Digital Data Centre Systems (DCS) range. Paul Silver, ex-VP EMEA for WD-acquired Tegile, and now Senior Director EMEA DCS Sales at WD says; “Exertis Hammer holds an established position … at the forefront of storage technology distribution.”

People

Gabriel Chapman

Gabriel Chapman has left NetApp and joined Gartner as a Senior Director Analyst on the Data Center and Cloud Operations team within the Gartner for Technical Professionals research group. He had been a Principal Architect – SolidFire Office of the CTO at NetApp, and a Senior Manager for Cloud Infrastructure. He spent time at Cisco and SimpliVity before that.

Davis Johnson has joined Cohesity as its VP for Federal Sales. He has Riverbed, Oracle and NetApp public sector experience in his 30-year CV. Cohesity’s public sector customer base grew by nearly 200 per cent in the second half of the last fiscal year.

Customers

ExaGrid (deduping backup to disk arrays) says it has provided its products to American Standard since 2009. This customer is a subsidiary of LIXIL, and has produced residential and commercial products for kitchen and bath for over 140 years. ExaGrid integrates with all of the most frequently used backup applications, including Veeam, which American Standard uses to back up its virtual environment, and Veritas NetBackup, which is used for the remaining physical servers.


Intel top semiconductor revenue dog again as Samsung slumps

Intel has regained the semiconductor revenue lead, re-overtaking Samsung.

Fourth quarter Intel revenues were $18.66bn while Samsung’s semiconductor revenues were KRW18.5 trillion, which converts to $16.686bn; below the Intel number.
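A quick back-of-the-envelope check on that conversion; the exchange rate below is not a quoted figure, just what the two reported numbers imply:

```python
# Implied KRW/USD rate from the two reported revenue figures.
krw_revenue = 18.5e12    # KRW 18.5 trillion
usd_revenue = 16.686e9   # $16.686bn
intel_revenue = 18.66e9  # $18.66bn

implied_rate = krw_revenue / usd_revenue
print(round(implied_rate))               # ~1,109 KRW per USD
print(usd_revenue < intel_revenue)       # True: Samsung trails Intel
```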

Objective Analysis’ Jim Handy says; “When Samsung’s semiconductor revenues rose above Intel’s that was big news.” 

He’s charted the trend in their comparable revenues to show Samsung passing Intel some six quarters ago and now Intel regaining the lead;

Samsung said its fourth quarter earnings were affected by a drop in demand for memory chips used in data centres and smartphones. 

It stated; “The Memory Business saw overall market demand for NAND and DRAM drop due to macroeconomic uncertainties and adjustments in inventory levels by customers including datacenter companies and smartphone makers. For NAND, overall demand was low, as major customers opted to hold back on orders in anticipation of further reduction in price.

“Amid the lackluster sale of smartphones, a trend toward high-density in mobile persisted, and the All-Flash-Array portion increased in the fourth quarter. In the case of DRAM, server demand declined due to inventory adjustments by datacenter companies. For mobile, while mobile set demand reduced due to weak sales of new smartphones by major customers, growing orders of high-density products over 6GB content partly eased the negative trend.”

Samsung thinks: “Demand for memory is seen gradually recovering from the second quarter.”

Handy says: “This all has to do with the price collapses first in NAND flash, and then in DRAM.  I see no reason to expect a price recovery for quite some time.  There’s an oversupply that has to be slowly worked off.”

He suggests that; “This is one of those times when it’s a lot more fun to watch  the semiconductor market than to participate in it!”

Who are the Blocks & Filers?

Most Blocks & Files readers live in the USA and half read content on cell phones.

Three months after the site formally opened an analysis of its readership shows 45 per cent used a desktop or notebook, 5 per cent a tablet, and 50 per cent a mobile device. That surprised us – and made us resolve to be more careful in showing diagrams with fine detail.

The geographical spread of our readers is;

  • 65.0 per cent USA
  • 13.3 per cent UK
  • 4.9 per cent Canada (c70 per cent North America)
  • 3.8 per cent India
  • 2.8 per cent Germany
  • 2.6 per cent Israel
  • 2.3 per cent Australia
  • 2.3 per cent Japan
  • 1.8 per cent Singapore
  • 1.6 per cent France
  • 1.9 per cent somewhere else

A pie chart shows North American dominance;

In the last thirty days 64 per cent of our readers were new and 36 per cent were existing ones.

The most popular articles in the last month have been;

Hot topics are filers in the cloud, 3D XPoint, SSDs vs disk drives, NVMe over Fabrics, Nutanix and IBM.

Thank you readers one and all. We’re most grateful.


Attention! Qumulo adds Atempo file migration data mover

Here’s something to move you; scale-out file storage supplier Qumulo has launched a data migration service.

Qumulo says file estate migration can be labour-intensive and can require software and hardware rarely used in daily business operations. It is better to use specific tools in a migration project and get the job done faster and better. That’s what the Data Migration Service (DMS) is and does.

Atempo’s Miria data protection technology is used in the migration process. It was previously known as ADA (Atempo Digital Archive) and includes file data migration capabilities, which can migrate millions of files to an archive or from one NAS system to another. Enter Qumulo and its need to help customers migrate files from existing NAS products to the Qumulo one.

DMS includes project management, execution, Miria software and servers.

Qumulo manages the entire data migration process from planning, to execution and validation. DMS maintains all permissions, directory structures, and application integrations seamlessly after the data has been moved. 

Once the data set size, network capacity and existing storage system performance capabilities are documented, the migration can be scheduled to meet migration timeline requirements. User authentication, and all rights management policies remain in-place during the migration.

Application access to files continues unhindered throughout. Qumulo says the impact on the customer’s IT system is lowered through an adaptive read/write mechanism in the software which balances availability and throughput.

Herve Collard, VP of marketing at Atempo, gave out a canned quote; “Atempo’s Miria technology stack enables organisations to securely migrate large volumes of files while providing the best performance in moving data from legacy systems to Qumulo’s file storage, be this on-premises or in the cloud.”

Sounds good. Quite moving.

Virtuozzo Storage is “like Ceph, only faster”

If you want to have file, block and object storage in one product then Ceph does just that. And so does Virtuozzo Storage – but five times faster, it says.

Who and what is Virtuozzo?

History

Virtuozzo was an early server virtualisation technology, released by SWsoft in 2002. SWsoft itself was founded in 1997 as a privately held server automation and virtualisation company. It bought Parallels in 2007, so gaining virtualisation technology for the Mac. Parallels had a strong brand image and SWsoft changed its name to Parallels in 2008.

Times changed and, in 2015, Parallels sold its acquired Odin service provider business to Ingram Micro; it spun out Virtuozzo in 2016, and was itself bought by Corel in 2018.

Timeline

  • 1997 – SWsoft founded as privately-held server automation and virtualisation company,
  • 2000 – Virtuozzo invents OpenVZ operating system level virtualisation technology to produce virtual private servers in containers (not Docker-style application containers)
  • 2002 – Virtuozzo (System) Containers v2.0 released providing server virtualisation,
  • 2003 – SWsoft buys Confixx and Plesk web hosting products,
  • 2007 – SWsoft buys Parallels and its virtualisation software for the Mac,
  • 2008 – SWsoft changes its name to Parallels.
  • 2015
    • In February Parallels buys 2X Software and rebrands its service provider business as Odin,
    • In December Parallels sells Odin to Ingram Micro,
  • 2016
    • Parallels spins out Virtuozzo
    • George Karadis becomes Virtuozzo CEO
    • Parallels spins out Plesk,
  • 2018 – Corel buys Parallels,
  • 2018 – Alex Fine becomes Virtuozzo CEO having joined the company in November.
Virtuozzo CEO Alex Fine

Today Virtuozzo, the sole remnant of SWsoft, is owned by a group including Serguei Beloussov, which also owns Acronis. This data protection vendor recently introduced a hyperconverged infrastructure (HCI) appliance.

Although Acronis and Virtuozzo are separate companies there is some ongoing collaboration. For example, both companies have hyperconverged infrastructure products, and Blocks & Files thinks it likely that collaboration was involved here.

Virtuozzo has around 120 employees, and is headquartered in Switzerland, with development centres in Russia and the UK. There are some 500 customers running its VPS software on tens of thousands of physical servers, split between service providers and enterprises, and it is profitable, with revenues in the low 8-digit dollars/year.

Virtuozzo business and technology

Virtuozzo’s OpenVZ operating system containerisation technology takes a server running a Linux OS and has the kernel present multiple virtual private servers (VPSes), so that multiple virtual servers can run on one physical server. This has less overhead than hypervisors, such as vSphere and KVM, and means more virtual servers can run on the physical server base.

However, only the Linux OS can run in the VPSes, not any other OS, making it somewhat limited.

The company has since developed a hypervisor-based server virtualisation product based on KVM and running virtual machines.

Current Products

Virtuozzo 7 is a CentOS 7, KVM hypervisor-based server virtualisation product. The company says it has added some 200 enhancements, including a full set of Hyper-V enlightenments. These enhancements make it faster than the base KVM;

Interestingly it includes a backup facility; shades of Acronis collaboration?

Intel’s Optane 3D XPoint DC P4800X SSD technology makes Virtuozzo 7 go faster. Testing found up to 150 per cent better random read and seven times better random write performance with the P4800X, compared to previous-generation Intel SSDs (350GB Intel SSD DC P3700s).

Grab a Virtuozzo 7 data sheet here.

Virtuozzo Storage, which has been in production since 2014, is a multi-protocol product, offering iSCSI block, NFS file and S3 object storage. It can provide storage for Docker-style containers as well. 

The scale-out software runs on x86 servers and supports tiered storage using SSDs and fast and slow disk drives. Virtuozzo says it is faster than Ceph, up to five times faster at sequential writes;

Around a third of Virtuozzo’s total customers use this storage product and its footprint is measured in petabytes. It is often used in conjunction with Virtuozzo 7.

Get a datasheet here.

The Virtuozzo Infrastructure Platform is a hyperconverged infrastructure (HCI) product using x86 server hardware, Virtuozzo VM-based virtualisation, the software-defined storage, virtual networking and a common management facility.

The intended market is service providers offering an end-to-end private cloud, a virtual private cloud. The product is now being deployed, with one or two customers using it in production.

Multi-tenancy and self-service are coming in the next couple of quarters.

A datasheet can be obtained here (registration needed.)

Comment

What we have here is a supplier with hypervisor-based server virtualisation and multi-protocol, software-defined, scale-out storage. These are combined in an HCI product, with Virtuozzo thinking the whole data centre storage area is moving in the direction of HCI.

Blocks & Files thinks Virtuozzo will rapidly develop its HCI technology and also move to add back-end public cloud tiering to its storage. Alex Fine, the CEO, defines himself as the Chief Energy Officer; so he’s a man in a hurry.


Feeble hybrid cloud capabilities for Dell and IBM in Gartner object storage report

Lack of hybrid cloud object storage capability sends Dell EMC, IBM, Red Hat and SUSE down to the bottom ranks in Gartner’s Critical Capabilities for Object Storage report.

The report evaluates object storage suppliers’ capabilities in five use cases; analytics, archiving, backup, cloud storage and hybrid cloud storage. It is based on an evaluation of suppliers’ strengths in capacity, storage efficiency, interoperability and five other attributes. Each of these attributes is given a weighting for each use case, and the vendor scores are calculated using the attribute scores modified by the use-case weightings.
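The mechanics can be sketched as a weighted sum. The attribute scores and weights below are entirely made up for illustration; they are not Gartner’s actual figures:

```python
# Hypothetical illustration of critical-capabilities scoring:
# each use case weights the same attribute scores differently.
attribute_scores = {          # example vendor ratings, 1-5 scale (made up)
    "capacity": 4.0,
    "storage efficiency": 3.5,
    "interoperability": 4.5,
}
use_case_weights = {          # per-use-case weights summing to 1 (made up)
    "archiving":    {"capacity": 0.5, "storage efficiency": 0.3, "interoperability": 0.2},
    "hybrid cloud": {"capacity": 0.2, "storage efficiency": 0.2, "interoperability": 0.6},
}

def use_case_score(scores, weights):
    """Weighted sum of attribute scores for one use case."""
    return sum(scores[attr] * w for attr, w in weights.items())

for use_case, weights in use_case_weights.items():
    print(use_case, round(use_case_score(attribute_scores, weights), 2))
```

A vendor weak on interoperability, for instance, would score noticeably lower in the hybrid cloud use case under these weights than in archiving.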

Generally the supplier rankings are fairly stable and consistent, except in the hybrid cloud use case, which bridges on-premises and public clouds. Here Dell EMC ECS, IBM COS, Red Hat Ceph and SUSE Enterprise Storage are all given a ‘not applicable’ status, while Scality RING gets the top score. Scality is second in all the other use cases.

The report can be downloaded from Scality’s website (registration required.)

We summed all the scores in all the use cases by vendor to get an overall sense of how they compared, and charted them:

You can see how Dell EMC, IBM, Red Hat, and SUSE are all penalised by their inferior hybrid cloud ranking. DDN is bottom, but that is because it is relatively poor in all use cases, including hybrid cloud.

If we rank the vendors with the hybrid cloud scores removed then the chart is different and shows IBM is the top-ranked vendor, with Dell EMC in fourth place.

Huawei, Caringo and DDN’s object storage products get a low ranking.

The Gartner report writers’ key findings are:

  • Enterprise endeavors to port to object APIs for on-premises applications continue to happen slowly; as a result, most customers continue to require file protocols.
  • Most object storage products have lackluster Network File System and Server Message Block designs, and should come with cautionary warnings for enterprises considering them.
  • Late-adopter enterprises are beginning to evaluate on-premises object storage products to manage unstructured data growth; however, they don’t always understand the optimal use cases and challenges that arise when using object storage for general-purpose, unstructured data.
  • The best use case for object storage is one in which an API-driven application requires a large repository for unstructured data that would be inefficient if stored using traditional file systems.
  • The worst use case for object storage is one in which an end user consumes object storage through a file gateway for personal and home directories, due to inherent incompatibilities between the file gateway and the object storage platform.

Their recommendations are:

  • Evaluate distributed file systems, rather than object storage products, if the primary use is file-based. If cost is the primary driver, verify that there aren’t similar savings from existing file-based storage; cost savings from object storage typically show up in large-scale deployments.
  • Deploy object storage products for applications that require large repositories for unstructured data in which they control the API integration.
  • Insist on speaking to object storage vendors’ reference customers, particularly for multisite deployments, because reliability is a significant challenge for vendors with less-mature support for such deployments.
  • Evaluate products from vendors that offer integrated backup appliances, rather than selecting their own object storage solution behind third-party backup software.

This Gartner Critical Capabilities report is worth reading by anyone thinking about deploying object storage.


Arcserve goes head-on against Commvault, Rubrik, Unitrends and Veritas

Watch out Commvault, Rubrik, Unitrends and Veritas; Arcserve is coming right at you.

It has updated its UDP appliance with 9000 series models that integrate backup, disaster recovery and backend cloud storage, enabling it to compete more strongly with other unified data protection vendors.

They replace the previous, second generation UDP 8000 series, which were purpose-built backup appliances similar to those of Data Domain.

Arcserve claims this is the market’s first appliance purpose-built for DR and backup. Compared to the UDP 8000s this all-in-one option for onsite and offsite backup and DR boasts:

  • Cloud services that enable companies to spin up copies of physical and virtual systems directly on the appliance, and in private or public clouds
  • Twice the effective capacity of previous models (in-field expansion up to 504TB of data per appliance and up to 6PB of managed backups through a single interface)
  • A new hardware vendor that enables Arcserve to deliver onsite hardware support in as fast as four hours, and high redundancy with dual CPUs, SSDs, power supplies, HDDs and RAM.

Who is the HW supplier? Arcserve says it cannot say but it is the #1 hardware vendor in the world and is U.S. based. [Who said Dell?]

Arcserve schematically shows the 9000 appliance’s deployment;

The 9000 series features;

  • Up to
    • 20 x86 cores,
    • 768GB DDR4-2400MHz RAM
    • 504TB effective capacity
    • 6PB backups managed
  • 20:1 dedupe ratio with global dedupe
  • SAS disk drives and SSDs
  • 12Gbit/s RAID cards with 2GB non-volatile cache
  • Expansion kits to bulk up base capacity up to 4x
  • High-availability add-on
  • Cloud DRaaS add-on
  • Real-time replication with failover and failback
  • Pump data offsite to tape libraries

There are 11 models; 

Arcserve 9012, 9024 and 9048 appliances deliver up to 20 TB/hour throughput based on global source-side deduplication with a 98 per cent deduplication ratio. The 9072DR, 9096DR, 9144DR, 9192DR, 9240DR, 9288DR, 9360DR and 9504DR deliver up to 76 TB/hour throughput based on global source-side deduplication with a 98 per cent deduplication ratio. 
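Deduplication figures are quoted both as a ratio (20:1) and as a percentage reduction (98 per cent); converting between the two is simple arithmetic. The effective-capacity figures below come from the announcement, but the raw-capacity back-calculation is ours, not Arcserve’s:

```python
# Converting between a dedupe ratio and a percentage reduction,
# and back-calculating raw capacity from an effective-capacity claim.
def ratio_to_reduction(ratio):
    """A 20:1 dedupe ratio eliminates 95% of the data."""
    return 1 - 1 / ratio

def reduction_to_ratio(reduction):
    """A 98% reduction corresponds to a 50:1 ratio."""
    return 1 / (1 - reduction)

print(round(ratio_to_reduction(20), 2))    # 0.95
print(round(reduction_to_ratio(0.98)))     # 50

# Arcserve's claimed 504TB effective capacity at its 20:1 dedupe
# ratio implies roughly 25TB of unique data actually stored.
print(round(504 / 20, 1))                  # 25.2 (TB)
```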

The appliances can protect VMware, Hyper-V, RHEV, KVM, Nutanix AHV, Citrix and Xen VMs with agentless and agent-based backup. They can back up and recover Office 365, UNIX, FreeBSD, AIX, HP-UX, Solaris, Oracle Database, SAP HANA and more.  

Supported back-end clouds include the Arcserve Cloud, AWS, Azure, Eucalyptus and Rackspace. 

A dozen appliances and 6PB of backups can be managed through one interface.

Arcserve says it can take as little as 15 minutes to install and configure the appliance, and it comes with 4-hours and next-business-day on-site support options.

 Competition

Arcserve says its biggest competitors with this announcement are Unitrends, Rubrik and Veritas [US] and Veritas, Commvault and Rubrik [outside of US]. Unitrends, Rubrik and Veritas use a different, less preferred hardware vendor outside of the U.S.

It makes the following competitive claims;

Unitrends

  • Unitrends disaster recovery is very limited and there are no expansion capabilities
  • On-appliance recovery is only supported for Windows, and only if backup is done with Windows-based imaging
  • You can’t back up VMware/Hyper-V VMs host-based and use on-appliance recovery
  • Unitrends Instant Recovery rehydrates the entire VM, which is why it’s very slow (at least 20 minutes per TB).
  • You must allocate 100 per cent of a Windows machine’s used storage for IR, removing it from backup storage.
  • No support for hardware snapshots – Arcserve supports NetApp, HPE 3Par and Nimble
  • No UEFI on-appliance

Veritas:

  • No on-appliance DR or HA – appliances do not include storage and require storage shelves, driving costs and complexity
  • Veritas does not claim dedupe ratios.
  • Very high cost of units, software, maintenance, expansion
  • Complex NetBackup software at the core, consuming IT pro time, lowering IT productivity

Rubrik:

  • The Rubrik r6410 model has a capacity of 80TB.
  • Its 400TB figure is based on an advertised dedupe efficiency of 5:1.
  • High list price, and targeted primarily at enterprise
  • No on-appliance DR or HA – separate infrastructure required, driving up cost and complexity
  • No 4-hour, or even NBD support commitment – best effort only

Commvault

  • The HyperScale cluster can scale infinitely; 262TB is for 3x Commvault HyperScale 3300 appliances with 8TB drives. Commvault does not claim deduplication ratios.
  • Requires a minimum of three appliances to operate.
  • Very high cost of units, software, maintenance, expansion
  • Complex Simpana software consumes IT pro time, lowering IT productivity
  • No on-appliance DR or HA – separate infrastructure required, driving up cost and complexity

Development plans

New focus areas for the next generation of Arcserve Appliances will centre on expansion enhancements. 

Arcserve will launch a new version of its UDP software featuring Nutanix and OneDrive support, as well as a next generation of its RHA product, which will include a host of new support for Linux HA to AWS, Azure, VMware and Hyper-V, Windows Server 2019, and more.

Arcserve Appliance customers will get a free upgrade to the new versions with all these features as part of their maintenance benefits. Currently general availability for these new products is targeted at late spring / early summer 2019.

Availability and pricing

All new trial and licensed customers of Arcserve UDP Appliances are using the new Arcserve 9000. New Arcserve UDP Appliance customers will be able to use the new series within the next month.

The new Arcserve Appliance series is available now worldwide through Arcserve Accelerate partners and direct.

The starting list price for the backup only (DR not included) Appliance is $11,995. The starting list price with DR included is $59,995.

WekaIO goes higher than Summit

Well, well; remember back in November when WekaIO, with its Matrix filesystem, took second place in the IO-500 10 Node Challenge, with the Summit supercomputer taking first place?

About turn, because the Virtual Institute for I/O (VI4IO), which maintains and documents the IO-500 10 Node Challenge List, has recalculated its results and awarded WekaIO first place.

Why 10 nodes?

By limiting the IO-500 10 Node Challenge benchmark to 10 nodes, the test challenges single client performance from the storage system. Each system is evaluated using the IO-500 benchmark that measures the storage performance using read/write bandwidth for large files and read/write/listing performance for small files.
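As we understand the benchmark rules, the overall IO-500 score is the geometric mean of a bandwidth score (GiB/s, from the large-file tests) and a metadata score (kIOPS, from the small-file tests). A sketch, using hypothetical numbers:

```python
from math import sqrt

def io500_score(bw_gib_s: float, md_kiops: float) -> float:
    """Overall IO-500 score: geometric mean of bandwidth and metadata scores."""
    return sqrt(bw_gib_s * md_kiops)

# Hypothetical system: 30 GiB/s bandwidth score, 120 kIOPS metadata score.
print(io500_score(30, 120))  # → 60.0
```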

With the November scoring, WekaIO served up 95 per cent of the Summit supercomputer’s 40-rack storage system’s IO using half a rack’s worth of its Matrix scale-out fast filer software. Matrix ran on eight Supermicro BigTwin enclosures and scored 67.79, coming within 5 per cent of the Oak Ridge IBM Summit supercomputer’s 70.63 score. Summit ran its test on a 40-rack Spectrum Scale storage cluster.

Bug detection alert

The VI4IO people found a bug in their tests and state; “we fixed the computation of the mdtest score that had a bug by computing the rate using the external measured timing.”

The new IO-500 10 Node Challenge List ranking results.

The new score for WekaIO is 58.35 while the now second-placed Summit system scores 44.30. A Bancholab DDN/Lustre system is third with 31.50.

That means Matrix is 31 per cent faster than IBM Spectrum Scale and 85 per cent faster than DDN Lustre.
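Those percentages fall straight out of the published scores:

```python
def pct_faster(a: float, b: float) -> float:
    """How much faster score a is than score b, as a percentage."""
    return (a / b - 1) * 100

print(round(pct_faster(58.35, 44.30), 1))  # Matrix vs Spectrum Scale → 31.7
print(round(pct_faster(58.35, 31.50), 1))  # Matrix vs DDN Lustre → 85.2
```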

So it’s official; WekaIO’s Matrix is the fastest file system in the world, based on the IO-500 10 Node Challenge List, beating Spectrum Scale running on the world’s fastest supercomputer, Summit. That means WekaIO is higher than Summit.

Seven Pillars of IBM Storage wisdom

You would think IBM was a storage chemist; there are that many IBM Storage Solutions floating around. And now we have three more, plus four point storage product announcements, in a Herzog blog blast.

IBM Storage Solutions are blueprinted bundles of products which are pre-tested and validated for specific application areas.

Eric Herzog is IBM’s Storage Division CMO and its VP for world-wide channels. His latest blog says;

  • IBM Storage Solution for Blockchain runs on either NVMe FlashSystem 9100 infrastructure or LinuxONE Rockhopper II. It includes Spectrum Virtualise, Spectrum Copy Data Management, and Spectrum Protect Plus. IBM claims it increases blockchain security with 100 per cent application and data encryption support and reduces test, development and deployment time for both on/off-chain solutions. Get this; Big Blue also claims it improves time to new profits from days to hours.
    • Perhaps you could suggest to IBM that you pay for it by giving IBM a percentage of these new profits? That would lower your CAPEX and OPEX needs.
  • IBM Storage Solution for Analytics is based on IBM Cloud Private for Data, which is supported on the NVMe-based FlashSystem 9100. It is said to accelerate data collection, orchestration and analytics, and simplify Docker and Kubernetes container utilisation for analytics apps.
  • IBM Storage Solution for IBM Cloud Private, with IBM Cloud Private being a Kubernetes-based platform for running container workloads on premises. This has Spectrum Scale parallel access file storage added to the existing block and object storage.
  • FlashSystem A9000 gets AI added to its deduplication capability to help work out where best to place data for maximal deduplication. It analyses metadata in real time to produce deduplication and capacity estimates without, IBM claims, any performance impact.
  • IBM Cloud Object Storage gets added NFS and SMB file access to its object storage.
  • IBM Spectrum Protect gets retention sets to simplify data management and reduce data ingest amounts.
  • IBM Spectrum Protect Plus has added data offload to public cloud services; IBM COS, AWS, Azure, IBM COS on-premises and Spectrum Protect. This is for data archiving and/or disaster recovery. Spectrum Protect Plus has also enlarged its protection capabilities to include MongoDB (a NoSQL database) and Microsoft Exchange Server.

Here we have a worthy set of seven incremental software announcements to broaden and extend the appeal of existing products. 

Commvault has a new puppet-master pulling its strings

Commvault has appointed Sanjay Mirchandani, who ran Puppet, as its new President and CEO, and the man is fresh to enterprise data protection and management but full of energy.

Ex-CEO, President and Chairman Bob Hammer becomes Chairman Emeritus after running Commvault for 20 years, while Nick Adamo becomes board chairman. Adamo became a board member in August last year, a new broom brought in following activist investor Elliott Management’s involvement with Commvault.

It was Elliott’s influence that caused Commvault to initiate its Advance restructuring project and Hammer’s agreed resignation last May.

Al Bunte, who has served alongside Hammer for more than two decades, is stepping down from his role as COO while maintaining his board position. Both Hammer and Bunte will remain with the company through a transitionary period, with Hammer stepping away from the transition effective March 31, 2019.

In a Forbes article Mirchandani wrote; “Once-dominant stalwarts are being disrupted by well-financed software startups that, rather than building a better mousetrap, decided to engineer a better mouse.”

That sounds catchy if we think of the mouse as software and mousetrap as hardware, with Mirchandani as the mouse-man.

Puppet products automate the production, delivery and configuration of software, and Commvault reckons it needs a modern software-focussed CEO.

Sanjay Mirchandani

Mirchandani has senior exec experience, including CIO roles, at Microsoft, Dell EMC and VMware on his CV, and ran Puppet for almost three years.

Kevin Compton, co-founder and partner at Radar Partners and Puppet board member, said: “We are truly indebted to Sanjay for the incredible impact he’s had on Puppet. Under his leadership, Puppet acquired two companies and opened five new offices in Seattle, Singapore, Sydney, Timisoara, and Tokyo. Sanjay also oversaw a $42 million fundraise and took Puppet from a single product company to a multi-product portfolio company. We’re incredibly grateful for his leadership.”

Yvonne Wassenaar replaces him as CEO at Puppet.

Why Commvault?

In a briefing Mirchandani told us it was time for a change at Puppet, and he wanted to move back to the East Coast of the USA; he has family in the New Jersey area, where Commvault is headquartered.

Commvault is attractive to Mirchandani because it is a bigger company and in good shape. We asked if it could become a billion dollar company: “The space is growing really rapidly. I’m not going to put a number on it just yet. I’m very excited about this space. Infrastructure and applications are coming together” and “data is paramount. We’re in a great position to define that to our customers.”

He likes where Commvault is positioned in the market: “I think we’re in a great place. The amount of data we manage in the cloud is approaching an exabyte and growing rapidly.”

What about the well-funded upstarts such as Cohesity, Rubrik and Veeam?

“There’s something to be said for having been around for a while. Their funding is validation of the space being important.” Also, in comparison to Cohesity and Rubrik: “Commvault took less and does more. … A [customer] CIO needs someone to trust and one throat to choke. We have an unrivalled capability that spans from mainframes to containers.”

He said Commvault is investing in its partner eco-system, but said nothing about products and strategies.

Comment

Blocks & Files thinks Mirchandani has a learning curve ahead of him but already sees the need to nurture Commvault’s installed base. Apart from that, his time at Puppet shows he is willing to acquire needed technology and grow a product portfolio.

Cohesity, Rubrik and Veeam now have a fight on their hands, as Mirchandani won’t want to let the stalwart Commvault be disrupted by these cash-rich upstarts. He could be a quick study, taking Hammer’s legacy and getting Commvault growing to the billion dollar revenue level and beyond. We expect him to hit the ground running – fast.


Acronis’ hyper-converged backup appliance

Acronis’ SDI Appliance is the first appearance of the company’s hyper-converged product line. It is a purpose-built backup system intended to be a storage target for its Backup and Backup Cloud offerings. 

This software product combines hyper-converged infrastructure (HCI) and cyber protection, and is based on updated, pre-configured Acronis Software-Defined Infrastructure (SDI) software. This was formerly called Acronis Storage and turns customers’ x86 server-based hardware into a hyper-converged system. It supports block, file, and object storage workloads and delivers cyber protection by incorporating Acronis’ CloudRAID and Notary products. The latter uses blockchain technology.

It addresses five aspects of cyber protection — safety, accessibility, privacy, authenticity, and security.  Other features include virtualisation, high-availability, AWS S3 compatibility, software-defined networking, and monitoring.

The appliance is delivered pre-installed on hardware that has been developed, built and shipped by Germany-based RNT RAUSCH GmbH, a manufacturing and logistics company.

It comes in a 3U rack mount form-factor, carrying five nodes, each fitted with an Intel 16-core processor, 32GB RAM (up to 256GB) and 3x Seagate 4/8/10/12 TB SATA disk drives; up to 180TB capacity in total.

Acronis SDI Appliance is currently available in the U.S., Canada, U.K., Germany, Switzerland, Austria, and North European countries.

Comment

Earlier this month Acronis said it was going to launch a software-defined data centre product. This purpose-designed backup appliance is it.

Add some flash storage and more CPU horsepower to this appliance and it becomes a potential general-purpose hyper-converged system which Acronis has said it’s developing. Virtuozzo, another company part-owned by Acronis co-founder and CEO Serguei Beloussov, has such an HCI system coming that’s destined for service providers selling virtual private clouds.

Blocks & Files wouldn’t be at all surprised if there was some collaborative software development taking place between Acronis and Virtuozzo.

NVMe/TCP needs good TCP network design

Poorly-designed NVMe/TCP networks can get clogged with NVMe traffic and fail to deliver the low latency that NVMe/TCP is designed to deliver in the first place.

The SNIA has an hour-long presentation explaining how NVMe/TCP storage networking works, how it relates to other NVMe-oF technologies, and potential problem areas.

NVMe over TCP is interesting because it makes the fast NVMe fabric available over an Ethernet network without having to use lossless data centre class Ethernet components which can carry RDMA over Converged Ethernet (RoCE) transmissions.

Such Ethernet components are more expensive than traditional Ethernet. NVMe/TCP uses ordinary, lossy Ethernet and so offers an easier, more practical way to advance into faster storage networking than either Fibre Channel or iSCSI.

The webcast presenters are Sagi Grimberg from Lightbits, J Metz from Cisco, and Tom Reu from Chelsio, and the talk is vendor-neutral.

This talk makes clear some interesting gotchas with TCP and NVMe. First of all, every NVMe queue is mapped to a TCP connection. There can be 64,000 such queues, and each one can hold up to 64,000 commands. That means there could be up to 64,000 additional TCP connections hitting your existing TCP network if you add NVMe/TCP to it.
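A back-of-envelope sketch of what that mapping implies for connection counts; the host, subsystem and queue numbers below are hypothetical, and the one-connection-per-queue rule (admin queue included) is our reading of the NVMe/TCP transport binding:

```python
def added_tcp_connections(hosts: int, subsystems_per_host: int,
                          io_queues_per_subsystem: int) -> int:
    """Each NVMe queue (admin and I/O) maps to its own TCP connection."""
    per_association = io_queues_per_subsystem + 1  # +1 for the admin queue
    return hosts * subsystems_per_host * per_association

# Hypothetical: 100 hosts, each using 4 NVMe/TCP subsystems with 16 I/O queues.
print(added_tcp_connections(100, 4, 16))  # → 6800
```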

If you currently use iSCSI, over Ethernet of course, and move to NVMe/TCP using the same Ethernet cabling and switching, you could find that the existing Ethernet is not up to the task of carrying the extra connections.

Potential NVMe/TCP problems

NVMe/TCP has more potential problem areas: latency higher than RDMA, head-of-line blocking adding latency, Incast adding latency, and a lack of hardware acceleration.

RDMA is the NVMe-oF gold standard and NVMe/TCP could add a few microseconds of extra latency to it. But, in comparison to the larger iSCSI latency, the extra few microseconds are irrelevant, and won’t be noticed by iSCSI migratees.

The added latency might be noticed by some latency-sensitive workloads, which wouldn’t have been using iSCSI in the first place, and for which NVMe/TCP might not be suitable.

Head-of-line blocking can occur in a connection when a large transfer holds up smaller ones while it waits to complete. This may happen even when the protocol breaks large transfers up into a group of smaller ones. Network admins can institute separate read and write queues so that, for example, a large write does not delay small reads. NVMe also provides a priority scheme for queue arbitration which can be used to mitigate any problem here.
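A toy model of why separate queues help; the transfer sizes and per-megabyte cost are made up, and each queue is assumed to drain independently:

```python
def completion_times(queue, per_mb=1.0):
    """Finish time of each transfer when a single queue is drained in order."""
    t, out = 0.0, []
    for size_mb in queue:
        t += size_mb * per_mb
        out.append(t)
    return out

# One shared queue: a 100MB write sits in front of two 1MB reads,
# so the reads finish at t=101 and t=102.
print(completion_times([100, 1, 1]))  # → [100.0, 101.0, 102.0]

# Separate read and write queues drain in parallel: the reads finish
# at t=1 and t=2, no longer stuck behind the large write.
print(completion_times([1, 1]))       # → [1.0, 2.0]
```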

Incast

Think of Incast as the opposite of broadcast: many synchronised transmissions converge on a single point, forming a congestion bottleneck through a buffer overflow. The sessions back off, the affected packets are dropped, and the retransmissions add latency.

It could be a problem, and might be fixed by switch and NIC (Network Interface Card) vendors upgrading their products, and possibly by TCP developers with technologies like Data Centre TCP. The idea would be to tell the sender, by explicit congestion detection and notification, to slow down before the buffer overflow happens. The slowing itself would add latency, but not as much as an Incast buffer overflow. Watch this space.

HW-accelerated offload devices could reduce NVMe/TCP latency below that of software NVMe/TCP transmissions. Suppliers like Chelsio and others could introduce NVMe/TOEs; NVMe TCP Offload Engine cards, complementing existing TCP Offload Engine cards.

The takeaway here is that networks should be designed to carry the NVMe/TCP traffic and that needs a good estimate of the added network load from NVMe. 

This SNIA webcast goes into this in more detail and is well worth watching by storage networking and general networking people considering NVMe/TCP.