
Google plants Anthos hybrid cloud with storage mulch

Google has launched its Anthos hybrid cloud, with its Google Kubernetes Engine enabling application containers to move between on-premises environments and the AWS, Azure and GCP clouds.

Google’s Cloud Services Platform has been rebranded Anthos – Greek for flower – and its main component is the Google Kubernetes Engine (GKE), now available in customer data centres in a GKE On-Prem version.

Storage for containers is included, with announcements from HPE, Intel and Elastifile plus support from Cisco (with its HyperFlex HCI), Dell EMC, Lenovo and VMware.

GKE is available in Google’s own cloud and also in AWS and Azure, meaning containers can be moved back and forth between these GKE environments without modification.

Anthos schematic

HPE, Elastifile and Intel

HPE has two validated designs supporting Anthos. One covers the SimpliVity hyper-converged infrastructure (HCI) system and the other involves a ProLiant server backed by Nimble storage. Both will be available through the GreenLake managed services scheme.

Scale-out file system supplier Elastifile, whose cloud-native Elastifile Cloud File System (ECFS) is already available in the Google cloud, has announced EKFS, the Elastifile Container File System for Kubernetes. It supports GKE On-Prem and is available in the Google Cloud Platform Marketplace.

Elastifile says file storage has emerged as one of the standard platforms for achieving data persistence in containerised environments. EKFS, which supports the Container Storage Interface (CSI) standard, delivers persistence for GKE On-Prem deployments.
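Persistent storage for containers is typically requested through Kubernetes itself. Below is a minimal sketch, using the official Kubernetes Python client, of how a workload might claim a volume from a CSI-backed file system such as EKFS. The storage class name "elastifile-ecfs" is a hypothetical placeholder, not a documented Elastifile identifier.

```python
# Minimal sketch: request a CSI-backed persistent volume via the Kubernetes
# Python client. The storage class "elastifile-ecfs" is hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use local kubeconfig, e.g. for a GKE On-Prem cluster

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],        # shared file access is the point of a file backend
        storage_class_name="elastifile-ecfs",  # hypothetical CSI storage class name
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```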

Intel has a reference design for Anthos involving second-generation Xeon Scalable processors and an optimised Kubernetes software stack. It will publish the production design as an Intel Select Solution, as well as a developer platform.

The reference design will be delivered by mid-2019 with expected product delivery from OEMs and systems integrators later this year.

Kaleao gets fresh CEO

Hyperconverged startup Kaleao has appointed Komprise sales boss Tony Craythorne as its new CEO.

Kaleao’s KMAX technology uses ARM servers with FPGAs and a virtual NIC concept to build scalable hyper-converged infrastructure (HCI) systems.

Tony Craythorne

Craythorne was SVP for worldwide sales at file data lifecycle management company Komprise, which he joined in November 2017, and was SVP for worldwide sales at Nexsan before that.

A Kaleao spokesperson said: “This news has not been announced or confirmed as yet.”

Kaleao was involved in the 2016 EU ExaNeST exascale supercomputing project and so has a high-performance mindset. Its 3U KMAX-HD chassis, intended for hyperscale deployments, has 192 64-bit, 8-core ARM CPUs inside it, plus 24TB of tier-1 flash and up to 368TB of tier-2 flash in NVMe SSDs.

KMAX server

The newer KMAX-EP, with a 4U chassis, was announced in April 2017, with general availability in the second half of 2018. It was intended to fit in general enterprise data centre racks, unlike the KMAX-HD, which had certain limitations in that area.

The company is headquartered in Cambridge, England, with an office in North Carolina, USA, a development centre in Crete, and offices in France and Italy. Its CEO was co-founder Giampietro Tecchiolli, an Italian academic, who took on the role in October 2015.

Ex-ARM Director for Technology and Systems John Goodacre is the second Kaleao co-founder and its Chief Scientific Officer. He is also Professor of Computer Architecture at the University of Manchester in England. Goodacre left the ARM role in October 2018, three years after Kaleao was started.

Greg Nicoloso, the third co-founder, is Kaleao’s Chief Marketing Officer and General Manager, based in the USA. Paolo Stecca is listed as Kaleao’s VP Operations but he left in September last year, six months ago according to LinkedIn.

US-based Hilary Longo is Kaleao’s VP for Business Development and Marketing.

Kaleao says it has strong financial backing but we have no details of the amount.

HyperGrid CEO takes a hike

HyperGrid CEO Nariman Teymourian has been replaced by co-founder Manoj Nair.

The firm was founded as GridStore in 2009 by Nair, COO Mark Mitchell, now-departed Chief Strategy Officer Kelly Murphy, and the also-departed Chief Architect Antoni Sawicki and Principal Engineer Tomasz Novak. Nair was Chief Product Officer before taking on the top slot.

Manoj Nair

GridStore was a hyper-converged infrastructure (HCI) startup using the Hyper-V hypervisor instead of vSphere; somewhat unusual at the time.

It evolved into selling the HyperCloud HCI-as-a-service portfolio in 2016, financed by new funding and enabled through a DCHQ acquisition.

That was the year Teymourian became CEO and chairman, and the company changed its name to HyperGrid.

A set of exec departures followed in late 2017, with co-founders Sawicki and Novak leaving at that time.

Then, in 2018, it pivoted again, becoming a hybrid cloud management platform company with its HyperCloud offering.

Altogether HyperGrid has taken in $93 million in funding across seed, A, B and C rounds, with the B and C rounds both taking place in 2018.

Now a CEO change has taken place. Ex-CEO Nariman Teymourian issued a statement saying: “I recruited Manoj almost three years ago and I could not think of a better leader to take my role. Manoj has both my support and that of the board to take on this responsibility.”

A prepared quote from Nair said: “The HyperGrid leadership team and I will work to continue driving technology innovation, as well as further build out our sales, distribution and marketing functions, to deliver value for our customers, employees and investors.”

An investor quote by Kevin Dillon, Managing Partner at Atlantic Bridge Capital, said: “The leadership Manoj has provided since day one, make him a natural fit for the CEO position as the company moves into this logical next step of its evolution and drives to the next levels of success.”

An upcoming webinar shows HyperGrid is moving ahead on hybrid cloud management issues.

HyperGrid is moving forward from its hyper-converged roots to a hybrid cloud management world, with a new CEO to push the strategy ahead.

Note. Updated with HyperGrid announcement.


Google Cloud serves up three-decker Backup-as-a-Service club sandwich

Actifio, Cohesity and HYCU have declared their support for Google Cloud Platform with backup-as-a-service versions of their software.

The three backup suppliers announced their services today at Google Cloud Next ’19 in San Francisco. This is further evidence of Google ramping up its enterprise cloud business under the management of ex-Oracle exec Thomas Kurian.

Let’s take a brief look at the three products.

Actifio Go Backup-as-a-Service

Actifio announced its Go for VMware service in February. Go for VMware uses Actifio’s Sky data mover technology to take incremental-forever copies of vSphere virtual machines and put them into AWS (S3, S3 IA), Azure (Blob Hot, Blob Cold), GCP (Nearline, Regional, Coldline), IBM COS and Wasabi public cloud object stores.

The public cloud backup copy can be recovered via a direct mount and storage vMotion used to move it back on-premises.

Now Actifio Go Backup-as-a-Service is available in the GCP Marketplace for use on the Google Cloud Platform.

Cohesity SaaS and GCP

Cohesity Cloud Backup Service for Google Cloud provides enterprise-level backup and recovery for applications on GCP, eliminating the need for Google Cloud customers to deploy on-premises backup software and infrastructure. There is a single SaaS-based dashboard for all management tasks with consumption-based pricing that is integrated with GCP billing.

HYCU and GCP

HYCU, a Nutanix-focused backup supplier, can now be considered a Google Cloud specialist too. The company is backing Google Cloud services, with Google Cloud SQL the first service to be supported. It previously launched HYCU Backup as a GCP Service, a fully managed backup service for customers running their applications and databases on virtual machines in Google’s cloud.

HYCU says its Backup-as-a-Service is a cloud native, elastic service that grows and shrinks with customers’ needs. It is available on the Google Marketplace. There are no agents to install and zero deployment effort.

Support for Google Cloud SQL includes:

  • Automated Discovery of all Cloud SQL instances and databases
  • One Click data protection for Cloud SQL instances
  • Granular recovery of databases
  • Automation for one-click, dev and test copy management

The HYCU support for Google Cloud SQL service is available this quarter and more Google Services support is set to follow.


E8 practises scales for BeeGFS singalong

E8, the NVMe-oF array supplier, is supporting ThinkParQ’s clustered parallel file system to deliver files faster to high performance computing customers.

ThinkParQ’s product is BeeGFS, an open source parallel file system and an alternative to Lustre and Spectrum Scale, formerly known as GPFS. It has thousands of users worldwide.

BeeGFS schematic

BeeGFS is a scale out system with client software accessing storage servers. These can use flash, ordinary or shingled disk drives, with BeeGFS providing parallel access for IO speed.
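To illustrate the parallel-access idea, here is a toy Python sketch of striped, concurrent chunk reads of the kind a parallel file system performs, with fetch_chunk() standing in for a network read from a storage server. This is a concept sketch only, not BeeGFS code.

```python
# Toy illustration of striped parallel reads: a file is split into
# fixed-size chunks spread round-robin across storage servers, and the
# client fetches chunks concurrently. fetch_chunk() fabricates data here.
from concurrent.futures import ThreadPoolExecutor

SERVERS = ["storage1", "storage2", "storage3", "storage4"]

def fetch_chunk(server: str, index: int) -> bytes:
    # Placeholder for an RPC to a storage server.
    return f"{server}:chunk{index};".encode()

def read_striped(num_chunks: int) -> bytes:
    # Chunk i lives on server i mod N; issue all reads in parallel.
    with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
        futures = [
            pool.submit(fetch_chunk, SERVERS[i % len(SERVERS)], i)
            for i in range(num_chunks)
        ]
        return b"".join(f.result() for f in futures)

print(read_striped(8))
```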

E8’s NVMe-oF storage combines scale-out all-flash arrays with NVMe-over-Fabrics access to give low latency, high-bandwidth access to data. These arrays can use Optane persistent memory for added performance. E8 storage agents on the BeeGFS nodes link to the E8 arrays and complete the path between BeeGFS clients and the backing E8 data stores.

E8 array

Customers that require access to very large volumes of data can meet or exceed capacity requirements by adding as many nodes as needed, according to E8 and ThinkParQ. They said their combined system can also meet performance needs, for small and large files.

You can get comprehensive BeeGFS documentation here.

Exclusive: Veritas lays off Aussie tech staff

At least 20 Veritas tech staff have lost their jobs in the latest round of restructuring, which sees the private equity-owned company shutter Australia-based support at the end of the month.

Veritas’s decision to move some support functions to Pune, India seems to be the reason for this redundancy round.

According to our sources, 15 backline engineers from Sydney have gone, along with one manager, two advanced support engineers and three backline engineers in EMEA. In addition, we understand the cull includes some support jobs at two US locations.

April is the cruellest month for Veritas staff, with the company this time last year laying off at least 100 UK employees who worked mostly in research and development.

In response to our questions, James Blamey, Veritas head of corporate comms, pulled out the following drawer statement: “Veritas is transforming for growth, which means making operational changes to ensure that the company is investing in the right areas while continuing to meet our commitments, enhance our opportunity to grow, and deliver outstanding customer value and competitive differentiation.

“While we continue to innovate and hire talent in strategic areas of focus for the company, roles at some sites will be affected. Veritas does not take these changes lightly and is fully committed to supporting employees during this transition period.”


HPE teams up with Nutanix for GreenLake managed service

Nutanix logo

HPE has reversed its no-partnership stance with Nutanix and the two will work together on the GreenLake Nutanix managed service.

Let’s roll the clock back to May 2017, when Nutanix made its hyperconverged infrastructure software available on HPE ProLiant servers without HPE’s help. This riled HPE, which fired off a statement on its HPE Community site entitled: “Don’t be misled… HPE and Nutanix are not partners.”

This statement is no longer live but is still available via Google’s cache. The author, Paul Miller, VP of marketing for the HPE Software-Defined and Cloud Group business unit, wrote: “It came as no surprise when I read the news: Nutanix wants to run its software on HPE ProLiant. It was only a matter of time. Customers love HPE ProLiant—the DL380 is the best-selling server in the industry[1]. Now, Nutanix wants a piece of the pie… but then again, what software vendor wouldn’t?

“Here’s what we believe: if you’re considering running hyper-converged infrastructure (HCI) on an HPE server, you should consider the HPE HCI offerings. Over the last 18 months, HPE has invested nearly a billion dollars to bring the best solution to the market. HPE is combining industry-leading data services with the industry’s best-selling compute platform, and we are continuing to invest in the future.”

The HPE HCI offerings are based on its acquired SimpliVity HCI business.

Times change

Fast forward to today: “Hewlett Packard Enterprise and Nutanix today announced a global partnership to deliver an integrated hybrid cloud as a Service (aaS) solution to the market…delivered through HPE GreenLake to provide customers with a fully HPE-managed hybrid cloud.”

HPE ProLiant or Apollo servers will be installed, and loaded with Nutanix Enterprise Cloud OS software, including its AHV hypervisor, in customers’ on-premises or co-location data centres and consumed as a service through HPE’s GreenLake cloud-like consumption offering.

Suggested use cases are mission-critical workloads and big data applications; virtualized tier-1 workloads such as SAP, Oracle, and Microsoft; as well as support for virtualized big data applications, like Splunk and Hadoop.

Nutanix founder and CEO Dheeraj Pandey said: “We are delighted to partner with HPE for the benefit of enterprises looking for the right hybrid cloud solution for their business.”

Nutanix’s channel will also be able to sell HPE ProLiant or Apollo servers loaded with Enterprise Cloud software and delivered from HPE factories. In effect, HPE is OEMing its servers to Nutanix.

But HPE’s channel will not be able to sell these HPE server/Nutanix software appliances.

SimpliVity or Nutanix?

It seems that SimpliVity is no longer enough. If Dell EMC can OEM Nutanix software with its XC server range then why can’t HPE do the same, albeit using the GreenLake subscription business model?

HPE’s Miller said: “Our strategy has not changed…SimpliVity is our lead HCI” for the company’s channel. However, since May 2017 Nutanix has adopted a software-led business model and become a more hardware-neutral provider, he said.

HPE believes its SimpliVity product is the best HCI product offer in the general marketplace, with built-in data protection among other features. Enterprises use it to run large virtualization farms and remote office/branch office (ROBO) deployments, according to Miller.

SimpliVity, via Plexxi software-defined networking, has an involvement with HPE’s Synergy composable infrastructure, and there is no role for Nutanix in that.

How should customers choose between SimpliVity and Nutanix? Miller said that from a GreenLake (OPEX) standpoint, customers wanting the VMware or Hyper-V hypervisors should choose SimpliVity whilst those wanting a free hypervisor should head in the Nutanix AHV direction.

From a CAPEX standpoint HPE sells what it considers to be the best HCI product in the marketplace while Nutanix sells its own product.

To help you choose, we built a handy little table comparing the two products.

Blocks & Files was told by a source close to Nutanix that the SimpliVity system is limited in its ability to scale up to larger workloads because it has no distributed file system. However, SimpliVity can scale up to 16 nodes in a cluster, which puts it firmly in the enterprise class on that front.

We were told: “The biggest customers in Europe (like Sanofi) use it for ROBO scenarios.” Also, “GreenLake plus Nutanix is clearly positioned as an alternative to public cloud, it will give HPE customers the ability to consume their HPE plus Nutanix infrastructure as a service.”

HPE thinks the GreenLake Nutanix offering adds hypervisor choice, which its customers want, and that SimpliVity remains an enterprise-class offering with a sound roadmap.

HPE is convinced GreenLake is a great idea. Pradeep Kumar, SVP for PointNext at HPE, says the company has not lost a single GreenLake customer in two years of operation. Customers want public-cloud-like consumption models, he said.

The Nutanix Enterprise Cloud OS software on HPE GreenLake and the integrated appliance utilising Nutanix software on HPE servers are expected to be available in calendar Q3 2019.

Backup for a minute and scan this data protection news

A constant of life is that data protection products are always getting better.

Five recent data protection news stories exemplify this, with Acronis, ExaGrid, Unitrends and Spectra Logic (featuring twice) improving products and operations.

Acronis backup capability additions

Acronis has updated Acronis Backup, introducing physical data shipping, cross-platform virtual machine conversion, protection from crypto-mining malware, and localisation to seven additional languages.

The feature list includes:

  • Physical data shipping: Ability to protect large amounts of data by sending encrypted versions of full backups to an Acronis data centre on a hard drive.
  • Extended scalability: Manage up to 8,000 devices from a single management server.
  • Enhanced user experience: Organise, group and filter devices via a new comment feature, and schedule backups through a new performance and backup window.
  • Localisation to seven additional languages: Bulgarian, Norwegian, Swedish, Finnish, Serbian, Malay, Indonesian.
  • Improved Active Protection: Detection of crypto-mining malware, protection of network folders mapped as local drives.
  • Cross-platform conversion: Ability to convert backup files into VM files (VHDX, VMDK formats) capable of running on VMware Workstation and Microsoft Hyper-V hypervisors.
  • Support for additional operating systems and hypervisors: Microsoft Exchange Server 2019, Windows Server 2019 with Hyper-V, Hyper-V Server 2019, Windows XP SP1 (x64), SP2 (x64), (x86), VMware vSphere 6.7 update 1, Citrix XenServer 7.6, RHEL 7.6, Ubuntu 18.10, Fedora 25, 26, 27, 28, 29, Debian 9.5 and 9.6.

You can get a 30-day trial of the new software here.

Acronis said it achieved 20 per cent year-on-year net billing growth in 2018 and 160 per cent cloud business growth. It hired 400 additional employees, including more than 100 in new R&D offices in Tempe, Arizona and Sofia, Bulgaria. It hopes growth will accelerate through 2019.

ExaGrid on a roll

ExaGrid ships globally-deduplicating disk backup target arrays and has added integrations with Veeam Software and Zerto (for disaster recovery). The firm’s arrays can sit behind Commvault deduplication software and, it claims, deduplicate Commvault data by an additional 3x.
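Some back-of-envelope arithmetic shows what an "additional 3x" means for stored capacity; the 10:1 Commvault ratio below is an assumed example, not an ExaGrid figure.

```python
# Composing deduplication ratios: an extra 3x on the target multiplies
# the upstream reduction. Example figures are assumptions.
backup_tb = 100        # logical backup data, example value
commvault_ratio = 10   # assumed upstream deduplication ratio
exagrid_extra = 3      # ExaGrid's claimed additional reduction

stored = backup_tb / (commvault_ratio * exagrid_extra)
print(f"{backup_tb}TB -> {stored:.1f}TB on disk "
      f"({commvault_ratio * exagrid_extra}:1 overall)")
# 100TB -> 3.3TB on disk (30:1 overall)
```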

It is also working with HYCU to receive backup data from Nutanix ESXi and AHV systems.

CEO Bill Andrews says ExaGrid’s deduplication is so good that over 80 per cent of its newly-acquired customers are replacing Dell EMC Data Domain, HPE StoreOnce, or Veritas NetBackup 5200/5300 series appliances. These customers get faster backups, restores and VM boots with the ExaGrid kit.

ExaGrid says it has had its best-ever first quarter for bookings and revenue. This is good news, as it has recently expanded its worldwide office count and headcount.

Spectra Logic adds Ethernet access to tape libraries and 2EB library capability

Spectra Logic, the tape library and protection array vendor, now offers Spectra Swarm, which adds Ethernet connectivity to its LTO tape libraries.

The company said Fibre Channel continues to be fully available across all Spectra tape libraries, but the technology is increasingly being phased out of modern data centres.

SAS tape drives, and in particular half-height SAS drives, provide significant cost savings, allowing the addition of more drives for the same price, but SAS is not normally viable for connections beyond a single rack.

Spectra Swarm uses a backbone 40GbE RoCE (RDMA over Converged Ethernet) or iSCSI connection that is then locally switched to multiple SAS-connected LTO drives. This makes it easier to replace Fibre Channel and accommodates half-height SAS tape drives.

Both are cost savers. Customers could take that cost-saving and add more drives to a library to speed performance with no overall cost increase.

Spectra has also added built-in BlueScale encryption and key management. Spectra Swarm will be available in July.

Diving for BlackPearls

BlackPearl is a Spectra object storage gateway device that can act as a front end to tape, disk and public cloud stores. Until now, Spectra partner software was responsible for moving data from production systems to the BlackPearl system.

Spectra has added RioBroker data-moving front end software which runs in a server connected to BlackPearl and offloads the data transfer job from the application to the Spectra RioBroker system. This adds greater performance, parallelism, scalability, ease of implementation and consistency to the BlackPearl platform. 

RioBroker schematic

The RioBroker software package is designed as an interface layer with a RESTful file transfer API that intelligently manages jobs in BlackPearl.
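As a hedged sketch, driving a RESTful file-transfer broker from a client could look something like the Python below. The host, routes and JSON fields are invented for illustration and are not Spectra’s documented API; consult the actual RioBroker documentation for the real interface.

```python
# Hedged sketch of a client talking to a RESTful file-transfer broker.
# Endpoint, routes and payload fields are hypothetical placeholders.
import requests

BASE = "https://riobroker.example.local/api"  # hypothetical endpoint

# Submit an archive job (hypothetical route and payload).
job = requests.post(f"{BASE}/jobs", json={
    "operation": "archive",
    "files": ["/mnt/media/show_ep01.mxf"],
    "target": "blackpearl-bucket",
}).json()

# Poll the broker for the job's status.
status = requests.get(f"{BASE}/jobs/{job['id']}").json()
print(status)
```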

The firm says RioBroker support means:

  • Easier client development for partners with a simple abstraction layer over the BlackPearl interface 
  • More clients and applications can share BlackPearl object storage resources in parallel at higher performance
  • Remote input/output capabilities to multiple Spectra BlackPearl Converged Storage servers
  • Brokers data stream input and output between multiple sources and destinations
  • Support for high availability
  • Supports even higher performance with clustering capabilities
  • Higher performance with multiple RioBroker agents

RioBroker brings Partial File Recall and File Transfer Protocol (FTP) capabilities to BlackPearl. It is available now. You can find out more information here.


Unitrends uplifts recovery capability

Kaseya-owned Unitrends has launched Recovery Series MAX appliances, which it claims represent an entire data recovery environment. The systems are intended for organisations that lack local recovery infrastructure and must run through a remote recovery procedure when a local outage occurs.

Recovery Series MAX provides data protection and recovery software, and has extra CPU power and block agent recovery to enable on-the-box recovery and hosting of failed applications on site.

The three appliances range in size from 1TB through 4TB to 8TB of usable capacity, and feature Xeon D-1541 processors, built-in dual 1GbE and 10GbE Ethernet ports, 128GB SSD cards, and 32GB or 64GB of memory.

They come with automatic ransomware detection, WAN optimisation, global adaptive deduplication, encryption and bandwidth throttling, proactive analytics, self-healing hardware, and 24x7 support coverage.

They are available under either the Unitrends or Unitrends MSP brands.

No corners cut in our cheap hybrid storage box, Qumulo claims

Qumulo has launched an entry-level system with 80 per cent more capacity and 200 per cent faster read speeds than its existing entry-level models.

There has been a flurry of recent activity from the scale-out filer company, which last week introduced its software on the Google Cloud Platform and also launched its CloudStudio workload migration tool.

The company has four hardware ranges:

  • High-performance all-flash P32T and P92T
  • Capacity Series with hybrid flash/disk QC24 and QC40 entry-level systems
  • Capacity Series with larger capacity hybrid flash/disk QC104, QC208, QC260 and QC360 models
  • Nearline K-144T hybrid flash/disk system 

The new C-72T is available now but no pricing information was revealed at time of writing. The system fits in above the QC40, as the table below shows.

The C-72T is a 1U box holding 12 x 6TB disk drives. We think they are laid out in three rows of four, as the image below indicates.

Qumulo has used the 1U enclosure and six-core Xeon D-1531 CPU from the Nearline K-144T system to power the C-72T. The new box uses 6TB disk drives instead of the nearline system’s 12TB spinners.

Customers can start at less than 200TB of capacity (a four-node cluster) and scale to beyond 3PB in linear increments.
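The capacity figures check out with simple arithmetic: 12 x 6TB drives per node gives 288TB raw across a four-node cluster, consistent with usable capacity below 200TB once data protection overhead (which Qumulo does not specify here) is subtracted.

```python
# Checking the C-72T capacity figures from the article.
drives_per_node = 12
drive_tb = 6
nodes = 4

raw_tb = drives_per_node * drive_tb * nodes
print(f"raw: {raw_tb}TB for a {nodes}-node cluster")  # raw: 288TB

# "Less than 200TB usable" implies roughly 70% efficiency or lower,
# depending on the protection scheme.
print(f"implied efficiency at 200TB usable: {200 / raw_tb:.0%}")  # 69%
```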

The combination of low-power Xeon D processor and 6TB drives helps to lower cost – but not performance, as Qumulo is keen to emphasise.

It has 200 per cent more read performance than the QC24 and QC40, although the actual numbers aren’t published. In general, Qumulo systems have doubled throughput and halved latency for both Server Message Block (SMB) and Network File System (NFS) access since NAB 2018.

XPP and Minio S3-access

Qumulo has also automated its Cross-Protocol Permissions (XPP) capability. Using this, sysadmins can enable Windows, Linux and macOS users to collaborate without manually creating per-OS permissions safeguards. XPP ensures permissions compatibility as users access and collaborate on the same sets of files over the SMB and NFS protocols.

The company has announced a certified solution with MinIO, Inc. for its MinIO open source S3 object storage server. It is also working on its own in-house S3 object storage access software, which was originally slated for initial availability in March.

Molly Pressley, product marketing director, said the in-house S3 software is “still on the roadmap [but is] not part of this week’s announcements”.

The Minio gateway software enables customers to run file and object workloads from the same Qumulo storage infrastructure. Check out a tutorial here.
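For example, a client can reach file data through an S3 interface such as the MinIO gateway using standard tooling like boto3; the endpoint URL and credentials below are placeholders.

```python
# Minimal sketch: S3 access via a MinIO gateway endpoint with boto3.
# Objects written here land as files on the gateway's backing file store,
# so file (NFS/SMB) and object clients can share the same data.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio-gw.example.local:9000",  # hypothetical gateway address
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(Bucket="shared-data", Key="results/run1.csv", Body=b"a,b\n1,2\n")
print(s3.get_object(Bucket="shared-data", Key="results/run1.csv")["Body"].read())
```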

StorPool inches past Optane DIMM-fuelled Microsoft Storage Spaces Direct

StorPool, a Bulgarian storage startup, has edged past the 13.8 million IOPS hyperconverged system record set by an Optane DIMM-accelerated, 12-node Microsoft Storage Spaces Direct cluster, beating it by just 1,326 IOPS.

StorPool technology

StorPool produces scale-out block storage software with a distributed shared-nothing architecture. It claims its shared external storage system is faster than using local, direct-attached SSDs. It has its own on-drive format, protocol, quorum and client software features – about which we know nothing.

StorPool is distributed storage software, installed on each x86 server in a cluster and pooling their attached storage (hard disks or SSDs) to create a single pool of shared block storage.

StorPool logical diagram

The software consists of two parts – a storage server and a storage client that are installed on each physical server (host, node). Each host can be a storage server, a storage client, or both. To storage clients, StorPool volumes appear as block devices under /dev/storpool/* and behave identically to the dedicated physical alternatives. 

Data on volumes can be read and written by all clients simultaneously and consistency is guaranteed through a synchronous replication protocol. Volumes can be used by clients as they would use a local hard drive or disk array.

StorPool says its software combines the space, bandwidth and IOPS of all the cluster’s drives and leaves plenty of CPU and RAM for compute loads. This means virtual machines, applications, databases or any other compute load can run on the same server as StorPool in a hyper-converged system.

The StorPool software combines the performance and capacity of all drives attached to the servers into a single global namespace, with a claimed access latency of less than 100 µs.

The software is said to feature API access, end-to-end data integrity, self-healing capability, thin-provisioning, copy-on-write snapshots and clones, backup and disaster recovery.

Redundancy is provided by multiple copies (replicas) of the data written synchronously across the cluster. Users set the number of replication copies, with three copies recommended as standard and two copies for less critical data.
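The capacity cost of this scheme is straightforward: every byte is written as many times as there are replicas, so usable space is raw space divided by the copy count. A quick sketch, with an assumed raw capacity:

```python
# Usable-capacity arithmetic for synchronous replication.
# The 120TB raw figure is an assumed example.
raw_tb = 120  # total raw capacity across the cluster

for replicas in (2, 3):  # two copies for less critical data, three as standard
    print(f"{replicas} copies: {raw_tb / replicas:.0f}TB usable from {raw_tb}TB raw")
# 2 copies: 60TB usable from 120TB raw
# 3 copies: 40TB usable from 120TB raw
```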

Microsoft’s super-speedy Storage Spaces Direct

Microsoft set the HCI record in October 2018 using servers with Optane DC Persistent Memory: Optane DIMMs with 3D XPoint media. The host servers ran Hyper-V and Storage Spaces Direct (S2D) on Windows Server 2019.

A dozen clustered server nodes used 28-core Xeons, with 1.5TB of Optane DIMMs acting as a cache and 4 x 8TB Intel DC P4510 NVMe SSDs as the capacity store. They were hooked up to each other over dual 25Gbit/s Ethernet. The benchmark test used random 4KB block-aligned IO.

We understand the CPUs were 28-core Xeon 8200s.

With 100 per cent reads, the cluster delivered 13,798,674 IOPS with a latency consistently less than 40 µs. Microsoft said: “This is an order of magnitude faster than what typical all-flash vendors proudly advertise today…this is more than double our previous industry-leading benchmark of 6.7M IOPS. What’s more, this time we needed just 12 server nodes, 25 per cent fewer than two years ago.”

With 90 per cent reads and 10 per cent writes, the cluster delivered 9,459,587 IOPS. With larger 2 MB block size and sequential IO, the cluster can read 535.86 GB/sec.

Okay, StorPool. What can your storage software do?

StorPool’s Hyper-V and Optane DIMM beater

In essence, StorPool took the Microsoft config and swapped out components such as the Optane DIMM cache and the RDMA-capable Mellanox NICs.

The clustered servers accessed a virtual StorPool SAN running on CentOS 7 and the KVM hypervisor using Intel 25Gbit/s links. 

In each of these 12 nodes, four cores were allocated to the StorPool storage server, two cores to the StorPool block service, and 16 cores to the load-generating virtual machines. That totals 22 cores, with actual usage at full load clocking in at about 14 cores.

This setup achieved:

  • 13.8 million IOPS, 1.15 million per node, at 100 per cent random reads
  • 5.5 million IOPS with a 70/30 random read/write workload and a 70 µs write latency
  • 2.5 million IOPS with a 100 per cent random write workload
  • 64.6 GB/sec sequential read bandwidth
  • 20.8 GB/sec sequential write bandwidth

StorPool CEO Boyan Ivanov said: “Idle write latency is 70 µs. Idle read latency is 137 µs. Latency under the 13.8M IOPS random read was 404 µs, which is mostly queuing latency. … It was 404 µs under full load.”

Basically, the StorPool design matched the Optane DIMM-accelerated S2D configuration on IOPS, producing 13,800,000 against Microsoft’s 13,798,674.

The results indicate S2D pumps out more read bandwidth: 535.86 GB/sec vs StorPool’s 64.6 GB/sec. However, S2D serves this number from local cache rather than from the actual drives.

Ivanov said the two numbers are “not comparable because one is from local cache (cheating) and the other one (StorPool) is from real ‘pool’ drives over the network (not cheating). The StorPool number is 86 per cent of the theoretical maximum.” That maximum network bandwidth is 75GB/sec.
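The published numbers are easy to sanity-check: the IOPS margin, the per-node figure and the 86 per cent network-efficiency claim all follow from simple division.

```python
# Sanity-checking the published benchmark figures.
nodes = 12
storpool_iops = 13_800_000
s2d_iops = 13_798_674

print(f"StorPool margin: {storpool_iops - s2d_iops} IOPS")   # 1326 IOPS
print(f"per node: {storpool_iops / nodes / 1e6:.2f}M IOPS")  # ~1.15M

read_gbps, max_gbps = 64.6, 75
print(f"network efficiency: {read_gbps / max_gbps:.0%}")     # 86%
```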

We can safely assume a StorPool system will cost less than the equivalent IOPS-level Storage Spaces Direct system as no Optane DIMMs are required.

Find out more from StorPool and get a tech fact sheet here.

Veeam triumvirate takes charge – again

With the dust settling after Peter McKay’s October 2018 departure as Veeam’s CEO, how has the company changed and where is it going?

A conversation with Ratmir Timashev, co-founder of the rocketship-like Veeam data protection business and now its EVP for sales and marketing, reveals that Veeam has in a way swung back to its roots while taking advantage of the business extension that McKay built.

Building the enterprise business

When McKay became co-CEO and President in May 2017, his brief included building up Veeam’s enterprise business. The company already had a successful core transactional business selling to small, medium and mid-level enterprises, which had broken growth records by capitalising on the VMware virtual server wave washing over the industry. Deal sizes were $25,000 to $150,000 and business was consistently booming, but there was a feeling that Veeam was leaving money on the table by not selling to larger enterprises.

That was for McKay to do, and he did, establishing Veeam as an enterprise supplier and building up its North American operation. He moved quickly and spent money and organisational effort reshaping Veeam. A little too quickly, as it happened.

Ratmir Timashev

Timashev explains that the changes took investment away from the main Veeam business, with marketing to the core transactional customers suffering, as did inside sales.

So, eighteen months after joining Veeam, McKay left and three executives resumed operational control: the tight triumvirate of Timashev; co-founder Andrei Baranov, previously CTO and then co-CEO with McKay; and Bill Largent, a senior board member from May 2017 after being CEO. They are running Veeam in the same kind of way as they did before McKay’s time.

Enterprise – SME balance

Baranov is now the CEO with Largent the EVP for operations. The three have set up a 70:30 balance scheme for spending and investment.

Seventy per cent goes to the core transactional business, Veeam’s high-velocity business, focussing on TDMs (Technical Decision Makers) in the small, medium and mid-level enterprises.

The remaining thirty per cent is focussed towards EDMs or Executive Decision Makers in the larger enterprises.

Timashev says very few businesses successfully sell across the whole range of small, medium, mid-level and large enterprise customers. It’s hard to do with products and it’s also hard to do with sales and marketing. He cites Amazon and Microsoft as having cleared that high bar.

It’s clear he wants Veeam to be in that category.

Hybrid cloud moves

All customers are aware of and generally pursuing a hybrid cloud strategy, meaning cloud-style on-premises IT infrastructure combined with using the public cloud, and generally more than one public cloud. Their board directors and executives have charged the IT department with finding ways to leverage the cost and agility advantages of the public cloud.

For Veeam, a predominantly on-premises data protection modernisation business, this hybrid cloud move represents an opportunity but also a threat. Get it wrong and its customers could buy data protection services elsewhere.

Timashev said Veeam is pursuing its own hybrid cloud strategy. All new products have a subscription license model. The main data protection product still has a perpetual license model but is also available on subscription.

Subscription licenses can be moved from on-premises to the public cloud at no cost with Veeam Instance Licensing. VIL is portable and can be used to protect various workloads across multiple Veeam products and can be used on premises, in the public cloud and anywhere in between. Timashev believes Veeam is the only data protection vendor to offer this capability.

Veeam has also added tiering (aka retiring) of older data to the AWS and Azure clouds, or to on-premises object storage. It has its N2WS offering for backing up AWS EC2 instances and RDS data.

It wants to do more, helping customers migrate to the public cloud if they wish, and provide data mobility between the on-premises and public cloud environments.

Veeam’s triple play

Can Veeam, with what we might call the old guard back in control, pull off this triple play: reinvigorating core transactional business growth, building up its enterprise business, and migrating its product, sales and marketing to a hybrid cloud business model that parallels its customers’ movements?

The enterprise data protection business features long-established stalwarts such as Commvault, IBM and Veritas, and newer fast-moving, fast-growing upstarts such as Cohesity and Rubrik, which are moving into data management.

We have to add SaaS data protection companies like Druva to the mix as well. Veeam is going to need eyes in the back and sides of its head as well as the front to chart its growth course through this crowded forest of competing suppliers. 

It used to say it was a hyper-availability company and will need to be hyper-agile as it moves ahead. It has the momentum and we’ll see if it can make the right moves in what is going to be a mighty marketing, sales and product development battle.

Lenovo gets into fast virtual SAN array business with Excelero

Lenovo is buddying up with Excelero to provide NVMe over Fabrics access speed to data for its ThinkSystem servers.

The Data Centre Group at Lenovo has signed a global reselling deal with Excelero, which gets Lenovo into NVMe-oF block data access without having to build a storage array of its own. Its own servers, with their local storage, provide the data store for Excelero’s NVMesh 2 software.

Excelero NVMesh diagram

NVMesh involves intelligent client block device drivers – agent software – in data accessing servers. These use a Remote Direct Drive Access (RDDA) protocol to access data on target servers with their local (SSD) storage.

The system can provide a shared external array or operate in hyper-converged mode with a virtual SAN. This initially used RoCE (RDMA over Converged Ethernet) and Infiniband.

The second-generation NVMesh 2 added support for standard Ethernet with TCP/IP, and for Fibre Channel. V2.0 also added N+M erasure coding data redundancy. This is a parity-based scheme running as a distributed service, meaning the parity calculations run on the clients in a decentralised fashion.

This was designed for large scale: adding more clients increases the overall CPU power available for parity calculations. Striping, mirroring, and striping plus mirroring are also available in what’s called the MeshProtect offering.
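As a toy illustration of parity-based N+M redundancy, here is the M=1 (single parity) case using XOR; Excelero’s actual MeshProtect scheme is more general, but the rebuild principle is the same.

```python
# Toy N+M redundancy with N=3 data chunks and M=1 XOR parity chunk:
# any single lost chunk can be rebuilt from the survivors.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data = [b"AAAA", b"BBBB", b"CCCC"]  # N = 3 data chunks
parity = reduce(xor, data)          # M = 1 parity chunk

# Simulate losing chunk 1 and rebuilding it from the rest plus parity.
survivors = [data[0], data[2], parity]
rebuilt = reduce(xor, survivors)
assert rebuilt == data[1]
print("rebuilt:", rebuilt)
```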

V2 also has a MeshInspect facility to provide cluster-wide and per-object statistics on throughput, IOPS, and latency.

Lenovo joins NVMe-oF mainstream

The Excelero partnership gets Lenovo into the NVMe-oF game alongside all the mainstream storage vendors. Just this week IBM, with its Storwize V5000E and V5100 products, and Quantum, with its F2000 array, added end-to-end NVMe fabric access to their products.

Interestingly, Excelero partnered with Quantum on an NVMe-oF and StorNext combination at an industry event last September.

Lenovo and Excelero have big data and web-scale deployments in mind. They think that, as NVMe flash nears price parity with traditional flash, NVMe over Fibre Channel and ordinary Ethernet mean customers can get more fast storage capacity, flexibility and scale-out support without changing networking protocols or storage fabrics. Expensive RoCE or InfiniBand networking gear is no longer mandatory.

For Excelero, Lenovo gives it a useful additional channel into the enterprise storage market as it fights to grow its startup business with fellow fast data access startups Apeiron, E8 and Pavilion Data Systems, as well as competing with existing suppliers such as Dell EMC, HPE, IBM, NetApp, Quantum and Pure Storage.

Patrick Guay, Excelero’s VP for strategic accounts, issued a prepared quote: “We’re extremely proud to be the first shared NVMe storage software chosen by Lenovo, and look forward to expanding our business together.”