
Dell’s VxRail is an EPYC story in the works

Dell Technologies has spruced up the VxRail hyperconverged line-up, bringing AMD EPYC processor options to its performance-intensive P Series configurations.

The new all-flash P675F and hybrid flash-disk P675N have a 2U chassis and support Nvidia Tesla T4 and V100S GPUs, which provide more power for intensive workloads such as AI, data analytics, deep learning, and high performance computing.

Nancy Hurley

Nancy Hurley, Dell’s senior manager CI/HCI product marketing, said in a blog post that the P Series chassis enables larger PSU options to allow a broader range of CPU offerings, larger memory configurations, and additional PCIe cards compared to the E Series EPYC systems. Dell introduced EPYC options for the E Series in June 2020.

There are six VxRail product families:

  • E Series – 1U/1Node with an all-NVMe option and T4 GPUs for use cases including artificial intelligence and machine learning. AMD EPYC processor and PCIe Gen 4 config available.
  • P Series – Performance-intensive 2U/1Node platform with an all NVMe option, configurable with 1, 2 or 4 sockets optimised for intensive workloads such as databases
  • V Series – VDI-optimised 2U/1Node platform with GPU hardware for graphics-intensive desktops and workloads
  • S Series – Storage dense 2U/1Node platform for applications such as virtualized SharePoint, Exchange, big data, analytics and video surveillance
  • G Series – Compute dense 2U/4Node platforms for general purpose workloads.
  • D Series – Ruggedised 1U short depth – 20-inch – box in all-flash [SAS SSD] and hybrid SSD/disk versions.

VxRail systems support Optane Persistent Memory DIMMs and Optane SSDs, as well as Nvidia Quadro RTX GPUs and vGPUs to accelerate rendering, AI, and graphics workloads. The new P Series should go faster still with these workloads.

A VxRail 7.0.130 software update adds support for Intel’s X710 NIC Enhanced Network Stack (ENS) driver. This driver can dynamically prioritise network traffic to support particular workloads. A new feature enables system health runs to use the latest set of health checks whenever they become available. There is also a support procedure to expand a 2-node ROBO VxRail configuration to a 3-or-more-node cluster.

Salesforce specialist OwnBackup achieves unicorn status

OwnBackup, the Salesforce backup vendor, has bagged $167.5m at a $1.4bn valuation in a Series D round, taking total funding to $267.5m.

The startup, which was founded in 2012, will spend the cash on “ongoing investments in global expansion and extend OwnBackup’s platform to help companies big and small manage and secure their most mission-critical SaaS data.”

OwnBackup is listed on the Salesforce AppExchange. It offers secure, automated daily backups of Salesforce SaaS and PaaS data, restoration, disaster recovery and management tools to pinpoint backup gaps.

The company said it has almost 3000 customers on its books, up from 2000 in July 2020, including 400 new customers gained in the most recent quarter. “As cloud adoption and digital transformation accelerate, the data produced by and stored in SaaS applications is growing even faster,” CEO Sam Gutmann said. “Our platform is purpose-built for cloud-to-cloud backup and protection and this latest round of funding is the next step in our mission to help our customers truly own their SaaS data.”

There is plenty of scope for expansion. OwnBackup’s customer count is a small fraction of Salesforce’s, which stands at more than 150,000 customers.

OwnBackup’s six consecutive years of increasing funding rounds. Spot a hockey stick curve? The VCs are sure excited.

OwnBackup’s announcement describes the company as a cloud data protection platform provider, with nothing about being specific to Salesforce. It currently supports four SaaS applications: Salesforce, Sage Business Cloud Financials, Veeva (life sciences) and nCino (financial data). In April last year it mentioned possible expansion to cover Workday.

Competitors in the Salesforce backup market include AvePoint, CloudAlly, Commvault (with Metallic), Druva, Odaseva, Kaseya’s Spanning and Skyvia. None of the other big backup suppliers, such as Acronis, Cohesity, Dell EMC, Rubrik, Veeam, and Veritas, appear to have Salesforce-specific capabilities yet.

The new funding was co-led by Insight Partners, Salesforce Ventures, and Sapphire Ventures, with participation from existing investors Innovation Endeavors, Vertex Ventures, and Oryzn Capital.

Note. Cohesity does provide backup for Salesforce, albeit not native (yet!). It’s enabled through an app in the Cohesity Marketplace.

Qumulo goes big on AWS Marketplace

Qumulo has increased the number of configurations for its scale-out File Data Platform on the AWS Marketplace from seven to 27. The company says it has reduced prices by up to 70 per cent.

Barry Russell

Barry Russell, GM of cloud at Qumulo, said in a press statement: “Our new configurations in the AWS Marketplace enable customers to tackle the toughest, large-scale unstructured data challenges by creating high-performance file data lakes connected to workflows, while leveraging data intelligence services such as AI and ML from AWS.”

The AWS Marketplace lists Qumulo file storage cluster configurations in various capacity and media (HDD/SSD and SSD-only) combinations, as if they were HW/SW filers.

Seven pre-existing Qumulo configurations in the AWS Marketplace.

We checked the $/TB cost for Qumulo’s original seven AWS Marketplace configurations. Higher capacities get you lower cost/TB, but greater speed sends the cost/TB higher. Annual contracts will get you a 40 per cent or so discount.

The new configs and prices will become available on the AWS Marketplace website.

The HDD+SSD cost per TB per hour for the 12TB config is $0.0341. The 96TB and 270TB configs work out at $0.01718 per TB per hour, and the 809TB config is lower still at $0.01668. The 1TB all-SSD config costs $0.05 per TB per hour, dropping to $0.0239 for the 103TB config.
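
To put those per-TB rates into rough monthly and annual terms, here is a quick back-of-envelope sketch. The prices are the ones quoted above; the 730 hours-per-month figure and the roughly 40 per cent annual-contract discount mentioned earlier are the only assumptions.

```python
# Back-of-envelope cost estimates for some Qumulo AWS Marketplace configs.
# Prices ($ per TB per hour) are the article's figures; the ~40% annual
# contract discount is approximate.
HOURS_PER_MONTH = 730
HOURS_PER_YEAR = 8760
ANNUAL_DISCOUNT = 0.40  # approximate, per the article

configs = {
    "12TB HDD+SSD":  (12,  0.0341),
    "96TB HDD+SSD":  (96,  0.01718),
    "809TB HDD+SSD": (809, 0.01668),
    "103TB all-SSD": (103, 0.0239),
}

for name, (capacity_tb, per_tb_hour) in configs.items():
    monthly = capacity_tb * per_tb_hour * HOURS_PER_MONTH
    annual = capacity_tb * per_tb_hour * HOURS_PER_YEAR * (1 - ANNUAL_DISCOUNT)
    print(f"{name}: ~${monthly:,.0f}/month on demand, "
          f"~${annual:,.0f}/year on an annual contract")
```

On these numbers the 12TB config works out at roughly $300 a month on demand, while the 809TB config lands near $10,000 a month, which is where the per-TB economics of the larger tiers show up.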

Each config comes with a CloudFormation template which specifies AWS instance types.

Commvault cranks out strong Q3 figures

Commvault’s third quarter earnings show the company’s subscription business is taking off. The data management vendor pulled in record revenues of $188m, up seven per cent, and net income of $16.7m, versus a $0.65m net loss a year ago, for the three months ended December 31.

President and CEO Sanjay Mirchandani said in a statement: “The strategic moves we made over the past two years are delivering results. We have simplified how we do business, dramatically improved our execution, and are innovating faster than ever.”

Sanjay Mirchandani

Total revenue at the 9-month point is $532.1m and, if this growth is sustained through the fourth quarter, we could be looking at a record full year revenue number. Wells Fargo senior analyst Aaron Rakers told subscribers he expects Commvault to earn full fy2021 revenues of $714.7m. This would be a record, exceeding fy2019’s $710.9m.

Commvault is guiding for a 6 to 7 per cent revenue CAGR through fiscal 2023.
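
As a rough illustration of what that guidance implies, the sketch below compounds Rakers’s $714.7m fy2021 estimate forward at 6 and 7 per cent a year. Both the starting figure and the assumption that the CAGR applies from fy2021 onwards come from the paragraphs above; the output is an estimate, not guidance from Commvault.

```python
# Rough revenue projection implied by a 6-7 per cent CAGR through fiscal
# 2023, starting from the $714.7m fy2021 estimate cited above.
fy2021_estimate_m = 714.7

for cagr in (0.06, 0.07):
    fy2022 = fy2021_estimate_m * (1 + cagr)
    fy2023 = fy2022 * (1 + cagr)
    print(f"CAGR {cagr:.0%}: fy2022 ~${fy2022:.0f}m, fy2023 ~${fy2023:.0f}m")
```

That puts fiscal 2023 revenue somewhere in the $800m-$820m region if the guidance holds.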

The good news was spread pretty much all over the results.

  • Software and Products Revenue was $88.6m, up 16 per cent and an all-time record.
  • Services revenue was flat at $99.4m.
  • Annual Recurring Revenue (ARR) grew 11 per cent to $507m.
  • Subscription revenue represented 55 per cent of Software and Products Revenue versus 41 per cent a year ago.
  • Operating cash flow totalled $17m compared to $0.9m a year ago.
  • Total cash and short-term investments were $388.4m at quarter end, compared to $339.7m a year ago.

Three fy2021 revenue growth quarters: Q1, Q2 and now Q3.

Commvault said larger-deal revenue represented 68 per cent of its software and products revenue. The number of larger-deal transactions grew three per cent Y/Y to 187 deals and the average dollar amount of larger deal revenue transactions was approximately $322,000 – 15 per cent up on last year.
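
Those larger-deal figures hang together: 68 per cent of the $88.6m software and products revenue is about $60m, which matches 187 deals at roughly $322,000 apiece. A quick sanity check, using only the numbers quoted above:

```python
# Sanity check on Commvault's larger-deal figures quoted above.
software_revenue_m = 88.6   # $m, software and products revenue
larger_deal_share = 0.68    # larger deals as a share of that revenue
deal_count = 187
avg_deal_k = 322            # ~$322,000 average larger-deal size

from_share = software_revenue_m * larger_deal_share   # ~$60.2m
from_deals = deal_count * avg_deal_k / 1000           # ~$60.2m

print(f"From revenue share:        ${from_share:.1f}m")
print(f"From deals x average size: ${from_deals:.1f}m")
```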

Mirchandani said in the Q3 earnings letter: “Our regions were all strong, including the best-ever top line result for EMEA. We landed multiple seven-figure deals and recorded balanced growth in small and midsized businesses across all geographic regions.”

He added: “The Metallic software-as-a-service offering has become an integral component of our land and expand strategy. In Q3, nearly 40% of Metallic customers were net new to Commvault.”

Commvault is reorganising its products into four groups, according to William Blair analyst Jason Ader.

  1. Data Insights (includes file storage optimisation, data governance, eDiscovery, and compliance),
  2. Data Storage (includes Hedvig distributed storage, Commvault HyperScale X, Metallic Cloud Storage),
  3. Data Protection (includes backup/recovery and disaster recovery),
  4. Metallic SaaS Offering (includes VM and Kubernetes backup, database backup, file and object backup, Office 365 backup, Salesforce backup, and endpoint backup).

Will you still feed me when ARM64? You bet, says SUSE’s Rancher Labs

SUSE yesterday announced the release of Longhorn 1.1, the Kubernetes persistent storage platform built by its subsidiary Rancher Labs. The upgrade includes support for ARM64 and edge deployments, and extra performance and management goodies.

Rancher Labs teamed up with Arm to create a Kubernetes platform for low-powered IoT, edge, and data centre server nodes. “Longhorn is the first Kubernetes-native storage solution to support edge deployments, allowing DevOps teams to manage persistent data volumes from core to cloud to edge,” a SUSE spokesperson told us via email.

Rancher Labs is best known for its open source multi-cluster container orchestration software. This management tool includes the Rancher Kubernetes Engine (RKE) distribution but can manage any certified Kubernetes distribution.

Rancher has ported RKE and RancherOS to Arm Neoverse servers. Rancher Labs has also upgraded Rancher 2.1 to manage mixed x86 and ARM64 nodes via a single Rancher server. In other words, users can run x86 clusters in the data centre and Arm clusters on the edge.

The Longhorn 1.1 goodie bag includes…

  • Ability to manage container data volumes in any Kubernetes environment
  • Self-healing capabilities
  • “ReadWriteMany” support, so multiple pods can write to the same volume (see the sketch after this list)
  • Prometheus system and service monitoring support
  • CSI snapshotting
  • Data Locality to keep a storage replica local to the workload itself, ensuring that, even if the node temporarily loses network connectivity, access to storage will never be lost.
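
For the “ReadWriteMany” item above, here is a minimal sketch of what requesting such a shared volume looks like through the Kubernetes API, using the official Python client. The claim name, namespace and size are hypothetical, and “longhorn” is assumed to be the StorageClass that Longhorn registers in the cluster.

```python
# Minimal sketch: request a ReadWriteMany persistent volume, assuming a
# "longhorn" StorageClass is installed. Names and sizes are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="shared-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],            # shared writes across pods
        storage_class_name="longhorn",             # assumed Longhorn class name
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```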

Red Hat OpenShift now does container storage backup

Red Hat has teamed up with three container backup suppliers to integrate their services with the company’s OpenShift Kubernetes distribution.

The Red Hat-certified backup products for OpenShift container storage are parent company IBM’s Spectrum Protect Plus; TrilioVault for Kubernetes; and Veeam-owned Kasten’s K10.

OpenShift Container Storage (OCS) runs as a Kubernetes service within Red Hat OpenShift to provide persistent storage to applications through automated management of storage resources.

OCS 4.6 adds container app snapshotting using a Container Storage Interface (CSI) plug-in. It also gains a set of OpenShift APIs for data protection that expose this snapshot capability.
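
Because OCS exposes snapshots through the standard CSI interface, a backup tool or an administrator can request one with an ordinary Kubernetes VolumeSnapshot object. The sketch below uses the official Python client; the snapshot class and PVC names are hypothetical placeholders rather than real OCS object names.

```python
# Minimal sketch: request a CSI snapshot of an OCS-backed PVC via the
# standard VolumeSnapshot API. Class and PVC names are hypothetical.
from kubernetes import client, config

config.load_kube_config()

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "app-data-snap"},
    "spec": {
        "volumeSnapshotClassName": "example-ocs-snapclass",   # hypothetical
        "source": {"persistentVolumeClaimName": "app-data"},  # hypothetical PVC
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```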

Kasten K10 and Red Hat OCS integration.

Kasten’s VP for Products Gaurav Rishi blogs that K10 “can perform durable backups of your data using OCS storage classes (PVCs), your metadata (Kubernetes and OpenShift APIs such as namespaces and secrets), provides local persistence of the backup for a minimal restore time and the ability to restore a running application namespace.”

The restoration can be to “a different namespace for test and QA purposes and even to a different OpenShift Container Platform cluster”.

Commvault Metallic BaaS puts stake in the ground with HyperScale X

Commvault’s Metallic Backup-as-a-Service can now store backup data on the company’s HyperScale X appliance.

HyperScale X, launched in 2020, offers scale-out backup and recovery for container, virtual and database workloads. The system supports on-premises deployments and multiple public clouds, with data movement between these locations. HyperScale X for Metallic lets customers use Commvault’s appliance as a backup target, for single-vendor management of on-premises storage and SaaS-delivered data backup.

HyperScale X runs in edge mode, allowing it to operate as a backup target for hybrid cloud workloads protected by Metallic. Commvault said this move represents the next step in Metallic’s hybrid expansion.

Customers can use Metallic to protect data in any hybrid data protection scheme: cloud to cloud, on-premises to cloud with an on-premises copy for fast restore, cloud to appliance, and more.

Manoj Nair.

Manoj Nair, Metallic GM for Commvault, said in a statement: “With our SaaS Plus capabilities, Metallic solutions offer customers what no other cloud-delivered backup service can match: the most comprehensive portfolio of BaaS solutions and the flexibility to backup each data source to the optimal storage target–whether that be cloud or on-premises storage, or HyperScale X for Metallic for ultimate performance with BaaS simplicity.”

Metallic has also been given additional capabilities:

  • Hybrid cloud support for SAP HANA and Kubernetes,
  • Metallic Salesforce Backup protects Salesforce data with unlimited retention, unlimited storage, and hardened security controls built-in
  • Metallic Office 365 & Teams Backup & Recovery now includes in-place restore of Teams conversations and other data. Admin staff can granularly recover data stored within Teams, channels, and conversations.
  • Metallic Database Backup supports Oracle Database and Microsoft Active Directory.

Metallic Database Backup gives administrators seamless visibility and database protection for SAP HANA, Oracle, Microsoft SQL, and Active Directory on-premises, in the cloud, and in Azure.

All new and existing Metallic Office 365 Backup customers will automatically receive Microsoft Teams functionality as part of their subscription – which also covers Exchange, SharePoint, and OneDrive.

Quantum upgrades hybrid storage appliance

Quantum has refreshed its hybrid storage appliance line-up with the H2000 Series, giving the new range higher-capacity drives and faster connectivity than its predecessor.

Quantum said the H2000 marks a significant increase in performance over the previous generation, the QXS Series. The company did not disclose H2000 bandwidth performance numbers at launch time.

Noemi Greyzdorf, director of product marketing, provides the announcement quote: “The H2000 Series represents a generational upgrade, providing better storage capacity and access, while enabling customers to fully tap into their software subscription license through its tight integration with the StorNext 7 file system. It’s a win-win for everyone.” 

The H2000 comes in two flavours: a 12 x 3.5-inch-slot H2012 enclosure and a 24 x 2.5-inch-slot version, both 2RU in size. They support SAS-connected SSDs and HDDs and can be managed via Quantum’s StorNext workflow software.

Supported drives include 3.5-inch format 4, 8 and 16TB nearline disk drives and 1.2TB, 1.8TB and 2.4TB 10,000rpm 2.5-inch disk drives. Customers can also specify 1.9TB, 7.68TB and 15.36TB SAS SSDs.

H2000 chassis

Faster connectivity should enable the H2000 to exceed 2RU QXS-3 and QXS-4 performance. The H2000 systems support 32Gbit/s Fibre Channel and 10/25/40/100Gbit/s Ethernet connectivity. QXS systems use 16Gbit/s Fibre Channel, half the speed of the H2000’s Fibre Channel, and 1/10Gbit/s Ethernet.

H2000s can be used for storage in a StorNext system, alongside the F-Series NVMe drive arrays.

At the time of publication, Quantum had not released H2000 availability or pricing information.

Astronauts walk in space to upgrade International Space Station datacomms. No more hard drives by courier

Two astronauts will walk in space today to upgrade the International Space Station’s datacomms. Their efforts will mean that data collected in science experiments conducted aboard the ISS will no longer be sent to Earth via hard drives carried by returning astronauts.

The space walkers are expected to take six hours to install the ColKa (Columbus Ka-band), a fridge-sized terminal funded by the UK Space Agency and built by MDA UK.

The ISS Columbus module, launched in 2008, currently has lousy data comms to ground stations on Earth, hence the physical transfer of data on hard drives. That data’s arrival is contingent on the returning astronaut’s schedule, which can mean many weeks of delay.

ISS Columbus module.

With the new set-up, results are delivered to scientists a day or two after the data is recorded. Data transmission is bi-directional but asymmetric: ColKa promises speeds of up to 50 Mbit/s in the downlink and up to 2 Mbit/s in the uplink.

This will allow high data volume downlink, including video streaming. Speed is limited by the ISS-Earth comms infrastructure components. The terminal itself is capable of speeds of up to 400 Mbit/s downlink and 50 Mbit/s uplink. 
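
At those rates the courier-versus-downlink difference is easy to quantify. A rough sketch, assuming a 100GB batch of experiment data (an illustrative figure, not one from ESA):

```python
# Rough transfer-time estimate for experiment data over the ColKa link.
# The 100GB batch size is an illustrative assumption; the link speeds are
# the figures quoted above.
def transfer_hours(gigabytes: float, megabits_per_second: float) -> float:
    bits = gigabytes * 8e9
    return bits / (megabits_per_second * 1e6) / 3600

batch_gb = 100
for label, mbps in [("ColKa downlink (50 Mbit/s)", 50),
                    ("Terminal maximum (400 Mbit/s)", 400)]:
    print(f"{label}: ~{transfer_hours(batch_gb, mbps):.1f} hours for {batch_gb}GB")
```

Even at the infrastructure-limited 50 Mbit/s, a 100GB batch comes down in under five hours rather than the weeks a hard drive can spend waiting for a ride home.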

ColKa will send signals from the Station, which orbits at an altitude of 400km above Earth, even further into space, where they will be picked up by EDRS satellites in geostationary orbit 36,000km above the surface. From there the data is transmitted to a ground station at Harwell Campus, Oxfordshire. The signals are then transferred to the Columbus Control Centre and user centres across Europe.

Veritas’s NetBackup now comes in HCI deployments


Veritas has gone live with NetBackup 9.0, a major release of its flagship data protection software. Enhancements include scale-out and hyperconverged ‘Flex Scale’ nodes, simpler operations, and OpenStack integration.

Deepak Mohan

“The best strategy for businesses that want to embrace hyperconverged architectures is to standardise on a single data protection platform,” Deepak Mohan, Veritas EVP, said:

“Veritas NetBackup 9 with Flex Scale gives customers freedom of choice with the most flexible deployment models in the industry today by empowering them to protect over 800 workloads in a scale-out, scale-up, or cloud storage model.”

Now for an analyst quote – from Ashish Nadkarni, group VP at IDC. “With Veritas’s heritage of providing enterprise-class data protection, enterprises can now consume software-defined NetBackup in the cloud, as BYOS [build your own server] and purpose-built appliances, and now in a new, hyper-converged scale-out deployment mode – giving customers the breadth of diverse workload coverage – all from a single platform.”

New features

Flex Scale is NetBackup deployed in hyperconverged nodes that scale out in line with evolving backup needs. This complements two other deployment options.

  • NetBackup – in the cloud, on build your own server (BYOS), purpose-built appliances and virtual appliances
  • NetBackup Flex – secure, multi-tenant containerised deployment.

HPE and Veritas have jointly developed a certified and Veritas-branded Flex Scale system, using HPE ProLiant servers and NetBackup software.

Veritas says NetBackup 9.0 is simpler to operate. New features include:

  • Policy-driven automation to manage provisioning, scaling, load-balancing, cloud integration and recovery operations,
  • Auto-discovery of workloads to improve data protection services and eliminate protection gaps,
  • API-first focus for integrations into existing toolchains and cloud-based workflows,
  • Simplified data protection for OpenStack.

The OpenStack additions use native APIs to provide integration, multi-tenant controls, and self-service management on-premises and in public clouds.

NetBackup 9 is available now.

Qumulo helps drive Covid-19 pandemic modelling scenarios

The Institute for Health Metrics and Evaluation (IHME) is using scale-out Qumulo filers and 18TB Western Digital drives to aid Covid-19 research.

Based at the University of Washington, IHME supplies Covid-19 pandemic modelling visualisations and models for hospitals and health authorities. The institute produces daily and cumulative Covid-19 death reports, infection and testing numbers, social distancing information, and vaccine rollout research and modelling.

IHME uses the Qumulo systems for on-premises processing of up to 2PB of data per day, and hosts data visualisations in Azure. IHME has about 8PB of stored disk data and 15PB of archived data.

Serkan Yalcin, director of IT Infrastructure at IHME, who used to work at Qumulo, said in a statement: “Teaming up with Qumulo from the beginning and utilising Western Digital drives has enabled us to distill hundreds of millions of data points into a single visualisation which allows policymakers to easily view results and communicate them with their teams. Now, as vaccines are rolling out around the world, we have been able to further extend projections with even more data to give decision makers the most robust information possible.”

IHME mask-wearing visualisation.

IHME uses Qumulo C-432T file system nodes with 432TB of mixed disk drive and flash storage. The 432TB-capacity 2RU nodes have 24 x 18TB Western Digital DC HC550 disk drives and 6 x 3.2TB SN640 NVMe SSDs for caching, also from WD.

A 42RU rack can hold 8.6PB of data with these nodes.
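
That rack figure follows from the node size. At 2RU per 432TB node, and assuming 2RU of the 42RU rack is reserved for networking (our assumption, not Qumulo’s), 20 nodes fit and 20 x 432TB comes to roughly 8.6PB:

```python
# Back-of-envelope check on the 8.6PB-per-rack figure quoted above.
# Assumption (ours): 2RU of the 42RU rack is kept for top-of-rack switching.
rack_ru = 42
node_ru = 2
reserved_ru = 2            # assumed networking allowance
node_capacity_tb = 432     # 24 x 18TB HDDs per C-432T node

nodes_per_rack = (rack_ru - reserved_ru) // node_ru
rack_capacity_pb = nodes_per_rack * node_capacity_tb / 1000

print(f"{nodes_per_rack} nodes per rack, ~{rack_capacity_pb:.2f}PB of raw HDD capacity")
```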

Read more in a Qumulo case study.

DataCore snaps up Caringo for object storage

DataCore has bought the object storage supplier Caringo for an undisclosed amount and will now offer software-defined block, file and object storage.

Caringo customers include BT Television, Department of Defense, Disney Streaming Services, National Institutes of Health (NIH), Argonne National Labs, and hundreds more worldwide. The company took in $33m in funding, including $8.8m in its last round in 2016.

The acquisition shows that the endgame is nearing for object storage software vendor consolidation. Today only three independent object storage startups are still standing: Cloudian, MinIO and Scality. Other contenders, including Amplidata, Archivas, ByCast, Ceph, Cleversafe, Evertrust, FilePool and OpenIO, have all been acquired.

Dave Zabrowski.

Dave Zabrowski, CEO of DataCore Software, said: “With our acquisition of Caringo, we are excited to offer a proven, highly reliable object storage technology with an unmatched breadth of features to IT departments, service providers, and government customers worldwide.”

Jeff Horing, co-founder and managing director at Insight Partners – a VC backer of DataCore – was also active on the quote front: “We are excited DataCore is adding leading object storage technology from Caringo to become the vendor with the most comprehensive software-defined portfolio in the industry.”

DataCore’s software-defined storage concept.

Caringo’s Swarm object storage software joins DataCore’s vFilO file and SANsymphony block storage software in the portfolio. DataCore believes strongly in a best-of-breed approach for file, block and object software, rejecting Ceph-style one-product-does-it-all approaches, as well as unified file+object stores.

CMO Gerardo Dada told us that optimisations for file make a unified product poor at object performance and vice versa.

Caringo supplies Swarm in appliance and software form. Version 11 Swarm was announced in September last year and introduced large file bulk uploads, partial file restore, file sharing and backup to AWS S3-format targets.

Swarm 12 in November 2020 added distributed protection, immediate content access across geographically dispersed sites, single sign-on (SSO) support, and support for S3 Glacier and S3 Glacier Deep Archive. Dense storage and flash drives were also supported. 

Dada said 95 per cent of Austin-based Caringo’s staff are joining DataCore, which also has an Austin office. The Caringo CEO, Tony Barbagallo, will be a DataCore advisor for a while.

DataCore said it had a strong close to 2020 with double-digit growth Y/Y in capacity sold. The company claims it is consistently adding more than 100 new customers per quarter.