
IBM speeds up Cloud Object Storage with LucidLink

IBM executive Arvind Krishna. 5/30/19 Photo by John O’Boyle

LucidLink is to offer a bundled version of Filespaces based on IBM Cloud Object Storage (COS).

LucidLink says the bundle can cost less than standard prices from AWS, Azure and other S3-compatible object stores. Peter Thompson, LucidLink CEO, said in a statement: “Now, with IBM Cloud, we will be able to further offer egress fees 60 per cent lower than our previous offering to a wider audience and pass those saving directly along to our customers.”

Adam Kocoloski, IBM Fellow, VP and CTO, Cloud Data Services, said: “As companies adjust to a new way of working, the ability to securely share and access large amounts of data remotely while taking advantage of object storage in a cloud environment has become indispensable.“

The IBM COS LucidLink bundle is designed for applications that require fast file-protocol access to massive datasets. These are stored most cost-effectively as object repositories rather than in more expensive NAS filer systems. However, object stores are slower to access than filers – unless they are front-ended with LucidLink’s Filespaces or equivalent software.

LucidLink schematic diagram.

Filespaces runs directly on each endpoint, caching the working data set for each user. The LucidLink client provides file system access while natively using S3 object protocols. All the file metadata is held in the local node, so file and folder lookups do not incur network hop latencies.

User file sharing is supported. Julie O’Grady, LucidLink’s marketing director, told us: “Filespaces … enables teams to collaborate on the same file, no matter where they are located.”

All access to the object store is carried out using parallel links to speed file reads and writes. All file reads result in partial file transmission with pre-fetching of data that is likely to be needed next. File writes are packaged by the Filespaces cache node, and then compressed, encrypted and multiplexed across parallel links to the back-end object store. The net effect of the caching, local metadata look-up and pre-fetching is that remote object access can be as fast as local filer access.
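As a rough illustration of that read path – caching plus speculative pre-fetch over parallel connections – here is a minimal Python sketch. The class, chunk size and s3_get helper are our own illustrative assumptions, not LucidLink code.

```python
# Illustrative sketch of read caching with sequential pre-fetch over an
# S3-style object store. Chunk size, class names and the s3_get() helper
# are assumptions for illustration only -- this is not LucidLink code,
# and the cache is deliberately simplified (not thread-safe).
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 1 * 1024 * 1024  # read objects in 1MiB chunks
PREFETCH_AHEAD = 4            # speculatively fetch the next 4 chunks


class CachedObjectReader:
    def __init__(self, s3_get, pool_size=8):
        self.s3_get = s3_get              # callable: (key, offset, size) -> bytes
        self.cache = {}                   # (key, chunk_index) -> bytes
        self.pool = ThreadPoolExecutor(pool_size)  # parallel back-end links

    def _fetch(self, key, index):
        if (key, index) not in self.cache:
            data = self.s3_get(key, index * CHUNK_SIZE, CHUNK_SIZE)
            self.cache[(key, index)] = data
        return self.cache[(key, index)]

    def read_chunk(self, key, index):
        # Kick off speculative fetches for chunks likely to be read next.
        for ahead in range(1, PREFETCH_AHEAD + 1):
            self.pool.submit(self._fetch, key, index + ahead)
        # Serve the requested chunk, from cache if already present.
        return self._fetch(key, index)
```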

In May 2019 IBM had a partnership with file collaborator Panzura to provide a Freedom Cloud file-access front end to its object store. The focus there was on file sharing; today’s LucidLink-IBM partnership focuses on file access speed.

IBM is supporting access to all its US data centres together with its UK and Australian data centres. The bundle does not support on-premises IBM COS deployments.

LucidLink Filespaces pricing. IBM COS is the storage for the Teams and Enterprise categories.

The IBM COS/Filespaces bundle is available now from LucidLink. Accounts are charged at $10.00 per user per month, with a six-user minimum, and provide access to all IBM’s worldwide regions.

Pure Storage FlashBlade goes to work with Azure Stack

Gridpro, a Swedish cloud management software vendor, has integrated Pure Storage’s unified file and object FlashBlade array with Azure Stack.

Gridpro is the developer of EvOps, a software technology that plugs various products and services into Azure Stack Hub. EvOps integrates FlashBlade management into Azure Stack Hub and provisions file systems and object store accounts through EvOps workflows.

“The integration represents the first time Pure Storage and Azure Stack Hub are physically working side-by-side in the data centre,” Pure Storage engineering director Brian Gold said in a blog post.

He suggests three use cases for an Azure Stack Hub/FlashBlade combo: storing and processing analytics data; backup and restore of snapshotted Azure Stack Hub virtual machine instances; and large-capacity, fast-access object storage. These use cases could be found in edge data centres with local processing and storage requirements, and can also meet data locality needs.

Pure’s FlashBlade integration with Azure Stack Hub mirrors the company’s support for AWS Outposts, underlining its backing for the hybrid cloud concept.

Azure Stack Hub backgrounder

Azure Stack Hub is a converged or hyperconverged IT system running an on-premises version of the Azure public cloud that can be hooked up to the Azure public cloud. It is built from a Scale Unit, using four to 16 servers from a Microsoft hardware partner.

The system is set up by a Microsoft Solution Provider partner. Networking and storage hardware can be incorporated through network and storage resource providers – web services that form the foundation for all Azure Stack Hub IaaS and PaaS capabilities.

The Azure public cloud supports file storage (SMB File shares), unlike Azure Stack Hub. This limitation can be overcome with FlashBlade, which presents both file and object services.

Azure Stack Hub supports blob (object), queue, and table (NoSQL data) storage services plus Azure Key Vault account management. Azure Storage can store and retrieve large amounts of unstructured data, like documents and media files with Azure Blobs, and structured NoSQL-based data with Azure Tables.

Silk’s Azure speed is based on Ephemeral OS disks

Silk gets crazy fast Azure speed by using and protecting Azure’s fast and unprotected ephemeral OS disks – incurring no storage cost.

Ephemeral OS disks are created on the local Azure virtual machine (VM) storage and are not saved to Azure Storage. Azure documentation states: “With Ephemeral OS disk, you get lower read/write latency to the OS disk and faster VM reimage.”

When Silk storage software runs on the Azure public cloud it can achieve 1 million IOPS and 20GB/sec read bandwidth. This is 6.25 times more IOPS and 10x greater bandwidth than Azure’s fastest storage offering – Ultra Disk Storage.

Chris Buckel.

We talked to Silk’s VP for business development, Chris Buckel, to find out more.

Blocks & Files: How does Silk get such high-performance from Azure?

Chris Buckel: What Silk does is to spin up a whole set of Azure Compute instances and aggregate their performance, then use our software to provide all of the enterprise features and resilience, etc. So when the customer runs a Silk Data Pod in Azure, they are running a bunch of Azure Compute VMs all orchestrated by our Flex software. But the key fact here is that, since they are effectively using Azure Compute to provide storage, they get to take advantage of Microsoft’s reserved instances discounts… so suddenly, storage is as discounted as compute.

Blocks & Files: Could you explain what a Silk Data Pod consists of?

Chris Buckel: A Silk ‘Data Pod’ in Azure consists of two sets of Azure Compute instances running in the customer’s own subscription. The first layer (called c.nodes) provides the block data services to the customer’s database systems, while the second layer (the d.nodes) persists the data. This means the SDP can be scaled in two dimensions: capacity can be increased by adding more d.nodes while performance can be added by scaling out the number of c.nodes in a fully symmetric active/active manner. And, of course, when performance is no longer needed, the number of c.nodes can be reduced to lower the Azure infrastructure cost.

Using this design, we can automatically scale out workloads in Azure based on their constantly-evolving requirements and achieve very high levels of performance on demand. And most importantly, everything is based on Azure Compute resources.

Blocks & Files: Why is it beneficial to use Azure compute instances?

Chris Buckel: Azure Compute instances (i.e. virtual machines) are available on PAYG options, but also via “Reserved Instances” discounts of 41 per cent for a one year commit or 62 per cent for a three year commit. Azure Disk, on the other hand, doesn’t give any discount for committing to a period of time… it’s more or less the same cost no matter what you do.
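As a rough worked example of those discounts (the $1,000/month list price below is a made-up figure, not an Azure quote):

```python
# Hypothetical worked example of the reserved-instance discounts quoted above.
# The $1,000/month list price is an illustrative assumption, not an Azure price.
list_price_per_month = 1000.0

one_year_ri   = list_price_per_month * (1 - 0.41)  # 41% discount -> $590/month
three_year_ri = list_price_per_month * (1 - 0.62)  # 62% discount -> $380/month

print(f"PAYG:          ${list_price_per_month:,.0f}/month")
print(f"1-year commit: ${one_year_ri:,.0f}/month")
print(f"3-year commit: ${three_year_ri:,.0f}/month")
```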

Blocks & Files: Why is this such a big deal?

Chris Buckel: That’s really big for customers with enterprise workloads like Oracle, MSSQL, Postgres etc because they all tend to be building these systems with a >3 year lifetime in mind. So suddenly the storage cost can be massively offset just like the compute cost, but at the same time with all this additional performance benefit, resilience, and features like inline deduplication, zero footprint snapshots and the like.

The features and functionality are great, but it’s the change in cost/performance profile that makes it compelling.

Blocks & Files: Okay, you use compute instances, but where is the data stored?

Chris Buckel: In addition to the types of Azure Disk you outlined in your recent article, there is another type of disk in each cloud provider called ‘ephemeral’ or local SSD storage. It’s very fast and very low cost, because it’s completely unprotected, unlike all the other options which are protected (and therefore slow).

We then use our patented RAID technology to stripe across multiple ephemeral volumes, providing the resiliency in case one or more are lost, but taking advantage of the low cost and high speed. We believe this is very different to all the other options, e.g. NetApp or Pure, who have to use the cloud provider’s disks and protection.

This – along with the scale-out architecture – is the secret sauce that gives us the very high levels of IOPS and throughput.
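To picture what striping with parity across unprotected ephemeral volumes looks like in principle, here is a generic XOR-parity sketch – an illustration of the general idea only, not Silk’s patented RAID technology.

```python
# Generic XOR-parity striping sketch: data is striped across N ephemeral
# volumes plus one parity volume, so the contents of any single lost volume
# can be reconstructed. This illustrates the general idea only -- it is not
# Silk's patented RAID implementation.

def make_stripe(chunks):
    """chunks: equal-length byte strings, one per data volume."""
    parity = bytes(len(chunks[0]))
    for chunk in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, chunk))
    return list(chunks) + [parity]          # data chunks + parity chunk

def rebuild(stripe, lost_index):
    """Recover the chunk on a lost volume by XOR-ing the survivors."""
    survivors = [c for i, c in enumerate(stripe) if i != lost_index]
    rebuilt = bytes(len(survivors[0]))
    for chunk in survivors:
        rebuilt = bytes(a ^ b for a, b in zip(rebuilt, chunk))
    return rebuilt

stripe = make_stripe([b"AAAA", b"BBBB", b"CCCC"])   # 3 data volumes + parity
assert rebuild(stripe, 1) == b"BBBB"                # volume 1 lost and rebuilt
```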

Blocks & Files: Do you provide any other protection?

Chris Buckel: Silk data protection comes in two forms: an optional change log written to low-cost HDDs, and cross-zone/cross-region/cross-cloud replication to another Silk Data Pod running elsewhere.

HPE heralds move to speedier SANs with faster Fibre Channel

HPE has introduced built-in telemetry and self-healing SAN features in the seventh generation of B-series Fibre Channel SAN directors, switches and HBAs. It has also doubled their speed.

Announcing the launch in a blog post, HPE Storage product manager Rob Gee set out the need for faster and smarter Fibre Channel SANs. “Flash storage is the de facto choice for many data centres worldwide. NVMe storage is becoming more and more common, bringing with it new benchmarks for high performance.

“Virtual machine densities continue to rise; the pressure to deliver reliable and fast access to data and applications – while ensuring business-critical workloads function correctly – is more significant than ever before. These trends combine to underscore just how important it is to have data paths keep pace and deliver on growing and evolving user expectations.”

HPE presents the Gen 7 tech as self-learning, self-optimising and self-healing. These features help SAN admin staff “take corrective action before a performance issue impacts the business”, Gee writes.

The HPE gen 7 products will support 64Gbit/s line speed with 64Gbit/s optics, which will become available in mid-2021. 

HPE’s B-series is based on Broadcom Gen 7 hardware, which was announced last September. The Broadcom Gen 7 range includes X7 Directors with component FC32-64 blades and G720 switches. The X7 Directors support gen 6 and gen 7 blades. They also enable NVMe and SCSI to run concurrently.

A Broadcom table compares gen 6 and gen 7 Fibre Channel product technology. 

Silk seals block storage co-selling deal with Azure

Microsoft has signed a co-selling agreement with Silk, the company formerly called Kaminario, which says this is the first such block storage deal in the Azure cloud.

Part of the Silk Cloud Data Platform, Silk’s VisionOS storage array software is presented as a smart, resilient and invisible software layer that sits between a public cloud infrastructure and databases such as Oracle and SQL Server.

The software speeds database performance without change to applications or the databases, with 99.9999 per cent resiliency and no single point of failure, the company claims. Silk’s software can also move data between zones and regions to ensure its availability.

Dani Golan, Silk’s CEO, said in a press statement: “The orchestration and optimisation power of Silk coupled with Azure will enable customers to achieve faster performance, far better resiliency, and more flexibility and agility.” 

Dani Golan

Block work

Azure offers four kinds of native block storage, called Azure Managed Disks:  

  • Ultra Disk Storage – high performance SSD with configurable performance for the lowest latency and consistent high IOPS/throughput. 
  • Premium SSD – high-performance storage for I/O-intensive workloads with high throughput.
  • Standard SSD – a low-cost offering optimised for test and entry-level production workloads requiring consistent latency. 
  • Standard HDD – disk-drive based for dev/test and other infrequent access workloads.

Ultra Disk Storage, the highest performance tier, delivers up to 160,000 IOPS, and 300MB/sec to 2GB/sec throughput. Silk said its software running on Azure can achieve 1 million IOPS and 20GB/sec read bandwidth. That’s 6.25 times more IOPS and 10x greater bandwidth.
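The multipliers are easy to check against Ultra Disk’s top-end numbers:

```python
# Checking the quoted multipliers against Ultra Disk's top-end figures.
silk_iops, ultra_iops = 1_000_000, 160_000
silk_bw_gbps, ultra_bw_gbps = 20, 2

print(silk_iops / ultra_iops)        # 6.25 -> "6.25 times more IOPS"
print(silk_bw_gbps / ultra_bw_gbps)  # 10.0 -> "10x greater bandwidth"
```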

Channel flannel

There are two kinds of Azure co-sell status, Co-sell Ready (access to Microsoft field sellers) and Co-sell Incentivised (access to field sellers who are paid for selling the product/service plus marketing help). Silk has achieved Co-sell Incentivised status.

VisionOS also supports AWS and Google Cloud Platform, so we might look forward to similar deals with Amazon and Google.

HYCU joins O365 data protection crowd with new SaaS offering

HYCU has launched SaaS-based Backup and Recovery for Office 365. “We see SaaS as a natural extension of the customer’s data estate,” CEO Simon Taylor said today in a press briefing. He noted that many HYCU customers are adopting multiple public clouds at the same time.

Simon Taylor

Office 365 is now called Microsoft 365 – we’ll refer to it as O365 to stay in tune with HYCU’s announcement. The new fully-managed O365 service is integrated into HYCU’s Protégé multi-cloud management facility.

Subbiah Sundaram, HYCU VP Products, said there are four advantages to HYCU’s Office 365 backup that make it a unique product.

  • It’s a service and not based on an on-premises machine,
  • It’s comprehensive and covers more than SharePoint and Outlook,
  • It can meet data residency requirements,
  • It has the most comprehensive eDiscovery features.

Multiple competitors offer O365 backup, including Acronis, Clumio, Cohesity, Commvault, Druva, IBM, Rubrik, Veeam and Veritas. They all say they offer comprehensive Office 365 backup with many – Cohesity, Veeam and Veritas, for example – calling out their eDiscovery capabilities. Customers will need a supplier-feature matrix to compare and contrast the various offerings.

In the meantime, here’s a quick feature list for HYCU’s O365 protection.

  • Coverage of Outlook, Contacts, OneDrive (1x/day), OneNote, SharePoint (3x/day), Teams, Outlook email (12x/day). There is also email journalling.
  • Granular and full recovery for OneDrive (files), SharePoint (sites), Groups and Teams. Search and recover functionality for Email, OneDrive and SharePoint.
  • Detailed audit trail of email backups, searches, downloads, deletions. Suspend email expiration on-demand for legal audit, etc.
  • Many security standards, encryption, key management and compliance standards met.
  • No sizing involved; it’s all dynamic.

Not only but also

HYCU offers agent-less, purpose-built backup and recovery services for Nutanix, Azure, Google Cloud, VMware, AWS and – now – O365. Services are native to each cloud and can be used for disaster recovery and data migration. The company has a Net Promoter Score of 91 and 2,000-plus customers, which include Nutanix and GigaOm. The company said revenues grew 450 per cent in 2020.

HYCU has a SAP HANA Disaster Recovery offering for Google Cloud Platform. Nutanix Mine v3.0 object storage is supported by HYCU and has built-in replication to multiple sites.

HYCU is expanding ransomware protection with immutable backups to WORM-enabled S3-compliant targets like Nutanix Objects, and source-to-cloud target encryption.

The company has added agentless backup of physical Linux servers for Red Hat Enterprise Linux and Oracle Enterprise Linux. HYCU has also added network throttles to optimise backup traffic.

Storage includes RAM, says UK Court of Appeal

A ruling in a court case in the UK concerning intercepts of encrypted EncroChat mobile phone messages has decided that data in the phone’s RAM was being stored: nonsense in an IT sense, but nonetheless allowing law enforcement to use the information.

The judgement was outlined in a Register story – “EncroChat hack case: RAM, bam… what? Data in transit is data at rest, rules UK Court of Appeal” – and we delved into the actual judgement to see what was going on.

The UK Court of Appeal ruled that RAM counted as data storage. How can this be the case?

Two meanings

There are two meanings of storage that appear relevant to this: IT and common parlance. In everyday speech storage means the retention of something for future use and memory the faculty by which the mind stores and remembers information. Storage and memory overlap. Both, we might say, are persistent.

But we might also say that, in humans, when the power is switched off at the end of our lives, then the contents of our memory disappear, being to this extent “volatile”. (But we are playing with words here.)

In IT memory contents are volatile, disappearing when power to the DRAM is switched off, and storage is persistent, with the contents unaffected by on-off power cycles.

Storage vs transmission

In the court case a distinction was drawn between storage of information and transmission of data in telephone devices. Different warrants have to be obtained by the police for intercepting and copying transmission data and for copying “stored” data.

Information recovered via a stored data warrant cannot be used in a case involving information transmission. Neither can recovered transmission data be used in a case involving stored data.

The information recovery from the EncroChat phones was achieved with a storage warrant and not a transmission warrant. Defence counsel made the case that the warrants were inappropriate as the recovered data was being transmitted, as it had been copied from the phones’ DRAM and not their storage, or ‘Realm’ as it was called.

This argument relied upon the transmission process including the preparation of the data to be included in a message. That data would be fetched from storage and copied into DRAM. There it would be used to construct a message and be formatted before being sent to the phone’s transmission hardware: radio chip and antenna.

The justices ruled that “what was intercepted, was not the same as what had been transmitted because what had been transmitted was encrypted. It cannot therefore have been ‘being transmitted’ when it was intercepted: it can only have been ‘being stored’.” The ruling likened the extracted data to a “draft”.

No ‘technical terms’ in the 2016 Act

The Appeal Court justices disagreed with “expert witnesses” who explained that making a copy was part and parcel of the act of transmission, stating that while this might be true from an expert’s point of view, it was not the intent of the government when drafting the relevant Act of Parliament.

The judgement stated: “The experts have an important role in explaining how a system works, but no role whatever in construing an Act of Parliament. They appear to have assumed that because a communication appears in the RAM as an essential part of the process which results in the transmission it did so while ‘being transmitted’. 

“That is an obvious error of language and analysis. It can be illustrated by considering the posting of a letter. The process involves the letter being written, put in an envelope, a stamp being attached and then the letter being placed in the post box. Only the last act involves the letter being transmitted by a system, but all the acts are essential to that transmission.”

The relevant Act defined only two states for data – being stored or being transmitted – so if the data was not being transmitted, it was being stored.

The reasoning was as follows: Data in RAM is not being transmitted. It is necessary for it to be in DRAM for a message to be prepared for transmission, but the act of preparation is not transmission. Therefore the data must be in a stored state and the warrant used to recover it was valid.

In other words, the common parlance meaning of storage takes precedence over the technical definition, despite the very technical issues at play.

IBM’s got a brand new pizza box – an entry-level FlashSystem

Lance Cpl. Brandon Allomong, Marine Aviation Logistics Squadron (right), and a fellow competitor wolf down their pizza during one of the four Pizza Eating Contests at this year's BayFest celebration. Allomong was the winner of this contest, managing to take down one whole pizza. He now has a year of free pizza from Papa John's Pizza.

IBM has launched the pizza box-sized FlashSystem 5200, its most compact storage system to date. The company has further updated the 5000 line with new 5015 and 5035 systems and added Spectrum Virtualize for Public Cloud on Azure.

The FlashSystem 5200 is faster and stores more data than its predecessor, the 5100, but base price averages out at 20 per cent lower.

Denis Kennelly, GM for IBM Storage, said in a statement: “Systems that provide global data availability, data resilience, automation, and enterprise-class data services are more critical than ever. Today’s announcement is designed to bring these capabilities to organisations of any size.”

IBM FlashSystem family. The 9200R is a rack mount 9200.

The current FlashSystem 5000, 5100, 7200 and 9200 models are 2RU enclosures with up to 24 x 2.5-inch format FlashCore Module drives (FMDs) mounted vertically across the front.

FlashSystem 5200

The 5200 is half the height at 1RU – so rack densities are doubled – and has up to 12 FMDs mounted horizontally in two rows across the front. The NVMe FMDs use 96-layer NAND which is organised into a combination of SLC flash and QLC flash and come with a seven-year warranty in 4.8, 9.6, 19.2 or 38.4TB capacities. Industry-standard storage-class memory drives fit in the same slots. 

In-drive FMD hardware compression yields a general 2:1 data reduction ratio. Software deduplication in the 5200 controller ramps this up to 5:1. Thin provisioning can yield a further 2:1 effective data reduction.

A clustered system supports up to four 5200 enclosures for a maximum of 48 drives, meaning a theoretical maximum raw capacity of 1.84PB using 48 x 38.4TB drives. SAS SSDs are supported in the expansion enclosures.
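The headline raw capacity follows directly from the drive counts and sizes, with effective capacity then depending on the data reduction ratios quoted above:

```python
# Raw capacity arithmetic for the FlashSystem 5200 figures quoted above.
drive_tb = 38.4
per_enclosure = 12 * drive_tb         # 12 x 38.4TB = 460.8 TB in a full enclosure
four_way_cluster = 4 * per_enclosure  # 48 drives -> 1,843.2 TB ~= 1.84 PB raw

# Effective capacity then scales with the data reduction quoted above:
# ~2:1 in-drive compression, up to ~5:1 with deduplication, and a further
# ~2:1 from thin provisioning, depending on the data set.
print(per_enclosure, four_way_cluster)
```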

FlashSystem product range characteristics.

After data reduction, the 5200 scales from 38TB effective capacity to 460.8TB in the chassis and out to 1.7PB.

The FlashSystem 5200 has one 8-core 2.3GHz Skylake-D Intel Xeon D-2146NT processor per controller with two dual-active controllers per control enclosure. The 5200 is equipped with a PCIe gen 3 bus and has eight x 32Gbit/s Fibre Channel or 25Gbit/s iSCSI ports.

The system incorporates all the software features of the 5100, including high-availability and external storage virtualization. But it offers 66 per cent greater maximum I/Os than the 5100 and 40 per cent more data throughput at 21GB/sec.

The operating system is Spectrum Virtualize, which manages up to 300 external storage arrays and adds their capacity to the FlashSystem pool. A Smart Data Placement algorithm puts the hottest data in the SLC flash for faster access.
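IBM has not published the algorithm’s internals; as a purely generic illustration of heat-based tiering, a placement decision might look something like this (the threshold and names are our own assumptions):

```python
# Generic heat-based placement sketch. IBM has not published Smart Data
# Placement internals; the access-count threshold and tier names below are
# illustrative assumptions only.
from collections import Counter

HOT_THRESHOLD = 100          # accesses per interval before promotion (assumed)
access_counts = Counter()    # block id -> accesses in the current interval

def record_access(block_id):
    access_counts[block_id] += 1

def choose_tier(block_id):
    # Hot blocks go to SLC flash; everything else stays on QLC.
    return "SLC" if access_counts[block_id] >= HOT_THRESHOLD else "QLC"
```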

The IBM FlashSystem 5200 supports Red Hat OpenShift, Container Storage Interface (CSI) for Kubernetes, Ansible automation, and VMware and bare metal environments. The system comes with IBM Storage Insights, which provides visibility across complex storage environments.

Future plans

IBM Spectrum Virtualize for Public Cloud is software that enables users to replicate or migrate data from heterogeneous storage systems between on-premises environments and IBM Cloud or Amazon Web Services. IBM will extend the same capabilities to Microsoft Azure, starting with a beta program in the third quarter of 2021.

The company is developing IBM Cloud Satellite software to enable customers to build, deploy and manage cloud services from any vendor anywhere.

Cloud Satellite standardises a set of Kubernetes, data, AI and security services. It will be delivered as-a-service from a single console, managed through the IBM public cloud and is currently in beta test.

The software is expected to become generally available in March and IBM will add support for it to the FlashSystem portfolio, SAN Volume Controller, Elastic Storage System and Spectrum Scale.

Your occasional storage digest with Dell EMC, AWS Outposts, a PCIe 5.0 switch and more

The University of Pisa relies on multiple Dell EMC Power-something arrays for its storage needs. AWS has given its on-premises public cloud presence local backups, Microchip has brought out the fastest PCIe switch to date while data protector and file manager Quantum has sidestepped a potentially nasty lawsuit.

University of Pisa is a Dell EMC Power user

Italy’s University of Pisa uses a raft of Dell EMC storage technologies.

Leaning tower of Pisa.

A PowerStore system supports scientific computing applications in genomics and biology, plus chemistry, physics and engineering. It delivered a 6x performance improvement over a previous, unnamed storage system.

The University uses the system to support remote learning. CTO Maurizio Davini, said: “As we transitioned to remote learning, we needed reliable, scalable technology to provide our 53,000 students and faculty with quick, easy access to critical data and applications at all times, from any location. Dell EMC PowerStore is … a game-changer.”

The university supports VDI, remote workstations and database workloads with a PowerMax storage array. A PowerScale all-flash system handles artificial intelligence and bare metal high performance computing (HPC) workloads. The university expects unstructured data volumes to double within a year.

AWS Outposts gets local backups

Amazon’s on-premises appliance AWS Outposts now supports local snapshots for Amazon Elastic Block Store (EBS) volumes. This makes it easier to comply with data residency and local backup requirements.

Until now, Amazon EBS snapshots on Outposts were stored by default on Amazon S3 in the AWS Region. EBS Local Snapshots on Outposts is a new capability that enables snapshots and Amazon Machine Image (AMI) data to be stored locally. The feature is handled through the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. You can also continue to take snapshots of EBS volumes on Outposts, which are stored in S3 in the associated parent region.
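For illustration, here is a hedged boto3 sketch of the local snapshot call – CreateSnapshot takes an Outpost ARN so the snapshot lands on the Outpost rather than in the parent Region. The volume ID and ARN below are placeholders.

```python
# Sketch of creating an EBS local snapshot on an Outpost with boto3.
# The region, volume ID and Outpost ARN are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    OutpostArn="arn:aws:outposts:us-west-2:111122223333:outpost/op-0123456789abcdef0",
    Description="Local snapshot stored on the Outpost, not in S3 in the Region",
)
print(snapshot["SnapshotId"])
```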

AWS said customers can easily migrate, replicate, and recover workloads from any source directly into AWS Outposts, or between AWS Outposts devices, without requiring the EBS snapshot data to go through an AWS Region. This also allows CloudEndure Migration and Disaster Recovery services to copy data locally, improving recovery times and assisting customers with strict data residency requirements.

Microchip’s PCIe Gen 5 switch

PCIe Gen 4 is only just arriving in products, delivering twice the bus speed of PCIe Gen 3, and now Microchip has brought out a PCIe Gen 5 switch, doubling speed again.

PCIe Gen 5 runs at 128GB/sec across 16 lanes. PCIe Gen 4 does 64GB/sec and PCIe Gen 3 operates at 32GB/sec.
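Those headline figures line up with the per-lane transfer rates when a x16 link is counted in both directions:

```python
# PCIe per-generation arithmetic: GT/s per lane, 128b/130b encoding,
# 16 lanes, counted in both directions (which is how the headline
# 32/64/128 GB/sec figures are usually quoted).
for gen, gts in (("Gen 3", 8), ("Gen 4", 16), ("Gen 5", 32)):
    per_lane_gbytes = gts * (128 / 130) / 8   # GB/sec per lane, one direction
    x16_duplex = per_lane_gbytes * 16 * 2     # 16 lanes, both directions
    # ~32, ~63, ~126 GB/sec -- the quoted 32/64/128 round numbers ignore
    # the small 128b/130b encoding overhead.
    print(gen, round(x16_duplex))
```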

Microchip PCIe Gen 5 switch

Microchip’s PFX Switchtec supports 28 to 100 lanes and up to 48 non-transparent bridges (NTBs). It comes with XpressConnect retimers, which extend the physical distance PCIe Gen 5 supports using copper wires. The switch comes with a suite of debug and diagnostic features.

Dr. Debendra Das Sharma, Intel fellow and director of I/O technology and standards, said in a statement: “Intel’s upcoming Sapphire Rapids Xeon processors will implement PCI Express 5.0 and Compute Express Link running up to 32.0 GT/s to deliver the low-latency and high-bandwidth I/O solutions our customers need to deploy.”

Starboard Value sues former Quantum execs

Starboard Value has dropped its lawsuit against Quantum. However, in an amended filing, the activist investment fund is still suing former Quantum CEO Jon Gacek and former CFO Paul Auvil in the California Superior Court in Santa Clara County, alleging misrepresentation and fraud.

In a 10-Q Filing, dated 27 January, Quantum said it “expects to continue to incur expenses related to this litigation, subject to potential offset from insurance. At this time, the Company is unable to estimate the range of possible outcomes with respect to this matter.”

Starboard bought Quantum shares between 2012 and 2014 and gained board seats. It agreed not to seek more control and to support Quantum’s slate of directors if company performance objectives were met.

Jon Gacek

Starboard alleges Gacek and Auvil artificially inflated Quantum’s earnings in its fiscal 2015 year to meet these objectives. Gacek and Auvil resigned in November 2017. Initially, Starboard included Quantum as a defendant, but Quantum rebutted this and Starboard has filed an amended complaint that mentions Gacek and Auvil only.

Shorter news items

Amazon S3 now supports AWS PrivateLink, providing direct access to S3 via a private endpoint within the customer’s virtual private network. This eliminates the need to use public IPs, configure firewall rules, or configure an Internet Gateway to access S3 from on-premises.
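For illustration, an SDK client can be pointed at the interface endpoint’s DNS name instead of the public S3 endpoint – the endpoint URL and bucket below are placeholders:

```python
# Sketch of reaching S3 through a VPC interface endpoint (AWS PrivateLink)
# rather than public IP space. The endpoint DNS name and bucket are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    endpoint_url="https://bucket.vpce-0abc1234def567890.s3.us-east-1.vpce.amazonaws.com",
)
resp = s3.list_objects_v2(Bucket="example-bucket")
print([obj["Key"] for obj in resp.get("Contents", [])])
```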

Videocams are watching you – and Scale Computing has software to store the images. It has announced HC3 hyperconverged infrastructure options for video surveillance, security and IoT edge applications.

Veeam has launched Veeam Backup for Google Cloud Platform, claiming ultra-low RPOs and RTOs. The software automates Google-native snapshots to protect VMs across projects and regions. Backups are stored in Google Object Storage for long-term retention.

Database virtualizer Delphix says a Covid-19-caused surge in demand accelerated its annual growth rate by over 85 per cent for the fiscal year ending January 2021, pushing it into non-GAAP profitability. Delphix also achieved a Net Promoter Score (NPS) of 89 during the year. The company now presents itself as a supplier of programmable data infrastructure.

NVMe-over-TCP supplier Lightbits Labs said it increased sales in 2020 by more than 500 per cent through a significant uptick in IaaS, SaaS, financial services, and video gaming customers.

We’re hiring!

VAST Data has hired Helen Protopapas as VP of finance, Tom Whaley as VP of sales, and Rick Franke as VP of global customer success, services and support. VAST said it is experiencing hyper-growth with accelerated global expansion and customer adoption. Whaley comes from NetApp and Franke from VMware.

Cloud storage service supplier Backblaze has hired Frank Patchel as Chief Financial Officer. Patchel has worked for multiple software as a service (SaaS) technology companies and overseen the successful sale of two businesses to public companies while serving as their president.

Isabelle Guis

SoftIron has appointed Phil Crocker as VP business development and channel with a focus on HyperDrive, the company’s software-defined storage system, which is based on Ceph. He joins from HPC storage supplier Panasas.

Commvault has appointed Isabelle Guis as CMO, to replace the departing Chris Powell. She was previously Salesforce’s VP for product marketing at Sales Cloud.

Database supplier SingleStore has appointed Oliver Schabenberger, former COO and CTO of SAS, as Chief Innovation Officer. SingleStore recently announced an $80m Series E investment round and a strategic partnership with SAS. 

Intel sues ex-employee for USB stick Xeon files theft

Top Secret

An Intel staffer who left the company to join Microsoft walked out the door with a USB stick holding 3,900 confidential files about the Xeon processor, his former employer alleges.

Intel accuses Dr. Varun Gupta of trade secret theft. According to documents filed in the US District Court in Portland, Oregon, Gupta worked at an Intel facility in Portland for almost ten years, in product marketing, strategic planning and business development.

He left in January 2020 to join Microsoft as a Principal for Strategic Planning in Cloud and AI. At Intel he had access to Xeon processor documents about pricing structure and strategies, parameter definition and manufacturing capabilities.

On his last day at the company, Gupta is alleged to have copied about 3,900 documents onto a Seagate FreeAgent GoFlex USB drive bearing an identified serial number. According to Intel he also copied information onto a Western Digital My Passport USB drive, also with an identified serial number. Some of the files were marked ‘Intel Top Secret’ and ‘Intel Confidential’.

In its court filing, Intel accuses Gupta of “deploying that information [in his role at Microsoft] in head-to-head negotiations with Intel concerning customised product design and pricing for significant volumes of Xeon processors”.

Specifically, he “used that confidential information and trade secrets to gain an unfair advantage over Intel in the negotiations concerning product specifications and pricing. Gupta had no way of knowing this information but for his access to it during his employment at Intel.”

An Intel security team began an investigation to determine the nature and scope of Gupta’s knowledge. With the assistance of Microsoft, “forensic analysis ultimately showed that Gupta had taken thousands of Intel documents, placed them on one or more of at least two USB drives (including the Seagate Drive), and accessed them on multiple dates throughout his employment by Microsoft.”

For example, “forensic analysis revealed that between February 3, 2020 and July 23, 2020, Gupta plugged the Western Digital Drive into his MS Surface at least 114 times.” Documents accessed included ”a slide deck relating to Intel’s confidential engagement strategy and product offerings for Xeon customised processors.”

After first denying he had the Seagate drive, “Gupta admitted to Microsoft that he did in fact have the Seagate Drive in his possession and only then that he turned it over to Microsoft for analysis.” Microsoft then commissioned a forensic analysis of the Seagate Drive. The Western Digital drive has not been found.

Intel seeks a jury trial and wants damages of at least $75,000, payment of its legal fees, and a restraining order preventing Gupta from using any confidential Intel information.

Gupta denies Intel’s claims.

Rubrik transitions to new phase as sales and engineering heads leave

Earlier this week, Blocks & Files reported the sudden departure of Rubrik’s Chief Revenue Officer Brett Shirk and several senior execs from the company. Since then we have learnt that head of engineering Vinod Marur has also left the company. We caught up with Rubrik CEO and co-founder Bipul Sinha to ask him what was going on.

On our phone call yesterday Sinha painted the picture of a company growing at such breakneck pace that it needs to transition to new exec leadership as it enters new phases of growth.

Bipul Sinha

“We are on an exponential growth curve,” Sinha told us. “Obviously if you think about Rubrik as a high growth startup [then] what high growth does is that an average company takes 10 years to get to a point where a high growth startup can do it in two, three, four years. And then you have to constantly re platform the company for the next phase of growth.“

He thinks Rubrik is in the half-a-billion to two-billion-dollar transition phase right now. “The company goes from zero to $100m, $100m to $300m, $300m to $500m, and $500m to $2bn is the next phase of growth.”

As a consequence, “when you enter that next phase of growth, you have to really think about things like talent, leadership, how are we approaching the market, and things like that, and that always leads to new talent coming in to really accelerate the strategic direction and lead the next phase of growth.”

He told us that this new growth phase required different sales leadership, but emphasised: “Brett is an incredible leader, and he’s really built a powerful sales engine that we have today. And that really sets the platform for our next phase.”

So how well is Rubrik doing right now? The full 2020 year and fourth 2020 quarter, ended January 29, set revenue records, according to Sinha, who noted that Rubrik had grown against the background of a data management market that is changing to a service and subscription orientation.

Bullet points from his comments include:

  • “We added 300-plus new employees just last quarter.” 
  • “Our customer base is also increasing, very very rapidly. We now have over 3200 customers worldwide [and] our product is installed in over 55 countries around the world.”  
  • “Customer spend on Rubrik is also growing very rapidly with over 200 customers with more than a million dollar spend.” 
  • “We are going through the subscription transition of Rubrik. And quarter over quarter, our subscription product revenues are growing rapidly… we were nearly 70 per cent subscription last quarter.“
  • “So again, very very high growth and this is very important data for us because, as we mature our SaaS model, it is important to have subscription as the majority of our business.”

Enterprise SaaS supplier

“So what we’re seeing,” Sinha said, “is that we sell into the large enterprise segment – we’re not an SMB player – and in the large enterprise segment, when you’re selling hundreds of thousands of dollars deals or millions of dollars deals, it always has to be high touch.

“Even as a SaaS platform, you will have high touch sales, just like ServiceNow’s SaaS platform. When you sell… millions of dollar deals to people [they do] not buy on phone or cell service budget because they want to understand the alignment vision, they are trusting you with their most important asset. It’s going to be a big deal.”

This “demands a different kind of sales process, a different kind of sales leadership to definitely orient us in a different direction.”

New CRO and engineering head

Brian McCarthy.

Sinha said Rubrik is appointing a new CRO, Brian McCarthy, who currently occupies the same role at ThoughtSpot. McCarthy is a “leader who has long experience in software and SaaS, to really lead the company and double down on the direction that we are going.”

Arvind Nithrakashyap (known as “Nitro”), Rubrik co-founder and CTO, has replaced SVP Vinod Marur as  the head of engineering. Sinha said: “So, Nitro as you know, has been my strategic partner from day one… He is the spiritual kind of guru of Rubrik engineering and… we brought him back to really strategically direct again the next round of Rubrik’s [growth].”

Regarding Marur, Sinha said: “We’re very thrilled that our VP of Engineering, the role Marur had, has moved on to his next opportunity. And he has done a fabulous job of really building the engineering team and the management team and the structure and brought in like lot of great discipline and a structure from Google. And he wanted to pursue his passion in the next company.”

So, like Shirk, Marur built a great team that now needs new leadership. It’s a ruthless world in accelerated growth companies like Rubrik.

HPE servers gain SSD performance ‘at 10k HDD price points’

HPE has announced the Very Read-Optimised SSD replacement option for SATA-connected disk drives in Apollo, ProLiant and Synergy servers. The 2.5-inch and 3.5-inch VRO SSDs are plug-in replacements for the drives and deliver better TCO than 10,000rpm HDDs, the company claims.

“When solid-state drive (SSD) performance meets 10K hard-disk drive (HDD) price points, you get the best of both worlds,” HPE said in a Community Experts blog post this week.

“And for years, that’s exactly what HPE has been working toward: Enabling you to experience the breakthrough performance, reliability, and energy efficiency of SSDs on HPE ProLiant, Apollo and Synergy platforms – at the closest possible price to the HDDs. Now available to replace HDDs in popular workloads, that’s exactly what we’re excited to deliver.”

HPE VRO SSD

VRO SSDs are optimised for a typical mix of >80 per cent random reads and <20 per cent sequential writes (large block size). Example storage workloads include vSAN capacity tiers, NoSQL databases, business intelligence, Hadoop, analytics, object stores, content delivery, and AI and machine learning data lakes.

The VRO drives have a 6Gbit/s SATA interface, and are available in 1.92TB, 3.84TB and 7.68TB capacities in the 2.5-inch form factor and 3.84TB and 7.68TB versions in the 3.5-inch form factor. An HPE QuickSpecs data sheet also lists a 960GB 2.5-inch drive.

The drives come with a three-year warranty and a lifetime of 700 drive writes (which is represented by HPE as 0.2 drive writes per day). The random read IOPS performance is 51,000, with a peak maximum of 63,000, and random write IOPS are 12,600, peaking at 13,000.

The drives use 96-layer QLC flash and prices start at $739.99. For comparison, on Amazon a 2TB Barracuda Pro 7,200rpm disk drive costs $55.49 while a 1.2TB 10,000rpm Seagate Enterprise Performance disk drive retails at $190. The VRO SSDs have quite the price difference.

Benchmarks

The VRO SSDs deliver 7x faster object reads and 6x faster object writes at a lower TCO, according to a joint HPE Micron product brief. The claims are based on testing three Apollo 4200 Gen 10 servers, each fitted with either eight x 8TB HPE 7,200rpm 3.5-inch SATA disk drives or eight x 7.68TB VRO SSDs. The test environment was Ceph with object reads and writes tested.

The disk drive based Apollo system cost $152,760 while the VRO equivalents cost $234,384 – 53.4 per cent more.

The disk-drive Apollos delivered 3.0GB/sec read and 1.5GB/sec write bandwidth. The VRO SSD Apollos went much faster, hitting 23.1GB/sec read and 9.0GB/sec write bandwidth.

Divide the system cost by the read GB/sec number and this works out at $50,920 per GB/sec for the disk drive Apollos and $10,146.5 per GB/sec for the VRO version. This is the justification for HPE’s claim that VRO SSD Apollos delivered 5x lower cost per read GB/s at lower TCO.
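The arithmetic behind that 5x figure checks out:

```python
# Cost-per-throughput arithmetic behind HPE's 5x claim.
hdd_cost, hdd_read_gbps = 152_760, 3.0
vro_cost, vro_read_gbps = 234_384, 23.1

hdd_dollars_per_gbps = hdd_cost / hdd_read_gbps   # ~$50,920 per GB/sec
vro_dollars_per_gbps = vro_cost / vro_read_gbps   # ~$10,146.5 per GB/sec

print(round(hdd_dollars_per_gbps / vro_dollars_per_gbps, 2))  # ~5.02 -> "5x lower"
```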