
Dremio, the wannabe data warehouse killer, raises $70m

Dremio, a data analytics storage startup, has raised $70m in a C series round, to fund growth. Total funding stands at $115m.

In a post discussing the new investment, CEO Billy Bosworth wrote: “Dremio’s Data Lake Engine makes analytics directly on data lake storage fast, efficient, and secure, which drives down cloud infrastructure costs while giving data consumers what they need, when they need it.”

Dremio dubs itself a cloud data lake storage company and it aims to replace the traditional extract, transform and load (ETL) method of populating data warehouses. The Santa Clara-based company has built a data lake engine running on AWS and Azure and claims it is more efficient to run the analytics directly on source data in the data lake. (Read our January 2020 profile of Dremio for more technology details.)

Dremio CEO Billy Bosworth

According to Bosworth, Dremio has grown annual recurring revenue (ARR) more than 3.5x over the past year. “For startups, fundraises are typically meaningful events; this one will always be special due to the global situation that surrounds us,” he wrote.

The C-round was led by new investor Insight Partners, with participation from existing investors Cisco Investments, Lightspeed Venture Partners, Norwest Venture Partners and Redpoint Ventures.

Micron gives insight (kinda) into 3D XPoint revenues for the first time

Micron sold at least $118.8m worth of 3D XPoint media and drives in its second fy2020 quarter, ended February 27. This is the first time the US chipmaker has given us a glimpse into revenues for the storage class memory technology, which include XPoint chips made for Intel’s Optane products.

We can derive this figure of $118.8m from CFO Dave Zinsner’s prepared remarks discussing the earnings, made earlier this week. (Our sister publication The Register has written up Micron’s Q2 fy2020 results.)

Micron includes XPoint revenues in its compute and networking business unit (CNBU). CNBU revenues for the second quarter decreased one per cent sequentially to $1.97bn and 17 per cent y/y. Zinsner revealed that, “excluding XPoint”, revenues would have fallen seven per cent sequentially.

This gives us the reference point that XPoint accounted for a minimum of six per cent of CNBU revenues in the quarter, i.e. $118.8m. But is that also a total revenue figure for XPoint?

Assessing this hangs on the meaning of the word “excluding”, as used by Zinsner. He may have excluded all XPoint revenues from the rest-of-CNBU’s seven per cent sequential fall, or he may simply have been referring only to XPoint sales growth. We cannot determine which without knowing the XPoint sales figures for Q1.
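For readers who want to check the arithmetic, here is a back-of-the-envelope reproduction of the estimate. The inputs are the figures quoted above; the calculation itself is ours, not Micron's, and it lands at roughly $119m, with the small gap to $118.8m down to rounding of the prior-quarter base.

```python
# Rough reproduction of the XPoint revenue estimate. Inputs are Micron's
# reported figures; the arithmetic and the rounding are ours.
cnbu_q2 = 1.97e9          # CNBU revenue in Q2 fy2020, down 1% sequentially
cnbu_q1 = cnbu_q2 / 0.99  # implied prior-quarter base, roughly $1.98bn

# CNBU fell 1% with XPoint included, but would have fallen 7% without it.
# The six-percentage-point gap applied to the prior-quarter base gives the
# minimum XPoint contribution in the quarter.
xpoint_minimum = 0.06 * cnbu_q1
print(f"Minimum XPoint revenue: ${xpoint_minimum / 1e6:.0f}m")  # ~ $119m
```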

Micron X100 3D XPoint SSD

Micron’s XPoint line includes chips, some of which go to Intel for its Optane-brand products, and the rest are used in Micron’s own X100 drives. We infer that Micron currently derives most of its XPoint revenue from OEM work for Intel. But the company aims to grow its own X100 3D XPoint SSD business and expand the product portfolio for data centre customers.

DDR5 DRAM

Other Micron technology matters revealed in the Q2 earnings statement include sampling 1z DDR5 DRAM chips, its smallest 10nm-class node, and being on track to introduce HBM chips this year. HBM technology involves stacking memory dies one above the other and giving them a high-bandwidth link to a processor. Micron is also developing a 1a (alpha) DRAM node, smaller than 1z but still a 10nm-class product.

On the NAND front, Micron should ship 128-layer chips this quarter and it expects revenue in Q4. Currently it ships 96-layer NAND.

Every data storage product pic needs the human touch

So how do you give a sense of scale, or indeed liven up a boring product shot? Simple: use the human touch – as is literally the case in the picture below. You can see a disk drive being extracted from a Western Digital Ultrastar Data102 enclosure. (We used the picture in an Acronis uses WD JBODs story.)

But who is the hand model?


Read the email I received to find out.

Hi Chris,

Thanks to Blocks and Files, I can now check being “famous” off my bucket list. Imagine my surprise to see my photo on your website. (Or at least my hand is famous. That’s me pulling a disk cartridge out of a WD Ultrastar Data102.)

Thanks for making my week a bit more interesting. I’ve now shared this with all my friends and will arrange autograph sessions once the shelter-in-place order is lifted. 😉

Candace

Candace Doyle

Candace is Candace Doyle, senior director of sales and marketing at the Linley Group. Before that she was a senior marketing manager at Western Digital, where she was pressed into service as a hand model for the pic above.

If there are any more storage people lending anonymised arms or limbs to product shots, do let me know and I’ll add you to our wall of unsung storage heroes.

Life after NKS. NetApp to work with ‘all flavours of Kubernetes’

NetApp is developing its hyperconverged platform to deliver an automated Kubernetes facility. The storage giant has told us to expect a “significant announcement in the Spring”.

Longer term, NetApp will develop cloud-like products for file storage on-premises, based on NetApp HCI.

The company last week announced its decision to close NetApp Kubernetes Service (NKS), with effect from April 20. This week a spokesperson told Blocks & Files that it will subsume NKS technology into new offerings.

NKS enabled customers to orchestrate a set of containers but, according to NetApp, it was focused too high up the Kubernetes stack. The company will swim downstream to provide lower-level support for many Kubernetes distributions. These may include Red Hat CoreOS, Canonical, Docker, Heptio, Kontena Pharos, Pivotal Container Service, Rancher, Red Hat OpenShift, SUSE Container as a Service and Telekube.

Blocks & Files asked NetApp for more details about the demise of NKS and the Cloud Volumes Service (CVS) on its hyperconverged platform. Here are the company’s replies.

Blocks & Files: Does this mean the NetApp HCI product goes away?

NetApp: Absolutely not. NetApp has every intention to stay in the HCI space as it is a fast-growing market segment. NetApp HCI offers a unique value proposition for customers looking for cloud-like simplicity in an integrated appliance. With the added differentiation of NetApp’s data fabric strategy and integration, we believe we have a very competitive product.

In the future, we will be investing in NetApp HCI to become a simplified and automated infrastructure solution for on-premises Kubernetes environments. We will share more about our strategy in the Kubernetes market in the Spring.

Blocks & Files: What does a distribution-agnostic approach to Kubernetes mean?

NetApp: Distribution agnostic means we will work with all flavors of Kubernetes, currently there are more than 30 different distributions. Our storage needs to work with as many as customers demand.

Blocks & Files: How is the NKS offering not distribution-agnostic?  

NetApp: NKS has been very much upstream-Kubernetes based and spins up an upstream-Kubernetes cluster. At the same time, customers may want to pick a different distribution of Kubernetes curated by a vendor. Being distribution-agnostic just means allowing more than upstream like an OpenShift.

Blocks & Files: Why is the NKS offering not being evolved into a distribution-agnostic one?

NetApp: The change in direction is an evolution to our approach to help customers simplify the Kubernetes environments. NKS is being consumed into new NetApp projects specific to Kubernetes, we have more than quadrupled the investment in our Kubernetes plans – stay tuned for more on this soon.

Blocks & Files: Is the StackPointCloud technology being discarded?

NetApp: Absolutely not. The StackPoint technology and team are a central part of our investment in Kubernetes development and tools that will continue working on a focused set of solutions at NetApp to bring innovation and new capabilities to the Kubernetes ecosystem. Again, stay tuned.

Blocks & Files: Are the Cloud Volumes services on HCI, on-premises, AWS, Azure and GCP now all finished?

NetApp: Absolutely not, we have so much demand for CVS and Azure NetApp Files we have to allocate more of our resources and more of our infrastructure to Azure, Google and AWS. We have changed course for Cloud Volumes Service on premises and HCI to meet the demand on the three public clouds and focus our on-prem services with new investment areas.

For Cloud Volumes on HCI, in the near-term the service will be replaced by ONTAP Select included in the cost of the NetApp HCI appliance. Long-term, NetApp will use the feedback from the Cloud Volumes on NetApp HCI preview and develop new, innovative  cloud-like products for file storage on-premises on the NetApp HCI platform.

Blocks & Files: Are we looking in the future to a single replacement software product for NKS and the Cloud Volumes Service that covers the on-premises, AWS, Azure and GCP worlds with hardware supplier-agnostic on-premises converged and hyperconverged hardware?

NetApp: As we shared with our customers, NetApp’s goal in Kubernetes market will be to make applications and associated data highly available, portable, and manageable across both on-premises and clouds through a software-defined set of data services. Please stay tuned for announcements this Spring.

Blocks & Files: Does that hyperconverged hardware include standard HCI offerings such as VxRail, Nutanix and HPE SimpliVity?

NetApp: NetApp’s goal is to continue investing in our converged and hyperconverged solutions, including NetApp HCI. Our investment is focused on continuing to offer a unique NetApp HCI solution with a focus on simplifying Kubernetes solutions while continuing to support our partners like VMware, Google, (Red Hat), and others through offering support for their software running on NetApp HCI. However, we do not have plans at this time to offer VxRail, Nutanix, or HPE HCI products.

Dell debuts oven-ready AI platforms to ease researchers’ setup pain

Dell EMC announced yesterday a bunch of reference architectures and pre-defined workstation, server and Isilon scale-out filer bundles for data scientists and researchers working in artificial intelligence.

In effect these are recipes that are quicker to prepare and easier to cook than starting from scratch using raw ingredients only.

Dell’s purpose for the initiative is to reduce the time customers spend setting up workstations, servers, filers and system software, and installing cloud-native working environments. That frees up more time for analysis and AI model runs.

A Dell spokesperson said: “AI initiatives generally start small in the proof-of-concept stage but preparation and model training are often the most time-consuming portions of the job. Combining hardware and software, Dell’s new AI-ready solutions will hasten researchers’ ability to stand up AI applications efficiently.”

Dell EMC Isilon H400

David Frattura, a senior Dell technology strategist, details the eight AI reference bundles in this blog. The architectures encompass use cases such as machine learning, deep learning, artificial intelligence, high performance computing, data analytics, Splunk, data science and modelling.

The buzzword benefits are legion: deploy faster, achieve greater model accuracy, accelerate business value and more on your AI digital transformation journey.

SoftIron, the Ceph storage startup, raises $34m

SoftIron, the UK developer of Ceph storage hardware and software, has raised $34m in venture capital.

Some of the cash will be used to grow sales, product marketing, and support in North America, Europe and APAC. The company will spend the rest on developing its portfolio of data centre appliances based on open source software.

Phil Straw, SoftIron CEO, said today in an announcement quote: “We had nothing to lose when we started out, so we did the unthinkable and built our appliances from scratch to address what we saw as the new normal: a flexible, adaptable, open-source based, software-defined data centre. I’m proud to say we are now well on our way to being a full spectrum computer company.”

He added: “We are no longer just building a storage appliance; we are offering a coherent end-to-end solution that I believe will revolutionise the enterprise data centre.”

Harry Richardson, chief scientist at SoftIron, said the company has “spent the last few years flying under the radar, honing our vision and working hard to deliver it through genuine, cutting-edge technological wins. … We’ve got some truly great things in store for organisations looking to leverage open source and transition their mission-critical computing away from proprietary vendor lock-in.”

SoftIron technology

Ceph is open source storage software that supports block, file and object access protocols in one package.

SoftIron develops ARM CPU-powered HyperDrive appliances that operate as scale-out storage nodes behind an HD storage router front-end box which processes the Ceph storage requests. The company also supplies the Accepherator FPGA-based erasure coding speed-up card.

SoftIron’s Accepherator

SoftIron products include HyperSwitch, a top-of-rack switch that uses Microsoft’s open-source SONiC (Software for Open Networking in the Cloud) switch software. It also sells HyperCast, a task-specific 4K transcoding system for multi-screen, multi-format delivery.

SoftIron’s funding announcement specifically mentions task-specific appliances, so we can expect more appliances like HyperCast.

Background

SoftIron was founded in 2012 by California-based Phil Straw, Mark Chen and London-based exec chairman Norman Fraser.

SoftIron is a late starter and an apparently frugal spender among startups. We do not know its initial funding source but the company took in a $7m A-round in 2017, and then nothing more until today’s $34m B-round announcement. LinkedIn lists 39 employees.

SoftIron builds its hardware systems in-house in Newark, California.

Diamanti: K8s workloads run so much faster on our bare metal HCI system

Shipping container

Profile: Diamanti has built a bare metal Kubernetes container platform with persistent all-flash storage.

The California startup was founded in 2014 by three Cisco UCS server veterans and has taken $78m in four funding rounds.

It claims the Diamanti Enterprise Kubernetes Platform delivers the lowest total cost of ownership for cloud-native execution hardware and software. Diamanti’s roadmap includes a software-as-a-service offering, additional hardware products and extended software.

The company’s basic idea is to run containerised apps in a clustered bare metal hyperconverged system. Network and storage virtualization are offloaded from the main CPU complex to a subordinate smart PCIe card running CNI and CSI software.

It says the technology is more efficient than a server with a virtualization layer that runs applications in virtual machines, each with its own operating system clogging up CPU cycles.

Hardware

According to Diamanti, its D20 hyperconverged system achieves sub-100 microsecond read and write latency and can perform up to 1 million IOPS. The firm claims the D20 is orders of magnitude more performant than running containers on virtual machines.

GPUs can be added to the D20 as a go-faster option, making it a better fit for AI and machine learning workloads. The GPU configurations support Nvidia cards with the NVLink cross-connect.

Also, Diamanti offers the D20 RH system, which is Red Hat certified for running Red Hat’s OpenShift Container Platform.

The D20 uses the same basic hardware design as its predecessor, the D10. The system is built with 1U dual Xeon CPU nodes.

Diamanti D10 diagram.

A D20 node has 22, 32 or 44 cores and 192GB, 384GB or 768GB RAM. Direct-attached all-flash storage is 4 x 1TB, 2TB or 8TB Intel NVMe SSDs, providing 4TB, 8TB and 32TB configurations. There are 2 x 480GB SATA SSDs for the host Linux OS and Docker image storage.

Nodes are clustered using 4 x 10 GbitE via a single 40 GbitE QSFP+ connection per node. Container orchestration is via Kubernetes v1.3 and the container runtime system is Docker v1.13.

Software matters

A Diamanti deep dive, written by Sambasiva Bandarupalli, states: “Designing a storage system for a bare metal container platform presented an immediate advantage: it was possible to use a log-structured file system, which is ultimately friendly to flash in terms of prolonging its endurance. The algorithms we’ve designed allow for us to perform nearly-sequential writes of data in 4K blocks.

“By contrast, a virtualized stack doesn’t employ a log-structured file system. The hypervisor has to field and translate IO requests to specific storage drives, which takes time. As a result, writes are scattered, essentially becoming random writes.

“Our approach ensures that every IO that comes in goes through a fixed path, which enables us to guarantee deterministic latency for all types of IO (reads, writes, random or sequential).”
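To make the contrast concrete, here is a minimal, purely conceptual sketch of a log-structured write path. It is not Diamanti's implementation (the class name, block size and structure are our own assumptions), but it illustrates the idea in the quote: whatever logical address a container writes to, the data is appended as the next 4K block at the tail of a log, so the flash sees near-sequential writes. A real system would also need garbage collection to reclaim overwritten blocks.

```python
# Conceptual sketch of a log-structured write path, not Diamanti's code.
# Writes land as 4K appends at the log tail regardless of the logical address,
# keeping the underlying flash writes near-sequential.
BLOCK = 4096  # 4K blocks, as described in the quote above


class AppendOnlyLog:
    def __init__(self) -> None:
        self.log = bytearray()  # stands in for the flash media
        self.index = {}         # logical block address -> offset in the log

    def write(self, lba: int, data: bytes) -> None:
        assert len(data) == BLOCK
        self.index[lba] = len(self.log)  # remap the LBA to the current tail
        self.log += data                 # always a sequential append

    def read(self, lba: int) -> bytes:
        offset = self.index[lba]
        return bytes(self.log[offset:offset + BLOCK])
```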

D20 software provides per-container quality of service and includes mirroring between nodes, snapshots, backup and restore, and replication.

The company’s Diamanti Spektra software, currently in tech preview mode, provides hybrid cloud data management to integrate on-prem D20 clusters with the AWS, Azure and Google clouds. This raises the prospect of Diamanti D20 software running in those clouds and providing a single Kubernetes system across the on-premises and public cloud worlds.

Diamanti’s competition

To succeed against commodity hardware-based competitors, Diamanti needs to deliver significant speed and cost advantages with its hot box proprietary hardware.

451 Research, in a report published Feb 20 on Diamanti’s website, lists several established hyperconverged infrastructure players as competitors. They include VMware with vSphere 7, HPE’s Container Platform, NetApp’s NKS, Nutanix Karbon, and Cisco’s HyperFlex, with its Application Platform for Kubernetes.

In common with Diamanti, HPE Container Platform and Cisco HyperFlex support bare metal servers.

Smaller competitors include DataCore and LINBIT – which have their own hyperconverged systems – and MayaData, Portworx, Quobyte, StorageOS and Virtuozzo.

HPE releases urgent fix to stop enterprise SSDs conking out at 40K hours

Updated 28 March 2020.

HPE has told customers that four kinds of SSDs in its servers and storage systems may experience failure and data loss at 40,000 hours of operations.

The company said in a bulletin that the “issue is not unique to HPE and potentially affects all customers that purchased these drives.”

HPE issued a statement on 28 March 2020, which said: “HPE was notified by Western Digital of a firmware issue in a specific line of older end-of-life SanDisk SAS solid state drive (SSD) models used by select OEM customers. The defect causes drive failure after 40,000 hours of operation; no HPE customers are in danger of immediate failure. HPE is actively reaching out to impacted customers to provide updated firmware that addresses the issue.”

A Dell EMC urgent firmware update issued last month also mentioned SSDs failing after 40,000 operating hours and specifically identified SanDisk SAS drives. The update included firmware version D417 as a fix.

The fault fixed by the Dell EMC firmware concerns an Assert function which had a bad check to validate the value of a circular buffer’s index value. Instead of checking the maximum value as N, it checked for N-1. The fix corrects the assert check to use the maximum value as N.
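The firmware itself is not public, but the off-by-one described above is easy to picture. The sketch below is illustrative only (the buffer size and function names are our own assumptions, and the real code would be in C); it simply contrasts a check that treats N-1 as the maximum legal index value with the corrected check that allows N.

```python
# Illustrative sketch of the assert described above; not the actual firmware.
# N and the function names are assumptions for the example.
N = 16  # hypothetical maximum value for the circular buffer's index


def check_index_buggy(index: int) -> None:
    # Buggy check: treats N-1 as the maximum allowed value, so an index equal
    # to N, which the corrected check accepts, trips the assert.
    assert index <= N - 1, "circular buffer index out of range"


def check_index_fixed(index: int) -> None:
    # Fixed check: the maximum allowed value is N, as described above.
    assert index <= N, "circular buffer index out of range"
```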

Blocks & Files asked Western Digital, which acquired SanDisk in 2016, for comment. A company spokesperson said: “Per Western Digital corporate policy, we are unable to provide comments regarding other vendors’ products. As this falls within HPE’s portfolio, all related product questions would best be addressed with HPE directly.”

For HPE customers, SSD firmware version HPD7 is available to remedy the affected drives, which are:

  • EK0888JVYPN – HPE 800GB 12G SAS WI-1 SFF SC SSD – WI meaning write-intensive
  • EO1600JVYPP – HPE 1.6TB 12G SAS WI-1 SFF SC SSD
  • MK0800JVYPQ – HPE 800GB 12G SAS MU-1 SFF SC SSD – MU meaning mixed use
  • MO1600JVYPR – HPE 1.6TB 12G SAS MU-1 SFF SC SSD
Image showing MO1600JVYPR – HPE 1.6TB 12G SAS MU-1 SFF SC SSD. The supplier part number on this image corresponds to a SanDisk SSD.

The drives will suffer data loss entailing recovery from a backup, unless they are arranged in a RAID scheme that provides protection against drive failure. If the RAID scheme uses more than one affected drive, you should consider them all at risk, since drives installed together are likely to hit 40,000 operating hours in quick succession.

Forty-thousand hours is equivalent to four years, 206 days, 16 hours. This implies that the first affected drives were switched on in late 2015. 
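The conversion is easy to verify, assuming 365-day years with no leap days:

```python
# Convert 40,000 power-on hours into years, days and hours (365-day years).
hours = 40_000
days, rem_hours = divmod(hours, 24)   # 1666 days, 16 hours
years, rem_days = divmod(days, 365)   # 4 years, 206 days
print(f"{years} years, {rem_days} days, {rem_hours} hours")  # 4 years, 206 days, 16 hours
```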

Are machine-learning-based automation tools good at storage management?

face popping out of computer chip

Survey: We hear a lot these days about IT automation. Yet whether it’s labelled intelligent infrastructure, AIOps, self-driving IT, or even private cloud, the aim is the same.

And that aim is: to use the likes of machine learning, workflow automation, and infrastructure-as-code to automatically make changes in real-time, eliminating as much as possible of the manual drudgery associated with routine IT administration.

Are the latest AI/ML-powered intelligent automation solutions trustworthy and ready for mainstream deployment, particularly in areas such as storage management?

Should we go ahead and implement the technology now on offer?

This is the subject of our latest reader survey, and we are eager to hear your views.

Please complete our short survey here.

Your responses will be anonymous and your privacy assured.

Inspur leapfrogs Huawei to take second place in SPC-1 benchmark

Inspur has claimed the second spot in the SPC-1 benchmark, behind Fujitsu. Prior to the latest results, the Chinese systems vendor’s best SPC-1 performance was 15th place.

Inspur used an all-flash AS5600G2 array for the tests, with eight dual-CPU controllers and 16 flash enclosures filled with 400 x 1.92TB SSDs.

SPC-1 tests business-class, block-access workloads with data that can be compressed and/or deduplicated. Inspur’s AS5600G2 is rated at 7,520,358 SPC-1 IOPS. The Fujitsu Eternus DX8900 S4 scores 10,001,522 IOPS. Huawei’s OceanStor 1800 V3, the previous second-placed system, is rated at 7,000,565 IOPS.

Inspur AS5600G2

In price per thousand IOPS terms, the Eternus is $644.16, the Inspur is much cheaper at $386.50, and Huawei’s OceanStor 1800 is $376.96.

At the moment SPC-1 looks like a benchmark arena for Chinese vendors to strut their stuff. NetApp is the sole US vendor in the top 10 SPC-1 charts. The company’s A800 ranks tenth, with 2,401,171 SPC-1 IOPS and a price-performance rating of $1,154.53. This is roughly three times more expensive than the Inspur system.
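For reference, here are the quoted results side by side, together with a quick check of the price-performance comparison. The figures are as published; the ratio calculation is ours.

```python
# SPC-1 results quoted above: (SPC-1 IOPS, price per thousand IOPS in $).
results = {
    "Fujitsu Eternus DX8900 S4": (10_001_522, 644.16),
    "Inspur AS5600G2":           (7_520_358, 386.50),
    "Huawei OceanStor 1800 V3":  (7_000_565, 376.96),
    "NetApp AFF A800":           (2_401_171, 1_154.53),
}

for system, (iops, dollars_per_kiops) in results.items():
    print(f"{system}: {iops:,} SPC-1 IOPS at ${dollars_per_kiops:,.2f}/KIOPS")

# NetApp's price per thousand IOPS relative to Inspur's
print(round(1_154.53 / 386.50, 2))  # ~2.99, i.e. roughly three times the cost
```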

NetApp posted that result in July 2018. The company launched the top-end AFF A700 in October 2019 and it would be interesting to see how that performs on SPC-1.

Pure Storage loses third CMO in three years

Robson Grieve, Pure Storage chief marketing officer, has left the company after 14 months in post. He was the company’s third CMO in three years.

Pure is a fast-growing all-flash storage company. Our contact files for Pure list three CMOs and one fill-in person since 2017:

  • Robson Grieve – CMO from January 2019. Resigned; leaving in early April 2020.
  • Lisa Adam – temporary marketing lead. VP Portfolio and Solutions Marketing, moved on to VP Service Provider Strategy and Marketing. Left in 2019 after less than a year.
  • Todd Forsythe – CMO from May 2017. Left August 2018. Joined Veritas in March 2019.
  • Jonathan Martin – CMO from 23 June 2015. Left June 2017.

We hear from a source close to Pure that none of the original marketing team put together by VP Strategy Matt Kixmoeller back in 2015 is still with the company.

Pure Storage sent us this statement: “Robson Grieve will be moving on from Pure in early April to pursue another opportunity. We appreciate all he has done for the company since joining and we wish him all the best. A search for his replacement is underway. Pure is stronger than ever and we are confident we have the right leadership team in place to execute our long-term vision.”

We understand on background that it was Grieve’s own decision to leave.

Zerto 8 extends VMware backup and DR to Google Cloud Platform

Zerto has added support for backup and disaster recovery for VMware workloads on Google Cloud Platform via its latest software release.

Zerto 8 makes its official debut tomorrow, March 24, in a webinar at 11am ET. A blog post provides a sneak, watch-the-webinar preview.

Google is prepping a VMware Engine and will announce VMware-as-a-service this summer, according to Zerto, which will provide a backup and DR facility for the service. The backup and disaster recovery supplier already supports VMware workloads on AWS and Azure.

Zerto said in the company blog: “You’ll be able to protect and migrate your native VMware workloads to your dedicated VMware environment in Google Cloud using the same policies and configurations that you utilize for your current infrastructure, just in the cloud. Stay tuned for Google’s public announcement.”

Zerto announced v7.0 in April last year; it combined backup and disaster recovery, using hypervisor-based replication and journalling for short- and long-term retention.

Zerto 8.0 features include:

  • Deeper AWS and Azure integration for more automation and orchestration,
  • Support for VMware VVOLs, with VVOLS being usable for protection and recovery,
  • New user interface and app consistency across short-term (backup) and long-term (DR) workloads,
  • Added analytics to check on the state of backup and DR files and issue alerts if boundary conditions are exceeded, for example to minimise the risk of failed retention jobs due to lack of storage capacity.

The Zerto v8.0 Release Notes provide more details, on Zerto Analytics for example, but they say nothing about Google Cloud Platform support. We note there is no support for Hyper-V virtual machines on the Google cloud.