
HPE launches Ezmeral containerised app framework

HPE wants to be part of the picture when businesses develop and deploy apps and workflows with AI and ML functionality in containerised environments and has announced a newly created software portfolio – Ezmeral – to that end. 

HPE is another mainstream vendor looking to grab a share of the emerging Kubernetes container orchestration territory.

HPE’s Kumar Sreekanti, CTO and head of software, provided a canned overarching quote: “The HPE Ezmeral software portfolio fuels data-driven digital transformation in the enterprise by modernising applications, unlocking insights, and automating operations.” Yes, OK.

“Our software uniquely enables customers to eliminate lock-in and costly legacy licensing models, helping them to accelerate innovation and reduce costs, while ensuring enterprise-grade security.” (That refers to an open source piece of the news.)

HPE is getting on a Kubernetes train with other passengers already onboard. NetApp also has a Kubernetes framework initiative with its Project Astra, and MayaData has its Kubera Kubernetes management service.

Ezmeral brand

Ezmeral is wrapped in a trendy digital transformation marketing framework. The brand name is derived from esmeralda, the Spanish word for emerald, and comes with lots of high-flown HPE quotes along the lines of: “The transformation of raw emerald to a cut and polished stone to reveal something more beautiful and valuable is analogous to the digital transformation journey our customers are on.”

The marketing bods have clearly linked emeralds, which are green, with GreenLake, HPE’s all-singing, all-dancing product subscription deal.

Ezmeral portfolio

Back in the real world, HPE has combined its acquired BlueData and MapR assets with open source Kubernetes-related software to build the Ezmeral portfolio. This applies across IT environments ranging from edge locations to data centres and the public cloud, and includes:

  • Container orchestration and management, 
  • AI/ML and data analytics, 
  • Cost control, 
  • IT automation and AI-driven operations, 
  • Security

The Ezmeral Container Platform and Ezmeral ML Ops are two software items announced as part of the portfolio.

Ezmeral Container Platform

This enables customers to manage multiple Kubernetes clusters with a unified control plane, and use a MapR distributed file system for persistent data and stateful applications. Interestingly, HPE says customers can run both cloud-native and non-cloud-native applications in containers without having to rewrite the legacy apps.

This involves use of the HPE-contributed KubeDirector open source project which provides the ability to run non-cloud native stateful applications on Kubernetes without modifying the code. 

But it is no silver bullet, being focused on distributed stateful applications. KubeDirector enables data scientists familiar with data-intensive distributed applications such as Hadoop, Spark, Cassandra, TensorFlow, Caffe2, etc. to run these applications on Kubernetes – with a minimal learning curve and no need to write Go code.

OK – so forget running the broad mass of legacy apps in containers.

Ezmeral ML Ops software uses containerisation to introduce a DevOps-like process to machine learning workflows. HPE claims it will accelerate AI deployments from months to days. The BlueData part of this refers to using container technology for Big Data analytics and machine learning.

The Ezmeral Container Platform and Ezmeral ML Ops products will be available as software and also delivered as a cloud service through GreenLake.

Let the SAN shine: Nebulon drops cloak, reveals DPU-enhanced, cloud-managed server SAN

Startup Nebulon has come out of stealth to reveal scale-out, on-premises, server SAN, block-based storage using commodity x86 servers bolstered with storage processing offload cards, along with a data management service delivered from its cloud.

It claimed its so-called cloud-defined storage (CDS) is less pricey than equivalent all-flash SAN array storage and doesn’t use up CPU capacity in its host servers – a disadvantage it says affects both software-defined storage (SDS) and hyperconverged infrastructure (HCI) systems using commodity server chassis.

Siamak Nazari.

A prepared quote from Siamak Nazari, co-founder and CEO of Nebulon, said: “Cloud-Defined Storage delivers global insights, AI-based administration and API-driven automation making enterprise-class storage a simple attribute of the data centre fabric with self-service infrastructure provisioning and storage operations as-a-service for application owners.”

Nebulon’s storage is embodied in its Storage Processing Unit, an add-in, FH-FL PCIe card, with an 8-core, 3GHz ARM CPU plus encryption/dedupe offload engine, that is layered in front of a host server’s SAS or SATA SSDs, and connects to them via a triple SerDes connector. Nebulon co-founder and COO Craig Nunes told B&F the SPU will support “NVMe when it becomes generally available in the early Fall.”

The SPU card, which looks to upstream system software like an HBA or RAID card, presents block volumes to applications running in the servers. Up to 32 servers can be clustered in an nPod with the SPUs connected by a 10 or 25gigE network. There is a separate 1gigE port for management from the cloud.

Nebulon SPU

Data services provided by the SPU include deduplication, compression, encryption, erasure coding, snapshots and mirroring. There is no GPU on the card.

The SPU contains 32GB of NVRAM to speed writes, and reads come straight from the SSDs. NVRAM write caching means the SPU can turn random writes into sequential writes to the SSDs, thus helping to lengthen the drives’ endurance. Data is not striped across SPUs.
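As a toy illustration of that coalescing behaviour – this is a sketch, not Nebulon code, and the buffering details are assumptions – a write cache can acknowledge random-offset writes once they are safely buffered, then destage them to the SSDs in offset order:

```python
# Toy model of an NVRAM write cache turning random writes into
# sequential destaging. Purely illustrative; not Nebulon's design.
class NvramWriteCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.pending = {}  # block offset -> data buffered in NVRAM

    def write(self, offset, data):
        """Acknowledge the write once it is buffered in persistent NVRAM."""
        self.pending[offset] = data
        return "ack"

    def flush(self):
        """Destage buffered blocks in ascending offset order, so the SSDs
        see one sequential pass instead of scattered random writes."""
        ordered = sorted(self.pending.items())
        self.pending.clear()
        return [offset for offset, _ in ordered]

cache = NvramWriteCache(32 * 2**30)   # 32GB, matching the SPU's NVRAM size
for offset in [4096, 0, 12288, 8192]:  # writes arrive in random order
    cache.write(offset, b"x")
print(cache.flush())  # -> [0, 4096, 8192, 12288]
```

Fewer, larger, in-order writes are gentler on NAND, which is why this style of caching helps drive endurance.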

Initially 4TB SSDs are supported, with up to 24 supported by a single SPU, meaning 96TB, and a maximum capacity of 3,072TB across the 32 SPUs in an nPod. There is a single, all-flash storage block-access tier.
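The capacity arithmetic is easy to check:

```python
# Verifying the article's capacity figures.
ssd_tb = 4          # initial SSD size supported
ssds_per_spu = 24   # drives behind one SPU
spus_per_npod = 32  # SPUs per nPod

per_spu_tb = ssd_tb * ssds_per_spu    # 96TB behind a single SPU
npod_tb = per_spu_tb * spus_per_npod  # 3,072TB across a full nPod
print(per_spu_tb, npod_tb)  # 96 3072
```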

The performance on reads is slower than if NVMe SSDs were supported. Nunes told us: “At the device level, SATA latencies can be many times [that of] NVMe. However, when measured with the enterprise data services software stack, the latencies at the application level will be in the 300us to 400us range and acceptable to the cloud native, container and hypervisor use cases we are targeting.”

OEM channel

Nebulon sells its SPU and ON service through an OEM channel, with both HPE and Supermicro signed up so far, and a third OEM likely. An HPE configuration is based on the ProLiant DL380 Gen10 server in a 2U x 24 slot chassis, while Supermicro uses its Ultra line of servers for Nebulon storage.

That means that actual server hardware configurations, including drive types and capacities come from the OEMs. So too do purchase and/or subscription arrangements.

Nebulon is pitching its product, through its OEMs, at mid-to-large enterprises needing block storage at petabyte scale and up, and wanting to increase storage and app server efficiency and reduce acquisition and management costs.

The card can be set up using application templates to optimise it for different workloads, such as VMware, MongoDB and Kubernetes. Nebulon storage supports any OS or hypervisor in the host server. NebOS upgrades are non-disruptive.

SPU management

The SPU runs NebOS software and is managed through a Nebulon ON SaaS service hosted in a Nebulon cloud which uses multiple CSPs and multiple regions for high-availability. It is updated through the ON service.

Nebulon says the ON service manages fleets of Nebulon systems at scale. These systems send telemetry messages to the ON Cloud; tens of thousands of storage, server and application metrics per hour. These are stored in a distributed time series database. 
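As a toy model of that pipeline – our sketch, not Nebulon ON code, with invented metric names – telemetry samples can be keyed by system and metric and appended in time order, the core shape of a time series store:

```python
from collections import defaultdict

# Toy time-series ingest: one list of (timestamp, value) samples per
# (system, metric) key. Illustrative only; not Nebulon ON's schema.
series = defaultdict(list)

def ingest(system_id, metric, timestamp, value):
    """Append one telemetry sample to its (system, metric) series."""
    series[(system_id, metric)].append((timestamp, value))

ingest("npod-1", "read_latency_us", 1000, 310)
ingest("npod-1", "read_latency_us", 1060, 395)
print(series[("npod-1", "read_latency_us")])  # ordered samples for one metric
```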

ON includes an AIOps function which looks at the telemetry, analyses it in real time, and responds to adverse events in seconds by re-jigging a Nebulon system to respond to changing operational patterns. It also provides storage usage metrics over time and predictive analytics.

Nebulon ON dashboard

Customer admin staff can self-provision Nebulon storage through an ON dashboard and ON can deliver automated updates across a Nebulon fleet. 

Replication will be delivered as a future upgrade, possibly in the next software release. We expect stretch cluster support in a future release as well.

The SPU is a DPU

Nebulon said the SPU card is an example of a DPU (Data Processing Unit) – a dedicated storage or networking processor intended to offload storage and/or network processing from a host server’s CPU so that it can concentrate on application processing.

Examples of DPU supply and use include Diamanti, Fungible, Pensando, Nvidia (SmartNICs) and AWS for in-house use (Nitro). 

We might say Nebulon is an HCI system on DPU steroids; so much so that Nebulon claims it is no longer an HCI system at all, but an ultraconverged system.

The SPU is a gateway to the storage for its host CPU. If it fails, the host server loses access to its storage. A server can have other storage installed which is not accessed through the SPU. A loss of internet connectivity will not prevent an SPU from functioning.

Competition

The Nebulon system, being a kind of super-server SAN system, will compete with Dell’s VxRail and Nutanix systems. It will also compete with disaggregated HCI systems such as Nimble’s dHCI and Datrium.

Inside HPE the Nebulon storage competes with or complements SimpliVity HCI and Nimble dHCI systems, and Primera, 3PAR and Nimble arrays. It also competes with or complements other HPE storage partners providing block array services, such as Datera.

Nebulon is not supported by HPE’s cloud-delivered, predictive analytics Infosight management or its GreenLake subscription service, but it is early days.

Nebulon’s Craig Nunes says more than half of HPE’s servers are sold to customers with non-HPE storage. The Nebulon storage, which should cost less than external array storage and uses fewer host CPU resources than server SAN alternatives, gives HPE a win-back opportunity in his view.

Regarding Pensando Nunes tells us: “Each Nebulon SPU has an 8-core 3GHz processor and 32GB of battery backed NVRAM, and runs the entire software stack you might find on a 3PAR or Pure Storage array controller.  As a compare, Pensando supports 8GB of RAM—enough for the network/security functionality but not enough to run a full storage SW stack on the card.”  

Executive Chairman David Scott says Nebulon and Pensando are complementary: “I could easily see some use cases where a customer has both a Pensando DSC card and a Nebulon SPU in the same application server(s). :)”

Comment

Nebulon is entering a new and undeveloped market: the DPU-enhanced server SAN market with cloud-delivered, AIOps management. Its competitors at the OEM level are suppliers such as Pensando and the other DPU suppliers.

At the end-user level its competitors are, well, legion, and existing SDS and HCI vendors will say Nebulon is simply another SDS or HCI vendor, one using proprietary hardware and software to give its host server chassis a performance kick. If customers accept that positioning then suppliers will compete on speeds, feeds, support – the usual stuff.

If customers see Nebulon as a new class of server SAN, then the OEM+Nebulon offer will be differentiated, although this will require a marketing and sales push.

SmartX produces Optane DIMM DAX cached hyperconverged system

SmartX has announced what is probably the fastest hyperconverged infrastructure appliance in the world, if the speeds it has reported are verified.

The Chinese hyperconverged vendor has launched a Halo P product using Optane DIMM caching to push out 1.2 million IOPS with 100μs latency and 25GB/sec bandwidth from a three-node Lenovo-based system, using NVMe SSDs.

Kyle Zhang, co-founder and CTO of SmartX, provided a quote: “We have seen that the introduction of new storage technologies can greatly improve the performance of HCI system and address the real-workload challenges for critical applications. In the future, SmartX will collaborate with Intel and other leading industry leaders to introduce more advanced technologies to lead the next-level innovations in HCI.”

SmartX Optane diagram

How does SmartX get latency down to that level? It has gone the extra mile with its SMTX OS and uses the Optane DC Persistent Memory DIMMs in byte-addressable App Direct (DAX) mode. This persists written data (VM IO) in any node’s Optane DIMM memory cache. Cached data is also replicated to the other nodes using the RDMA protocol, which reduces the latency incurred before the write is acknowledged.

Cache data is written down to SSDs when it cools, and promoted back to Optane if it is re-accessed.
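A minimal sketch of the write path as described – this is illustrative Python, not SMTX OS code; plain lists stand in for pmem persistence and RDMA replication:

```python
# Toy model of the SmartX-style write path: persist locally in the
# Optane cache, replicate to peer nodes, then acknowledge the VM IO.
def handle_vm_write(data, local_pmem, peers):
    """Persist a VM write locally and on peers before acknowledging."""
    local_pmem.append(data)  # byte-addressable persist, App Direct mode
    for peer in peers:
        peer.append(data)    # stand-in for RDMA replication to other nodes
    return "ack"             # only acknowledged once replicas are durable

node_a, node_b, node_c = [], [], []
print(handle_vm_write(b"io", node_a, [node_b, node_c]))  # ack
```

The point of replicating before acknowledging is that the write survives a node failure, while RDMA keeps that replication step off the critical-path CPU and fast enough not to dominate the latency budget.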

The SMTX OS uses the byte-addressable feature of persistent memory to redesign its journal, using 64 byte alignment instead of 4KB (SSD-type) alignment, and so reducing the problem of write amplification with small (sub-4KB) journal entries.
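A quick calculation shows why alignment matters for small entries; the 256-byte entry size here is a hypothetical example, not a SmartX figure:

```python
import math

def journal_write_bytes(entry_bytes, alignment):
    """Bytes actually written for one journal entry at a given alignment:
    the entry is rounded up to a whole number of aligned units."""
    return math.ceil(entry_bytes / alignment) * alignment

entry = 256  # a small, sub-4KB journal entry (hypothetical size)
wa_ssd = journal_write_bytes(entry, 4096) / entry  # 4KB-aligned media
wa_pmem = journal_write_bytes(entry, 64) / entry   # 64B-aligned pmem
print(wa_ssd, wa_pmem)  # 16.0 1.0
```

A 256-byte entry forces a full 4KB write on SSD-style alignment (16x amplification) but fits exactly in four 64-byte units on persistent memory (no amplification).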

Also, storage virtualization is devolved from the virtual machine (VM) to the storage software stack, through an SMTX ELF boost mode, to avoid performance overhead caused by I/O requests passing through the VMs. Memory is shared by the VM and the storage system to avoid memory replication on the IO path.

SmartX IO path diagram.

RDMA over Converged Ethernet (RoCE) is used to accelerate network IO requests with the protocol operating on the network card.

SmartX claimed its Halo P appliance is powerful enough for OLTP database and machine learning workloads. It can also support more virtual machines than its raw capacity might suggest.

The company has an office in Palo Alto and claims it has the biggest hyperconverged system deployment in China – China Unicom’s “Wo Cloud” – as well as customers in finance, manufacturing and real estate. It has partnerships with Citrix, Mellanox, Commvault and Rancher in the fields of servers, high-speed networks, virtualization, disaster recovery, cloud computing and containers.

Qumulo litmus: Firm Shifts file format for AWS

Qumulo

Scale-out filesystem supplier Qumulo has launched Shift for AWS. This moves files from any Qumulo on-premises or public cloud cluster into Amazon Simple Storage Service (S3), transforming the files into natively accessible objects and buckets.

Once in the AWS cloud, this object data can be stored as an archive or used by AWS-resident applications and services such as Sagemaker and Rekognition. It cannot be automatically moved back to Qumulo though. Updated files are written as new objects.

Barry Russell, SVP and GM of cloud at Qumulo, emitted a canned quote: “With Qumulo Shift, customers can now move data faster and no longer worry about being stuck in legacy proprietary file data formats … . Leveraging our work with AWS, we are now able to integrate with Amazon S3 natively and enable … workloads to use cloud applications and services at any scale in Qumulo or S3.”

Qumulo says users with large video projects can move the files into AWS and burst rendering jobs to thousands of AWS compute nodes. Enterprises can migrate large datasets to AWS, such as data lakes that could exceed the scale capabilities of other file system products.

Blocks & Files suggests that AWS’s own file services, such as EFS, could perhaps be supported by Qumulo, using the NFS format. EFS doesn’t support NFSv2 or NFSv3, but does support NFSv4.1 and NFSv4.0, except for certain features.

Qumulo’s Molly Presley, Global Product Marketing Head, disagrees. If Qumulo users want file-level operations in the cloud they may as well spin up a Qumulo file system in AWS. Also, Amazon’s EFS doesn’t support SMB, nor volumes as large as Qumulo’s. Basically it’s not a good idea.

Qumulo has gained an Amazon Well-Architected designation, which means customers can reliably run Qumulo file services in cloud-native form on AWS.

The Shift product is included at no charge with an updated Qumulo file system which will be available in July this year. 

Lightbits Labs adds Kubernetes table stakes: CSI support

All-flash array startup Lightbits Labs has launched a software update that provides NVMe-based persistent volume storage for Kubernetes.

It says LightOS v2.0 provides virtual NVMe volumes to Kubernetes, delivering low latency and high performance. It also provides clustering and high availability via target-side storage server failover. This is done via the standard Container Storage Interface (CSI) plug-in route.

Kam Eshghi.

Kam Eshghi, chief strategy officer at Lightbits Labs added: “At cloud scale, everything fails. LightOS 2.0 is the industry’s first NVMe/TCP scale-out clustered storage solution – protecting against data loss and avoiding service interruptions at scale in the presence of SSD, server, storage, or network failures.”

Lightbits distinguishes its array with NVMe/TCP support, providing NVMe-oF access across Ethernet TCP/IP network links.

Many, many suppliers provide persistent volume support for K8s – Portworx, StorageOS, Dell with PowerStore, NetApp, VAST Data and more – it is becoming table stakes in the containerisation storage game.

Lightbits claims to be different from the pack because it is clustered and supports rapid node migration, workload rebalancing, or recovery from failure without copying data over the network. If any compute node in the network fails, data is moved virtually by pointing it to another container.

LightOS 2.0 is automatically optimised for I/O intensive compute clusters, such as Kafka, Cassandra, MySQL, MongoDB, and time series databases. Each storage server in the cluster can support up to 64,000 namespaces and 16,000 connections.

LightOS v2.0 supports Kubernetes v1.13 and v1.15 through v1.18 and later, for any volume size, number of volumes or Kubernetes cluster size. As well as the CSI interface, it also allows stateful containers via a Cinder plugin. The v2.0 software is now available.

NAND here’s your storage digest, featuring Samsung, Pavilion Data and more

We start off today’s roundup with news about Samsung facing production problems with its 128-layer 3D NAND. We also take a look at a Sony business using a fast Pavilion array for capturing the visual image points in a 3D space over time.

Samsung and string-stacking

Wells Fargo senior analyst Aaron Rakers has said Samsung may be facing production yield challenges with its gen 6, 128-layer V-NAND (3D NAND) technology. This is a single stack technology whereas Samsung’s competitors are building 100+ layer 3D NAND dies by stacking smaller layer-count blocks on top of each other. This is called string-stacking.

Kioxia and Western Digital’s BiCS 5 112-layer die uses a pair of 56-layer stacks.

Samsung’s Gen 6 is 128 layers and its Gen 7 is 166 layers.

Apparently a single stack etch through 128 layers is taking twice as long as the same etch through 96 layers. The etch creates a conductive vertical channel through the layers. If the yield from the wafers is too low, then Samsung’s costs go up.
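The economics are straightforward: fewer good dies per wafer means a higher cost per die. All the figures below are illustrative assumptions, not Samsung numbers:

```python
# Hypothetical wafer economics to show why yield drives cost.
wafer_cost = 6000    # $ per processed wafer (assumed figure)
dies_per_wafer = 700 # candidate 128-layer dies per wafer (assumed figure)

def cost_per_good_die(yield_fraction):
    """Cost of each sellable die at a given wafer yield."""
    return wafer_cost / (dies_per_wafer * yield_fraction)

print(round(cost_per_good_die(0.90), 2))  # ~9.52 at 90% yield
print(round(cost_per_good_die(0.60), 2))  # ~14.29 at 60% yield
```

A slide from 90 to 60 per cent yield pushes the per-die cost up by half, which is the squeeze a slow, difficult 128-layer etch creates.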

Rakers suggested Samsung could change to string stacking with its Gen 7, 160-layer 3D NAND die. String-stacking could cost up to 30 per cent more than single-stacking, so Samsung will be motivated to get its single stack etching working.

Sammobile reports Samsung has set up a task force to work through the yield problems.

Pavilion and Sony 

Sony Innovation Studios has picked Pavilion Data’s Hyperparallel Flash Array (HFA) for storing data from real-time volumetric virtual production with its AtomView software. Volumetric capture is a performance- and latency-hungry application needed for the rendering of 3D virtual and mixed environments.

Volumetric capture captures the visual image points in a 3D space (volume) over time and in minute detail. AtomView is point-cloud rendering, editing and colouring software that enables content creators to visualise, edit, colour correct and manage volumetric data. It can combine multiple volumetric data sets captured from different angles, producing a single output for use in virtual film productions, video games, and interactive experiences with true photoreal cinematic quality.

The deployment was in partnership with Alliance Integrated Technologies and Pixit Media’s PixStor product. 

Ben Leaver, CEO of Pixit Media, said: “Volumetric capture brings a new paradigm of size and information capable of being stored and requiring the highest performance in render speeds. With an approach that mimics a director class core switch architecture, Pavilion’s approach to multi-line card, multi-controller design means PCIe speeds to each drive, and massive bandwidth to the network over a low latency RDMA protocol.”

Billy Russell, CTO at Alliance IT, said: “It was clear that a 100G ethernet infrastructure was needed to deliver the data. We also wanted the ability to scale in the future to 200 and 400G Ethernet and support migration to tier 2 or cloud as data ages off.”

The cloud migration uses an Ngenea product. Pavilion said it has “multiple” deployments in the Media and Entertainment vertical.

Shorts

Data protector Acronis has made Acronis Cyber Protect, its cloud offering through service providers, available as a beta version, to deploy on-premises.

Acronis has also struck yet another sports sponsorship deal. This time it’s with AFC Ajax, the Dutch professional football club. The club has yet to schedule its first post-COVID match.

IBM’s Cloud Pak for Data 3.0 is a data and AI platform that containerises multiple offerings for delivery as microservices and runs on the Red Hat OpenShift Container Platform. It includes Actifio’s Virtual Data Pipeline (VDP) to provision and refresh virtual test environments in minutes, enabling up to a 95 per cent storage capacity saving compared with not using VDP.

Enterprise data cataloguer Alation is working with Databricks to provide data teams with a platform to identify and govern cloud data lakes; discover and leverage the best data for data science and analytics; and collaborate on data to deliver high quality predictive models and business insights.

NoSQL database supplier DataStax today announced the private beta of Vector, an AIOps service for Apache Cassandra. Vector continually assesses the behaviour of a Cassandra cluster to provide developers and operators with automated diagnostics and advice.

Recursion, a digital biology company industrialising drug discovery through the combination of automation, AI and machine learning (ML) capabilities, is using DDN EXAScaler ES400NV and ES7990X parallel filesystem appliances that were later scaled to 2PB of capacity for staging ML models. An all-flash layer is employed as a front-end to the file system supported by spinning disk. The first 64K of each file is stubbed to this layer, which then accelerates access to the first part of the data before streaming the rest to spinning disk.

Data protector Druva has received an NPS score of 88. Net Promoter Scores (NPS) range from -100 to 100, meaning 88 is a high positive score.

Google has announced the beta launch of Filestore High Scale, a GCP file storage product, which includes Elastifile’s scale-out file storage capability. Google completed its acquisition of Elastifile in August 2019. The Filestore High Scale tier adds the ability to deploy shared file systems that can scale-out to hundreds of thousands of IOPS, 10s of GB/s of throughput, and 100s of TBs.

Komprise has claimed it saw 400 per cent revenue growth Y/Y in 2020’s first quarter. It also added DataCentrix and Vox Telecom as resellers in South Africa.

Composable systems technology developer Liqid has signed up Climb Channel Solutions to distribute Liqid products.

In-memory database supplier MemSQL has announced v7.1 of its software. This delivers SingleStore, an extension of MemSQL’s columnstore technology that includes support for indexes, unique keys, upserts, seeks, and fast, highly selective, nested-loop-style joins. It also provides fast disaster recovery failback, MySQL language support and the ability to back up data incrementally to more environments: Amazon S3, Azure Blob Store, and Google Cloud Platform.

Netlist announced that the U.S. Court of Appeals for the Federal Circuit (Federal Circuit) has affirmed the U.S. Patent Trial and Appeal Board’s (PTAB) decision upholding the validity of Netlist’s U.S. 7,619,912 (‘912) patent. This was a win over Google, which has used Netlist technology described in the patent. The way is clear for some kind of money flow from Google to Netlist, potentially in the multi-million dollar area.

Nutanix has added capabilities to its Desktop as a Service (DaaS) solution Xi Frame. These include enhanced onboarding for on-premises desktop workloads on Nutanix AHV, expanded support for user profile management, the ability to convert Windows Apps into Progressive Web Apps (PWA), and increased regional data centre support to 69 regions across Microsoft Azure, Google Cloud Platform and Amazon Web Services (AWS).

Entertainment and media workflow object storage supplier Object Matrix says its products now support the recently launched Adobe Productions workflow for Adobe Premiere Pro.

Telecoms operator BSO announced the launch of an Object Storage product in public cloud mode, called BSO.st. The tech is based on the software-defined storage developed by the French company OpenIO.

PlanetScale announced the beta release of PlanetScaleDB for Kubernetes, which allows organisations to host their data in their own network perimeter and deploy databases with just a few clicks using the PlanetScale control plane and operator. PlanetScaleDB for Kubernetes is a fully managed MySQL compatible database-as-a-service for companies looking to deploy distributed containerised applications.

HCI supplier Scale Computing has added Mustek as a distribution partner in South Africa and Titan Data Solutions as a distributor in the UK.

Object (and file) storage supplier and orchestrator Scality announced an investment in Fondation Inria, a French national research institute for digital sciences. Scality is bringing both financial backing and collaboration to help support multi-disciplinary research and innovation initiatives in mind-body health, precision agriculture, neurodegenerative diagnostics, and privacy protection.

Cloud data warehouser Snowflake has announced the launch of its Snowflake Partner Network (SPN), an ecosystem of Technology and Services partners for customers.

Samsung-backed all flash key:value store startup Stellus Technologies laid off its entire sales and marketing department in April, according to a senior ex-employee. Stellus launched its first product at the beginning of February. So sales must presumably have been catastrophically bad for the entire sales and marketing team to be laid off.

ReRAM developer Weebit Nano is going to place circa $6.6 million worth of new shares via a two-tranche placement. It will also conduct a non-underwritten Share Purchase Plan to raise a further $500,000. The $7.1m cash will be used to complete its memory module development for the embedded memory market, transfer the tech to a production facility, and continue selector development at Leti for the standalone memory market. Some of it will also go to sales and marketing and general working capital.

Veeam says Veeam Backup for AWS v2 is generally available and Veeam has achieved AWS Storage Competency status. It supports changed block tracking (CBT) API to shrink backup windows. The product makes application consistent snapshots and backups of running Amazon EC2 instances without shutting down or disconnecting attached Amazon EBS volumes.

Veeam Backup for AWS can be implemented as a standalone AWS backup and disaster recovery system for AWS-to-AWS backup, or integrated with the Veeam Platform.

Veeam has announced new Veeam Availability Orchestrator v3 with full recovery orchestration support for NetApp ONTAP snapshots, a new Disaster Recovery Pack at a lower price, and the capability of automatically testing, dynamically documenting and executing disaster recovery plans.

Data warehouser Yellowbrick Data is offering multiple petabyte (PB) capacity on its new hybrid data warehouse 3-chassis configuration. It claims this provides unparalleled, single-warehouse capacity with support for 3.6PB of user data in a 14U rack form factor. This 3-chassis instance has a maximum node count of 45 in that 14U and also supports 45 concurrent, single-worker queries on one system.

The actual chassis product is the 2300 series. Each node delivers 36 vCPUs (2 vCPUs per physical core) and has 8 NVMe SSD slots. There are HDR, VHDR and EHDR models: High Density, Very High Density and Extremely High Density. The hardware differences are essentially the NVMe densities shipping on each node.
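The per-node arithmetic follows from the quoted figures; the per-slot number is our inference, assuming user data is spread evenly across the eight NVMe slots:

```python
# Deriving per-node and per-slot figures from Yellowbrick's quoted specs.
user_data_pb = 3.6  # quoted maximum user data in the 3-chassis config
nodes = 45          # quoted maximum node count in 14U
nvme_slots = 8      # NVMe SSD slots per node

tb_per_node = user_data_pb * 1000 / nodes  # 80TB of user data per node
tb_per_slot = tb_per_node / nvme_slots     # 10TB per slot, if spread evenly
print(tb_per_node, tb_per_slot)  # 80.0 10.0
```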

People

Dremio, which produces data lake software, has appointed Ohad Almog as VP of Customer Success, Colleen Blake as Vice President of People and Thirumalesh Reddy as VP of Engineering. The company recently raised $70m in a Series C round of funding.

Igneous co-founder and board member Kiran Bhageshpur has relinquished his CEO slot to board member Dean Darwin. VP Products Christian Smith has left Igneous and is now a storage business development person at AWS. B&F has put out feelers to find out what’s going on.

File lifecycle management supplier Komprise has appointed Clare Loveridge as VP EMEA Sales. She comes from ExaGrid and before that Cloudcheckr, Nimble Storage and Data Domain.

It’s harsh at the edge: Dell gives VxRail a boost with PCIe 4, K8S, Optane DIMM, GPU and ruggedisation

Dell has extended its VxRail hyperconverged infrastructure systems with support for AMD EPYC processors, PCIe Gen 4.0, Kubernetes, Optane, more GPUs and ruggedised deployments, making it more relevant to the edge.

The ruggedised systems form a new VxRail product type, the EPYC-based machine is a new specific E Series configuration, and the other additions apply to VxRail systems generally.

Tom Burns, SVP and GM for Integrated Products & Solutions at Dell, said in a canned statement: “With the new ruggedized VxRail systems, location and conditions don’t matter.” He’s not kidding.

There are five existing VxRail product flavours:

  • E Series – 1U/1Node with an all-NVMe option and T4 GPUs for use cases including artificial intelligence and machine learning
  • P Series – Performance-intensive 2U/1Node platform with an all NVMe option, configurable with 1, 2 or 4 sockets optimised for intensive workloads such as databases
  • V Series – VDI-optimised 2U/1Node platform with GPU hardware for graphics-intensive desktops and workloads
  • S Series – Storage dense 2U/1Node platform for applications such as virtualised SharePoint,  Exchange, big data, analytics and video surveillance
  • G Series – Compute dense 2U/4Node platforms for general purpose workloads.

Get rugged

The ruggedised VxRail boxes are a sixth variant: the D Series comes in a 1U short depth – 20-inch – box that can operate at an altitude of up to 15,000 feet [2.8 miles], sustain a 40G operational shock (all-flash model) and operate within a temperature envelope of 5 to 131 degrees Fahrenheit [-15° Celsius to 55° Celsius], withstanding the extremes for up to eight hours. They also resist sand and dust ingress, claimed Dell.
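The unit conversions in the brackets are easy to verify:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

print(round(f_to_c(5)))        # -15 (C), the cold end of the envelope
print(round(f_to_c(131)))      # 55 (C), the hot end
print(round(15000 / 5280, 1))  # 2.8 (miles) for the 15,000ft altitude ceiling
```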

VxRail D Series systems come in all-flash [SAS SSD] and hybrid SSD/disk versions, and can be used outside data centres in industrial and external environments within the limits above – it can be harsh at the edge.

EPYC, Optane DIMM, Quadro GPUs and LCM

The E Series E665 system supports AMD EPYC processors, a first for VxRail, with up to 64 cores, and also PCIe Gen 4.0, making them powerhouses suitable, Dell suggested, for workloads with stringent performance needs, such as databases, unstructured data, VDI and HPC. Blocks & Files expects PCIe Gen 4 support to spread across the VxRail range in the next few quarters.

VxRail systems now support Optane Persistent Memory DIMMs, as well as Optane SSDs, and can deliver a claimed 90 per cent drop in latency and a sixfold IOPS increase, Dell said.

Dell said this was tested using an OLTP 4k workload on 4 x VxRail P570F systems with Optane persistent memory in app-direct mode versus a VxRail all-NVMe flash system. No actual numbers were revealed. The available data suggests Optane DIMM-enhanced VxRail systems are good for in-memory databases and other workloads needing low latencies.

The VxRail systems also support Nvidia Quadro RTX GPUs and vCPUs to accelerate rendering, AI, and graphics workloads. 

Dell has announced Lifecycle Management (LCM) software for VxRail which can streamline updates by running pre-upgrade health checks on demand. It produces continually-validated VxRail system states to reduce downtime, with non-disruptive upgrades.

DTCP and Kubernetes

The Dell Technologies Cloud Platform (DTCP) on VxRail supports Kubernetes clusters, with VMware Cloud Foundation (VCF) v4.0 and VxRail 7.0. VCF can operate with a Consolidated Design architecture, in which compute workloads are co-resident with management workloads in the management domain. This is said to be good for general-purpose, virtualised workloads.

Alternatively it can have a Standard Design architecture with independent management and workload domains. This enables it to run multiple traditional and cloud native workloads, such as Horizon VDI and vSphere with Kubernetes.

It should be possible to upgrade from Consolidated Design to Standard Design in a future release of VxRail software.

DTCP starts at the 4-node level. The latest VxRail HCI System Software update, Nvidia Quadro RTX GPU and Optane DC Persistent Memory options are available globally now. The VxRail D Series and the E Series with EPYC processors will be available globally on June 23 this year.

Comment

Dell VxRail HCI is a stronger offering with these additions. The Optane DIMM, PCIe 4.0 and new GPU support make it a low-latency, IOPS-munching machine suitable for compute and graphics-intense workloads. Enterprise data centre admins should appreciate the smoother and more certain update routines with the LCM software. 

Both Dell’s and its channel’s sales forces should also appreciate the ability to sell HCI systems with VMware and VxRail (HW + SW) working together in a neat package.

Clouds part, soon-to-decloak startup Nebulon’s plans emerge: to start with, hardware-assisted storage

Update: No QLC, no GPU. June 24, 2020

Storage startup Nebulon plans to come out of stealth on Wednesday with a hardware-assisted, scale-out, all-flash storage array featuring real-time AI Ops management and a cloud control plane.

Nebulon was started up by 3PAR veterans and first hove into view in November 2018. It revealed initial details about its technology in January this year, via this website.

The company said it plans to launch this week, on June 24, at HPE’s virtual Discover event. Blocks & Files has had a look at some of the material that’s come out from the firm itself as well as the HPE Discover agenda topics to arrive at a view of Nebulon’s tech.

SPU hardware

The hardware is based on on-premises, commodity-based, 2U, 24-slot X86 server boxes that hook up to accessing application servers across Ethernet. This HW forms a scale-out storage resource, to petabyte levels, that uses add-in PCI cards the firm has dubbed Nebulon Storage Processing Units (SPUs).

This is par for the course for some of the 3PAR veterans who work at the startup, who developed an ASIC to handle data reduction operations for the 3PAR array.

Nebulon’s principal product manager, Tobias Flitsch, wrote late last year: “Many modern workloads are built for shared-nothing environments for which these architectures [SAN, SDS, HCI] introduce unnecessary capacity and performance overheads. You know what I’m talking about if you’ve ever tried to run Apache Cassandra, Apache CouchDB, Apache Kafka, etc. on a shared storage array.”

B&F thinks the SPUs will compose Nebulon storage and may sub-divide the storage pool into shared and non-shared storage resources. It could then support both shared storage workloads and shared-nothing workloads such as those Flitsch mentioned.

At HPE Discover the SPUs will be installed into ProLiant DL380 servers. These will be all-flash storage servers, offering sub-millisecond latency and mission-critical reliability. We estimate this could mean six-nines availability or better, and NVMe drives.

To have affordable flash capacity at petabyte scale means these storage servers must surely use QLC flash and employ data reduction technology. The SPUs could help with data reduction processing to offload the storage server’s base X86 CPUs.

GPUs?

Siamak Nazari, Nebulon’s CEO, last week tweeted: “What we are doing at Nebulon couldn’t have been done even two years ago, the technology require to create the cloud-defined storage solution simply did not exist.” 

Blocks & Files speculates this almost certainly refers to QLC (4bits/cell) flash and, possibly, to GPUs.

There are two sessions at HPE Discover by Nebulon presenters which refer to GPUs: B548 and D566. Both are entitled “Honey, I Shrunk the Enterprise Storage Array to a Cloud-Managed Storage GPU.” If this is not a tortured metaphor and we take the GPU reference literally, the possibility emerges that the Nebulon storage servers could have GPUs inside them (i.e. fitted to the SPUs).

The Nebulon storage server is managed through a Nebulon ON SaaS management facility or control plane and features AIOps operating in real-time. That could indicate that on-board GPUs run AI machine learning models to control, monitor and optimise the array in real-time, meaning responding to events affecting storage service delivery in seconds or less.

Using GPUs in this way – and again, their inclusion is speculation on our part – could be described as a game-changer. But it is, to say the least, an unusual idea.

The Nebulon storage server will provide storage services for Kubernetes, VMware and Microsoft environments. Nebulon has said nothing about which storage protocols will be supported. Our thinking is that block storage will be supported first, followed by file and maybe object.

Check back with B&F later this week for a deeper look – we’ll soon have a lot more detail.

Update

The Nebulon announcement on June 23 confirmed that the base details described above were accurate, but there is no specific QLC flash support and no use of GPUs in the Nebulon server SAN.

Converged systems fare well in the first quarter 2020

IDC’s converged systems tracker for Q1 2020 shows resilient demand despite the early effects of the pandemic.

The status quo remained the status quo, with VMware leading Nutanix, and HPE and Cisco duking it out a long way behind.

The market segment revenue splits were:

  • HCI – $1.95bn revenues, 8.3 per cent growth Y/Y, 50.9 per cent revenue market share
  • Certified Reference Systems & Integrated Infrastructure – $1.46bn revenues, 4.4 per cent growth, 36.8 per cent market share
  • Integrated Platforms – $478m revenues, 8.7 per cent decline, 12.3 per cent share.

Sebastian Lagana, IDC research manager, said in a statement: “While the hyperconverged system market continued to expand as enterprises seek to take advantage of software-defined infrastructure, the Certified Reference Systems and Integrated Infrastructure segment posted its best quarter of growth since 2Q19 on the strength of richly configured platform sales related to demanding workloads in industries such as healthcare and telecoms.”

Let’s look at the trends with this chart of the converged systems market. 

The Certified Reference Systems boost is accompanied by an HCI drop but this is only a quarter-on-quarter change and may not be significant.

A closer look at the top three HCI vendors reveals a quarter-on-quarter revenue drop by the two leaders and also by Cisco. HPE’s Q/Q change is unclear because IDC is not revealing all its numbers as HPE weaves in and out of a top three spot where it ties for third place with Cisco.

We have charted the revenues of these suppliers over the past few quarters using revenue attributed to the owner of the HCI software rather than IDC’s measure of revenue by HCI brand.

Missing sections of the HPE line are filled with dotted grey lines to show the general trend.

Basically, apart from the first 2020 quarter pandemic hit causing revenues to fall, there’s little or no change in the vendors’ relative positions. (Revenues from other contenders, such as Pivot3 and Scale Computing, are too low to show in IDC’s public tables.)

Let’s check if HCI is taking more revenue from the overall external storage market. The Q1 IDC numbers for both storage categories are:

  • HCI – $1.95bn revenue; 8.3 per cent change Y/Y.
  • External storage – $6.52bn revenue, -8.2 per cent change Y/Y.

Charting the trends to see the longer term picture shows a pretty consistent gap between the two. 

External storage sales are much more seasonal than HCI sales. Note the Q4 peaks on external storage sales, but there is no general sign yet of HCI sales eating into external storage sales.

Komprise adds another brick to hybrid cloud data management wall

Komprise has updated its Intelligent Data Management to include AWS cloud data. The company’s new Cloud Data Growth Analytics (CDGA) utility builds and maintains an index of cloud file/object storage items, buckets, tiers, activities and costs. This is collated for the customer across their AWS accounts and the storage tiers inside each account. Storage admins can track file storage costs by tier and by account, see how capacity usage is trending and set cost alerts.
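A cost roll-up of the kind CDGA performs can be pictured with a short sketch. This is our own illustration of the idea, not Komprise’s software or API: the data shape, tier names, accounts and budget threshold below are all hypothetical.

```python
# Illustrative CDGA-style cost roll-up across AWS accounts and storage tiers.
# The rows, tier names and threshold are invented for this example.
from collections import defaultdict

usage = [
    # (account, tier, capacity_tb, cost_usd_per_month)
    ("prod",      "S3 Standard", 120, 2760.0),
    ("prod",      "S3 Glacier",  400, 1600.0),
    ("analytics", "S3 Standard",  80, 1840.0),
]

def cost_by_account_and_tier(rows):
    """Aggregate monthly cost per (account, tier) pair."""
    totals = defaultdict(float)
    for account, tier, _cap, cost in rows:
        totals[(account, tier)] += cost
    return dict(totals)

def accounts_over_budget(rows, monthly_budget_usd):
    """Return accounts whose total spend exceeds the alert threshold."""
    spend = defaultdict(float)
    for account, _tier, _cap, cost in rows:
        spend[account] += cost
    return [a for a, c in spend.items() if c > monthly_budget_usd]

print(cost_by_account_and_tier(usage))
print(accounts_over_budget(usage, 4000.0))  # → ['prod'] (4360 total spend)
```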

CDGA for AWS is available today and additional public cloud coverage is pencilled in for the end of the year.

Komprise is building a data management abstraction layer that covers on-premises and the three main public cloud environments. Its goal is to enable customers to tier files to/from on-premises file stores to on-premises object stores and public cloud object stores.

The on-premises stores will be cross-data centre and the public cloud stores will be multi-region. In both cases data location and storage tier are controlled to get the right data into the right place and cost-optimised tier of storage.

Blocks & Files diagram.

As part of this effort Komprise announced object storage tiering in December last year. This moves objects between cloud object classes and between on-premises object stores.

Elastic Data Management, launched in March, is another building block. This data mover takes NFS/SMB/CIFS file data and moves it across a network to a target NAS system, or via S3 to object storage systems or the public cloud.

Comment

We see a growing capability for file and object movement and management across the public cloud and on-premises environments. Komprise needs to extend coverage to look at cloud file offerings, such as Google’s Elastifile services, AWS’s Elastic File System, and services like NetApp Cloud Volumes and other proprietary third-party file services in the public clouds.

A growing number of companies are developing combined on-premises and public cloud file and object management and access services. They include Cohesity, Hammerspace, InfiniteIO and Komprise. The contenders are barrelling into this area from different starting points and with different intentions. But they will all end up as a single class of file and object storage managers.

Data protectors, such as Commvault, Druva, Rubrik, and Dell EMC – with its new DataIQ offering – are also arriving on the scene. Their entry point is backup and they are evolving towards unstructured data management. Clumio and HYCU could evolve this way too as they develop their multi-cloud/on-premises backup management software.

The result will be an almighty competitive clash as all these suppliers realise they are moving into the same general area and try to differentiate themselves with a marketing message blizzard.

WekaIO hires industry sales vet as chief revenue officer

WekaIO has hired Ken Grohe as president and chief revenue officer, as it looks to ramp up to IPO.

Grohe joins from Stellus Technologies, a Samsung-backed startup that shut down its entire sales and marketing team in April – just three months after launching its first product.

Ken Grohe.

Weka CEO Liran Zvibel said in a statement: “Ken’s expertise in this market and keen understanding of the customer journey will be the catalyst that drives the next phase of growth for Weka… Ken’s role will be influential in executing the company’s vision to become the de-facto solution for enterprise high-performance computing.”

Steve Duplessie, senior analyst at Enterprise Strategy Group (ESG) chipped in: “I have known Ken since our shared time at EMC, he is one of the best performing sales and marketing executives you will ever meet.”

Grohe’s career includes senior positions at SignNow, Barracuda Networks, Virident, and encompasses a 25-year stint at Dell EMC, where he finished as GM for the global flash business.  

He told us he had been job hunting and had received two written and two verbal offers when the Weka offer arrived. Grohe said customers he talked to advised him to join WekaIO. Weka’s momentum impressed him, with its 600 per cent revenue growth rate in 2019, and so far maintaining growth rates in the pandemic.

“This company more resembles VMware than any other company I know,” Grohe added. He said its product is hardware-agnostic and heterogeneous, widely applicable and scales to huge levels; “We eat petabytes for lunch.”

Sales success

He noted existing OEMs, like HPE, have already invested in Weka. And he reckons it can keep the peace with partners – as VMware does – and grow the market overall for everyone.

Grohe tells us there are four general strategies to grow sales to high levels: VARs, OEMs, selling direct to masses of customers, and big game hunting – going directly for million dollar-size deals. Weka is equipped to do all four simultaneously, he said.

There’s confidence there. In his hiring announcement he declares: “I have proven success in this market, and I am grateful to join the leadership team and to have the opportunity to influence and guide Weka into the next phase of growth…The pathway to IPO is ahead of us.”

Stellus

And Stellus? Grohe declined to comment. That company has gone quiet since May and executives are not responding to our enquiries.

That company launched its first product at the beginning of February. So what reason would there be for CEO Jeff Treuhaft to pull the sales and marketing plug three months later? Did Samsung cut funding for some reason? Is it pandemic-related? It’s a mystery.

Intel launches gen 2 Optane DIMMs

Intel has announced second generation Optane Persistent Memory DIMMs with the same capacity as gen 1 but faster IO. The company has also launched new SSDs.

Intel said the PMEM 200 series is optimised for use with gen 3 Xeon 4-socket processing systems, which also launched today.

The Optane PMEM 200 series DIMMs come in 128GB, 256GB and 512GB capacities and their sequential bandwidth is up to 8.10GB/sec for reads and 3.15GB/sec for writes. The first generation series runs up to 6.8GB/sec reading and can reach 2.3GB/sec writes.

We calculate the PMEM 200 is around 19 per cent faster at reads and 37 per cent faster at writes. On average, there is 25 per cent higher memory bandwidth overall, according to Intel. That’s a benefit of using 4-layer XPoint, instead of the 2 layers in gen 1.
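Our percentage figures can be verified directly from the quoted sequential bandwidth numbers; the short calculation below uses only the GB/sec figures stated above.

```python
# Checking the generation-over-generation gains from the quoted sequential
# bandwidth figures (GB/sec) for Optane PMEM gen 1 vs the PMEM 200 series.
gen1_read, gen1_write = 6.8, 2.3
gen2_read, gen2_write = 8.10, 3.15

read_gain  = (gen2_read / gen1_read - 1) * 100    # ~19 per cent
write_gain = (gen2_write / gen1_write - 1) * 100  # ~37 per cent

print(f"read: +{read_gain:.0f}%, write: +{write_gain:.0f}%")  # read: +19%, write: +37%
```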

Endurance varies with capacity. 128GB = 292 petabytes written; 256GB = 497PBW; 512GB = 410PBW. For comparison, the gen 1 256GB capacity product has a 360PBW rating.
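The petabytes-written ratings can be recast as drive writes per day to make them comparable with SSD endurance figures. The five-year service life assumed below is our own working assumption, not an Intel specification.

```python
# Rough conversion of Intel's petabytes-written (PBW) ratings into
# full-module writes per day. The five-year lifetime is our assumption.
def dwpd(capacity_gb: int, pbw: float, years: float = 5) -> float:
    lifetime_writes = pbw * 1e15 / (capacity_gb * 1e9)  # full-module writes
    return lifetime_writes / (years * 365)

for cap, pbw in [(128, 292), (256, 497), (512, 410)]:
    print(f"{cap}GB: {dwpd(cap, pbw):,.0f} writes/day")
# The 128GB module works out to roughly 1,250 full writes per day.
```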

Intel says the PMEM 200 series provides up to 4.5TB of memory per socket for data intensive workloads (e.g. in-memory databases, dense virtualisation, analytics, and HPC.)

3D NAND SSDs

The new data centre D7-P5500 and P5600 SSDs are U.2 format drives, built with 96-layer 3D NAND in TLC cell format and an NVMe interface running across PCIe Gen 4 with 4 lanes. The P5500 has a 1 drive write per day endurance while the  P5600 has a 3DWPD rating, making it better suited to heavier write workloads.

Available capacities are 1.92TB, 3.84TB and 7.68TB for the P5500. The P5600 needs to over-provision for extended endurance, and so available capacities come in lower at 1.6TB, 3.2TB and 6.4TB.

The PCIe Gen 4 links should enable high performance. The P5500 and P5600 deliver 7GB/sec when sequential reading and 4.3GB/sec when writing. Both drives provide up to 1 million random read IOPS, with the P5500 delivering up to 230,000 random write IOPS and the P5600 providing up to 260,000 random write IOPS.

Full data sheet details are available in Intel’s product brief.

The Optane Persistent Memory 200 series and D7-P5500 and P5600 3D NAND SSDs are available today.