
Dell EMC rebrands VxFlex to PowerFlex, adds async rep, DR and safe snapshots

Dell EMC has confirmed PowerFlex as the new name for its scale-out, multi-hypervisor, virtual SAN and HCI VxFlex product set, and added data safety features to the OS.

PowerFlex rack.

PowerFlex, sold as a hardware appliance, is Dell EMC’s scale-out block storage product, spanning three nodes to many thousands, providing a virtual SAN or hyper-converged deployment mode. It uses PowerEdge x86 server nodes in a parallel node architecture to provide basic storage, a scale-out SAN (2-layer), HCI (1-layer) or mixed system, and was available as a VxFlex appliance, VxRack with integrated networking, or Ready Node. (Now they are PowerFlex-branded.) Dell EMC has said it’s software-defined but the software doesn’t appear to be available on its own.

VxFlex itself is the 2018 rebrand of the acquired ScaleIO product, which EMC bought in June 2013 for circa $200m. This was a Dell EMC alternative to VMware’s vSAN. Compared to VxRail, which supports VMware exclusively and with which it overlaps, VxFlex supports multiple hypervisors and bare metal servers, and can be used as a traditional SAN resource or in HCI mode.

PowerFlex appliance.

PowerFlex storage can be used for bare-metal databases, virtualised workloads and cloud-native containerised applications. Its parallel architecture makes for quick rebuilds when drives or nodes fail. 

Dell EMC PowerFlex performance claims.

The PowerFlex OS, Dell EMC said, now delivers native asynchronous replication alongside the existing sync replication, plus disaster recovery with an RPO down to 30 seconds. It also delivers secure snapshots for customers with specific corporate governance and compliance requirements, including healthcare and finance, the company added.

PowerFlex appliance table.

PowerFlex appliances have all-flash hardware nodes with SATA, SAS and NVMe drive support. Blocks & Files expects more NVMe drive support, along with NVMe-oF and Optane DIMM and/or SSD support, in the future to pump up performance further.

Feel the power

Dell EMC’s Power portfolio now includes:

  • PowerEdge server portfolio, 
  • PowerSwitch networking, 
  • PowerOne autonomous infrastructure, 
  • PowerProtect data protection,
  • PowerMax high-end primary storage,
  • PowerStore mid-range primary storage,
  • PowerScale unstructured storage,
  • PowerFlex scale-out SDS for virtual SANs and HCI,
  • PowerVault entry-level storage. 

Dell EMC said it has completed its brand and product simplification under the Power portfolio umbrella. VMware-focussed products keep a V-something moniker while Dell’s own products use the Power prefix.

Intel ruler SSD sneaks out into the wild

Intel watchers at Tom’s Hardware have picked up a DC P4510 ruler-format drive available from an online Etailer.

Intel’s DC P4510 is a U.2 (2.5-inch) data centre SSD with a PCIe 3.0 x4 NVMe interface. It was announced in January 2019, comes in 1, 2, 4, and 8TB capacities and is built using 64-layer 3D NAND in TLC format. It pumps out up to 641,000 random read IOPS and 134,500 random write IOPS. The sequential read and write bandwidth numbers are up to 3.2GB/sec and 3.0GB/sec.

Intel ruler format.

Now there is a 15.36TB capacity version, using the same NAND, in the EDSFF E1.L format (the L means long). You can cram more of these ruler drives into a server chassis than U.2 format SSDs, thus gaining more storage density in the same server chassis space.

You can have either a 9mm or an 18mm heat sink with the drive, and its random read/write IOPS numbers are 583,800 and 131,400; both lower than those of the U.2 drive with its 8TB maximum capacity. There must be a reason for this performance drop, because the drive logically has more dies, and thus more parallel access headroom, so more performance ought to be possible.

On the sequential read and write front the 15.36TB version performs at 3.1GB/sec each way; more or less the same as the U.2 version.

The endurance is 1.92, 2.61, 6.3, 13.88 and 22.7 PBW (petabytes written) as capacity rises from 1TB to 15.36TB, with a limited 5-year warranty.
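Those petabytes-written ratings can be converted into the more familiar drive-writes-per-day metric over the five-year warranty. This is our own back-of-envelope calculation, not an Intel-stated DWPD figure, and it assumes the endurance numbers map in order onto the 1, 2, 4, 8 and 15.36TB capacities:

```python
def dwpd(pbw, capacity_tb, warranty_years=5):
    """Approximate drive writes per day implied by a petabytes-written rating."""
    days = warranty_years * 365
    return (pbw * 1000) / (capacity_tb * days)  # convert PB to TB, spread over warranty

# Endurance ratings paired with capacities (TB), as listed above
for pbw, cap in [(1.92, 1), (2.61, 2), (6.3, 4), (13.88, 8), (22.7, 15.36)]:
    print(f"{cap}TB: {dwpd(pbw, cap):.2f} DWPD")
```

By this rough reckoning the drives land in the 0.7 to 1.1 DWPD range, consistent with read-intensive data centre positioning.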

You can check out an Intel DC P4510 data sheet for more information. 

Blocks & Files expects major server manufacturers such as Dell EMC, HPE and Lenovo to embrace the EDSFF format from now on, with EDSFF servers becoming mainstream in 2021. We might also expect 32TB and 64TB ruler capacity points to come onto the scene.

SQream if you’re ready for some data warehouse tech development cash

GPU-accelerated data warehouser SQream has put another round of VC cash in its pockets.

SQream took in $39.4m in a B+ round led by Mangrove Capital Partners and Schusterman Family Investments. Existing investors, including Hanaco Venture Capital, Sistema.vc, World Trade Center Ventures, Blumberg Capital, Silvertech Ventures and Alibaba Group, also pumped in cash. Total SQream funding is now $65.8m and this round follows a $19.8m B2 round in 2018.

The cash will pay for what SQream calls “top talent recruitment” to develop the product technology.

The company claims its growth is accelerating, despite the Covid-19 pandemic.

Two VCs – Roy Saar (Mangrove Capital and Wix founder) and Charlie Federman (partner at Silvertech Ventures) – will join the SQream board of directors.

A canned quote from Saar said: “Given the growth of data, which has been on a rocket-ship trajectory to zettabyte levels due to rapid digitalisation, we are just scraping the surface of how companies will be generating value from their data.”

The data warehouse business is on a roll, with Snowflake preparing for an IPO. SQream can’t afford to fall behind and its GPU turbo-charging should give it an edge.

WD launches 24-slot dragster NVMe-oF box

Update. Read/write bandwidth and latency numbers added. 25 June 2020.

WD has built a faster NVMe SSD and populated an OpenFlex NVMe-oF composable JBOF enclosure with 24 of them to provide a hot box, fast-access data centre flash array.

This is somewhat surprising as WD has recently divested its data centre storage array business, witness the disposal of IntelliFlash arrays to DDN and ActiveScale archiving to Quantum. But the OpenFlex box does not have a storage controller delivering data services: it’s bare metal.

A canned quote about the new WD products from IDC research VP Jeff Janukowicz struggled to say anything specific about them: “The future of Flash is undoubtedly NVMe as it’s all about speed, efficiency, capacity and cost-effective scalability, and NVMe-oF takes it to the next level. … the company is well positioned to help customers fully embrace NVMe and get the most out of their storage assets.” 

Yusuf Jamal, SVP of WD’s Devices and Platforms Business, provided an anodyne quote, too: “We’re fully committed to helping companies transition to NVMe and move to new composable architectures that can maximise the value of their data storage resources.” 

The SSD

The DC (Data Centre) SN840 uses the same 96-layer 3D NAND TLC technology as the earlier SN640. Its capacity range is 1.6, 3.2 and 6.4TB (3 drive writes per day) or 1.92, 3.84, 7.68 and 15.36TB (1DWPD) in its U.2 (2.5-inch) form factor. The 15.36TB peak is double the earlier SN640’s 7.68TB in its U.2 form factor. That drive also came in a 30.72TB EDSFF E1.L ruler form factor, a format not available for the SN840.

The SN840 outputs up to 780,000/250,000 random read/write IOPS, much more than the SN640’s maximum 480,000/220,000 IOPS. The sequential read/write bandwidth numbers are 3.5GB/sec and 3.4GB/sec, and the latency is 157µs or lower.

WD SN840 performance charts

Its 1 and 3 drive writes per day formats make it suitable for either read-intensive or mixed read/write workloads. The SN840 is a dual-ported drive with power-loss protection and TCG encryption.

The JBOF

The OpenFlex Data24 NVMe-oF Storage Platform is a 2U x 24-slot all-flash box populated with the SN840 SSDs. It uses RDMA-enabled RapidFlex controller/network interface cards, developed with the NVMe-oF Ethernet technology WD gained by acquiring Kazan Networks. Notice the vertical integration here.

This JBOF (Just a Bunch of Flash) follows on from the OpenFlex F3100 which was an awkward design housing 10 x 2.5-inch SSDs inside 3.5-inch carriers.

OpenFlex Data24.

The 2U Data24 box houses 24 hot-swap SN840s with a maximum 368TB capacity. Up to 6 hosts can be directly connected to this JBOF with a 100GbitE link and 6 RapidFlex NICs. They get the benefit of up to 13 million IOPS, 70GB/sec throughput and sub-500 nanosecond latency.

RapidFlex controller.

The Data24 can interoperate with the existing OpenFlex F-series products and comes with a five-year limited warranty.

It’s certainly a fast box and comes with no traditional storage controller providing data services such as erasure-coding, RAID, snapshots, or data reduction.  This is just a stripped down, NVMe-oF-accessed bare flash drive dragster box supporting composability.

WD said it can be used for high-performance computing (HPC), cloud computing, SQL/NoSQL databases, virtualisation (VMs/containers), AI/ML and data analytics. We envisage DIY hyperscalers and system builders will be interested in looking at the box, as a SAS JBOF replacement perhaps. All WD’s data centre SSD customers will pay attention to the DC SN840.

Ultrastar DC SN840 NVMe SSD shipments will begin in July. The OpenFlex Data24 NVMe-oF Storage Platform is scheduled to ship in autumn/fall. RapidFlex NVMe-oF controllers are available now. 

WD 16/18TB disk drive details leak. Plus: Firm slaps Red rebrand-aid on shingle wound

Western Digital has responded to its shingled Red NAS drive issue by adding a Red Plus product variant, and details of the company’s coming 16TB and 18TB Gold disk drives have leaked into the market.

The Golden touch

Tom’s Hardware has revealed online eTailers are listing 16TB and 18TB Western Digital Gold drives ahead of their formal launch.

The Gold series officially has a 1TB to 14TB capacity range, with the 12TB and 14TB models helium-filled. The drives use conventional magnetic recording, not shingled, spin at 7,200rpm and employ a 6Gbit/s SATA interface. They are also built for hard 24x7x365 work.

The UK’s Span lists a WD181KRYZ 18TB Gold model for £624.00. It has a 5-year limited warranty and a 2.5 million hours MTBF rating. The cache and bandwidth numbers are not certain, though Span suggests a 512MB cache and a 257-267MB/sec read speed. A 14TB Gold disk costs $578.00.

We expect a formal launch in a few days or weeks.

WD Red drives and shingling

WD has been facing user problems with its WD Red NAS disk drives using shingled magnetic recording. To give users clarity, it’s now splitting the line into shingled (Red) and non-shingled (Red Plus) product types. The Red Pro line is not affected by this change.

Expanded Red disk drive range.

A WD blog explained: “WD Red Plus is the new name for conventional magnetic recording (CMR)-based NAS drives in the WD Red family, including all capacities from 1TB to 14TB. These will be the choice for those whose applications require more write-intensive SMB workloads such as ZFS. WD Red Plus in 2TB, 3TB, 4TB and 6TB capacities will be available soon.”

“The Red line with device-managed shingled magnetic recording (DMSMR) (2TB, 3TB, 4TB, and 6TB [capacities]) will be the choice for the majority of NAS owners whose demands are lighter SOHO workloads.”

The firm added: “We want to thank our customers and partners for your feedback on our WD Red family of network attached storage (NAS) hard drives. Your real-world insights shared through in-depth reviews, blogs, forums and from our trusted partners are directly contributing to our work on an expansion of models and clarity of choice for customers. Please continue sharing your experiences and expectations of our products, as this input influences our development.”

Comment

This blog and the Red Plus product line clarification will be seen by some as recognition of the hundreds of users who reported problems with shingled Red NAS drives when the write load exceeded the drives’ ability to mask performance-sapping shingled writes by caching them in a buffer until the drive was idle.

It is very good that WD is listening to its users, but a pity that its product design engineers didn’t realise there would be a problem in the first place.

HPE launches Ezmeral containerised app framework

HPE wants to be part of the picture when businesses develop and deploy apps and workflows with AI and ML functionality in containerised environments and has announced a newly created software portfolio – Ezmeral – to that end. 

HPE is another mainstream vendor looking to grab a part of the emerging new Kubernetes container organisation territories.

HPE’s Kumar Sreekanti, CTO and head of software, provided a canned overarching quote: “The HPE Ezmeral software portfolio fuels data-driven digital transformation in the enterprise by modernising applications, unlocking insights, and automating operations.” Yes, OK.

“Our software uniquely enables customers to eliminate lock-in and costly legacy licensing models, helping them to accelerate innovation and reduce costs, while ensuring enterprise-grade security.” (That refers to an open source piece of the news.)

HPE is getting on a Kubernetes train with other passengers already onboard. NetApp also has a Kubernetes framework initiative with its Project Astra, and MayaData has its Kubera Kubernetes management service.

Ezmeral brand

Ezmeral is wrapped in a trendy digital transformation marketing framework and is a brand name developed from esmeralda, the Spanish word for emerald, with lots of high-flown HPE quotes talking about “The transformation of raw emerald to a cut and polished stone to reveal something more beautiful and valuable is analogous to the digital transformation journey our customers are on.” 

The marketing bods have clearly linked emeralds, which are green, with GreenLake, HPE’s all-singing, all-dancing, all-product subscription deal.

Ezmeral portfolio

Back in the real world HPE has combined its acquired BlueData and MapR assets with open source Kubernetes-related software to build the Ezmeral portfolio. This applies across the range of IT environments, from edge locations to data centres and the public cloud, and includes:

  • Container orchestration and management, 
  • AI/ML and data analytics, 
  • Cost control, 
  • IT automation and AI-driven operations, 
  • Security

The Ezmeral Container Platform and Ezmeral ML Ops are two software items announced as part of the portfolio.

Ezmeral Container Platform

This enables customers to manage multiple Kubernetes clusters with a unified control plane, and use a MapR distributed file system for persistent data and stateful applications. Interestingly HPE says customers can run both cloud-native and non-cloud-native applications in containers without having to rewrite the legacy apps.

This involves use of the HPE-contributed KubeDirector open source project which provides the ability to run non-cloud native stateful applications on Kubernetes without modifying the code. 

But it is no silver bullet, being focused on distributed stateful applications. KubeDirector enables data scientists familiar with data-intensive distributed applications such as Hadoop, Spark, Cassandra, TensorFlow and Caffe2 to run these applications on Kubernetes – with a minimal learning curve and no need to write Go code.

OK – so forget running the broad mass of legacy apps in containers.

Ezmeral ML Ops software uses containerisation to introduce a DevOps-like process to machine learning workflows. HPE claims it will accelerate AI deployments from months to days. The BlueData part of this refers to using container technology for big data analytics and machine learning.

The Ezmeral Container Platform and Ezmeral ML Ops products will be available as software and also delivered as a cloud service through GreenLake.

Let the SAN shine: Nebulon drops cloak, reveals DPU-enhanced, cloud-managed server SAN

Startup Nebulon has come out of stealth to reveal scale-out, on-premises, server SAN, block-based storage using commodity X86 servers bolstered with storage processing offload cards, along with a data management service delivered from its cloud.

It claimed its so-called cloud-defined storage (CDS) is less pricey than equivalent all-flash SAN array storage and doesn’t use up CPU capacity in its host servers, a disadvantage it claims affects both software-defined storage (SDS) and hyperconverged infrastructure (HCI) systems using commodity server chassis.

Siamak Nazari.

A prepared quote from Siamak Nazari, co-founder and CEO of Nebulon, said: “Cloud-Defined Storage delivers global insights, AI-based administration and API-driven automation making enterprise-class storage a simple attribute of the data centre fabric with self-service infrastructure provisioning and storage operations as-a-service for application owners.”

Nebulon’s storage is embodied in its Storage Processing Unit, an add-in, FH-FL PCIe card, with an 8-core, 3GHz ARM CPU plus encryption/dedupe offload engine, that is layered in front of a host server’s SAS or SATA SSDs, and connects to them via a triple SerDes connector. Nebulon co-founder and COO Craig Nunes told B&F the SPU will support “NVMe when it becomes generally available in the early Fall.”

The SPU card, which looks to upstream system software like an HBA or RAID card, presents block volumes to applications running in the servers. Up to 32 servers can be clustered in an nPod with the SPUs connected by a 10 or 25gigE network. There is a separate 1gigE port for management from the cloud.

Nebulon SPU

Data services provided by the SPU include deduplication, compression, encryption, erasure coding, snapshots and mirroring. There is no GPU on the card.

The SPU contains 32GB of NVRAM to speed writes, and reads come straight from the SSDs. NVRAM write caching means the SPU can turn random writes into sequential writes to the SSDs, thus helping to lengthen the drives’ endurance. Data is not striped across SPUs.
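The random-to-sequential trick can be illustrated with a toy sketch. This is our own illustration, not Nebulon’s code: writes are acknowledged as soon as they land in the buffer, and a flush drains them to the backing store in ascending LBA order so the SSD sees one sequential pass instead of scattered writes.

```python
class WriteCoalescer:
    """Toy model of NVRAM-style write coalescing in front of slower media."""

    def __init__(self, flush_threshold=4):
        self.buffer = {}                 # lba -> data; latest write wins
        self.flush_threshold = flush_threshold
        self.flushed = []                # (lba, data) in the order written out

    def write(self, lba, data):
        self.buffer[lba] = data          # absorb and acknowledge immediately
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        for lba in sorted(self.buffer):  # drain in sequential LBA order
            self.flushed.append((lba, self.buffer[lba]))
        self.buffer.clear()
```

Random writes to LBAs 9, 3, 7, 1 would reach the drive as one ordered run 1, 3, 7, 9, which is gentler on flash endurance.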

Initially 4TB SSDs are supported, with up to 24 per SPU, meaning 96TB, and a maximum capacity of 3,072TB across the 32 SPUs in an nPod. There is a single, all-flash block-access storage tier.
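The capacity arithmetic can be checked with a trivial sketch using the figures above:

```python
def npod_capacity_tb(ssd_tb=4, ssds_per_spu=24, spus=32):
    # raw TB behind one SPU, multiplied up to the maximum 32-SPU nPod
    return ssd_tb * ssds_per_spu * spus

print(npod_capacity_tb(spus=1))  # 96  -> TB behind a single SPU
print(npod_capacity_tb())        # 3072 -> TB across a full nPod
```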

The performance on reads is slower than if NVMe SSDs were supported. Nunes told us: “At the device level, SATA latencies can be many times [that of] NVMe. However, when measured with the enterprise data services software stack, the latencies at the application level will be in the 300us to 400us range and acceptable to the cloud native, container and hypervisor use cases we are targeting.”

OEM channel

Nebulon sells its SPU and ON service through an OEM channel, with both HPE and Supermicro signed up so far and a third OEM likely. An HPE configuration is based on the ProLiant DL380 Gen10 server in a 2U x 24-slot chassis, while Supermicro uses its Ultra line of servers for Nebulon storage.

That means actual server hardware configurations, including drive types and capacities, come from the OEMs. So too do purchase and/or subscription arrangements.

Nebulon is pitching its product, through its OEMs, at mid-to-large enterprises needing block storage at PB and up scale and wanting to increase storage and app server efficiency and reduce acquisition and management costs.

The card can be set up using application templates to optimise it for different workloads, such as VMware, MongoDB and Kubernetes. Nebulon storage supports any OS or hypervisor in the host server. NebOS upgrades are non-disruptive.

SPU management

The SPU runs NebOS software and is managed through a Nebulon ON SaaS service hosted in a Nebulon cloud which uses multiple CSPs and multiple regions for high-availability. It is updated through the ON service.

Nebulon says the ON service manages fleets of Nebulon systems at scale. These systems send telemetry messages to the ON Cloud; tens of thousands of storage, server and application metrics per hour. These are stored in a distributed time series database. 

ON includes an AIOps function which looks at the telemetry, analyses it in real time, and responds to adverse events in seconds by re-jigging a Nebulon system to respond to changing operational patterns. It also provides storage usage metrics over time and predictive analytics.

Nebulon ON dashboard

Customer admin staff can self-provision Nebulon storage through an ON dashboard and ON can deliver automated updates across a Nebulon fleet. 

Replication will be delivered as a future upgrade, possibly in the next software release. We expect stretch cluster support in a future release as well.

The SPU is a DPU

Nebulon said the SPU card is an example of a DPU (Data Processing unit), a dedicated storage or networking processor intended to offload storage and/or network processing from a host server’s CPU so that it can concentrate on application processing.

Examples of DPU supply and use include Diamanti, Fungible, Pensando, Nvidia (SmartNICs) and AWS for in-house use (Nitro). 

We might say Nebulon is an HCI system on DPU steroids; so many that Nebulon claims it is no longer an HCI system at all, but an ultraconverged system.

The SPU is a gateway to the storage for its host CPU. If it fails, the host server loses access to its storage. A server can have other storage installed which is not accessed through the SPU. A loss of internet connectivity will not prevent an SPU from functioning.

Competition

The Nebulon system, being a kind of super-server SAN system, will compete with Dell’s VxRail and Nutanix systems. It will also compete with disaggregated HCI systems such as Nimble’s dHCI and Datrium.

Inside HPE the Nebulon storage competes with or complements SimpliVity HCI and Nimble dHCI systems, and Primera, 3PAR and Nimble arrays. It also competes/complements other HPE storage partners providing block array services such as Datera.

Nebulon is not supported by HPE’s cloud-delivered, predictive analytics Infosight management or its GreenLake subscription service, but it is early days.

Nebulon’s Craig Nunes says more than half of HPE’s servers are sold into customers with non-HPE storage. The Nebulon storage, which should cost less than external array storage, and uses lower server SAN CPU resources, gives HPE a win-back opportunity in his view.

Regarding Pensando Nunes tells us: “Each Nebulon SPU has an 8-core 3GHz processor and 32GB of battery backed NVRAM, and runs the entire software stack you might find on a 3PAR or Pure Storage array controller.  As a compare, Pensando supports 8GB of RAM—enough for the network/security functionality but not enough to run a full storage SW stack on the card.”  

Executive Chairman David Scott says Nebulon and Pensando are complementary: “I could easily see some use cases where a customer has both a Pensando DSC card and a Nebulon SPU in the same application server(s). :)”

Comment

Nebulon is entering a new and undeveloped market, the DPU-enhanced server SAN market with cloud-delivered, AIOps management, and its competitors, at the OEM level, are suppliers such as Pensando and the other DPU suppliers.

At the end-user level its competitors are, well, legion, and existing SDS and HCI vendors will say Nebulon is just another SDS or HCI vendor, one using proprietary hardware and software to give its host server chassis a performance kick. If customers accept that positioning then suppliers will compete on speeds, feeds and support – the usual stuff.

If customers see Nebulon as a new class of server SAN, then the OEM+Nebulon offer will be differentiated, although this will require a marketing and sales push.

SmartX produces Optane DIMM DAX cached hyperconverged system

SmartX has announced what is probably the fastest hyperconverged infrastructure appliance in the world, if the speeds it has reported are verified.

The Chinese hyperconverged vendor has launched a Halo P product using Optane DIMM caching to push out 1.2 million IOPS with 100μs latency and 25GB/sec bandwidth from a three-node Lenovo-based system, using NVMe SSDs.

Kyle Zhang, co-founder and CTO of SmartX, provided a quote: “We have seen that the introduction of new storage technologies can greatly improve the performance of HCI system and address the real-workload challenges for critical applications. In the future, SmartX will collaborate with Intel and other leading industry leaders to introduce more advanced technologies to lead the next-level innovations in HCI.”

SmartX Optane diagram

How does SmartX get latency down to that level? It has gone the extra mile with its SMTX OS, which uses the Optane DC Persistent Memory DIMMs in byte-addressable App Direct (DAX) mode. This persists written data (VM IO) in a node’s Optane DIMM memory cache. Cached data is also replicated to the other nodes using the RDMA protocol, which reduces write latency before the write is acknowledged.

Cache data is written down to SSDs when it cools, and promoted back to Optane if it is re-accessed.

The SMTX OS uses the byte-addressable feature of persistent memory to redesign its journal, using 64-byte alignment instead of 4KB (SSD-style) alignment, so reducing the write amplification problem with small (sub-4KB) journal entries.
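The alignment effect on small journal entries is easy to quantify. Here is a sketch of the padding arithmetic (our own illustration, not SmartX’s code):

```python
import math

def journal_write_amp(entry_bytes, alignment_bytes):
    # media bytes actually written for one entry, divided by useful bytes
    written = math.ceil(entry_bytes / alignment_bytes) * alignment_bytes
    return written / entry_bytes

print(journal_write_amp(64, 4096))  # 64.0: a 64-byte entry padded to a 4KB block
print(journal_write_amp(64, 64))    # 1.0: no padding with 64-byte alignment
```

In other words, a 64-byte journal entry forced into a 4KB block writes 64 times the useful data, while 64-byte persistent-memory alignment writes only the entry itself.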

Also, storage virtualization is devolved from the virtual machine (VM) to the storage software stack, through an SMTX ELF boost mode, to avoid performance overhead caused by I/O requests passing through the VMs. Memory is shared by the VM and the storage system to avoid memory replication on the IO path.

SmartX IO path diagram.

RDMA over Converged Ethernet (RoCE) is used to accelerate network IO requests with the protocol operating on the network card.

SmartX claimed its Halo P appliance is powerful enough for OLTP database and machine learning workloads. It can also support more virtual machines than its raw capacity might suggest.

The company has an office in Palo Alto and claims it has the biggest hyperconverged system deployment in China – China Unicom’s “Wo Cloud” – as well as customers in finance, manufacturing and real estate. It has partnerships with Citrix, Mellanox, Commvault and Rancher in the fields of servers, high-speed networks, virtualization, disaster recovery, cloud computing and containers.

Qumulo litmus: Firm Shifts file format for AWS

Qumulo

Scale-out filesystem supplier Qumulo has launched Shift for AWS. This moves files from any Qumulo on-premises or public cloud cluster into Amazon Simple Storage Service (S3), transforming the files into natively accessible objects in buckets.

Once in the AWS cloud, this object data can be stored as an archive or used by AWS-resident applications and services such as Sagemaker and Rekognition. It cannot be automatically moved back to Qumulo though. Updated files are written as new objects.

Barry Russell, SVP and GM of cloud at Qumulo, emitted a canned quote: “With Qumulo Shift, customers can now move data faster and no longer worry about being stuck in legacy proprietary file data formats … . Leveraging our work with AWS, we are now able to integrate with Amazon S3 natively and enable … workloads to use cloud applications and services at any scale in Qumulo or S3.”

Qumulo says users with large video projects can move the files into AWS and burst rendering jobs to thousands of AWS compute nodes. Enterprises can migrate large datasets, such as data lakes, to AWS that could exceed the scale capabilities of other file system products.

Blocks & Files suggests that AWS’s own file services, such as EFS, could perhaps be supported by Qumulo, using the NFS format. EFS doesn’t support NFSv2, or NFSv3, but does support NFSv4.1 and NFSv4.0, except for certain features.

Qumulo’s Molly Presley, global product marketing head, disagrees. If Qumulo users want file-level operations in the cloud they may as well spin up a Qumulo file system in AWS. Also, Amazon’s EFS doesn’t support SMB or volumes as large as Qumulo’s. Basically, it’s not a good idea.

Qumulo has gained an Amazon Well-Architected designation, which means customers can reliably run Qumulo file services in cloud-native form on AWS.

The Shift product is included at no charge with an updated Qumulo file system which will be available in July this year. 

Lightbits Labs adds Kubernetes table stakes: CSI support

All-flash array startup Lightbits Labs has launched a software update that provides NVMe-based persistent volume storage for Kubernetes.

It says LightOS v2.0 provides virtual NVMe volumes to Kubernetes, delivering low latency and high performance. It also provides clustering and high availability via target-side storage server failover. This is done via the standard Container Storage Interface (CSI) plug-in route.

Kam Eshghi.

Kam Eshghi, chief strategy officer at Lightbits Labs added: “At cloud scale, everything fails. LightOS 2.0 is the industry’s first NVMe/TCP scale-out clustered storage solution – protecting against data loss and avoiding service interruptions at scale in the presence of SSD, server, storage, or network failures.”

Lightbits distinguishes its array with NVMe/TCP support, providing NVMe-oF access across Ethernet TCP/IP network links.

Many, many suppliers provide persistent volume support for Kubernetes – Portworx, StorageOS, Dell with PowerStore, NetApp, VAST Data and more – it is becoming standard table stakes in the containerisation storage game.

Lightbits claims to be different from the pack because it is clustered and supports rapid node migration, workload rebalancing, or recovery from failure without copying data over the network. If any computer node in the network fails, data is moved virtually by pointing it to another container.

LightOS 2.0 is automatically optimised for I/O intensive compute clusters, such as Kafka, Cassandra, MySQL, MongoDB, and time series databases. Each storage server in the cluster can support up to 64,000 namespaces and 16,000 connections.

LightOS v2.0 supports Kubernetes v1.13 and v1.15 through v1.18 and later, for any volume size, number of volumes or Kubernetes cluster size. As well as the CSI interface, it also allows stateful containers via a Cinder plugin. The v2.0 software is now available.

NAND here’s your storage digest, featuring Samsung, Pavilion Data and more

We start off today’s roundup with news about Samsung facing production problems with its 128-layer 3D NAND. We also take a look at a Sony business using a fast Pavilion array for capturing the video points in a 3D space over time.

Samsung and string-stacking

Wells Fargo senior analyst Aaron Rakers has said Samsung may be facing production yield challenges with its gen 6, 128-layer V-NAND (3D NAND) technology. This is a single-stack technology, whereas Samsung’s competitors are building 100+ layer 3D NAND dies by stacking smaller layer-count blocks on top of each other. This is called string-stacking.

Kioxia and Western Digital’s BiCS 5 112-layer die uses a pair of 56-layer stacks.

Gen 6 Samsung is 128 layers. Gen 7 Samsung is 166 layers.

Apparently a single stack etch through 128 layers is taking twice as long as the same etch through 96 layers. The etch creates a conductive vertical channel through the layers. If the yield from the wafers is too low, then Samsung’s costs go up.

Rakers suggested Samsung could change to string stacking with its Gen 7, 160-layer 3D NAND die. String-stacking could cost up to 30 per cent more than single-stacking, so Samsung will be motivated to get its single stack etching working.

Sammobile reports Samsung has set up a task force to work through the yield problems.

Pavilion and Sony 

Sony Innovation Studios has picked Pavilion Data’s Hyperparallel Flash Array (HFA) for storing data from real-time volumetric virtual production with its Atom View software. Volumetric capture is a performance-hungry, latency-sensitive application needed for the rendering of 3D virtual and mixed environments.

Volumetric capture records the visual image points in a 3D space (volume) over time and in minute detail. The Atom View software is point-cloud rendering, editing and colouring software that enables content creators to visualise, edit, colour correct and manage volumetric data. It can combine multiple volumetric data sets captured from different angles into a single output for use in virtual film productions, video games, and interactive experiences with true photoreal cinematic quality.

The deployment was in partnership with Alliance Integrated Technologies and Pixit Media’s PixStor product. 

Ben Leaver, CEO of Pixit Media, said: “Volumetric capture brings a new paradigm of size and information capable of being stored and requiring the highest performance in render speeds. With an approach that mimics a director class core switch architecture, Pavilion’s approach to multi-line card, multi-controller design means PCIe speeds to each drive, and massive bandwidth to the network over a low latency RDMA protocol.”

Billy Russell, CTO at Alliance IT, said: “It was clear that a 100G ethernet infrastructure was needed to deliver the data. We also wanted the ability to scale in the future to 200 and 400G Ethernet and support migration to tier 2 or cloud as data ages off.”

The Cloud migration uses an Ngenea product. Pavilion said it has “multiple” deployments in the Media and Entertainment vertical.

Shorts

Data protector Acronis has made Acronis Cyber Protect, its cloud offering sold through service providers, available as an on-premises beta.

Acronis has also struck yet another sports sponsorship deal. This time it’s with AFC Ajax, the Dutch professional football club. The club has yet to schedule its first post-COVID match.

IBM’s Cloud Pak for Data 3.0 is a data and AI platform that containerises multiple offerings for delivery as microservices and runs on the Red Hat OpenShift Container Platform. It includes Actifio’s Virtual Data Pipeline (VDP) to provision and refresh virtual test environments in minutes, enabling up to 95 per cent storage capacity savings compared with not using VDP.

Enterprise data cataloguer Alation is working with Databricks to provide data teams with a platform to identify and govern cloud data lakes; discover and leverage the best data for data science and analytics; and collaborate on data to deliver high-quality predictive models and business insights.

NoSQL database supplier DataStax today announced the private beta of Vector, an AIOps service for Apache Cassandra. Vector continually assesses the behaviour of a Cassandra cluster to provide developers and operators with automated diagnostics and advice.

Recursion, a digital biology company industrialising drug discovery through a combination of automation, AI and machine learning (ML), is using DDN EXAScaler ES400NV and ES7990X parallel file system appliances, since scaled to 2PB of capacity, for staging ML models. An all-flash layer acts as a front end to the disk-backed file system. The first 64K of each file is stubbed to this flash layer, which accelerates access to the start of the data before the rest streams from spinning disk.
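The stubbing idea can be sketched in a few lines. This is a toy model using hypothetical in-memory tiers, not DDN’s actual implementation:

```python
STUB_SIZE = 64 * 1024  # first 64K of every file is pinned to the flash tier

def read_file(flash_tier, disk_tier, name, offset, length):
    """Serve the initial 64K from flash, then stream the remainder from disk."""
    out = b""
    if offset < STUB_SIZE:  # request starts inside the flash stub
        take = min(length, STUB_SIZE - offset)
        out += flash_tier[name][offset:offset + take]
        offset += take
        length -= take
    if length > 0:          # anything beyond the stub comes from disk
        out += disk_tier[name][offset:offset + length]
    return out
```

The point of the stub is that a file’s opening bytes come back at flash latency, hiding the slower spin-up of the sequential stream from disk.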

Data protector Druva has received an NPS score of 88. NPS (Net Promoter Score) ranges from -100 to 100, so 88 is a strongly positive score.
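For reference, NPS is the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6), computed over all 0-10 survey responses:

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 90% promoters, 10% passives (7-8), no detractors -> NPS of 90
print(nps([10] * 9 + [7]))  # 90
```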

Google has announced the beta launch of Filestore High Scale, a GCP file storage product, which includes Elastifile’s scale-out file storage capability. Google completed its acquisition of Elastifile in August 2019. The Filestore High Scale tier adds the ability to deploy shared file systems that can scale out to hundreds of thousands of IOPS, tens of GB/s of throughput and hundreds of TB of capacity.

Komprise has claimed it saw 400 per cent revenue growth Y/Y in 2020’s first quarter. It also added DataCentrix and Vox Telecom as resellers in South Africa.

Composable systems technology developer Liqid has signed up Climb Channel Solutions to distribute Liqid products.

In-memory database supplier MemSQL has announced v7.1 of its software. This delivers SingleStore, an extension of MemSQL’s columnstore technology that includes support for indexes, unique keys, upserts, seeks, and fast, highly selective, nested-loop-style joins. It also provides fast disaster recovery failback, MySQL language support and the ability to back up data incrementally to more environments: Amazon S3, Azure Blob Store, and Google Cloud Platform.

Netlist announced that the U.S. Court of Appeals for the Federal Circuit (Federal Circuit) has affirmed the U.S. Patent Trial and Appeal Board’s (PTAB) decision upholding the validity of Netlist’s U.S. 7,619,912 (‘912) patent. This was a win over Google, which has used Netlist technology described in the patent. The way is clear for some kind of money flow from Google to Netlist, potentially in the multi-million dollar area.

Nutanix has added capabilities to its Desktop as a Service (DaaS) solution Xi Frame. These include enhanced onboarding for on-premises desktop workloads on Nutanix AHV, expanded support for user profile management, the ability to convert Windows Apps into Progressive Web Apps (PWA), and increased regional data centre support to 69 regions across Microsoft Azure, Google Cloud Platform and Amazon Web Services (AWS).

Entertainment and media workflow object storage supplier Object Matrix says its products now support the recently launched Adobe Productions workflow for Adobe Premiere Pro.

Telecoms operator BSO announced the launch of an Object Storage product in public cloud mode, called BSO.st. The tech is based on the software-defined storage developed by the French company OpenIO.

PlanetScale announced the beta release of PlanetScaleDB for Kubernetes, which allows organisations to host their data in their own network perimeter and deploy databases with just a few clicks using the PlanetScale control plane and operator. PlanetScaleDB for Kubernetes is a fully managed MySQL compatible database-as-a-service for companies looking to deploy distributed containerised applications.

HCI supplier Scale Computing has added Mustek as a distribution partner in South Africa and Titan Data Solutions as a distributor in the UK.

Object (and file) storage supplier and orchestrator Scality announced an investment in Fondation Inria, a French national research institute for digital sciences. Scality is bringing both financial backing and collaboration to help support multi-disciplinary research and innovation initiatives in mind-body health, precision agriculture, neurodegenerative diagnostics, and privacy protection.

Cloud data warehouser Snowflake has announced the launch of its Snowflake Partner Network (SPN), an ecosystem of Technology and Services partners for customers.

Samsung-backed all flash key:value store startup Stellus Technologies laid off its entire sales and marketing department in April, according to a senior ex-employee. Stellus launched its first product at the beginning of February. So sales must presumably have been catastrophically bad for the entire sales and marketing team to be laid off.

ReRAM developer Weebit Nano is going to place circa $6.6 million worth of new shares via a two-tranche placement. It will also conduct a non-underwritten Share Purchase Plan to raise a further $500,000. The $7.1m cash will be used to complete its memory module development for the embedded memory market, transfer the tech to a production facility, and continue selector development at Leti for the standalone memory market. Some of it will also go to sales and marketing and general working capital.

Veeam says Veeam Backup for AWS v2 is generally available and Veeam has achieved AWS Storage Competency status. It supports changed block tracking (CBT) API to shrink backup windows. The product makes application consistent snapshots and backups of running Amazon EC2 instances without shutting down or disconnecting attached Amazon EBS volumes.

Veeam Backup for AWS can be implemented as a standalone AWS backup and disaster recovery system for AWS-to-AWS backup, or integrated with the Veeam Platform.

Veeam has announced new Veeam Availability Orchestrator v3 with full recovery orchestration support for NetApp ONTAP snapshots, a new Disaster Recovery Pack at a lower price, and the capability of automatically testing, dynamically documenting and executing disaster recovery plans.

Data warehouser Yellowbrick Data is offering multiple petabyte (PB) capacity on its new hybrid data warehouse 3-chassis configuration. It claims this provides unparalleled single-warehouse capacity, with support for 3.6PB of user data in a 14U rack form factor. This 3-chassis instance has a maximum node count of 45 in that 14U and also supports 45 concurrent, single-worker queries on one system.

The actual chassis product is the 2300 series. Each node delivers 36 vCPUs per node (that’s 2 vCPUs per physical core) and has 8 NVMe SSD slots. There are HDR, VHDR and EHDR models; High Density, Very High Density and Extremely High Density. The HW differences are essentially the NVMe densities shipping on each node.
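The quoted figures imply straightforward per-node numbers. This is raw division, ignoring any compression or redundancy overhead Yellowbrick may apply:

```python
user_data_tb = 3600       # 3.6PB quoted for the 3-chassis configuration
nodes = 45                # maximum node count in 14U
ssd_slots_per_node = 8
vcpus_per_node = 36

per_node_tb = user_data_tb / nodes              # 80.0 TB of user data per node
per_slot_tb = per_node_tb / ssd_slots_per_node  # 10.0 TB per NVMe slot at max density
total_vcpus = nodes * vcpus_per_node            # 1620 vCPUs across the rack
print(per_node_tb, per_slot_tb, total_vcpus)
```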

People

Dremio, which produces data lake software, has appointed Ohad Almog as VP of Customer Success, Colleen Blake as VP of People and Thirumalesh Reddy as VP of Engineering. The company recently raised $70m in a Series C funding round.

Igneous co-founder and board member Kiran Bhageshpur has relinquished his CEO slot to board member Dean Darwin. VP Products Christian Smith has left Igneous and is now a storage business development person at AWS. B&F has put out feelers to find out what’s going on.

File lifecycle management supplier Komprise has appointed Clare Loveridge as VP EMEA Sales. She comes from ExaGrid and before that Cloudcheckr, Nimble Storage and Data Domain.

It’s harsh at the edge: Dell gives VxRail a boost with PCIe 4, K8S, Optane DIMM, GPU and ruggedisation

Dell has extended its VxRail hyperconverged infrastructure systems with support for AMD EPYC processors, PCIe Gen 4.0, Kubernetes, Optane, more GPUs and ruggedised deployments, making it more relevant to the edge.

The ruggedised systems form a new VxRail product type, the EPYC-based system is a specific new E Series configuration, and the other additions apply to VxRail systems generally.

Tom Burns, SVP and GM for Integrated Products & Solutions at Dell, said in a canned statement: “With the new ruggedized VxRail systems, location and conditions don’t matter.” He’s not kidding.

There are five existing VxRail product flavours:

  • E Series – 1U/1Node with an all-NVMe option and T4 GPUs for use cases including artificial intelligence and machine learning
  • P Series – Performance-intensive 2U/1Node platform with an all NVMe option, configurable with 1, 2 or 4 sockets optimised for intensive workloads such as databases
  • V Series – VDI-optimised 2U/1Node platform with GPU hardware for graphics-intensive desktops and workloads
  • S Series – Storage dense 2U/1Node platform for applications such as virtualised SharePoint,  Exchange, big data, analytics and video surveillance
  • G Series – Compute dense 2U/4Node platforms for general purpose workloads.

Get rugged

The ruggedised VxRail boxes are a sixth variant: the D Series comes in a 1U short-depth – 20-inch – box that can operate at an altitude of up to 15,000 feet [2.8 miles], sustain a 40G operational shock (all-flash model) and operate within a temperature envelope of 5 to 131 degrees Fahrenheit [-15° Celsius to 55° Celsius], withstanding the extremes for up to eight hours. They also resist sand and dust ingress, claimed Dell.

VxRail D Series systems come in all-flash [SAS SSD] and hybrid SSD/disk versions, and can be used outside data centres in industrial and external environments within the limits above – it can be harsh at the edge.

EPYC, Optane DIMM, Quadro GPUs and LCM

The E Series E665 system supports AMD EPYC processors, a first for VxRail, with up to 64 cores, and also PCIe Gen 4.0, making them powerhouses and suitable, Dell suggested, for workloads with stringent performance needs, such as databases, unstructured data, VDI and HPC. Blocks & Files expects PCIe Gen 4 support to spread across the VxRail range in the next few quarters.

VxRail systems now support Optane Persistent Memory DIMMs, as well as Optane SSDs, and can deliver a claimed 90 per cent drop in latency and a sixfold IOPS increase, Dell said.

Dell said this was tested using an OLTP 4k workload on 4 x VxRail P570F systems with Optane persistent memory in app-direct mode versus a VxRail all-NVMe flash system. No actual numbers were revealed. The available data suggests Optane DIMM-enhanced VxRail systems are good for in-memory databases and other workloads needing low latencies.

The VxRail systems also support Nvidia Quadro RTX GPUs and vCPUs to accelerate rendering, AI, and graphics workloads. 

Dell has announced Lifecycle Management (LCM) software for VxRail which can streamline updates by running pre-upgrade health checks on demand. It produces continually-validated VxRail system states to reduce downtime, with non-disruptive upgrades.

DTCP and Kubernetes

The Dell Technologies Cloud Platform (DTCP) on VxRail supports Kubernetes clusters, with VMware Cloud Foundation (VCF) v4.0 and VxRail 7.0. VCF can operate with a Consolidated Design architecture, in which compute workloads are co-resident with management workloads in the management domain. This is said to be good for general-purpose, virtualised workloads.

Alternatively it can have a Standard Design architecture with independent management and workload domains. This enables it to run multiple traditional and cloud native workloads, such as Horizon VDI and vSphere with Kubernetes.

It should be possible to upgrade from Consolidated Design to Standard Design in a future release of VxRail software.

DTCP starts at the 4-node level. The latest VxRail HCI System Software update, Nvidia Quadro RTX GPUs and Optane DC Persistent Memory options are available globally now. The VxRail D Series and the E Series with EPYC processors will be available globally on June 23 this year.

Comment

Dell VxRail HCI is a stronger offering with these additions. The Optane DIMM, PCIe 4.0 and new GPU support make it a low-latency, IOPS-munching machine suitable for compute and graphics-intense workloads. Enterprise data centre admins should appreciate the smoother and more certain update routines with the LCM software. 

Both Dell and its channel sales force should also appreciate the ability to sell HCI systems with VMware and VxRail (HW + SW) working together in a neat package.