SmartX has announced what may be the fastest hyperconverged infrastructure appliance in the world – if its reported speeds are verified.
The Chinese hyperconverged vendor has launched a Halo P product using Optane DIMM caching to push out 1.2 million IOPS with 100μs latency and 25GB/sec bandwidth from a three-node Lenovo-based system, using NVMe SSDs.
Kyle Zhang, co-founder and CTO of SmartX, provided a quote: “We have seen that the introduction of new storage technologies can greatly improve the performance of HCI system and address the real-workload challenges for critical applications. In the future, SmartX will collaborate with Intel and other leading industry leaders to introduce more advanced technologies to lead the next-level innovations in HCI.”
SmartX Optane diagram
How does SmartX get latency down to that level? It has gone the extra mile with its SMTX OS, which uses the Optane DC Persistent Memory DIMMs in byte-addressable App Direct (DAX) mode. Written data (VM IO) is persisted in a node’s Optane DIMM memory cache, and is also replicated to the other nodes using RDMA, which shortens the replication delay before the write is acknowledged.
Cached data is written down to SSDs as it cools, and promoted back to Optane if it is re-accessed.
The SMTX OS uses the byte-addressability of persistent memory to redesign its journal around 64-byte alignment instead of 4KB (SSD-style) alignment, reducing write amplification for small (sub-4KB) journal entries.
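As a minimal sketch of that arithmetic (entry sizes invented for illustration), padding each journal entry to the device’s minimum write granularity shows why 4KB alignment amplifies small writes so badly:

```python
def padded_size(entry_bytes: int, alignment: int) -> int:
    """Round an entry up to the device's minimum write granularity."""
    return -(-entry_bytes // alignment) * alignment  # ceiling division

for entry in (64, 300, 1024):  # hypothetical sub-4KB journal entries
    ssd = padded_size(entry, 4096)  # block-style (SSD) journal alignment
    pmem = padded_size(entry, 64)   # byte-addressable Optane DIMM journal
    print(f"{entry:>5}B entry: SSD journal writes {ssd}B (x{ssd/entry:.1f}), "
          f"PMEM journal writes {pmem}B (x{pmem/entry:.1f})")
```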
Also, storage virtualization is devolved from the virtual machine (VM) to the storage software stack through an SMTX ELF boost mode, avoiding the performance overhead of I/O requests passing through the VMs. Memory is shared by the VM and the storage system to avoid memory copies on the IO path.
SmartX IO path diagram.
RDMA over Converged Ethernet (RoCE) is used to accelerate network IO requests with the protocol operating on the network card.
SmartX claimed its Halo P appliance is powerful enough for OLTP database and machine learning workloads. It can also support more virtual machines than its raw capacity might suggest.
The company has an office in Palo Alto and claims it has the biggest hyperconverged system deployment in China – China Unicom’s “Wo Cloud” – as well as customers in finance, manufacturing and real estate. It has partnerships with Citrix, Mellanox, Commvault and Rancher in the fields of servers, high-speed networks, virtualization, disaster recovery, cloud computing and containers.
Scale-out filesystem supplier Qumulo has launched Shift for AWS. This moves files from any Qumulo on-premises or public cloud cluster into Amazon Simple Storage Service (S3), transforming the files into natively accessible objects in S3 buckets.
Once in the AWS cloud, this object data can be stored as an archive or used by AWS-resident applications and services such as Sagemaker and Rekognition. It cannot be automatically moved back to Qumulo though. Updated files are written as new objects.
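Because the shifted files land as native S3 objects, any standard SDK call or AWS service can consume them directly – a minimal sketch using boto3 (bucket and key names are hypothetical):

```python
import boto3

s3 = boto3.client("s3")
# Read a file Shift has landed in S3; no Qumulo software is needed.
resp = s3.get_object(Bucket="qumulo-shift-target", Key="projects/scene42.mov")
data = resp["Body"].read()  # plain object bytes
```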
Barry Russell, SVP and GM of cloud at Qumulo, emitted a canned quote: “With Qumulo Shift, customers can now move data faster and no longer worry about being stuck in legacy proprietary file data formats … . Leveraging our work with AWS, we are now able to integrate with Amazon S3 natively and enable … workloads to use cloud applications and services at any scale in Qumulo or S3.”
Qumulo says users with large video projects can move the files into AWS and burst rendering jobs to thousands of AWS compute nodes. Enterprises can migrate large datasets, such as data lakes, to AWS that could exceed the scale capabilities of other file system products.
Blocks & Files suggests that AWS’s own file services, such as EFS, could perhaps be supported by Qumulo via the NFS format. EFS doesn’t support NFSv2 or NFSv3, but does support NFSv4.0 and NFSv4.1, except for certain features.
Qumulo’s Molly Presley, Global Product Marketing Head, disagrees. If Qumulo users want file-level operations in the cloud they may as well spin up a Qumulo file system in AWS. Also, Amazon’s EFS doesn’t support SMB, nor volumes as large as Qumulo’s. In short, it’s not a good idea.
Qumulo has gained an Amazon Well-Architected designation, which means customers can reliably run Qumulo file services in cloud-native form on AWS.
The Shift product is included at no charge with an updated Qumulo file system which will be available in July this year.
All-flash array startup Lightbits Labs has launched a software update that provides NVMe-based persistent volume storage for Kubernetes.
It says LightOS v2.0 provides virtual NVMe volumes to Kubernetes, delivering low latency and high performance. It also provides clustering and high availability via target-side storage server failover. This is done via the standard Container Storage Interface (CSI) plug-in route.
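Since provisioning goes through the standard CSI route, a stateful app would request a LightOS-backed volume like any other CSI volume. A hedged sketch using the Kubernetes Python client (the StorageClass name is an invented placeholder, not Lightbits’ documented identifier):

```python
from kubernetes import client, config

config.load_kube_config()

# Request a 100Gi NVMe-backed volume through an assumed CSI StorageClass.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="lightos-nvme",  # hypothetical StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)
```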
Kam Eshghi.
Kam Eshghi, chief strategy officer at Lightbits Labs added: “At cloud scale, everything fails. LightOS 2.0 is the industry’s first NVMe/TCP scale-out clustered storage solution – protecting against data loss and avoiding service interruptions at scale in the presence of SSD, server, storage, or network failures.”
Lightbits distinguishes its array with NVMe/TCP support, providing NVMe-oF access across Ethernet TCP/IP network links.
Many, many suppliers provide persistent volume support for K8s – Portworx, StorageOS, Dell with PowerStore, NetApp, VAST Data and more – it is becoming table stakes in the container storage game.
Lightbits claims to be different from the pack because LightOS is clustered and supports rapid node migration, workload rebalancing and recovery from failure without copying data over the network. If a compute node fails, data is moved virtually, by re-pointing volumes at another container.
LightOS 2.0 is automatically optimised for I/O intensive compute clusters, such as Kafka, Cassandra, MySQL, MongoDB, and time series databases. Each storage server in the cluster can support up to 64,000 namespaces and 16,000 connections.
LightOS v2.0 supports Kubernetes v1.13 and v1.15 to v1.18 and later, for any volume size, number of volumes or Kubernetes cluster size. As well as the CSI interface, it supports stateful containers via a Cinder plugin. The v2.0 software is available now.
We start off today’s roundup with news about Samsung facing production problems with its 128-layer 3D NAND. We also take a look at a Sony business using a fast Pavilion array for capturing the video points in a 3D space over time.
Samsung and string-stacking
Wells Fargo senior analyst Aaron Rakers has said Samsung may be facing production yield challenges with its gen 6, 128-layer V-NAND (3D NAND) technology. This is a single-stack technology, whereas Samsung’s competitors build 100+ layer 3D NAND dies by stacking smaller layer-count blocks on top of each other – so-called string-stacking.
Samsung’s gen 6 V-NAND is 128 layers; gen 7 is 166 layers.
Apparently a single stack etch through 128 layers is taking twice as long as the same etch through 96 layers. The etch creates a conductive vertical channel through the layers. If the yield from the wafers is too low, then Samsung’s costs go up.
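The implied per-layer slowdown is worse than linear – a quick worked check of the quoted etch times:

```python
layers_gen5, layers_gen6 = 96, 128
etch_time_ratio = 2.0  # "twice as long", per the report
per_layer_penalty = etch_time_ratio / (layers_gen6 / layers_gen5)
print(f"per-layer etch time is {per_layer_penalty:.1f}x the 96-layer rate")  # 1.5x
```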
Rakers suggested Samsung could change to string stacking with its Gen 7, 160-layer 3D NAND die. String-stacking could cost up to 30 per cent more than single-stacking, so Samsung will be motivated to get its single stack etching working.
Sammobile reports Samsung has set up a task force to work through the yield problems.
Pavilion and Sony
Sony Innovation Studios has picked Pavilion Data’s Hyperparallel Flash Array (HFA) to store data from real-time volumetric virtual production with its AtomView software. Volumetric capture is a performance-hungry, latency-sensitive application used to render 3D virtual and mixed environments.
Volumetric capture records the visual image points in a 3D space (volume) over time and in minute detail. AtomView is point-cloud rendering, editing and colouring software that enables content creators to visualise, edit, colour-correct and manage volumetric data. It can combine multiple volumetric data sets captured from different angles into a single output for use in virtual film productions, video games and interactive experiences with true photoreal cinematic quality.
The deployment was in partnership with Alliance Integrated Technologies and Pixit Media’s PixStor product.
Ben Leaver, CEO of Pixit Media, said: “Volumetric capture brings a new paradigm of size and information capable of being stored and requiring the highest performance in render speeds. With an approach that mimics a director class core switch architecture, Pavilion’s approach to multi-line card, multi-controller design means PCIe speeds to each drive, and massive bandwidth to the network over a low latency RDMA protocol.”
Billy Russell, CTO at Alliance IT, said: “It was clear that a 100G ethernet infrastructure was needed to deliver the data. We also wanted the ability to scale in the future to 200 and 400G Ethernet and support migration to tier 2 or cloud as data ages off.”
The cloud migration uses an Ngenea product. Pavilion said it has “multiple” deployments in the media and entertainment vertical.
Shorts
Data protector Acronis has made Acronis Cyber Protect – its cloud offering delivered through service providers – available in beta for on-premises deployment.
Acronis has also struck yet another sports sponsorship deal. This time it’s with AFC Ajax, the Dutch professional football club. The club has yet to schedule its first post-COVID match.
IBM’s Cloud Pak for Data 3.0 is a data and AI platform that containerises multiple offerings for delivery as microservices and runs on the Red Hat OpenShift Container Platform. It includes Actifio’s Virtual Data Pipeline (VDP) to provision and refresh virtual test environments in minutes, enabling up to 95 per cent storage capacity savings compared with not using VDP.
Enterprise data cataloguer Alation is working with Databricks to provide data teams with a platform to identify and govern cloud data lakes; discover and leverage the best data for data science and analytics; and collaborate on data to deliver high-quality predictive models and business insights.
NoSQL database supplier DataStax today announced the private beta of Vector, an AIOps service for Apache Cassandra. Vector continually assesses the behaviour of a Cassandra cluster to provide developers and operators with automated diagnostics and advice.
Recursion, a digital biology company industrialising drug discovery through a combination of automation, AI and machine learning (ML), is using DDN EXAScaler ES400NV and ES7990X parallel filesystem appliances, since scaled to 2PB of capacity, for staging ML models. An all-flash layer fronts the disk-backed file system. The first 64K of each file is stubbed to this layer, accelerating access to the start of the data before the rest streams from spinning disk.
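A hedged sketch of that stubbing idea (file paths are hypothetical; this illustrates the concept, not DDN’s implementation): serve the head of a file from the flash layer while the remainder streams from disk.

```python
STUB = 64 * 1024  # first 64K of each file lives on the flash layer

def read_head_then_tail(flash_path: str, disk_path: str, length: int) -> bytes:
    with open(flash_path, "rb") as flash, open(disk_path, "rb") as disk:
        head = flash.read(min(length, STUB))  # fast first bytes from flash
        disk.seek(len(head))                  # remainder from spinning disk
        return head + disk.read(length - len(head))
```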
Data protector Druva has received an NPS score of 88. NPS (Net Promoter Score) ranges from -100 to 100, so 88 is a high positive score.
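For reference, the standard NPS arithmetic (survey counts invented for illustration) is the percentage of promoters minus the percentage of detractors:

```python
promoters, passives, detractors = 900, 80, 20  # hypothetical survey counts
total = promoters + passives + detractors
nps = round(100 * (promoters - detractors) / total)
print(nps)  # 88 for this invented split
```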
Google has announced the beta launch of Filestore High Scale, a GCP file storage product, which includes Elastifile’s scale-out file storage capability. Google completed its acquisition of Elastifile in August 2019. The Filestore High Scale tier adds the ability to deploy shared file systems that can scale-out to hundreds of thousands of IOPS, 10s of GB/s of throughput, and 100s of TBs.
Komprise has claimed it saw 400 per cent revenue growth Y/Y in 2020’s first quarter. It also added DataCentrix and Vox Telecom as resellers in South Africa.
Composable systems technology developer Liqid has signed up Climb Channel Solutions to distribute Liqid products.
In-memory database supplier MemSQL has announced v7.1 of its software. This delivers SingleStore, an extension of MemSQL’s columnstore technology that includes support for indexes, unique keys, upserts, seeks, and fast, highly selective, nested-loop-style joins. It also provides fast disaster recovery failback, MySQL language support and the ability to back up data incrementally to more environments: Amazon S3, Azure Blob Store, and Google Cloud Platform.
Netlist announced that the U.S. Court of Appeals for the Federal Circuit (Federal Circuit) has affirmed the U.S. Patent Trial and Appeal Board’s (PTAB) decision upholding the validity of Netlist’s U.S. 7,619,912 (‘912) patent. This was a win over Google, which has used Netlist technology described in the patent. The way is clear for some kind of money flow from Google to Netlist, potentially in the multi-million dollar area.
Nutanix has added capabilities to its Desktop as a Service (DaaS) solution Xi Frame. These include enhanced onboarding for on-premises desktop workloads on Nutanix AHV, expanded support for user profile management, the ability to convert Windows Apps into Progressive Web Apps (PWA), and increased regional data centre support to 69 regions across Microsoft Azure, Google Cloud Platform and Amazon Web Services (AWS).
Entertainment and media workflow object storage supplier Object Matrix says its products now support the recently launched Adobe Productions workflow for Adobe PremierePro.
Telecoms operator BSO announced the launch of an Object Storage product in public cloud mode, called BSO.st. The tech is based on the software-defined storage developed by the French company OpenIO.
PlanetScale announced the beta release of PlanetScaleDB for Kubernetes, which allows organisations to host their data in their own network perimeter and deploy databases with just a few clicks using the PlanetScale control plane and operator. PlanetScaleDB for Kubernetes is a fully managed MySQL compatible database-as-a-service for companies looking to deploy distributed containerised applications.
HCI supplier Scale Computing has added Mustek as a distribution partner in South Africa and Titan Data Solutions as a distributor in the UK.
Object (and file) storage supplier and orchestrator Scality announced an investment in Fondation Inria, the foundation of the French national research institute for digital sciences. Scality is bringing both financial backing and collaboration to help support multi-disciplinary research and innovation initiatives in mind-body health, precision agriculture, neurodegenerative diagnostics, and privacy protection.
Cloud data warehouser Snowflake has announced the launch of its Snowflake Partner Network (SPN), an ecosystem of Technology and Services partners for customers.
Samsung-backed all-flash key:value store startup Stellus Technologies laid off its entire sales and marketing department in April, according to a senior ex-employee. Stellus launched its first product at the beginning of February, so sales must presumably have been catastrophically bad for the entire sales and marketing team to be laid off.
ReRAM developer Weebit Nano is going to place circa $6.6 million worth of new shares via a two-tranche placement. It will also conduct a non-underwritten Share Purchase Plan to raise a further $500,000. The circa $7.1m cash will be used to complete its memory module development for the embedded memory market, transfer the tech to a production facility, and continue selector development at Leti for the standalone memory market. Some of it will also go to sales and marketing and general working capital.
Veeam says Veeam Backup for AWS v2 is generally available and Veeam has achieved AWS Storage Competency status. It supports a changed block tracking (CBT) API to shrink backup windows. The product makes application-consistent snapshots and backups of running Amazon EC2 instances without shutting down or disconnecting attached Amazon EBS volumes.
Veeam Backup for AWS can be implemented as a standalone AWS backup and disaster recovery system for AWS-to-AWS backup, or integrated with the Veeam Platform.
Veeam has announced new Veeam Availability Orchestrator v3 with full recovery orchestration support for NetApp ONTAP snapshots, a new Disaster Recovery Pack at a lower price, and the capability of automatically testing, dynamically documenting and executing disaster recovery plans.
Data warehouser Yellowbrick Data is offering multiple petabyte (PB) capacity on its new hybrid data warehouse 3-chassis configuration. It claims this offers unparalleled single-warehouse capacity, with support for 3.6PB of user data in a 14U rack form factor. The 3-chassis instance has a maximum node count of 45 in that 14U and also supports 45 concurrent, single-worker queries on one system.
The actual chassis product is the 2300 series. Each node delivers 36 vCPUs (2 vCPUs per physical core) and has 8 NVMe SSD slots. There are HDR, VHDR and EHDR models – High Density, Very High Density and Extremely High Density. The hardware differences are essentially the NVMe densities shipped on each node.
People
Dremio, which produces data lake SW, has appointed Ohad Almog as VP of Customer Success, Colleen Blake as Vice President of People and Thirumalesh Reddy as VP of Engineering. The company recently raised $70m in a Series C round of funding.
Igneous co-founder and board member Kiran Bhageshpur has relinquished his CEO slot to board member Dean Darwin. VP Products Christian Smith has left Igneous and is now a storage business development person at AWS. B&F has put out feelers to find out what’s going on.
File lifecycle management supplier Komprise has appointed Clare Loveridge as VP EMEA Sales. She comes from ExaGrid and before that Cloudcheckr, Nimble Storage and Data Domain.
Dell has extended its VxRail hyperconverged infrastructure systems with support for AMD EPYC processors, PCIe Gen 4.0, Kubernetes, Optane, more GPUs and ruggedised deployments, making it more relevant to the edge.
The ruggedised systems form a new VxRail product type, the EPYC-based system is a new E Series configuration, and the other additions apply to VxRail systems generally.
Tom Burns, SVP and GM for Integrated Products & Solutions at Dell, said in a canned statement: “With the new ruggedized VxRail systems, location and conditions don’t matter.” He’s not kidding.
There are five existing VxRail product flavours:
E Series – 1U/1Node with an all-NVMe option and T4 GPUs for use cases including artificial intelligence and machine learning
P Series – Performance-intensive 2U/1Node platform with an all NVMe option, configurable with 1, 2 or 4 sockets optimised for intensive workloads such as databases
V Series – VDI-optimised 2U/1Node platform with GPU hardware for graphics-intensive desktops and workloads
S Series – Storage dense 2U/1Node platform for applications such as virtualised SharePoint, Exchange, big data, analytics and video surveillance
G Series – Compute dense 2U/4Node platforms for general purpose workloads.
Get rugged
The ruggedised VxRail boxes are a sixth variant. The D Series comes in a 1U short-depth (20-inch) box that can operate at altitudes up to 15,000 feet (about 2.8 miles), sustain a 40G operational shock (all-flash model) and run within a temperature envelope of 5 to 131 degrees Fahrenheit (-15°C to 55°C), withstanding the extremes for up to eight hours. They also resist sand and dust ingress, Dell claimed.
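A quick check of the quoted unit conversions:

```python
print(15_000 / 5_280)      # altitude: ~2.84 miles
print((5 - 32) * 5 / 9)    # -15.0 degrees Celsius
print((131 - 32) * 5 / 9)  # 55.0 degrees Celsius
```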
VxRail D Series systems come in all-flash (SAS SSD) and hybrid SSD/disk versions, and can be used outside data centres in industrial and external environments within the limits above – it can be harsh at the edge.
EPYC, Optane DIMM, Quadro GPUs and LCM
The E Series E665 system supports AMD EPYC processors, a first for VxRail, with up to 64 cores, and also PCIe Gen 4.0, making it a powerhouse suitable, Dell suggested, for workloads with stringent performance needs, such as databases, unstructured data, VDI and HPC. Blocks & Files expects PCIe Gen 4 support to spread across the VxRail range in the next few quarters.
VxRail systems now support Optane Persistent Memory DIMMs, as well as Optane SSDs, and can deliver a claimed 90 per cent drop in latency and a sixfold IOPS increase, Dell said.
Dell said this was tested using an OLTP 4K workload on four VxRail P570F systems with Optane persistent memory in App Direct mode versus a VxRail all-NVMe flash system. No actual numbers were revealed. The available data suggests Optane DIMM-enhanced VxRail systems are good for in-memory databases and other workloads needing low latencies.
The VxRail systems also support Nvidia Quadro RTX GPUs and vCPUs to accelerate rendering, AI, and graphics workloads.
Dell has announced Lifecycle Management (LCM) software for VxRail which can streamline updates by running pre-upgrade health checks on demand. It produces continually-validated VxRail system states to reduce downtime, with non-disruptive upgrades.
DTCP and Kubernetes
The Dell Technologies Cloud Platform (DTCP) on VxRail supports Kubernetes clusters, with VMware Cloud Foundation (VCF) v4.0 and VxRail 7.0. VCF can operate with a Consolidated Design architecture, in which compute workloads are co-resident with management workloads in the management domain. This is said to be good for general-purpose, virtualised workloads.
Alternatively it can have a Standard Design architecture with independent management and workload domains. This enables it to run multiple traditional and cloud native workloads, such as Horizon VDI and vSphere with Kubernetes.
It should be possible to upgrade from Consolidated Design to Standard Design in a future release of VxRail software.
DTCP starts at the 4-node level. The latest VxRail HCI System Software update, Nvidia Quadro RTX GPUs and Optane DC Persistent Memory options are available globally now. The VxRail D Series and the E Series with EPYC processors will be available globally on June 23 this year.
Comment
Dell VxRail HCI is a stronger offering with these additions. The Optane DIMM, PCIe 4.0 and new GPU support make it a low-latency, IOPS-munching machine suitable for compute and graphics-intense workloads. Enterprise data centre admins should appreciate the smoother and more certain update routines with the LCM software.
Both Dell and its channel’s sales force should also appreciate the ability to sell HCI systems with VMware and VxRail (HW + SW) working together in a neat package.
Storage startup Nebulon plans to come out of stealth on Wednesday with a hardware-assisted, scale-out, all-flash storage array featuring real-time AI Ops management and a cloud control plane.
Nebulon was started up by 3PAR veterans and first hove into view in November 2018. It revealed initial details about its technology in January this year, via this website.
The company said it plans to launch this week, on June 24, at HPE’s virtual Discover event. Blocks & Files has had a look at some of the material that’s come out from the firm itself as well as the HPE Discover agenda topics to arrive at a view of Nebulon’s tech.
SPU hardware
The hardware is based on on-premises, commodity, 2U, 24-slot x86 server boxes that hook up to accessing application servers across Ethernet. This HW forms a scale-out storage resource, to petabyte levels, using add-in PCIe cards the firm has dubbed Nebulon Storage Processing Units (SPUs).
This is par for the course for the 3PAR veterans at the startup, some of whom developed an ASIC to handle data reduction operations for the 3PAR array.
Nebulon’s principal product manager, Tobias Flitsch, wrote late last year: “Many modern workloads are built for shared-nothing environments for which these architectures [SAN, SDS, HCI] introduce unnecessary capacity and performance overheads. You know what I’m talking about if you’ve ever tried to run Apache Cassandra, Apache CouchDB, Apache Kafka, etc. on a shared storage array.”
B&F thinks the SPUs will compose Nebulon storage and may sub-divide the storage pool into shared and non-shared storage resources. It could then support both shared storage workloads and shared-nothing workloads such as those Flitsch mentioned.
At HPE Discover the SPUs will be installed in ProLiant DL380 servers. These will be all-flash storage servers, offering sub-millisecond latency and mission-critical reliability. We estimate this could mean six nines of availability or more, plus NVMe drives.
To have affordable flash capacity at petabyte scale means these storage servers must surely use QLC flash and employ data reduction technology. The SPUs could help with data reduction processing to offload the storage server’s base X86 CPUs.
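Some back-of-envelope arithmetic shows why QLC plus data reduction is the plausible route to petabyte scale in a 24-slot 2U box (the drive capacity and reduction ratio below are our assumptions, not Nebulon figures):

```python
slots, qlc_drive_tb, reduction_ratio = 24, 15.36, 3.0  # assumed values
raw_tb = slots * qlc_drive_tb
print(f"raw: {raw_tb:.0f}TB, effective: ~{raw_tb * reduction_ratio / 1000:.1f}PB")
```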
GPUs?
Siamak Nazari, Nebulon’s CEO, last week tweeted: “What we are doing at Nebulon couldn’t have been done even two years ago, the technology require to create the cloud-defined storage solution simply did not exist.”
Blocks & Files speculates this almost certainly refers to QLC (4bits/cell) flash and, possibly, to GPUs.
There are two sessions at HPE Discover by Nebulon presenters which refer to GPUs: B548 and D566. Both are entitled “Honey, I Shrunk the Enterprise Storage Array to a Cloud-Managed Storage GPU.” If this is not a tortured metaphor and we take the GPU reference literally, the possibility emerges that the Nebulon storage servers could have GPUs inside them (i.e. fitted to the SPUs).
The Nebulon storage server is managed through a Nebulon ON SaaS management facility or control plane and features AIOps operating in real-time. That could indicate that on-board GPUs run AI machine learning models to control, monitor and optimise the array in real-time, meaning responding to events affecting storage service delivery in seconds or less.
Using GPUs in this way – and again, their inclusion is speculation on our part – could be described as a game-changer. But it is, to say the least, an unusual idea.
The Nebulon storage server will provide storage services for Kubernetes, VMware and Microsoft environments. Nebulon has said nothing about which storage protocols will be supported. Our thinking is that block storage will be supported first, followed by file and maybe object.
Check back with B&F later this week for a deeper look – we’ll soon have a lot more detail.
Update
The Nebulon announcement on June 23 revealed that the base details described above were accurate, but there is no specific QLC flash support and no use of GPUs in the Nebulon server SAN.
IDC’s converged systems tracker for Q1 2020 shows resilient demand despite the early effects of the pandemic.
The status quo remained the status quo, with VMware leading Nutanix, and HPE and Cisco duking it out a long way behind.
The market segment revenue splits were:
HCI – $1.95bn revenues, 8.3 per cent growth Y/Y, 50.9 per cent revenue market share
Certified Reference Systems & Integrated Infrastructure – $1.46bn revenues, 4.4 per cent growth, 36.8 per cent market share
Integrated Platforms – $478m, -8.7 per cent decline, 12.3 per cent share.
Sebastian Lagana, IDC research manager, said in a statement: “While the hyperconverged system market continued to expand as enterprises seek to take advantage of software-defined infrastructure, the Certified Reference Systems and Integrated Infrastructure segment posted its best quarter of growth since 2Q19 on the strength of richly configured platform sales related to demanding workloads in industries such as healthcare and telecoms.”
Let’s look at the trends with this chart of the converged systems market.
The Certified Reference Systems boost is accompanied by an HCI drop but this is only a quarter-on-quarter change and may not be significant.
A closer look at the top three HCI vendors reveals a quarter-on-quarter revenue drop for the two leaders and for Cisco. HPE’s Q/Q change is unclear because IDC is not revealing all its numbers; HPE weaves in and out of a top-three spot, tying for third place with Cisco.
We have charted the revenues of these suppliers over the past few quarters using revenue attributed to the owner of the HCI software rather than IDC’s measure of revenue by HCI brand.
Missing sections of the HPE line are filled with dotted grey lines to show the general trend.
Basically, apart from the pandemic-driven revenue fall in the first 2020 quarter, there is little or no change in the vendors’ relative positions. (Revenues from other contenders, such as Pivot3 and Scale Computing, are too low to show in IDC’s public tables.)
Let’s check if HCI is taking more revenue from the overall external storage market. The Q1 IDC numbers for both storage categories are:
HCI – $1.95bn revenue; 8.3 per cent change Y/Y.
External storage – $6.52bn revenue, -8.2 per cent change Y/Y.
Charting the trends to see the longer term picture shows a pretty consistent gap between the two.
External storage sales are much more seasonal than HCI sales – note the Q4 peaks on external storage – but there is no general sign yet of HCI sales eating into external storage sales.
Komprise has updated its Intelligent Data Management to include AWS cloud data.
The company’s new Cloud Data Growth Analytics (CDGA) utility builds and maintains an index of cloud file/object storage items, buckets, tiers, activities and costs.
This is collated for the customer across their AWS accounts and the storage tiers inside each account. Storage admins can track file storage costs by tier and by account, see how capacity usage is trending and set cost alerts.
CDGA for AWS is available today; additional public cloud coverage is pencilled in for the end of the year.
Komprise is building a data management abstraction layer that covers on-premises and the three main public cloud environments. Its goal is to enable customers to tier files to/from on-premises file stores to on-premises object stores and public cloud object stores.
The on-premises stores will be cross-data centre and the public cloud stores will be multi-region. In both cases data location and storage tier are controlled to get the right data into the right place and cost-optimised tier of storage.
Blocks & Files diagram.
As part of this effort Komprise announced object storage tiering in December last year. This moves objects between cloud object classes and between on-premises object stores.
Elastic Data Management, launched in March, is another building block. This data mover takes NFS/SMB/CIFS file data and moves it across a network to a target NAS system, or via S3 to object storage systems or the public cloud.
Comment
We see a growing capability for file and object movement and management across the public cloud and on-premises environments. Komprise needs to extend coverage to cloud file offerings, such as Google’s Elastifile-based services, AWS Elastic File System, NetApp Cloud Volumes and other proprietary third-party file services in the public clouds.
A growing number of companies are developing combined on-premises and public cloud file and object management and access services. They include Cohesity, Hammerspace, InfiniteIO and Komprise. The contenders are barrelling into this area from different starting points and with different intentions. But they will all end up as a single class of file and object storage managers.
Data protectors, such as Commvault, Druva and Rubrik and Dell EMC – with its new DataIQ offering – are also arriving on the scene. Their entry point is backup and they are evolving towards unstructured data management. Clumio and HYCU could evolve this way too as they develop their multi-cloud/on-premises backup management software.
The result will be an almighty competitive clash as all these suppliers realise they are moving into the same general area and try to differentiate themselves with a marketing message blizzard.
WekaIO has hired Ken Grohe as president and chief revenue officer, as it looks to ramp up to IPO.
Grohe joins from Stellus Technologies, a Samsung-backed startup that shut down its entire sales and marketing team in April – just three months after launching its first product.
Ken Grohe.
Weka CEO Liran Zvibel said in a statement: “Ken’s expertise in this market and keen understanding of the customer journey will be the catalyst that drives the next phase of growth for Weka… Ken’s role will be influential in executing the company’s vision to become the de-facto solution for enterprise high-performance computing.”
Steve Duplessie, senior analyst at Enterprise Strategy Group (ESG) chipped in: “I have known Ken since our shared time at EMC, he is one of the best performing sales and marketing executives you will ever meet.”
Grohe’s career includes senior positions at SignNow, Barracuda Networks, Virident, and encompasses a 25-year stint at Dell EMC, where he finished as GM for the global flash business.
He told us he had been job hunting and had received two written and two verbal offers when the Weka offer arrived. Grohe said customers he talked to advised him to join WekaIO. Weka’s momentum impressed him, with its 600 per cent revenue growth rate in 2019, and so far maintaining growth rates in the pandemic.
“This company more resembles VMware than any other company I know,” Grohe added. He said its product is hardware-agnostic and heterogeneous, widely applicable and scales to huge levels; “We eat petabytes for lunch.”
Sales success
He noted existing OEMs, like HPE, have already invested in Weka. And he reckons it can keep the peace with partners – as VMware does – and grow the market overall for everyone.
Grohe tells us there are four general strategies for growing sales to high levels: VARs, OEMs, selling direct to masses of customers, and big-game hunting – going directly for million-dollar deals. Weka is equipped to do all four simultaneously, he said.
There’s confidence there. In his hiring announcement he declares: “I have proven success in this market, and I am grateful to join the leadership team and to have the opportunity to influence and guide Weka into the next phase of growth…The pathway to IPO is ahead of us.”
Stellus
And Stellus? Grohe declined to comment. That company has gone quiet since May and executives are not responding to our enquiries.
Stellus launched its first product at the beginning of February. So what reason would there be for CEO Jeff Treuhaft to pull the sales and marketing plug three months later? Did Samsung cut funding? Is it pandemic-related? It’s a mystery.
Intel has announced second generation Optane Persistent Memory DIMMs with the same capacity as gen 1 but faster IO. The company has also launched new SSDs.
Intel said the PMEM 200 series is optimised for use with gen 3 Xeon 4-socket processing systems, which also launched today.
The Optane PMEM 200 series DIMMs come in 128GB, 256GB and 512GB capacities and their sequential bandwidth is up to 8.10GB/sec for reads and 3.15GB/sec for writes. The first generation series runs up to 6.8GB/sec reading and can reach 2.3GB/sec writes.
We calculate the PMEM 200 is around 19 per cent faster at reads and 37 per cent faster at writes. On average, there is 25 per cent higher memory bandwidth overall, according to Intel. That’s a benefit of using 4-layer XPoint instead of the 2 layers in gen 1.
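The worked arithmetic behind those percentages, straight from the quoted bandwidths:

```python
gen1_read, gen2_read = 6.8, 8.10    # GB/sec sequential read
gen1_write, gen2_write = 2.3, 3.15  # GB/sec sequential write
print(f"read speedup:  {100 * (gen2_read / gen1_read - 1):.0f}%")    # ~19%
print(f"write speedup: {100 * (gen2_write / gen1_write - 1):.0f}%")  # ~37%
```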
Endurance varies with capacity: 128GB = 292 petabytes written (PBW); 256GB = 497PBW; 512GB = 410PBW. For comparison, the gen 1 256GB product has a 360PBW rating.
Intel says the PMem 200 series provides up to 4.5TB of memory per socket for data intensive workloads (e.g. in-memory databases, dense virtualisation, analytics, and HPC.)
3D NAND SSDs
The new data centre D7-P5500 and P5600 SSDs are U.2 format drives, built with 96-layer 3D NAND in TLC cell format and an NVMe interface running across PCIe Gen 4 with 4 lanes. The P5500 has a 1 drive write per day endurance while the P5600 has a 3DWPD rating, making it better suited to heavier write workloads.
Available capacities are 1.92TB, 3.84TB and 7.68TB for the P5500. The P5600 over-provisions for extended endurance, so its available capacities come in lower, at 1.6TB, 3.2TB and 6.4TB.
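The P5600 capacity points are consistent with the same raw NAND as the P5500 with roughly a sixth held back as extra over-provisioning (our inference, not an Intel statement):

```python
for p5500_tb in (1.92, 3.84, 7.68):
    print(f"{p5500_tb}TB -> {p5500_tb * 5 / 6:.2f}TB")  # 1.60, 3.20, 6.40
```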
The PCIe gen 4 links should enable high performance. The P5500 and P5600 deliver 7GB/sec sequential reading and 4.3GB/sec writing. Both drives provide up to 1 million random read IOPS, with the P5500 delivering up to 230,000 random write IOPS and the P5600 up to 260,000.
Zerto, the disaster recovery startup, has raised $53m in equity and venture debt financing. It will use the money to bolster its cash position and on updating its software.
Details are thin – the valuation is undisclosed and there is no momentum press release boasting of sales growth. The capital raising also comes less than three months after significant layoffs. This looks defensive, and it would be no surprise if this were a down round.
CEO Ziv Kedem said in a statement today: “This is another milestone for the business and allows us to confidently push forward with our plans to provide customers with a solution for their next generation business realities.”
Zerto raised $70m four years ago in its previous funding round. Established in 2009, Zerto is a startup only in the sense that it has not yet filed for IPO. The company claims more than 8,000 customers and over 1,500 resellers. Entirely unconfirmed and unsourced revenue estimates range from $104m-$140m.
Our take is that Zerto needs to add cloud-native and general backup strings to its bow before it can envisage an exit. Datrium and others are pushing hard on the ransomware disaster recovery front, and Kasten and Portworx show that basic containerised app DR is possible through Kubernetes.
Zerto, by contrast, looks a bit old-fashioned and expensive. The company has to develop great technology to preserve and extend the customer base into cloud-native apps. It also needs to make progress with a continuous journalling approach to general backup.
Data management startup Cohesity has hired a chief financial officer – its first since 2017. The appointment of Robert O’Donovan signals a change of gear in its financial strategy as the company prepares to emerge from VC-funded status.
Cohesity needs a CFO to prepare the company for IPO, according to our sources, who say a trade sale is the less preferred option. O’Donovan joins the company from DataStax, where he was CFO. He also held senior positions at Pivotal and Dell EMC.
Robert O’Donovan.
Cohesity CEO and founder Mohit Aron issued a quote: “At this important stage in the evolution and growth of Cohesity, Robert’s years of experience managing the financial operations for innovative technology market leaders will help support our ambitious business objectives.”
O’Donovan also issued a quote: “The scale and growth Cohesity has achieved is testament to the tremendous value that its data management innovations bring to organisations, and my role is to ensure the company has the best framework to continue transforming the industry.”
Cohesity last had a CFO in late 2017. Since then, SVP Lorenzo Montesi has held the financial reins, navigating the company through a $250m D-round in 2018 and a $250m E-round this year. Total funding now stands at $660m. Montesi will now report to O’Donovan.