
CloudCasa bets on Velero Kubernetes backup

Kubernetes logo

Open source Velero has recorded more than 100 million Docker pulls – making it one of the most popular Kubernetes application backup tools, says CloudCasa, which supports it.

CloudCasa is the Kubernetes backup business of Catalogic that’s heading for a spin-out. Its cloud-native product integrates with Kubernetes engines on AWS, Azure, and GCP, and can see all the K8s clusters running through these engines. Velero provides snapshot-based backup for Kubernetes stateful containers and can run in a cloud provider or on-premises. CloudCasa supports Velero and provides a paid-for support package.
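
For readers unfamiliar with how Velero is driven, here is a minimal sketch of requesting a Velero backup by creating its Backup custom resource with the official Kubernetes Python client. It assumes Velero is already installed in the cluster's velero namespace with a backup storage location configured; the application namespace is a hypothetical example, and this is not CloudCasa-specific code.

```python
# Sketch: create a Velero Backup custom resource via the Kubernetes API.
# Assumes Velero is installed in the "velero" namespace and a storage
# location/provider plugin is already set up.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

backup = {
    "apiVersion": "velero.io/v1",
    "kind": "Backup",
    "metadata": {"name": "myapp-nightly", "namespace": "velero"},
    "spec": {
        "includedNamespaces": ["myapp"],  # hypothetical application namespace
        "snapshotVolumes": True,          # snapshot persistent volumes via the provider plugin
        "ttl": "720h0m0s",                # keep the backup for 30 days
    },
}

api.create_namespaced_custom_object(
    group="velero.io", version="v1", namespace="velero",
    plural="backups", body=backup,
)
```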

COO Sathya Sankaran told us the Velero stats imply “at least a million clusters are downloading.”

He says he had a conversation with a VMware product manager at KubeCon who told him that “they estimate about one-third of all Kubernetes clusters have been touched by Velero and at some point have had Velero installed and running …  it’s a very substantial market presence.”

Sankaran added: “This is already a community ecosystem, driven very strongly by what the rest of the community thinks is good or bad.”

We have asked business rivals Veeam, Pure, and Trilio what they think.

Sankaran says CloudCasa for Velero is the only Kubernetes Backup-as-a-Service offering with integration across multiple public clouds and portability between them. It offers a swathe of extra features over the base Velero provision (think Red Hat and Linux).

CloudCasa for Velero features

Sankaran says: “Velero wants to become the Kubernetes backup standard. The commercial backup products are pre-packaged… Velero wants to be a plug-in engine, useable by new storage products as well as the historic incumbents.” 

The exec’s hope is that CloudCasa can overtake rivals Kasten, Portworx, and Trilio by riding what he sees as a wave of Velero adoption, particularly in the enterprise, and by offering those users a multi-cluster, anti-lock-in service. K8s app protection is different from traditional backup, says Sankaran, who claims layering it onto legacy backup is the wrong approach.

CloudCasa on backups for Kubernetes

Whether that is the wrong approach will be decided by the market – by whether enterprises agree they need dedicated (Velero-based) protection for their K8s apps, or are content with such protection coming from their incumbent data protection supplier.

VAST Data and Nvidia form SuperPOD squad

VAST Data says its all-QLC flash file storage has been certified as an Nvidia SuperPOD data store.

Nvidia’s SuperPOD houses 20 to 140 DGX A100 AI-focused GPU servers and uses Nvidia’s InfiniBand HDR (200Gbps) interconnect. The DGX A100 features eight A100 Tensor Core GPUs, 640GB of GPU memory, and dual AMD Rome 7742 CPUs in a 6RU box. It also supports BlueField-2 DPUs to accelerate IO. The box provides up to 5 petaFLOPS of AI performance, meaning 100 petaFLOPS in a SuperPOD with 20 of them.

VAST has been certified as a SuperPOD data store
18-rack SuperPOD

VAST CEO and co-founder Renen Hallak said: “VAST’s alliance and growing momentum with Nvidia to help customers solve their greatest AI challenges takes another big step forward today … The VAST data platform brings to market a turnkey AI datacenter solution that is enabling the future of AI.”

The VAST pitch is that its Universal Storage is the first enterprise network-attached storage (NAS) system approved to support the Nvidia DGX SuperPOD.

VAST Data co-founder and CMO Jeff Denworth told us: “For years customers have not had an enterprise option for these large systems, since the AI system vendors need to adhere to a very limited set of offerings. Many were burned by other NFS platforms in the past.”

A VAST statement said: “AI and HPC workloads are no longer just for academia and research, but these are permeating every industry and the enterprise players that own and manage their own proprietary AI technologies are going to be differentiated going forward. Historically, customers building out their supercomputing infrastructure have had to make a choice around performance,  capabilities, scale and simplicity.”

The company reckons its storage system provides all four attributes, and says: “We have already sold multiple SuperPODs with more in the pipeline so the market  is validating/recognizing this as well.”

The Nvidia-VAST relationship dates back to 2016, VAST says, with original development of its disaggregated, shared-everything (DASE) architecture. VAST supports Nvidia’s GPUDirect storage access protocol and also its BlueField DPUs. VAST’s Ceres data enclosure includes four BlueField DPUs.

DDN has a combined SuperPOD and Lustre-based A3I storage system. Previously, NetApp has certified its E-Series hardware running ThinkParQ’s BeeGFS parallel file system with Nvidia’s SuperPOD. Neither of these are enterprise NAS systems.

Back in 2020, NetApp provided a reference architecture for twinning ONTAP AI with Nvidia’s DGX A100 systems for AI and machine learning workloads. ONTAP is an enterprise NAS operating system as well as a block and object access system. Surely it must be possible to get an all-flash ONTAP system certified as a SuperPOD data store, unless ONTAP’s scalability limit of 24 clustered NAS nodes (12 HA pairs), meaning a 702.7PB maximum effective capacity with the high-end A900, proves to be a blocking restriction.

Dell wields NeuroBlade to cut analytics time

Dell is partnering with startup NeuroBlade to put SQL query-accelerating PCIe cards in select PowerEdge servers to speed up high throughput data analytics.

Update. Note added about NeuroBlade branding and CMO Priya Doty’s exit. 20 Sep 2023.

NeuroBlade has won Dell as a go-to-market partner for its hardware acceleration units that speed SQL queries. These G200 SPUs – SQL Processing Units – contain two NeuroBlade processors and work with query engines such as Presto, Trino, Spark and Dremio. No changes are needed to a host system’s data, queries or code as the SPU operates transparently as far as the application code is concerned. 
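
As an illustration of that transparency claim, a query submitted through a standard client is written the same way whether or not an SPU sits underneath the query engine. The sketch below uses the Trino Python client against a hypothetical cluster; the host, catalog, schema, and table names are placeholders, not NeuroBlade specifics.

```python
# Sketch: an ordinary Trino query. If the cluster's workers are backed by SPU
# hardware acceleration, nothing in this client code needs to change.
import trino

conn = trino.dbapi.connect(
    host="trino.example.local",  # hypothetical coordinator
    port=8080,
    user="analyst",
    catalog="hive",
    schema="sales",
)
cur = conn.cursor()
cur.execute("SELECT region, sum(amount) FROM orders GROUP BY region")
print(cur.fetchall())
```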

Elad Sity, CEO and co-founder of NeuroBlade, said: “The work we have done enables organizations to keep up with their exponential data growth, while taking their analytics performance to new levels, and creating a priceless competitive advantage for them. This success couldn’t have been achieved without our engineering team, who have been collaborating with companies like Dell Technologies to unlock this new standard for data analytics.”

A couple of years ago NeuroBlade developed Xiphos, a rack-enclosure Hardware Enhanced Query System (HEQS) – a compute-in-storage appliance containing four so-called Intense Memory Processing Units (IMPUs). These are formed from a multi-1,000-core XRAM processor, DRAM, and an x86 controller. There are up to 32 NVMe SSDs in the chassis, and these can be Kioxia SCM-class FL6 drives for the fastest response.

NeuroBlade SPU card

The IMPU has been developed into what’s now called the SPU and NeuroBlade says it delivers consistently high throughput regardless of query complexity for applications involving business intelligence, data warehouses, data lakes, ETL, and others. As a dedicated SQL query processor, it can replace several servers previously doing the job and cut compute, software and power costs by a claimed 3 to 5x.

NeuroBlade HEQS chassis
HEQS chassis

A HEQS image above shows eight SPUs fitted inside the chassis. This chassis can hold up to 100TB of data with its NVMe SSDs, and six chassis can be clustered to provide a 600TB resource. The Xiphos brand has lapsed with HEQS effectively replacing it.

Customers can buy SPU cards to put into servers, such as Dell’s PowerEdge, or complete HEQS chassis.

Although founded in Tel-Aviv, NeuroBlade has a US HQ in Palo Alto. It will be present at Dell Technologies World in Las Vegas, May 22-25, booth 1222, with displayed products and staff to talk about them.

NeuroBlade branding

NeuroBlade tells us: “The SQL Processing Unit of NeuroBlade is its own product, NeuroBlade SPU. The company has migrated away from the Xiphos and XRAM brand names. The SPU can be integrated into NeuroBlade’s HEQS (hardware enhanced query system), which is a rack-mounted server, or directly into the customer’s own data center.” 

NeuroBlade CMO

We note that CMO Priya Doty has left NeuroBlade, about four months after Mordi Blaunstein became VP Tech Business Development & Marketing. We asked NeuroBlade why she left. The reply was: “NeuroBlade has focused its marketing strategy towards a more technical approach, emphasizing the product and its benefits. This, in turn, led to a restructuring of the marketing team. The company is now forming a new team in California led by Mordi Blaunstein, who has extensive experience in startups and enterprise organizations.”

Scality plugs security, scalability and Veeam v12 support into ARTESCA

Scality has released second generation ARTESCA object storage software with a 250 percent capacity increase, hardened security and Veeam v12 integration.

ARTESCA is Scality’s cloud-native, S3-compatible object storage software, developed to complement its enterprise-class RING object and file storage product as a lightweight alternative to supported MinIO, which has a $4,000/year entry point. Additional deployment options include a VMware OVA (Open Virtual Appliance) format or a complete software appliance with a bundled and hardened Linux OS. It is Scality’s fastest-growing product line.

Scality CMO Paul Speciale said: “ARTESCA makes data storage simple and secure for CISOs and their teams. It’s both affordable and easy to deploy in any environment, no strings attached. … ARTESCA 2.0 delivers the full package that today’s organizations are looking for — enterprise-grade security, simplicity and maximum performance at a price that won’t give CFOs heartburn.”

While RING software scales out to 100PB or more, ARTESCA is aimed at the terabytes-to-5PB range where simple deployment and operations are needed. Like RING, it can serve as a Veeam performance and capacity object storage tier in a core datacenter, and also as an edge backup target for Veeam in smaller datacenters. Scality suggested that RING is also suited to function as an archive tier.

ARTESCA 2 upgrades include security hardening for better malware protection. This has a new hardened Linux option that precludes OS access, reduces exposure to critical vulnerabilities, and limits a wide range of potential malicious attacks, we’re told. There is multi-factor authentication, unused network port lockdown, S3 object lock and auto-configuring of firewall rules, and asynchronous replication for virtual air-gapped offsite storage.

Access from Veeam is controlled by identity and access management policies. Specific Veeam v12 support includes Direct to Object and Smart Object Storage API, enabling added ransomware protection, data immutability and operational efficiencies. There is a simplified installer, shorter backup windows and restore times.
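
To illustrate the immutability mechanism referred to above, here is a hedged sketch of exercising S3 Object Lock through boto3 against an S3-compatible endpoint. The endpoint URL, credentials, bucket, and object names are placeholders rather than ARTESCA or Veeam specifics; any S3 API-compatible store that implements Object Lock behaves the same way.

```python
# Sketch: write an object with a compliance-mode retention date so it cannot
# be deleted or overwritten until the lock expires.
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client(
    "s3",
    endpoint_url="https://artesca.example.local",  # hypothetical S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Object Lock must be enabled at bucket-creation time.
s3.create_bucket(Bucket="veeam-repo", ObjectLockEnabledForBucket=True)

s3.put_object(
    Bucket="veeam-repo",
    Key="backups/job-001.vbk",  # hypothetical backup object
    Body=b"...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```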

ARTESCA 2.0 will be available early June 2023. For new customers ARTESCA is free for 90 days starting early in the third quarter, with unlimited capacity. Software subscriptions start at less than $4,000/year for 5TB usable capacity with 24 x 7 support.

Background

Object storage software supplier Scality is 14 years old, has raised $172 million in funding, and reckons it will be profitable inside 12 months. It has 2EB of capacity under management for customers, encompassing some 5 trillion objects spread across 182,669 disk drives.

A conversation with CEO and co-founder Jerome Lecat provided an insight into its thinking about the object storage market and where it is going, such as towards an all-flash future.

Lecat said Scality is fully funded and should be profitable in a year.

Scality supports all-flash hardware configurations but sees little demand for them. Lecat said he doesn’t foresee the object storage market going all-flash on the strength of claimed energy savings over disk, saying: “I disagree that disk drives necessarily use more electricity than SSDs.”

This was a response to Pure’s Shawn Rosemarin’s prediction that HDD sales will stop after 2028 through the combination of lower NAND prices, high electricity costs, and limited electricity availability. The overall effect is that the TCO of flash arrays will be so much lower than disk as to prompt the start of a mass migration from spinning rust to electrically charged flash cells.

Lecat said disk drives lower their spin speed these days if they detect inactivity, and so save power.

He doesn’t see customer demand for all-flash object systems and, all in all: “I don’t think the all-flash object market is there.”

Kioxia CD7 SSD: PCIe 5 drive hits HPE servers

Kioxia has announced that its CD7 datacenter SSD series is available for HPE servers and arrays, using 3D NAND two generations behind its latest technology but with the latest PCIe Gen 5 interconnect.

PCIe 5 operates at 32Gbps lane bandwidth – four times faster than PCIe 3’s 8Gbps and double PCIe 4’s 16Gbps.

The CD7 was originally unveiled in November 2021 as a PCIe 5 NVMe SSD using 96-layer BiCS4 technology in TLC (3 bits/cell) format and the E3.S drive standard. The drive was then sample shipping to potential OEMs. Kioxia is now transitioning NAND production in its joint venture fabs with Western Digital to BiCS6 162-layer chips.

Neville Ichhaporia, Kioxia America’s SVP and GM of its SSD business unit, said: “EDSFF and PCIe 5.0 technologies are transforming the way storage is deployed, and our CD7 Series SSDs are the first to deliver these technologies on HPE’s next-generation systems.” 

Kioxia CD7

The CD7 series drive is listed as a read-intensive drive with random read/write IOPS of up to 1,050,000/180,000 and sequential read and write bandwidth of 6.45GBps and 5.6GBps respectively. It has a five-year warranty, a 2.5 million hours MTBF rating and can sustain 1 drive write per day.

Other PCIe 5 SSDs vary in speed and storage. For example, Samsung’s PM1743 is available in E3.S format with up to 15.36TB capacity from its 128-layer NAND. It puts out 2,500,000/250,000 random read/write IOPS, 13GBps sequential read bandwidth and 6.6GBps sequential write bandwidth.

Kioxia says its CD7 SSDs support ProLiant Gen11 servers, Alletra 4000 storage servers (rebranded Apollo servers), and Synergy 480 Gen11 Compute Modules, all of which have PCIe Gen 5 capability and E3.S storage bays.

E3.S enables denser, more efficient deployments in the same rack unit compared to 2.5-inch drives, with better cooling and thermal characteristics, we’re told. The format can, Kioxia says, raise capacities by up to 1.5-2x – although the CD7 only supports 1.92TB, 3.84TB, and 7.68TB.

The CD7 is said to be suited for customers and applications such as hyperscalers, IoT and big data analytics, OLTP, transactional and relational databases, streaming media and content delivery networks as well as virtualized environments. A higher 3D NAND layer count version is surely on Kioxia’s roadmap.

Merger talks

Reuters reports that merger talks between Kioxia and Western Digital have sped up, with a deal structure being developed. It cites unnamed sources. WD is under pressure from activist investor Elliott Management to split its disk drive and SSD businesses into separate companies, and to then merge the SSD unit with Kioxia.

According to the newswire’s sources, under the fresh deal Kioxia would hold the largest stake in the combined Kioxia-WD business at 43 percent, with WD owning 37 percent and the rest held by existing shareholders of the two companies.

Kioxia was bought out of Toshiba by a Bain Capital-led consortium in 2017. A WD-Kioxia merger could provide a financial exit for that consortium. Toshiba owns 40.6 percent of Kioxia and Elliott Management has a Toshiba investment plus a board position.

Solidigm pumps out new QLC datacenter SSD

Solidigm is touting a PCIe gen 4 QLC flash SSD offering TLC-class read performance and has appointed a pair of co-CEOs.

QLC or 4bits/cell NAND provides less expensive SSD capacity than TLC (3 bits/cell) NAND but has generally lower performance and a shorter working life. Solidigm is making a big deal about its new optimized QLC drive, which it says can cost-effectively replace both a TLC flash and a hybrid disk/SSD setup in a 7PB object storage array.

Greg Matson, VP of Solidigm’s datacenter group, played the sustainability card: “Datacenters need to store and analyze massive amounts of data with cost-effective and sustainable solutions. Solidigm’s D5-P5430 drives are ideal for this purpose, delivering high density, reduced TCO, and ‘just right’ performance for mainstream and read-intensive workloads.” 

Solidigm says the D5-P5430 is a drop-in replacement for TLC NAND-based PCIe Gen 4 SSDs. It is claimed to reduce TCO by up to 27 percent for a typical object storage solution, with a 1.5x increase in storage density and 18 percent lower energy cost. And it can deliver up to 14 percent higher lifetime writes than competing TLC SSDs.

Solidigm drives
From left: D5-P5430 in U.2 (15mm), E1.S (9.5mm) and E3.S (7.5mm) formats

The P5430 uses 192-layer 3D NAND with QLC cells and comes in three formats: U.2, E1.S, and E3.S. The capacities are 3.84TB, 7.68TB, 15.36TB, and 30.72TB, with the physically smaller E1.S model limited to a 15.36TB maximum. The drive does up to 971,000/120,000 random read/write IOPS, and its sequential read and write bandwidth figures are up to 7GBps and 3GBps respectively.

Solidigm says the new SSD has read performance optimized for both mainstream workloads, such as email/unified communications, decision support systems, fast object storage, and read-intensive workloads like content delivery networks, data lakes/pipelines, and video-on-demand. These have an 80 percent or higher read IO component.

How does the performance stack up versus competing drives? Solidigm has a table showing this:

Solidigm performance

The comparison ratings are normalized to Micron’s 7450 Pro and they do look good. The P5430’s endurance is limited, though, with Solidigm providing two values depending on workload type – up to 0.58 DWPD for random writes and up to 1.83 DWPD for sequential writes. It is a read-optimized drive, after all.

Solidigm wants us to know that it has up to 90 percent IOPS consistency and ~6 percent variability over the drive’s life, and it supports massive petabytes written (PBW) totals of up to 32PB. Kioxia’s CD6-R goes up to 28PBW. 

It says a 7PB object storage system using 1,667 x 18TB disk drives with 152 TLC NAND cache drives will cost $395,944/year to run. A 7PB alternative, using 480 x 30.72TB P5430s, will cost $242,863/year – 39 percent less.
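
As a quick sanity check of the percentage quoted above, using only the vendor-supplied figures rather than independent data:

```python
# Back-of-envelope check of Solidigm's hybrid-vs-QLC running-cost comparison.
hybrid_cost = 395_944  # $/year: 1,667 x 18TB HDDs plus 152 TLC cache drives
qlc_cost = 242_863     # $/year: 480 x 30.72TB P5430s
saving = 1 - qlc_cost / hybrid_cost
print(f"{saving:.0%}")  # ~39%, matching the claimed figure
```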

Solidigm value

Solidigm ran the same comparison against an all-TLC SSD 7PB Object storage array and says its kit costs $257,791/year, 27 percent less than the TLC system’s $334,593/year. The TLC NAND system uses 15.36TB drives while Solidigm’s P5430-based box uses its 30.72TB drives, giving it a smaller rack footprint.

The D5-P5430 SSDs are available now, but there is a delay before the maximum capacity 30.72TB versions arrive, which should be later this year.

The CEOs

Solidigm’s board has appointed two co-CEOs. The original CEO, Intel veteran Rob Crooke, left abruptly in November last year. The co-CEO of SK hynix, Noh-Jung Kwak, was put in place as interim CEO. Now two execs, SK hynix president Kevin Noh and David Dixon, ex-VP and GM for Data Center at Solidigm, are sharing responsibility.

David Dixon and Kevin Noh

Noh was previously chief business officer for Solidigm, joining in January this year. He has a 20-year history as an SK Telecom and SK hynix exec. Dixon was a near-28-year Intel vet before moving to Solidigm when that rebranded Intel NAND business was sold to SK hynix.

Bootnote

Here are the calculations Solidigm supplied for its comparison between a hybrid HDD/SSD and all-P5430 7PB object storage array and all-TLC array:

Micron declares TLC NAND war on Solidigm

Micron has built a TLC SSD with 232-layer tech that’s faster and more efficient than Solidigm’s lower cost QLC drives. It’s also launched a fast SLC SSD for caching.

SLC (1 bit/cell) NAND is the fastest flash with the longest endurance. TLC (3 bits/cell) makes for higher-capacity drives using lower cost NAND, but with slower speed and a shorter working life. QLC (4 bits/cell) is lower cost NAND again, but natively has slower speeds and less endurance. 3D NAND has layers of cells stacked in a die – the more layers, the more capacity in the die and, generally, the lower the manufacturing cost. Solidigm and other NAND suppliers are shipping flash with fewer than 200 layers, while Micron has jumped to 232 layers.

Alvaro Toledo, Micron
Alvaro Toledo

Alvaro Toledo, Micron VP and GM for datacenter storage, told us: “Very clearly, we’re going after QLC drives in the market like the [Solidigm] P5316. And what we can say is this drive will match that on price, but beats it on value by a mile and a half. We have 56 percent better power efficiency, at the same time giving you 62 percent more random reads.”

The power efficiency claim is based on the P5316 providing 32,000 IOPS/watt versus Micron’s 6500 ION delivering 50,000 IOPS/watt.
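
The arithmetic behind that 56 percent figure, using only the two quoted numbers:

```python
# 6500 ION vs P5316 power efficiency, as quoted by Micron.
ion_iops_per_watt, p5316_iops_per_watt = 50_000, 32_000
print(ion_iops_per_watt / p5316_iops_per_watt - 1)  # 0.5625 -> "56 percent better"
```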

Micron provides a set of performance comparison charts versus the P5316:

Micron 6500 ION performance

Solidigm coincidentally launched an updated QLC SSD, the P5430, today. Micron will have to rerun its tests and redraw its charts.

We have crafted a table showing the main speeds and feeds of the two Solidigm drives and the 6500 ION – all PCIe 4 drives – for a quick comparison:

The 6500 ION has a single 30.72TB capacity point and beats both Solidigm QLC drives with its up to 1 million/200K random read/write IOPS performance, loses out on sequential read bandwidth of 6.8GB/sec vs Solidigm’s 7GB/sec, and regains top spot with a 5GB/sec sequential write speed, soundly beating Solidigm.

Toledo points out that the 6500 ION supports 4K writes with no large indirection unit, while Solidigm’s P5316 “drive writes in 64K chunks.” This, Toledo claims, incurs extra read-modify-write cycles as smaller host writes are mapped into the drive’s 64K units.
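
A back-of-envelope illustration of the indirection-unit point, with purely illustrative numbers rather than measured drive behavior:

```python
# With a 64K indirection unit, a 4K host write forces the drive to read, modify,
# and rewrite the whole 64K unit; with a 4K unit only the 4K page is rewritten.
IU_LARGE = 64 * 1024   # bytes tracked per mapping entry (P5316-style)
IU_SMALL = 4 * 1024    # bytes tracked per mapping entry (4K-native drive)
HOST_WRITE = 4 * 1024  # a single 4K host write

print(IU_LARGE / HOST_WRITE)  # 16x NAND bytes written per host byte (worst case)
print(IU_SMALL / HOST_WRITE)  # 1x
```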

Micron 6500 ION

Its capacity and cost mean that “by lowering the barriers of entry, we see that the consolidation play will make a lot more sense right now. You can store one petabyte per rack unit, which gives you up to 35 petabytes per rack.”

Toledo says the 6500 ION is much better in terms of price/performance than QLC drives: “Just the amount of value that we’re creating here is gigantic.” You can use it to feed AI processors operating on data lakes  with 9x better power efficiency than disk drives in his view. And it’s better than QLC drives too: “This ION drive is just absolutely the sweet spot, the Goldilocks spot in the middle, where for about 1.2, 1.3 watts per terabyte, you can get all the high capacity that you need fast enough to feed that [AI] beast with a very low power utilization.”

Download a 6500 ION product brief here.

XTR

The XTR is positioned as an affordable single-port caching SSD. Micron compares it to Intel’s discontinued Optane P5800X series storage-class memory (SCM) drive, saying it has up to 44 percent less power consumption, 20 percent more usable capacity, and up to 35 percent of the P5800X’s endurance at 20 percent of the cost.

Micron XTR.

It suggests using the XTR as a caching drive paired with its 6500 ION, claiming this provides the same query performance as an Optane SSD cache would. Toledo said: “We are addressing the storage-class memory workload that requires high endurance; this is not a low latency drive.”

Kioxia also has a drive it positions as an Optane-type device, the FL6, and Micron’s XTR doesn’t fare that well against it in random IO but does better in sequential reads, as a table shows:

Toledo says the FL6 is going after lower latency workloads than the XTR, but: “If you need to strive for endurance, the XTR can go toe to toe with a storage-class memory solution.”

Micron says the XTR has good security ratings – better than the Optane P5800X products, such as FIPS 140-3 L2 certification at the ASIC level – and provides up to 35 random DWPD endurance (60 for sequential workloads).

NetApp wants to be cool kid of SAN with latest all-flash array

NetApp has doubled down on its all-flash systems with a new SAN array, the ASA A-Series, and a new StorageGRID object storage system, and has added a ransomware recovery guarantee as well as making its ONTAP One data services software suite available to all ONTAP users without added charge.

Update: Infinidat cyber resilience guarantee information added. May 18, 2023.

NetApp supplies unified block and file storage arrays running its ONTAP operating system. These arrays include the FAS hybrid flash+disk arrays and the AFA (All-Flash Array) systems. There’s the TLC flash-based A-Series and lower cost, capacity-focused C-Series, which use QLC flash. The SAN-only ASA product line, introduced in October 2019, is powered by a block-only access version of ONTAP and is based on AFA A-Series hardware.

Sandeep Singh, NetApp
Sandeep Singh

NetApp SVP and GM of Enterprise Storage Sandeep Singh told us: “What it means for customers is that NetApp has already been a leader in NAND. NetApp has been the leader in unified including with file, block and object capabilities. And now NetApp is becoming a leader in SAN.”

He justified this by saying: “We are building on a very, very strong base of SAN workload deployments. It turns out that over 20,000 customers already trust NetApp with their SAN workloads. And out of those 20,000, 5,000 customers deploy that for their SAN-only workloads.” 

ASA

The new ASA systems are for customers who separate out their SAN workloads, such as SAP and Oracle, from unstructured data access, and who want continued access during unplanned outages through a symmetric, active-active controller architecture. Product marketer and VP Jeff Baxter told us: “Symmetric active-active multipathing … is typically reserved only for high-end frame arrays.”

NetApp ASA systems

The new ASAs use NVMe SSDs and support both NVMe/TCP and NVMe/FC access. NetApp offers a six nines availability guarantee, with remediation available if downtime exceeds 31.56 seconds a year, plus a storage efficiency guarantee of a minimum 4:1 data reduction based on in-line compression, deduplication and compaction.
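
For reference, the 31.56-second figure follows directly from the six nines arithmetic (using a 365.25-day year):

```python
# Six nines availability permits 0.0001% downtime per year.
seconds_per_year = 365.25 * 24 * 3600
allowed_downtime = seconds_per_year * (1 - 0.999999)
print(round(allowed_downtime, 2))  # ~31.56 seconds
```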

There is a five-member product range running from the entry-level A150 and A250 through the mid-range A400 and on up to the A800 and range-topping A900. A data sheet table provides speeds and feeds:

NetApp ASA range

NetApp claims the ASA delivers up to 50 percent lower power consumption and associated carbon emissions than competitive offerings but without supplying comparative numbers – you’ll need to check.

StorageGRID

NetApp offers low-end SG100 and SG1000 object storage nodes, mainstream SG5000 series cost-optimized boxes, and the high-end SG6000 series. These comprise the SG6060 and SG6060-Expansion systems for transactional small object and large scale data lake deployments, plus the all-flash SGF6024 for primary object-based workloads needing more performance. That system is now surpassed by the new SGF6112 StorageGRID system running v11.7 of the StorageGRID operating system.

The SGF6112 supports 1.9TB, 3.8TB, or 15.3TB SED (self-encrypting drives) or non-SED drives. NetApp blogger Tudor Pascu writes: “As an all-flash platform, the SGF6112 hits a sweet spot for workloads with small object ingest profiles. The main difference between the SGF6112 and the previous-generation all-flash appliance is the fact that the new appliance no longer leverages the EF disks and controllers.”

It uses a software-based RAID 5 (2 x 5+1) configuration for node-level data redundancy. NetApp says the SGF6112 has improved performance and density. We don’t have any performance or capacity comparison between the SGF6024 and the latest SGF6112 to back this up.

v11.7 adds cross-grid replication, which replicates objects from one grid to another, physically separate grid, clones tenant information, and supports bi-directional replication. This is better than the existing CloudMirror object replication because it supports disaster recovery.

It also has an improved UI showing capacity utilization and top tenants by logical space. There is also a software wizard to set up tiering from ONTAP to StorageGRID.

Ransomware 

NetApp is offering a ransomware recovery guarantee. Singh said: “What NetApp uniquely delivers is providing customers this flexibility to have the ability to protect, detect, and recover in the event of a ransomware attack …  In the event that they are unable to recover their data then NetApp will offer them compensation.”

This is based on ONTAP automatically blocking known malicious file types, rogue admins, and malicious users with multi-admin verification, and tamper-proof snapshots that can’t be deleted, not even by a storage admin.

It also looks at IO patterns. Baxter said: “Our autonomous ransomware protocol, when it’s enabled, goes into a learning period. It learns what’s the IO rate look like? What’s the entropy, the change rate percentage? What’s the throughput of the volumes in question? And then, once it’s learned enough, it shifts into an active mode. And it does so automatically in our new version of ONTAP that just shipped.”
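
As a purely generic illustration of the kind of signal such a detector can track – this is not NetApp’s ARP implementation – high entropy in newly written blocks is a common tell that data has been encrypted:

```python
# Generic sketch: Shannon entropy of written blocks. Encrypted or compressed
# data approaches 8 bits per byte; typical plain files score much lower.
import math
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Return entropy in bits per byte for a block of data."""
    if not block:
        return 0.0
    counts = Counter(block)
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

normal = b"hello world, plain text " * 100
random_like = bytes((i * 7919 + 13) % 256 for i in range(2400))  # stand-in for ciphertext
print(shannon_entropy(normal), shannon_entropy(random_like))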

Singh added: “When ONTAP detects a ransomware attack, it automatically creates a tamper proof snapshot as a recovery point and notifies the administrators and then customers were able to recover literally within seconds to minutes their data from the data copies that are available to them. This we believe is industry leading and uniquely available from NetApp.”

Data protection vendors Druva and Rubrik offer ransomware recovery guarantees. Infinidat has a cyber storage resilience guarantee for its InfiniSafe offering, which protects the backup repository on Infinidat’s InfiniGuard data protection arrays. Infinidat also has a cyber storage resilience guarantee on InfiniBox and InfiniBox SSA II for production storage, based on extending its InfiniGuard protection to these arrays. NetApp, like Infinidat, is offering its guarantee on a production storage system, but it is not the first to do so.

ONTAP One

The ONTAP One Storage software suite, including all available NetApp ONTAP software, was announced for NetApp’s AFA C-Series in March. It is now available for all AFF, ASA, and FAS systems. ONTAP One is also available to existing NetApp deployed systems under support.

NetApp says it has also expanded its Advance set of buyer programs and guarantees.

Scott Sinclair, Practice Director at the Enterprise Strategy Group, offered this thought: “This announcement is strategic for NetApp, but also aligns to what ESG is finding within our research, which is that the datacenter isn’t going away.”

One theme of this set of announcements is increased value for money and that could help NetApp sustain its revenues or even grow them in its current downturn.

4 ways CIOs can optimize IT and boost business value

Commissioned: The current economic uncertainty has cautious CIOs tucking in for tough times.

Although still important for brand viability and competitiveness, potentially transformative digital projects and so-called moonshots appear to be giving way to workstreams that bolster organizational resiliency.

Those tasks? System uptime and cybersecurity and, naturally, cost optimization.

Companies are seeking IT leaders who can be surgical about cutting costs and other critical measures for building business resiliency during a downturn, executive recruiters recently told CIO Journal. Also, financial leaders for leading technology providers noted in earnings calls that customers are optimizing their consumption of cloud software in the face of macroeconomic headwinds.

Their common keywords and phrases? Slower growth of consumption. Economic uncertainty. Tough economic conditions. Optimize their cloud spending. New workload trends.

New workload trends. This means IT leaders are rethinking where and how their workloads are running. They are focused on workload optimization and placement, or rightsizing and reallocating applications across a variety of enterprise locations.

How IT leaders got here

Consider this new focus a correction to trends that date back more than a decade and accelerated in recent years.

As organizations expanded their IT capabilities to meet new business demands they allocated more assets outside their datacenters, widening their technology footprints. For instance, many CIOs launched new ecommerce capabilities and built mobile applications and services.

In 2020, the pandemic spurred IT leaders to address a new IT reality. CIOs spent a lot of money on an array of on-premises and cloud technologies with which to build digital services that created socially distant bridges to stakeholders.

The sentiment was: Build it and fast, test it quickly and ship it. We’ll worry about the technical debt and other code consequences later.

From remote call centers and analytics tools to mobile payment and curbside pickup services, IT leaders built solutions to strengthen connections between their companies and the employees and customers they serve.

Collectively, these new digital services hastened an already proliferating sprawl of workloads across public and private clouds, colocation facilities and even edge environments.

Fast forward to today and businesses are grappling with these over-rotations. They’ve built an array of systems that are multicloud-by-default, which is inefficient and clunky, with data latency and performance taxes, not to mention unwieldy security profiles.

A playbook for building business value

Fortunately, IT leaders have at their disposal a playbook that enables them to optimize the placement of their workloads across a multicloud IT estate.

This multicloud-by-design approach brings management consistency to storing, protecting and securing data in multicloud environments. Delivered as-a-Service and via a pay-per-use model, this cloud-like strategy helps IT leaders provide a cloud experience, while hardening the business and reining in costs.

The multicloud-by-design strategy helps build business value in four ways:

Cost optimization. High CapEx costs are a burden, but IT teams can operate like the public cloud while retaining assets on-premises.

A pay-per-use consumption model allows IT leaders to align infrastructure costs with use. This can help reduce overprovisioning by up to 42 percent and save up to 39 percent in costs over a three-year operating period, according to IDC research commissioned by Dell.

Productivity. The talent crunch is real, but IT can reduce the reliance on reskilling by placing workloads in their optimal location, which will help mitigate risk and control costs. For instance, reducing time and effort IT staff spend on patching, monitoring and troubleshooting, among other routine tasks, is a key issue for talent-strapped IT teams. Such actions can help make infrastructure teams up to 38 percent more efficient, says IDC.

Digital resiliency. Recovering from outages and other incidents is challenging regardless of the operating model. Subscription-based offerings help reduce risks associated with unplanned downtime and costs, with as much as 46 percent faster time to recovery, IDC data shows.

Business acceleration. Optimizing workload placement with pay-per-use and subscription models helps shorten cycles for procuring and deploying new compute, storage, or data protection capacity by as much as 60 percent, IDC says.

This helps businesses boost time to value compared with existing on-premises environments that do not leverage flexible consumption models.

The takeaway

In a tight economy, IT leaders must boost business value as they deploy their IT solutions. Yet they still struggle with unpredictable and higher than expected costs, performance and latency concerns, as well as security and data locality issues.

Sound daunting? It absolutely is. But running a more efficient IT shop shouldn’t be a moonshot.

Our Dell APEX as-a-Service suite of solutions can help IT departments exercise intelligent workload placement and provision infrastructure resources quickly. Dell APEX will also help IT teams improve interoperability across IT environments while enabling teams to focus on high value tasks. All while protecting corporate data and keeping it compliant with regulations.

Ultimately, organizations that can react nimbly to changing business requirements with resiliency are more likely to prosper. But this requires new ways of operating IT – and the right partner to help execute.

Learn more about Dell APEX here.

Sponsored by Dell Technologies.

Storage races to innovate in face of data explosion

The whole area of IT storage is buzzing with innovation, as startups and incumbents race to provide the capacity needed in a world exploding with unstructured data. Chatbot technology, the I/O speed to get data into memory, the memory capacity needed, and the software to provide, manage, and analyze vast lakes of data all create massive opportunities.

Growing amounts of data need storing, managing, and analyzing, and that is driving technology developments at all levels of the IT storage stack. We’ve tried to put this idea in a diagram, looking at the storage stack as a spectrum running from semiconductor technology at the left (yellow boxes) to data analytics technology at the right (green boxes):

Storage innovation

Talk about a busy slide! It starts with big blue and pink on-premises and public cloud rectangles. Below this is a string of ten boxes, stretching from the smallest hardware to the largest software. The first six have sample technology types below them. A set of nine blue bubbles contain example suppliers who are constantly bringing out new technology. A tenth such bubble can be found to the top left – building public cloud structures on-premises.

This diagram is not meant to be an authoritative and comprehensive view of the storage scene – think of it as a representative sample. There isn’t an exhaustive list of innovating suppliers either, just examples. Exclusion does not mean a supplier isn’t busy developing new technology – they all are. You can’t survive in the IT storage world unless you are constantly innovating – both incrementally with existing technology and through new means.

Although innovation is ongoing, the amount of money raised and the number of funding events for storage startups have slowed this year. So far this year we have recorded:

  • Cloudian – $60 million
  • Impossible Cloud – $7 million
  • Intrinsic Semiconductor – $9.73 million
  • Komprise – $37 million
  • Pinecone – $100 million
  • Volumez – $20 million
  • Weebit Nano – $40 million share placement

That’s a total of $273.7 million – not a lot compared to 2022, when we saw $3.1 billion in total funding events for storage startups.

Near the midpoint of the year there have been just five acquisitions, and no IPOs:

  • Iguazio bought by McKinsey & Co.
  • Model9 bought by BMC
  • Ondat acquired by Akamai
  • Databricks bought Okera
  • Serene bought crashed Storcentric and its Nexsan products

In 2022 Backblaze IPO’d, Datto was bought for $1.6 billion, Fungible was acquired by Microsoft, MariaDB had a SPAC exit, AMD bought Pensando, Rakuten bought Robin, Hammerspace acquired Rozo, and Nasuni bought Storage Made Easy – it was a much busier year.

There could be more this year as DRAM and NAND foundries buy CXL technology, and data warehouse and lakehouse suppliers acquire AI technologies.

We have, we think, at least three IPOs pending: Cohesity, Rubrik and VAST Data. Possibly MinIO as well, and we might see private equity takeouts for other companies.

The world is bristling with new technology developments. Think increased layer counts in the NAND area, memory pooling and sharing with CXL, large language model interfaces to analytical data stores, disk drive recording (HAMR/MAMR), cloud file services, tier-2 CSP changes, incumbent supplier public cloud-like services for on-premises products, SaaS application backup, and Web3 storage advances.

Keeping up with all this is getting to be a full-time job.

WANdisco has 2 months to raise $30M or crash

Scandal-hit WANdisco wants to raise $30 million in equity to avoid running out of working capital by mid-July as the business embarks on a “deep transformation recovery program”, including sustained efforts to reduce expenses and bolster trade.

The data replicator biz risks running out of cash in the wake of falsified sales reporting that was revealed on March 9. The AIM-listed company reported revenues of $24 million for 2022 when they should have been $9.7 million, and sales bookings were also grossly inflated at $127 million rather than the actual $11.4 million. An investigation is ongoing, but so far WANdisco has laid the blame for the fake sales at the feet of one unnamed salesperson.

Share trading in WANdisco stock was immediately suspended following the discovery of the incorrect sales data. Co-founder, CEO and chairman Dave Richards subsequently quit along with CFO Erik Miller, and new C-suite hires were made including a board chairman, interim CEO and CFO.

Ken Lever, WANdisco
Ken Lever

Interim chair Ken Lever said: “Having now been in the business for some six weeks, there is no doubt in my mind that the company should have a very bright future given its differentiated technology. However, improvements across sales and marketing need to be made to properly take advantage of the opportunity.  

“To do this, the business needs to be urgently properly capitalized and so today we are announcing our desire to raise $30 million towards the end of June.”

He said the decision to raise equity “is a direct result of the issues that led to our announcement on 9 March.” The company is cutting costs, with 30 percent of its staff leaving, and trying to reduce its annualized cost base from $41 million to around $25 million, but it only has $8.1 million of cash left in the bank. It could run out of working capital by mid-July unless the finances are shored up, WANdisco said.

A Proposed Fundraise is deemed the most suitable way to rebuild the balance sheet and fund operations, yet given the uncertainty of the share price, WANdisco might have insufficient shareholder authorities to issue the required number of Ordinary Shares to deliver the new ordinary shares, it said.

“The Board strongly believes there are significant benefits in asking for shareholder authority to issue shares for the Proposed Fundraise in advance, rather than following the announcement of the Proposed Fundraise with the admission of the New Ordinary Shares subject to approval. This is because the Board cannot realistically launch the Proposed Fundraise until it is confident that the suspension in the Company’s shares will be lifted at the point in time the New Ordinary Shares are issued, or shortly thereafter,” the company said.

Trading in WANdisco stock won’t resume on AIM until the audit of the company’s 2022 accounts is concluded, which is expected at the end of next month. Resident execs will, as such, seek approval from the requisite shareholder authorities before launching the Proposed Fundraise.

Management said it intends to try to issue the New Ordinary Shares at a price that “minimises dilution for existing shareholders whilst also ensuring the Company raises sufficient capital”. Pricing the new shares will be done by contacting potential investors and asking them how much they’d be prepared to pay; this is called a bookbuild process. 

Potential investors will need to think the company has a future, and Interim CEO Stephen Kelly is developing a turnaround plan with six elements:

  • Go-to-market structure – this will include sales, marketing, pipeline creation and partnerships to build the foundations towards consistent sales execution.
  • Enhanced board and management to run the company properly.
  • Better investor engagement with improved disclosures and transparency. 
  • Headcount and organization cost reductions to achieve the milestone of cash flow break-even with progress to sustainable profitable growth.
  • Market validation with a realistic view of the obtainable market based on product/market fit, competitive differentiation, proof of value, commercial pricing, and branding.
  • Excellence in the Company’s Governance and Control environment – meaning no more incorrect sales reporting

Product market strategy

The existing strategy – replicating live data from edge and core data centers up to the public cloud – will be given added data migrator capabilities, and there will be a new target market. This is Application Lifecycle Management (ALM), which means selling WANdisco products and services, including SaaS, to help distributed software development organizations collaborate more efficiently. This relies on using WANdisco’s replication and load-balancing technology to provide a globally distributed active-active configuration across wide area networks.

Enhancing data migrator sales means adding support for more targets via agreements with Microsoft, Google, AWS, IBM and Oracle, plus integration with cloud-centric data analytic platforms including Databricks and Snowflake. The product will be enhanced with more performance, scale, and ease of use, we’re told.

WANdisco’s new management says its Data Migrator technology lies in the Data Integration Software tools segment of Gartner’s Data Management market. The total addressable market (TAM) for such software tools is $4.4 billion in 2023 with a forecast average annual growth rate of 8.7 percent taking it to a $6.3 billion TAM in 2027.

We note that WANdisco competitors include Cirrus Data (block) and Atempo, Datadobi, Data Dynamics and Komprise in the file moving area.

Time is tight and WANdisco’s runway limited. Tom Kennedy, analyst at Megabuyte, said: “This certainly looks like a final throw of the dice for WANdisco, and the odds do not look in its favour.”

“For one, we struggle to believe it can complete a funding round in a matter of weeks, particularly with the added complexity of its share suspension. While, even if it does manage to organise the round, we struggle to see how shareholders have even a remotely positive reaction to being asked for money yet again given its dismal track record – it has already almost entirely wasted the proceeds from its IPO in 2012 and follow-on placings in 2013, 2015, 2016, 2017, 2019 (twice), 2020, 2021, and 2022.

“Moreover, even if it miraculously manages to pull it off, its new annualised $25.0m cost base will quickly burn a hole into the new funds, and it’s limited in further cost-cutting as additional headcount reductions will come at a significant price. Frankly, this looks like a last-ditch attempt to extend its life long enough to complete an asset / IP sale process and return some value back to shareholders, albeit at pennies on the pound, but we just can’t see it happening,” added Kennedy.

GigaOm has fresh rankings for infrastructure unstructured data management market

Analyst house GigaOm reckons Cohesity is the leading supplier of infrastructure unstructured data management, having upped its game with its DataHawk threat intelligence and data classification suite.

GigaOm puts out two unstructured data management (UDM) reports: this one focused on infrastructure UDM, and a second looking at business-oriented UDM. Business UDM products cover compliance, security, data governance, big data analytics, e-discovery, and the like. Infrastructure UDM features include automatic tiering and basic information lifecycle management, data copy management, analytics, indexing, and search.

The report authors write: “As you can see in the Radar chart [below] vendors are spread across an arc that lies primarily in the lower half of the Radar, denoting a market that is particularly innovation-driven.”

There are 12 vendors included, with four leaders: Cohesity, followed by NetApp, Hitachi Vantara, and Komprise. Three – Cohesity, NetApp, and Komprise – are fast movers.

No other suppliers are expected to reach the Leader ring any time soon.

There are five Challengers: CTERA, Druva, Arcitecta, Datadobi (StorageMap), and Dell (DataIQ). We find Panzura and Atempo (Miria) moving from the New Entrants ring into the Challengers’ ring, and Data Dynamics moving towards it.

Five suppliers are placed in the lower right quadrant – the Innovation/Platform Play area – with three more set to join them: CTERA, Datadobi, and Hitachi Vantara. Product tech is developing fast, and general platform appeal is more important than having extra features.

Hitachi Vantara has made the report available to interested readers, saying the report highlights the exceptional performance of HCP (Hitachi Content Platform) across a wide range of metrics and criteria. You can get a  copy here.

Absentees

We asked ourselves why Nasuni and Hammerspace were not included. We were curious about Nasuni because competitors CTERA and Panzura were included, and about Hammerspace because Komprise was. Did they not meet the criteria for entry?

So we asked GigaOm, and Arjan Timmerman, one of the report authors, told us: “We have an Infrastructure as well as a Business-oriented report [and] we made the following decisions.”

“Hammerspace does not have a UDM solution that would fit in either report. Nasuni has a decent business-orientated offering, but that does not really fit on the infra side.”  

“CTERA is working hard on their Unstructured Data Management solution and [we included] Panzura although it didn’t brief us or provide a questionnaire response. But it was in both reports last year so we decided to add them in both reports as well. We reached out to them on multiple occasions.”

Analysts rely on vendor co-operation and analytical life gets harder if vendors decide not to play nice.

By the fours

The GigaOm Radar Screen is a four-circle, four-axis, four-quadrant diagram. The circles form concentric rings, for new entrant, challenger, or leader, and mature tech (inner white ring). A supplier’s status is indicated by ring placement and product/service type relates to axis placement. 

The four axes are maturity, horizontal platform play, innovation and feature play.

There is a depiction of supplier progression, with new entrants growing to become challengers and then, if all goes well, leaders. The speed and direction of progression is shown by a shorter or longer arrow, indicating slow, fast and out-performing vendors.

Supplier placement does not take into account vendor market shares.