
SK hynix breaks barriers with 321-layer 3D NAND

SK hynix has unveiled a sample 321-layer 3D NAND chip, and is also developing faster UFS flash modules and PCIe gen 6 interface drives.

At FMS 2023 in Santa Clara, the company showcased its 321-layer flash chip, noting it offers a 1 terabit capacity using the TLC (3 bits/cell) format. This layer count surpasses previous benchmarks set by Micron’s 232 layers and Samsung’s 236 layers, while Kioxia and WD currently have technology at the 218-layer level. YMTC’s in-development 232-layer chip is delayed due to US tech export restrictions.

NAND layer table

SK hynix said: “With another breakthrough to address stacking limitations, SK hynix will open the era of NAND with more than 300 layers and lead the market.” The new chip is projected to enter mass production in the first half of 2025, indicating it’s in the early stages of development.

While specifics are unclear, there’s speculation on whether the 321-layer chip consists of two separate 260-layer class chips (string stacking) or is a singular stacked device, which would pose more manufacturing challenges. Given that the SK hynix 238-layer chip is a combination of two 119-layer components, a string stacking technique seems plausible.

SK hynix 321-layer chip

Earlier this year, SK hynix presented on its 300+ layer technology at the ISSCC 2023 conference, boasting a record 194 MBps write speed. A white paper released at the time stated: “To reduce the cost/bit, the number of stacked layers needs to increase, while the pitch between stacked layers decreases. It is necessary to manage the increasing WL (wordline) resistance produced by a decreased stack pitch.”

No technical details are included in its latest announcement.

SK hynix’s existing 238-layer generation features a 512Gb die, but with the increased density of the 321-layer technology, this will be expanded to 1Tb. This suggests potential for greater storage capacities within compact spaces for SSDs, as well as embedded UFS-type NAND drives for mobile devices and lightweight notebooks.

SK hynix also announced its introduction of a UFS v4 chip, offering transfer speeds up to 5,800 MBps, with a UFS v5 variant, capable of up to 46.4Gbps transfer speed, currently in the works.

It is also progressing towards the development of PCIe gen 6 interface drives, offering an 8GBps lane bandwidth, a significant leap from PCIe gen 5’s 4GBps. This development, in conjunction with SK hynix’s UFS advancements and the 321-layer technology, is attributed to the rising demand from AI workloads, notably influenced by platforms like ChatGPT and other large language models.

Jungdal Choi, head of NAND development at SK hynix, said: “With timely introduction of the high-performance and high-capacity NAND, we will strive to meet the requirements of the AI era and continue to lead innovation.”

Gartner sees shifting players in enterprise backup

In the Gartner 2023 Enterprise Backup and Recovery Solutions Magic Quadrant, leaders appear more tightly grouped than before, with notable changes in other sections of the quadrant.

The Magic Quadrant is a two-dimensional chart with “Ability to Execute” on the vertical axis and “Completeness of Vision” on the horizontal axis. The chart is divided into four quadrants: Leaders and Challengers are in the upper half, while Niche Players and Visionaries occupy the lower half. A balance between execution and vision is indicated by proximity to the diagonal line running from the bottom left to the top right.
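In code terms, quadrant placement is a simple two-axis classification. The sketch below is purely illustrative – the normalized scores and midpoint threshold are our assumptions, not Gartner's actual scoring methodology:

```python
def quadrant(ability_to_execute: float, completeness_of_vision: float,
             midpoint: float = 0.5) -> str:
    """Classify a vendor into a Magic Quadrant section.

    Both scores are assumed normalized to 0..1; the midpoint splitting
    the chart into four sections is an illustrative assumption.
    """
    high_execute = ability_to_execute >= midpoint
    high_vision = completeness_of_vision >= midpoint
    if high_execute and high_vision:
        return "Leader"
    if high_execute:
        return "Challenger"
    if high_vision:
        return "Visionary"
    return "Niche Player"

# A vendor strong on execution but weaker on vision lands in Challengers.
print(quadrant(0.8, 0.3))  # Challenger
```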

Comparing the current Enterprise Backup and Recovery Magic Quadrant with last year’s, several key differences emerge.

Gartner backup magic quadrants

Four leading companies – Commvault, Rubrik, Cohesity, and Veritas – are closely positioned. Veeam is distinct due to its superior execution capability but relatively limited vision, while Dell maintains a consistent position from the previous year, marginally trailing the other five.

A significant shift is observed with IBM. Previously in the Challengers quadrant, it has now moved to the Visionaries section, joining Druva and HYCU, both of which maintained their positions from last year.

While Acronis was categorized as a Visionary in the previous report, it now appears as a Niche Player, albeit with an improved Ability to Execute score. In the Niche Players quadrant, Zerto (recently acquired by HPE) and Microfocus have departed, while Microsoft and OpenText have made their debut, joining Acronis and Unitrends.

Scality claims disk drives can use less electricity than high-density SSDs

Scality analysts have said that, in some use cases, disk drives can use less power than SSDs – specifically when compared with high-density SSDs when the storage is in active use.

Hard disk drives (HDDs) have physical platters kept spinning by electric motors, and read/write heads that are also moved across the platter surfaces electrically. Solid state drives (SSDs), in which data writing and reading relies on electrical currents, have no mechanical moving parts and so, you would assume, need less electricity. The company says this is true when HDDs and SSDs are inactive, but not when data is being accessed.

Paul Speciale, Scality

Scality CMO Paul Speciale says in a blog: “Surprisingly, our research here at Scality has found that high-density SSDs don’t have a power consumption or power-density advantage over HDD. In fact, we see the reverse today.”

The Scality research was kicked off by CEO Jerome Lecat’s rebuttal of the thinking that SSDs will kill off HDDs due to SSDs having a lower total cost of ownership. Speciale then said that Scality was looking more closely at the power, cooling and rack density aspects of the debate and that examination has resulted in the discovery that active HDDs use less power than active SSDs.

The analysis compared two drives:

  • A Seagate Exos X22 7200rpm 22 TB HDD rated at 5.7 watts (idle), 9.4 watts (active read), 6.4 watts (active write)
  • A Micron 6500 ION 30.72 TB TLC SSD rated at 5 watts (idle), 15 watts (read), 20 watts (write) per drive, and priced to be competitive with Solidigm’s P5316 QLC drive.

It modeled two workloads. There was a read-intensive one with 10 percent idle, 80 percent reading and 10 percent writing, and a write-intensive one with the same idle time, 80 percent writing and 10 percent reading. Speciale says: “For each workload profile, drives are assumed to be in the specified power state for the percentage of time indicated.” The average per-drive power calculations for each workload profile are as follows:

Micron ION: 

  • Power consumption (read-intensive): (5*0.10 + 15*0.80 + 20*0.10) watts = 14.5 watts
  • Power density (read-intensive): 30.72 TB / 14.5 watts = 2.1 TB / watt
  • Power consumption (write-intensive): (5*0.10 + 15*0.10 + 20*0.80) watts = 18 watts
  • Power density (write-intensive): 30.72 TB / 18 watts = 1.7 TB / watt

Seagate EXOS:

  • Power consumption (read-intensive): (5.7*0.10 + 9.4*0.80 + 6.4*0.10) watts = 8.7 watts 
  • Power density (read-intensive): 22 TB / 8.7 watts = 2.5 TB / watt
  • Power consumption (write-intensive): (5.7*0.10 + 9.4*0.10 + 6.4*0.80) watts = 6.6 watts 
  • Power density (write-intensive): 22 TB / 6.6 watts = 3.3 TB / watt

Note: All calculations rounded to the nearest tenth.
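Speciale's per-drive numbers are straightforward to reproduce; the drive ratings and workload mixes below are taken directly from the article, and the time-weighted average is the only modeling step:

```python
def avg_power(idle_w, read_w, write_w, idle_pct, read_pct, write_pct):
    """Time-weighted average power draw for one drive (watts)."""
    return idle_w * idle_pct + read_w * read_pct + write_w * write_pct

# Drive ratings from the article (watts: idle, active read, active write)
micron_6500 = (5.0, 15.0, 20.0)    # 30.72 TB TLC SSD
seagate_x22 = (5.7, 9.4, 6.4)      # 22 TB 7200rpm HDD

read_mix = (0.10, 0.80, 0.10)      # 10% idle, 80% read, 10% write
write_mix = (0.10, 0.10, 0.80)     # 10% idle, 10% read, 80% write

for name, drive, tb in (("Micron ION", micron_6500, 30.72),
                        ("Seagate EXOS", seagate_x22, 22.0)):
    for label, mix in (("read-intensive", read_mix),
                       ("write-intensive", write_mix)):
        watts = avg_power(*drive, *mix)
        print(f"{name} {label}: {watts:.1f} W, {tb / watts:.1f} TB/watt")
```

Running this reproduces the bullet-point figures above (14.5 W and 18 W for the SSD, 8.7 W and 6.6 W for the HDD, with the matching TB/watt densities).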

We charted the power consumption numbers to make the comparison clearer:

SSDs had a marginal advantage in the idle state, with HDDs needing 14 percent more power, but HDDs were 68 percent more efficient than SSDs when writing data, which resulted in a 63 percent efficiency advantage in the write-intensive workload. The HDD read advantage was less, 37 percent on a straight active read comparison and 40 percent in the read-intensive workload.

Speciale says: “This clearly demonstrates that the perception of high-density QLC flash SSDs as having a power efficiency advantage over HDDs isn’t accurate today.”

His analysts also looked at a TB/watt power-density rating, and found that the 22TB disk drive had a 19 percent read-intensive and 94 percent write-intensive power-density advantage over the 30.72 TB SSD. Speciale reckons: “This will obviously vary with other workload pattern assumptions and is certainly subject to change as SSD densities increase in the coming years.”

Speciale also caveats the results by saying: “Moreover, there are additional considerations for enclosure-level (servers and disk shelves) density and power consumption metrics, and how the cost of power affects each customer’s overall storage TCO.”

His conclusion from all this is that: “We do not see a significant power consumption difference between the drive types (SSD vs HDD) for this to be a decision criteria.”

Rack power density counts

However, as SSDs are physically smaller than HDDs: “Based on these specs, we can see that SSDs provide a significant density advantage over HDDs. Some commercially available enclosures (storage servers and disk shelves) we see available today also can provide more than a 2X advantage in rack density for these SSDs.”

“It is important to note, however, that the amount of power that can be delivered to a datacenter rack (‘rack power density’) can be a limiting factor in many datacenters. In some cases, this rack power limitation does not support fully populating the racks with these high density servers. So ultimately power delivery can become the limiting factor in achieving ultra-high levels of rack density.” 


Scality says it is itself agnostic in general about the HDD vs SSD choice for its object and file storage: “SSDs can deliver tangible performance benefits for read-intensive, latency-sensitive workloads.” But: “HDDs will remain a good choice for many other petabyte-scale unstructured data workload profiles for the next several years, especially where price (both $ per TB and price/performance) are a concern.”

Comment

This Scality finding casts a different light on the will-SSDs-kill-HDDs debate as it could surely change the five-year TCO calculation. Read Speciale’s blog here.

When hyperscalers catch a cold, Quantum sneezes

Storage supplier Quantum has encountered significant challenges due to a decrease in device and media sales, and reduced spending by hyperscale customers.

In the first fiscal quarter of 2024 ending June 30, Quantum experienced a 5.4 percent year-over-year decline in revenues, recording $91.8 million. This resulted in a net loss of $10.6 million, a figure consistent with the previous year’s results. Notably, this marks Quantum’s 14th consecutive quarter with a loss, following five quarters of year-over-year revenue growth.

CEO Jamie Lerner said: “First quarter revenue was impacted by booking delays; an unanticipated drop in device and media sales late in the quarter; and higher than anticipated weakness in the hyperscale vertical.

Quantum revenues
Five revenue growth quarters come to a crashing halt

“Our subscription ARR in the quarter increased 78 percent year-over-year and 9 percent sequentially as we continue to advance recurring software subscriptions across our customer base.” Over 89 percent of new unit sales were subscription-based.

A chart shows the damage by product sector, with secondary storage the only growth area:

Quantum results

There was a decrease in sales of primary storage systems, device and media, as well as lower services business, partially offset by growth in hyperscale secondary storage systems. 

Then this happened: “Subsequent to quarter-end, our largest hyperscale customer paused orders due to excess capacity driven by broader macro weakness. This development was unexpected and will have a meaningful impact on our second quarter and full year outlook and further punctuates the importance of transitioning our business to a more stable, subscription-based business model to moderate quarterly volatility.”

Financial summary:

  • Gross margin: 38.1 percent vs 35.1 percent a year ago
  • Operating expense: $40.8 million vs $41.1 million the year prior
  • Cash, cash equivalents and restricted cash: $25.7 million
  • Outstanding term loan debt: $88.6 million vs $78.4 million on June 30, 2022

Lerner was open about the need to end dependence on lumpy perpetual license sales and hyperscale customers, saying: “Our entire team is fully focused on executing with a high sense of urgency to secure and convert our expanding pipeline of opportunities into customers. This includes aggressively scaling our non-hyperscale businesses and ramping our full portfolio of end-to-end solutions. We are also further tightening spending across the organization, while maintaining our investment in key sales, marketing, and product development initiatives.”

For the upcoming quarter, the revenue projection stands at approximately $80 million, with a variance of +/- $3 million, translating to a 19.3 percent decline at the midpoint. The revenue forecast for the entire fiscal year is set at around $360 million, with a possible variance of +/- $10 million, which is 12.8 percent less than the fiscal 2023 revenues.

CFO Ken Gianella said: “Not reflected in our original full year outlook was more pronounced declines in both our hyperscale and media businesses as well as the potential impact of a prolonged entertainment work stoppage.

“Our total gross margins are improving with the rotation to a higher revenue contribution from primary storage and non-hyperscale secondary storage customers.”

Quantum reckons its primary and non-hyperscale secondary business is poised to grow up to 40 percent year-over-year. Its hope is the worst should be over in a quarter or two, with things improving in the second half of the year.

MaxLinear pushes out 3rd gen Panther storage accelerator


MaxLinear has launched a third-generation OCP-compliant Panther III storage accelerator promising 12:1 data reduction.

The company was founded in 2003, IPO’d in 2010, and supplies digital, high-performance analog and mixed-signal integrated circuits and software network connectivity products. Its Panther products offload storage IO from a host CPU, providing lower-latency, higher-throughput compression and security acceleration. They have multiple independent parallel transform engines, each of which provides simultaneous compression, encryption, and hashing for FC, NVMe and other connected storage devices: SSD, HDD and tape. The PCIe gen 3-connected Panther II arrived in 2014 with 40Gbps throughput, which rose to 640Gbps when devices were cascaded. Panther III started sampling in August 2022.

James Lougheed, VP & GM of High Performance Analog & Accelerators at MaxLinear, said: “Data storage capacity continues to double every three years and with the deployment of higher throughput NVMe drives and optical cabling, the need for hardware offload accelerators for data reduction continues to rise.”

OCP-compliant Panther III storage accelerator

Panther III provides 200Gbps throughput, five times faster than Panther II, and scalable to 3.2Tbps in cascaded configs.

MaxLinear says its new card is designed for storage workloads such as database acceleration, storage offload, encryption, compression, and deduplication enablement for maximum data reduction. The company says the Internet of Things, HPC, AI/ML and analytics workloads are all increasing storage IO needs.

The product features:

  • Maximized data reduction: Panther III’s 12:1 data reduction allows storage systems to store 1/12th the data, and users to access, process, and transfer data 12 times faster, even in an HDD system.
  • Independent hash block size and programmable offset to enhance deduplication hit rates.
  • Encryption capabilities eliminate the need for Self-Encrypting Drives (SED) and remove the cost of and need for security routers.
  • Software development kit (SDK) contains API, drivers, and source code for incorporation with end application software and software-defined storage (SDS).
  • So-called six-nines reliability: Built-in end-to-end data protection, Real Time Verification (RTV) of all transforms, NVMe protection, and in-line CRCs/parity assure data integrity and eliminate data loss.

We think that this 12:1 data reduction is an “up to” number and dependent on the data set involved.
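The arithmetic behind a reduction ratio is simple enough to sketch. The example below just shows how the claimed "up to" 12:1 figure translates into physical footprint; the sample data volume is an arbitrary illustration:

```python
def stored_bytes(logical_bytes: float, reduction_ratio: float) -> float:
    """Physical capacity consumed after data reduction.

    reduction_ratio is the 'up to' figure (e.g. 12 for 12:1); real-world
    ratios depend heavily on how compressible and dedupable the data is.
    """
    return logical_bytes / reduction_ratio

# 120 TB of logical data at the claimed 12:1 ratio occupies 10 TB on media.
# Moving 1/12th the bytes is also where the "12 times faster" claim comes from.
print(stored_bytes(120e12, 12.0) / 1e12, "TB")
```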

The OCP version of the Panther III card is available immediately, with a PCIe gen 4 version available in Q3 2023. MaxLinear says there are no CPU or software limitations; plug it in and storage IO will go faster.

Optionality: The key to navigating a multicloud world

Commissioned: Cloud software lies at the heart of the modern computing revolution – just not in the way you might think.

When most people think of cloud computing, they think of the public cloud, which is fair play. But if you’re like most IT leaders, your infrastructure operations are far more diverse than they were 10 or even five years ago.

Sure, you run a lot of business apps in public cloud services but you also host software workloads in several other locations. Over time – and by happenstance – you’re running apps on premises, in private clouds and colos and even at the edge of your network, in satellite offices or other remote locations.

Your organization isn’t unique in this regard. Eighty-seven percent of 350 IT decision makers believe that their application environment will become further distributed across additional locations over the next two years, according to an Enterprise Strategy Group poll commissioned by Dell.

It’s a multicloud world; you’re just operating in it. But you need options to help make it work for you. Is that the public cloud, or somewhere else? Yes.

The many benefits of the public cloud

Public cloud services offer plenty of options. You know this better than most people because your IT teams have tapped into the abundant and scalable services the public cloud vendors offer.

Need to test a new mobile app? Spin up some virtual machines and storage, learn what you need to do to improve the app and refine it (test and learn).

What about that bespoke analytics tool your business stakeholders have been wanting to try? Assign it some assets and watch the magic happen. Click some buttons to add more resources as needed.

Such efficient development, fueled by composable microservices and containers that comprise cloud-native development, is a big reason why most IT leaders have taken a “cloud-first” approach to deploying applications. It’s not by accident that worldwide public cloud sales topped $545.8 billion in 2022, a 23 percent increase over 2021, according to IDC.

The public cloud’s low barrier to entry, ease-of-procurement and scalability are among the chief reasons why organizations pursuing digital transformations have re-platformed their IT operating models on such services.

The public cloud’s data taxes

You had to know a but is coming. And you aren’t wrong. Yes, the public cloud provides flexibility and agility as you innovate. And yes, the public cloud provides a lot of options vis-a-vis data, analytics, IoT and AI services.

But the public cloud isn’t always the best option for your business. Like anything else, it’s got its share of drawbacks, namely around portability. As many IT organizations have learned, getting data out of a public cloud can be challenging and costly.

In fact, many IT leaders have come to learn that operating apps in a public cloud comes with what amounts to data taxes. For one, public cloud providers use proprietary data formats, making it difficult to export data you store there to another cloud provider, let alone use it for on-premises apps.

Then there are the data egress fees, or the cost to remove data from a cloud platform, which can be exorbitant. A typical rate is $0.09 per gigabyte, but the more data you want to move, the greater the financial penalty you’ll incur.
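At a flat $0.09/GB, the egress bill scales linearly with data volume. Real providers use tiered pricing and free allowances, so treat this sketch as a back-of-envelope ceiling rather than any provider's actual bill:

```python
def egress_cost_usd(gigabytes: float, rate_per_gb: float = 0.09) -> float:
    """Flat-rate egress cost estimate. Actual cloud pricing is tiered,
    so this is an upper-bound sketch, not a provider's real invoice."""
    return gigabytes * rate_per_gb

# Pulling 100 TB (102,400 GB) out of a cloud at the headline rate:
print(f"${egress_cost_usd(100 * 1024):,.2f}")  # $9,216.00
```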

Finally, have you tried to remove large datasets from a public cloud? Okay, then you know how hard and risky it is – especially for datasets stored in several locations. Transferring large datasets courts network latency that impinges on application performance. Moreover, because your apps depend on your datasets, the more data you offload to a public cloud platform, the greater the gravity of that data and thus the harder it is to move.

The sheer weight of data gravity is a major reason why so many IT leaders continue to run their software in public clouds, regardless of other available options. After a time, IT leaders feel locked in to a particular cloud platform.

Rebalancing, or optimizing for a cloud experience

Such trappings are among the reasons many organizations are rethinking the public “cloud-first” approach and taking a broader view of optionality.

Many IT departments are assessing the best place to run workloads based on performance, latency, cost and data locality requirements.

In this cloud optimization or rebalancing, IT organizations are deploying apps intentionally in private clouds, traditional on-premises infrastructure and colocation facilities. In some cases, they are repatriating workloads – moving them from one environment to another.

This multicloud-by-design approach is critical for organizations seeking the optionality to move workloads where they make the most sense without sacrificing the cloud experience they’ve come to enjoy.

The case for optionality

This is one of the reasons Dell designed a ground-to-cloud strategy, which brings our storage software, including block, file and object storage to Amazon Web Services and Microsoft Azure public clouds.

Dell further enables you to manage multicloud storage and Kubernetes container deployments through a single console – critical at a time when many organizations seek application portability and control as they pursue cloud-native development.

Meanwhile, Dell’s cloud-to-ground strategy enables your organization to bring the experience of cloud platforms to datacenter, colo and edge environments while enjoying the security, performance and control of an on-premises solution. Dell APEX Cloud Platforms provide full-stack automation for cloud and Kubernetes orchestration stacks, including Microsoft Azure, Red Hat OpenShift and VMware.

These approaches enable you to deliver a consistent cloud experience, bringing management consistency and data mobility across your various IT environments.

Here’s where you can learn more about Dell APEX.

Brought to you by Dell Technologies.

Xinnor RAIDs Kioxia datacenter SSDs for performance

xiRAID dev Xinnor says its software can efficiently manage a suite of Kioxia CM7 datacenter SSDs, claiming to achieve over 70 percent of their top sequential read speed in a system it plans to demo later this week.

The CM7 SSDs are capable of reaching up to 2.7 million/600,000 random read/write IOPS and can deliver a bandwidth of up to 14/6.75 GBps for sequential read/write operations. Unlike conventional systems that use an external hardware RAID card, software RAID processes the RAID parity calculations using the host CPU. According to Xinnor, its xiRAID software surpasses the speed of other software RAID (SWRAID) alternatives. The benchmark results for RAID 5 and 6 were obtained using a dual gen 4 Xeon SP Ingrasys server in conjunction with a dozen CM7 dual-port, PCIe gen 5 interface SSDs.

Ingrasys VP Brad Reger said: “The tests run in our lab on latest Ingrasys PCIe gen 5 server demonstrates that xiRAID excels in harnessing the full potential of PCIe gen 5 NVMe SSDs from Kioxia, showcasing remarkable performance with the reliability of RAID.” Although it doesn’t fully utilize the SSDs’ capabilities, the RAID 5 and 6 performance charts confirm it’s close.

Xinnor RAID performance

For context, a standalone Kioxia CM7 Series SSD can reach 14GBps in sequential read and 6.75GBps in sequential write. Its random read performance is rated at 2.7 million IOPS, with random write performance exceeding 300,000 IOPS.

Xinnor says its software has lockless datapath architecture that evenly distributes the workload across all CPU cores. Incorporating AVX (Advanced Vector Extensions) technology, the software can process multiple data units simultaneously using parallel YMM register operations. This capability allows the Xinnor software to match or even outperform certain hardware RAID cards.
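The parity math being accelerated here is, at its core, a bytewise XOR across the stripe. This pure-Python sketch shows the single-parity (RAID 5-style) principle only – none of the AVX vectorization or lockless scheduling Xinnor describes:

```python
def xor_parity(chunks: list[bytes]) -> bytes:
    """Compute single parity across equal-sized data chunks (RAID 5 style)."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

# Three data chunks on three drives; parity goes on a fourth.
data = [b"disk0---", b"disk1---", b"disk2---"]
parity = xor_parity(data)

# If one drive fails, XORing the survivors with parity recovers its contents.
recovered = xor_parity([data[0], data[2], parity])
assert recovered == data[1]
```

AVX speeds this up by XORing 32 bytes at a time in YMM registers instead of one byte per operation, which is why a host CPU can keep pace with PCIe gen 5 SSDs.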

While Xinnor claims that these performance levels reduce the need for external RAID cards, it’s worth noting that GPU-powered RAID card manufacturer Graid claims to have developed technology capable of boosting sequential writes to 90GBps or more in RAID-5 settings.

Both Xinnor and Kioxia are set to showcase their collaborative system at the FMS 2023 event in Santa Clara later this week.

Toshiba: $14B buyout offer led by Japanese businesses is go

Toshiba

Publicly owned Japanese conglomerate Toshiba, which makes disk drives and has a stake in Kioxia, is attempting to take itself private via a $14 billion ( ¥2 trillion) buyout scheme.

The scheme is led by Japan Industrial Partners (JIP), and funded by Japanese banks and businesses. It values Toshiba at $32/share (¥4,620) [PDF] – unchanged from the March offer – and the tender offer runs from August 8 until September 20, 2023. At least two-thirds of its shareholders must tender their stock for the bid to succeed.

JIP was established in November 2002 in Japan to function as a Japanese-style private equity investment business in the corporate reorganization and restructuring of Japanese companies. 

The buyout offer was recommended by Toshiba’s board chairperson, Akihiro Watanabe, who said in an earnings call reporting the company’s Q1 results today: “It is a day that marks Toshiba’s exit from an eight-year tunnel.”

The tunnel part is a reference to problems like the mega-company’s accounting scandal of 2015 and the 2017 bankruptcy of its US-based Westinghouse nuclear power station construction business after tremendous losses. Toshiba bought Westinghouse for $5 billion in 2006. Japan’s nuclear power stations ceased being an attractive market after the Fukushima plant meltdown in 2011. That killed Toshiba’s nuclear power station business, but the Japanese market is now changing, as global warming enhances nuclear power’s profile due to its environmental credentials.

As a loss mitigation measure, Toshiba sold off part of its NAND memory business – which operates a NAND manufacturing joint venture with Western Digital – to a Bain-led consortium for $18 billion in 2017. The business was rebranded as Kioxia, and Toshiba still owns 40 percent of it. There are ongoing merger discussions between Western Digital and Kioxia, and a merger could be announced later this month.

Toshiba endured years of board and senior management turmoil as turnaround plans were rejected, and options such as a CVC-led private equity bid, and three-way and two-way splits were examined and also rejected. There was also a governance scandal. The JIP buyout offer keeps Toshiba’s ownership in Japanese hands, and would pave the way to a private restructuring of the business to return it to profitability.

Its disk drive business has a 17.5 percent share of the worldwide HDD business behind leader Seagate’s 44.5 percent and Western Digital’s 38 percent. If the buyout offer succeeds then it is possible the HDD business unit would be sold.

Toshiba’s results for its first fiscal 2024 quarter, ended June 30, were 5 percent down Y/Y at $5 billion (¥704.1 billion) in sales, with a loss of $176 million (¥25.4 billion). Part of Toshiba’s latest loss was due to depressed results from Kioxia, which, like all the other NAND suppliers, is experiencing a recession and NAND over-supply. Long-term market prospects for NAND are excellent as more and more data needs to be stored and accessed quickly – faster than disk drives allow.

Toshiba makes a wide variety of products, ranging from air conditioners and home appliances such as TVs and microwaves to semiconductors.

Kioxia speeds up in datacenter SSD refresh

Kioxia has launched a high-speed PCIe gen 5 datacenter SSD, the CD8P, which is significantly faster than its existing CD8 datacenter product and has double the stated capacity.

Both drives use BiCS 5, Kioxia’s 112-layer 3D NAND technology, with cells formatted for TLC (3 bits/cell). They are single port datacenter drives with power loss protection, end-to-end data protection, and support the OCP Datacenter NVMe SSD Specification 2.0. The CD8P range has flash die failure recovery as well. Self-encrypting (SED) models support the TCG Opal and Ruby SSC standards. Like the CD8, the CD8Ps come in read-intensive and mixed-use variants with differing capacity levels.

Kioxia SSD business unit SVP and GM Neville Ichhaporia said of the release: “The CD8P Series is ready for next-gen PCIe 5.0 servers, delivering a great combination of high performance with low latency in both E3.S and 2.5-inch (U.2) form factors.”

They are intended for general-purpose server and cloud applications. The CD8 starts at 960GB and tops out at 15.36TB in read-intensive form, whereas the read-intensive CD8P starts at 1.92TB and goes up to 30.72TB.

Mixed-use capacities for the CD8 range from 800GB to 12.8TB, reflecting the need for over-provisioning cells for the added write burden. The CD8P’s equivalent range is 1.6TB to 12.8TB. The CD8P E3.S format drive’s maximum capacity for read-intensive work is 7.68TB with the U.2 (2.5-inch) version running up to 30.72TB.

The CD8P stated performance:

  • Random read IOPS: 2 million vs CD8’s 1.25 million
  • Random write IOPS: 400,000  vs CD8’s 200,000
  • Sequential reads: 12 GBps vs CD8’s 7.2 GBps
  • Sequential writes: 5.5 GBps vs CD8’s 6 GBps

(These are all “up to” numbers.) The difference from the CD8 is, we understand, down to Kioxia’s latest controller and firmware. The firm emphasizes the drive’s consistent performance, saying its 99.999th percentile latency is below 255 µs.

As with the CD8, the CD8P supports 1 drive write per day (DWPD) in read-intensive form and 3 DWPD in mixed-use form, during its 5-year warranty.
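A DWPD rating converts into total warranty-period writes in a simple way – capacity times DWPD times days of warranty. A quick sketch using the CD8P's read-intensive top capacity:

```python
def warranty_writes_tb(capacity_tb: float, dwpd: float,
                       warranty_years: int = 5) -> float:
    """Total terabytes writable over the warranty at the rated DWPD."""
    return capacity_tb * dwpd * 365 * warranty_years

# 30.72 TB read-intensive CD8P at 1 DWPD over its 5-year warranty:
print(round(warranty_writes_tb(30.72, 1)))  # 56064 TB, roughly 56 PB written
```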

Although faster than the CD8, Kioxia’s CD8P is not quite as fast as Samsung’s PM1743 dual-port datacenter SSD, which provides up to 2.5 million/250K random read/write IOPS and has sequential read and write bandwidth numbers of up to 13 GBps and 6.6 GBps respectively.

However, Kioxia has its own dual-port datacenter drive, the CM7, which provides up to 2.7 million/600K random read/write IOPS and 14/6.75 GBps sequential read/write bandwidth – better than Samsung.

Solidigm’s D5-P5336 datacenter SSD goes up to 61.44TB with its 192-layer QLC (4 bits/cell) NAND. Kioxia will, we think, need to move to BiCS 6 (162-layer) or BiCS 8 (218-layer) NAND technology to match that capacity level.

Kioxia’s CD8P drives are now sample shipping to prospective customers.

Cobalt Iron touts tech to migrate backups to its SaaS vaults

Cobalt Iron has released fresh backup migration tech, Compass Migrator, that it says can shift your legacy IBM backups to Cobalt Iron’s SaaS backup service, with IBM’s Spectrum Protect and TSM initially supported.

There are very many backup suppliers, and each one’s backup file format is proprietary, making migration to a replacement supplier impractical and sometimes impossible. You need to restore all the legacy-format backups and then back them up again using the new supplier’s software. It’s not an automated process and can take many weeks, if not months, to complete. Cobalt Iron’s product offers a solution to that problem.

Cobalt Iron’s deployment coordinator, Graham McGivern, said: “As a user of this remarkable Compass Migrator tool, I am amazed by how it simplifies the management of numerous data migrations from multiple sources and targets.”

Don’t get the wrong idea though. “Multiple sources” actually means IBM currently, as CMO Andy Hurt told us: “The current Compass Migrator announcement is for the Spectrum Protect Edition and includes the full migration automation available for legacy TSM and Spectrum Protect legacy environments. Other automated Compass Migrator Editions for other legacy backup environments will be completed as needed by customer engagements.”

TSM stands for Tivoli Storage Manager. Hurt said: “Both disk and tape-based backup sources are supported. … PBs of data and thousands of tape volumes are within the current scope of Compass Migrator operations.”

The company says Compass Migrator:

  • Selectively identifies data to be migrated (e.g., only particular types of data, for certain systems, by date range, active data only, etc.).
  • Automatically migrates data from legacy backup environments into multiple storage targets (including new data centers or the cloud) in the Compass environment.
  • Optimizes migration processes (e.g., minimizes tape mounts, dynamically manages a migration staging area, manages throughput, allows massive parallelism for large data amounts, etc.).
  • Rebinds long-term retention data with appropriate retention policies.
  • Provides migration progress indicators, details, and migration validation analysis.
  • Maintains data custody-tracking throughout the entire migration process and beyond.
  • Provides summary and analysis reports of all migration operations as audit evidence.
  • Supports migration restarts in the event of failed migration jobs or scheduled maintenance activities.
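
The selective-identification rules in the first bullet can be pictured as a filter over a catalog of legacy backup objects. The sketch below is purely illustrative; the record fields and function names are our own assumptions, not Cobalt Iron’s actual schema or API.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record of one legacy backup object; field names are
# illustrative, not Cobalt Iron's actual catalog schema.
@dataclass
class BackupObject:
    system: str
    data_type: str
    backed_up: date
    active: bool

def select_for_migration(objects, systems=None, data_types=None,
                         since=None, active_only=False):
    """Apply the kinds of selection rules Compass Migrator describes:
    by system, by data type, by date range, and active-data-only."""
    selected = []
    for obj in objects:
        if systems and obj.system not in systems:
            continue  # not one of the requested systems
        if data_types and obj.data_type not in data_types:
            continue  # wrong type of data
        if since and obj.backed_up < since:
            continue  # outside the requested date range
        if active_only and not obj.active:
            continue  # skip expired/inactive copies
        selected.append(obj)
    return selected
```

In a real migration the selected set would then be streamed to the new storage targets in parallel, with retention policies rebound on arrival, as the remaining bullets describe.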

Hurt said: “Migration throughput is completely throttled by available customer networking and backup resources and other customer operations. Compass Migrator can be scaled to very high throughput leveraging many legacy source backup servers and many Compass Accelerator targets.”

Veeam resells Cobalt Iron’s SaaS data protection.

Comment

If Cobalt Iron can work out how to migrate Spectrum Protect and TSM backups to its SaaS vaults, then so can other backup suppliers. Having it done for Spectrum Protect and Tivoli means, in principle, it could be done for Veritas, Legato, and other well-used and long-lived backup software products. This opens the way to make competitive inroads into, and raids on, incumbent backup suppliers’ customer bases.

Storage news roundup – 3 August

Zivan Ori, founder of NVMe flash array company E8, is leaving AWS for a to-be-revealed destination. AWS bought E8 in 2019. Ori said in a LinkedIn post: “The last time I took such a decision was 9 years ago when I founded E8 Storage. Exactly 4 years ago, AWS acquired E8, where we became part of EBS. Now with the public launch of EBS io2 Block Express, AWS is offering the fastest block storage in the public cloud – our vision is complete! It is time for me now to embark on a new and equally exciting adventure. Not much I can share publicly now, but stay tuned…”

Cohesity is making layoffs. A company spokesperson said this is a continuation of the plans announced in June: “Cohesity is currently executing on its plan to optimize our workforce, with a twofold goal of having more flexibility to increase our investments in strategic areas of critical importance to our customers, and becoming cash flow positive in FY24. We will ensure that impacted employees receive resources and support from Cohesity, and where possible can be redeployed to open roles within the company. We will also continue to recruit globally in areas of strategic importance to Cohesity.” No specific numbers or terms of departure were mentioned.

Nick Pearce, co-founder and director of Object Matrix, which provides object storage for the media and entertainment market, is leaving the DataCore-acquired company. DataCore bought Object Matrix in January. His destination is as-yet unknown.

Singapore-based DapuStor has announced mass production of its PCIe 5.0 SSD Haishen5 product series with a Marvell Bravera SC5 controller. It has sequential read and write speeds reaching up to 14,000/8,000 MBps, 2.8 million random read IOPS and 600,000 random write IOPS, with 55μs random read latency and 7μs random write latency. It supports SR-IOV and CMB (Controller Memory Buffers) and comes in E1.S, E3.S and U.2 formats.

At the Flash Memory Summit today, the Futurum Group announced that the SuperWomen in Flash Leadership Award for 2023 goes to Amy Fowler, VP and GM, FlashBlade, at Pure Storage. This reflects her key role as part of the team that launched FlashBlade, which became available in 2017, achieved over $1 billion in revenues within four years of launch and is now approaching $2 billion. She has long been involved in Women@Pure, an internal organisation providing an open forum where members collaborate. Fowler also serves as the executive sponsor of Pride at Pure.

Lithuania-based Genomika has received over €5 million from the EU’s Pathfinder initiative for research into DNA storage. It has previously worked with Twist BioScience. Genomika says it aims to develop a DNA storage drive in 3 years.

Kioxia is delaying the start of 3D NAND production at its No. 2 Factory (K2) Kitakami Plant. The new plant was originally expected to start mass production this year, but due to the reduced global demand for 3D NAND Flash, it will be postponed to an as-yet undefined date.

Kioxia America announced the availability of the first hardware samples supporting the Linux Foundation’s vendor-neutral Software-Enabled Flash Community Project, making flash software-defined. It consists of purpose-built, media-centric flash hardware focused on hyperscale requirements with an open source API and libraries. “The Software-Enabled Flash project allows the flash industry to shed the legacy HDD device paradigm,” said Eric Ries, SVP of the Memory and Storage Strategy Division for KIOXIA America. “Flash can be customized for different storage requirements, and protocols can be changed with a simple driver change while keeping the same hardware in place.” It’s expecting to deliver customer samples this month and will be showing the concept at FMS 2023.

NetApp has restructured its unified partner program into PartnerSphere with a single model covering public and hybrid clouds, as well as AI and analytics. The intent is to consolidate multiple programs for different kinds of channel partners, such as Spot partners, into a single entity that includes all partner types, business models and routes to market. PartnerSphere, NetApp says, shifts from solution specializations to identifying partner capabilities and competencies aligned to its key focus areas: Cloud Solutions, Hybrid Cloud and AI & Analytics.

Observability and security supplier Dynatrace has signed a definitive agreement to acquire Israeli startup Rookout, which supplies dynamic instrumentation for cloud-native apps so developers can find and fix bugs faster. It is used by Backblaze, NetApp and Seagate. Adding Rookout to the Dynatrace platform will provide developers with increased code-level observability into production environments. It’s a cash buy and the amount was not disclosed.

Weka has provided some speed numbers for its scale-out parallel filesystem. It says it delivers exceptional performance on-premises and in the cloud across bandwidth as well as IOPS and latency, which its customers say are all important for GPU workloads.

Next-gen solid-state lithium-metal battery tech developer QuantumScape has appointed Dr. Siva Sivaram, President of Western Digital, and a veteran of the semiconductor and data storage industries, to the newly created role of President. He will oversee QuantumScape’s technology and manufacturing groups as the company ramps up its transition from R&D to production.

Blancco champions permanent data erasure solutions

Profile. As the world continues to grapple with the realities of increasing data volumes and e-waste, companies like Blancco have found their niche by providing data erasure solutions. These have become critical as more devices that hold valuable data are sent for recycling rather than ending up as e-waste.

In contrast to anti-ransomware technologies that focus on maintaining immutable copies of data, Blancco’s objective is to ensure the secure deletion of data.

The company was established in 1997 in Joensuu, Finland, originally as Carelian Innovation. Early products included the Protekto anti-theft device for PCs and the Blancco Data Cleaner, a solution designed to overwrite data on disk drives. Blancco’s inception was largely driven by a data security breach in Finland in 1997, where health records of more than 3,000 patients were discovered on computers sold by a Finnish hospital. The product was officially launched in 1999.

Carelian rebranded to Blancco in 2000 and subsequently focused its efforts on developing the software. This move led to the introduction of Blancco LAN Server in 2001, broadening its clientele to include recycling and refurbishing centers, auction houses, leasing companies, and large corporations. By 2005, Blancco had amassed a user base surpassing 3 million.

Blancco’s software works by overwriting all sectors of a drive with random patterns of zeroes and ones, preventing any potential data leaks before devices are traded, sold, recycled, or reused. The software also allows for targeted sanitization, overwriting data in a specified file, folder, or location, while leaving non-targeted areas intact.
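
The overwrite principle can be shown with a minimal file-level sketch. This is illustrative only and not Blancco’s implementation: real sanitization tools work at the device level, account for SSD wear-leveling and remapped sectors, and verify each pass.

```python
import os
import secrets

def overwrite_file(path: str, passes: int = 3, chunk_size: int = 64 * 1024) -> None:
    """Overwrite a file's contents in place: `passes` random passes,
    then a final pass of zeroes. A toy sketch of the overwrite idea,
    not a substitute for device-level, standards-compliant erasure."""
    size = os.path.getsize(path)

    def one_pass(make_chunk) -> None:
        f.seek(0)
        remaining = size
        while remaining > 0:
            n = min(chunk_size, remaining)
            f.write(make_chunk(n))
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # push the pass to the device, not just the page cache

    with open(path, "r+b") as f:
        for _ in range(passes):
            one_pass(secrets.token_bytes)   # random-pattern pass
        one_pass(lambda n: b"\x00" * n)     # final zero pass
```

Targeted sanitization, as described above, applies the same overwrite to a single file or folder while leaving the rest of the drive untouched.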

The company continued to evolve, aligning its tech to meet various international data erasure standards. In 2007, Blancco was added to the NATO Information Assurance Product Catalogue. This was followed by Sun Microsystems licensing Blancco’s software in 2008 to facilitate data removal for its workstations and servers. Blancco’s compliance with international regulatory standards for data erasure was a significant factor in securing the deal. To date, these standards include over 25 industry norms, including US DoD 5220.22-M and NIST Clear and Purge.

The company further expanded its portfolio by extending data erasure to other suppliers’ servers. A Data Center Edition was released in 2008 for mass storage systems from companies including EMC, HP, and Sun. Blancco also provides tamper-proof, audit-ready Certificates of Erasure that confirm the erasure process, aiding regulatory compliance and ensuring data protection. In 2012, Blancco added mobile data erasure to its repertoire with the introduction of Blancco Mobile.

The business now owns a diverse range of software that caters to mobile phones, PCs, servers, and datacenters. The software is capable of wiping data on SATA, SAS, and NVMe drives, both HDD and SSD, and can manage RAID groups and virtual machines. Some of its key offerings in the Data Center Edition include:

  • Blancco Drive Eraser
  • Blancco Removable Media Eraser
  • Blancco LUN Eraser
  • Blancco Virtual Machine Eraser
  • Blancco Hardware Solutions
  • Blancco Management Console

Globally, over 250 refurbishment centers use its technology. The company has also collaborated with eBay Korea for smartphone data removal and Samsung for SSD data deletion. Blancco maintains offices in the UK, France, Germany, North America, and the Asia Pacific region.

In April 2014, the company was acquired by UK-based firm Regenersis. The latter’s software division, which offers device diagnostics, repair, and data erasure services, was subsequently renamed to Blancco Technology Group in 2015, with Blancco becoming a subsidiary.

Today, the Blancco Technology Group is listed on the London Stock Exchange and is currently poised to be acquired by private equity biz Francisco Partners for £175 million ($221.3 million). Francisco Partners is particularly interested in Blancco due to its sustainability and e-waste reduction initiatives, aspects that are gaining prominence in the era of ESG (Environmental, Social, and Governance) considerations.

Francisco Partners is also buying observability platform New Relic for $6.5 billion.