
Lenovo picks Cigent data protection for its PC fleet

Lenovo is using Cigent software to hide PC data in secure and invisible vaults to prevent data theft.

The PC maker’s ThinkShield Data Defense, based on Cigent software, embeds prevention-based defenses directly into secured storage devices that use self-encrypting drives (SEDs). This, Lenovo says, makes it better than both software-only alternatives and policy-based data loss prevention (DLP). Multi-factor authentication (MFA) adds an extra layer of security.

Tom Ricoy, Cigent CRO, said: “Detection-based endpoint security solutions continue to be bypassed by adversaries. … Using secured storage built into Lenovo devices and file encryption with multi-factor authentication (MFA) for file access, Cigent and Lenovo are able to help mitigate even the most sophisticated attacks.”

Lenovo commercial laptops ship with SED storage devices that follow the TCG Opal 2.0 specification. Despite what the name implies, SEDs do not in and of themselves “self-encrypt”: the encryption must be activated by additional software, in this case Cigent’s, which implements full-drive or partial-drive encryption on the SEDs.

Any attempt to copy, open, move, or delete a file protected by ThinkShield Data Defense, whether by malware or by a malicious user with remote or direct physical access to the PC, should be stopped by MFA. The user must authenticate with their chosen 2FA/MFA factor, such as a PIN, YubiKey, or Duo push. Malware and malicious users, lacking that factor, cannot authenticate as a trusted user, so file access is denied.
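The gate described above can be sketched in a few lines of Python. This is purely illustrative: the real product enforces the check in the drive and OS layers, and every name below (PROTECTED_FILES, verify_factor, open_protected) is hypothetical.

```python
# Illustrative sketch of MFA-gated file access; not the Cigent implementation.

PROTECTED_FILES = {"secrets.docx"}

def verify_factor(factor):
    """Stand-in for a real second-factor check (PIN, YubiKey, Duo...)."""
    return factor == "correct-pin"

def open_protected(path, factor=None):
    """Deny access to protected files unless the MFA factor verifies."""
    if path in PROTECTED_FILES and not verify_factor(factor):
        raise PermissionError("MFA required for " + path)
    return "contents of " + path

print(open_protected("secrets.docx", factor="correct-pin"))  # trusted user
try:
    open_protected("secrets.docx")  # malware presents no factor
except PermissionError as err:
    print("denied:", err)
```

The point of the design is that possession of the second factor, not mere presence on the machine, decides access.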

The Cigent software can also create so-called Secure Vaults. Think of these as additional partitions/drives, like a D:\ or F:\ drive alongside your C:\, where data can be stored. When the vault is locked, the drive makes that portion of the storage completely inaccessible to the operating system.

The data in the vault is completely inaccessible, and invisible to malware, we’re told. Lenovo insists that no tool can see the data, not even a hex editor, such as WinHex, that reads the sectors of the drive. This, it claims, prevents data theft from a Lenovo PC or laptop that uses ThinkShield Data Defense.

Find out more about the Cigent Data Defense Secure Vault here.

Panmnesia CXL memory pool 3x faster than RDMA

Panmnesia, a technology firm based in South Korea, unveiled its pooled CXL memory system at Flash Memory Summit (FMS) 2023. In a live demonstration, the system showcased performance metrics more than three times faster than an RDMA-based system while running a Meta recommendation application.

The CXL technology behind this was developed in partnership with the Korea Advanced Institute of Science & Technology (KAIST) located in Daejeon. Panmnesia has crafted a comprehensive CXL framework that includes CXL CPU, Switch, and Memory expander modules. This framework possesses 6TB of DIMM-based CXL memory. The company’s strategy involves marketing its CXL hardware and software intellectual property products to CXL system developers and manufacturers.

Dr Myoungsoo Jung, Panmnesia CEO, said: “We’re excited to introduce our innovative multi-terabyte full-system CXL framework at this year’s Flash Memory Summit. Our commitment to pioneering cutting-edge solutions is demonstrated through our CXL IP, which we believe will significantly enhance memory and storage capabilities within data centers.”

The demonstrated system has a capacity three times larger than the 2TB Samsung/MemVerge-based pooled memory system also displayed at FMS 2023.

Panmnesia hardware

The Panmnesia framework system chassis has two CXL CPU modules, highlighted in the diagram above, three lighter-colored CXL switch modules, and six 1TB memory modules or CXL endpoint controllers, forming a single DIMM pool.

On the software side, the system runs on Linux and consists of CXL hardware device drivers, a virtual machine subsystem, and a CXL-optimized user application. Interestingly, the virtual machine software component creates a CPU-less NUMA node within the memory space.

Panmnesia software

A video slide deck shows the performance of this CXL framework system in a movie recommendation application akin to Meta’s. This is again compared to an RDMA-based alternative in which the servers had no added external memory.

Panmnesia RDMA comparison

The video starts with the loading of user and item data (during tensor initialization) and subsequently uses a machine learning model for movie suggestions. The Panmnesia system completed the task 3.32 times faster than its RDMA counterpart.

An additional benefit of Panmnesia’s system is its modularity; DIMMs within memory modules can be substituted, allowing for a scaling up of memory capacity using larger-capacity DIMMs rather than incorporating additional memory modules.

Western Digital enhances OpenFlex JBOF

Western Digital has upgraded its OpenFlex storage solution, integrating faster dual-port SSDs and the new RapidFlex fabric bridge for improved performance.

OpenFlex is a 2RU x 24-bay disaggregated chassis designed with NVMe SSDs, offering Ethernet NVMe-oF access. It supports both RoCE and NVMe/TCP. The Data24 3200 enclosure is equipped with the Ultrastar DC SN655 drive, an enhancement of last year’s SN650 model. Both drives incorporate WD/Kioxia BiCS5 flash with 112 layers in TLC (3bits/cell) configuration, reaching a peak capacity of 15.36TB, and both use a PCIe gen 4 interface.

Kurt Chan, VP and GM of Western Digital’s Digital Platforms division, said: “When combining RapidFlex and Ultrastar into the Data24, Western Digital gives organizations a transformative, next-generation way to share flash and bring greater value to their business.”

Western Digital OpenFlex Data24 3200 chassis, DC SN655 SSD, RapidFlex C2000 Fabric Bridge card, and A2000 ASIC
Clockwise from top left: OpenFlex Data24 3200 chassis, DC SN655 SSD, RapidFlex C2000 Fabric Bridge card, and A2000 ASIC

The SN655 provides up to 1,100,000/125,000 random read/write IOPS and sequential read/write speeds of up to 6.8/3.7GBps. In comparison, the older SN650 model achieved up to 970,000/109,000 random read/write IOPS and a sequential speed of 6.6/2.8GBps.

Both the SN650 and SN655 feature a U.3 format, come with a five-year warranty, and support 1 drive write per day. The SN655 is available in three capacities: 3.84, 7.68, and 15.36TB, whereas the SN650 was available in 7.68 and 15.36TB options.

The RapidFlex C2000 Fabric Bridge, powered by the A2000 ASIC, serves as a PCIe adapter. This device can export the PCIe bus over Ethernet and boasts two 100GbitE ports linked to 16 PCIe gen 4 lanes. An additional advantage of the C2000 is its ability to function in initiator mode, adding to its existing target mode. This feature enables clients to deploy more cost-efficient and energy-saving initiator cards in their servers, negating the need for a traditional Ethernet NIC for NVMe-oF connectivity.

With the C2000’s capabilities, the Data24 3200 can connect to up to six server hosts directly, eliminating the necessity for a switch device.

The OpenFlex Data24 3200 NVMe-oF Storage Platform, along with the RapidFlex A2000, C2000 FBDs, and the Ultrastar SN655 dual-port NVMe SSD are currently available for sampling.

Rubrik buys Laminar to shore up security position

Rubrik has announced its acquisition of Laminar, a data security posture management provider, for an undisclosed amount.

Update: Laminar acquisition price added. 9 April 2024.

It says the combination will integrate cyber recovery and cyber posture across enterprise, cloud, and SaaS platforms. As part of this expansion, Rubrik plans to establish an R&D center in Israel, concentrating on cybersecurity. Rubrik’s existing R&D facilities are in India and the United States.

Mike Tornincasa, Rubrik’s Chief Business Officer, said: “Rubrik and Laminar share a common vision that cyber resilience is the next frontier in data security. Laminar’s technology, ability to execute, and vision make it a perfect complement to our strategy and … roadmap.”

Established in Tel Aviv in 2020 by Amit Shaked and Oran Avraham, Laminar has secured $67 million through seed funding (Nov 2021) and two-part Series A funding (Nov 2021 and June 2022). Given its rapid growth, the acquisition price likely exceeds the company’s total funding. [Rubrik subsequently admitted in its IPO filing (2 April 2024) that it paid $105 million for Laminar; $91 million in cash and the remainder in shares, according to Calcalist.]

Amit Shaked and Oran Avraham of Laminar, which has been bought by Rubrik

The combined companies will develop an integrated product, ensuring clients benefit from enhanced cyber recovery and posture capabilities.

Rubrik is no stranger to the implications of cyber vulnerabilities. In March, it experienced a data breach through the Fortra GoAnywhere managed file transfer service, attributed to a zero-day vulnerability.

Laminar’s first product, released in February 2022, emphasizes data visibility, privacy, and governance across multiple public cloud environments to mitigate data breaches and compliance issues. Their current offerings provide comprehensive data discovery, classification, and protection across major cloud service providers and data warehouse environments.

Data Catalog by Laminar, which has been bought by Rubrik
Laminar Data Catalog screenshot

In the realm of data security posture management, the focus is on detecting sensitive data, prioritizing it by risk, and safeguarding it. Rubrik, following the Fortra incident, is already enhancing its cybersecurity measures in partnership with Zscaler to prevent unauthorized data exports. This acquisition furthers its breach-prevention capabilities.

Rubrik and Laminar claim the merger will:

  • Proactively improve cyber posture to stop cyberattacks before they happen by knowing where all of an organization’s data lives, who has access to the data, and how it’s being used.
  • Expand focus beyond just network and endpoint security to include cloud and data security.
  • Prepare for more sophisticated cyber threats with AI-driven technology.

Amit Shaked, CEO and co-founder of Laminar, said in a statement: “There is a dark side to digital transformation in the form of shadow data, and more businesses are realizing they can’t protect against what they can’t see – leaving them vulnerable to cyberattacks… The combination of cyber posture and cyber recovery will help create a cyber resilient future where organizations can take on any threat, at any stage of the attack.”

Industry giants push CXL memory frontiers

Samsung CXL
Samsung CXL

Suppliers are showcasing a 2TB pooled CXL memory concept system at the Flash Memory Summit 2023 in Santa Clara.

H3 Platform has developed a co-engineered 2RU chassis that houses 8 x Samsung CXL 256GB memory modules. These are connected to an XConn XC50256 CXL switch and use MemVerge Memory Machine X software to manage the CXL memory – both pooling it and provisioning it to apps in associated servers. Compute Express Link (CXL) is a PCIe gen 5 bus-based connectivity protocol. Its v2.0 version supports memory pooling and provides cache coherency between a CXL memory device and the systems accessing it, such as servers.

JS Choi, Samsung Electronics’ VP of its New Business Planning Team, said: “The concept system unveiled at Flash Memory Summit is an example of how we are aggressively expanding its usage in next-generation memory architectures.”

Unlike conventional DRAM DIMMs that are sold directly to server OEMs, CXL pooled memory requires CXL switches, enclosures, and management software. Industry leaders like Micron, SK hynix, and Samsung view CXL pooled memory as a method to diversify the datacenter memory market beyond the typical server chassis. Their goal is to foster a comprehensive ecosystem comprising CXL switches, chassis, and software suppliers to propel this emergent market.

MemVerge, Samsung, XConn and H3 Platform unveiled a 2TB Pooled CXL Memory System for AI
MemVerge, Samsung, XConn and H3 Platform unveiled a 2TB Pooled CXL Memory System for AI

XConn’s XC50256 is recognized as the first CXL 2.0 and PCIe gen 5 switch ASIC. It provides 256 lanes that can be configured as up to 32 ports, a total switching capacity of 2,048GBps, and minimized port-to-port latency.
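As a sanity check, the quoted 2,048GBps switching figure is consistent with 256 PCIe gen 5 lanes at roughly 4GBps of usable bandwidth per lane per direction, counted full duplex (the per-lane rate and duplex accounting are our assumptions, not XConn's stated method):

```python
# Back-of-envelope check of the XC50256 aggregate switching figure.
lanes = 256
gbps_per_lane_per_dir = 4   # approx. usable GBps per PCIe gen 5 lane (32 GT/s)
directions = 2              # full duplex

total = lanes * gbps_per_lane_per_dir * directions
print(total)                # 2048 GBps, matching the quoted capacity

lanes_per_port = lanes // 32  # with all 32 ports configured
print(lanes_per_port)         # 8 lanes (an x8 link) per port
```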

According to H3 Platform, its concept chassis provides connectivity to eight hosts, with the MemVerge Memory Machine X software dynamically allocating pool memory to them. Additionally, an H3 Fabric Manager provides the management interface to the chassis.

This MemVerge software mirrors the version used in Project Endless Memory, announced in May in collaboration with SK hynix and its Niagara pooled CXL memory hardware. If an application requires memory beyond what’s available in its server host, it can secure an additional allocation from the connected CXL pool through MemVerge’s software. A Memory Viewer service provides insight into the physical topology and displays a heatmap of memory capacity and bandwidth usage by application.

The DRAM producers are all interested in CXL because it enables them to sell DRAM products beyond the confines of the x86 CPU DIMM socket. Micron, for example, is sampling its CXL 2.0-based CZ120 memory expansion modules to customers and partners. They come in 128GB and 256GB capacities in the EDSFF E3.S 2T form factor with a PCIe gen 5 x8 interface, competing head-on with Samsung’s 256GB CXL Memory Modules.

Micron’s CZ120 can deliver up to 36GBps memory read/write bandwidth. Meanwhile, Samsung has introduced a 128GB CXL 2.0 DRAM module with a bandwidth of 35GBps. Although the bandwidth for Samsung’s 256GB module remains undisclosed, they have previously mentioned a 512GB CXL DRAM module, suggesting a variety of CXL Memory Module capacities in the pipeline.

B&F anticipates that leading server OEMs such as Dell, Lenovo, HPE, and Supermicro, will inevitably roll out CXL pooled memory chassis products. These will likely be complemented by supervisory software that manages dynamic memory composability functions in tandem with their server products.

SK hynix breaks barriers with 321-layer 3D NAND

SK hynix has unveiled a sample 321-layer 3D NAND chip and is in the process of developing a PCIe gen 5 interface for UFS flash modules.

At FMS 2023 in Santa Clara, the company showcased its 321-layer flash chip, noting it offers a 1 terabit capacity using the TLC (3bits/cell) format. This layer count surpasses previous benchmarks set by Micron’s 232 layers and Samsung’s 236 layers, while Kioxia and WD currently have technology at the 218-layer level. YMTC’s in-development 232-layer chip is delayed due to US tech export restrictions.

NAND layer table

SK hynix said: “With another breakthrough to address stacking limitations, SK hynix will open the era of NAND with more than 300 layers and lead the market.” The new chip is projected to enter mass production in the first half of 2025, indicating it’s in the early stages of development.

While specifics are unclear, there’s speculation on whether the 321-layer chip consists of two separate 160-layer class chips (string stacking) or is a singular stacked device, which would pose more manufacturing challenges. Given that the SK hynix 238-layer chip is a combination of two 119-layer components, a string stacking technique seems plausible.

SK hynix 321-layer chip

Earlier this year, SK hynix presented its 300+ layer technology at the ISSCC 2023 conference, boasting a record 194MBps write speed. A white paper released at the time stated: “To reduce the cost/bit, the number of stacked layers needs to increase, while the pitch between stacked layers decreases. It is necessary to manage the increasing WL (wordline) resistance produced by a decreased stack pitch.”

No technical details are included in its latest announcement.

SK hynix’s existing 238-layer generation features a 512Gb die, but with the increased density of the 321-layer technology, this will be expanded to 1Tb. This suggests potential for greater storage capacities within compact spaces for SSDs, as well as embedded UFS-type NAND drives for mobile devices and lightweight notebooks.

SK hynix also announced its introduction of a UFS v4 chip, offering transfer speeds up to 5,800 MBps, with a UFS v5 variant, capable of up to 46.4Gbps transfer speed, currently in the works.

It is also progressing towards the development of PCIe gen 6 interface drives, offering an 8GBps lane bandwidth, a significant leap from PCIe gen 5’s 4GBps. This development, in conjunction with SK hynix’s UFS advancements and the 321-layer technology, is attributed to the rising demand from AI workloads, notably influenced by platforms like ChatGPT and other large language models.

Jungdal Choi, head of NAND development at SK hynix, said: “With timely introduction of the high-performance and high-capacity NAND, we will strive to meet the requirements of the AI era and continue to lead innovation.”

Gartner sees shifting players in enterprise backup

In the Gartner 2023 Enterprise Backup and Recovery Solutions Magic Quadrant, leaders appear more tightly grouped than before, with notable changes in other sections of the quadrant.

The Magic Quadrant is a two-dimensional chart with “Ability to Execute” on the vertical axis and “Completeness of Vision” on the horizontal axis. The chart is divided into four quadrants: Leaders and Challengers are in the upper half, while Niche Players and Visionaries occupy the lower half. A balance between execution and vision is indicated by proximity to the diagonal line running from the bottom left to the top right.

Comparing the current Enterprise Backup and Recovery Magic Quadrant with last year’s, several key differences emerge.

Gartner backup magic quadrants

Four leading companies – Commvault, Rubrik, Cohesity, and Veritas – are closely positioned. Veeam is distinct due to its superior execution capability but relatively limited vision, while Dell maintains a consistent position from the previous year, marginally trailing the other five.

A significant shift is observed with IBM. Previously in the Challengers quadrant, it has now moved to the Visionaries section, joining Druva and HYCU, both of which maintained their positions from last year.

While Acronis was categorized as a Visionary in the previous report, it now appears as a Niche Player, albeit with an improved Ability to Execute score. In the Niche Players quadrant, Zerto (recently acquired by HPE) and Micro Focus have departed, while Microsoft and OpenText have made their debut, joining Acronis and Unitrends.

Scality claims disk drives can use less electricity than high-density SSDs

Scality analysts have said that, in some use cases, disk drives can use less power than SSDs – specifically when compared with high-density SSDs when the storage is in active use.

Hard disk drives (HDDs) have physical platters kept spinning by electric motors, and read/write heads moved across the platter surfaces, also by electricity. Solid state drives (SSDs), in which data writing and reading relies on electrical currents, have no mechanical moving parts and so, you would assume, need less electricity. The company says this is true when HDDs and SSDs are inactive, but not when data is being accessed.

Paul Speciale, Scality
Paul Speciale

Scality CMO Paul Speciale says in a blog: “Surprisingly, our research here at Scality has found that high-density SSDs don’t have a power consumption or power-density advantage over HDD. In fact, we see the reverse today.”

The Scality research was kicked off by CEO Jerome Lecat’s rebuttal of the thinking that SSDs will kill off HDDs due to SSDs having a lower total cost of ownership. Speciale then said that Scality was looking more closely at the power, cooling and rack density aspects of the debate and that examination has resulted in the discovery that active HDDs use less power than active SSDs.

The analysis compared two drives:

  • A Seagate Exos X22 7200rpm 22TB HDD rated at 5.7 watts (idle), 9.4 watts (active read), 6.4 watts (active write)
  • A Micron 6500 ION 30.72TB TLC SSD rated at 5 watts (idle), 15 watts (read), 20 watts (write) per drive, priced to be competitive with the Solidigm P5316 QLC drive

It modeled two workloads: a read-intensive one with 10 percent idle time, 80 percent reading, and 10 percent writing, and a write-intensive one with the same idle time, 80 percent writing, and 10 percent reading. Speciale says: “For each workload profile, drives are assumed to be in the specified power state for the percentage of time indicated. The average per-drive power calculations for each workload profile are as follows:”

Micron ION: 

  • Power consumption (read-intensive): (5*0.10 + 15*0.8 + 20*0.10) watts  = 14.5 watts 
  • Power density (read-intensive): 30.72 TB / 14.5 watts = 2.1 TB / watt
  • Power consumption (write-intensive): (5*0.10 + 15*0.10 + 20*0.80) watts  = 18 watts 
  • Power density (write-intensive): 30.72 TB / 18 watts = 1.7 TB / watt 

Seagate EXOS:

  • Power consumption (read-intensive): (5.7*0.10 + 9.4*0.80 + 6.4*0.10) watts = 8.7 watts 
  • Power density (read-intensive): 22 TB / 8.7 watts = 2.5 TB / watt
  • Power consumption (write-intensive): (5.7*0.10 + 9.4*0.10 + 6.4*0.80) watts = 6.6 watts 
  • Power density (write-intensive): 22 TB / 6.6 watts = 3.3 TB / watt

Note: All calculations rounded to the nearest tenth.
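The per-drive calculations above can be reproduced with a short script using the same drive ratings and workload mixes as listed:

```python
# Reproduces Scality's per-drive power and power-density model.
# Each drive: (capacity TB, idle W, active-read W, active-write W).
drives = {
    "Micron 6500 ION (30.72 TB SSD)": (30.72, 5.0, 15.0, 20.0),
    "Seagate Exos X22 (22 TB HDD)":   (22.0,  5.7,  9.4,  6.4),
}
# Workload mixes: fraction of time spent (idle, reading, writing).
workloads = {
    "read-intensive":  (0.10, 0.80, 0.10),
    "write-intensive": (0.10, 0.10, 0.80),
}

for name, (tb, idle, read, write) in drives.items():
    for wl, (fi, fr, fw) in workloads.items():
        watts = idle * fi + read * fr + write * fw
        # e.g. HDD write-intensive: 6.6 W and 3.3 TB/W, as in the article
        print(f"{name} {wl}: {watts:.1f} W, {tb / watts:.1f} TB/W")
```

Swapping in other duty cycles or drive ratings shows how sensitive the conclusion is to the workload-mix assumptions, which is the caveat Speciale himself makes below.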

We charted the power consumption numbers to make the comparison clearer:

SSDs had a marginal advantage in the idle state, with HDDs needing 15 percent more power, but HDDs were 68 percent more efficient than SSDs when writing data, which resulted in a 63 percent efficiency advantage in the write-intensive workload. The HDD read advantage was less, 37 percent on a straight active read comparison and 40 percent in the read-intensive workload.

Speciale says: “This clearly demonstrates that the perception of high-density QLC flash SSDs as having a power efficiency advantage over HDDs isn’t accurate today.”

His analysts also looked at a TB/watt power-density rating, and found that the 22TB disk drive had a 19 percent read-intensive and 94 percent write-intensive power-density advantage over the 30.72 TB SSD. Speciale reckons: “This will obviously vary with other workload pattern assumptions and is certainly subject to change as SSD densities increase in the coming years.”

Speciale also caveats the results by saying: “Moreover, there are additional considerations for enclosure-level (servers and disk shelves) density and power consumption metrics, and how the cost of power affects each customer’s overall storage TCO.”

His conclusion from all this is that: “We do not see a significant power consumption difference between the drive types (SSD vs HDD) for this to be a decision criteria.”

Rack power density counts

However, as SSDs are physically smaller than HDDs: “Based on these specs, we can see that SSDs provide a significant density advantage over HDDs. Some commercially available enclosures (storage servers and disk shelves) we see available today also can provide more than a 2X advantage in rack density for these SSDs.”

“It is important to note, however, that the amount of power that can be delivered to a datacenter rack (‘rack power density’) can be a limiting factor in many datacenters. In some cases, this rack power limitation does not support fully populating the racks with these high density servers. So ultimately power delivery can become the limiting factor in achieving ultra-high levels of rack density.” 


Scality says it is itself agnostic in general about the HDD vs SSD choice for its object and file storage: “SSDs can deliver tangible performance benefits for read-intensive, latency-sensitive workloads.” But: “HDDs will remain a good choice for many other petabyte-scale unstructured data workload profiles for the next several years, especially where price (both $ per TB and price/performance) are a concern.”

Comment

This Scality finding casts a different light on the will-SSDs-kill-HDDs debate as it could surely change the five-year TCO calculation. Read Speciale’s blog here.

When hyperscalers catch a cold, Quantum sneezes

Storage supplier Quantum has encountered significant challenges due to a decrease in device and media sales, and reduced spending by hyperscale customers.

In the first fiscal quarter of 2024 ending June 30, Quantum experienced a 5.4 percent year-over-year decline in revenues, recording $91.8 million. This resulted in a net loss of $10.6 million, a figure consistent with the previous year’s results. Notably, this marks Quantum’s 14th consecutive quarter with a loss, following five quarters of year-over-year revenue growth.

CEO Jamie Lerner said: “First quarter revenue was impacted by booking delays; an unanticipated drop in device and media sales late in the quarter; and higher than anticipated weakness in the hyperscale vertical.

Quantum revenues
Five revenue growth quarters come to a crashing halt

“Our subscription ARR in the quarter increased 78 percent year-over-year and 9 percent sequentially as we continue to advance recurring software subscriptions across our customer base.” Over 89 percent of new unit sales were subscription-based.

A chart shows the damage by product sector, with secondary storage the only growth area:

Quantum results

There was a decrease in sales of primary storage systems, device and media, as well as lower services business, partially offset by growth in hyperscale secondary storage systems. 

Then this happened: “Subsequent to quarter-end, our largest hyperscale customer paused orders due to excess capacity driven by broader macro weakness. This development was unexpected and will have a meaningful impact on our second quarter and full year outlook and further punctuates the importance of transitioning our business to a more stable, subscription-based business model to moderate quarterly volatility.”

Financial summary:

  • Gross margin: 38.1 percent vs 35.1 percent a year ago
  • Operating expense: $40.8 million vs $41.1 million the year prior
  • Cash, cash equivalents and restricted cash: $25.7 million
  • Outstanding term loan debt: $88.6 million vs $78.4 million on June 30, 2022

Lerner was open about the need to end dependence on lumpy perpetual license sales and hyperscale customers, saying: “Our entire team is fully focused on executing with a high sense of urgency to secure and convert our expanding pipeline of opportunities into customers. This includes aggressively scaling our non-hyperscale businesses and ramping our full portfolio of end-to-end solutions. We are also further tightening spending across the organization, while maintaining our investment in key sales, marketing, and product development initiatives.”

For the upcoming quarter, the revenue projection stands at approximately $80 million, with a variance of +/- $3 million, translating to a 19.3 percent decline at the midpoint. The revenue forecast for the entire fiscal year is set at around $360 million, with a possible variance of +/- $10 million, which is 12.8 percent less than the fiscal 2023 revenues.
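Working backwards from the guidance midpoints and stated percentage declines (our arithmetic, not figures the company published), the implied year-ago comparatives are:

```python
# Back out the prior-year figures implied by Quantum's guidance.
q2_guide, q2_decline = 80.0, 0.193   # $M midpoint, 19.3% decline
fy_guide, fy_decline = 360.0, 0.128  # $M midpoint, 12.8% below FY2023

q2_prior = q2_guide / (1 - q2_decline)
fy23 = fy_guide / (1 - fy_decline)
print(f"Implied year-ago Q2 revenue: ${q2_prior:.1f}M")
print(f"Implied FY2023 revenue:      ${fy23:.1f}M")
```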

CFO Ken Gianella said: “Not reflected in our original full year outlook was more pronounced declines in both our hyperscale and media businesses as well as the potential impact of a prolonged entertainment work stoppage.

“Our total gross margins are improving with the rotation to a higher revenue contribution from primary storage and non-hyperscale secondary storage customers.”

Quantum reckons its primary and non-hyperscale secondary business is poised to grow up to 40 percent year-over-year. Its hope is the worst should be over in a quarter or two, with things improving in the second half of the year.

MaxLinear pushes out 3rd gen Panther storage accelerator


MaxLinear has launched a third-generation OCP-compliant Panther III storage accelerator promising 12:1 data reduction.

The company was founded in 2003, IPO’d in 2010, and supplies digital, high-performance analog, and mixed-signal integrated circuits and software network connectivity products. Its Panther products offload storage IO from a host CPU and provide lower-latency, higher-throughput compression and security acceleration. They have multiple independent parallel transform engines, each providing simultaneous compression, encryption, and hashing for FC, NVMe, and other connected storage devices: SSD, HDD, and tape. The PCIe gen 3-connected Panther II arrived in 2014 with 40Gbps throughput, rising to 640Gbps when devices were cascaded. Panther III started sampling in August 2022.

James Lougheed, VP & GM of High Performance Analog & Accelerators at MaxLinear, said: “Data storage capacity continues to double every three years and with the deployment of higher throughput NVMe drives and optical cabling, the need for hardware offload accelerators for data reduction continues to rise.”

OCP-compliant Panther III storage accelerator

Panther III provides 200Gbps throughput, five times faster than Panther II, and scalable to 3.2Tbps in cascaded configs.
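The generational speedup and cascade ceiling follow directly from the quoted figures; note that the 16-device cascade count is our inference from dividing 3.2Tbps by 200Gbps, not something MaxLinear states:

```python
# Arithmetic behind the quoted Panther throughput figures.
panther2_gbps = 40    # Panther II per-device throughput
panther3_gbps = 200   # Panther III per-device throughput
cascade_tbps = 3.2    # quoted cascaded maximum

print(panther3_gbps // panther2_gbps)            # 5x generational gain
print(int(cascade_tbps * 1000 / panther3_gbps))  # implies 16 cascaded devices
```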

MaxLinear says its new card is designed for storage workloads such as database acceleration, storage offload, encryption, compression, and deduplication enablement for maximum data reduction. The company says the Internet of Things, HPC, AI/ML and analytics workloads are all increasing storage IO needs.

The product features:

  • Maximized data reduction: Panther III’s 12:1 data reduction allows storage systems to store 1/12th the data, and users to access, process, and transfer data 12 times faster, even in an HDD system.
  • Independent hash block size and programmable offset to enhance deduplication hit rates.
  • Encryption capabilities eliminate the need for Self-Encrypting Drives (SED) and remove the cost of and need for security routers.
  • Software development kit (SDK) contains API, drivers, and source code for incorporation with end application software and software-defined storage (SDS).
  • So-called six-nines reliability: built-in end-to-end data protection, Real Time Verification (RTV) of all transforms, NVMe protection, and in-line CRCs/parity assure data integrity and eliminate data loss.

We think that this 12:1 data reduction is an “up to” number and dependent on the data set involved.
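To put that best-case ratio in concrete terms (the data-set size below is hypothetical):

```python
# Effect of an "up to" 12:1 reduction ratio on physical capacity consumed.
# Real-world ratios depend heavily on the compressibility of the data set.
logical_tb = 120          # hypothetical logical data-set size
ratio = 12                # vendor's best-case reduction ratio
physical_tb = logical_tb / ratio
print(physical_tb)        # 10.0 TB actually written to media
```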

The OCP version of the Panther III card is available immediately, with a PCIe gen 4 version available in Q3 2023. MaxLinear says there are no CPU or software limitations; plug it in and storage IO will go faster.

Optionality: The key to navigating a multicloud world

Commissioned: Cloud software lies at the heart of the modern computing revolution – just not in the way you might think.

When most people think of cloud computing, they think of the public cloud, which is fair play. But if you’re like most IT leaders, your infrastructure operations are far more diverse than they were 10 or even five years ago.

Sure, you run a lot of business apps in public cloud services but you also host software workloads in several other locations. Over time – and by happenstance – you’re running apps on premises, in private clouds and colos and even at the edge of your network, in satellite offices or other remote locations.

Your organization isn’t unique in this regard. Eighty-seven percent of 350 IT decision makers believe that their application environment will become further distributed across additional locations over the next two years, according to an Enterprise Strategy Group poll commissioned by Dell.

It’s a multicloud world; you’re just operating in it. But you need options to help make it work for you. Is that the public cloud, or somewhere else? Yes.

The many benefits of the public cloud

Public cloud services offer plenty of options. You know this better than most people because your IT teams have tapped into the abundant and scalable services the public cloud vendors offer.

Need to test a new mobile app? Spin up some virtual machines and storage, learn what you need to do to improve the app and refine it (test and learn).

What about that bespoke analytics tool your business stakeholders have been wanting to try? Assign it some assets and watch the magic happen. Click some buttons to add more resources as needed.

Such efficient development, fueled by composable microservices and containers that comprise cloud-native development, is a big reason why most IT leaders have taken a “cloud-first” approach to deploying applications. It’s not by accident that worldwide public cloud sales topped $545.8 billion in 2022, a 23 percent increase over 2021, according to IDC.

The public cloud’s low barrier to entry, ease-of-procurement and scalability are among the chief reasons why organizations pursuing digital transformations have re-platformed their IT operating models on such services.

The public cloud’s data taxes

You had to know a “but” was coming. And you aren’t wrong. Yes, the public cloud provides flexibility and agility as you innovate. And yes, the public cloud provides a lot of options vis-a-vis data, analytics, IoT and AI services.

But the public cloud isn’t always the best option for your business. Like anything else, it’s got its share of drawbacks, namely around portability. As many IT organizations have learned, getting data out of a public cloud can be challenging and costly.

In fact, many IT leaders have come to learn that operating apps in a public cloud comes with what amounts to data taxes. For one, public cloud providers use proprietary data formats, making it difficult to export data you store there to another cloud provider, let alone use it for on-premises apps.

Then there are the data egress fees, or the cost to remove data from a cloud platform, which can be exorbitant. A typical rate is $0.09 per gigabyte but the more data you want to move, the greater the financial penalty you’ll incur.
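As a rough illustration of how egress fees scale at the flat $0.09/GB rate quoted above (real provider pricing is tiered and varies):

```python
def egress_cost(gigabytes: float, rate_per_gb: float = 0.09) -> float:
    """Illustrative egress bill at a flat per-GB rate."""
    return gigabytes * rate_per_gb

# Pulling a 50 TB (50,000 GB) dataset out at $0.09/GB:
print(f"${egress_cost(50_000):,.2f}")  # $4,500.00
```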

Finally, have you tried to remove large datasets from a public cloud? Okay, then you know how hard and risky it is – especially with datasets stored in several locations. Transferring large datasets invites network latency that impinges on application performance. Moreover, because your apps depend on your datasets, the more you offload to a public cloud platform, the greater the gravity of that data and thus the harder it is to move.

The sheer weight of data gravity is a major reason why so many IT leaders continue to run their software in public clouds, regardless of other available options. After a time, IT leaders feel locked in to a particular cloud platform or platforms.

Rebalancing, or optimizing for a cloud experience

Such trappings are among the reasons many organizations are rethinking the public “cloud-first” approach and taking a broader view of optionality.

Many IT departments are assessing the best place to run workloads based on performance, latency, cost and data locality requirements.

In this cloud optimization or rebalancing, IT organizations are deploying apps intentionally in private clouds, traditional on-premises infrastructure and colocation facilities. In some cases, they are repatriating workloads – moving them from one environment to another.

This multicloud-by-design approach is critical for organizations seeking the optionality to move workloads where they make the most sense without sacrificing the cloud experience they’ve come to enjoy.

The case for optionality

This is one of the reasons Dell designed a ground-to-cloud strategy, which brings our storage software, including block, file and object storage to Amazon Web Services and Microsoft Azure public clouds.

Dell further enables you to manage multicloud storage and Kubernetes container deployments through a single console – critical at a time when many organizations seek application portability and control as they pursue cloud-native development.

Meanwhile, Dell’s cloud-to-ground strategy enables your organization to bring the experience of cloud platforms to datacenter, colo and edge environments while enjoying the security, performance and control of an on-premises solution. Dell APEX Cloud Platforms provide full-stack automation for cloud and Kubernetes orchestration stacks, including Microsoft Azure, Red Hat OpenShift and VMware.

These approaches enable you to deliver a consistent cloud experience while bringing management consistency and data mobility across your various IT environments.

Here’s where you can learn more about Dell APEX.

Brought to you by Dell Technologies.

Xinnor RAIDs Kioxia datacenter SSDs for performance

xiRAID dev Xinnor says its software can efficiently manage a suite of Kioxia CM7 datacenter SSDs, claiming to achieve over 70 percent of their top sequential read speed in a system it plans to demo later this week.

The CM7 SSDs are capable of reaching up to 2.7 million/600,000 random read/write IOPS and can deliver a bandwidth of up to 14/6.75 GBps for sequential read/write operations. Unlike conventional systems that use an external hardware RAID card, software RAID processes the RAID parity calculations on the host CPU. According to Xinnor, its xiRAID software surpasses the speed of other software RAID (SWRAID) alternatives. The benchmark results for RAID 5 and 6 were obtained using a dual gen 4 Xeon SP Ingrasys server in conjunction with a dozen CM7 dual-port, PCIe gen 5 interface SSDs.
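As a back-of-envelope check of the “over 70 percent of top sequential read speed” claim: twelve CM7s at their rated 14 GBps each give a theoretical aggregate ceiling, assuming the host CPU and PCIe fabric could absorb it all (the figures below are the article’s, the arithmetic is ours):

```python
# Assumed figures: 12 CM7 drives, each rated at 14 GBps sequential read.
drives = 12
per_drive_read_gbps = 14
ceiling_gbps = drives * per_drive_read_gbps    # theoretical aggregate read bandwidth
achieved_gbps = round(ceiling_gbps * 0.70, 1)  # what ">70 percent" would imply
print(ceiling_gbps, achieved_gbps)  # 168 117.6
```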

Ingrasys VP Brad Reger said: “The tests run in our lab on the latest Ingrasys PCIe gen 5 server demonstrate that xiRAID excels in harnessing the full potential of PCIe gen 5 NVMe SSDs from Kioxia, showcasing remarkable performance with the reliability of RAID.” Although it doesn’t fully utilize the SSDs’ capabilities, the RAID 5 and 6 performance charts confirm it’s close.

Xinnor RAID performance

For context, a standalone Kioxia CM7 Series SSD can reach 14GBps in sequential read and 6.75GBps in sequential write. Its random read performance is rated at 2.7 million IOPS, with random write performance exceeding 300,000 IOPS.

Xinnor says its software has a lockless datapath architecture that evenly distributes the workload across all CPU cores. Incorporating AVX (Advanced Vector Extensions) technology, the software can process multiple data units simultaneously using parallel YMM register operations. This capability allows the Xinnor software to match or even outperform certain hardware RAID cards.
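xiRAID’s AVX-accelerated datapath is proprietary, but the underlying RAID 5 parity math is plain XOR. A minimal Python sketch of parity generation and single-drive rebuild (illustrative only; real implementations stream large stripes through vector registers rather than looping byte by byte):

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """RAID 5 parity: byte-wise XOR of the data blocks in a stripe."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving: list[bytes], p: bytes) -> bytes:
    """Recover a lost block by XOR-ing the surviving blocks with the parity block."""
    return parity(surviving + [p])

d0, d1, d2 = b"\x0f\x0f", b"\xf0\xf0", b"\xaa\x55"
p = parity([d0, d1, d2])
assert rebuild([d0, d2], p) == d1  # the lost block d1 is recovered
```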

While Xinnor claims that these performance levels reduce the need for external RAID cards, it’s worth noting that GPU-powered RAID card manufacturer Graid claims to have developed technology capable of boosting sequential writes to 90GBps or more in RAID-5 settings.

Both Xinnor and Kioxia are set to showcase their collaborative system at the FMS 2023 event in Santa Clara later this week.