
Overland Storage survives another day in the Last Chance Saloon

Remember the tape and disk array vendor Overland Storage and its Tandberg Data subsidiary? It is about to get a new owner – albeit the same owner wearing a different hat.

This is after it was buried for several years inside Sphere 3D, a software virtualization vendor. Sphere 3D has not been that successful at bringing its products to the market and wants to sell Overland to pay its debts. It ain’t so simple.

The story goes back a long way but let’s begin in May 2014 when perennially loss-making Overland, with its Tandberg Data unit, CEO’d by Eric Kelly, was merged into or acquired by the Eric Kelly-chaired and CEO’d Sphere 3D. In my report at the time I described the companies as a “strange couple”.

Sphere 3D has been unable to grow its Glassware, HVE and other virtualization businesses sufficiently and has made losses ever since.

Overland and under water

In a little more detail, and in an unusual move, Overland CEO Eric Kelly took on the chairman’s role at Sphere 3D while running Overland.

He then took Overland into Sphere 3D and became boss of the whole shop. The idea was to combine two weak companies into a single stronger entity.

The losses continued.

In May 2017 Sphere 3D revealed it was exploring strategic alternatives, a euphemism for selling all or some of the business.

The company announced a complicated plan in February 2018 to sell the Overland-Tandberg business for $45m to a California-based private business set up for the purpose.

This company, called Silicon Valley Technology Partners, is run by Eric Kelly.

Eric Kelly, Sphere 3D chairman

We have a cunning plan

SVTP would raise the cash from investors and pay Sphere 3D, which would use it to settle its debts – including a debenture, a promissory note and bank credit – leaving Sphere 3D and Overland free to grow and prosper.

In August 2018 and en route to the divestiture of Overland, Sphere 3D revealed that Overland and Tandberg had failed to make scheduled interest payments on time and were in default. This meant a creditor, Colbeck, could call in immediate repayment of its loan. Overland was also in default of a debenture issued by FBC.

In a move, the details of which we don’t understand, Sphere 3D assigned a credit agreement to Overland Storage and Tandberg Data as borrowers. FBC then declared “that all default and acceleration notices have been withdrawn, which eliminates the requirements of said debts being immediately due and payable”.

This week the agreement between Sphere 3D and SVTP was amended for the second time. You can read the details in full in Sphere 3D’s 8-K filing with the SEC.

But the upshot is that there is no investor cash to pay off the $45m debt that Sphere 3D, Overland and Tandberg collectively owe. So, a bit of financial engineering is the order of the day.

Instead, SVTP is shunting $18m of the $24.5m that Sphere 3D owes FBC into its Overland/Tandberg subsidiary. It is also issuing paper to buy Overland/Tandberg, and FBC will exchange $6m in debt for equity.

Musical chairs

Before the second amendment, Overland/Tandberg owed $21.1m. It now owes $39m.

To summarise, Sphere 3D is now debt free. It pays off $45m in debt with SVTP paper, partly by exchanging some FBC debt for equity, but mostly by transferring debt obligations to Overland Storage. We infer that creditors think this is the only realistic option for getting their money back.
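As a quick sanity check, the numbers reported in this piece line up roughly as follows (a sketch using the article’s rounded figures; the exact terms are in the 8-K filing):

```python
# Back-of-envelope sketch of the debt shuffle described above.
# Figures are the article's, in $m.
overland_debt_before = 21.1   # owed by Overland/Tandberg before the amendment
debt_shunted = 18.0           # FBC debt moved onto Overland/Tandberg

overland_debt_after = overland_debt_before + debt_shunted
print(f"Overland/Tandberg debt after amendment: ~${overland_debt_after:.0f}m")
```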

This may be a tall order, considering that Overland-Tandberg defaulted on loan repayments in August 2018 and is now saddled with an additional $18m in debt. Let’s hope its fortunes improve.

Comment

The Sphere 3D-Overland saga has been characterised by hard-to-understand financial dealings, with the nuts and bolts of product development and sales taking a back seat. It has also been defined by Eric Kelly’s tenure since 2009. It is time for a change of leadership.

Cohesity says Europe is going well. How well? Well…


Cohesity, the secondary storage converter, has 100 employees and 100 or more customers in EMEA. Not that the US company will talk hard numbers. But I have my sources.

That’s the thing about venture-backed companies on the pre-IPO bandwagon – financial details are scant, and the few random factoids pumped out in teaser press releases never fail to annoy.

Yet here we are again, playing ball with the PR antics of a storage startup. Of course, Cohesity is a very big storage startup that has raised $410m to date including most recently $250m in June 2018.

The company is keen to show that it is (a) putting the money to good use and (b) generating sales momentum, albeit without telling anyone but investors and the IRS what its sales revenues are.

Big in Europe

So in yesterday’s announcement, Cohesity boasted of its growth in EMEA and more specifically in some key European countries.

The UK, Germany, France, Italy, Denmark, and Switzerland generated most sales growth in the region, according to Cohesity. That means some 80 new customers have been acquired.

Cohesity says EMEA revenues grew 365 per cent over the period, although it did not reveal any financial numbers.

Mohit Aron, Cohesity founder and CEO 

Klaus Seidl was headhunted from HPE SimpliVity to head up EMEA sales in November 2017. We know three EMEA execs were also lured from HPE SimpliVity in April this year.

Altogether Cohesity’s EMEA headcount has increased 78 per cent to near-100 over the past five quarters.

It’s a rewarding result for Cohesity’s decision to cough up the money to build out its EMEA channel, sales, marketing and support infrastructure.

Stealth startup Pliops unveils plans for storage processor

Pliops has emerged from stealth mode to herald its ambition to develop “the next generation NAND flash Storage Processors”. These are intended for use in disaggregated and hyperconverged storage and database systems.

On its website, the Israeli startup says its technology greatly simplifies the storage stack, “thus enabling highly scalable storage and DB solutions with optimal cost, power consumption and performance improvements”.

Pliops is a Tel Aviv company founded in 2017 by three guys who worked together at Samsung’s NAND R&D centre in Israel. They are Uri Beitlar, CEO; Moshe Twitto, CTO; and Aryeh Mergi, chairman.

The startup has secured undisclosed funding from State of Mind Ventures.

Pliops logo. Perhaps it indicates Programmable Logic IOPS

Keep your flashlight shining in the Pliops direction. We are sure to hear more in coming months.

Druva adds backup to AWS Snowball Edge data transfer appliance

Druva has teamed up with Amazon Web Services to offer its data protection software on AWS Snowball Edge appliances.

Snowball Edge is AWS’s dedicated server designed for on-premises use cases. Think of it as a mini, skeletal, temporary equivalent of Azure Stack.

The data transfer appliance incorporates a self-contained AWS environment for use at a remote location with restricted or no internet access. The appliance is designed for local data acquisition and processing. It can then be shipped to an AWS data centre for migration to the public cloud.

This is similar to the data transfer functionality of Amazon’s original Snowball device, which is basically disk storage in a box for transferring bulk data to AWS from an on-premises customer data centre.

Snowball Edge combines AWS functions such as EC2 (Elastic Compute Cloud), Greengrass running Lambda functions and the Simple Storage Service (S3). The device includes 100TB of storage capacity.

So where does Druva fit in?

Druva Cloud Platform runs on AWS and provides data protection, governance and analytics, delivered as a data management service. The data protection includes backup, disaster recovery and archiving. It is built on and uses AWS microservices.

This product is now available on Snowball Edge. Customers can apply backup policies and backup or restore directly to and from the device, taking advantage of Druva’s deduplication.

Druva on AWS Snowball Edge is available via early access to Druva and AWS customers in North America. AWS Snowball Edge can be ordered directly through the Druva Cloud Platform and is shipped by Amazon Web Services, pre-configured, directly to the customer.

Nutanix tops WhatMatrix HCI league table

Nutanix has come out top in a report by the enterprise tech comparison website WhatMatrix that compares 10 hyperconverged infrastructure vendors and 12 products.

The suppliers, products and overall percentage ranking scores (higher = better) are:

  • Nutanix Enterprise Cloud Platform – 87.4%
  • DataCore SANsymphony – 83.3%
  • VMware VSAN – 76.6% 
  • Datrium DVX – 75.7%
  • Pivot3 Acuity – 73% 
  • Cisco Hyperflex – 71.6%
  • Dell EMC VxRail – 71.6%
  • HPE SimpliVity – 71.2%
  • NetApp HCI – 70.3%
  • Microsoft Storage Spaces Direct – 68.5%
  • Dell EMC VxFlex OS – 60.8%
  • HPE StoreVirtual VSA – 59.5%

For the second edition of its HCI report WhatMatrix ranks suppliers according to seven criteria: data availability, data services, management, design and deploy, workload support, server support, and storage support.

Their scores are aggregated for an overall ranking by percentage and used to position suppliers in a “2D Landscape Matrix”.
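The aggregation can be sketched as a simple average of the seven criteria scores. Note this equal weighting is an assumption for illustration – WhatMatrix’s actual weighting scheme may differ – and the scores below are invented, not taken from the report:

```python
# Hypothetical sketch of rolling per-criterion percentages into one overall
# ranking. Equal weighting is assumed; the example scores are invented.
criteria = ["data availability", "data services", "management",
            "design and deploy", "workload support", "server support",
            "storage support"]

def overall_score(scores: dict) -> float:
    """Average the seven per-criterion percentages into one overall figure."""
    return sum(scores[c] for c in criteria) / len(criteria)

example = dict(zip(criteria, [90, 85, 88, 80, 92, 86, 90]))
print(f"Overall ranking: {overall_score(example):.1f}%")
```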

As you can see below, this is a set of four nested oblongs. The similarities to Gartner’s Magic Quadrant and IDC’s MarketScape are fairly obvious.

The WhatMatrix HCI landscape

WhatMatrix says the “y-axis represents the overall technical capability determined by the total score generated by all technical evaluation features in the comparison matrix. (Higher=Better) The x-axis visualises leadership in listed focus areas i.e. in subset(s) of evaluation features that focus on a certain use case (specialty).”

The individual criteria ranking components are separately listed and explained with supplier scores. For instance:

WhatMatrix HCI report storage support area chart

The HCI landscape

The report points out that three platforms have made “remarkable progress” since a previous edition published 22 months ago. They are Cisco HyperFlex, Datrium DVX and Pivot3 Acuity.

However, “Dell EMC VxFlex OS and HPE VSA seem to be on their way out in favour of other platforms in the respective vendor portfolios. Microsoft Storage Spaces Direct (S2D) on the other hand just had its first major update in two years with the release of Windows Server 2019, but has been seemingly unable to make the comeback that was expected.”

But can it Scale?

Maxta and Scale Computing are notable omissions, although they are on the radar of report author Herman Rutten, WhatMatrix spokesperson Jane Rimmer told us.

“But as the on-boarding process is very time-consuming he’s not had the bandwidth to include them,” she said. “Plus, we need the vendors to be proactive in working with us and it’s a bit of a case of ‘he who shouts loudest’.

“The ones covered in the report have invested the time in working with us and we hope to expand the numbers over time … When we first created this comparison, it was just five vendors. If we can get them interested, we will certainly make a serious attempt to include them in the next report.”

What about WhatMatrix?

WhatMatrix is an unusual beast, part analyst, part website.

The company is self-funding and sustains itself through ‘enhanced listings’ where vendors can “show additional information on the site, engage with the visitors, etc.,”  according to Rimmer.

She says this gives the organisation a “unique financial independence from investors or sponsors [that] allows us the agnostic place/view that we currently provide.”

Availability

Herman Rutten, lead consultant and report author

 “Software Defined Storage and Hyperconverged Infrastructure 2018” is a freely available 42-page report. No registration is required.


Intel bakes Optane DIMM support into Cascade Lake AP

Intel today announced its Cascade Lake Advanced Performance (AP) Xeon processor with Optane memory support.

The high-end server chip ships at the end of the year and is generally available in 2019. We expect Optane DIMM-enhanced application run-times to be popularised in the first half of 2019.

Intel will reveal more about the Cascade Lake AP at SuperComputing 2018 this month (but do check out The Register’s short piece about this upcoming high-end processor). In the meantime we shall use this opportunity to take a peek at Cascade Lake AP’s Optane DIMM support.

Obtain Optane

Cascade Lake is an iteration of Intel’s 14nm Xeon CPU line, with an optimised cache hierarchy, new security features, VNNI deep learning boost instructions, and optimised frameworks and libraries.

Optane is Intel’s 3D XPoint non-volatile, storage-class memory technology. We understand it is based on a form of phase-change memory.

Jointly developed by Intel and Micron, the technology is available only in Intel’s Optane-branded products. These include NVMe SSDs and Optane DIMMs, which connect directly to the memory bus and offer faster access speeds than the Optane SSDs.

These DIMMs are accessed as memory, with load and store instructions, instead of through a standard storage IO stack – hence the faster access speed. They come in capacities up to 512GB, so a server with six such DIMMs could have 3TB of Optane capacity.
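The load/store model can be illustrated, loosely, with memory-mapped IO. This Python sketch is an analogy only: real Optane DIMM programming goes through a DAX filesystem or Intel’s PMDK libraries, not a plain file mmap.

```python
# Analogy only: persistent memory on the DIMM bus is accessed with load/store
# instructions rather than via a block-IO stack. Memory-mapping a file gives a
# similar programming model (real Optane DIMM code would use DAX or PMDK).
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem_like.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)          # reserve one page of backing store

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"            # "store": a plain memory write, no write() syscall
        data = bytes(m[0:5])         # "load": a plain memory read

print(data)  # b'hello'
```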


Price points

Typically, applications running in a server use DRAM and external storage – SSDs and disk drives. DRAM is very fast, SSDs are medium fast and disks are slow. Optane is slower than DRAM but faster than NAND SSDs. 

The more of an application’s working set (data + code) is in a faster medium the faster it will execute. But faster can come with a hefty price tag.

DRAM is expensive and servers can only have so much. So, hypothetically, a server might provide 1TB DRAM and 15TB of SSD to an application which can then handle 100 web transactions/minute.

If it could all fit in DRAM it would handle, say, 2,000 transactions/minute.

If you provide 1TB DRAM, 2TB of Optane and the 15TB of SSDs, it might run at 800-1,000 transactions/minute or more; much better than the 100 transactions/minute with 1TB DRAM + 15TB SSDs.
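A toy model makes the tiering argument concrete. Everything here is illustrative – the rates and the 0.8 Optane factor are assumptions, not measurements, and the outputs will not match the article’s hypothetical figures exactly:

```python
# Toy model: throughput scales with how much of the working set sits in fast
# media. The rates and the Optane speed factor are invented for illustration.
def transactions_per_min(dram_tb, optane_tb, working_set_tb,
                         all_dram_rate=2000, all_ssd_rate=100,
                         optane_factor=0.8):
    """Crude estimate: DRAM-resident data runs at full rate, Optane-resident
    data at a fraction of it, and the SSD remainder at the slow rate."""
    dram_frac = min(dram_tb / working_set_tb, 1.0)
    optane_frac = min(optane_tb / working_set_tb, 1.0 - dram_frac)
    ssd_frac = 1.0 - dram_frac - optane_frac
    return (dram_frac * all_dram_rate
            + optane_frac * optane_factor * all_dram_rate
            + ssd_frac * all_ssd_rate)

# 3TB working set: DRAM + SSD only, vs DRAM + 2TB Optane + SSD
print(transactions_per_min(1, 0, 3))   # SSD-bound case
print(transactions_per_min(1, 2, 3))   # Optane-enhanced case
```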

48-core

However, Optane DIMMs require special support from the server processor – and that is delivered by the Cascade Lake AP CPUs.

The processor comes in a 1- or 2-socket multi-chip package. It incorporates a high-speed interconnect, up to 48 cores/CPU and support for 12 x DDR4 channels/CPU –  more memory channels than any other CPU, according to Intel.

Intel’s Optane SSD 905P has <10μs read and <11μs write latency. The Optane DIMM operates down at the 7μs latency level.

Servers with Optane-enhanced memory capacity can run larger in-memory applications or have larger working sets in memory. This enables the server to execute the application much faster than when using a typical DRAM and SSD combination.

Light the DIMMs

NetApp’s MAX Data uses Optane DIMM caches in servers, which read and write data from/to an attached ONTAP flash array in single-digit microseconds. Think 5 microseconds, compared with the far higher latency of an NVMe SSD.

A Micron 9100, NVMe-accessed SSD has a 120 microsecond read access latency and a 30 microsecond write access latency.
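Lining up the read latency figures quoted in this piece shows the gap:

```python
# Read latencies quoted above, in microseconds, normalised to the Optane DIMM.
latencies_us = {
    "Optane DIMM (load/store)": 7,
    "Optane SSD 905P (NVMe)": 10,
    "Micron 9100 (NVMe)": 120,
}
baseline = latencies_us["Optane DIMM (load/store)"]
for name, lat in latencies_us.items():
    print(f"{name}: {lat}µs ({lat / baseline:.1f}x the DIMM latency)")
```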

Hello, Nebulon! 3Par amigos unveil their cloud storage start-up

Four 3PAR alumni led by David Scott have set up Nebulon, a cloud-defined storage company.

Scott was the CEO of 3PAR when it was sold for $2.4bn in 2010 to HP, now HPE. He ran the HPE storage business before retiring in February, 2015 to take sundry board-level positions.

Four guys from 3PAR want to do it again.

His co-founders are Siamak Nazari, Sean Etaati and Craig Nunes. Nazari is an ex-HPE Fellow, 3PAR principal engineer and Sun senior staff engineer. Etaati was a distinguished technologist and 3PAR chief hardware architect, resigning from HPE in October 2017. 

Craig Nunes was VP for product and worldwide marketing at 3PAR and then veep for global marketing and alliances at HPE. He jumped ship to Datera where he was the chief marketing officer, and then again to Datrium where he was marketing VP.

Nebulon has announced itself on Facebook, LinkedIn and Twitter. It does not have a website and no funding information is available.

The company is headquartered in Los Altos in Silicon Valley, with offices in Fremont and on the Peninsula. It says early employees include senior execs from Google, DeepLearning.ai, Stanford University, Datrium and HPE-3PAR.

We know little about its technology, except that it focuses on primary storage in a hybrid IT world. And of course it says it is a “cloud-defined storage” company, a term that is new to us.

A different kind of transformation

Wikipedia says Nebulon can refer to a fictional planet in the Transformers universe, or fictional characters in the Marvel universe or from Homestar Runner.

E8 displaces DDN at top of genomics research team’s storage stack


Queen Mary University of London’s (QMUL) genomics research team generated too much data for its high performance computing infrastructure to handle comfortably.

And so the university’s IT department set out to improve workflow and performance. It did this by buying new hardware: a D24 NVMe-over-Fabrics array made by the Israeli startup E8 Storage, combined with a Mellanox InfiniBand connection to the accessing clustered servers.

QMUL has retained its existing DDN GridScaler array, a high-performance parallel file system array based on Spectrum Scale. But it has pushed this behind E8, down the stack, for use as a bulk storage tier.

Slow, slow, quick, quick, slow

QMUL’s IT team saw E8’s benchmark performances, such as its SPEC SFS2014 run, and decided to take a closer look.

On the SPEC SFS2014 run E8 used its D24 storage array, with 24 x HGST SN200 1.6TB dual-port NVMe SSDs (24TB total capacity) and 16 Spectrum Scale client nodes, to achieve 600 builds – a record result at the time. It has since been superseded.

Tom King, assistant director for research, IT services at QMUL, said in a canned quote: “We were extremely pleased with the latest benchmark performance tests which showed E8 Storage as a leader in NVMe.”

The university acquired the D24 array from OCF, a Sheffield, UK-based specialist integrator of HPC systems and an E8 partner. The Spectrum Scale integration made it feasible to use the E8 as a high-performance tier in front of the DDN GridScaler.

E8 D24 NVMe-oF array.

E8’s D24 is used as a fast-access scratch tier for researchers, holding data and metadata. It enables more jobs to be pushed through the cluster at a faster rate.

Accessing data from a shared block array storage resource across an NVMe-over-Fabrics link is generally reckoned to be the fastest way to get at data in such an array. We compare it with other storage protocols here.

If I could turn back time?

DDN recently set a new SPEC SFS2014 record, some 25 per cent faster than an E8 NVMe storage system which used Intel Optane 3D XPoint drives. DDN used a bog-standard Fibre Channel SAS SSD array to outperform an array full of NVMe-connected Optane SSDs – a surprising result.

Fujitsu preps NVMe storage line

Fujitsu this week launched the ETERNUS DX8900 high-end array (tune in here for my write-up).

I was struck by the absence of NVMe drive and NVMe-oF fabric interconnect support – apart from a couple of NVMe drives in a cache sub-system.

So I asked Fujitsu about this omission. Unsurprisingly it already had a press statement in the works about its NVMe plans.

Today the vendor said it will introduce purpose-built NVMe storage products. These will complement existing ETERNUS hybrid flash/disk and all-flash arrays.

Fujitsu ETERNUS DX8900 array.

This plan includes NVMe-oF interconnect support, according to Frank Reichart, Fujitsu senior director for product marketing, storage and data centre solutions.

NVMe is the place to be

All mainstream storage array suppliers either support NVMe drive and fabric access already or will do so in the near future.

According to Fujitsu, future storage systems based on the NVMe protocol and PCIe bus technology will enable customers to manage massive parallel data access without having to buy more storage systems.

With such access, external block array data access is much faster than external filer array access. Of necessity the latter has a filesystem stack to traverse and this takes up CPU cycles and time.

Fujitsu does not say when it will be ready to bring an NVMe-supporting array to market, but we think the end of 2019 is realistic.

Speeds and feeds

Want to know more about storage drive access protocols? Check out our story on Burlywood, a flash controller start-up, in which we compare SATA, SAS and NVMe.


Excelero extends NVMesh reach with TCP and Fibre Channel support

Excelero is broadening its NVMesh offering by adding erasure coding, performance analytics and, most importantly, support for NVMe over traditional TCP/IP and Fibre Channel.

This could be considered a move downmarket for Excelero, which has focused to date on the much faster NVMe-oF over RoCE. However, this initiative makes its technology more accessible and affordable.

For example, shared external array storage using either TCP/IP or Fibre Channel has a migration path to NVMe-oF that could be non-disruptive. Effectively, this includes the entire installed SAN (Storage Area Network) base.

Also, using existing NICs and adapters is likely cheaper than buying new data centre-class Ethernet gear.

IBM and NetApp have announced support for NVMe-FC, and  SolarFlare and Pavilion Data have worked with NVMe over TCP/IP.

NVMesh

Excelero introduced NVMesh to the world in March 2017.

NVMesh architecture

NVMesh offers NVMe-over-Fabrics, using a shared external flash array or a hyperconverged setup that aggregates the component servers’ direct-attached flash storage. The data access protocol is NVMe-oF using RDMA over data centre-class Ethernet (RoCE). This adds about 5µs of latency to the 100µs or so needed to access a direct-attached PCIe SSD.

NVMesh deployment options

TCP and FC support in NVMesh v2.0 adds some latency to data access, according to Excelero. From a baseline drive latency of about 100µs, users who deploy TCP/IP or Fibre Channel connectivity could see latency stretch to 180-200µs.

NVMe-oF the People

This is still significantly faster than external flash array access, according to Yaniv Romen, Excelero’s CTO.  He said external flash arrays operate at sub-millisecond latency, and NVMe over TCP or FC is faster still.  This is similar to the performance reported by Pavilion Data, which found average TCP (183µs) latency was 71 per cent slower than RoCE (107µs).
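Pavilion’s comparison is easy to reproduce from the two figures quoted:

```python
# Reproducing the Pavilion Data comparison quoted above: average NVMe/TCP
# latency vs RoCE latency, both in microseconds.
tcp_us, roce_us = 183, 107
slower_pct = (tcp_us - roce_us) / roce_us * 100
print(f"TCP is {slower_pct:.0f} per cent slower than RoCE")  # ~71 per cent
```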

Excelero has also added erasure coding to its MeshProtect facility, which can, depending upon the implementation, provide 90 per cent drive space efficiency, compared with the 50 per cent of RAID mirroring.
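The space-efficiency claim follows from simple arithmetic: a k+m erasure code stores k data fragments for every m parity fragments. The 9+1 split below is an assumed example – Excelero’s actual MeshProtect layouts may differ:

```python
# Space efficiency of a k+m erasure code vs RAID-1 mirroring. The 9+1 split is
# an assumed example, not necessarily what MeshProtect uses.
def usable_fraction(k: int, m: int) -> float:
    """Fraction of raw capacity holding user data in a k+m erasure code."""
    return k / (k + m)

print(f"9+1 erasure code: {usable_fraction(9, 1):.0%}")   # 90%
print(f"mirroring (1+1): {usable_fraction(1, 1):.0%}")    # 50%
```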

A MeshInspect function in NVMesh v2.0 adds monitoring of capacity and performance at volume or client level, which can be tracked via a GUI.

Excelero’s NVMesh v2.0 is currently in beta test and will be available in early 2019. There’s more information here.

Veeam co-CEO Peter McKay off to ‘new endeavours’

Data protection powerhouse Veeam has lost its co-CEO Peter McKay, with co-CEO and co-founder Andrei Baranov taking sole ownership of the position.

McKay’s departure has prompted a reshuffle. Ratmir Timashev, Veeam’s other co-founder, is now executive vice president for worldwide sales and marketing and William Largent will run operations.  They will report to Baranov.

The company said the changes add extra focus and strength to help it continue its rapid expansion into enterprise and cloud, and accelerate growth across all markets.

McKay was in charge of Veeam’s sales, sales operations, marketing, finance and human resources. His main brief was to grow enterprise sales.

Baranov said: “I want to thank Peter McKay for his dedication and energy, and we all wish him the best in his new endeavours.” 

McKay’s LinkedIn entry flags his Veeam departure but it sounds like he is going nowhere just yet.  “As former Co-CEO of Veeam Software, I had the pleasure of leading a worldwide organization that epitomizes the modern company. Growing a business from $400 Million to $1 Billion was a great ride.”  

Veeam is not yet officially a $1bn company.

The entry finishes with this: “It is an exciting time for organizations of all sizes as change brings opportunity. I’m honored that I had the chance to lead Veeam and have an impact on the lives of customers, partners, and employees. If you would like to contact me, find me on Twitter at @PeterCMcKay:”

Swings and roundabouts

McKay joined Veeam in July 2016 from VMware, where he was SVP and GM for the Americas. He became President and a board member. At that time Timashev stepped down as CEO and EVP William Largent was promoted to the CEO slot.

Less than a year later McKay was promoted to co-CEO, replacing Largent who went up to the board to chair the finance and compensation committees.

Peter McKay, Veeam’s ex-Co-CEO and President.

At the time Timashev said: “Peter has a powerful track record as CEO of several successful startups as well as leadership positions in large established technology leaders like VMware and IBM, and has taken Veeam to the next level in the short 10 months since he joined last summer.” 

Veeam announced impressive third quarter results a few days ago, with its 41st consecutive quarter of double-digit growth and its $1bn annual revenue target in sight within a few quarters. The company said today that these executive changes will enable it to continue its growth trajectory to become the next billion-dollar software company.

Timashev said: “According to the most recent IDC Software Tracker for Data Replication & Protection 2018H1, Veeam is #4 in market share after Dell, IBM and Veritas, and ahead of Commvault; we are by far the fastest growing vendor with 24.7 per cent YOY growth, while others are declining.”

Burlywood gets flash with NVMe

Burlywood has added NVMe protocol support for its customisable SSD controller software. And it is easy to see why.

As Tod Earhart, founder and CEO of the flash controller start-up, said, in announcing the move:  “With NVMe poised to become the future of storage, we see adding support for that protocol to be an obvious step to take.”

Burlywood last month scored $10.6m in A-series venture funding to pursue its goal of giving hyperscale customers the means to build their own SSDs.

The company’s TrueFlash software supports NVMe drives with up to 100TB capacity. And it claims to be the first to introduce a fully programmable, tunable NVMe flash storage system designed specifically for the cloud data center. In this context the company is using “cloud” as a marketing term for service provider-type data centres.

Burlywood says its SSD controller software has integrated multi-stream quality of service (QoS) and supports data centre needs including NVMe, computational storage and AI.

Tod Earhart, Burlywood founder and CEO

NVMe for my SSD

Traditionally, SSDs are accessed using storage protocols such as SATA and SAS, which were developed to access disk drives. Typical speeds are 6Gbit/s for SATA and 12Gbit/s for SAS.

NVMe (Non-Volatile Memory express) uses the server’s PCIe bus and so operates much faster, delivering higher bandwidth and better queuing.

Each SATA or SAS controller can have one queue and a depth of 32 for SATA and 254 for SAS. An NVMe controller supports 64,000 queues, each with a depth of 64,000 entries. Such a controller can handle many more IO requests than SATA and SAS controllers without degrading performance.

NVMe bandwidth is around 1GB/s per PCIe gen 3 lane, and lanes can be grouped so that a 16-lane setup delivers about 16GB/s. PCIe gen 4 will likely double this. NVMe handles more IO requests and deals with them more quickly.
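The queueing and bandwidth figures above can be turned into back-of-envelope numbers (the ~1GB/s per gen 3 lane is approximate usable throughput, not the raw signalling rate):

```python
# Back-of-envelope numbers from the protocol comparison above.
sata_outstanding = 1 * 32           # one queue, depth 32
sas_outstanding = 1 * 254           # one queue, depth 254
nvme_outstanding = 64_000 * 64_000  # 64K queues x 64K entries each

print(f"NVMe can track {nvme_outstanding:,} commands "
      f"vs {sas_outstanding} for SAS and {sata_outstanding} for SATA")

lanes = 16
per_lane_gb_s = 1                   # ~1GB/s usable per PCIe gen 3 lane
print(f"x{lanes} link: ~{lanes * per_lane_gb_s}GB/s (PCIe gen 4 doubles this)")
```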

Samsung’s 970 PRO NVMe SSD delivers 3.5GB/sec sequential read and 2.7GB/sec sequential write bandwidth with its NVMe PCIe gen 3 x4 lane interface and M.2 form factor. No SATA or SAS SSD can match that.

Burlywood v. CNEX Labs

Burlywood is not the only startup that wants to help hyperscale vendors make their own SSD controllers.  CNEX Labs, for instance, is better established and has some big industry backers. Both say they can provide more efficient SSD operation but are taking different approaches.

CNEX is developing skinny NVMe controllers, with upper-level functions such as garbage collection carried out by controller software running in a host server or an FPGA managing a bunch of SSDs.

Burlywood’s controllers are SSD level and do not rely on offloading functionality to upper, system-level controllers.