
Diamanti blows AWS and Azure away on containerised SQL Server benchmark

Diamanti has published test results that show its containerised SQL Server performance is much faster and cheaper than running SQL Server in AWS or Azure container instances. It is also faster than using Portworx.

When SQL Server runs in containers on Diamanti’s software it outperforms the same software running in containers on AWS’s Elastic Kubernetes Service (EKS) and the Azure Kubernetes Service (AKS). It also beats Pure Storage’s Portworx Enterprise Storage Platform when that uses the Kubernetes CSI plug-in.

A Diamanti spokesperson boasted: “Diamanti smoked AWS and Azure in SQL Server performance testing.”

The testing used the TPC-H benchmark, which comprises a suite of business oriented ad-hoc queries and concurrent data modifications. The benchmark produces a composite query-per-hour performance metric (QphH@Size), with size being the database scale factor.
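For reference, QphH@Size combines a single-stream “power” run with a multi-stream “throughput” run. The arithmetic can be sketched as below (a simplified version that omits the spec’s timing-clamp rules for very fast queries):

```python
from math import prod, sqrt

def power_at_size(query_secs, refresh_secs, scale_factor):
    """Power@Size: 3600*SF over the geometric mean of the 22 query
    timings and 2 refresh-function timings (24 values in all)."""
    timings = list(query_secs) + list(refresh_secs)
    geo_mean = prod(timings) ** (1.0 / len(timings))
    return 3600.0 * scale_factor / geo_mean

def throughput_at_size(streams, elapsed_secs, scale_factor):
    """Throughput@Size: queries completed across all concurrent
    streams, normalised to an hourly rate and scaled by SF."""
    return streams * 22 * 3600.0 / elapsed_secs * scale_factor

def qphh(power, throughput):
    """The composite QphH@Size is the geometric mean of the two."""
    return sqrt(power * throughput)
```

With all 24 timings at ten seconds and SF=100, the power figure works out to 36,000; four streams finishing inside an hour give a throughput figure of 8,800.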

Diamanti has released two charts to illustrate performance. Figure 1, below, uses a SQL Server scale factor of 100:

The chart compares Diamanti to three different AWS instances and two Azure instances. AWS was consistently faster than Azure’s Ultra instance but much slower than Azure’s Premium instance. Diamanti was faster than both and also less expensive on the TPC-H three-year TCO measure.

Diamanti said it scaled up to more than 1 million IOPS, while AWS saturated at 200,000 IOPS, Azure at 192,000 IOPS, and Portworx at 29,000 IOPS. It runs Microsoft SQL Server twenty-five times (25x) faster than Portworx with the same hardware configuration.

A second chart uses a database scale factor of 300 and shows pretty much the same pattern of results, from Diamanti’s point of view.

Clumio simplifies ransomware protection with ‘virtual air gap’

Clumio this week launched RansomProtect which it claims is the industry’s first air-gapped ransomware protection for private and public clouds and SaaS applications.

According to the cloud backup company, organisations seeking a native air gap backup solution for their VMware Cloud or AWS EC2 workloads could ultimately pay more than twice what they currently pay without air gap backup protection, once local snapshots, remote snapshots and replication costs are factored in.

“Traditional approaches to ransomware protection are way too complex, requiring additional hardware on-premises or the need to replicate snapshots in the public cloud. The end result is weeks or months of implementation, high costs, and heavy management,” Chadd Kenney, chief technologist for Clumio, said.

Complementing an existing backup setup with RansomProtect can save customers more than 20 per cent, while completely replacing a native backup system with Clumio to achieve an air-gapped backup can boost cost savings by as much as 67 per cent.

RansomProtect has 30-day retention for VMware and VMware Cloud on AWS, AWS EC2, RDS and EBS, and Microsoft M365 data. M365 is essentially Office 365 plus extra software such as Skype for Business, SharePoint, OneDrive, Teams, Yammer, and Planner.

RansomProtect can be set up in 15 minutes or less and its features include: 

  • Immutable storage
  • End-to-end encryption
  • Multi-factor authentication
  • A “bring your own key” option

Clumio says RansomProtect also meets ISO 27001, PCI, AICPA SOC, and HIPAA certification and compliance designations. 

Air gap or not?

We think Clumio is bending the meaning of “air gap”. RansomProtect does not actually include a physical air gap between the computer systems storing the backup data and the systems that sent the backup data to them; there is a network link in place. Proponents of similar public-cloud-based backup systems instead rely on setting a retention period during which files cannot be altered or deleted.

Thus RansomProtect supports a 30-day retention period within which, it says, the files are immutable, and that, it argues, is the equivalent of an air gap.

But as we wrote earlier this week: “Ransomware producers can mount phishing expeditions against backup administrators and steal their credentials. They can then log in and reset backup immutability retention periods from months or weeks to hours. Next, they run a backup to its conclusion, and then delete it after the now-minimal retention period.”
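The difference between a governance-style lock, which an administrator can reset, and a compliance-style lock, which can only ever be extended, is what this attack turns on. The toy class below models it; it is purely illustrative and is not Clumio’s implementation:

```python
import datetime as dt

class BackupCopy:
    """Toy model of a backup copy with a retention lock. In
    'compliance' mode the retention window can be extended but never
    shortened, which defeats the credential-theft attack described
    above; in 'governance' mode an admin can shorten it."""

    def __init__(self, created, retention_days, mode="compliance"):
        self.created = created
        self.retain_until = created + dt.timedelta(days=retention_days)
        self.mode = mode

    def set_retention(self, new_until):
        if self.mode == "compliance" and new_until < self.retain_until:
            raise PermissionError("compliance lock: cannot shorten retention")
        self.retain_until = new_until

    def delete(self, now):
        if now < self.retain_until:
            raise PermissionError("backup is immutable until retention expires")
        return True
```

A stolen admin credential gets you nowhere with the compliance-mode object: the shorten call raises an error and early deletion fails until the original window has elapsed.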

A Clumio spokesperson told us: “I think the Clumio team can live with virtual air gap.”

Clumio RansomProtect, with its virtual air gap, is available today.

VMware cements HCI lead – and maybe at Nutanix’s expense

Once again, VMware and Nutanix dominate the IDC quarterly revenue tracker for HCI systems by software owner. But their numbers diverge in the 2020 third quarter tallies, with VMware gaining revenues and share in a flat market and Nutanix falling on both counts.

This is the third quarter in a row that Nutanix HCI revenues have fallen in the IDC tracker but the effect of the company’s ongoing transition to a subscription business is unclear. We are sure that newly-installed Nutanix CEO Rajiv Ramaswami, formerly a senior exec at arch-rival VMware, has a better idea of what is going on.

As the IDC table above shows, HCI systems installed with VMware software pulled in $821.1m in Q3, up 4.9 per cent y/y and taking a 40.2 per cent share (Q3 2019: 38.6 per cent). Nutanix-installed systems generated $512.6m for a 25.1 per cent share, down two per cent year on year.

Cisco HyperFlex HCI-installed systems generated $121.1m in revenues, up 11.1 per cent y/y, and market share climbed from 5.1 to 5.4 per cent. Could Cisco be a dark horse in the HCI software race? Let’s see how the next few quarters pan out before making that judgement.

When revenue is determined by HCI software owner, HPE is an also-ran, with SimpliVity and Nimble dHCI software sales consigned to IDC’s “others” bucket. It’s a different story when the revenue cake is sliced by HCI-branded systems. This gives a market view of the performance of companies that OEM VMware and Nutanix HCI software, as well as Nutanix’s branded sales.

The table above shows that HPE has had an outstanding year on that score.

Dell led the pack in Q3 with $678.6m revenues, 4.7 per cent down on a year ago, and taking 33.2 per cent share. HPE was second-placed in revenue at $262.7m, up 16.3 per cent, taking 12.9 per cent share. Nutanix with $234.3m revenues, a 10.8 per cent decline Y/Y, had an 11.5 per cent share. The rest of the market grew 4.5 per cent to $867.3m.

The divergence of convergence

IDC’s numbers are a subset of the analyst firm’s quarterly tracker for the converged systems market. This is a three-sector converged systems market according to IDC’s number crunchers, with hyperconverged systems like VxRail, certified reference systems such as FlexPod, and integrated platforms such as Oracle’s Exadata.

  • Certified reference systems & integrated infrastructure grew revenues 2.5 per cent Y/Y to $1.4bn, 36.7 per cent of the market.
  • Integrated platforms revenues declined 7.7 per cent Y/Y to $438m; 11.2 per cent of the market.
  • Hyperconverged systems revenues grew 0.6 per cent Y/Y to $2bn; 52.1 per cent of the market.

Adding these numbers to our records enabled us to draw a history chart showing long-term patterns, with HCI rising, certified reference systems flattish and integrated platforms declining:

The chart below shows there is no sign yet of HCI revenues overtaking external storage revenues – predicted by many – any time soon.

IDC, for this quarter, has updated its methodology by moving VMware vSAN ReadyNode solutions out of the “Other Suppliers” bucket to the branded server vendor for HCI systems. There is no change to the overall size of the HCI market, but you’ll see new HCI systems shares for some vendors (HPE and Lenovo) because of this new methodology.

Rubrik picks up Igneous pieces, gains Petabyte scale

Rubrik has bought “key technology and IP assets of Igneous”, the Seattle-based data management startup which hit the buffers this year. Terms are undisclosed.

Dan Rogers

Dan Rogers, Rubrik President, GTM Operations, said in a blog post announcing the deal that the company “welcome(s) the existing Igneous customers and their innovative team into the Rubrik family.”

Igneous has developed a UDMaaS (unstructured data management-as-a-service) that provides a petabyte-scale unstructured data backup, archive and storage system with a public cloud backend. At the end of 2019, the company said it had 40-60 customers, mostly large enterprises.

Rubrik thinks there is synergy between the Igneous and Rubrik approaches to managing NAS data and will integrate Igneous technology to build a unified NAS data management system.

Vinod Marur, SVP of Engineering at Rubrik, said: “Combining the innovative technology that Igneous has built with the power and breadth of Rubrik’s platform will bring the best in class solution for unstructured data management to our customers.” 

He said customers will be able to “realise the value of their unstructured data while reducing risk and optimising IT resource utilisation.” 

Jeff Hughes, former CTO of Igneous, said: “The combination of Igneous’s unique approach to unstructured data coupled with Rubrik’s extensive data management capabilities and their reach in the market is exactly what leading enterprises need today.”

Igneous UDMaaS 

Igneous has developed petabyte-scale unstructured files data management software to identify, classify and migrate billions and perhaps trillions of files. The company’s UDMaaS handles file and object data, and provides data protection, movement and discovery. Cloud backends for tiering off data include the big three: AWS, Azure and GCP.

Rubrik says Igneous is capable of scanning hundreds of millions of NAS files, indexing the metadata, and transferring data protection copies to the cloud with an incremental-forever architecture. The technology also offers a simple-to-use web interface for managing this data, with the capability for deeper API-driven integration.

Igneous’s very bad year

Igneous was founded in 2013 in Seattle and has taken in $67.5m in funding, including a $25m C-round in March 2019.

The first public signs of trouble appeared in July this year when Igneous co-founder Kiran Bhageshpur gave up his CEO position while retaining a seat on the board. His replacement was Dean Darwin, a data storage industry veteran who had joined the board in March 2019 after a career that included stints at F5 Networks and Palo Alto Networks.

At the time Darwin told us: “There’s no change in company strategy [but] we want to tell the story a little better than we had in the past.”

Christian Smith, VP Products, left the company around then for a business development role at AWS.

The second sign became visible in November, when the company laid off an unspecified number of its LinkedIn-counted 69 staff, citing a “difficult economic environment”. A rumour of bankruptcy was mentioned to us by a prominent storage supplier competing with Igneous. At the time of writing, LinkedIn records 39 employees.

IBM and Fujifilm demo 580TB tape. Yes it’s a record

IBM and Fujifilm have demonstrated a 580TB capacity tape – 32 times greater than current LTO-9 technology.

IBM and Fujifilm demonstrated a 220TB (raw) tape with 123 Gbits/sq in areal density in April 2015, using barium ferrite (BaFe) technology. In December 2017, IBM and Sony achieved a 330TB tape, using sputtered media with an areal density of 201Gbit/sq in.

Now IBM has gone further, resuming a partnership with Fujifilm, and stretching the state of the tape art out to 317Gbit/sq in and 580TB capacity.

Dr. Mark Lantz

“Hybrid clouds will rely on magnetic tape for decades to come,” Dr. Mark Lantz, IBM research manager for cloud FPGA and tape technologies, said in a press briefing. Disk drive capacity growth has stagnated and so only tape can keep up with the rise in unstructured data archival needs, he said.

IBM and Fujifilm’s achievement is close to the Information Storage Industry Consortium (INSIC) trendline for tape areal density growth. Tape has a steeper line, at 34 per cent a year, than disk drive, which has a forecast areal density growth of 7.6 per cent a year.
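Compound growth at those rates diverges quickly. A quick sketch of the arithmetic, assuming a nominal current HDD areal density of around 1,100 Gbit/sq in (that figure is our assumption, for illustration only):

```python
from math import ceil, log

def years_until_catchup(d_chaser, r_chaser, d_leader, r_leader):
    """Whole years until the faster-growing trajectory catches the
    leader, solving d_chaser*(1+r_chaser)^n >= d_leader*(1+r_leader)^n."""
    return ceil(log(d_leader / d_chaser) / log((1 + r_chaser) / (1 + r_leader)))

# Tape demo density 317 Gbit/sq in growing 34%/yr vs an assumed
# 1,100 Gbit/sq in HDD density growing 7.6%/yr.
years = years_until_catchup(317, 0.34, 1100, 0.076)
```

On those assumptions, tape areal density would catch disk in around six years, which is the point the INSIC trendlines are making.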

IBM slide.

The new areal density level was achieved with strontium ferrite media and narrower tracks: 56.2nm wide compared to 103nm wide tracks in the 201Gbit/sq in demo. This required new servo technology to enable the read-write heads to locate and follow the narrower tracks as the tape moved across the heads.

The 580TB capacity also required a 1,255m-long tape – the tape length for the 330TB demo was 1,098m.

For comparison, an LTO-9 tape stores 18TB of raw data. The LTO roadmap extends out to LTO-12 and 144TB of capacity. Capacities double as LTO-9 moves to the LTO-10 generation (36TB) and then to LTO-11 (72TB).

On that basis, a possible LTO-13 generation would be a 288TB tape, well within the possibilities of the 580TB tape demo. LTO-14, with a posited 576TB capacity, is just inside the new limit set by IBM and Fujifilm.
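The roadmap arithmetic is a straight doubling per generation from LTO-9’s 18TB raw:

```python
def lto_capacity(generation, base_gen=9, base_tb=18):
    """Raw capacity in TB, assuming the LTO roadmap's doubling per
    generation, anchored at LTO-9's 18TB raw figure."""
    return base_tb * 2 ** (generation - base_gen)

# LTO-10 through a posited LTO-14: 36, 72, 144, 288, 576 TB.
roadmap = {gen: lto_capacity(gen) for gen in range(10, 15)}
```

A posited LTO-14 at 576TB sits just inside the 580TB ceiling the demo establishes.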

This will make tape-drive, library, and archival software vendors happy.

Intel launches three new Optane drives. One is world’s fastest data centre SSD

Intel has launched three new Optane drives, all faster than their predecessors, including one it dubs “the world’s fastest data centre SSD”.

Speaking at the Intel Memory and Storage Moment event yesterday, David Tuhy, GM of Intel’s Optane storage division, called the new P5800X enterprise SSD “a stunning, stunning product”.

Intel produces Optane products for client SSDs, data centre SSDs and data centre memory, and its three launches, the H20, P5800X and Persistent Memory 200 Series (PMem 200), slot respectively into each product category.

All three new drives use gen 2 3D XPoint technology, which incorporates a new controller and four decks of cells, compared with two decks in gen 1, to achieve significant latency and performance gains over their predecessors.

We have positioned them in an Optane product table, along with rumoured future products that we will discuss below. 

H20

The H20 is a single-side M.2 gumstick format drive for thin and light notebooks, and follows on from the current H10. It pairs a 32GB slug of gen 2 3D Optane with a much larger lump of QLC 3D NAND, 144-layer in this case. Intel’s RST technology puts fast access data in the Optane cache and other data in the NAND.

The H20 will be supported by the upcoming 11th-gen Intel Core U-series processor and 500 Series chipset. It comes in 512GB and 1TB capacities, uses the PCIe 3.0 bus, and should ship in the second 2021 quarter. Intel did not reveal performance details but said faster PCIe Gen 4 bandwidth is not needed for the use cases it is pursuing. The H10 comes in 256GB, 512GB and 1TB capacities, which means the 256GB variant is being dropped.

P5800X

The U.2 format P5800X drive, formerly known as Alder Stream, is a successor to the P4800X. Intel has disclosed a ton of performance data for the “world’s fastest data centre SSD”. The drive uses both gen 2 3DXP and the PCIe 4.0 bus to handle up to 1.5 million random read or write IOPS and up to 7.2GB/sec sequential read and 6.2GB/sec sequential write bandwidth. It also lives longer than the P4800X.

For comparison, the P4800X drive delivers 550,000/500,000 random read/write IOPS and up to 2.4/2.0 GB/sec sequential read/write bandwidth. The P4800X’s endurance of 60 drive writes per day (DWPD) for five years is 60 per cent of the P5800X’s. Coincidentally, the P5800X’s latency of <6μs is 60 per cent that of the P4800X.
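Those two 60 per cent relationships can be inverted to recover the figures Intel implies for each drive:

```python
# The article states the P4800X's 60 DWPD is 60% of the P5800X's
# endurance, and the P5800X's <6us latency is 60% of the P4800X's.
p4800x_dwpd = 60                                      # stated
p5800x_dwpd = round(p4800x_dwpd / 0.60)               # implied: 100 DWPD
p5800x_latency_us = 6                                 # stated (<6us)
p4800x_latency_us = round(p5800x_latency_us / 0.60)   # implied: 10us
```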

P5800X capacity points are 400GB, 800GB, 1.6TB and 3.2TB. For reference, the P4800X comes in 375GB, 750GB, and 1.5TB capacities. It should ship in the first 2021 quarter.

Persistent Memory 200 Series

The Optane PMem 200, revealed in June and known as Barlow Pass, is 3DXP delivered in a DIMM form factor and connects to a CPU via the memory bus, not the PCIe bus. Latency is in the 100ns to 340ns area, up to 60 times faster than the P5800X’s sub-6μs latency (1μs equals 1,000ns).

PMem 200 latency.

The PMem 200 delivers up to 25 per cent more memory bandwidth than the current PMem 100. Application performance is improved by a facility called eADR (enhanced Asynchronous DRAM Refresh), which automates flushing of the cache in the event of a power failure. That means there is no waiting for a flush to complete, as with the current product. Applications with mapped data in PMem will benefit from this.

There is the same 6TB total memory per CPU socket as the PMem 100, and the PMem 200 comes with the same Memory Mode and App Direct Mode access methods as the PMem 100. It should ship next quarter.

Future Optanes

Intel said at yesterday’s event that it is planning two more 3DXP generations but did not discuss details. However, a leaked vendor presentation currently circulating on the internet reveals Intel is developing a PMem product known as Crow Pass. This will support PCIe 5.0 and will require the upcoming Sapphire Rapids Xeon CPUs.

Intel Optane memory future slide.

After that a fourth generation, code-named Donahue Pass, will be paired with Intel’s Granite Rapids CPU. We have no further details, apart from suspecting that gen 3 may use a doubled deck count, with gen 4 doubling the count again.

Intel says SSDs will reach HDD TCO crossover in 2022, pushes PLC and new drives

Intel says SSDs will reach total cost of ownership crossover with hard disk drives in 2022. It’s also launching new SSDs, and sees PLC – that’s five bits per cell – NAND being used in its future SSDs.

Intel’s Rob Crooke, general manager of its Non-volatile Storage Group, said during a Memory and Storage Moment briefing today that 3D multi-layer NAND developments would enable a total cost of ownership crossover between SSDs and hard drives in about a year’s time.

That will enable SSDs to replace hard drives in more of the storage drive market.

A major reason for this, we’re told, is that penta-level cell – aka PLC or five-bits-per-cell – NAND will enable a lower $/TB and higher capacities for SSD. Crooke said Intel has solid plans to move to PLC in the future, and: “We’re on the right path to replace HDDs.” 

Such SSDs will enable 1PB of capacity in 1U machines using the E1.L “ruler” form-factor SSD.

Crooke also introduced three new Intel 3D NAND SSDs:

There is an SSD 670p for client systems, using QLC (four-bits-per-cell) NAND, and two data centre-grade drives: the TLC (three-bits-per-cell) D7-5510 and the QLC 144-layer D5-P5316. The D7-5510 comes in 3.84 and 7.68TB capacities in its U.2 (2.5-inch) form factor.

The D5-P5316 comes in 15.36 and 30.72TB capacities in U.2 and the E1.L format. It has a PCIe Gen 4.0 interface, and apparently goes faster than Intel’s previous generation SSDs. This drive enables 1PB in 1U, and a 40PB rack.
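The 1PB-in-1U figure is easy to sanity-check, assuming the commonly quoted 32 E1.L ruler slots per 1U chassis (the slot count is our assumption; it varies by server design):

```python
# Rough check of Intel's "1PB in 1U" and "40PB rack" claims.
slots_per_1u = 32                     # assumed E1.L slots per 1U
drive_tb = 30.72                      # top D5-P5316 capacity
tb_per_1u = slots_per_1u * drive_tb   # 983.04 TB, i.e. ~1PB
rack_pb = 40 * tb_per_1u / 1000       # 40 such 1U servers: ~39.3PB
```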

The Intel claim that HDD-SSD TCO cost crossover will happen as early as 2022 is unexpected and dramatic. If other NAND fabs and SSD manufacturers are of a similar mind, the HDD world’s core market – the high-capacity nearline drive – could be facing a long and slow decline.

Crooke, by the way, will run the Intel NAND foundry and SSD business while it is being transferred to SK hynix, and will then join the South Korean giant.

Asigra brings better ransomware protection to Office 365 backup

Fingerprint samples

Asigra has added ransomware detection and quarantine to its Office 365 backup product. 

Cloud Backup with Deep MFA integrates with O365 and scans all files in real-time with signature-less malware and ransomware detection engines, isolating malicious code and alerting administrators of infection. The software also protects against immutability subversion attacks using step-up or Deep Multi-Factor Authentication as users access sensitive application controls. This prevents threats from penetrating backup and replication streams.

Eran Farajun.

Asigra EVP Eran Farajun said in a press announcement: “For cloud and SaaS apps like MS Office 365, the customer’s backup is the last line of defence in cases where an attack has occurred. Only a sophisticated anti-ransomware suite is capable of identifying and quarantining malicious ransomware code while preventing infiltration into backup controls to ensure data is well-defended.”

The Asigra software enables users to schedule point-in-time backup copies of mailboxes and data residing in Office 365 Exchange Online, Office 365 Groups, SharePoint Online, and OneDrive for Business. Admins can determine backup frequency, retention duration and restoration granularity.

Ransomware producers can mount phishing expeditions against backup administrators and steal their credentials. They can then log in and reset backup immutability retention periods from months or weeks to hours. Next, they run a backup to its conclusion, and then delete it after the now-minimal retention period.


That deletion is then followed by a ransomware attack, and the victim finds there is no recent backup to use for recovery.

Deep MFA goes beyond username and password credentials by using fingerprint or facial recognition on smartphones. This personal identity check needs to be passed before any backup task is run. Malware actors can no longer log in as before and are prevented from corrupting policy settings or deleting backup data. Sounds like a good – and overdue – idea.

Half the performance for half the price: NetApp’s new cut down StorageGRID load balancer

NetApp has launched a low-end StorageGRID appliance called the SG100 to cater for small deployments. The new system is a cut-down version of the load-balancing SG1000 and it delivers roughly half the performance and scale at half the price of its big sister.

There are six systems in the StorageGRID object storage line-up: the fast all-flash SGF6024; three higher-capacity all-disk systems, the SG6060, SG5760 and SG5712; and the SG1000 load-balancing gateway with no storage. The new SG100 is a combined gateway node (with load balancer) and admin node appliance for SMB customers.

Steven Pruchniewski, NetApp tech marketing manager, said in a blog post that the “new StorageGRID appliance unlocks the full power of on-prem object storage for smaller environments”. He claimed the SG100 “offers the simplest, more [sic] performant load balancing technology on the market in a cost-effective, purpose-built appliance designed for customers with small- to mid-size object storage needs.”

Our table showing the basic StorageGRID products positions the SG100 at the far right, low-end position:

Here are NetApp comparisons of the SG1000 and SG100:

There is a mistake in the lower image, which should read 40 cores @ 2.1GHz for the SG1000.

Load balancing

Object storage load balancing software distributes the IO traffic across the available object storage nodes so as to avoid any one node getting overloaded and slowing down. NetApp’s software creates pools of storage nodes, adds and removes nodes on the fly, and establishes health checks.
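The basic behaviour described above (pooling nodes, running health checks, spreading IO) can be modelled in a few lines of Python; this is illustrative only and is not NetApp’s implementation:

```python
import itertools

class StoragePool:
    """Minimal round-robin load balancer over object storage nodes,
    with health checks and on-the-fly node addition."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.healthy = {n: True for n in self.nodes}
        self._rr = itertools.cycle(self.nodes)

    def mark(self, node, healthy):
        """Record the result of a periodic health check."""
        self.healthy[node] = healthy

    def add_node(self, node):
        """Grow the pool without interrupting service."""
        self.nodes.append(node)
        self.healthy[node] = True
        self._rr = itertools.cycle(self.nodes)

    def route(self):
        """Return the next healthy node for an incoming request,
        skipping any node a health check has marked down."""
        for _ in range(len(self.nodes)):
            node = next(self._rr)
            if self.healthy[node]:
                return node
        raise RuntimeError("no healthy storage nodes")
```

Requests rotate across healthy nodes, so no single node absorbs all the IO, and a failed health check quietly removes a node from rotation until it recovers.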

SG1000 innards.

The optional management software provides an admin and storage tenant portal UI and API endpoint. It can also collect metrics from all nodes for monitoring and alerting.

According to NetApp, other object storage suppliers require customers to obtain a third-party load-balancer, whereas it has its own specifically-designed and integrated software and hardware.

Pricing is as follows: SG1000 = $99,000, SG100 = $49,900. The SG100 is available on Dec. 21, 2020.

Quantum gains media asset management chops with tuck-in acquisition

Quantum has acquired Square Box Systems, a small UK software developer of media asset management software. Terms are undisclosed.

The Warwickshire company is “a growing, profitable software business unit with strong gross margins that is in the late stages of transitioning to a cloud-based SaaS business,” Quantum CEO Jamie Lerner said today in the press announcement about the acquisition.

Square Box’s main product – CatDV – provides media management and workflow automation, and uses AI and machine learning techniques to catalogue and analyse video, images, audio files, and PDF digital assets. It enables search across local and cloud repositories, and provides access control across the data lifecycle. 

Square Box has over 1,500 commercial software deployments and tens of thousands of individual users worldwide, according to Quantum, which says many customers use CatDV with Quantum’s StorNext media file management.

Quantum intends to combine CatDV with StorNext in an all-in-one workgroup appliance and better serve the needs of smaller workgroups. These could be in corporate video, education, and houses of worship. 

CatDV screen.

CatDV customers are to be found in post-production, corporate video, sports, government and education markets. Quantum says the software has potential to branch out to other use cases dealing with large volumes of unstructured data – such as genomics research, autonomous vehicle design and geospatial exploration. 

Quantum said it will continue to maintain multi-vendor support for CatDV.

Rolf Howarth

Rolf Howarth, founder and CTO at Square Box Systems, now Principal Architect at Quantum, said in his statement:

“As CatDV grows and becomes a bigger player across the industry, there’s more we want to do, building on CatDV’s success and taking it to a new level. I am very excited at the prospect of working with Quantum, taking CatDV into new markets and solving new business problems, at the same time as continuing to work with our existing customers and partners.”

Square Box CEO Dave Clack is also joining Quantum as GM for Cloud Software and Analytics.

Lustre and S3 join Atempo’s Miria data moving party

Atempo has added support for Lustre and S3 in a major update for Miria, its enterprise data backup, archive and migration suite.

Update, 16 Dec 2020: Datadobi does work with Lustre; comment added below.

Using parallel data movers, Miria 2020 can migrate petabytes to exabytes of unstructured data between NAS systems and on to disk file or object stores, tape and the public cloud.
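The parallel data mover pattern looks roughly like this (a minimal sketch using a thread pool; not Atempo’s implementation):

```python
from concurrent.futures import ThreadPoolExecutor
import os
import shutil

def migrate(files, dest_dir, workers=8):
    """Copy a batch of files to dest_dir in parallel. Many small
    files benefit most, since per-file latency overlaps across the
    worker threads."""
    os.makedirs(dest_dir, exist_ok=True)

    def mover(path):
        target = os.path.join(dest_dir, os.path.basename(path))
        shutil.copy2(path, target)   # copy2 preserves timestamps
        return target

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(mover, files))
```

A production mover adds checksumming, retries and an incremental scan on top, but the fan-out across workers is the core of the speed-up.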

Ferhat Kaddour, VP of Alliances, told a press briefing earlier this month: “The smallest migration we did this year was 80TB of 1KB files. The largest migration was 13PB for a Fortune 500 customer.”

Atempo data movement scope.

The updated Miria features:

  • FastScan for Lustre catches and consolidates incremental file system events during a migration without doing repeated full file tree scans, so saving time and resources. The same facility is used for GPFS, NetApp ONTAP and Isilon OneFS. It said competitor Datadobi does not support Lustre. [See comment below.]
  • S3-based public cloud and object storage can now be used as a migration, backup or archive source as well as a destination.
  • A new Miria for Archiving user interface.
  • Support for Arm-powered systems.

A Datadobi spokesperson said: “DobiMigrate is capable of migrating data from or to any POSIX compliant filesystem, including Lustre. This can be done either via the NFS v3 or v4 protocol or by accessing the mounted filesystem directly using the Linux filesystem APIs.”

Atempo supports the AWS public cloud today and will add support for Azure, GCP and Swift in Q1 2021. It will also add Miria support for the Nutanix AHV hypervisor in 2021, plus Huawei FusionCompute and KVM. The company is watching the containerisation space but says that, to date, none of its customers require container support.

2021 roadmap

Three new Miria products will be delivered in 2021: Miria for Analytics; Miria for Backup Admin; and Miria for Archiving Admin. Other ambitions include a disaster recovery product and a hierarchical storage management (HSM) product for tiering between primary and secondary storage with transparent user access to moved data. This will not use stubbing technology.

LINA, the Live Navigator workstation backup product, will add support for cloud service providers, adding backup-as-a-service with multi-tenancy next year. Atempo also aims to support cloud SaaS applications such as Salesforce.

About Atempo

Founded in France in 1992, Atempo has raised $36m in venture funding, with the last round completed in 2007. The company says it has about 200 staff and more than 2,000 customers with maintenance contracts. DDN, Nutanix, Huawei and Qumulo use Miria software. We assume it is profitable.

Case study: The UK server and storage company at the heart of CERN, UK universities, and more

The Atlas detector at CERN’s Large Hadron Collider

Sponsored It’s one thing hearing a vendor talking about a new technology. The really interesting part comes when someone else puts that new technology into real systems, at real customers, and starts running real-world applications.

One company that has been doing just that with Intel® Optane™ Technology is BroadBerry Data Systems, a UK-based company that is part of Intel’s very select group of Platinum Partners, giving it privileged access to the chip giant’s labs and engineers.

BroadBerry’s story began in 1989, when its founders began selling custom made PCs and servers in the UK. Unlike many of the home-grown vendors who sprung up back then, BroadBerry managed to navigate its way through the 1990s dotcom bubble and subsequent crash, as it began focusing on industrial and rack mountable server and storage systems and offering customers easy and extensive configuration.

Over 30 years on, the company has expanded across Europe and the US, building a customer list that would turn much bigger rivals slightly green, including England’s top ten universities, numerous research organisations including NASA and CERN, and commercial clients including Google and Amazon.

One of BroadBerry’s advantages, says marketing manager Graham Hemson, is that as an independent, it can put its customers’ needs first and take its pick of industry leading components or software, for example Supermicro or Intel motherboards, or the Open-E storage software stack, without being locked down to particular suppliers or technologies. At the same time, it is very close to its customers – it has been working with CERN since 2016 for example – and is able to build them a complete solution, including networking and storage, precisely tuning it to the client’s needs and configuring it on site.

This approach also means it can move very quickly when it comes to validating and integrating new technology and components, something that can take months or more in larger, less nimble companies. This has meant it was particularly quick off the mark integrating Intel Optane into its server and storage solutions and putting the technology to work on some very demanding applications at some very interesting customers.

Intel Optane is persistent memory, based on 3D XPoint™ media technology. It offers non-volatile, high-capacity storage with low latency at near-DRAM performance, and is both bit and byte addressable. This means it can be used in SSDs that sit on the NVMe bus and act as a replacement or supplement for conventional SSDs, or on DIMMs, offering the potential of a vast memory pool at a much lower cost than conventional DRAM.

More RAM or more storage? Yes, both

BroadBerry’s clients are running demanding applications ranging from commercial big data, to genome sequencing, to geophysical analysis or, in the case of CERN, looking into the very nature of matter itself. This usually means they need lots of DRAM and lots of very fast storage, so the appeal of Optane as both storage and DRAM should be fairly clear.

“If you’re looking at high capacity products, starting from, let’s say 500TB, then obviously the cost of an all-flash solution can go up quite high,” says Matthew Dytkowski, BroadBerry’s technical pre-sales manager, with remarkable understatement. Even as IT budgets come under pressure, he continues, “Everyone’s looking to see how they can … still have a decent amount of capacity whilst retaining performance and this is where Optane is coming into play.”

Dytkowski says one immediately obvious use case for Optane storage is as a write-cache device in systems running databases and other randomized workloads. This allows customers to use a tiered hybrid setup, pairing Optane storage with conventional flash drives or with cheaper but higher-capacity hard disks. This delivers good results, he says, “for a fraction of the price of all flash arrays.”
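The write-back pattern Dytkowski describes can be sketched in a few lines: writes land on the fast tier and are acknowledged immediately, then dirty data is destaged to the capacity tier in batches. This is a toy model for illustration only (the class name, tier sizes and dict-backed tiers are all assumptions, not BroadBerry’s implementation):

```python
class WriteBackCache:
    """Toy model of a fast write cache in front of a slower capacity tier."""

    def __init__(self, capacity_tier, cache_limit=4):
        self.capacity_tier = capacity_tier  # dict standing in for the HDD/flash tier
        self.cache = {}                     # dict standing in for the Optane tier
        self.cache_limit = cache_limit

    def write(self, key, value):
        # Writes hit the fast tier first and are acknowledged immediately.
        self.cache[key] = value
        if len(self.cache) >= self.cache_limit:
            self.flush()

    def flush(self):
        # Dirty data is destaged to the capacity tier in one batch.
        self.capacity_tier.update(self.cache)
        self.cache.clear()

    def read(self, key):
        # Reads check the fast tier before falling back to capacity.
        return self.cache.get(key, self.capacity_tier.get(key))


# Example: two writes trigger a destage once the cache limit is reached.
capacity = {}
tiered = WriteBackCache(capacity, cache_limit=2)
tiered.write("row:1", "a")
tiered.write("row:2", "b")  # reaches the limit, flushes to capacity
```

The random writes a database generates are absorbed by the low-latency tier, while the capacity tier sees larger, cheaper sequential destages, which is why the hybrid setup can approach all-flash performance at a lower price.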

This was a surprise to some customers, he adds. Whilst they understood the concept of NVMe storage, they didn’t know about Optane. But once BroadBerry explained the technology, and its advantages over conventional flash drives in terms of endurance, he says, “it changed the game”.

BroadBerry has also been able to improve the performance of customers’ existing installations of its kit by adding Optane – something that might not be an option with other vendors’ hardware, due to warranty and certification issues.

Life enhancing, even for older drives

Customers are also increasingly looking at using Optane as a write cache device in VMware vSAN configurations, or with Microsoft Storage Spaces Direct, says Dytkowski. Using Optane in these applications reduces latency to microseconds, he says, “and latency is king.”

By using Optane as dedicated write cache, alongside other cheaper NVMe drives with lower endurance, the life of the latter is extended, he adds. “And so again, by bringing in Optane, we are achieving great performance, great results. And with a much better price point.”

This year has also seen customers get increasingly serious about virtual desktop infrastructure, says Dytkowski, as tech departments work out how to respond to the shift to home working caused by the Covid-19 pandemic, while trying to contain data within the data centre and ensure security. Some customers even want to virtualize the workloads they would normally run on “heavyweight workstations”, he says.

This shift has played to the advantages of using Optane as NVRAM, says Dytkowski. “That allows customers to get more users or more virtual desktops on the one server, rather than buying multiple platforms.” At the same time, 2020 has highlighted the importance of the biotech space – and the technical issues researchers face when running massive workloads. Gene processing, for example, eats up DRAM, which is very expensive. And many organisations in this field, being public sector or grant-financed, have to be extremely careful with their budgets. Typically, clients are running 1TB of RAM per machine, so using Optane NVDIMMs and standard RAM in a 7:3 ratio means a drastic cut in TCO.
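The arithmetic behind that TCO cut is straightforward. The per-GiB prices below are purely illustrative assumptions (neither BroadBerry nor Intel quotes figures in this piece); only the 1TB pool size and 7:3 split come from the text:

```python
# Illustrative only: the $/GiB figures are hypothetical assumptions.
total_gib = 1024          # roughly 1TB of memory per machine
dram_per_gib = 10.0       # assumed DRAM price per GiB
optane_per_gib = 4.0      # assumed Optane PMem price per GiB

# Baseline: fill the whole pool with DRAM.
all_dram_cost = total_gib * dram_per_gib

# Mixed configuration: 7:3 Optane:DRAM split of the same pool.
optane_gib = total_gib * 7 / 10
dram_gib = total_gib * 3 / 10
mixed_cost = optane_gib * optane_per_gib + dram_gib * dram_per_gib

# Fraction saved versus all-DRAM; about 42% under these assumed prices.
saving = 1 - mixed_cost / all_dram_cost
```

The saving scales with the price gap between the two media, which is why the approach matters most to budget-constrained, memory-hungry workloads like gene processing.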

In the case of one research institute looking to acquire a CPU- and RAM-intensive HPC cluster for genome work, says Dytkowski, introducing Optane memory allowed BroadBerry to spec additional hardware for the same budget: “They were very pleased, because adding an additional machine to the entire cluster can shorten the research period.”

One important point, says Dytkowski, is to work with the customer to ensure the system is optimized for their given workload, so they get the best out of Optane and the server or storage system in general. BroadBerry does extensive stress tests on its machines, targeted at whatever workload the customer has specified.

But the customer may have to do some reworking on their older applications to take full advantage of the extra horsepower Optane potentially gives them, he adds. For example, older SQL queries or single threaded applications might not see the full benefit, he explains, “but when you start to optimize, you unleash all the performance.”

And right now, unleashing all the performance you can get from storage is essential. As Dytkowski notes, “data demand is just crazy these days” and shows no sign of slowing down. At the same time, the sort of applications BroadBerry’s customers are running are going to demand ever more RAM. But BroadBerry’s experience at the sharp end shows that a combination of a little optimization and a lot of Optane is closing the price-performance gap.

Sponsored by Intel