We learnt of this via the cancellation of an archived joint Pure and AWS webinar about ObjectEngine, originally run on May 29, 2019. The webinar provider, ActualTech Media, changed the promo on its website: “PURE ASKED US TO REMOVE THIS – product discontinued-How It Works: Pure Storage ObjectEngine and AWS.”
Ask not for whom the bells toll
The webinar blurb said: “The Pure Storage ObjectEngine platform combines on-premises flash with the AWS cloud to modernise data protection for data-centric enterprises.” During the webinar attendees could find out “how this solution might finally be the real death knell for tape.”
Presumptuous. The bells tolled for ObjectEngine instead. ObjectEngine is not listed on Pure’s product webpage.
We asked Pure if it had canned ObjectEngine. We were sent this statement:
“With the knowledge that backup is a favored use case for hybrid and private cloud deployments and enabling backup and restore is a key focus area for Pure, we continue to help our customers improve the efficiency of their data protection workflows and better connect them to the cloud.
“Today, we are working with select data protection partners, which we see as a more cohesive path to enhancing those solutions with native high performance and cloud-connected fast file and object storage to satisfy the needs in the market.”
That’s a “yes”, then.
ObitEngine
Pure launched the ObjectEngine appliance in February last year. It used variable-length deduplication technology from StorReduce, a company that Pure acquired in August 2018.
The dedupe software runs on ObjectEngine//A hardware, with a FlashBlade as the underlying storage array, and in ObjectEngine//Cloud instances running in AWS.
ObjectEngine hardware: a 4-node, twin box OE//A270 with single FlashBlade backend box underneath. In effect, two dedupe servers feed reduced data to FlashBlade, and also rehydrate it.
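StorReduce has not published its algorithm, but variable-length dedupe generally relies on content-defined chunking: a running hash over the data picks chunk boundaries, so duplicate regions produce identical chunks even when their offsets shift. A toy sketch of the general technique (parameters and hash are illustrative, not StorReduce's):

```python
# Illustrative sketch of variable-length (content-defined) deduplication.
# StorReduce's actual algorithm is proprietary; this toy version shows the
# general idea: a running hash picks chunk boundaries from the data itself,
# and each unique chunk is stored once, keyed by its fingerprint.
import hashlib

MASK = 0xFF                       # boundary when hash & MASK == 0 (~256B avg)
MIN_CHUNK, MAX_CHUNK = 64, 1024   # bound chunk sizes

def chunks(data: bytes):
    start, h = 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF   # toy running hash
        if (i - start + 1 >= MIN_CHUNK and (h & MASK) == 0) \
                or i - start + 1 >= MAX_CHUNK:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]                # final partial chunk

def dedupe(data: bytes):
    store, recipe = {}, []    # fingerprint -> chunk; ordered fingerprints
    for c in chunks(data):
        fp = hashlib.sha256(c).hexdigest()
        store.setdefault(fp, c)
        recipe.append(fp)
    return store, recipe

blob = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(64))
data = blob * 4               # repeated content should dedupe well
store, recipe = dedupe(data)
rebuilt = b"".join(store[fp] for fp in recipe)
assert rebuilt == data        # reconstruction is always lossless
print(f"raw {len(data)}B, stored {sum(len(c) for c in store.values())}B")
```

Because boundaries come from the content rather than fixed offsets, an insertion early in a backup stream only changes the chunks around the edit, leaving the rest of the stream deduplicable.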
Update: webinar date and status corrected; 3 July 2020.
VMware is buying Datrium, an HCI startup that pivoted to disaster recovery in the cloud, for an undisclosed amount.
But the fact that the company revealed the acquisition via a blog shows that VMware does not consider this to be a material acquisition.
John Gilmartin, VMware’s VP and GM of its SDDC Suite business unit, said in the blog: “VMware has announced its intent to acquire Datrium, to expand the current VMware Site Recovery disaster recovery as a service (DRaaS) offering with Datrium’s world-class cost-optimized DRaaS solution.”
VMware will combine “the consistent infrastructure and operations of VMware Cloud with Datrium DRaaS to reduce the cost and complexity of business continuity”.
The $4.5bn DRaaS market is the fastest growing segment for data protection use cases, Gilmartin notes, growing at 15 per cent CAGR according to IDC’s Worldwide Data Protection as a Service Forecast for 2019–2023.
He added: “After the deal closes, the Datrium disaster recovery (DR) service will expand on the existing performance-optimised VMware Site Recovery DRaaS solution with a cost-optimised option.” That translates to DR in AWS.
Datrium’s engineering teams will join VMware. The future for other Datrium staff was not revealed.
Background
Datrium was founded in 2012 and has taken in $165m in funding, with the last raise a $60m D-round in 2018.
The company began by developing disaggregated HCI (hyperconverged infrastructure) with hyperconverged nodes running storage controller software that linked them to a shared storage box.
As part of this, it devised a way of providing disaster recovery of its on-premises systems to other Datrium systems, and to a Datrium system in the public cloud. This evolved to recovering VMware on-premises applications in the cloud. The software has deep integration with the VMware Cloud in AWS and enables a data centre VMware site to failover to the VMware Cloud on AWS.
Datrium DR in AWS scheme for on-premises VMware systems.
Datrium marketed this as a way to defeat ransomware, seeing it as a killer app.
Diamanti, a hyperconverged appliance startup, has updated its Spektra management software to allow customers to deploy, replicate, and migrate Kubernetes-orchestrated applications across bare metal and public cloud infrastructures.
Diamanti CEO Tom Barton declared in a canned quote: “Our cloud-neutral Kubernetes management platform provides users the ability to make real-time, data-driven decisions, enable access to applications, and maintain data security across the data centre, cloud, and at the edge.”
Spektra 3.0 sits above Kubernetes, providing hybrid cloud data management to integrate on-premises Diamanti D20 hyperconverged clusters with the AWS, Azure and Google clouds.
Diamanti Spektra multi-tenant admin
Cloud-native apps will be mobile
According to 451 Research, stateful applications like databases, artificial intelligence (AI), and machine learning (ML) now make up a majority of containerised applications in the enterprise. Diamanti thinks enterprises will need to move these workloads for cost-optimisation, disaster recovery and geographic expansion.
Jay Lyman, principal analyst of cloud native and DevOps at 451 Research, said: “The more applications that are in containers – regardless of state – the more likely the organisation is to be truly agile and flexible in responding to challenges and opportunities.”
“Given that enterprises are seeking to containerise more applications, including stateful ones, we expect continued growth of data-rich applications and services in containers, as well as expanded use of data services in container applications.”
With Diamanti Spektra 3.0, users can provision and administer Kubernetes clusters hosted on-premises in the data centre or at the edge or in the cloud, and manage them from a single control plane. New features include improved resource management and access controls, application deployment and migration across multiple clusters, and policy-based replication for disaster recovery.
‘Full-stack visibility’
Diamanti said its full-stack visibility underpins the multi-site DR and application migration, and claimed no other supplier can do this: not VMware (Tanzu), Rancher Labs, Red Hat OpenShift or Docker Enterprise.
Blocks & Files asked Diamanti how VMware and Rancher compare with this capability.
Diamanti failover preserves application state and data.
A spokesperson replied: “The main method for multi-cluster management today is merely connecting each cluster’s APIs to a central management plane. So while every vendor has the ability to visually see a cluster’s resources with some access control between clusters, the application-level features are limited to essentially ‘restarting’ stateless applications in new clusters.
“That is, the existing solutions rely on powering up containers from a shared or imported image file in a new cluster – without state. There are methods to leverage a third party storage solution to migrate volumes and re-attach them in a new location, but this is a very manual process.”
It’s all integrated with Diamanti so that an application owner can target a migration or set up a DR policy for one of their applications from the same UI where they manage the application.
Diamanti will first add support for Azure followed by AWS, the Google Cloud Platform, and other cloud providers.
The new release of Diamanti Spektra is available on Diamanti D20 hyperconverged infrastructure from Diamanti, Dell or Lenovo. A Diamanti blog provides more information.
Samsung has formally announced the 870 QVO SSD – after details were leaked in Tom’s Hardware. This is intended as a workhorse unit to replace disk drives in PCs, and is priced accordingly.
Dr. Mike Mang, memory brand product biz team VP at Samsung Electronics, said: “We are releasing our second-generation QVO SSD which offers doubled capacity of 8TB as well as enhanced performance and reliability.”
The 870 QVO has a 6Gbit/s SATA interface and uses QLC flash, bolstered with a variably-sized SLC cache to speed IO. The drive comes in a 2.5-inch form factor and has 1, 2, 4 and 8TB capacity points.
Prices start at $129.99 for the 1TB model, which is certainly affordable for an SSD. For comparison, a 1TB WD Blue 6Gbit/s 7,200rpm disk drive costs $44.99 on Amazon.
The 870 QVO succeeds Samsung’s 860 QVO drive, which topped out at 4TB and used 64-layer V-NAND technology organised into QLC cells. The 870 QVO uses newer 100+layer V-NAND. Samsung has not confirmed layer count but industry sources suggest it is 128.
Samsung’s ‘TurboWrite’ technology adjusts the 870’s SLC cache to 42GB for the 1TB model and up to 78GB for the 2TB, 4TB and 8TB drives.
Performance is up to 98,000/88,000 random read/write IOPS. Samsung said the read IOPS number is 13 per cent higher than the 860 QVO, but our records show the 860 has a 97,000 maximum (queue depth of 32), making the 870 a mere one per cent faster. And the 870 QVO is one per cent slower at random writes, as the 860 QVO reached 89,000 (queue depth of 32 again).
In sequential bandwidth the 870 provides up to 560MB/sec reads and 530MB/sec writes, while the 860 delivered up to 550MB/sec reads and 520MB/sec writes.
So, the IO differences between the 860 and 870 are minuscule. How about endurance? The 860 offered 1.44PB at the 4TB capacity level – as does the 870.
Endurance numbers for the other capacity levels are 1TB – 360 TB written, 2TB – 720TBW, 8TB – 2.88PBW, and there is a three-year limited warranty.
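The percentage and endurance claims above reduce to simple arithmetic, which a few lines of Python can check:

```python
# Quick check of the 870 QVO vs 860 QVO deltas and endurance quoted above.
iops_read_870, iops_read_860 = 98_000, 97_000
iops_write_870, iops_write_860 = 88_000, 89_000

read_gain = (iops_read_870 / iops_read_860 - 1) * 100    # ~+1 per cent, not 13
write_gain = (iops_write_870 / iops_write_860 - 1) * 100 # ~-1 per cent

# Endurance scales linearly with capacity at 360 TB written per TB.
tbw = {1: 360, 2: 720, 4: 1440, 8: 2880}  # capacity (TB) -> TB written
# Over the three-year warranty that is about a third of a drive write per day.
dwpd = tbw[1] / 1 / (3 * 365)             # TBW / capacity (TB) / warranty days

print(f"read {read_gain:+.1f}%, write {write_gain:+.1f}%, DWPD {dwpd:.2f}")
```

On these numbers the 870 QVO's read IOPS gain over the 860 QVO is about one per cent, matching the article's calculation rather than Samsung's 13 per cent claim, and the drive offers roughly 0.33 drive writes per day over its warranty period.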
The 1TB and 4TB drives are available now and the 8TB version is expected in August.
Micron has posted a 13.6 per cent uplift in revenues to $5.44bn for the third fiscal 2020 quarter ended 28 May. Net income eased 4.4 per cent to $804m.
Outlook for the next quarter is $6bn at the mid-point, which is 23.2 per cent more than last year. The US memory chip maker forecasts $21.4bn revenues for fiscal 2020 – 8.6 per cent lower than fiscal 2019.
Sanjay Mehrotra, president and CEO, said in a canned quote: “Micron’s exceptional execution in the fiscal third quarter drove strong sequential revenue and EPS growth, despite challenges in the macro environment.”
Pandemic impact
The company noted a limited Covid-19 impact on production in Malaysia early in the quarter, but was able to offset this with adjustments elsewhere. Mehrotra said: “The pandemic has impacted the cyclical recovery in DRAM and NAND, causing stronger demand in some segments and weaker demand in others.”
“Market segments driven primarily by consumer demand have seen a negative impact. Calendar 2020 analyst estimates for end-unit sales of autos, smartphones and PCs are meaningfully lower than pre-Covid-19 levels, even though estimates for enterprise laptops and Chromebooks have increased. The reduced level of global economic activity has also curtailed near-term demand.”
The general situation is that the pandemic is encouraging enterprises and the public sector to store and process more data at endpoints, and this needs DRAM and NAND. The next two quarters should see a healthy data centre outlook, improving smartphone and consumer end-unit sales, and new gaming consoles. Micron said these trends will drive DRAM and NAND demand upwards.
Micron’s DRAM business accounted for 66 per cent of revenues in Q3, up six per cent y/y, and NAND provided 31 per cent, up 50 per cent. This was a record quarter for NAND.
The DRAM and NAND chips are used in four business units:
Compute and Networking – $2.22bn revenues, up 7 per cent y/y
Mobile – $1.53bn revenues, up 30 per cent
Storage – $1.01bn, up 25 per cent
Embedded – $675m revenues, down four per cent – there was a slump in the auto market due to the pandemic
The earnings call revealed that Micron expects to gain a larger share of the SSD market over the next few quarters, with NVMe SSDs and QLC drives providing an opportunity.
The company has started customer shipments of 128-layer NAND chips and QLC NAND has grown to represent 10 per cent of its overall NAND production.
Huawei
Mehrotra said shipments to Huawei are affected by US government entity list stipulations, but Huawei accounts for less than 10 per cent of Micron’s revenues so the impact is not substantial. The US government wants to encourage more semiconductor manufacturing in the USA, but Micron will need solid financial reasons to transfer more production to its home country. At the moment this is lacking.
A chart of quarterly revenues and profits since 2016 shows Micron pulling out of a slump that started at the end of fy18. A couple more quarters under its belt will confirm if this is sustained.
Microsoft and Commvault are to integrate Commvault’s Metallic SaaS data protection with Azure Blob Storage and will develop other product integrations with native Azure services.
In a multi-year agreement between the two companies, Metallic will be a featured app for SaaS data protection in the Azure Marketplace. Metallic Backup & Recovery for Office 365 is already available on the site.
Sanjay Mirchandani, Commvault CEO, said today in a press announcement: “This is a new era for Commvault and our direction is clear – help our joint channel partners and customers simplify IT with enterprise-class, proven data protection solutions delivered through SaaS and protected in the cloud.”
Metallic Office 365 data flow diagram.
Metallic is a cloud-native backup and recovery service that uses AWS and Azure to store data.
With the Microsoft announcement, we envisage Metallic will be a data onramp for customer data to Azure. This data can be processed with Azure and also Commvault applications. There may be a Hedvig angle too; Commvault bought the startup last year to give it a software-defined storage capability.
Commvault said it will continue to support Metallic customers who want to continue using other clouds – i.e. AWS – for backend storage. It did not reveal if it plans to integrate Metallic with more cloud storage providers.
Update: Fujitsu questions answers added 3 July 2020.
Fujitsu has switched up from reselling NetApp arrays to OEMing them. The ETERNUS AF all-flash and DX hybrid arrays now get NetApp-based entry and mid-range models.
In a terse press release, Fujitsu provides few details about its four new entry and mid-range ETERNUS systems – AB, AX, HB and HX – beyond saying they are “ideal for use in mission-critical systems, HPC, virtualised systems and file servers”.
Kenichi Sakai, head of Fujitsu’s infrastructure system business unit, provided a canned quote: “We see this enhanced strategic partnership with NetApp as a critical step in supporting digital transformation by enabling customers to effectively manage and leverage their data.”
The AX and HX are intended for virtualized systems and file servers, while the AB and HB systems are for mission-critical databases and HPC. They scale to 24 storage nodes and 26PB of capacity, with data stored on-premises or sent to the public cloud.
Fujitsu intends to introduce combined Fujitsu and NetApp solutions for AI, hybrid cloud and HPC by the end of 2020. For example Fujitsu servers and NetApp arrays will be used for an AI system and Fujitsu high performance scalable file systems and servers will combine with NetApp storage for HPC installations.
The company is also developing a hybrid cloud system using NetApp Cloud Data Services that manages private and public cloud environments in the data centre with a single storage operating system.
ETERNUS
ETERNUS is Fujitsu’s storage brand and covers five product lines.
Until now Fujitsu resold NetApp FAS, SolidFire and AFF series (hybrid and all-flash) arrays, and had been doing so for quite a while.
New products
An ETERNUS product table includes the four new systems:
Our understanding, based on the capacity levels, is that the incoming AB and AX replace entry and midrange AF models, and the HB and HX replace DX entry and mid-range models.
In its announcement Fujitsu says the AB and HB systems use NVMe and InfiniBand – but their spec sheets say otherwise. The AB2100 is a SAS SSD system classed as NVMe-ready, and InfiniBand is not mentioned. Fujitsu’s table says the AB supports 120 drives but the AB2100 spec sheet says 96 are supported.
Also the HB1100 is a hybrid flash/disk system with no NVMe and no InfiniBand. Something got lost in translation or maybe more systems are in the offing.
A Fujitsu statement said: “These are general statements on the press release for the product line of AB and HB. It applies though not to all single systems. Also, the correct number of drives supported by AB2100 is 96, which is officially stated in the press release.”
Fujitsu does not say if it is possible to upgrade from AB to AX to AF, or from HB to HX to DX. We suspect any upgrades would be disruptive and involve forklift trucks.
A Fujitsu statement said: “Depending on regional and local requirements, Fujitsu will closely collaborate with its customers defining the ideal way forward. This can include a transition from several system families to another … As such, Fujitsu will follow a very customer-centric case-by-case determination what’s the best way forward. Supportive programs will be launched depending on local requirements in the coming future.”
Fujitsu has been asked to clarify these various points and we’ll update this article when we hear back.
ETERNUS AB/HB will roll out worldwide, starting in Japan and Europe. ETERNUS AX/HX is available in Japan, with future availability pencilled in for regions outside of Europe, curiously.
Fujitsu said: “Our new FUJITSU Storage ETERNUS AX / FUJITSU Storage ETERNUS HX systems will be available in Japan only. These systems will be sold in Europe (including UK, Germany and all other regions/countries of operation) under the NetApp brand, known as NetApp AFF and NetApp FAS system families.”
So ETERNUS AX is NetApp’s AFF product and ETERNUS HX is NetApp’s FAS product.
No pricing information was supplied at time of publication.
Kioxia is shipping engineering samples of a new short ruler SSD format that delivers 35 per cent better performance than the company’s coming PCIe 4.0 SSD.
The upcoming E3.S variant of the EDSFF (enterprise and data center SSD form factor) is an enterprise NVMe SSD format developed for PCIe 5.0, according to Kioxia, and is intended for public cloud, hyperconverged and general purpose servers and all-flash arrays. The E3.S format has a higher power budget than today’s 2.5-inch PCIe 3.0 SSDs.
Kioxia’s sample E3.S system, based on its 2.5-inch CM6 Series PCIe 4.0 NVMe 1.4 SSD, exhibited 35 per cent more performance with the same controller, 4 PCIe lanes, 3D TLC flash memory and 28W (+40 per cent) of power.
The image shows SSDs mounted on a 2U rack-mounted server prototype that can house 48 such SSDs.
The E3.S format is being standardised by a SNIA working group. E3.S SSDs should deliver more flash capacity in less space for more efficient use of power, rack space and storage footprint. They will feature:
Support for PCIe 5.0 and beyond through improved signal integrity
Better cooling, thermal characteristics and performance than 2.5-inch SSDs
Drive status LED indicators
Support for x8 PCIe lane configurations
There are short and long ruler standards with varying length, depth and width measurements; E1.L long ruler, E1.S (1-inch) short ruler and E3.S (3-inch) format, for example.
Intel is shipping the DC P4510 E1.L ruler format SSD and says it’s the only supplier shipping PCIe 4.0 drives currently.
Fujifilm reckons it can build a 400TB tape cartridge using Strontium Ferrite (SrFe) media. This is 33 times larger than the current LTO-8 cartridge and takes us out four more LTO generations to 2028 and beyond.
We were briefed by Fujifilm on a recent IT Press Tour. Fujifilm is one of just two manufacturers of tape media – Sony is the other – and LTO is the dominant magnetic tape format. Tape is the preferred medium for archive storage – disk costs too much. But data keeps growing. Hence the need for capacity increases.
In 2017, IBM and Sony demoed areal density of 201Gb/in² and 246,200 tracks per inch. This means 330TB tape cartridges are theoretically possible – today’s LTO-8 tapes hold 12TB.
Current magnetic tapes are coated with Barium Ferrite (BaFe) and each LTO generation uses smaller particles formed into narrower data tracks. That results in more capacity per tape reel inside the cartridge. But if BaFe particles get too small a tape drive’s read/write head can no longer reliably read the bit values [magnetic polarity] on the tape. The signal to noise ratio is insufficient.
Step forward Fujifilm, which sees Strontium Ferrite (SrFe) media achieving around 224Gbit/in² areal density and 400TB capacity. It says “the majority of the magnetic properties of SrFe are superior to those of BaFe, which will enable us to reach a higher level of performance whilst further reducing the size of the particles.”
If we look at a table of LTO tape generations we see we’re currently on gen 8 (LTO-8) with 12TB raw capacity cartridges. Successive LTO generations generally double capacity. Stream forward to LTO-12 and we have 192TB cartridges.
LTO tape generations to LTO-12. Blocks & Files has envisaged possible LTO-13 and LTO-14 generations.
Fujifilm expects SrFe media to be used in LTO-11 and LTO-12 tape.
The LTO generation gap
Every so often the LTO consortium announces a roadmap extension. The LTO-7 and LTO-8 formats were announced in 2010. In 2014, when LTO-6 was shipping, the consortium announced LTO generations 9 and 10. The Register envisaged an LTO-11 in 2015 and the actual LTO-11 and LTO-12 generations were announced in October 2017.
Blocks & Files expects the LTO consortium, by the end of 2021, to extend its roadmap beyond LTO-12, based on this roadmap extension history. Continuing the capacity doubling per generation seen so far, a potential LTO-13 would have 384TB capacity.
A suggested LTO-14 would have 768TB capacity, beyond Fujifilm’s 400TB SrFe media cartridge. If we care to project further, an LTO-15 would have a 1.53PB capacity.
Blocks & Files thinks Fujifilm may need a post-Strontium Ferrite technology one generation past LTO-13.
When will that be? Assume two to three years, say 2.5, between LTO generations, with LTO-8 shipping in 2019. On that cadence we’ll see LTO-12 around 2030 and LTO-13, if it happens, in 2032/33.
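The doubling-per-generation and roughly 2.5-year-cadence assumptions above can be sketched as a projection; the capacities match the LTO-13/14/15 figures in the text, while the dates are loose extrapolations, not roadmap commitments:

```python
# Extrapolation only: capacity doubling per LTO generation and a ~2.5-year
# cadence from LTO-8 (12TB raw, shipping 2019). Not an official roadmap.
def lto_projection(start_gen=8, start_tb=12, start_year=2019, end_gen=15):
    gens, cap, year = {}, start_tb, float(start_year)
    for gen in range(start_gen, end_gen + 1):
        gens[gen] = (cap, int(year + 0.5))  # round the projected ship year
        cap *= 2
        year += 2.5
    return gens

for gen, (tb, year) in lto_projection().items():
    print(f"LTO-{gen}: {tb}TB raw, ~{year}")
```

This reproduces 192TB for LTO-12, 384TB for a putative LTO-13, 768TB for LTO-14 and 1,536TB (1.53PB) for LTO-15.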
Areal density headroom
How far can tape areal density improvements go?
We will use HDDs as a reference point, since that technology also stores magnetised bits in a recording medium layer deposited on a substrate; a platter in contrast to a tape ribbon.
Maximum HDD areal density is in the 700 Gbits/in² or greater area, with some drives having far higher areal density. A Seagate Skyhawk AI disk achieves 867 Gbits/in², and Toshiba’s 2.5-inch MQ01 stores data at a higher density still: 1.787 Tbits/in². Put another way, HDD bits are a lot smaller than tape bits.
This implies tape media areal density improvements have a lot of headroom before they reach the current state of play in disk recording media material.
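Using the figures above, that headroom is easy to quantify:

```python
# How much denser is current HDD recording media than Fujifilm's projected
# SrFe tape media? Figures as quoted in the text, in Gbit/in².
tape_srfe = 224   # Fujifilm's SrFe projection
hdd_gbit_per_in2 = {
    "typical high-end HDD": 700,
    "Seagate Skyhawk AI": 867,
    "Toshiba MQ01": 1787,   # 1.787 Tbit/in²
}
for name, density in hdd_gbit_per_in2.items():
    print(f"{name}: {density / tape_srfe:.1f}x SrFe tape density")
```

Even the projected SrFe tape media sits roughly 3x to 8x below shipping disk media in areal density, which is the headroom argument in a nutshell.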
The actual read-write head technology is roughly equivalent between disk and tape. Since disk read/write heads can read and write such small bits then, in theory, tape read and write heads could do the same. So, we can keep on throwing more and more old data at tape archives and the technology will keep up.
Kaminario, Commvault and DataCore are the top software-defined block storage suppliers. However, open source Ceph is a follower in this market.
So says GigaOm, the tech analyst firm, which has published supplier comparisons for software-defined block storage, using its Radar marketscape methodology.
Software-defined block storage is characterised as software available for multi-vendor commodity servers. It is sold as software-only, pre-installed and configured bundles or pre-configured appliances. GigaOm report author Enrico Signoretti therefore does not include companies that make their software available on-premises on the supplier’s hardware only – such as Dell EMC’s PowerFlex or NetApp ONTAP.
DataCore, founded in 1998, is the longest established of the top trio, followed by Kaminario, which set up in 2008 as an all-flash array supplier and has since evolved into a software-defined supplier using certified hardware delivered by its channel. Commvault is a new entrant, courtesy of last year’s acquisition of Hedvig. Datera and StorOne are also rated as leaders, and StorPool is on the cusp of breaking into the top ranks.
Signoretti seems to regard NVMe-oF as ‘table stakes’ – companies that lag in support for the fast network protocol fare less well in the GigaOm radar. They include Red Hat, SoftIron and SUSE, which all use Ceph as their software platform, and DDN’s Nexenta.
Comment
Today, hardware suppliers dominate the on-premises block storage market. Software-defined block storage has not made noticeable inroads yet into SAN arrays or hyperconverged systems.
Proprietary software-defined block storage is in better shape than the open source alternative. Ceph-based companies have made less impact than newcomers such as Datera, Commvault-Hedvig and StorOne.
Ceph is the intended data storage equivalent of Linux for servers but the open source technology lags behind better-established proprietary competitors. For wider scale adoption, Ceph will need to raise its performance game.
OpenDrives, which provides video post production NAS systems, has been in the spotlight this week after releasing three customisable enterprise NAS hardware series.
The firm claims its video post production platform can deliver 23GB/sec write and 25GB/sec read performance with an average of 13μs latency from a single chassis packed with NVMe SSDs. It has not yet explained how it achieves this level of performance.
Products
There are three lines in the firm’s Ultra hardware platform range:
Ultimate Series – a 2U chassis with the Ultimate 25 compute module, supporting NVMe SSDs; the fastest performer.
Optimum Series – 2U chassis with Ultimate 15 compute module and configurable with either all-flash NVMe or SAS disk drives or both.
Momentum Series – 2U chassis with Ultimate 5 compute module and supporting disk drive-based capacity-optimised modules. Designed to excel at write-intensive workflows, such as camera-heavy security surveillance.
OpenDrives Ultra hardware line
These have 2U base chassis and come with storage modules:
F – 8 x all-NVMe SSDs (960GB to 15.36TB)
FD – 24 x NVMe SSDs (1.92TB to 15.36TB)
H – 21 x SAS HDD (4,8,12, 16TB)
HD – 72 x SAS HDD (4,8,12, 16TB)
The F, FD and H have 2U enclosures while the HD comes in a 4U enclosure.
These Ultra systems are clusterable with a distributed file system, and run Atlas software. This is based on OpenZFS and provides data integrity, high availability (standby compute module), compression, inline caching in DRAM, RDMA, active data prioritisation for low latency, and a dynamic block feature that fits incoming data to the most efficient block size, from 4KB to 1MB.
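OpenDrives has not detailed how the dynamic block feature picks a size. The general idea, familiar from OpenZFS variable record sizes, is to store each write in the smallest block that holds it; a minimal sketch, with the selection policy assumed rather than documented:

```python
# Sketch only: OpenDrives has not documented the dynamic block algorithm.
# The general idea (as with OpenZFS variable record sizes) is to pick the
# smallest power-of-two block, between 4KB and 1MB, that fits a write,
# trading metadata overhead against padding waste.
MIN_BLOCK, MAX_BLOCK = 4 * 1024, 1024 * 1024

def pick_block_size(write_bytes: int) -> int:
    size = MIN_BLOCK
    while size < write_bytes and size < MAX_BLOCK:
        size *= 2      # next power of two
    return size

assert pick_block_size(1_000) == 4 * 1024         # small write -> 4KB
assert pick_block_size(100_000) == 128 * 1024     # rounds up to 128KB
assert pick_block_size(5_000_000) == 1024 * 1024  # capped at 1MB
```

Writes larger than the maximum block size would simply span multiple 1MB blocks.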
Other features include single pane of glass management, single namespace, storage analytics and various data integrity techniques such as checksums, dual parity striping and inflight error correction.
Background
The company was founded in Los Angeles in 2011 by Jeff Brue and Kyle Jackson. Jackson left in 2015 and Brue passed away in 2019. They were media creatives looking for an efficient way to tackle their large file, video and imaging performance workloads – mostly film dailies, the raw, unedited footage shot on a movie set each day. The company has since expanded from its original video market to healthcare, video games, e-sports, architecture, corporate video, and advertising agencies.
Chad Knowles joined in 2014 as CEO and is classed as a co-founder on LinkedIn. Retired US Navy Vice Admiral David Buss became CEO in September 2019, with Knowles staying on as Chief Strategy Officer before relinquishing that role to join the board.
Sean Lee is now Chief Product and Strategy Officer.
The firm says revenue has increased by 83 per cent annually for five years, though it does not say from what base.
OpenDrives has so far taken in $11m in funding – peanuts in the great scheme of storage startup VC funding amounts.
Customers include Apple, Netflix, AT&T, Disney, Fox, Paramount, Universal, Sony, Warner Bros., CBS, ABC, NBC, HBO, Turner, Riot Games, Epic Games, Deluxe, Fotokem, Skydance, YouTube, Saatchi & Saatchi, Deutsch, NBA, NFL, PGA, NASCAR, Grammy Awards and Vox Media – a good roster to pin on the wall.
Competition
The 23GB/sec write, 25GB/sec read performance and average of 13μs latency sound good. How does it compare to the competition?
A Dell EMC Isilon F800 delivers up to 250,000 IOPS and 15 GB/s aggregate throughput from up to 60 SAS SSDs in a single chassis configuration. OpenDrives doesn’t provide IOPS numbers because, it claims, its system’s algorithms and efficiencies are optimised for large file performance, not small file IO.
Dell EMC has recently announced a PowerScale F600 system, PowerScale being the new brand name for Isilon systems going forward. It didn’t release specific performance numbers.
However, the F600 supports 8 x NVMe SSDs and has more CPU sockets, DRAM capacity, SSD speed and IO port bandwidth, which suggests its IOPS and throughput numbers will be significantly higher than the F800’s. Blocks & Files thinks the F600 could deliver 5x the F800’s performance, meaning 1.25 million IOPS and more throughput as well. A literal 5x throughput improvement would mean 75GB/sec, which we think could be unrealistic.
However, again, envisaging the PowerScale F600 surpassing 25GB/sec throughput does seem realistic.
Qumulo can scale to high GB/sec levels beyond 25GB/sec but, we suspect, its single chassis performance may fall behind OpenDrives.
Net:net
OpenDrives is relatively unknown outside its niche but has a solid roster of impressive customers for its large file IO-optimised systems. If it can see off PowerScale/Isilon competition then that will speak volumes about the strength of its product.
Teradata has become much more serious about working in the public clouds and Western Digital has an NVMe Express-bolstering present for its zoned QLC SSD initiative.
Teradata’s head in the clouds
Legacy data warehouse Teradata has its head in the clouds in a seriously big way: AWS, Azure and GCP to be precise. It’s pushing more features into its Vantage-as-a-service products on the Big 3 of cloud. Customers get:
Reduced network latency via Teradata’s growing global footprint, and upgrades to compute instances and network performance;
Support for customer-managed keys for Vantage on AWS and Azure;
Improved availability: the service level agreement (SLA) for availability is now 99.9% for every as-a-service offering, with guaranteed, higher uptime;
Quicker compressed data migration times with Teradata’s new data transfer utility (DTU), delivering 20% faster transfers;
Self-service web-based console gets expanded options for monitoring and managing as-a-service environments;
Integration with Amazon cloud services, including Kinesis (data streaming); QuickSight (visualization); S3 (low-cost object store); SageMaker (machine learning); Glue (ETL pipeline); Comprehend Medical (natural language processing); and
Integration with Azure cloud services, including Blob (low-cost object store); Data Factory (ETL pipeline); Databricks (Spark analytics); ML Studio (machine learning); and Power BI Desktop (visualization).
There will be more to come over time.
The Amazon and Azure Vantage enhancements are available now. They will also apply to Vantage on Google Cloud Platform (GCP), which will begin limited availability in July 2020.
WD’s Zoned Namespace spec ratified
The NVMe Express consortium has ratified Western Digital’s ZNS (Zoned Namespace) command set specification. WD has a pair of zoned storage initiatives aimed at host management of data placement in zones on storage drives.
For SMR (Shingled Magnetic Recording) HDDs:
ZBC (Zoned Block Commands)
ZAC (Zoned ATA Command Set)
For NVMe SSDs:
ZNS (Zoned Namespaces).
These are host-managed as opposed to the drives managing the zones themselves. That means system or application software changes. It also requires support by other manufacturers to avoid zoned disk or SSD supplier lock-in. This ratification helps make ZNS support by other SSD manufacturers more likely.
ZNS is applicable to QLC SSDs, where data with similar access rates can be placed in separate zones to reduce overall write amplification and so extend drive endurance. Zoned drives can also provide improved I/O access latencies.
WD’s ZNS concept.
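To make the host-managed idea concrete, here is a minimal toy model of zones with sequential write pointers. This is purely illustrative: real ZNS drives expose zones through the NVMe ZNS command set and the Linux zoned block device interface, not a Python API like this.

```python
# Toy model of a host-managed zoned drive (illustrative only).
class Zone:
    """A zone accepts only sequential appends and is reset as a whole."""

    def __init__(self, size_blocks):
        self.size = size_blocks
        self.write_pointer = 0  # next sequential write position
        self.data = []

    def append(self, block):
        if self.write_pointer >= self.size:
            raise IOError("zone full - host must write to another zone")
        self.data.append(block)
        self.write_pointer += 1

    def reset(self):
        # Whole-zone reset replaces per-block garbage collection; this is
        # what cuts write amplification on QLC flash.
        self.write_pointer = 0
        self.data = []


class ZonedDrive:
    def __init__(self, num_zones, zone_size):
        self.zones = [Zone(zone_size) for _ in range(num_zones)]


# The host, not the drive, decides placement: hot and cold data go to
# separate zones, so recycling hot data never rewrites cold data.
drive = ZonedDrive(num_zones=2, zone_size=4)
hot, cold = drive.zones
hot.append("session-log")
cold.append("archive-blob")
hot.reset()  # hot zone recycled wholesale; cold zone untouched
print(cold.write_pointer)  # 1
```

The point of the sketch is the division of labour: the drive enforces sequential writes within a zone, while the host groups data by lifetime, which is exactly the software change the spec demands of applications.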
The ZNS specification is available for download under the Developers -> NVMe Specification section of the www.nvmexpress.org public web site, as an NVM Express 1.4 Ratified TP.
WD has been working with the open source community to ensure that NVMe ZNS devices are compatible with the Linux kernel zoned block device interface. It says this is a first step and modifications to well-known user applications and tools, such as RocksDB, Ceph, and the Flexible IO Tester (fio) performance benchmark tool, together with the new libzbd user-space library, are also being released.
It claims public and private cloud vendors, all flash-array vendors, solid-state device vendors, and test and validation tool suppliers are adopting the ZNS standard – but these are not named.
Blocks & Files thinks ZNS support by other SSD suppliers such as Samsung, Intel, and Micron will be essential before storage array manufacturers and SW suppliers adopt it with real enthusiasm.
WD claimed that, with a small set of changes to the software stack, users of host-managed SMR HDDs can deploy ZNS SSDs into their data centres. More from WD here.
Shorts
Data warehouser Actian has announced GA of Vector for Hadoop, an upgraded SQL database delivering real-time and operational analytics not previously feasible on Hadoop. The SW uses patented vector processing and in-CPU cache optimisation technology to eliminate bottlenecks. Independent benchmarks demonstrated a more than 100X performance advantage for Vector for Hadoop over Apache Impala.
The Active Archive Alliance announced the download availability of a report: “Active Archive and the State of the Industry 2020,” which highlights the increased demand for new data management strategies as well as benefits and use cases for active archive solutions.
Backupper Asigra and virtual private storage array supplier Zadara announced that Sandz Solutions Philippines Inc. has deployed their Cloud OpEX Backup Appliance to defend its businesses against ransomware attacks on backup data.
AWS’s Snowcone uses a disk drive, not an SSD, to provide its 8TB of usable storage.
Enterprise Information archiver Smarsh has Microsoft co-sell status and its Enterprise Archive offering is available on Azure for compliance and e-discovery initiatives. Enterprise Archive uses Microsoft Azure services for storage, compute, networking and security.
Taipei-based Chenbro has announced the RB133G13-U10, a custom barebones 1U chassis pre-fitted with a dual-socket Intel Xeon motherboard, ready to take two Intel Xeon Scalable processors with up to 28 cores and 165W TDP. There is a maximum of 2TB of DDR4 memory, 2X 10GbitE connectivity, 1X PCIe Gen 3 x16 HH/HL expansion slot, and support for up to 10X hot-swappable NVMe U.2 drives. It has Intel VROC, Apache Pass, and Redfish compliance.
France-based SIGMA Group, a digital services company specialising in software publishing, integration of tailor-made digital solutions, outsourcing and cloud solutions, has revealed it uses ExaGrid to store its own and customer backups, and replicate data from its primary site to its disaster recovery site.
Estonia-based Diaway has announced a strategic partnership with Excelero and the launch of a new product, DIAWAY KEILA powered by Excelero NVMesh. Component nodes use AMD EPYC processors, PCIe Gen 4.0, WD DC SN640 NVMe SSDs, and 100GbitE networking. Sounds like a hot, fast box set.
FalconStor can place ingested backup data on Hitachi Vantara HCP object storage systems. This means data ingested by FalconStor through its Virtual Tape Library (VTL), Long-Term Retention and Reinstatement, and StorSafe offerings can be deduplicated and sent to an HCP target system. Physical tape can be ingested by the VTL product and sent on to HCP for faster-access archive storage.
Hitachi Vantara was cited as a Strong Performer in the Forrester Wave Enterprise Data Fabric, Q2 2020 evaluation. But Strong Performers are second to Leaders, and the Leader suppliers were Oracle, Talend, Cambridge Semantics, SAP, Denodo Technologies, and IBM. Hitachi V was accompanied as a Strong Performer by DataRobot, Qlik, Cloudera, Syncsort, TIBCO Software, and Infoworks. Well done Hitachi V – but no cigar.
Backupper HYCU has a Test Drive for Nutanix Mine with HYCU initiative. Customers can try out Nutanix Mine with HYCU at their own pace, with in-depth access and hands-on experience by launching a pre-configured software trial.
Data protector HubStor tells us it has revamped its company positioning as a SaaS-based unified backup and archive platform. Customer adoption remains strong, and in recent months it has been adding one petabyte of data to the service each month.
China’s Inspur has gained the number 8 position in the SPC-1 benchmark rankings with an AS5500 G3 system scoring 3,300,292 SPC-1 IOPS, $295.73/SPC-1 KIOPS and a 0.387ms overall response time.
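As a back-of-envelope check, the quoted IOPS and price-per-KIOPS figures imply a total system price. The arithmetic below is illustrative, derived only from the two numbers above, not from official SPC pricing documents.

```python
# Implied total system price from the two quoted SPC-1 figures.
iops = 3_300_292                  # reported SPC-1 IOPS
price_per_kiops = 295.73          # reported $/SPC-1 KIOPS

kiops = iops / 1000
total_price = kiops * price_per_kiops
print(f"Implied total system price: ${total_price:,.0f}")  # roughly $976,000
```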
Seagate’s LaCie unit announced new 1big Dock SSD Pro (2TB and 4TB SSD capacities) and 1big Dock (4TB, 8TB, and 16TB HDD capacities) storage for creative professionals and prosumers. Both are designed by Neil Poulton to look good on your desktop. The 1big Dock SSD Pro is for editing data-intense 6K, 8K, super slow motion, uncompressed video, and VFX content. The 1big Dock has direct ingestion of content from SD cards, CompactFlash cards, and USB devices and serves as the hub of all peripherals, connecting to the workstation with a single cable.
LaCie 1big Dock SSD Pro.
Micron Solutions Engineering Lab recently completed a proof of concept using Weka to share a pool of Micron 7300 PRO NVMe SSDs and obtained millions of IOPS from the file system. The testing used six nodes in a 4 + 2 (data + parity) erasure-coding configuration for data protection. There’s more information from Micron here.
More than 4.5 million IOPS from Weka Micron system.
Nutanix Foundation Central, Insights and Lifecycle Manager have been updated to enable Nutanix HCI Managers to do their work remotely.
Foundation Central allows IT teams to deploy private cloud infrastructure on a global scale from a single interface, and from any location.
Insights will analyse telemetry from customers’ cloud deployments, including all clusters, sites and geographies, to identify ongoing and potential issues that could impact application and data availability. Once identified, the Insights service can provide customised recommendations.
Lifecycle Manager (LCM) will deliver seamless, one-click upgrades to the Nutanix software stack, as well as to appliance firmware – without any application or infrastructure downtime.
Nutanix and HPE have pushed out some new deals with AMD-based systems offering better price/performance for OLTP and VDI workloads, ruggedised systems for harsh computing environments, certified SAP ERP systems, higher capacity storage for unstructured data, and turnkey data protection with popular backup software. More from Nutanix here.
Cloud data warehouser Snowflake, with an impending IPO, today announced general availability on Google Cloud in London. The UK’s Greater Manchester Health and Social Care Partnership is using Snowflake in London. This follows Snowflake’s general availability on Google Cloud in the US and Netherlands earlier this year.
Storage Made Easy (SME) has signed an EMEA-wide distribution agreement with Spinnakar for its Enterprise File Fabric, a single platform that presents and secures data from multiple sources, be that on-premises, a data centre, or the cloud. The EFF provides an end-to-end brandable product set that is storage agnostic, and currently supports more than 60 private and public data clouds. It supports file and object storage solutions, including CIFS/NAS/SAN, Amazon S3 and S3-compatible storage, Google Storage and Microsoft Azure.
StorageCraft announced an upgrade of ShadowXafe, its data and system backup and recovery software. Available immediately, ShadowXafe 4.0 gives users unified management with the OneXafe Solo plug-and-protect backup and recovery appliance. It also has Hyper-V, vCenter and ESXi support and consolidated automated licensing and billing on ConnectWise Manage and Automate business management platforms.
StorONE has launched its Optane flash array, branded S1:AFAn (All-Flash Array.Next), claiming it’s the highest-performing, most cost-effective storage system on the market today and a logical upgrade to ageing all-flash arrays. CompuTech International (CTI) is the distributor of StorONE’s S1:AFAn. Use TRU price to run cost comparisons.
Europe-based SW-defined storage biz StorPool has claimed over 40 per cent y-o-y growth in H1 2020 and a 73 per cent NPS score. New customers included a global public IT services and consulting company, a leading UK MSP, one of Indonesia’s largest hosting companies, one of the Netherlands’ top data centres, and a fast-growing public cloud provider in the UK. StorPool is profitable and hasn’t had any funding rounds since 2015.
TeamGroup announced the launch of the T-FORCE CARDEA II TUF Gaming Alliance M.2 Solid State Drive (512GB, 1TB) and T-FORCE DELTA TUF Gaming Alliance RGB Gaming Solid State Drive (5V) (500GB, 1TB), both certified and tested by the TUF Gaming Alliance.
TeamGroup gaming SSDs.
Frighteningly fast filesystem supplier WekaIO said it has been assigned a patent (10684799) for “Flash registry with write levelling,” and has forty more patents pending. Forty? Yes, forty.