
Intel regains IO500 HPC bragging rights from WekaIO

Intel has leapfrogged WekaIO to claim the top spot in the IO500 ranking of the fastest HPC file systems. WekaIO had supplanted Intel to take top place in the previous IO500 league table, published in November 2019.

Compiled by the Virtual Institute for I/O, the IO500 benchmark measures bandwidth, metadata and namespace-searching performance. Last November, WekaIO scored 938.95 versus Intel’s 933.64. In the latest ranking, Intel scored 1,792.98 while WekaIO’s score was unchanged from last time. Here are the top five results:

Intel also took the IO500 third slot with a Texas Advanced Computing Center Frontera system using DAOS, which scored 768.80.
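For context, the headline IO500 number is the geometric mean of two sub-scores: a bandwidth score in GiB/s from the IOR phases and a metadata score in kIOPS from the mdtest and find phases. A minimal sketch of that calculation (the sub-score inputs below are illustrative, not the published Intel or WekaIO figures):

```python
from math import sqrt

def io500_score(bandwidth_gib_s: float, metadata_kiops: float) -> float:
    """Headline IO500 score: the geometric mean of the bandwidth and metadata sub-scores."""
    return sqrt(bandwidth_gib_s * metadata_kiops)

# Illustrative inputs only -- not the actual Intel or WekaIO sub-scores.
print(round(io500_score(1000.0, 3214.0), 2))  # 1792.76, roughly Intel's headline number
```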

DAOS (Distributed Asynchronous Object Storage) is Intel’s open source storage stack for high performance computing. It accelerates file accesses using NVRAM and Optane storage-class memory plus NVMe storage, running in user space.

The DAOS software stack is still under development – for example, Intel has adapted the IO driver to work better with the IOR and MDTEST components of the IO500 benchmark.

DAOS placed second on the November 2019 ‘Full List’, using 26 client nodes. Better performance can be expected with a larger set of client nodes, particularly for metadata tests that scale with the number of client nodes. In the latest test, Intel doubled node count to 52 and nearly doubled performance.

Over to you, Weka.

This Week in Storage, featuring Qumulo, Actifio and more

This week: Qumulo filer software has become available in the AWS government cloud; Actifio backs up SAP HANA to object storage in GCP; and LucidLink is doing cloud collaboration work with Adobe Premiere Pro.

Qumulo on AWS takes on Azure NetApp Files

Scalable file system supplier Qumulo has announced its availability in the AWS GovCloud (US) through the AWS Marketplace.

Qumulo says government organisations can now integrate their file data with legacy applications in the private cloud and cloud-native applications in AWS GovCloud (US) with a single file data platform.

The company is working with Corsec Security Inc. to gain US government certifications for its software, including the upcoming FIPS 140-2 and Common Criteria EAL2+ certifications of its platform. It said it aims to make Qumulo the strategic choice for all types of Controlled Unclassified Information (CUI) and unclassified file data.

NetApp, a Qumulo competitor, this week announced that Azure NetApp Files is available in the Azure government cloud.

Actifio HANA DR costs 86 per cent less on GCP

Copy data manager Actifio is waving a tech validation report from ESG that says it reduced backup and disaster recovery (DR) infrastructure costs by 86 per cent when protecting SAP HANA workloads with Google Cloud Platform (GCP) object storage. The comparison is with legacy backup approaches using high-performance block storage. 

ESG found the same high levels of performance from a DR copy running off Google Nearline object storage as from production instances running on Google Persistent Disk block storage.

ESG Senior Validation Analyst Tony Palmer said: “Cloud object storage is typically 10x less expensive than cloud SSD/flash block storage. Actifio’s ability to recover an SAP HANA database in just minutes from cloud object storage, while delivering the I/O performance of SSD/flash block storage, is very unique in the industry and reduces cloud infrastructure costs by more than 80 per cent for enterprises.” 

You can download the ESG Actifio SAP HANA Technology Review.

LucidLink builds cloudy Adobe file collaboration

LucidLink, which supplies accelerated cloud-native file access software, is partnering with Adobe so that Premiere Pro users can edit projects directly from the cloud.

Generally, Adobe Premiere Pro video editing software users edit local files because access is fast. However, team working, and particularly remote team working, requires multi-user access to remote files. LucidLink’s Filespaces can give teams on-demand access to media assets in the cloud, accessed as if they were on a local drive.

Sue Skidmore, head of partner relations for Adobe Video, said: “With so many creative teams working remotely, the ability to edit Premiere Pro projects directly from the cloud has become even more important. We don’t want location to hold back creativity. Now Premiere users can collaborate no matter where they are.”

Filespaces provides a centralised repository with unlimited access to media assets from any point in existing workflows. The pandemic has encouraged remote working. Peter Thompson, LucidLink co-founder and CEO, provided a second canned quote: “Our customers report they can implement workflows previously considered impossible. We are providing the missing link in cloud workflows with ‘streaming files.’”

Shorts

Actifio has announced technical validation and support for Oracle Cloud VMware Solution (OCVS), Oracle’s new dedicated, cloud-native VMware-based environment. OCVS enables enterprises to move their production VMware workloads to Oracle Cloud Infrastructure, with the identical experience in the cloud as in on-premises data centres. It integrates with Oracle’s second-generation cloud infrastructure. OCVS is available now in all public regions and in customer Dedicated Region cloud instances.

Taiwan-based Chenbro has announced the RB23712, a Level 6, 2U rackmount server barebone (no CPUs, no fitted drives) with 12 drive bays, designed for storage-focused applications in data centres and enterprise HPC. It pre-integrates an Intel Server Board S2600WFTR with support for up to two 2nd Generation Xeon Scalable processors. The RB23712 offers Apache Pass support, IPMI 2.0 and Redfish compliance, and includes Intel RSTe/Intel VROC options.

Microchip Technology has introduced the latest member of the Flashtec family, the Flashtec NVMe 3108 PCIe Gen 4 enterprise SSD controller with eight channels. It complements the 16-channel Flashtec NVMe 3016 and provides a full suite of PCIe Gen 4 NVMe SSD functions. The 3108 is intended for use in M.2 and SNIA Enterprise and Data Center SSD Form Factor (EDSFF) E1.S drives.

Microchip 3108

Nutanix says it has passed 2,500 customers for Nutanix Files. Files is part of a Nutanix suite for structured and unstructured data management, which includes Nutanix Objects, delivering S3-compatible object storage, and Nutanix Volumes for scale-out block storage.

Penguin Computing has become a High Performance Computing (HPC) sector reseller and solution provider of Pavilion Hyperparallel Flash Arrays (HFA).

Quantum has announced its ActiveScale S3-compatible object store software has been verified as a Veeam Ready Object Solution.

Synology has launched new all-flash storage and a line of enterprise SSDs. The FS3600 storage system is the newest member of Synology’s expanding FlashStation family of network-attached storage (NAS) servers. Synology has also announced the release of SAT5200 SATA SSDs and SNV3400 and SNV3500 NVMe SSDs.

The FS3600 features a 12-core Xeon, up to 72 drives, and 56GbitE support. The new SSDs fit in its enclosure and have five-year warranties. They integrate with Synology DiskStation Manager (DSM) for lifespan prediction based on actual workloads.

Data replicator and migrator WANdisco said it is the first independent software vendor to achieve AWS Competency Status in data migration.

Zerto is reprinting a short Gartner report: “What I&O leaders need to know about Disaster Recovery to the cloud.” The report predicts that by 2023, “at least 50 per cent of commodity-server workloads still hosted in the data centre will use public cloud for disaster recovery.” It’s an eight-minute read and you can get it with minimal registration.

IBM storage boss Ed Walsh leaves the company

Ed Walsh, general manager of IBM’s storage business, is leaving the company, according to sources who say he is not joining a competitor.

An IBM spokesperson said: “Yes, we can confirm Ed Walsh is leaving IBM and we wish him well at his new opportunity.”

Ed Walsh

Walsh was hired to run IBM’s storage division in July 2016. He joined the company from Catalogic, a copy data management company, where he was president and CEO.

Walsh had worked for IBM before, joining via the company’s 2010 acquisition of Storwize, a storage array supplier, where he was also CEO.

IBM appointed a new CEO, Arvind Krishna, in April this year. The storage business within IBM recently recorded its third quarter of revenue growth, helped by z15 mainframe storage sales.

Western Digital loves flash more than hard drives

Western Digital is generating more revenue from flash and SSDs than from disk drives. Does this show that WD is losing out big time to Seagate in the key enterprise nearline HDD market? Or is it a forever thing?

It appears to be the latter, judging from CEO David Goeckeler’s comments on the Q4 earnings call yesterday: “We believe flash is the greatest long-term growth opportunity for Western Digital and is an area where we’ve already had a tremendous foundation with consumer cards, USB drives and client and enterprise SSDs.”

Our sister publication The Register has covered WD’s Q4 and full fiscal 2020 year results and we’ve dived into the numbers to see how disks and SSDs fared relative to each other.

WD shipped 23.1 million disk drives in the quarter, down 16.6 per cent from 27.7 million shipped a year ago. HDD revenue was $2.05bn, down 3.7 per cent on the year-ago $2.13bn. Reflecting rising per-drive capacities, total HDD exabytes shipped – about 110 EB – were up 12 per cent on the year, according to Wells Fargo senior analyst Aaron Rakers.

He told subscribers: “WD’s Nearline HDD capacity shipments grew ~30 per cent y/y, or leaving us to estimate flat q/q. We estimate nearline HDD capacity shipped at ~76 EBs, which compares to Seagate shipping 79.5 EB (+128 per cent y/y), and Toshiba shipping 8.4 EB (-14 per cent y/y) in the June quarter. We estimate WD’s non-nearline HDD capacity shipped at ~33 EBs, or -15 per cent y/y and -5 per cent q/q.“

So it looks like WD was not a beneficiary of Toshiba’s calamitous quarter in hard drives, which saw the company take a big hit from an outage at its Philippines disk drive plant.

Western Digital Q4 fy2020 flash and HDD revenue chart; two diverging curves.

WD says 14TB sales are doing well, but the HDD business has not been helped by Seagate’s dominance of the 16TB nearline disk business. WD was late to market with 16TB and 18TB drives and aims to remedy this by ramping 18TB drive production to one million drives in the next quarter.

WD expects cloud nearline disk drive buyers to slow purchases this quarter: “We feel like we’re definitely going into a digestion phase,” Goeckeler said. “If we look at — we’re coming off of three really strong quarters of exabyte shipment and the demand signals we’re getting are going to be – are a little bit down for the next quarter or two.”

Let’s now take a quick peek at WD’s flash / SSD business, where revenue of $2.24bn was 48.6 per cent higher than the year ago quarter ($1.51bn).

Goeckeler noted in the earnings call that “healthy demand for our flash-based notebook solutions drove record revenue in our OEM end market,” and added: “Enterprise SSD revenue in the quarter grew nearly 70 per cent sequentially and our revenue share increased to the low double digits. This will remain an important area of focus within our flash portfolio.”

To conclude, the disk drive business is now secondary to flash, and NAND wafers drive more revenue for WD than disk drive platters.

Fungible Inc: Why you need DPUs for data-centric processing

A flurry of add-on processor startups think there is a gap between X86 CPUs and GPUs that DPUs (data processing units) can fill.

Advocates claim DPUs boost application performance by offloading storage, networking and other dedicated tasks from the general purpose X86 processor.

Suppliers such as ScaleFlux, NGD, Eideticom and Nyriad are building computational storage systems – in essence, drives with on-board processors. Pensando has developed a networking and storage processing chip which is used by NetApp, while Fungible is building a composable security and storage processing device.

Pradeep Sindhu.

A processing gap has opened up, according to Pradeep Sindhu, Fungible co-founder and CEO. GPUs have demonstrated that certain specialised processing tasks can be better carried out by dedicated hardware which is many times faster than an X86 CPU. SmartNICs – host bus adapters and Ethernet NICs with on-board offload processors for TCP/IP and data plane acceleration – have educated us all that X86 servers could do with help in handling certain workloads.

In a press briefing this week, Sindhu outlined a four-pronged general data centre infrastructure problem.

  1. Moore’s Law is slowing and the X86 speed improvement rate is slowing while data centre workloads are increasing.
  2. The volume of internal, east-west networking traffic in data centres is increasing.
  3. Data sets are increasing in size as people realise that, with AI and analysis tasks, the bigger the data set, the better the data. Big data needs to be sharded across hundreds of servers, which leads to even more east-west traffic.
  4. Security attacks are growing.

These issues are making data interchange between data centre nodes inefficient. For instance, an X86 server is good at application processing but not at data-centric processing, Sindhu argues. The latter is characterised by all of the work coming across network links, by IO dominating arithmetic and logic, and by multiple contexts which require a much higher rate of context-switching than typical applications.

Also, many analytics and columnar database workloads involve streaming data that needs filtering for joins and map-reduce sorting – data-centric computation, in other words. Neither GPUs nor X86 nor ARM CPUs can do this data-centric processing efficiently. Hence the need for a dedicated DPU, Sindhu said.
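To make “data-centric” concrete: the work Sindhu describes looks less like long-running arithmetic and more like high-rate streaming transforms – filter, hash-partition for a join, partial aggregation. A toy sketch of that shape (purely illustrative; a DPU would do this in hardware against data arriving straight off the network, not in host Python):

```python
from collections import defaultdict
from typing import Iterable, Tuple

def filter_shuffle_reduce(records: Iterable[Tuple[str, int]],
                          min_value: int, num_shards: int):
    """Stream records, drop those below a threshold, hash-partition the rest
    and build per-shard partial sums -- the filter/shuffle/reduce shape of a
    distributed join or map-reduce job."""
    shards = [defaultdict(int) for _ in range(num_shards)]
    for key, value in records:
        if value < min_value:            # filter
            continue
        shard = hash(key) % num_shards   # hash-partition (the 'shuffle')
        shards[shard][key] += value      # partial aggregation
    return shards

print(filter_shuffle_reduce([("a", 5), ("b", 1), ("a", 7)], min_value=2, num_shards=2))
```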


DPUs can be pooled for use by CPUs and GPUs and should be programmable in a high-level language like C, according to Sindhu. They can also disaggregate server resources such as DRAM, Optane and SSDs and pool them for use in dynamically-composed servers. Fungible is thus both a network/storage/security DPU developer and a composable systems developer.

It’s as if Fungible architected a DPU to perform data-centric work and then realised that composable systems were instantiated and torn down using data-centric instructions too.

A Fungible white paper explains its ideas in more detail and a video provides an easy-to-digest intro.

We understand the company will launch its first products later this year and we look forward to finding out more about their design, performance and route to market.

US government agencies get Azure NetApp Files

Microsoft has added Azure NetApp Files to US government region data centres, making it available to federal, state and local agencies.

Azure NetApp Files is a fully-managed Microsoft Azure cloud service based on NetApp ONTAP systems located in Azure’s regional data centres. It is currently available in Canada and the USA, in seven of Azure’s 60+ regions, with North Central US region availability expected by the end of 2020.

Deployment of Azure NetApp Files to government data centre regions started in Virginia, followed by Arizona and Texas deployments. That adds three more regions to the seven listed by Microsoft.

Anthony Lye.

Anthony Lye, NetApp’s SVP and GM for the Cloud Data Services business unit, provided a quote. “Azure NetApp Files is a significant Azure differentiator, and a game changer for all organisations that want to fully harness the value of cloud.”

Lye said in a February NetApp blog: “Microsoft and NetApp co-created Azure NetApp Files … NetApp has an inside track at Microsoft. We are, for all intents and purposes, an engineering team within Microsoft.” 

Azure NetApp Files is delivered as an Azure first-party service, which allows customers to provision workloads through their existing Azure agreement. No additional NetApp ONTAP licensing is required.

Azure also has its own basic Azure Files service with SMB file shares. There are no listed Azure partnerships with other filesystem suppliers such as Qumulo or WekaIO. Scality is developing a scale-out file system for Azure’s Blob object storage service.

Azure Government has US Federal Risk and Authorization Management Program (FedRAMP) High Baseline level authorisation, meaning it meets security requirements for high-impact unclassified data in the cloud.

ScaleFlux computational storage makes Percona MySQL faster

Percona’s open source MySQL database runs faster with ScaleFlux computational storage SSDs. And they have the benchmarks to prove it.

The companies compared the ScaleFlux CSD 2000 with an Intel DC P4610 SSD and found that the CSD 2000 was up to three times faster in certain workloads, such as those involving compression.

In their test results document, published today, Percona and ScaleFlux state: “Our testing verifies ScaleFlux’s claims that if a customer has a write-heavy or mixed read/write workload the ScaleFlux drive can improve their performance.”

This sounds attractive, but will customers bite?

The basic idea of computational storage is this: dedicated hardware in storage drives performs operations such as compression, decompression and video transcoding faster than the drive’s host server can.

For it to succeed, any computational storage device needs to be faster than using the same host server with ordinary (non-computational) SSDs. So ScaleFlux passes this table stakes test. But is this enough to overcome any reluctance by customers to rely on single-sourcing computational storage drives to get the performance they want? Traditionally, customers prefer multiple sources for components such as disk drives and SSDs, to avoid supplier lock-in.

And then there’s the price performance angle. The extra queries per second delivered by the CSD 2000 have to be affordable and worth the money. Expect a spreadsheet blizzard from computational storage suppliers trying to prove their case.

The CSD 2000 benchmarks

The CSD 2000 includes hardware GZIP compression that effectively doubles capacity and delivers 40 to 70 per cent more IOPS than similar NVMe SSDs on mixed read and write OLTP workloads, according to Scaleflux.

For its benchmarks, Percona and ScaleFlux compared a 4TB CSD 2000 to a 3.2TB Intel DC P4610 SSD. The P4610 is a 64-layer, TLC, 3D NAND SSD with an NVMe interface running across a PCIe gen 3.1 x4 lane connection.

The CSD 2000 is also an NVMe/PCIe gen 3 x4 device, using 4TB of TLC 3D NAND, with an FPGA doing storage computations, such as compression, inside the drive’s enclosure. As well as compression it has a customisable database engine accelerator. There are more details in its datasheet.

Percona’s database has a doublewrite buffer to protect against data loss and corruption: a set of data pages is pulled from the buffer pool and held in the doublewrite buffer until the pages are correctly written to the final database tablespace. The CSD 2000 has an atomic write feature which makes the doublewrite buffer redundant, so it can be turned off, speeding database writes. Intel’s P4610 has no atomic write feature.
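Turning the doublewrite buffer off is a standard MySQL/Percona Server setting rather than anything drive-specific, so the gain needs no application changes. A minimal my.cnf sketch, on the assumption that the underlying drive (here the CSD 2000) guarantees atomic page writes:

```ini
# my.cnf -- sketch only; disable InnoDB's doublewrite buffer when the
# underlying drive guarantees atomic page writes (as the CSD 2000 does).
[mysqld]
innodb_doublewrite = 0
```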

Read-only, write-only and mixed read/write workloads were tested with Percona v8.0.19 using a Sysbench data schema. The Intel drive was tested with the doublewrite feature enabled and, separately, with server-based compression. The ScaleFlux drive was tested with the doublewrite feature turned on and, separately, turned off.

In general, the ScaleFlux drive was faster than the Intel SSD, especially with the doublewrite buffer turned off. Here is a chart of mixed read/write results to show this, using a 2.5TB, 540-table data set:

As the number of threads increases beyond 8 up to 256 the ScaleFlux advantage grows, with the Atomic Write becoming significant at 64 threads and up (bright blue bars). Turning off Percona’s DoubleWrite allowed the CSD 2000 to boost its performance from just over 2,000 queries per second at 64 threads to more than 2,700.

At the 64 to 128 thread level, the ScaleFlux Drive is up to three times faster than the Intel drive (light grey bars) when the host server is doing the compression in software. 

Read more in a blog.

Panasas auto-tunes filesystem storage for different IO patterns

HPC workloads vary from using very large files, through medium-sized ones, to myriad small files and mixed workloads. An HPC storage system must be tuned to match the IO patterns of its main workloads – unless, of course, it is bought for one specific type of file IO.

Panasas, the HPC file storage supplier, has devised a workaround with a PanFS update that auto-tunes to speed file access across different IO types. In its announcement today, Panasas claims all other parallel file systems require clumsy tiering and/or manual tuning to compensate for specific workload characteristics and changes, which leads to inconsistent performance and higher admin costs.

Panasas’s new feature, called Dynamic Data Acceleration (DDA), uses the same tiering and caching ideas but applies them automatically, delivering consistently faster access with no need for manual tuning.

Panasas ActiveStor Ultra.

With DDA, NVMe SSDs store metadata, lower-latency SATA SSDs store small files, and large files are stored on low-cost, high-bandwidth disk drives. Storage-class memory in the form of NVDIMMs holds transaction logs and internal metadata, and the most recently read or written data and metadata are cached in DRAM.

Small file IO has more metadata accesses – such as file and folder lookups – as a proportion of an overall file IO than large file IO, where the data portion of the IO can be many times larger than the metadata portion.

PanFS dynamically manages the movement of files between SSD and HDD, and uses fast NVMe SSDs, NVDIMM and DRAM to accelerate file metadata processing. That speeds small file IO much more than it would large file IO, where the metadata processing occupies a much smaller percentage of the overall file IO time.
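As a rough mental model of the placement policy described above – purely illustrative pseudologic, not PanFS code, with a made-up small-file threshold – the decision looks something like this:

```python
def dda_placement(component: str, size_bytes: int,
                  small_file_threshold: int = 64 * 1024) -> str:
    """Illustrative sketch of the tier choices Dynamic Data Acceleration is
    described as making; the threshold value here is invented."""
    if component == "metadata":
        return "NVMe SSD"      # metadata on the lowest-latency flash
    if size_bytes <= small_file_threshold:
        return "SATA SSD"      # small files on lower-cost flash
    return "HDD"               # large files on high-bandwidth, low-cost disk

print(dda_placement("data", 4 * 1024))       # -> SATA SSD
print(dda_placement("data", 10 * 1024**3))   # -> HDD
```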

DDA is included in the latest version of PanFS, which is now generally available to supported customers.

IBM adds cloud tiering to boosted memory mainframe storage array

IBM has improved the performance and functionality of its mainframe-focused storage array and virtual tape library (VTL) products.

The DS8900F flash array gets a 70 per cent increase in memory cache size from 2TB to 3.4TB, enabling the consolidation of workloads into a single DS8900F system. For example, IBM cites 8 x DS8870 arrays being consolidated into a single DS8950F.

David Hill, a senior analyst at Mesabi Group, provided a prepared quote for IBM: “IBM has done a good job in providing a comprehensive portfolio of storage for IBM Z solutions to address every enterprise data storage requirement and meet the demands of a modern, agile and secure to the core infrastructure.” 

The DS8910F can be integrated with the latest z15 model T02 mainframe and LinuxONE III model LT2 product, which were announced on April 14.

The DS8900F now integrates more closely with the TS7770 VTL, a disk array with a tape library interface:

  • The DS8900F can compress data before sending it across a TCP/IP link to the TS7770, which is configured as an object target and can store up to 3 times more data.
  • SP 800-131A-compliant in-flight encryption is added to the DS8900F for sending data to TS7770s.
  • Transparent Cloud Tiering (TCT) is introduced to the DS8900F and TS7770, automating data movement to and from the cloud, and reducing mainframe utilisation by 50 per cent for this work.
  • The DS8900F’s Data Facility Storage Management Subsystem (DFSMS ) uses TCT to create full volume backups to the cloud which can be restored to any DS8900F system. 

A disaster recovery feature involves data sets copied from a grid of TS7770s and sent to cloud pools where they are managed by DFSMS. These pools can reside in IBM’s Cloud, AWS S3, IBM Cloud Object Storage on-premises, and RStor. Version retention time is enabled within each cloud pool. The data sets can be restored to an empty TS7770 outside the original grid.

Minor storage news from IBM has SAS interfaces being added to TS1160 tape drives and the TS4500 tape library getting support for the TS1160 drives. 

You can read an IBM blog about these announcements and a wider scope blog for additional mainframe-related news.

Why Kaminario changed its name to Silk

Four of the most important domesticated silk moths. Top to bottom: Bombyx mori, Hyalophora cecropia, Antheraea pernyi, Samia cynthia. From Meyers Konversations-Lexikon (1885–1892)

Silk is the new name for Kaminario, an on-premises all-flash array startup that has transformed itself over the last three years into a cloud block storage software company.

Derek Swanson.

Blocks & Files took the opportunity to interview Derek Swanson, Silk CTO, about the company’s transition.

Blocks & Files: Why did Kaminario change its name to Silk?

Swanson: Lots of reasons. Flash and the public cloud are revolutions. … DevOps and the public cloud are forming a new paradigm … More and more enterprises are looking at putting storage in the cloud. Everyone wants to be cloudy, with a legacy footprint. We wanted to acknowledge what’s going on as we are now a cutting edge cloud storage company and not just a legacy, on-premises storage supplier.

Also we’re not a startup anymore. We have a 10-year old software stack that’s been fully ported to the public cloud. It runs with a consistent 300-400 microsecs latency and it’s not using a caching engine. … That contrasts with NetApp and its 10 to 30 millisecs performance in the cloud. … Everything is done in DRAM. … The user experience should be the same on-premises and in the public cloud.

We do all our development on GCP and have done for many years now. Our GCP deployment happened in August 2019. Now we have AWS as well with Azure following in the next few months.

Our controllers are active:active and storage performance scales by adding another compute engine.

Blocks & Files: What about high-availability in the public cloud?

Swanson: Tier one apps running in the public cloud need 100 per cent availability across zones and regions. We do snapshots but we need synchronous writes to multiple zones and regions. That requires write-many snapshots and synchronous replication. We have a robust solution coming later this year.

Blocks & Files: Does VisionOS have predictive analytics? What do they do?

Swanson: VisionOS sends telemetry to Clarity in GCP and we do predictive analytics for performance, capacity, hardware and software failures. We push out patches and firmware upgrades. A minor hardware event pattern can help predict a failure. We initiate pro-active support tickets and up to 92 per cent of all tickets are initiated by us, not customers. In fact we tell the customers.

Blocks & Files: Does Silk think there is a need for a unified file and object storage resource?

Swanson: We provide a dedicated block store and front-end it with a unified file/object head today. For us to develop a unified file and object offering would be cost-prohibitive. We use other people’s front ends; Ceph, Gluster and Windows Storage Server, for example.

Blocks & Files: Will all storage suppliers have to go to a homogeneous, unified on-premises and public cloud offering like Kaminario’s?

Swanson: The problem is that the big boys sell hardware. Nutanix took a huge hit by exiting the hardware business. The big storage guys couldn’t do it; there would be too big a revenue hit. Co-locating hardware in the public cloud is a short-term fix.

Blocks & Files: What do you think of Pure Storage’s Cloud Block Store public cloud implementation?

Swanson: It’s not a scalable architecture, using active:passive controllers. To provide scalable performance you must have dedicated processing power and memory. 

Pure uses NVRAM as a write cache with NVMe SSDs to commit writes. It’s very fast but it’s not DRAM – and you can’t do it in the cloud. Pure’s storage performance is slower in the cloud than what they get on-premises. It’s a limitation of their architecture. Fixing this would need a rewrite of its Purity OS.

Blocks & Files: How is Kaminario’s move to the hybrid, multi-cloud doing?

Swanson: Customers like it, particularly the on-premises migration element. We’re in a lot of proof-of-values (concepts). A surprising number of cloud-native customers are finding their public cloud storage performance is limited. They have no shared storage and can only get more performance by buying excess compute and networking in the cloud, meaning extra costs.

So we now have interest from cloud customers and Google as we solve storage performance problems in the cloud. Customers don’t need to rewrite their apps. We are partnering and co-selling with Google and hope to do the same with AWS.

Net:Net

The metamorphosis into Silk has taken Kaminario two and a half years. In January 2018, the company made its Vision array OS and Clarity analytics software available on a pay-as-you-go basis to cloud service providers. It exited the hardware business at the same time, supplying its software to run on certified hardware built by Tech Data. This hardware and Kaminario’s software became available on subscription in June 2019.

Kaminario started talking about a single storage data plane spanning multiple clouds at the end of last year. And in April, the company announced the Vision OS roadmap, with porting to AWS, Azure and the Google Cloud Platform (GCP).

Google is becoming an important partner. If Swanson is right about Silk’s scalable and consistent performance advantage in the cloud then Silk could grow at the expense of all-flash array vendors, such as NetApp and Pure, with cloudy ambitions.

Factory outage torpedoes Toshiba disk drive shipments

Toshiba disk drive shipments fell by half in the second 2020 quarter – an outage at its Philippines disk manufacturing plant is thought to be to blame.

Wells Fargo senior analyst Aaron Rakers pointed to the “severe decline” in a note to his subscribers. “Toshiba shipped ~9.55 million total HDDs during the June quarter, which represents a 51 per cent y/y decline and down 32 per cent q/q.” 

Toshiba Information Equipment (Philippines) Inc., Binan.

Rakers doesn’t say what kind of outage affected the plant. And there is no mention of Covid-19 factors, such as lockdown, that may have affected Toshiba’s HDD ship numbers.

Toshiba’s Philippines plant produces nearline, 2.5-inch, and 3.5-inch desktop and surveillance drives. Rakers’ unit ship numbers for each category in the quarter are:

  • Nearline – c941,000 – down 50 per cent y/y and down 32 per cent q/q
  • 2.5-inch – c5.5 million – down 58 per cent y/y and down 30 per cent q/q
  • 3.5-inch desktop – c1.82 million – down 35 per cent y/y and up 23 per cent q/q

Within the 3.5-inch desktop category Toshiba shipped 380,000 surveillance drives, down 71 per cent q/q from the prior quarter’s 1.33 million drives, with no y/y comparison available.

Two of Rakers’ charts show these declines:

Toshiba’s shipments measured in exabytes reflect the same changes. For example, the company shipped c8.38 EB of nearline capacity, down from the c16.8 EB shipped in the first quarter. Seagate’s nearline capacity shipped in the second quarter was almost five times bigger at 79.5 EB.

Seagate last week reported a 128 per cent year-on-year rise in nearline disk sales for the second quarter.

Backblaze gets aggressive with AWS S3 egress fees


Backblaze has launched an S3-compatible API for its B2 cloud storage, which it claims is 75 per cent cheaper than AWS S3. Not convinced? The company is offering to pay AWS S3 egress fees for customers that migrate more than 50TB of data and keep it in Backblaze’s B2 Cloud Storage for at least 12 months.

Backblaze added S3 API support in May this year and started a 90-day beta test in which, it says, thousands of customers migrated petabytes of data. During the trial, Backblaze recorded interactions with more than 1000 unique S3-compatible tools from multiple companies. The results show “customers can point their existing workflows and tools at Backblaze B2 and not miss a beat”.
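In practice, pointing existing S3 tooling at B2 amounts to swapping the endpoint and credentials. A hedged boto3 sketch – the endpoint URL, bucket name and key values here are placeholders to be replaced with the details shown in your Backblaze account:

```python
import boto3

# Placeholders: substitute the S3 endpoint shown for your B2 bucket and your
# application key pair from the Backblaze console.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-002.backblazeb2.com",
    aws_access_key_id="<application-key-id>",
    aws_secret_access_key="<application-key>",
)

s3.upload_file("backup.tar.gz", "my-b2-bucket", "backups/backup.tar.gz")
resp = s3.list_objects_v2(Bucket="my-b2-bucket", Prefix="backups/")
print(resp["KeyCount"])
```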

Gavin Wade, CEO of CloudSpot, a Backblaze customer, provided a prepared quote: “With Backblaze, we have a system for scaling infinitely. It lowers our breakeven customer volume while increasing our margins, so we can reinvest back into the business. My investors are happy. I’m happy. It feels incredible.” 

Backblaze storage pods; the red-coloured enclosures.

CloudSpot faced paying 9 cents/GB in egress fees to move 700TB of data from AWS S3 to Backblaze. The company moved that data across in less than six days; AWS storage costs had been eating into profits as the business, and the amount of stored data, grew.

Backblaze’s B2 Cloud Storage is already popular, with the amount of data in its vaults growing at 25 per cent month on month. It costs $0.005/GB per month for data storage and $0.01/GB to download data. There are no costs to upload data.
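The arithmetic behind the roughly 75 per cent saving is easy to check for any capacity. A quick sketch, comparing B2’s published $0.005/GB-month rate with an assumed ~$0.021/GB-month S3 standard-tier figure (AWS pricing varies by region and volume, so treat that number as an approximation):

```python
def monthly_storage_cost(capacity_gb: float, price_per_gb_month: float) -> float:
    return capacity_gb * price_per_gb_month

capacity_gb = 100_000  # 100TB
b2_cost = monthly_storage_cost(capacity_gb, 0.005)   # $500
s3_cost = monthly_storage_cost(capacity_gb, 0.021)   # ~$2,100, approximate list price
print(f"B2 ${b2_cost:,.0f}/month vs S3 ${s3_cost:,.0f}/month "
      f"-> {1 - b2_cost / s3_cost:.0%} saving")
```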

A final thought. AWS, with its enormous scale, must be making money hand over fist from S3 if Backblaze can offer this service at one quarter of the price. This suggests the cloud giant has plenty of scope for price cuts. In the meantime, cloud storage suppliers such as Datto and OVHcloud may be encouraged to follow Backblaze’s lead.