
Enterprise SSDs cost ten times more than nearline disk drives

Nearline drives remain the sweet spot in the HDD market – and a key reason is that the 10x price premium commanded by enterprise SSDs remains stubbornly high, with little sign of closing.

Update: Kioxia enterprise SSD market share data added. 28 August 2020.

SSDs are cheaper to operate than disk drives, needing less power and cooling, and are much faster to access. They are widely expected to mass-replace hard drives for mainstream enterprise workloads – at some point. But when? Wells Fargo analyst Aaron Rakers thinks demand for enterprise SSDs will soar when their prices drop to no more than five times those of nearline HDDs. However, there is little sign of this happening in current price trends.

By contrast, SSDs are killing disk drives in the PC and client device storage market.

Rakers has pored over the Q2 2020 SSD and HDD numbers collated by TrendForce, a market research firm. Nearline disk drive capacity shipped, he reveals, was about 165.6 EB – almost seven times the enterprise SSD total. Enterprise SSDs cost roughly $185/TB and nearline HDDs about $19/TB, meaning enterprise SSDs carry a 9.7x price premium in $/TB terms. And that premium is staying constant, as this chart shows:
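The premium falls straight out of the two $/TB figures, and the same arithmetic shows how far prices must move to hit Rakers's 5x inflection point (a quick sketch using the approximate prices quoted above):

```python
# Approximate Q2 2020 $/TB figures quoted above.
enterprise_ssd_per_tb = 185.0
nearline_hdd_per_tb = 19.0

premium = enterprise_ssd_per_tb / nearline_hdd_per_tb
print(f"Enterprise SSD premium: {premium:.1f}x")   # ≈ 9.7x

# Rakers's demand-inflection threshold is a 5x premium. At today's
# nearline HDD price, that implies this enterprise SSD price:
ssd_price_needed = 5 * nearline_hdd_per_tb
print(f"SSD price needed for a 5x premium: ${ssd_price_needed:.0f}/TB")
```

In other words, enterprise SSD $/TB would need to roughly halve, with nearline HDD prices standing still, before the 5x threshold is reached.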

Rakers estimates the client SSD price premium over client HDDs is 4.9x. A chart shows how the client SSD price premium has dropped since 2012:

Why the different price premiums? We think this is in part related to capacity. Client storage drives are typically sub-5TB devices and speed of access is prized more than capacity. Also, low-access rate client backup data can be stored in the public cloud. Demand for larger capacity client drives is falling because of these two factors. 

Enterprises and hyperscalers have large volumes of nearline storage data and per-drive capacities are now growing to 16TB, 18TB and – soon – 20TB. Even greater capacities are coming down the line. This means the $/TB cost of nearline drives will continue to fall, keeping pace with the declining $/TB cost of enterprise SSDs.

Market share

Samsung led the overall NAND SSD market in 2Q 2020 in revenue share with 31 per cent. The Kioxia-Western Digital joint fab actually shipped more – with the two companies collectively taking 33 per cent revenue share. Micron, SK Hynix and Intel had 13.7 per cent, 12 per cent and 11 per cent, respectively.

TrendForce estimates 13.872 million enterprise SSDs shipped in the second quarter, up 84 per cent year on year. Capacity shipped was circa 24.3 EB, up 116 per cent and accounting for about a quarter of all shipped flash capacity. PCIe enterprise SSDs accounted for 15.2 EB, up 179 per cent year on year and 62.6 per cent of the total enterprise SSD capacity. SATA accounted for around 5.5 EB and SAS for 3.7 EB.

Enterprise SSD revenue in Q2 was about $4.5bn, up 95 per cent Y/Y. Supplier capacity market shares were – roughly – Samsung at 38 per cent, Intel at 24 per cent, SK Hynix at 15 per cent, Western Digital 12 per cent and Micron with 3.5 per cent. Kioxia said its market share was 5.5 per cent.

There were some 67.45 million client SSDs shipped, bringing in around $3.9bn. Client disk drive revenue was about $1.24bn. There was about 27.4 EB of client SSD capacity shipped, slightly more than the 24.3 EB of enterprise SSD capacity shipped.

Supplier unit ship market shares were Western Digital at nearly 25 per cent, Samsung at about 22 per cent, SK hynix at 12.5 per cent, Kioxia 12 per cent and Micron on six per cent.

This week in storage with Lenovo, NetApp, and more

This week sees Lenovo and NetApp bundles, edge-focused TrueNAS boxes, and faster-than-ever OpenEBS storage from MayaData. We’ll start with Lenovo and NetApp getting in a bundle.

Lenovo and NetApp expand bundled offerings

Lenovo has teamed up with NetApp to sell server and ONTAP storage bundles to UK small and medium-sized businesses. Kristian Kerr, NetApp VP for the EMEA Partner Organisation, said: “The data and storage market in this segment is a massive revenue opportunity that we can best address together.”

The bundles are based on Lenovo’s all-flash storage portfolio, including the ThinkSystem DM Series powered by NetApp ONTAP and the ThinkSystem DE Series powered by SAN OS, NetApp’s software for its E-Series arrays.

iXsystems gets edgy

iXsystems has launched the TrueNAS Mini X storage appliance for edge deployments. The appliance is positioned above the company’s Mini E system and uses the same TrueNAS Core v12 software as the company’s top-of-the-range M60 NAS filer, also announced this week.

The Mini X packs up to 85TB of HDD and SSD capacity, accessed across up to 10GbitE network links, and the chassis has seven hot-swap drive bays. There are two models: the Mini X and the Mini X+.

The Mini X has 4 x RJ45 GbitE ports, four Atom CPU cores, 16GB to 32GB of DDR4 ECC memory, USB 3.1 connectivity and an RJ45 IPMI remote management interface. The Mini X+ spec differs with two RJ45 10GbitE ports, eight Atom cores and 32GB-64GB of DDR4 ECC memory.

Pricing starts at $699 for the Mini X. A 70TB TrueNAS Mini X+ retails for under $3500 and up to 50TB of SSDs can be added for extra performance.

The Mini X and X+ systems can be purchased in fixed configurations at Amazon, custom configured at www.truenas.com, or via any TrueNAS partner.

OpenEBS goes faster with MayaStor

OpenEBS with a MayaStor software engine is faster than Portworx, Gluster and Ceph on mixed read and write IOPS, according to test runs by Jakub Pavlik. He is the director of engineering at Volterra, a cloud services startup.

MayaStor is a cloud-native storage engine from MayaData. It can use NVMe SSDs and is included as an option in OpenEBS open source software. OpenEBS provides container-attached storage to Kubernetes-orchestrated containers.

The software comprises a series of containers able to deliver per-workload storage. Here is Pavlik’s chart, enlarged:

News shorts

Acronis has announced True Image 2021 for home users, prosumers, and small businesses. It includes real-time anti-malware protection, on-demand anti-virus scans, web filtering, and videoconference protection.

Asigra has announced the general availability of Asigra Cloud Backup with Deep Multi-Factor Authentication. Deep MFA provides mission-critical layers of protection to secure policy settings and control to prevent backup data deletions or malicious encryption caused by malware (including ransomware), competitors or human error.

Datadobi is working with Melillo Consulting to provide data migration for Melillo’s healthcare customers. The two have moved over 1.2PB of data already for healthcare companies. One migration project, set to take two years, was completed in less than three months using Datadobi software.

Gajen Kandiah.

Hitachi Vantara has recruited Gajen Kandiah from Cognizant as its CEO. His predecessor Toshiaki Tokunaga retains his role as chairman of the Board. Kandiah blogs that Hitachi V will continue to focus on selling storage as part of its overall intelligent industry systems business, saying this makes Hitachi V unique. 

Kasten, a Kubernetes backup vendor, has updated the K10 Enterprise Data Management Platform with more automation, better infrastructure portability and improved self-service options. And in not very related news, the company has published a downloadable “Phippy in Space” comic eBook to explain its protection features for containerised storage.

Nebulon is developing a Kubernetes CSI driver for its cloud-defined, DPU-enhanced server SAN storage. Nebulon ON uses cloud analytics to let Kubernetes admins self-service provision and manage their entire Kubernetes stack while using the workflows they are familiar with in the public cloud. Nebulon’s single GraphQL-based API endpoint and template-driven engine lets cluster administrators configure a Kubernetes installation and then remain completely hands-off while application developers self-serve infrastructure. Read more in a blog.

NordSec has added a cloud storage option for Nordlocker file encryption. Files get encrypted with a simple drag and drop. Users can keep encrypted files on their device or move them to the cloud. Cloud storage synchronises user data across multiple devices.

Nutanix has enlisted Intel’s help to better support the chip giant’s technologies in its software stack. The two companies have set up an ‘Innovation Lab’ to do the heavy lifting, which is “actively working on critical projects to better equip our customers and [has] already made progress in several priority areas,” Tarkan Maner, chief commercial officer at Nutanix, said in this week’s launch announcement.  The companies are to jointly establish physical labs – with both on-site and remote access – but have not indicated locations at time of writing.

Storage Made Easy‘s Enterprise File Fabric can now launch directly from Azure Marketplace.  The UK company says its data management software unifies siloed filesystems and object storage including Azure Files and Azure Blob Storage into a single easily managed infrastructure.

StorMagic this week announced that SvSAN’s new Witness as a Service (WaaS) is available through StorMagic Cloud Services. WaaS delivers 100 per cent uptime for edge and small data centre environments and supports 1,000 locations per instance. SvSAN now also includes software RAID-10, to protect customers’ edge servers that cannot accommodate onboard hardware RAID.

Parallel file system software shipper WekaIO has added a Kubernetes CSI plug-in to its WekaFS software. This is downloadable from GitHub.

NVMe of the People: WD launches superfast My Passport SSD

Western Digital has introduced an NVMe version of the WD My Passport portable SSD drive that reads data at 1,050MB/sec and writes at 1,000MB/sec – almost twice the speed of its immediate predecessor.

The palm-sized WD My Passport SSD comes in 500GB, 1TB and – soon – 2TB capacity points and plugs into a PC or laptop. The drive connects to its host across a USB 3.2 Gen-2 link with a USB-C cable and a USB-A adaptor.

A grey My Passport SSD.

The drive features AES-256 bit encryption. It comes with backup software to move stored files to a host or to a public cloud account but must be reformatted to work with Apple’s Time Machine backup.

The new drive joins the 400MB/sec My Passport Go in WD’s lineup. My Passport Go is also an SSD but features a wrap-around rubber bumper and USB 3.0 connectivity. It is sold in 500GB and 1TB capacities.

The prior My Passport SSD delivered speeds up to 550MB/sec through its USB Type-C connector and came in a combination silver and black case. Its capacity points were 256GB, 512GB and 1TB. The new model offers more capacity, more speed and more colours – grey, blue, red and gold. It is also drop-resistant to 6.5 feet.

The WD My Passport SSD is currently available worldwide in the 500GB and 1TB capacities, but only in grey, at select e-tailers and retailers. It has a five-year limited warranty and WD’s suggested prices are $119.99 for 500GB and $189.99 for 1TB. Additional colours and capacities will be available later this year.

For comparison, a 5TB My Passport portable product is a disk drive spinning at 5,400rpm with a Micro-USB Type-B connector. This is priced at $47 for 1TB and $125.99 for 5TB. For that you get 131MB/sec read speed, which is eight times slower than the NVMe My Passport SSD and four times cheaper at the 1TB capacity point. You get faster and drop-resistant terabytes with the My Passport SSD but you sure have to pay for it.

KumoScale adds online data migration to its storage bow

Kioxia America has added online data migration to its KumoScale flash box, to allow continued application access to data in drive maintenance windows.

Joel Dedrick, Kioxia America’s VP and GM for networked storage software, said in the launch announcement: “For mission-critical applications, downtime due to maintenance can be extraordinarily costly. Eliminating the need for maintenance windows is just one of the ways that we help our customers maximise the returns on their data centre investments.”

KumoScale storage software is based on NVMe-over-Fabrics and manages a JBOF (Just a Box of Flash) containing Kioxia NVMe SSDs. The commodity hardware array is made by Kioxia’s partners. Kioxia calls the result ‘software-enabled flash technology’.

The new features include online volume migration, in which a set of data is replicated from one SSD to another. Migration may be needed to reduce the IO load on the source drive or perform maintenance or to even out write access wear across a set of drives. The target SSD can be in the same KumoScale node or a connected one.

Kioxia America graphic

Normally, migration involves taking a drive offline and sending the volume to the target drive while it is disconnected from online access. Important applications may be quiesced or left waiting during this process.

With KumoScale’s online migration, write accesses to the source drive are redirected to the target drive while read accesses are left undisturbed. The volume data is moved across to the target drive. When migration is complete, read accesses to the source drive are remapped to the target drive and the source drive can be taken offline. There is no interruption of online access to the data volume during the process. 
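The redirect-then-remap flow described above can be modelled as a small state machine. This is our illustrative sketch, not Kioxia's implementation; the class name and block-map representation are invented:

```python
class OnlineMigration:
    """Illustrative model of online volume migration: new writes are
    redirected to the target drive, reads carry on against the source,
    and reads are remapped to the target once the copy completes."""

    def __init__(self, source_blocks, target_blocks):
        self.source = source_blocks   # block number -> data
        self.target = target_blocks
        self.migrating = True

    def write(self, block, data):
        # Writes are redirected to the target for the whole migration.
        self.target[block] = data

    def read(self, block):
        if self.migrating:
            # Reads stay on the source, except for blocks the host has
            # already rewritten (those now live on the target).
            return self.target.get(block, self.source.get(block))
        return self.target[block]     # post-migration: reads remapped

    def copy_and_complete(self):
        # Background copy of the remaining volume data; setdefault
        # avoids clobbering blocks rewritten during the migration.
        for block, data in self.source.items():
            self.target.setdefault(block, data)
        self.migrating = False        # source drive can now go offline
```

A host write during migration lands on the target, yet reads of untouched blocks keep hitting the source until `copy_and_complete()` flips the mapping – so the volume stays online throughout.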

Kioxia is actively adding features to KumoScale:

  • May 2020 – Thin provisioning added
  • April 2020 – snapshots and clones added
  • February 2020 – resilient provisioner service to map physical drives and nodes to host requests
  • September 2019 – support for Graphite and Prometheus telemetry frameworks
  • March 2019 – NVMe-oF transports extended with TCP/IP in addition to RoCE

Online migration now caps this list and expands KumoScale use cases to mission-critical apps that can’t suffer downtime.

GitHub

Kioxia has made the KumoScale API available via GitHub. The company said the API combines software flexibility, host control and flash native semantics to make flash easier to manage, quicker to deploy and more predictable in its behaviour.

With KumoScale, host applications access data in flash storage using the API and so avoid accessing the actual SSDs. The physical SSDs are virtualised by the API. This means the technology inside them can change without mandating change in the accessing applications.

The API also facilitates global wear management across the SSDs in a KumoScale box and can enable more consistent latencies by levelling out IO accesses across a population of drives.

Eric Ries, SVP for Kioxia America’s memory storage strategy division, provided this take: “We want the cloud computing and storage development community to benefit from all that this new technology has to offer. It opens the door for hyperscale storage developers to unleash the full potential of flash in their unique environments in a way that just isn’t possible with traditional storage methodologies.”

This ability for host server software to manage flash drive behaviour is a feature of Western Digital’s zoned flash drive initiative. Western Digital and Kioxia jointly operate NAND foundries making SSD chips.

The KumoScale API specification is available on the Kioxia America GitHub repository.

iXsystems supercharges OpenZFS storage

iXsystems claims it has the world’s fastest OpenZFS storage systems with the new M60 array and TrueNAS v12 software. With this week’s upgrade, TrueNAS users get twice the capacity headroom and an accelerated box to support storage operations.

Update: M60’s 15GB/sec bandwidth updated to 20GB/sec sustained.

OpenZFS is an open source version of the ZFS file system, and is the basis of iXsystems’s FreeNAS and paid-for TrueNAS products. There are an estimated one million-plus installed FreeNAS and TrueNAS systems.

Morgan Littlewood

Morgan Littlewood, biz-dev SVP at iXsystems, said: “There has been increasing demand from midsize and enterprise organisations for higher price/performance storage solutions  … With the M60, we are leading the Open Storage revolution, both with high performance hardware and the increased performance of TrueNAS 12.0.”

iXsystems’ hybrid or all-flash M50 array, launched in April 2018, has one or two controllers with 40 cores max, 3TB of RAM (348GB/controller), 32GB of NVDIMM write cache and up to 12.8TB of NVMe flash read cache per controller. There are up to four 100GbitE ports and eight external 60-bay expansion shelves that support 504 drives to scale upwards of 9PB. Maximum flash capacity is 2PB raw.

TrueNAS M60 array

The M60 has much faster hardware – up to 768GB RAM/controller with 64 cores total, two x 32GB NVDIMM write cache/controller but with the same NVMe read cache as before. It has up to 8 x 100GbitE ports, double the previous total. The system supports 12 expansion chassis with up to 1,248 x 18TB drives and 20PB raw capacity maximum. The all-flash version supports 4PB maximum raw flash capacity.

iXsystems suggests the M60 can run at one million IOPS with a 20GB/sec sustained bandwidth, compared to the M50’s 800,000-plus IOPS. We don’t have a maximum bandwidth number for the M50.

TrueNAS v12

TrueNAS v12 unifies the software, documentation and branding of FreeNAS and TrueNAS. FreeNAS is renamed ‘TrueNAS Core’ and is available as an open source edition of TrueNAS. There are two supported TrueNAS editions: TrueNAS Scale and TrueNAS Enterprise. TrueNAS v12 runs on iXsystems hardware or existing customer hardware.

TrueNAS v12 provides Fusion pools with mixed SSDs and HDDs, putting metadata on flash. The ZFS subsystem gets asynchronous operations and vectorisation. Asynchronous write calls return control immediately while synchronous calls don’t return until they have been committed to storage. Vectorisation can speed processing. 
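The synchronous/asynchronous distinction can be illustrated with plain file IO – a generic sketch of the two write semantics, not ZFS internals:

```python
import os

def synchronous_write(path, data):
    # Does not return until the data is committed to stable storage:
    # fsync blocks the caller until the device acknowledges the write.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)

def asynchronous_write(path, data):
    # Returns as soon as the data is in the OS page cache; the kernel
    # flushes it to disk later, so the caller regains control sooner.
    with open(path, "wb") as f:
        f.write(data)
```

The asynchronous path trades durability-on-return for lower caller latency, which is why accelerating the asynchronous machinery inside ZFS pays off for throughput.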

The iSCSI, SMB and NFS areas have been optimised to increase performance by more than 20 per cent.

Security improvements include dataset encryption for remote replication, Key Management Interoperability Protocol (KMIP) support for drives and datasets, and two-factor admin authorisation. There are also API keys for TrueCommand, vSphere and other REST API systems.

The M60 and TrueNAS v12 are both generally available.

AWS spins up FSx for Lustre HDD options

AWS is adding HDD-based shared storage options to Amazon FSx for Lustre.

FSx for Lustre provides the open source, parallel access Lustre file system as a managed service which, AWS says, offers sub-millisecond latencies, up to hundreds of GB/sec of throughput, and millions of IOPS.

AWS has now given it cheaper high-performance disk drive options alongside existing flash-based storage. The company says throughput-focused workloads such as genome analysis, financial simulations, and seismic data processing are suitable for HDD storage.

SSD storage is a better fit for workloads that require low latency: SSDs provide sub-millisecond latency, while disk drives provide single-digit millisecond latency.

There are two HDD options. One has 12MB/sec throughput per TiB of capacity and the other has 40MB/sec. Both allow users to burst to six times these throughput levels.
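Because throughput on these HDD options scales linearly with provisioned capacity, aggregate numbers are simple arithmetic. A sketch using the per-TiB tiers above; the function name and the 100 TiB example are ours:

```python
def fsx_lustre_hdd_throughput(capacity_tib, per_tib_mbps, burst_factor=6):
    """Baseline and burst throughput (MB/sec) for an HDD-based FSx for
    Lustre file system, given the per-TiB throughput tier."""
    baseline = capacity_tib * per_tib_mbps
    return baseline, baseline * burst_factor

# A hypothetical 100 TiB file system on the 12 MB/sec-per-TiB tier:
base, burst = fsx_lustre_hdd_throughput(100, 12)
print(base, burst)   # 1200 MB/sec baseline, 7200 MB/sec burst
```

The same 100 TiB on the 40 MB/sec tier would give 4,000 MB/sec baseline and 24,000 MB/sec burst.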

All metadata operations use SSD storage. The HDD storage option can be helped by provisioning SSD caching to 20 per cent of the HDD capacity.

The HDD options reduce costs by up to 80 per cent and start at 2.5 cents per GB per month (in the US East, N. Virginia region).

HDD file systems for Lustre are available today in all regions where Amazon FSx is available.

NetApp aims to make ONTAP Kubernetes-native


An internal effort to containerise NetApp’s ONTAP is leading to a cloud-native version of the operating system.

NetApp’s Project Astra is focused on developing application data lifecycle management for Kubernetes-orchestrated containerised applications that run on-premises and in the public clouds.

Eric Han

NetApp’s Project Astra lead Eric Han this week outlined progress to date via a blog post.

He said Astra’s early-access users had a common interest in wanting to run a full set of workloads in Kubernetes where the right storage is always selected, the performance is managed, application data is protected, and where the Kubernetes environment spanned on-premises and the public clouds.

The first stage of the early-access program involved Project Astra managing data for Google Kubernetes Engine (GKE) clusters with storage from the Google Cloud Volumes Service. Astra looked at unified app-data management with backup and portability, and then added logging of storage and other events, with an intent to look at persistent workloads.

NetApp is now extending Astra to handle Kubernetes Custom Resources and Operators (operators are the interface code that automate Kubernetes operations). Custom Resources are extensions of Kubernetes’ API. Han said a firm lesson is that Astra needs to work with application ISVs who are writing extensions in the form of Operators.

He also writes: “We are working to accelerate our next release into Azure to support AKS and Kubernetes in AWS with EKS.”

Then he drops a quiet bombshell: “In AWS, Google Cloud, and Azure, the Cloud Volume Service (CVS) provides storage to virtual machines. So, as part of that evolution, the Project Astra team has been redesigning the NetApp storage operating system, ONTAP, to be Kubernetes-native.”

Cloud (Kubernetes)-native ONTAP

A Kubernetes-native ONTAP is intended specifically to run in a Kubernetes-orchestrated environment – a strict subset of the cloud-native environment – and so is able to use all of Kubernetes’ features.

We note that Kubernetes-native ONTAP could in theory run on any Kubernetes cluster, whether on-premises or in the public clouds.

Han blogs: “This containerised ONTAP now powers some of the key regions in the public cloud, where customers see a VM volume that happens to be backed by a microservice architecture.”

He asserts: “Users really want a common set of tools that work for stateless and stateful applications, for built-in Kubernetes objects and extensions, and that run seamlessly in clouds and on-premises.”

We can expect a public preview of Project Astra in the next few weeks or months.

Comment

Much, maybe most, of NetApp’s business comes from selling its own hardware running ONTAP software, deployed on-premises and paid for with perpetual licenses and support contracts. But by making ONTAP Kubernetes-native, the lock-in to NetApp hardware is removed.

If the various ONTAP data services were also turned into Kubernetes-native software NetApp’s customers could in theory run their complete NetApp environments on commodity hardware and in the public clouds, presumably on a subscription basis. It’s the cloud way.

MinIO tilts at AWS with object storage subscription service


Cloud-native object storage supplier MinIO has launched a subscription service – like Amazon but for private clouds.

Jonathan Symonds

“The MinIO Subscription Network experience looks and feels like it would on the public cloud,” MinIO CMO Jonathan Symonds said in a blog.

The commercial license brings enhanced, direct-to-engineering support, a one-hour SLA, access to a ‘Panic Button’, performance and optimisation audits, and diagnostic health assessments of customer deployments.

Enrico Signoretti, senior data storage analyst at GigaOM, provided an announcement quote: “MinIO has already established the object storage standards for performance and Kubernetes. By adding the Subscription Network experience to their recent features they seek to compete directly with AWS and the other public cloud providers.”

Seagate CIO Ravi Naik also chipped in: “The MinIO team continues to raise the bar on engineering excellence and willingness to work alongside customers to solve any issues. The simplicity of Subscription Network pricing and ease of use gives CIOs cost predictability and Tier I support.” 

Details

The license provides capacity-based pricing and monthly billing, unlimited customer seats for support calls, unlimited issues, annual architecture and performance reviews, up to five-year support on a particular release, guaranteed SLAs and security advisory notices.

There are two pricing tiers – Standard and Enterprise – which customers select based upon their SLA and legal requirements. Standard Tier is priced at $0.01 per GB per month and Enterprise Tier at $0.02 per GB per month. Pricing is month to month.

Capacity minimums start at 25TB and 100TB respectively. There is also a price ceiling, with no charges above 10PB for Standard and 5PB for Enterprise.
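A monthly bill under this scheme is easy to model. The sketch below is our reading of the published terms – in particular, we assume the capacity minimum acts as a billing floor – and the function name is invented:

```python
def minio_monthly_bill(capacity_tb, tier="standard"):
    """Sketch of MinIO Subscription Network monthly pricing.
    Assumes the capacity minimum is a billing floor and the ceiling
    caps billable capacity - our reading of the published terms."""
    tiers = {
        "standard":   {"rate_per_gb": 0.01, "floor_tb": 25,  "cap_tb": 10_000},
        "enterprise": {"rate_per_gb": 0.02, "floor_tb": 100, "cap_tb": 5_000},
    }
    t = tiers[tier]
    billable_tb = min(max(capacity_tb, t["floor_tb"]), t["cap_tb"])
    return billable_tb * 1000 * t["rate_per_gb"]   # TB -> GB, decimal

print(minio_monthly_bill(100))                  # 100TB Standard: $1,000
print(minio_monthly_bill(5_000, "enterprise"))  # 5PB Enterprise hits the cap
```

Note that both ceilings work out to the same maximum bill: 10PB at $0.01/GB and 5PB at $0.02/GB each cap at $100,000 a month.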

MinIO said its new Panic Button brings to bear the company’s entire engineering organisation within minutes – 24/7/365. Standard customers get one per year. Enterprise customers get an unlimited number.

MinIO’s software is open source, using the GNU Affero General Public License (AGPL) v3 license. This requires that full source code is made available to any network user of the AGPL-licensed work. MinIO’s commercial license provides exceptions to this obligation, protecting its customers’ own code.

According to MinIO, enterprises require commercial licenses where AGPL v3 is present, but subscribers still enjoy the benefits associated with open source – namely freedom from lock-in and freedom to inspect the source.

Portworx rolls out container storage update, boasts of sales momentum

Portworx, the container storage startup, has updated its software with more automation, backup and restore features, and performance improvements.

Portworx CTO and cofounder Gou Rao said in the launch announcement yesterday: “Enterprises fuelling their data transformation initiatives with Kubernetes cannot rely on traditional storage that fails to keep up with the scale and dynamism of Kubernetes environments, or cloud-native solutions that have yet to be proven at a global scale.”

Florian Buchmeier, DevOps engineer at Audi Business Innovation, provided a canned quote: “Portworx provides an enterprise-class alternative to the network-attached storage commonly available on the cloud but at one third the price and substantially higher performance.”

Portworx update

Portworx Enterprise 2.6 provides:

  • Node capacity rebalancing to avoid any one node getting over-burdened.
  • Portworx storage cluster continues to operate during temporary etcd outages – etcd is a key value store used as Kubernetes’ backing store for all cluster data.
  • Support for the K3s Kubernetes edge distribution with hundreds or thousands of endpoints.
  • Kubernetes application pods can now access proxy volumes – external data sources (e.g. NFS shares) from Portworx Volumes.

Portworx PX-Backup 1.1 adds the ability to selectively restore individual resource types from a backup instead of the entire application. It supports generic or custom Custom Resource Definitions (CRDs), through which users add their own objects to the Kubernetes cluster and use them like native Kubernetes objects.

Application CPU, memory and storage quotas can be applied at the namespace level, ensuring restored applications are placed on clusters with sufficient resources. V1.1 enables admins to create policies to back up Kubernetes cluster namespaces or applications, using wildcards for namespaces that get created later. Admins also get more metrics, such as the number of protected namespaces, size of backups and alert status for backups and restores.

PX-Autopilot for Capacity Management 1.3 adds GKE pool management support, auto pool rebalance, and GitOps-based approval workflow. These help admins reduce cloud storage spend.

Portworx has also been certified for use with the Cloudera Data Platform (CDP).

Portworx Enterprise 2.6 will be generally available on August 24. PX-Autopilot 1.3 will be generally available on August 31. PX-Backup 1.1 is available in Tech Preview from today.

Money talk

Portworx claims revenues in its second 2020 quarter were a record, without specifying numbers, and followed on from its previous record-setting first quarter.

The company said it is winning some big-ticket orders, without specifying names. One customer is running Portworx software in a production environment with more than 1,500 nodes and 90 Kubernetes clusters. The company thinks this is unmatched in the industry.

Twenty-six customers have bought $250,000 or more worth of licenses. It completed a $1m-plus license sale in Q2, and two customers have made $300,000-plus purchases to add backup and restore, disaster recovery, and Kubernetes-granular storage to their OpenShift environments. If Red Hat OpenShift customers are adding Portworx to their deployments, a partnership would perhaps benefit both companies.

SK hynix touts its first NVMe consumer SSD: Good spec, good price

SK hynix has entered the consumer NVMe SSD market with a gumstick format Gold P31 drive intended for designers, content creators and PC gamers.

The P31 looks to be a decently-priced, decently fast and decently long-lived drive. It is selling today on Amazon for $74.99 (500GB) and $134.99 (1TB).

The P31 uses 128-layer 3D NAND, which makes it the first 100-plus layer consumer SSD. Its M.2 2280 card holds 500GB or 1TB of flash and comes with a fast SLC cache to accelerate the slower TLC (3bits/cell) flash.

This is SK hynix’s second consumer SSD, following the 2.5-inch format S31 SATA-3 drive which launched last November. The S31 uses SK hynix’s own 72-layer TLC 3D NAND and delivers 560/525 MB/sec sequential read/write bandwidth. It is sold in 250GB, 500GB and 1TB capacities.

The P31 rockets along at 3,500/3,200 MB/sec sequential read/write bandwidth and is rated at up to 570,000/600,000 random read/write IOPS. The device slows down when the SLC cache is full: 500,000/370,000 random read/write IOPS.

Latency is 45us write and 90us read, while endurance is rated at 1.5 million hours before failure and 0.4 drive writes per day over the five-year warranty. That means 750TB written for the 1TB model.
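The drive-writes-per-day rating converts to total bytes written with simple arithmetic. A sketch (the function name is ours); 0.4 DWPD on a 1TB drive over five years works out to about 730TB, close to the quoted 750TB, the difference presumably being rounding in the vendor's rating:

```python
def dwpd_to_tbw(dwpd, capacity_tb, warranty_years):
    """Total terabytes written implied by a drive-writes-per-day
    endurance rating over the warranty period."""
    return dwpd * capacity_tb * 365 * warranty_years

print(dwpd_to_tbw(0.4, 1, 5))   # 730.0 TB - near SK hynix's 750TB figure
```

The same formula run in reverse shows why higher-capacity models of a drive family carry proportionally higher TBW ratings at the same DWPD.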

How does it stack up against the competition? Intel’s 665P has 1TB and 2TB capacities and is slower: up to 250,000 random read/write IOPS and 2,000MB/sec sequential read/write bandwidth.

Micron’s 2210 has 512GB, 1TB and 2TB capacity points, and is slower too: up to 265,000/320,000 random read/write IOPS and 2,200/1,800 MB/sec sequential read/write bandwidth. It supports 600TBW at the 1TB capacity point, so SK hynix is better on that count.

Micron’s 2300 drive, in 256GB, 512GB, 1TB and 2TB capacities, is faster – up to 430,000/500,000 random read/write IOPS with its SLC cache, and 3,300/2,700 MB/sec sequential read/write bandwidth. It has the same 600TBW rating at the 1TB capacity point as the 2210.

Fungible Inc: Our DPU composes much smaller data centre bills

Fungible, a California composable systems startup, claims its technology will save $67 out of every $100 of data centre total cost of ownership on network, compute and storage resources.

The company this week launched its first product, a data processing unit (DPU) that functions as a super-smart network interface card. Its ambitions are sky-high, namely to front-end every system resource with its DPU microprocessors, offloading security and storage functions from server CPUs.

Pradeep Sindhu, Fungible CEO, issued a boilerplate launch announcement: “The Fungible DPU is purpose-built to address two of the biggest challenges in scale-out data centres – inefficient data interchange between nodes and inefficient execution of data-centric computations, examples being the computations performed in the network, storage, security and virtualisation data-paths.”

“These inefficiencies cause over-provisioning and underutilisation of resources, resulting in data centres that are significantly more expensive to build and operate. Eliminating these inefficiencies will also accelerate the proliferation of modern applications, such as AI and analytics.”

“The Fungible DPU addresses critical network bottlenecks in large scale data centers,” said Yi Qun Cai, VP of cloud networking infrastructure at Alibaba. “Its TrueFabric technology enables disaggregation and pooling of all data centre resources, delivering outstanding performance and latency characteristics at scale.”

Third socket

The Fungible DPU acts as a data centre fabric control and data plane to make data centres more efficient by lowering resource wait times and composing server infrastructures dynamically. Fungible claims its DPU acts as the ‘third socket’ in data centres, complementing the CPU and GPU, and delivering unprecedented benefits in performance per unit power and space. There are also reliability and security gains, according to the company.

Fungible says its DPU reduces total cost of ownership (TCO) for network resources by 4x, compute by 2x and storage by up to 5x, for an overall cost reduction of 3x.
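Those multipliers line up with the headline claim of $67 saved per $100 of TCO. A minimal check, assuming a “3x reduction” means the new cost is one third of the old:

```python
def savings_per_100(reduction_factor: float) -> float:
    """Dollars saved per $100 of spend when cost falls by the given factor."""
    return 100 * (1 - 1 / reduction_factor)

print(round(savings_per_100(3)))  # 67 - matching the $67-per-$100 claim
print(round(savings_per_100(4)))  # 75 - the network-only figure
```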

As a composable systems supplier, Fungible will compete with Liqid, HPE Synergy, Dell EMC’s MX7000, and DriveScale. As a DPU supplier it competes with Nebulon and Pensando. The compute-on-storage and composable systems suppliers say they are focusing on separate data centre market problems. Not so, says Fungible. The company argues that this is all one big data-centric compute problem – and that it has the answer.

The technology

Compute-centred data centre traffic operations.

According to Fungible, today’s data centres use server CPUs as traffic cops but they are bad at inter-node, data-centric communications.

The Fungible DPU acts as the data centre traffic cop and has a high-speed, low-latency network, which it calls TrueFabric, that interconnects the DPU nodes. This fabric can scale from one to 8,000 racks.

Fungible DPU-centred data centre traffic operations.

The Fungible DPU is a microprocessor, programmable in C, with a specially designed instruction set and architecture making it super-efficient at dealing with the context-switching involved in handling inter-node operations.

These will have data and metadata traveling between data centre server CPUs, FPGAs, GPUs, NICs and NVMe SSDs.

Fungible TrueFabric graphic.

The DPU microprocessor has a massively multi-threaded design with tightly coupled accelerators, an on-chip fabric and huge bandwidth – four PCIe buses, each with 16 lanes, and 10 x 100Gbit/s Ethernet ports. Fungible says it performs three to 100 times faster than an x86 processor on the decision-support TPC-H benchmark.

There are two versions of the DPU. The 800Gbit/s F1 is for front-ending high-performance applications such as a storage target, analytics, an AI server or security appliance. The lower bandwidth 200Gbit/s S1 is for more general use such as bare metal server virtualization, node security, storage initiator, local instance storage and network function virtualization (NFV).


Both are customisable via C-programming and no change to application software is needed for DPU adoption. And both are available now to Fungible design partners wanting to build hyper-disaggregated systems.

UCSF ransomware attack: University had data protection but it wasn’t used on affected systems

The University of California San Francisco (UCSF) paid $1.14m in bitcoin (116.4 bitcoin) to ransomware attackers in June to recover encrypted files, despite having at least one data protection deal in place. However, Blocks & Files understands the university did not apply the vendor’s product to the affected systems’ files.

UCSF changed its data protection from Commvault to Rubrik in August 2019, according to an announcement by Susanna Chau of its Data Centre Services unit. Part of the reason was improved security. Chau said at the time that Rubrik’s Atlas file system is immutable and not accessible over the network, preventing ransomware attacks from getting to it.

On June 1 ransomware attackers encrypted files within “a limited number of servers within the School of Medicine” IT environment. B&F understands the Rubrik solution was not in place on the servers in question at the time of the attack. It is not known if the university had other mitigation in place; if it did, this clearly failed.

UCSF was able to limit the NetWalker ransomware attack as it was occurring by quarantining the compromised servers, thus isolating them from the main network. At the time, the University described the criminal targeting of those specific systems as “opportunistic”.

Clearly, the School of Medicine data was important, and UCSF soon began negotiations with the criminals. After haggling them down from an initial $3m demand, the UCSF IT crew received a decryption key and recovered the files towards the end of June.

A June 26 UCSF statement said: “The data that was encrypted is important to some of the academic work we pursue as a university serving the public good. We, therefore, made the difficult decision to pay some portion of the ransom, approximately $1.14 million, to the individuals behind the malware attack in exchange for a tool to unlock the encrypted data and the return of the data they obtained.”

UCSF Parnassus Heights campus

By paying the ransom, UCSF confirmed that its data protection arrangements for these servers were inadequate. The encrypted file contents self-evidently could not be restored from any backups, if backups existed.

Neither UCSF nor Rubrik would provide official statements and we don’t know what, if any, data protection measures were in place for the affected servers.

Our understanding after talking to sources close to the situation is that the encrypted file systems were not protected by the Rubrik software. Meanwhile, UCSF appears to be a satisfied Rubrik customer and continues to use its technology.

UCSF is a research university exclusively focused on health and has schools dedicated to Medicine, Pharmacy, Dentistry and Nursing. The School of Medicine has 2,719 full-time faculty staff across seven sites in San Francisco and a branch in Fresno in the San Joaquin valley.