
HPE scales out 3PAR to build massively parallel Primera line

HPE’s Primera array, launched today, is an evolutionary upgrade of its 3PAR platform, with expanded InfoSight management and claimed Nimble array ease-of-use.

Our sister publication The Register carries the announcement story here. On this page Blocks & Files tries to figure out the speeds and feeds. This is a bit of a headscratcher as HPE has released next to no pertinent information – a glaring oversight for such a big launch.

Rant over. We have gleaned what we can from an HPE release, a presentation deck, and an HPE-sponsored IDC white paper (registration required), as HPE has declined to supply data sheets.

3PAR for the course

Primera can be considered a next-generation 3PAR design as 3PAR’s ASIC-based architecture is still used.

According to HPE, Primera uses a new scale-out system architecture to support massive parallelism. This is highly optimised for solid state storage.

Steve McDowell of Moor Insights & Strategy said in an email interview: “Primera is absolutely an evolution of 3PAR. It was built by 3PAR engineers. It’s based around an update to 3PAR’s ASIC. The Primera OS is based on 3PAR’s operating environment. At the same time, HPE is being very careful to distinguish that it’s a new product. I think that says less about what Primera is today, and more about how it will be the basis for HPE’s high-end storage moving forward. This is HPE’s ‘highend.next’.”

We cannot compare Primera to the obvious competitive array candidates: Dell EMC PowerMax, Infinidat InfiniBox, NetApp AFF and Pure Storage FlashArray, because we lack the necessary speeds and feeds information.

Primera will also compete with NVMe-oF startup products such as Apeiron, E8, Excelero and Pavilion.

McDowell said: “From a speeds/feeds perspective, I have no doubt that Primera will be competitive with PowerMax, AFF, FlashArray//X, and Infinidat. It’s less about technology in that space today, with all players being more or less equal depending on workload and day-of-the-week, and more about positioning and filling out the portfolio.”

He thinks Primera will do well against Dell EMC: “The HPE sales teams have cracked the nut and figured out how to sell storage against Dell and Pure – those are the players who HPE is running into most as it closes business. Primera gives them great ammunition in that fight.”

Blocks & Files believes HPE will focus on streamlined management through InfoSight and the 100 per cent availability guarantee as its main competitive differentiators, with performance joining them once SCM and NVMe/NVMe-oF technologies are supported inside the array.

The hardware

There are three Primera models, the 630, 650 and 670. HPE has not provided comparison information and, yes, we have asked.

These are built from nodes (HPE uses the terms node and controller synonymously) and up to four nodes can be combined in a single system. Each node plugs into a passive backplane, avoiding cabling complexity. The system comes in two sizes:

  • 2U24 with two controllers
  • 4U48 with four controllers

Each controller has two Intel Skylake CPUs and up to four ASICs. HPE says this is a massively parallel system, and we might have expected more nodes/controllers to justify that term. An HPE source said: “It’s massively parallelized inside the 4-node architecture, that’s true. But it’s not some gigantic scale-out box. It’s a high end box with all fancy data services that’s easy to consume.”

The 4U48 Primera node building block

We have asked HPE to confirm that the 2U24 building block has 24 drive slots and the 4U48 one has 48. A source tells us there are 12 drives per rack unit, which implies 24 slots in the 2U chassis and 48 in the 4U one.

Node à la mode

There are eight dual-purpose (SAS/NVMe) disk slots per controller pair. At the time of writing HPE has not published raw capacity numbers per drive or revealed the available drive types.

An HPE source told us: “System is primarily all flash but there will be options to get it with spinning drives for archival type needs.”

A node can have up to 1PB of effective capacity in 2U (or 2PB in 4U), with additional external storage capacity expansion available in both form factors. HPE has not provided data reduction ratios, expansion cabinet capacities or cabinet counts.

In this absence we rely on our completely scientific, speculative, back-of-the-envelope calculation, and note that a 2U x 24-slot system with 1PB of effective capacity would have 1PB/24 of capacity per drive: 41.67TB/drive. If we assume 1PB = 1,000TB and a 2.5:1 data reduction ratio, that gives us 16.67TB of raw capacity per drive. Possibly coincidentally, this is pretty close to the 16TB drives Seagate has just announced.
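For transparency, here is that arithmetic as a short Python sketch; the 2.5:1 reduction ratio is our assumption, not an HPE figure.

```python
# Back-of-the-envelope Primera per-drive capacity estimate (speculative).
EFFECTIVE_CAPACITY_TB = 1000   # 1PB effective capacity, per HPE's 2U claim
DRIVE_SLOTS = 24               # slots in the 2U24 chassis
ASSUMED_REDUCTION = 2.5        # assumed data reduction ratio, not an HPE figure

effective_per_drive = EFFECTIVE_CAPACITY_TB / DRIVE_SLOTS    # ~41.67TB
raw_per_drive = effective_per_drive / ASSUMED_REDUCTION      # ~16.67TB

print(f"Effective capacity per drive: {effective_per_drive:.2f}TB")
print(f"Raw capacity per drive at 2.5:1: {raw_per_drive:.2f}TB")
```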

A 4-node system can have up to eight Skylake CPUs and up to 16 ASICs. That’s the maximum system today.

Blocks & Files diagram of Primera hardware

There can be up to 12 host ports per node, hence 48 in total across the four nodes. These ports have 25Gbit/s Ethernet or 32Gbit/s Fibre Channel connectivity. NVMe-over-Fabrics is not mentioned by HPE.

The nodes have redundant, hot-pluggable controllers, disk devices, and power and cooling modules.

Primera has a so-called “all-active architecture,” with all controllers and cache active all the time, to provide low latency and high throughput. HPE has not released performance numbers for latency or throughput.

This slide notes 1.5 million IOPS, without saying whether that is per node or per system, or what kind of IOPS they are.

Gen 6 ASIC

The sixth-generation ASIC provides zero-detect, SHA-256, XOR, cluster communications and data movement functions. Its design is said to optimise internode concurrency and feature a “lockless” data integrity mechanism.

Data reduction (inline compression) runs in either a QAT (Intel Quick Assist Technology) chip or a controller CPU, depending on which offers the best real-time efficiency. The choice is made by the system’s AI/ML-driven self-optimisation.

Data reduction is built in and on by default, but it can be turned off.
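HPE has not described how that placement decision is made. Purely as an illustration, here is a hedged Python sketch of the kind of policy such a self-optimiser might apply; the function name, inputs and thresholds are all hypothetical, not HPE’s.

```python
# Hypothetical sketch only: deciding where inline compression should run.
# Neither the heuristic nor the thresholds come from HPE documentation.
def place_compression(qat_queue_depth: int, cpu_idle_pct: float) -> str:
    """Pick the engine predicted to give the best real-time efficiency."""
    if qat_queue_depth < 64:      # dedicated QAT offload engine has headroom
        return "qat"
    if cpu_idle_pct > 30.0:       # controller CPU has spare cycles
        return "cpu"
    return "qat"                  # otherwise queue on the dedicated engine

print(place_compression(qat_queue_depth=10, cpu_idle_pct=5.0))  # -> qat
```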

One HPE source said Primera has dedicated hardware, the ASICs, to help with ultra-fast media that would otherwise overwhelm the fastest CPUs.

Moor Insights’ McDowell thinks this ASIC may be used in the future to upgrade 3PAR systems.

Storage class memory

HPE said Primera is built for storage class memory, without specifying if any SCM media is actually used. We have asked and are awaiting a reply.

In November 2018 HPE said it would add Optane caching to the 3PAR arrays, calling the scheme Memory-driven flash.

Services-centric OS

The Primera system features:

  • RAID, multi-pathing with transparent failover
  • Thin provisioning
  • Snapshots
  • QoS
  • Online replacement of failed components
  • Non-disruptive upgrades
  • Replication options including stretch clusters
  • On-disk data protection
  • Self-healing erasure-coded data layout which varies based on the size of the system and is adjusted in real time for optimum performance and availability.

Features associated with high-end HPE storage – RAID, thin provisioning, snapshots, quality of service, replication, etc. – are implemented as independent services for the Primera storage OS.

Features can be added or modified without recompiling the entire OS, and such service upgrades take five minutes or less. HPE claims this approach enables Primera to be upgraded faster, more easily, more frequently and with less risk than other high-end storage systems. Blocks & Files understands this may not be the case for the base OS code.

McDowell told us: “The new OS uses containers to provide isolation for data services – this is different from 3PAR’s traditional approach. It’s also (interestingly) the approach that Dell has said is core to its forthcoming Midrange.next.”

According to the IDC white paper, “System updates are all pre-validated prior to installation by looking at configurations across the entire installed base (using HPE InfoSight) to identify predictive signatures for that particular update to minimise deployment risk.”

There is no mention of any file storage capability, although 3PAR has this with its File Persona.

Primera management

HPE stresses that its cloud-based InfoSight is AI-driven and manages servers, storage, networking and virtualization layers. It can predict and prevent issues, and accelerate application performance.

The IDC white paper states: “The system generally follows an ‘API first’ management strategy, with prebuilt automation for VMware vCenter, Virtual Volumes, and the vRealize Suite.”

HPE’s pitch here is that data centre systems and storage arrays such as Primera are becoming too complex for people to manage effectively, and AI software is needed to augment or replace human efforts.

The IDC white paper notes: “Fewer and fewer organizations will be able to rely entirely on humans to ensure that IT infrastructure meets service-level agreements (SLAs) most efficiently and cost effectively.”

Performance

InfoSight AI models trained in the cloud are embedded in the array for real-time analytics to ensure consistent performance for application workloads, according to HPE.

The system predicts application performance with new workloads, using an on-board AI workload fingerprinting and headroom analysis engine.

We are told Primera has consistent, but unspecified, low latency. An HPE source said: “Latencies even with large configurations under pressure are in the low hundreds of microseconds.”

This is maintained at scale. Our source said: “It’s easy for most systems to maintain low latency for small capacities and specific simple types of workloads (like doing single block size benchmarks across a small working set), but doing so across a maxed-out system subjected to very mixed real workloads is far harder.”

According to HPE, Primera is 122 per cent faster running Oracle, though the company has not revealed what the comparison system is or specified the Oracle software used.

Data protection

Data protection is provided through Recovery Manager Central (RMC), which provides application-managed snapshots and data movement from 3PAR to secondary StoreOnce systems and Nimble hybrid arrays, and onwards to HPE Cloud Bank Storage or Scality object storage for longer-term retention. Pumping out data to the AWS, Azure and Google clouds is supported.

RMC provides application-aware data protection for Oracle, SAP, SQL Server, and virtual machines.

How Primera stacks up with the rest of HPE’s storage lines

Our HPE sources say that Primera replaces no existing storage product. However, Blocks & Files thinks Primera will ultimately replace the 3PAR line as HPE’s mission-critical storage array. For now 3PAR is mission-critical and Primera is high-end mission-critical.

Blocks & Files suggested 3PAR and Primera positioning.

Nimble arrays remain HPE’s business-critical arrays for enterprises and small and medium businesses. The XP arrays continue to have a role as mainframe-connected systems.

Primera will have data mobility from both 3PAR and Nimble arrays.

The overall HPE storage portfolio, including the to-be-acquired Cray ClusterStor arrays and the new Nimble dHCI product, looks like this:

Net:net

Primera promises to be a powerful and highly-reliable storage array for hybrid cloud use, with potentially the best management in the industry. But until performance data is released we can’t judge how powerful. It appears to lack current NVMe-oF and SCM support and also lacks file capability. We expect these features to be added in due course.

Double-headed Seagate disk drives? Yes, on their way

Seagate will introduce 18TB, 20TB+ and double-headed disk drives by the end of 2020.

Seagate CEO David Mosley signalled the company’s intentions at a recent investor briefing hosted by Wells Fargo Securities.

In his presentation Mosley said he expected 16TB nearline drives to be the company’s biggest product by early/mid-2020. It recently announced 16TB Exos, IronWolf and IronWolf Pro drives, and is the first hard drive vendor to reach that capacity.

Western Digital, Seagate’s arch-rival, also has a 16TB drive on its way. It will use eight platters and 16 heads, in contrast with Seagate’s nine platters and 18 heads in its 16TB drive. Fewer platters and heads mean lower costs.

Seagate is planning an 18TB shingled magnetic recording (SMR) version of this 16TB technology, and expects to introduce 20TB+ HAMR-based nearline HDDs in calendar 2020.

It will also introduce double-headed drives featuring MACH.2 Multi-Actuator Technology by the end of calendar 2019.

Aaron Rakers, Wells Fargo senior analyst, who attended the Seagate presentation, noted: “The first solutions will incorporate two actuators on a single pivot point with each actuator controlling half of the drive’s read/write head arms – providing as much as a 2x increase in performance (demonstrating ~480MB/s sustained throughput); the first major performance gain in HDDs seen in years.”

Market watcher

At the investor briefing Seagate pointed to strong growth in the surveillance market. It cited estimates from the market research firm TrendFocus that branded surveillance HDD shipments will grow from ~25.62 million units in 2018 to ~48.2 million units by 2025, a figure presented as 13.5 per cent compound annual shipment growth. Over the same period the average capacity of surveillance disks will grow from 4TB to 8TB.
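As a sanity check, the compound annual growth rate formula is (end/start)^(1/years) - 1. A quick computation, sketched below in Python, suggests the quoted 13.5 per cent corresponds to a five-year compounding window; spread across the full 2018-2025 span it works out closer to 9.4 per cent.

```python
# CAGR check for the TrendFocus surveillance HDD shipment figures.
start_units = 25.62e6   # ~2018 shipments
end_units = 48.2e6      # ~projected 2025 shipments

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

print(f"Five-year window: {cagr(start_units, end_units, 5):.1%}")   # ~13.5%
print(f"Seven-year window: {cagr(start_units, end_units, 7):.1%}")  # ~9.4%
```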

Game consoles appear to be moving away from 500GB, 1TB and 2TB HDDs to flash/SSD storage. This will affect disk drive sales, but Seagate’s Mosley said he had “no comment” about this right now. He did say flash and HDDs will both play an important role in the anticipated expansion of gaming content.

Rakers said Seagate remains committed to participating in the enterprise SSD market, but does not anticipate any significant revenue ramp.

DDN adds lustre to HPC storage with EXA5

DDN, the high performance computing storage vendor, has launched the fifth generation of its Lustre-based EXAScaler storage system.

Lustre is a parallel file system used in high-performance computing – in the Cray-AMD Frontier exascale supercomputer for example.

The new EXA5 system is designed for artificial intelligence applications and uses Lustre v2.12. 

DDN bought Intel’s Lustre assets in June 2018.

DDN said today the themes for the new version are simpler implementation and scaling models, easier visibility into workflows and powerful global data management features.

DDN EXA5 schematic diagram.

The EXA5 can be configured with all-NVMe flash drives or as a hybrid disk and flash system.

There are ES200NV, ES400NV, ES7990, ES14KX and ES18K models. The ES200NV, ES400NV, ES14KX and ES18K are all-flash systems, with the ES14KX and ES18K classed as enterprise systems. The ES18K can deliver up to 90GB/sec throughput per appliance.

The ES7990 is hybrid and modular.


You can get more information about these systems from a DDN slide deck.

The EXA5 can form part of DDN’s A3I enterprise server for artificial intelligence applications.

You can download an EXA5 datasheet here.

Pure Storage elbows way on to storage industry top table

Pure Storage outpaced the market in 2019’s first quarter to make its debut as a top five enterprise storage supplier.

IDC’s enterprise storage tracker numbers for the first 2019 quarter reveal a flat market, with revenues slipping 0.6 per cent year-on-year to $13.37bn while capacity shipped rose 14.1 per cent.

IDC’s numbers for the total enterprise storage market are:

Note a says Dell Technologies consists of Dell and Dell EMC storage revenues. Note b says HPE revenues are those from HPE and the H3C Group, its joint-venture in China.

Storage arrays

IDC separates out the external enterprise storage market, meaning storage arrays:

Here the supplier ordering is different, with two suppliers, IBM and Pure Storage, sharing joint fifth place.

We can plot the revenue growth rates to show big vendor disparities, and reveal some unexpected points. First for total enterprise storage:

Pure Storage is top, followed by Huawei, Lenovo, Inspur and Hitachi. Dell and HPE failed to keep pace but Fujitsu and ODM Direct did worse and IBM worst of all.

Second, here are the vendor growth rates in external storage:

Pure is still top but HPE is second, growing external storage sales faster than all rivals except Pure.

Down in the negative growth category are Hitachi and, with the largest decline, IBM.

The Chinese vendors, Huawei, Lenovo, and Inspur, sell far more server-attached than array storage and this is why they do not appear in the external storage chart.

Also, Dell and HPE, unlike IBM, sell a great deal of server-attached storage, which pumps up their total enterprise storage revenues. IBM’s decline in both categories is steep: -13.9 per cent in all storage and -12.1 per cent in external storage.

Summary

IBM has work to do to avoid crashing out of IDC’s top five external enterprise storage supplier list and being replaced by Pure Storage. Our second thought is that HPE, with its 3PAR and Nimble arrays, is doing well and is growing more quickly than NetApp.

HPE’s numbers will also receive a boost when the Cray acquisition, which brings the ClusterStor arrays with it, is consolidated into the company, so they may grow further.

Could HPE overtake NetApp? That would be quite the upset.


What’s up with computational storage

Computational storage is an emerging trend that sees some data processing carried out at the storage layer, rather than moving the data into a computer’s main memory for processing by the host CPU.

The notion behind computational storage is that it takes time and resources to move data from where it is stored. It may be more efficient to do some of the processing in situ, where the data lives, for applications such as AI and data analytics where the data sets are very large or the task is latency-sensitive.

As with many emerging trends, different vendors and startups are developing a number of different technologies and approaches to computational storage, often with little or no standardisation between them.

To address this, the Storage Networking Industry Association (SNIA) has set up a technical working group to promote device interoperability and to define interface standards for deployment, provisioning, management and security of computational storage systems. The group is co-chaired by NGD and SK Hynix, and over 20 companies are actively participating.

A report from 451 Research containing an overview of computational storage will be published on the SNIA website from June 17.

Blocks & Files has been given access to information from 451 Research on the current players in the computational storage field, and we list some of the front runners and detail their respective offerings.

NGD Systems

NGD Systems achieves in-situ processing through the simple approach of integrating an ARM Cortex-A53 processor into the controller of an NVMe SSD.

The data still needs to be moved from the NAND flash chips to the processor, but that is accomplished using a Common Flash Interface (CFI), which has three to six times the bandwidth of the host interface.

The advantage of this approach is that the processor can run a standard operating system (Ubuntu Linux), allowing any software that runs on Ubuntu to be used for in-situ computing on NGD’s drives. The drive itself can also be used as a standard SSD.

NGD has not specified the expected performance boost for applications using the latest Newport generation of its hardware, but said the previous generation accelerated image recognition by two orders of magnitude and some Hadoop functions by over 40 per cent.

Samsung

Samsung announced the SmartSSD in October 2018. It describes the device as a smart subsystem rather than a storage device – a server loaded with multiple SmartSSDs will behave like a clustered computing device.

Each SmartSSD is based on Samsung’s 3D V-NAND TLC flash plus a Xilinx Zynq FPGA with ARM cores. Samsung targets two types of workload: analytics, and storage services such as data compression, deduplication and encryption.

Unlike NGD’s platform, the SmartSSD cannot run standard software, but Samsung and Xilinx have jointly developed a runtime library for the FPGA in the SmartSSD.

The devices are currently being tested by potential customers such as hyperscalers and storage system makers.

Bigstream, which develops tools for analytics and ML applications, demonstrated its software working with Samsung’s SmartSSDs and Apache Spark, providing a performance boost of threefold to fivefold.

ScaleFlux

ScaleFlux is another vendor combining processing with a flash drive. It is currently shipping the CSS 1000 series, sold as PCIe cards or U.2 drives with raw flash capacities of 2TB-8TB. A third generation is due later this year.

Each ScaleFlux CSS drive is based on a Xilinx FPGA that processes data as well as acting as the flash controller. It integrates into a host server and storage environment via a ScaleFlux software module, with compute functions made accessible through APIs exposed from the software module.

This software includes the flash translation layer (FTL) that manages IO and the flash storage; in a standard SSD the FTL resides in the controller.

This means it consumes some host CPU cycles, but ScaleFlux claims this is outweighed by the advantages of running it as host software, such as the ability to optimise it for specific workloads.

Moving some processing from servers to the CSS devices requires changes to code, and ScaleFlux offers off-the-shelf code packages to accelerate applications such as Aerospike, Apache HBase, Hadoop and MySQL, the OpenZFS file system, and the Ceph storage system.

According to ScaleFlux, China’s Alibaba is set to use CSS devices to accelerate its PolarDB application, which is a combined transactional and analytic database. Alibaba is believed to have modified applications itself, using APIs and code libraries from ScaleFlux.

Eideticom

Eideticom‘s NoLoad accelerators are unusual in that they fit into a 2.5in U.2 NVMe SSD format, but contain a Xilinx FPGA accelerator and a relatively small amount of memory instead of flash storage.

The concept behind this architecture is that the PCIe bus can be used to rapidly move data between the NoLoad accelerator and NVMe SSD storage at high speed, with little or no host CPU involvement.

This allows the compute element of the computational storage to be scaled independently of storage capacity, and even beyond a single server node by using NVMe-oF.

In a demo at SuperComputing 2018, Eideticom showed six NoLoad devices compressing data fed to them by 18 flash drives at a total of 160 GB/sec, with less than five percent host CPU overhead.

Eideticom said a U.2 PCIe Gen3x4 NoLoad device can zlib-compress or decompress data at over 3 GB/sec.
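For context, zlib compression is an ordinary host-side workload; a few lines of Python using the standard zlib library show the kind of work NoLoad moves off the host CPU and into the FPGA. The sample payload here is our own illustration.

```python
import zlib

# Host-side zlib compression: the same algorithm Eideticom's NoLoad
# runs in FPGA hardware, here consuming host CPU cycles instead.
data = b"sensor reading 42.0\n" * 100_000   # sample compressible payload

compressed = zlib.compress(data, level=6)
restored = zlib.decompress(compressed)
assert restored == data

print(f"{len(data)} bytes -> {len(compressed)} bytes "
      f"({len(data) / len(compressed):.1f}:1 ratio)")
```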

Eideticom touts NoLoad for storage services such as data compression and deduplication, plus RAID and erasure coding. Long-term plans include accelerating applications such as analytics.

Nyriad

The New Zealand company Nyriad developed its technology originally for the massive data processing requirements of the Square Kilometre Array (SKA) radio telescope.

Nyriad’s product, NSULATE, is not hardware but a Linux block device that functions as a software-defined alternative to RAID for high-performance, large-scale storage. It uses Nvidia GPUs as storage controllers to perform erasure coding with very deep parity calculations for very high levels of data protection.

According to Nyriad the GPUs can be used concurrently for other workloads such as machine learning and blockchain calculations.

Nyriad partners include Boston Limited, which has an NSULATE-based system that also uses NVDIMMs, and ThinkParQ which has developed a storage server that incorporates NSULATE. Oregon State University uses that system for computational biology work.

Overall take

Computational storage is an emerging technology but will become commonplace in one form or another, 451 Research analyst Tim Stammers forecasts.

One reason for this is that new workloads using machine learning and analytics require faster access to data than conventional storage systems can provide, even those entirely based on flash. Computational storage looks to be one answer to this, according to 451, and its benefits could be amplified further by using it in combination with storage-class memories (SCM).

Western Digital joins US freeze-out: Huawei with the hard drives

Western Digital has stopped supplying Huawei, one of its biggest customers, to comply with the US ban on American companies trading with the Chinese tech giant.

Western Digital signed a strategic cooperation agreement with Huawei in April 2019, which aimed to strengthen an existing partnership that sees it supply Huawei with HDDs, SSDs and NAND flash storage for servers, as well as flash memory for other devices such as smartphones.

However, Western Digital CEO Steve Milligan told Nikkei this week that Western Digital has been forced to reconsider its relationship with Huawei following the US government’s decision in May to put Huawei on a trade blacklist.

The WD shipment freeze could damage Huawei, as Western Digital is the world’s biggest manufacturer of hard drives. Seagate, the second-biggest HDD maker, is also an American company. That leaves only Toshiba, by far the smallest of the big three, as a potential supplier.

Western Digital is also the world’s third-largest NAND flash chipmaker by market share, and the ban appears to leave Huawei with only Toshiba, Samsung and SK Hynix as potential suppliers of flash memory.

The ban is a headache for WD, which said it is considering seeking a license from the US government to resume trade with Huawei.

“The tech supply chain in the world is quite entangled,” Milligan told Nikkei. Undoing that would be “dramatic [and] not good [in the] longer term for either China or the US”.

The trade row between the US and China is already hitting the tech industry. The memory chip market is seeing prices fall because of oversupply, and manufacturers could even see prices drop below the cost of production.

Storage Class Memory goes mainstream – next year

The research firm DRAMeXchange expects memory prices to bounce in 2020, and this will open up an opportunity for next-gen memory technologies, also known as storage class memory (SCM).

Excess DRAM and NAND inventory means prices today are in the doldrums. But restocking momentum will build through a gradual recovery in demand and price flexibility, according to DRAMeXchange.

This will see a rebound in memory prices in 2020 and allow next-gen solutions such as SCM a better chance to penetrate the server market.

Memory tricks

Established memory technologies DRAM and NAND are pushing the physical limits of production processes and it is becoming more difficult to raise performance and lower costs.

SCM technologies are widely seen as a possible solution for greater performance, typically slotting in as a new tier in between DRAM and NAND in the memory hierarchy.

Intel Optane DIMM

Intel’s Optane, for example, is based on 3D XPoint (regarded as a kind of phase-change memory or PRAM) and is available in a DIMM format that can fit easily into the memory slots in Intel’s newest Xeon servers.

This allows vendors to change server modules or Optane solutions at will to control the total cost of the completed product, DRAMeXchange notes.

However, economies of scale have yet to take effect for new memory technologies such as MRAM, PRAM and RRAM and prices are high. This has led vendors to target markets such as hyperscale data centres.

According to DRAMeXchange’s parent company TrendForce, the number of hyperscale data centres constructed globally is projected to hit 1,070 by 2025, representing an annual growth rate of 13.7 per cent between 2016 and 2025.

Oracle uses machine learning to boost Exadata X8 performance

Oracle has pushed out Exadata X8, the latest iteration of its engineered system optimised for the Oracle database.

The launch marks the tenth anniversary of the Exadata platform.

Unveiled today, the Oracle Exadata Database Machine X8 introduces machine-learning capabilities drawn from the Oracle Autonomous Database. These include Automatic Indexing, which continuously tunes the database as usage patterns change.

The Exadata X8 also incorporates automated performance monitoring which can determine the root cause of issues without human intervention, according to Oracle. The company said the software does this using AI combined with real-world performance triaging experience and best practices.

More bangs per buck

On the hardware side, the Exadata X8 gets the latest Intel Xeon processors, including dual 24-core Xeon Platinum 8260 chips in the database server and 16-core Xeon Gold 5218 chips in the storage servers. Those storage servers also get a boost from 6.4TB NVMe PCIe 3.0 flash cards.

This delivers a 60 per cent increase in I/O throughput for all-flash storage and a 25 per cent increase in IOPS per storage server compared with the previous generation Exadata X7 at no extra cost, Oracle claimed.

Also new is Oracle’s Zero Data Loss Recovery Appliance X8 and a low-cost extended storage server.

In recovery

The Recovery Appliance provides up to ten times faster recovery of an Oracle Database than conventional data deduplication appliances, while providing sub-second recoverability of all transactions, Oracle said.

Meanwhile, the low-cost extended storage server is for infrequently accessed, older, or regulatory data, which can be stored at “Hadoop/object storage prices”.

Oracle said Exadata provides a huge RAM, flash and disk footprint for large data sets, with a full rack exceeding 3 Petabytes while raw flash can be up to 920TB.

Exadata is also claimed to deliver high performance and low cost by intelligently moving active data across its disk, flash and memory tiers.

Everspin samples 1Gbit STT-MRAM

Everspin, the non-volatile memory maker, has started pilot production of 1Gbit STT-MRAM components.

Everspin’s spin-transfer torque magnetoresistive RAM (STT-MRAM) offers far higher write and read speeds than NAND flash and, unlike DRAM, is non-volatile.

The company is targeting applications in data centres where high-performance persistence is critical in delivering protection against power loss. However, STT-MRAM may be too costly to compete with other non-volatile memory chips.

Everspin’s 40nm 256-megabit STT-MRAM product has been in volume production for more than a year and is used by IBM in its FlashSystem 9100 and Storwize V7000 systems.

Everspin 1Gbit STT-MRAM

As Blocks & Files has noted previously, STT-MRAM may be faster than competing technologies but high manufacturing costs make it difficult to justify on price-performance grounds. This is a moot point for us but an existential issue for Everspin to tackle.

Everspin has not disclosed costs for the new 28nm 1Gbit components, so it is unclear if the higher density from a smaller production process has helped to address this.

In a press statement today, Everspin CEO Kevin Conley said 1Gbit STT-MRAM was a milestone on the way to larger market opportunities.

“We are also pleased that progress with both customer qualification and yield maturity continues to be on track with volume production expected to start in the third quarter,” he added.

As well as offering higher capacity, the 1Gbit parts are available in 8-bit and 16-bit DDR4 compatible interface versions, whereas the 256-megabit chip was compatible with DDR3.

Everspin said its chips use JEDEC-like interfaces with some modifications to take advantage of MRAM’s persistence; the technology performs like a persistent DRAM with no refresh cycle required.

Western Digital zones in to zettabyte storage

Western Digital wants to help cloud and hyperscale data centres store data more efficiently in the Zettabyte Era.

In an initiative launched yesterday, the storage giant threw its weight behind emerging zoned storage technologies such as zoned namespaces (ZNS) for SSDs and shingled magnetic recording (SMR) in hard drives.

The world will generate around 103 zettabytes of data per year by 2023, and this is a problem, Western Digital said.

Current data storage infrastructure is too inefficient to store all of this cost-effectively. And so the industry is seeking new approaches that offer better performance with lower TCO.

Western Digital believes that more and more of this data will be sequential or streaming in nature – written once only but accessed many times. It cites music, video, IoT sensor data and large AI/ML datasets as typical examples.

Because of this, approaches that optimally place data onto the storage medium rather than writing blocks into the first available space will make more efficient use of the storage capacity. This is where zoning comes in.

Zoning comes about because of separate but parallel developments in hard drives and SSDs.

In SMR hard drives, the shingled writing of tracks makes them overlap slightly, which does not affect reading but means that rewriting a single track would alter any others that it overlays.

For this reason overlapping tracks are grouped into bands called zones, and if any of the tracks need to be modified or re-written, the entire zone must be re-written.

Zoned Storage

Western Digital’s response to this is called the Zoned Storage initiative and includes ZonedStorage.io, a repository for open-source and standards-based tools and resources for ZNS and SMR. The aim is to jump-start application development initiatives and to help data centre engineers take advantage of zoned storage technologies.

According to Western Digital, SMR and ZNS will be key foundational building blocks of the new zettabyte era by delivering intelligence to application architectures.

In a statement provided by Western Digital, IDC analyst Ashish Nadkarni said:

“Unifying the ZNS and SMR architectures via open, standards-based initiatives is a logical industry step, which can take advantage of SMR HDD areal density gains and new innovations in flash. Whoever enters the market first will definitely have a competitive advantage on TCO and the learning curve.”

SMR disk track organisation

Western Digital thinks SMR drives are part of the solution to hyperscale data issues because growth in SMR areal density will track closely with global data demand growth.

The firm is currently demonstrating a 20TB SMR drive that it expects to ship in 2020, and estimates that 50 per cent of the disk capacity it ships will be SMR drives by 2023.

Rewriting history

Meanwhile, SSDs already have the rewrite issue, since the architecture of flash memory is organised into blocks that must be erased before new data can be written to them. But the controller hides this complexity from the host system and applications, which just see an SSD as a hard drive.

The emerging zoned namespaces (ZNS) standard for NVMe SSDs changes this by exposing a set of zones with the requirement that each zone must be written sequentially, matching the physical SSD requirements.

This enables the host system to place data more efficiently, making over-provisioning unnecessary. This improves the cost-effectiveness of the device as more storage space is exposed to the host.
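A minimal sketch of that zone model, assuming a simplified view (our illustration; the real interface is the NVMe Zoned Namespace command set, not a Python API): each zone tracks a write pointer, writes must land exactly at the pointer, and a zone must be reset before it can be rewritten.

```python
# Toy model of a ZNS/SMR zone: append-only writes behind a write pointer.
# Illustrative only; real ZNS is driven through NVMe commands.
class Zone:
    def __init__(self, size_blocks: int):
        self.size = size_blocks
        self.write_pointer = 0                 # next block that may be written
        self.blocks = [None] * size_blocks

    def write(self, lba: int, data: bytes) -> None:
        if lba != self.write_pointer:          # out-of-order writes rejected
            raise IOError(f"must write block {self.write_pointer}, got {lba}")
        self.blocks[lba] = data
        self.write_pointer += 1

    def reset(self) -> None:                   # rewind/erase the whole zone
        self.write_pointer = 0
        self.blocks = [None] * self.size

zone = Zone(size_blocks=4)
zone.write(0, b"a")
zone.write(1, b"b")   # sequential writes succeed
# zone.write(3, b"d") would raise IOError: hosts must place data in order
```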


Conventional SSD data placement vs zoned namespaces

Scale Computing takes HCI to the edge

Scale Computing has unveiled a line of hyperconverged systems aimed at edge computing applications that require reliable, easy-to-deploy infrastructure.

Available now, the HE500 series is described by Scale Computing as a set of right-sized HCI appliances that provide enterprise-class features to remote locations and boost edge computing with disaster recovery support for organisations that require this capability.

Scale Computing HE500 1U rack node

Scale said the models in the HE500 series are more lightweight than its existing HC1000 and HC5000 series, and are intended as an affordable option for remote multi-site deployments across industries, including healthcare, retail and manufacturing.

Close to the edge

Edge computing covers a broad spectrum of hardware with widely differing capabilities, but Scale defines it as computing infrastructure intentionally located outside the four walls of the data centre so that storage and compute resources can be placed where they are needed.

In effect, the HE500 series operates as a micro data centre that can be sited where data needs to be collected and processed, such as a workshop, with network links to the main data centre or the cloud.

Scale Computing CEO Jeff Ready said in a statement today: “An increasing number of organisations need infrastructure at the edge of the network, as this is where their data is actually being created. However, complexity and cost have been significant barriers to implementing and maintaining this technology.”

The HE500 series delivers HCI technology at a lower price point, making it a viable and affordable option for more organisations, he added.

However, affordable is a relative term for HCI. A three-node cluster of HE500 systems carries a starting price of $16,500.

HyperCore

In all fairness, this includes nearly 3TB of SSD storage and three Intel Xeon server nodes each with 32GB of memory, plus Scale Computing’s HC3 HyperCore Software. This works out at 40 per cent cheaper than a 3-node cluster with hybrid storage in the HC1000 appliance series. Rival HCI platforms are often more costly.

Scale sees HyperCore as its differentiator, since it runs on bare metal with nothing additional to license or install, unlike some other HCI platforms that layer software onto an operating system.

HyperCore is based on the KVM hypervisor with additional components such as the SCRIBE software defined storage layer. SCRIBE aggregates all block storage devices in all nodes of the system into a single managed pool of storage.

Virtual machines running on HyperCore have direct block-level access to virtual disks created from the clustered storage pool without the complexity or performance overhead introduced by using remote storage protocols.

Scale said that HyperCore provides an intelligent and automated layer that results in infrastructure that is simpler to operate. It also includes features such as virtual machine replication and recovery and snapshot-based thin cloning.

HE500 models

The HE500 series is available in either a 1U rack-mount or in a tower configuration. Units can be configured with disk drives, all-flash storage or hybrid options, with either 32GB or 64GB RAM per node.

Customers can choose between 1Gbit/s or 10Gbit/s Ethernet ports and three different processor options, all variants of Intel’s Xeon E-2100 line.

NGD ships 8TB M.2 SSD with in-situ data processing

NGD Systems has squeezed its computational storage platform into the miniature M.2 form factor, making it possible to embed in a wider variety of hardware.

NGD is targeting applications in edge computing, hyperscale and Open Infrastructure (OCP/Open19) environments.

The Newport M.2 provides high-performance, high-capacity, low-latency processing for edge computing applications that cannot afford a cluster of 1U or 2U servers to do their processing, whether due to size, power, or compute performance, NGD claimed.

Richard Mataya, EVP and co-founder of NGD Systems, said in a statement: “We believe this will fundamentally change the way data is stored and processed at the edge for content delivery networks (CDNs) and hyperscale environments.”

Edging along the M.2

Available today, the Newport M.2 offers 4TB or 8TB of storage in the M.2 22110 form factor – 22mm by 110mm. NGD claims this is twice the capacity of the next-largest available M.2 NVMe SSDs, with an average power consumption of less than 1W per TB. The host interface is NVMe 1.3 PCIe Gen 3.0 x4.

The M.2 is the second product in NGD’s Newport Computational Storage Platform. The processor core is embedded into the SSD controller, enabling data processing in situ on the SSD without having to be copied out into the host computer’s memory. The idea is that this speeds up latency-sensitive tasks such as AI and data analytics.

In this case the processor is an ARM Cortex-A53 core running a 64-bit operating system, a version of Ubuntu Linux. This should enable the development of applications that run on the embedded core with minimal changes from code running on an x86 Linux system.
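Because the drive runs stock Ubuntu, ordinary portable code needs no porting. As a hedged illustration (the workload and invocation are ours, not NGD’s API), a streaming scan like this would run identically on the embedded ARM core or an x86 host:

```python
# Portable Python example: the kind of in-situ scan that could run unchanged
# on the drive's embedded ARM Ubuntu or on an x86 Linux host. The file path
# and search term are hypothetical illustrations, not NGD's interface.
import sys

def count_matches(path: str, needle: bytes) -> int:
    """Stream a file and count the lines containing the search term."""
    hits = 0
    with open(path, "rb") as f:
        for line in f:
            if needle in line:
                hits += 1
    return hits

if __name__ == "__main__":
    # usage: python scan.py <file> <term>
    print(count_matches(sys.argv[1], sys.argv[2].encode()))
```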

The Newport platform’s programmable computational storage services (P-CSS) allow support for AI and other application workloads, NGD said.

NGD’s Newport platform architecture

NGD’s first Newport product was a 16TB U.2 NVMe SSD in a standard 2.5in drive form factor, launched in March 2019.