
WD keeps fast flash Optane substitute in the wings

Western Digital is developing lower latency flash drives that are faster and more expensive than ordinary 3D NAND but slower and cheaper than DRAM.

The technology addresses the same market niche as Intel’s Optane 3D XPoint and Samsung’s Z-SSD, i.e. applications that need more speed than ordinary NAND offers but not at DRAM prices.

At the Storage Field Day 18 event on February 28, 2019 Western Digital discussed a new low-latency flash (LLF) technology positioned between 3D NAND SSDs and DRAM in terms of access latency.

Video grab showing WD VP Luca Fasoli standing by a model of WD’s 96-layer 3D NAND die.

WD presenter Luca Fasoli, VP for memory product solutions, said: “We can actually create customised devices that are very fast…They are in the microsecond range of access time.” He showed a chart positioning such a technology.

If we read the chart right, the new technology would cost a tenth of DRAM but three times more than 3D NAND. LLF’s future cost reductions would also track the steeper NAND curve rather than the shallower DRAM curve.

Low latency flash will be faster than 3D NAND but slower than DRAM; Fasoli showed another chart to depict this.

There could be a range of LLF products, and Fasoli said WD would introduce an LLF product when it thinks the time is right. Whether it will introduce drive-format or NVDIMM-format products remains to be seen.

Fasoli’s presentation is available here.

Life after PCIe. Intel gang backs Compute Express Link (CXL)

Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, Intel, and Microsoft are working together on Compute Express Link (CXL), a new high-speed CPU-to-device and CPU-to-memory interconnect technology.

The CXL consortium is the fourth high-speed CPU-to-device interconnect group to spring into existence. The others are the Gen-Z Consortium, OpenCAPI and CCIX. They are all developing open post-PCIe CPU-to-accelerator networking and memory pooling technology.

Unlike the others, CXL has the full backing of Intel. Blocks & Files suggests that the sooner CCIX, Gen-Z and OpenCAPI combine the better for them. An Intel steamroller is coming their way and a single body will be harder to squash.

CXL

The CXL group is incorporating as an open standards body; a v2.0 spec is in the works and “efforts are now underway to create innovative usages that leverage CXL technology.”

CXL benefits include resource sharing for higher performance, reduced software stack complexity, and lower overall system cost. We might expect CXL-compliant chipsets, and servers and accelerators using them, to roll out in 2020.

The CXL consortium aims to accelerate workloads such as AI and machine learning, rich media services, high performance computing and cloud applications. CXL does this by maintaining memory coherency between the CPU memory space and memory on attached devices.

Devices include GPUs, FPGAs, ASICs and other purpose-built accelerators, and the technology is built upon PCIe, specifically the PCIe 5.0 physical and electrical interface. That implies up to 128GB/s using 16 lanes.
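That bandwidth figure can be checked with back-of-the-envelope arithmetic; the sketch below assumes the 128GB/s headline counts both directions of a 16-lane link, with PCIe 5.0 running at 32GT/s per lane using 128b/130b encoding.

```python
# Back-of-the-envelope PCIe 5.0 x16 bandwidth estimate.
# Assumptions: 32 GT/s per lane, 128b/130b line encoding, 16 lanes,
# headline figure counts both directions.
GT_PER_LANE = 32e9          # transfers (bits) per second per lane
ENCODING = 128 / 130        # 128b/130b coding overhead
LANES = 16

per_direction_gbps = GT_PER_LANE * ENCODING * LANES / 8 / 1e9  # GB/s one way
bidirectional_gbps = per_direction_gbps * 2

print(f"per direction: {per_direction_gbps:.0f} GB/s")   # ~63 GB/s
print(f"bidirectional: {bidirectional_gbps:.0f} GB/s")   # ~126 GB/s, the ~128GB/s headline
```

The per-direction number is about 63GB/s, so the quoted 128GB/s only makes sense as an aggregate of both directions.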

The spec covers an IO protocol, a memory protocol to allow a host to share memory with an accelerator, and a coherency interface.

V1.0 of the CXL specification has been ratified by the group and applies to CPU-device linkage and memory coherency for data-intensive applications. It is said to be an open specification, aimed at encouraging an ecosystem for data centre accelerators and other high-speed enhancements.

If you join the consortium you get a copy of the spec. The members listed above are called the founding promoter group. There is a CXL website with a contact form.

PCIe post modernism

The Gen-Z Consortium is working on pooled memory shared by processors, accelerators and network interface devices. Cisco, Dell EMC, Google, HPE, Huawei, and Microsoft are among dozens of members. Notable absentees include Alibaba, Facebook and Intel.

OpenCAPI (Open Coherent Accelerator Processor Interface) was set up in 2016 by AMD, Google, IBM, Mellanox and Micron. Other members include Dell EMC, HPE, Nvidia and Xilinx. Intel was not a member and OpenCAPI was viewed as an anti-Intel group driven by and supporting IBM ideas. Check out the OpenCAPI website for more information.

Gen-Z and OpenCAPI have been perceived as anti-Intel in the sense that they want an open CPU-accelerator memory pool and linkage spec, rather than a field dominated by Intel’s own QuickPath Interconnect (QPI).

The CCIX (Cache Coherent Interconnect for Accelerators) group was founded in January 2016 by AMD, ARM, Huawei, IBM, Mellanox, Qualcomm, and Xilinx – but not Nvidia or Intel. Its goal is to devise a cache coherent interconnect fabric to exchange data and share main memory between CPUs, accelerators such as FPGAs and GPUs, and network adapters. CCIX also has an anti-Intel air about it.

All this implies that the CXL group is primarily an Intel-driven grouping, set up in opposition to CCIX, Gen-Z and OpenCAPI.

Toshiba embraces shingling for next-gen MAMR HDDs

Toshiba this week confirmed it will deliver both conventional and shingled MAMR hard drives.

Scott Wright, director of HDD marketing at Toshiba America Electronic Components, told us MAMR will be used to “advance the capacity of both CMR (discrete track) recording and SMR (shingled track) recording.”

He added: “In theory, MAMR does not advance long-term areal density gain as far as what may be achievable with HAMR. MAMR is certainly the next step; HAMR is very likely an eventual future step up the AD (areal density) ladder.”

Areal Density

WD is adding shingled recording to its MAMR disk drives to increase areal density – and so capacity – to 16TB and then 20TB and beyond. MAMR SMR drives are not drop-in replacements for conventionally recorded PMR disk drives, but WD will also ship lower-capacity non-shingled MAMR drives. WD is also still researching HAMR and could move across to that technology eventually.

Seagate’s own energy-assist technology, HAMR (Heat-Assisted Magnetic Recording), will not, so far, use shingling.

Toshiba is investing in MAMR and HAMR and other magnetic recording technologies, and is working collaboratively with the leading storage heads and media vendors. It is less vertically integrated than Seagate and Western Digital which make their own components.

Toshiba said it would adopt MAMR at an investor conference in November 2018.


Its high-capacity 3.5-inch helium-filled drives have nine platters inside, compared with eight for WD and Seagate. This gives Toshiba more disk platter surface area to play with.

Wright told Blocks & Files: “In January we announced our 9-disk 16TB MG08 family (using TDMR). Since our MG08 announcement, both TDK (heads technology) and Showa Denko (media technology) have made their own announcements about their components being used in Toshiba’s MG08 16TB generation.”

Toshiba MG08 16TB disk drive

TDMR is Two-Dimensional Magnetic Recording, using two disk read heads to get a better read signal.

Nvidia acquires Mellanox for $6.9bn

Nvidia is buying Mellanox, the Ethernet and InfiniBand data centre networking supplier, for $6.9 billion.

Nvidia will acquire all of the issued and outstanding common shares of Mellanox for $125 per share in cash. Mellanox was capitalised at $5.9bn on Friday, March 8, the last trading day before the acquisition was announced.

Aaron Rakers, a senior Wells Fargo analyst writes: “We think this acquisition would provide Nvidia with greater scale in the data centre market where we think low-latency interface technology is becoming an increasingly important architectural component / consideration.”

Nvidia makes most of its GPU revenues from computer games equipment and has been moving into the data centre with Tesla GPUs, aimed at the AI and machine learning markets. Its market capitalisation is $91bn.

Announcing the acquisition, Nvidia said: “Together, Nvidia’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers and have as customers every major cloud service provider and computer maker.”

Nvidia’s NVLink technology links GPUs together, and an InfiniBand/Ethernet NVLink interface is a logical step.

The rise of NVMe over Fabrics technology is increasing demand for Ethernet and InfiniBand in data centre networking.

When two become one

With Mellanox under its wing, Nvidia says it will optimise data centre-scale workloads across the entire computing, networking and storage stack to achieve higher performance, greater utilisation and lower operating cost for customers.

Jensen Huang, founder and CEO of Nvidia, said: “The emergence of AI and data science, as well as billions of simultaneous computer users, is fuelling skyrocketing demand on the world’s data centres. Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant data centre-scale compute engine.”

Nvidia says customer sales and support will not change as a result of the transaction.

Acquisition target

Mellanox has effectively been in play as an acquisition target since June 2018, when it rebuffed a bid by Marvell. That triggered the interest of Starboard Value, an activist investor, which gained board-level influence over the company.

In October 2018, Mellanox hired an investment bank to help find a buyer, and Intel reportedly bid $5.5bn-$6bn in January 2019. Microsoft and Xilinx have been associated with $5bn bids for the company.

Mellanox posted 2018 revenues of $1.1bn, up 36 per cent on 2017, and net income of $134.3m. It lost $2.6m in 2017. Fourth quarter 2018 revenues were $290.1m, up 22.1 per cent, and net income was $42.8m. That compares to a $2.6m loss a year before. Mellanox said sales were helped by high demand for Spectrum Ethernet switches and LinkX cables and transceivers.

Vexata hives off software for partners to build 20 million IOPS arrays

Targeting cloud providers, Vexata has separated its software and hardware to offer file and block access software on commodity servers.

In October 2017 Vexata introduced a 7 million IOPS storage system, the VX-100, with specialised controller and SSD drive module hardware and software. The system provides high performance and the ability to scale performance and capacity separately.

VX-100 system

A diagram shows the basic design and illustrates the switched and lossless fabric connecting the FPGA-accelerated controllers and drive modules, the ESMs (Enterprise Storage Modules).

VX-100 basic architecture

Vexata’s controller software was split into control and data planes, and the overall design featured parallel access to the NVMe SSDs. The ESMs are intelligent and run VX-OS Data code for SSD I/O scheduling and metadata management.

The VX-100 recorded a good SPC-2 benchmark result in September 2018, combining high performance with low cost.

VX-Cloud Data Acceleration Platform

Vexata’s new offering is called the VX-Cloud Data Acceleration Platform. In effect the VX-100 front-end IO controllers (IOCs) are now commodity x86 servers, still with FPGAs, and the intelligence of the back-end ESMs is aggregated into another x86 server with a chassis full of NVMe SSDs.

Rick Walsworth, Vexata product marketing executive, told Blocks & Files: “For the VX-Cloud solution, the ESM hardware is replaced by the Data Store Nodes (x86 servers loaded with NVMe SSDs), where the VX-OS code has been modified to run on the open platform nodes, and the IOCs are replaced by Data Acceleration nodes (x86 servers with FPGAs) running VX-OS code in both the x86 CPU (control path) and FPGA (data path).”

Vexata says the design uses x86 servers that can scale within or across racks to deliver cloud-scale block and file data services for high-performance transactional databases, decision support systems, advanced analytics, high performance computing, machine learning and artificial intelligence workloads. Twenty million IOPS has been put forward as a performance marker.

The company says VX-Cloud use cases include risk analytics, trading system analytics, financial modeling, cyber-security, IoT analytics, autonomous AI, and deep learning. VX-Cloud works with database, analytics and AI platforms such as Oracle RAC, SQL Server, Postgres, SAS, Kdb+, Cassandra and TensorFlow.

Reference architectures

VX-Cloud has three main elements: Acceleration (front-end controllers), Distribution, and Aggregation (back-end SSD controllers).

VX-Cloud attributes, showing file and block access protocols

The system provides multi-parity data protection, high availability failover, encryption, volume management, thin provisioning, snapshots and clones, in-line data reduction, REST APIs, and granular I/O monitoring and analytics. It provides block and a raft of file interfaces.

Vexata is working with Fujitsu and Supermicro to craft reference architecture systems using FPGA-accelerated servers as the controller base. It claims VX-Cloud achieves up to 20X improvements in IOPS and bandwidth at consistent low-latency (think 200μs) for random, mixed read/write traffic, at meaningfully lower $/GB acquisition costs compared to premium SSD tiers available from public cloud providers today.

VX-Cloud will be delivered through strategic partners in the first instance, and general availability is expected after June. So we should see VX-Cloud shippable on Fujitsu Primergy and Supermicro BigTwin servers in the USA later this year.

Your occasional storage roundup, including Datrium, Veeam, SUSE, Retrospect and more

Here we are in March already with another raft of storage news announcements for the start of the Spring season.

Datrium jumps aboard the subscription train

Sort of hyperconverged system supplier Datrium is adopting a subscription business model, called Datrium Forward.

Customers get:

  • Portable licenses that can be used and moved across heterogeneous hardware versions and cloud infrastructures
  • Term-based software licensing on cloud in 1+ year options, and in three- or five-year options on prem
  • Host software, with support included, is priced per node/year
  • Persistent storage software, with support included, is priced per TB/year.

Renewals for Datrium Forward are consistent into the future for 10 years or longer, so there’s no forklift-related sticker shock in later years.

The company says Datrium’s data node, with commodity no-haggle pricing, is much cheaper than land-locked appliance solutions that bundle hardware and software. It claims pricing is up to 95 per cent less than the list price of popular storage arrays – and its software delivers more than just storage.

Tim Page, CEO at Datrium, says: “SANs and array-based systems are no longer a viable option for the enterprise given increasingly demanding workloads and the push to the cloud across industries. Organizations are expected to move critical workloads to the cloud and between clouds and are failing to do so with antiquated storage and HCI systems.”

Antiquated HCI systems? They were mostly developed less than 10 years ago.

As for enterprise viability, IDC’s latest storage tracker shows no hint of hyperconverged kit or SAN sales declining.

Never mind that. Page declares: “Datrium Forward makes it possible for customers to pay for value and achieve total data centre portability, instead of lining vendors’ pockets. This is the future of acquiring data center infrastructure and keeping it current.”

Retrospect looks forward

Thirty-year-old and now private equity-owned backup vendor Retrospect has launched v16 of its software for Windows and Mac users.

EMC bought Retrospect in 2004 but sold it on to private equity in 2011. It’s a small and medium business backup vendor for customers with mixed Windows and Mac environments.

V16 introduces a premium version of the Management Console which adds the setup and sending of backup scripts to any Retrospect ‘engine’ so you don’t have to manually add scripts to each Retrospect instance.

It also adds concurrent Retrospect instance writes to a single backup set destination, enabling up to a 16x increase in overall backup performance. This is called Storage Groups and, for users, means reducing the backup window and better utilisation of network resources and bandwidth. There should be a good improvement in the potential RPO (Recovery Point Objective), or in the backup scope and number of devices backed up within a backup window.

New deployment tools can automatically deploy client agents, initially in conjunction with Desktop Central for Windows and Munki for Mac systems. More integrations are planned in the future.

SUSE’s irritating survey

Open-source Linux supplier SUSE surveyed 2,000 UK adults and found:

  • Almost a third (31 per cent) of consumers believe the amount of data stored on their mobile devices has ‘increased significantly’ in the last five years
  • Two fifths (40 per cent) of consumers store at least ten more applications on their mobile devices now compared to 2015, rising to more than 25 new applications for almost one fifth (18 per cent) of respondents
  • If caught short on storage space, only one in ten (11 per cent) consumers would keep all of their data but pay for more storage
  • In fact, over half of respondents (51 per cent) would delete data if they needed more space on their mobile device.

So what? Matt Eckersall, Regional Director, EMEA West at SUSE, says: “There is no doubt that consumer views, habits and expectations around data storage filter through to the enterprise.”

That’s a bit of a stretch in Blocks & Files’ view.

Eckersall adds: “The blurring of lines between work and personal life means consumer behaviour is often mirrored in the workplace. Data growth impacts both individuals and businesses, where concerns around how best to store data are now at an all-time high.”

Ah, we’re getting there; it’s about enterprise data growth, and…

SUSE says businesses need to factor consumer habits around data storage into enterprise storage infrastructure plans.

And the point is? SUSE is working with the Ceph and openATTIC open source project communities to deliver enterprise storage technology that is intelligent, scalable and cost effective.

Heaven forfend but this is one of the more useless supplier-runs-a-survey-to-tell-us-we-need-its-product exercises we’ve come across.

Veeam’s N2WS gets Amazonian

Veeam’s N2WS has introduced N2WS Backup & Recovery 2.5 with the Resource Control console. This enables you to shut off single EC2 compute instances or groups of them, and RDS (Relational Database Services) resources, when idle.

Uri Wolloch, CTO at N2WS, said: “Being able to simply power on or off groups of Amazon EC2 and Amazon RDS resources with a simple mouse click is just like shutting the lights off when you leave the room or keeping your refrigerator door closed; it reduces waste and saves a huge amount of money if put into regular practice.”

RDS and EC2 instances can be shut off on demand or according to pre-set schedules.

V2.5 also optimizes the process for cycling Amazon EBS (Elastic Block Store) snapshots into the N2WS S3 (Simple Storage Service) repository. The company claims compression and deduplication enhancements can yield savings north of 60 per cent for backups stored for more than two years.

Two new AWS Regions are supported by v2.5: AWS Europe (Stockholm) and the new AWS GovCloud (US-East). It offers automated cross-region disaster recovery between the AWS GovCloud (US-East and US-West) Regions.

The new release has an expanded range of APIs relating to configuring alerts and recovery targets.

Shorts

Backup supplier Acronis is sponsoring E-Prix race car concern DS TECHEETAH, which competes in the ABB FIA Formula E championship. Acronis is supplying backup, storage, and disaster recovery products and gets its logo on the DS E-TENSE FE19 car.

DS E-TENSE FE19 race car.

Cloud backup service provider Backblaze has an interesting blog about SSD reliability. It surveys the different types of SSDs and failure modes. The conclusion is: “selecting a good quality SSD from a reputable manufacturer should be enough to make you feel confident that your SSD will have a useful life span.”

Object storage supplier Cloudian has reported a fourth consecutive year of record revenue and 80 per cent customer count growth, to more than 300 customers. It shipped six times more appliance capacity – more than 250PB – than in the whole of the previous year. Cloudian claims it is the most widely adopted independent provider of object storage solutions – meaning it is better than Caringo, Scality or OpenIO.

Dell EMC has classed composable infrastructure supplier DriveScale as a Tier Enterprise Infrastructure Global Partner. DriveScale software is compatible with Dell EMC PowerEdge servers, Ethernet switches and data storage products. The twosome are targeting big data, machine learning, NoSQL and massively parallel processing solutions optimised for Kubernetes and containers on bare metal deployments in cloud and enterprise data centres.

Cloud-native scale-out filer Elastifile has become an SAP PartnerEdge Open Ecosystem – Build partner. It says this strengthens the bonds between Elastifile’s file storage and SAP’s cloud application suite.

In-memory computing supplier GridGain has launched GridGain Community Edition. It includes the Apache Ignite code base plus patches and additional functionality to improve performance, reliability, security and manageability. The software enables GridGain to quickly deploy patches and upgrades for the Apache Ignite community, faster than the normal Ignite release cycle.

An IBM Redbook explains how IBM Aspera sync can be used to protect and share data stored in Spectrum Scale file systems across distances of several hundred to thousands of miles. It explains the integration of Aspera sync with Spectrum Scale and differentiates it from solutions built into Spectrum Scale for protection and sharing. The Redbook also elaborates on different use cases for Aspera sync with Spectrum Scale.


TIBCO Software has acquired in-memory data platform SnappyData, which has Apache Spark-based, unified in-memory cluster technology. Tibco’s Connected Intelligence platform gets a unified analytics data fabric that enhances analytics, data science, streaming, and data management. It says the result is up to twenty times faster than native Apache Spark, while scaling to support large Apache Spark-compatible stores.

Veritas Technologies is playing the survey game as well as SUSE. It says it’s unearthed just how deep the productivity crisis goes. On average, UK employees lose two hours a day searching for data, resulting in a 16 per cent drop in workforce efficiency. How can we fix this terrible problem? Would you believe that UK organisations that invest in effective day-to-day management of their data report cost savings and better employee productivity as a result?

How do they do that? Did you know Veritas can help global organisations harness the power of their data with a centralised data management strategy?

Hot stuff. Wasabi, which offers cloud storage that is 1/5th the price and up to 6x the speed of Amazon S3, is opening its first EU data centre in Amsterdam. This complements two existing US data centres. David Friend, Wasabi CEO, says Wasabi’s “API is 100 per cent S3 compliant and there are no charges for anything other than the amount of data stored – no egress fees, no charges for PUT, LIST, DELETE or other API calls.” 

Customer

GPU-accelerated data warehouse developer SQream has signed LG Uplus, a mobile carrier owned by LG Corporation. This is expected to improve LG Uplus’ network operations and efficiencies, reduce costs and downtime, and offer better quality of service to customers.

LG Uplus becomes SQream’s first customer in South Korea as the company grows its worldwide market share within the telecom industry. The LG Uplus hardware includes IBM’s POWER9-based AC922 server with NVIDIA V100 Tensor Core GPUs and IBM FlashSystem 9100 storage.

IBM Cognitive Systems VP of AI and HPC, Sumit Gupta, said: “This combination of technologies is planned to allow LG Uplus to integrate with Hadoop and interface with base station probes for log analysis for analysis of the company’s very large data stores.”

People

Brett Shirk

Rubrik has appointed Brett Shirk as its Chief Revenue Officer, responsible for driving Rubrik’s global go-to-market strategy and reporting to Co-founder and CEO Bipul Sinha. He was VMware’s SVP and GM for the Americas. Before that he was with Symantec and Veritas, which Symantec acquired. Mark Smith resigned as Rubrik’s EVP for Global Sales and Business Development in December 2018 and his LinkedIn status is “Retired.” 

The Optane Persistent Memory mystery. Can it last the course?

When Intel announced Optane memory technology in July 2015, it claimed endurance was 1,000 times greater than NAND. At the time this was taken to mean NAND endurance limitations were eliminated.

Since then the company has been markedly reluctant to discuss real world examples of Optane DIMM endurance. This raises doubts about its utility as a DRAM extender and substitute.

What does Intel say?

Blocks & Files has asked Intel three questions about this:

  • What is the write endurance of Optane DC Persistent Memory?
  • Can host server applications write direct to Optane DC Persistent Memory in app direct mode or does Intel control such write access? If so, how does it control such access?
  • If Optane DC Persistent Memory has, for example, a write endurance of 200,000 cycles, this suggests its use would be restricted to read-centric persistent memory applications and prevent its use in write-centric persistent memory applications. What are appropriate application types for using Optane DC Persistent Memory?

Intel spokesperson Simon Read sent us this note: “Our next generation Xeon Scalable processors (codename Cascade Lake) supports Optane DC persistent memory in both operating modes, memory mode and app direct (application direct) mode.

“We will be unveiling more details on Optane DC persistent memory, including read/write endurance and performance, when we formally announce the product with the general availability of our next-generation Xeon Scalable processors.”

This suggests that the November 2018 announcement of Cascade Lake AP support of Optane DIMMs was a tad premature.

Obtaining Optane

Optane is Intel and Micron’s 3D XPoint technology offering storage-class or persistent memory with near-DRAM speed and higher density but not DRAM’s near-infinite lifecycle.

Intel says Optane can function as a persistent memory adjunct to DRAM, enlarging the memory space with sub-DRAM cost, near-DRAM speed and data persistence. It says that Cascade Lake AP processors support such Optane DC Persistent Memory. However, Intel is not revealing the endurance (write cycles or cycling) of its Optane DC Persistent Memory products, the Optane DIMMs.

According to Intel documentation, the 750GB DC P4800X Optane SSD handles up to 60 DWPD (Drive Writes Per Day) for five years, or 82PB written. This is excellent compared to NAND’s 1-10 DWPD, but not 1,000 times better.

Mark Webb, an independent semiconductor analyst, says the Intel 3DXP cycling performance “was >1M cycles… until they actually made a product. Then it started to drop.”

At 30 DWPD, the P4800X Optane SSD has total write/erase cycles of 32,850 per cell, Objective Analysis analyst Jim Handy has calculated.

Compare this to Intel’s DC P3700 1.6TB standard NAND SSD, which has 31,025 write/erase cycles per cell. This is almost identical to Optane, i.e. nowhere near 1,000 times better.
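These figures follow from straightforward arithmetic. A sketch (assuming perfectly uniform wear across cells and ignoring over-provisioning and write amplification; the three-year service life in the second calculation is our inference from Handy's published number):

```python
# Endurance arithmetic for the 750GB DC P4800X.
# Assumption: uniform wear, no over-provisioning or write amplification.
CAPACITY_BYTES = 750e9

def petabytes_written(dwpd, years):
    """Total bytes written over the service life, in PB."""
    return CAPACITY_BYTES * dwpd * 365 * years / 1e15

def cycles_per_cell(dwpd, years):
    """Approximate write/erase cycles per cell under perfectly even wear."""
    return dwpd * 365 * years

# 60 DWPD for five years reproduces Intel's 82PB-written figure.
print(petabytes_written(60, 5))    # 82.125 PB
# 30 DWPD over three years reproduces Handy's 32,850 cycles-per-cell estimate.
print(cycles_per_cell(30, 3))      # 32850
```

The cycles-per-cell number is simply DWPD multiplied by days in service, which is why a higher DWPD rating over the same life implies proportionally more per-cell cycles.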

Handy also calculated that a 16GB Optane DIMM has 11,406 write cycles per cell and a 32GB version 5,703. Admittedly this was in 2017 and Optane controller technology could have improved.

Direct XPoint DIMM writes

Optane DIMMs, with capacities of 128GB, 256GB or 512GB, can be written in memory mode or app-direct (DAX) mode. Intel says: “Data is volatile in Memory Mode; it will not be saved in the event of power loss. Persistence is enabled in the second mode, called App Direct.”
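In App Direct mode an application maps persistent memory into its address space (typically via a DAX filesystem) and uses plain loads and stores plus explicit flushes, rather than block I/O. A rough illustration of that programming pattern, simulated here with an ordinary memory-mapped file since real Optane hardware is assumed unavailable:

```python
import mmap
import os
import tempfile

# Simulate the App Direct load/store-plus-flush pattern with an mmap'd file.
# On real persistent memory the store would go straight to the Optane DIMM
# and the flush would be a CPU cache-line flush rather than a page writeback.
path = os.path.join(tempfile.mkdtemp(), "pmem_sim")
with open(path, "wb") as f:
    f.truncate(4096)                    # pretend this is a 4KB pmem region

with open(path, "r+b") as f:
    pmem = mmap.mmap(f.fileno(), 4096)  # byte-addressable mapping
    pmem[0:11] = b"hello pmem!"         # a plain store, no write() syscall
    pmem.flush()                        # make the store durable
    pmem.close()

with open(path, "rb") as f:
    data = f.read(11)                   # the store survived the flush
print(data)
```

The point of the pattern is that persistence is the application's responsibility: a store that is never flushed may be lost on power failure, which is exactly why endurance under direct application writes matters.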

An analyst confidentially suggested to us that Optane “DIMMs actually work with Cascade Lake SP… not just AP. The actual volumes are very low mostly due to Intel controlling it. Apparently a lot of Intel hand-holding is going on.”

He said customers want “app-direct mode for [the] fastest NVMe storage/memory. But…this is a small market (<5 per cent of server market TAM).”

And he thinks Intel prevents applications writing direct to Optane DIMM memory to prevent the stuff wearing out. Is this the case?

We asked Handy about this. “Yes, applications can write directly to the persistent memory,” he replied. “The DIMM, though, does things that are not under the control of the application program, like wear leveling and encryption, so when you ask if Intel controls write accesses I would reply that they ‘kind of’ do, but probably not in the way that you were asking about.”

He sent us an SNIA diagram showing how persistent memory could be accessed:

Intel is an SNIA member and is, we infer, in sync with SNIA views.

If applications or system software can write directly to Optane DIMMs without limit, then the DIMM could wear out, and its endurance is hugely important.

The right write cycle number

With no statement as yet from Intel, there is no industry analyst consensus concerning Optane DIMM endurance.

Howard Marks, Deep StorageNet founder, said: “I’ve seen [Optane DIMM] estimates from 50,000-500,000 cycles. I don’t really know.”

Webb told us: “If you were to cycle 3DXP with reasonable BER (Bit Error Rate) specs and DPMs (Defective Parts per Million), with its on chip over-provisioning, we believe it can be cycled about 100K times with options being implemented for 200K+. Plus I have seen third party testing showing it wears out above those cycles.”

In that case the Optane DIMM, used in a write-centric app-direct environment, could wear out in a matter of months.

According to Handy, SK Hynix recently suggested, using internal modelling, that Optane endurance is 10 million cycles, with a slide deck image showing this:

“Existing SCM” means Optane.

So we have estimates of 50,000 to 500,000, 200,000 and 10 million write cycles plus this point from Rob Peglar, president at Advanced Computation and Storage LLC: “Raw endurance doesn’t really matter, since the DIMM controller hides a lot of it. You’ve seen what they are willing to publish using 3DX inside an SSD. Do not make the mistake of assuming that’s it, though.  DIMMs are far, far different than SSDs.”

Webb also implies Optane DIMMs have controllers: “You cannot write to it like main memory and you require complex software to manage it.” That software will run in a DIMM controller.

Handy said: “The DIMM … does things that are not under the control of the application program, like wear levelling and encryption,” which also implies a DIMM controller.

The Optane DIMM controller

If there is an Optane DIMM controller, what does it look like? We think the Optane DIMM has the equivalent of an FTL, a Flash Translation Layer – in this case an XTL, an XPoint Translation Layer. This XTL would add latency to XPoint data accesses.

Intel has not clarified this point, but we presume the controller would look after wear-levelling and over-provisioning to extend the DIMM’s endurance. That requires a logical-to-physical mapping function with logical byte, not block, addresses. With SSDs the translation layer operates at the block level; Optane DIMMs are byte-addressable, so the mapping would be at byte level.

Optane DIMMs come in 128GB, 256GB and 512GB capacities, so byte-level mapping tables would contain as many entries as there are bytes of capacity. For instance, a 512GB DIMM holds 512 x 1,073,741,824 – about 550bn – bytes, plus the over-provisioned bytes. The DIMM would need storage capacity to hold these mapping table entries.
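The scale of such a table is easy to estimate. A sketch for the 512GB DIMM (the one-entry-per-byte worst case; the 4-byte entry size is a purely illustrative assumption, not anything Intel has published):

```python
# Rough size of a hypothetical byte-granular XTL mapping table for a 512GB DIMM.
GIB = 1024 ** 3
capacity_bytes = 512 * GIB          # ~550bn bytes, before over-provisioning

entries = capacity_bytes            # one entry per byte: the worst case
ENTRY_BYTES = 4                     # illustrative assumption only
table_bytes = entries * ENTRY_BYTES

print(f"{entries / 1e9:.0f}bn entries")        # ~550bn
print(f"{table_bytes / 1e12:.1f}TB of table")  # ~2.2TB at 4 bytes per entry
```

A naive per-byte table would dwarf the DIMM itself, so however the controller actually organises its mapping, it presumably works at a coarser granularity or with a compressed scheme; that detail is not public.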

Roughly speaking it is as if an Optane DIMM is a DIMM-connected XPoint SSD.

Dealing with limited Optane DIMM endurance

What if Optane DIMM endurance is low? Handy said: “The DRAM locations that receive the most wear are things like loop counters that aren’t likely to benefit from persistence, so they are unlikely to be mapped by the system software into persistent memory. … We will learn more once Intel chooses to let us know.”

Marks said: “Because there’s still a significant latency difference between the 64GB DRAM DIMM and the 512GB XPoint one, applications will create a tiered model with indices and other frequently updated data structures in DRAM and colder, though still hot by current measure, data in XPoint.”
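A minimal sketch of the tiering Marks describes might look like the following. The class, names and promotion threshold are entirely illustrative (no real Intel or application API is shown), but it captures the idea of keeping frequently updated structures in the small DRAM tier and colder data in the larger XPoint tier:

```python
# Illustrative two-tier placement: hot, write-heavy keys are promoted to the
# small fast tier; everything else stays in the large persistent tier.
class TieredStore:
    def __init__(self, hot_threshold: int = 3):
        self.dram = {}       # small, fast, expensive tier
        self.xpoint = {}     # large, slower, persistent tier
        self.writes = {}     # per-key write counts (hypothetical heuristic)
        self.hot_threshold = hot_threshold

    def put(self, key, value):
        self.writes[key] = self.writes.get(key, 0) + 1
        if self.writes[key] >= self.hot_threshold:
            # Frequently updated structures (e.g. indices) move to DRAM.
            self.xpoint.pop(key, None)
            self.dram[key] = value
        else:
            self.xpoint[key] = value

    def get(self, key):
        # Check the fast tier first, then fall through to XPoint.
        return self.dram.get(key, self.xpoint.get(key))
```

A real system would demote cooled-down data back to XPoint and track recency as well as write counts; this sketch only shows the placement decision.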

Peglar rejects the idea that low (200,000) Optane DIMM endurance would restrict applications to read-centric use – and asks, so what if it did? “The entire question is baseless, without foundation,” he said. “The assumption is entirely incorrect…3DX IS NOT NAND. Also, even if the above assumption was true (which it isn’t) – what’s wrong with read-centric memory? It beats the hell out of no memory, having to fetch from SSD.”

And so we wait, as we have for a long time already, for Intel to reveal what it knows. Let’s hope it does not disappoint with overly limited write endurance.

A final point. If Optane DIMM write endurance is inadequate for its persistent memory role this opens the door to alternative PM technologies such as Crossbar’s ReRAM and MRAM from Everspin and, perhaps, Samsung.


RackTop pushes NAS with built-in security and compliance

RackTop Systems has raised $15m in Series A financing to grow the market for its line of NAS appliances featuring built-in encryption and compliance controls.

Founded in 2010 by US intelligence community veterans, RackTop specialises in what it calls CyberConverged arrays that integrate data storage and advanced security and compliance into a single platform.

BrickStor appliances in a 2U form factor

The firm’s BrickStor all-in-one data storage and management appliances come in a 2U rack-mount chassis with redundant power supplies and space for up to 12 3.5in SAS drives. There are two models: BrickStor Iron can be fitted with 12TB to 168TB of disk capacity, while BrickStor Titanium offers 4.8TB of all-flash capacity or up to 126TB of mixed flash and disk. Both can be configured with dual 10Gbit Ethernet ports or 4 x 1Gbit Ethernet ports.

Eric Bednash, RackTop’s co-founder and CEO, claimed that BrickStor helps business with the problems of storing and managing large volumes of data, while at the same time protecting that data and addressing compliance requirements.

RackTop’s BrickStor architecture

The built-in encryption appears to come from RackTop using Seagate FIPS-certified self-encrypting drives. The advantage here is that there is no impact on performance, as there would be if the array controller had to encrypt and decrypt every write and read.

Another feature of BrickStor, Secure Global File Share (SecureGFS), is designed to allow users to collaborate and share files internally and externally without sacrificing security or compliance, thanks to encrypted file sharing over the LAN and WAN.

RackTop will use the new funds on product development and to expand its sales channel. It is targeting customers in industries such as the public sector, financial services, health care and life sciences and claims to have customers worldwide already using its platform to manage upwards of 50 petabytes of data.

Participants in the funding round include Razor’s Edge Ventures, Grotech Ventures and Blu Venture Investors.

Cohesity says: We will bring applications to the data and tackle your infrastructure sprawl

Cohesity has launched an applications marketplace that will enable customers to buy third-party apps on its Data Platform.

The company says the initiative brings applications to the data and enlists the help of ESG senior analyst Christophe Bertrand to tell us why this is important: “Bringing applications to the data, versus data to the applications, helps enterprises increase data intelligence and reduce infrastructure sprawl that contributes to the problem of data silos and mass data fragmentation.”

At launch four third-party and three in-house applications are available on the Cohesity MarketPlace.

The third-party apps are:

  • Splunk Enterprise – for data set analysis and investigation
  • SentinelOne Anti-Virus – check virus contamination using SentinelOne’s libraries
  • Clam Anti-Virus – open-source application that runs directly on file data in the data platform
  • Imanis Data – backup NoSQL workloads into the data platform

The Cohesity applications are:

  • Insight: Search data as it is stored on the data platform for compliance, legal, or day-to-day business needs
  • Spotlight: Monitor modifications to file data to check for a potential internal or external security breach, like a ransomware attack. Search audit logs and obtain alerts on who is creating, modifying, accessing, or deleting files
  • EasyScript: Access to script creation elements, along with Cohesity APIs and sample scripts

Open Sesame!

Cohesity makes master versions of secondary data and provides copies to developers and applications that need them, and converges separate secondary data stores – backups, near line filers, archives, etc. – into a single silo called the Cohesity Data Platform. This saves storage space and eases management.

With the Pegasus v6.3 release, Cohesity has opened its data platform via a software development kit (SDK) that provides APIs, documentation and tools for building a custom Data Platform direct access app.

Cohesity and direct access to its Data Platform.

Direct access apps give customers faster access to stored data as they no longer have to tell the Cohesity software to make a copy of selected stored data and then provide access to this copy for their application. Now the application can work directly on the data, saving time.
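The time saving can be sketched conceptually. The functions below are purely illustrative (no real Cohesity SDK calls are shown): the copy-based flow must materialise a full copy before the app can work, while the direct flow operates on stored data in place:

```python
# Conceptual comparison of copy-based vs direct access to stored data.
# 'store' stands in for the data platform; nothing here is a real Cohesity API.

def copy_based_scan(store: dict, predicate) -> list:
    """Old flow: request a copy of the data, then run the app against the copy."""
    snapshot = dict(store)  # extra copy step the application must wait for
    return sorted(k for k, v in snapshot.items() if predicate(v))

def direct_scan(store: dict, predicate) -> list:
    """Direct access flow: the app works on the stored data itself."""
    return sorted(k for k, v in store.items() if predicate(v))
```

Both return the same answer; the difference is that the direct flow skips the copy-and-wait step entirely.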

Daniel Bernard, SentinelOne CMO, said: “By running the SentinelOne Nexus Application on the Cohesity DataPlatform, customers get next generation AI-powered threat prevention without having to transfer any files or connect their clusters to the internet, ensuring a greater degree of protection across all enterprise data, no matter where it is stored.”

For more information tune in to Cohesity founder and CEO Mohit Aron’s video presentation about direct access – Bringing Smartphone-like Simplicity to Secondary Data and Applications.

NGD Systems heralds in-situ processing for NVMe SSDs

NGD Systems has announced general availability of the first product in the Newport platform of high capacity NVMe SSDs.

The Irvine, Calif. startup makes big claims for the product family which provides computational storage via in-situ processing to alleviate host CPU-memory-storage bottlenecks. Performance scales as more drives are added.

Nader Salessi, CEO of NGD Systems, said the company offers the “industry’s highest capacity NVMe, smallest footprint, and the most power-efficient NVMe SSDs on the market. This makes it possible to perform in-situ processing within the storage device itself without having to trade power, space or cost to do so.”

Don Jeanette, storage research VP at TrendFocus, provided a canned quote: “By eliminating the need to move data before processing, the Newport Platform drastically reduces latency and system level power consumption.”

Let’s run through the spec for the first Newport product, a 14nm ASIC-based 16TB U.2 NVMe SSD, which makes its debut today. NGD says this is the world’s densest and lowest-powered drive, drawing 8W at peak load.

Newport has up to 16 flash channels and supports NVMe v1.3 and PCIe Gen 3.0 x4.

NGD Newport U.2 SSD.

According to NGD, a Hadoop Terasort with 4 Newport drives per node is faster than unassisted Hadoop nodes and needs only 8-core hosts, as opposed to 16-core varieties. A Microsoft image query processing time is up to 4x faster with Newport drives compared to host-only processing.

Microsoft image query processing times with and without NGD SSDs.

Newport breeding programme

Newport products have a quad-core, 64-bit ARM processor running Ubuntu and supporting Docker. 

There is an up to 8TB capacity M.2 “gumstick” card version, an up to 32TB EDSFF (ruler format) variant, and the U.2 (2.5-inch) form factor runs up to 32TB of 3D TLC (3bits/cell) NAND. A larger-capacity AIC (add-in card) part will store up to 64TB. These additional Newport products will be released later this year, and a 32TB U.2 drive is expected in July 2019.

NGD is targeting Newport devices at hyperscalers and intelligent internet edge applications. Computation at the edge limits the data sent to a central location, according to the company, and enables faster processing with a smaller edge device CPU. This saves electricity.

NGD processor path diagram from an SNIA presentation.

Other drive-level computation work could involve video transcoding, network monitoring stream pre-processing, RAID parity calculations and compression.

Newport can be deployed just as a dense, low-power drive with in-situ processing turned off or as a computational storage drive offloading the host CPU.
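The offload idea can be illustrated with a toy filter. Nothing here reflects NGD's actual on-drive programming model; it simply shows why pushing a predicate down to the drive shrinks the data crossing the host interface:

```python
# Toy comparison of host-side vs in-situ (computational storage) filtering.
# The second return value counts records that cross the host interface.

def host_side_filter(records: list, predicate) -> tuple:
    """Conventional flow: every record is shipped to the host, then filtered."""
    transferred = list(records)              # all records cross the interface
    return [r for r in transferred if predicate(r)], len(transferred)

def in_situ_filter(records: list, predicate) -> tuple:
    """Computational storage flow: filtering runs on the drive's ARM cores."""
    matches = [r for r in records if predicate(r)]
    return matches, len(matches)             # only matches cross the interface
```

With a selective predicate the results are identical but the in-situ path moves a fraction of the data, which is where the latency and power savings come from.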


What will Dell EMC’s unified mid-range storage box look like? Let’s not find out…

Blocks & Files tested out some ideas about Dell EMC’s upcoming unified mid-range storage box, the PowerToBeDecided, with Dell EMC and got precisely nowhere.

I thought I would be clever and suggest a set of features that Dell EMC could easily sign up to. I was wrong. A Dell EMC spokesperson firmly put me in my place, saying “it’s just too soon to give further details.”

Expect one more product release each for Unity, XtremIO, SC and ScaleIO, to prepare the way for migration to PowerToBeDecided, followed by PowerToBeDecided itself sometime in the second half of 2019.

For what it’s worth, my suggested feature set is this:

  • Controllers based on Dell servers
  • A new file system to cope with burgeoning file data growth
  • A new software stack to cope with new storage media
  • SSDs for fast access data
  • Disk drives for bulk capacity data
  • NVMe drive support
  • NVMe-oF support (Ethernet, Fibre Channel and TCP versions)
  • Architecture supporting storage-class (persistent) memory
  • Support for persistent memory NVDIMMs and SSDs
  • AI-driven admin like PowerMax
  • AI-driven migration from Unity + XtremIO + SC + ScaleIO to PowerToBeDecided.

Hammerspace adds Kubernetes support

Hammerspace has added data management facilities to deploy stateful apps across multiple Kubernetes clusters, on premises or in the cloud.

Kubernetes is the popular orchestration tool for managing containers – application micro-services. These are naturally stateless – any data used dies with the container. Storage – persistent data – has to be added when needed. Several suppliers, such as Datera, IBM, and Pure Storage, have enabled their storage arrays to be used by containers orchestrated by Kubernetes. This makes the containers stateful – state is saved when they die.

Hammerspace says databases such as MySQL and MongoDB require persistent data to be accessible from any cluster across the hybrid multi-cloud as they burst-to-cloud and back. Hammerspace can provide that accessibility and says it makes file data cloud-native.

It’s Hammerspace time

Hammerspace is a Silicon Valley-based startup that came out of stealth in October 2018. The SaaS company unifies distributed file silos into a single network-attached storage (NAS) resource. It can serve applications access to unstructured data in on-premises private, hybrid or public clouds, on demand.

Its software farms existing file storage metadata and provides data management services via a data control plane. Hammerspace supports file and object distribution across multiple clouds and locations, global search, stored item reporting and analytics applications.

Hammerspace conceptual scheme.

Now it can serve data to Kubernetes too, to help DevOps deploy Data-as-a-Microservice and scale stateful containerised apps across Kubernetes clusters.

David Flynn, CEO of Hammerspace, said in a press release announcing Kubernetes support: “Data must be abstracted from the infrastructure and managed at file-level granularity.  With Hammerspace, logic and contextual information can be stored as metadata for each data object, making data programmable and declarative, allowing developers to orchestrate the deployment of data along with their other microservices as they scale-out their apps.”