From today, customers can build their own converged infrastructure (CI) systems using Dell EMC Ready Stack blueprints. Previously this option was available only to channel partners.
Dell EMC has also given the VxBlock 1000, its pre-built CI system for customers, a makeover.
Once upon a time, customers typically built systems using a large set of components – servers, storage arrays, interface cards, power supplies, network switches and a whole bunch of software. That meant relying on multiple vendors for support and management.
Then CI systems such as EMC’s Vblocks came along. These ready-built systems integrated Cisco servers and networking and EMC storage components into a single system from one manufacturer – VCE. There was a single management scheme and one support throat to choke. These products evolved into the VxBlock line.
NetApp introduced FlexPod, a reference architecture combining NetApp storage with Cisco servers and networking. Channel partners could build these systems, which were seen as a kind of CI-lite.
Stacked up
Now Dell EMC has launched Ready Stack systems. These are called validated designs rather than reference architectures, but the idea is the same: a blueprint for building systems from components, all from Dell Technologies, that work together.
The company supplies blueprints for three systems:
Dell EMC PowerEdge MX servers and PowerMax storage plus VMware software combined into an Infrastructure-as-a-Service system
PowerEdge gen 4 servers, Unity storage array and vSphere
Ditto with Hyper-V instead of vSphere
Dell EMC said channel partners and customers can build their own design using any of its server, storage, networking and data protection products – as you have been able to do all along.
Cisco has taken Cohesity into its SolutionsPlus program, enabling internal sales teams to flog Cohesity software products.
Cisco’s Vijay Venugopal, senior director, product management, HyperFlex, had a supporting quote: “Through the new solution, Cisco and Cohesity can offer the entire data stack for both primary and secondary data in an integrated architecture.”
Cohesity software converges all secondary storage into one repository. The company’s backup and recovery products, such as DataProtect, are now available on Cisco UCS servers and HyperFlex hyper-converged systems. The joint systems are available to all customers worldwide from March 15, 2019.
The Cisco SolutionsPlus agreement includes joint sales, marketing, service and support, and product roadmap alignment between Cisco and Cohesity.
Cisco is a strategic investor in Cohesity, participating in two funding rounds.
Druva has added Disaster-Recovery-as-a-Service based on the AWS public cloud. This means customers can avoid the expense of a second DR site or subscribe to DR facilities for the first time.
Druva began life as an endpoint backup and file sharing company with its InSync product. From there it moved to protect remote and branch offices with Phoenix, and on to provide data management-as-a-service with the Druva Cloud Platform. Druva bought CloudRanger in June 2018 to get into backup and DR for AWS.
Phoenix can automatically fail over and spin up virtual machines in the AWS public cloud, but CloudRanger goes further, and this has enabled Druva to offer AWS-based DRaaS.
It works like this. An on-premises virtual machine, or one in the VMware Cloud on AWS, is backed up to Druva’s cloud in AWS. The backups are converted to AWS EBS snapshots and stored in the customer’s virtual private cloud (VPC) in AWS. An EBS snapshot can be spun up as an EC2 instance if the originating VM fails.
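The failover step amounts to turning an EBS snapshot back into a bootable EC2 instance. Here is a minimal sketch of that step using the AWS boto3 SDK; the snapshot ID, names and instance sizing are hypothetical examples, and Druva’s own orchestration will differ:

# Sketch: recover a failed VM by launching an EC2 instance from an EBS snapshot.
# The snapshot ID, region, AMI name and instance type are hypothetical examples;
# Druva's DRaaS automates these steps inside the customer's VPC.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Register an AMI whose root volume is built from the backup snapshot.
ami = ec2.register_image(
    Name="druva-dr-recovered-vm",                # hypothetical name
    Architecture="x86_64",
    RootDeviceName="/dev/xvda",
    VirtualizationType="hvm",
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"SnapshotId": "snap-0123456789abcdef0",  # hypothetical snapshot
                "VolumeType": "gp2",
                "DeleteOnTermination": True},
    }],
)

# Spin the recovered machine up as an EC2 instance in the customer's VPC.
instance = ec2.run_instances(
    ImageId=ami["ImageId"],
    InstanceType="m5.large",                     # hypothetical sizing
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",         # hypothetical VPC subnet
)
print("Failover instance:", instance["Instances"][0]["InstanceId"])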
A diagram shows this scheme:
Druva Cloud DR diagram
Customers can recover data to an on-premises VM (failback) or in the cloud (failover), across Amazon regions or accounts.
You can watch this Druva Cloud DR video for another spin on the scheme:
Druva’s on-demand elastic disaster recovery site in AWS automates runbook execution – a compilation of routine procedures – and the DR testing process.
Customers can use the service to replicate VMs, clone full VPCs, and move them across regions for test and development or for resiliency.
Cloud DRaaS, Phoenix, CloudRanger and InSync share a common management layer enabling customers to apply common policies and monitor data at a global level.
The new disaster recovery capabilities are generally available in Q2 2019.
You wait for a bus for ages and then two come along at once. So it is with NVMe/TCP – Toshiba and Lightbits Labs have introduced NVMe/TCP products on the same day.
NVMe/TCP is the NVMe protocol running across Ethernet and TCP. It uses the data centre’s existing Ethernet infrastructure, unlike the RDMA version of NVMe-oF (RoCE). This means no specialised network components are required, says Eric Burgener, IDC research VP, which lowers costs and eases management overheads.
Using ordinary Ethernet, NVMe/TCP has a latency of roughly 200µs, compared with 100-120µs for RDMA-based NVMe-oF. The difference is of little practical consequence, especially when set against the millisecond-scale latencies typical of iSCSI and Fibre Channel access to a SAN array.
Lightbits Labs diagram showing different NVMe-oF schemes
Lightbits Labs
Lightbits Labs, an Israeli startup, officially emerged from stealth today, backed to the tune of $50m by industry heavyweights Dell EMC, Cisco and Micron, and sundry VC firms.
The company has introduced its LightOS software running inside a LightBox array with an optional LightField hardware accelerator.
LightOS presents a virtualised set of NVMe SSDs to accessing Linux servers, which run a standard NVMe/TCP client driver. LightOS provides a global flash translation layer (GFTL) which looks after wear-levelling across all the SSDs, doing a better overall job, Lightbits says, than drive-level FTLs.
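On the host side that standard client driver means attachment looks the same as for any NVMe/TCP target. A minimal sketch using nvme-cli, wrapped in Python, is below; the target address and NQN are hypothetical, and Lightbits’ own provisioning flow may differ:

# Sketch: attach a LightOS (or any NVMe/TCP) target from a Linux host using nvme-cli.
# The target address, port and NQN below are hypothetical; the kernel's standard
# nvme-tcp driver does the work, so no special NIC or RDMA configuration is needed.
import subprocess

TARGET_ADDR = "192.0.2.10"                      # hypothetical LightBox address
TARGET_NQN = "nqn.2019-03.com.example:subsys0"  # hypothetical subsystem NQN

subprocess.run(["modprobe", "nvme-tcp"], check=True)   # load the TCP transport
subprocess.run([
    "nvme", "connect",
    "-t", "tcp",            # transport: plain TCP over the existing Ethernet fabric
    "-a", TARGET_ADDR,      # target IP address
    "-s", "4420",           # standard NVMe-oF port
    "-n", TARGET_NQN,       # NVMe Qualified Name of the remote subsystem
], check=True)

# The remote namespaces now appear as local /dev/nvmeXnY block devices.
subprocess.run(["nvme", "list"], check=True)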
Lightbits deployment diagram
LightOS GFTL features thin provisioning and compression, data striping across drives, erasure coding and per-volume quality of service (QoS) to prevent ‘noisy neighbours’ hogging an unequal share of the system’s resources in a multi-tenant environment. There is also a REST API.
The LightField card adds hardware acceleration to the LightOS feature set. No information about hardware components has been released.
Lightbits says its NVMe/TCP system is scalable and can provide millions of IOPS. Its design provides low write latency (with a persistent write buffer), consistently low average read latency and low tail latency.
Tail latency refers to the occasional, much higher latencies at the far end of the response-time distribution; keeping their frequency and size down makes an array’s behaviour more consistent.
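In practice tail latency is usually quoted as a high percentile of the response-time distribution. A toy illustration of how such a figure is derived – the sample values are invented purely for illustration:

# Toy example of deriving median and tail-latency (99th/99.9th percentile) figures.
import random
random.seed(1)

# Mostly ~200us responses, with roughly one per cent slow outliers.
samples_us = [random.gauss(200, 20) for _ in range(9_900)]
samples_us += [random.uniform(1_000, 5_000) for _ in range(100)]
samples_us.sort()

p50  = samples_us[len(samples_us) // 2]
p99  = samples_us[int(0.99  * len(samples_us))]
p999 = samples_us[int(0.999 * len(samples_us))]
print(f"median ~{p50:.0f}us, p99 ~{p99:.0f}us, p99.9 ~{p999:.0f}us")
# The median looks fine; the tail is where the occasional slow IOs show up.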
LightOS and LightField are available separately to run inside a customer’s x86 servers, and LightOS supports Arm processors too. Both are ready to support QLC (4bits/cell) flash, the company says.
LightOS and LightField are available for purchase but we have no pricing or sales channel information. You can register for a demo here.
Toshiba KumoScale
KumoScale is Toshiba’s NVMe-oF array software, introduced in March last year. Conceptually it is Toshiba’s equivalent of LightOS.
KumoScale diagram.
KumoScale now supports TCP transport in its 3.9 production release, available today. It also supports RoCE v2 networks.
Toshiba does not build its own NVMe SSD arrays that could run KumoScale, and no array-building partners have yet been announced by Toshiba. A demo was run in November 2017 – yes, 2017 – at Supercomputing 2017 using Newisys’s NSS1160G-2N NVMe-oF hardware platform.
Update: A Toshiba spokesperson said: “We have several systems partners (and their reseller networks) that we have qualified with KumoScale. These partners include Supermicro, Tyan and Quanta, and their products are in our HCL (hardware compatibility list) that we share with customers.”
Western Digital is developing lower latency flash drives that are faster and more expensive than ordinary 3D NAND but slower and cheaper than DRAM.
The technology addresses the same market niche as Intel’s Optane 3D XPoint and Samsung’s Z-SSD i.e. applications that need more speed but not at DRAM prices.
At the Storage Field Day 18 event on February 28, 2019 Western Digital discussed a new low-latency flash (LLF) technology positioned between 3D NAND SSDs and DRAM in terms of access latency.
Video grab showing WD VP Luca Fasoli standing by a model of WD’s 96-layer 3D NAND die.
WD presenter Luca Fasoli, VP for memory product solutions, said: “We can actually create customised devices that are very fast…They are in the microsecond range of access time.” He showed a chart positioning such a technology.
The new technology would cost a tenth of DRAM but 3x more than 3D NAND, if we read the chart right. LLF costs would also decline in future along the NAND curve rather than the less steep DRAM curve.
Fasoli showed another chart depicting low latency flash (LLF) as faster than 3D NAND but slower than DRAM.
There could be a range of LLF products and Fasoli said WD would introduce LLF product when it thinks the time is right. Whether it will introduce drive or NVDIMM format products remains to be seen.
Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, Intel, and Microsoft are working together on Compute Express Link (CXL), a new high-speed CPU-to-device and CPU-to-memory interconnect technology.
The CXL consortium is the fourth high-speed CPU-to-device interconnect group to spring into existence. The others are the Gen-Z Consortium, OpenCAPI and CCIX. They are all developing open post-PCIe CPU-to-accelerator networking and memory pooling technology.
Unlike the others, CXL has the full backing of Intel. Blocks & Files suggests that the sooner CCIX, Gen-Z and OpenCAPI combine the better for them. An Intel steamroller is coming their way and a single body will be harder to squash.
CXL
The CXL group is incorporating as an open standard body, a v2.0 spec is in the works and “efforts are now underway to create innovative usages that leverage CXL technology.”
CXL benefits include resource sharing for higher performance, reduced software stack complexity, and lower overall system cost. We might expect CXL-compliant chipsets next year and servers and accelerators using them to roll out in 2020.
The CXL consortium aims to accelerate workloads such as AI and machine learning, rich media services, high performance computing and cloud applications. CXL does this by maintaining memory coherency between the CPU memory space and memory on attached devices.
Devices include GPUs, FPGAs, ASICs and other purpose-built accelerators, and the technology is built upon PCIe, specifically the PCIe 5.0 physical and electrical interface. That implies up to 128GB/s using 16 lanes.
The spec covers an IO protocol, a memory protocol to allow a host to share memory with an accelerator, and a coherency interface.
V1.0 of the CXL specification has been ratified by the group and applies to CPU-device linkage and memory coherency for data-intensive applications. It is described as an open specification, aimed at encouraging an ecosystem for data centre accelerators and other high-speed enhancements.
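That 128GB/s figure can be sanity-checked from the PCIe 5.0 per-lane rate; the quick arithmetic below counts both directions and ignores protocol overheads beyond line encoding:

# Back-of-envelope check of the PCIe 5.0 x16 bandwidth behind the ~128GB/s figure.
# Rounded, and ignoring packet/protocol overheads beyond the 128b/130b line encoding.
raw_rate_gt_s = 32                    # PCIe 5.0 signalling rate per lane, GT/s
encoding_efficiency = 128 / 130       # 128b/130b line code
lanes = 16

per_lane_gb_s = raw_rate_gt_s * encoding_efficiency / 8   # bits -> bytes
per_direction = per_lane_gb_s * lanes                      # ~63 GB/s one way
bidirectional = per_direction * 2                          # ~126 GB/s both ways
print(f"~{per_direction:.0f} GB/s per direction, ~{bidirectional:.0f} GB/s bidirectional")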
If you join the consortium you get a copy of the spec. The listed members above are called a funding promoter group. There is a CXL website with a contact form.
PCIe post modernism
The Gen-Z Consortium is working on pooled memory shared by processors, accelerators and network interface devices. Cisco, Dell EMC, Google, HPE, Huawei and Microsoft are among dozens of members. Notable absentees include Alibaba, Facebook and Intel.
OpenCAPI (Open Coherent Accelerator Processor Interface) was set up in 2016 by AMD, Google, IBM, Mellanox and Micron. Other members include Dell EMC, HPE, Nvidia and Xilinx. Intel was not a member and OpenCAPI was viewed as an anti-Intel group driven by and supporting IBM ideas. Check out the OpenCAPI website for more information.
Gen-Z and OpenCAPI have been perceived as anti-Intel in the sense that they want an open CPU-accelerator device memory pool and linkage spec, rather than a field dominated by Intel’s own QuickPath Interconnect (QPI).
The CCIX (Cache Coherent Interconnect for Accelerators) group was founded in January 2016 by AMD, ARM, Huawei, IBM, Mellanox, Qualcomm and Xilinx – but not Nvidia or Intel. Its goal is to devise a cache-coherent interconnect fabric to exchange data and share main memory between CPUs, accelerators such as FPGAs and GPUs, and network adapters. CCIX also has an anti-Intel air about it.
All this implies that the CXL group is primarily an Intel-driven grouping and set up in opposition to CCIX, Gen-Z and OpenCAPI.
Toshiba this week confirmed it will deliver both conventional and shingled MAMR hard drives.
Scott Wright, director of HDD marketing at Toshiba America Electronic Components, told us MAMR will be used to “advance the capacity of both CMR (discrete track) recording and SMR (shingled track) recording.”
He added: “In theory, MAMR does not advance long-term areal density gain as far as what may be achievable with HAMR. MAMR is certainly the next step; HAMR is very likely an eventual future step up the AD (areal density) ladder.”
Areal Density
WD is adding shingled recording to its MAMR disk drives to increase areal density – and so capacity – to 16TB and then 20TB and beyond. MAMR SMR Drives are not drop-in replacements for conventionally recorded PMR disk drives but WD will also ship lower-capacity non-shingled MAMR drives. WD is also still researching HAMR and could move across to that technology eventually.
Toshiba is investing in MAMR and HAMR and other magnetic recording technologies, and is working collaboratively with the leading storage heads and media vendors. It is less vertically integrated than Seagate and Western Digital which make their own components.
Toshiba said it would adopt MAMR at an investor conference in November 2018.
Its high-capacity 3.5-inch helium-filled drives have nine platters inside, compared with eight for WD and Seagate. This gives Toshiba more disk platter surface area to play with.
Wright told Blocks & Files: “In January we announced our 9-disk 16TB MG08 family (using TDMR). Since our MG08 announcement, both TDK (heads technology) and Showa Denko (media technology) have made their own announcements about their components being used in Toshiba’s MG08 16TB generation.”
Toshiba MG08 16TB disk drive
TDMR is Two-Dimensional Magnetic Recording, using two disk read heads to get a better read signal.
Nvidia is buying Mellanox, the Ethernet and InfiniBand data centre networking supplier, for $6.9 billion.
Nvidia will acquire all of the issued and outstanding common shares of Mellanox for $125 per share in cash. Mellanox was capitalised at $5.9bn on Friday, March 8, the last trading day before the acquisition was announced.
Aaron Rakers, a senior Wells Fargo analyst, writes: “We think this acquisition would provide Nvidia with greater scale in the data centre market where we think low-latency interface technology is becoming an increasingly important architectural component / consideration.”
Nvidia makes most of its GPU revenues from computer games equipment and has been moving into the data centre with Tesla GPUs, aimed at the AI and machine learning markets. Its market capitalisation is $91bn.
Announcing the acquisition, Nvidia said: “Together, Nvidia’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers and have as customers every major cloud service provider and computer maker.”
Nvidia’s NVLink technology links GPUs together, and an InfiniBand/Ethernet NVLink interface is a logical step.
The rise of NVMe over Fabrics technology is increasing demand for Ethernet and InfiniBand in data centre networking.
When two become one
With Mellanox under its wing, Nvidia says it will optimise data centre-scale workloads across the entire computing, networking and storage stack to achieve higher performance, greater utilisation and lower operating cost for customers.
Jensen Huang, founder and CEO of Nvidia, said: “The emergence of AI and data science, as well as billions of simultaneous computer users, is fuelling skyrocketing demand on the world’s data centres. Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant data centre-scale compute engine.”
Customer sales and support will not change as a result of this transaction.
Acquisition target
Mellanox has effectively been in play since June 2018, when it rebuffed a bid by Marvell. That triggered the interest of Starboard Value, an activist investor, which gained board-level influence over the company.
In October 2018, Mellanox hired an investment bank to help find a buyer, and Intel reportedly bid $5.5bn-$6bn in January 2019. Microsoft and Xilinx have been associated with $5bn bids for the company.
Mellanox posted 2018 revenues of $1.1bn, up 36 per cent on 2017, and net income of $134.3m. It lost $2.6m in 2017. Fourth quarter 2018 revenues were $290.1m, up 22.1 per cent, and net income was $42.8m. That compares to a $2.6m loss a year before. Mellanox said sales were helped by high demand for Spectrum Ethernet switches and LinkX cables and transceivers.
Targeting cloud providers, Vexata has separated its software and hardware to offer file and block access software on commodity servers.
In October 2017 Vexata introduced a 7 million IOPS storage system, the VX-100, with specialised controller and SSD drive module hardware and software. The system provides high performance and the ability to scale performance and capacity separately.
VX-100 system
A diagram shows the basic design and illustrates the switched, lossless fabric connecting the FPGA-accelerated controllers and the drive modules, or ESMs (Enterprise Storage Modules).
VX-100 basic architecture
Vexata’s controller software was split into control and data planes and the overall design featured parallel access to the NVMe SSDs. The ESMs are intelligent and run VX-OS Data code for SSD I/O scheduling and metadata management.
The VX-100 recorded a good SPC-2 benchmark result in September 2018, with high performance and low cost.
VX-Cloud Data Acceleration Platform
Vexata’s new offering is called the VX-Cloud Data Acceleration Platform. In effect, the VX-100 front-end IO controllers (IOCs) are now commodity x86 servers, still with FPGAs, and the intelligence of the back-end ESMs is aggregated into another x86 server with a chassis full of NVMe SSDs.
Rick Walsworth, Vexata product marketing executive, told Blocks & Files: “For the VX-Cloud solution, the ESM hardware is replaced by the Data Store Nodes (x86 servers loaded with NVMe SSDs) where the VX-OS code has been modified to run on the open platform nodes, and the IOCs are replaced by Data Acceleration nodes (x86 servers with FPGAs) running VX-OS code in both the x86 CPU (control path) and FPGA (data path).”
Vexata says the design uses x86 servers that can scale within or across racks to deliver cloud-scale block and file data services for high-performance transactional databases, decision support systems, advanced analytics, high performance computing, machine learning and artificial intelligence workloads. Twenty million IOPS has been put forward as a performance marker.
The company says VX-Cloud use cases include risk analytics, trading system analytics, financial modeling, cyber-security, IoT analytics, autonomous AI, and deep learning. VX-Cloud works with database, analytics and AI platforms such as Oracle RAC, SQL Server, Postgres, SAS, Kdb+, Cassandra and TensorFlow.
Reference architectures
VX-Cloud has three main elements: Acceleration (front-end controllers), Distribution and Aggregation (back-end SSD controllers).
VX-Cloud attributes, showing file and block access protocols
The system provides multi-parity data protection, high availability failover, encryption, volume management, thin provisioning, snapshots and clones, in-line data reduction, REST APIs, and granular I/O monitoring and analytics. It provides block and a raft of file interfaces.
Vexata is working with Fujitsu and Supermicro to craft reference architecture systems using FPGA-accelerated servers as the controller base. It claims VX-Cloud achieves up to 20X improvements in IOPS and bandwidth at consistent low-latency (think 200μs) for random, mixed read/write traffic, at meaningfully lower $/GB acquisition costs compared to premium SSD tiers available from public cloud providers today.
VX-Cloud will be delivered through strategic partners in the first instance, and general availability is expected after June. So we should see VX-Cloud shippable on Fujitsu Primergy and Supermicro BigTwin servers in the USA later this year.
Datrium has announced Datrium Forward, a new software licensing and pricing scheme. Its attributes include:
Portable licenses that can be used and moved across heterogeneous hardware versions and cloud infrastructures
Term-based software licensing on cloud in 1+ year options, and in three- or five-year options on prem
Host software, with support included, is priced per node/year
Persistent storage software, with support included, is priced per TB/year.
Renewal pricing for Datrium Forward stays consistent for 10 years or longer, so there’s no forklift-related sticker shock in later years.
The company says Datrium’s data node, with commodity no-haggle pricing, is much cheaper than land-locked appliance systems that bundle hardware and software. It claims pricing up to 95 per cent less than the list price of popular storage arrays – and says its software delivers more than just storage.
Tim Page, CEO at Datrium, says: “SANs and array-based systems are no longer a viable option for the enterprise given increasingly demanding workloads and the push to the cloud across industries. Organizations are expected to move critical workloads to the cloud and between clouds and are failing to do so with antiquated storage and HCI systems.”
Antiquated HCI systems? They were mostly developed less than 10 years ago.
As for enterprise viability, IDC’s latest storage tracker shows no hint of hyperconverged kit or SAN sales declining.
Never mind that. Page declares: “Datrium Forward makes it possible for customers to pay for value and achieve total data centre portability, instead of lining vendors’ pockets. This is the future of acquiring data center infrastructure and keeping it current.”
Retrospect looks forward
Thirty-year-old and now private equity-owned backup vendor Retrospect has launched v16 of its software for Windows and Mac users.
EMC bought Retrospect in 2004 but sold it on to private equity in 2011. It’s a small and medium business backup vendor for customers with mixed Windows and Mac environments.
V16 introduces a premium version of the Management Console, which adds the setup and sending of backup scripts to any Retrospect ‘engine’, so you don’t have to add scripts manually to each Retrospect instance.
It also adds concurrent Retrospect instance writes to a single backup set destination, enabling up to a 16x increase in overall backup performance. This feature is called Storage Groups and, for users, it means a shorter backup window and lower utilisation of network resources and bandwidth. There should be a good improvement in the achievable RPO (Recovery Point Objective), or in the backup scope and number of devices that can be backed up within a backup window.
New deployment tools can automatically deploy client agents, initially in conjunction with Desktop Central for Windows and Munki for Mac systems. More integrations are planned in the future.
SUSE has run a survey of consumer data storage habits on mobile devices. Among the findings:
Almost a third (31 per cent) of consumers believe the amount of data stored on their mobile devices has ‘increased significantly’ in the last five years
Two fifths (40 per cent) of consumers store at least ten more applications on their mobile devices now than in 2015, rising to more than 25 new applications for almost one fifth (18 per cent) of respondents
If caught short on storage space, only one in ten (11 per cent) consumers would keep all of their data but pay for more storage
In fact, over half of respondents (51 per cent) would delete data if they needed more space on their mobile device.
So what? Matt Eckersall, Regional Director, EMEA West at SUSE, says: “There is no doubt that consumer views, habits and expectations around data storage filter through to the enterprise.”
That’s a bit of a stretch in Blocks & Files’ view.
Eckersall adds: “The blurring of lines between work and personal life means consumer behaviour is often mirrored in the workplace. Data growth impacts both individuals and businesses, where concerns around how best to store data are now at an all-time high.”
Ah, we’re getting there; it’s about enterprise data growth, and…
SUSE says businesses need to factor consumer habits around data storage into enterprise storage infrastructure plans.
And the point is? SUSE is working with the Ceph and openATTIC open source project communities to deliver enterprise storage technology that is intelligent, scalable and cost effective.
Heaven forfend but this is one of the more useless supplier-runs-a-survey-to-tell-us-we-need-its-product exercises we’ve come across.
Veeam’s N2WS gets Amazonian
Veeam’s N2WS has introduced N2WS Backup & Recovery 2.5 with a Resource Control console. This enables you to shut off single or groups of EC2 compute instances and RDS (Relational Database Service) resources when idle.
Uri Wolloch, CTO at N2WS, said: “Being able to simply power on or off groups of Amazon EC2 and Amazon RDS resources with a simple mouse click is just like shutting the lights off when you leave the room or keeping your refrigerator door closed; it reduces waste and saves a huge amount of money if put into regular practice.”
RDS and EC2 instances can be shut off on demand or according to pre-set schedules.
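Under the covers this boils down to the same stop and start calls any AWS user can make. Here is a rough boto3 sketch of powering a group off; the instance identifiers are hypothetical, and N2WS layers grouping, scheduling and its console on top of calls like these:

# Sketch: power off a group of EC2 instances and an RDS instance to cut idle spend.
# The identifiers are hypothetical; N2WS Resource Control adds grouping, schedules
# and a console on top of calls like these.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
rds = boto3.client("rds", region_name="eu-west-1")

dev_instances = ["i-0123456789abcdef0", "i-0fedcba9876543210"]  # hypothetical IDs

ec2.stop_instances(InstanceIds=dev_instances)          # shut compute down when idle
rds.stop_db_instance(DBInstanceIdentifier="dev-db")    # hypothetical RDS instance

# ...and the matching start calls when the schedule resumes:
# ec2.start_instances(InstanceIds=dev_instances)
# rds.start_db_instance(DBInstanceIdentifier="dev-db")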
V2.5 also optimises the process for cycling Amazon EBS (Elastic Block Store) snapshots into the N2WS S3 (Simple Storage Service) repository. N2WS claims this, combined with compression and deduplication enhancements, can deliver savings north of 60 per cent for backups stored for more than two years.
Two new AWS Regions are supported by v2.5: AWS Europe (Stockholm) and the new AWS GovCloud (US-East). It offers automated cross-region disaster recovery between the AWS GovCloud (US-East and US-West) Regions.
The new release has an expanded range of APIs relating to configuring alerts and recovery targets.
Shorts
Backup supplier Acronis is sponsoring E-Prix race car concern DS TECHEETAH, which competes in the ABB FIA Formula E championship. Acronis is supplying backup, storage and disaster recovery products, and gets its logo on the DS E-TENSE FE19 car.
DS E-TENSE FE19 race car.
Cloud backup service provider Backblaze has an interesting blog about SSD reliability. It surveys the different types of SSDs and failure modes. The conclusion is: “selecting a good quality SSD from a reputable manufacturer should be enough to make you feel confident that your SSD will have a useful life span.”
Object storage supplier Cloudian has reported a fourth consecutive year of record revenue and 80 per cent customer count growth, to more than 300 customers. It shipped six times more appliance capacity – more than 250PB – than in the whole of the previous year. Cloudian claims it is the most widely adopted independent provider of object storage – meaning it reckons it is doing better than Caringo, Scality or OpenIO.
Dell EMC has classed composable infrastructure supplier DriveScale as a Tier Enterprise Infrastructure Global Partner. DriveScale software is compatible with Dell EMC PowerEdge servers, Ethernet switches and data storage products. The twosome are targeting big data, machine learning, NoSQL and massively parallel processing solutions optimised for Kubernetes and containers on bare metal deployments in cloud and enterprise data centres.
Cloud-native scale-out filer Elastifile has become an SAP PartnerEdge Open Ecosystem – Build partner. It says this strengthens the bonds between Elastifile’s file storage and SAP’s cloud application suite.
In-memory computing supplier GridGain has launched GridGain Community Edition. It includes the Apache Ignite code base plus patches and additional functionality to improve performance, reliability, security and manageability. The software enables GridGain to quickly deploy patches and upgrades for the Apache Ignite community, faster than the normal Ignite release cycle.
An IBM Redbook explains how IBM Aspera sync can be used to protect and share data stored in Spectrum Scale file systems across distances of several hundred to thousands of miles. It explains the integration of Aspera sync with Spectrum Scale and differentiates it from solutions built into Spectrum Scale for protection and sharing. The Redbook also elaborates on different use cases for Aspera sync with Spectrum Scale.
TIBCO Software has acquired in-memory data platform SnappyData, which has Apache Spark-based, unified in-memory cluster technology. Tibco’s Connected Intelligence platform gets a unified analytics data fabric that enhances analytics, data science, streaming, and data management. It says the result is up to twenty times faster than native Apache Spark, while scaling to support large Apache Spark-compatible stores.
Veritas Technologies is playing the survey game as well as SUSE. It says it has unearthed just how deep the productivity crisis goes. On average, UK employees lose two hours a day searching for data, resulting in a 16 per cent drop in workforce efficiency. How can we fix this terrible problem? Would you believe that UK organisations that invest in effective day-to-day management of their data report cost savings and better employee productivity as a result?
How do they do that? Did you know Veritas can help global organisations harness the power of their data with a centralised data management strategy?
Hot stuff. Wasabi, which offers cloud storage that is 1/5th the price and up to 6x the speed of Amazon S3, is opening its first EU data centre in Amsterdam. This complements two existing US data centres. David Friend, Wasabi CEO, says Wasabi’s “API is 100 per cent S3 compliant and there are no charges for anything other than the amount of data stored – no egress fees, no charges for PUT, LIST, DELETE or other API calls.”
Customer
GPU-accelerated data warehouse developer SQream has signed LG Uplus, a mobile carrier owned by LG Corporation. This is expected to improve LG Uplus’ network operations and efficiencies, reduce costs and downtime, and offer better quality of service to customers.
LG Uplus becomes SQream’s first customer in South Korea as the company grows its worldwide market share within the telecom industry. The LG Uplus hardware includes IBM’s POWER9-based AC922 server with Nvidia V100 Tensor Core GPUs and IBM FlashSystem 9100 storage.
IBM Cognitive Systems VP of AI and HPC Sumit Gupta said: “This combination of technologies is planned to allow LG Uplus to integrate with Hadoop and interface with base station probes for log analysis of the company’s very large data stores.”
People
Brett Shirk
Rubrik has appointed Brett Shirk as its Chief Revenue Officer, responsible for driving Rubrik’s global go-to-market strategy and reporting to Co-founder and CEO Bipul Sinha. He was VMware’s SVP and GM for the Americas. Before that he was with Symantec and Veritas, which Symantec acquired. Mark Smith resigned as Rubrik’s EVP for Global Sales and Business Development in December 2018 and his LinkedIn status is “Retired.”
When Intel announced Optane memory technology in July 2015, it claimed endurance was 1,000 times greater than NAND. At the time this was taken to mean NAND endurance limitations were eliminated.
Since then the company has been markedly reluctant to discuss real world examples of Optane DIMM endurance. This raises doubts about its utility as a DRAM extender and substitute.
What does Intel say?
Blocks & Files has asked Intel three questions about this:
What is the write endurance of Optane DC Persistent Memory?
Can host server applications write direct to Optane DC Persistent Memory in app direct mode or does Intel control such write access? If so, how does it control such access?
If Optane DC Persistent Memory has, for example, a write endurance of 200,000 cycles, this suggests its use would be restricted to read-centric persistent memory applications and prevent its use in write-centric persistent memory applications. What are appropriate application types for using Optane DC Persistent Memory?
Intel spokesperson Simon Read sent us this note: “Our next generation Xeon Scalable processors (codename Cascade Lake) supports Optane DC persistent memory in both operating modes, memory mode and app direct (application direct) mode.
“We will be unveiling more details on Optane DC persistent memory, including read/write endurance and performance, when we formally announce the product with the general availability of our next-generation Xeon Scalable processors.”
This suggests that the November 2018 announcement of Cascade Lake AP support of Optane DIMMs was a tad premature.
Obtaining Optane
Optane is Intel and Micron’s 3D XPoint technology offering storage-class or persistent memory with near-DRAM speed and higher density but not DRAM’s near-infinite lifecycle.
Intel says Optane can function as a persistent memory adjunct to DRAM, enlarging the memory space with sub-DRAM cost, near-DRAM speed and data persistence. It says that Cascade Lake AP processors support such Optane DC Persistent Memory. However, Intel is not revealing the endurance (write cycles or cycling) of its Optane DC Persistent Memory products, the Optane DIMMs.
According to Intel documentation, the 750GB DC P4800X Optane SSD handles up to 60 DWPD (Drive Writes Per Day) for five years, or 82PB written. This is excellent compared with NAND’s 1-10 DWPD, but not 1,000 times better.
Mark Webb, an independent semiconductor analyst, says the Intel 3DXP cycling performance “was >1M cycles… until they actually made a product. Then it started to drop.”
At 30DWPD, the P4800X Optane SSD has total write/erase cycles of 32,850 per cell, Objective Analysis analyst Jim Handy has calculated.
Compare this to Intel’s DC P3700 1.6TB standard NAND SSD, which has 31,025 write/erase cycles per cell. This is almost identical to the Optane figure, i.e. nowhere near 1,000 times better.
Handy also calculated that a 16GB Optane DIMM has 11,406 write cycles per cell and a 32GB version 5,703. Admittedly this was in 2017 and Optane controller technology could have improved.
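The arithmetic behind such estimates is simple, even though the key inputs (over-provisioning, write amplification) are not public. A simplified version, assuming writes are wear-levelled evenly over the capacity:

# Simplified version of the DWPD-to-cycles arithmetic used in such estimates.
# It assumes writes are spread evenly across the raw capacity; the real figures
# depend on over-provisioning and write amplification, which Intel has not published.
def naive_cycles_per_cell(dwpd, warranty_years, overprovision_factor=1.0):
    """Full-capacity writes over the warranty period, spread over the raw capacity."""
    return dwpd * 365 * warranty_years / overprovision_factor

# 60 DWPD for five years, per the 750GB DC P4800X spec:
print(naive_cycles_per_cell(60, 5))   # ~109,500 full drive writes over the warranty
# At 30 DWPD the naive figure halves to ~54,750. Handy's 32,850-per-cell estimate is
# lower still, which would be consistent with allowing for over-provisioned capacity.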
Direct XPoint DIMM writes
Optane DIMMs, with capacities of 128GB, 256GB or 512GB, can be written in memory mode or app-direct (DAX) mode. Intel says “data is volatile in Memory Mode; it will not be saved in the event of power loss. Persistence is enabled in the second mode, called App Direct.”
An analyst confidentially suggested to us that Optane “DIMMs actually work with Cascade Lake SP… not just AP. The actual volumes are very low mostly due to Intel controlling it. Apparently a lot of Intel hand-holding is going on.”
He said customers want “app-direct mode for [the] fastest NVMe storage/memory. But…this is a small market (<5 per cent of Server Market TAM.)”
And he thinks Intel prevents applications writing direct to Optane DIMM memory to prevent the stuff wearing out. Is this the case?
We asked Handy about this. “Yes, applications can write directly to the persistent memory,” he replied. “The DIMM, though, does things that are not under the control of the application program, like wear leveling and encryption, so when you ask if Intel controls write accesses I would reply that they ‘kind of’ do, but probably not in the way that you were asking about.”
He sent us an SNIA diagram showing how persistent memory could be accessed:
Intel is an SNIA member and is, we infer, in sync with SNIA views.
If applications or system software can write directly to Optane DIMMs without limit, then the DIMM could wear out, and its endurance is hugely important.
The right write cycle number
With no statement as yet from Intel, there is no industry analyst consensus concerning Optane DIMM endurance.
Howard Marks, founder of DeepStorage.net, said: “I’ve seen [Optane DIMM] estimates from 50,000-500,000 cycles. I don’t really know.”
Webb told us: “If you were to cycle 3DXP with reasonable BER (Bit Error Rate) specs and DPMs (Defective Parts per Million), with its on chip over-provisioning, we believe it can be cycled about 100K times with options being implemented for 200K+. Plus I have seen third party testing showing it wears out above those cycles.”
In that case the Optane DIMM, used in a write-centric app-direct environment, could wear out in a matter of months.
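That “matter of months” is easy to sanity-check. Using illustrative assumptions – none of them from Intel – of a 512GB module, a 100,000-cycle limit, perfect wear-levelling and a sustained 2GB/sec app-direct write stream:

# Rough sanity check of the "matter of months" claim, under illustrative assumptions:
# a 512GB module, a 100,000-cycle endurance limit, perfect wear-levelling, and a
# sustained app-direct write stream. None of these figures comes from Intel.
capacity_bytes = 512 * 2**30          # 512GB DIMM
endurance_cycles = 100_000            # Webb's ~100K-cycle estimate
write_rate_bytes_s = 2 * 2**30        # assumed 2 GB/s of sustained writes

total_writable = capacity_bytes * endurance_cycles
lifetime_seconds = total_writable / write_rate_bytes_s
print(f"{lifetime_seconds / 86_400:.0f} days")   # ~296 days under these assumptions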
According to Handy, SK Hynix recently suggested, using internal modelling, that Optane endurance is 10 million cycles, with a slide deck image showing this:
“Existing SCM” means Optane.
So we have estimates of 50,000 to 500,000, 200,000 and 10 million write cycles plus this point from Rob Peglar, president at Advanced Computation and Storage LLC: “Raw endurance doesn’t really matter, since the DIMM controller hides a lot of it. You’ve seen what they are willing to publish using 3DX inside an SSD. Do not make the mistake of assuming that’s it, though. DIMMs are far, far different than SSDs.”
Webb also implies Optane DIMMs have controllers: “You cannot write to it like main memory and you require complex software to manage it.” That software will run in a DIMM controller.
Handy said: “The DIMM … does things that are not under the control of the application program, like wear leveling and encryption,” which also implies a DIMM controller.
The Optane DIMM controller
If there is an Optane DIMM controller, what does it look like? We think the Optane DIMM has the equivalent of an FTL, a Flash Translation Layer. In this case it would be like an XTL, an XPoint Translation Layer. This XTL would add latency to XPoint data accesses.
Intel has not clarified this point, but we presume the controller would look after wear-levelling and over-provisioning to extend the DIMM’s endurance. That requires a logical-to-physical mapping function with logical byte, not block, addresses. With SSDs the translation layer operates at the block level; Optane DIMMs are byte-addressable, so the mapping would be at byte level.
Optane DIMMs come in 128GB, 256GB and 512GB capacities, so byte-level mapping tables would contain entries for every addressable byte. For instance, a 512GB DIMM has 512 x 1,073,741,824 bytes – roughly 550 billion addresses – plus the over-provisioned bytes. The DIMM would need storage capacity to hold these mapping table entries.
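A quick illustration of the scale a byte-level map implies, and how it shrinks at coarser mapping granularities; the granularities and the four-byte entry size are our assumptions, not Intel’s:

# Scale of a hypothetical XPoint translation-layer mapping table for a 512GB DIMM.
# The mapping granularities and 4-byte entry size are illustrative assumptions only;
# Intel has not described how (or whether) such a table is organised.
capacity = 512 * 2**30                       # 512GB DIMM, ignoring over-provisioning
entry_size = 4                               # assumed bytes per mapping entry

for granularity in (1, 64, 256, 4096):       # bytes mapped per table entry
    entries = capacity // granularity
    table_bytes = entries * entry_size
    print(f"{granularity:>5}B granularity: {entries:>15,} entries, "
          f"table ~{table_bytes / 2**30:.1f} GiB")
# At one-byte granularity the table would be several times larger than the media,
# which is why the granularity the controller actually uses matters so much.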
Roughly speaking it is as if an Optane DIMM is a DIMM-connected XPoint SSD.
Dealing with limited Optane DIMM endurance
Suppose Optane DIMM endurance is low? Handy said: “The DRAM locations that receive the most wear are things like loop counters that aren’t likely to benefit from persistence, so they are unlikely to be mapped by the system software into persistent memory. … We will learn more once Intel chooses to let us know.”
Marks said: “Because there’s still a significant latency difference between the 64GB DRAM DIMM and the 512GB XPoint one, applications will create a tiered model with indices and other frequently updated data structures in DRAM and colder, though still hot by current measure, data in XPoint.”
Peglar rejects the idea that low (200,000-cycle) Optane DIMM endurance would restrict applications to read-centric use – but so what? “The entire question is baseless, without foundation,” he said. “The assumption is entirely incorrect…3DX IS NOT NAND. Also, even if the above assumption was true (which it isn’t) – what’s wrong with read-centric memory? It beats the hell out of no memory, having to fetch from SSD.”
And so we wait, as we have for a long time already, for Intel to reveal what it knows. Let’s hope it does not disappoint with overly limited write endurance.
A final point. If Optane DIMM write endurance is inadequate for its persistent memory role this opens the door to alternative PM technologies such as Crossbar’s ReRAM and MRAM from Everspin and, perhaps, Samsung.
RackTop Systems is looking to grow the market for its line of NAS appliances featuring built-in encryption and compliance controls thanks to $15m in Series A financing.
Founded in 2010 by US intelligence community veterans, RackTop specialises in what it calls CyberConverged arrays that integrate data storage and advanced security and compliance into a single platform.
BrickStor appliances in a 2U form factor
The firm’s BrickStor all-in-one data storage and management appliances come in a 2U rack-mount chassis with redundant power supplies and space for up to 12 3.5in SAS drives. There are two models: BrickStor Iron can be fitted with 12TB up to 168TB of disk capacity, while BrickStor Titanium has a capacity of 4.8TB of all-flash or up to 126TB of mixed flash and disk. Both can be configured with dual 10Gbit Ethernet ports or 4 x 1Gbit Ethernet ports.
Eric Bednash, RackTop’s co-founder and CEO, claimed that BrickStor helps business with the problems of storing and managing large volumes of data, while at the same time protecting that data and addressing compliance requirements.
RackTop’s BrickStor architecture
The built-in encryption appears to come from RackTop’s use of Seagate FIPS-certified self-encrypting drives. The advantage here is that there is no performance impact, as there would be if the array controller had to encrypt and decrypt every write and read.
Another feature of BrickStor, Secure Global File Share (SecureGFS), is designed to allow users to collaborate and share files internally and externally without sacrificing security or compliance, thanks to encrypted file sharing over the LAN and WAN.
RackTop will use the new funds on product development and to expand its sales channel. It is targeting customers in industries such as the public sector, financial services, health care and life sciences and claims to have customers worldwide already using its platform to manage upwards of 50 petabytes of data.
Participants in the funding round include Razor’s Edge Ventures, Grotech Ventures and Blu Venture Investors.