
Intel gets ready to go live with servers with 12TB Optane

Expect Intel to announce on April 2 that server makers will ship 4-socket, 112-core servers with up to 12TB of Optane memory, from July onwards.

This means the servers will run applications faster than servers using DRAM and SSDs alone.

Blocks & Files has joined up the dots from several Intel pronouncements to draw this picture.

Dot 1: Intel has a Data-Centric Innovation Day event scheduled for April 2. It will stream this live.

Dot 2: Rob Crooke, Intel’s SVP for its Non-Volatile Memory Solutions Group,  blogged on March 19: “We’re also excited about soon-to-be released Intel Optane DC Persistent Memory that will be available on next-generation Intel Xeon processors for data centres. This is redefining the memory and storage hierarchy and bringing persistent, large-scale memory closer to the processor.”

Optane DC Persistent Memory is 3D XPoint media supplied as a non-volatile DIMM with memory-channel connectivity as opposed to the Optane SSDs with slower PCIe bus connectivity.

Dot 3: Late last year Intel announced the availability of Optane DIMMs in a beta testing program for OEMs and cloud service providers, which “paves the way for general availability in the first half of 2019.”

The Cascade Lake AP upgrade of Intel’s data centre Xeon server CPU line was announced in November last year and these CPUs support Optane DIMMs.  Cascade Lake AP parts are single or dual-socket processors with up to 48 cores and 12 DDR4 channels per package.

The Optane DIMMs come in 128GB, 256GB and 512GB capacities. A 2-socket Cascade Lake AP could have 12 x DDR4 memory channels, each supporting 2 DIMMs, either DRAM or Optane. That allows a maximum of 6TB of Optane memory.

The first Cascade Lake AP iteration is a multi-chip package combining two 24-core processors connected by a UPI link, into a single 48-core CPU. 

That would be 12 x 512GB Optane DIMMs, leaving 12 DIMM sockets for DRAM – the servers use a mix of DRAM and Optane.

Things have moved on

Dot 4: On March 15 Jason Waxman, GM of Intel’s Cloud Platforms Group, said Intel sees a need for 4-socket servers with up to 112 cores – 28 per socket (processor) – and 48 DIMMs – 12 per processor.

These servers would support up to 12TB of Optane DIMM capacity and be available from July onwards.

Waxman is pointing to a second iteration of Cascade Lake AP with 28 cores per socket – four more than before – and 12 DIMMs per CPU (socket). That implies 6 memory channels per CPU, as before, and 24 memory channels in total.

12TB of Optane DIMMs in turn implies 24 x 512GB Optane DIMMs – six per CPU (socket) using up 3 memory channels and leaving 3 for DRAM.
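The arithmetic can be sanity-checked with a short sketch. This follows the per-socket figures above – 12 DIMM slots per socket, half populated with 512GB Optane DIMMs – rather than any official Intel configuration:

```python
# Sketch of the Optane capacity arithmetic above: half the DIMM slots
# hold 512GB Optane DIMMs, the rest DRAM. Figures follow this article's
# per-socket reading (12 DIMM slots per socket), not an Intel spec sheet.
OPTANE_DIMM_GB = 512
SLOTS_PER_SOCKET = 12

def max_optane_tb(sockets, optane_slots_fraction=0.5):
    """Maximum Optane capacity in TB for a given socket count."""
    optane_dimms = int(sockets * SLOTS_PER_SOCKET * optane_slots_fraction)
    return optane_dimms * OPTANE_DIMM_GB / 1024

print(max_optane_tb(2))  # 6.0  -> first-generation 2-socket config
print(max_optane_tb(4))  # 12.0 -> the 4-socket config expected on April 2
```

The same function shows why doubling the socket count doubles the Optane ceiling: DIMM slots, not cores, are the limiting factor.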

Our conclusion is that Intel will announce 4-socket, 112-core Cascade Lake AP packages on April 2 that support up to 12TB of Optane memory. Server systems using these will become available from Dell EMC, HPE, Lenovo, Inspur, Supermicro and Quanta, with first shipments in July.

Falling NAND prices to drive NVMe SSD uptake, say industry watchers

The great NAND flash price slump will accelerate the uptake of SSD storage, industry sources have predicted, with PCIe/NVMe SSDs possibly accounting for half of the market by the end of the year.

Demand for flash plummeted in late 2018 and the first quarter of 2019 but is expected to bounce back in an improving market for smartphones, laptops, servers and other products that use NAND.

In the SSD market, suppliers will increase the downward pressure on 512GB/1TB prices, according to DRAMeXchange analyst Ben Yeh. Along with an increasing proportion of value PCIe SSDs in product shipments, this will drive a greater fall in average selling prices, increasing the uptake of SSDs in laptops.

There is plenty of room for growth in the enterprise market, where suppliers have all set their sights on high-margin PCIe/NVMe products, Yeh said. He added that more opportunities for competition will arise as demand from servers and data centres heats up.

PCIe SSDs will account for up to 50 per cent of the market by the end of 2019, according to a separate report in Digitimes.

This is driven by the shrinking difference in price between the two types. The report claims that the unit price for 512GB PCIe SSDs fell 11 per cent to $55 during the first quarter of 2019, compared to a corresponding price drop of 9 per cent for SATA SSDs, with the price gap continuing to narrow from the 30 per cent seen in 2018.
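A quick sketch shows why unequal percentage falls narrow the gap. The starting SATA price here is an assumption, chosen so the PCIe price lands at the $55 reported for Q1 2019 with a 30 per cent premium in 2018; the point is the mechanism, not the exact figures:

```python
# Illustrative only: an 11% PCIe price fall against a 9% SATA fall
# narrows an assumed 30% price gap. The SATA baseline is a made-up
# figure derived from the reported numbers, not a quoted price.
pcie_q1 = 55.00                  # reported 512GB PCIe SSD price, Q1 2019
pcie_q4 = pcie_q1 / (1 - 0.11)   # implied price before the 11% fall, ~$61.80

sata_q4 = pcie_q4 / 1.30         # assume PCIe carried a 30% premium in 2018
sata_q1 = sata_q4 * (1 - 0.09)   # SATA fell 9% over the same quarter

gap_q4 = pcie_q4 / sata_q4 - 1
gap_q1 = pcie_q1 / sata_q1 - 1
print(f"gap: {gap_q4:.0%} -> {gap_q1:.0%}")  # gap: 30% -> 27%
```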

Digitimes also quotes CK Chang, president of SSD maker Apacer Technology, who said consumer PCIe SSDs will gradually replace SATA SSDs entirely, and will also see broader adoption in industrial control systems and data centres.

UC San Diego: Optane is great but…different

Intel’s Optane DC Persistent Memory DIMM can make key storage applications 17 times faster, but system builders must navigate ‘complex performance characteristics’ to get the best out of the technology.


Researchers at UC San Diego put the Intel Optane DC Persistent Memory Module through its paces and found that application performance varies widely. But the overall picture is that of a boost in performance from using Optane DIMMs.

The same is true for the byte-addressable memory mapped mode, where performance for RocksDB increases 3.5 times, while Redis 3.2 gains just 20 per cent. Understanding the root causes of these differences is likely to be fertile ground for developers and researchers, the UC San Diego team notes.

The UC San Diego researchers state that Optane DC memory used in the caching Memory mode provides comparable performance to DRAM for many real world applications and can greatly increase the total amount of memory available on the system.

Like nothing I’ve ever seen

When used in App Direct mode, Optane DC memory with a file system that supports non-volatile memory will drastically accelerate performance for many real-world storage applications.

However, the UC San Diego researchers also warn that Optane’s performance properties are significantly different from any medium that is currently deployed, and that more research is required to understand how it can be used to best advantage.

Optane is Intel’s 3D XPoint non-volatile memory technology that is pitched as a new tier in the memory hierarchy between DRAM and flash storage.

The Optane DC Persistent Memory version slots into spare DIMM sockets in servers with Intel’s latest Cascade Lake AP Xeon processors.

Optane DIMMs can operate in two ways: Memory mode and App Direct mode. Memory mode combines Optane with a conventional DRAM DIMM that serves as a cache for the larger but slower Optane, delivering a larger memory pool over all. In App Direct mode, there is no cache and Optane simply acts as a pool of persistent memory.

The UC San Diego testers found that in Memory mode, the caching mechanism works well for larger memory footprints. Using Memcached and Redis, both configured as a non-persistent key-value store with a 96 GB data set, produced the results below.

Not surprisingly, replacing DRAM with uncached Optane DC reduces performance by 20.1 and 23 per cent for memcached and Redis, respectively, whereas enabling the DRAM cache means performance drops between 8.6 and 19.2 per cent.

Reducing performance may sound undesirable, but it enables applications to work with a much larger in-memory dataset, as the test server could accommodate 1.5 TB of Optane DC memory per socket, compared with 192 GB of DRAM.
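That trade-off can be put in plain numbers, using the capacities reported by the researchers:

```python
# Sketch of the capacity-for-performance trade-off reported above:
# Optane DC offers roughly 8x the per-socket capacity of DRAM, at the
# cost of an 8.6-19.2% throughput drop with the DRAM cache enabled.
optane_gb_per_socket = 1536   # 1.5 TB of Optane DC per socket
dram_gb_per_socket = 192      # DRAM in the UC San Diego test server

capacity_gain = optane_gb_per_socket / dram_gb_per_socket
print(f"{capacity_gain:.0f}x the in-memory capacity")  # 8x the in-memory capacity
```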

Optane DC can be treated as if it were storage when used as persistent memory. It can also be accessed as byte-addressable memory – both were tested by the university team.

In the charts below, a number of database tools are tested out using Ext4 with and without a direct access (DAX) mode to support persistent memory, and the NOVA file system which was designed for persistent memory.

The blue and orange columns are a flash SSD and Optane-based SSD for comparison. The red column is where Optane DC has been used as byte-addressable, which requires applications able to map it into their address space then access directly with loads and stores.
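The byte-addressable path works by memory-mapping a file from a persistent-memory-aware file system and issuing ordinary loads and stores against it. A minimal sketch of that access pattern follows; it runs here against an ordinary temp file, since actual persistence guarantees require a DAX mount on Optane DC hardware, and real applications would use a library such as Intel's PMDK for correct cache flushing:

```python
# Minimal sketch of byte-addressable access via mmap. On an Ext4-DAX or
# NOVA mount backed by Optane DC, these stores would go straight to
# persistent memory; here a regular temp file stands in for it.
import mmap
import os
import tempfile

def mapped_store_load(data: bytes, size: int = 4096) -> bytes:
    """Store bytes via mmap and load them back with plain memory accesses."""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.truncate(size)                 # pre-size the region to be mapped
        path = f.name
    fd = os.open(path, os.O_RDWR)
    try:
        buf = mmap.mmap(fd, size)        # map the file into our address space
        buf[:len(data)] = data           # a plain store, no write() syscall
        buf.flush()                      # msync; PMDK would use CLWB + fences
        out = bytes(buf[:len(data)])     # a plain load
        buf.close()
        return out
    finally:
        os.close(fd)
        os.unlink(path)

print(mapped_store_load(b"hello"))       # b'hello'
```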

You can download the full report from arXiv here.

Samsung preps third-gen 10nm DDR4 DRAM

Samsung Electronics has developed a third generation of 10-nanometer-class DDR4 memory parts for high performance applications. The Korean chipmaker says this will enable it to ramp up production to meet greater demand.

Samsung said the latest 8Gb DDR4 products were developed within 16 months of the start of mass production of its second-generation 10nm-class chips, which use a 1y-nm process. The latest chips will be manufactured using a newer 1z-nm process, without the use of Extreme Ultra-Violet (EUV) processing.

Memories are made of this

Touting 1z-nm as the world’s smallest memory process node, Samsung said it would improve manufacturing productivity by more than 20 per cent compared with the previous generation.

This will enable it to respond better to an expected increase in demand for DRAM. Samsung, along with other memory chip firms, saw profits slump recently as demand slowed in the latter half of 2018.

Mass production of the 1z-nm 8Gb DDR4 begins in the second half of this year. Samsung will target the chips at the next generation of enterprise servers and high-end PCs expected in 2020.

Samsung’s first generation of chips made using a process smaller than 20nm was dubbed 1x-nm. The second, 1y-nm, shrank this down to the region of 14nm to 16nm, while this third generation – 1z-nm – is in the region of 12nm to 14nm.

Development of the 1z-nm DRAM paves the way for the IT industry to transition to next-generation DRAM interfaces such as DDR5, LPDDR5 and GDDR6, Samsung said.

XenData uses Wasabi hot storage to deliver cheap cloud archive service

XenData has teamed up with Wasabi to offer a hybrid cloud archive service that charges only for the volume of data stored in the cloud.

The new Cloud File Storage Service is operated by Wasabi, a cloud provider that sells storage services compatible with Amazon’s S3 storage platform, but with competitive pricing.

XenData develops storage solutions managing data on-premises and public cloud storage systems. As part of the service, customers get subscriptions for XenData’s Cloud File Gateway software that runs on Windows servers and allows any file-based application to read and write to the Wasabi object storage service.

XenData FS Mirror is also wrapped into the service. This enables any file-folder structure on the local network to be mirrored to Wasabi, providing data protection and disaster recovery copies of selected network file systems.

Flat rate

Similar products already exist but XenData and Wasabi claim their offering makes cloud storage practical for active archive applications because there are no data egress charges or API request fees. Customers pay only for the volume of data stored in the cloud at a flat rate of $0.01 per GB per month.
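To see what the no-egress-fee model means in practice, compare a month of archive use under the flat rate with a hypothetical provider that adds egress and request charges. The competitor's rates below are illustrative round numbers, not any specific provider's price list:

```python
# Illustrative cost sketch: flat $0.01/GB/month with no egress or API
# fees, versus a hypothetical provider that bills storage, egress and
# requests. The competitor's rates are made-up numbers for comparison.
def wasabi_monthly_cost(stored_gb):
    return stored_gb * 0.01              # flat rate, nothing else billed

def hypothetical_competitor_cost(stored_gb, egress_gb, requests,
                                 store_rate=0.021, egress_rate=0.09,
                                 per_1k_requests=0.0004):
    return (stored_gb * store_rate
            + egress_gb * egress_rate
            + requests / 1000 * per_1k_requests)

stored, egress, reqs = 50_000, 5_000, 2_000_000   # 50TB archive, 5TB restored
print(f"flat rate:  ${wasabi_monthly_cost(stored):,.2f}")               # $500.00
print(f"competitor: ${hypothetical_competitor_cost(stored, egress, reqs):,.2f}")  # $1,500.80
```

For an active archive, where files are regularly pulled back out, it is the egress line that dominates the hypothetical bill; that is the charge this service removes.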

The Cloud File Storage Service enables customers to choose any of the three Wasabi regions: US-West, US-East and EU-Central. The new service can also be added to an existing XenData LTO data tape archive, allowing files to be easily copied or migrated to the Wasabi cloud.

XenData’s Cloud File Gateway supports standard network protocols such as SMB, NFS and FTP, and is fully compliant with the Microsoft security model based on Active Directory. The Gateway is said to be optimised for large files, making it ideal for industry sectors such as creative media, science and engineering.

Once in the cloud, remote sites and users can be provided with credentials enabling them to securely share files via read-only access to content uploaded via the Cloud File Gateway.

The XenData-Wasabi Cloud File Storage Service goes live on May 1, 2019.

Micron blames weaker DRAM and NAND Flash demand for Q2 revenue fall

Micron posted Q2 FY2019 revenues of $5.84bn, a whopping fall compared with $7.35bn for the same period last year. This is also down on the $7.91bn for the last quarter and the $8.44bn covering the fourth quarter of 2018.

However, CEO Sanjay Mehrotra was upbeat, saying the company had delivered solid results and healthy levels of profitability despite a challenging industry environment – net income was $1.62bn in the quarter. He expects demand to start rising again in the second half of the year once the big server makers soak up all their surplus inventory.

Micron will idle about five per cent of its DRAM wafer starts and reduce NAND Flash wafer starts by five per cent, largely affecting legacy process nodes. This will save about $500m.

Micron forecasts revenue of $4.6bn-$5bn for its fiscal third quarter.

A wee DRAM

The latest figures highlight weak DRAM and NAND Flash demand trends, according to Wells Fargo analyst Aaron Rakers. He said investors appear to have reacted positively to Micron’s continued commitment to supply-side rationalisation.

Mehrotra said DRAM prices fell more than anticipated, due to greater levels of customer inventory affecting demand at several enterprise OEM server makers. But he expects customer inventories to normalise by mid-year, when shipment growth is likely to increase again.

Micron’s outlook for the year ahead

NAND markets remain oversupplied, which Mehrotra attributes to the acceleration in bit growth driven by the industry transition to 64-layer 3D NAND. Nevertheless, he similarly expects growth in demand for NAND products in the second half of the calendar year.

Fourth generation 3D NAND

Mehrotra said Micron is making good progress on its fourth generation 3D NAND, which uses replacement gate technology – understood to be a variation of a charge trap cell design. This offers limited cost reductions, so Micron plans to use this only for select NAND products at first. It will convert the rest of its portfolio at a later date to the second node of replacement gate technology.

Mehrotra reiterated the company’s commitment to 3D XPoint, and said the technology will be a key enabler for numerous new applications, particularly in artificial intelligence and data analytics.

In January this year, Micron exercised its option to acquire Intel’s interest in the former joint-venture IMFT manufacturing facility in Lehi, Utah, and Micron expects to make customer 3D XPoint samples available before the end of the calendar year.

In SSDs, Micron said it is making progress on transitioning to NVMe while continuing to improve costs in SATA products, where it introduced consumer and client SSDs based on 96 layer 3D NAND during Q2. Micron intends to introduce cloud and enterprise NVMe SSDs later in this calendar year.

Bless. It’s VMware and Dell EMC’s first jointly engineered hybrid cloud infrastructure solution

VMware’s Cloud Foundation hybrid cloud stack has hit version 3.7 and is available from April as a component of a pre-built private cloud appliance running on Dell EMC VxRail hyperconverged infrastructure (HCI) kit.


VMware and parent Dell EMC have billed this as their first jointly engineered hybrid cloud infrastructure.

The virtualization juggernaut unveiled Cloud Foundation – its software stack for setting up on-and-off-premises hybrid clouds – at the company’s VMworld knees-up in 2016 as part of the firm’s hybrid cloud play. In essence the stack pulls together VMware’s vSphere, vSAN and NSX components into a “Software-Defined Data Centre” (SDDC) stack, with the twist that it can be deployed on qualified hardware as an on-premises private cloud, or from the public cloud as a service.

Since launch, Cloud Foundation has popped up on several cloud platforms, most notably as VMware Cloud on AWS, but also IBM Cloud, Rackspace and CenturyLink. It can also be deployed on certified hardware that meets VMware’s vSAN ReadyNode validated server configuration specification.

Shapes and sizes

The VxRail appliances come in a variety of configurations, in either a 1U or 2U chassis with varying CPU speeds, core counts, memory sizes, as well as physical disk capacity and SSD caching capacities.

But as with other HCI systems, the software layer is the key, and Dell EMC has promised that Cloud Foundation on VxRail will be lifecycle-managed as one complete, automated, turnkey system.

Dell EMC and VMware hold an advantage here in their ability to commit to the synchronous release of VxRail and VMware software updates, and so keep the on-site stack consistent with instances of Cloud Foundation running in the cloud. With the vRealize management suite, VMware shops should also be able to oversee public and private cloud resources from the same console.

On VxRail, Cloud Foundation calls for a minimum cluster size of four nodes, and can scale up to eight racks, with each comprising up to 32 1U or 16 2U nodes.
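Taken at face value, those limits give the following maximum node counts (a back-of-envelope sketch of the figures quoted above):

```python
# Back-of-envelope sketch of Cloud Foundation on VxRail cluster limits
# quoted above: minimum 4 nodes, up to 8 racks of 32 1U or 16 2U nodes.
MIN_NODES = 4
MAX_RACKS = 8
NODES_PER_RACK = {"1U": 32, "2U": 16}

for form_factor, per_rack in NODES_PER_RACK.items():
    print(f"{form_factor}: up to {MAX_RACKS * per_rack} nodes")
# 1U: up to 256 nodes
# 2U: up to 128 nodes
```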

The Cloud Foundation 3.7 release also supports fully automated deployment of VMware’s Horizon 7 virtual desktop infrastructure software.

This article was published first on The Register.

Komprise gives users more archiving knobs to twiddle

Data management startup Komprise today updated its software with support for user- and policy-driven archiving. It has also released a standalone NAS migration tool.

Komprise has only been around for a couple of years, entering the market with promises to save enterprise customers money by identifying infrequently accessed data and automatically migrating it to more cost-efficient storage locations such as the cloud. It claims to offer savings of up to 75 per cent on the costs of NAS storage.

A new feature in Komprise Intelligent Data Management 2.9 is the ability to support both user- and policy-driven archiving. The firm said that certain business users want to archive some data themselves, outside of automated policies. In response it has introduced user-driven transparent archiving, enabling customers to use their knowledge of data relevance to help manage their own data.

How Komprise Intelligent Data Management works

Users can analyse data growth, designate projects for archiving, and access archived data from its original location exactly as before, as if it were still on primary storage, with no business disruption, according to Komprise.

Migration tool

Also new is Komprise NAS Migration 1.0, a standalone product that automates the process of moving data from an old NAS to a new one without disruption. This appears to be simply the breaking out of the NAS migration feature of the main Komprise suite as a lower cost alternative for users requiring just this function.

The company said customers can start out with NAS Migration then upgrade with ease to the full Komprise Intelligent Data Management release at a later date.

Komprise Intelligent Data Management 2.9 is priced at $130/TB before volume discounts, and Komprise NAS Migration 1.0 is priced at $60/TB before volume discounts.
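At list price, the difference for a migration-only deployment is straightforward to work out (illustrative, before the volume discounts mentioned):

```python
# Illustrative list-price comparison for a 100TB estate, before volume
# discounts. Rates are the per-TB figures quoted above.
FULL_SUITE_PER_TB = 130   # Komprise Intelligent Data Management 2.9
MIGRATION_PER_TB = 60     # Komprise NAS Migration 1.0

tb = 100
print(f"full suite:     ${tb * FULL_SUITE_PER_TB:,}")   # $13,000
print(f"migration only: ${tb * MIGRATION_PER_TB:,}")    # $6,000
```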

Portworx nabs $27m investment, updates container storage platform

Portworx, which specialises in storage for container deployments, has secured an extra $27m from investors as it seeks to expand into new global markets. The firm has also updated its flagship Portworx Enterprise platform with a focus on security and disaster recovery.

Founded four years ago by former executives from Ocarina Networks, Portworx is one of a number of startups that have sprung up to address the need for an effective storage and data management layer for the container platforms that are all the rage in developer circles.

Portworx must be doing something right, as it claims to have experienced a 400 per cent year on year increase in bookings, and has raised $27m in Series C funding to help fund expansion. The firm said this latest funding round was led by Sapphire Ventures and Mubadala Investment Company, with support from existing investors Mayfield Fund and GE Ventures. Perhaps significantly, new financing also came from Cisco, HPE, and NetApp.

The investment follows closely on the release of Portworx Enterprise 2.1, which brings new features focused on security and disaster recovery. These include new role-based access controls as part of the platform’s PX-Security layer, plus a new PX-DR disaster recovery layer. This enables disaster recovery with zero data loss between data centres located in a single metropolitan area, Portworx claims.

Both are designed to address shortcomings in the ubiquitous Kubernetes container orchestrator. Kubernetes does not currently support authorisation and access control using standard enterprise systems like Active Directory or LDAP, and so PX-Security now enables this, offering role-based authentication, authorisation, and ownership on a per container data volume basis.

Likewise, PX-DR is touted by Portworx as the first step towards Kubernetes-native disaster recovery. It offers three levels of HA and Data Protection for mission critical apps: within a single data centre or multi-availability zones; across data centres or clouds within a metropolitan area; and across data centres spanning the world.

The new features will be available in Portworx Enterprise 2.1 from March 31.

Dell EMC and Nvidia tout turnkey AI systems through US resellers

Five US channel partners can build ready-to-run AI systems using Dell EMC and Nvidia gear.

The reference architecture (RA) pairs a Dell EMC Isilon F800 all-flash filer with Nvidia’s DGX-1 server.

Isilon F800

Dell EMC announced the Ready Architecture initiative in a November 2018 blog post. It has moved on a tad with turnkey RA systems built by, and sold through, Dell EMC and Nvidia US channel partners, who can add further components.

They can also scale compute and storage independently in this RA design.

The systems are initially available in the Americas from  FusionStorm/Computacenter, Insight, Presidio, Sirius and WWT.

Target application areas include genomics, precision medicine, advanced driver assistance systems (ADAS) and autonomous driving (AD), video analytics and content enrichment, fraud detection/prevention and other AI workloads.

Ready Architecture systems are an upgrade for the earlier deep learning Ready Solution which paired the PowerEdge C4140 server with Nvidia Tesla V100 GPUs, all-flash Isilon storage and Dell EMC networking.


Pure Storage takes its machine-learning platform hyperscale

Pure Storage has updated its AIRI platform for accelerating artificial intelligence workloads with a hyperscale configuration that includes Nvidia’s DGX-1 and DGX-2 GPU boxes and Mellanox networking.


The firm today also unveiled an alternative platform aimed at the more mainstream enterprise market. This is based on its FlashStack collaboration with Cisco, using UCS servers and switches combined with Nvidia GPUs and Pure Storage FlashBlades.

The first iteration of Pure Storage’s AIRI (AI-Ready Infrastructure) launched a year ago, based on Nvidia’s DGX-1 GPU-accelerated Intel boxes, and was followed later by a more modest AIRI Mini configuration.

Hyperscale AIRI adds DGX-2 nodes and Mellanox interconnects

Hyperscale AIRI takes the portfolio in the opposite direction and is designed to eliminate the challenges that prevent organisations from deploying AI at scale, Pure Storage said.

It can scale out to multiple racks of Nvidia’s DGX-1 and the more powerful DGX-2 systems, with Infiniband and Ethernet fabrics available as interconnect options, plus Pure Storage FlashBlades providing the storage layer.

The platform offers data scientists an AI infrastructure with cloud-like elasticity thanks to Nvidia’s NGC software container registry, the AIRI scaling toolkit and integration with Kubernetes and Pure Service Orchestrator.

The appearance of Mellanox networking in this platform is notable, as Nvidia is acquiring the firm precisely for its high performance connectivity expertise.

FlashStack for AI

FlashStack has been around for a few years as a converged infrastructure solution using Pure Storage all-flash arrays, Cisco UCS servers and Nexus switches. The two partners have updated this to FlashStack for AI by adding Nvidia GPUs to a Cisco UCS C480ML server and upgrading the storage to FlashBlades.

Cisco said that with these upgrades, FlashStack for AI enables enterprise customers to extend their existing infrastructure to support AI/ML workloads without having to add new infrastructure silos.

Hyperscale AIRI is available worldwide now, while FlashStack for AI is due to ship at the end of April. No details on pricing have been offered, but the original AIRI was estimated to cost somewhere upwards of $1m.

This article was published first on The Register.

Aparavi does more things with archives in more storage clouds

Aparavi, a startup offering cloud-enabled data management services, has released a software update to provide better insight and management of archived data and bulk data migration across clouds. It has also increased its list of supported cloud targets.

Aparavi Active Archive launched in 2018 as a SaaS-based platform to actively manage an organisation’s data – especially unstructured data – for long-term policy-based retention, either on-premises or to one or more cloud platforms. Its approach is described as cloud-active data pruning, which seeks to reduce long-term storage costs by archiving files that are no longer needed. The company was itself founded in 2016.

New features in this Active Archive update include Direct-to-cloud – the ability to archive data directly from source systems to the user’s cloud destination of choice. This minimises local storage requirements as there is no need to allocate resources for data staging.

Archive data directly from source to cloud

Aparavi claims that updated classification and tagging capabilities make it easier to identify and tag data for future retrieval purposes such as compliance, reference, or analysis, based on individual words, phrases, dates, file types, or patterns. The platform includes pre-set classifications such as “confidential” or “legal”.

The flip side of this is finding the relevant data again, and Active Archive now features a more intuitive query interface that can search on-site or across multiple clouds, using “Google-like” search patterns. Files can be searched by metadata, such as classification, tag, date, or file type. However, the file types currently supported include only text files, PDF, and modern Microsoft Office formats. Future updates will add images and legacy Microsoft Office formats.

Active Archive now supports more cloud platforms as storage targets, with the list covering AWS, Backblaze B2, Caringo, Cloudian, IBM Cloud, Microsoft Azure, Oracle Cloud, Scality, and Wasabi. Meanwhile, a new bulk migration tool allows users to shift data easily from one location to another, including onto or off-premises.

Aparavi charges on a subscription basis for Active Archive, with monthly or annual plans available. The lowest tier is a 10TB plan, which costs $325 per month. A free trial is also available.