
And now we are 10. Scality RINGs the changes

At a press briefing yesterday to mark the company’s tenth birthday, Scality co-founder and CEO Jérôme Lecat talked about competition, its roadmap, and a new version of its RING object storage software.

In no particular order, let’s kick off with Scality’s competitive landscape – as the company sees it. According to Lecat, Dell EMC is Scality’s premier competitor, with NetApp’s StorageGRID in second place. Scality encounters NetApp about half as often as it meets Dell EMC.

The next two competitors are IBM Cloud Object Store and companies touting solutions based on Ceph. Collectively, Scality encounters them half the time it competes against NetApp.

Roadmap

Scality is testing all-flash RINGs with QLC (4 bits/cell) flash in mind. NAND-based RINGs would need less electricity than disk-based counterparts and may be more reliable in a hardware sense.

It is working with HPE to integrate RING with Infosight, HPE’s analytics and management platform. HPE has also launched a tiered AI Data Node with WekaIO software installed for high-speed data access, along with Scality RING software for longer term data storage.

Read a reference config document for the AI Data Node here.

RING8

Scality has updated its RING object storage, adding management features across multiple RINGs and public clouds, and new API support.

RING8, the eighth generation RING, has:

  • Improved security with added role-based access control and encryption support,
  • Enhanced multi-tenancy for service providers,
  • More AWS S3 API support and support for legacy NFS v4,
  • eXtended Data Management (XDM) and mobility across multiple edge and core RINGs, and public clouds, with lifecycle tiering and Zenko data orchestrator integration.

Details can be found on a datasheet downloadable here (registration needed).

An analyst at the briefing suggested Scality is making pre-emptive moves in case Amazon produces an on-premises S3 object storage product.

Edge and Core RINGs

An edge RING site will be a smaller RING, say 200TB, with lower durability, such as a 9:3 erasure coding system. It will be used in remote office/branch office and embedded environments with large data requirements. Scality calls this a service edge. We might think of them as RINGlets.

The edge RINGs replicate their data to a central and much larger RING with higher durability, such as 7:5 erasure coding. This can withstand a higher degree of hardware component failure.
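As a rough illustration of the trade-off, the sketch below assumes the k:m figures mean k data fragments plus m parity fragments, so a scheme survives the loss of any m fragments; the 9:3 and 7:5 values are those quoted above.

```python
# Rough erasure-coding comparison for the edge (9:3) and core (7:5) RINGs
# described above. Assumes k:m means k data fragments plus m parity fragments.
def ec_profile(k, m):
    overhead = (k + m) / k   # raw capacity stored per unit of user data
    return overhead, m       # m = fragment losses tolerated without data loss

for name, (k, m) in {"edge 9:3": (9, 3), "core 7:5": (7, 5)}.items():
    overhead, failures = ec_profile(k, m)
    print(f"{name}: {overhead:.2f}x raw overhead, survives {failures} fragment failures")

# edge 9:3: 1.33x raw overhead, survives 3 fragment failures
# core 7:5: 1.71x raw overhead, survives 5 fragment failures
```

The core scheme spends more raw capacity per terabyte stored in return for tolerating more simultaneous component failures.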

Download a RING8 datasheet here (registration needed).

Dell-Nutanix duopoly cements grip on HCI market

Dell and Nutanix together account for nearly three quarters of hyperconverged (HCI) systems revenues. HCI revenues have surpassed converged systems and the integrated platform category is in decline.

This is revealed by IDC’s Q1 2019 Worldwide Quarterly Converged Systems Tracker. The tech analyst firm organises the market into three categories:

  • Certified reference systems and integrated infrastructure – pre-integrated, vendor-certified systems containing server, storage, networking, and basic element/systems management software.
  • Integrated platforms – integrated systems sold with pre-integrated packaged software and customised system engineering; think Oracle Exadata.
  • Hyperconverged systems – collapse core storage and compute functionality into a single, highly virtualized system with a scale-out architecture and all compute and storage functions coming from the same x86 server resources.

The category revenue numbers for the quarter are:

  • CRS & IS – $1.4bn (up 9% y-o-y) – 36.6% of market
  • IP – $556m (down 13.3% y-o-y) – 14.8% of market
  • HCI – $1.8bn (up 46.7% y-o-y) – 48.6% of market
  • TOTAL – $3.75bn – y-o-y growth not revealed

In the CI category Dell said its revenue share was 55.3 per cent, comprising Dell EMC VxBlock Systems, Ready Solutions and Ready Stack.

IDC publishes top supplier revenue numbers for the HCI market, showing branded systems. It also divvies up the revenues by software supplier.

IDC table of HCI revenues by branded system vendor. Blocks & Files highlighting.

Dell has more than twice the revenue of second-placed Nutanix; and both dwarf HPE. The market grew 46.7 per cent, with Nutanix and HPE increasing revenues at less than that rate. But Nutanix has been moving to a subscription, software-only business model. It also supplies software to run on other vendors’ hardware. So like VMware, it gains more market share when the numbers are cut by HCI software supplier.

IDC table of HCI revenues by software supplier. Blocks & Files highlighting.

VMware and Nutanix dominate the software market, with 70 per cent combined share. VMware revenues grew 36.3 per cent year on year, against 46.7 per cent for the market as a whole, while Nutanix, HPE and the rest-of-market category grew at less than the market rate. Also-rans in IDC’s Rest of Market category include Cisco, Datrium, Maxta, NetApp, Pivot3 and Scale Computing.

Wells Fargo senior analyst Aaron Rakers provided data for NetApp and Cisco:

  • Cisco’s HyperFlex revenue was ~$82m, just a shade behind HPE, and up 37 per cent – again similar to HPE’s 36.2 per cent growth rate.
  • NetApp’s Elements HCI revenue was estimated at ~$46M, up 128.4 per cent, making NetApp the fastest-growing supplier in this group.

VMware’s dominance increased over the year while Nutanix’s market share eased from 32.2 per cent to 28.9 per cent. Nevertheless, Nutanix revenues are more than six times those of third-placed HPE.

IDC has split out a separate HCI category, calling it Disaggregated HCI: systems designed from the ground up to support only separate compute and storage nodes. It cites NetApp, with its Elements HCI, as an example supplier.

Blocks & Files would add Datrium and HPE’s latest Nimble-based dHCI to this category. IDC doesn’t publicly reveal the overall size of this niche, its growth rate or the supplier shares.

Qumulo adds fatter drives to flash, hybrid and archive systems

Qumulo has added larger capacity models to its all-flash, hybrid and archive systems along with a software update that includes real-time analytics.

The three new Qumulo products.

The all-flash P series gets a new P184T model positioned above the existing top-of-the-line P92T. It has 184TB of raw capacity, consisting of 24 x 7.68TB NVMe SSDs instead of the P92T’s 24 x 3.84TB drives.

P184 product added to all-flash P series

The higher-capacity QC class offers a mix of flash speed and disk capacity. The QC24 and QC40 come in a 1U chassis, while the larger QC104, 208, 260 and 360 are built in a 4U enclosure. A new C168T model uses 12 x 14TB disk drives instead of the C72T’s 12 x 6TB drives.

C168T product added to the hybrid C/QC products.

It also has 3.8TB of flash – pretty much double the C72T’s 1.92TB.

And then there were two

Qumulo’s nearline archive filers are the K series. A new K168T slots in above the existing K144T, with 168TB of capacity (the K144T has 144TB). The increase is achieved by using 14TB disk drives instead of the K144T’s 12TB drives.

K168T nearline archive product announced.
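The raw capacities of the three new models follow directly from drive count times drive size. A quick check of the arithmetic, using the figures quoted above:

```python
# Raw capacity check for the new Qumulo models (vendor decimal terabytes).
models = {
    "P184T (all-flash)": (24, 7.68),  # 24 x 7.68TB NVMe SSDs
    "C168T (hybrid)":    (12, 14.0),  # 12 x 14TB disk drives
    "K168T (archive)":   (12, 14.0),  # 12 x 14TB disk drives
}

for name, (drives, tb_per_drive) in models.items():
    print(f"{name}: {drives} x {tb_per_drive}TB = {drives * tb_per_drive:.0f}TB raw")

# P184T (all-flash): 24 x 7.68TB = 184TB raw
# C168T (hybrid): 12 x 14.0TB = 168TB raw
# K168T (archive): 12 x 14.0TB = 168TB raw
```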

A Qumulo software update increases write performance by up to 40 per cent on its all-flash P series.

The real-time analytics shows how much capacity is used by storage, snapshots, and metadata. It also reveals capacity usage trends and makes usage spikes visible. A security audit feature tracks which users accessed files and what they did during the access.

The Qumulo C168T is available now, the K168T is available July 9, and the P184T on July 23. The new software features and functionality are in v2.12.4 of Qumulo’s software, which is available now.

SK hynix lays it thin for 128 layer NAND

SK Hynix today said it has begun mass producing the world’s first 128-layer NAND product and will kick off sales in the second half of the year.

A quick transition to 128-layer tech will give SK Hynix production cost and NAND density advantages over the competition. The Korean NAND fabber is currently ramping up sales and production of its 96-layer die technology, as are Intel, Micron, Samsung and Toshiba/Western Digital.

SK hynix 128-layer product.

The 128-layer product is formatted as TLC (3bits/cell) and provides a 1Tb die. 128-layer NAND is one-third more dense than 96-layer alternatives.

Some competitors have 1Tb QLC (4bits/cell) die tech, but SK hynix said QLC represents less than 15 per cent of the market. According to the company TLC accounts for more than 85 per cent of the NAND market.

The SK hynix technology has logic at the base of the die with NAND cells stacked in layers above it. The company says the 1Tb 128-layer die increases bit productivity per wafer by 40 per cent compared to its 96-layer NAND.

It will develop a 1Tb UFS 3.1 product for mobile phone use, using 128-layer tech, halving the chips needed for 1TB of phone memory compared to 512Gb technology and needing 20 per cent less power.

In 2020 SK Hynix will develop 2TB consumer SSDs using this technology, and also 16TB and 32TB NVMe data centre SSDs.

SK hynix is also developing next-generation 176-layer NAND, 48 layers more than 128, representing an approximate 30 per cent increase in density.
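A quick check of the layer arithmetic (our calculation). Note that density does not necessarily scale one-for-one with layer count, which is presumably why the quoted 176-layer density gain is lower than the raw layer increase:

```python
# Layer-count arithmetic for the two transitions mentioned above.
transitions = [("96 -> 128 layers", 96, 128), ("128 -> 176 layers", 128, 176)]

for label, old, new in transitions:
    print(f"{label}: +{new - old} layers, {(new - old) / old:.1%} more layers")

# 96 -> 128 layers: +32 layers, 33.3% more layers  (the "one-third" figure)
# 128 -> 176 layers: +48 layers, 37.5% more layers
```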

Huawei confusion hampers Micron revenue forecasts

Micron’s latest quarterlies show it remains stuck in the DRAM and NAND demand doldrums. But recovery is in sight. Maybe.

Micron blamed industry oversupply and steeper-than-expected price cuts for the sales decline. The company has also had to traverse the minefield laid by the US Department of Commerce with the trading ban imposed on Huawei, a significant Micron customer.

Some products can still lawfully be shipped to Huawei, Micron said. But it noted “considerable ongoing uncertainty surrounding the Huawei situation, and we are unable to predict the volumes or time periods over which we’ll be able to ship products to Huawei.” This makes revenue prediction difficult.

Revenues for the third fiscal 2019 quarter, ended May 30, plunged 38.6 per cent to $4.8bn and net income fell 78 per cent to $840m. Nevertheless, the company beat analyst expectations and shares rose 10 per cent yesterday in after hours trading.

In response, the company is reducing capital expenditure in fiscal 2020, “to help improve industry supply-demand balance”, CEO Sanjay Mehrotra said in a statement.

DRAM accounted for 64 per cent and NAND contributed 33 per cent to the quarter’s revenues. DRAM revenues fell 45 per cent y-o-y and NAND revenues fell 25 per cent.

Micron invested $2.21bn in capital expenditure in the quarter and free cash flow was $504m. It ended the quarter with $7.93bn in cash, marketable investments, and restricted cash.

Supply and demand

According to Micron, there are signs of an uptick in demand. It thinks the market will return to good year-on-year DRAM and NAND bit demand growth in the second half of calendar 2019.

In the meantime it is reducing over-supply by continuing the previously announced five per cent idling of DRAM wafer starts, and by cutting NAND wafer starts by 10 per cent, up from the five per cent reduction announced earlier.

Micron expects healthy cost savings in DRAM and NAND this fiscal year due to technology improvements. DRAM processes are moving to 1y-nm and on to 1z-nm, and 96-layer NAND production is ramping up. The company also reported good progress with 128-layer technology.

The outlook for the fourth fy19 quarter is revenues of $4.5bn ± $200m. A year ago revenues were $8.44bn. This steep fall may mark the bottom of Micron’s revenue trough.

StorONE’s Seagate SSD demo is fast for a virtual appliance

Startup StorONE claims record performance from its TRU S1 storage software running in a virtual appliance with 24 Seagate SSDs.

Its software ran more than three times faster last year using WD SSDs housed in physical servers.

StorONE has written its own storage stack which supports all storage protocols (block – FC and iSCSI, file – NFS, and object – S3) on the same drives. It supports all drive types in the same server and an unlimited snapshot capability.

A YouTube video shows the latest demo. The StorONE TRU S1 software ran in two ESXi VMs forming a high-availability pair, which in turn executed in a 2-node, 2U Supermicro SBB server, a virtual appliance, filled with the SSDs.

This storage target talked to two initiators running in Dell PowerEdge servers across iSCSI links.

The SSDs were Seagate ST1600FM0003 devices – 1.6TB, 12Gbit/s SAS drives from the 1200.2 family, announced in 2015. This is fairly old technology.

The results were:

  • 500,000 random read IOPS (4K blocks)
  • 180,000 random write IOPS
  • 10GB/sec sequential read bandwidth (128K)
  • 5GB/sec sequential write bandwidth
  • Average latency <0.2ms

This is with full data protection running and unlimited snapshots.

The StorONE Seagate SSD demo video

The WD SSD demo last year used 24 x SS200 3.84TB SAS SSDs. Its performance numbers were:

  • 1.7m random read IOPS (4K blocks)
  • Random write IOPS not supplied
  • 15GB/sec sequential read bandwidth (128K)
  • 7.5GB/sec sequential write bandwidth
  • Average latency <0.3ms

That was more than three times the random read IOPS and 50 per cent more sequential read and write bandwidth than in the latest StorONE-Seagate demo. The latency was a smidgin longer but, against the faster IOPS and bandwidth numbers, pretty inconsequential.
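Working the comparison through from the two results lists (our arithmetic):

```python
# Ratio of last year's WD-based demo to the latest Seagate-based demo,
# using the figures listed above.
wd      = {"random read IOPS": 1_700_000, "seq read GB/s": 15.0, "seq write GB/s": 7.5}
seagate = {"random read IOPS":   500_000, "seq read GB/s": 10.0, "seq write GB/s": 5.0}

for metric in wd:
    print(f"{metric}: {wd[metric] / seagate[metric]:.1f}x (WD demo vs Seagate demo)")

# random read IOPS: 3.4x
# seq read GB/s: 1.5x  -- i.e. 50 per cent more
# seq write GB/s: 1.5x -- i.e. 50 per cent more
```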

Gal Naor, StorONE founder and CEO, told us: “We managed to achieve the amount of IOPS with a StorONE virtual solution while the 1.7m IOPS was achieved with physical servers. 250,000 IOPS with a single storage virtual node is a very high number.”

“Other vendors, in order to get 500,000 IOPS, need many many nodes and DRAM. Our CPU and memory usages are very low and there is huge space for many VMs.” StorONE used a single VM.

So that’s us told. Physical servers can go faster than their virtual counterparts, and StorONE VMs go faster than other people’s.

You look stunning, darling! WD snuggles up to Veeam in an IntelliFlash

Western Digital has linked its IntelliFlash all-flash and hybrid disk/flash arrays to Veeam’s storage API, to produce a storage snapshot plug-in for Veeam and protect vSphere VMs more quickly.

The solution uses Veeam Availability Suite to facilitate backup operations in vSphere environments by eliminating backup window bottlenecks. This minimises the impact on applications during the VM stunning process, according to WD.

An ESX virtual machine is quiesced (stunned) when a backup is taken. This can cause IO latency problems, because IO requests arriving at the stunned VM are queued and only run once the backup completes. Snapshot-based protection shortens the stun window because array snapshots are quicker to complete than a full backup process.

Western Digital Corporation IntelliFlash rack.
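As a rough illustration of why storage snapshots shorten the stun window, here is a minimal sketch of a snapshot-orchestrated backup cycle. It is a conceptual example with made-up helper classes, not the Veeam or IntelliFlash API.

```python
# Conceptual sketch of snapshot-orchestrated VM backup. The VM is only stunned
# while the array snapshot is taken, not for the whole backup transfer.
import time

class FakeVM:
    def quiesce(self): print("VM stunned: incoming IO is queued")
    def resume(self):  print("VM resumed: queued IO is replayed")

class FakeArray:
    def create_snapshot(self):       return "snap-001"  # near-instant on the array
    def copy_out(self, snap):        time.sleep(0.1)    # stands in for the long backup copy
    def delete_snapshot(self, snap): pass

def backup_via_array_snapshot(vm, array):
    vm.quiesce()
    start = time.monotonic()
    snap = array.create_snapshot()   # the only step inside the stun window
    vm.resume()
    stun_window = time.monotonic() - start

    array.copy_out(snap)             # long-running transfer; the VM is unaffected
    array.delete_snapshot(snap)
    return stun_window

print(f"stun window: {backup_via_array_snapshot(FakeVM(), FakeArray()):.4f}s")
```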

WD said the plug-in improves recovery point objectives. It gives IntelliFlash users access to Veeam features such as granular recovery with Veeam Explorer for Storage Snapshots, agentless backup, Instant VM Recovery, Veeam DataLabs Secure Restore, SureBackup and SureReplica.

Comment

IDC lists Western Digital in the ‘Other’ category of storage array suppliers. IntelliFlash arrays compete with a long list of all-flash arrays from other vendors: Apeiron, Dell EMC, E8, Excelero, Hitachi Vantara, HPE, Huawei, IBM, Kaminario, NetApp, Pavilion Data Systems and Pure Storage, for example.

This Veeam integration is not a game changer for WD. Still, anything that removes potential sales objections should be welcome.

Cloudian pushes Xtreme performance with Seagate backing

Cloudian has increased HyperStore storage capacity by 80 per cent, courtesy of a strategic deal with Seagate.

The object storage software company today launched Xtreme, the third member of the HyperStore line-up, joining the 1500 and 4000.

Cloudian 4000

The 1500 is a 1U rack enclosure with 12 x 12TB disk drives, adding up to 144TB raw capacity. These are hot-swappable 7,200rpm SAS drives. The system functions as a single node in Cloudian’s object system. There are two SSDs for handling metadata operations.

The 4000 fits two nodes in a 4U chassis. This contains 70 disk drives of up to 12TB capacity. Maximum raw capacity is 840TB, or 210TB per rack unit (U) – better than the 1500’s 144TB/U.

Open the Seagates

Xtreme is a rebadged Seagate EXOS system running HyperStore software and intended for private cloud installations. The system incorporates 96 disk drives, each of up to 16TB, providing up to 1,536TB raw capacity. That’s 384TB/U, or 83 per cent more than the 4000. Ten Xtremes in a rack will hold 15.36PB raw.

The EXOS chassis has 100 drive bays, and four are used for 1.92TB SSDs – two per node – speeding the metadata operations.
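The density comparison works out as follows from the drive counts and sizes quoted above (our arithmetic):

```python
# Raw capacity and density per rack unit for the three HyperStore models.
models = {
    "HyperStore 1500":   {"drives": 12, "tb": 12, "rack_u": 1},
    "HyperStore 4000":   {"drives": 70, "tb": 12, "rack_u": 4},
    "HyperStore Xtreme": {"drives": 96, "tb": 16, "rack_u": 4},
}

density = {}
for name, m in models.items():
    raw_tb = m["drives"] * m["tb"]
    density[name] = raw_tb / m["rack_u"]
    print(f"{name}: {raw_tb}TB raw, {density[name]:.0f}TB/U")

print(f"Xtreme vs 4000 density: +{density['HyperStore Xtreme'] / density['HyperStore 4000'] - 1:.0%}")
print(f"Ten Xtremes per rack: {10 * 96 * 16 / 1000}PB raw")

# HyperStore 1500: 144TB raw, 144TB/U
# HyperStore 4000: 840TB raw, 210TB/U
# HyperStore Xtreme: 1536TB raw, 384TB/U
# Xtreme vs 4000 density: +83%
# Ten Xtremes per rack: 15.36PB raw
```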

A point release of HyperStore, v7.1.5, is needed to run Xtreme.

Seagate and Cloudian

Seagate disk drive tech will be incorporated into the HyperStore line more quickly, according to Cloudian. This implies that Seagate HAMR and multi-actuator drives will arrive in HyperStore products closer to Seagate’s product announcements.

The Seagate tie-in gives Cloudian a roadmap to 20TB-plus capacity drives and to dual-actuator drives with two independent sets of read/write heads – faster IO, in other words – as well as to all-flash nodes.

Ken Claffey, GM of Seagate’s enterprise data centre solutions, told us the company had to pay a lot of attention to heat dissipation and vibration control because of the drive packing density. The implication is that disk drive manufacturers can do this better than disk array suppliers.

Cloudian CEO Michael Tso said the Seagate relationship meant customers could have confidence that HyperStore will follow the disk drive and chassis price/capacity/performance trends over the next three to five years. This is relevant to enterprise customers who install multi-petabyte systems on-premises.

Claffey pointed out that Seagate sees a lot of growth in private cloud business.

At present the HyperStore Xtreme costs under 0.5c per GB per month. This should fall as disk capacities grow.

Why did Cloudian partner with Seagate? Western Digital has its own ActiveScale object storage product line. Now Seagate, via Cloudian, can respond to that.

WD tends to go up the storage stack with its own products, while Seagate prefers to do it through partnerships and OEM deals, such as this one with Cloudian.

Cloudian recently announced native S3-compatible object storage for VMware. It now has a box that can provide such storage for a huge number of VMs.

Vexata execs depart for greener shores

Struggling Vexata has lost its chief marketing officer and a product and solution marketing VP.

Vexata is an extreme high-performance array startup that downsized in March 2019. At the time Rick Walsworth, VP product and solution marketing, told us the cuts reflected the company’s switch from direct sales to partner-led sales.

Walsworth quit the company last month, moving to VMware to run VMware Cloud Foundation Product Marketing.

Vexata CMO Ashish Gupta has also left, joining security firm Banyan as its marketing head. He told an audience in a Tech Field Day video recorded on June 19 that it was his third day at the company.

At time of writing, Gupta remains featured on Vexata’s leadership page as its CMO.

Founded in 2013, Vexata has raised $54m in four funding rounds, of which the most recent was a $5m top-up in 2017.

Hello from the other side. Mangstor completes EXTEN reboot

Mangstor, a pioneer in NVMe-over-Fabrics array technology which crashed and nearly burned, has escaped the undead. The company, now known as EXTEN Technologies, has launched HyperDynamic, an NVMe-oF target software product for drive chassis management and operation, aimed at OEMs and system integrators.

EXTEN website

The change of name from Mangstor to EXTEN took place in 2017. But this is the first sign of product since then.

HyperDynamic

EXTEN’s HyperDynamic software runs in embedded systems that front-end a chassis of NAND or Optane solid state drives, acting as the target termination point for NVMe-oF links. These can use RDMA or TCP across Ethernet; iWARP, InfiniBand and OmniPath connections – but not NVMe over Fibre Channel – are also supported. RDMA and TCP can run in parallel.

Because of this support for multiple link types EXTEN classifies HyperDynamic as software-defined NVMe-oF. It is built on a shared-nothing architecture and uses a microservices-based design.

EXTEN software schematic

The embedded system can use Intel or AMD x86-64 processors, as well as ARM chips and custom SoCs (systems-on-chip). HyperDynamic supports the Redfish and Swordfish open industry management standards through its RESTful API.

It includes automatic volume provisioning and service classes for bandwidth management.

Boxing clever

EXTEN has teamed up with AIC, a Taiwanese OEM, to produce a target NVMe-oF storage device. This consists of:

  • AIC J2024-04 JBOF (box of flash drives) with 24 hot-swappable, dual-ported NVMe drives 
  • 4 hot-pluggable Broadcom PS1100R Stingray controller cards for management with
    • 64 PCIe lanes for networking 
    • 64 for storage
  • HyperDynamic SW running in Stingray cards
  • Support for drive-level redundancy with RAID 0/1/5/6/10/50  
  • Uses standard MPIO and Linux drivers
  • High-availability dual-controller and dual-port, fail-over support with no single point of failure.

EXTEN said the software has composable data paths, enabling system administrators to share storage resources in multiple locations across compute nodes.

The software provides more than 40GB/sec of bandwidth per node. NICs from Broadcom, Chelsio, Intel, Mellanox, and Solarflare are supported. It adds less than one microsecond of latency compared to an NVMe access to a direct-attached drive.

TL;DR

The HyperDynamic software stack looks capable. But EXTEN needs partners to implement it and take it to market. It has promise as a technology for skilled DIY NVMe-oF customers, and as NVMe-oF software for system builders who don’t wish to use NVMe-oF storage from mainstream vendors or startups such as Apeiron, E8, Excelero and Pavilion Data Systems.


HPE composes SimpliVity love letter to Synergy

HPE plans to decompose SimpliVity hyperconverged systems into its Synergy pool of composable systems.

HPE launched Synergy composable systems technology in 2015. The underlying idea is to organise major IT components in an IT resource pool, thus ensuring they are not stranded inside fixed server configurations. Envisage a rack-level collection of processors, memory, storage, virtualization software and network connectivity.

Synergy rack storage tray

When a server system is needed to run an app it is dynamically composed from the appropriate elements. The app is executed and the resource elements are returned to the pool for re-use.

Synergy processor tray
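As a conceptual illustration of that compose-run-release cycle, here is a minimal sketch of a shared resource pool; the class and method names are ours, not HPE’s Synergy API.

```python
# Minimal sketch of the compose/run/release cycle described above.
class ResourcePool:
    def __init__(self, cpus, memory_gb, storage_tb):
        self.free = {"cpus": cpus, "memory_gb": memory_gb, "storage_tb": storage_tb}

    def compose(self, **wanted):
        """Carve a logical server out of the pool for one workload."""
        if any(self.free[k] < v for k, v in wanted.items()):
            raise RuntimeError("insufficient free resources in the pool")
        for k, v in wanted.items():
            self.free[k] -= v
        return dict(wanted)

    def release(self, system):
        """Return a composed system's resources to the pool for re-use."""
        for k, v in system.items():
            self.free[k] += v

pool = ResourcePool(cpus=256, memory_gb=4096, storage_tb=500)
app_server = pool.compose(cpus=64, memory_gb=1024, storage_tb=100)  # run the app...
pool.release(app_server)                                            # ...then hand it back
```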

According to HPE, composable systems lower system management burden and improve resource usage. Benefits include:

  • 25 per cent lower IT infrastructure costs by eliminating over-provisioning and stranded capacity
  • 71 per cent less staff time per server deployment and 30 per cent higher application team productivity by increasing operational efficiency and rapid deployment of IT resources
  • 60 per cent more efficient IT infrastructure teams by reducing complexity and manual tasks

Dell EMC, Western Digital, DriveScale, Liqid and Kaminario are all developing composable systems.

HPE Synergy is $1.5bn business

At HPE Discover in Las Vegas this month, CEO Antonio Neri said in a keynote: “We…knew that the future was about composability, so we were the first to bring to market a composable cloud experience to the data centre in HPE Synergy, which is now a $1.5bn business.”

HPE said more than 3,000 customers have bought Synergy systems, and it reports 78 per cent year-over-year revenue growth.

The company recently added ProLiant Gen10 360, 380 and 560 servers to its Synergy-based composable rack infrastructure. Once composed, these can support Nimble arrays, vSAN and a pool of hyperconverged SimpliVity systems.

Decomposing SimpliVity

But HPE wants to go further than including SimpliVity merely as a usable element in a composable system. The company wants to dynamically compose SimpliVity systems themselves – in other words, to take the compute, networking, storage and software elements of a SimpliVity system and turn them into a composed system at application run-time.

Hyperconverged systems combine and integrate servers, storage, virtualization software and network connectivity into a single and scale-out system.

Paul Miller, VP of marketing for HPE’s converged data centre infrastructure business, discussed the combination of Synergy and SimpliVity last week in an interview with our sister publication The Next Platform.

Miller said the company is “not composing hyperconverged yet. That’s what we want to do, compose hyperconverged as a workload. We’re going to focus on unique workloads that you can instantaneously compose.”

Net:net

HPE thinks composable systems are the next step on the route to simpler and more automated IT operations, better resource utilisation and lower cost of ownership.

But there is an interesting side effect when hyperconverged systems become composable. With hyperconvergence, tightly integrated server, storage, virtualization and network connectivity are treated as Lego blocks and operated as single, scale-out systems.

In a composable set-up, a hyperconverged system becomes a software abstraction, a composability template, defining a set of IT resources for a particular kind of workload. Hyperconvergence as an orderable and separate IT technology ceases to exist.

Adding SimpliVity to Synergy gives you synergies, so to speak. The marketeers can certainly have fun with that notion. 

Private cloud is more expensive than trad IT storage. Who knew?

Private cloud storage costs more than traditional IT storage and both cost way more than public cloud storage.

That is our summary of recent IDC numbers, covered by Wells Fargo senior analyst Aaron Rakers in his weekly newsletter to clients.

IDC has totted up storage spending and capacity shipped numbers for the first 2019 quarter. The analyst firm provides a breakdown between public cloud, private cloud and traditional IT, enabling us to work out the cost of each kind of storage.

The spending numbers:

  • Public cloud – $2.813bn – down 14.8 per cent y-o-y
  • Private cloud – $2.157bn – up 23.8 per cent y-o-y
  • Traditional IT – $6.024bn – down 3.3 per cent y-o-y
  • TOTAL – $10.994bn – down 2.5 per cent y-o-y

The capacity shipped numbers:

  • Public cloud – 51.62EB – down 20.4 per cent y-o-y
  • Private cloud – 8.881EB – up 11.4 per cent y-o-y
  • Traditional IT – 31.957EB – up 17.0 per cent y-o-y
  • TOTAL – 92.458EB – down 7.7 per cent y-o-y

This means we can calculate the overall cost per EB of public, private and traditional IT storage:
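Dividing spend by capacity shipped gives a rough cost per exabyte for each deployment model (our arithmetic from the two lists above):

```python
# Dollars per exabyte shipped in Q1 2019, from the IDC figures listed above.
spend_bn    = {"public cloud": 2.813, "private cloud": 2.157, "traditional IT": 6.024}
capacity_eb = {"public cloud": 51.62, "private cloud": 8.881, "traditional IT": 31.957}

cost_m_per_eb = {k: spend_bn[k] * 1000 / capacity_eb[k] for k in spend_bn}
for k, v in cost_m_per_eb.items():
    print(f"{k}: ${v:.0f}m per EB")

print(f"private vs public: {cost_m_per_eb['private cloud'] / cost_m_per_eb['public cloud']:.1f}x")
print(f"traditional vs public: {cost_m_per_eb['traditional IT'] / cost_m_per_eb['public cloud']:.1f}x")

# public cloud: $54m per EB
# private cloud: $243m per EB
# traditional IT: $189m per EB
# private vs public: 4.5x
# traditional vs public: 3.5x
```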

Clearly, public cloud is the best value storage per dollar spent. Traditional IT storage is next, albeit around three and a half times more expensive. But private cloud storage is roughly four and a half times more expensive than public cloud storage and more costly than traditional IT storage.

Why should that be the case? Any ideas?