
Samsung DRAM sales slump

Poor memory chip sales led to lower-than-expected operating income and revenue for Samsung's fourth quarter of 2018.

Samsung’s operating profit in these preliminary results was $9.6bn – 21 per cent below the consensus average forecast of $12.4bn.

Samsung’s operating profit was $15.9bn in the third quarter, and the expected Q4 profit is almost 30 per cent lower than a year ago.

Samsung expects consolidated sales revenues for the quarter of between $52.5bn and $54.3bn. The $53.4bn mid-point is 9.9 per cent down from a year ago, and down from $58.4bn in Q3.

Analysts attribute the shortfall to poor DRAM sales in the smartphone, notebook, and PC markets, and strong competition for smartphones. It comes after Apple reported slumping iPhone sales in China. Apple is a large buyer of Samsung DRAM chips.

NAND sales have not been identified as contributing to Samsung’s revenue and profits slump.

Analyst fingers also point to the China-US trade tariff discussions as a contributing factor. Chinese state regulators have accused Samsung, SK Hynix and Micron of price-fixing DRAM and NAND chips.

Industry analysts think new Samsung smartphones and general server CPU upgrades will send DRAM demand upwards later this year. Even so, Samsung may have to cut prices.

Unlucky for some: Chris Mellor’s 13 predictions for storage tech in 2019

Here are my personal predictions for IT this year and how they will affect enterprise storage.

Multi-cloud adoption will (a) encourage enterprises to adopt fewer infrastructure component suppliers and (b) encourage abstraction layers to emerge that virtualise cloud services. Cloud suppliers may fight against this if they do not want to become commoditised.

Multi-cloud adoption will encourage enterprises to implement data management strategies to control and manage secondary data sprawl. Much as we have the Open Systems Interconnection (OSI) model, perhaps we need an Open Data Management model that characterises and standardises the data management functions of a computing system without regard to its underlying internal structures and technologies.

Backup: for more and more enterprises backup will cease being a stand-alone application and become part of a data management strategy. Security will be a necessary part of data management as well.

Backup: suppliers will increasingly move into data management. Acronis, Commvault, Druva and Veritas are examples of this.

Data Management: Dell EMC, HPE and NetApp will partner with or acquire data management companies like Actifio, Cohesity, Delphix and Rubrik to help customers control secondary data.

Object Storage: object storage and file storage will start to converge, with flash-based object storage providing radical object storage access speed improvements.

NAND: QLC (4 bits/cell) flash arrays will capture a portion of the nearline disk storage market and the fast-access archive market.

File Software: new-generation file software (WekaIO, Elastifile, etc.) will make progress but not really take off in 2019. It needs another year to get established.

NVMe: in the foregone-conclusion category, NVMe-oF will be widely adopted by SAN suppliers.

Storage-Class Memory: SCM will make relatively slow progress into servers, as Intel is the effective sole supplier and its Optane product is not yet good enough.

Public Cloud IoT Edge: AWS, Azure and Google Cloud Platform will all produce IoT Edge systems, as will IBM and Oracle for their clouds.

IoT Edge: The IoT Edge concept will blur to include core on-premises data centres at one extreme and dim-witted light-bulbs at the other.

Artificial Intelligence: AI will become increasingly nebulous as there is no definition of what AI is, compared with, for example, a SAN, NAS or object storage system. Machine learning is better defined. We need application-specific machine learning system benchmarks. Ideally we could do with application-specific AI benchmarks as well, if that's remotely possible, because then we could find out which suppliers' AI products are better than others.

Your occasional storage digest featuring Exagrid, WANdisco and WekaIO

Early January storage roundup news shows speed has paid off for deduplicating backup vendor ExaGrid, and for WANdisco and WekaIO. One W-vendor sends lots of data across networks at high speed. The other provides fast access to files using parallel IO streams.

ExaGrid carries on growing

When boring is good. ExaGrid has grown its business yet again, adding 100 new customers in 2018’s fourth quarter, and making more money from its deduplicating backup-to-disk arrays.

ExaGrid reported 12.2 per cent growth in 2016, 14.5 per cent in 2017, and 20 per cent last year. The target is 25 per cent for 2019 and 30 per cent for 2020.

CEO Bill Andrews attributes the growth to more efficient dedupe. ExaGrid’s product “beats every solution on the market when it comes to backup performance, restore performance, cost up front, and cost over time,” he said.

For example, the company claims ingest is three times faster and restores/VM boots are up to 20 times faster than its closest competitor, understood to be Data Domain.

Andrews claims the EX63000E product scales to a 2PB full backup in a single system with an ingest rate of over 400TB per hour. It supports a replicated second site with storage of up to 4PB for disaster recovery and long-term retention, making it the largest and fastest system on the market.

ExaGrid maintains sales operations in 30 countries and wants to increase this to 50 by 2020.

WANdisco almost gets first multi-cloud customer

Data replicator WANdisco has won a $565,000, three-year deal with an unnamed giant mobile network operator for its Fusion for Multi-Cloud offering. WANdisco replication enables the transmission of active on-premises data to remote sites and/or public clouds. The customer uses the software to replicate data continually across multiple Amazon cloud environments and locations.

WANdisco won the contract while working with AWS and says this is its first contract where a client is using multi-tenancy and multi-cloud (region) aspects of its product. It says a multi-cloud strategy prevents vendor lock-in and cuts total cost of ownership by reducing switching costs across cloud vendors.

David Richards, WANdisco chairman and CEO, said: “It is becoming increasingly unlikely that businesses of this scale will choose to depend on a single cloud vendor for their data requirements. We are seeing businesses with complex data requirements choose multiple suppliers for a range of specialised data use cases.”

In this case a single cloud vendor, AWS, is being used, albeit with multiple regions. So it is sort-of multi-cloud.

WekaIO gets AWS blessing

AWS has granted Storage Competency Status for Primary Storage to scale out file system software startup WekaIO.

Weka provides Matrix, its high-performance parallel-access filer software, and AWS says it is good enough for AWS high performance computing users; using WekaIO in AWS provides HPC file access as fast as an on-premises HPC site.

Matrix has S3-compatibility, and customers such as TRE ALTAMIRA and Untold Studios use it in their AWS HPC operations. It is available in the AWS Marketplace.

People peregrinations

Fast network interconnect supplier Mellanox has appointed Doug Ahrens as its new CFO. He was previously CFO at GlobalLogic. The interim CFO was Eric Johnson, who reverts to his role of VP and corporate controller. Mellanox has been rumoured to be up for sale, with both Xilinx and Microsoft fingered as potential acquirers. Ahrens’ appointment may signal Mellanox is not for sale.

Software-defined storage outfit Virtuozzo has appointed Alex Fine as CEO. This is his first CEO gig, and his resume includes stints at CloudBlue, Ingram Micro, Odin and Parallels. Previous CEO George Karidis, who left in September last year, is now COO at PacketHost. Virtuozzo is entering the hyperconverged infrastructure market with a Virtuozzo Infrastructure Platform product.

WekaIO has opened a Midwest US region office in Detroit to increase its presence with car makers. Richard Dyke, WekaIO’s Sales VP, said: “Detroit is the heart of the US automotive industry and therefore a critical territory for us.”  

IBM OEMs Actifio to create virtual data pipelines

IBM is OEM’ing Actifio’s Virtual Data Pipeline technology to supply production data copies to downstream users for faster application development and analytics.

IBM has confirmed the agreement with us and is working up the details for a big announcement before IBM THINK in February.  The deal should strengthen Actifio’s ability to compete with other data virtualisers and managers such as Delphix and Cohesity. A Big Blue badge of approval should also improve Actifio’s sales prospects with enterprises beyond the IBM channel.

Actifio’s Virtual Data Pipeline (VDP) captures data in its native format from production application sources such as Oracle, SAP, SQL Server and Exchange, in physical or virtual environments, on-premises or in the cloud.

A single master copy of each data source is maintained with post-initial capture changes tracked at the block level and incrementally merged into the master copies.

Actifio Virtual Data Pipeline concept

Users can apply masking to master copy data to filter sensitive data. The results are delivered for testing in software development, analytics, data protection, long-term storage in the cloud, and governance.

Policies can be set to define data workflows, and data-using functions can request fresh data copies via a REST API or a click in a UI.
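To make the pattern concrete, here is a minimal Python sketch of the copy-data idea described above: one golden master per source, changed blocks merged in incrementally, and virtual copies served on request. All names, the masking function and the REST endpoint are our own inventions for illustration; this is not Actifio's actual API.

```python
# Illustrative sketch of the copy-data pattern described above: one golden
# master per source, changed blocks merged incrementally, virtual copies on
# request. Invented names throughout; this is not Actifio's actual API.
from dataclasses import dataclass, field

@dataclass
class GoldenMaster:
    blocks: dict = field(default_factory=dict)  # block number -> bytes

    def apply_changes(self, changed_blocks: dict) -> None:
        """Merge post-capture changes, tracked at block level, into the master."""
        self.blocks.update(changed_blocks)

    def virtual_copy(self, mask=None) -> dict:
        """Return a view of the master. Real systems share unchanged blocks
        rather than copying them; mask() can filter sensitive data."""
        return {n: (mask(b) if mask else b) for n, b in self.blocks.items()}

master = GoldenMaster()
master.apply_changes({0: b"initial full capture", 1: b"payroll data"})  # first capture
master.apply_changes({0: b"only this block changed"})                   # incremental merge

# A downstream consumer might request a fresh copy over REST, e.g.
# (hypothetical endpoint):
#   requests.post("https://vdp.example.com/copies", json={"source": "oracle-prod"})
dev_copy = master.virtual_copy(mask=lambda b: b.replace(b"payroll", b"*******"))
```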

IBM InfoSphere Virtual Data Pipeline does exactly the same thing and, we understand, uses Actifio’s Virtual Data Pipeline technology.

IBM’s InfoSphere Virtual Data Pipeline

IBM confirmed the OEM deal through PR spokesperson Lucy Linthwaite: “We launched a new offering in December 2018 called InfoSphere Virtual Data Pipeline, which is based on the Actifio Sky virtual appliance. … This is a traditional OEM agreement with Actifio.”

Brian Reagan, Actifio CMO, said more details on Actifio’s alliance with IBM are on their way and talked about expanding Actifio’s market reach through more than 4,000 IBM sales reps.

Actifio started in 2009 as a copy data management company. Its software makes a golden master copy of production data and pumps out fast virtual copies to non-production users that need them. This can save days and sometimes weeks of development time, the company says.

It now refers to itself as a multi-cloud data management services company, and picked up $100m in E-round funding in August 2018.


Journalist jumps ship to Nutanix

A couple of job announcements for you to ring in the New Year.

Greener Grass

Nutanix has recruited TechTarget France journalist Christophe Bardy as solutions strategist in the SE and Specialists organisation for EMEA.

Bardy was one of the world’s foremost journalists covering enterprise storage. 

Christophe Bardy

He co-founded LeMagIT, a leading web publication for IT professionals in France, which was bought by TechTarget in December 2012. Bardy became Deputy Editor in Chief for infrastructure, covering storage, networking, cloud, servers, virtualization, and operating systems.

Qumulo hires Molly Presley

Scale-out file system hardware and software vendor Qumulo has hired Molly Presley as its Director of Product Marketing.

Molly Presley

Presley’s CV includes stints at DataDirect Networks and tape library vendor Spectra Logic.

She spent two years at Quantum, passing through troubled times at the firm: Jon Gacek, the CEO when she joined, resigned and was succeeded in sequence by two others and then a third, Jamie Lerner, who is now reorganising the business.

IBM makes Spectrum Scale play nicely with software containers

IBM has announced Spectrum Scale file system support for containers. The release means that companies building containerised IT environments can use their existing Spectrum Scale storage systems as opposed to buying new products.

Spectrum Connect enables the provisioning, monitoring, automation and orchestration of IBM block storage in containerised, VMware and Microsoft PowerShell environments. Spectrum Scale is integrated with Spectrum Connect using IBM Storage Enabler for Containers v2.0, an open source software product. 

Storage Enabler for Containers v2.0 extends Spectrum Connect’s containerised storage support to Spectrum Scale v5.0 and later versions. It also supports mixed Fibre Channel and iSCSI deployments in the same Kubernetes cluster and Kubernetes Service Accounts for pod authorisation procedures.

The upshot is that stateful microservices such as MongoDB and PostgreSQL can use Spectrum Scale storage when running in containers. Admins specify persistent volumes using the Kubernetes or IBM Cloud Private orchestrator, then provision the storage via the Spectrum Connect storage service using Kubernetes storage class objects.
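For illustration, this is roughly what the provisioning step looks like with the Kubernetes Python client. The storage class name is a hypothetical placeholder for whatever class the Spectrum Connect storage service exposes in a given installation.

```python
# Sketch: claim a persistent volume from a storage class for a stateful pod.
# The storage class name "spectrum-scale-gold" is a hypothetical placeholder.
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig for the cluster

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="mongodb-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
        storage_class_name="spectrum-scale-gold",
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

A pod then mounts the claim by name, and the storage class decides which back-end file system actually serves the volume.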

Comment

IBM is doing a good job in keeping its legacy high-performance computing Spectrum Scale (the rebranded GPFS, General Parallel File System) relevant for container software shops.


SoftIron, a proprietary ARM hardware company, loves open source Ceph

Interview SoftIron is developing and building its proprietary ARM-powered HyperDrive storage hardware while strongly promoting Ceph open source storage software. 

It seems an unusual combination. SoftIron also thinks Ceph will fulfil a Linux-like destiny in open-source software-defined storage. This is bullish, to say the least.

Curious about both things, I determined to find out more about SoftIron and its Ceph-boosting activities.

Here is my interview with  Jason Van der Schyff, SoftIron’s VP Operations. His answers are edited for brevity.

Jason Van der Schyff

B&F: Why was SoftIron founded and by whom?

Van der Schyff: SoftIron was founded in 2012 by Norman Fraser and Phil Straw. The reason for the founding is signalled in the name SoftIron; the idea that hardware, electronics in general and software have anonymous relationships, but if software and hardware are targeted together different results can be achieved. SoftIron can now prove that results are profound on many vectors. SoftIron has manufacturing, performance, reliability, quality, OpEx and cost advantages.

B&F: What is its product development history, please?

Van der Schyff: SoftIron was iconic in the early days of ARM64. SoftIron produced the first production ARM64 server in 2014 and has designed on nearly all early ARM64 silicon that was destined for the data center. In addition, the company has worked on a number of software projects for key people in the ecosystem. 

Knowing ARM well, ARM64 developments, all of the silicon available and being involved in the software issues from firmware and all the way through the operating system to workloads, SoftIron knew where all the bones were buried in the ARM64 continuum. With this knowledge, SoftIron spotted first the ability to seed a software-defined storage platform that leverages ARM64.

B&F: Why and when did it set up offices in the USA?

Van der Schyff: From the very founding of the company in 2012, SoftIron was international, with initial presences in both the UK and USA. Today, SoftIron’s main technological development is based in Silicon Valley, along with core operations and administrative functions, plus a mostly remote engineering team spanning the UK and other areas of the USA. The USA is important to us for a few reasons.

Firstly, the proximity to Silicon Valley is an obvious one, but more importantly, from the outset, we were determined to maintain complete control over our production facility so we wanted it in our backyard. It’s a core tenet of ours, that we design and build everything from scratch in California, and this offers our customers complete peace of mind when it comes to knowing their hardware isn’t at risk by being manufactured offshore. Being able to tell our customers that their appliance is made in America is very important to us.

HyperDrive

B&F: With HyperDrive you design your own HW to use Ceph. Why not write your own SW?

Van der Schyff: SoftIron does create its own software, and software engineering is the biggest footprint of engineering in the company. From firmware to UEFI, to IPMI and BMC, to Linux distribution, drivers and application and management applications, we target the best experience, control, and security to serve ease of use, performance, and quality. 

SoftIron believes that there is a place for its own software when it serves better results for the customer, but the time has come in the world of storage for customer freedom in storage software. 

We believe that the application domain belongs in open source, and represents customer choice and freedom. Ceph is quickly becoming the “Linux of storage”. SoftIron is part of that drive and as such joined the founding board of the Ceph Foundation. This marriage of custom targeted hardware and supporting software provides the very best experience, provides true ease of use, performance and a practical platform for enterprise production deployment for a holistic total product.

B&F: Describe the hardware in HyperDrive and the choices you made in using it.

Van der Schyff: HyperDrive is actually a portfolio of products based on a technology platform of mechanical, electronic and software technology developed from the founding of SoftIron. Over the coming years, many products will be produced on this platform.

SoftIron HyperDrive custom Board Management Controller (BMC)

B&F: Any FPGAs, or ASICs in HyperDrive?

Van der Schyff: Yes, targeted acceleration is an area SoftIron is focused on. Watch this space for both FPGA and ASIC assets. A prime example is HyperCast, a product that does video transcoding using a “sea of ASICs”. This trend will continue across the HyperDrive storage portfolio in time.

B&F: HyperDrive outperforms a Broadwell reference system. What about Skylake?

Van der Schyff: SoftIron has products that outperform Skylake for point to point comparisons for the same architectural reasons as for Broadwell. In Q4’18 a number of new innovations will redefine density and performance in the Ceph space. AMD EPYC based systems and very innovative multiprocessor systems with networking matrix and SDN technology will move the bar in this regard.

B&F: Please compare the pricing of a HyperDrive system vs a reference X86 system.

Van der Schyff: SoftIron is building both ARM64 and X86 products, and the distinction in the end product is usually not delineated by cost but rather by function. SoftIron is focused on the best product that serves the customer, and all customers are different. SoftIron is liberated by its technology platform such that we can make many configurations based on variable components. One of these variables is the processing type.

B&F: How important is Ceph to the storage industry?

Van der Schyff: The storage industry has an emerging open source icon – Ceph. In open source there is a trend: when one or more open source projects get to enterprise feature parity within an existing industry, there are usually a number of competing projects that anneal to one or perhaps two areas of traction.

We have seen this with databases (MySQL and SQLite), operating systems (Linux), and web servers (NGINX, Apache) as examples. The storage industry really wants an open source option and we believe that Ceph has reached this credibility. It is the true ‘Swiss Army knife’ of storage with its ability to do block, object and file storage with a focus on consistency and correctness before any other factor. Apache and Linux arguably won because of this approach.

A white paper capturing our thoughts on this matter can be found here. 

B&F: How would you compare and contrast Spectrum Scale (GPFS) and Lustre with Ceph?

Van der Schyff: GPFS and Lustre have a distinct sector of their own within the storage market as a whole. Lustre and GPFS are known for scale and performance and they are very good at what they do. Ceph is a more ambitious and wide-ranging storage technology that has a very active and open development community that is involved in many trends of technology that consume storage (and therefore Ceph). 

In certain corners of the industry, GPFS and Lustre are an obvious choice where requirements are distinctly polar. Looking forward, many consumers of storage have to make strategic, not polar, choices on storage technology. Federated data lakes, technology leverage and consolidated support and operations are possible with Ceph. A targeted hardware platform, with portfolio choices and targeted acceleration, will blur the line between these worlds.

B&F: Where do you think the most storage innovation is taking place?

Van der Schyff: Storage media innovation is rapid and very competitive. Technologies like 3D flash are showing densities that rival spinning media but with a quantum leap in performance. Quietly, the hard drive vendors have to stay relevant and are doing some impressive work, although this gets little press coverage.

Open Source is starting to become an ever more credible way of doing scale and enterprise storage. Ceph is arguably one of the widest ranging and fastest moving projects in this space.

B&F: What does the Ceph roadmap look like?

Van der Schyff: It is very active, moving very fast in terms of feature enhancements and has a focus on improved ease of use through management, containerization, and performance increases in areas like CephFS. The Ceph community is very active, at both an individual and organisational level, with a very focused and competent leadership group.

B&F: What are the near- and mid-term goals for SoftIron?

Van der Schyff: Build out a portfolio of HyperDrive products. Fill out other areas of our technology platform and make the very best Ceph product we can, and serve our customers on their terms. Also, as evidenced by our joining the Ceph Foundation, we look forward to leading – and contributing to – the future of Ceph, and demonstrating its undeniable role in delivering world-class enterprise storage solutions.

B&F: How many staff does it have and is new funding being sought?

Van der Schyff: The company is small but with a presence in the United Kingdom and the United States. The company is aggressively scaling, and we expect to double in size over the next 12 months. SoftIron is currently closing an investment round of growth capital.

Iguazio builds Google-like Outpost for IoT retail edge

Unlike Amazon with Outpost and Microsoft with Azure Stack, Google has no on-premises public cloud presence. Iguazio says it can provide one for Google customers.

The Israeli start-up claims superstore operators with multi-branch retail outlets want to use public cloud facilities but won’t use Amazon because it is a direct competitor through the Whole Foods acquisition.

Iguazio is working with Trax, a retail edge app vendor, and integrating Google Cloud Platform to provide an Amazon Outpost-like service. It says the Iguazio/Trax/GCP combo delivers public cloud benefits that enable retailers to optimise in-store operations across the estate.

Yaron Haviv, Iguazio founder and CTO, said: “Retailers who left AWS due to the Whole Foods acquisition are a target market for Google and this solution enables them to build intelligent stores and compete with Amazon Go.”

Making Trax

Trax ranks in the top 25 Fastest Growing Companies on Deloitte’s Technology Fast 500 list. The company “monitors, predicts and optimizes store-and-field performance in real-time to improve on-shelf availability, optimize click-and-collect processes and modernize the shopping experience”.

In other words, it uses video and still cameras plus software to track where customers go in a store and what they buy. Doing this in real time is compute- and storage-intensive.

Trax CTO Yair Adato said: “We recognised that we needed an edge-to-cloud solution that…allowed us to focus on our application, versus the management of our infrastructure.”

This is where Iguazio comes in. In essence it provides the glue between Trax and Google Cloud Platform. Iguazio supplies the storage platform and infrastructure, and its software integrates with store-level workflows and apps for local analytics. Also, Iguazio enables data movement to the cloud, where it integrates with federated store-level workflows.

Trax-generated data points include videos and images of products on shelves, captured by cameras and other devices. They are stored in Iguazio and processed by Trax software to turn analogue imagery, such as known products on shelves, into digital information.

This data is then used to track real-time store performance at product level and to analyse performance with different pricing and shelf placement options, and in turn drive shelf re-stocking.

The Trax software analyses tens of thousands to hundreds of thousands of data points, depending on store size, which are stored in Iguazio databases on Iguazio array hardware. This is linked to Google Cloud which receives selected data for further analysis looking across a set of retail branches, and also using Trax software and other applications.

Iguazio storage and analytics

Israel-based Iguazio is not your usual storage array or software-defined storage company. The company’s technology holds metadata in quick NVMe flash storage providing fast data access for Big Data analytics apps. Iguazio launched its analytics software and storage hardware in 2016. It provides NVMe-supporting storage arrays with high IOPS and low latency, but these arrays are in place to get its software running fast, the hardware being subservient to the software.

A diagram shows Iguazio’s ideas in a retail environment.

Iguazio cloud-native integrated cloud to edge (ICtoE) scheme. The Google Cloud ‘Outpost’ is our terminology

The white arrows in the diagram show selected and filtered data flowing upstream to the Google cloud.

The grey arrows show control commands and containerised software (developed centrally, cloud-native and orchestrated by Kubernetes) flowing out to the stores, where it executes. Machine learning models, developed centrally, are also shipped out to the stores to be used in the analytics routines there, as are serverless functions.

These analytics routines determine everything from simple things, such as how many packs of 300cl Cola were sold per shelf space unit per hour, to more complex questions. At which shelf level should the Cola be presented, and how many cans should be in a row on the shelf? How do end-of-aisle notices affect sales? Is it more profitable to sell five cans of branded Cola or ten cans of own-brand Cola? How effective are special offers?

The Wisdom of Clouds

Retail has become an intensely data-driven environment.  Bricks and mortar retailers need to offer the best shopping experience they can, to match Amazon, and to be as fast-moving and flexible as Amazon in their stocking, pricing, delivery and operational measurement and analytical operations.

Iguazio asserts that at a central level this can mean using the public cloud because it can provide the speed and flexibility needed.

But operators can’t maximise individual store-level operations with just a central public cloud, because there’s too much data to send upstream to the cloud, and decision-making takes too long.

They need mini or micro data centres in each store for real-time store-level optimisation, plus a central core data centre operation, which could be in the cloud – Google’s, in this case. It is used for business optimisation across the bricks and mortar estate, and for application and machine learning model development to continually improve individual store operations.

Superstore as IoT edge device

Iguazio would have us think of a superstore branch as an Internet edge location, needing real-time optimisation of its activities, with analysed and filtered store data fed to a central location for analysis and business optimisation across an entire retail branch estate, with hundreds of stores.

Iguazio claims that the Google Cloud Platform (GCP) is the best central location and says it can effectively provide an on-premises presence for GCP with data upload for central processing. Further, the Iguazio/Trax offering has a GCP presence and so provides a common environment across the store branches and the central, headquarters operation.

The GCP advantage rests on it being a public cloud: scalable and elastic, with usage-based pricing and cloud-native software development orchestrated by Kubernetes, and with support for serverless computing.

Retail branch system

A retail branch Iguazio system has four Intel servers, each with an additional GPU for image and AI processing, and 24 NVMe drives in an off-the-shelf 2U enclosure. This has more than 100 CPU cores and 100TB of usable capacity and, according to Haviv, performs like a rack of traditional servers.

He said: “This solution is way faster and denser than Amazon Snowball and, unlike Snowball, can run any service on the embedded Kubernetes. It supports 100x faster serverless functions with Iguazio Nuclio and an extremely fast flash-optimized database and file system.”

Nuclio is Iguazio’s own serverless computing technology.
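For flavour, a Nuclio function is just a handler invoked once per event. The sketch below follows Nuclio's documented Python handler signature; the store-sales payload is invented for illustration.

```python
# Minimal Nuclio-style Python handler (signature per Nuclio's docs);
# the JSON payload fields are hypothetical store-sales data.
import json

def handler(context, event):
    record = json.loads(event.body)          # e.g. {"sku": "cola-330", "sold": 6}
    context.logger.info(f"shelf event: {record}")
    return json.dumps({"ok": True, "sku": record.get("sku")})
```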

Containers and Kubernetes are the best pairing for application development and deployment because applications can be developed and distributed quickly. 

Iguazio says the combination of its managed platform and Google Cloud services enables the collection and analysis of large volumes of data at the edge, while using Google Cloud and applications there, such as Trax, for deep learning, AI, data aggregation and central control.

Aparna Sinha, Group Product Manager for Kubernetes and GKE, Google Cloud, sings from the same hymn sheet: “We are excited to collaborate with Iguazio to deliver a solution that enables real-time analytics of store data, all centrally managed from Google Cloud.”

Shopping for the storage lesson

The storage lesson here is that complex edge environments need much more than basic commodity storage hardware and software. At retail branch level the computing environment is dense, intricate and rich. Real-time responsiveness is needed and, this is Iguazio’s message, only specialised database storage software and hardware can support the retail edge application and sensor-driven environment and enable its real-time operation.

Iguazio, in this respect, has a workflow integration characteristic in common with Quantum and its StorNext product. The workflows are so distinctive though, that neither company’s products will work in the other’s environment. The two companies’ products don’t overlap. 

Iguazio’s strategy is to prevent other, less workflow-focused storage suppliers entering its internet edge markets by integrating its product with the data-generating devices and applications there better than competitors can. It has found a way to differentiate its fast storage array offering from virtually all the other suppliers, and that is no small feat.

Your occasional storage digest. It’s a Rap

Storage can be fast, storage can be slow.

The disks can be vast  and tapes a no-no.

Storage is easy, storage is hard,

It leave you scarred, get you riled.

Get the smarts; read Blocks and Files

Grand-DiskMasterFlash.

After that eccentric introduction – can you do better? – here is a collection of recent storage news bytes.

AtScale scales up funding

Data warehouse virtualizer AtScale did well enough this year to get a $50m funding round. Its products connect business intelligence tools with either on-premises or in-cloud data warehouses. AtScale supports cloud data platforms like Snowflake, Google BigQuery, Amazon Redshift and Microsoft Azure SQL Data Warehouse.

AtScale and Amazon Redshift

It was founded in 2013 and has this funding history.

  • September 2013 – $2m Seed round
  • June 2015 – $7m A-round
  • May 2016 – $11m B-round
  • October 2017 – $25m C-round
  • December 2018 – $50m D-round

Total funding is $95m over five years. The new cash will be used to develop its product, strengthen third-party supplier relations, and boost sales and marketing headcount.

This year it registered more than 50 new customers, entered the European and Asian geographies, and set up partnerships with Cloudera, Microsoft and Oracle.  That helped in procuring the D-round.

NAND glut persists

TrendForce’s DRAMeXchange says NAND industry shipped capacity (bit output) was higher than expected in 2018 because 64-layer production yields were strong. However demand for NAND slumped because of fears about the looming trade war between China and the US, the shortage of Intel CPUs, and the lower-than-expected sales of new iPhone devices, despite the year-end busy season.

Flash foundries are trying to slow down planned production expansion, but DRAMeXchange thinks the oversupply of flash will persist due to high inventories and low seasonal demand.

Contract prices of NAND Flash products in 1Q19 are expected to drop by around 10 per cent, as will client SSD contract prices. Enterprise SSD contract prices will fall by more than 10 per cent due to stronger competition in the sector. DRAMeXchange also thinks the market situation for module makers in 1H19 will also be tough, as they have to clear inventories each month.

World’s most popular storage OS gets updated

FreeNAS has had over 10 million downloads, making it the world’s most popular storage OS. 

V11.2 of the software, which supports file, block and object access, has been released by iXsystems.

It features a new Angular-based, device-independent web interface incorporating Google’s JavaScript framework. There are new virtualisation and container subsystems, plus support for self-encrypting drives (SEDs).

The updated plugin and virtualization infrastructure simplifies the integration of third-party applications by using the FreeBSD bhyve hypervisor and iocage Jail management subsystems. Both have web interfaces and REST APIs.

FreeNAS UI.

Plugins enable FreeNAS to deliver application and network services like Bacula, ClamAV, Plex, Nextcloud, Gitlab, Jenkins, and Zoneminder, all with OpenZFS integration.

Vendor-agnostic CloudSync services integrate with AWS, Azure, Backblaze, Box.com, Dropbox, and Google. There is data encryption both in-flight and at-rest with most providers. FreeNAS supports S3, so it can function as a local S3 object store.
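Because it speaks S3, standard S3 tooling should work against a FreeNAS box. A quick boto3 sketch, with the endpoint address and credentials as placeholders for your own setup:

```python
# Sketch: use a FreeNAS box as a local S3 target via boto3.
# Endpoint address and credentials are placeholders for your own setup.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://freenas.local:9000",
    aws_access_key_id="LOCAL_KEY",
    aws_secret_access_key="LOCAL_SECRET",
)
s3.create_bucket(Bucket="backups")
s3.upload_file("/tmp/archive.tar.gz", "backups", "archive.tar.gz")
print(s3.list_objects_v2(Bucket="backups").get("KeyCount"))
```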

Users can manage FreeNAS via REST and WebSockets APIs.

Rozo goes up the Amazon

Scale-out NAS software supplier Rozo Systems has made its RozoFS filesystem available on AWS.

RozoFS has clustered nodes and supports billions of files and petabytes of capacity, according to CEO Pierre Evenou. He says its code has the performance of scale-out NAS and the cost-efficiency of object storage. It uses Mojette Transform-based erasure coding, and Rozo says its implementation makes it fast: ten times faster than Scality’s back in 2015.

Rozo diagram.

On-premises apps that use RozoFS can use the AWS version without modification. Rozo says RozoFS uses its “very fast metadata services” to provide asynchronous incremental replication between an on-premises storage system and an AWS-based copy.

Incremental changes in the on-premises file system can be computed quickly, without lengthy file system scans. The source Rozo cluster uses all its nodes to parallelise synchronisation of the two clusters. The cloud copy can be updated automatically, as frequently as desired, without, Rozo claims, affecting application performance.

This reduces production dead times due to lengthy data synchronisation between on-premises and cloud filesystems.
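As a generic illustration of the technique – computing the change set from metadata records instead of walking the whole tree – a sketch might look like this. It shows the general idea only, not RozoFS's actual internals; all names are invented.

```python
# Generic sketch of journal-driven incremental sync: replicate only paths
# recorded as changed since the last checkpoint, instead of scanning the
# whole tree. Illustrates the technique, not RozoFS's actual internals.
import pathlib
import shutil

def incremental_sync(journal, last_checkpoint, src_root, dst_root):
    """journal: iterable of (timestamp, relative_path) metadata records."""
    changed = {path for ts, path in journal if ts > last_checkpoint}
    for rel in changed:
        src = pathlib.Path(src_root) / rel
        dst = pathlib.Path(dst_root) / rel
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)   # only the changed files move
    return len(changed)          # cost scales with changes, not tree size
```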

RozoFS in Amazon has a claimed 10-minute setup time and requires a minimum of four storage and two metadata nodes. It scales out by adding storage nodes.

Tintri flies again

DDN-owned hybrid and all-flash array supplier Tintri has announced Tintri Global Center (TGC) v4.0 software, the first major release since DDN bought Tintri for $60m in September.

TGC 4.0 enhances VM and VMstore visibility, analytics and diagnostics across a multi-VMstore environment. It enables users to take global actions across a pool of arrays, and improved algorithms deliver more accurate recommendations for VM placement.

This is based on TGC tracking granular VM metrics in the hypervisor to optimise VM placement in the array and manage performance.

Admin staff can relocate virtual machines (VMs) from one Tintri array to another, with near-zero impact on the host, storage or network. Storage-level snapshots and policies are migrated and preserved as part of the process. Such migrations are completed an average of 10x faster than traditional Storage vMotion operations.

Tintri TGC 4.0 is now available to Tintri by DDN customers.

Wisdom about workloads

Virtual Instruments has announced the v6.2 version of WorkloadWisdom, its production storage workload modelling and performance validation platform.

WorkloadWisdom provides workload modelling, workload creation, performance reporting, and test management across major storage technologies.

WorkloadWisdom example of imported production application workloads with immediate user visualisations of behaviors and placements directly from SMBv3 workloads.

VI has teamed up with SANBlaze Technology, a supplier of storage emulation technologies, and this release delivers non-volatile memory express (NVMe) workload modelling and testing over Fibre Channel (FC). That means end-users and storage vendors can test the effects of NVMe and FC-NVMe technologies on their data centres and products.

Modelling is based on customers’ production workloads and is integrated with Virtual Instruments’ VirtualWisdom infrastructure performance management and analytics platform.

V6.2 adds new and improved analysis policies for NAS performance probes and SAN/NAS performance probes. There is a new single-click data verification option built into workload models that enables byte-level data verification and error reporting. It also adds DFS, a new option for SMB workloads to perform testing on a distributed SMB file system.

Short notes

Acronis’s data protection is being used by the NIO Formula E racing team, and Acronis now sponsors teams in Formula 1, Formula E, Formula 2, Formula 3, Supercars, and other motorsport series. It has also signed multiple partnerships with teams in other sports, including the English Premier League (Manchester City).

Cloudian has announced certification of its object storage with XProtect video management software from Milestone Systems. 

Backup provider HYCU has announced HYCU-X which provides 1-click auto deployment and auto-configuration for secondary storage using Nutanix Storage Dense nodes. It’s available through the Nutanix CALM Marketplace. HYCU has also added SAP HANA support for production environments, with impact-free backup. It’s also added support for Nutanix Volume Groups and  enhanced reporting.

Kaminario all-flash array storage, with its Cloud Fabric composable infrastructure and consumption pricing scheme, has been bought by razorblue,  a UK IT managed services, consultancy, ISP and hosting supplier.

Rubrik has a case study with the Mercedes-AMG Petronas Motorsport team. It protects the 500GB or so of data  generated every race weekend from the Merc team’s cars. The team used tape before adopting Rubrik and had a full-time employee managing backups, dealing with 50 tapes/day, and taking 2 hours to recover data. Now it takes seconds and the full-time employee isn’t needed.

People moves

Jonathan Chadwick, former VMware exec and a longtime independent board member and advisor to enterprise technology brands, has joined Cohesity’s board of directors.

Liem Nguyen has become VP Marketing for InfiniteIO. His background includes marketing stints at Dell, Commvault and being an SVP at Touchdown PR in the USA.

Read Fenner has been appointed as VP of Global Sales for StorCentric-owned Nexsan. He’ll direct sales there and oversee Drobo sales as well. Previously, he worked at Buffalo Technology. Mark Walker joined the Nexsan team earlier this year as Channel Sales Director for UK and Ireland.

Ritek’s optical delusion baffles Blocks & Files

Optical disk demand will sky-rocket in 2020 as cloud computing centres come to realise optical disk archives are cheaper and last longer than tape or disk archives. So says Gordon Yeh, chairman and CEO of Ritek, an optical disk maker.

In an interview with Digitimes, he cites the 2018 Cisco Global Cloud Index, which forecasts annual global cloud IP traffic will grow from 6 zettabytes (6,000 EB) in 2016 to 19.5 ZB by the end of 2021.  Data centre storage growth will almost quadruple to 2.6 ZB during the same period, the Cisco study reports. We were unable to find any specific mention of archival storage growth in the study, but we can safely infer that this will grow at a similar rate.
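Running the cited numbers, the implied compound growth rates look like this (our arithmetic, not Cisco's):

```python
# Implied annual growth rates from the Cisco Global Cloud Index figures above.
def cagr(start, end, years):
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

print(f"Cloud IP traffic: {cagr(6, 19.5, 5):.1%} per year")  # 6ZB (2016) -> 19.5ZB (2021)
print(f"DC storage (~4x): {cagr(1, 4, 5):.1%} per year")     # 'almost quadruple'
# Cloud IP traffic: 26.6% per year
# DC storage (~4x): 32.0% per year
```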

According to Yeh, 70-80 per cent of cloud data centres use tape to store data, but some, like Facebook, are already starting to use optical disks.

He says optical disk unit storage cost will fall to the same level as tape and, one to two years later, to that of disk drives. This will fuel a big increase in demand for optical disks in 2020. Yeh claims optical disk storage needs less stringent temperature and humidity control than tape. Also, it does not require the powered standby rotation of archive storage hard disk drive platters.

Optical magnifying glass

This sounds like standard big-up-the-company talk from Yeh. Or maybe he knows something about optical disk formats and capacities that we don’t.

Optical disk cartridges hold much less data than tape cartridges and transfer data more slowly. These disadvantages could outweigh the minor data centre environment and power usage advantages of optical disks.  

For example, Sony’s gen 2 Optical Disc Archive cartridge has a 3.3TB write-once capacity, with 1Gbit/s write and 2Gbit/s read transfer rates. That roughly equates to 128MB/sec write and 256MB/sec read.

Sony Gen 2 Optical Archive disk cartridge

Current LTO-8 tapes have a 12TB raw capacity, 30TB compressed and 360MB/sec raw/800MB/sec compressed transfer rate.

LTO-9, due in 2020, has a planned 24TB raw/60TB compressed capacity and will transfer data at 708MB/sec raw.
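Working through the quoted figures underlines the gap. Per-cartridge fill times are comparable, but each tape cartridge moves roughly four times the data (our arithmetic, from the capacities and transfer rates above):

```python
# Back-of-envelope cartridge fill times from the figures quoted above.
media = {
    "Sony gen 2 optical":   (3.3e12, 128e6),  # capacity in bytes, write rate in B/s
    "LTO-8 (raw)":          (12e12, 360e6),
    "LTO-9 (raw, planned)": (24e12, 708e6),
}
for name, (capacity, rate) in media.items():
    hours = capacity / rate / 3600
    print(f"{name}: {capacity/1e12:.1f}TB at {rate/1e6:.0f}MB/s -> {hours:.1f}h to fill")
# Sony gen 2 optical: 3.3TB at 128MB/s -> 7.2h to fill
# LTO-8 (raw): 12.0TB at 360MB/s -> 9.3h to fill
# LTO-9 (raw, planned): 24.0TB at 708MB/s -> 9.4h to fill
```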

Micron cuts capex in response to declining memory market

Micron is reining in capital spending in the face of a glut in the DRAM and NAND memory markets. 

The chip giant yesterday reported $7.91bn revenues for Q1 FY2019, missing estimates of $8.02bn and sending shares down seven per cent on the day. Net income was a healthy $3.29bn, slightly above analyst estimates.

The results mark the end of nine quarters of sequential revenue growth.

Micron’s revenue estimate for Q2 FY2019 is $5.7bn-$6.3bn. We used the mid-point of $6bn for our chart – it’s the light blue bar. As you can see, this is an annual and sequential revenue fall.

Worse is to come. On the earnings call Sanjay Mehrotra, CEO, talked of “revenue headwinds from the inventory adjustments at several customers and industrywide CPU shortage”. 

Even so Mehrotra said the company is “well-positioned to deliver healthy profitability throughout the year. We remain bullish on the long-term secular growth trends driving the memory and storage industry.”

He thinks the market will pick up again in the second half of 2019 but few industry watchers share his optimism on this score.

And Micron is certainly taking no chances. The company is scaling back capital spending from $10.25bn-$10.75bn to $9bn-$9.5bn.

On the earnings call, CFO Dave Zinsner also talked of implementing opex controls such as limiting headcount growth, lowering discretionary spending and slowing holiday work schedules.

My memory isn’t what it was

DRAM accounted for 68 per cent of overall revenue in the quarter. 

  • Revenue down 9 per cent Q/Q and up 18 per cent Y/Y 
  • Average selling prices (ASPs) down high single digits percent range Q/Q 
  • Shipment quantities flat Q/Q

NAND contributed 28 per cent of overall company revenue in the quarter. 

  • Revenue down 2 per cent Q/Q and up 17 per cent Y/Y 
  • ASPs down low to mid-teens per cent range Q/Q 
  • Shipments up low to mid-teens per cent range Q/Q

Mehrotra said this about the NAND business:

The transition from Planar to 3D NAND in the industry and successful ramp of 64-layer across the NAND manufacturers has resulted in oversupply in the market over the last several quarters. 

DRAM is in a bad situation as well:

In data centre markets, we saw reduced revenue coming off a record-setting fiscal fourth quarter, due primarily to inventory adjustments at our customers. We expect this headwind will persist for a couple of quarters. We are seeing some cloud customers go through a digestion period following very strong growth over the last two years.

DRAM demands weakened through the course of our fiscal first quarter. Since the start of this fiscal second quarter, the weakening demand trend has continued and our near-term visibility is limited. Due to a lengthy period of rising DRAM prices, we believe some of our customers had decided to carry higher than normal inventory levels and as DRAM supply caught up with demand, these customers are bringing down their inventory levels.

Micron will produce more DRAM and NAND bits in FY2019 than in FY2018. The company forecasts 35 per cent NAND bit growth and 15-16 per cent DRAM bit growth in FY2019.

Business unit performance

Not all of Micron’s four business units were hit, as this Micron chart shows. 

Zinsner said the CNBU’s sequential decline was “driven by the impact of inventory adjustments at some of our customers in the graphics, enterprise, and cloud markets”.

Mehrotra said smartphone unit demand continued to “weaken, particularly at the high end, in what is seasonally a slow quarter for mobile”.

Zinsner attributed the slowdown in the storage business unit to “weaker pricing and the ongoing transition from SATA to NVMe SSDs. The impact of this transition will continue through calendar 2019. Our strategy to move bits from SBU components to high-value solutions in mobile is also contributing to a decline in revenue for SBU.”

The automotive part of the embedded business did well, with increasing demand for in-vehicle infotainment and ADAS (Advanced Driver Assistance Systems.)

More earnings call info

Mehrotra dropped some hints about what was coming during the earnings call.

  • “We strengthened our No. 1 share position in SATA enterprise SSDs, gaining about three percentage points of market share sequentially according to industry reports.”
  • “We are working to further expand our NVMe product portfolio and plan to introduce SSDs targeting client, enterprise, and cloud markets through the course of calendar 2019.”
  • “We expect the SSD market opportunity will continue to shift from SATA to NVMe.”
  • “Fiscal 2019 will be a year of transition for our SSD portfolio and we expect our SSD share gains to resume in fiscal 2020.”
  • “The growth of our high-value NAND solutions in fiscal 2019 will be driven by our mobile managed NAND products, where we believe we have significant opportunity to increase share.”
  • “Our engagement with our customers … now includes collaboration on our 3D XPoint product roadmap.”

Micron expects to introduce 3D XPoint memory products towards the end of 2019.


Seagate says tape to cloud pricing is ‘confidential’

LTO tape

Seagate’s Lyve Data Services will use per-customer, tape-to-cloud project pricing in its data migration centres but commercial details are opaque.

The company last week announced Lyve Data Services (LDS) in conjunction with Tape Ark, a Perth, Australia-based tape migration specialist. LDS will use Tape Ark’s software and services to migrate data from off-site tape vaults to the public cloud. Seagate’s two data migration centres, one in Amsterdam, the Netherlands, and the other in Oklahoma, will use many Seagate drives in their operations.

What does this service cost? Seagate is not saying. “Each project is evaluated and priced on specific client needs,” the company told us. “We cannot disclose specifics on the financial aspects of the agreement which remain confidential between Seagate and Tape Ark.” 

In other words, it is old-style pay-what-we-think-you-can-afford pricing.

According to Seagate, a billion tapes are housed offline and moving their data to the public cloud affords easier access, mining and analytics. All well and good but this is small beer to Seagate, an $11bn/year corporation and one of the world’s top two disk drive manufacturers.

What is Seagate’s angle? Here is its answer. “Seagate Lyve Data Services is about helping our customers solve their data challenges. Our newest offering in migration is a continuation of that mission. Together with TapeArk, Seagate will help customers unlock the value of their data by making it more available, secure, and efficient through this service.”

Statement of intent

“Helping our customers solve their data challenges,” is a fine marketing statement – as is enabling customers to “unlock the value of their data.” But they tell us little.

Google teamed up with Iron Mountain in April 2016 for an LTO tape to Google Cloud migration service. Two years on, we have no inkling of how much business this is doing. Perhaps Seagate knows better?

We can assume that Seagate detects strong demand to move tapes out of the likes of Iron Mountain and pour their data into the welcoming embrace of AWS, Azure and Google.

Lyve Data Services will then become a massive data migration on-ramp to the public cloud, where the data could well be stored on tape again.