
Your occasional storage digest featuring Huawei, InfiniteIO, Seagate, Spectrum Scale, StorONE and Tintri

Today is the opening day of the Dell Technologies World event in Las Vegas with wall-to-wall-and-back-again announcements. Before we dive in and get ‘Dell-ified’ here are a set of news items to remind you of life beyond Dell.

We look at Huawei, InfiniteIO, Nutanix, Seagate, Spectrum Scale, StorONE and Tintri, now owned by DDN.

Huawei results

Huawei has become a $100bn-plus business, with $107.13bn revenues in its 2018 year, and $8.81bn net income. This makes it bigger in revenue terms than Dell and IBM.

Here is a chart showing Huawei’s revenues and net income since 2010:

Huawei has three main business units: Carrier, Consumer and Enterprise, and their relative contributions have changed over the years, with consumer revenues now outpacing the company’s original carrier business.

Trend lines added to show growth changes

Revenues from the carrier business declined in 2018 from 2017’s $45.7bn, perhaps reflecting the effects of Huawei’s issues with the USA, which claims it is a security risk.

Huawei is unique in combining a telecommunication carrier business, a smartphone operation and an enterprise server/storage product set. It is as if someone combined Apple’s iPhone,  HPE’s servers and storage, and elements of both Ericsson and Cisco networking.

If the smartphone business is now mature and the carrier business ex-growth then Huawei will have to look elsewhere for continued growth. Going by the trendlines on the chart it is unlikely that the steady but relatively low-growth enterprise business could take up the reins if smartphone revenues have peaked.

InfiniteIO

InfiniteIO has released Infinite Insight, an app that scans files in on-premises NAS arrays and identifies those not accessed for some time.

The free of charge app can scan millions of files, and simulates policies, based on file size and the last time that files were accessed or modified. It can move inactive files to a private or public cloud tier. The app generates a shareable report on estimated, yearly cost savings. InfiniteIO said users can set up the app in accordance with financial and operational policies.
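
The mechanics are simple enough to sketch. Below is a minimal, hypothetical Python illustration of the same idea: walking a file tree, classifying files by last-access age against a policy threshold, and totting up how much capacity could move to a cheaper tier. It is not InfiniteIO’s code; the threshold, mount point and per-GB prices are made-up parameters for the savings estimate.

```python
import os
import time

# Hypothetical policy: files untouched for 180 days are candidates for a cloud tier.
COLD_AFTER_DAYS = 180
ON_PREM_COST_PER_GB_YEAR = 0.30   # made-up figures used for the savings estimate
CLOUD_COST_PER_GB_YEAR = 0.05

def scan(root):
    """Walk a directory tree and split files into hot and cold byte totals by age."""
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    hot_bytes = cold_bytes = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_atime < cutoff and st.st_mtime < cutoff:
                cold_bytes += st.st_size
            else:
                hot_bytes += st.st_size
    return hot_bytes, cold_bytes

if __name__ == "__main__":
    hot, cold = scan("/mnt/nas_export")   # hypothetical NAS mount point
    cold_gb = cold / 1e9
    saving = cold_gb * (ON_PREM_COST_PER_GB_YEAR - CLOUD_COST_PER_GB_YEAR)
    print(f"cold data: {cold_gb:.1f} GB, estimated yearly saving: ${saving:,.2f}")
```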

A customisable dashboard providing a view of the user’s file environments.

InfiniteIO offers hybrid cloud tiering and fast file access products with real-time analytics and metadata management.

You can download the app.

Seagate HAMR head nanophotonics

Seagate is investing £47.4m in its R&D facility in Northern Ireland, and Invest Northern Ireland is tipping in another £10m. The investment will create 25 new jobs among the 120 or so positions involved in the project to research and develop nanophotonics – the study of the behaviour of light at the nanometre scale.

Seagate has carried out research into disk recording heads, such as the Heat-Assisted Magnetic Recording (HAMR) heads it is developing, at its Springtown wafer manufacturing plant in Londonderry, Northern Ireland since 1994.

Jeremy Fitch, executive director of business solutions, Invest NI, said: “Seagate first came to Northern Ireland in 1994, investing £50 million and creating 500 new jobs. Fast forward 25 years and the facility now employs 1,400 people and it is estimated that the company has invested in excess of £1 billion in capital here.”

Spectrum Scale release

IBM released Spectrum Scale version 5.0.3 last week. New features include cloud services changes, with an auto container spillover capability: a new container is automatically created during reconcile when the specified file-count threshold is reached, which simplifies maintenance.
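
As a rough illustration of the spillover idea, and emphatically not IBM’s implementation, the reconcile step only has to compare the current container’s file count with a configured ceiling and open a fresh container when that ceiling is hit. A toy Python sketch:

```python
class Container:
    """Toy stand-in for a cloud services container holding migrated files."""
    def __init__(self, name):
        self.name = name
        self.files = []

def reconcile(containers, new_files, threshold=1_000_000):
    """Append files to the newest container, spilling over to a fresh one
    whenever the configured file-count threshold is reached."""
    current = containers[-1]
    for f in new_files:
        if len(current.files) >= threshold:
            current = Container(f"container-{len(containers):04d}")
            containers.append(current)   # automatic spillover
        current.files.append(f)
    return containers
```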

The Microsoft Azure object storage service is supported.

The watch folder feature (based on Apache Kafka technology) gains a clustered watch capability in Spectrum Scale 5.0.3. With this an entire file system, fileset, or inode space can be watched.
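
Because the events surface through Kafka, a monitoring script can subscribe to them with a standard Kafka client. Below is a minimal sketch using the kafka-python package; the broker address and topic name are hypothetical, and the event fields are illustrative rather than Spectrum Scale’s actual schema.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Broker and topic are placeholders; a real setup would use the cluster's
# own watch configuration to find them.
consumer = KafkaConsumer(
    "filesystem-watch-events",
    bootstrap_servers=["broker1:9092"],
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for msg in consumer:
    event = msg.value
    # Field names here are illustrative only; real events carry their own schema.
    print(event.get("event"), event.get("path"))
```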

You can check out a summary of the changes.

StorONE signs up Tech Data

Israeli storage startup StorONE has signed a distribution deal with world-wide distributor Tech Data.

StorONE’s software-defined storage offering, plus service and support, is available through Tech Data. StorONE says its technology can support multiple enterprise functions: reaching breakthrough performance from all-flash array (AFA), ordinary and NVMe SSD hardware; creating high-performance, multi-application secondary storage systems; supplying persistent storage to virtualised environments; and/or using lower-cost commodity components to provide high-capacity, low-footprint systems.

The StorONE product line is hardware-agnostic and Tech Data can sell it bundled with any drive (ordinary and NVMe SSDs, disk drives), any protocol (block, file and object) and four storage configurations (AFA, high capacity, high performance and virtual machine appliance). It’s available now.

Joe Cousins, VP for global computing components at Tech Data, said:  “StorONE’s Software Defined Storage (SDS) offerings…deliver high performance, high capacity and complete integrated enterprise data protection.”

Tintri progress

Tintri, the storage array startup picked up from bankruptcy by DDN in September 2018, said it is making progress in re-establishing its business.

The company, now called Tintri by DDN, is not revealing any numbers, but claimed it had a good first quarter, with a significant uptake from larger-scale enterprise customers, and sales exceeding targets.

Tintri noted a 40 per cent increase in engineering support staff in the quarter and more sales hires. It plans to recruit more staff in all parts of the company this year including go-to-market, engineering and support.

Paul Bloch, co-founder and president of DDN, seems pleased with his acquisition: “2019 is the year of renaissance for Tintri by DDN,” he said in a statement. “…Now it’s time to focus on bringing some Tintri by DDN magic to a broader customer base…We are poised for a breakout year with Tintri products that are uniquely designed to simplify customers’ virtual environments. We are very excited about the Tintri by DDN roadmap.”

Phil Trickovic, Tintri’s president of worldwide sales, also delivered a quote, in which he said Tintri will “expand our predictive analytics and machine learning capabilities. We are also making great progress on the soon-to-be released DB Aware module as well as our expansion into macro/micro data centre management capabilities.”

Tintri has historically been known for its virtual machine-aware management and provisioning of storage. Extending this to database-aware storage sounds interesting and will give Tintri a useful differentiator.

– – – –

That’s our Monday mix completed. Now prepare yourselves for the Dell Technologies news deluge – or Delluge.

Xilinx buys Solarflare

FPGA-maker Xilinx is buying Solarflare Communications and its accelerated network interface technology for an undisclosed price.

Solarflare began life in 2001, and has taken in more than $305m in funding across at least 14 rounds. The latest was in June last year, led by Oak Investment Partners and with a contribution from Xilinx.

The company develops accelerated network interface cards (NICs), built with FPGAs and ASICs, and application acceleration software. They are used by nearly every major global exchange, commercial bank and hedge fund.

The latest XtremeScale NICs support NVMe-over-TCP and use kernel bypass technology to achieve low latency. Lightbits Labs and Solarflare have pioneered the NVMe/TCP market, which provides NVMe transport across standard Ethernet instead of the more expensive lossless, data centre-class Ethernet needed for the slightly faster NVMe-oF using RoCE.

FPGA tour

Xilinx, founded in 1984, is a fabless business that designs FPGAs, such as its Virtex and Versal product families,  and CPLDs (complex programmable logic devices). It is publicly owned, capitalised at $30.11bn, and, with Intel, dominates the FPGA market.

Xilinx Zynq MPSoC used in ZF’s AI-based automotive control unit

As well as investing in Solarflare, Xilinx has built a single-chip, FPGA-based, 100Gbit/s SmartNIC with Solarflare that can process 100 million receive and transmit packets per-second, using less than 75 watts.

Xilinx wants to combine its technology with Solarflare’s high-speed NIC tech and application acceleration software, to develop a converged SmartNIC platform for data centre use. This chimes with its intention to move beyond flogging FPGAs to build a so-called platform company with its products supporting an applications ecosystem.

Salil Raje, EVP and GM for Xilinx’s data centre group, said: “Solarflare has been a pioneer in key areas such as high-speed Ethernet, application acceleration, and NVMe-over-fabrics, which are the critical components needed to build the next generation of SmartNICs for cloud and enterprise technologies.”

The acquisition is expected to close in six months’ time, subject to regulatory review and customary closing conditions.

Xilinx was reportedly interested in buying Ethernet and InfiniBand networker Mellanox. That company went to Nvidia in March 2019 for $6.9bn.

Intel mulls over making its own NAND

Intel may be losing interest in making NAND memory as it sees technology and manufacturing advantages ebb away in a price-slashing, commoditised market.

In an earnings call this week to discuss Intel’s challenging first quarter 2019 results, CEO Bob Swan revealed the evaluation criteria that determine which technologies it manufactures or abandons to the commodity makers.

Which brings us to the NAND problem, outlined by CFO George Davis in the call. “Our memory business was down 12% due to continued NAND pricing pressures, offset by NAND data centre and client bit growth,” he said. “Operating income for this group is down driven by NAND ASP deterioration and demand softness, resulting in inventory revaluations.”

He added: “We expect Q2 revenue to be $15.6 billion, down 8% year-over-year. Our data-centric businesses are expected to decline in the high single digits year-over-year as memory pricing declines weigh on our NAND business and DCG customers continue to consume inventory and absorb capacity.”

Analyst Timothy Arcuri asked Swan: “Bob, I wanted to ask you just a question strategically about memory. And I wanted to understand, obviously, you need to make cost point, but relative to NAND, why the need to make NAND on your own? It seems like you could sell the factory and maybe strike some sort of a supply agreement and save a lot of free cash flow, particularly after a quarter rotation can cost you 100 basis points.”

Swan speaks

Swan provided a careful, detailed and lengthy answer, starting with: “Yes. First, maybe just some context on when we talk about expanded TAM (total addressable market) – maybe the [three] criteria we think about and then I’ll try to apply them to where we are in memory.”

“First, we look for technology inflections where we think we have a real advantage, whether it’s process manufacturing or performance-oriented design that is worth pursuing… number two, such that we can play a more important role in the success of our customers; and third, in an area where we think we can get attractive returns for our investors.

“So those are the three criteria that we’re applying, and we’re going to be increasingly disciplined on the third aspect of those criteria.”

Optane, good. NAND? less so

Intel has two memory products: Optane (3D XPoint) and NAND.

Swan said Optane is strategic and fulfils all three criteria: “As it relates to memory, we have a high-performance Optane product that we think is really differentiated, coupled with our [Xeon] CPU that can do things best in industry that’s really needed to keep pace with the increased performance of CPU processing. So strategically, we think it’s really important. Technically, we think we have a real advantage. And third, we think we can get good returns.”

But NAND is a different matter: “As it relates to NAND, we think we have process technology advantage. We’re in the stage where we’ve gone from 32-layer to 64-layer now.

“The profitability of the NAND business pre this massive decline in ASPs (average selling prices) was okay last year as we were ramping the business. And our challenge going forward is we’re just going to have to execute better on the NAND business, so we can check that third box of attractive returns for our investors.”

And then the kicker: “And I don’t want to — when the market’s plummeting I don’t want to conclude what the right decision is. I want to maybe look through the horizon a little bit to get to the right decision. 

“But clearly, we got to generate more attractive returns on the NAND side of the business, and the team is very focused on making that a reality. And to the extent there is a partnership out there that’s going to increase the likelihood and/or accelerate the pace, we’re going to evaluate those partnerships along the way so it can be enhancing to the returns of what we do in the memory space.”

Wells Fargo senior analyst Aaron Rakers thinks Intel’s comments on NAND Flash suggest the company is “definitely evaluating a potential strategic change, including potential supply partnerships.”

IMFT ending

Micron and Intel have a joint manufacturing partnership called IMFT, through which they made 3D NAND and 3D XPoint chips. Intel withdrew from the 3D NAND part of that and is making 3D NAND at a plant in Dalian, China.

Micron is buying out Intel’s part of IMFT.

Although Intel makes 48 and 64-layer 3D NAND the market is moving to 96-layers, with Micron, Samsung, and Western Digital/Toshiba making it already, and SK Hynix headed in this direction.

Intel must invest money in the Dalian fab to transition to 96-layer NAND manufacturing. Bob Swan’s remarks reveal the decision is up in the air.


Dell: What next?

Michael Dell is the king of US computer hardware-based system companies, brushing past Cisco, HPE, IBM and everyone else to assume the throne. But what next for his eponymous company?

It’s approaching $100bn in annual revenues. Does it want to become a $150bn business and how might it accomplish this?

Let’s start the conversation with this chart.

Chart 1. Dell, IBM, Cisco and HPE annual revenues.

It shows annual revenues for Cisco, Dell, HPE and IBM, using IBM financial years, starting from 2010, when HPE ruled the IT markets, with IBM in second place. But HPE choked on its own structure and decided commodity hardware, namely PCs and printers, were not its thing and divested them into a separate company in November 2015.

At a stroke revenues fell below its three great rivals and HPE is still trying to recover its mojo.

Cisco grew steadily until the end of 2015 when it looked poised to overtake a declining Dell. But then it had a wobble and started level-pegging as the Chambers era came to a close. His replacement Chuck Robbins is re-igniting growth but in revenue terms Cisco is way below Dell’s league.

And IBM. Oh dear. A $98bn revenue company has become an $80bn company due to many mis-steps, including a failure to capitalise on commodity hardware.

Dell though has grown and grown. It is now at $91bn and climbing, with $100bn revenues in its sights. Founder Michael Dell still holds the reins and masterminded Dell’s path to the top of the heap by embracing hardware commoditisation, and buying EMC for $67bn, with its VMware crown jewels.

The company also went through a public-to-private and back to public ownership in a financial engineering sequence, starting with going private in 2013 and returning to public ownership in December 2018.

All this has enabled it to overtake the declining Big Blue, surpass HPE and leave Cisco behind.

Bigger IT companies include Microsoft and Huawei, and, in overall terms, Amazon and Google. Let’s add Microsoft revenues to the chart and see how they stack up.

Chart 2. Microsoft revenues included.

As you can see, it is bigger than Dell. For Microsoft, 2015 was a crossover revenue year in which it surpassed falling HPE and shrinking Big Blue. The Windows and server system software company has become a $100bn plus turnover business after Satya Nadella restructured it following the end of Steve Ballmer’s run as CEO in 2014. It entered the public cloud with Azure, gaining revenues as it built up its Office 365 business.

Let’s add Huawei to see how the picture changes.

Chart 3. Huawei revenues added.

Huawei overtook Dell in revenue terms in 2014. It has grown faster than Microsoft, although not passing it, to become a $100bn plus revenue business, despite problems with the US and recent security concerns.

Dell, Microsoft and Huawei form a triumvirate, leaving old US hardware system-based IT companies behind. Now, let’s add the AWS part of Amazon to the chart:

Chart 4. AWS revenue line addition.

AWS is a $26bn revenue business on its own. Amazon in total of course is much bigger but the AWS component is the appropriate comparison with Dell.

Similarly with Google, which is looking to become a $150bn-plus revenue company overall. GCP, its Cloud Platform, is what we would compare with Dell’s revenues and that is not split out from the overall Google number.

Google said it was a $4bn run rate business at the end of 2017, which would place it well below AWS and also Azure.

These growth curves on the chart represent the new IT, with the old IT behemoths shrinking or stalled, with no place to go: HPE, IBM, Cisco, Oracle – yesterday’s IT giants becoming today’s also-rans. Yes, they are multi-billion turnover companies but they are not growing at anything like the rate of Dell, Microsoft and Huawei, which are each two to three times their size in revenue terms.

Chart 5. Oracle revenues included.

At Oracle, which is paralleling Cisco at the $30bn – 40bn/year level, founder Larry Ellison is now CTOing the ship and annual revenues have been stuck at $35bn-$40bn since 2011.

The only old school, hardware system-based supplier with growth fuel left in its tanks, according to these charts, is Dell. Cisco could buy Ericsson or HPE buy Arista or Juniper to bump up their revenues to a higher level, but neither has shown an ability or inclination to make acquisitions at the level of a Dell-EMC.

But all we have looked at is the annual revenue number so far. What about profit and market capitalisation?

Profit and Market Capitalisation

As we’re chart happy we’ll continue, and chart revenues versus net income for Cisco, Dell, HPE, Huawei, IBM, Oracle and Microsoft for the 2018 year:

Note. Cisco’s 2018 profits were actually $100m, with $10.4bn assigned to a one-off charge due to the US Tax Cuts and Jobs Act. For our comparison purposes we’ve added it back in.

Dell is in bottom place as it’s paying off debt from the EMC acquisition and recording losses. HPE has the lowest profit/revenue combination and there is a rough correlation (lower left to upper right diagonal) between increased revenues and higher profit numbers.

Cisco makes more profit from a dollar of revenue than the other suppliers, while Huawei makes less. Microsoft stands tall as the joint revenue and profit king.

Are these profit-revenue relationships correlated with the companies’ market capitalisation?

Huawei is excluded as it is not a publicly traded business.

If we assume that the natural order of things should be that companies with greater revenues and profits have a higher market capitalisation, then suppliers should be clustered around the lower left to upper right diagonal.

HPE, Oracle, Cisco and Microsoft are roughly in that position and in that order. Outside this group, IBM has a lower market capitalisation than its revenues and profitability on their own would suggest.

Dell is capitalised at a lower value than IBM, despite higher revenues, and has a higher market capitalisation than HPE, even though it makes losses.

Microsoft’s market capitalisation is very much higher than the other suppliers.

We can generally assume that, the higher a company’s profits and market capitalisation are, the better position it is in should it want to make significant acquisitions as a way of growing its business.

What’s next for Dell?

Looking at these charts gets Blocks & Files wondering what is next for Dell? Does it want to become a $150bn company? Or will it run out of steam?

Is Michael Dell, sitting there in Round Rock, content with what he’s got? Surely not. He’s not reached 55: the man is in his prime.

We think three dominating trends in IT are affecting Dell.

First, the public cloud has strong growth potential as do SaaS companies like SalesForce, Workday, ServiceNow, Splunk and Atlassian. The three main public cloud suppliers are increasing revenues at a fast clip.

Secondly, the Internet of Things will vastly increase the number of intelligent devices and the data they produce will need processing.

We see the rise of AI and machine learning as part of the IoT and public cloud. It’s not a dominating trend on its own.

Thirdly, China represents a huge market and also a whole bunch of relatively new world-stage companies such as Huawei and Alibaba. No US IT supplier is making it big in China. For Dell to think its next growth surge could come from China is unlikely.

But Dell should be able to organically grow into being an IoT supplier. However, is that enough? Does that represent a route to becoming a $150 billion company?

Can Michael Dell, will Michael Dell, pull another rabbit out of his hat – and, say, buy a SaaS company? Buying a public cloud supplier to get into that great game looks impracticable, but buying a SaaS company? That looks hard but not impossible, not to someone who pulled off the EMC acquisition.

Blocks & Files is convinced Michael Dell isn’t done yet. There’s more to come.

Your occasional storage digest, featuring Azure, Pure Storage, Datrium, Hitachi Vantara and more

Today has been a blockbuster blizzard for enterprise storage news. Blocks & Files will guide you through the day’s events with this handy round-up. Let’s start with Microsoft.

Azure moves in front of AWS

Microsoft is whipping Amazon in commercial cloud computing revenues.

Wells Fargo senior analyst Aaron Rakers has calibrated revenue numbers for Alibaba, Microsoft’s commercial cloud and AWS since 2012’s third quarter, and produced the chart below.

It shows Microsoft’s commercial cloud exceeding AWS revenues from 2017 onwards. Not so all-conquering then, Amazon?

Delta Lake cleans up data lakes

Analytics company Databricks said its Delta Lake open source technology cleans up large data sets, also known as data lakes, that are plagued by failed writes, schema mismatches and data inconsistencies arising from mixing batch and streaming data. Dirty data lakes become cleaned-up Delta Lakes.

According to Databricks, developers can code and debug locally on their laptops to develop data pipelines. They can access earlier versions of their data for audits, rollbacks or reproducing machine learning experiments. They will be able to convert existing Parquet, a commonly used data format for storing large datasets, to Delta Lakes in place, avoiding the need to read and rewrite the Parquet data.

Delta Lakes can be plugged into any Apache Spark job as a data source.
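
For a flavour of the API, here is a minimal PySpark sketch of the in-place Parquet conversion and a versioned read of the kind Databricks describes. The paths are hypothetical and it assumes a Spark session launched with the Delta Lake package available; treat it as an illustration rather than a complete pipeline.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable   # requires the Delta Lake package on the Spark classpath

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

# Convert an existing Parquet directory to a Delta table in place (path is hypothetical).
DeltaTable.convertToDelta(spark, "parquet.`/data/events_parquet`")

# Append a batch of new records; Delta logs it as a new table version.
new_batch = spark.read.json("/data/new_events.json")
new_batch.write.format("delta").mode("append").save("/data/events_parquet")

# Time travel: read the table as it was at an earlier version, for audits or rollback.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/data/events_parquet")
print(v0.count())
```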

Viacom, Edmunds, Riot Games, and McGraw Hill are using the technology. The Delta Lake project can be found at delta.io and is available under the permissive Apache 2.0 license.

Buy Datrium and become backup-free? 

Hybrid converged vendor Datrium has gained its ninth US patent, #10,235,044, which concerns deduplication, specifically, converged primary-and-backup deduplication technology. The premise is that if you use its kit you can stop doing backups.

Datrium CEO Tim Page proclaimed: “Our team has developed the world’s only platform that keeps deduplication, compression, erasure coding and data encryption – in transit and at rest – always on, providing the highest levels of performance, while far surpassing industry norms for enterprise security and reliability.”

Datrium said it provides effective deduplication without affecting the performance of high-IOPS applications. The technology is based on a content-addressed architecture that is said to optimise data at rest, on premises or in the cloud, and data in motion among data centres and the public cloud.

According to Datrium, previous industry approaches to data deduplication have limitations: “They either supported narrow use cases such as streaming sequential backup to disk or required flash, which was cost prohibitive for snapshot retention and data protection.”

Hugo Patterson, chief scientist at Datrium, said: “We are the first to build a system that combines industry-leading performance and scalability for primary workloads, built-in long-term retention of capacity-optimised application snapshots on affordable media, and bandwidth-optimised data mobility.”

Datrium dedupe tech handles primary storage random IO and enables snapshot storage on less expensive media, such as disks and S3 in the cloud, so that multitudes of snapshots can be retained. Hence the claim that it frees enterprises from having to do backups.
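
Content addressing is the core trick: each data chunk is stored once under a hash of its contents, and both primary writes and snapshots simply reference those hashes. A toy Python illustration of the principle, nothing like Datrium’s production code, looks like this:

```python
import hashlib

CHUNK = 4096
store = {}   # hash -> chunk bytes; each unique chunk is kept exactly once

def write(data):
    """Split data into fixed-size chunks and return the list of chunk hashes."""
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)     # a duplicate chunk costs no extra space
        refs.append(h)
    return refs

def read(refs):
    return b"".join(store[h] for h in refs)

primary = write(b"A" * 8192 + b"B" * 4096)
snapshot = list(primary)               # a 'snapshot' is just another reference list
assert read(snapshot) == read(primary)
print(f"logical chunk references: {len(primary) + len(snapshot)}, unique chunks stored: {len(store)}")
```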

Buy Datrium and become backup-free? Could be worth a look from the Total Cost of Ownership point of view.

Hitachi Vantara DPaaS and STaaS

Hitachi V has introduced a Data Protection service and enlarged its storage-as-a-service offering.

Both are managed services with consumption pricing schemes and have SLAs available. The DPaaS scheme backs up data on Hitachi and other suppliers’ arrays. There are four service classes and deployment is within 60 days of purchase. Hitachi V calls this “fast deployment”.

The STaaS offering provides file, block and object storage on-premises or in a third party data centre. It also takes up to 60 days to deploy.

The block and file options are available in two tiers – performance and capacity – and STaaS is available fully or partially managed.

Pure the main storage array market share gainer

Pure Storage has produced 2014-2019 storage array market share numbers showing it has grown its market share the most, followed by HPE. Dell EMC, IBM and NetApp have all lost market share, according to Pure, as its chart below shows.

This comes from a Pure blog and the chart is animated.

Pure gleefully notes: “For the past several years, Dell’s collection of acquired storage assets dropped share faster than any vendor in the business [and] NetApp is celebrating because they lost only a couple of percentage points.”

Pure boasts “a compound annual growth rate (CAGR) above 65 per cent in the last four years (FY15-19). Meanwhile, the Enterprise Storage Market has grown just over 1 per cent in that same time frame.”

This is an astute piece of marketing by the Pure people. However, Gartner’s recent picture of all-flash array vendor shares casts a different light, as you can see below.

SSA means Solid State Arrays.

Hard times for hard disk drives

Nidec, a maker of disk drive spindle motors, forecasts global hard disk drive shipments of 309 million in 2019, down 18 per cent on 2018. Shipments will fall a further six per cent in 2020, to about 290 million drives.

High-capacity and nearline drives will total 54 million in 2019, up from 41 million in 2017 and 51 million in 2018. This is the growth HDD sector, with 60 million shipments slated for 2020. Due to SSD cannibalisation other HDD categories such as mission-critical, PC, mobile and consumer electronics, are all expected to decline.

Our thanks to Wells Fargo senior analyst Aaron Rakers for sharing the numbers and chart.

Virtual Instruments parades end-to-end NVMe-oF testing

Virtual Instruments (VI) can model and test an end-to-end NVMe-over Fabrics infrastructure and you can see the results at Dell Technologies World, Las Vegas, April 29 – May 2.

Building on its August 2018 reseller deal with SANBlaze and its VirtualLUN tech, VI will show a Dell server talking via a Connectrix switch (OEMed MDS 9100 32Gbit/s FC from Cisco) to a PowerMax array.

VI WorkloadWisdom does the storage workload modelling. SANBlaze VirtualLUN software will generate a storage IO load to exercise the PowerMax array. Cisco SAN Analytics and SAN Telemetry streaming will produce switch data to be collected by VI’s VirtualWisdom infrastructure performance management software.

That software will also receive telemetry data from the PowerMax array, and it can then provide end-to-end performance reporting on the set-up. It means that a customer could have a potential NVMe-oF deployment modelled and tested before pulling the trigger on buying the kit.

WANdisco revenues wilt in cloud shift

Replicator WANdisco reported revenues of $17m for calendar 2018, down 10.2 per cent on 2017’s $19.6m. The operating loss worsened from $9.7m in 2017 to $22.1m in 2018.

The first 2019 quarter looks better with $4m in revenues, up 38 per cent year on year.

WANdisco is shifting to cloud subscription sales, and will take a perpetual license revenue hit while it is negotiating the switch-over.

Chairman and CEO Dave Richards said, in a statement: “We have significantly extended our relationship with Microsoft, gaining co-sell status that allows our WANdisco Fusion platform to be sold as a standard offering with Microsoft’s Cloud Solution, Azure.”

He added: “We also have begun to see a significant structural shift in the composition of our revenue base, from large, difficult-to-forecast on-premises transactions toward more predictable, annual recurring cloud revenues.  We see significant opportunities to expand our addressable market in cloud and as annual recurring revenues increase over time, develop a smoother, increasing revenue profile for our firm.”

Zerto goes into the blue with Azure

Zerto says it’s doing great with Azure and wants to do more.

The company provides DR services in the public cloud and has sold Azure services through a formal co-sell relationship with Microsoft since April 2018. Zerto claims a 330 per cent increase in co-sell deals from 2017 to 2018. But as it does not provide a base figure for 2017, one can reasonably ignore this boast.

More pertinently, Zerto said it is now a top 10 Microsoft co-sell partner worldwide. The company is working with the Azure Sponsorship program to support Microsoft End of Support (EOS) migrations and roadmap coordination. This year will see Microsoft ending support for Windows Server 2008 and SQL Server 2008.

Zerto said it can assist users with workload migration from dead-ended products. Microsoft doesn’t want to lose these EOS customers and is offering three years’ support to migrate workloads to Azure.

Not to be outdone, Zerto is offering up to $3,000 of Azure credits for assessments, proof of concepts and deployments to current customers, prospects or partners.

In other news Zerto has hired David Macmillan as its VP for the EMEA region. He’s a 35-year IT vet with previous roles at MuleSoft, Jive, Initiate and IBM.

Shorts

Acronis is opening up API access to its Cyber Platform for developer use by ISVs, OEMs, and service providers, with software development kits (SDK) and sample code. New open APIs provide access that was previously available only to Acronis and select integration partners such as ConnectWise, Microsoft and Google.

The company is also prepping an Acronis Total Protect product that merges backup, cybersecurity and system management capabilities.

DriveScale will exhibit its Composable Platform at Dell Technologies World, Las Vegas, April 29 – May 2, using PowerEdge Servers, Ethernet switches, and Dell EMC arrays. It will show its reference architecture.

The company can talk about AT&T’s Xandr division company AppNexus, where it composes infrastructure with thousands of Dell EMC servers.

Hyperconverged infrastructure supplier Pivot3 is supporting the latest version of Citrix Virtual Desktops. There is a reference architecture you can check out.




Read This! The Bottomless Cloud is abundantly clear

Book review Customers should store all their data in a bottomless cloud because that data can be used to generate new business with a value exceeding its storage cost. So says David Friend – who runs a cloud storage service company.

Friend, CEO and president of Wasabi, and ex-CEO of Carbonite, worked with Thomas Koulopoulos, chairman of futurist think tank the Delphi Group, to pen The Bottomless Cloud, a 64-page e-book.

Wasabi Technologies CEO and President David Friend

The authors argue that businesses should embrace an abundance view of data instead of a scarcity view and become data-driven. That’s the way the world is going and those who fail to adapt to this new paradigm will end up in the dustbin of history.

The Big Switch revisited

Friend and Koulopoulos use the history of electricity generation to draw an analogy with developments in IT.

Companies originally supplied their own power, then bought it from all-in-one utilities and now are buying it from a more efficient disaggregated supply chain.

Likewise, until recent times companies computed and stored data on-premises. But this is not a core competency and companies are bad at doing it. As a consequence, all-in-one cloud suppliers such as Amazon, Microsoft with Azure, and Google have sprung up, offering to do it better.

And they do, in this Cloud 1.0 model, but at a cost of customer lock-in and by deploying the same scarcity of resource model as the on-premises era. Computing and storage cost serious money and have to be rationed. Just like, the authors say, long-distance telephony was once rationed by cost.

Today telephony costs have dropped, the rationing has effectively ended and the networking benefits of communication are taken for granted.

So it is, or should be, with data storage.

The authors do not trot out the argument that data is the new oil. Data is not a commodity, unlike storage, because data is valuable to, and pretty much unique to, each individual business. This is unlike an electrical watt or volt or drop of oil, each of which is the same as every other watt, volt or drop of oil.

Cloud 2.0

Storage costs should fall because users can then move to an abundance model of data instead of a scarcity model. Cloud 2.0 embraces an abundance model, and the authors proclaim this will lead to a “bottomless cloud” era.

They say Cloud 1.0 uses a scarcity mindset to build external economies of scale. Cloud 2.0 uses an abundance mindset to build economies of scope. This is a neat insight.

Economies of scope

Economies of scope “create a digital ecosystem in which companies can easily connect to and expand into adjacent markets and new business models. Companies that compete based on economies of scope are obsessively data-centric.

“One of the best examples is Nike, which has transformed itself from a provider of sneakers and clothing into a lifestyle and fitness company that is intimately linked to technology and healthcare vendors.”

Friend and Koulopoulos say digitisation is a first step. The next is datafication, which they define as “the wholescale transformation of the business model by rebuilding the business around the data. And this is not only transactional data and documents, but the much larger treasure trove of behavioural data that is suddenly available.”

The authors add: “The objective is to adopt a mindset of abundance and keep as much data as possible available for as long as possible. It’s what some Cloud 2.0 providers call “Hot Storage,” implying that all data is equally accessible and potentially valuable.”

A storage service supplier would say that

A storage service supplier would say that, wouldn’t they! It is all very well to advocate an abundance model when the more the customer stores, the more it pays; the money incentive flows mainly one way.

The authors reject this thinking and argue this data will unleash new business models and revenue flows. Here are some of their thoughts.

  • In an era of abundance, the value of data is in how much of it you can capture and mine; the greater the volumes, the greater the potential value.
  • New business models are emerging that rely on nearly unlimited storage to spur innovation and uncover new opportunity.
  • We are firmly of the opinion that businesses using on-premises and Cloud 1.0 solutions are anchoring themselves to the past.
  • For organizations that live and die based on their ability to innovate products, services, and business models, the choice of Cloud 2.0 and the eventual movement to the bottomless cloud is essential.
  • To be clear, we see the Bottomless Cloud as a mandatory rather than an optional change in the way data will be stored and managed, and how it will impact the success of every business.

There’s more in this ebook, such as the notion that people and devices will have digital twins describing them and enabling real-time predictions to be made about their behaviours.

And then there are the gee-whiz extrapolated statistics: “By 2100 there will be 100 times as many computing devices as there are grains of sand on all of the world’s beaches, and every one of those devices will be churning out data.”

If you find you have an odd half-hour then download and check out this Wasabi-Delphi ebook about the cloud with no bottom.

Investors file class action against Quantum and two former officers over mis-stated revenues

Investors have launched a class action in Northern California against Quantum Corporation and two former officers, ex-CEO Jon Gacek and ex-CFO Fuad Ahmad.

The lead plaintiff Globis Capital Advisors L.L.C. accuses the defendants of deliberately violating GAAP rules to artificially boost Quantum’s share price which subsequently collapsed and left investors facing losses.

The class action alleges that Quantum recognised revenues in certain large and multi-year transactions for a large cloud project before they should have been recognised under GAAP rules.

It accuses the defendants of improperly and prematurely booking $20m revenues from this project in the first half of Quantum’s fiscal 2017. Further, they “expected to book a second $20 million of revenue from the Large Public Cloud Project during the second half of fiscal 2018.”

This project revenue booking enabled Quantum to report that scale-out storage revenues were growing faster than its legacy data protection revenues were declining. The second $20m of revenue from the large public cloud project did not materialise.

The complaint says “Quantum’s share price more than doubled during the initial portion of the Class Period, rising from below $4.00 per share in April 2016 to above $8.00 per share in April 2017.”

Investors, who thought the rising Quantum share price was justified, were misled. They saw the share price decline as the “Defendants’ prior representations concerning Quantum’s financial results and internal controls were revealed to have been materially false and misleading by corrective disclosures beginning on February 8, 2018.”

Gacek left Quantum in November 2017. Ahmad went in June 2018.

The lead plaintiff will file an amended complaint when Quantum posts accurate accounts and SEC reports for the affected periods.

Separately, Quantum yesterday settled an investor lawsuit alleging breach of fiduciary duty. The deal sees the company improve its corporate governance and is subject to shareholder approval.

Background 

Quantum told the SEC it believes the company mis-recognised revenues from sales transactions, and that investors and the SEC cannot rely on its stated accounts from its fourth fiscal 2015 quarter through to the fourth fiscal 2017 quarter.

The company first revealed the likely mis-statements affecting up to $60m of previously recognised revenue in September 2018, following an SEC subpoena, and set up a special committee to investigate. Quantum admitted deficient financial reporting and controls, dismissed accounting firm PriceWaterhouseCoopers as its auditor and hired Armanino LLC to re-audit accounts. The company will refile for the affected and subsequent periods as soon as possible but it is still unable to say when this will be completed.

Because of this reporting failure the NYSE stopped listing Quantum shares, in January this year.

As of December 31, 2018, the end of Quantum’s most recently completed fiscal quarter, approximately $5m and $15m of prematurely recognised revenue in the historical periods may be recognised in future periods, subject to GAAP rules.

Jay Lerner, Quantum’s new CEO, unveiled his company vision in a press tour in December 2018. Read our analysis. And this month the company launched two scale-out storage product lines.

But to investors the company will remain the same-old, same-old until it sorts out this awful accounting mess. The class-action plaintiffs have a strong case.

The case is 3:18-cv-000923-RS in the Northern District of California.

Lightbits Labs enters all-flash array fray

Lightbits Labs has introduced the NVMe-connected SuperSSD all-flash array using its LightOS and LightField accelerating hardware cards.

The SuperSSD is hooked up to accessing servers across dual 100GbitE links and a standard Ethernet infrastructure using NVMe-over-TCP. There is no need for lossless, converged data centre-class Ethernet, although RDMA over Converged Ethernet is supported.

Physically it consists of an X86-based controller and 24 x 2.5-inch hot-swap, NVMe SSDs inside a 2U rack shelf unit.

Lightbits SuperSSD

LightOS contributes a global flash translation layer (GFTL) and thin provisioning, with data striped across the SSDs. There is a RESTful API providing a standard HTTPS-based interface, and CLI support for scripts and monitoring. Quality of service SLAs apply per volume, avoiding noisy neighbour-type problems.
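
Lightbits has not published its endpoint layout here, so the following Python snippet is purely hypothetical: it sketches the general shape of a monitoring script against an HTTPS REST API of this kind, with the base URL, resource name and response fields invented for illustration.

```python
import requests

BASE = "https://superssd.example.local/api/v1"   # hypothetical address and path
AUTH = ("admin", "secret")                        # placeholder credentials

def list_volumes():
    """Fetch volume records from a hypothetical /volumes resource."""
    # verify is disabled only because this is a throwaway sketch with a made-up host.
    resp = requests.get(f"{BASE}/volumes", auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

for vol in list_volumes():
    # Field names are invented for this sketch; a real API defines its own schema.
    print(vol.get("name"), vol.get("capacity_bytes"), vol.get("qos_policy"))
```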

LightField FPGA cards provide hardware compression and append-only erasure coding. This means that, during an SSD failure, the LightField card collects all the redundant data for reconstruction without passing the data to the host CPU.

Each LightField accelerator card can simultaneously run compression at 100Gbit/s and decompression at 100Gbit/s, without affecting the SuperSSD’s write/read throughput or latency.

The system is claimed to provide zero downtime. Samer Haija, director of product, said: “SuperSSD is a single controller per node architecture. High-availability is achieved by configuring multiple boxes as a cluster for node level failures. When configuring multiple boxes for HA, the setup could be configured as active/active or active/passive.”

The SuperSSD is designed to support the scale, performance and high-availability needs of customers running applications needing fast and parallel block access to data, such as AI and machine learning. It is scalable from 64TB to 1PB of usable capacity; the raw capacity is not revealed. The system can both scale up in capacity, by adding SSDs, and scale out, by adding appliances.

Lightbits system access diagram

Lightbits says each SuperSSD is a two-node target and can support up to 16,000 volumes per node, or 32,000 in total. Therefore, it could support up to 32,000 compute nodes.

We asked what kinds of SSD are supported, such as 3D NAND and TLC. The answer from Haija was: “Lightbits SuperSSD currently supports two variants of NVMe SSD’s from two different vendors that are 3D TLC 64-layer with drive capacities that range from 4TB to 11TB. We are QLC-ready and our SSD offering will continue to support the latest available and released NVMe SSD’s (96 layer TLC and 64 Layer QLC).”

The SuperSSD offers up to 5 million 4K input/output operations per second (IOPS) with an end-to-end latency consistently less than 200μs. The appliance’s own read and write latency is less than 100μs.
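
Those two numbers imply a lot of parallelism. By Little’s Law the outstanding IO count needed to sustain a given IOPS rate at a given latency is simply their product, as this back-of-the-envelope calculation shows (it uses only the figures quoted above):

```python
# Little's Law: concurrency = throughput x latency.
iops = 5_000_000          # claimed 4K random IOPS
latency_s = 200e-6        # claimed end-to-end latency ceiling (200 microseconds)

outstanding_ios = iops * latency_s
bandwidth_gbps = iops * 4096 * 8 / 1e9   # 4KiB per IO, expressed in Gbit/s

print(f"outstanding IOs needed: {outstanding_ios:.0f}")        # ~1,000 in flight
print(f"data rate at 5M 4K IOPS: {bandwidth_gbps:.0f} Gbit/s")  # ~164 Gbit/s
```

The implied ~164Gbit/s also shows why the appliance needs its dual 100GbitE links to hit the headline IOPS figure.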

The LightOS GFTL provides wear-levelling and garbage collection across the SSDs, instead of leaving it to each drive, and so achieves better endurance. According to Haija, system endurance is dependent on workload, and its IP allows it to achieve a minimum of 4 DWPD (Drive Writes Per Day) and up to 16 DWPD over 5 years.
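
DWPD translates into total bytes written in a straightforward way: capacity times drive writes per day times the warranty period. A quick sketch for the drive sizes quoted above, using the vendor’s stated 4 to 16 DWPD range:

```python
def total_writes_pb(capacity_tb, dwpd, years=5):
    """Total data that can be written over the endurance period, in petabytes."""
    return capacity_tb * dwpd * 365 * years / 1000

for cap in (4, 11):          # the quoted 4TB and 11TB drive capacities
    for dwpd in (4, 16):     # the quoted endurance range
        print(f"{cap}TB @ {dwpd} DWPD over 5 years: "
              f"{total_writes_pb(cap, dwpd):,.1f} PB written")
```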

The LightField cards provide 4X or more compression, which is optionally enabled. There can be up to two cards per appliance, meaning each card can support up to 12 SSDs. The cards feature hardware Direct Memory Access for acceleration of the LightOS GFTL.

LightField FPGA card

This gets us thinking that when LightOS receives a read request from an accessing server, the request is satisfied by the LightField card doing a DMA transfer of the striped data from the attached SSDs into LightOS memory buffers. In effect we have memory caching.

Lightbits says the SuperSSD is a storage server and can provide in-storage processing so that user-defined functions can run in it. We asked how this was done but no details were provided. The LightField card is said to be “re-programmable and extendable with user-defined in-storage processing functions.”

Blocks & Files understands that this is the in-storage processing capability.

A prepared quote from Eran Kirzner, CEO and co-founder of Lightbits Labs, said: “AI and machine learning are exceedingly resource intensive, involving access to distributed devices and massive amounts of data. As the amount of data analysed increases, direct-access storage solutions become completely ineffective, complex to manage, costly and potentially a single point of failure.

“By ensuring zero downtime, SuperSSD is the only storage solution that can support continuous AI and machine learning processes, reliably, cost-effectively, and with high-performance and simplicity.”

Competition

Blocks & Files reckons the competition for this kind of array comprises the NVMe-oF startups (Apeiron, E8, Excelero and Pavilion Data Systems, for example), plus Kaminario and the mainstream suppliers such as Dell EMC, IBM, HPE, Hitachi Vantara, NetApp, and Pure Storage.

These are all good suppliers with fast-access and reliable kit.

We asked Lightbits what differentiates the SuperSSD from competing NVMe all-flash arrays. Haija said the SuperSSD is:

  • [A] target-only solution; no need to change network or clients
  • Based on standard hardware and SSD’s
  • Compatible with existing infrastructure
  • Isolates SSD failures and (undesired) behaviour from impacting rest of infrastructure and applications minimising service interruptions
  • Higher performance and capacity, reduces number of nodes required and allows for ease of scale. 

With respect to Lightbits, most of these are features that any all-flash array connected via NVMe-over-Fabrics across a standard storage access network, such as TCP/IP or Fibre Channel, could claim as well.

This is significant because NVMe/TCP is natively slower, meaning it has a longer latency, than NVMe-over-Fabrics using RDMA across lossless Ethernet. Perhaps Lightbits’ hardware acceleration counters this disadvantage.

This means the “higher performance and capacity” point will need to be proved and to be significant; a few microseconds of lower latency will be of no substance unless it delivers faster application execution, because the latency savings per IO have to add up to a large number.

The company is an early-stage startup that emerged last month with $50m in funding. This is a promising start and the unnamed backers clearly think the company is on to something. But the field is crowded and the competition is strong. We asked Haija what he thought Lightbits’ strengths were.

He said: “Our team is comprised of a unique combination of successful leaders with a track record in storage, compute and networking.

“Our capabilities are reflected in our core software (LightOS and GFTL) that tightly integrates the HW to create an easy to deploy, high performance, low latency, reliable product that can improve SSD utilisation by 2x which is what the market needs to achieve the next level of scale and efficiency. 

“With our leadership position in NVMe/TCP, LightOS and GFTL we have a head start that we plan to maintain by continuing to invest in a robust roadmap.”

That leadership position needs proving with customer sales.

Availability

The Lightbits Labs SuperSSD appliance comes with a 5-year warranty and is available for purchase now, along with LightOS software and the LightField storage acceleration card. We have no channel partner details.

Lightbits has two offices in Israel, in Haifa and near Tel-Aviv, and a third in San Jose, CA. Get a LightOS product brief here and a LightField product brief here.

Micron skewers the SKUs with 9300 NVMe SSD refresh

Micron has used the launch of the 9300 to prune its NVMe SSD line-up. More importantly, the drive outclasses the competition, according to our informal survey of products made by rival vendors.

The 9300 drives are available now but Micron has not published pricing at time of publication.

The 9300’s predecessor, the Micron 9200, had two formats, three products and eight capacity levels. With the 9300 refresh Micron offers a single format, two products and six capacities.

The NVMe-connected SSDs use TLC (3bits/cell) 3D NAND, with the 2017-era 9200s using 32 layers and the 9300s using 64 layers. The earlier 9100 Series used planar (single-layer) MLC (2bits/cell) flash.

Micron 9200 speeds and feeds

A PCIe gen 3 x8 lane interface helps the half-height, half-length add-in card format achieve a million random read IOPS. The U.2 (2.5-inch) drive format has the standard four PCIe gen 3 lanes.
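
The extra lanes matter because the link itself caps small-block throughput. A rough calculation, assuming roughly 985MB/s of usable bandwidth per PCIe gen 3 lane after 128b/130b encoding, shows why four lanes are marginal for a million 4K IOPS while eight lanes leave plenty of headroom:

```python
# Approximate usable bandwidth per PCIe gen 3 lane after 128b/130b encoding.
LANE_MBPS = 985
IO_SIZE = 4096  # bytes per 4K random read

for lanes in (4, 8):
    bandwidth_bytes_per_s = lanes * LANE_MBPS * 1e6
    max_iops = bandwidth_bytes_per_s / IO_SIZE
    print(f"x{lanes}: ~{max_iops / 1e6:.2f} million 4K IOPS link ceiling")
    # x4 works out to roughly 0.96M IOPS; x8 to roughly 1.92M IOPS.
```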

Micron’s 9300 packaging

The 9300 range junks the add-in-card format, standardising on the U.2, and also retires the ECO high-capacity variant. The 9300 PRO and 9300 MAX products each have three capacity levels: 3.84TB, 7.68TB and 15.36TB for the PRO; and 3.2TB, 6.4TB and 12.8TB for the MAX.

The 9300 PRO is for read-intensive use while the 9300 MAX is for mixed read-write workloads. The latter has more cells set aside for over-provisioning, hence its smaller capacity.

Micron 9300 speeds and feeds

Blocks & Files has made a chart comparing the 9100, 9200 and 9300 products on maximum random read IOPS and capacity to provide at-a-glance positioning.

AIC means Add-In Card.

We can instantly see that the absence of an 8-lane enhanced 9300 add-in card product means the 9300s are slower at random reads than the 9200 ECO add-in-card.

The 9300s are more power-efficient than the 9200s, needing 14 and 21 watts when sequentially reading and writing respectively. That is 28 per cent better than the 9200s.

They also have a shorter latency – 86μs / 11μs read/write versus the 9200’s 92-150μs / 21μs read/write. Both the 9200 and the 9300 are faster in this respect than the earlier 9100 whose latency was 120μs / 30μs read/write.

Competing drives

Of the competing suppliers, Samsung does not have a TLC 64-layer U.2 product, preferring to build M.2 gumstick card drives. Seagate does likewise with its FireCuda 510 and Barracuda 510 drives, which use Toshiba NAND.

Toshiba’s CD5 uses 64-layers of TLC flash with a capacity range of 960GB to 7.68TB and 500,000 maximum random read IOPS. It is outclassed in capacity and speed.

Western Digital’s Ultrastar DC SN630 uses the same kind of flash, and comes in read-intensive (up to 7.68TB) and mixed-use (to 6.4TB) form. Random Read IOPS top out at 363,750, so this product set is outclassed by the Micron 9300s as well.

Intel has a D5 P4350 which uses QLC (4bits/cell) 64-layer NAND and maxes out at 7.68TB. It does up to 427,000 random read IOPS and doesn’t measure up to the 9300s either.

Dive into the 9300 section of its website to find technical and product briefs.

Dell and Cisco renew VxBlock wedding vows

Dell Technologies and Cisco have signed a multi-year extension to their VxBlock partnership.

This is the converged infrastructure combo that integrates Dell EMC storage with Cisco UCS servers and Nexus networking systems.

The set-up began in 2009 with the VCE joint venture between EMC, VMware and Cisco. The current product is the VxBlock 1000.

The extended agreement runs for an unspecified time and aligns product roadmaps, development and sales initiatives between the companies.

Cisco and Dell plan joint announcements later this year regarding the VxBlock 1000. Expect a gradual move to composable VxBlock systems and improved multi-cloud back-end capabilities.

Our sister publication The Register has a more detailed article about the Dell-Cisco tie-in here.


Quantum Corp scrubs up governance to fend off lawsuit

Quantum is to change its corporate governance to settle a lawsuit that alleged breaches of fiduciary duty.

The lawsuit was instituted by plaintiff Dennis Palkon and the defendants were ex-CFO Fuad Ahmad, ex-CEO Jon Gacek, ex-CEO Adalio Sanchez, and former board members Raghavendra Rau, Alex Pinchev, Clifford Press, Marc Rothman, and activist investor and also board member Eric Singer of VIEX Capital Advisors.

Quantum’s ex-CEO Jon Gacek

The Palkon Action asserted claims against the individual defendants “for breach of fiduciary duty, abuse of control, gross mismanagement, and unjust enrichment”, in 2017-2018 when scale-out storage revenues rose and Quantum claimed it had won a significant deal with a large cloud provider worth an initial $20 million. This is the subject of separate and ongoing litigation.

Since this period Quantum has detected mis-filed quarterly and full-year accounts which it is still struggling to correct. As a consequence it was ejected from the NYSE in January 2019.

The lawsuit contends that, as a result of the alleged wrongdoing “Quantum’s stock price traded at artificially inflated prices…and that when the truth about the false and misleading statements was revealed to the public on or about February 8, 2018…Quantum’s stock price suffered a market decline, causing Quantum to sustain significant damages, including losses to its market capitalisation and harm to its reputation and goodwill.”

The settlement stipulations are:

  • Board independence reforms, including appointment of a Lead Independent Director, director term limits, meetings in executive session, stockholder meetings, and Board diversity,
  • Creation and maintenance of a Disclosure and Controls Committee,
  • Audit Committee reforms,
  • Adoption of a Compensation Clawback Policy to address and remedy any future misconduct by the Company’s CEO, CFO, or any other officer or director,
  • Hiring and maintenance of a Compliance Officer,
  • Director training, continuing education, evaluation and reporting, and annual self-assessments,
  • Adoption of a Confidential Whistleblower Program,
  • Code of Business and Ethics reforms.

Quantum appears to have escaped lightly. It has not conceded any of the plaintiff’s claims and will pay $800,000 for the expenses of his legal team. Palkon receives no money. A settlement hearing will be held later this year and shareholders can file any objections to the settlement at the hearing.


Your Easter enterprise storage getaway

We have picked out four themes for you in our Easter enterprise storage roundup: flash arrays are taking over from disk arrays; backup to the cloud is booming; and storage vendors are finding rich pickings in genomics and video surveillance.

All-flash arrays rule

IHS Markit research numbers show that 2018 could be the last year hybrid array shipments are greater than all-flash array shipments. Hybrid arrays include all-disk and flash+disk arrays.

The IHS Markit forecast for 2019 has all flash performance arrays growing 23 per cent to reach 33 per cent of array revenue share, and hybrid performance arrays declining 15 per cent to 30 per cent.

Year-over-year server external storage revenue, including array and server expansion storage, rose seven per cent to $9.8bn in the fourth 2018 quarter, according to IHS Markit’s “Data Center Storage Equipment Market Tracker”. The hybrid array category led the array market at $3.5bn, with a year-over-year decline of 6 per cent. The overall flash array category grew 40 per cent annually to reach $3.2bn.

The IHS people said:

  • The total server external data centre storage market will reach $63bn by 2023, up from $30bn in 2018, for a five-year compound annual growth rate of 11 per cent.
  • Capacity-optimised (i.e., spinning disk) arrays declined 4 per cent, year over year, in 2018, as purchasing shifts to flash.
  • Enterprises accounted for 44 per cent of storage equipment revenue in the fourth quarter, followed by cloud service providers with 41 per cent and telcos at 14 per cent.

Cloud backup market

Verified Market Research has published its “Cloud Backup Market Size and Forecast to 2025” report, predicting that the cloud backup market will reach $10.25bn by 2025, up from $1.65bn in 2016. This equates to a 25.9 per cent compound annual growth rate from 2017 to 2025.

The report says storing backups in the cloud is cheaper than on-premises. Thirteen vendors are listed as the main suppliers in this area: Acronis, Asigra, Barracuda, CA, Carbonite, Datto, Druva, Efolder, IBM, Iron Mountain, Microsoft, Symantec and Veeam.

Cloudian issues news in threes

Object storage software supplier Cloudian has announced availability of a Cisco Validated Design for its HyperStore software on Cisco UCS servers. The two say it’s designed for cloud infrastructures, with S3 compatibility enabling data management across public and private cloud environments.

Secondly, Cloudian announced a collaboration with Telestream to deliver media-aware storage and content processing of video and audio files. It combines Telestream Vantage with Cloudian’s HyperStore for the search and analysis of media files by using automated enriched metadata tagging.

Vantage processes and analyses media files, providing services such as auto-transcription and captioning, keyword analysis, ingest workflow and asset QC automation, and active library management functions. It automatically creates enriched metadata, a media overlay, that is stored with the associated content in HyperStore.

That means customers can search, discover and analyse media files and develop programmatic metadata-driven workflow automation within the HyperStore environment.

Thirdly, Cloudian’s HyperStore has been certified for use with Veritas Enterprise Vault as a vault store. Cloudian claims customers can reduce archive costs by up to 70 per cent compared to NAS and tape-based systems, and suggests it can be used to replace EMC Centera systems.

There’s more information available here.

ExaGrid exerts itself for Zerto

ExaGrid, with its deduplicating disk backup target, and Zerto, with its continuous data protection disaster recovery software, have a combined backup/disaster recovery offering.

Zerto on-premises software produces a stream of backup data which goes to an on-premises ExaGrid array with data changes constantly recorded to the Zerto Elastic Journal so that recovery points are kept up to date.

Zerto writes long-term daily, weekly, monthly, and yearly backups directly to ExaGrid’s so-called landing zone, with no deduplication, for fast restores of recently added backups. In parallel with this backup, but not inline, ExaGrid deduplicates the data into the ExaGrid repository for long-term retention. As data grows, appliances are added to the ExaGrid system in a scale-out fashion.

ExaGrid and Zerto can replicate data in the onsite ExaGrid system to an offsite location, public cloud, or a secondary physical ExaGrid system to ensure that long-term retention data is protected from site disaster.

Qumulo goes to Genetec

Scale out filesystem supplier Qumulo has had its software certified with Genetec Security Center. Genetec provides IP-based video surveillance, access control, automatic license plate recognition (ALPR), communications, and analytics.

Basically, Genetec data is stored as files in a Qumulo filer.

The video surveillance market was valued at $6.89bn in 2018 and is expected to reach $68.34bn  by 2023, at a CAGR of 13.1 per cent. That’s massive growth and all those videocam files will need storing somewhere.

It’s no surprise storage media and filesystem suppliers, such as Western Digital, Pivot3 and Qumulo, are piling in to get a piece of the action.

Veeam’s fast growth continues

Backup and now cloud data protection software supplier Veeam reported a 16 per cent increase in total bookings year-over-year (YoY) in its first 2019 quarter. It now has more than 343,000 customers, consistently gaining  an average of 4,000 new customers each month. The privately-held company does not publish revenue or profit or loss statements.

Veeam is looking cloudwards to continue its growth and, hopefully, attain the longed-for billion-dollar-a-year revenue target. According to IDC, cloud-based data protection as a service is growing at a 16.2 per cent CAGR – “much faster than the 3.4 per cent CAGR for traditional data protection and recovery software.” That should help Veeam.

  • Annual Recurring Revenue increased 30 per cent YoY,
  • Veeam Availability Suite 9.5 Update 4, released at the beginning of Q1, has had more than 200,000 downloads to date
  • Veeam Backup for Microsoft Office 365 is the fastest growing product in Veeam history, with 152 per cent revenue bookings growth YoY
  • Veeam reported 31 per cent YoY growth in its overall cloud business for Q1’2019

WekaIO, Western Digital and genomics data storage

WekaIO’s Matrix file system is storing files on Western Digital’s ActiveScale array in a genomics data demo. The two suppliers, along with Sentieon and PetaGene, are showing an optimised genomics data analysis workflow at Bio-IT World in Boston this week.

ActiveScale hardware

Sentieon produces tools for genomic data analysis, while PetaGene provides genomics data compression systems which use Sentieon tech. The ActiveScale array supplies integrated data tiering and remote backup to the cloud.

PetaGene’s compression can deliver up to 90 per cent reduction in BAM and FASTQ.gz file sizes, without any loss of information, resulting in more than 50 per cent net savings in overall storage costs. The compression also lowers data transfer time.
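
The 90 per cent figure only turns into a roughly 50 per cent overall saving if genomic files dominate the pool, which a quick sanity check confirms. The fractions below are illustrative assumptions, not PetaGene’s data:

```python
# Back-of-the-envelope check: how much of total storage must be BAM/FASTQ.gz
# for a 90% reduction on those files to roughly halve the overall bill.
genomic_fraction = 0.60     # assumed share of total capacity held in BAM/FASTQ.gz files
reduction = 0.90            # claimed compression on those files

overall_saving = genomic_fraction * reduction
print(f"net saving on total storage: {overall_saving:.0%}")   # 54% with these inputs

# Solving the other way round: a 50% net saving needs genomic files to be at least
print(f"minimum genomic share for a 50% net saving: {0.5 / reduction:.0%}")  # ~56%
```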

The suppliers say their combined system offers good storage performance and capacity scaling for genome sequencing workloads.

David Hiatt, director of business development at WekaIO, bigs up the size of the task at hand: “Genomic workloads are among the most challenging for storage systems with billions of small files and intense metadata operations. … an integrated solution reduces total storage costs and dramatically improves data accessibility.”

Shorts

In-memory compute supplier Hazelcast has announced general availability of Hazelcast Jet, claimed to be the industry’s fastest stream processing engine and a rival to Apache Spark. Capable of ingesting, categorising and processing vast amounts of data with ultra-low latency, Hazelcast Jet provides a real-time data processing system that simplifies deployment and is designed for environments such as IoT, edge and cloud.

Backup vendor HYCU has announced a distribution agreement with Tech Data ANZ to expand its footprint across Australia and New Zealand for the HYCU for Nutanix business.

VAST has added support for NFS over RDMA. It’s designed specifically for AI use cases and, in effect, enables kernel bypass. In this respect it’s similar to Quobyte with its TensorFlow plug-in.