
Azure Purview plugs black holes in a data universe

The infrared image from NASA's Spitzer Space Telescope shows hundreds of thousands of stars in the Milky Way galaxy

Microsoft has released Azure Purview, a tool to uncover hidden data silos wherever they live – on-premises, across multiple public clouds or within SaaS applications.

Julia White, Corporate VP for Azure, blogged: “For decades, specialised technologies like data warehouses and data lakes have helped us collect and analyse data of all sizes and formats. But in doing so, they often created niches of expertise and specialised technology in the process.”

Julia White

She said: “This is the paradox of analytics: the more we apply new technology to integrate and analyse data, the more silos we can create.”

Microsoft’s Alym Rayani, GM for Compliance Marketing, wrote in a blog: “To truly get the insights you need, while keeping up with compliance requirements, you need to know what data you have, where it resides, and how to govern it. For most organisations, this creates arduous ongoing challenges.” Purview reduces the arduousness factor.

Azure Purview is a unified data governance service with automated metadata scanning. Users can find and classify data using built-in and custom classifiers, plus sensitivity labels – Public, General, Confidential and Highly Confidential markers. They can also create a business term glossary.

Discovered data goes into the Purview Data Map. The Purview Data Catalog enables users to search the Data Map for particular data, understand its underlying sensitivity, and see how data is being used across the organisation via data lineage – that is, where it came from.

Purview Catalog
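The catalog can also be queried programmatically, not just through the portal. The following is only a rough sketch: the search endpoint path, API version and token scope shown here are assumptions for illustration, not verified documentation, and the account name is a placeholder – check the Azure Purview REST API reference before relying on any of them.

```python
# Illustrative sketch only: keyword search against the Purview Data Catalog.
# The endpoint path, api-version and token scope are assumptions, not
# verified documentation; "example-purview-account" is a placeholder.
import requests
from azure.identity import DefaultAzureCredential

account = "example-purview-account"
token = DefaultAzureCredential().get_token(
    "https://purview.azure.net/.default").token   # assumed token scope

resp = requests.post(
    f"https://{account}.purview.azure.com/catalog/api/search/query",
    params={"api-version": "2021-05-01-preview"},  # assumed API version
    headers={"Authorization": f"Bearer {token}"},
    json={"keywords": "customer", "limit": 10},
)
resp.raise_for_status()
for item in resp.json().get("value", []):
    print(item.get("name"), item.get("classification"))
```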

Purview contains 100 AI classifiers that automatically look for personally identifiable information and sensitive data. The app also pinpoints out-of-compliance data. 

The existing Microsoft Information Protection software uses sensitivity labels to classify data, helping to keep it protected and preventing data loss. This applies on-premises or in the cloud, across Microsoft 365 apps and services such as Microsoft Teams, SharePoint, Exchange and Power BI, as well as third-party SaaS applications.

Azure Purview extends the sensitivity label approach to a broader range of data sources such as SQL Server, SAP, Teradata, Azure Data Services, and Amazon AWS S3, thus helping to minimise compliance risk.

Using Purview, admins can scan their Power BI environment and Azure Synapse Analytics workspaces with a few clicks. All discovered assets and lineage are entered into the Data Map. Purview can connect to Azure Data Factory instances to automatically collect data integration lineage. It can then determine which analytics and reports already exist, avoiding any re-invention of the wheel.

B&F thinks Purview’s ultimate usefulness will depend on how many data sources it can explore and interrogate. The fewer the black holes in an enterprise’s data universe the more effective Purview’s data governance will be.

Azure Purview is free to use in preview mode until January 1, 2021.

The terrible TiB, GiB and PiB game

The data storage industry uses MiB, GiB, TiB and PiB measures alongside different MB, GB, TB and PB measures, and this confuses the heck out of many people. The reason we are in this mess is a K-sized conundrum.

Update, 8 December 2020: K and k case problem identified. A reader tells me: “Please note that the SI prefix for 1000 is k, not K. Thus it is also kB and not KB. But for the kibi, a capital K was chosen – KiB.”

In normal number usage K, meaning kilo, stands for 1,000 of something. A kilogram is 1,000 grams. A kilometre is 1,000 metres. This is base 10 arithmetic. It is the International System of Units (SI) measuring scheme, in which the kilo prefix signifies ten to the power three (10³) and means 1,000.

But in computing, base 2 arithmetic – a binary numbering scheme – is used to describe DRAM and cache size, and it is how the Windows OS reports capacities in bytes. By extension, this numbering scheme has been applied to disk and SSD capacities. A binary kilo, two to the power 10 (2¹⁰), is 1,024 – a KB in this shorthand. And this scheme leads to the MB, GB, TB, PB and EB terms in general use today.

The SI mega prefix is (10⁶), the giga prefix is (10⁹) and so on with TB, PB, EB and beyond. However, in 1998, the IEC (International Electrotechnical Commission) came up with the KiB (2¹⁰), MiB, etc. nomenclature to differentiate the two numbering systems.

The base 10 and base 2 difference is unimportant – so long as base 10 kilos and base 2 kilos are kept separate. But in recent times we have seen the SI base 10 definitions creeping into computing, with some companies inserting an ‘i’ to show when they mean the binary (IEC) values, as in KiB (kibibyte), MiB (mebibyte), GiB (gibibyte), TiB, and so on.

So we have a case of K confusion, and M, G, T, P and E confusion as well. 

The actual difference between the two schemes gets larger as the units increase in size. A KB is 0.976563 of a KiB, an MB is 0.953674 of a MiB, a GB is 0.931323 of a GiB, and a TB is 0.909495 of a TiB. So KiBs are bigger than KBs, MiBs are bigger still relative to MBs, GiBs are very much bigger than GBs, and TiBs are exceedingly larger than TBs.

For example, a GiB is 1,073,741,824 bytes whereas a GB is 1,000,000,000 bytes; that’s a whopping 73,741,824-byte difference.
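To make the arithmetic concrete, here is a short, self-contained Python sketch that reports the same byte count in both decimal (SI) and binary (IEC) units – the calculation behind the ratios above.

```python
# Decimal (SI) versus binary (IEC) capacity units.
# A drive marketed as "1 TB" (10^12 bytes) shows up as roughly 0.909 TiB
# in an OS that reports binary units - hence the familiar "missing" capacity.

SI_UNITS  = [("kB", 10**3), ("MB", 10**6), ("GB", 10**9), ("TB", 10**12), ("PB", 10**15)]
IEC_UNITS = [("KiB", 2**10), ("MiB", 2**20), ("GiB", 2**30), ("TiB", 2**40), ("PiB", 2**50)]

def report(raw_bytes: int) -> None:
    """Print the same byte count in decimal and binary units, plus the ratio."""
    for (si_name, si_size), (iec_name, iec_size) in zip(SI_UNITS, IEC_UNITS):
        print(f"{raw_bytes:,} bytes = {raw_bytes / si_size:,.3f} {si_name} "
              f"= {raw_bytes / iec_size:,.3f} {iec_name} "
              f"(1 {si_name} = {si_size / iec_size:.6f} {iec_name})")

report(10**12)   # a drive marketed as 1 TB
```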

Toshiba is a notable exponent of base 10 capacity calculations. In its product spec for the MG08 series the company says: “Toshiba defines a megabyte (MB) as 1,000,000 bytes, a gigabyte (GB) as 1,000,000,000 bytes and a terabyte (TB) as 1,000,000,000,000 bytes.” However, the company also uses binary ‘MiB’ figures in the same document. Weird.

Toshiba MiB usage.
WD Red disk drive MB usage.

Capacity measures over time are used to indicate throughput. The telecommunications and networking industries generally measure in base 10 – Mbit/sec, for example. Storage is less consistent: the Toshiba MG08 disk drive’s throughput is given as 262 MiB/sec, while Western Digital uses MB/sec. Inconsistency reigns.

Apple is another example of base 10 madness. Until iOS 10 and Mac OS X Leopard, the company used base 2 numbering for memory and disk capacity but then switched to base 10. That makes generational product capacity comparisons difficult, as well as Mac and Windows system capacity comparisons.

OK, so maybe it’s no biggie, except that using two different measurement systems for the same thing is… crap, as anyone who has to tussle with imperial measurements knows.

So let’s agree a common standard and clean up the K confusion afflicting our industry. Otherwise the confusion will only get worse as we move on from today’s petabyte era to tomorrow’s exabyte world. The Storage Networking Industry Association (SNIA) is the obvious body to make this happen.

Lenovo boosts low end all-flash array with end-to-end NVMe

Lenovo DM5100F all-flash array in 2RU x 24-slot chassis

Lenovo has juiced up its entry level all-flash array with NVMe SSDs, NVMe/FC access and faster Fibre Channel support. The company said the new ThinkSystems DM5100F array is suitable for analytics and AI workloads.

Lenovo teamed up with NetApp in August to produce the all-flash ThinkSystem DM Series. According to the company, the new system delivers 45 per cent higher performance than its precursor, the DM5000F, which uses SAS SSDs and 16Gbit/s FC access.

DM arrays use NetApp ONTAP software, while the hybrid flash/disk DE Series use SAN OS, NetApp’s software for its E-Series arrays.

The DM5100F scales to 48 NVMe SSDs, with capacity topping out at 737.28TB. This is less than the DM5000F, which holds 144 SAS SSDs for a maximum 2.2PB capacity.

The DM5100F’s maximum controller memory is 128GB, twice that of the DM5000F’s 64GB. The new model also has 16GB of NVRAM – double the DM5000F’s 8GB. The increases reflect the greater burden on the DM5100F controller from the NVMe SSDs, NVMe/FC access and overall increased IOPS performance. 

Lenovo’s new array requires ONTAP 9.8, which is also available for the other DM Series models. 

Lenovo ThinkSystem DM5100F array’s 2RU x 24-slot controller chassis

All the DM Series arrays now get S3 object access support, adding to existing block and file access protocols (FC, iSCSI, NFS, pNFS, SMB, NVMe/FC). There is transparent failover and management of object storage. Customers can add cold-data tiering from the SSDs to the cloud, or replicate data to the cloud.

A new DB720S Fibre Channel switch links servers to the DM and DE Series arrays, adding 64Gbit/s Fibre Channel speed and lower access latency to the existing 32Gbit/s and 16Gbit/s switches in Lenovo’s product locker. (This is an OEMed version of the Broadcom G720 switch.)

Cloud-based management 

Lenovo has released Intelligent Monitoring 2.0, an update of its cloud-based management tool for the DM and DE Series arrays. This enables customers to monitor and manage storage capacity and performance for multiple locations from a single cloud-based interface. V2.0 improves the analytics and adds AI-based prescriptive guidance.

Pure overtakes NetApp in Gartner magic quadrant for primary arrays

Pure Storage is rated highest in Gartner’s 2020 primary arrays Magic Quadrant (MQ), overtaking NetApp, last year’s front runner.

Gartner defines primary arrays as all-flash or hybrid flash-and-disk on-premises storage arrays that deliver block services (structured data workloads) and possibly file and object access.

Now for the Blocks & Files standard MQ explainer. The magic quadrant is defined by axes labelled ‘ability to execute’ and ‘completeness of vision’, and split into four squares tagged ‘visionaries’, ‘niche players’, ‘challengers’ and ‘leaders’.

The 2020 primary arrays MQ Leaders’ box is split into two groups, with IBM moving up from the trailing leaders to join Pure, NetApp, Dell and HPE in the, umm, leading leaders.

Notable changes include Inspur moving up from niche players to challengers. Oracle has fallen back in the niche players box.

Five vendors have dropped out of the 2020 MQ: Western Digital (which sold its IntelliFlash line to DDN), NEC, Infortrend, Synology and Kaminario (now rebranded as Silk). Gartner’s MQ inclusion criteria require the supplier to provide on-premises arrays, and this criterion excludes Silk, which has morphed into a software-only vendor.

To merit inclusion in the MQ, suppliers must also have generated more than $50m in sales revenue over the past year. This may explain the exit of NEC, Infortrend and Synology. It could also account for the non-appearance of VAST Data and StorONE, which might otherwise have been expected to appear in the visionaries box.

Gartner analysts list three strategic planning assumptions:

  • By 2025, 50 per cent or more of enterprises will have moved to an OPEX storage consumption model.
  • By 2025, 20 per cent or more of enterprises will be using NVMe-oF, up from just five per cent today. 
  • By 2023, at least 20 per cent of enterprises will be using cloud storage management tools to link their arrays to the public cloud for backup and disaster recovery.

Could it be Magic (Quadrant)?

Here is the 2020 primary array Magic Quadrant, kindly made available by Infinidat.

And, for comparison, here is last year’s primary array MQ:

The green diagonal represents a balance of completeness of vision and the ability to execute.

Cohesity writes two new chapters for its everything DMaaS story


Cohesity this week officially launched DataProtect-as-a-Service. The data management vendor also said it will SaaS-ify SiteContinuity, the new disaster recovery software it announced in September. The moves show the company’s progress in delivering all its data management functions as a service (DMaaS).

Cohesity made the twin announcements at Amazon’s re:Invent 2020 yesterday – an appropriate venue as the services are built atop the AWS public cloud. (Cohesity proclaimed its intention to deliver DataProtect-as-a-Service and partner with Amazon in October – you can read our story here.)

Matt Waxman

Matt Waxman, VP of Product Management at Cohesity, said Cohesity Data Management-as-a-Service “removes the complexities of managing infrastructure”.

Cohesity will continue to offer on-premises software for customers that want to retain their on-premises infrastructure. Management of all versions of Cohesity DataProtect is handled through the company’s Helios cloud admin console.

Cohesity argues that customers prefer a unified DMaaS product set covering several aspects of data management, such as backup, disaster recovery, file services and so forth, with single-point management. The alternative is a mix of point products that may not cover the same data management functions. The company said it will make more SaaS announcements in coming quarters.

The data management industry is moving wholesale towards DMaaS, with Commvault announcing a DRaaS yesterday, joining Zerto and Druva, which have operated backup-as-a-service for some time. Rubrik offers the Polaris management facility as a service and we expect it to follow the SaaS course for its data management software.

Cohesity DataProtect delivered as a Service is available immediately in the US and Canada through resellers and in the AWS Marketplace, and elsewhere in the coming quarters. SiteContinuity delivered as a Service will be available in early access preview in early 2021 and general availability is planned for Spring 2021.

The next game changer? Amazon takes on the SAN vendors

Amazon has re-engineered the AWS EBS stack to enable on-premises levels of SAN performance in the cloud. Make no mistake, the cloud giant is training its big guns on the traditional on-premises storage area networking vendors.

The company revealed yesterday at re:Invent 2020 that it has separated the Elastic Block Store compute and storage stacks at the hardware level so they can scale at their own pace. AWS has also rewritten the networking stack to use its high-performance Scalable Reliable Datagram (SRD) networking protocol, thereby lowering latency.

The immediate fruits of this architecture overhaul include EBS Block Express, the “first SAN built for the cloud”. AWS said the service is “designed for the largest, most I/O intensive mission-critical deployments of Oracle, SAP HANA, Microsoft SQL Server, and SAS Analytics that benefit from high-volume IOPS, high throughput, high durability, high storage capacity, and low latency.”

Pure conjecture from us, but Amazon could hit the SAN storage suppliers squarely in their own backyards by introducing EBS Block Express to the AWS Outposts on-premises appliance.

Mai-Lan Tomsen Bukovec, VP Storage, at AWS, said in a statement: “Today’s announcements reinvent storage by building a new SAN for the cloud, automatically tiering customers’ vast troves of data so they can save money on what’s not being accessed often, and making it simple to replicate data and move it around the world as needed to enable customers to manage this new normal more effectively.”

Mai-Lan Tomsen Bukovec

AWS noted that many customers had previously striped multiple EBS io2 volumes together to achieve higher IOPS, throughput or capacity. But this is sub-optimal. The alternative – on-premises SANs – are “expensive due to high upfront acquisition costs, require complex forecasting to ensure sufficient capacity, are complicated and hard to manage, and consume valuable data center space and networking capacity”.

Now EBS io2 Block Express volumes can support up to 256,000 IOPS, 4,000 MB/second of throughput, and 64TB of capacity. This is a fourfold increase over existing io2 volumes across all parameters. The new volumes have sub-millisecond latency and users can stripe multiple io2 Block Express volumes together to get better performance.

Decoupled compute and storage

AWS yesterday said the decoupling of compute and storage in the EBS service has enabled it to introduce a new class of Gp (general purpose) volume for general purpose workloads such as relational and non-relational databases. With the existing Gp2 volumes, capacity grows in lockstep with performance (IOPS and throughput), which means customers can end up paying for storage that they don’t need.

AWS has addressed this with Gp3 volumes, to enable users to utilise a claimed 4x performance increase over Gp2 volumes – without incurring a storage tax. As well as independent scaling, Gp3 volumes are priced 20 per cent cheaper than Gp2. Migration from Gp2 to Gp3 is seamless, AWS says, and handled via Elastic Volumes.
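As a rough illustration of that independent scaling, here is a minimal boto3 sketch. It assumes AWS credentials are already configured; the region, availability zone, size and performance figures are placeholders, and current gp3 limits and pricing should be checked against AWS documentation.

```python
# Sketch: create a gp3 EBS volume whose IOPS and throughput are provisioned
# independently of its size - the decoupling described above.
# Values and region are illustrative placeholders only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # placeholder AZ
    VolumeType="gp3",
    Size=200,          # GiB of capacity
    Iops=6000,         # provisioned independently of Size
    Throughput=500,    # MB/s, also independent of Size
    TagSpecifications=[{
        "ResourceType": "volume",
        "Tags": [{"Key": "Name", "Value": "gp3-demo"}],
    }],
)
print(volume["VolumeId"])
```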

Tiering and replication

The Archive Access (S3 Glacier) and Deep Archive Access (S3 Glacier Deep Archive) tiers, announced in November with S3’s Intelligent-Tiering, are now generally available. Customers can lower storage costs by putting cold data into progressively deeper and lower-cost AWS archives.

S3 Replication enables the creation of a replica copy of customer data within the same AWS Region or across different AWS Regions. This is now extended to replicate data to multiple buckets within the same AWS Region, across multiple AWS Regions, or a combination of both.
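For illustration, a hedged boto3 sketch of how replication to two destinations can be expressed as two rules in one configuration. The bucket names and IAM role ARN are placeholders, and both the source and destination buckets need versioning enabled.

```python
# Sketch: replicate one versioned bucket to two destination buckets using
# two replication rules. Bucket names and the role ARN are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",
        "Rules": [
            {
                "ID": "copy-to-destination-1",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # replicate everything
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::example-dest-bucket-1"},
            },
            {
                "ID": "copy-to-destination-2",
                "Priority": 2,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::example-dest-bucket-2"},
            },
        ],
    },
)
```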

AWS io2 Block Express volumes are available in limited preview.

Pliops strengthens board with appointment of Mellanox founder

Eyal Waldman

Mellanox founder Eyal Waldman has joined the board of Pliops, an Israeli data storage-tech startup. His role will be to help guide Pliops’ growth and scale its technology to new use cases. He will also advise on financial decisions, personnel and overall strategy, and meet key customers and partners.

“Pliops is one of those companies that is poised to make a huge impact. This is a pivotal time in the data centre and I’m looking forward to working with the Pliops team as they roll out their technology,” said Waldman, the former CEO of Mellanox, which was acquired earlier this year by Nvidia for $7bn. He left Nvidia in November.

According to Waldman, “Pliops is tackling the most challenging issues that are vexing to data centre architects – namely, the colliding trends of explosive data growth stored on fast flash media, ultimately limited by constrained compute resources.”


To date, Pliops has raised $40m to fund the development of a storage processing unit (SPU), which we consider to be a sub-category of the new class of data processing units (DPUs). The Pliops card hooks up to a server across a PCIe link, accelerating storage work and offloading the server’s x86 CPUs. The company had originally targeted launch for mid-2019 but is now sampling its storage processors to select customers and expects general availability in Q1 2021.

Waldman’s Mellanox experience, connections and know-how should help the company in a competitive environment that is heating up.

Pliops must contend with VMware and Nvidia’s Project Monterey DPU vision. Nvidia also told us this week of its plans to add storage controller functions to the BlueField SmartNIC.

The Pliops SPU is also similar in concept to that of another startup, Nebulon, whose SPU has a cloud-managed, software-defined architecture. Nebulon said it has bagged HPE and Supermicro as OEMs.

Commvault gives VMware workloads some more loving in latest DR software release

Commvault has updated its new DR software with recovery automation for VMware workloads.

The upgrade also sees Commvault Disaster Recovery gain orchestration to, from and between on-premises and Azure and AWS environments. The orchestration can be within zones or across regions, and features simple cross-cloud migration support. It seems reasonable that Commvault will in due course add Google Cloud support.

ESG senior analyst Christophe Bertrand gave his thumbs up to the upgrade: “Commvault Disaster Recovery’s multiple cloud targets and speedy cross-cloud conversions make it extremely compelling. With everything going on in the world today, a true disaster could be right around the corner for any company. It’s critical to have enterprise multi-cloud tools in place to mitigate data loss and automate recovery operations immediately.”

The competition between DR specialist Zerto, which recently moved into backup, and data protector Commvault, which recently moved into DR, is hotting up. Cohesity has also moved into automated DR with its SiteContinuity offering.

Commvault released Commvault Disaster Recovery in July. Its automated failover and failback provide verifiable recoverability and reporting for monitoring and compliance. The software enables continuous data replication with the automated DR capabilities, capable of sub-minute Recovery Point Objectives (RPOs), along with near-zero Recovery Time Objectives (RTOs). 

Commvault cites additional benefits for the software such as cloud migration, integration with storage replication, ransomware protection, smart app validation in a sandbox, and instant mounts for DevOps with data masking. The latter feature moves it into the copy data management area, competing with Actifio, Catalogic, Cohesity, Delphix and others.

Google builds out Cloud with Actifio acquisition

Google is buying Actifio, the data management and DR vendor, to beef up its Google Cloud biz. Terms are undisclosed but maybe the price was on the cheap side.

Actifio has been through a torrid time this year. The one-time unicorn refinanced for an unspecified sum at a near-zero valuation in May. It then instituted a 100,000:1 reverse stock split for common stock, which crashed the value of employees’ and ex-employees’ stock options.

Financial problems aside, Google Cloud is getting a company with substantial data protection and copy data management IP and a large roster of enterprise customers.

Matt Eastwood, SVP of infrastructure research at IDC, provided a supporting statement: “The market for backup and DR services is large and growing, as enterprise customers focus more attention on protecting the value of their data as they accelerate their digital transformations. We think it is a positive move for Google Cloud to increase their focus in this area.”

Google said the acquisition will “help us to better serve enterprises as they deploy and manage business-critical workloads, including in hybrid scenarios.” It also expressed commitment to “supporting our backup and DR technology and channel partner ecosystem, providing customers with a variety of options so they can choose the solution that best fits their needs.”

This all suggests Actifio software will still be available for on-premises use.

Ash Ashutosh, Actifio CEO, said in a press statement: “We’re excited to join Google Cloud and build on the success we’ve had as partners over the past four years. Backup and recovery is essential to enterprise cloud adoption and, together with Google Cloud, we are well-positioned to serve the needs of data-driven customers across industries.”

Ash Ashutosh video.

Actifio was started by Ashutosh and David Chang in July 2009. The company took in $311.5m in total funding across A, B, C, D and F rounds; the latter was a $100m round in 2018 at a $1.3bn valuation.

What Actifio brings to Google Cloud

Google Cloud says Actifio’s software:

  • Increases business availability by simplifying and accelerating backup and DR at scale, across cloud-native, and hybrid environments. 
  • Automatically backs up and protects a variety of workloads, including enterprise databases like SAP HANA, Oracle, Microsoft SQL Server, PostgreSQL, and MySQL, as well as virtual machines (VMs) in VMware, Hyper-V, physical servers, and Google Compute Engine.
  • Brings significant efficiencies to data storage, transfer, and recovery. 
  • Accelerates application development and reduces DevOps cycles with test data management tools.

All-flash arrays shine in anaemic quarter for HPE storage

HPE revenues have returned to pre-pandemic levels – more or less – but data storage lags behind the rest of the business, with revenues down three per cent Y/Y to $1.2bn.

However, all-flash arrays (AFA) and hyperconverged infrastructure were bright spots. AFA revenue grew 19 per cent Q/Q, driven by increased adoption of the Primera AFA, up 43 per cent Q/Q, and the Nimble AFA, up 27 per cent Q/Q. We don’t have Y/Y numbers for these two products.

Antonio Neri, HPE CEO

In the earnings call CEO Antonio Neri said: “In storage, we have been on a multiyear journey to create an intelligent data platform from edge-to-cloud and pivot to software-as-a-service data storage solutions, which enable higher level of operational services attach and margin expansion. And our strategy is getting traction.

“Our portfolio is well positioned in high-growth areas like all-flash array, which grew 29 per cent year over year; big data storage, which had its sixth consecutive quarter of growth, up 41 per cent Y/Y; and hyperconverged infrastructure where Nimble dHCI, our new hyperconverged solution, continued momentum and gained share, growing 280 per cent Y/Y. We also committed to doubling down in growth businesses and investing to fuel future growth.”

HPE emphasised Q/Q growth to show it is climbing out of a pandemic-caused drop in revenues. Big Data grew 27 per cent Q/Q thanks to increased customer demand for AI/ML capability. Overall, storage accounts for 16.7 per cent of HPE’s revenues. (A minor point – in HPE’s compute business, the Synergy composable cloud business grew five per cent Q/Q.)

CFO Tarek Robbiati said: “Our core business of compute and storage is pointing to signs of stabilisation, and our as-a-service ARR (annual recurring revenue) continues to show strong momentum aligned to our outlook.”

For comparison, NetApp yesterday reported Q2 revenues up 15 per cent Y/Y, while Pure Storage last week reported revenues down four per cent Y/Y.

HPE’s outlook is for a mid-single digits revenue decline Y/Y next quarter.

NetApp’s high-end AFA sales lead it out of pandemic recession

NetApp has posted its second successive quarter of revenue growth, thanks to an unexpected boost in high-end all-flash storage array sales.

The company recorded $1.42bn in revenues for its second fiscal 2021 quarter, ended October 30, 2020 – three per cent higher than a year ago and above guidance. Net income fell 43.6 per cent to $137m.

CEO George Kurian said in a press statement: “I am pleased with our continued progress in an uncertain market environment. The improvements we made to sales coverage in FY20 and our tight focus on execution against our biggest opportunities continue to pay off.”

Quarterly revenue by fiscal year chart shows NetApp climbing out of a revenue dip

Highlights in the quarter included a 200 per cent jump in public cloud services annual recurring revenue (ARR) to $216m, and all-flash array run rate increasing 15 per cent to $2.5bn. NetApp said 26 per cent of its installed systems are all-flash, which leaves plenty of room to convert more customers to AFA systems.

Hardware accounted for $332m of the $749m product revenue, down 18 per cent Y/Y, with software contributing $417m, up 14 per cent. Product revenue in total declined three per cent Y/Y.

On the earnings call, CFO Mike Berry said the company is “on track to deliver on our commitment of $250m to $300m in fiscal ’21 Cloud ARR and remain confident in our ability to eclipse $1bn in Cloud ARR in fiscal ’25.”

The outlook for NetApp’s Q3 is $1.42bn at the mid-point, one per cent up on the same time last year. NetApp hopes that Covid-19 vaccination programs will return the overall economy to growth later in calendar 2021.

NetApp gives PowerStore a kicking

Kurian’s prepared remarks included this sentiment: “We are pleased with the mix of new cloud services customers and growth at existing customers. We saw continued success with our Run-to-NetApp competitive takeout program, an important component of our strategy to gain new customers and win new workloads at existing customers.”

That program targets competitors’ product transitions, such as Dell’s Unity to PowerStore transition. Dell bosses recently expressed impatience about PowerStore’s revenue growth rate in its quarterly results.

Kurian talked about market share gains in the earnings call: “If you look at the results of all of our major competitors, [indiscernible], Dell, and HP, there’s no question we have taken share. I think our product portfolio is the best in the market.” He called out high-end AFAs as doing well – which was unexpected according to Berry. This drove NetApp’s “outperformance in product revenue and product margin”.

Kurian gave PowerStore a kicking when replying to an analyst’s question: “I think as not only we have observed, but many of our competitors have also observed, the midrange from Dell has not met expectations. It is an incomplete product. It is hard to build a new midrange system. And so it’s going to be some time before they can mature that and make that a real system. And you bet we intend to take share from them during that transition… We’re going to pour it on.”

Riding the disk replacement wave

NetApp’s AFA revenue growth should continue, according to Kurian. “We think that there are more technologies coming online over the next 18 to 24 months that will move more and more of the disk-based market to the all-flash market. We don’t think that all of the disk-based market moves to all-flash. But as we said, a substantial percentage of the total storage market, meaning let’s say 70 to 80 per cent will be an all-flash array portfolio.”

He is thinking of QLC flash (4bits/cell) SSDs as they enable replacement of nearline and faster disk drives. Kurian said: “QLC makes the advantage of an all-flash array relative to a 10k performance drive even better. So today, there are customers buying all-flash arrays, when they are roughly three times the cost of a hard drive. With QLC, that number gets a lot closer to one and a half to two times.”

Also, the “economics of all-flash are benefited by using software-based data management”.

Micron shrugs off Huawei hit, raises Q1 financial guidance

Micron has upped revenue guidance for the first quarter ended December 3 from $5bn-$5.4bn to $5.7bn-$5.75bn.

The US chipmaker has also increased its gross margin and EPS guidance for the quarter. Investors are happy and the stock price rose 5.7 per cent in pre-market trading.

Micron said it switched production from Huawei to other customers more quickly than it had previously anticipated. Huawei, hitherto Micron’s biggest customer, is subject to a US trade ban.

In addition, Micron may have recorded stronger than expected DRAM sales, according to Wells Fargo analyst Aaron Rakers.

It will be interesting to see whether this is Micron-specific news, or whether Samsung and SK hynix are also benefiting from an end-of-year boost.