Gartner has revised the criteria for the hyperconverged infrastructure Magic Quadrant to include HCI software players only. The moving of the goal posts has led to the ejection of Cisco, Dell EMC, HPE, Huawei and Red Hat from the 2020 HCI MQ edition.
Last year, the tech analyst firm said HCI systems were available as software to run on standard hardware or as integrated appliances – hyperconverged integrated systems (HCIS).
For 2020, Gartner has tightened the definition to HCI software that provides virtualized compute, storage, networking and associated management from a single instantiation running on server hardware from multiple server providers. Long story short, out go the HCIS products, leaving only HCI software.
Here is the HCI MQ diagram for 2020.
Compare and contrast with the Gartner 2019 HCI (and HCIS) MQ:
There are no ‘Challengers’ and Microsoft is the sole ‘Visionary’. We think this week’s Azure Stack HCI launch has something to do with this – read our article, Dell launches pay as you go Azure Stack HCI.
Pivot3 moves from Challenger to a ‘Niche’ player – and although Gartner hates it when we use such terms, this is effectively a demotion. Scale Computing, StorMagic and StarWind are promoted within the Niche players’ box.
We were surprised by Red Hat’s absence from the Gartner 2020 HCI MQ. Gartner explains the company was “dropped due to a narrowing of its focus to only three use cases.” For inclusion in the MQ, the analyst firm requires HCI vendors to meet user requirements in at least four use cases such as core IT, cloud, edge, mission-critical and VDI.
N.B. Standard Gartner MQ explainer: the magic quadrant is defined by axes labelled ‘ability to execute’ and ‘completeness of vision’, and split into four squares tagged ‘visionaries’, ‘niche players’, ‘challengers’ and ‘leaders’.
Dell customers can now run on-premises HCI infrastructure via Azure Stack HCI. The hybrid cloud system also allows for pay-as-you-go pricing and “optimise[s] the operational experience of Microsoft’s next generation of Azure Stack HCI.”
The “full stack lifecycle management experience… reduces manual tasks by 82 per cent, greatly reducing the risk of data entry mistakes or installation option guess work,” Shannon Champion, Dell’s director of HCI product marketing, said in a blog post. Dell has been quick off the mark with this launch, as Microsoft announced general availability of Azure Stack HCI just a couple of days ago.
To give it its very full name, the Dell EMC Integrated System for Microsoft Azure Stack HCI* runs atop Dell PowerEdge servers. The all-in-one, validated HCI system is managed via the Azure portal, Windows Admin Center with OpenManage integration, and PowerShell.
Dell EMC Integrated System for Microsoft Azure Stack HCI.
Dell Azure Stack HCI is based on Hyper-V and the Azure Stack HCI software stack, and accesses Azure cloud services such as site replication, cloud backup and Kubernetes.
Dell EMC sells two other HCI systems. VxRail is its VMware-based HCI product, and the XC products run Nutanix software.
Azure Stack history
To recap, Azure Stack, launched in 2017, is the Azure cloud embodied in on-premises X86 hardware and incorporating Hyper-V and Windows Server-based software-defined compute, storage, and networking technologies.
Azure Stack HCI (hyperconverged infrastructure) launched in March 2019, and put Azure Stack into a Microsoft-certified X86-based HCI system from a Microsoft partner. This is a rebrand of the Windows Server Software-Defined (WSSD) systems that were available from the same hardware partners, and links to the Azure cloud for infrastructure management services.
Microsoft began previewing Azure Stack HCI as a service in July 2020. This charges on a consumption, per-core basis, similar to Azure cloud instances. A deployment wizard installs the on-premises Azure Stack cluster, linking it to the Azure cloud.
*The Dell EMC Integrated System for Microsoft Azure Stack HCI should be shortened to Dell EMC’s ISMASHCI. We like it, and it’s easier to type. Are you listening, Dell?
High performance computing and enterprise needs are converging, according to DDN, which is developing technology in response.
In a press briefing this week, the HPC storage veteran gave us a sneak peek at its product roadmap. DDN has two divisions: At Scale, the HPC, supercomputer and enterprise AI unit; and Enterprise, which includes the acquired Tintri, Nexenta and IntelliFlash businesses.
Tom Ellery, DDN’s SVP field operations and GM for the enterprise division, said DDN is the world’s biggest privately-held storage company, with more than 11,000 customers, 10EB of shipped capacity, 21 years of profitability and a quarterly run rate “north of $100m”.
The roadmap sees the company cross-pollinating the two divisions with each other’s technology and adopting an ‘intelligent infrastructure’ approach that incorporates AIOps-influenced management. In particular, Ellery said, DDN is developing the Enterprise division products, specifically the Tintri VMstore and the IntelliFlash array.
VMstore
In early 2021 the VMstore gets a new all-NVMe drive platform offering more performance than existing VMstore arrays such as the T7000.
Tintri T7000.
VMstore software adds Oracle database support, equivalent to the SQL database-aware functions announced in July. Oracle DBAs will be able to use Oracle concepts to manage and operate VMstore.
DDN will add Open Cloud data mobility, a service for moving on-premises workloads to AWS S3. A Tintri AWS SpinUp service will convert them to EC2 format and instantiate them.
It seems logical that DDN will over time extend Open Cloud Mobility to cover Azure and the Google Cloud.
VMstore management software will also get intelligent anomaly detection to identify ransomware and other malware activity. The product roadmap includes the ability to archive snapshots on a networked IntelliFlash array and use them there for disaster recovery.
IntelliFlash
IntelliFlash and DDN’s A3i system will get a Kubernetes CSI driver within a few weeks.
IntelliFlash rack.
More importantly, a new Tintri IntelliFlash hardware system is coming in early 2021. This will scale up to a 4 x 90-bay chassis system, offering 5PB-6PB capacity and 25GB/sec throughput.
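As a quick sanity check on those figures, here is our own back-of-the-envelope arithmetic – the per-device sizes are our inference, not a DDN spec:

```python
chassis, bays_per_chassis = 4, 90
slots = chassis * bays_per_chassis              # 360 drive slots in total
for capacity_pb in (5, 6):
    per_device_tb = capacity_pb * 1000 / slots  # 1PB = 1,000TB
    print(f"{capacity_pb}PB / {slots} slots = ~{per_device_tb:.0f}TB per device")
# 5PB -> ~14TB per device; 6PB -> ~17TB per device
```

In other words, the quoted capacity range implies fully populated chassis of devices in the 14TB-17TB class.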
IntelliFlash 4.0 will be released in 2021, adding erasure coding and encryption. The update has container support via CSI, VMware Tanzu and OpenStack. DDN thinks the latter will increase its appeal to telco customers.
IntelliFlash 4.0 will also provide on-premises analytics and an S3 gateway. The technology can be consumed as a physical appliance, as a virtual storage appliance (VSA) or in the public cloud. The VSA can run as a VM or container on a virtual server or in a hyperconverged system, and in the public cloud it can run as a virtual machine or as a full stack.
The DDN At Scale division will provide DDN-branded IntelliFlash for structured data. DataFlow, an Atempo product, will move data between IntelliFlash and DDN’s EXA5 ExaScaler unstructured data (file) array. It can also migrate data from the older GridScaler to ExaScaler.
Firecrest
DDN also discussed Firecrest, software-defined storage that combines Tintri IntelliFlash and NexentaStor functionality. It will run as a VSA (virtual storage appliance) and should be ready around the middle of 2021.
DDN Firecrest, Inc. was incorporated in October 2019 by a company agent who also acted as the agent for DDN’s incorporation of Nexenta in September 2019.
We think that Tintri is also looking to provide a unified management facility, with intelligent analytics, across its product lines.
WOS going on?
Not a lot has been heard about DDN’s WOS (Web Object Scaler) object storage system in the last couple of years. DDN added an S3 interface to WOS in 2014, but ExaScaler now also has an S3 interface and matches WOS on capacity. No new hardware platform is planned for WOS and our impression is that it is in care-and-maintenance mode.
Trilio, the Kubernetes data protection startup, has bagged $15m in funding: $12m in a Series B round and $3m in debt finance. The funding mix suggests that the company is still in the “you have a lot to prove” stage.
Trilio will invest the money in product development and sales and marketing to take its software to global markets. Competition is intense and it sees the need to grow quickly.
David Safaii
CEO David Safaii said in a statement: “We’re pleased to close this round of funding with a committed group of investors that bring operational expertise in scaling enterprise software organisations… We’re excited to bring our unique value to more customers looking to address requirements for Kubernetes backup and recovery, migration, DR and application mobility.”
Trilio, founded in 2013, provides data protection for Kubernetes-orchestrated stateful containers with its TrilioVault product. It is classed as a leader in the November 2020 GigaOm Radar for Kubernetes Data Protection report. The company said it experienced 300 per cent revenue growth in the first half of 2020, but without a baseline, that is a meaningless figure.
Competing startups Portworx and Kasten were acquired by Pure Storage and Veeam earlier this year, for $370m and $150m respectively. This makes Trilio and fellow startup Robin.io potential acquisition targets for established data protection suppliers that lack cloud-native data protection technology.
The Trilio round was led by SKK Ventures, with participation from Plug and Play and existing investors .406 Ventures and Jack Egan.
IBM’s enterprise external storage revenues tumbled 21.6 per cent in the third quarter while HPE and Huawei outgrew an anaemic market. However, there is a helluva long way to go before anyone catches up with Dell.
IDC’s Q3 2020 quarterly Storage Tracker shows the enterprise external storage market eased 1.8 per cent Y/Y to $6.74bn. IDC research analyst Greg Macatee said the product segment “continued to face headwinds due to the effects of the global pandemic”.
According to IDC, the market bright spots in Q3 were China’s booming external OEM market, and no-brand ‘ODM’ vendors selling direct to hyperscalers. “Collaboration tools and content delivery networks were key drivers of ODM sales as consumers continue to demand these types of at-home services on top of traditional enterprise-driven ODM Direct infrastructure consumption,” Macatee said.
Dell Technologies led the suppliers with a 28.9 per cent revenue share at $1.95bn, down 6.1 per cent on the year, and HPE in second place climbed 7.3 per cent to $729.6m. NetApp and Huawei were in joint third place but arrived there differently: NetApp revenues of $638.5m were down two per cent Y/Y while Huawei’s $633.3m revenues were 23.7 per cent higher.
Hitachi and IBM bagged joint fifth place, with Hitachi revenues falling 10.3 per cent to $376.2m, while IBM slumped 21.6 per cent to $312.3m. We assume IBM’s mainframe storage sales are in a cyclical downturn.
Again, Pure Storage did not appear in IDC’s top supplier listings. The company posted $410.6m in revenues, down 4.2 per cent, for its third fiscal quarter ended in November. That reporting period overlaps by two months (minus a day) with calendar Q3 2020. Pure’s revenues grew two per cent Y/Y in the prior fiscal quarter, which had a one-month overlap with calendar Q3 2020, so we cannot ascertain the direction of its revenues in calendar quarter terms. IDC knows but isn’t releasing the numbers publicly.
A chart shows how the vendors have performed over time:
The chart shows that Pure Storage isn’t catching up with, let alone overtaking, a declining IBM or Hitachi.
IDC estimates the total all-flash array market was worth $2.6bn in the quarter, down 0.4 per cent Y/Y. The total was slightly lower than the $2.8bn total revenues for hybrid flash/disk arrays, which was down 0.7 per cent.
Charting the supplier Y/Y growth percentage changes we see:
HCI darling Nutanix has completed its search for a CEO with the hiring of Rajiv Ramaswami, a senior exec at arch-rival VMware.
“I have long admired Nutanix as a formidable competitor, a pioneer in hyperconverged infrastructure solutions and a leader in cloud software,” he said in a press statement today.
Ramaswami succeeds Dheeraj Pandey, who in August announced his plans to retire. With his replacement at the helm, Pandey leaves the company he co-founded on December 12.
Rajiv Ramaswami
“Rajiv is the right leader at the right time,” Pandey said. “With a future-proof business model, a loyal and expanding customer base, and a strong technology portfolio, I look forward to seeing Rajiv take the helm to lead this incredible team.”
This is Ramaswami’s first CEO position. He has been VMware’s COO for products and cloud services since October 2016 and, according to Nutanix, led several important acquisitions at VMware and played a key role in the company’s ongoing transition towards a subscription and SaaS model.
He comes from a data networking background. Prior to VMware, he was an EVP and GM at Broadcom for six years. There was a 7.5 year stint at Cisco and the resume then goes back in time through Nortel, Tellabs and IBM.
An independent schools group in Wales was hit by a ransomware attack in September, during which the perpetrators deleted files belonging to staff and pupils, and encrypted Veeam onsite backups held on disk and tape.
The attackers used Sodinokibi ransomware to penetrate the IT systems of Haberdashers’ Monmouth Schools – a group of five schools – and demanded £500,000, rising to £1m after six days, to decrypt the data.
The malware variant penetrated the schools through a domain admin account, working its way through the main infrastructure to knock out file servers, Exchange, and SQL servers.
Haberdashers’ School, Monmouth
In a soon-to-be-published case study, Haberdashers’ Monmouth Schools’ IT director Fred Welsby said the attackers “had found all the devices and servers on the network, created a domain admin account and started trawling through our data to see what was valuable to us. There was nothing they couldn’t do.
“We did have… backup software on-premises – and one of the backup servers was on domain. That was fully encrypted, so they hit our backup systems as well.
“I came into work to find my engineer calling it ‘a disaster’. Nobody could log onto any computers. Teachers and pupils had no access to any of our services, databases or email systems. Basically it was back to paper and pencil.”
Fortunately, the schools had a second line of defence. After previous malware attacks, Welsby had arranged to store backups offsite in a Redstor cloud facility. These comprised 15TB of data stored in encrypted form in a geographically separate data centre. The ransomware gang was unable to attack this.
Following the attack, Welsby called Redstor, a UK cloud data management provider. The company restored a SIMS (Schools Information Management System) server and Pass server into VMware. Welsby said: “We were able to recover that server to the previous day with Redstor, so the loss of data was very minimal. The cloud backups were unaffected and were critical in restoring our systems.”
He said having offsite backups was an “absolute godsend”.
Computerworld, a Bristol-based reseller and Haberdashers’ Monmouth’s main IT provider, helped get the school’s most important services up and running, including on-premises hosted email and Microsoft 365 authentication.
The schools’ IT director said: “It was a very bad attack, but it could have been a lot worse. Had we not had a cloud backup system, we would have been with very limited services for a month or longer.”
Haberdashers’ survived the attack with a day or so of downtime and no need to pay the ransom. Its experience shows that onsite backup alone is not sufficient for ransomware data protection. To ensure a truly robust defence, make sure you also air-gap your data to a separate data centre.
Famously, in the case of an embarrassing ransomware attack at the University of California San Francisco in June this year, the uni had a data protection setup in place that was both immutable and not accessible over the network. However, it didn’t actually use it on the affected systems. This led the institution to cough up a whopping $1.14m in bitcoin to recover the encrypted files after a number of servers within its “School of Medicine IT environment” – presumably holding valuable research – were locked up by criminal hackers.
So if there is an additional protip to be had besides actually having an offsite, airgapped backup system, it is: switch the darned thing on.
Veeam declined to comment on this ransomware attack.
Hitachi Vantara has plumped up the VSP E series arrays with two midrange additions at “aggressive price points”, plus a NAS gateway and a managed service.
Update: E790 and E590 differences added. 10 December 2020.
The company launched the high-end E990 NVMe all-flash array in April. The new E790 and E590 join the E Series range, along with the HNAS 5000 file storage gateway. Hitachi Vantara has opened up Hitachi Virtual Storage as a Service (VSaaS) to cater for customers that don’t want the hassle of managing the hardware.
VSP E790 base chassis.
The E790 and E590 have data access latency of less than 66μs. Hitachi Vantara is not supplying speeds and feeds data beyond this, other than saying the two have active:active controllers inside their 2RU base chassis.
The E790 controller CPU is a 4 x 16-core 2.1GHz Xeon, delivering 6.8 million IOPS and 32GB/sec, while the E590 uses a 4 x 6-core 1.9GHz Xeon outputting 4 million IOPS and 22GB/sec. The two systems have the same maximum internal raw capacity of 361TB, but the E590’s maximum external capacity is 144PB compared to the E790’s 216PB. Neither system can have expansion trays. The E590 is priced below the E790.
The shared base chassis can hold 24 SSDs, with capacities of 1.9TB, 3.8TB, 7.6TB or 15.3TB. RAID levels 5, 6 and 1 are available: specifically RAID6 (6D+2P, 12D+2P, 14D+2P), RAID5 (3D+1P, 4D+1P, 6D+1P, 7D+1P) and RAID1 (2D+2D, 4D+4D). External access can be via 24 x 16 or 32Gbit/s Fibre Channel ports, or 12 x 10Gbit/s iSCSI.
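The D+P notation counts data (D) and parity or mirror (P) elements per RAID group, so the usable fraction of raw capacity is D/(D+P). A minimal sketch of the arithmetic (ours, not Hitachi’s tooling):

```python
# Usable-capacity fraction for the D+P RAID notation: D data elements
# and P parity (or mirror) elements per RAID group.
def usable_fraction(d: int, p: int) -> float:
    return d / (d + p)

layouts = {
    "RAID6 14D+2P": (14, 2),
    "RAID6 6D+2P": (6, 2),
    "RAID5 7D+1P": (7, 1),
    "RAID1 2D+2D": (2, 2),  # mirrored pairs: half the raw capacity is usable
}

for name, (d, p) in layouts.items():
    print(f"{name}: {usable_fraction(d, p):.1%} of raw capacity is usable")
# RAID6 14D+2P and RAID5 7D+1P both yield 87.5%; RAID1 2D+2D yields 50.0%.
```

Wider RAID6 groups such as 14D+2P are the efficient choice here: they deliver RAID5-like usable capacity while keeping double-parity protection.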
Hitachi said the array increases effective capacity courtesy of data reduction software, and provides a 4:1 effective capacity guarantee. The E Series incorporates embedded management software to speed installation and the provisioning of storage to applications. The arrays can also be managed by Ops Center, Hitachi’s enterprise-grade management software.
There are three series of products in Hitachi Vantara’s VSP (Virtual Storage Platform) line, all running Hitachi’s SVOS RF software:
E Series all-flash NVMe and SAS drive arrays with up to 21 million IOPS and latency down to 70μs
F Series all-flash SAS arrays with 600,000 to 4.8 million IOPS
G Series hybrid flash/disk arrays with up to 4.8 million IOPS.
Managed VSP service
According to the company, VSP STaaS provides predictable rates for easy budgeting and a fast self-service console for users to adapt the array to changing needs. Hitachi STaaS is available through resellers, and managed VSP arrays can be located on-premises or in a co-location centre.
Hitachi Vantara STaaS provisioning screen.
It has a pay-as-you-go scheme with a four-hour service activation period when a user requests more capacity. The array is delivered in an over-provisioned state to allow for growth.
E Series competition
Lacking speeds and feeds data from Hitachi, we are unable to position the new arrays precisely. But we think the E Series arrays will line up against Dell’s PowerMax and PowerStore arrays, NetApp’s ONTAP all-flash arrays, HPE’s Primera and Nimble arrays, and IBM’s FlashSystem arrays. These arrays typically support unified file and block workloads without needing an additional file gateway system.
Hitachi’s gateway enables file performance to scale separately from block performance.
Pure Storage has introduced a managed storage service delivered via partners, with four block tiers and two unified file and object tiers in its service catalogue. HPE offers a managed service in its GreenLake portfolio, and Zadara also has a managed array service.
Getting NASty with a file gateway
The HNAS 5000 family of clusterable NAS gateway systems front-end a VSP array and store files on it, with CIFS and NFS access supported. Using rules and policies, files can be offloaded to any type of remote target including public clouds.
The controllers are accelerated with FPGAs to deliver faster file services, and have twice the throughput of the previous HNAS system. That would be the HNAS 4000 series, announced in April 2016. It offered between two and eight nodes with maximum usable capacity ranging from 4PB to 16PB.
Hitachi Vantara supplied HNAS model details in a datasheet.
File system metadata is stored on the fastest available storage. Metadata for a small file, called an onode, is packed together with others: up to eight onodes or files can be packed into one filesystem block, resulting in space savings, latency reduction and fewer writes to the storage.
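To illustrate the space-saving idea, here is a toy sketch of our own. It is not Hitachi’s implementation; the 4KB block and 512-byte onode sizes are assumptions chosen so that eight onodes fit per block, matching the claim above:

```python
import math

BLOCK_SIZE = 4096   # assumed filesystem block size, bytes
ONODE_SIZE = 512    # assumed small-file metadata record (onode), bytes
ONODES_PER_BLOCK = BLOCK_SIZE // ONODE_SIZE  # 8, matching the claim above

def blocks_needed(n_onodes: int, packed: bool) -> int:
    # Unpacked: each onode occupies its own block.
    # Packed: up to ONODES_PER_BLOCK onodes share one block.
    return math.ceil(n_onodes / ONODES_PER_BLOCK) if packed else n_onodes

n = 1_000_000  # a million small files
print(blocks_needed(n, packed=False))  # 1,000,000 blocks (and writes)
print(blocks_needed(n, packed=True))   # 125,000 blocks: 8x fewer
```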
The HNAS systems support file and block workloads, with deduplication (an FPGA does the hashing) and compression to increase effective storage capacity. They also support Kubernetes and Ansible.
HNAS offers linked, writable snapshot clones. With linked clones, thousands – even millions – of copies of data sets can be created very rapidly while using near-zero extra capacity.
The file system namespace, with a cloud extension, supports up to 840 billion objects. The “cloud” means S3 targets such as AWS, Azure and IBM COS, as well as Hitachi’s HCP object system and IBM’s on-premises Cloud Object Storage. Files are moved to the cloud according to settable policies. Object replication with throughput throttling improves the system’s quality of service (QoS).
Hitachi says little to no administration, configuration or tuning is required, and scheduling is not necessary. The system supports fully synchronous active-active clustering across distances of up to 500km with automated takeover, when implemented with global-active device metro clustering. This feature ensures continuous operations with nonstop data access.
A cluster rolling upgrade process enables migration from HNAS 3000 and 4000 systems to the HNAS 5000.
Anything Micron can do SK hynix thinks it can do better. And so the company has announced its own 176-layer 3D NAND technology, sending 512Gbit TLC (3bits/cell) die samples to SSD controller companies in recent weeks.
Micron began volume shipments of 176L NAND last month and so has several months’ advantage over SK hynix and Samsung. Samsung has said it will probably ship 176L NAND in the second quarter of next year.
SK hynix said it will ship 176L SSDs for mobile products by the middle of next year, with maximum read speed improved by 70 per cent and maximum write speed by 35 per cent, presumably compared with its 128L product.
Consumer and enterprise SSDs will follow later in 2021. SK hynix will also develop a 1Tbit die using the 176L technology. B&F thinks this might use QLC (4bits/cell) technology to get 33 per cent of the capacity increase, with the remainder coming from more cells in each of the 176 layers.
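The rough arithmetic behind that guess (ours, not SK hynix’s): going from TLC to QLC adds a fourth bit per cell, a 4/3 increase, so the rest of the doubling from 512Gbit to 1Tbit would have to come from more cells per layer:

```python
tlc_die_gbit = 512                 # today's 176L TLC die
qlc_gain = 4 / 3                   # 4 bits/cell vs 3 bits/cell: +33 per cent
target_gbit = 1024                 # a 1Tbit die

after_qlc = tlc_die_gbit * qlc_gain     # ~683Gbit from the QLC move alone
cells_factor = target_gbit / after_qlc  # remaining growth needed
print(f"QLC gets the die to ~{after_qlc:.0f}Gbit; "
      f"cells per layer must grow {cells_factor:.1f}x to reach 1Tbit")
```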
Jung Dal Choi
Jung Dal Choi, head of NAND development at SK hynix, put out a statement: “NAND flash industries are striving to improve technologies for high integration and maximum productivity at the same time. SK hynix, as a pioneer of 4D NAND, will lead the NAND flash market with the industry’s highest productivity and technology.”
To justify this claim, SK hynix is bigging up the performance and technology advantages of its 176L product. For example, the company claims:
The industry’s best number of chips per wafer
Bit productivity improved by 35 per cent compared to SK hynix’s 128L technology
Cell read speed increased by 20 per cent over 128L NAND by adopting 2-division cell array selection technology
Data transfer speed also has been improved by 33 per cent to 1.6Gbit/s
By ‘4D NAND’, SK hynix is referring to the placement of the peripheral CMOS logic for the die underneath the stacks of cells, just like Micron. This helps to decrease the die’s footprint and SK hynix dubs the technology ‘Peripheral circuits Under Cell’ (PUC).
SK hynix 176 layer dies.
SK hynix’s 176L technology is based on charge trap cells, like Micron, and the cell stack is actually based on two separate 96-layer strings, again like Micron’s 176L product.
The 2-division cell array selection technology logically divides a cell array into two to make reads faster. The halving means the cell has a lower resistance, which shortens the time needed for a sensing voltage to be applied and so improves the read speed.
The claimed die-level performance and technology advantages are all very well. But how will they translate into speed and cost benefits for SSDs? Let the benchmarks begin!
The Tape Storage Council has released the TSC 2020 Outlook, containing its thoughts on the year and projections for the future.
Update: detail added on LTO-7 Type M and IBM and Oracle enterprise tape shipments; 8 Dec 2020.
To summarise, the industry body reckons magnetic tape storage has a good future: it is great for archive data, beating disk and the public cloud. But dig a little deeper and the number of tape users will likely shrink.
The TSC report cites the annual tape media shipment report for 2019, released by the three LTO Program technology providers – HPE, IBM and Quantum. Capacity shipments rose to record amounts in 2019 and more than 225 million LTO cartridges and 4.4 million drives have shipped since LTO’s introduction. Here is the LTO providers chart.
The report showed a record 114,079 PB of total LTO tape capacity (compressed) shipped in 2019. Aggregate capacities in 2018 and 2019 do not include the enhanced capacity of LTO-7 Type M shipments.
The TSC explained that, because LTO-7 Type M’s higher 9.0TB capacity is implemented at the end-user level, the exact number of LTO-7 cartridges that have been initialised to deliver 9.0TB is unknown to the technology provider companies. Rather than estimate this, all LTO-7 cartridge shipments are assumed to be at 6.0TB capacity.
Capacity shipments for IBM and Oracle enterprise tape are not included, as neither company reports them.
Cheaper than disk
The TSC report cites an Information Storage Industry Consortium (INSIC) roadmap, which puts the current areal density scaling rate of HDDs at about 16 per cent CAGR and tape’s at 33 per cent CAGR.
INSIC is basically a tape drive and library suppliers’ group, with Oracle, owner of the proprietary StorageTek tape format, joining HPE, IBM and Quantum. The INSIC roadmap indicates the current cost advantage of tape systems over HDDs will grow wider.
Tape’s areal density is much lower than disk’s, but a cartridge contains a much larger total recording surface area, about 1,000 times the area of a 3.5-inch disk platter. Tape doesn’t need as high an areal density as disk to maintain its $/TB advantage.
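Back-of-the-envelope arithmetic supports the roughly 1,000x figure. This sketch is ours, using assumed LTO-class dimensions (about 960m of half-inch tape per cartridge) and a 95mm platter recorded on both sides:

```python
import math

tape_length_m = 960            # assumed tape length in an LTO-class cartridge
tape_width_m = 0.0127          # half-inch tape

platter_diameter_m = 0.095     # ~95mm platter in a 3.5-inch HDD
sides = 2                      # both platter surfaces are recorded

tape_area = tape_length_m * tape_width_m                        # ~12.2 m^2
platter_area = sides * math.pi * (platter_diameter_m / 2) ** 2  # ~0.014 m^2

print(f"Cartridge/platter area ratio: {tape_area / platter_area:.0f}x")
# ~860x for a single platter: the same order of magnitude as the TSC's 1,000x
```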
Cloud archives use tape
The TSC outlook features charts and comparisons showing how well tape compares to disk and to the costs of public cloud archive stores. It also points out that several public cloud deep archives, such as AWS’s Glacier, use tape.
The larger the archive data set, the greater tape’s cost-saving advantage becomes. Tape is becoming storage for hyperscalers and organisations with large archive data sets, while fading into history in other enterprise data centres.
The Commitments
The TSC cites an Enterprise Strategy Group report, 2020 Tape Landscape, which contains this chart looking at tape users’ commitment to tape:
ESG found a majority increasing their commitment to tape, but 39 per cent were not. A second chart looks at the intentions of the uncommitted:
A majority of these, 69 per cent, intend to replace their tape storage within three years. This indicates that tape storage ownership will become concentrated in fewer customers’ hands over that period.
You’re not in a hurry, are you?
Tape’s two big advantages are low cost and longevity. The big drawback is its lengthy file access time as a tape cartridge has to be mounted in a drive and the reel unwound. This two-stage operation can take many seconds, an age for today’s users accustomed to instant access to data.
Disk offers millisecond-level access but is more expensive, while SSDs offer microsecond-level access but are more expensive still. Neither has tape’s reliability over time, according to the TSC.
In this week’s digest we peek into in-memory technology, then look at application performance management in a hybrid and multi-cloud world and then present you with some NAND shipment data.
BTW, there is no Kubernetes storage news this week – we assume this is a temporary halt.
In-memory InsightEdge
GigaSpaces has launched the InsightEdge portfolio of in-memory computing products, which include AIOps functionality. The range comprises:
InsightEdge Smart Cache, an SQL-compatible distributed caching tier optimised for rapidly changing data and multi-criteria queries
InsightEdge Smart ODS (Operational Data Store), a distributed integration hub that aggregates and offloads data from multiple back-end systems of record and data stores, on-premises and in the cloud
InsightEdge Smart Augmented Transactions, which unifies streaming, real-time transactional (ACID-compliant) and analytical processing for insights.
GigaSpaces said the products feature in-memory speed, colocation of business logic and data in memory, secondary indexing and server-side aggregations.
Virtana’s AIOps platform goes multi-cloud
Virtana has announced that its application workload optimisation software now supports on-premises and multiple public cloud environments. The AIOps software features “know before you go” technology, providing intelligent observability into which workloads to migrate. It also helps ensure that unexpected costs and performance degradation are avoided once workloads are operating in the cloud.
Virtana claims the hybrid infrastructure optimisation capabilities of the Virtana Platform can deliver a return on investment (ROI) of as much as 145 per cent over a three-year period. Supported clouds include AWS, Azure, Google Cloud Platform, Oracle, and VMware on AWS.
Over time, all Virtana standalone products will be incorporated into the platform. Multiple deployment options will include SaaS, managed service, and on-premises.
Kash Shaikh, CEO of Virtana, said in a statement: “We are still in the early innings of public cloud adoption. Many [enterprises] tell us they moved some of the workloads back. When we ask why, enterprises say they do not know where to start, lack tools and visibility that can help them plan and execute migrations, and find managing workloads across hybrid environments complex.”
He says the Virtana Platform provides the tools to migrate applications, plus information about which ones to migrate and which to leave. Insights about workloads include application dependencies, how they will perform in various cloud environments, and their underlying IT infrastructure requirements. Users get real-time application visibility and tools for taking action.
NAND capacity shipment growth and price fall rates slow
The NAND exabyte shipment growth rate is slowing and average selling price (ASP) falls are decelerating, according to the Semiconductor Industry Association, as reported by Wells Fargo senior analyst Aaron Rakers to his subscribers. Two of his charts show this:
The Compound Annual Growth Rate (CAGR) is slowing but a 36 per cent CAGR is still healthy. Next up, a price chart:
N.B. The log chart above is measured in $/GB, which makes it look as if the ASP is bottoming out near zero. But if the unit of measurement were $/TB ($0.01/GB is $10/TB, for instance) we could see that continued price declines are still feasible.
To conclude, shipments are rising but not as fast as before. Prices are falling but not as fast as before. The NAND industry is maturing.
Shorts
Micron reported a two-hour-plus production halt due to a power outage at its wafer fab in Taoyuan, Taiwan, on December 3. UPS facilities couldn’t cover the lost power. The power supply has been restored and production will return to normal over the next few days, meaning up to a week of lost production. Wells Fargo analyst Aaron Rakers said: “Fab outages can be disruptive as it can result in the scrapping of in-process wafers, as well as equipment damage (e.g., furnace tubes, pumps, etc.) and time needed for equipment cleaning.” The loss could be ~3 per cent of total industry monthly wafer capacity and ~12 per cent of Micron’s total estimated DRAM wafer capacity.
The U.S. Department of Defense has announced the latest sanctions against four Chinese companies. This includes SMIC, the leading foundry in China, which is now placed on the DoD list of Chinese military companies. The move will threaten SMIC’s upstream supply of semiconductor equipment and materials, its R&D of advanced processes, and China’s attempt at semiconductor independence.
Seagate shipped its first HAMR-based 20TB nearline HDDs for revenue in late November, but made no public announcement; the news came out at an investor conference. Seagate also said it can make 20TB HDDs with PMR technology.
SUSE has closed its acquisition of Rancher Labs, bringing together Linux, a Kubernetes management platform, and edge capabilities. SUSE reckons it and Rancher’s combined strengths position the company to win the $5bn embedded market, making it a threat to Red Hat’s market share. SUSE says its competitors’ lack of independence means their customers are locked into heavy and monolithic tech stacks – whereas SUSE and Rancher can take a more modular approach and adapt to customers’ changing needs.
Veeam Software has announced general availability of the latest version of Veeam Backup for Microsoft Office 365, its fastest-growing product. V5.0 adds purpose-built backup and recovery for Microsoft Teams, so users can find and restore Teams data, including entire groups, specific channels and settings.
With Intel selling its NAND and SSD business to SK hynix, what happens to Rob Crooke, GM of the now-being-dismantled Intel Non-volatile Memory Solutions Group (NSG)? On completion of the deal (probably in the second half of 2021), he will join SK hynix, where he will manage the NAND division that SK hynix is acquiring from Intel. In the meantime he continues as Intel GM of NSG, which now holds Intel NAND products only. Alper Ilkbahar is GM of the Optane Group.
Acronis has released the 2020 Acronis Cyberthreats Report, a review of the current threat landscape and projections for the coming year. It predicts 2021 will be the “year of extortion”. Remote workers and managed service providers will be targeted by cyberattackers, and data exfiltration will outpace data encryption.
Materialize, a streaming SQL database startup, has raised $32m in B-series funding to build its engineering team, prepare the business for growth and extend product rollout. Kleiner Perkins led the round and Lightspeed Venture Partners also participated in the B round after it led the earlier $8m Series A round in 2019. Additional investors include executives from Cockroach Labs, Datadog and Rubrik.
Iguazio says it is now AWS Outposts-ready. The startup makes data science platform software to accelerate storage IO workflows for machine learning. AWS SageMaker customers can use it to develop and deploy complex ML pipelines in weeks instead of months, the company says.
Pure Storage‘s Portworx has also achieved the AWS Outposts Ready designation.
Spectra Logic and Aveco have announced a fully integrated media asset management (MAM) and archiving system, based on the Aveco ASTRA MAM and Spectra BlackPearl Converged Storage System. Users can manage and protect their content from ingest to production to archive and distribution, with lifecycle management, automated tiering and cloud connectivity.
Varada, a small Israel-based startup, will make its Data Virtualization Platform available from December 8. It offers simplified data ops management and the ability to prioritise query acceleration by each query’s business value. The platform allows Varada to attack Snowflake and Athena. Compared to Snowflake, Varada said it eliminates lock-in and does not require users to move their data into a proprietary format. Varada also claims it is much less expensive to operate at scale.
Microsoft has released Azure Purview, a tool to uncover hidden data silos wherever they live – on-premises, across multiple public clouds or within SaaS applications.
Julia White, Corporate VP for Azure, blogged: “For decades, specialised technologies like data warehouses and data lakes have helped us collect and analyse data of all sizes and formats. But in doing so, they often created niches of expertise and specialised technology in the process.”
Julia White
She said: “This is the paradox of analytics: the more we apply new technology to integrate and analyse data, the more silos we can create.”
Microsoft’s Alym Rayani, GM for Compliance Marketing, wrote in a blog: “To truly get the insights you need, while keeping up with compliance requirements, you need to know what data you have, where it resides, and how to govern it. For most organisations, this creates arduous ongoing challenges.” Purview reduces the arduousness factor.
Azure Purview is a unified data governance service with automated metadata scanning. Users can find and classify data using built-in and custom classifiers, plus sensitivity labels – Public, General, Confidential and Highly Confidential markers. They can also create a business term glossary.
Discovered data goes into the Purview Data Map. A Purview Data Catalog enables users to search the Map for particular data, understand the underlying sensitivity, and see how data is being used across the organisation with data lineage (knowing where it came from).
Purview Catalog
Purview contains 100 AI classifiers that automatically look for personally identifiable information and sensitive data. The app also pinpoints out-of-compliance data.
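As a purely illustrative toy, the sketch below shows the simplest form of automated data classification: pattern-matching for PII and mapping hits to a sensitivity label. It is our own example; Purview’s classifiers are AI-based and none of these names come from the Purview API:

```python
import re

# Toy PII classifier: regex-only, for illustration. Purview's real
# classifiers are AI-based; these patterns and labels are invented here.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> set:
    """Return the set of PII categories whose patterns match the text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

hits = classify("Contact jo@example.com, SSN 123-45-6789")
print(hits)  # finds 'email' and 'us_ssn'; both might map to 'Confidential'
```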
The existing Microsoft Information Protection software uses sensitivity labels to classify data, helping to keep it protected and to prevent data loss. This applies on-premises or in the cloud, across Microsoft 365 Apps, services such as Microsoft Teams, SharePoint, Exchange and Power BI, and third-party SaaS applications.
Azure Purview extends the sensitivity label approach to a broader range of data sources such as SQL Server, SAP, Teradata, Azure Data Services and Amazon S3, thus helping to minimise compliance risk.
Using Purview, admins can scan their Power BI environment and Azure Synapse Analytics workspaces with a few clicks. All discovered assets and lineage are entered into the Data Map. Purview can connect to Azure Data Factory instances to automatically collect data integration lineage. It can then determine which analytics and reports already exist, avoiding any re-invention of the wheel.
B&F thinks Purview’s ultimate usefulness will depend on how many data sources it can explore and interrogate. The fewer the black holes in an enterprise’s data universe the more effective Purview’s data governance will be.
Azure Purview is free to use in preview mode until January 1, 2021.