NetApp is adding fast NVMe-over-TCP access to ONTAP, providing accelerated storage access for its iSCSI-using FAS and AFF array customers, and for iSCSI users generally.
NVMe-over-TCP carries NVMe commands and data across a standard Ethernet TCP/IP link. The idea of using NVMe in this way, extending the PCIe bus across a network fabric with NVMe-oF, was first developed for lossless Ethernet using RDMA (RoCE), and then extended to Fibre Channel, which NetApp already supports.
Eric Burgener, Research Vice President, Infrastructure Systems, Platforms and Technologies Group at IDC, provided a statement: “With faster-than-expected adoption of NVMe-based all-flash arrays in recent years, new technologies like NVMe over Fabrics (NVMe-oF) will continue to fuel the evolution of the enterprise storage industry. NVMe/TCP is expected to be a key technology to drive mainstream market adoption due to its ubiquity and ease of deployment. Because it is based on Ethernet, it doesn’t require new hardware investment. It is particularly attractive for hybrid-cloud deployments.”
NetApp is specifically announcing that the next major release of ONTAP, v9.10.x, will include NVMe/TCP support. Octavian Tanese, NetApp’s SVP for Hybrid Cloud Engineering, tells us there will be an easy upgrade path to NVMe/TCP in this coming release, which, we think, might arrive before the end of the year.
NVMe/TCP is not quite as fast as NVMe over RoCE or FC, but it is way faster than standard iSCSI or Fibre Channel access to SAN data, as a general latency comparison indicates:
iSCSI and Fibre Channel — around 1,000 to 1,500µs;
NVMe/TCP — about 200µs;
NVMe/FC — about 150µs;
NVMe/RoCE — 100–120µs.
NetApp NVMe/TCP graphic.
Because NVMe/TCP uses standard Ethernet, the same cabling that supports iSCSI external storage access can support the radically faster NVMe/TCP access. By adding NVMe/TCP support to ONTAP, existing ONTAP features become available to NVMe/TCP users: data reduction, management, protection, storage efficiency and so forth.
NVMe/TCP will also be supported by ONTAP running in the public cloud, providing an NVMe namespace covering both the on-premises and public cloud environments.
Other supplier support
NVMe/TCP is supported by startups like Lightbits Labs, and also by Kioxia (formerly Toshiba Memory) in its KumoScale product, sold through partners such as Quanta, Supermicro and Tyan.
Startup Infinidat supports NVMe/TCP access to its InfiniBox arrays. Another startup, Pavilion Data, supports NVMe/TCP as well as NVMe over RoCE. Pure Storage said it had NVMe/TCP support on its roadmap back in June last year but nothing has appeared yet.
NetApp looks to be the first major incumbent storage supplier to support NVMe/TCP, ahead of Dell, HPE, Hitachi Vantara, IBM and Pure.
Analysts are predicting Intel’s Optane 3D XPoint memory capacity ships could exceed those of DRAM in 2028.
Update: Jim Handy points added. 7 September 2021.
We have just learned about a report by Coughlin Associates and Objective Analysis called Emerging Memories Take Off, courtesy of Tom Coughlin. The report looks at 3D XPoint, MRAM, ReRAM and other emerging memory technologies and says their revenues could grow to $44 billion by 2031. That’s because they will displace some server DRAM, and also NOR flash and SRAM — either as standalone chips or as embedded memory within ASICs and microcontrollers.
The emerging memory market is set to grow substantially with 3D XPoint revenues reaching $20 billion-plus by 2031, and standalone MRAM and STT-RAM reaching $1.7 billion in revenues by then. The report predicts that the bulk of embedded NOR and SRAM in SoCs will be replaced by embedded ReRAM and MRAM.
A chart shows XPoint capacity ships crossing the 100,000PB level in 2028 and so surpassing DRAM, whose capacity growth is slowing slightly.
Note log scale.
The chart shows XPoint capacity shipped reaching 1,000PB this year, a figure forecast to grow 100-fold to 100,000PB in 2028.
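As a rough sanity check (our arithmetic, assuming smooth growth between those two points), that implies a compound annual growth rate of roughly 93 per cent: CAGR = (100,000PB ÷ 1,000PB)^(1/7) − 1 = 100^(1/7) − 1 ≈ 0.93 over the seven years from 2021 to 2028.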
Jim Handy
Jim Handy.
We asked Jim Handy, the Objective Analysis co-author of the report, how they arrived at their XPoint revenue figure.
He told us: “3D XPoint shouldn’t need a lot of wafers to achieve high revenues. You may recall that, during the technology’s 2015 introduction, Intel and Micron said that it would be “10 times denser than conventional memory” (meaning DRAM). That means that it takes 1/10th as many XPoint wafers as DRAM wafers to make the same number of exabytes.”
He added a caveat, though: “That said, the Objective Analysis 3D XPoint forecast is admittedly optimistic. It’s based on a server forecast that is ordinary enough, then makes assumptions about the acceptance of 3D XPoint DIMMs in those systems (officially known as “Optane DC Persistent Memory Modules”). Optane SSDs are not a big part of the equation, although they are getting more traction in data centers than they did in PCs.
“By the end of the forecast period (2031) we assume that XPoint DIMMs will have penetrated a little over 50 per cent of all servers, and that the majority of the memory in those servers will be Optane DIMMs, with a much smaller DRAM for the really fast work, much like a cache.”
Where will Intel get the manufacturing capacity, given that its Rio Rancho plant, where Optane chips are made, is more of a development fab than a mass-production facility?
Handy tells us: “Given Intel’s manufacturing aspirations, Rio Rancho should be a very tiny portion of the company’s production by 2031. If XPoint volume gets high enough the economies of scale will drive down the costs and make it profitable, which XPoint has not been to date. If it’s profitable, then other companies will be interested in producing it, should Intel choose to source it externally.
“This all depends on Intel’s success in getting major server purchasers to adopt Optane, and on Intel’s willingness to continue to subsidize the technology. Both are difficult to predict.”
This puts Micron’s March 2021 withdrawal from the XPoint market in a new light. Did it have different figures for XPoint capacity growth?
Handy thinks: “I believe that Micron in 2015 expected for the XPoint market to develop faster than it did, and for Optane SSDs to be better accepted than they have been. With the lack of a sufficiently large market, and with the subsequent lack of the economies of scale, Micron had no clear path to profits. It’s unsurprising that the company dropped out of that business.”
It used to be thought that WD was betting big on a MAMR technology change — a big bang, as it were — like the change from longitudinal to perpendicular magnetic recording (PMR). Not so, says Dr Siva Sivaram, WD’s President of Technology and Strategy. Microwave-Assisted Magnetic Recording (MAMR) is part of WD’s energy-assisted perpendicular magnetic recording (ePMR) strategy. There will be a continuous stream of technology advances around ePMR, and MAMR is not being delayed.
Dr. Siva Sivaram.
We were briefed by Dr Sivaram after the OptiNAND news broke — the use of added embedded flash in a disk drive controller to provide NAND storage for drive metadata instead of storing it on disk. In its announcement, WD said: “we expect an ePMR HDD with OptiNAND to reach 50TB [in] the second half of this decade.” Which we took to mean full MAMR wasn’t needed until then.
What is full MAMR? It is surely the use of a write head with a microwave generator beaming microwaves at the bit area under the write head, making it more receptive to the write signal that sets its magnetic polarity. This enables smaller bits, greater areal density and higher disk capacity.
Two recent WD announcements do use energy-assist, but not in this way. The September 2020 18TB UltraStar DC HC550 and DC HC650 drives use ePMR tech, applying an electrical current to the write head to lower the jitter and improve the strength of the write signal. This month’s OptiNAND adds a NAND-enhanced drive controller SoC to the mix, which processes drive metadata in a faster and more granular way, enabling tracks to be placed closer together and so raising capacity in a sample drive from 18TB to 20TB.
Dr Sivaram said: “MAMR is not being pushed away.” The ePMR technology applies to the drive’s data plane, whereas OptiNAND applies to its control plane. MAMR is part of WD’s overall ePMR technology — a series of improvements that electrically improve areal density. According to Dr Sivaram, “This is still on track.”
He says: “ePMR is a large bucket. All aspects of MAMR and HAMR are included within it.” The DC HC550/HC650 announcements referred to generation 1 of WD’s ePMR technology. There will be others. The 50TB ePMR disk prediction for the 2025–2030 period could well involve microwave use.
SMR and OptiNAND
Shingled Magnetic Recording (SMR) media disks could be one of the biggest beneficiaries of OptiNAND technology. In an SMR write event modifying existing data on the drive, a whole block or zone of tracks has to be erased and rewritten with the new data inserted. OptiNAND can make that operation faster, reducing an SMR drive’s write lag and bringing its performance closer to that of conventional drives.
The details were not revealed, but we might envisage that the size of an SMR write zone — the block of tracks treated as an entity — could be reduced, shortening the time needed for a data rewrite operation. A 22 to 24TB SMR/OptiNAND drive could be coming.
Also, OptiNAND means the drive’s control plane can run in parallel with data plane operations; when metadata has to be read from disk, it cannot.
Seagate and Toshiba
Dr Sivaram said: “We will have products across the board with OptiNAND.” WD is at an advantage, he says, because it has HDD and NAND firmware engineers sitting in the same room — because it makes both disk drives and SSDs. Its disk drive competitors, Seagate and Toshiba, do not.
Our thinking is that Seagate and Toshiba will be talking to NAND suppliers, such as Micron, Samsung and SK hynix, and perhaps cheekily Kioxia (WD’s NAND joint-venture partner), about adding their embedded flash to disk drive controllers.
Toshiba already uses the bias current technology to improve its disk write signals with its Flux Control MAMR concept.
In theory Seagate could use similar technology and so gain a capacity jump without going to HAMR — adding a laser heating element to its write head and reformulating the drive recording medium. Will it? That is a big, big question.
Seagate has a US disk drive head bias current optimisation patent — number 6115201 — for determining a maximum magnitude of bias current that can be safely applied to a head of a disc. It was filed in 1998. It’s a relatively small disk drive engineering world and we can imagine Seagate is well up to speed with the technology.
It would not be a surprise if, one, Seagate introduced its own bias current/flux control technology and, two, both it and Toshiba added NAND to their disk controllers to store drive metadata off the disk, process it faster, and raise areal density.
Quantum is building out its credentials as an ADAS (Advanced Driver-Assistance Systems) supplier, reckoning that contributing a reference architecture will help get its storage hardware and software accepted into ADAS workflow systems. DataCore is hiring product and sales management VPs to sustain its momentum, following a string of double-digit growth quarters. SoftIron has a new storage router to enable, it hopes, enterprise-wide adoption of Ceph.
And we have our own string of news bytes to follow that — especially one about Backblaze’s ransomware ecosystem exploration.
Quantum reference architecture for ADAS
File and object storage, management and workflow supplier Quantum announced the release of an end-to-end reference architecture for Advanced Driver-Assistance Systems (ADAS) and Autonomous Driving (AD) systems.
It combines an ultra-fast automotive and mil-spec NVMe edge storage device with StorNext software to capture, manage, and enrich vast quantities of sensor data to help drive the future of autonomous vehicles.
Jamie Lerner, President and CEO, Quantum, said: “Although still relatively nascent, organisations developing autonomous vehicles are at a crossroads. The volume of data being captured is increasing exponentially, presenting an urgent need for speed, capacity and cost-efficiency in the data management lifecycle.”
Quantum ADAS Reference Architecture diagram.
Test vehicles typically capture terabytes of sensor data per hour generated by multiple video cameras, LiDARs, and Radars. ADAS/AD development systems rely on collecting and processing these large amounts of unstructured data to build sophisticated Machine Learning (ML) models and algorithms, requiring intelligent and efficient data management.
The Quantum R6000, with a removable storage canister, is an ultra-fast automotive and mil-spec edge storage device explicitly developed for high-speed data capture in challenging, rugged environments, including cars, trucks, airplanes, and other moving vehicles. StorNext software can help store and direct the data from the R6000 to ADAS/AD workflows.
DataCore exec hires
Software-defined storage supplier DataCore has hired Abhijit Dey as its Chief Product Officer and Gregg Machon to be its VP for Americas Sales. Dey comes from Agari, with time at Druva, Veritas and Symantec before that. Machon’s job history in reverse order is Radiant RFID, Qumulo (VP worldwide channels & OEMs), HPE and Nimble, with SolidFire and EMC before that.
Abhijit Dey (left) and Gregg Machon (right).
DataCore has made a significant investment in R&D, resulting in a more than 40 per cent increase in technical talent over the last two years alone, while modernising software development and testing practices, opening a centre of excellence in Bangalore, India, and a new office in Austin, Texas.
It had its 12th consecutive year of positive cash flow and double-digit growth in net new revenue over the last few quarters. This is a period in which the company has added an average of over 100 net new customers per quarter, with a strong performance in government, healthcare, and CSP (cloud service provider) verticals.
SoftIron’s new storage router
SoftIron, punting itself as the world leader in task-specific appliances for scale-out data centre solutions, announced general availability of its latest HyperDrive Storage Router, the HR61000 — an intelligent services gateway that provides interoperable high-throughput storage transactions for organisations using S3 or legacy protocols such as iSCSI, NFS, and SMB.
It provides gateway services and legacy file and block integration for enterprise applications. Combined with SoftIron’s Ceph-based HyperDrive Storage appliances, organisations can use it to gain virtually limitless storage scalability, while consolidating and simplifying their legacy storage systems management.
HR61000 specifications:
Networking — 2x NICs (100Gbit/sec);
Data resiliency — high availability per service/protocol;
Storage protocols — iSCSI, SMB, NFS, Custom, CephFS;
Management — 1x 1GbE, IPMI, HyperDrive Manager;
Power supply — redundant (dual supplies), 120V–240V, 50Hz–60Hz;
Power consumption — less than 165 watts;
Dimensions — 1 rack unit.
The HR61000 Storage Router is available for POC and purchase today, via either traditional purchase (CAPEX) or as-a-service (OPEX) options.
Shorts
Data protection and cyber-security supplier Acronis is entering into a training partnership with Nuremberg-based qSkills. qSkills will be offering training for Acronis products to partners and end users across EMEA.
A Backblaze blog opens the door on the ransomware economy and its ecosystem of players: developers, organised crime syndicates, brokers, operators, and so on. It is a fascinating read.
Backblaze ransomware ecosystem diagram.
Object storage supplier Cloudian announced record bookings for the first half of its fiscal year ending July 31, increasing 50 per cent over the same period last year. The growth was driven by strength in both reorders from existing customers and sales to new customers. The company now has approximately 650 customers worldwide, up 40 percent over the past year.
Commvault sued Cohesity and Rubrik in April last year. It and Rubrik have now come to an agreement on all outstanding patent litigation proceedings between themselves. Commvault CEO Sanjay Mirchandani writes: “We have reached an amicable settlement that respects our mutual intellectual property and is in the best interest of our company and shareholders.” Will a similar agreement follow between Commvault and Cohesity?
DIGISTOR Citadel encrypted SSD.
Secure data-at-rest (DAR) supplier DIGISTOR and embedded cyber security supplier Cigent Technology announced a tech partnership to expand data security across the entire lifecycle of a storage drive, from initial deployment to end-of-life, for military, defence, and critical infrastructure applications. The effort will combine Cigent’s Dynamic Data Defense Engine (D³E) with DIGISTOR encrypted SSD storage products.
Data lake analysis startup Dremio has launched a global partner network which includes cloud, technology, consulting, and system integration (SI) partners such as AWS, Intel, Microsoft, Tableau, Privacera, dbt Labs, Twingo, InterWorks, and others. Features include a dedicated partner account manager, business planning, one-on-one support, education and enablement, sales and technical training and certification, and joint marketing support to drive growth. The programme also provides substantial discounts, sales incentives and joint marketing funds.
FileCloud ships a cloud-agnostic enterprise file sync, sharing and data governance platform. It has announced its new Compliance Center, which gives US government agencies and organisations the ability to run ITAR-compliant enterprise file share, sync, and endpoint backup solutions with the necessary encryption options. Some key highlights include:
Organizations without sophisticated risk management expertise can run their own compliance solution with necessary encryption options
Automated wizard streamlines compliance to just two clicks, guiding admins through configurations and identifying any missing elements
FileCloud for ITAR provides multi-level data protection through Data Leak Prevention capabilities
Backup target appliance maker ExaGrid has signed up TIM AG as a value-added distributor in the DACH region (Germany, Austria, Switzerland).
HPE has won a $2 billion contract to provide HPC and AI services to the US National Security Agency (NSA). Product will be supplied through the GreenLake subscription business over a ten-year period. There is an HPC-as-a-Service platform based on Apollo and ProLiant servers deployed in a QTS data center and managed by HPE.
Kasten by Veeam, a supplier of Kubernetes Backup, today announced that the CyberPeace Institute has deployed Kasten K10 to protect its Kubernetes applications and reduce the risk of data loss and corruption.
Kingston DataTraveler Max.
Taking advantage of the USB-C interface, Kingston has produced a DataTraveler Max USB 3.2 gen-2 thumb drive. It delivers up to 1000MB/sec read bandwidth and 900MB/sec write bandwidth. Capacities are 256GB, 512GB and 1TB. It weighs just 12g and has a five-year warranty.
Lightbits Labs, which supplies NVMe-optimized, software-defined elastic block storage for private and edge clouds, has been assigned a patent (11,093,408) for “a system and method for optimizing write amplification of non-volatile memory storage media.”
The abstract reads: “A system and a method of managing storage of cached data objects on a non-volatile memory (NVM) computer storage media including at least one NVM storage device, by at least one processor, may include: receiving one or more data objects having respective Time to Live (TTL) values; storing the one or more data objects and respective TTL values at one or more physical block addresses (PBAs) of the storage media; and performing a garbage collection (GC) process on one or more PBAs of the storage media based on at least one TTL value stored at a PBA of the storage media.”
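The patent language is dense. As a rough illustration of the idea (our sketch, not Lightbits code; the data structures and numbers are invented for the example), a TTL-aware garbage collector would prefer to erase blocks whose cached objects have already expired, so it has little live data to rewrite:

```python
import time

# Hypothetical illustration of TTL-aware garbage collection, not Lightbits code.
# Each physical block address (PBA) holds cached objects with their TTL expiry times.
pba_store = {
    0: [{"key": "a", "expires_at": time.time() - 60},    # already expired
        {"key": "b", "expires_at": time.time() + 3600}],
    1: [{"key": "c", "expires_at": time.time() - 10},
        {"key": "d", "expires_at": time.time() - 5}],     # fully expired block
}

def gc_candidates(store, now=None):
    """Rank blocks by how many live (unexpired) objects they still hold.
    Erasing a block with few live objects means rewriting little data,
    which is what lowers write amplification."""
    now = now or time.time()
    scored = []
    for pba, objects in store.items():
        live = [o for o in objects if o["expires_at"] > now]
        scored.append((len(live), pba, live))
    scored.sort(key=lambda t: t[0])   # fewest live objects first
    return scored

for live_count, pba, live in gc_candidates(pba_store):
    print(f"PBA {pba}: {live_count} live objects to relocate before erase")
```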
New Yorker Electronics has announced its release of the new Innodisk industrial-grade DDR5 DRAM modules. The modules comply with all relevant JEDEC standards and are available in 16GB and 32GB capacities at 4800MT/s. The Innodisk DDR5 DRAM has a theoretical maximum transfer speed of 6400MT/s, double the rate of its predecessor, DDR4. In addition, the voltage has been dropped from 1.2V to 1.1V, reducing overall power consumption.
Server-embedded, infrastructure-as-a-service start-up Nebulon has signed up Boston to resell its product.
OWC has announced Jellyfish Manager 2.0, which works directly with Jellyfish servers, specialised shared storage devices that allow multiple post-production video editors to work simultaneously with 4K, 6K, and 8K footage. It integrates with AWS, Backblaze and Wasabi, and includes the most-requested cloud backup services, allowing users to run scheduled backups and, if necessary, recover their data from the cloud.
OwnBackup, which supplies a cloud data protection platform, announced the acquisition of RevCult, a California-based software company that provides Salesforce security and governance solutions, often known as SaaS Security Posture Management (SSPM). SSPM helps organisations more easily secure data that is growing in volume, velocity and variety, as well as avoid exposure by continuously scanning for and eliminating configuration mistakes and mismanaged permissions, which are the top causes of cloud security failures.
Multi-cloud Kubernetes-as-a-Service supplier Platform9 has joined Intel’s Open Retail Initiative (ORI), whose mission is to enable retail transformation using open source, edge/IoT, and ISV ecosystem applications.
Storage tester SANBlaze announced its SBExpress Version 8.3/10.3 software release, which provides NVMe SSD manufacturers the ability to test PCIe-based NVMe devices as well as NVMe-oF (NVMe over Fabrics) devices. Comprehensive test suites are included for complete verification and compliance of ZNS, VDM, TCG, OPAL/Ruby and T10/DIF specifications for NVMe devices. Enhanced Python and XML APIs provide access to all tests and features, enabling integration of SANBlaze SBExpress NVMe systems into existing test infrastructure — all fully compatible with SANBlaze’s upcoming PCIe Gen 5 products.
The SNIA tells the world that the latest revision of the SNIA Linear Tape File System (LTFS) Format Specification, v2.5.1, has been adopted as the international standard ISO/IEC 20919:2021 through collaboration between SNIA and ISO. The LTFS Format Specification defines a self-describing data structure on tape for long-term, low-cost data retention, with the benefit of data portability between different systems and different sites using tape. The changes from the previous ISO/IEC 20919:2016 add improved storage efficiency through incremental index recording, and support for a wider variety of characters in file names and extended attributes.
Storage SW startup StorONE has signed a reseller agreement with Virtual Graffiti, a California-based provider of network infrastructure systems. The agreement covers the entire StorONE product portfolio, as well as a series of next-generation hybrid storage systems built on Seagate hardware, that deliver high performance and high capacity at affordable prices.
Data integrity and integration supplier Talend has been named by Gartner, for the sixth consecutive time, as a Leader in the August 2021 Gartner Magic Quadrant for Data Integration Tools. For a complimentary copy of the Gartner report, click here.
Cloud storage supplier Wasabi has signed an EMEA-wide distribution contract with Exclusive Networks, a cybersecurity specialist, which has its X-OD online delivery channel. Denis Ferrand-Ajchenbaum, VP Global Vendor Alliances and Business Development at Exclusive Networks, said: “Enterprise customers are budgeting more and more for their storage needs with public cloud providers like AWS, Azure, GCP and others, and frequently getting stung by extra charges for egress and API requests. Wasabi makes consumption easier, simpler and cheaper, and we at Exclusive are delighted to be able to offer EMEA partners the opportunity to enjoy enhanced benefits.”
ReRAM developer Weebit Nano has expanded its partnership with CEA-Leti, the French research institute. As part of the agreement, Weebit will incorporate additional IP licensed from CEA-Leti into its ReRAM offerings, further improving technical parameters such as endurance, retention and robustness. Tests show an order of magnitude improvement in array-level endurance, and a 2x increase in data retention under the same conditions compared to previous results. In addition, the technology will make it possible for Weebit to address new high-volume markets such as automotive and smart cards by enabling high-temperature reliability up to 175°C and high-temperature compatibility for wafer-level packaging.
Digitimes has reported China’s YMTC is experiencing low yields (30 to 40 per cent) on its 128-layer NAND chips. Overall the NAND industry is moving to 162–176 layer NAND, leaving YMTC behind. Wells Fargo analyst Aaron Rakers notes YMTC may not achieve its capacity plans until the second half of 2022 given the lower yields thus far, while production may reach 80–85k wafers per month by the end of 2021.
Virtual storage array supplier Zadara says it’s getting good traction with its recently launched Federated Edge programme. This is a fully managed, distributed cloud architecture sold through a global network of MHSPs. “We see a future where there is a Federated Edge Cloud in every city in the entire world, hosted by an MHSP, allowing edge customers to deploy workloads at sub-five milliseconds no matter where they are,” said Nelson Nahum, CEO, Zadara.
Amazon held a Storage Day on September 2 and announced a whole raft of new features for files, objects, blocks, file/object transfer, and backup.
They are aimed at lowering costs through tiering data to cheaper storage classes, simplifying access, automating data movements and verifying backup status. There is a list here, with — for us — a NetApp deal and file/object transfer facility being the highlights.
An AWS blog by senior developer advocate Marcia Villalba lays out the list of announcements.
File
The first and main one is FSx for NetApp ONTAP which we covered here and which provides ONTAP as a native managed service on AWS.
The second file announcement adds intelligent tiering to Amazon’s Elastic File System (EFS). This is similar to S3 tiering, with tiers cost- and performance-optimised on the basis of file access patterns. AWS customer Capital One is using this to get lower-cost options for its analytics workloads.
If an AWS user has a file that is not used for a period of time, EFS Intelligent Tiering will move it to the Infrequent Access (IA) storage class. If the file is accessed again, Intelligent Tiering will automatically move it back to the Standard storage class.
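Lifecycle management is set per file system. A minimal boto3 sketch (the file system ID and the 30-day threshold are our placeholder choices, not part of the announcement) would look something like this:

```python
import boto3

# Sketch only: fs-0123456789abcdef0 is a placeholder file system ID.
efs = boto3.client("efs")

efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",
    LifecyclePolicies=[
        # Move files untouched for 30 days to the Infrequent Access class...
        {"TransitionToIA": "AFTER_30_DAYS"},
        # ...and move them back to Standard on their next access.
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)
```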
File Transfer Family
A third file announcement is a completely new service. AWS Transfer Family Managed Workflows is an onramp/offramp into AWS that automates file and object transfers via SFTP (Secure Shell (SSH) File Transfer Protocol), FTPS (File Transfer Protocol over SSL) and FTP, into and out of S3 or EFS. The SCP, HTTPS and AS2 transfer protocols are not supported.
AWS Storage Day screen grab.
Villalba writes: “Without using Transfer Family, you have to host and manage your own file transfer service which requires you to invest in operating and managing infrastructure, patching servers, monitoring for uptime and availability, and building one-off mechanisms to provision users and audit their activity.” The Transfer Family is a fully managed service to accomplish this.
It can scan files for malware, personal identifying information and anomalies with customised and auto-triggered file upload workflows. Errors in processing files can be automatically handled with failsafe modes as well. AWS says all this can be done with low-code automation.
AWS Transfer Family Managed Workflows lets users configure all the necessary tasks at once so that tasks can automatically run in the background. Read a Transfer Family FAQ to find out more.
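As a rough sketch of what that looks like in practice (our example, not AWS documentation; the workflow ID and role ARN are placeholders, and parameter names should be checked against the current SDK), an SFTP endpoint backed by S3 with an on-upload workflow could be created with boto3:

```python
import boto3

# Sketch only: the workflow ID and execution role ARN below are placeholders.
transfer = boto3.client("transfer")

server = transfer.create_server(
    Protocols=["SFTP"],                      # SFTP, FTPS and FTP are supported
    Domain="S3",                             # uploads land in S3 (EFS is the other option)
    IdentityProviderType="SERVICE_MANAGED",
    WorkflowDetails={
        "OnUpload": [{
            "WorkflowId": "w-0123456789abcdef0",   # a pre-created managed workflow
            "ExecutionRole": "arn:aws:iam::123456789012:role/transfer-workflow-role",
        }]
    },
)
print(server["ServerId"])
```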
Object
The S3 Intelligent-Tiering storage class has also been improved:
No monitoring and automation charges for small objects;
No need to analyse object sizes;
No minimum storage duration for objects;
No need to analyse an object’s expected life.
According to Villalba, “Now that there is no monitoring and automation charge for small objects and no minimum storage duration, you can use the S3 Intelligent-Tiering storage class by default for all your workloads with unknown or changing access patterns.”
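One way to opt an object in is simply to choose the storage class at write time. A minimal boto3 sketch (bucket, key and body are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket and key; INTELLIGENT_TIERING is a standard S3 storage class value.
s3.put_object(
    Bucket="example-bucket",
    Key="logs/2021/09/07/events.json",
    Body=b'{"event": "example"}',
    StorageClass="INTELLIGENT_TIERING",
)
```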
S3 Multi-Region Access Points provide a global endpoint in front of buckets in multiple AWS regions. They work across multiple AWS Regions to provide better performance and resiliency. This feature dynamically routes requests over AWS’s network, to the lowest latency copy of your data, increasing read and write performance by up to a claimed 60 per cent, and providing operational resiliency.
We understand that S3 Multi-Region Access Points rely on S3 Cross Region Replication to replicate the data between the buckets in the Regions chosen by a customer. The customer selects which data is replicated to which bucket. There are replication templates available to help simplify applying replication rules to buckets.
Villalba blogs: “You can now build multi-region applications without adding complexity to your applications, with the same system architecture as if you were using a single AWS Region.”
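Applications address a Multi-Region Access Point by its ARN rather than a bucket name. A hedged boto3 sketch (the account ID, alias and key are placeholders; MRAP requests are signed with SigV4A, which in boto3 needs the AWS CRT extras installed):

```python
import boto3

# Sketch only: account ID and MRAP alias below are placeholders.
# SigV4A signing for Multi-Region Access Points requires the CRT extras,
# e.g. pip install "botocore[crt]".
s3 = boto3.client("s3")

mrap_arn = "arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap"

obj = s3.get_object(Bucket=mrap_arn, Key="reports/latest.csv")
print(obj["ContentLength"])
```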
Block
EBS direct API snapshots now support any volume up to 64TB, increased from 16TB and equal to the size of the largest EBS io2 Block Express volume. Snapshots can be recovered to EBS io2 Block Express volumes for protection and for test and dev.
AWS says it has built the first SAN for the cloud, with io2 Block Express volumes providing hundreds of thousands of IOPS at sub-millisecond latency and five-nines durability. It’s claimed to be good for SAP and Oracle ERP, SharePoint, MySQL, SQL Server, SAP HANA, Oracle DB, and NoSQL databases such as Cassandra, MongoDB and CouchDB.
AWS Storage Day screen grab.
Backup
AWS Backup is a fully managed service to initiate policy-driven backups and restores of AWS applications. AWS Backup Audit Manager provides customisable controls and parameters, like backup frequency or retention period, for AWS backups. It provides evidence of backup compliance for data governance, continuously tracks AWS Backup activities, audits backup practices, and generates audit reports.
The FSx for ONTAP facility will enable NetApp customers to use AWS much more easily. The Transfer Family will enable other file-access and object-access customers to do so as well. File tiering will lower the cost of longer-term file storage, and the S3 Intelligent-Tiering restriction removals will help with storing lots of smaller objects.
The upping of EBS snapshot capacity to 64TB is welcome as is the Backup Audit Manager. Altogether this set of announcements should help AWS to make progress in storing more of the world’s data.
HPE revenues rose just one per cent in its third fiscal 2021 quarter, ended July 31, with supply constraints hitting server shipments, but underlying overall order growth bodes well for the future.
Revenues of $6.9 billion were close to the year-ago $6.82 billion, but profit was impressively higher: $392 million vs $9 million a year ago.
HPE CEO and President Antonio Neri put a bright and shining face on the quarter: “We delivered a very impressive Q3 performance, marked by strong order growth, expanded margins and record free cash flow. I am pleased to see how our differentiated portfolio is resonating with the market, and our edge-to-cloud strategy is driving improved momentum across our businesses.”
Note the steady rise in profits from Q4 FY20 through to Q3 FY21. That’s impressive business control.
Neri was impressed by order growth, saying demand was strengthening, with orders up strong double digits from the prior-year period and up 11 per cent year-to-date. The subscription shift is progressing well, with the annualised revenue run-rate (ARR) reaching $705 million, up 33 per cent from the prior-year period. HPE as-a-service orders were up 46 per cent year-on-year. Orders are not being cancelled due to supply constraint-caused delivery delays.
CFO Tarek Robbiati doesn’t see the supply constraints “ending before the first half of calendar year ’22. So, we just have to navigate this as the capacity of all our manufacturing partners is not back to pre-pandemic levels and that will still take a good two to three quarters.”
Financial summary:
Gross margin — 34.5 per cent, up 420 basis points from a year ago;
Diluted EPS — $0.29 vs $0.01 a year ago;
Cash flow from operations — $1.13 billion, down $342 million from a year ago;
Free cash flow — $526 million, down $398 million on the year.
The results were good enough for EVP and CFO Tarek Robbiati to lift the outlook: “We are once again raising our full-year guidance to reflect the continued momentum in the demand environment and our strong execution. This marks the fourth increase in our outlook since our Securities Analyst Meeting in October 2020.”
HPE is also resuming stock repurchases.
Segment results
HPE’s core compute segment brought in $3.1 billion, which was nine per cent less than the year ago’s $3.4 billion. That $300 million difference depressed overall revenues, causing the lethargic one per cent overall growth rate.
The Intelligent Edge segment (switching, WLAN, Aruba SaaS) was truly impressive, featuring 27 per cent year-on-year growth to $867 million. High Performance Computing and Mission-Critical Systems revenues rose 11 per cent to $741 million and Storage also followed this upwards pattern with four per cent growth to $1.2 billion. This was the second successive quarter of storage growth.
The storage growth was better than Dell’s one per cent decline, but not quite as good as NetApp’s 12 per cent rise or Pure’s 23 per cent jump.
There was strength in Nimble, up ten per cent from the prior-year period when adjusted for currency, with strong momentum in dHCI growing double-digits.
All-flash Arrays (AFA) grew over 30 per cent from the prior-year period led by Primera, up mid double-digits from the prior-year period. This was the fifth consecutive quarter of year-on-year AFA revenue growth and compares to +20 per cent growth in the previous quarter.
What went wrong with servers? And with cash flow?
The earnings call had the CFO saying: ”In compute, revenue grew four per cent quarter-over-quarter, reflecting normal sequential seasonality despite previously anticipated supply chain tightness. Operating margins of 11.2 per cent were up 190 basis points from the prior year due to disciplined pricing and the right-sizing of the cost structure in this segment.”
Robbiati said this about cash flow: “First of all, we have to continue to make investments in our inventory level to withstand the supply chain constraints that we flagged for several quarters now. Second, in Q4 of fiscal year ’21, … I described a very, very large GreenLake deal that will impact free cash flow in Q4, but that will generate substantial ARR revenue in subsequent quarters.
“This is a deal in several hundred millions of dollars that we have not announced yet, but it is already something that we are financing, and this is affecting, therefore, free cash flow already as of Q4 of fiscal year ’21. Thirdly, we are still peaking on restructuring cost in fiscal year ’21 and we feel very positive about our cost optimisation and resource allocation program, which will wind down at the end of fiscal year ’22.”
Turning to servers he said units were flat quarter over quarter reflecting supply constraints but also prices were up: “AUP (Average Unit Price) was up mid to high single digits quarter-on-quarter, reflecting pass-through of commodity cost and richer configurations.”
He said this “is translating in this record level of gross margin at 34.7 per cent. Some of our competitors didn’t take that action and it’s down to them and up to them.”
Reading between the lines HPE could have discounted to sell more servers but the supply constraints would work against that tactic, so, with demand strong, HPE stood firm on pricing.
Wells Fargo analyst Aaron Rakers told subscribers: “While we expect some investors to question HPE’s relative Compute performance vs Dell reporting +6 per cent year-on-year growth for Servers and Networking in the July quarter, the company would emphasise a tough compare amid a backlog burn-off post-COVID in the year ago quarter.”
Order book and outlook
In general: “The order book is very, very solid. Antonio underscored this, across the board, our order book is super solid.” Specifically for storage: “Our growth in Storage, which comes with very high-calorie gross margin revenue, is pleasing at three per cent. We took share from some of our main competitors.”
Nimble and Primera have already been mentioned but there was no mention of progress with the recently-announced Alletra arrays.
HPE expects its revenues to continue to grow but didn’t provide a revenue outlook for the fourth quarter or for the full year.
HPE strategy and acceleration
Neri is convinced HPE has the right strategy for the market: “I think our edge-to-cloud vision and strategy is absolutely resonating in the market, because customers need three things: they need secure connectivity in this hybrid world; they need a cloud experience everywhere; and then they need data insights yesterday, in my view; and then they need to be able to consume it as-a-service in an elastic way. We have all the four ingredients and that’s why we are going to accelerate further and faster this strategy because it’s working.”
Neri said: “I am proud of HPE’s performance in Q3 and year-to-date, and the significant progress we have made in becoming the edge-to-cloud company. The momentum we have in the market compel us to move even further and faster, and our ability to transform with increasing speed is imperative. This transformation is my number one priority.”
He added: “This is all about acceleration … and between now and March, you’re going to see a massive acceleration and that’s why it’s my number one priority.” His execs better get used to demands for faster progress.
NetApp and AWS have a strategic deal which has put the ONTAP file system natively into AWS as a managed service with FSx for ONTAP.
Updated 4 Sep 2021 with Octavian Tanese comment.
It is a fully-managed ONTAP file system in AWS, with the whole array of ONTAP APIs, data reduction and data protection features, and joins the existing Amazon FSx for Lustre and FSx for Windows File Server.
AWS’s Wayne Duso, VP for AWS Storage, Edge and Data Governance, said in the AWS Storage Day announcement: “You give up nothing.” That means you can move data sets from on-premises ONTAP environments to FSx for ONTAP in the AWS cloud with no changes. ONTAP features such as thin provisioning, FlexClone, SnapMirror and SnapVault are available. The data can be snapshotted using native ONTAP snaps, replicated, and also backed up to S3 with FSx backup.
Once the ONTAP dataset is in AWS then AWS functionality can be used to process data in it. There is integration with AWS tools for automation and monitoring.
AWS is partnering with Accenture to help enterprises adopt and migrate to FSx for ONTAP.
A blog by AWS Chief Evangelist Jeff Barr claims that, with FSx for ONTAP: “You get the popular features, performance, and APIs of ONTAP file systems with the agility, scalability, security, and resiliency of AWS, making it easier for you to migrate on-premises applications that rely on network-attached storage (NAS) appliances to AWS.”
Barr writes: “If you are migrating, you can enjoy all of the benefits of a fully-managed file system while taking advantage of your existing tools, workflows, processes, and operational expertise. If you are building brand-new applications, you can create a cloud-native experience that makes use of ONTAP’s rich feature set. Either way, you can scale to support hundreds of thousands of IOPS and benefit from the continued, behind-the-scenes evolution of the compute, storage, and networking components.”
AWS diagram.
There are two storage tiers:
Primary Storage uses SSDs and is designed to hold the part of your data set that is active and/or sensitive to latency. You can provision up to 192TiB of primary storage per file system.
Capacity Pool Storage grows and shrinks as needed, and can scale to pebibytes. It is cost-optimised and designed to hold data that is accessed infrequently.
You can create one or more Storage Virtual Machines (SVMs) in an FSx for ONTAP file system, each of which supports one or more Volumes. Volumes can be accessed via NFS, SMB, or as iSCSI LUNs for shared block storage.
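To make that concrete, here is a hedged boto3 sketch of provisioning a file system, an SVM and a volume (subnet IDs, names and sizes are placeholders, and parameter names are as we understand the launch-day API, so check the current SDK documentation):

```python
import boto3

# Sketch only: subnet IDs, names and sizes below are placeholders.
fsx = boto3.client("fsx")

fs = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                      # GiB of primary SSD storage
    SubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 512,             # MB/sec
        "PreferredSubnetId": "subnet-0123456789abcdef0",
    },
)

svm = fsx.create_storage_virtual_machine(
    FileSystemId=fs["FileSystem"]["FileSystemId"],
    Name="svm1",
)

fsx.create_volume(
    VolumeType="ONTAP",
    Name="vol1",
    OntapConfiguration={
        "StorageVirtualMachineId": svm["StorageVirtualMachine"]["StorageVirtualMachineId"],
        "JunctionPath": "/vol1",               # the NFS/SMB mount path within the SVM
        "SizeInMegabytes": 102400,
        "StorageEfficiencyEnabled": True,
    },
)
```

The resulting volume can then be mounted over NFS or SMB, or exposed as iSCSI LUNs, as described above.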
You can check out FSx for ONTAP pricing on an AWS web page. Pricing reflects five component cost elements: SSD capacity, SSD IOPS, Capacity Pool usage, throughput capacity, and backups.
Pricing varies by AWS region as well and here is an illustrative table from the AWS web page:
* For a general-purpose file sharing workload with storage efficiency savings and capacity pool tiering.
NetApp’s Octavian Tanese, SVP Hybrid Cloud Engineering, wrote on LinkedIn: “[FSx for ONTAP] leverages Amazon’s EBS SSD-backed volumes. It combines up to twelve IO1 volumes, scaling up to 192 TB of capacity per managed cluster. Looking ahead, there will be support for other type of AWS storage and deeper integration with the SnapCenter Service application integration software.”
Comment
This deal between NetApp and AWS is a tremendous win for NetApp, giving yet more heft to its data fabric story. ONTAP is now a single file system product covering the on-premises, private cloud, public cloud and hybrid cloud worlds as far as AWS is concerned.
FSx for ONTAP is a step beyond Cloud Volumes for ONTAP (CVO), which is a cloud data management system available on AWS, Azure and GCP. The customer manages, deploys and licenses CVO, whereas AWS presents FSx ONTAP as a fully-managed service, taking the CVO management burden away.
Hyper-converged and platform SW vendor Nutanix announced beat-and-raise revenues and subscription growth in its fourth fiscal 2021 quarter and made a thumping loss — equivalent to 92 per cent of its revenues.
Put another way, for every dollar of revenue it booked, it lost 92 cents. Here’s the summary for the quarter ended July 31:
Q4 revenues — $390.7 million, up 19 per cent year-on-year;
Q4 loss — $358.2 million, compared to $185.3 million a year ago;
FY2021 revenues — $1.39 billion, up six per cent;
FY2021 loss — $1.03 billion compared to $872.9 million a year ago.
And yet its results were great, despite the losses, as President and CEO Rajiv Ramaswami explained in his announcement quote. “Our fourth quarter was a strong end to an excellent fiscal year, which was marked by consistent execution and solid progress across both financial and strategic objectives. We have entered our fiscal 2022 with good momentum and a solid plan for growth.”
William Blair analyst Jason Ader opined: ”The firm’s sharp focus on fewer, bigger bets on the product side and enhancing its solution selling capabilities and partner leverage on the go-to-market side appear to be paying off — in the form of better win rates, larger deals, and improving gross and net retention rates.”
He noted Nutanix is seeing strong traction for its unified storage, desktop-as-a-service (Frame), and database lifecycle management (Era) solutions.
The 19 per cent revenue growth was Nutanix’s fastest growth in the last three years. The company gained 700 new customers in the quarter, taking its total to 20,130. The even better news was about its subscription business.
Annual recurring revenue — $878.7 million, up 83 per cent on a year ago;
Annual Contract Value (ACV) billings — $176.3 million, up 26 per cent;
Net dollar retention rate — 124 per cent.
CFO Dustin Williams said: “In fiscal 2022, we expect our growing base of low-cost renewals will drive further improvements in top and bottom line performance.” That’s because a contract renewal costs Nutanix 80 per cent less than selling a new product or winning a new customer.
The company is also selling more products on top of its base hyper-converged software to each customer. New ACV bookings from emerging products grew 100+ per cent year-over-year with a record attach rate of 41 per cent.
The lifetime bookings numbers improved, with 1512 customers spending more than $1 million with Nutanix — up from 1433 last quarter and 1207 a year ago. Basically more customers want more Nutanix products and stick with the company.
We spotted a sharp uptick in its Q4 FY20 to Q4 FY21 revenue growth rate (top right red line in chart below):
Nutanix revenues by quarter by fiscal year. Its growth rate is accelerating.
Wells Fargo analyst Aaron Rakers told subscribers: “The company … reiterated an expectation of driving toward positive non-GAAP EBIT and FCF in F2023,” which indicates GAAP profitability is still some way off — 2024 perhaps, maybe 2025. So long as Nutanix keeps on growing it can keep on burning cash until profitability rises to and beyond the burn rate.
Ader said: “Management continues to view its rapidly approaching renewal opportunity as the key to unlocking operating leverage and achieving its target of free cash flow breakeven in the next 12–18 months (as well as operating profit in calendar 2023).”
Free cash flow (FCF) from operations was -$42 million, compared to -$14 million a year ago. Nutanix says FCF in FY2021 was -$158 million and will be $50 million to $150 million in FY2023 and $300 million to $500 million in FY2025.
Rakers reckons the average annualised revenue per average sales and marketing employee at $554K in Q421 was up from an estimated ~$468K in the prior quarter.
Cash and cash equivalents were $285.7 million at quarter end, down from the $318.7 million reported a year ago.
Nutanix reckons to pull in between $172 million and $177 million in ACV billings next quarter, up 26.4 per cent year-on-year. It didn’t provide a revenue guide for next quarter — but analysts were happy. So were investors. The stock rose from $36.95 on September 1 before the results came out to $40.85 today.
Although it was eventually acquired, Zerto was a failure that its leading executives and venture capital board members let happen, despite a litany of repeated signals that the business would not meet its potential.
HPE’s purchase of Zerto for $374 million closed on September 1st, giving HPE superb disaster recovery technology that was, according to sources, mis-sold and mis-marketed for years by executives who mis-read the market — and a board asleep at the wheel that let them do that.
Disaster recovery startup Zerto was founded by Ziv and Oded Kedem, two brothers and engineers who had previously been involved with another startup, Kashya, a continuous data protection and ‘snap’ replication business. Kashya was sold to EMC for $153 million and became part of its RecoverPoint software. Zerto devised replication-based disaster recovery technology for virtualised servers — VMware first and Hyper-V in 2014.
Let’s check out an overall Zerto timeline, its financial backers and board, and then take a look at what happened.
Zerto timeline
2000 — Kashya founded by CEO Michael Lewin, COO Yair Heller and CTO Ziv Kedem. Oded Kedem was Storage Group and then SW Development Director.
2006 — Kashya sold to EMC for $153 million.
2009 — Zerto founded by Ziv and Chief Architect Oded Kedem.
2010 — Gil Levonai becomes VP Marketing and Products.
2011 — $6 million seed/A round, $15 million B round.
2013 — $13 million C round.
2014 — $25 million D round and 200 per cent year-on-year sales growth.
2016 — $70 million E-round and 100 per cent sales growth over 2015, 2200 customers.
2019 — Moves into backup.
2020 — $33 million in equity financing and $20 million debt facility, staff layoffs.
2021 — Acquired by HPE for $374 million against total funding of $162 million + debt financing.
Backers and board
Zerto VC backers: CRV, Access Ventures, 83North, Battery Ventures, Harmony Partners, IVP, RTP Ventures, and U.S. Venture Partners.
Zerto board members:
Erez Ofer — a founding partner of Greylock IL, now 83North.
Jacques Benkoski of USVP.
Ken Goldman — former chief financial officer of Yahoo!.
Mark Leslie — managing director of Leslie Ventures, a private investment company.
Oded Kedem — Co-founder and Chief Architect.
Scott Tobin — General Partner at Battery Ventures.
Ziv Kedem — Co-founder and CEO.
Overall product strategy
Zerto sold its disaster recovery product at a premium price to enterprises. Sources tell us it was an excellent product that met a real need and was expensive to buy. But Zerto management didn’t understand the staying power, potential and relevance of the cloud. They focussed mostly on perpetual license sales and embraced neither subscription business nor SaaS with any fervour.
Ziv Kedem.
We were told that many customers used Zerto for mission-critical applications, which is good, but only mission-critical applications — which is not. There was no effective land-and-expand strategy to provide DR to non-mission-critical apps. They used basic replication instead. Other customers who could have used Zerto’s DR shuddered at the price and moved on.
As the public cloud idea gained continuing and growing traction, Zerto did not realise that the subscription model provided an effectively lower entry price for its premium cost product. Zerto leadership was wedded to the perpetual license idea and embraced subscriptions late and without fervour.
We have been told that Zerto leadership never really understood the cloud, thinking it a fad initially — even though it sold through cloud service providers.
These factors provided an opportunity for competitors such as Cohesity, Rubrik and Veeam to catch up with its technology. Their less costly offerings started making waves with customers. When the cloud tide rose, Zerto started drowning. One person told us that Zerto doomed itself once everybody started going to AWS and Azure; its prices were so high it was over.
Although Zerto grew like topsy in its glory years, 2013 to 2017, the seeds of future troubles were sprouting. They sapped Zerto’s growth increasingly from 2018 onwards, ultimately leading to layoffs in 2020. It had more than 200 salespeople in the glory years, but by HPE acquisition time that had fallen by more than 75 per cent, to fewer than 40.
This appears to be a sustained marketing failure — an inability to analyse what customers wanted and were doing, and to modify company strategy to take advantage of that. We think that the sales and marketing budget was skewed in favour of marketing, which makes this failure even odder.
Gil Levonai.
There were multiple acquisition offers for the company in the 2017 to 2020 period, but they were rejected, and Captain Hindsight and his fellow armchair quarterbacks are now having a great time. These offers were substantially more than HPE’s $374 million: we think well above $500 million and approaching or in the $750 million area. Perhaps even higher.
This brings us to the board. Why did the board let Zerto leadership mis-manage the company and miss the multiple opportunities to sell Zerto and crystallize their investments? Perhaps they were bewitched by the engineering credentials of the Kedems and their track record. They should not have been. Ziv should have been shunted sideways to a CTO-type role and a professional CEO brought in.
Staff were offered stock options in the glory years, with the stock priced using glory-years expectations. Those options now appear to be underwater, and staff holding unexercised options are unlikely to exercise them, as it would be a worthless exercise.
HPE is adding Zerto’s software to its GreenLake subscription business. Merely embracing the subscription model wholeheartedly, as it has, should enable HPE to return Zerto to growth. Lowering the cost to make Zerto DR practicable for non-mission-critical apps could strongly expand the Zerto footprint in existing customers.
HPE will do what the Zerto leadership should have done with the product years ago. But the ultimate responsibility for Zerto’s failure to achieve its potential lies with the board. Its lack of action is bewildering and it should have done better.
Case Study. Northern Spotted Owls, a federally protected species, live in the Oregon forests — forests impacted by human activity, such as loggers wanting to cut down trees and harvest the timber. The areas where the owls live are protected from the loggers, but which areas are they? Data can show the way, enabling sustainable timber harvesting and other human activity, and even protecting the owls from displacement by competing species.
Recorded wildlife sounds, such as birdsong, are stored with Quobyte file, block and object software and interpreted using AI. This data shows where the owl habitats are located.
Northern Spotted Owl.
Forests of Douglas Fir, Ponderosa Pine, Juniper and other mixed conifers cover over 30.5 million acres of Oregon — almost half of the state. Finding out where the owls live is quite a task. The Center for Quantitative Life Sciences (CQLS) at Oregon State University, working with the USDA Forest Service, has deployed, and is tracking, 1500 autonomous sound and video recording units in the forests, gathering real-time data. The aim is to monitor the behaviour of wildlife living in the forests of Oregon to ensure that the logging industry’s impact is managed, and to allow for other interventions.
The CQLS creates around 250 terabytes of audio and video data a month from the recording units, and maintains around 18PB of data at any given time. It keeps taking data off and reusing the space to avoid buying infinite storage.
Over an 18-month period it devised an algorithm to parse the audio recordings and identify different animal species. The algorithm creates spectrograms from the audio, and processes those spectrograms through a convolutional neural net of the kind used for video and image recognition. It can identify about thirty separate species, distinguish male from female, and even spot behavioural changes within a species over time.
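To give a flavour of the approach (our illustration, not the CQLS pipeline; the file name is a placeholder), turning a field recording into a spectrogram that an image-style CNN can classify takes only a few lines of Python:

```python
import librosa
import numpy as np

# Illustration only, not the CQLS code: turn a field recording into a
# log-scaled mel spectrogram, the image-like array a CNN would classify.
y, sr = librosa.load("recording_unit_0423.wav", sr=None)   # placeholder file name

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)

print(mel_db.shape)   # (n_mels, time frames): the input a species classifier trains on
```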
IT estate
The compute work takes place on an HPC setup comprising IBM Power System AC922 servers, collectively containing more than 6000 processors across 20 racks in two server rooms that serve 2500 users. The AC922 architecture puts AI-optimised GPU resources directly on the northbridge bus, much closer to the CPU than conventional server architectures.
CQLS needed a file system and storage solution able to keep massive datasets close to compute resources — as swapping data in and out from external scratch resources doubled processing time.
At first it was looking at public cloud storage options, but the costs associated were considered “outrageously expensive”.
CQLS checked a variety of storage alternatives and settled on Quobyte running on COTS hardware, rejecting pricier storage appliance alternatives which could also have needed costly support arrangements.
The sizes of individual files vary from tiny to very large and everything in between. The Quobyte software is good when dealing with large files, as opposed to millions of highly similar small files. This is advantageous when working on AI training, where TIFF files can range from 20 to 200GB in size.
Concurrently, those files may need to be correlated with data from sensors, secondary cameras, microphones, and other instruments. Everything must flow through one server, which puts massive loading on compute and storage.
Quobyte
Quobyte’s software runs on four Supermicro servers with two Intel Xeon E5-2637 v4 CPUs at 3.50GHz and 256GB of RAM (DDR4-2400). LSI SAS3616 12Gbit/s SAS controllers run two 78-disk JBODs, filled with Toshiba MG07ACA14TA 14TB, SATA 6Gbit/s, 7,200rpm, 3.5-inch disk drives.
The entire HPC system is Linux-based and everything is mounted through the Quobyte client for x86-based machines and NFS for the PPC64LE (AC922) servers.
Many groups of users access the system. A single group could have hundreds or millions of files, depending on the work it does. Most groups use over 50TB each, and currently there is 2.6PB loaded on the Quobyte setup.
Data ingest detail
Chris Sullivan.
Christopher Sullivan, Assistant Director for Biocomputing at CQLS, said: “We have all kinds of pathways for data to come into the systems. First off, all research buildings at OSU are connected at a minimum of 40Gbit/sec network and our building and incoming feed to the CGRB (Center for Genome Research and Biocomputing) is 100Gbit/sec, and a 200Gbit/sec network link runs between OSU and HMSC (Hatfield Marine Science Center) at the coast.
“To start some of the machines in our core lab facility (not the sequencers) do drop live data onto the system through SAMBA or NFS-mounted pathways. Next, we have users moving data onto the servers via a front-end machine, again providing services like SAMBA and SSH with a 40Gbit/sec network connection for data movement.
“This allows for users to have machines around the university move data automatically or by hand onto the systems. For example, we have other research labs moving data from machines or data collected in greenhouses and other sources. The data line to the coast mentioned above is used to move data onto the Quobyte for the plankton group as another example.”
What about backup?
Sullivan said: “Backup is something we need on a limited basis since we can generally generate the data again cheaper than the costs of paying for backing up that large amount of space. Most groups backup the scripts and final output (ten per cent) of the space they use for work. Some groups take the original data and if needed by grants keep the original data on cold drives on shelves in a building a quarter-mile away from the primary. So again we do not need a ton of live backup space.”
Quobyte vs public cloud
Sullivan said: “We found that using public clouds was too expensive since we are not able to get the types of hardware in spot instances and data costs are crazy expensive. Finally, researchers cannot always tell what is going to happen with their work or how long it needs to run, etc.
“This makes the cloud very costly and on-premises very cost-effective. My groups buy big machines (256 thread count with 2TB RAM and 12x GPUs) that last 6–7 years and can do anything they need. That would be paid for five times over in that same time frame in the cloud for that hardware. Finally, the file space is too expensive over the long haul, and hard to move large amounts of data on and off. We have the Quobyte to reduce our overall file space costs.”
Source: Oregon Department of Forestry.
Seeing the wood for the trees
This is a complicated and sizeable HPC setup which does more than safeguard the Northern Spotted Owl’s arboreal habitats. That work is done in an ingenious way — one that doesn’t involve human bird-spotters trekking through the forests looking for the creatures.
Instead, AI algorithms analyse and interpret recorded bird songs, recognise those made by the owls and then log where and when the owls are located. That data can be used to safeguard those areas of forests allowing sustainable logging, and reducing impacts of other human activity and competing species. And the owls can live in peace.
BMC’s Helix Discovery is a data center discovery SaaS offering that automatically discovers data center inventory, configuration and relationship data, and maps applications to the IT infrastructure. It provides instant visibility into hardware, software, services, and dependencies across on-premises and multi-cloud environments. Its discovery capabilities are designed to handle the complex management of mainframe, traditional, and hyper-converged infrastructures, software-defined storage and networks, containers, and cloud services.
Erik Kaulberg, an Infinidat VP, trilled: “I’m thrilled to welcome BMC to our AIOps ecosystem. Through BMC Helix Discovery … our InfiniBox customers can take advantage of the complete, dynamically updated service view that BMC Helix Discovery provides, which improves service performance, availability, and customer experiences while also lowering costs.”
It recognises a whole great mass of software and hardware products — including, for example, Dell’s VxRail, NetApp’s E-Series, Nutanix Block and Cluster, and now Infinidat InfiniBox arrays and software.
Helix Discovery generates detailed datasets and topologies using its Dynamic Service Models, which provide an up-to-date, integrated data store. Dynamic Service Models act as the single source of truth across the IT landscape, and enable artificial intelligence for IT operations (AIOps) and AI service management (AISM) to leverage the latest infrastructure configurations and ensure accuracy and efficiency.
BMC Helix Discovery’s integration with Infinidat’s InfiniBox storage system is available today for new and existing customers.
Comment
We have noticed a string of Infinidat announcements integrating InfiniBox systems into large enterprise IT service management and AIOps frameworks. It’s differentiated by this strategy — no other enterprise storage supplier of our acquaintance is following a similar path.
It seems intuitively obvious that, in an enterprise storage bid where the customer uses a third-party IT management and AIOps framework, Infinidat will be at an advantage by fitting in with that framework. Its competitors, out in the AIOps cold, will not.
Zoom meeting customers will be able to store recorded meetings in Seagate’s public cloud: Lyve Cloud.
Yes, Seagate has a public cloud storage business.
Under a Seagate-Zoom agreement, when Zoom’s customers record their meetings, they’ll have the option to save these media files on Seagate’s Lyve Cloud S3 storage-as-a-service (STaaS) platform.
Velchamy Sankarlingham, President of Product and Engineering for Zoom, announced: “Our customers expect secure storage and frictionless sharing of their meeting recordings. Given the scale of meetings we enable and the variety of customer needs, we need cloud storage that delivers best-in-class TCO. We are adding Lyve Cloud support because it delivers those benefits.”
For his part, Ravi Naik, Seagate EVP of storage services and CIO, said: ”We made cloud economics simple and predictable regardless of the high volume of meetings recorded or the number of times viewed. Lyve Cloud charges no API fees and egress fees, and our always-on storage means Zoom users can view their recordings when they need to.”
It’s cheaper than AWS, Azure and GCP in other words.
Zoom currently offers a meeting recording facility using its own Zoom cloud. The multi-year deal between Seagate and Zoom is for a Silicon Valley cloud location, with other options on the horizon.
Lyve Cloud is Seagate’s storage-as-a-service offering, using Equinix co-location sites in the USA and MinIO’s S3-compatible object storage software. The Lyve Mobile offering is Seagate’s data transfer service, physically moving data via HDDs or SSDs.
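Because Lyve Cloud speaks S3, existing S3 tooling can point at it simply by changing the endpoint. A hedged boto3 sketch (the endpoint URL, bucket name, file and credentials are all placeholders):

```python
import boto3

# Sketch only: the endpoint URL, bucket and credentials are placeholders.
# Lyve Cloud exposes an S3-compatible API, so a generic S3 client pointed
# at its endpoint is all that is needed.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-region.lyvecloud.example.com",
    aws_access_key_id="LYVE_ACCESS_KEY",
    aws_secret_access_key="LYVE_SECRET_KEY",
)

s3.upload_file("meeting-2021-09-07.mp4", "zoom-recordings", "meeting-2021-09-07.mp4")
```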