The second quarter of 2020 marked a revenue and profits uptick for SK hynix, which sold more server memory and lowered costs by increasing process yield.
Revenues of ₩8.6tn ($7.15bn) were 33.4 per cent higher than a year ago, and profits jumped 135.4 per cent to ₩1.26tn ($1.1bn).
SK hynix experienced weak mobile demand for DRAM in the quarter but demand for server and graphics DRAM increased. So much so that DRAM bit shipments and average selling prices increased by two per cent and 15 per cent QoQ respectively.
NAND bit shipment and average selling price increased five per cent and eight per cent QoQ respectively. As well as making NAND chips, SK hynix is pumping out a lot more SSDs. This quarter, SSD sales for the first time accounted for almost half of its NAND flash revenues.
In the next quarter or two the company thinks the pandemic will continue to cause economic uncertainties. But it anticipates 5G smartphones and next-generation gaming consoles will stimulate DRAM, NAND and SSD growth.
To accompany the earnings release, SK hynix teased out some roadmap details. It will focus on 1Ynm mobile DRAM, LPDDR5 DRAM and 64GB server DRAM product sales, while embarking on mass production of the next, 1Znm, memory node.
The company will also focus on more customer qualifications for its 128-layer 3D NAND product to drive sales upwards. It is also aiming to increase enterprise SSD sales.
Glad all over
The Korean memory chip maker’s recovery from the recent worldwide DRAM and NAND glut is gaining pace. A chart of revenues and profit by quarter shows the upturn:
Competitor Micron saw a 13.6 per cent sales uplift to $5.44bn in its most recent quarter, which ended on May 28. It also exhibited four glut-trough quarters before this one, as the chart shows:
Western Digital revenues climbed 14 per cent on the back of record flash memory performance to $4.18bn in its third fiscal 2020 quarter ended April 3.
It certainly looks as if the DRAM and NAND glut is over.
The datasets used in big data analytics and AI model training can be hundreds of terabytes, involving millions of files and file accesses. Conventional X86 processors are poorly suited for this task and so, GPUs are typically used to crunch the data. Their instruction sets can process millions of repetitive operations many times faster than CPUs.
However, there is a performance bottleneck to overcome when transferring data to GPUs via server-mediated storage.
Typically, data transfers are controlled by the server’s CPU. Data flows from storage that is attached to a host server into the server’s DRAM and then out via the PCIe bus to the GPU. Nvidia says this process becomes IO bound as data transfers increase in number and size. GPU utilisation falls as it waits for data it can crunch.
For example, an IO-bound GPU system used in fraud detection might not respond in real time to a suspect transaction, resulting in lost money, whereas one with faster access to data could detect and prevent the suspect transaction and alert the account-holder.
Normally data is bounced into the host server’s memory and bounced out of it on its way to the GPU. This bounce buffer is required because that’s the way server CPUs run IO processes. However, it is a performance bottleneck.
If the IO process can be accelerated, with higher speed and lower latency, application run times are shortened and GPU utilisation is increased.
Modern architecture
Nvidia, the dominant GPU supplier, has worked away at this problem in stages. In 2017, it introduced GPUDirect RDMA (remote direct memory access), which enabled network interface cards to bypass CPU host memory and directly access GPU memory.
The company’s GPUDirect Storage (GDS) software, currently in beta, goes beyond the NICs to get drives talking direct to the GPUs. API hooks will enable the storage array vendors to feed more data faster to Nvidia’s GPUs, such as its DGX-2.
GDS enables DMA (direct memory access) between GPU memory and NVMe storage drives. The drives may be direct-attached or external and accessed by NVMe-over-Fabrics. With this architecture, the host server CPU and DRAM are no longer involved, and the IO path between storage and the GPU is shorter and faster.
Blocks & Files GPUDirect diagram. Red arrows show normal data flow. Green arrows show shorter, more direct, GPUDirect Storage data path.
GDS extends the Linux virtual file system to accomplish this – according to Nvidia, Linux cannot currently enable DMA into GPU memory.
The GDS control path uses the file system on the CPU but the data path no longer needs the host CPU and memory. GDS is accessed via new CUDA cuFile APIs on the CPU.
Performance gains
Bandwidth from CPU and system memory to GPUs in a DGX-2 is limited to 50GB/sec, Nvidia says, and this can rise to 100GB/sec or more with GDS. The software combines various data sources, such as internal and external NVMe drives, adding their bandwidth together.
Nvidia cites a TPC-H decision support benchmark with scale factors (database sizes) of 1K and 10K. Using a DGX-2 with eight drives, the 1K scale factor test latency was 20 per cent of the non-GDS run. This is a fivefold speedup. At the 10K scale factor, latency was 3.33 to five per cent when GDS was used compared to the non-GDS case. This is a 20x-30x speed up.
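The quoted speedups follow directly from the latency ratios, since speedup is the reciprocal of relative latency. A quick sketch of the arithmetic behind Nvidia's figures:

```python
def speedup(relative_latency):
    """Convert a relative latency (GDS run / non-GDS run) into a speedup factor."""
    return 1.0 / relative_latency

# 1K scale factor: GDS latency was 20 per cent of the non-GDS run.
print(speedup(0.20))    # 1/0.20 = 5.0, i.e. a fivefold speedup
# 10K scale factor: latency was 3.33 to five per cent of the non-GDS run.
print(speedup(0.05))    # 1/0.05 = 20.0
print(speedup(0.0333))  # ≈30, hence the quoted 20x-30x range
```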
Nvidia GDS TPC-H benchmark slide.
Four flying GDS partners
DDN, Excelero, VAST Data and WekaIO are working with Nvidia to ensure their storage supports GPUDirect Storage.
DDN is supplying full GDS integration with its A3I systems: AI200, AI400X and AI7990X.
Excelero will have generally available GDS support in the fourth quarter for disaggregated, converged and hybrid environments. It has a roadmap to develop a GDS-optimised stack for shared file systems.
A Nvidia GDS slide deck provides detailed charts showing VAST Data and WekaIO performance supporting GDS storage access.
VAST Data achieved 92.6GB/sec from its Universal Storage array with GDS while Weka recorded 82GB/sec read bandwidth between its file system and a single DGX-2 across 8 EDR links using two Supermicro BigTwin servers.
Blocks & Files envisages other mainstream storage array suppliers will soon announce support for GDS. Also, a GPUDirect Storage compatibility mode allows the same APIs to be used when non-GPUDirect software components are not in place. At time of writing, Nvidia has not declared Amazon S3 support for GDS.
Nvidia GDS is due for release in the fourth quarter. You can watch a GDS webinar for more detailed information.
Spin Memory, a developer of Magnetic RAM, has raised $8.5m in additional series B funding, on “the same terms as the last closing in April 2019”, lead investor Allied Minds said in a statement. Spin Memory will use the capital to support its MRAM research and bring its technology to market.
Founded in 2007, Spin Memory bagged $52m in B-round funding in 2018, taking total funding before the recent top-up to $158m.
The company announced the funding – but not the amount or terms – earlier this month. CEO Tom Sparkman said: “The additional investment validates the work we’re doing here at Spin Memory and demonstrates the value of our unique MRAM IP offerings – especially during such challenging times. We are proud to be part of the growing MRAM eco-system in both the embedded and stand-alone implementations.”
Tom Sparkman
Allied Minds, which owns 43 per cent of Spin Memory’s capital, contributed $4m to this raise. Arm, Applied Ventures and Abies Ventures jointly contributed $4.25m.
Allied Minds CEO Joe Pignato issued his own quote: “The additional funding announced today endorses the business’ development and provides additional resources to support its rapid growth.”
He suggests markets such as artificial intelligence, autonomous vehicles, 5G communication and edge computing could use Spin Memory’s technology.
Magnetic RAM
Magnetic RAM is positioned as a replacement for SRAM (Static RAM) used in CPU cache memory.
SRAM is volatile, faster than DRAM and also more expensive. Spin Memory claims SRAM capacity is growing at a falling rate, and that techniques to reduce SRAM current leakage at transistor and chip level are near exhaustion. As CPUs get more powerful, SRAM caches will be unable to keep pace.
Hence the need for replacement technology. Spin Memory says its Spin Transfer Torque (STT) MRAM, which can be non-volatile, is that replacement.
The company is developing embedded and stand-alone MRAM devices. The embedded product, built with Applied Materials, was meant to be commercially available last year. This background suggests new top-up funding was needed to complete the development and bring the embedded product to market.
Spin Memory’s technology uses less energy than DRAM. It has more than 250 patents, a commercial agreement with Applied Materials, and a licensing agreement with Arm. The company is in the R&D phase for multi-layer and multi-bit cell (3D MLC) MRAM, which would increase MRAM chip capacity and could function as storage-class memory.
Spin Memory 3D MLC diagram shows a 2-layer crosspoint design
A 2-layer structure could double the existing STT MRAM capacity and a 2-bit MLC cell would double it again. Spin Memory suggests this would have the same density as today’s DRAM.
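The density claim is simple multiplication, assuming layers and bits per cell each multiply capacity independently. A sketch:

```python
def capacity_multiplier(layers, bits_per_cell):
    # Each film layer and each bit per cell multiplies capacity.
    # Baseline is today's single-layer, single-bit STT MRAM (multiplier 1).
    return layers * bits_per_cell

print(capacity_multiplier(2, 1))  # 2-layer structure alone: 2x
print(capacity_multiplier(2, 2))  # 2 layers plus a 2-bit MLC cell: 4x
```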
Commvault has rejigged its product lines to better match market needs. The company has separated components for data protection, disaster recovery and data management tasks, and has integrated Hedvig software into HyperScale backup appliances.
Commvault announced the portfolio reshuffle yesterday at its Future Ready customer conference. The company has also rolled out subscription pricing to more products, while confirming ongoing support for perpetual licensing options.
Rajiv Kottomtharayil
Commvault chief product officer Rajiv Kottomtharayil provided an announcement quote: “Customers need to be increasingly agile, flexible, and scalable. This new portfolio addresses data management risks that exist today and that may exist tomorrow, intelligently. From a natural disaster, to human error, to ransomware, our customers are covered.”
Coming out of 2019, Commvault had six product lines: Commvault Complete Backup & Recovery, Commvault HyperScale, Commvault Orchestrate, Commvault Activate, Hedvig’s Distributed Storage Platform (DSP) and Metallic (SaaS backup and recovery service).
The new line-up is:
HyperScale X with Hedvig technology
Commvault Backup & Recovery
Commvault Disaster Recovery
Commvault Complete Data Protection
Commvault Activate disaggregation into separate Data Governance, eDiscovery & Compliance and File Storage Optimization products
Enhanced Container Support for Hedvig’s DSP
Commvault launched HyperScale appliances in 2017 to provide scale-out backup and recovery for container, virtual and database workloads. The appliances support on-premises deployments and multiple public clouds, with data movement between these locations.
HyperScale provides up to 5PB of capacity, automatic load balancing across nodes, data caching, cataloguing, automatic rebalancing of stored data across nodes, and integrated copy data management.
The new HyperScale X, available today, brings Hedvig’s DSP file system software to the appliance for the first time, adding higher performance backup and recovery as the system scales. Concurrent hardware failures are managed to maintain availability.
Commvault’s standalone Backup & Recovery protects containers, cloud-native, and virtual server workloads across cloud and on-premises environments. A separate Disaster Recovery product supports on-premises and cloud environments with automated DR orchestration, replication, and verified recovery readiness.
A Complete Data Protection offering combines Backup & Recovery and Disaster Recovery.
Not so simple
Commvault announced Activate in July 2018, in a product simplification exercise prompted by prodding from activist investor Elliott Management. The software is now disaggregated into separate Data Governance, eDiscovery & Compliance, and File Storage Optimization products.
Hedvig’s DSP software, acquired last year, gets more container support. Hedvig introduced Kubernetes Container Storage Interface (CSI) support in October 2019 and with Hedvig for Containers has added:
Native API Kubernetes enhancements to offer encryption and third-party key management (KMIP) support,
Container snapshots for point-in-time protection of stateful workloads,
Container migration to move data across Hedvig clusters on-premises or in the public cloud,
High-availability and disaster recovery for containerised workloads,
Automated policies for snapshot frequency and migrations.
Avinash Lakshman, chief storage strategist for Commvault, said in a statement: “Cloud native applications are critical to the enterprise, and Hedvig for Containers hits that sweet spot of software-defined storage coupled with protection for containerised applications.”
Hedvig for Containers is available today.
Comment
Commvault is in a data protection and storage marathon. The company competes intensely with Actifio, Cohesity, Dell, IBM, Rubrik, Veeam and Veritas for enterprise data protection and management accounts. Also, ambitious backup-as-a-service startups such as Clumio and Druva are muscling in to this territory.
Commvault has rearranged its shop window to better show its wares. It’s also moved to take advantage of its Hedvig software-defined software by adding it to the HyperScale appliance line. Hedvig’s latest updates should enhance the software’s appeal to enterprise DevOps cloud native users.
Commvault is claiming a stake in the general storage market with Hedvig Distributed Storage Platform providing file, block and object storage at cloud scale. This general storage market also features intense competition from new and established suppliers and the idea of a combined file, block and object storage product is not universally adopted.
Commvault can use Hedvig software in its own appliances, but it will be quite the challenge to gain wide traction in the general storage market. There are good growth prospects for the emerging stateful container market and Commvault is building a strong story here. However, the mainstream storage suppliers are not so far behind.
IBM executive Arvind Krishna. 5/30/19 Photo by John O’Boyle
IBM’s latest quarterly revenues dropped a few per cent but mainframe revenues rose strongly and its storage business grew for the third quarter in a row, albeit at 2 per cent.
Q2 revenues declined 5.4 per cent y/y to $18.1bn and net income slumped 45.5 per cent to $1.36bn as the pandemic discouraged customer spending. That said, Big Blue sees green shoots ahead as enterprise customers adopt more AI and head for hybrid multi-cloud IT.
Arvind Krishna.
CEO Arvind Krishna said in prepared remarks: “Our clients see the value of IBM’s hybrid cloud platform, based on open technologies, at a time of unprecedented business disruption.”
He added: “What’s most important to me and to IBM is that we emerge stronger from this environment with a business positioned for growth. And I am confident we can do that.”
The “open technologies” phrase includes IBM’s acquired Red Hat business, which saw revenue rise 17 per cent.
No great pandemic depressing effect visible in IBM’s latest results.
IBM’s business unit results were mixed:
Cloud & Cognitive Software — revenues of $5.7bn, up 3 per cent y/y.
Global Business Services — revenues of $3.9bn, down 7 per cent.
Global Technology Services — revenues of $6.3bn, down 8 per cent.
Systems — revenues of $1.852bn, up 6 per cent. This comprised Systems hardware rising 13 per cent, with Z mainframes contributing a massive 69 per cent revenue rise, storage systems growing slightly at 2 per cent, and Power server systems down 28 per cent. Operating system software declined 13 per cent while cloud-based revenues in the systems business grew 22 per cent.
Global Financing — revenues of $265m, down 25 per cent.
Storage division CMO and VP Eric Herzog told us: “IBM Storage Systems grows for the third quarter in a row with growth of 2 per cent for Q2 2020, following growth of 18 per cent in Q1 2020 and 3 per cent in Q4 2019.”
We added the latest storage revenue number to our chart of IBM’s quarterly storage revenues by calendar year and the third-in-a-row uptick (green line) is clearly seen:
These numbers do not include some IBM storage software sales nor its Cloud Object Storage business as IBM doesn’t reveal these aspects of its overall storage business.
IBM’s second quarter storage results contrast with HPE’s second fiscal 2020 quarter ended April 30. HPE’s storage revenues slumped 18 per cent to $1bn due to “component shortages and supply chain disruptions related to the COVID-19 pandemic”.
They also contrast to NetApp’s fourth fiscal 2020 quarter ended April 24, when revenues dropped 11.9 per cent to $1.4bn as the pandemic struck and exacerbated product and cloud sales weaknesses.
The Gartner gurus have updated their Backup and Recovery Magic Quadrant, moving two of 2019’s ‘visionaries’, Cohesity and Rubrik, into the leaders’ quadrant.
Update: 2020 MQ for backup and recovery diagram added.
We have seen a copy of the report and Magic Quadrant (MQ) chart. The authors write: “The move toward public cloud, heightened concerns over ransomware, and complexities associated with backup and data management are forcing I&O leaders to rearchitect their backup infrastructure and explore alternative solutions.”
That is fast progress for both companies, but faster for Cohesity than Rubrik, which made its first appearance in 2019. That year Rubrik and Arcserve complained publicly that their positions in the visionaries’ and niche players’ quadrants respectively were unfair.
This year Rubrik has made it to the leaders’ quadrant but Arcserve remains boxed in with the niche players. “Arcserve did not respond to requests for supplemental information,” Gartner says. “Gartner’s analysis is therefore based on other credible sources, including Gartner client inquiries, Gartner Peer Insights, press releases from Arcserve and technical documentation available on Arcserve’s website.”
Dell and IBM are in the leaders’ quadrant as before, but in other changes:
Commvault and Veeam are the top two in the leaders’ quadrant, with Veeam overtaking Commvault on ability to execute.
Veritas is a leader for the fifteenth time in a row, and Commvault for the ninth.
Acronis moves from niche player to visionary. The MQ report says Acronis has a widening capability gap and “trails competition in its ability to provide comprehensive data protection capabilities for public cloud, hyperconverged infrastructure (HCI) and network-attached storage (NAS) environments.”
Micro Focus, a niche player in 2019, is no longer in the MQ.
Actifio moves higher up the visionaries’ quadrant with Gartner noting limited presence outside North America, lack of tape support, and its “list price for software licenses and annual maintenance is higher than most vendors evaluated in this research”.
In response, an Actifio spokesperson told us: “Gartner relies on revenue and growth for its Ability to Execute axis. Companies that sell hardware, be they appliances or “bricks,” get what one might call an unfair advantage (in this case a spot in the Leaders quadrant) compared with vendors with pure software or SaaS models.”
Like Arcserve, niche player Unitrends did not respond to Gartner requests for supplemental information.
Gartner awards so-called Honourable Mentions to certain other backup suppliers: Clumio, Druva, HYCU, and Zerto.
Clumio, HYCU and Zerto were not included in the MQ because they did not meet Gartner’s revenue criteria. Druva was not included as it did not meet the criterion for the minimum number of enterprise backup customers deploying the solution at scale.
As a reminder, Gartner’s Magic Quadrant is a two-axis chart plotting Ability to Execute along the vertical axis and Completeness of Vision along the horizontal axis. There are four subsidiary quadrants or squares: Challengers and Leaders along the top, and Niche Players and Visionaries along the bottom.
Here’s last year’s backup and recovery MQ for comparison:
Kubernetes is “four to five years away” from being a stable distribution capable of running stateful apps, according to Redis Labs chief product officer Alvin Richards.
Enterprise IT applications typically need to access recorded data, process it, and write fresh data. This data is its state. Can Kubernetes run stateful applications out of the box? “No, absolutely not,” Richards told us in a phone interview. That’s where Kubernetes’ myriad extensions come in.
More on that later. But first, let’s remind ourselves of ‘stateful’ and ‘stateless’.
Stateful and stateless containers
Alvin Richards
Kubernetes began its life orchestrating stateless containers and these provided a way to scale applications relatively easily. But they don’t record data, having no memory of previous sessions when they are instantiated. Like a simple calculator application they have no state. Whenever a calculator is started up it sets itself at zero and waits for input. That input is lost when the calculation app is closed.
A spreadsheet, on the other hand, has state. In other words, you don’t need to re-enter information each time Excel is started up. The values in its cells are recorded, stored and loaded into memory when the spreadsheet app opens the saved spreadsheet file.
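The calculator/spreadsheet distinction can be sketched in a few lines of Python; the class names are illustrative only:

```python
class StatelessCalculator:
    """No memory between sessions: every call starts from scratch."""
    def add(self, a, b):
        return a + b  # the result is returned, nothing is retained


class StatefulSpreadsheet:
    """Cell values persist: they can be saved and restored on restart."""
    def __init__(self, saved_cells=None):
        self.cells = dict(saved_cells or {})  # state reloaded from 'disk'

    def set_cell(self, name, value):
        self.cells[name] = value

    def save(self):
        return dict(self.cells)  # what would be written out as a file


sheet = StatefulSpreadsheet()
sheet.set_cell("A1", 42)
# Simulate closing and reopening the app from the saved file.
restored = StatefulSpreadsheet(saved_cells=sheet.save())
print(restored.cells["A1"])  # 42: the state survived the restart
```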
CSI (Container Storage Interface) is the starting point for adding state to containers orchestrated by Kubernetes. CSI enables third-party storage providers to create plugins adding a block or file storage system to a Kubernetes deployment without affecting the core Kubernetes code. Multiple data storage vendors have set up CSI plugins linking their external storage systems to containers or they set up actual storage containers. Each one is unique.
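In practice, a vendor's CSI plugin is surfaced through a StorageClass, which a workload then requests via a PersistentVolumeClaim. A minimal sketch of the two objects, expressed as Python dicts rather than YAML; the driver name `vendor.example.csi` is a placeholder, not a real plugin:

```python
# A StorageClass names the vendor's CSI driver as its provisioner.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "vendor-fast-block"},
    "provisioner": "vendor.example.csi",  # placeholder CSI driver name
}

# A PersistentVolumeClaim requests storage from that class;
# Kubernetes then asks the CSI plugin to provision the volume.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "db-data"},
    "spec": {
        "storageClassName": storage_class["metadata"]["name"],
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

print(pvc["spec"]["storageClassName"])  # vendor-fast-block
```

The point of the indirection is exactly what the article describes: the core Kubernetes code never changes; only the provisioner name does, and each vendor's plugin behind it is unique.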
Kubernetes Operators
So is CSI job-done for Kubernetes and storage? Not according to Richards, who says the software needs an operator.
Redis Labs software talks to an operator, a piece of interface code that automates Kubernetes operations. An operator uses the Kubernetes API to tell Kubernetes what to do. Effectively, Richards says, “Kubernetes is an API … to provision and automate infrastructure.” The Redis operator has been extended so it can use the API to create a database inside a cluster. It provides high availability with automated disaster recovery.
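An operator is, at heart, a reconcile loop driven through the Kubernetes API: compare desired state with observed state and issue API calls to close the gap. A toy sketch with an in-memory dict standing in for the API server; this is not the Redis operator's actual code:

```python
# Observed cluster state, as the API server would report it.
cluster = {"redis-nodes": 1}

def reconcile(desired_nodes):
    """Drive observed state toward desired state, one 'API call' at a time."""
    actions = []
    while cluster["redis-nodes"] < desired_nodes:
        cluster["redis-nodes"] += 1          # stand-in for a create call
        actions.append("create redis node")
    while cluster["redis-nodes"] > desired_nodes:
        cluster["redis-nodes"] -= 1          # stand-in for a delete call
        actions.append("delete redis node")
    return actions

print(reconcile(3))              # ['create redis node', 'create redis node']
print(cluster["redis-nodes"])    # 3
```

A real operator runs this loop continuously, which is what makes automated recovery possible: if a node dies, the next reconcile pass recreates it.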
A question for Redis, as a supplier of an open source, distributed, memory-cached NoSQL database, is what should Kubernetes do about such stateful services?
These, like the Redis NoSQL database, could run in the local Kubernetes cluster or in a managed cloud service accessed through a gateway. The operator needs to know this and there has to be secure node-to-node communication for software like Redis. Raw storage functionality is not enough for enterprise storage needs.
Multiplying distributions
According to Richards, there are many Kubernetes distributions, each adding its own bit of uniqueness to the pot, which could lead to customer confusion. How will these disparate distributions get combined? Richards contrasts Linux and Docker: “Linux got to a point with a guiding light (Linus Torvalds) and powered ahead…. Docker did not.”
He thinks there are two very different Kubernetes communities: stateless and stateful. My impression is that he wants Kubernetes to take the Linus/Linux route to power ahead, and adopt stateful container functionality.
Incentives dichotomy
In our view, the baseline Kubernetes distribution expands functionality over time, absorbing the good additions from the various forks. Suppliers providing added paid-for services on top of the raw Kubernetes distribution have to accept this, and move their services up the stack or add new services. They don’t contribute their paid-for code to the Kubernetes open source project; that would destroy their ability to monetise their code.
Other, more altruistic coders develop similar or equivalent functionality and contribute it to the project, eroding the basis for the paid-for code. Richards describes this as a dichotomy of incentives. For instance, he thinks that stateful app management, like encryption, should be a basic part of the raw Kubernetes distribution, i.e. a feature and not a product.
Richards cites MongoDB as now having core, good-enough database functionality. In other words, much of the functionality in originally paid-for extensions has been assimilated into the core database.
Stating the not-so-obvious
So will a vibrant Kubernetes open source community follow suit? Progress has been made and you can run stateful services in Kubernetes. But, Richards says: “Can a wider audience/community be successful? Well the jury is out on that.”
He thinks we’re maybe four to five years away from having a stable Kubernetes distribution capable of running stateful apps. But “the timeline will shorten as it becomes a major focus.”
Until then we have to endure the messy altruistic and artisanal coding creativity that is the open source movement, and hope it collectively takes us to the right destination where raw Kubernetes can run stateful containers out of the box.
Dell is training up its Data Domain channel to battle Exagrid in the purpose-built, deduplicating backup to disk market.
The company ran a seminar for resellers via Zoom on July 16, entitled “Practically Speaking – Exagrid vs Data Domain”. The synopsis goes like this:
“You asked for it – we’ve got it! – a Data Domain refresher course along side a competitive review of EXAGRID.
In this edition of Practically Speaking, join host Jason Jones and Competitive Analyst Stephen Karra as we talk Data Domain and Exagrid.
This is targeted at Channel Partners that are interested in a introduction to Data Domain and that work with smaller or midsized customers that think Veeam to Exagrid are winning combination!”
Exagrid CEO Bill Andrews told us Dell must be feeling the Exagrid heat: “We are shaking Dell’s tree. They took 20 minutes of their 60 minute channel webinar and dedicated it to ExaGrid. They did not bring up any other vendor.”
He said: “We are taking them out of deals on a regular basis. Data Domain is slow for backups, slow for restores, does not scale well and is expensive.
“The challenge is that Dell has a huge sales force, channel and installed customer base so it is hard to get customers to let a new vendor in. Once in, we will take it 70 per cent of the time.
“We are hiring more sales teams ASAP to take it up a level. We have 132 staff in our sales organisation now. They have 20,000 including all sales staff. A true David and Goliath battle.”
Exagrid last week boasted of a great second quarter, gaining 82 new customers. Nineteen customers kicked off with six-figure purchases and one plunked down seven figures.
This week’s data storage news roundup covers some new Nexsan arrays, and GitHub adventures in an Arctic coal mine.
Nexsan’s gen 3 Unity arrays
StorCentric’s Nexsan business unit has launched its third-generation Unity 3300 and 7900 enterprise-class unified file and block storage arrays. Features include ‘Unbreakable Backup’ to protect against ransomware, and Data Migration and Cloud Connector modules to move data within tiered storage and hybrid cloud infrastructures.
These are hybrid flash/disk and all-flash arrays, delivering capacity increases in HDD and SSD configurations and up to 50 per cent performance increase over gen 2 Unity.
Mihir Shah, StorCentric CEO, provided an announcement quote: “We are not only delivering the industry’s most powerful solution for mixed workloads in this new third generation of Unity, but we are also enabling customers to meet the challenges presented by today’s ever-present data security and protection risks.”
The Unbreakable Backup feature employs a Nexsan Assureon archive data vault from which uncorrupted data can be restored. Nexsan’s Cloud Connector module connects to 18 public clouds, including Amazon S3 and Google Cloud Storage.
The Data Migration capability replicates files from any legacy system with ingest into Unity via fine-grained filtering and continuous incremental updates.
Unity Third Generation 3300 and 7900 are described in a data sheet and spec sheet, and are generally available.
GitHub’s Arctic Rolls
GitHub has archived its repository of open source code in a former coal mine in Norway’s Svalbard archipelago. The data is stored on modern-day movie-style film reels.
AWA vault entrance
The Arctic World Archive (AWA) in Longyearbyen is a joint venture between the Norwegian state coal mining company Store Norske Spitsbergen Kulkompani (SNSK) and Piql, a Norwegian archive technology business. SNSK has the hole in the ground and Piql makes the film reel cartridges that go into it.
Piql film processing
The film stock is claimed to have a 500-year-plus life and is ‘self-describing’. In other words, it should survive technology replacements, unlike punched cards, paper tape and various digital tape formats.
A Piql document says: “[its] film is a photo-sensitive, chemically stable and secure medium with proven longevity of hundreds of years.”
“The data is stored offline and will not be affected in case of an electricity shortage or if exposed to electromagnetic pulses. For extra security against digital obsolescence, the storage medium has been designed to be a self-contained medium where instructions is written in human readable text onto the film explaining how to recover the information.”
Piql says: “The source code for the retrieval software is stored on the film at the beginning of the reel in both human readable and digital form. File format specifications for preservation formats will be also written on the film in both human readable and digital form. In this way, only a camera/scanner and a computer of the future will be needed to restore the information in the future.”
Piql film reel and cartridge (left) and reader and writer machines (right).
Data is recorded digitally in frames on the film, with error correction and checksums, using a Piql writer machine. The film reel is stored in a cartridge, verified and then sent up to the AWA or a customer site. Restoration involves passing the film through a reader/scanner.
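The verify step rests on per-frame checksums. A toy illustration using CRC32, which stands in for whatever checksum scheme Piql actually uses:

```python
import zlib

def write_frame(payload):
    # Store the payload alongside its checksum, as a film frame would.
    return {"data": payload, "crc": zlib.crc32(payload)}

def verify_frame(frame):
    # Re-read the data and confirm the checksum still matches.
    return zlib.crc32(frame["data"]) == frame["crc"]

frame = write_frame(b"archived source code")
print(verify_frame(frame))  # True: frame is intact
frame["data"] = b"corrupted"  # simulate physical film damage
print(verify_frame(frame))   # False: the mismatch is detected
```

Detection is only half the story; the error-correction codes on the film then let a reader reconstruct the damaged bits, in the same spirit as the redundancy used on optical discs and digital tape.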
AWA customers include the Vatican Library, the National Museum of Norway, the European Space Agency, the Museum of the Person, Alinari and several global corporations.
Shorts
Acronis, the data protection vendor, is buying DeviceLock, a maker of endpoint device, port control and data leak prevention software for enterprises and government institutions. DeviceLock will operate as a wholly-owned subsidiary.
Data lifecycle manager Aparavi‘s ‘The Aparavi Platform’ is available exclusively on the Microsoft Azure marketplace. The Microsoft and Aparavi partnership includes a joint marketing initiative with Microsoft cloud solution providers.
The Cohesity Data Platform has been certified to run on Fujitsu PRIMERGY servers globally.
Data protector and manager Commvault has launched an enhanced Partner Advantage program, with simpler win, growth, and performance rebates and more flexibility across program tiers.
TrustRadius has given cloud, data centre and endpoint data protector Druva some 2020 Top Rated Awards for disaster recovery and endpoint backup. Druva said the company has also received more than 50 awards in G2’s Summer Reports 2020, including the No.1 SaaS Backup solution.
Iguazio has signed up with SFL Scientific, a data science consulting firm, to sell joint offerings to enterprises looking to apply AI to real life applications. The partners say customers can incorporate AI and MLOps automation to endless applications such as predictive maintenance, real time recommendations, KYC (know your client) and fraud prevention.
Google Cloud has announced the general availability of its Bare Metal Solution, using NetApp arrays, in Virginia, Los Angeles, London, Frankfurt, and Sydney. Customers get physical, dedicated systems with a cloud-like approach to support, service levels, monthly billing, and on-demand flexibility.
Kingston Technology is buying out Phison’s interest in their joint Kingston Solutions, Inc. venture. Phison will continue to focus on technology and R&D and will give Kingston support and service.
DRAM and NAND fabber Micron has announced a DDR5 DRAM enablement program to provide early access to DDR5 technical resources, products and ecosystem partners. Cadence, Montage, Rambus, Renesas and Synopsys are in the program. DDR5 DRAM offers more than twice the bandwidth of today's DDR4 memory.
NetApp has completed its acquisition of Spot, a broker providing cloud cost controls.
The UK’s University of Reading has implemented Nutanix Files, after deciding not to upgrade its legacy NAS systems.
Rackspace Technology has filed an S1 statement proposing an IPO.
Secondary data manager and data protector Rubrik has signed up the UK's Francis Crick Institute as a customer. The institute is Europe's largest biomedical research facility and uses Rubrik software for cyber-resiliency.
HCI supplier Scale Computing has launched the Business Resilience Solution, bringing server virtualization, end-user computing, data protection and threat mitigation into a single package. This includes Virtual Desktop Infrastructure (VDI) and remote-work infrastructure with built-in disaster recovery services, plus Acronis ransomware protection and data backup.
The Serial ATA International Organization (SATA-IO) has published the SATA Revision 3.5 Specification with features that promote greater integration of SATA devices and products with other industry I/O standards. These include Device Transmit Emphasis for Gen 3 PHY, Defined Ordered NCQ Commands and Command Duration Limit Features.
Toshiba has announced the AIC JBOD Model J4078-01 rack-friendly storage chassis. This is 810 mm long, has 78 drive slots, and fits into any existing rack. According to Tosh, chassis for more than 60 drives are typically very long – sometimes more than one metre – and are unable to fit into existing racks.
Toshiba's 78-drive JBOD.
A Forrester Consulting study on behalf of Virtana has revealed organisations that embrace hybrid cloud migration projects can yield a 145 per cent return on investment within three years. Organisations that use Virtana's software in the private cloud may save more than $250,000 in problem resolution and a further $670,000 by driving agility with global capacity management.
People
Data protector Arcserve has appointed Ivan Pittaluga as Chief Technology Officer (CTO). He was previously VP of data protection and governance for Veritas Technologies, with a NetBackup focus.
Former NetApp exec Henri Richard has joined Excelero, an NVMe-oF storage software supplier, as a board advisor.
Rubrik has announced the promotion of Martin Brown to VP of EMEA and the hiring of Jan Ursi as senior director – EMEA channels and technology partners. Ursi joins after a brief stint at UiPath, and a longer haul at Nutanix.
Tom Black, HPE’s new storage boss, is unifying the SimpliVity and Nimble dHCI engineering R&D and laying off an undisclosed number of engineering staff.
HPE said in a statement that the reorganisation will “allow customers to benefit from a consistent and better experience across hybrid cloud, AIOps, support, automation, and lifecycle management for all of their HCI use cases. As part of this plan, there will be a reduction in the HPE HCI headcount, but we’re not disclosing the number of the impacted team members”.
SimpliVity and Nimble dHCI are based on ProLiant servers and managed through InfoSight software. Both are also-rans in the HCI market. It looks as if HPE is saving money by making SimpliVity a variant of Nimble dHCI.
HPE has decided to focus the SimpliVity HCI (hyperconverged infrastructure) on two markets: small and medium business customers and the developing internet edge, where the single box approach fits well. The company is calling this 'edge-optimised HCI'.
Tom Black.
This is the first big product move by Tom Black, who was appointed SVP and GM of HPE’s Storage business unit in January. He is a storage outsider, coming from HPE’s acquired Aruba networking business. His 20-year career in networking encompasses a stint as VP Engineering for HP’s networking business, and engineering VP roles at Arista and Cisco.
We’re told he has experience in developing high-performance and scale-out systems with a good customer end-to-end experience.
History
SimpliVity, and HPE overall, is an HCI also-ran, when compared with Dell EMC-VMware and Nutanix.
HCI Market with IDC numbers to Q1 2020.
SimpliVity, based in Westboro, Mass, was founded in 2009, and developed an all-in-one data centre lego box for building IT infrastructure. This combined compute, a hypervisor, storage and basic networking in its single hyperconverged enclosure, and represented a simpler way of scaling IT infrastructure than buying separate servers, storage arrays and networking switches.
SimpliVity 380 box.
Notably, SimpliVity used a hardware acceleration ASIC for data reduction, rendering it a proprietary – as opposed to commodity hardware – system.
Competitors included Nutanix and Springpath, which was bought by Cisco. EMC-owned VMware developed its vSAN product in 2013. Dell bought EMC in 2015 and the HCI industry-dominating EMC VxRail product, based on vSAN, launched in 2016.
HPE bought the SimpliVity business in January 2017 for $650m. It also bought Nimble in March 2017 for $1bn as a smaller SAN array than its 3PAR business but one with the category-defining InfoSight predictive analytics management system.
HPE launched the Nimble disaggregated or dHCI system, with compute nodes sharing an external Nimble storage array, a year ago. At the time, the company said the system scaled compute and storage separately and so was better suited to enterprise workloads than SimpliVity. It was also commodity hardware-based. The system will run mixed workloads including business-critical applications at larger scale than edge-optimised HCI.
Nimble dHCI product.
SimpliVity development continued with a value-for-money focus for smaller businesses. For example, HPE launched a 1U server-based 325 product enclosure using AMD's EPYC processor, at the same time as the Nimble dHCI product hit the streets.
SimpliVity continued its low-end focus in October 2019 with a revamped 1U SimpliVity 325, which used a second-generation AMD EPYC 7002 series processor and added InfoSight monitoring.
Qumulo has raised $125m in an E-series round at a valuation of $1.2bn.
This takes total funding to $351m for the data storage startup, which was founded in 2012. Qumulo will spend the new cash on global expansion, strategic partnerships and product development.
Qumulo’s pitch is that it sits at the intersection of two mega trends, namely the digitalisation of our world and the advent of cloud computing. Organisations are using more file data and need faster access and they are increasingly looking to put their file stores in the public cloud.
CEO Bill Richter said in a prepared statement that Qumulo manages more than 150 billion customer files. "This latest investment is a great recognition of our category leadership … We see rapidly increasing demand driven from content creators spanning artists creating Hollywood blockbusters, to researchers solving global pandemics, to engineers putting rockets into space."
Qumulo NVMe filer.
Qumulo competition
Dell EMC with PowerScale (formerly known as Isilon) and NetApp are the dominant mainstream enterprise file storage suppliers. Qumulo, started by Isilon veterans, competes for this business, and claims greater performance, more scale and better public cloud and management facilities.
Qumulo competitor WekaIO is a software-only startup, founded in 2013, that has raised $66.7m. It provides parallel access, high performance computing filesystem software, and has racked up OEM deals with Cisco, Dell EMC, Hitachi Vantara, HPE, Lenovo and Penguin Computing.
Catalogic copy data management software now works with HPE Nimble storage arrays. This has been a long time coming – Catalogic said this was a roadmap objective in 2016.
Copy data management (CDM) is a way of automating data copies, typically of databases and virtual machines, for test and development, compliance and replication, with copy deletion when required to prevent copy data sprawl.
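The copy lifecycle described above – create copies for a stated purpose, then delete them when their retention expires – can be sketched in a few lines. This is a minimal illustration of the CDM concept, not Catalogic's implementation; the class and field names are invented.

```python
from dataclasses import dataclass


@dataclass
class Copy:
    source: str        # e.g. a database or VM name
    purpose: str       # test/dev, compliance, replication...
    created_at: float  # creation time, seconds since epoch
    ttl: float         # retention period in seconds


class CopyManager:
    """Track data copies and delete them when retention expires."""

    def __init__(self):
        self.copies = []

    def create(self, source, purpose, now, ttl):
        self.copies.append(Copy(source, purpose, now, ttl))

    def prune(self, now):
        """Remove expired copies; this is what prevents copy data sprawl."""
        kept = [c for c in self.copies if now - c.created_at < c.ttl]
        removed = len(self.copies) - len(kept)
        self.copies = kept
        return removed
```

A real CDM product layers this policy logic over array snapshots and clones rather than full copies, so each "copy" is cheap to create and instant to delete.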
Catalogic's ECX software builds a central catalogue of data items and uses a storage array's snapshot, replication and cloning features as the basis for its copy data management activities.
Sathya Sankaran.
Sathya Sankaran, COO of Catalogic Software, has provided an announcement quote: “Catalogic ECX will extend [Nimble Storage] simplicity to the full application stack providing app-aware storage snapshots, replication, and instant cloning of VMs and Databases.”
The company says ECX Nimble support makes restores faster, database cloning easier and self-service copy data provisioning possible in hybrid cloud environments.
Catalogic’s ECX Nimble copy automation
Catalogic uses a storage controller-based licensing system with no limitations on data size or the number of database instances.
The Nimble update for Catalogic ECX will be generally available on July 31 and will support the full line of Nimble Storage all-flash systems. Near-term roadmap features include support for asynchronous replication and HPE Cloud Volumes.
Catalogic also provides DPX endpoint and server backup and recovery software, and, through a partnership, ProLion, a set of tools for NetApp file users. It started out as the Syncsort data protection business which underwent a private equity and management buyout in 2013. Since then the company has expanded its remit beyond copy data management into the data protection market.
Competitors include three VC-backed startups: Actifio, Cohesity and Delphix. Actifio has raised $311.5m, Cohesity has raised $660m and Delphix has raised $124m. They far outgun Catalogic, which raised $8m in debt financing in 2015, in spending power for product engineering and sales and marketing.