
Qumulo accumulates $125m


Qumulo has raised $125m in an E-series round at a valuation of $1.2bn.

This takes total funding to $351m for the data storage startup, which was founded in 2012. Qumulo will spend the new cash on global expansion, strategic partnerships and product development.

Qumulo’s pitch is that it sits at the intersection of two mega trends, namely the digitalisation of our world and the advent of cloud computing. Organisations are using more file data and need faster access and they are increasingly looking to put their file stores in the public cloud.

CEO Bill Richter said in a prepared statement that Qumulo manages more than 150 billion customer files. “This latest investment is a great recognition of our category leadership … We see rapidly increasing demand driven from content creators spanning artists creating Hollywood blockbusters, to researchers solving global pandemics, to engineers putting rockets into space.”

Qumulo NVMe filer

Qumulo competition

Dell EMC with PowerScale (formerly known as Isilon) and NetApp are the dominant mainstream enterprise file storage suppliers. Qumulo, started by Isilon veterans, competes for this business, and claims greater performance, more scale and better public cloud and management facilities.

Qumulo competitor WekaIO is a software-only startup, founded in 2013, that has raised $66.7m. It provides parallel access, high performance computing filesystem software, and has racked up OEM deals with Cisco, Dell EMC, Hitachi Vantara, HPE, Lenovo and Penguin Computing.

Google is also a competitor, thanks to its 2019 acquisition of Elastifile.

Catalogic copy data management adds support for HPE Nimble

Catalogic copy data management software now works with HPE Nimble storage arrays. This has been a long time coming – Catalogic said this was a roadmap objective in 2016.

Copy data management (CDM) is a way of automating data copies, typically of databases and virtual machines, for test and development, compliance and replication, with copy deletion when required to prevent copy data sprawl.

Catalogic’s ECX software builds a central catalogue of data items and uses a storage array’s snapshot, replication and cloning feature as the basis for its copy data management activities.
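The workflow described above (catalogue each copy, drive it from array snapshot features, and expire copies to prevent sprawl) can be sketched in a few lines of Python. The class and method names are hypothetical, for illustration only, and stand in for what a product like ECX does against a real array API:

```python
from datetime import datetime, timedelta

# Conceptual sketch of copy data management: a central catalogue records
# each copy and expires stale ones to prevent copy data sprawl.
# All names here are hypothetical, for illustration only.
class CopyCatalogue:
    def __init__(self):
        self.copies = []

    def record_snapshot(self, source, ttl_days):
        # A real CDM product would invoke the array's snapshot/clone API here.
        self.copies.append({"source": source,
                            "created": datetime.now(),
                            "ttl": timedelta(days=ttl_days)})

    def expire(self, now=None):
        # Delete copies past their retention period; return how many went.
        now = now or datetime.now()
        kept = [c for c in self.copies if now - c["created"] < c["ttl"]]
        removed = len(self.copies) - len(kept)
        self.copies = kept
        return removed
```

The central catalogue is the key design point: because every copy is registered in one place with a retention policy, stale copies can be found and deleted automatically rather than accumulating unnoticed.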

Sathya Sankaran

Sathya Sankaran, COO of Catalogic Software, has provided an announcement quote: “Catalogic ECX will extend [Nimble Storage] simplicity to the full application stack providing app-aware storage snapshots, replication, and instant cloning of VMs and Databases.”

The company says ECX Nimble support makes restores faster, database cloning easier and self-service copy data provisioning possible in hybrid cloud environments.

Catalogic’s ECX Nimble copy automation

Catalogic uses a storage controller-based licensing system with no limitations on data size or the number of database instances.

The Nimble update for Catalogic ECX is generally available on July 31 and will support the full line of Nimble Storage all-flash systems. Near term roadmap features include support for asynchronous replication and HPE Cloud Volumes.

Catalogic also provides DPX endpoint and server backup and recovery software, and, through a partnership, ProLion, a set of tools for NetApp file users. It started out as the Syncsort data protection business which underwent a private equity and management buyout in 2013. Since then the company has expanded its remit beyond copy data management into the data protection market.

Competitors include three VC-backed startups: Actifio, Cohesity and Delphix. Actifio has raised $311.5m, Cohesity has raised $660m and Delphix has raised $124m. They far outgun Catalogic, which raised $8m in debt financing in 2015, in spending power for product engineering and sales and marketing.

Nasuni and VAST Data gallop into the storage unicorn herd

Coldago has named 15 storage unicorns – startups valued at more than $1bn. That’s three more than when the research firm last compiled this list six months ago.

This update on Coldago’s January 2020 league table sees Datrium, Nasuni and VAST Data entering for the first time. However, Datrium’s appearance will be short-lived as it is in the process of being acquired by VMware.

Coldago’s storage unicorn line-up is:

  • Acronis – data protection integrated with security
  • Actifio – copy data and secondary data management
  • Barracuda Networks – security and data protection, owned by private equity
  • Cohesity – backup and secondary data management adding file services 
  • Datrium – HCI and disaster recovery as a service 
  • DDN – HPC and enterprise block, file and object storage
  • Druva – data protection as a service with edge, data centre and in-cloud support
  • Infinidat – high-end enterprise storage arrays using disk with memory caching
  • Kaseya – IT infrastructure management and storage vendor based on acquisitions
  • Nasuni – cloud file storage services and collaboration
  • Qumulo – scale-out filer software
  • Rubrik – backup and secondary data management 
  • VAST Data – single-tier QLC flash-based storage array with smart software
  • Veeam Software – data protection with core virtual server offering moving enterprise and cloudwards
  • Veritas Technologies – acquired and then spun-out data protection and storage management vendor owned by private equity

Actifio, Cohesity and Rubrik form a secondary data management supplier group.

Acronis, Druva, Veeam and Veritas are a data protection quartet, Cohesity and Rubrik use data protection to fuel their secondary data management ambitions and Barracuda also has data protection functionality. Actifio also uses backup as data fuel. It says it pioneered the concept of re-using the backups for various use cases and all of its customers use Actifio for backups.

DDN, Infinidat and VAST Data share a focus on hardware and enterprise storage arrays. VC-funded Infinidat and VAST Data are startups with a singular focus on arrays while privately-owned DDN has expanded beyond its HPC roots to move into enterprise storage arrays via acquiring Tintri, Nexenta and WD’s Intelliflash business.

Kaseya is a mini-conglomerate that has grown rapidly through acquisitions such as Unitrends, Spanning Cloud Apps, RapidFire Tools, IT Glue, ID Agent and BullPhish ID. Unitrends and Spanning both supply data protection.

Nasuni started out as a cloud file gateway supplier offering file collaboration services. It has grown into a public cloud file services vendor and aims to replace on-premises NAS filers.

Qumulo is also in the filer business, originally offering scale-out, on-premises hardware and software. It now has more of a software and cloud focus.

Coldago suggested in January 2020 that Actifio could IPO in the next few quarters. The Covid-19 pandemic has probably put a spanner in those works.

Hitachi Vantara joins WekaIO fast filer OEM fan club


Hitachi Vantara has signed up with WekaIO to OEM the startup’s distributed file system and management software.

Update: Two HCP tables and HCP upgrade summary chart added detailing main offerings and HW specs. More detailed HCP performance data added. 16 July 2020.

WekaIO gives the company a higher-performing file system that supports parallel access and native NVMe drive connectivity. Hitachi Vantara will tightly couple the software to its newly updated HCP (Hitachi Content Platform) object store.

The company reckons it can sell a Hitachi Vantara-Weka combo into artificial intelligence, machine learning, high-performance computing and analytics applications in several industries.

It’s all about speed. Brian Householder, outgoing head of digital infrastructure at Hitachi Vantara, said: “Hitachi Vantara is helping our customers maximise their infrastructure advantage by delivering significant performance improvements when accessing and connecting their data to make faster and more accurate decisions.”

Hitachi Vantara has corralled IDC’s Amita Potnis, research director, Infrastructure Systems, for an analyst quote: “CIOs and IT professionals … are now looking at object storage to support new use cases and performant workloads. These organisations are also evaluating distributed file solutions to handle scale and performance to support high-performance computing, real-time analytics and AI.”

Hitachi Content Platform

According to Hitachi Vantara, traditional NAS, primary and cloud-native workloads are transitioning to object storage to meet high performance requirements. Object storage is generally disk-based and thought to be slower than file access to disk-based filers. However, MinIO and OpenIO have both shown that fast, flash-based object storage is a reality.

The scale-out HCP object system supports NFS and CIFS file access and tiers data to public cloud object stores. Weka delivers the fast file access speed while HCP will provide a virtually bottomless file/object store.

Hitachi Vantara said the HCP hardware update will better support next-generation unstructured workloads with performance-optimised, all-flash HCP nodes. These deliver 3.4 times more throughput over Amazon’s Simple Storage Service (S3) protocol, resulting in up to 34 per cent lower costs.

Updated S Series storage nodes have a more than 3x increase in small object read and write performance, as fast as 40,000 objects per second.

They deliver a >2x increase in large object read and write performance. The HCP S31 node can achieve up to 8,600 MiB per second when writing large objects (100MiB). They offer nearly 3x more capacity in the same rack space than the previous generation and scale up to more than 15PB of disk capacity in a single rack, allowing more than an exabyte of data on-premises.

Hitachi Vantara said an unnamed financial services company saw HCP performance improvements when using the flash gear of up to 300 per cent, with some jobs achieving 1,200 per cent improvement over the existing – and unidentified – traditional object storage system.

The data storage vendor claims “a typical Fortune 500 company getting a 10 per cent increase in data accessibility can see more than $65m in additional net income”. The source for this is a 2017 Forbes article which cites Forrester research.

Hitachi Vantara has not provided price or availability information for its updated HCP kit or its HCP-Weka combo offering. Read a datasheet for more information and a blog for more background.

WekaIO

WekaIO has developed a massively scalable and fast file system for high performance computing and large scale enterprise use cases. Cisco, Dell EMC, HPE, Lenovo and Penguin Computing are among partners OEMing, reselling or jointly selling its software.

IBM and NetApp are notable exceptions. IBM has its own Spectrum Scale parallel file system which directly competes with Weka, while NetApp sees no need to partner with Weka to get into HPC or into the enterprise AI market, which is a Weka focus.

Hitachi Vantara has its own basic or mid-range Hitachi NAS Platform file system technology, and the Virtual Storage Platform N Series, which offers file services layered on its VSP enterprise SAN array.

Datto files confidentially for $1bn IPO

Backup service provider Datto has confidentially filed for an IPO that could value the company at more than $1bn, according to a Bloomberg report.

Datto offers its software via managed service providers to small and medium-sized businesses. Founded in 2007, the company raised $100m in VC funding and acquired Backupify in 2014 and Open Mesh in 2017.

Austin McChord

Vista Equity Partners, a private equity firm, then bought Datto in October 2017. Vista merged Datto with its Autotask business to create a unified managed service platform for IT management, data protection and business continuity.

Tim Weller succeeded founder Austin McChord as Datto CEO after McChord stepped down in October 2018. McChord has various board memberships and VC funding roles outside his Datto board membership.

Morgan Stanley, Bank of America, Barclays Plc and Credit Suisse Group AG have been hired as IPO underwriters, according to Bloomberg. IPO timing will be influenced by the Covid-19 pandemic effects on the US economy and may not take place.

Tintri delivers database-aware storage

Tintri has released an update to its VM-aware storage platform that delivers the same level of management granularity for SQL databases. Benefits include making it easier to clone databases and the ability for DBAs to directly provision more storage when needed.

SQL Integrated Storage builds on technology Tintri developed for virtual machines that made its VMstore arrays a popular choice in VMware environments. The latest release integrates storage directly with SQL Server, and simplifies and automates storage management tasks for SQL Server databases, and hence database administrators (DBAs).

This update has been on the cards for some time, and has been openly discussed on Tintri’s own blog.

The company points out that a single virtual machine may contain multiple databases, and this can complicate managing each one and its storage resources.

Send in the clones

“When you have one VM with several databases inside, you can have challenges of managing at the VM level. If you want to clone a database, for example, you have to clone the entire VM, operating system and everything. If you want to look at the performance or you want to isolate performance per database, that’s something that’s impossible,” says Tintri CTO Tomer Hagay.

Tintri is already starting from a point where it has advantages over other storage platforms in this respect, he says, because it already manages at the VM level. But now, with the new software release, “we’re looking inside those VMs and integrating directly with the database layer to deliver SQL integrated storage.”

As with VMstore and virtual machines, the update presents storage capacity as a single volume to support databases, doing away with the need to divide it into LUNs and RAID groups. This enables storage administrators to get a better overview of what is happening with storage on a per-database level, such as the number of instances of each database and its copies.

Integrating at the database level allows Tintri to look at all the objects that the SQL Server knows about, such as the databases and right down to the database file level. This is done without requiring any software agents to be deployed on the SQL Server. Instead, Tintri is using SQL Server Extended Events, a system created for monitoring and collecting different events and system information from SQL Server.

This provides per-database visibility across the entire SQL Server database infrastructure, as well as database-level control of capabilities such as snapshots for data protection and replication. Snapshots are zero-impact, as they are performed without having to pause IO traffic, according to Tintri.

Tintri also claims that it is the only vendor capable of delivering QoS guarantees for each individual SQL Server database with resources being automatically allocated as necessary as IO requirements change for each database. It can provide real-time performance and capacity metrics for each database and database file, including identifying any database suffering from latency issues.

In addition, it delivers a greater measure of self-service empowerment and self-sufficiency for DBAs, who will have the ability to provision more storage when required, rather than having to file a support ticket and wait for a storage admin to get around to fulfilling the request, according to Tintri.

SQL Integrated Storage comes as an update to the Tintri OS that powers the VMstore arrays, and does not require additional licenses.

Omdia identifies HCI leaders. But where is Nutanix?

Omdia, a tech analyst firm, forecasts the global hyperconverged infrastructure (HCI) market will grow from $11bn in 2019 to more than $60bn in 2023.

Update: Omdia analyst Roy Illsley replies to question and adds market analysis charts and comments, 15 July 2020.

It has taken a new slant on rating hyperconverged infrastructure (HCI) vendors, dividing suppliers into leaders and challengers. Omdia classifies Cisco, Dell, NetApp and VMware as leaders because they “recorded more than 60 per cent of sub-category scores above the cohort average compared to challengers with less than 50 per cent.”

Also, the “clear observation is that the difference between a hardware or software approach to HCI does not show one is better than the other,” says Roy Illsley, Omdia chief analyst, IT & enterprise.

Little sets Cisco, Dell, and VMware apart at the moment, according to Omdia. NetApp, the fourth leader, is slightly separated from this group. But where is Nutanix in this assessment? It seems an odd omission and we have asked Omdia why Nutanix has been excluded.

Update: Illsley said: “Nutanix and Huawei both said they were taking part, but both had resource issues and could not complete the data gathering phase. From the previous market radar in 2019 I would have expected Nutanix to be a rival to VMware.”

This Omdia diagram shows the difference between the solutions remains relatively small, with the standard deviation between the leader and the lowest-ranked vendor being just over 1.2 in the unweighted average.

Omdia’s chart shows vendors positioned in a 2D space by assessments of their overall execution and technology, with the circle or bubble size denoting their market impact.

In terms of market impact as generally understood – i.e. revenue share – Omdia’s assessment is quite different from IDC’s storage tracker findings.

IDC HCI vendor market revenue share numbers.

Update: Illsley said: “Market impact is a combination of the market share, NPS, and some other measure such as global coverage by region and vertical.”

Omdia classifies Fujitsu, Hitachi Vantara, HPE and Lenovo as challengers. HCI vendors such as DataCore, Scale Computing and Pivot3 do not appear. There is no distinction in the chart between HPE’s SimpliVity and its Nimble dHCI products. 

Update: Illsley said: “Scale and Pivot3 were invited, but both said they failed to meet the inclusion criteria in terms of the number of customers.”

We’re told the separation between the leaders and challengers in the HCI solutions market was a maximum of 1.2 standard deviations on average, demonstrating that the HCI market is differentiated by a number of small features and capabilities.

Example of a bell curve and standard deviations.

A standard deviation measures how much an item or group of items differs from the mean value of the whole group on some measurement scale, typically with a bell-shaped curve of values. It certainly sounds quantitatively and statistically rigorous, albeit based on analysts’ qualitative assessments of each vendor’s position on whatever criteria were used.
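As a quick illustration of the statistic Omdia is using, Python’s statistics module can compute the spread of a set of hypothetical vendor scores. The numbers below are invented for illustration, not Omdia’s data:

```python
import statistics

# Hypothetical vendor scores on a 0-10 scale; invented numbers, not Omdia's.
scores = [7.8, 7.5, 7.2, 7.0, 6.6]

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)   # population standard deviation

# A small deviation relative to the mean indicates the vendors cluster
# tightly, which is what Omdia's "just over 1.2" separation figure suggests.
print(f"mean={mean:.2f}, sd={sd:.2f}")
```

Here the scores sit within about 0.4 of the mean on average, so a separation of a single standard deviation between two vendors would still be a small absolute difference.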

Omdia has not published its assessment criteria, but mentions latency, data sovereignty and scale. It declares the HCI market remains immature, but leadership is forming. Both IDC and Gartner have identified HCI leaders in their research for many quarters.

Gartner HCI vendor view, November 2019. See the leaders?

The HCI report is part of Omdia’s IT Ecosystem & Operations Intelligence Service and is available to subscribers only.

Update: Market analysis

Omdia tracks the HCI market in two market trackers: the software-defined storage tracker and the server market tracker. In terms of the software-defined storage market, HCI (appliances + expansion) is a fast-growing segment, approaching $44bn by 2024. Figure 1 below shows how the different sub-segments of the software defined storage market will change over the 2018–2024 period.

Figure 1: Software-defined storage market

HCI appliances (Omdia defines an HCI appliance as a single server and integrated software stack that typically includes multiple compute nodes) are the dominant form of HCI and will be worth over $40bn by 2024. HCI expansion (Omdia defines HCI expansion as a storage device in an enclosure without a storage controller where data storing coordination is provided by an attached server) will become a $2.5bn market by 2024.

In terms of the server market, HCI server hardware remains a growing segment but will be less significant than it is in the SDS market. Omdia’s forecasts suggest that by 2024 the HCI server hardware will be approaching $19bn. By comparison, rack-mounted server hardware will be worth over $48bn in 2024, so while HCI server hardware is growing in importance within the server market, the standard rack-mounted server will remain the dominant form in the server market out to 2024 (see Figure 2 below). 

Figure 2: Server market

Omdia is the newly rebranded name for various tech analyst businesses owned by Informa, the UK events and publishing giant. These include Ovum, Heavy Reading and Tractica and a technology research portfolio acquired from IHS Markit.

VCs fill Nasuni war chest for acquisitions and other adventures


Cloud file storage and collaboration startup Nasuni has scored $25m in additional funding plus a $15m credit facility for acquisitions and strategic projects.

The valuation is undisclosed but is higher than the previous capital raise, Nasuni said. The company bagged $25m last year and says it hasn’t dipped into this yet. Total funding stands at $169m.

CEO Paul Flanagan said today: “Closing major funding during these times of economic uncertainty is a testament to the promise that our investors see in Nasuni.”

What’s got investors fired up is that Covid-19 lockdowns and stay-at-home orders mean organisations have to deliver file access to workers’ home offices and remote sites. Step forward Nasuni with its cloud file services and remote access.

Andres Rodriguez, Nasuni founder and CTO, says: “As all organisations continue to navigate these uncertain times, one thing is becoming clearer than ever before – businesses need the cloud. Physical data centres have become a liability.”

Board member David Campbell, a Goldman Sachs MD, said in a prepared statement: “The pandemic has forced enterprise IT to accelerate moving their file infrastructure to the cloud, and Nasuni allows them seamless data access globally from both the cloud and the data centre. Nasuni is in a great position to expand and become the market leader not just for cloud file storage, but for all enterprise file storage.”

He said Nasuni’s data capacity under management is growing over 50 per cent annually and the company has clocked up a net expansion rate of greater than 115 per cent in each of the last five years.

Software update

Nasuni today released V8.8 of Nasuni’s Cloud File Services. The software update is certified with Windows Virtual Desktop and includes cloud migration and edge capabilities and added ransomware protection. 

New edge availability and health monitoring capabilities enable Nasuni to detect and correct issues before user file access is affected. Edge caching appliances can also run in cloud regions, eliminating the need to cache file data on-premises. 

There are active-active global file locking servers across APJ, EMEA and the USA, increasing the resiliency and performance of remote file locking and file synchronisation.

V8.8 provides more performance for Nasuni file shares hosted in Azure through support for Azure Ultra Disk and Premium SSD tiers.

Nasuni supports Amazon’s Snowball Edge device to aid file migration to AWS. It says its Continuous File Versioning technology can recover untarnished files in hours or minutes after a customer suffers a ransomware attack.

Kioxia’s wafer-scale SSD. What about the packing density?

Kioxia has floated wafer scale SSDs as a much cheaper manufacturing method. As well as costing less than today’s data centre class SSDs, Kioxia says wafer-scale SSDs will deliver millions of IOPS.

But how does turning the wafer into an SSD affect performance and packing density?

Update: Jim Handy views on costs added. 14 July 2020.

Kioxia’s chief engineer Shigeo Oshima discussed the company’s ideas about wafer scale SSDs last month in a VLSI Symposium 2020 session. This was reported first in a Japanese media outlet but certain details are sketchy.

Comparative capacities

Blocks & Files has made some calculations on our trusty back of an envelope and we wonder how wafer-scale SSDs would be any better than ruler-format SSDs.

A NAND wafer is circular, 300mm in diameter, and holds a maximum of 700 dies. At the 200+ layer level these would be 512Gb dies in Samsung’s case. A wafer would hold 700 x 512Gb = 358,400Gb, which adds up to 44,800GB or 44.8TB.
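That back-of-envelope sum can be checked in a couple of lines:

```python
# Back-of-envelope wafer capacity, using the figures in the text.
DIES_PER_WAFER = 700        # maximum dies on a 300mm wafer
DIE_CAPACITY_GBIT = 512     # Gbit per die

total_gbit = DIES_PER_WAFER * DIE_CAPACITY_GBIT   # 358,400 Gbit
total_gbyte = total_gbit / 8                      # 44,800 GB
total_tbyte = total_gbyte / 1000                  # 44.8 TB
print(total_tbyte)
```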

NAND wafer.

Compare this to a 2.5-inch Samsung PM1643 SSD and Kioxia’s PM6 SSDs, which both hold 30.72TB.

Upcoming ruler format SSDs are longer and slimmer than the 2.5-inch U.2 format SSD and more can be packed into a 1U or 2U server box. For example, Intel’s EDSFF E1.L ruler is 325.35mm long, 9.5mm wide and 38.6mm high, and holds 15.36TB. Thirty-two units fit across the front of a 1U rackmount enclosure, giving it 32 x 15.36TB capacity – 492TB.

Supermicro 1U chassis with 32 Intel ruler SSDs. Note the power, cooling and connectivity units at the rear of the rack shelf.

Rack packing density

A standard rack is 19 inches (482.6mm) wide, including the ears that fit either side onto the rack frame posts. Going crosswise, this means only one 300mm diameter wafer-scale SSD can fit into the rack. The rack is 36 inches (914.4mm) deep, so two 300mm wafer SSDs could certainly be placed one behind the other, and possibly three if packed closely.

Fitting wafer-scale SSDS into a rack

It’s easier to fit rectangular items into a rectangular box than circular items. As an analogy think about how vinyl records and DVDs came in oblong packaging to fit more easily into bookshelves and drawers. 

Let’s compare the 1U Supermicro storage shelf above, with its 32 rulers, with a similar storage enclosure using two wafer SSDs in its front, and similar power, cooling and connectivity units at the rear of the rack shelf.

Fitting two wafer-scale SSDs, power, cooling and connectivity units into a 1U rack shelf.

This 1.75-inch high 1U rack enclosure could hold more than two 300mm NAND wafers by stacking them. They would have to be cooled and, assuming 0.3 to 0.4-inch cooling air gaps between the wafers, you could stack four in 1U, so 4 x 44.8TB = 179TB.

Two stacks, one behind the other, give us 358TB. By our guesstimate, the wafer SSD packing density is about 27 per cent lower than the ruler SSD packing density. So wafer-scale SSDs appear to be at a disadvantage when it comes to packing density.
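The guesstimate works out as follows, using the 1U capacities above:

```python
# Comparing 1U packing density: rulers vs stacked wafers, per the text.
ruler_tb = 32 * 15.36      # 32 E1.L rulers across a 1U front: ~491.5 TB
wafer_tb = 2 * 4 * 44.8    # two stacks of four wafers: 358.4 TB

shortfall = 1 - wafer_tb / ruler_tb
print(f"wafer density is {shortfall:.0%} lower than ruler density")
```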

Of course, acquisition cost is just one element of total cost of ownership, which also includes power, cooling and data centre space considerations.

Performance

A 30TB U.2 SSD using 512Gb dies would have 469 dies and the dies can be accessed in parallel. Kioxia’s Oshima said wafer-scale SSDs could deliver millions of IOPS and enormous performance. So-called “super multi-probing technology” would enable hundreds of chips on the wafer to be accessed at the same time.
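The die count is simple arithmetic:

```python
import math

# Die count for a 30TB U.2 SSD built from 512Gbit dies, per the text.
drive_gb = 30_000        # 30TB expressed in GB
die_gb = 512 / 8         # one 512Gbit die holds 64GB

dies = math.ceil(drive_gb / die_gb)
print(dies)  # 469 dies, each of which the controller can address in parallel
```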

In other words, the raw dies on the wafer would need IO access channels and a wafer-level controller to operate the wafer drive. In principle, this is what happens with SSDs today. IO channels are added to the dies, and connect them to a controller. The controller can access individual dies in parallel and so increase IO performance. This is limited by the bandwidth of the PCIe channel hooking up the drive to the host server’s memory. 

To gain a performance advantage, wafer scale SSDs would require significantly more parallelism – and hence more IOPS and bandwidth – than an M.2, U.2 or ruler format SSD.

Costs

Jim Handy of Objective Analysis watched a recording of Oshima’s presentation. With reference to the wafer-scale SSD section he told us: “There wasn’t a lot of substance to that part of the presentation. In essence the speaker said that they can probe every chip on the entire wafer at once, and somehow this will allow SSDs to be built without any of today’s intervening steps between the wafer and a storage system.

“He said that this will allow costs to be reduced to ‘as little as’ 20 per cent of the cost of today’s flash storage. He then points out that HDD’s cost is ~10 per cent that of flash, from which the audience would naturally conclude that HDD is threatened.

“The real crux of the story is how they can get the cost to that 20 per cent level. This is a little hard for me to understand since SSDs sell for as little as 9 cents/GB today, and the cost to make a wafer of 96-layer TLC is about half of that, or 4.5 cents. (64-layer TLC is closer to 6 cents.)

“If SSDs only cost twice as much as it costs to produce a wafer of NAND flash, then it’s hard to see how anyone would be able to get an SSD’s or even a flash array’s cost down to 20 per cent of current levels simply by reducing packaging costs.”
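Handy’s arithmetic can be checked directly using the figures he quotes: the claimed 20 per cent target lands below the raw wafer cost itself, which is why packaging savings alone cannot explain it:

```python
# Sanity-checking Jim Handy's objection using the figures he quotes.
ssd_price_cents_gb = 9.0                        # cheapest SSDs today: ~9 cents/GB
wafer_cost_cents_gb = ssd_price_cents_gb / 2    # 96-layer TLC wafer: ~4.5 cents/GB

target_cents_gb = 0.20 * ssd_price_cents_gb     # Kioxia's claimed level: 1.8 cents/GB

# The target sits below the raw cost of the NAND itself, so eliminating
# packaging steps alone cannot reach it.
print(target_cents_gb < wafer_cost_cents_gb)  # True
```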

Net:Net

To conclude, we think wafer-scale SSDs are an interesting idea. But they have lower rackmount packing density than rectangular standard format SSDs. It is unclear how the merits of the technology, as trailed by Kioxia, overcome this disadvantage. We have contacted the company and will update the article if we find out more.

This week in storage: wafer-level SSDs and Kaminario takes Silk

Kioxia floats concept of ‘wafer-level SSDs’

Kioxia, formerly Toshiba Memory, has talked about plans for a new way of delivering flash storage it calls “wafer-level SSD”.  The concept is to skip all the dicing, assembly, packaging, and SSD drive assembly seen in conventional flash memory and SSD manufacturing methods and simply mount the entire wafer to deliver a huge chunk of data storage with high performance.

Using what Kioxia calls “super multi-probing technology”, hundreds of chips in a single wafer can be probed – presumably meaning wired up – and operated in parallel, which could enable enormous performance and millions of IOPS. The plans were discussed by Kioxia Chief Engineer Shigeo Oshima in a presentation at VLSI Symposium 2020.

Kaminario becomes Silk

All-flash array pioneer Kaminario has changed its company name to Silk. In a statement on its new site, the firm explained that the cloud had changed the way most organisations use storage, and so it was changing its mission to help businesses get more out of their cloud data while keeping a lid on spending.

“We searched for a name that tells the story of how we help businesses become stronger yet more flexible … so we chose the one unique element in nature that is all those things. Silk,” the statement says.

As Kaminario, the firm adapted its VisionOS storage software to operate on AWS, Azure and the Google Cloud Platform earlier this year.

Silk says that its Cloud Data Platform “fits between your full application stack and cloud data infrastructure, making everything run smarter. Without changing your environment.” It claims to reduce cloud costs by 30 per cent by optimising cloud data use through techniques such as real-time data reduction and thin provisioning.

A pair of shorts

Unstructured data management firm Data Dynamics has concluded a $9m financing round to help to scale its sales and marketing teams, and expand its product portfolio. The company says its tools are deployed in over 25 per cent of the Fortune 100, delivering hundreds of millions of dollars in cost savings and risk mitigation.

Integrator Diaway has unveiled an edge server aimed at high performance workloads. The Edgebox combines AMD EPYC 7002 Series processors, Western Digital storage (Ultrastar DC HC500 Series and NVMe SN640 drives) with enterprise-grade networking and network management from Juniper Networks. Edgebox is a single-socket 2U server designed to handle workloads like software-defined storage, virtualization, hyper-converged infrastructure, and data analytics.

People

The Storage Networking Industry Association (SNIA) has appointed Dr. J Michel Metz to serve as Interim Chair of the SNIA Board of Directors. Metz succeeds David Dale, who stepped down from the role as SNIA Board Chair after serving in that position for seven years. This appointment is effective until SNIA’s Annual Members’ Meeting in October 2020, when annual officer elections will be held.

Bobby Soni is to succeed Brian Householder as President of Hitachi Vantara’s Digital Infrastructure Business Unit (DIBU), effective August 3, 2020. Householder is leaving Hitachi after 17 years with the company. Soni has served for the past year as chief operating officer and was previously the company’s chief solutions and services officer, and senior vice president of cloud, global services and emerging solutions.

UK digital archiving specialist Arkivum has appointed Chris Sigley as its chief executive. Sigley, who joined the UK-headquartered company in June, will lead Arkivum as it aims to achieve further growth in its specialist market. He was previously at data management specialist Redstor for 15 years, most recently as chief sales officer.

LTO tape shipments hit capacity record in 2019

A record 114,079 petabytes of total LTO tape capacity (compressed) shipped during 2019, according to the latest report from The LTO Program Technology Provider Companies (TPCs), while unit shipments continued to decline.

This increase in capacity shipments comes after a decline during 2018, which was due to a pause in manufacturing. Once tape cartridge production resumed, capacity shipments rose to record amounts in 2019, according to the TPCs: HPE, IBM and Quantum.

The pause in manufacturing was due to a patent dispute between the two firms that manufacture the actual tape media, Fujifilm and Sony, which crippled the global supply of LTO-8 tape media until a global patent cross-licensing deal was agreed last year. The TPCs are licensees of LTO-8 technology, officially certifying media supplied by Sony and Fujifilm.

This appears to be water under the bridge for the TPCs, which are instead keen to promote the benefits of the newer LTO-8 technology as insurance against data loss through ransomware, as well as for everyday archiving and backup.

“As the datasphere continues to grow at astronomical rates and cybersecurity defense becomes a priority for organizations of all sizes, LTO tape technology remains a leading solution in addressing modern-day data storage needs,” said IBM’s VP for Storage Offering Management Sam Werner, in a statement accompanying the report. This is because tape is offline storage, not directly accessible to ransomware and malware threats that may corrupt live data.

However, media unit shipments have been in almost continuous decline for much of the past decade, partly because the increasing capacity of successive tape generations allows customers to store more on fewer cartridges, and partly because backwards compatibility means customers can continue to use older media with newer equipment.

LTO-8 technology offers up to 30TB of compressed capacity, with transfer speeds of up to 360 MB/sec native and 750 MB/sec compressed. The TPCs claim that these speeds compare favourably with some modern disk drives with transfer rates in the order of 210 MB/sec.
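As a rough illustration of what those figures mean in practice, the sketch below works out how long it takes to stream a full cartridge at the quoted rates. It assumes the published LTO-8 native capacity of 12TB (the 30TB figure above reflects the standard 2.5:1 compression ratio), which is not stated in the report excerpt itself.

```python
# Back-of-envelope timing for writing a full LTO-8 cartridge.
# Assumes 12 TB native capacity (30 TB at 2.5:1 compression) and the
# transfer rates quoted above: 360 MB/sec native, 750 MB/sec compressed.

TB = 1e12  # decimal terabyte, as used in tape specifications


def hours_to_fill(capacity_tb: float, rate_mb_s: float) -> float:
    """Time in hours to stream capacity_tb terabytes at rate_mb_s MB/sec."""
    seconds = (capacity_tb * TB) / (rate_mb_s * 1e6)
    return seconds / 3600


native_hours = hours_to_fill(12, 360)      # ~9.3 hours for a native fill
compressed_hours = hours_to_fill(30, 750)  # ~11.1 hours at 2.5:1 compression
```

Either way, a full cartridge takes the best part of half a day to write, which is why tape suits archiving and backup rather than interactive access.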

Last month, Fujifilm detailed plans for a future 400TB tape cartridge technology, 33 times larger than the current LTO-8 cartridge.

LTO tape’s features make it a critical component of any modern-day data storage infrastructure, according to the TPCs. It offers secure and reliable long-term archival storage, at a cost substantially lower than disk or cloud storage when considering factors such as power, cooling and retrieval.

IBM details storage portfolio for AI infrastructure

IBM has launched the Elastic Storage System (ESS) 5000 and updated Cloud Object Storage (COS) and Spectrum Discover as part of a new AI storage portfolio.

The company laid out its vision for an AI pipeline in a webcast with Eric Herzog, chief marketing officer at IBM Storage, and Sam Werner, VP of Storage Offering Management. This vision comprises a pipeline with Collect, Organise and Analyse stages, with IBM offerings tailored for each stage.

According to Werner, 80 per cent of firms expect AI use cases to increase within the next two years, yet at the same time 89 per cent of them say deploying AI is a struggle, largely because of data challenges. So, an information architecture for AI is required, and this is where IBM is aiming its new and updated portfolio.

The new ESS 5000 is optimised for the Collect stage. This is based around IBM Power9 servers and operates the firm’s Spectrum Scale parallel file system. IBM claims it is capable of 55GB/sec from a single eight-disk enclosure, and can scale up to eight yottabytes of capacity in its role as a data lake. IBM said this compares favourably against competing products such as NetApp FAS and Dell EMC’s Isilon line.

Two configurations are on offer: the SL model fits in a standard rack and scales-up to 8.8PB with six enclosures, while the SC fits in an extended rack and scales-up to 13.5PB with eight enclosures. The ESS 5000 is set for general availability from August.

Also fitting into the Collect stage is IBM COS, which is designed to run on commodity x86 servers. This has undergone a complete revamp of its storage engine, designed to increase system performance to 55 GB/sec in a 12-node configuration. This can improve read performance by 300 per cent and writes by 150 per cent depending on object size, according to IBM.

COS will soon support host-managed shingled magnetic recording (SMR) drives for improved density, delivering up to 1.9 PB in a 4U disk enclosure with 18TB SMR disks. This update will be available from August.
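A quick sanity check of that density claim: dividing the stated 1.9PB by the 18TB per-drive capacity gives the approximate drive count the 4U enclosure would need to hold. The figures below use decimal units, as storage vendors do; the enclosure size is an inference from the stated numbers, not something IBM spells out above.

```python
# Rough drive-count check for the claimed SMR density: 1.9 PB raw
# in a 4U enclosure populated with 18 TB host-managed SMR disks.

PB = 1e15  # decimal petabyte
TB = 1e12  # decimal terabyte

drives_needed = (1.9 * PB) / (18 * TB)  # ≈ 105.6, i.e. roughly 106 drives
```

A hundred-odd drives in 4U is consistent with the top-loading high-density JBOD chassis commonly used for object storage.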

For the Analyse part of the pipeline, IBM’s offering is the ESS 3000. This all-flash NVMe system in a 2U enclosure was launched last October and also runs Spectrum Scale, delivering throughput of up to 40GB/sec.

IBM said that although the ESS 3000 and ESS 5000 both run the Spectrum Scale file system and could thus be put to work in any part of the AI pipeline, it is positioning the ESS 3000 as best for Analyse and the ESS 5000 as best for Collect, owing to the latter’s lower cost per GB.

From Q4 2020, IBM Spectrum Scale is getting a new feature, Data Acceleration for AI, which will allow access and automated data movement between Spectrum Scale storage and object storage, whether on-premises or in the cloud. This effectively allows object storage to become part of the same data lake as the Spectrum Scale system.

For the Organise stage of the AI pipeline, IBM said that the AI infrastructure has to be capable of supporting continuous, real-time access across multiple storage vendors. Its solution here is IBM Spectrum Discover, providing file and object data cataloguing and indexing, with support for heterogeneous data source connections to storage systems from other vendors and from cloud storage.

“We provide Spectrum Discover to collect file and object metadata, and it is able to search billions of records in seconds,” said Herzog.

From Q4 2020, Spectrum Discover will be available in a containerised configuration that can be deployed on Red Hat’s OpenShift application platform, making it easier to deploy in a multi-cloud or hybrid cloud infrastructure.

Also announced and coming in September is an updated Policy Engine for Spectrum Discover that enables customers to set policies to automatically tag and archive old data to lower cost storage, or migrate data to consolidate and lower overall storage costs.
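To make the idea concrete, the sketch below shows the general shape of such an age-based rule: scan catalogued records, tag anything untouched beyond a cutoff, and return the candidates for archiving. This is a hypothetical illustration of the concept only, not Spectrum Discover’s actual API; the class and function names are invented.

```python
# Hypothetical sketch of an age-based archive rule (NOT Spectrum
# Discover's real API): tag records whose last access is older than a
# cutoff and collect them as candidates for migration to cheaper storage.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class FileRecord:
    path: str
    last_access: datetime
    tags: set = field(default_factory=set)


def apply_archive_policy(records, max_age_days=365):
    """Tag records untouched for max_age_days and return them for archiving."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    to_archive = []
    for rec in records:
        if rec.last_access < cutoff:
            rec.tags.add("archive-candidate")
            to_archive.append(rec)
    return to_archive
```

The real Policy Engine would evaluate such rules against Spectrum Discover’s metadata catalogue at scale rather than over in-memory objects, but the tag-then-migrate flow is the same.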

“This provides a real way for customers to manage a very large unstructured data repository,” said Herzog.

IBM said its platforms are interoperable with infrastructure from other vendors, allowing customers to integrate existing storage systems into any of the stages of their AI pipeline if required.