
Iron Mountain on getting rid of storage junk

Given that customers need to dispose of old disk, tape and solid state drives, what problems might they face? We were briefed by Chris Greene, VP and global head of sales and operations, asset lifecycle management business at Iron Mountain, on this surprisingly complicated topic.

Chris Greene, Iron Mountain

He said: “The biggest challenge facing most organizations is the significant number of devices that may need to be dealt with at any given time … across hundreds – if not thousands – of locations, all while ensuring that this process is compliant with local regulations.”

We could be talking about tens of drives a year with smaller organizations, and hundreds if not more than a thousand with large organizations.

If they don’t dispose of devices properly, confidential information could fall into the wrong hands. To prevent this, devices need secure transport to disposal sites, and a secure chain of custody needs to be in place. So, Greene says: “Organizations should use a trusted third-party service provider who can provide a record of all the devices that are either recycled or destroyed.”

How should they dispose of old disk drives?

Greene is keen on doing things right: “Secure disposal practices start with strong data protection and asset management practices – drives should be encrypted, tracked, and maintained via tags tied to an inventory system.” And then: “Organizations need to choose a secure data destruction and erasure method. Shredding and wiping (with overwriting software such as Iron Mountain’s Teraware) are acceptable methods for all media types, and degaussing is acceptable for magnetic media (i.e. not SSDs).”

Wiping via overwriting software removes all the data, but leaves the drive intact and ready to be reformatted and reused. Shredding physically destroys the medium so that it can no longer be read or used. It can be quicker and less expensive than wiping.

Degaussing uses a powerful magnetic field to scramble the data on magnetic media such as tape and disk drives. This can be lower cost and quicker than wiping, but is not as scalable as shredding.

IT kit ready for destruction and/or recycling at an Iron Mountain center

Bureaucracy is inescapable in Greene’s view: “It is important that each device’s destruction is logged and reconciled to an asset management system. [This] provides an audit trail of destruction required for regulatory compliance.”

There is an environmental angle here, as Greene points out: “Favoring destruction over wiping also has negative implications for an organization’s sustainability goals. Destroying or recycling a device can be up to 20 times more energy-intensive than extending the useful life of the hard drives through remarketing or redeployment. Drives also contain potentially harmful and toxic materials and must be disposed of properly to avoid any negative environmental impacts.”

Can’t users just do three consecutive full block overwrites of all ones, all zeroes and then all ones, and then sell off the now cleaned drive in safety?

Greene says: “No. Organizations should use certified software that can confirm all data has been successfully wiped. There’s always a risk of malfunctions when it comes to wiping data, which makes it important to verify that a wipe has occurred. Most of the leading data sanitization software solutions have the verification step built in.”
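To make the verification point concrete, here is a minimal Python sketch of a multi-pass overwrite followed by a read-back check. It is illustrative only – it is not Teraware or any certified sanitization tool, the device path is a hypothetical placeholder, and a production tool would also handle hidden drive areas and produce an audit record.

    # Illustrative sketch only: multi-pass overwrite plus read-back verification.
    # Not a certified sanitization tool; DEVICE is a hypothetical placeholder.
    import os

    DEVICE = "/dev/sdX"        # hypothetical target; a real tool enumerates and confirms devices
    CHUNK = 4 * 1024 * 1024    # work in 4 MiB chunks

    def device_size(path):
        fd = os.open(path, os.O_RDONLY)
        try:
            return os.lseek(fd, 0, os.SEEK_END)
        finally:
            os.close(fd)

    def overwrite_pass(path, byte_value):
        """One full-device pass writing a single repeating byte value."""
        size = device_size(path)
        fd = os.open(path, os.O_WRONLY)
        try:
            written = 0
            while written < size:
                n = min(CHUNK, size - written)
                written += os.write(fd, bytes([byte_value]) * n)
            os.fsync(fd)
        finally:
            os.close(fd)

    def verify_pass(path, byte_value):
        """Read the device back and confirm every byte matches the final pattern."""
        size = device_size(path)
        fd = os.open(path, os.O_RDONLY)
        try:
            read = 0
            while read < size:
                chunk = os.read(fd, min(CHUNK, size - read))
                if not chunk or chunk != bytes([byte_value]) * len(chunk):
                    return False
                read += len(chunk)
            return True
        finally:
            os.close(fd)

    if __name__ == "__main__":
        for value in (0xFF, 0x00, 0xFF):   # the ones/zeroes/ones sequence from the question
            overwrite_pass(DEVICE, value)
        print("verified" if verify_pass(DEVICE, 0xFF) else "verification FAILED - do not resell")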

We asked Greene about any problems we should be aware of when disposing of SSDs. He came up with a wear-leveling angle that was news to us and means a different wiping technology has to be used.

“SSDs’ wear leveling will remove certain SSD sections from use, but these decommissioned sections may still retain data. Wiping will not write to these sections, leaving data behind.”

The answer is: “By using whole SSD encryption, all of the data on the drive will become unreadable without the decryption key. By then formatting the drive and removing the encryption key, it can be securely disposed of without the risk of any data remaining on the drive.”
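A toy Python sketch shows why crypto-erase sidesteps wear leveling: if everything written to flash is ciphertext, destroying the key makes every copy unreadable, including copies stranded in decommissioned sections. The example uses the cryptography package's Fernet purely as a stand-in for a drive's hardware encryption engine; on a real self-encrypting SSD the media encryption key lives in the controller and is destroyed by a sanitize or secure-erase command.

    # Minimal illustration of the crypto-erase idea, using Fernet as a stand-in
    # for a drive's hardware encryption engine. On a real self-encrypting SSD the
    # media encryption key lives in the controller, not in host software.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # the media encryption key
    engine = Fernet(key)

    ciphertext = engine.encrypt(b"customer record 42")   # everything lands on flash encrypted

    # "Crypto-erase": discard the key. Even copies left behind by wear leveling
    # are now just ciphertext with no key to decrypt them.
    del key, engine

    leftover_copy = ciphertext           # imagine this sits in a retired flash block
    try:
        Fernet(Fernet.generate_key()).decrypt(leftover_copy)   # wrong key
    except Exception as err:
        print("unreadable without the original key:", type(err).__name__)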

Iron Mountain processing a disk drive for disposal

If you decide to shred the SSD instead, then the shredding machinery needs to cope with small form-factor SSDs, such as M.2 drives and USB sticks. Otherwise the drive could pass through unscathed and, of course, retain its data.

Unless in obsolete formats, tape cartridges can readily be reused instead of being destroyed. Greene said Iron Mountain “can remove data from these tapes by degaussing the tape tracks and wiping the chip. This enables the tapes to be reused as opposed to being incinerated or put into a landfill. This is currently available in the UK and is being piloted for roll-out across mainland Europe.”

Older, obsolete tapes are best incinerated, as some of their component materials – such as polyethylene naphthalate (PEN) with barium ferrite (BaFe) magnetic pigment – mean they cannot be recycled. This also applies to snapped, burned, or chemically damaged tapes, as expert bad actors could recover data from them.

How can Iron Mountain help here in general?

“When it comes to drive disposal, we can use our software Teraware to securely wipe devices and have facilities around the world that are R2 (Sustainable Recycling) compliant and equipped with shredding equipment. In addition, we can also ensure a secure chain of custody through our global fleet of locked, alarmed, and GPS monitored vehicles. Our fleet of vehicles is also equipped with onboard drive shredding equipment.” 

And the costs? It depends.

“Costs depend on the nature of a customer’s requirements, such as media type, location, and volumes. If assets can be remarketed, value can be returned directly to the customer, which can offset the cost of erasing or disposing of the data.” 

It’s not cheap then.

But Greene would say this: “When evaluating cost, organizations should also consider the financial and reputational costs of data breaches. In 2022, the global average cost of a data breach was $4.35 million, according to IBM. Therefore, companies cannot afford to risk the significant costs of a data breach and should prioritize safely disposing of their IT assets.”

We don’t know the average cost of a data breach arising from data recovered from a badly disposed-of junk drive, but suspect it could be less. Still, reputational damage can last a long time, so drive disposal had better be carried out in a considered and effective way.

Pliops spreads wings with RAID supercharger

Pliops has added three data services to its XDP x86 CPU offload card to give applications more NVMe SSD performance and endurance, plus better, faster drive failure protection.

The XDP is a low-profile PCIe-connected card with its own processor, the eXtreme Data Processor, which manages a group of NVMe SSDs. It is a key-value-based (KV) controller with built-in hardware engines for compression, encryption, RAID and other low-level storage functions. These functions are typically carried out by system software, which Pliops refers to collectively as a storage engine.

Pliops has now announced the XDP-RAIDplus, XDP-AccelDB and the XDP-AccelKV data services versions of the XDP.

Uri Beitler, Pliops

Uri Beitler, Pliops founder and CEO, zoomed in on the SSD failure angle, saying in a statement: “SSD faults in servers hosting data-hungry applications are a leading cause of significant downtime, impacting productivity and affecting SLAs… XDP-RAIDplus was designed to maximize the capabilities of NVMe SSDs to the most demanding I/O needs of any system, while optimizing the system’s cost/performance ratio. Numerous customers are sacrificing data reliability in order to avoid performance drops – we solve this by not only accelerating current solutions but also by providing this key feature to customers that cannot take advantage of existing solutions.”

Startup Pliops announced its XDP-Rocks product, in hindsight the first XDP data service, in October. It accelerated RocksDB database throughput by up to 20x and reduced tail latency by 100x, we’re told.

The XDP-RAIDplus extends the XDP hardware engines with additional RAID features. Users can configure RAID 5-like protection with no physical capacity loss by using virtual hot capacity (VHC) areas on the managed drives, eliminating the need to have a spare drive. The card only rebuilds the data on the failed drive, not the whole drive. This reduces rebuild time by up to 65 percent and provides higher throughput than traditional RAID offerings. 
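The partial-rebuild idea can be illustrated with a toy RAID 5-style model: a missing block is the XOR of the corresponding blocks on the surviving drives, and an allocation map lets the rebuild skip stripes that were never written. The Python sketch below is our illustration of the concept, not Pliops' implementation.

    # Toy model of a RAID 5-style partial rebuild: reconstruct only the stripes an
    # allocation map says were ever written. Illustration only, not Pliops' code.
    from functools import reduce

    def xor_blocks(blocks):
        """RAID 5 reconstruction: the missing block is the XOR of the survivors."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    def rebuild(surviving_drives, allocated, stripe_count):
        """Rebuild the failed member, skipping stripes that were never written."""
        rebuilt = {}
        for stripe in range(stripe_count):
            if stripe not in allocated:          # unwritten stripe: nothing to rebuild
                continue
            rebuilt[stripe] = xor_blocks([d[stripe] for d in surviving_drives])
        return rebuilt

    if __name__ == "__main__":
        import os
        block, stripes = 16, 8
        allocated = {0, 3, 5}                    # only these stripes ever held data
        drive_a = {s: os.urandom(block) for s in allocated}
        drive_b = {s: os.urandom(block) for s in allocated}
        failed = {s: xor_blocks([drive_a[s], drive_b[s]]) for s in allocated}
        recovered = rebuild([drive_a, drive_b], allocated, stripes)
        assert recovered == failed
        print(f"rebuilt {len(recovered)} of {stripes} stripes")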

External and internal views of the Pliops XDP card

Pliops claims its XDP-RAIDplus provides applications with: 

  • Up to 10x increase in throughput
  • Up to 6x capacity increase (26.6 percent with 6 x 15TB drives and RAID 5)
  • 5x improvement in SSD drive endurance
  • 50 percent reduction in TCO
  • Increased uptime

Brian Beeler of StorageReview said of XDP-RAIDplus: “In lab tests, we focused on the XDP-RAIDplus Data Service and tested with Solidigm 30.72TB P5316 QLC SSDs, which showed 1.5X to 5.6X better performance than software RAID 0 across our workloads. Further, when initiating an SSD failure, we were able to rebuild an entire 30.72TB SSD, with a mixed workload running, in roughly 450 minutes.” 

That’s around 7.5 hours. In October 2020, software-based storage array supplier StorONE announced a release of its S1 array software that it claimed rebuilt failed SSDs in three minutes and a 16TB disk drive in less than five hours. It uses parallel processing and an erasure-coding scheme.
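As a back-of-envelope check, the quoted figure works out to roughly 1.1 GB/sec of sustained rebuild throughput, assuming decimal terabytes and a full 30.72 TB to reconstruct:

    # Back-of-envelope check of the quoted rebuild figure, assuming decimal
    # terabytes and a full 30.72 TB of data to reconstruct.
    capacity_bytes = 30.72e12
    rebuild_seconds = 450 * 60
    print(f"{capacity_bytes / rebuild_seconds / 1e9:.2f} GB/s sustained")   # ~1.14 GB/s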

GRAID’s speedy hardware RAID product and Xinnor’s accelerated software RAID system will be the obvious products to compare with the XDP-RAIDplus. Get an XDP-RAIDplus background document here.

The XDP-AccelDB data service is a database accelerator for SQL apps such as MySQL, MariaDB and PostgreSQL, NoSQL apps including MongoDB, and software-defined storage products. It features atomic writes, smart buffering and data shaping, and delivers up to 3.2x more transactions, a 3x latency reduction, and up to 6x capacity expansion.

Lastly, the XDP-AccelKV product, which includes XDP-Rocks, is a KV accelerator for RocksDB, WiredTiger and similar products, providing a claimed order of magnitude higher performance than software-only products.

Acronis license validation goes AWOL as company cops to ‘bug in the database software’

Users are reporting problems with Acronis Cyber Protect Home Office that effectively invalidate license keys and render the product inoperable.

Cyber Protect Home Office, originally known as Acronis True Image, creates a full system image – directories, files and metadata – which can be used for recovery if the system is hit by a malware attack or user error. It can provide file and directory-level backups as specified by a user. The software agent can run full or incremental backups, with backups stored on a local disk, a separate computer, or in Acronis’s cloud. The software can also clone drives and partitions and sync folders between computer systems running the software or a cloud instance.

An Acronis customer told us: “My copy of Acronis Cyber Protect Home Office (at least the Backup/Restore component) has certainly been U/S for 24 hours now.”

We have asked Acronis what is happening and a spokesperson told us: “Some Acronis customers unfortunately experienced issues accessing their accounts due to database corruption on one of the cluster nodes related to a software bug in the database software. No data is lost. We are increasing the number of cluster nodes to avoid performance degradation in the future.” Acronis now says the problem has been fixed and is over.

The main difference between Cyber Protect Home Office and True Image is that Cyber Protect Home Office includes antivirus, web filtering, and ransomware protection features in a built-in security suite. It also detects illicit crypto-mining, Zoom and Teams injection attacks, and malicious websites.

When a user purchases a subscription to Cyber Protect Home Office, the product has to be activated by sending a key to Acronis.

Reddit users started noticing problems two days ago.

Our Acronis customer said: “Some potentially significant internal server issues appear to be going down at Acronis, causing products to deactivate (since they currently don’t appear to be able to ‘phone home’ to validate licences keys at runtime). Also User Login accounts [are] responding with an Error 500. I just ‘spoke’ to their Online Support Chat, who confirmed that they are ‘currently experiencing Internal Server issues’ – ETA resolution currently unknown.”

Some users have taken to Twitter to complain: 

Acronis tweets

Toshiba exec says SSDs can never replace HDDs

HDD storage

A Toshiba exec has argued that hard disk drives are essential for storing data in several market areas because SSDs are ill-suited to the job, costing more and limited by manufacturing capacity.

Update. Question answers re the SNIA’s Emerald initiative and energy efficiency added; 24 January 2023.

Rainer Kaese, Toshiba

Citing TrendFocus numbers, Toshiba Electronics Europe exec Rainer Kaese said: “259 million HDDs were shipped last year – a case in point. Their total capacity reached 1.338 zettabytes, an increase of almost one-third compared to 2020. Never before have HDDs been shipped with more combined storage capacity.” 

That is so, but unit shipment numbers are flat. Statista says 258.9 million units were shipped in 2021, about 0.5 percent less than 2020’s 260.3 million units. Back in 2010, 650 million disk drives were shipped. The numbers have declined due to replacement by SSDs and because capacity on nearline drives (7,200rpm, 3.5-inch) has ramped higher and higher, reaching the 20TB level in Toshiba’s case.

HDD shipments

SSD replacement momentum has stalled as SSD $/TB price falls have slowed and are more or less parallel with HDD $/TB declines.

Writing in Global Security Mag, Kaese claimed: “SSDs are not expected to completely replace hard disks at any point. Since the need for storage space is growing virtually everywhere and only HDDs can deliver the high storage capacities at low costs that datacenters, cloud and other application areas demand, both media will continue to coexist for years to come.”

Kaese said market sectors where HDDs excel over SSDs start with online storage in core and cloud datacenters. He declared: “HDDs are simply the most economical medium for these large online storage facilities; their capacity is constantly increasing due to the advancement of technology, while the price per Terabyte is continuously decreasing. Flash memory is much more expensive and could not be produced in sufficient quantities, so it is mainly used as a cache when the data throughput of hard disk arrays is insufficient.”

The second sector he alights on is: “Network storage in companies and households: network-attached storage (NAS) systems serve as central data storage and backup storage for many small and medium-sized companies, but also in more and more households.”

That’s because “they can comfortably handle the transfer speeds in most company and home networks, where the high performance of flash memory would only be noticeable when transferring very many small files. In this case a caching SSD, for which some NAS systems have a separate slot, is sufficient.”

A third market sector is: “Video surveillance: The market for video surveillance is booming because people’s need for security is growing, and the systems are becoming cheaper and cheaper, making them affordable for private users.”

Next we have: “External storage for computer users and gamers: Flash memory has displaced the hard disk from almost all client systems, but because the manufacturers of PCs, notebooks and games consoles often only install quite small SSDs for cost reasons, users today generally have less storage capacity available in the devices than in the hard disk era.”

The problem is solved by adding external storage: “Usually data will be stored externally in ‘the cloud’, but people may prefer to have access to locally situated storage – due to cost reasons or security concerns. External hard disks with 2 or 4 terabyte are therefore extremely popular, and are a simple and inexpensive way for many users to expand their storage.”

His fifth sector is archiving: “When it comes to long-term storage of data, hard disks are the storage medium of choice, along with tapes. HDDs are somewhat more expensive for the same capacity, but they score points for shorter access times when certain documents need to be retrieved for an audit. In addition, HDDs can use deduplication mechanisms to reduce the amount of data to be archived, which can significantly reduce costs, depending on the type of data.”

Deduplicated data can be stored on tape too, clawing back that HDD advantage. That aside, Kaese is effectively saying one should use HDD-based archives for “certain documents” that need shorter access times when an audit is being run. That suggests the audit arrives at unexpectedly short notice; given more warning, a tape-based archive with a disk front-end cache could serve up the documents.

Blocks & Files thinks that HDD unit ship numbers will continue to decline, due to SSD cannibalization, and that the HDD manufacturers need MAMR, HAMR and SMR technologies to continue enjoying their $/TB advantage over SSDs. That’s because SSD suppliers can keep on adding more layers to their chips, currently around the 176-layer level, and so reduce their own $/TB cost.

Update

We asked Rainer Kaese a couple of questions and here are his answers:

Blocks & Files: Does Toshiba support the SNIA’s Emerald initiative?

Rainer Kaese: So, we are referring to tests performed in the HDD applications lab in Europe for different JBOD solutions featuring Toshiba drives. Toshiba’s R&D is contributing to SNIA’s Emerald initiative, but the evaluations conducted at our European HDD applications lab are done in ways that have lower levels of complexity (not to SNIA Emerald standards).

The sentence “… roadmaps stipulate” is in reference to the ‘typical’ data explosion figures (published by industry analysts like IDC) and also the energy efficiency targets that data center operators have set. Based on these, lower power per capacity is required if the total storage installed is going to keep up with demands. Toshiba is making a significant contribution here by introducing high-capacity helium-filled HDDs. See page 5 of the attached material presented at Cloudfest 2022. It should also be noted that there are ways of attaining greater power savings at a system level by using all existing power/idle/standby features of the HDD, but this has to be implemented from the host/system side.

Blocks & Files: Energy efficient compared to what? 10K drives? Wouldn’t QLC NAND be more energy-efficient still? 

Rainer Kaese: Here we mean compared to conventional 7200 RPM air-filled HDDs of 10TB capacity and lower. Basically, there are three ways of storing data in video surveillance.

These are: 

  • 5×00 RPM drives using SMR technology. These are more desktop type class for low-end recording with just one drive. They are energy and cost-efficient, allowing basic review functions but almost no specific analytics. 
  • Using higher capacity CMR 7200 RPM surveillance drives (6TB to 10TB). Allows multi-drive/RAID and analytics, but are relatively power hungry (due to rather low capacity per drive, and use of air-filled drive cavities). 
  • Using 7200rpm CMR helium-filled drives (14TB capacity and above). This approach provides elevated analytical capabilities and lower power per capacity.

10k RPM drives are not a practical option in a surveillance context, due to having too low capacity and requiring too high a power budget. Nor is QLC-NAND. It may be more energy efficient, but still has far too high a cost per capacity. Also the endurance limitations do not match well with the workload characteristics of this application – where overwriting again and again is needed.


Cloudian pulls in $60m funding to chase growth

Cloudian has raised $60 million in fresh funding – its first round since 2018 – and appointed its first board chairman in a bid to accelerate growth.

The company supplies HyperStore S3-compatible object storage and NAS that can run on-premises or in the public cloud with a single hybrid cloud and cloud-native management layer. It was started in 2011 by CEO Michael Tso and president Hiroshi Ohta and is one of the top three privately owned and venture-funded object storage suppliers. The other two are MinIO and Scality.

Tso said: “As organizations move to the next level of digital transformation, they increasingly seek technologies that deliver hybrid cloud data management at limitless scale across all platforms. Cloudian’s cloud-native data management software lets our customers simplify operations and creates new opportunities to derive value from data.”

The F-round takes total funding to $233 million. It had contributions from Digital Alpha, Eight Roads Ventures Japan, INCJ, Intel Capital, Japan Post Investment Corporation, Silicon Valley Bank, Tinshed Asia, Wilson Sonsini Investments, and individual investors such as new chairman Bob Griswold.

Bob Griswold, Cloudian

Griswold has a strategic advisor role as part of his chairmanship. He was VP of strategy and planning at HPE, SVP product line management at Seagate, and VP and chief strategist for Enterprise, Commercial and Small Business at Cisco. 

He said: “Cloudian is exceptionally well positioned to capitalize on today’s transition to cloud-native technologies, and I believe strongly in the company’s strategy and its enterprise-proven platform; so much so, that I personally invested in the current funding round.”

Cloudian sees substantial scope for growth because of the surge in unstructured data and the need to search and analyze it for information that may help organizations become more efficient or responsive to their customers and environment.

Gartner research indicates 95 percent of new workloads deployed in global business such as media, healthcare and manufacturing in 2025 will employ cloud-native technology. It says there is a rapidly growing need for multi-cloud infrastructure to provide a unified platform for these applications. 

We asked Michael Tso some questions about Cloudian, the new round and his views on the market.

Michael Tso, Cloudian

Blocks & Files: Why was there a need to raise money now?

Michael Tso: The new funding was raised to support our continued growth as we exit a strong quarter and look to capitalize on the opportunities we see in the coming year.  

Blocks & Files: What will the new money be used for? Engineering? Go-to-market stuff? General corporate purposes?

Michael Tso: The funds will support Cloudian’s go-to-market and product development initiatives under way in data analytics, hybrid cloud, data protection, and sovereign cloud. Each of these requires investment as we develop these growing markets for cloud-native data management.  

Blocks & Files: Who was the board chairman before Bob Griswold?

Michael Tso: We didn’t have an official chairman before. 

Blocks & Files: Have any board members left recently?

Michael Tso: Dr C S Park became a board advisor as he is looking to retire from all his board duties.

Blocks & Files: Cloudian’s market positioning statement seems relatively unchanged – correct me if I’m wrong – so is it right that it is not pursuing a new opportunity in the market but doubling down on the existing need for enterprises to store increasing amounts of unstructured data both on-premises and in public clouds in S3 accessible form with a single management layer covering geo-distributed systems?

Michael Tso: Our core value remains the same – hybrid-cloud data management. Within that space we see market demands evolving quite rapidly. More and more, our customers are adopting cloud-native technologies enterprise-wide, and now use S3-compatible storage for unstructured data in general.

Three factors are driving this. First, data management applications increasingly support the S3 API. Vendors in data analytics, data lake house, and data protection have all gone this way. Analytics vendors are the most recent to adopt the S3 API, and Cloudian has announced relationships with Snowflake, Teradata, Microsoft SQL Server, VMware Greenplum, and Vertica. 

Second, our customers’ in-house applications increasingly employ the S3 API for primary data workloads, including performance-sensitive ones such as trading floor operations. This increases the demand we see for performance features and flash-based platforms.

Third, customer expectations, now shaped by public cloud, increasingly demand efficient, easy-to-manage cloud-native technologies throughout the organization. By contrast, old-school storage silos now appear wasteful and hard to manage, inefficiencies that are harder to tolerate when you have experienced the single namespace simplicity of cloud. With Cloudian, you can put scale-out cloud technology wherever you need it, from the edge, to the core, to the cloud.  

Blocks & Files: How would Cloudian differentiate itself from MinIO?

Michael Tso: Cloudian was built from the start as an enterprise-level platform, and this is what our customers are looking for. They choose Cloudian for our customer-proven scalability to hundreds of petabytes, for geo-distribution capabilities across any number of locations, for industry-leading security certifications, and for enterprise-class support with local teams worldwide. Our platform was built from the start for enterprise scale, and we have the customer references to back that up. By contrast, MinIO was launched as an open-source tool for Kubernetes developers, a legacy that remains today. It is not unusual for us to encounter a user that develops applications on MinIO then switches to Cloudian for production.      

Blocks & Files: And what about Scality?

Michael Tso: Cloudian offers a full-range solution, including software and appliances, all sold and supported by the Cloudian team. We even offer a remote management program, called HyperCare, where we essentially manage the complete solution for you. Furthermore, Cloudian HyperStore is an easy-to-use solution, built on a single software image that makes it easy to deploy and to scale. Cloudian also offers the most complete set of security certifications in object storage, a distinction that has won business at US government agencies and at defense cloud providers such as milCloud 2.0. All of these differentiate Cloudian from our competitors. 

Blocks & Files: How does Cloudian view the edge market? Is there scope for object storage at particular kinds of edge sites?

Michael Tso: The edge is an important market for us. Our customers know that moving data is hard. It’s costly, time consuming, and adds security risk. With Cloudian, you can put the physical storage wherever you need it and still manage the whole infrastructure as one system. Our customers have used this to deploy storage at wind tunnels, manufacturing sites, and healthcare clinics. We envision a continuum of storage resources starting from the core and moving out to any location where data is collected, yet all managed and protected within a single framework.  

Blocks & Files: How does Cloudian view the need for high-performance object storage with all-flash hardware? Is this an important and growing or a relatively limited niche market?

Michael Tso: Cloudian software is media-agnostic. We have seen a growing number of our customers running Cloudian on all-flash systems to benefit from the resulting performance gains. 

Several factors now drive this move to flash. One is the increasing use of object storage in performance-sensitive workloads. For example, a Cloudian customer in the credit card space is standardizing on object storage for all unstructured data, both archive and primary. Their infrastructure is all-flash. Another driver for flash will be power efficiency. As energy prices climb, the power savings of flash become increasingly compelling. We expect industry-standard servers and components will continue to drive down the total cost of ownership for flash systems and Cloudian software’s media agnostic capability enables customers to make future-proof technology decisions today. 

Hitachi Vantara sips mainframe data via Model9

Hitachi Vantara is collaborating with Model9 to feed mainframe data to its HCP and VSP 5000 storage systems and make it available to modern apps running in the hybrid cloud.

Update: OSA definition corrected to read Open Systems Adapters. 19 January 2023.

Model9 moves mainframe data to other systems, initially replacing mainframe tape storage with disk-based Virtual Tape Libraries, and then sending it with the S3 protocol to AWS and S3-supporting on-premises object storage systems. Hitachi Vantara has an S3-compliant object storage system, the Hitachi Content Platform (HCP), and a unified block and file high-end storage array product line, the VSP 5000.

Mark Ablett, president of Digital Infrastructure at Hitachi Vantara, said in a canned statement: “The dilemma for many organizations is that on-premises data is siloed, and access to legacy data on tape backups is difficult. Combining our VSP 5000 series and HCP storage portfolio with Model9’s software provides many benefits including protected backup copies, recovering and restoring data quickly and replacing costly virtual tape hardware with affordable cloud storage.”

Affordable cloud storage here means the public cloud and HCP system, with HCP able to tier to the public cloud. 

Blocks & Files diagram of the Hitachi Vantara-Model9 partnership

The Model9 software has three components: Manager, to move and store backup/archive data in the cloud; Shield, to cyber-protect copies of mainframe data; and Gravity, to move mainframe data to the cloud and there transform and load it into cloud data warehouses and AI/ML pipelines.

We should envisage mainframe-using customers wanting to bring modern analytics and other applications to their mainframe data. Unfortunately, such applications don’t run on the mainframe and the mainframe data has to be moved to systems they can access. Model9’s software acts as the data mover for this.

The software runs on Z using the dedicated zIIP processor (working with DFHSM and FDRABR-type products, along with DFDSS) and sends data out, typically in S3 form, either to the public cloud or to the HCP system across an OSA link. OSA stands for Open Systems Adapters, the mainframe’s TCP/IP networking cards.
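The S3 leg of that path is ordinary object PUTs. As a rough illustration, here is what writing one staged object to an S3-compatible endpoint such as an HCP namespace looks like with boto3; the endpoint URL, bucket, key and credentials are hypothetical placeholders, and Model9's actual data mover is its own zIIP-resident software rather than a script like this.

    # Minimal sketch of an S3 PUT to an S3-compatible target such as an HCP
    # namespace. Endpoint, bucket and credentials are placeholders; Model9's real
    # data mover is its own mainframe-resident software, not this script.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://hcp.example.internal",   # hypothetical on-prem HCP endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    with open("dfdss_backup_copy.bin", "rb") as f:     # e.g. a staged mainframe backup image
        s3.put_object(Bucket="mainframe-backups", Key="2023/01/volume001.bin", Body=f)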

Hitachi Vantara says the partnership brings “additional capabilities to the Hitachi Virtual Storage Platform (VSP) 5000 series and Hitachi Content Platform (HCP) object storage portfolio.”

Asked about this, a spokesperson told us: “The primary data can be on a VSP5000. The customer can take copies of that data using Hitachi V’s ShadowImage capability which is then backed up to the HCP via the mainframe using the OSA links. The Model9 processing is done in the zIIPs. Model9 has a capability to write to SAN but most customers want to write to object storage.”

Model9 and Hitachi V point out that 71 percent of Fortune 500 companies host their critical IT on a mainframe. Model9 says that, as more businesses undertake mainframe modernization, the most important IT operating factor for businesses is to create a single point of data management across hybrid environments. The two claim moving data to the hybrid cloud can be difficult and risky, with concerns such as securing data in transit, the volume of data involved, the ability to fully access and gain value from that data, and the fear of breaking applications that are functioning well.

Model9 and Hitachi V say their collaboration makes mainframe data available and accessible for both hybrid cloud applications and predictive analytics services. They breach the mainframe silo and bring its previously closed-off data into the modern hybrid cloud world with x86-based applications doing so much more than ones in the narrow mainframe ecosystem.

Bootnote

In June 2021, Model9 set up a Growth Advisory Board to help guide its global expansion and deepen industry awareness of its data movement and management capabilities. It included Brian Householder, former president and CEO of Hitachi Vantara, as a member. And now we have a Model9 partnership with Hitachi Vantara.

VAST Data, Infinidat reportedly amongst 21 storage tech unicorns

There are 21 storage industry startups worth a billion dollars or more – unicorns – according to a CB Insights newsletter.

Update. Note on Coldago’s December 2022 storage unicorn list added at end of story. January 18, 2023.

The newsletter lists 1,250 startup and private company tech unicorns as at Dec 31, 2022. We downloaded the list and found the storage players.

According to the newsletter, they are, in order of valuation: Databricks ($38 billion), Rubrik ($4 billion), Cohesity ($3.7 billion), VAST Data ($3.7 billion), Acronis ($3.5 billion), OwnBackup ($3.35 billion), Astera Labs ($3.2 billion), Dremio ($2.0 billion), Druva ($2.0 billion), Kaseya ($2.0 billion), Redis Labs ($2 billion), Alation ($1.7 billion), DataStax ($1.6 billion), Infinidat ($1.6 billion), Firebolt ($1.4 billion), Qumulo ($1.2 billion), Imply Data ($1.1 billion), OVH ($1.1 billion), Wasabi ($1.1 billion), MinIO ($1.0 billion), and SingleStore ($1.0 billion).

The most highly valued unicorn on CB Insights’ list, at $140 billion, is ByteDance – the Chinese content sharing company which owns TikTok and other properties. Databricks is the eighth most highly-valued unicorn in this list at $38 billion, and there is then a long gap down to the next storage biz, Rubrik, at the $4 billion level.

We can divide our storage unicorns into categories:

  • Block storage – Infinidat
  • File storage – Qumulo, VAST Data
  • Object storage – MinIO
  • Data protection – Acronis, Cohesity, Druva, OwnBackup, Rubrik
  • Cloud storage, etc. – Kaseya, OVH, Wasabi
  • High-speed interconnect – Astera Labs
  • Data warehousing and analytics – Alation, Databricks, DataStax, Dremio, Firebolt, Imply, Redis Labs, SingleStore

The most popular category is overwhelmingly data warehousing and analytics, followed by data protection, and then cloud storage and services. There are only four storage hardware-related suppliers: Astera Labs (CXL), Infinidat, Qumulo and VAST Data, with Qumulo and VAST Data both on the software-defined spectrum. The rest are all 100 percent software companies.

We compared this list to a July 2020 Coldago list of 15 storage unicorns. DDN, Nasuni, Veeam and Veritas are in the Coldago list but don’t make the CB Insights list.

Nor do Actifio, Barracuda Networks, or Datrium, but all three have been acquired. That means there are 13 new storage unicorns on the CB Insights list compared to the Coldago list, with most of them springing up in the data warehousing and analytics market.

Bootnote

Coldago published an updated storage unicorn list in December 2022. It lists 17 private companies with a minimum $1 billion valuation, unchanged since June 2022, and includes ones owned by private equity. In alphabetical order they are: 

  • Acronis, 
  • Barracuda Networks, 
  • Cohesity, 
  • DataCore, 
  • DDN, 
  • Druva, 
  • Infinidat, 
  • Kaseya, 
  • MinIO, 
  • Nasuni, 
  • OwnBackup, 
  • Qumulo, 
  • Rubrik, 
  • VAST Data, 
  • Veeam Software, 
  • Veritas Technologies 
  • Wasabi Technologies.

Data management is still human labor for most Register readers

A Blocks & Files report, based on Register and B&F reader responses to an online questionnaire, shows that automated data management is a long way off.

The report, Data Management is Still Manual Labor for Most Register Readers, looked at three general topics: storage tiers and data placement; on-premises and public cloud data movement; and use of data warehouses and lakes. The responses came in from more than 750 IT professionals – some 375 in Europe, slightly fewer than 200 in North America, and others in Asia and Latin America.

We recorded responses to various questions added to the bottom of articles, and this report looks at the responses to those focused on data management. They provide a series of snapshot views, and we highlight one area here.

One group of questions looked at what storage tiers were in use, and the results confirmed that SSD take-up has been remarkably strong.

A surprising number of responders – 43.1 percent – said fast (2.5-inch, 10Krpm) disks were in use, more than the 22.7 percent who said nearline disks (3.5-inch, 7,200rpm) are deployed in their organizations. Possibly that’s because hyperscalers buy the largest number of nearline drives.

It’s clear that several tiers of storage exist and we wanted to see how data was placed on the tiers. A big pitch of data management suppliers is that data should be placed in the right tier to get the best balance between performance and cost for the data’s access needs. This activity should reduce data storage costs by moving data off expensive and high-performance tiers. More than 60 percent of responders said they did not use data management software to do this; data placement wasn’t automated.

That surprised us. Perhaps older data was deleted so there wasn’t so much data placement pressure? Yes – slightly more than 63 percent deleted data, mostly manually (60.6 percent), with half using both the data’s age and its access level as the criteria for deletion.

Other findings concern data and app movement between the on-premises and public cloud environments, looking at the effect of egress charges, and also aspects of data warehouse and data lake use.

Take a look at them – the six-page report can be accessed in PDF form here.

Infinidat on CXL, SMR disk drives and green IT

Blocks & Files recently spoke to Infinidat CEO Phil Bullinger, who gave us some insight into Infinidat’s 2022 business and its hopes for 2023.

We looked at Infinidat’s stance on CXL memory expansion technology, SMR disk drives, support for AI/ML workloads running on GPUs, and the greening of IT.

Our conversation also touched on having Infinidat’s software running in the public cloud and the possibility of Infinidat bringing out smaller systems – with positive answers to both topics, but not just yet.

Infinidat is a high-end unified block and file storage supplier for mission-critical and also data protection workloads with guaranteed availability SLAs and performance. It produces both disk-based and all-flash arrays with a highly efficient DRAM caching technology, Neural Cache, which can satisfy up to 95 percent of all data read requests from DRAM. This can make its disk-based systems faster than competing all-flash arrays.
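A rough calculation shows why such a high DRAM hit rate matters: average read latency is dominated by the cache, pulling disk-backed reads into flash territory. The latency figures below are generic assumptions for illustration, not Infinidat specifications.

    # Rough illustration of why a 95% DRAM hit rate dominates average read latency.
    # The latency figures are generic assumptions, not Infinidat specifications.
    def effective_latency(hit_rate, hit_latency_us, miss_latency_us):
        return hit_rate * hit_latency_us + (1 - hit_rate) * miss_latency_us

    dram_us, hdd_us = 1, 5000          # ~1 µs DRAM access, ~5 ms HDD seek and read

    cached = effective_latency(0.95, dram_us, hdd_us)   # DRAM cache in front of disk

    print(f"raw HDD read:               {hdd_us} µs")
    print(f"95% DRAM hits + HDD misses: {cached:.0f} µs average")   # ~251 µs, flash territory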

It was a long conversation and we have edited it to focus on what we thought were the main topics.

Blocks & Files: Could you describe what you saw in 2022?

Phil Bullinger

Phil Bullinger: Since I joined two years ago, we’ve been really transforming Infinidat. And the top line growth has responded well, in terms of not only scaling out the team in the business, but scaling up the top line, and certainly the profitability of the company. And that continued in 2022; we finished the year with solid double digit growth, cashflow-positive, even profitable. 

We are 10 to 11 years old. We’re not a startup, spending money like drunken sailors. We’re a real business, multi-100 million dollar scale, and we’re funding that growth through the proceeds of the business. The operating cash flow we generate is what we use to drive investment in the business.

In 2022, we, like everyone in our industry, saw some macroeconomic headwinds, [with] some deals getting pushed out and customers leaning more towards one or two-quarter maintenance renewal agreements than buying new infrastructure.

I’m still pretty optimistic that 2023 will come back strong, but we could be in a couple quarters here where organisations are looking to conserve their IT dollars. 

In this economy, what we saw was a lot of companies taking smaller bites at the apple, not big bites, just nibbles. I think that dynamic will probably continue for a quarter or two. And I know we’re not alone. I mean, we have enough context across the industry; the storage ecosystem is a pretty small fraternity of people. So we know what’s going on in the marketplace.

The very good thing for Infinidat is that our core value propositions of TCO, guaranteed SLA, and consolidation of dozens of frames into single frames resonate really well in an environment where enterprises are looking to stretch their IT dollars. So we’re actually finding Infinidat plays pretty well on a muddy track. And that’s good.

We have a lot of large accounts, a lot of Fortune 10, Fortune 50, and Fortune 100. We have a Fortune 50 financial services customer in the US that’s gone from zero to one of our largest customers in 18 months. When we started doing business with them, they demanded that we include, in the cost of our products, a dedicated system admin, because the vendor we were replacing, our competitor, had three dedicated heads just managing the systems, doing data migration, dealing with the complexity. They demanded that we also hire one, and now, I think about 11-12 months into the relationship, they have let that person go. We’ve replaced, you know, 70 old systems with 20 or 30.

In 2022 we really invested in what I consider to be some fundamental capabilities to allow us to continue to scale the company: processes, tools, applications, the interlock between our sales and our marketing data systems, our ERP systems. We’re on the cusp of introducing a new manufacturing relationship into the company that will very much accelerate our ability, both from a supply chain and an assembly perspective, to service our customers globally, much more efficiently.

Blocks & Files: What will 2023 bring?

Phil Bullinger: Storage in general is probably the only consumable left in the data centre, right in terms of data accumulating, and it still amazes me, the relentless insatiable demand for more storage capacity in large enterprises.

I think repatriation, or just a semblance of thoughtfulness going into which workloads stay on-premises and which are in the cloud; that’s helping our business. The economics strongly favour our kind of approach. And that’s pulling business our way.

We’re going to continue to grow and scale our go-to-market. We’ve got some really exciting alliances, partnerships with global system integrators, that are bringing us into their business and their customer base. I think there certainly is a type of customer and a type of workload at scale, with mission-critical data, where Infinidat really is the best answer.

Are we bringing performance improvements, capacity improvements? And the answer is, yes, we’ve done that through 2022. And we’ll continue to do that going forward.

Blocks & Files: How does Infinidat view running its software in the public cloud?

Phil Bullinger: There is no platform in the market today that we would consider to be a peer level competitor to InfiniBox that has their software running in the cloud. It’s usually a data protection workload. They’re looking to push colder bits or rainy day bits to the cloud. And we have so many ways of getting that done with all of our alliance partners and software partners. 

I will say, and I’m not here to pre-announce any product, but we’re not standing still on this. Our customers are interested in an end-to-end Infinidat experience from an on-prem platform with cold bits into the cloud. And we’re working on that. I would say, just watch this space and track us as we go forward.

Our software doesn’t really depend on unique hardware. We have the capability of delivering an Infinidat experience pretty readily on generic infrastructure.

Blocks & Files: You’d naturally want to deliver the Infinidat experience, though, not just a commodity high-end array experience?

Phil Bullinger: Hyperscale public cloud-resident primary storage is not a thing in the industry because the reason you buy primary storage for your mission-critical apps is that you want those SLAs, you want that guaranteed availability.

I can’t walk into any of my customers and say, I’ll give you five nines or even six nines in the cloud. They would throw me out of the door. You have to be very specific about what problem you’re trying to solve. This whole mentality of cloud first, cloud first, cloud first; it’s never really applied to the space where we are. That doesn’t mean there aren’t workloads that might be attractive there, where a catcher’s mitt in the hyperscale public cloud under the same Infinidat banner of experience might make a lot of sense. And that’s the piece that we’re absolutely not ignoring.

Blocks & Files: How does Infinidat view CXL technology and its memory expansion possibilities?

Phil Bullinger: Our architecture depends on the coherency between three active:active:active nodes. And we use nanosecond-latency InfiniBand. Today we have more than enough DRAM available to us, in commodity off-the-shelf servers, to fully implement the Neural Cache architecture and its ability to, frankly, outperform every other cache architecture in the industry in terms of hit rates.

I think CXL could be interesting as we look at architectures like Sapphire Rapids, but Sapphire Rapids is still a couple of years away in terms of being a production volume, economically attractive CPU option for us.

Yes, we’re tracking these technologies, I think it could open up interesting pools around our cache architecture as we can pool more DRAM together. Today, we’re not limited by our hardware. I want to emphasise, hardware is not limiting our performance today.

Blocks & Files: How about supporting GPU-style AI/ML workloads?

Phil Bullinger: Our typical deployment is petabytes of data and upwards of 10,000 users on a box, and dozens of applications. It’s a multi-faceted workload that we have to be very, very good at adapting to. A lot of the large AI/ML-dedicated platforms in the industry that can take advantage of GPUs; they’re kind of monolithic workload machines, and typically scale-out NAS.

It’s not exactly our wheelhouse, although, we have a lot of AI/ML workloads on our box, because we’re low latency, high performance at scale storage, ubiquitous storage.

I’m excited about where CPU technology is going, where memory consolidation and coherency is going with CXL. CXL and GPUs; we’re tracking all of that. It’s just going to be a little while before it will actually intercept production SKUs.

Blocks & Files: How about shingled magnetic recording disks, which Western Digital is pushing more and more? Since you’re agnostic to the underlying media, will Neural Cache be just as effective at hiding the slowness of shingled magnetic disks as it is at hiding that of normal disks?

Phil Bullinger: Yes. I think the dynamics of our platform, especially our largest-capacity platforms that are primarily running data protection workloads, would make it a good SMR candidate on the back end. One of the things that is also unique about our architecture is that we stream fairly large chunks to the backend storage tier. That really would be quite amenable not only to SMR but also to QLC and PLC flash going forward.

We can really adapt well to media that we might have on the back end, with certain endurance and workload limitations that others may have a hard time dealing with.

Blocks & Files: Are performance per watt and other green computing measures likely to be a message that will resonate more in 2023?

Phil Bullinger: Yes. We’re generally seeing that green IT requirements progressively become more and more important and move their way up the purchase criteria. Europe probably leads that conversation a little more than the US, but we see it in the US as well. 

This year we brought the 20 terabyte drives into the platform. And that was a pretty dramatic improvement in terms of our energy efficiency, and space-efficiency per floor tile. There’s also our natural capability of consolidating dozens of frames. That has really helped the energy budget.

That Fortune 50 financial services company I mentioned, just in the early stages of replacing the prior competitor’s architecture, has already dropped 100 kilowatts of electrical demand off its bill, and 100 kilowatts in the data centre is big.

But this is a treadmill. There is no endpoint of this conversation. You just keep driving forward on this. 

Blocks & Files: If you brought out a smaller Infinidat array, do you think you’d be able to provide the same consolidation message?

Phil Bullinger: A lot of people have asked about smaller SKUs for a long time. Now, our value proposition resonates most clearly at scale. And that’s where the company has been, and continues to be, as datasets aren’t getting smaller; they’re getting larger. 

I think frankly, we’ve only scratched the surface on the adaptability of the Neural Cache and our data placement engine – two architectures that can scale down as well as continue to scale up. 

So yes, we’re looking at this. And the evolution and the maturation of flash and HDD and hybrid architectures and dedicated architectures creates a fertile engineering landscape for us to look at how we can apply what we do really well to different capacity points.

The challenge, the starting point for our architecture, is three servers with DRAM, an InfiniBand connection between them, and then a direct attachment to a pool of back-end media, where every one of those three nodes can see every persistent storage media device in the frame.

The ante, sort of the opening bid, on that architecture is not an inconsequential amount of hardware. So we naturally start at a certain capacity point, because you want to kind of amortise the hardware costs over a certain capacity point to do that.

Can we deliver our value proposition on smaller capacity points? Absolutely. … I think as persistent storage technologies evolve and change, we have an opportunity to push further into that area.

Storage news ticker – January 17

Cloud-native distributed, CDC-based data replication platform supplier Arcion announced a real-time integration connector for SAP Adaptive Server Enterprise (Sybase ASE). It claims this enables high-volume database migration and real-time data replication with guaranteed delivery, 100 percent transactional integrity, and sub-second latency, with zero downtime. With the addition of this new connector, Arcion now supports CDC out of all databases from the SAP ecosystem: Sybase ASE, HANA and IQ.

Data streaming supplier Confluent has signed a definitive agreement to acquire Immerok, a leading contributor to Apache Flink, a powerful technology for building stream processing applications. Immerok has developed a cloud-native, fully managed Flink service for customers looking to process data streams at a large scale and to deliver real-time analytical insight. With Immerok, Confluent plans to accelerate the launch of a fully managed Flink offering that is compatible with its managed Kafka service, Confluent Cloud. A public preview of the Flink offering for Confluent Cloud is planned for 2023. Confluent’s initial focus will be to build an exceptional Apache Flink service for Confluent Cloud, bringing a cloud-native experience that delivers the same simplicity, security and scalability for Flink that customers have come to expect from Confluent for Kafka.

Data migrator and manager Datadobi has won Swiss financial, pensions and insurance services business Retraites Populaires as a customer for its StorageMAP product. With StorageMAP, Retraites Populaires was able to move 18 years of production and archive data into a new NetApp environment while ensuring an end-to-end chain of custody. Retraites Populaires was an HPE 3PAR and Synergy customer.

Event streamer and Cassandra NoSQL database supplier DataStax has bought machine learning (ML) business Kaskada. DataStax calls itself a real-time AI company and says Kaskada software manages, stores and accesses time-based data to train behavioral ML models and deliver instant, actionable insights. DataStax will open source the core Kaskada technology initially, and it plans to offer a new machine learning cloud service later this year. Davor Bonaci, Kaskada CEO, said: “We’re thrilled to join forces with DataStax to enable the real-time AI stack that just works, fueled with data from Astra DB.” No price was mentioned.

One larger research and analytics firm, the Futurum Group, has bought another, smaller research and analytics firm, the Evaluator Group, which is active in the storage industry. Daniel Newman, CEO and principal analyst at Futurum Research, said: “The addition of Evaluator Group to our roster will bring a greater depth and reach of knowledge that is unique in the world of research and analysts.” The Evaluator Group will now be part of The Futurum Group family of companies. Camberley Bates will remain with Evaluator Group as managing director, working with The Futurum Group principals Daniel Newman and Shelly Kramer to coordinate strategy and activities. Key Evaluator Group leaders Randy Kerns and Russ Fellows, along with the Evaluator Group team, will also remain with the company, driving new strategic initiatives.

Dense optical disk developer Folio Photonics will have its founder and chief development officer Dr Kenneth Singer present a “High capacity optical data storage for active archives” session at next week’s SPIE Photonics West, taking place January 28 – February 2, 2023, at the Moscone Center (San Francisco, CA). He will detail the optical pickup unit for dynamic testing at commercial speeds, as well as the results on writing and reading an eight-layer disc. Prospects for commercialization of the technology for long-lived active-archive applications will also be explained.

Fujitsu and Sapporo Medical University today announced the launch of a joint project starting in April 2023 to realize data portability for patients’ healthcare data including electronic health records (EHRs) and personal health records (PHRs). Fujitsu will develop a mobile app that enables users to view healthcare data on their iPhones and a cloud-based healthcare data platform to manage patients’ health data. This project marks the first initiative in Japan to link electronic medical records with Apple’s Health app under Apple’s support. Sapporo Medical University Hospital, the affiliated hospital of Sapporo Medical University, aims to introduce the system in April 2023.

IBM has released Spectrum Scale Erasure Code Edition (ECE) v5.1.6 with new open source tools. They include:

  • ece_tuned_profile – a tuned profile to be used on ECE nodes.
  • ece_os_readiness – assesses the readiness of a single node to run ECE.
  • ece_os_overview – uses the JSON files from ece_os_readiness to summarize the readiness of all servers.
  • ece_network_readiness – runs a network test across multiple nodes and compares the results against IBM Spectrum Scale Key Performance Indicators (KPIs), hiding much of the complexity and presenting the results in an easy-to-interpret way.
  • ece_storage_readiness – runs a raw read test on storage using the FIO tool and presents the results against KPIs, again hiding the complexity and presenting the results in an easy-to-interpret way.
  • ece_capacity_estimator – calculates the effective capacity of Spectrum Scale Native RAID systems. It is an estimate, and actual figures could differ from the calculated ones; be prepared for 1 percent deviations when using this tool.

Index Engines expanded its development team last December, hiring nearly 30 software engineers from the now-defunct Pavilion Data to support its cyber resiliency solutions, including CyberSense, its analytics engine for detecting data corruption caused by ransomware. The engineers, based in Pune, India, and San Jose, California, were responsible for creating the file system for Pavilion’s HyperParallel Storage Array. Their addition is expected to help support additional backup and snapshot platforms for CyberSense, as well as provide use cases outside of post-attack detection.

Unicorn IT security and services supplier Kaseya is expanding its Orlando, Florida, operation, where it employs 125 of its circa 5,000 employees, planning to add another 750 tech support staff in Orlando. Kaseya quadrupled its Miami office space in 2022.

Lenovo has announced a new generation of its ThinkSystem and ThinkAgile servers and storage with 4th Gen Intel Xeon Scalable (Sapphire Rapids) processors. They accelerate data networking, AI inference and analytics, delivering improved performance to help businesses better manage, process and analyze the explosive growth of data. Target workloads across all industries include in-memory databases, large transactional databases, batch processing, real-time analytics, ERP, CRM, legacy system replacements, and virtualized and containerized workloads. New ThinkAgile V3 HX, MX and VX hyperconverged infrastructure systems are pre-integrated with an open ecosystem of partners, including Microsoft, Nutanix and VMware software capabilities, and are available via TruScale Infrastructure as a Service.

In-memory computing software supplier MemVerge said its Memory Machine is the first software-defined Compute Express Link (CXL) memory management product to support the 4th Gen Intel Xeon Scalable processor (Sapphire Rapids) as a CXL platform development environment. Memory Machine provides transparent access to a pool of DDR and CXL memory, dynamically placing the hottest data in the fastest tier and guaranteeing quality of service. MemVerge’s application-aware memory tool, Memory Viewer, will offer day-one support for Sapphire Rapids.

Hyperscale analytics data warehouse storage supplier Ocient has released v21 of its software, adding geospatial and machine learning features. They include:

  • 17 new Geospatial Data Analytics functions – Enabling customers to ingest and analyze complex geospatial data sets at a scale and speed previously infeasible.
  • Secondary Indexing – Enhancing performance on complex data sets by offering a suite of secondary indexes to access and retrieve data faster than other data warehouses.
  • Machine Learning (ML) Data Models – Growing the list of supported models to ensure large teams of data scientists can query source data efficiently and directly from the Ocient database.

There are also multiple feature enhancements and extended operating system (OS) support.

Pure Storage announced a new energy efficiency SLA for Evergreen//One. The SLA guarantees a maximum number of actual watts consumed per tebibyte (TiB). If the guaranteed watts/TiB figure is not met, customers can request service credits, and Pure Storage will carry out remediation actions, including densification or consolidation, at no additional cost.
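
For a sense of the metric, a quick worked example with invented numbers (these are not Pure’s figures):

# Hypothetical illustration of a watts-per-TiB measurement, not Pure's SLA math.
power_draw_watts = 1200        # measured power draw of the array (invented)
effective_capacity_tib = 500   # capacity in TiB (invented)
print(power_draw_watts / effective_capacity_tib)  # 2.4 W/TiB, compared against the SLA ceiling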

Pure also said that increased demand from existing and new customers for its subscription offerings – including its Evergreen Portfolio (Evergreen//One, Evergreen//Forever, Evergreen//Flex), Pure Cloud Block Store, and Portworx – saw it pass $1 billion in subscription Annual Recurring Revenue (ARR) for the first time in the third quarter of fiscal 2023, up 30 percent year-over-year. Its Subscription Services revenue was $244.8 million, also up 30 percent year-over-year.

Samsung Electronics’ preliminary Q4 2022 revenue estimate is down 9 percent year-on-year at KRW 70 trillion ($55 billion), due to lower smartphone and memory chip demand. It estimates a consolidated operating profit of KRW 4.3 trillion ($3.38 billion), a 69 percent annual dip and Samsung’s lowest operating profit since Q3 2014. The company is reducing memory chip production by 10 percent in reaction to lower demand, following production cuts by Kioxia, Micron and SK hynix. Digitimes thinks Q1 2023 could be the bottom of the memory down cycle.

Comforte AG announced the launch of its Data Security Platform integration in partnership with Snowflake to help customers securely move sensitive data to Snowflake’s single, integrated platform and use it for data analytics while helping them comply with data privacy regulations. Protection methods such as format-preserving encryption (FPE) and tokenization provide a strong level of protection while keeping the data usable for analytics initiatives.
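
As a loose illustration of what tokenization means here – a toy sketch, not Comforte’s technology – a sensitive value is swapped for a random token that keeps the original format, with the real value held in a separate vault, so analytics can run on the tokens instead of the raw data.

import secrets

_vault = {}  # in a real system this would be a hardened, access-controlled token vault

def tokenize(value: str) -> str:
    # Replace each digit with a random digit so the token preserves the original format.
    # Toy example only: no collision handling, key management or FPE algorithm here.
    token = "".join(secrets.choice("0123456789") if ch.isdigit() else ch for ch in value)
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    return _vault[token]

print(tokenize("4111-1111-1111-1111"))  # e.g. 8302-5519-0247-6634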

Data observability supplier Kensu has partnered with Snowflake to better aid data practitioners in gaining full visibility into their real-time data with Snowflake’s data storage, processing, and analytics capabilities. The Kensu Community Edition is now powered by Snowflake’s single, integrated platform, enabling more users to seamlessly deploy Kensu’s agent-based approach to deliver real-time, contextual data observations with a free, unlimited-time developer environment.

SK hynix’s vice chairman and co-CEO Park Jung-ho discussed ways to strengthen cooperation with global tech companies, including Qualcomm Technologies, on the sidelines of CES 2023 in Las Vegas. Park, along with SK hynix President and co-CEO Kwak Noh-Jung and other executives, met with Qualcomm President and CEO Cristiano Amon on January 4 in Las Vegas. The prospect is for SK hynix to supply its memory chips for use with Qualcomm’s smartphone application processors. Qualcomm Technologies has been expanding its business into automotive and into consumer, industrial and networking IoT.

The open-source SODA Foundation, in partnership with Linux Foundation Research, has just published its 42-page Data and Storage Trends 2022 report: “Data and Storage Strategies in the Era of the Data-Driven Enterprise.” SODA brings together industry leaders to collaborate on building a common framework to promote standardization and best practices for data storage, data protection, data governance, data analytics and more, supporting IoT, big data, machine learning, and other applications. Download the report here.

Synology has released a new two-bay NAS box, the DS723+, for homes and small businesses. It features an AMD Ryzen R1600 dual-core CPU, which Synology says offers much better performance for the user than the previous generation’s CPU. There is an optional 10GbE RJ-45 port upgrade via the E10G22-T1 mini card, there are dual M.2 NVMe drive slots, and memory is expandable to 32GB of RAM (2 x 16GB). The two hot-swap drive bays (seven with an expansion unit) support 3.5-inch SATA HDDs (4, 8, 12, 16 and 18TB) and 2.5-inch SATA SSDs. The DS723+ is available starting today through Synology partners and resellers worldwide at an MSRP of $449.99. More info here.

Synology DS723+.


Chinese supplier TerraMaster has released TOS 5 software, which it says has a HyperLock-WORM file system to prevent tampering. Data stored in it can be locked for a specified period of up to 70 years, during which it cannot be altered or deleted, only read. Admins can give users read and write permissions.

Datacenter virtualizer Verge.io announced a distribution agreement with Advanced Computer Solutions Group (ACSG) to extend the reach of its virtualization software. ACSG deploys technology and cybersecurity products to educational institutions, including both K-12 and higher education, as well as government municipalities, private industry organizations and small businesses.

Verge-OS software abstracts compute, network and storage from commodity servers and creates pools of raw resources that are simple to run and manage. It is ultra-thin software that is easy to install and scale on low-cost commodity hardware and self-manages based on AI/ML. A single license replaces separate hypervisor, networking, storage, data protection, and management tools to simplify operations and downsize complex technology stacks.

A Veeam 2023 Data Protection Trends Report finds that, globally, 85 percent of organizations expect to increase their data protection budgets by 6.5 percent in 2023 – higher than spending in other areas of IT. Cyberattacks caused the most impactful outages for organizations in 2020, 2021 and 2022. 85 percent of organizations were attacked at least once in the past 12 months, up from 76 percent the year before. Recovery is a main concern, as organizations reported that only 55 percent of their encrypted or destroyed data was recoverable after attacks.

Data replicator WANdisco signed an initial agreement worth $6.6 million with a Europe-based global telecommunications service provider for a one-off migration of Internet of Things data from the client’s datacenter to the cloud. Once the migration is complete, the client is expected to launch a range of IoT-related services. WANdisco said: “This is the third tier 1 global telecommunications company to choose WANdisco’s solutions since the start of 2022. … WANdisco believes there is potential for significant expansion opportunities with this customer.”

WANdisco also announced a trading update for the 12 months ended 31 December 2022. Trading in Q4 2022 finished strongly following significant contract momentum with both new and existing customers. Preliminary unaudited revenues are expected to be at least $24 million, growth of 229 percent year on year (FY21: $7.3 million). Bookings in FY22 grew 967 percent to $127 million (FY21: $11.9 million), driven by progress in the IoT industry vertical, with most contract wins under a commit-to-consume revenue model. A number of the one-off migration contracts won during 2022 have the potential to expand into commit-to-consume contracts during 2023. WANdisco ended the period with a strong balance sheet, with approximately $19 million in cash and $44 million in trade receivables. Together with remaining performance obligations (RPO) of $110 million, this should see the company through to profitability.

Fungible shareholder sues company: Wants to inspect the books prior to $190m Microsoft deal

Fungible stockholder and ex-employee Naveen Gupta has filed a lawsuit against the company, seeking to “investigate potential wrongdoing and breaches of fiduciary duties.”

The allegations are around convertible promissory notes the business issued, with the claim that “after-the-fact” down-round fundraising may have ensured that execs and other chosen stockholders received more of the proceeds from Microsoft’s $190 million acquisition of Fungible in December 2022 than they otherwise would have.

Gupta alleges that neither he nor other stockholders were given the opportunity to participate in a Series D round of financing, claiming only those who were aware of Fungible’s discussions with Microsoft were allowed to participate.

The lawsuit, case number 2023-0007-JTL in the Delaware Chancery Court, asks that Gupta be allowed to inspect Fungible’s books and records in connection with the Microsoft acquisition, including its stockholder list from June 2022 and board-level records relating to strategic transactions, promissory notes and Series D financing activities from March 2022.

The (redacted) lawsuit claims: “The Company’s founders now seek to cash-out Plaintiff, as well as the rest of the Company’s employees, at a grossly unfair price and substantial discount to their stock option exercise price.”

It states: “Because it appears that the company provided controlling stockholders, including [redacted] and [redacted], with the Series D Preferred Stock in a self-interested transaction to divert Merger consideration from other Fungible stockholders, Plaintiff has demonstrated a credible basis to suspect wrongdoing.”

Gupta had been a Fungible employee for four years, with stock options through which he became a Class A common stockholder. 

The main event that led to the Microsoft acquisition, referred to as a merger in the court document, was that Fungible failed to sell enough of its Data Processing Units (DPUs) and FS1600 network file storage systems, meaning it was in danger of running out of money.

We have seen a copy of the redacted public filing document for the lawsuit.

The suit alleges that certain persons may have unfairly enriched themselves through the convertible promissory note and Series D preferred stock rounds, at the expense of common stockholders.

Now we wait to see if the court grants Gupta’s request. If it does and he inspects the books, we will see whether he then sues Fungible and the unnamed stockholders for breaching their fiduciary duty.

BackupLABS aims to protect niche SaaS apps

UK startup BackupLABS intends to protect everyday SaaS apps, not just those from the big providers.

Users of SaaS applications like Salesforce and ServiceNow need to back up and protect their own data, as the provider only looks after its own infrastructure. Data protectors such as OwnBackup, Commvault (Metallic), Druva and others offer data protection for Salesforce, ServiceNow, Microsoft 365 and Dynamics, but there are dozens of other SaaS applications in use with unprotected customer data.

Rob Stevenson, BackupLABS
Rob Stevenson

CEO Rob Stevenson says he founded BackupLABS to fill this gap: “I started BackupLABS after becoming frustrated around 2018 when I saw that only a few companies were offering 365 backup – and doing it badly. I didn’t have the knowledge or experience back then to develop a 365 backup platform and it’s annoyed me ever since. The problems back then (MS not backing up 365) are now apparent in all these newer SaaS apps that many organizations use so I started BackupLABS.”

He added: “I also became frustrated that I was only offering out backup services to UK-based SMEs when there is a big worldwide market out there. I didn’t want to miss the boat again basically.”

BackupLABS is now out of stealth and says it offers a simple, always-on facility to protect business-stored data on SaaS platforms against accidental user deletion, nefarious users, unauthorized access and platform errors. At launch it is providing integrations for Trello, GitHub and GitLab. Protection for Notion, Jira, Asana and more will follow soon, we’re told.

The service, according to BackupLABS, provides:

  • Automatic daily backups stored within AWS
  • Setup complete within minutes
  • Rapid restores with granular recovery
  • Compliance with ISO 27001, SOC2 certification and cyber insurance requirements
  • 256-bit AES encryption
  • Backup audit log record
  • Zero Knowledge policy – employees have no access to data
  • Data protection regulation compliance with HIPAA, GDPR, UK Cyber Essentials Plus

Stevenson is also a director at BackupVault, which specializes in protecting SMEs in the UK and uses one of three third-party backup tools (Redstor, N-able and Veeam). “I have been running BackupVault since 2004 and know the cloud backup industry very well. But since 2017 I have ramped up BackupVault having sold an IT support business I used to own. So now I run BackupVault and BackupLABS.”

A third reason for starting up BackupLABS was that “many of our current BackupVault customers were asking us to do so. The most common one was: ‘We use Trello and lost data before, can you back that up?’ Also, we back up quite a few software developers’ server data and they often asked if we can back up GitHub. Asana was another common ask.”

There was a fourth reason as well: “Another reason for customers asking was that they were having an ISO 27001 audit and one of the core aspects of 27001 is to show where you keep data, and demonstrate it is backed up and protected. It’s also surprising how many ISO auditing companies are now only just starting to look beyond office servers and 365 for locations of critical data. Only a handful now are asking if their SaaS stuff is backed up.

“These customers realized many of these platforms are not backed up and when I researched further, only a couple of other backup companies offered it. And the UX/UI and general functionality was terrible,” he claimed. “So we built BackupLABS.”

Q&A

Blocks & Files: Tell us about the gap in the SaaS app customer data protection market.

Rob Stevenson: The big backup boys only concentrate on the main three (365, Workspace, Salesforce), but my experience tells me this is going to be an issue over the next few years. Also, with ISO audits, GDPR, new legislation, insurance, all these organizations actually have to backup this data. And the app providers all say in their T&Cs that they don’t. ‘Cloud is just someone else’s computer,’ as the saying goes.

Blocks & Files: How does BackupLABS build its SaaS app connectors?  

Rob Stevenson: BackupLABS uses the public-facing APIs that all these apps offer to connect and backup/restore the data. They actually have to offer these APIs out as they don’t want the hassle or responsibility of backing up the customer data. The Shared Responsibility Model and their T&Cs sees to that as well.
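
For illustration, here is a minimal sketch of the kind of public-API pull such a connector might make, in this case against Trello’s REST API. The key/token placeholders and the local backup path are hypothetical, and this is not BackupLABS’s actual code; a real connector would also pull lists, cards and attachments rather than just the board records.

import json
import requests

TRELLO_KEY = "your-api-key"      # hypothetical placeholders obtained from Trello
TRELLO_TOKEN = "your-api-token"

def backup_boards(dest_dir="."):
    # Fetch the authenticated user's boards via Trello's public REST API,
    # then save each board's JSON to a local file as a crude "backup".
    resp = requests.get(
        "https://api.trello.com/1/members/me/boards",
        params={"key": TRELLO_KEY, "token": TRELLO_TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    for board in resp.json():
        with open(f"{dest_dir}/{board['id']}.json", "w") as f:
            json.dump(board, f, indent=2)

backup_boards()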

Blocks & Files: Do you provide deduplication? 

Rob Stevenson: Yes, this happens on S3 at the AWS end. The actual size of customer data on these SaaS apps is surprisingly small – certainly compared to what we backup at BackupVault with VMs, 365, Google Workspace etc.

Blocks & Files: Do you have a (virtual) airgap to defend against ransomware? I see you use AWS so S3 and ObjectLock seems to be one way you would/could have a ransomware protection feature. 

Rob Stevenson: Yes, we have ObjectLock enabled to do this. Ransomware is at the top of our concerns. As you know, it’s an issue with on-prem servers at the moment, but I predict criminals will target where the data is/going very soon. I’ve seen ransomware infect and encrypt 365 OneDrive and SharePoint many times before, so it’s only a matter of time before they do it for these SaaS apps. On our roadmap is the ability for customers to increase retention and have more of an archiving feature too.
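
For context, a minimal sketch of how S3 Object Lock retention can be applied to a backup object with boto3. The bucket name, object key and retention window are invented, and this is not a description of BackupLABS’s implementation.

from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# Hypothetical sketch: the bucket must have been created with Object Lock enabled,
# and some configurations also require a checksum (Content-MD5) header on the request.
with open("board-abc123.json", "rb") as body:
    s3.put_object(
        Bucket="example-backup-bucket",               # hypothetical bucket name
        Key="trello/2023-01-13/board-abc123.json",    # hypothetical object key
        Body=body,
        ObjectLockMode="COMPLIANCE",                  # object cannot be deleted or overwritten
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )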

Blocks & Files: Will you add integrations to protect ServiceNow, Salesforce and Microsoft Dynamics? 

Rob Stevenson: Probably not. We may offer 365 backup at some point, but it’s been done well by a number of providers now so we wouldn’t really be offering anything different. Backing up 365 is also incredibly complex and so therefore not worth the effort compared to established players.

Blocks & Files: How can an SME-focused Bournemouth-based backup shop compete on the global stage? 

Rob Stevenson: I have been in the industry for decades and can see where all the providers have made mistakes. So many providers say they do one thing and then don’t deliver, have sub-par support etc. We will be different.

Also, we have been stalked by three large VC firms already offering ludicrous amounts of funding if we wanted to take it. I am more inclined to be bootstrapped, though, and take on debt financing (such as SaaS-Capital.com) based on our monthly revenues. This is becoming a more popular way of growing recurring revenue tech businesses recently. Plus, statistically, bootstrapped companies have a better outcome compared with VC-backed ones. But it is nice to know that I have the VC funding option on the table, even if I am not actively looking for it.

We are also a fully remote team (UK, Eastern Europe, US, Ecuador so far) and so don’t need to be reliant on funding staff in one place such as SF.

We are also a product-led company instead of a sales-led business. I have seen it many times before in the tech/backup industry where it’s very hard to purchase a service. We intend on making it very easy to do so, and then the customers effectively insist on purchasing. Similar to what Dropbox did initially. We are heavily focused on a great UX/UI and have a specialist on board to help with this. A lot of larger backup companies have terrible UX/UI as you have probably seen. This all feeds into our product-led growth model too.

We intend to focus on the SMEs to begin with, though, and once we get proper traction, offer it out to larger enterprises that have more specific and custom requirements (we will need a sales team then).

Blocks & Files: How do you price your service? 

Rob Stevenson: It depends on the app. For Trello, we charge based on the number of “Boards” they have, while in GitHub we charge based on the number of “repos” they have. Other apps use different methods of billing and we will tend to follow that. So it may be workspaces, users, items etc.

Blocks & Files: Are you funded from BackupVault’s revenues? 

Rob Stevenson: We were bootstrapped by our sister company and are now self-funded.

Blocks & Files: Will you sell your SaaS backup service to managed service providers? 

Rob Stevenson: Yes, 100 percent. We intend on doing this in the later part of this year, but want to concentrate on getting enough end users on board and the service polished first. We will then roll out a partner plan and also affiliates.

Blocks & Files: How do you position BackupLABS against Clumio?

Rob Stevenson: They primarily backup AWS services so not in comparison to us. We will target SaaS apps that have an API that allow us to connect to. A lot of these larger (and well-funded) backup providers can only afford to go for the big stuff. We will focus on the smaller niche apps that still contain critical business data.

Blocks & Files: Your last word is?

Rob Stevenson: The amount of critical data stored on apps like Trello and GitHub is growing as more diverse online tools are adopted. Having external backup in place for SaaS data can therefore make the difference between a business surviving a data breach or having to close completely.