
Rival post-PCIe bus standards groups sign peace treaty

Two rival groups developing CPU-peripheral bus standards have agreed to work together.

The CXL and Gen-Z groups announced yesterday a memorandum of understanding, which opens the door for future collaboration. Blocks & Files expects a combined CXL-Gen-Z specification will be developed quickly and be available before the end of the year.

Jim Pappas, CXL Consortium board chair, issued a quote: “CXL technology and Gen-Z are gearing up to make big strides across the device connectivity ecosystem. Each technology brings different yet complementary interconnect capabilities required for high-speed communications. We are looking forward to collaborating with the Gen-Z Consortium to enable great innovations for the Cloud and IT world.”

Gen-Z Consortium President Kurtis Bowman had this to say: “CXL and Gen-Z technologies work very well together, and this agreement facilitates collaboration between our organisations that will ultimately benefit the entire industry.” 

The MOU

CXL and Gen-Z technologies are read and write memory semantic protocols focused on low latency sharing of memory and storage resource pools for processing engines like CPUs, GPUs, AI accelerators or FPGAs. CXL is looking at coherent node-level computing while Gen-Z is attending to fabric connectivity at the rack and row level.

The MOU between the two groups provides for the formation of common workgroups and defines bridging between the two protocols.

Companies wishing to participate must simultaneously be a promoter or contributor member of the CXL Consortium and a general or associate member of the Gen-Z Consortium.

Why we need a new bus standard

Server CPUs, as they get more cores and link to GPUs with multiple engines, FPGAs, and ASICs, need a faster bus to link the data processing entities to memory and storage devices and other peripherals. Current bus technology is unable to transmit data fast enough to keep the processing engines busy.

PCIe speed table.

Today’s PCIe gen 3 bus is transitioning to PCIe 4.0, which is twice as fast, and PCIe 5.0 is set to follow. But even PCIe 5.0 is considered too slow for modern requirements. Accordingly, four consortiums have emerged to move bus technology standards forward: CXL, Gen-Z, CCIX and OpenCAPI.
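The generation-to-generation doubling is easy to work through. A rough sketch (per-lane transfer rates from the PCIe specifications; the small 128b/130b encoding overhead is ignored for simplicity):

```python
# Approximate PCIe per-lane transfer rates in GT/s; each generation
# doubles the previous one.
GT_PER_LANE = {3: 8, 4: 16, 5: 32}

def x16_bandwidth_gbytes(gen: int) -> float:
    """Rough one-direction bandwidth of an x16 link in GB/s.
    Ignores the ~1.5% 128b/130b encoding overhead; 8 bits per byte."""
    return GT_PER_LANE[gen] * 16 / 8

for gen in (3, 4, 5):
    print(f"PCIe {gen}.0 x16: ~{x16_bandwidth_gbytes(gen):.0f} GB/s")
```

That gives roughly 16, 32 and 64 GB/s per direction for a 16-lane slot across the three generations, which is why the processing engines described above keep demanding the next step.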

We think CCIX and OpenCAPI are toast and will have to find niche technology areas to survive or merge somehow with CXL and Gen-Z.

Four buses in a row

The Compute Express Link (CXL) bus protocol specification means that CXL can run across a PCIe 5.0 link when it arrives. CXL is supported by the four main CPU vendors: AMD, ARM, IBM and Intel. Gen-Z, OpenCAPI and CCIX have not attracted the same degree of CPU manufacturer support.

CXL is also supported by Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei and Microsoft.

CXL diagram

The Gen-Z consortium is supported by AMD, ARM, Broadcom, Cray, Dell EMC, Hewlett Packard Enterprise, Huawei, IDT, Micron, Samsung, SK hynix, Xilinx and others, but not Alibaba, Facebook and Intel. It has more than 40 members.

The CCIX (Cache Coherent Interconnect for Accelerators) consortium was founded in January 2016 by AMD, ARM, Huawei, IBM, Mellanox, Qualcomm, and Xilinx – but not Nvidia or Intel.

The OpenCAPI (Open Coherent Accelerator Processor Interface) consortium was established in 2016 by AMD, Google, IBM, Mellanox and Micron; other members include Dell EMC, HPE, Nvidia and Xilinx. OpenCAPI has been viewed as an anti-Intel group, driven by IBM.

Why Salesforce needs third party apps for safe backup and restore

The unwritten rule for Salesforce users is: ‘Backup your own data’.

Without a dedicated backup app, customers could risk losing their data because Salesforce does not offer complete dataset backups on a daily basis.

Salesforce’s built-in data protection facilities include:

  • Weekly export – this does not include the full data set and customers could lose up to a week’s worth of data
  • Export to data warehouse – needs a script and API knowledge
  • Sandbox refresh – 30-day refresh period
  • Data recovery service – six to eight weeks needed plus $10,000 fee

None of the above options meet what an enterprise IT department would regard as a satisfactory backup service for important data. For the enterprise, the acceptable data loss period is measured in seconds or minutes rather than hours, days or weeks. And restore needs to be fast – ideally measured in minutes.

Chart showing Salesforce recovery options

The data custodian

SaaS providers such as Salesforce do not regard themselves as custodians of the tenants’ data in their multi-tenant SaaS environments. You the customer are the custodian of your data – just as you are when running applications in your own data centre.

Of course, the enterprise could roll its own backup app; to do the job it would need to set up a scripting routine that uses Salesforce API calls, or write its own application code. But this option is not feasible for most users.
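To give a flavour of what rolling your own involves, here is a minimal sketch of exporting one Salesforce object through the REST query API. The instance URL, API version and access token are placeholders; authentication, error handling and metadata objects, which a real backup must cover, are all omitted:

```python
# Minimal roll-your-own export sketch: page through a Salesforce object
# via the REST query API and write the records to disk as JSON lines.
import json
import urllib.parse
import urllib.request

API_VERSION = "v52.0"  # assumption: any reasonably recent API version

def query_url(instance_url: str, soql: str) -> str:
    """Build a REST query URL for a SOQL statement."""
    return f"{instance_url}/services/data/{API_VERSION}/query/?q={urllib.parse.quote(soql)}"

def export_object(instance_url: str, token: str, soql: str, out_path: str) -> int:
    """Fetch every page of a query result and dump the records to a file."""
    url = query_url(instance_url, soql)
    count = 0
    with open(out_path, "w") as out:
        while url:
            req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
            with urllib.request.urlopen(req) as resp:
                page = json.load(resp)
            for rec in page["records"]:
                out.write(json.dumps(rec) + "\n")
                count += 1
            # Salesforce returns nextRecordsUrl until the last page
            next_path = page.get("nextRecordsUrl")
            url = f"{instance_url}{next_path}" if next_path else None
    return count
```

Multiply this by every object, its metadata and its relationships, add scheduling, retention and restore, and the scale of the do-it-yourself job becomes clear.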

That is why additional SaaS applications such as OwnBackup for Salesforce provide a backup, restore, archive data protection service for Salesforce customers. Competing Salesforce backup applications include CopyStorm, Druva, Reflection Enterprise and Kaseya’s Spanning.

OwnBackup was set up in 2015 and has raised $49.8m in four funding rounds. It has offices in Tel Aviv, New Jersey and London. The company offers automated, complete and daily dataset (data and metadata) backup, plus archiving and sandbox seeding for test and dev. OwnBackup’s technology was initially developed as a sideline from 2012 onwards by CTO Ariel Berkman, while he was working at Recover Information Technologies, an Israeli firm that recovers data from crashed storage drives and media.

Multi-tenancy problem

Why can’t customers use their existing backup applications to do the job?

OwnBackup co-founder Ori Yankelev told us in a press briefing last week that traditional relational database backups require customer access to the underlying infrastructure. “They need an agent on the machine. With multi-tenant [SaaS] applications this is not possible. The customer doesn’t have access to the infrastructure.”

A customer cannot run a backup agent on Salesforce infrastructure: only Salesforce could do that. However, Salesforce provides access to its infrastructure through API calls only – which OwnBackup uses to provide its service.

Data that is input into Salesforce is visible to customers when they operate the app. But this is not necessarily the case for the metadata that Salesforce uses, as API calls are required to access that information. A backup application has to access this metadata to be able to back up and restore customer data.

SaaS backup silos

The upshot is that the Salesforce customer needs a dedicated backup silo and backup software for Salesforce data, even if they operate on-premises backup facilities.

Also, note that OwnBackup and other backup apps tend to specialise in one SaaS app target. This means customers likely need different third-party backup apps for each SaaS application they use.

Yankelev said OwnBackup is considering adding Workday to its coverage. But he noted: “Its APIs don’t deliver the flexibility needed for us to deliver our services. Hopefully it will change.” Subject to this caveat, the company could announce OwnBackup for Workday by the end of 2020.

Our impression is that a SaaS app’s API set is not necessarily designed to enable a third-party backup application to backup and restore a customer’s complete dataset with minimal or no data loss.

Amazon makes AWS EFS file read ops 5x faster

Amazon Web Services has upped the read operations per second for default Elastic File Systems users from 7,000 to 35,000 – at no charge.

The company has posted some details in a blog, which explains there are two file access modes:

  • General purpose (GP) mode, now with 35,000 random read IOPS,
  • MAX I/O performance mode, with up to 500,000 IOPS and slightly higher metadata latencies than GP mode.

AWS’s EFS competes with Azure and Google file services. With this initiative, AWS has made its default or standard file access seven times faster than Google’s standard access mode. Amazon’s MAX IO performance mode’s 500,000 IOPS comfortably outstrips Google’s premium mode, with its 60,000 IOPS.

Google Cloud Filestore

At the end of 2018 Google began beta testing Cloud Filestore with NFS v3 support and two classes of service:

  • Standard costing 20¢/GB/month, 80 MB/sec max throughput and 5,000 max IOPS,
  • Premium at 30¢/GB/month, 700 MB/sec and 30,000 IOPS.

Google also offered startup Elastifile’s file service, which supported NFS v3/4, SMB, AWS S3 and the Hadoop File System. The service delivered millions of IOPS at less than 2ms latency, according to Google. Google bought Elastifile in July 2019 and said it would be integrated into Cloud Filestore, which now offers standard (5,000 IOPS) and performance (60,000 IOPS) tiers.

Azure Files

Microsoft’s Azure Files supports SMB access and is sold in three flavours:

  • General Purpose V2 or basic with blob, file, queue, table, disk and data lake support,
  • General Purpose V1 for legacy users with blob, file, queue, table, and disk support,
  • FileStorage for file only.

GP v2 and v1 have standard and premium performance tiers, with file access only in the standard tier. The premium tier includes FileStorage and uses SSDs. The default premium tier quota is 100 baseline IOPS, with bursting up to 300 IOPS for an hour. If a client exceeds its baseline IOPS, it is throttled by the service.
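A baseline-plus-burst quota like this is commonly modelled as a credit bucket: unused baseline IOPS accrue as credits, and bursting above baseline drains them. The sketch below uses the article’s numbers (100 baseline, 300 burst, one hour of full burst); the mechanics are illustrative, not Microsoft’s actual implementation:

```python
# Illustrative credit-bucket model of a baseline/burst IOPS quota.
BASELINE = 100          # IOPS included in the quota
BURST = 300             # maximum IOPS while credits remain
MAX_CREDITS = (BURST - BASELINE) * 3600  # enough for one hour of full burst

def step(credits: float, requested_iops: int) -> tuple:
    """Advance one second: return (new_credits, granted_iops)."""
    if requested_iops <= BASELINE:
        # Under baseline: bank the unused allowance, up to the cap
        return min(MAX_CREDITS, credits + BASELINE - requested_iops), requested_iops
    granted = min(requested_iops, BURST)
    needed = granted - BASELINE
    if credits < needed:   # credits exhausted: throttled back to baseline
        return credits, BASELINE
    return credits - needed, granted
```

Bursting at 300 IOPS drains 200 credits per second, so a full bucket lasts exactly the quoted hour before the client drops back to 100 IOPS.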

This performance is nowhere near AWS EFS or Google Cloud Filestore levels. Microsoft will need to raise its game here.

Update 8 April 2020. Azure Files Principal PM Manager Mike Emard contacted Blocks & Files to tell us: “Azure Premium Files supports up to 100,000 IOPS and 10GB/s of throughput.” See https://docs.microsoft.com/en-us/azure/storage/files/storage-files-planning#storage-tiers for more details.

Pavilion Data unfurls NVMe-oF array roadmap

Pavilion Data is developing an NVMe-oF flash array that is double the speed and capacity of its current line-up. The storage vendor also plans to release several software enhancements this year.

CEO Gurpreet Singh and his team briefed analysts this week about the current state of business and where the product roadmap is headed.

Pavilion Data Systems was founded in 2014 and has raised $58m to date in three rounds, including $25m most recently in 2019. Blocks & Files places Pavilion in a group with other NVMe-oF startups, including Apeiron, E8, Excelero and Kaminario. This group competes with mainstream NVMe-oF flash array suppliers such as Hitachi Vantara, NetApp and Pure Storage. It has reseller deals with Dell EMC, HPE and IBM, which also sell their own NVMe-oF all-flash arrays.

RF100 all-flash array

The San Jose startup has developed the RF100 all-flash array, which is built like a network switch with up to 10 controllers. This is a radically different approach from typical all-flash arrays which use the traditional dual-controller array architecture. Pavilion controllers are basically line cards and each contains four Ethernet ports – 40Gbit/s or 100Gbit/s.

Pavilion system architecture diagram as envisaged by Blocks & Files.

The controllers are connected across an internal PCIe network to NVMe SSDs, a design that scales performance linearly. Host servers use NVMe-oF to access the RF100; both TCP and RoCE (RDMA) transports are supported.

Customers can populate the array with SSDs they have purchased themselves, so existing supply deals can continue.

View from the Pavilion

Performance is Pavilion’s main selling point: the RF100 is fast, consistent and scales linearly. Claimed performance is up to 120GB/sec read bandwidth, 60GB/sec write bandwidth and 20 million 4K Random Read IOPS.

Pavilion claims world-record performance in four STAC-M3 analytical benchmarks against all other publicly disclosed systems, including systems with direct-attached SSD storage and Intel Optane drives. STAC members can get the details here.

Singh claims: “We’re the fastest storage system in the world [and] the need for speed is real.”

Pavilion CEO Gurpreet Singh.

He thinks Pavilion has one of the largest NVMe-oF deployments in the world, with a US federal customer.

The company has a dual sales track, with some systems going to research labs such as the Texas Advanced Computing Center, which need sheer consistent data access grunt. Other systems go to enterprises that need similar performance for applications such as media rendering farms and financial trading.

According to Singh, the company won several deals in the first quarter, including a $500K-plus deal last week with a global top five consumer electronics and media company. The customer will use Pavilion kit in a 4K and 8K video rendering farm.

Singh expressed confidence in the state of his business and with its partnerships with Dell EMC, HPE and IBM. He said sales cycles are elongating but not disappearing. At present, the company is modelling a two-quarter impact from the pandemic for enterprise customers and less impact on government and research business. Singh said Pavilion is not seeking fresh funding.

Roadmap

On the hardware front, Pavilion engineers are developing a PCIe gen 4-based system for 2021. PCIe gen 4 is twice as fast as the current PCIe gen 3 bus and the intent is that the upgrade will run twice as fast as the current hardware, with its average 40 microsecond access latency and 20 million IOPS. The system should also hold twice as much data, at 2PB effective capacity.

Pavilion’s array hardware

The IO ports will ramp up to 200Gbit Ethernet and InfiniBand capability, which is twice the current 100 gig speed.

Later this year Pavilion will introduce single pane of glass monitoring for multiple deployments and will add support for the open source Nagios and Zabbix network monitoring tools.

Pavilion Data analyst briefing slide – iPhone photo.

Pavilion’s roadmap includes a unified namespace for files across multiple systems, plus compression and asynchronous replication. Customers will get support for the S3 object access protocol and the ability to tier snapshots to S3 datastores.

Templates or reference architectures will be provided so partners can build all-flash or hybrid flash-and-disk Spectrum Scale systems. Partners will also get recipes for Kubernetes-Splunk and VMware-Splunk systems.

Net:net

The mainstream NVMe-oF all flash array suppliers include Dell EMC, HPE, Hitachi Vantara, IBM, NetApp and Pure Storage. All position their NVMe-oF arrays as being the fastest arrays in their portfolio. Two, Dell EMC and IBM, have high-end, multi-controller or monolithic NVMe-oF arrays – the PowerMax from Dell EMC and the DS8800 from IBM. Multi-controller arrays are typically more powerful and scalable than dual-controller all-flash arrays, which all these suppliers have in their product lines.

NVMe-oF startups include Apeiron, Excelero, Kaminario and StorCentric’s Vexata.

Pavilion’s challenge is to distinguish itself from these competitors. They are all fast. The company needs its array to be faster, and is confident that its technology gives it the performance edge. Think of high-end, monolithic NVMe-oF performance at prices nearer those of dual-controller NVMe-oF arrays.

The company is leaning on this claimed performance superiority in its market messaging to try to stand out from the competition. We will monitor its progress with interest.


Your occasional enterprise storage digest, featuring Commvault, Nutanix, HYCU, MariaDB and more

Companies in this week’s enterprise storage news roundup cover encryption, replication, tiering, object support, snapshots, the public cloud and more. Let’s start alphabetically.

Commvault upgrades for cloud and simpler data management

Commvault this week announced a bunch of software improvements, including single sign-on. Customers can now sign in once to manage Commvault across multiple deployments through a pull-down menu that enables navigation across Commvault regional, data centre, client, or other deployments within the Commvault Command Center.

New capabilities include support for:

  • Backing up, recovering, and migrating AWS DynamoDB, Redshift, and DocumentDB databases.
  • Converting, backing up, and migrating VMware workloads to Alibaba Cloud ECS.
  • Migrating Oracle and Microsoft SQL database applications to Azure.
  • Integrating with the ServiceNow platform to enable self-service. Commvault customers can back up, recover, and migrate file systems, VMs, Microsoft SQL Server workloads and other ServiceNow catalog assets, within the ServiceNow SaaS platform.
  • Converting Oracle Unix databases to Linux and vice versa. This makes it easier to migrate on-premises Unix Oracle databases to the cloud, or cloud Linux Oracle databases to on-premises infrastructure, all without needing an Oracle enterprise license.

Diamanti scales up Spektra

Diamanti, the developer of a bare-metal hyperconverged Kubernetes-orchestrated container system, has upped its Spektra software game and has added a new appliance.

Spektra v2.4 adds volume encryption and supports self-encrypting drives. It also introduces asynchronous replication for offsite disaster recovery.

The new D20X appliance is an existing D20 node that incorporates the latest Intel Cascade Lake Xeon processors, which feature increased core counts, a larger cache and higher clock speeds. The upshot is 36 per cent more processing power on average than models using previous Xeon CPUs.

HYCU high-fives Azure

HYCU’s data protection-as-a-service offering, Protégé, provides app- and database-aware data protection, disaster recovery across clouds, recovery of applications on a different cloud (e.g. for test and dev), and migration.

The company started out by supporting the Google Cloud Platform and has now added Azure support.

HYCU for Azure runs as a native service, available via subscription directly from Azure Marketplace and included in the Azure bill. The software is built on native APIs, supports all BLOB storage classes and auto selects the right class of Azure BLOB storage for the policy chosen by the customer.

HYCU for Azure is available immediately and data migration and disaster recovery functionality is generally available in 30 days. Data protection is available free of charge for the next three months.

InfiniteIO tiers to on-prem and cloud S3 storage

InfiniteIO has launched Hybrid Cloud Tiering software, enabling customers to access and manage all primary NAS data migrated to S3-based object storage in native file format.

The company supports public cloud providers such as Amazon Web Services, Google Cloud Platform, IBM Cloud, Microsoft Azure and Oracle Cloud Platform. Currently supported on-premises NFS- and S3-based private cloud platforms include Cloudian, Dell EMC, HPE, NetApp, Scality and Western Digital.

InfiniteIO has also announced extended metadata management support for Hitachi HCP S3 and NFS and Pure FlashBlade environments. Users and applications can continue to access the migrated data whether it is on-premises or in the cloud.

According to the company, customers can utilise cloud native services such as analytics, machine learning and serverless computing while maintaining internal security and governance without changing the existing infrastructure or user experience.

InfiniteIO plans to offer native file format support in Q2 as part of the Hybrid Cloud Tiering v2.5 software release.

MariaDB goes into the sky

MariaDB has announced SkySQL, a database-as-a-service version of its eponymous software.

SkySQL has a cloud-native architecture and uses Kubernetes for orchestration; ServiceNow for inventory, configuration and workflow management; Prometheus for real-time monitoring; and Grafana for visualization. It offers transaction and analytics support, with row, columnar, and combined row and columnar storage.

Michael Howard, MariaDB CEO, proffered this bullish quote: “Existing services, long in the tooth, lock out community innovation, meaning patches, new versions and features are missing for literally years. MariaDB SkySQL is a next-generation cloud database, built by the world’s top database engineers in the industry, allowing organisations large and small to know they have an always-on partner to not only roll out new applications, but ensure a consistent and enduring quality of service.”

SkySQL is currently available on Google Cloud Platform and pricing starts at $0.45 per hour. Developers can access example applications using SkySQL on GitHub.

Nutanix Hybrid Cloud Nanodegree

Nutanix has teamed up with Udacity, an online learning supplier, to deliver the Hybrid Cloud Nanodegree. Scholarships are available for qualifying IT professionals.

The Nanodegree covers private cloud infrastructure and the design of hybrid private and public application deployment. Target students are those managing traditional business applications, legacy infrastructure, or cloud-native applications on public cloud infrastructure.

In phase one, Udacity will select 5,000 applicants to participate in a Hybrid Cloud Fundamentals course. From those 5,000 students, 500 high-performing students will be awarded a full scholarship. The program is designed and created with Nutanix and you can read a FAQ about it.

Scale and Acronis renew OEM vows

Scale Computing is to resell Acronis Cloud Storage, building on an existing OEM partnership with Acronis. The company already resells Acronis Backup software.

Jeff Ready, Scale Computing CEO, said in an announcement quote: “Scale Computing customers can now easily and securely store their Acronis backups locally or in the cloud, with a scalable solution that is designed for agile businesses.”

Scale is to resell Acronis Cloud Storage in one- or three-year terms and in increments from 250GB to 5TB.

StorONE gets objects, snaps, replication and more

StorONE has upgraded TRU S1, the storage software stack that runs its S1 Enterprise Storage Platform. New capabilities include:

  • S3 support, enabling object volumes to be created alongside the existing FC/iSCSI block and NFS/SMB file volumes.
  • Multiple tiering support, spanning Optane SSDs, NVMe flash SSDs, SAS SSDs (including QLC), disk drives and the public cloud (for archiving).
  • NVMe SSD pools separate from SAS SSD pools, with S1 metadata automatically placed on NVMe SSDs if they are present.
  • Support for unlimited snapshots, with older snapshots able to be tiered to disk drives or the public cloud.
  • Replication support for cross-cluster deployments, with asynchronous, semi-synchronous and synchronous replication of data from one StorONE system to another. Source and target storage clusters can have different drive redundancy settings, snapshot data retention policies and drive pool types.

Asynchronous replication acknowledges a write once the data is written locally and reaches the local TCP buffer. Semi-synchronous replication acknowledges once the data is written locally and reaches the remote TCP buffer. Synchronous replication acknowledges once the data is written locally and to the remote storage system.
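The three acknowledgement points can be sketched as a toy model (the names here are ours, for illustration, not StorONE’s API):

```python
# Illustrative model of where a replicated write is acknowledged
# under each replication mode.
from enum import Enum

class Mode(Enum):
    ASYNC = "async"            # ack after local write + local TCP buffer
    SEMI_SYNC = "semi-sync"    # ack after local write + remote TCP buffer
    SYNC = "sync"              # ack after local write + remote storage

def ack_point(mode: Mode) -> str:
    """Where in the replication pipeline the write is acknowledged."""
    return {
        Mode.ASYNC: "local storage + local TCP buffer",
        Mode.SEMI_SYNC: "local storage + remote TCP buffer",
        Mode.SYNC: "local storage + remote storage system",
    }[mode]
```

The further down the pipeline the acknowledgement, the smaller the potential data loss on failure but the higher the write latency, which is the trade-off the three modes let customers tune.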

Zerto lays off staff to survive ‘storm’

Zerto, the disaster recovery startup, laid off a “ton of employees” today, according to a source. We asked Zerto about this, using this exact phrase.

In response Zerto sent us the following statement: “In this new economy, financial viability is key. The winning companies will be the ones who successfully transition and put themselves on a path to profitability. 

“Zerto has today committed to streamline its core business and reduce operating expenses. We made this move to ensure that Zerto will weather this storm and will continue to be successful. It was an extremely difficult decision, but we are taking these steps to ensure that Zerto remains financially strong now and in the future.”

Zerto was founded in 2009 and has taken in $129m in funding, with the last round, of $20m, in 2016. Since then it has been growing while burning cash on the one hand and receiving revenue from sales on the other.

Last week the company announced the release of Zerto 8, which among other things adds support for backup and disaster recovery of VMware workloads on Google Cloud Platform.

Hyperconverged storage system vendor Pivot3 last week also laid off staff.

Starboard Value sinks claws into Commvault

Ambush of Polish partisans during the January Uprising

Starboard Value, the prominent activist investor, has accumulated a 9.3 per cent stake in Commvault, the veteran data management vendor.

Starboard has not yet revealed what it wants Commvault to do but it will want Commvault to do something. On its website, the firm proclaims: “Starboard invests in deeply undervalued companies and actively engages with management teams and boards of directors to identify and execute on opportunities to unlock value for the benefit of all shareholders.”

Commvault gave us a noncommittal response to Starboard’s stake building: “Commvault’s top priorities, at this time, are the health and safety of our employees, taking care of customers, and operating our business. As always, Commvault embraces open dialogue with our shareholder community and will continue to act in their best interest.”

However, it is unlikely to welcome the attention of Starboard, which is currently agitating for changes at eBay and Box Inc. The company is already reorganising sales, marketing and product strategies under the leadership of Sanjay Mirchandani. He was installed as CEO in February last year with the approval of Elliott Management, another activist investor, which ripped into Commvault in March 2018.

The outcomes of Elliott’s intervention included the CEO resigning, board-level changes, a 20-into-four product set simplification, and a stronger focus on partners and the public cloud.

Commvault progress

Under Mirchandani, Commvault has acquired the software-defined storage startup Hedvig and the Metallic SaaS backup company and changed its sales leadership.

But a revenue upturn has eluded the company so far, with Q3 FY20, the quarter ended December 31, 2019, representing the fourth successive quarter of year-on-year revenue decline.

Discussing those earnings, Mirchandani said: “Our ability to achieve these results is a direct reflection of the progress we are making on the simplification, innovation and execution priorities we established at the start of the fiscal year. These priorities will be the foundation for our return to growth.”

Three-bit memristor device can store more data

Scientists have discovered a ternary (three-state) memristor which they say is close to mimicking how the human brain works and which could help solve the world’s data storage problems.

Professor Thirumalai Venky Venkatesan of the National University of Singapore, who led the international team of researchers, said in a statement: “We hear a lot of predictions about AI ushering in the fourth industrial revolution. It is important for us to understand that the computing platforms of today will not be able to sustain at-scale implementations of AI algorithms on massive datasets. It is clear that we will have to rethink our approaches to computation on all levels: materials, devices and architecture. We are proud to present an update on two fronts in this work: materials and devices. Fundamentally, the devices we are demonstrating are a million times more power efficient than what exists today.”

Materials and devices

The researchers have invented a metal-organic complex molecule whose electrons can be in one of three states, in a phenomenon called charge disproportionation or electronic symmetry breaking. One of the states has electrons distributed unequally between different sides of the molecule. Such symmetry breakage normally requires extreme pressure or high or low temperature.

The states are stable over time, and can be manipulated at room temperatures using electric fields (voltage), and the molecules can form a device. This could be a 3-bit memristor or a 2-bit memcapacitor, or both. The device scales down to 60 nm². Its “discrete states are optimal for high-density, ultra-low-energy digital computing”, the researchers write in their paper.

Computer simulation

The concept was proved with computer simulation conducted by Damien Thompson of the University of Limerick, using the Irish Centre for High-End Computing supercomputer.

Damien Thompson, University of Limerick

He said in a press release: “We managed to push way beyond industry roadmaps by finding a ternary resistive memory device with three states that are well-separated from each other in terms of conductance and, just as importantly, stay working away perfectly for weeks on end.”

We can envisage the three states as having two symmetrical and one asymmetrical distribution of electrons. Thompson said: “The third asymmetric state is created simply by allowing current to flow through the device and it persists over a broad temperature range (-100 to +100 °C) so it is suitable for most conventional computing as well as future applications emerging from the symbiosis between physics, computing and biology.

Diagram from ternary memristor paper.

“In this new material, ions pulse back and forth between different binding sites on the molecules, which opens up the third state, making it energetically accessible and technologically exploitable.”

Ternary memristor and TLC NAND

We asked Thompson if having a three-state memristor is like having a 3-bit (TLC) flash cell, insofar as accessing stored data is concerned.

He in turn forwarded our question to Sreetosh Goswami of the National University of Singapore, whom he described as “the experimental mastermind behind the work”.

Goswami told us: “First of all, flash cells require very high voltage (~10V) and thus use a lot of energy, their storage lifetime is limited (don’t store your family photos ever on flash drives!), and they have a very limited endurance (less than 10,000 write/rewrite cycles). Memristors, on the other hand, consume ultra-low energy and last for 10^11(/12) cycles.”

“Secondly, three bits is actually eight levels; it’s different from a three-level device. A three-level system is something called ‘trit’ where you can define different states as 0,1,2 or -1,0,+1. There are unique advantages of a trit that enables over 19,000 different logic operations for two input variables. (We are currently pursuing circuit designs that will explore and push these ideas further.)”

This prompted further questions from me. “What this says to me is that the ternary memristor would need to be integrated in a non-binary computing system unless, I suppose, the 3 states were given binary values; 00, 01 and 10 for example. That seems odd to me and unlikely to happen. Would you have any views on the need for a non-binary computer system to use ternary memristors?”

Goswami replied: “There are computing architectures that can deal with binary and ternary systems in tandem though they are right now at a concept level. In principle, use of Sheffer ternary operations enhance the computing density by a significant margin. To test those out, you need a reliable 3 state device which has been missing. You will see few such ideas, hopefully soon!

“Additionally, due to the co-existence of memristance and memcapacitance, this device produces self-spiking and even chaos which we are currently using to perform several operations such as Max-cut computation. So, I think there are several things we can do with these devices.”

The Sheffer operation is an aspect of Boolean logic connected to the NAND operation. Max-cut computation refers to finding a part of a graph with the most edges.
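Goswami’s “over 19,000 different logic operations” figure can be checked with a quick count: a k-valued gate with n inputs has k^(k^n) possible truth tables, so two ternary inputs give 3^9 = 19,683 distinct gates, against just 16 for two binary inputs:

```python
# Count the distinct n-input, k-valued logic functions: each of the
# k**n input combinations can map to any of k outputs.
def num_gates(k: int, n: int) -> int:
    return k ** (k ** n)

print(num_gates(2, 2))  # 16 two-input binary gates (AND, OR, XOR, NAND, ...)
print(num_gates(3, 2))  # 19683 two-input ternary gates - "over 19,000"
```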

According to Professor Venkatesan, “Dr Sreetosh has discovered that he can drive these devices to self-oscillate or even exhibit purely unstable, chaotic regime. This is very close to replicating how our human brain functions.”

Electrical circuit background

Electrical circuits are built from three basic building blocks: the capacitor, the resistor, and the inductor. The memristor, a resistor that retains its resistance value when it loses power, is a fourth building block. It was devised and built by HP Labs Fellow Stan Williams in 2008, based on a prediction made by Leon Chua in 1971.
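Chua’s prediction rested on symmetry among the four basic circuit variables: charge q, current i, voltage v and magnetic flux φ. Resistance, capacitance and inductance each relate one pair of these variables, and the memristor supplies the missing pairing of flux and charge:

```latex
dv = R\,di \quad\text{(resistor)} \qquad
dq = C\,dv \quad\text{(capacitor)} \qquad
d\varphi = L\,di \quad\text{(inductor)}
```

```latex
d\varphi = M(q)\,dq
\;\;\Longrightarrow\;\;
M(q) = \frac{d\varphi}{dq} \quad\text{(memristor)}
```

Because flux divided by charge reduces to volts divided by amps, memristance M carries units of resistance, which is why a memristor behaves as a resistor with memory.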

HP Labs spent many years trying to productise the invention but failed in the endeavour. Williams left HPE in 2018 and is now a professor of nanotechnology at Texas A&M University and a co-author of the ternary memristor paper.

A memcapacitor is a nonlinear capacitor that responds immediately and, like the memristor, retains its state when no power is flowing to the device.

The molecule

The molecule is one of a family of redox-active ligands and has a pincer ligand. A ligand is an ion or molecule that binds to a metal atom, sometimes donating electron pairs, to form a so-called co-ordination complex; several ligands can attach to a central atom. Redox refers to a change in the oxidation state of atoms, which involves the movement of electrons between the atoms involved.

Pincer ligands bind to three adjacent co-planar sites with high thermal stability.

Researcher roll call

  • Professor Thirumalai Venky Venkatesan, National University of Singapore (NUS), was lead principal investigator of this project. 
  • Professor Sreebrata Goswami of the Indian Association for Cultivation of Science in Kolkata, India, invented the molecule. 
  • Dr Sreetosh Goswami, a research fellow at NUS Nanoscience and Nanotechnology Initiative, is the key architect of this paper and is a former graduate student of Professor Venkatesan.
  • Associate Professor Damien Thompson at the University of Limerick modelled the interactions between the molecules. 
  • Professor Stan Williams of Texas A&M University, the original HP Memristor builder, is a co-author of the paper.

Their paper is entitled “Charge disproportionate molecular redox for discrete memristive and memcapacitive switching,” and is published in Nature Nanotechnology (2020). DOI: 10.1038/s41565-020-0653-1

Komprise KEDM migrates file data ‘6 times faster than the status quo – and at half the cost’

Komprise today released a file migration utility incorporating parallelisation, which runs 27 times faster than Linux Rsync, according to its internal benchmarks.

Komprise Elastic Data Management (KEDM) takes file data from an NFS or SMB/CIFS source and transfers it across a LAN or WAN to a target NAS system or via S3 or NFS/SMB to object storage systems or the public cloud.

Kumar Goswami, Komprise CEO, provided this quote: “Komprise Elastic Data Migration minimises migration downtime by using analytics to identify the right files to move, maximising data migration efficiency.”

According to the file data management company, KEDM migrates data six times faster than the status quo and at half the cost. Komprise COO Krishna Subramanian explained: “Six times faster than status quo is an estimate only as we did not have access to commercial tools to test against them.

“On the cost, we are typically less than a half to a third of commercial solutions like DataDobi or DataDynamics StorageX for the standalone Komprise Elastic Data Migration solution.”

KEDM is an acceleration of Komprise’s transparent move technology. It executes in virtual machines running the Komprise Observer software portion of its Intelligent Data Management suite. Observers are data movers and can be scaled out in a grid to increase the migration resource.

KEDM scans the source file system, extracts the metadata, with its permissions and file attributes, and migrates the filesystem. The software parallelises operations at the share, volume, directory and file level. It uses a specially-written NFS client to minimise the number of NFS protocol interactions. This is helpful when migrating large numbers of small files as the protocol chatter takes up network bandwidth otherwise used for data payload transfer. Obviously this does not work for SMB files.

Komprise Elastic Data Migration diagram.

KEDM is multi-threaded and uses multi-processing in a multi-core host. Every transferred file or object has an MD5 checksum calculated before and after the transfer. A post-transfer comparison verifies migration accuracy and the software retries operations automatically when a fault is detected.

KEDM provides a dashboard and has API access so it can be programmatically controlled from existing data management software.

Benchmark

Komprise ran a benchmark test against the Rsync utility using a 74GB Android open source project data set with 990,000 files, some large (1.5GB to 10GB) and thousands of small (500B to 100KB). Source files from a disk system were transferred to an all-flash filer across a LAN, and also across a simulated WAN created by adding a 30ms delay to each transfer.

KEDM was 27 times faster than Rsync. Komprise didn’t provide raw numbers but noted KEDM completed the run across the simulated WAN in minutes, whereas Rsync did not complete in 48 hours.

Datadobi, a competitor, also uses parallelisation and checksumming to verify migration integrity. If we assume Datadobi is part of the migration status quo, Komprise is claiming it is six times faster than DobiMigrate.

KEDM can be included in Komprise’s Intelligent Data Management suite or purchased standalone. It is available through Komprise’s channel.

Here is a one page KEDM datasheet.

Your occasional enterprise storage roundup, featuring Backblaze, Quantum, Storj, the QUIC protocol and more

Data growth provides the fuel for our main items in this week’s data storage news digest. Let’s get cracking.

Backblaze cloud storage reaches exabyte milestone

Backblaze, a cloud storage provider, is now storing an exabyte of user data. Not bad for the five founders who started out building their own cloud in a Palo Alto apartment in 2007. Backblaze said its engineers are now looking at how to store one zettabyte.

The growth has looked like this:

  • 2008 – 10TB 
  • 2010 – 10PB
  • 2014 – 100PB
  • 2015 – 200PB
  • 2020 – 1000PB

It’s not possible to get a realistic figure of how much data is stored in AWS, Azure and Google. Those vendors don’t reveal that number.

Backblaze has gotten this far with just $3m in venture funding. All marketing to date has been via a blog and, until this year, word of mouth. There are now 145 employees and customers in more than 160 countries.

Quantum expands video surveillance line

Quantum has expanded its video surveillance line (now called the VS-HCI portfolio), which it introduced a year ago.

Jamie Lerner, Quantum CEO, said: “Video surveillance plays a vital role in securing infrastructure and critical assets, and in protecting citizens. We’ve applied our years of expertise to build a comprehensive portfolio for surveillance, and we are responding quickly to the needs of our customers.”

There are four new servers:

  • VST mini-tower and VS4160 NVR rack mount network video recording servers
  • VS2108 – a video analytics server with up to six GPUs inside its 2U enclosure
  • VS1110 – a highly-available building management server

Quantum has also added secure remote monitoring for VS-HCI, using a web portal.

Could QUIC replace TCP/IP?

Lars Eggert, chair of the QUIC working group within the IETF, is participating in an SNIA webcast about Google’s QUIC protocol. The webcast poses the question: will this new UDP-based transport protocol for the web replace TCP/IP? It broadcasts live on Thursday, April 2, 2020, at 10:00am PT / 1:00pm ET and you can register online.

Eggert will discuss the unique design aspects of QUIC, its differences from the conventional HTTP/TLS/TCP web stack, and early performance numbers. He will also look at potential side effects of a broader deployment of QUIC.

QUIC was designed and deployed by Google and accounts for 35 per cent of the company’s egress traffic. This in turn corresponds to about seven per cent of all internet traffic. 

There is strong interest in QUIC from other large internet players and the ongoing IETF standardisation of QUIC is likely to lead to an even greater deployment in the near future, according to the SNIA.

Tardigrade decentralised cloud storage service

Storj Labs has launched the S3-compatible decentralised Tardigrade cloud storage service, and is offering service-level agreements at 99.9999999 per cent durability. The company claims end-to-end prices are a fraction of the traditional cloud providers.

Customers will receive 5GB of storage and bandwidth free for the first 30 days. After that, they move to Tardigrade’s storage tariffs, which are about half of cloud storage providers’ list prices, according to Storj. They can also earn additional credits via Tardigrade’s referral program. The platform currently has more than 19PB of available capacity, which is hosted by individuals and partners.

With the Tardigrade Open Source Partner Program (OSPP), any open source project with a Tardigrade connector will receive a portion of the revenue generated by those users for their cloud storage bills. This helps address the challenges open source software companies have faced when trying to monetize workloads in the cloud, Storj said.

John Gleeson, Storj vice president of operations, provided a quote: “Over 80 per cent of cloud workloads run on top of open source software, however these companies only receive a small percentage of the revenue. We’re pleased to launch Tardigrade into production so our open source partner program members can start generating revenue when their users store data in the cloud.”

Amazon S3-compatible applications can use the service by changing a few simple parameters. There is also a library of bindings for some of the most popular coding languages, including Go, Android, C, .NET/Xamarin, Node.js, Swift, and Python.
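In practice the switch for an S3-compatible tool is typically a single endpoint override plus the provider’s credentials. A hedged illustration with the standard AWS CLI’s `--endpoint-url` option (the URL below is a placeholder, not Tardigrade’s actual gateway address):

```shell
# Point an existing S3 workflow at a different S3-compatible service by
# overriding the endpoint; credentials are configured as usual via
# `aws configure` with keys issued by the provider.
aws s3 cp backup.tar.gz s3://my-bucket/ --endpoint-url https://gateway.example.com
```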

Storj tried this with bitcoin payments back in 2017.

Pandemic-related data storage offers

Commvault is offering a customer care programme available at no charge through September 1st. The no-strings-attached offers include:

  • Metallic End Point Backup and Recovery SaaS-based software with Microsoft (in the US and Canada only) to help organisations with remote workforces.
  • For organisations worldwide, Commvault Complete Backup and Recovery software will enable protection and management of data in cloud environments or on-premises. Details are on their way.
  • Creation of a critical alert program to monitor customer systems and provide alerts on unusual changes in their data protection environment.
  • Training and e-learning videos, virtual and self-paced courses – to learn ways to optimise data management.
  • Online live, weekly sessions with Commvault data experts. 
  • Blog series addressing relevant topics.

Druva is offering a 6-month free trial of protection for Office 365 and endpoint protection for up to 300 seats. 

HYCU is offering unlimited backup to all new Italian customers, free of charge for three months, valid until June 30, 2020. Simon Taylor, HYCU’s CEO, issued a quote: “At this point in time, any company that is dealing with the unexpected rise of self-quarantining and moving critical workers to a remote or at home situation is disruption enough. The least we can do is to offer our support as businesses try to deal with unanticipated costs and significant transition in supporting their workers.”

LucidLink said it has been inundated with requests from companies that use a NAS and/or fileserver in their office and need to ensure their employees can effectively work at home. They all need to move their files and data to the cloud, where they can be universally accessible, with as little disruption to current workflows as possible and without sacrificing the user experience. The company has cut the capacity fee in half from $20/TiB to $10/TiB per month, and has eliminated the connected device / per user fee.

SpectraLogic will produce a virtual conference to enable interaction, education and engagement between customers, partners and Spectra executives. It will take place on Tuesday, May 12, 2020, and will enable attendees to ‘walk the floor’ by hearing about the latest market trends, learning about Spectra products, asking questions, sharing feedback, watching product demonstrations, and meeting with Spectra executives. 

Data protector Unitrends is offering:

  • Free backup hardware 
  • 50 per cent off direct to cloud backup to protect remote workers even via Wi-Fi
  • 50 per cent off O365 backup, along with its recently released built-in dark web monitoring
  • US-based support is here for customers 24x7x365

Shorts

Hitachi Vantara has completed the acquisition of the assets of Waterline Data, a developer of intelligent data cataloging products. HV has announced the Lumada Data Catalog which incorporates Waterline technology and expands DataOps solutions across edge-to-core-to-cloud environments.

Micron uMCP.

Micron has begun sampling a multichip package (uMCP) for 5G smartphones which combines 12GB of DRAM with 256GB of NAND. It uses 512Gbit 96-layer 3D NAND dies with two-channel LPDDR5 DRAM. The DRAM uses 10nm-class technology and reaches up to 6,400 Mbit/sec. The idea is that 5G phones will do more multi-tasking, and a multi-chip package uses up to 40 per cent less space than a two-chip combination.

NAKIVO Backup and Replication v9.3 beta testing continues. Its new functionality allows you to manage backup and recovery of Oracle databases from the NAKIVO Backup & Replication console.

XenData Multi-Site Sync

XenData has launched the Multi-Site Sync service for cloud object storage which creates a global file system accessible worldwide via XenData Cloud File Gateways. They work with Amazon Web Services S3, hot and cool tiers of Azure Blob Storage and Wasabi S3. XenData gateways are optimised for video files, supporting partial file restore and streaming. This makes the solution a good fit for media applications, according to the company.

NetApp tops customer survey for block and file storage

NetApp is ranked the top supplier for block and file storage by end users in a survey of European and US companies.

Coldago, a small data storage research firm, surveyed end users in 1,123 US companies and 560 European companies, in the UK, Germany and France, in January 2020. Half the users worked at enterprise-sized companies and half worked at SMEs.

The survey participants answered 20 questions about their preference for storage technologies such as data protection choices, software-defined storage adoption, virtual SAN adoption by supplier, file, block and object storage supplier preferences and others.

The answers are filtered by US or European respondents and charted in order of US respondent rankings.

We have picked out the file and block charts to provide a sample of what you can find in the report. Here is the block storage array supplier choices chart.

Dark grey bars – US. Light grey bars – Europe.

NetApp, Dell EMC and Hitachi Vantara are the highlighted top three. Pure ranks fourth and Infinidat fifth, ahead of IBM and HPE. US respondents rate Pavilion Data Systems higher than DDN, while European respondents flip the positions.

A file supplier chart is next:

NetApp is top, again, with Qumulo in second place, ahead of Dell EMC with its long-established Isilon product. Pure is fourth and DDN is fifth, and newcomer VAST Data is in sixth place, ahead of Hitachi Vantara. VAST would be in twelfth place based on European preferences alone. 

The final question asked users about their persistent memory plans.

Forty-five per cent of US respondents and 39 per cent of their European counterparts plan to adopt the technology, which is also known as storage-class memory. It is already used and deployed by nine per cent of US respondents and four per cent of European respondents.

Coldago’s report is free to access, requiring only the input of an email address to receive the 23-page document.

Hyperscalers keep server sales afloat. Enterprise storage vendors are less fortunate

Covid-19 has depressed demand for servers and data storage from transportation, hospitality and physical retail operations. At the same time, the pandemic has increased demand for video streaming, web conferencing and online retail, according to the tech analyst firm IDC.

Server and external storage sales will fall in the first half of the year, but shipments should grow again, fuelled by cloud provider demand, IDC said.

Kuba Stolarski, research director, IT infrastructure at IDC, said in a statement: “The impact of covid-19 will certainly dampen overall spending on IT infrastructure as companies temporarily shut down and employees are laid off or furloughed. 

“While IDC believes that the short-term impact will be significant, unless the crisis spirals further out of control, it is likely that this will not impact the markets past 2021, at which point we will see a robust recovery with cloud platforms very much leading the way.”

A tale of two sub-markets

IDC has modelled pessimistic, optimistic and middle ground scenarios for the server and storage markets.

The middle-ground, more likely option is based on a depressed Chinese market spreading to other regions before abating towards the end of 2020.

This scenario sees server sales dropping 11 per cent y-on-y this quarter and 8.9 per cent in Q2. But then sales will rise again as cloud service providers buy more servers to meet rising demand for their services. Overall, IDC’s analysts see server sales in 2020 declining 3.4 per cent from 2019 to $88.6bn.

External storage sales, such as SANs, filers and object stores, will slump 7.3 per cent this quarter and 12.4 per cent in Q2. There will be slight growth by the end of 2020, giving an overall decline of 5.5 per cent to $28.7bn compared to 2019.

According to IDC, the IT infrastructure market has two sub-markets moving in different directions: decreasing demand from enterprise buyers and increasing demand from cloud service providers. The external storage systems market, with a higher share of enterprise buyers, will experience a deeper decline than the server market in 2020.

Cloud service providers do not buy external storage systems, but they do buy SSDs and disk drives. IDC doesn’t predict how this might affect overall SSD and disk drive sales. It does predict that the storage market will return to growth in 2021.