
Hammerspace builds hybrid storage foundations

David Flynn, Hammerspace

Hammerspace came out of stealth in December 2018 with a software technology said to unify file silos into a single network-attached storage (NAS) resource that can provide access to unstructured data anywhere, in hybrid or public clouds. Since then it has extended its reach, for example, adding Kubernetes support and ransomware protection.

Blocks & Files had the opportunity to conduct an email interview with CEO and co-founder David Flynn. We asked him how the Covid-19 pandemic has affected the company and also how Hammerspace positions itself against competitors. His replies have been edited for brevity.

Flynn co-founded Fusion-io, a pioneering flash storage startup, but left the company about 18 months before its $1.1bn acquisition by SanDisk in 2014. He then co-founded Primary Data, a data management startup that bagged $100m funding in cash and debt over four years. However, the company shut in January 2018 without bringing product to market. Hammerspace emerged from the ashes in May 2018, in the same offices as Primary Data.

David Flynn.

Blocks & Files: How has the Covid-19 pandemic affected demand for the Hammerspace product?

David Flynn: The pandemic is forcing people to accelerate their cloud initiatives, deploying tools, and workloads at scale to support unexpected growth in their remote workforce. Hammerspace helps customers leverage the infrastructure that they already have to get their data to the cloud nearly instantly and without the need for a data migration. Hammerspace is experiencing a surge in demand, as customers see us as an easy-button for their woes.

Hammerspace saves time and cost for IT by greatly reducing intervention, eliminating tedious tasks like sync jobs between sites or remote backup. Customers tell us that this saves them millions of dollars.

Blocks & Files: How does Hammerspace tech help work-from-home employees?

David Flynn: We have been working with our channel partners and Citrix to improve productivity and the experience for Citrix Virtual Apps and Desktops (CVAD) users. What we bring to CVAD is that users and all their data no longer have to be ‘homed’ to a single location.

Hammerspace untethers ‘User Profile Data’ so that we can automatically and instantly bring it closer to the users, increasing productivity by saving time and improving performance.

Also, because our replication is multi-site active-active, user data won’t be lost in an outage. So, regardless of where your employees or consultants sit – we can get them to access their data using the cloud, co-lo facilities, or their own data centers without a hiccup.

Hammerspace says it supplies data as a service.

Blocks & Files: How has Hammerspace messaging changed concerning the relationship between data and the storage infrastructure?

David Flynn: Over the last year, our messaging has evolved; but the fundamentals that the company was founded on remain true.

Hybrid IT is made up of multiple data centers full of mixed infrastructure from various vendors pushing an array of cost-models. The only way to bring order to this chaos is to adopt a data-centric approach to how we store, access, manage, and protect data. 

Hammerspace helps customers by intelligently leveraging the underlying infrastructure without being subordinate to it. Our model turns that chaos into a strength, giving us the resources to adapt to ever-changing needs by continuously optimizing data across any-and-all hybrid infrastructure.

Blocks & Files: How do you market this?

David Flynn: How do we market a successful product around it? People have budgets for storage that does data management, but they don’t have budgets for data management. So, we have taken the gloves off and position ourselves as software-defined hybrid cloud storage.

Hybrid cloud storage from Hammerspace can store, serve, manage, and protect data anywhere – on white-box servers, on enterprise NAS, on any public or private cloud, and in Kubernetes. We emphasize our unique value including data-in-place assimilation, an active-active global file system, autonomic data mobility, Kubernetes integration, and enhanced metadata. We do this while significantly reducing the overall TCO of storage, improving performance, and getting complexity under control.

Blocks & Files: What sort of customers buy Hammerspace and why? What amount of data do they need to have stored for them to find Hammerspace technology useful?

David Flynn: Hammerspace customers are diverse, spread across telecoms, media & entertainment, financial services, federal, and pharmaceuticals – among others. While some of our customers have industry-specific workflows, the broad base suffers from many of the basic hybrid cloud use-cases such as data migrations, DR to cloud, archive, or adding persistent storage to Kubernetes.

Our customers have told us that they are looking for something easier to manage while reducing the overall cost of storage. Data growth is forcing them to search for a better way to get hundreds of TB to tens of PB of unstructured data to the cloud, and they are frustrated with the storage-centric approaches pushed by legacy NAS vendors and their derivatives.

Blocks & Files: Is Hammerspace a software-defined storage metadata management product that abstracts existing file and object storage in whatever on-premises or public cloud location it happens to be, and enables customers to better optimise data placement for performance, cost, and compliance?

David Flynn: This is a great way to articulate what we do, but I would make a few suggestions. Right now, Hammerspace is not targeting regulatory compliance use-cases. On the other hand, our product is fully featured for data protection and to serve data directly as storage.

One other essential note – we are also having success as a storage and data management solution for Kubernetes. We upend the misguided idea that you need to create yet another storage silo with a container-specific storage platform. With our CSI driver we support both block and file persistent volumes, and we provide feature-rich enterprise-class data management capabilities to Kubernetes for single-cluster, multi-cluster, and hybrid-cloud environments – using any storage.
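
For readers who want a concrete picture of what consuming such a CSI driver looks like, here is a minimal sketch using the official Kubernetes Python client to request a shared file volume. The storage class name is a placeholder of our own, not a documented Hammerspace value; any CSI-backed class would be consumed the same way.

```python
# Minimal sketch: requesting a persistent volume through a CSI driver
# with the official Kubernetes Python client. "hammerspace-file" is a
# hypothetical storage class name, not a vendor-documented value.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],          # shared file volume
        storage_class_name="hammerspace-file",   # hypothetical CSI storage class
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```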

Listen up, Fujifilm. PoINT Systemes already pumps objects to tape storage

Our recent article about Fujifilm’s Object Archive software, which will support object data on its magnetic tape, prompted PoINT Systemes to get in touch.

The German firm’s S3-enabled Archival Gateway software already pumps excess object data to tape, thus saving more expensive disk capacity.

Yesterday the company added support for IBM TS3500 and TS4500 libraries, “covering both LTO drives and IBM 3592 series models” via the release of Version 2.1 of the software.

Founded in 1985, PoINT originally developed archive storage on optical disks – and for a few years it was owned by Digital Equipment Corp. Today, independently owned PoINT develops storage management software and an archive gateway to move object storage data to tape using the S3 protocol. There are solution briefs for NetApp StorageGRID and Cloudian’s HyperStore and customers include Daimler, Bayern Invest, SiXT, ReiseBank and WAVE Computersysteme.

Archival Gateway

PoINT Archival Gateway features erasure coding, parallel access to tape drives for high throughput rates and multiple library support for high capacity scaling. It supports up to 256 tape drives (LTO or IBM 3592) and eight libraries and can pump data at 230GB/sec. That means 13.8TB/min and 828TB/hour, and realistically it can go past 1PB/day.
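
The per-minute and per-hour figures follow directly from the quoted 230GB/sec. A quick sanity check (decimal units assumed) also shows that the theoretical daily ceiling is far above 1PB; real-world rates depend on drive count, tape handling and workload, hence the more modest "past 1PB/day" claim:

```python
# Sanity-check of the quoted throughput figures (decimal units assumed).
gb_per_sec = 230
tb_per_min  = gb_per_sec * 60 / 1000        # 13.8 TB/min
tb_per_hour = gb_per_sec * 3600 / 1000      # 828 TB/hour
pb_per_day  = gb_per_sec * 86400 / 1e6      # ~19.9 PB/day theoretical ceiling
print(tb_per_min, tb_per_hour, pb_per_day)  # 13.8 828.0 19.872
```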

The software runs on redundant server nodes. Using its own format, it saves blocks of data redundantly on different tape cartridges so that data is not lost, should one cartridge fail. Object data is stored natively, with data and metadata preserved. This means objects are restored without conversion or rebuild processing.

The software supports multiple tape libraries.

PoINT Archival Gateway is described in a white paper. It has two instantiations: interface nodes which link to accessing client systems; and database nodes which look after target tape systems. These nodes run separately from each other and multiple copies can be executed, providing parallelism. Accessing clients link to the gateway across Ethernet. The gateway then links to the tape libraries using Fibre Channel or iSCSI.
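
Because the interface nodes present a standard S3 endpoint, an accessing client needs nothing more exotic than an ordinary S3 SDK. A hedged illustration with boto3 follows; the endpoint URL, bucket name and credentials are placeholders rather than values from PoINT documentation.

```python
# Illustrative only: writing an object to an S3-compatible archive gateway.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://archival-gateway.example.local",  # interface node address (hypothetical)
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

with open("scan-0001.tiff", "rb") as f:
    s3.put_object(Bucket="cold-archive", Key="project-x/scan-0001.tiff", Body=f)
```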

PoINT told us it will support the Fujifilm object initiative. Software engineer Manfred Rosendahl said: “Currently the PoINT Archival Gateway uses its own tape format to provide features like erasure coding, which is not possible with the previously available formats like LTFS. We’ve been talking to Fujifilm for some time now and we will also support the new OTFormat in future releases of PoINT Archival Gateway.”

DDN gives itself a new logo

DDN, the veteran high performance computing storage vendor that recently bought several enterprise storage businesses, has had a makeover.

Here is its new logo:

And this is what the company has to say about the new logo.

“DDN’s new circular segmented logo symbolizes the company’s new energy and reflects its path of continuous innovation and renewal. A bright and vibrant new color scheme pays tribute to its legacy and modernizes the brand to underscore the company’s dedication to its customers and the desire to combine the best technologies with dynamic expertise to deliver a streamlined experience with deeper more valuable insight into their data assets.”

So there you have it, but why did the company go to the bother of a rebrand? DDN has emerged from a flurry of acquisitions to become an enterprise and HPC vendor covering containers, virtual systems, files, blocks, objects, and software-defined storage.

The acquisitions are:

  • The Lustre organisation from Intel in June 2018
  • The Tintri array business for virtual servers in September 2018
  • Nexenta’s software-defined file and object storage in May 2019
  • The IntelliFlash organisation from Western Digital in September 2019

Bulked-up DDN says it has 10,000 customers and is the largest privately-held storage supplier, with 20 technology centres around the globe. The company has assimilated the acquisitions into DDN and Tintri business units and is now pitching itself as a provider of “Intelligent Infrastructure”. It can provide storage to enterprise and HPC customers and support on-premises, public cloud and hybrid cloud use.

DDN offers classic storage for HPC, which is basically parallelised data access to vast data file stores. HPC-style IT has been adopted by enterprises for big data analytics and the AI/machine learning world.

Tintri’s patch – legacy on-premises enterprise IT – is evolving too, with virtualised servers meeting a wave of containerisation and Kubernetes orchestration and a widening adoption of software-defined storage to avoid hardware supplier lock-in. That storage in general is unifying some or all of block, file and object access to simplify the storage infrastructure, and adopting a predictive analytics-style management facility delivered as a service.

The on-premises world is butting up against the public cloud and reacting with hybrid clouds that combine on-premises and public cloud IT facilities, consumed with public cloud-like on-demand scalability and subscription payment schemes.

Behold the new DDN, which says HPC-like workflows in the enterprise market, driven by AI and large-scale analytics, are moving IT organisations to embrace parallel file systems and scalable data management platforms.

Sure, storage demand is growing, but DDN is now competing with the mainstream storage array vendors (Dell, HPE, NetApp, Pure, etc.), filer and object storage suppliers (Isilon, Qumulo, Cloudian, Scality, etc.), and HPC businesses such as Panasas, as well as new technology startups (Infinidat, StorONE, VAST Data, Weka, etc.).

DDN’s HPC market strength gave it the financial firepower to make the four acquisitions. Can it parlay this into general enterprise IT market success against such a widespread array of deep-pocketed and new, VC-funded technology suppliers? Let’s hope it does because that will spur them to do better too.

MayaData launches Kubera, a Kubernetes management service

Open source developer MayaData has announced Kubera, a product for the operational management of Kubernetes.

Kubernetes came into being at Google because managing the development, deployment and decommissioning of containers was excessively complex for developers. Now MayaData has launched Kubera because managing Kubernetes has become too complex.

Murat Karslioglu, Head of Product at MayaData, issued a quote: “Kubera builds on our experience in supporting a community of thousands of OpenEBS users. Originally intended to be used only by our support – we quickly learned that our users could benefit from Kubera as well.”

He says individual users and enterprises of all sizes are finding that Kubera helps them to achieve cost savings and productivity gains when using Kubernetes.

Kubera is a service delivered from the cloud and its functionality includes:

  • Simplified configuration, management and monitoring of stateful workloads on Kubernetes, including Kafka, PostgreSQL, Cassandra, and other workloads
  • Simplified back-up of stateful workloads on Kubernetes, whether the workloads are stored in the CNCF open-source project OpenEBS or otherwise (a snapshot sketch follows this list)
  • Dynamic visualisations of an entire Kubernetes environment, with point and click controls for capabilities such as snapshotting and cloning of data and compliance checks and alerts
  • Automated lifecycle management of data layer components including a newly available Enterprise Edition of OpenEBS and underlying storage such as disks and cloud volumes.
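
As a rough illustration of the kind of primitive that sits underneath such back-up automation, the sketch below creates a CSI VolumeSnapshot of a persistent volume claim with the Kubernetes Python client. The PVC name and snapshot class are our own placeholders; this is not Kubera’s API, which is not shown in this article.

```python
# Hedged sketch: snapshotting a stateful workload's PVC via the CSI
# snapshot API. Names and the snapshot class are placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "pg-data-snap-1"},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",           # placeholder class
        "source": {"persistentVolumeClaimName": "pg-data"},   # PVC to protect
    },
}
api.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```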

MayaData CEO Evan Powell told us: “An analogy might be helpful. Whereas our OpenEBS is often compared to vSAN from VMware – Kubera provides the analytics, visualization, controls and automation for efficient operations of OpenEBS on Kubernetes much like VMware’s vRealize Operations software provides similar capabilities to users of vSAN.”

You can read a blog about Kubera by Powell. 

Free individual Kubera plans are available as well as Team and Enterprise plans including support services from MayaData for the entire environment including the OpenEBS Enterprise Edition and related components. Kubera subscriptions start at $49 per user per month.

Kioxia releases faster SAS SSD – the 24G PM6

Update, June 17: SAS roadmap and performance data added.

Although the NVMe interface is set to rule the fast SSD interface roost, Kioxia has released a PM6 gen 4 SAS interface drive running at 24Gbit/s.

This is double the bandwidth of current 12Gbit/s SAS links, which is good news for system builders wedded to SAS.

SAS technology roadmap to 48Gbit/s.

Otherwise the PM6 looks pretty much like its PM5 predecessor. That 12Gbit/s SAS drive was launched in August 2017 and used Kioxia’s (then Toshiba’s) 64-layer BiCS 3D NAND technology in TLC format. The PM6 gets the benefit of newer 96-layer BiCS tech in the same TLC format, meaning Kioxia can produce drives with the same capacity as before but using fewer dies, so lowering its costs.

Like the PM5, the PM6 comes in write-intensive, mixed-use, and read-intensive versions, with different endurance levels: 10, 3 and 1 drive writes per day respectively. Capacities vary in each case (a worked endurance calculation follows the list):

  • PM6 WI – 400GB, 800GB, 1.6TB, 3.2TB
  • PM6 MU – 800GB, 1.6TB, 3.2TB, 6.4TB, 12.8TB
  • PM6 RI – 960GB, 1.92TB, 3.84TB, 7.68TB, 15.36TB, 30.72TB.
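
To translate the drive-writes-per-day ratings into total bytes written, multiply the rating, the capacity and the warranty period. The calculation below assumes a five-year warranty, which is typical for enterprise SAS SSDs but is our assumption rather than a Kioxia statement:

```python
# Worked endurance figures from the quoted DWPD ratings.
# A five-year warranty period is assumed for illustration only.
WARRANTY_DAYS = 5 * 365

def petabytes_written(capacity_tb: float, dwpd: float, days: int = WARRANTY_DAYS) -> float:
    """Total data written over the period, in PB."""
    return capacity_tb * dwpd * days / 1000

print(petabytes_written(3.2, 10))    # PM6 WI 3.2TB:  ~58.4 PB
print(petabytes_written(3.2, 3))     # PM6 MU 3.2TB:  ~17.5 PB
print(petabytes_written(3.84, 1))    # PM6 RI 3.84TB:  ~7.0 PB
```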

Update: The initial performance data supplied by Kioxia is sequential read bandwidth of up to 4.3GB/second. The PM5 in read-intensive form delivered up to 2.1GB/sec so that improvement is welcome.

Other PM5 performance numbers are: up to 340,000 random read IOPS, 120,000 random write IOPS, and 2.72GB/sec sequential write throughput. Kioxia subsequently said the PM6 has, compared to the PM5 read-intensive model (checked in the arithmetic after the list):

  • Up to 54 per cent improved sequential write bandwidth – calculated to be 3.2GB/sec
  • Up to 144 per cent better random read performance – calculated to be 489,600 IOPS
  • Up to 185 per cent greater random write performance – calculated to be 222,000 IOPS
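
The ‘calculated to be’ figures check out against the PM5 numbers above, although the sequential write result implies a read-intensive baseline of about 2.1GB/sec rather than the 2.72GB/sec top figure:

```python
# Checking the PM6 'calculated to be' figures against PM5 baselines.
pm5_random_read_iops = 340_000
pm5_random_write_iops = 120_000
pm5_ri_seq_write_gbps = 2.1   # inferred baseline implied by the 3.2GB/sec figure

print(round(pm5_ri_seq_write_gbps * 1.54, 2))  # ~3.2 GB/sec
print(round(pm5_random_read_iops * 1.44))      # 489,600 IOPS
print(round(pm5_random_write_iops * 1.85))     # 222,000 IOPS
```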

Kioxia tells us the PM6 is a dual-port enterprise drive and is capable of recovering from the failure of two of its dies. The security features include sanitise instant erase (SIE), TCG Enterprise self-encrypting drive (SED) and FIPS 140-2 certification.

PM6 drives are now available for evaluation and qualification. Samples of 30.72TB products are scheduled to be available after August. SSDs based on 24G SAS will soon be available in servers from market leading OEMs. Market availability for the PM6 24G SAS SSD Series is expected in Q4 2020.

Dell updates Isilon with PowerScale label, fresh hardware and other tweaks

Dell has updated its Isilon scale-out filers with new PowerScale branding and products as well as S3 object access and a DataIQ data analytics feature.

PowerScale is the brand that Dell deputy chairman Jeff Clarke referred to in comments about the coming Unstructured.NEXT product in the Q1 earnings call in May, calling it “the last of the powering up of the portfolio.”

Dan Inbar, GM and president for Storage at Dell, issued a quote: “The amount of unstructured data enterprises store as file or object storage is expected to triple by 2024, and there are no signs of it slowing.”

Dell has now, following on from the PowerStore launch, completed its Power-branding of its storage products. It has two unstructured data storage products: Isilon for files and ECS for object storage.

By adding S3 object access to v9.0 of the PowerScale OneFS operating system, the way is paved for a unified Isilon/ECS product line. However, ECS and Isilon are not coming together yet, with ECS meant for purpose-built object stores.

Hardware

There are two PowerScale models, both all-flash and 1U in size, and each based on a Dell PowerEdge server – the F200 SAS drive system and the F600 all-NVMe drive system. The existing Isilon range has three product classes covering a high performance to high capacity spectrum:

  • All-flash nodes – F800 and F810 (basically F800 + deduplication)
  • Hybrid flash/disk nodes – H600, H500, H5600 and H400
  • Nearline and Archive filers – A200 and A2000 for cool and cold data.

The PowerScale F200 and F600 fit in the all-flash node category and slot in below the F800 in capacity terms. 

PowerScale F200 and F600 systems.

The F200, a single CPU socket system with just 4 SSDs (960GB, 1.92TB or 3.84TB) and 3.84TB to 15.36TB of capacity, is aimed at Internet edge, remote and branch office (ROBO) deployments.

The F600 is a higher performing system with two CPU sockets, 8 drives (1.92TB, 3.84TB, 7.68TB) and a 15.36TB to 61.44TB capacity range. Its processing power supports OneFS deduplication and compression and it could go in larger ROBO sites and in data centres for media and entertainment workloads.

There can be from 3 to 252 of these systems in a cluster and they can be mixed and matched with existing Isilon clusters.

In comparison the F800, with a Xeon E5-2697A v4 CPU, is much higher capacity, supporting 60 SAS SSDs (1.6TB, 3.2TB, 3.84TB, 7.68TB, 15.36TB) with a 96TB to 924TB range.

Performance

Dell has not released specific PowerScale product performance numbers or even released details of the actual CPUs used. However, it has said the v9 PowerScale OneFS can deliver up to 15.8 million IOPS and the new PowerScale nodes (F200 and F600) are up to 5x faster than “its predecessor,” without specifying the predecessor. We think it is the F800.

We know the F800 provides up to 250,000 IOPS and 15GB/sec throughput as a single node, and we can compare F200 and F600 attributes to that. 

The F200 with just one CPU and 4 SAS drives supports 2 x 10/25GbitE network links whereas the F800, with its 60 drives and single CPU, supports 2 x 10/25/40 GbitE. We suspect the F200 will be slower than the F800. 

The F800 can have up to 256GB of memory for its CPU to handle the 60 SAS SSDs, but the F600, with just 8 x NVMe SSDs (faster than SAS), has up to 384GB of memory; 50 per cent more DRAM for 86 per cent fewer drives. It also supports 2 x 10/25/100 GbitE networking, more than the F800’s 40GbitE top end. 

The F600’s superiority in CPU socket number, DRAM capacity, SSD speed, and IO port bandwidth suggests that its IOPS and throughput numbers will be significantly higher than those for the F800. In fact we think the F600 will deliver 5x more performance than the F800, meaning 1.25 million IOPS and more throughput as well. A literal 5x throughput improvement would mean 75GB/sec but we think this could be unrealistic.
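
For transparency, our estimate is simply the published F800 single-node figures multiplied by the claimed 5x factor:

```python
# The 5x claim applied to Dell's published F800 single-node numbers.
f800_iops = 250_000
f800_throughput_gbs = 15

print(f800_iops * 5)            # 1,250,000 IOPS - our F600 estimate
print(f800_throughput_gbs * 5)  # 75 GB/sec - probably optimistic, as noted above
```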

Operating System

The F200 and F600 are supported by v9 of the (PowerScale) OneFS operating system, codenamed Cascades. It runs on all existing Isilon nodes as well. This version adds S3 object access to existing support for NFS, SMB and HDFS.

All data on the system can be simultaneously read and written through any protocol.

Data reduction has been improved to make it up to 6x better than the previous OS version, delivering an effective increase in capacity for existing Isilon nodes supporting data reduction, such as the F810.

OneFS v9.0 supports clusters with up to 60PB of raw capacity – an immense amount. New nodes can be added to a cluster and brought online in a claimed 60 seconds. Then the data load across the cluster can be automatically rebalanced to relieve hot spots. Old nodes can be decommissioned with no downtime.

There are new Ansible and Kubernetes integrations to make PowerScale better suited for DevOps work.

PowerScale OneFS can also run in the AWS, Azure and Google clouds, enabling a single environment across Internet Edge, ROBO, data centre and public cloud locations.

DataIQ

DataIQ software provides a single view of file and object data across Dell EMC (including Unity), third-party and public cloud storage. This heterogeneous file and object environment can be scanned, searched and classified; tags can be added to items and data moved automatically according to policies.

DataIQ screen.

That means the data is known about and can be stored in the most cost-effective system for its access level. It also allows for self-service or automated movement of data between file and object stores, and between on-premises and public cloud locations. Data can also be moved between PowerScale and ECS systems. 

DataIQ provides reporting on data usage, storage costs, user access patterns and more. 

Data items of different types can be grouped into a project with tags and then such projects dealt with as single entities. We could envisage a film project with associated file components which can be managed as a single item and moved from one system to another.

DataIQ gives Dell a foothold in the heterogeneous data management space, enabling it to start competing with other file data managers such as InfiniteIO and Komprise. It is also included with PowerScale giving Dell an effectively instant customer base.

A duplicate finder plugin locates redundant data across volumes and folders, enabling users to delete duplicates, save costs and streamline their storage infrastructure. That brings Dell into the copy data management space as well.

PowerScale systems, like other Dell storage products, can be managed through the CloudIQ monitoring and predictive analytics service.

We understand Dell is considering OEMing OneFS. Having it available on PowerEdge servers would help enable this.

Dell PowerScale OneFS v9.0, PowerScale product nodes and DataIQ are now generally available globally. Existing Isilon and ECS products remain supported.

GigaSpaces brings Groupe PSA mainframe up to speed

New EU vehicle emission regulations have created an onerous workload for car manufacturers.

When prospective car buyers access a Peugeot Citroen (Groupe PSA) online site, they need to know if the vehicle configuration they select meets the new WLTP (worldwide harmonised light vehicles test procedure) standard. To deliver the expected user experience – meaning a snappy web site – the WLTP calculations should be completed in less than 100ms. The calculations also need to be accurate and reliable enough to avoid fines that could reach up to €100 million annually.

Peugeot Citroen’s mainframe system does not have the power to meet the demand for checking WLTP compliance in real-time.

InsightEdge diagram.

If the WLTP software layer is executed entirely on the mainframe, it is limited to 200 requests/sec. But the Peugeot Citroen requirement is to support 3000 requests/sec.

Rather than update the mainframe, which would be expensive, Groupe PSA decided to offload the WLTP processing to a networked three-server x86 cluster. It used an in-memory, real-time analytics software product called InsightEdge from GigaSpaces, implemented through Capgemini. The difficulty of computing WLTP compliance at individual car configuration level is indicated by Capgemini deciding in-memory software was needed.

A trio of X86 servers running InsightEdge in-memory software to work out vehicle emissions delivers 15 times more requests per second than the mainframe can do alone.

A caching alternative was rejected because it meant accessing the mainframe to execute complex queries and analytics, which slowed things down.

The caching alternative would not support added intelligence for processing and simplifying the requests on the fly, with aggregation and masking. InsightEdge co-locates data and business logic in the same memory space to lighten storage IO and context switch needs and speed request processing.

Groupe PSA’s deployed WLTP software has an adapter layer on the mainframe, which connects via an orchestration layer to InsightEdge. That SW is deployed on a cluster of three HPE ProLiant DL380-G9 servers running in full high-availability mode, and with 16 partitions.

This InsightEdge system delivers a 15-19ms query and analytics response time, and handles up to 95.2 per cent of calculation requests without accessing the mainframe. So the mainframe is not entirely off-loaded.
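
A quick calculation shows what that offload percentage means at the target request rate:

```python
# Residual mainframe load at the 3,000 requests/sec target with 95.2% offload.
target_rps = 3000
offload = 0.952

mainframe_rps = target_rps * (1 - offload)
print(round(mainframe_rps))   # ~144 requests/sec, within the mainframe's 200/sec limit
```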

Pavilion Data: Our storage array is considerably better than yours

The legacy design of dual-controller storage arrays is unsuited for today’s performance and capacity demands. So claims Pavilion Data, the NVMe-over-Fabrics array startup.

All suppliers will need to junk dual-controller systems if they are to cope with growing data volumes, CEO Gurpreet Singh told a press briefing last week.

Unfurling the company’s near-term roadmap, Singh said the company’s Hyperparallel Flash Array (HFA) delivers unmatched performance and capacity and represents a “third wave of computing”.

Pavilion Data says its HFA beats file and block access competitors and provides better throughput per rack unit than object storage competitors.

HFA has up to 20 controllers with shared memory for metadata, and parallel, fast access to NVMe SSDs. The system supports block, object and NFS file access. The company said SMB support is contingent on customer demand. Each controller is dynamically defined as a block, file or object access controller, and a second controller acts as a standby backup.

Only U.2 format SSDs are supported. However, the SSDs are mounted on carriers, so fresh formats could be supported via redesigned carrier cards.

Pavilion’s roadmap includes 30TB SSD support in the second half of this year along with an IO card and Optane SSD support.

V2.4 of the Pavilion OS will add a fast object store, Nagios system monitoring software integration, data compression and Windows drivers for NVMe-oF TCP and RoCE.

Competition

Pavilion likes to compare HFA performance using raw IOPS and GB/sec numbers with latencies, and to add cabinet rack unit take-up and capacity to emphasise performance density per rack unit.

In the briefing, it contrasted a 4U, 1PB usable capacity HFA system for block access with publicly available numbers from a Dell EMC PowerMax (80U and 3PB), NetApp’s AF800 (48U and 4.4PB) and a Pure Storage FlashArray (6 to 9U – it was uncertain which – and 896TB).
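
Normalising those quoted capacities by rack units makes the density argument explicit:

```python
# Usable capacity per rack unit, from the figures quoted in the briefing.
systems = {
    "Pavilion HFA": (1000, 4),        # 1PB usable in 4U
    "Dell EMC PowerMax": (3000, 80),  # 3PB in 80U
    "NetApp AF800": (4400, 48),       # 4.4PB in 48U
    "Pure FlashArray": (896, 9),      # 896TB in 6-9U; 9U worst case used here
}
for name, (tb, ru) in systems.items():
    print(f"{name}: {tb / ru:.0f} TB per rack unit")
# Pavilion ~250, PowerMax ~38, AF800 ~92, Pure ~100 (or ~149 if it is 6U)
```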

The diagram shows Pavilion’s IOPS and GB/sec superiority:

This is a grab from a Pavilion slide with supplier labels added

This Pavilion slide compares file access with Pure Storage’s FlashBlade and Isilon and VAST Data systems.

This is a grab from a Pavilion slide, with supplier labels added

And it has repeated the exercise with Pure Storage, MinIO and OpenIO for object storage.

This is a grab from a Pavilion slide with supplier labels added.

Pavilion is slower in absolute throughput terms than MinIO and OpenIO but uses much less rack space. The company said: “Pavilion outperformed all competitors in the same rack space. When normalized to a per RU basis, we actually outperformed MinIO and OpenIO as well.”

 It outperformed Pure in the same rack space. The roadmap includes a fast object store later this year and this may swing the object performance meter in Pavilion’s favour.

Background

Gurpreet Singh said Pavilion Data has broken free from the other NVMe-oF array startups (we think he is referring to Excelero), citing last year’s $25m capital raise which takes total funding to $58m. The company has more than 85 employees and “many” customers, though it won’t say how many. It claims one US Federal customer runs the world’s largest NVMe-oF system but it can’t reveal the name, nor the size of the system.

Public customers include the Texas Advanced Computing Center (TACC), where three Pavilion HFAs replaced five EMC DSSDs, and Statistics Netherlands.

Your occasional storage digest with Samsung, Dell EMC, Nutanix and more

Tom’s Hardware has published a scoop-ette on an 8TB Samsung 870 QVO SSD using QLC (4 bits/cell) NAND and produced in the 2.5-inch format. The entry level starts at 1TB.

News of the 870 leaked via an Amazon listing, which was pulled shortly after the Tom’s Hardware story. Samsung’s current 860 QVO uses the company’s 64-layer 3D V-NAND in QLC format and we understand the 870 QVO uses the later and denser 100+ layer 3D V-NAND generation.

The drive should have a SATA III interface like the 860, with these drives built for affordable capacity rather than high performance.

Rakers round-up

Wells Fargo senior analyst Aaron Rakers has given his subscribers a slew of company updates, following a series of virtual meetings with suppliers. Here are some of the insights he has gleaned.

Intel’s 144-layer 3D NAND is built with floating gate technology and a stack of three 48-layer components. This will be more expensive than single stack or dual-stack (2 x 72-layer) alternatives.

Optane Traction: “Over a year post launch (April 2019), Optane has now been deployed at 200 of the Fortune 500 companies, and has had 270 production deal wins, and an 85 per cent proof of concept to volume deployment conversion rate. In addition to optimization work with SAP Hana and VMware, [Intel] noted that the partner ecosystem / community has discovered new use cases for Optane, such as in AI, HPC, and open source database workloads.”

FPGA maker Xilinx: “Computational Storage. Xilinx continues to highlight computational storage as an area of FPGA opportunity in data centre. This includes the leverage of FPGAs for programmable Smart SSD functionality – Samsung representing the most visible partner; [Xilinx] noting that the company has active product engagements with several others.”

Dell EMC PowerStore: “The new Dell EMC PowerStore system also looks to compete more effectively against Pure Storage with Anytime Upgrades and Pay-per-Use consumption. The PowerStore systems can be deployed in two flexible pay-per-use consumption models with short and long-term commitments (including new one-year flexible model). This will likely be positioned against Pure Storage’s Pure-as-a-Service (PaaS) model. Anytime Upgrades allow customers to implement a data-in-place controller upgrade that will most likely be viewed as a competitive offering against Pure’s Evergreen Storage subscription (enabling a free non-disruptive controller upgrade after paying 3-years of maintenance).“

Nutanix and HPE: “Nutanix’s relationship with HPE continues to positively unfold. [Nutanix CFO Dustin] Williams noted a strong quarter for the partnership in terms of new customers. He said that Nutanix has been integrated into Greenlake but it is still in its infant stages.” 

Seagate: “Seagate will be launching a proprietary 1TB flash expansion card for the upcoming (holiday season) Xbox Series X, and the company will be the exclusive manufacturer of the product. While not a high margin product, this alignment provides brand recognition as well as potential to drive higher-capacity HDD attach for additional (cheaper) storage. Regarding the move to SSDs within the next-gen game consoles, we would note that Seagate has shipped 1EB of HDD capacity to this segment over the past two quarters, and thus we would consider the move as having an immaterial impact to the company.”

SVP Business & Marketing Jeff Fochtman characterised Seagate’s SSD “strategy as being complementary to its core HDD business. He noted that Seagate has been on a strong profitable growth path with SSDs, which he credits to the company’s supply chain efforts and strategic partnerships. The company currently has >10 per cent global share in consumer portable SSDs, after rounding out the portfolio ~1 year ago.”

Nutanix Xi Clusters

Nutanix has been busy briefing tech analysts about Xi Clusters.

The company told Rakers: “Nutanix Xi Clusters (Hybrid Multi-Cloud Support): clusters give [customers] the ability to run software either in the datacenter or in the public cloud. This makes the decision non-binary. Clusters are in large scale early availability today, GA in a handful of weeks first with AWS and then with Azure. It will be cloud agnostic. The ability to run the whole software stack in the public cloud strengthens the company’s position in the core business by giving the customer the optionality to run Nutanix licenses in the public cloud at the time of their choosing.” 

And we learn from Nutanix’s briefing with William Blair analyst Jason Ader: “Through its Xi Clusters product, Nutanix enables customers to run Nutanix-based workloads on bare metal instances in AWS’s cloud (soon to be extended to Azure), leveraging open APIs and AWS’s native networking constructs. This means that customers can use their existing AWS accounts (can even use AWS credits) and VPCs and seamlessly tap into the range of native AWS services. From a licensing perspective, Nutanix makes it simple to run Nutanix’s software either on-premises or in the cloud, allowing customers to move their licenses as they so choose.”

Shorts

Backup biz Acronis has signed a sponsorship deal with Atlético de Madrid, and is now the football club’s official cyber-protection partner.

Accenture has used copy data manager Actifio to automate SQL database backup for Navitaire, a travel and tourism company owned by Amadeus, the airline reservation systems company.

SSD supplier Silicon Power has launched a US70 PCIe Gen 4 SSD in M.2 format and 1TB and 2TB capacities. It uses 3D TLC NAND, has an SLC cache, and delivers read and write speeds up to 5,000MB/s and 4,400MB/s, respectively.

Silicon Power US70 M.2 SSD.

Some more details of the SSD in Sony’s forthcoming PlayStation 5 games console have emerged. It has 825GB capacity and is a 12-channel NVMe SSD with PCIe 4.0 interface and M.2 format. The drive has a 5GB/sec read bandwidth for raw data and up to 9GB/sec for compressed data. In comparison, Seagate’s FireCuda 520 M.2 PCIe gen 4 SSD also delivers 5GB/sec read bandwidth. It has 500GB, 1TB and 2TB capacity levels. An SK Hynix PE8010 PCIe 4.0 SSD delivers 6.5GB/sec read bandwidth. Check out this Unreal Engine video for a look at what the PS5 can do.

Unreal Engine PlayStation 5 video.

Stellar Data Recovery has launched v9.0 of its RAID data recovery software. The new version adds recovery of data from completely crashed and un-bootable systems, and a Drive Monitor to check hard drive health.

The Storage Performance Council has updated the SPC-1 OLTP benchmark with five extensions that cover data reduction, snapshot management, data replication, seamless encryption and non-disruptive software upgrade. They provide a realistic assessment of a storage system’s ability to support key facets of data manageability in the modern enterprise.

Automotive AI company Cerence is building AI models using WekaIO’s filesystem, which won the gig following a benchmark shoot-out with Spectrum Scale and BeeGFS.

Striim, a supplier of software to build continuous, streaming data pipelines from a range of data sources, has joined Yellowbrick Data’s partner program.

WANdisco, which supplies live data replication software, has raised $25m in a share placement. The proceeds will strengthen the balance sheet, increase working capital and fund near-term opportunities with channel partners. The UK company said it continues to work towards run-rate breakeven by capitalising on the Microsoft Azure LiveData Platform.

GigaOm: Cohesity, Komprise and Commvault lead unstructured data management pack

Blocks & Files has seen an extract of a soon-to-be-published GigaOm report that assesses unstructured data management suppliers.

Sixteen vendors are covered by analyst Enrico Signoretti in the GigaOm Radar for Unstructured Data Management. They are Aparavi, AWS Macie, Cohesity, Commvault, Druva, Google Cloud DLP, Hitachi Vantara, Igneous, Komprise, NetApp, Panzura, Rubrik, Quantum, Scality, SpectraLogic and Veeam.

Enrico Signoretti

Signoretti confirmed the imminent publication of the report. He told us: “As you know, interest in Unstructured Data Management is skyrocketing. Users want to know what they have in their storage infrastructures and need tools to decide what to do with it. We have identified two categories (infrastructure- and business-focused).

“The first category provides immediate ROI and improves overall infrastructure TCO, while the latter addresses more sophisticated needs, including compliance for example. Vendors are all very active and the roadmaps are exciting.” 

In the report, Signoretti writes: “Leading the pack we find Cohesity, Komprise and Commvault.” Hitachi Vantara and faster-moving Druva are moving deeper into the leaders ring. Challengers Igneous and Rubrik are entering the leaders ring. Dell EMC, HPE and IBM are not present in this overall group of suppliers.

Draft GigaOm Unstructured Data Management radar screen diagram

GigaOm has already published Key Criteria for Evaluating Unstructured Data Management, which is available to GigaOm subscribers and provides context for the Radar report.

GigaOm Radar details

GigaOm’s Radar Screen is a four-circle, four-axis, four-quadrant diagram. The circles form concentric rings and a supplier’s status – new entrant, challenger, or leader – is indicated by placement in a ring.

The four axes are maturity, horizontal platform play, innovation and feature play.

There is a depiction of supplier progression, with new entrants growing to become challengers and then, if all goes well, leaders. The speed and direction of progression is shown by a shorter or longer arrow, indicating slow, fast and out-performing vendors.

The inner white area is for mature and consolidated markets, with very few vendors remaining and offerings that are mature, comparable, and without much space for further innovation. 

The radar screen diagram does not take into account the market share of each vendor. 

Scality’s Zenko cloud data controller gains data moving feature

Scality, the object storage vendor, has added data-moving to its Zenko data management software. The upgrade turns Zenko into a data orchestrator and controller that works across multiple public clouds and on-premises file and object stores.

This has echoes of the file lifecycle management capabilities of Komprise and InfiniteIO, and the global metadata-based activities of Hammerspace.

Zenko sprang out from Scality’s engineering team in September 2018. It is positioned as an object and file location engine, a single namespace interface through which data can be stored, retrieved, managed and searched across multiple disparate private and public clouds, enabled by metadata search and an extensible data workflow engine. 

Zenko overview video.

But at launch, it could not move data. Now it has “an open source, vendor neutral data mover across all clouds, whether private or public like AWS,” Scality CEO Jerome Lecat said.

Giorgio Regni, Scality CTO, added: “This release provides deeper integration with AWS. Customers can now write directly to AWS S3 buckets and Zenko will see this data and manage it. Prior to this release, customers had to write the data into Zenko to apply workflow policies like tiering and lifecycle. Now any existing AWS S3 bucket and on-premises NFS (i.e. Isilon and NetApp) volume can be discovered by Zenko and form part of Zenko’s global namespace.”

The Zenko software supports the AWS S3, Azure Blob, Google Cloud and Wasabi public clouds. On-premises systems supported include Scality’s RING and other S3 object storage, Ceph and NAS (NFS only). Zenko inspects these sources and imports object and file metadata into its own store. Applications interface to Zenko with a single API and can search for and access objects in this store, with Zenko effectively acting as an access endpoint for the various source object and file storage repositories.

The data moving capability means Zenko can move objects and files between the source locations, as workload needs dictate.

The Zenko store is kept up to date over time rather than in real time, using asynchronous updates. These are triggered by mechanisms such as AWS S3 bucket notifications, Lambda functions and AWS Identity and Access Management (IAM) policies for cross-site access control.

These updates can trigger Zenko actions. For example, objects might have a specific metadata tag attached to them, such as “archive”. This could initiate a Zenko data-moving workflow action to archive the object into a public cloud cold store, or a Fujifilm Object Archive tape library. Other tags could initiate a replication exercise or cause data to be moved to specific target sites and applications.
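
As an illustration of how such a tag-driven workflow could be fed, the snippet below attaches an “archive” tag to an existing object over the S3 API with boto3. The endpoint, bucket, key and credentials are placeholders rather than Zenko-documented values.

```python
# Illustrative only: tagging an existing object so a policy engine can act on it.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://zenko.example.local",   # hypothetical Zenko/S3 endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object_tagging(
    Bucket="media-assets",
    Key="renders/final-cut.mov",
    Tagging={"TagSet": [{"Key": "status", "Value": "archive"}]},  # tag a workflow can match
)
```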

Zenko Orbit screen

Scality could develop Zenko by enhancing its Zenko Orbit storage monitoring and analytics component so that it moves data in response to policies that set, for example, cost and capacity limits.

Regni said “We are working with several ISV and system partners in Europe, the US and Japan to help them accelerate new solutions in cloud and object data management based on Zenko.”

There is a free version of open source Zenko and a licensed enterprise version. You can check out a Zenko White Paper for more information.

Infinidat closes mystery funding round

Infinidat, the high-end storage array maker, has completed a D-round of funding with existing investors, but it is not saying how much it raised. Prior to this round, the company had raised $325m in three slugs since 2010.

Infinidat said the cash “will be used to build on new initiatives, such as the increasing demand for flexible consumption models in the market, strengthening the company’s growth plans and enabling it to build further on its industry leadership position. It will also be used for technical research and product development.”

The news accompanies a management reshuffle, with Moshe Yanai relinquishing the chair. His replacement, executive chairman Boaz Chalamish, will oversee the two newly appointed co-CEOs.

Chalamish was most recently chairman and CEO at Clarizen and his background includes jobs at VMware, HP and Mercury. His appointment comes in the wake of Yanai resigning his CEO role last month, stepping aside to become the Chief Technology Evangelist. Two co-CEOs were appointed in his place – COO Kariel Sandler and CFO Nir Simon.

At the time Yanai said he was closely collaborating with Infinidat investors TPG and Goldman Sachs “to drive towards our next phases of growth”.

Boaz Chalamish

Infinidat has also promoted three execs: Catherine Vlaeminck to VP worldwide marketing, Dan Shprung becomes EVP, EMEA and APJ, and Steve Sullivan is now EVP, Americas.