
StorFaster? Quantum converges StorNext metadata controllers onto faster storage server

Quantum has introduced StorNext H4000, a more powerful hybrid flash/disk appliance that does away with the need for separate metadata controllers and client access nodes by using faster controllers.

StorNext is Quantum’s scalable file system software and hardware range for entertainment and media content production workflows. Quantum added the H2000 hybrid flash/disk product line, positioned above its existing QXS series, in January this year. Now it’s capped that with an even more powerful H4000 product using exactly the same form factor chassis, network ports, SSD and disk drive options as the H2000.

Ed Fiore, Quantum’s GM for Primary Storage, said: “With this new architecture, we’ve made it easy for any organisation to leverage StorNext to improve collaboration, accelerate their workflows or production pipelines, and manage their unstructured data from ingest through archive – without the need to deploy a storage network.”

Like the H2000, the H4000 comes in either a 2U x 12 3.5-inch slot chassis – the H4012 – supporting nearline SAS disk drives, or a 2U x 24 2.5-inch bay box – the H4024 – which can be filled with SSDs or 10K rpm disk drives. Both support the PCIe Gen 4 bus, 32Gbit/s Fibre Channel, 12G SAS, 10/25/40/100Gbit Ethernet, and controllers with AMD CPUs.

H4012 and H4024 front and back pictures.

The H2000 has 16 AMD cores per node, 32GB RAM and 256GB of mirrored flash per controller. All these elements are more abundant in the H4000, which has 48 cores per node, 128GB of system memory and 512GB of mirrored flash per controller. It’s a much more powerful system.

The H2000 is positioned as a StorNext cluster capacity enhancement node, while the H4000 can do more. Its datasheet states: “The StorNext 7 metadata controller, client access nodes, block services, and hypervisor services can all run on a single H4000 appliance. By converging all of these components into a compact 2U server, the H4000 appliances eliminate networking complexity and administration, delivering a dramatically simplified StorNext user experience.”

Quantum continued: “This efficiency reduces needed rack space by 50 per cent, reducing power and cooling costs, and conserving physical space.”

Quantum table.

StorNext file and block services, along with other data services, run as virtual machines on the H4000. Quantum says: “By virtualising and containerising the core components of the StorNext 7 software, multiple virtual machines and data services can run on a single appliance, providing more options for using StorNext 7 software in more places than ever before.”

It will make it possible to use StorNext on additional cloud and hardware platforms in the future.

The StorNext 7 software has also been updated to add:

  • File system cluster monitoring and management using web services
  • System performance charting and graphing
  • Enhanced alerting, log management and health monitoring capabilities for hardware components, such as the server enclosures, and drive health

We expect even more powerful H Series systems in the near term, as Fiore also said: “This new architecture unlocks the future potential to run certified applications like the Quantum CatDV media asset management software on the same server, which will further simplify media workflows and other use cases.”

The H4000 Series is available to purchase today, with first customer shipments planned for the end of May. No pricing details were revealed. If you want to pore over the finer details of the H4000, here’s a blog by Quantum marketing director Skip Levens.

Samsung now shipping samples of speedy SAS SSD

Samsung has re-implemented its 2016-era PM1643 SAS SSD as the PM1653, doubling its speed and more than doubling its layer count. It’s still not as fast as a PCIe gen 4 interface drive but is ready for enterprises and OEMs that are updating their SAS-based SSD infrastructure to 24Gbit/s (24G).

The PM1653 is a 2.5-inch format drive with capacities ranging from 800GB to 30.72TB, the same range as the earlier PM1643. This prior drive was built with 64-layer V-NAND, Samsung’s 4th 3D NAND generation, organised with 3 bits/cell (TLC). The PM1653 ups the V-NAND generation to 6 with 128 layers, still using TLC cells. 

It has two SAS ports and works with Broadcom’s 24G SAS adapter and, according to a statement from Jas Tremblay, VP and GM of Broadcom’s Data Center Solutions Group: “The combination of the PM1653 SSD and Broadcom’s next-generation SAS RAID products delivers up to 5X RAID 5 performance.”

Samsung says the PM1653 offers up to 800,000/270,000 random read/write IOPS and moves sequential data at up to 4.3GB/sec writing and 3.8GB/sec reading. The earlier PM1643 topped out at 400,000/90,000 random read/write IOPS. Its throughput was slower as well – 2.1GB/sec sequential writes and 2GB/sec sequential reads.
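A quick sanity check on the “doubling” claim, using only the figures quoted above (a rough Python sketch, not vendor benchmark data):

```python
# Back-of-envelope comparison of the PM1653 against the older PM1643,
# using the IOPS and GB/sec figures quoted in this article.
pm1643 = {"rand_read_iops": 400_000, "rand_write_iops": 90_000,
          "seq_write_gbps": 2.1, "seq_read_gbps": 2.0}
pm1653 = {"rand_read_iops": 800_000, "rand_write_iops": 270_000,
          "seq_write_gbps": 4.3, "seq_read_gbps": 3.8}

for metric in pm1643:
    ratio = pm1653[metric] / pm1643[metric]
    print(f"{metric}: {pm1643[metric]} -> {pm1653[metric]} ({ratio:.1f}x)")
```

Random reads double, random writes triple, and sequential throughput roughly doubles, consistent with Samsung’s positioning.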

However, PCIe Gen 4 interfaces are faster than 24G SAS. Samsung’s 980 PRO uses that PCIe interface and the same 128-layer TLC NAND as the PM1653. It comes in the M.2 format and pumps out 800,000 random read IOPS but up to 1 million random write IOPS. Its maximum bandwidth is superior to the PM1653 as well: 6.9GB/sec sequential writes and 5GB/sec sequential reads.

A fifth generation SAS 45G standard is being developed by the SCSI Trade Association. The 24G SAS data rate is actually 22.5Gbit/sec and the 45G specification will double that to 45Gbit/sec. The specification may be agreed in the 2022/2023 period with products appearing 12 to 18 months later. By then, PCIe gen 5 interfaces, twice as fast as PCIe 4.0, may well be in general use and could threaten the future of the SAS interface.

On the competitive front, Kioxia’s 24G SAS PM6 drive is generally slower than Samsung’s PM1653, delivering up to 489,600 random read and 222,000 random write IOPS. Its highest sequential bandwidth is 4.3GB/sec writing and 3.2GB/sec reading.

Samsung has not released endurance or latency data for the PM1653. It is sample-shipping now, with mass production starting some time in the second half of the year.

IBM builds containerised version of Spectrum Scale

IBM is launching a containerised derivative of its Spectrum Scale parallel file system called Spectrum Fusion, as well as delivering the new ESS 3200 Elastic Storage System array and a capacity enhancement for the ESS 5000.

The rationale is that customers need to store and analyse more data at edge sites, while operating in a hybrid and multi-cloud world that requires data availability across all these locations. The ESS arrays provide edge storage capacity, and the containerised Spectrum Fusion can run in any of the locations mentioned.

Denis Kennelly, IBM Storage Systems’ general manager, said in a statement: “It’s clear that to build, deploy and manage applications requires advanced capabilities that help provide rapid availability to data across the entire enterprise – from the edge to the data centre to the cloud. It’s not as easy as it sounds, but it starts with building a foundational data layer, a containerised information architecture and the right storage infrastructure.”

Spectrum Fusion

Spectrum Fusion combines Spectrum Scale functionality with unspecified IBM data protection software. It will appear first in a hyperconverged infrastructure (HCI) system that integrates compute, storage and networking. This will be equipped with Red Hat OpenShift to support virtual machine and containerised workloads for cloud, edge and containerised data centres.

Spectrum Fusion will integrate with Red Hat Advanced Cluster Manager (ACM) for managing multiple Red Hat OpenShift clusters, and it will support tiering. We don’t yet know how many tiers and what types of tiers will be supported.

Spectrum Fusion provides customers with a streamlined way to discover data from across the enterprise, IBM said. This may mean it has a global index of the data it stores.

IBM also said organisations will manage a single copy of data only – i.e. there is no need to create duplicate data when moving application workloads across the enterprise. The company does not mention data movement in its launch press release.

Spectrum Fusion will integrate with IBM’s Cloud Satellite, a managed distribution cloud that deploys and runs apps across the on-premises, edge and cloud environments. 

Q and A

We asked IBM some questions about Spectrum Fusion:

Blocks & Files: What is the data protection component in Spectrum Fusion?

IBM: For data protection, Spectrum Fusion primarily will leverage a combination of the technology within Spectrum Protect Plus and the storage platform layer based on Spectrum Scale.

Blocks & Files: How many storage tiers are supported?

IBM: Spectrum Fusion will support 1,000 tiers that can span across an enterprise and cloud including Flash, HDDs, Cloud(S3) and tape.

Blocks & Files: Spectrum Fusion is being designed to provide customers with a streamlined way to discover data from across the enterprise. Does that mean it has some kind of global data index?

IBM: Spectrum Fusion implements a global file system with a single namespace, so it does have global awareness of file names and locations. We will support 8YB (yottabytes) of global data access and namespace that can span across the enterprise and cloud. The technology is based on existing IBM advanced file management (AFM) technology currently available in Spectrum Scale. Existing NFS or S3 data from other vendors can be integrated into this global data access, allowing existing data sources to integrate into Spectrum Fusion environments.

Blocks & Files: Organisations will be able to manage only a single copy of data and no longer be required to create duplicate data when moving application workloads across the enterprise. How will they access the data from a remote site? Will the data be moved to their site? 

IBM: Yes when accessed for optimal performance. For remote access, Spectrum Fusion will automatically move/cache only the data needed to a remote site. With local caching in the remote site, the system can deliver high performance but without the expense and security concern of duplicating large volumes of data. The applications will see the data as a “local” file but the data is physically located on a remote system (Spectrum Scale, remote NFS FS, or an S3 data bucket).
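IBM hasn’t published an API for this behaviour, but the read-through caching pattern it describes can be sketched simply; the function and cache names below are hypothetical stand-ins, not IBM code:

```python
# Minimal sketch of the read-through caching idea IBM describes above:
# the application asks for a "local" file; if the data isn't cached at
# the remote site yet, only what's needed is pulled from the remote
# source (Spectrum Scale, NFS or an S3 bucket) and kept for later reads.
local_cache = {}  # path -> bytes already cached at the remote site

def fetch_from_remote(path: str) -> bytes:
    # Placeholder for a pull from the real remote data source.
    raise NotImplementedError("wire up to Spectrum Scale / NFS / S3 here")

def read_file(path: str) -> bytes:
    """Return file contents, caching remote data on first access."""
    if path not in local_cache:
        local_cache[path] = fetch_from_remote(path)  # move/cache only what's needed
    return local_cache[path]
```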

ESS news

IBM’s ESS systems are clustered storage servers/arrays with Spectrum Scale pre-installed. The ESS 3000 is a 2U-24-slot box fitted with NVMe flash drives and up to 260TB usable capacity. It is a low-latency analysis node.

The high-end ESS 5000 capacity node has two POWER9 servers, each 2U high and running Spectrum Scale, and uses 10TB, 14TB or 16TB disk drives in either 5U92 standard-depth storage enclosures or 4U106 deep-depth enclosures. It scales up to 13.5PB with eight of the 4U106 enclosures.

ESS 3200.

The new ESS 3200 comes in a 2U box filled with NVMe drives and outputs 80GB/sec – a 100 per cent read performance boost over the ESS 3000. It supports up to eight InfiniBand HDR-200 or 100Gbit Ethernet ports and can provide up to 367TB of storage capacity per node.

The ESS 5000 has been updated with a capacity increase and now scales up to 15.2PB.

All ESS systems are now equipped with streamlined containerised deployment capabilities, automated with the latest version of Red Hat Ansible. Both the ESS 3200 and ESS 5000 feature containerised system software and support for Red Hat OpenShift and Kubernetes Container Storage Interface (CSI), CSI snapshots and clones, Windows, Linux and bare metal environments.
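For illustration, CSI snapshot support means the standard Kubernetes VolumeSnapshot API can be used against volumes provisioned by the Spectrum Scale CSI driver. Here’s a hedged Python sketch using the kubernetes client library; the snapshot class and PVC names are assumptions for illustration, not IBM-documented values:

```python
# Sketch: request a CSI snapshot of a PVC via the standard Kubernetes
# VolumeSnapshot API. The snapshot class and PVC names are illustrative;
# substitute whatever the Spectrum Scale CSI driver installs in your cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "ess-data-snap-1"},
    "spec": {
        "volumeSnapshotClassName": "spectrum-scale-snapclass",  # assumed name
        "source": {"persistentVolumeClaimName": "ess-data-pvc"},  # assumed PVC
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```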

The 3200 and 5000 work with IBM Cloud Pak for Data, a containerised platform of integrated data and AI services, for integration with IBM Watson Knowledge Catalog (WKC) and Db2. They are also integrated with IBM Cloud Satellite.

Spectrum Fusion in HCI form will become available in the second half of the year and in software-only form in early 2022.

Contain yourselves: Scality object storage gets cloud-native Artesca cousin

Scality has popped the lid on ARTESCA, its new cloud-native object storage, co-designed with HPE, that is available alongside its existing RING object storage product.

ARTESCA configurations start with a single Linux server and then scale out, whereas the RING product requires a minimum of three servers. The Kubernetes-orchestrated ARTESCA container software runs on x86 on-premises servers, with HPE having an exclusive licence to sell it for six months.

A statement from Randy Kerns, senior strategist and analyst at the Evaluator Group, said: “Scality has figured out a way to include all the right attributes for cloud-native applications in ARTESCA: lightweight and fast object storage with enterprise-grade capabilities.”

Scality chief product officer Paul Speciale told us: “We believe object storage is emerging as primary storage for Kubernetes workloads, with no need for file and block access.”

ARTESCA uses the S3 interface, and storage provisioning for stateful containers is done through its API; there is no POSIX access. ARTESCA has a global namespace that spans multiple clouds and can replicate its object data to S3-supporting targets and to Scality’s RING storage. Speciale said Scality is “working on an S3-to-tape interface”, with tape-held data included in the ARTESCA namespace.
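Because access is plain S3, any standard S3 SDK should work against it. Here’s a minimal, hedged boto3 sketch; the endpoint URL, credentials and bucket name are placeholders rather than real ARTESCA values:

```python
# Sketch: writing and reading an object through ARTESCA's S3 interface.
# The endpoint URL, credentials and bucket name are placeholders; point
# them at a real ARTESCA instance (or any S3-compatible target).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://artesca.example.internal",  # assumed endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(Bucket="cloud-native-app", Key="frames/0001.bin", Body=b"payload")
obj = s3.get_object(Bucket="cloud-native-app", Key="frames/0001.bin")
print(obj["Body"].read())
```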

The software can integrate with Veeam, Splunk, Vertica and WekaIO via S3, provisioning data services to them. Existing RING or cloud data can be imported into ARTESCA’s namespace.

The software features multi-tenancy and its management GUI supports multiple ARTESCA instances, both on-premises and in multiple public clouds – AWS, Azure, GCP:

ARTESCA GUI showing multiple instances.

ARTESCA has built-in metadata search and workflows across private and public clouds.

Scality says it has high performance with ultra-low latency and tens of GB/s of throughput per server, although actual performance numbers are still being generated in the HPE lab and in actual deployments. We can expect them to be available in a couple of months.

The product has dual-layer erasure coding, local and distributed, to protect against drive and server failure. If a disk fails, the server has enough information to self-heal the data locally, with no time-sapping network IO needed. If a full server fails, the distributed codes can self-heal the data onto the remaining servers in the cluster, which work in parallel to accelerate the recovery process. Scality CEO Jérôme Lecat said this scheme makes high-capacity disk drive object storage reliable.
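To illustrate why the local layer matters, here’s a toy Python sketch using simple XOR parity; Scality’s actual erasure codes are more sophisticated, but the principle of rebuilding a failed drive from data held inside the same server, without network traffic, is the same:

```python
# Toy illustration of the two protection layers described above, using
# plain XOR parity. It only shows why a local code lets a server repair
# a failed drive by itself, with no network IO.
from functools import reduce

def xor_parity(chunks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

# Local layer: data chunks plus a parity chunk held on drives in ONE server.
drives = [b"\x01\x02", b"\x10\x20", b"\x04\x08"]
local_parity = xor_parity(drives)

# A single drive fails: the server rebuilds it from its own surviving
# drives and the local parity - no traffic to other servers.
rebuilt = xor_parity([drives[0], drives[2], local_parity])
assert rebuilt == drives[1]

# Distributed layer: each server's data also contributes to parity held
# on OTHER servers, so a whole-server loss is repaired by the rest of
# the cluster working in parallel.
```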

ARTESCA has been developed to support many Kubernetes distributions. It should run with VMware’s Tanzu system and with HPE Ezmeral, although Lecat adds that both need to be validated. 

Target application areas include cloud-native IoT edge deployments, AI and machine learning and big data analytics. There is an initial supportive ecosystem including CTERA, Splunk, Veeam and Veeam’s Kasten business, Vertica and WekaIO.

HPE configs

There are six ARTESCA configurations available from HPE, suitable for core and edge data centre locations and including Apollo and ProLiant servers in all-flash and hybrid flash/disk versions:

Six HPE Configurations for ARTESCA.

Chris Powers, HPE VP and GM for collaborative platforms and big data, said in a statement: “Combined with a new portfolio of six purposefully configured HPE systems, ARTESCA software empowers customers with an application-centric, developer-friendly, and cloud-first platform with which to store, access, and manage data for their cloud-native apps no matter where it lives — in the data centre, at the edge, and in public cloud.”

ARTESCA is available through HPE only for six months, with one, three and five-year subscriptions starting at $3,800 per year, which includes 24×7 enterprise support. HPE is also making ARTESCA available as a GreenLake service.

Comment

Scality is following MinIO in producing cloud-native object storage. Speciale said: “MinIO is very popular but doesn’t have all the enterprise features needed.” Being lightweight, ARTESCA fits in with Edge deployment needs and Speciale hopes that this will help propel it to enterprise popularity.

Speciale said that Scality’s “RING software has a 10-year roadmap” and is not going away. He also said ARTESCA will support the coming COSI (Container Object Storage Interface). CSI is focused on file and block storage.

We can envisage all object storage providers converting their code to become cloud-native at some point. ARTESCA and MinIO will surely face a heck of a lot more competition in the future.

OK, SANshine, how do you position Nebulon against external arrays, HCI, SmartNIC and DPU storage?

Analysis: Would-be SAN array and HCI rival Nebulon’s SPU (Storage Processing Unit) is a fresh way of delivering shared storage and other services by using what looks like hyper-converged infrastructure (HCI) hardware.

It uses a card plugged into a server’s PCIe bus, but its overall architecture, features and benefits are different from everything else in the storage market, sometimes in subtle ways. 

We’ve positioned Nebulon’s technology against external arrays, actual hyperconverged infrastructure, HCI with a hardware accelerator, storage accessed over SmartNICs and storage accessed via DPUs (Data Processing Units), with the aim of providing a way to better understand and differentiate Nebulon’s tech.

This article was based on briefings with suppliers such as Fungible, Nebulon, Nvidia and Pensando, and represents our understanding of the evolving shared storage and server infrastructure market. Let’s start with the external block array or filer.

External arrays

An external shared array typically links to accessing servers via Fibre Channel HBAs (block access) or Ethernet NICs (iSCSI block and NAS file access). The array generally has two x86 controllers, an operating system providing services and a bunch of drives.

Hyper-Converged Infrastructure

Hyper-converged infrastructure (HCI) emerged as an alternative to external shared arrays, providing shared capacity by aggregating locally attached storage in a group of linked servers and presenting out to applications as a virtual SAN.

The storage operating system and services functions ran on the processors in the servers and so took some of their CPU capacity away from running applications. The servers were also constrained to run under the control of a hypervisor.

SDS = Software-Defined Storage. V = virtual machine.

These servers also managed the overall infrastructure and, over time, system monitoring and predictive analytics services were added, delivered via the supplier’s cloud. They are now evolving towards AIOps.

HPE’s SimpliVity varies the classic HCI concept by adding an inline hardware accelerator to provide compression and deduplication.

SmartNIC infrastructure

A SmartNIC is a network interface card with an added processor and other accelerator hardware plus software/firmware to provide network, security and storage services. The card is plugged into a host server’s PCIe bus and also into an external storage system, with an example being Nvidia’s BlueField-2 SmartNIC and DDN’s ExaScaler array. The ExaScaler array software runs in the BlueField card and so that card is now the storage array controller.

This means the host server no longer carries out storage processing work and can run more virtual machines.

We note that in this case the dual-controller array becomes a single SmartNIC controller, which has implications for availability: the SmartNIC becomes a single point of failure.

Another BlueField example is VMware’s Project Monterey with the vSphere hypervisor running in the BlueField card.

These SmartNIC systems are not hyperconverged systems and the host servers are not constrained to run a hypervisor. They could be bare metal servers or containerised ones. Shared storage for the accessing servers comes from an external array accessed across a fabric and not from a virtual SAN.

Could a set of servers with SmartNICs provide a virtual SAN? Theoretically, yes, so long as a supplier provided the software needed, such as vSAN code running in the SmartNIC.

Infrastructure management is performed by the server-SmartNIC system, and monitoring and predictive analytics come from a supplier’s cloud service.

Nebulon’s SPU infrastructure

Nebulon conceptually replaces the SmartNIC with its own Storage Processing Unit (SPU) hardware and software. Unlike in a typical SmartNIC setup, the shared storage is provided by internal, locally attached drives virtualised into a SAN, not by an external array.

The SPU provides infrastructure management for the host server and is itself controlled and managed from Nebulon’s cloud which also provides the analytics services.

As with a SmartNIC, the host server can run whatever applications are needed – bare metal, virtual machines or containers – and no host cycles are consumed running shared storage. The SPU does not run a hypervisor, though.

In B&F’s view the Nebulon SPU is a SmartNIC variant; its functionality partially overlaps with SmartNIC functionality but its system architecture is different as we have described.

DPU infrastructure

A specific hardware Data Processing Unit (DPU) is conceived of as being necessary to run data centre infrastructure tasks such as networking, storage and security because they are too onerous and take server cycles away from running application code.

The view is that there are now so many servers, storage arrays, network and security boxes in a data centre that they need their own offloaded management, control and network fabric to relieve over-burdened application servers.

This is an emerging technology with two main startup suppliers – Fungible and Pensando – both building specialised chip hardware. Fungible has launched a shared, external FS1600 storage appliance front-ended by its own DPU card (SmartNIC in our view). This links to its central DPU across a dedicated high-speed TrueFabric that can scale to 8,000 nodes.

The FS1600 is conceptually similar to a DDN ExaScaler-BlueField array, with the BlueField SmartNIC running the ExaScaler controller software and interfacing the array to accessing servers, like the FS1600’s Fungible chip.

Neither the FS1600 nor the ExaScaler-BlueField array is like Nebulon’s SPU system, as that provides a virtual SAN rather than an external array.

Here’s a diagram and a table positioning the various kinds of systems.

Y = yes, N = No, P = Possible, SDS = Software-Defined storage, FC HBA = Fibre Channel Host Bus Adapter, JBOF = Just a Bunch of Flash drives.

Composability

SmartNICs and DPUs are involved with composability, the concept of dynamically creating a server instance from component processor, accelerator (FPGA, GPU, ASICs), DRAM, storage-class memory, storage and network elements selected from resource pools. This virtual server runs an application workload and is then de-instantiated with its resources being returned to their parent pools for later re-use.

The benefit is said to be better component resource utilisation than having them in fixed server configurations and stranded when they are not needed.
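Conceptually, composition software maintains pools of components and hands them out and reclaims them per workload. Here is a minimal, purely illustrative Python sketch of that allocate/run/release cycle; the pool contents and names are made up:

```python
# Conceptual sketch of composability as described above: a server instance
# is assembled from shared resource pools, runs a workload, then returns
# its parts to the pools for re-use.
pools = {"gpu": ["gpu0", "gpu1", "gpu2", "gpu3"],
         "nvme": ["ssd0", "ssd1"],
         "fpga": ["fpga0"]}

def compose(request):
    """Grab the requested number of components from each pool."""
    return {kind: [pools[kind].pop() for _ in range(count)]
            for kind, count in request.items()}

def decompose(grant):
    """Return components to their parent pools for later re-use."""
    for kind, items in grant.items():
        pools[kind].extend(items)

server = compose({"gpu": 2, "nvme": 1})   # instantiate a virtual server
# ... run the application workload ...
decompose(server)                         # de-instantiate; resources go back
```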

Suppliers like Pensando and Fungible are creating DPUs to run the infrastructure code and instructions that compose the servers and relieve them of the infrastructure workload tasks, such as east-west networking inside a data centre. Nvidia sees BlueField enabling data centre composability.

Nebulon is not involved in data centre composability. It could be but its view at present is that composability is a concern for hyperscale-class cloud and application providers, not for mainstream enterprises. Nebulon’s SPU is not, or not yet, a type of DPU in the composable sense.

Resistance is not futile, according to ReRAM startup Weebit Nano

Resistive RAM startup Weebit Nano has lifted the lid on its strategy to bring its ReRAM storage class memory to a fairly crowded market. B&F took a look.

Resistive RAM refers to random-access memory with two resistance states signalling a binary 1 or zero. ReRAM developers include Crossbar, SK hynix, and, back in 2016 at least, Western Digital. 4DS Memory and Dialog’s Adesto are also rivals in this space.

The actual technologies used to manipulate resistance vary, with SK hynix using phase-change memory, Weebit Nano using oxygen filament formation in a nano-porous SiOx (silicon oxide) material, and Crossbar also using filament formation in a silicon oxide material. All three suppliers say their technology is non-volatile and are using a crosspoint-style architecture in their chips, similar to Optane’s 3D XPoint design.

Weebit Nano was started up in Israel in 2014 by then-CEO Yossi Keret and is attempting to commercialise patented technology developed by Professor James Tour at Rice University (and licensed from there). Tour discovered that sending a current through silicon oxide could counteract its inherent insulator property and create a silicon crystal (oxygen vacancy) pathway or filament. Electrical pulses could then break and re-connect this filament, changing the resistance levels from low to high.
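In behavioural terms, a cell is just a programmable resistor read back as a bit. A toy Python model of the set/reset cycle described above (the resistance values are arbitrary and purely illustrative):

```python
# Toy model of the ReRAM cell behaviour described above: a SET pulse forms
# (or reconnects) the filament, giving low resistance read as 1; a RESET
# pulse breaks it, giving high resistance read as 0.
LOW_OHMS, HIGH_OHMS = 1_000, 1_000_000  # illustrative resistance levels

class ReRamCell:
    def __init__(self):
        self.resistance = HIGH_OHMS       # starts with no filament formed

    def set_pulse(self):                  # form / reconnect the filament
        self.resistance = LOW_OHMS

    def reset_pulse(self):                # break the filament
        self.resistance = HIGH_OHMS

    def read(self) -> int:                # low resistance -> 1, high -> 0
        return 1 if self.resistance < (LOW_OHMS + HIGH_OHMS) / 2 else 0

cell = ReRamCell()
cell.set_pulse();   assert cell.read() == 1
cell.reset_pulse(); assert cell.read() == 0
```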

Weebit Nano ReRAM diagram: Ti – titanium, TiN – titanium nitride, SiN – silicon nitride, SiOx – silicon oxide

Financial engineering

Unusually, Weebit IPO’d early – in 2016, on the Australian stock exchange – using a piece of financial engineering: it performed a reverse merger with Radar Iron, an Australian mining company. Since then it has raised a total of $21.3m in four equity rounds. Keret set up a collaboration with French technology research organisation CEA/Leti in 2016 to develop the technology, as Weebit had no in-house capability.

Keret resigned as CEO in December 2017 but remained on the board. He’s now the CEO of Israeli startup Nanorobotics. Coby Hanoch took over the Weebit CEO role, and has a sales background in the ASIC and Electronic Design Automation space.

The picture we’re drawing here is that this is not a typical storage technology startup based on its founders’ tech and going the VC funding route. Keret and Hanoch claim that Rice University’s ReRAM technology, apart from having DRAM-class speed (100ns writes) and endurance (around 1 million write cycles and 10-year retention), is relatively straightforward for semiconductor fabs to adopt. The idea seems to be that others – fabs – would build the products and license Weebit’s tech.

Embedded and discrete memory

Weebit says the ReRAM materials involved are fab-friendly and can be easily and cost-effectively added to the final stages of a semiconductor process – BEOL, or back end of line – with a two-mask adder and 5-8 per cent extra wafer cost. It is developing a product for the embedded memory market. Weebit needs a fab to build and sell chips using it, and said it expects to sign its first commercial agreement in the middle of this year.

More interestingly from the enterprise storage point of view, it’s developing a discrete memory version of its technology. This requires the addition of a selector (transistor) that isolates specific memory cells for a rewrite operation without altering the other cells. A relatively large selector can be used in embedded memory, but a smaller, lower-power one is needed for discrete memory; this has been developed by Leti.

Weebit and partner Leti have filed a patent for this and another for a technology to enable multi-level cell operation – more than 1 bit/cell. 

Weebit Nano discrete memory Risc-V demo module.

It’s developing a demonstration module incorporating its ReRAM (OxRAM) with a RISC-V microprocessor and peripherals, with silicon planned for the end of the year. Potential customers could then use the module as a platform on which to develop applications such as low-energy IoT devices, security products and sensors.

Weebit Nano discrete memory chip roadmap.

The first module silicon is planned for the end of the year. A demonstrable discrete ReRAM array could be delivered in 2023. 

This is still early days in Weebit’s ReRAM technology development, and there is nothing concrete in the product sense yet. Park this storage-class memory candidate in the It-Might-Happen part of your mental landscape and don’t view it as any threat to Optane just yet.

Your occasional storage digest with Veritas, Seagate, and Tintri

There’s quite a bit to chew on in this week’s storage digest, from Seagate’s gaming battlestation light shows and migration from one Enterprise Vault target to another, to some capacity analysis news from Tintri.

Veritas Enterprise Vault migration

Data migration supplier Interlock Technology has said its VaultAnywhere product can now migrate data from Veritas’ Enterprise Vault v14.0 backup product to public clouds and on-premises destinations.

VaultAnywhere is certified by Veritas and migrates data at Enterprise Vault’s storage layer instead of using the Enterprise Vault API. Interlock claims this is up to 30 times faster than running the migration using the API.

The migration can be done using Centera Content Addressable Storage, NAS or S3 protocols. VaultAnywhere supports migration to Enterprise Vault-supported object storage targets such as AWS S3, on-premises S3-compatible object storage and Azure Blob repositories.

Interlock supports the Enterprise Vault Streamer API, required for S3 storage to be used with versions of Enterprise Vault prior to v14, and can migrate data from any source Enterprise Vault storage to any target Enterprise Vault storage.

Enterprise Vault metadata, such as indices, is preserved during the data movement and an audit trail is recorded.

Seagate’s FireCUDA battlestation lightshow

Seagate has rolled out a FireCuda Gaming Hard Drive and a FireCuda Gaming Hub. The Hard Drive has RGB LED lighting, customisable through the Seagate Toolkit software to change as a game progresses. Razer Chroma RGB compatibility is available to synchronise users’ Chroma-enabled gaming peripherals with the disk drive.

Seagate FireCuda Hard Drive (left) and FireCuda Hub (right)

The Hard Drive is not that capacious, with 1TB ($79.99), 2TB ($109.99) and 5TB ($179.99) capacity and price points. The Hub increases these to 8TB ($219.99) and 16TB ($399.99). Both the Hub and the Hard Drive have USB 3.2 Gen 1 connectivity, and the Hub adds USB-C and USB-A ports to hook up other peripherals.

Let gaming battles commence.

Tintri analysis claims large capacity savings for VMstore

DDN’s Tintri unit has analysed thousands of Tintri VMstore customer deployments using its own SaaS analytics platform, and claims midrange 50TB, 2RU VMstore systems can deliver over 2PB of effective capacity in typical enterprise IT production environments – representing data reduction savings of up to 41x.
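The headline ratio follows directly from the quoted figures, as this quick sketch shows:

```python
# The claimed ratio is simple arithmetic on the figures Tintri quotes:
# a 50TB VMstore delivering "over 2PB" of effective capacity.
raw_tb = 50
effective_tb = 2 * 1024          # 2PB expressed in TB (binary convention)
print(effective_tb / raw_tb)     # ~41x, matching the "up to 41x" claim
```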

It quotes Avaya’s Steve Bartholomew, director of the firm’s IT Global Architecture division, as saying: “VMstore is the most form-fitting storage that I have ever come across. Imagine taking a full rack of legacy storage and consolidating 400 TB into 4 rack units, so you use 10 per cent of the footprint but get 50 per cent performance.”

Tintri VMstore T7000.


Tintri claims its Analytics service measures storage capacity and performance, as well as compute and memory requirements. They can be projected up to 18 months in advance across multiple systems, for each application, database and virtual machine.

Shorts

Alluxio’s v2.5 software update introduces a Java Native Interface (JNI) based FUSE integration to support POSIX data access. The team said it had also improved on AI/ML high-performance and high-concurrency workloads. Improved S3 support means admins can maintain and manage the Alluxio namespace through a standard object storage console across existing users.

AMD’s 7nm Rome and Milan EPYC processors have eight memory channels, like Intel’s gen 3 Ice Lake Xeon SPs. The coming 5nm AMD Genoa CPU will increase this to 12 memory channels, potentially increasing memory capacity by 50 per cent.

Cloud backup and storage service provider Backblaze, with Mac deployment options, is partnering with Jamf to sell its services in the Apple Mac enterprise market. Backblaze has also improved its Windows Mass Silent Installer (MSI) product and will launch mass deployment updates for its Mac client in the coming weeks.

Hitachi Vantara has appointed three new execs: Roger Lvin as President, Digital Solutions Business Unit; Radhika Krishnan as Chief Product Officer; and Frank Antonysamy as Chief Digital Solutions Officer. Bobby Soni is the President of Hitachi Vantara’s other Digital Infrastructure business unit.

HYCU will be offering its cloud-native backup and recovery SaaS facilities on the Google Cloud Platform through a reseller, Google Cloud Premier partner SADA. 

Intel was surprised by Micron’s March withdrawal from the Optane market, according to a SearchStorage report. It also said Optane PMem drives will support the coming CXL interface, opening up their use to AMD, Arm and other CXL-supporting processors.

Research house TrendForce said it expects DRAM prices to rise 18-23 per cent QoQ in 2Q21 owing to peak season demand. PC DRAM prices are now expected to undergo a 23-28 per cent QoQ growth in 2Q21 due to the increased production of notebook computers.

Scale-out file storage software supplier Qumulo is expanding to the APAC region in partnership with HPE and its ProLiant servers. This comes less than nine months after Qumulo announced Series E funding, and said it planned to expand its global operations.

Redis has said that its Redis Enterprise Cloud database is now available in the AWS Marketplace, supporting Redis in flash, Redis modules and active-active geo-distribution. Customers benefit from consolidated billing, combining Redis Enterprise Cloud usage with their Amazon Web Services (AWS) usage.

SingleStore’s analytics database is now available through IEX Cloud’s platform to ingest, normalise, and analyse large volumes of diverse data in real time. The SingleStore database enables speed and scale in data ingestion, processing, and querying. IEX Cloud’s platform distributes that data around the world via a single API with minimal latency.

Storage Made Easy reported record revenues in 2020 for its Enterprise File Fabric unified file and object storage product with 28 per cent revenue growth Y/Y. It said pandemic-induced remote working had increased data sizes and led to object storage increasingly being used in conjunction with file storage as primary storage for many employees.

Varada, which supplies data lake query acceleration, updated its software to leverage 10 times more data and deliver results up to 100 times faster than other data lake-based analytics platforms. It claims its dynamic and adaptive indexing technology enables security analytics workloads to run at near real time, especially on highly selective queries, without moving, duplicating or modelling data.

Veeam has quoted IDC data to show it outgrew the data protection market and many competitors in the second half of 2020, with 17.9 per cent Y/Y revenue growth. The market grew at 0.4 per cent and its competitors are shown in an IDC table:

Survivor Violin Systems has notched up a reseller for its QV-Series all-flash array: Ohio-based Cloud Propeller, which supplies IaaS, BaaS, DRaaS and VDIaaS to the public sector. Cloud Propeller chose Violin after evaluating DDN’s Tintri and NVMe-backed vSAN alternatives.

System and application performance monitor and manager Virtana has hired Christina Richards as its Chief Marketing Officer. It is developing self-service SaaS capabilities with a Virtana Platform offering and will be developing an ability to monitor and manage Kubernetes-orchestrated containers directly. Currently it manages them indirectly when they run in virtual machines.

HPE: Why mission-critical storage needs more intelligence

SPONSORED Storage is perhaps the most fiendishly complex part of enterprise IT, as the storage infrastructure has to meet the demands of a range of workloads with differing requirements for performance, all while ensuring reliability. This is particularly so for storage that supports mission-critical applications where high availability is an essential requirement.

Add to the mix the need to accelerate the speed of business and become more agile, and it is clear that organizations need a strategy for managing their data, one that takes a more intelligent approach. This in turn calls for more intelligent platforms to support the whole process and avoid being tied down with the complexity that mission-critical storage often entails.

These are the challenges that HPE set out to address with its Intelligent Data Platform and the HPE Primera mission-critical storage arrays that are based on it.

Although HPE Primera builds on the 3PAR heritage, HPE has taken the opportunity to effectively start with a clean sheet and build a next-generation solution that addresses customer challenges, Matthew Morrissey, senior product marketing manager for HPE Primera and 3PAR, says.

“The roots of the platform really started when we were talking with our customers and prospective customers about their specific pain points in this mission critical space. What we were finding is that a lot of these organizations were just spending too much time managing, supporting and tuning their infrastructure, and it was holding both IT and the business back, because their time was consumed with just keeping the lights on versus driving innovation.”

Part of the problem is that the basic architecture for mission-critical storage is little changed from when SAN systems were introduced around 20 years ago, yet the workloads that organizations are running today have evolved drastically. Storage arrays have gained numerous enterprise grade features, such as snapshots and remote replication, but these have arguably contributed to the increased complexity of configuring and managing the storage infrastructure.

High availability, one of the most important considerations in the high-end storage space, has traditionally been achieved by building lots of redundancy into the system, so that once a failure occurs, you have another redundant piece of hardware ready to fail over to.

HPE Infosight

This reactive approach is outdated, according to HPE, and a better solution is to be able to predict errors before they occur and therefore prevent them. This implies the use of AIOps (artificial intelligence for IT operations) to use a mixture of analytics and machine learning to spot any developing issues and take remedial action.

“With the coming of intelligence and AI, a much more efficient method is to architect a system with AI in mind, where you’re looking at a more predictive approach, preventing problems before they happen,” says Morrissey.

This capability is delivered through HPE InfoSight, which the firm gained via its acquisition of Nimble Storage several years ago. InfoSight is a cloud-based service that collects telemetry data from HPE storage systems deployed at customer sites worldwide, and uses machine learning to detect patterns or signatures that could indicate developing problems. It has been operating for over a decade and accumulated trillions of data points, making it widely recognised as the most mature predictive analytics platform for IT infrastructure.

HPE has expanded InfoSight’s capabilities by giving Primera an embedded AI engine, for a complete end-to-end AI pipeline. This means HPE Primera is globally informed and locally optimized. Bringing some intelligence onto the system itself allows Primera to operate in real time to ensure predictable performance for workloads within the storage environment.

“Every moment of every day Primera never stops predicting future application performance and resource needs in tiny increments. Every few seconds, Primera has a continuously developing window into the future and this enables the system to intelligently and dynamically optimize resource utilization to drive predictable performance,” Morrissey explains.
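HPE hasn’t published how the embedded engine forecasts, but the general idea of projecting recent telemetry forward to catch a problem before it happens can be sketched in a few lines of Python. This is purely illustrative and not HPE’s InfoSight/Primera model; the samples and the 250-microsecond threshold are assumed values:

```python
# Illustrative only: extrapolate a short run of latency telemetry forward
# and flag a predicted headroom problem before it actually occurs.
from statistics import mean

def forecast_next(samples, horizon=5):
    """Naive linear extrapolation over the last few telemetry samples."""
    xs = range(len(samples))
    x_bar, y_bar = mean(xs), mean(samples)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return samples[-1] + slope * horizon

latency_us = [180, 186, 195, 205, 218, 233]   # recent I/O latency samples (assumed)
if forecast_next(latency_us) > 250:           # assumed SLO threshold in microseconds
    print("predicted latency breach - rebalance or move the workload")
```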

With Primera and the Intelligent Data Platform, HPE is bringing the intelligence further up the application stack, to be able to pinpoint problems at the virtualization layers, such as in VMware and Hyper-V. This matters because more than 90 percent of issues affecting performance happen above the storage layer, according to HPE, and identifying and correlating those issues is a problem that’s become too complex for humans to solve.

As an example, adding a new workload to a storage array can often have unforeseen implications for the performance of other workloads. Thanks to InfoSight, HPE Primera understands the workloads and their required performance and allows customers to comprehend what will happen if a specific application were deployed on their storage array, and plan accordingly.

This includes recommendations for expanding storage capacity, or perhaps just moving workloads around to make the optimal use of available resources in the entire storage infrastructure.

“From a workload planning perspective, we can help you optimize your environment by putting a specific workload on a different resource, because maybe you don’t have to buy additional hardware, maybe you don’t have to buy additional flash. We can help you optimize your environment with your existing resources,” says Morrissey.

This self-managing automation and simplicity should not come at the cost of performance, and HPE Primera delivers this through a high degree of parallelization and an all-active multi-node architecture that has been built for NVMe drives and architected to deliver ultra-low latency.

“When we look across the installed base of Primera, we know from the figures we get from InfoSight that 75 percent of all I/O is delivered within 250 microseconds of latency. And that just screams predictable performance,” Morrissey says.

So confident is HPE of the performance and reliability of the Primera arrays, it is backing them with a 100 percent availability guarantee as standard, with the proviso that customers are up to date with Primera OS updates. Customers can claim credits for up to 20 percent of the value of the array in the event of any outage.

The level of automation in HPE Primera extends to deployment and upgrades. While traditional mission-critical storage infrastructure often requires professional services technicians to come in and install it, HPE claims that customers can deploy Primera themselves in as little as 20 minutes. Software upgrades are also seamless, with InfoSight telling customers which upgrades to download rather than an administrator having to pore through release notes. HPE claims that customers can upgrade Primera themselves during business hours, with no disruption to service.

Organizations also now live in a hybrid world, with workloads spread across on-premises and cloud-based infrastructure. There is a need to move data seamlessly between the two environments, such as to move an application from a cloud-based developer environment into an on-site production environment, where low latency and high availability can be guaranteed.

Native integration

To deliver this, Primera includes native integration with HPE Cloud Volumes and with cloud providers such as AWS and Azure. The cloud management tools embedded in HPE Cloud Volumes help avoid hidden costs such as egress fees, according to HPE.

This also touches on the issue of data management. Throughout the data lifecycle, data gets moved from primary storage to secondary storage and on to backup and archive storage. The Intelligent Data Platform moves the data to where it needs to be, based on the requirements of the application.

HPE InfoSight also provides customers with the ability to forecast capacity and performance needs and provide recommendations for timely upgrades, where necessary.

With HPE’s GreenLake subscription payment model, customers can also avoid the need for a large up-front capital expenditure, and HPE can even take the entire weight off the customer’s shoulders with a team that will remotely manage Primera as well, if required.

What this all adds up to is that organizations need greater intelligence in their storage infrastructure, in order to help them meet the growing demands of data and mission-critical workloads. They need an intelligent platform that can predict and proactively resolve issues before they occur, and deliver recommendations about how to handle specific workloads.

Powered by InfoSight, HPE believes that Primera and its Intelligent Data Platform deliver on those requirements. Primera raises expectations for mission-critical storage with a 100 percent availability guarantee while delivering the agility of cloud.

This article is sponsored by HPE.

GigaIO adds composability pods and clusters update for resource-chomping HPC and AI folks

Composability supplier GigaIO has updated its FabreX software to group its rackscale pooled accelerator resource units into pods of six cells and clusters of six pods, making up a 12-rack monster composable resource for high-end HPC and AI workloads.

GigaIO extends a PCIe Gen 4 bus out from servers, and groups accelerators – FPGAs, ASICs, GPUs and DPUs – plus Optane and other NVMe SSD storage into resource pools, called GigaCells, usable by any or all of the servers hooked up to the PCIe 4 link.

Alan Benjamin, GigaIO’s CEO, said in a statement: “With our revolutionary technology, a true rack-scale system can be created with only PCIe as the network. The implication for HPC and AI workloads, which consume large amounts of accelerators and high-speed storage like Intel Optane SSDs to minimise time to results, is much faster computation, and the ability to run workloads which simply would not have been possible in the past.”

Components such as Intel Optane SSDs continue to communicate over native PCIe (and CXL in the future), as they would if they were still plugged into the server motherboard, for the lowest possible latency and highest performance.

Instead of running several networks and having a so-called sea of NICs, HPC and AI data centre managers use the single PCIe Gen 4 fabric, composed of switches and PCIe NICs from GigaIO, to interconnect servers and the pooled resources.

GigaCell example. The Data Gateway is a pooled storage appliance with 24 x 2.5-inch slots in its 2U chassis. It can link to external networked storage. 

The basic element in GigaIO’s scheme is a GigaCell, which consists of third-party compute servers, a top-of-rack GigaIO switch, a storage unit (the Data Gateway in the diagram above), and an Accelerator Appliance containing GPUs, FPGAs, ASICs or DPUs, or even a GPU server such as Nvidia’s HGX A100.

GigaPod with six constituent GigaCells.

FabreX v2.2 enables up to six GigaCells to be grouped into a GigaPod. GigaIO said all the resources inside the entire GigaPod are connected by the FabreX universal fabric, transforming the GigaPod into what GigaIO calls one composable unit of compute.

The system can be enlarged for even bigger workloads: up to six GigaPods can be aggregated into a GigaCluster, with cascaded and interleaved switches. Such a cluster can run to 12 x 42-inch racks within a 100m distance using optical cables.

GigaCluster with six constituent GigaPods.

The PCIe Gen 4 fabric is built with GigaIO spine and leaf switches and the actual system composing is carried out by third-party off-the-shelf software options available from a number of vendors to avoid software lock-in.

Workloads run faster, as if they were using components inside one server, but harness the power of many nodes, all communicating within one seamless universal fabric. Leaf and spine, dragonfly and other scale-out network topologies are fully supported.

GigaIO quotes Addison Snell of Intersect360 Research as confirming the need for a composable universal fabric: “With analytics, AI, and new technologies to consider, organisations are finding their IT infrastructure needs to span new dimensions of scalability: across different workloads, incorporating new processing and storage options, following multiple standards, at full performance. The data-centric composability of FabreX is aimed at solving this challenge, now and into the future.”

Comment

GigaIO is competing in the data centre composability stakes with Dell EMC (MX7000), Fungible, HPE (Synergy), Liqid, and Nvidia. In our view GigaIO and Liqid are contenders in this space as both their offerings centre on the PCIe bus.

GigaIO addresses composability from an HPC and networking point of view, with customers buying into its switches and NIC, although it requires third-party composability software to be sourced by customers. It may need system integrator partners to sell complete systems to enterprises. 

Seagate pats itself on back for flat Q3 results

Seagate reported flat Q3 revenues of $2.73bn (up 0.4 per cent) and $329m net income, up 2.8 per cent. The disk drive maker said it sold more nearline, high-capacity drives and is ramping up 18TB shipments from the current 16TB high end.

In the earnings call yesterday, CEO Dave Mosley said: “Seagate delivered an outstanding March quarter, executing well across multiple dimensions,” referring to revenues, operating margin, earnings-per-share and share buybacks.

He added: “Strong cloud data centre demand and ongoing recovery in the enterprise markets drove our highest ever HDD shipments of 140 exabytes, a record mass capacity revenue of more than $1.6 billion.”

Financial summary:

  • Gross margin – 27.1 per cent (27.4 per cent a year ago)
  • Operating margin – 14.1 per cent (13.8 per cent a year ago)
  • Cash flow from operations – $378m
  • Cash and investments – $1.2bn
  • Diluted EPS – $1.48 ($1.38 a year ago)

Demand was strong in the enterprise nearline and data centre markets. PC drive demand was steady, and the company noted increased demand for mission-critical 2.5-inch 10K rpm drives. Video surveillance and image demand was down but should rise in the next quarter, the company said. The average capacity per drive rose to 5.1TB from 4.1TB a year ago. Shipments of 16TB, 18TB and 20TB drives represented nearly half of all the exabytes Seagate shipped in the quarter.
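A rough back-of-envelope calculation from those two figures gives a sense of the unit volumes involved (our arithmetic, not Seagate’s):

```python
# Rough back-of-envelope from the figures above: 140 exabytes shipped at
# an average 5.1TB per drive implies unit shipments in the tens of millions.
exabytes_shipped = 140
avg_tb_per_drive = 5.1
implied_drives = exabytes_shipped * 1_000_000 / avg_tb_per_drive  # 1EB = 1,000,000 TB
print(f"~{implied_drives / 1e6:.0f} million drives")              # roughly 27 million
```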

Mosley’s prepared remarks revealed that Seagate is “servicing the vast majority of market demand for 16TB and higher capacity drives. We’ve started to aggressively ramp 18-terabyte volume, and current demand suggests strong sequential growth through at least the calendar year.”

It is also shipping Mach.2 dual actuator drives, having “recently begun the high volume ramp of MACH.2 drives with a leading hyperscale customer and plan to expand shipments to additional customers later in the calendar year.”

Heat-assisted magnetic recording (HAMR) drives are being evaluated by users: “Today customers are testing 20TB HAMR drives in their production environments, which offers valuable feedback that we are factoring into our product roadmaps.”

But Seagate is hedging its HAMR tech bets, planning to “begin shipping a few versions of 20-terabyte drives in the second half of the calendar year,” Mosley said. A shingled magnetic recording version was also mentioned in the earnings call, along with drives with different firmware environments, to cater for hyperscale customers who need 20TB drives in a variety of formats.

Lyve Drive

The Lyve Drive external disk drive program is expanding cautiously. Mosley said Seagate is “on track to have four Lyve Cloud sites up by the end of the calendar year. We are getting ecosystem support and have now been certified with each of the leading backup software vendors.”

He is excited about “future potential for Lyve products and services, which open a large and growing market opportunity for Seagate estimated to reach about $50bn by 2025.”

Outlook

Mosley said: “Seagate continues to execute well and remains excited about the tremendous opportunities we foresee ahead, both in the near term and longer term, driven by massive growth of data.”

Seagate expects fourth quarter revenues of $2.85bn, plus or minus $185m – 13.1 per cent Y/Y growth at the $2.85bn midpoint. This would make for full-year revenues of $10.5bn, the same as last year. That growth rate is a substantial acceleration on the current quarter’s 0.4 per cent Y/Y growth, and if it were to continue over the next few quarters, Seagate’s fy2022 results could show a substantial rise over fy2021.

Rubrik gains NetApp as a reseller


NetApp is to resell Rubrik’s Cloud Data Management software for data protection, security, compliance and governance.

Rubrik provides software to back up application data and help customers with data security, compliance and governance. It can use NetApp StorageGRID object storage as an on-premises target system for storing the backed-up data, and NetApp target facilities in the public cloud.

Kim Stevenson, an SVP and GM at NetApp, said: “Rubrik and NetApp together are delivering data management solutions for digital transformation in a hybrid multi-cloud world.”

The companies will offer a common set of data management tools across a data fabric spanning the on-premises and public cloud environments. Rubrik and NetApp say the combination of Rubrik Cloud Data Management and enhanced NetApp ONTAP software will provide consolidation benefits, deeper cloud integration and continuous data availability.

Rubrik’s Go Business Edition and Go Foundation Edition software will now appear on NetApp’s global price list. This deal will help both Rubrik and NetApp channel partners combine to win business.

Comment

This is a big win for Rubrik and gives it added credibility with NetApp’s salesforce and its channel partners. It will also enhance Rubrik’s standing in the overall enterprise data protection and governance market and reinforce its status in comparison to Cohesity, Commvault, Veeam and others as a major player to be reckoned with.

Regarding HPE’s May 4 unleash the power of data event

HPE is promoting a REALLY important webcast on May 4, entitled “Unleash the Power of Data”, and we think we have worked out what it is about.

Tweets – such as this one – began appearing a few days ago.

A teaser video appeared on an HPE webpage:

HPE May 4 event teaser video.

The webpage references a downloadable eBook, Unleashing the Power of Your Data – For Dummies.

This dummy followed the link and downloaded this book, an HPE special edition:

It is organised into six chapters:

  1. Changing the Game with Data
  2. Establishing an Intelligent Data Strategy
  3. Understanding Intelligent Data Platform Components
  4. Transforming Your Business and Your IT
  5. The Cloud, the Edge, and Your Valuable Data
  6. Ten (Or So) Benefits of an Intelligent Data Platform

The book “examines how intelligence changes everything, the role of an intelligent data strategy, and how your business and IT can be transformed with an intelligent data platform at the heart of this strategy.” It sets out to answer what it calls nine key questions:

  1. Do you have a data strategy?
  2. What elements make up an intelligent data strategy?
  3. What makes up an intelligent data platform?
  4. Is your infrastructure able to predict and prevent problems before they occur?
  5. Is your data residing on infrastructure that’s designed to be high-performing and available?
  6. Do you have a data protection strategy?
  7. Can you achieve data agility while enabling hybrid cloud?
  8. Are you getting the most value from your data?
  9. Are you optimising the cost of storing that data at each step of the life cycle?

The basic message is that HPE storage products are integrated and organised with AI-driven global intelligence and AIOps to form an intelligent data platform.

The HPE products and offerings mentioned en route through the book include the Ezmeral software portfolio, InfoSight, Nimble Storage, Primera, SimpliVity, StoreOnce, Cloud Volumes Backup, Cloud Bank Storage, and GreenLake.

HPE’s Intelligent Data Platform concept has been around for a couple of years, certainly since June 2019. The book describes it like this: “An intelligent data platform collects data not just from storage devices, but from servers, virtual machines (VMs), networks, and other infrastructure elements across the stack. It applies AI and ML to spot what’s not right in order to predict and prevent issues.

It uses predictive analytics to anticipate and prevent issues across the infrastructure stack and to speed resolution when issues do occur.”

It’s about having a self-managing, self-healing and self-optimising infrastructure “with workloads that operate across the cloud, in on-premises environments and at the edge.”

Almost two years later, HPE is set to make a song and dance about it. We think it is going to centre on innovations around InfoSight, HPE’s intelligent cloud-based system monitoring and predictive analytics service. The company acquired this with Nimble Storage and has extended the technology to cover SimpliVity edge HCI and Primera data centre arrays as well as HPE’s servers.

We think HPE will announce that InfoSight has been developed into a tool looking at storage and more in a hybrid multi-cloud and on-premises setting with greater AI capabilities to get the right data into the right locations.