
A slow burn: SmartNIC sales to grow to over $1.5 billion in 2026

The SmartNIC revolution is going to be a slow-burn one, with Gartner predicting there will only be $1.6 billion of sales in 2026, up from $50 million in 2020.

Wells Fargo analyst Aaron Rakers passed on a Gartner report summary to subscribers, saying: “Gartner forecasts that shipments of SmartNICs (also known as DPUs and IPUs) will grow 18x from 2021 to 2025 as data centres move from top of rack (TOR) switches to SmartNIC leaf switches.”

A SmartNIC (Network Interface Card) or DPU (Data Processing Unit) has processing power and software to look after network, storage and security functions currently managed by the NIC host server or dedicated devices. It is worth putting these functions in a SmartNIC because, in larger data centres with hundreds if not thousands of servers, an increasing amount of network traffic is solely between servers (so-called east-west traffic) and not between applications in servers and customers, partners, etc. outside the data centre (north-south traffic).

Example SmartNIC products include NVIDIA’s BlueField-2 card, Intel’s IPU products, Xilinx’s Alveo FPGA cards, Nebulon’s SPU and chips made by Pensando and Fungible.

A Juniper blog discussed how SmartNICs offload layer-4 to layer-7 network processing in the OSI model from host servers.

If we take the east-west network traffic off host servers and have SmartNICs handle it instead, then the x86 servers have more cycles available for applications. The Gartner forecast suggests that the number of server and hypervisor licenses can be reduced by between ten and 30 per cent because internal data centre networking, storage and security workloads can be offloaded to the SmartNICs.

In April this year Gartner estimated that, by 2023, one in every three network interface cards shipped will be a SmartNIC, according to the Juniper blog. Intuitively it feels as if the SmartNIC market has cooled from this level.

The Gartner forecast included a chart showing market sector interest in SmartNICs:

The right-hand bars show that SmartNIC adoption waned from 2019 to 2020 in the education, finance, healthcare, manufacturing and retail market sectors — possibly as hype gave way to reality. That would slow down SmartNIC adoption. The communications sector has, unsurprisingly, the highest interest in SmartNICs with, Rakers says, “use cases like virtual 5G routing, switching, and user plane functionality targeted.”

Intel says it has the leading share of SmartNIC sales in the hyperscaler market, and that is cross-sector as far as the Gartner chart is concerned. It’s also a niche. A thumping great big one, but a niche nonetheless. Gartner’s slow-growth forecast indicates that enterprise on-premises adoption of SmartNICs will be a slow burn, not a raging fire.

Seagate pipelining test car data to IBM/Nvidia AI smart car modelling systems

Moving 100TB of data per car per day from test cars to GPU-driven advanced driver assistance system (ADAS) AI modelling systems needs a multi-stage data pipeline. Seagate, IBM and Nvidia have created one based on disk drives in the cars, object storage, parallel file system software and GPUs.

Each test vehicle is, in effect, a mobile edge IT site generating around 100TB a day of data from the multiple sensors in the car — such as radar, LiDAR, engine management system, cameras and so forth. This data has to be somehow fed into a data lake, from which it can be extracted and used for AI and machine learning modelling and training. The end result is AI/ML model code which can be used in real life to assist drivers by having vehicles take on more of the driving responsibility.

The problem is that there can be 10, 20 or more test vehicles, each needing to store logged data from multiple devices. Each device can store its own data (a distributed concept) or they can all use a central storage system.

Each method costs money. Seagate calculates that just putting in-vehicle data storage in place can cost up to an eye-watering $200,000 per vehicle. That means a 50-vehicle fleet needs an up to $10 million capital expenditure before you even start thinking about how to move the data.

Centralising the data can be done two ways: move the data across a network or move the data storage drives. Once in the same data centre as the AI training systems the data has to be made available at high speed to keep the GPUs busy.

Networking the data is costly and, unless the automobile manufacturer spends a great deal of money, slow. Consider that a fleet of 20 vehicles could each arrive at the edge depot on a daily basis with 100TB of data. That’s 2PB/day to upload. It’s cheaper and maybe faster to just move the drives containing the data than send the bytes across a network link. 
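To put rough numbers on that trade-off, here is a minimal back-of-the-envelope sketch in Python. The fleet size and per-vehicle data volume come from the figures above; the 10Gbit/sec depot uplink and 24-hour courier turnaround are illustrative assumptions, not Seagate figures.

```python
# Back-of-the-envelope: uploading the fleet's daily data vs shipping the drives.
# Fleet figures are from the article; link speed and courier time are assumptions.
FLEET_SIZE = 20                # test vehicles returning to the depot each day
TB_PER_VEHICLE_PER_DAY = 100   # logged sensor data per vehicle
LINK_GBIT_PER_SEC = 10         # assumed depot uplink (illustrative)
COURIER_HOURS = 24             # assumed time to truck data cartridges (illustrative)

daily_tb = FLEET_SIZE * TB_PER_VEHICLE_PER_DAY      # 2,000TB = 2PB per day
daily_bits = daily_tb * 1e12 * 8                    # decimal terabytes to bits

upload_hours = daily_bits / (LINK_GBIT_PER_SEC * 1e9) / 3600

print(f"Daily volume: {daily_tb} TB")
print(f"Upload time at {LINK_GBIT_PER_SEC} Gbit/s: {upload_hours:.0f} hours")
print(f"Drive shipment (assumed): {COURIER_HOURS} hours")
```

At those assumed rates a single 10Gbit/sec link needs roughly 440 hours (about 18 days) to move one day’s data, which is why physically moving the drives wins.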

Data pipeline

In summary we have five problem areas in this data pipeline: 

  • In-vehicle storage;
  • Data movement from vehicle to data centre;
  • Data centre storage;
  • Feeding data fast to GPUs;
  • AI training.

Seagate takes care of the first three stages. IBM looks after the fourth stage, and Nvidia is responsible for stage number five.

Seagate-IBM-Nvidia ADAS data pipeline. (HiL/SiL stands for Hardware-in-the-Loop/Software-in-the-Loop.)

Each test vehicle has a disk drive array in its trunk: a Seagate Lyve Mobile array. These take data streamed from sensors to National Instruments (NI) data loggers and store it centrally. When a test car returns to its depot, drives inside data cartridges can be removed and physically transported to Lyve Mobile Rack Receivers in the AI/ML training data centre.

The Lyve Mobile system can be used on a subscription basis to avoid capital expenditure.

Once the data arrives at the data centre it can be stored in Seagate’s CORTX object storage system and also in Seagate’s Lyve Cloud object storage for longer-term retention. At this point IBM steps onto the stage.

Its AFM (Active File Management) software feeds the data from CORTX, a capacity tier, into the Spectrum Scale parallel file system and NVMe flash storage. The data then sits in a performance tier, and Spectrum Scale can send it at high speed and low latency over RDMA (think GPUDirect) to Nvidia GPUs for model training.
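As a rough illustration of that capacity-to-performance-tier staging step (which AFM performs automatically in the real pipeline), the sketch below copies objects from an S3-compatible endpoint, such as the one CORTX exposes, into a directory on a parallel file system mount. The endpoint URL, bucket name and mount path are hypothetical.

```python
# Illustrative staging of sensor-log objects from an S3-compatible capacity tier
# (e.g. a CORTX endpoint) into a directory on a parallel file system performance
# tier. In the Seagate/IBM pipeline Spectrum Scale AFM does this automatically;
# this sketch only shows the idea. Endpoint, bucket and paths are hypothetical.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://cortx.example.internal:9000",   # hypothetical endpoint
    aws_access_key_id=os.environ["S3_ACCESS_KEY"],
    aws_secret_access_key=os.environ["S3_SECRET_KEY"],
)

BUCKET = "adas-sensor-logs"          # hypothetical bucket
PERF_TIER = "/gpfs/perf/adas"        # hypothetical Spectrum Scale mount point

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix="vehicle-07/2021-09-15/"):
    for obj in page.get("Contents", []):
        dest = os.path.join(PERF_TIER, obj["Key"])
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        s3.download_file(BUCKET, obj["Key"], dest)   # now on the NVMe flash tier
```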

This Seagate-IBM-Nvidia partnership enables fleets of cars at the mobile edge to generate hundreds of terabytes of test data on a daily basis and have it transported to and stored within a data centre, and then transferred at high speed to GPUs for model training. We have a workable data pipeline for ADAS data generation and model creation. 

Toshiba extends 18TB technology to NAS and workstation disk drives

Toshiba has extended the use of its 18TB FC-MAMR MG09 disk drive technology to its N300 NAS drive and X300 workstation drives, raising their maximum capacity to 18TB from the prior 16TB.

Both the N300 and X300 are 3.5-inch format, nine-platter drives in sealed helium-filled enclosures. The FC-MAMR technology, explained here, uses a specifically oriented output from a Spin Torque Oscillator (STO) added to the write head to increase the strength of the magnetic flux write signals pumped out to the disk’s recording surface and so make smaller and still stable bit areas possible. It adds an effective 2TB capacity, a 12.5 per cent increase, so a 16TB drive becomes an 18TB one.

The MG09 has a 550TB/year workload rating, a 2.5 million hours MTBF rating and a five-year warranty. Both the new drives are downrated on these measures. The N300 spins at 7200rpm and has a 6Gbit/sec SATA interface. It transfers data at 281MB/sec, supports up to 180TB/year written and has a 1.2 million hours MTBF rating with a three-year warranty.

The X300 has a two-year warranty. Toshiba says its cache technology optimises cache allocation during read/write operations so as to provide high-level performance in real time. (Ditto for the N300.) Yet Toshiba’s datasheet includes no transfer performance number. Gamers interested in really fast I/O will use SSDs instead of disk drives.

The X300’s latency is 4.17ms, the same as the N300’s. We’d suppose the X300’s transfer rate is the same as the N300’s as well. Toshiba’s datasheet gives the X300 a 600,000 hours MTBF rating, against the N300’s 1.2 million hours. We’re not given an X300 workload rating, but that’s not so relevant for what is a PC/workstation gaming disk drive.

Western Digital has upped its 3.5-inch disk drives to 20TB using embedded NAND in the controller to hold drive metadata — so-called OptiNAND technology. Toshiba has to find another 2TB of capacity from somewhere to add to its drives to reach that level.

Both the N300 and X300 should be available before the end of 2021.

Your occasional storage digest with Rewind, Satori, a Snowflake quadfecta, benchmark bifecta and more

There’s a mini-blizzard of Snowflake news — a quadfecta in fact — plus two startups raising cash: Rewind for SaaS application backup, and Satori for its secure real-time access to data across multiple stacks. We have news about a benchmark bifecta (AI and datacenter) too, some TrendForce storage predictions for 2022, and a truckload of mini news items, including customer deals and exec hires.

Read on.

Satori fund raiser

Israeli startup Satori, founded in 2019, has raised $20 million in A-round funding.

It is developing a universal data access service to provide real-time access to secured data. It provides a single control plane for real-time data access and usage oversight across data stacks. Think DataSecOps. Satori has multiple out-of-the-box integrations with data stores such as Snowflake, Amazon Redshift, Amazon Athena, Amazon Aurora, and Azure SQL.

The round was co-led by B Capital Group and Evolution Equity Partners, with participation from Satori’s seed investor YL Ventures. The cash will pay for R&D and engineering, and for go-to-market expansion in the USA.

Eldad Chai, co-founder and CEO of Satori, said: “Satori’s platform is the only service able to seamlessly integrate diverse data tech stacks and streamline data access and security without code and without changes to the underlying data stores.”

The other co-founder, CTO Yoav Cohen, said: “We started Satori as we knew cloud data infrastructures would require a radical shift in security toward it being granular, universal and also non-intrusive. Having launched with multiple out-of-the-box integrations with the industry’s leading cloud data stores, such as Snowflake, Amazon Redshift, Amazon Athena, Amazon Aurora, and Azure SQL, we empower data teams to roll out self-service access, row- and column-level security, and dynamic de-identification across data stores in minutes.”

Satori is a Japanese term for understanding.

Rewind fund raiser

Startup Rewind raised a $65 million B-round to further develop its cloud backup and recovery software for business SaaS tools such as BigCommerce, GitHub, QuickBooks Online, Shopify, Shopify Plus, and Trello. It raised $15 million in an A-round in January.

The over-subscribed round was led by Insight Partners with participation from Bessemer Venture Partners, FundFire, Inovia Capital, Ridge Ventures, ScaleUp Ventures, and Union Ventures. Atlassian Ventures made a so-called strategic investment in the round as well.

Mike Potter, co-founder and CEO of Rewind, said: “While we’ve continued to double revenue and software subscribers year over year, every day our team talks with seasoned technology professionals who still do not realise they lack full control of their vital business data. We see tremendous opportunities to backup the entire cloud and are working towards that vision.

“This latest round of funding will allow us to expand our reach, bring additional SaaS backup solutions to the market, and raise awareness for the need to include all cloud and SaaS applications in a business’s backup and recovery strategy.” 

Rewind will spend the cash on product development. It already has more than 100,000 customers; the count in January was 80,000-plus. So it’s effectively added 20,000 customers in six months — more than 3000 a month. That’s meteoric growth. You can see why the VCs are excited.

Snowflake news quadfecta

AnalyticsIQ, a provider of predictive marketing data, announced an expanded relationship with Snowflake, which calls itself the Data Cloud company. AnalyticsIQ will make its B2C and B2B data available through the Snowflake Data Marketplace.

Satori recently announced a partnership with Snowflake to enable DataSecOps for Snowflake’s data cloud. The collaboration has also led to a soon-to-be-released publication, Snowflake Security: Securing Your Snowflake Data Cloud — a comprehensive guide, developed alongside Snowflake, that codifies DataSecOps best practices to help data teams secure their cloud environments.

Snowflake launched a Financial Services Data Cloud aimed at finance sector customers and their data warehouse and analysis needs. It said businesses can use the Financial Services Data Cloud to launch new products and services, build fintech platforms, and collaborate on data across the enterprise, while meeting regulatory requirements, using Snowflake’s security and governance capabilities.

Snowflake has completed an assessment, conducted by KPMG, of the 14 Key Cloud Controls for protecting sensitive data, a component of the EDM Council’s Cloud Data Management Capabilities (CDMC) Framework, which provides a comprehensive set of industry-standard guidelines for financial services organisations and other industries as they move their data into the cloud. Snowflake is the first cloud platform to be so assessed.

TPC AI benchmark

The Transaction Processing Performance Council (TPC) announced the immediate availability of TPCx-AI, the first industry-standard, vendor-neutral benchmark for measuring real-world AI and ML scenarios and data science use cases. TPCx-AI uses a diverse dataset and was specifically designed to be adaptable across a wide range of scale factors. It provides a means to evaluate performance for the System Under Test (SUT) as a general-purpose data science system that:

  • Generates and processes large volumes of data;
  • Trains preprocessed data to produce realistic machine learning models;
  • Produces accurate insights for real-world customer scenarios based on the generated models;
  • Can scale to large-scale distributed configurations;
  • Allows for flexibility in configuration changes to meet the demands of the dynamic AI landscape.

The benchmark measures end-to-end time to provide insights for individual use cases, as well as throughput metrics to simulate multiuser environments for a given hardware, operating system, and data processing system configuration under a controlled, complex, multi-user AI or machine learning data science workload.

TPCx-AI is an executable kit that can be rapidly deployed and measured. It is designed to provide relevant, objective performance data to industry users and is available for download via TPC’s web site.

SPEC datacentre benchmark

The Standard Performance Evaluation Corporation’s (SPEC) Virtualization Committee released the SPECvirt Datacenter 2021 benchmark, a multi-host benchmark for measuring the performance of a scaled-out datacenter. It uses real-world and simulated workloads to measure the overall efficiency of virtualization solutions and their management environments. This new benchmark complements the existing SPECvirt_sc 2013 server consolidation benchmark, which is designed for a single-host environment.

The SPECvirt Datacenter 2021 benchmark feature overview:

  • Multi-host benchmark — Minimum of four hosts required, scales in increments of four.
  • Datacenter operations model — Multi-workload benchmark measures performance of hypervisor infrastructure, including how the hypervisor manager controls resources.
  • Five real-world and simulated workloads —
    • OLTP database, based on HammerDB benchmark.
    • Hadoop/Big Data cluster, based on BigBench benchmark.
    • Simulated departmental mail server.
    • Simulated departmental web server.
    • Simulated departmental collaboration server.
  • VM resource management — Handled by the hypervisor manager, including scheduling policies. Workload VMs powered on or deployed during benchmark.
  • Ease of use — Single preconfigured template VM to set up harness and workloads. No tuning of guest OS/software necessary.

The SPECvirt Datacenter 2021 benchmark is available for immediate download from SPEC for $2500. There is a $500 discount for those who already have a copy of the SPECvirt_sc 2013 benchmark until March 2, 2022.  Discounts are also available for qualifying non-profit research and academic organizations. Visit the SPEC web site for more information.

TrendForce predictions for 2022

Research house TrendForce says that, while DDR5 products gradually enter mass production, NAND Flash stacking technology will advance past 200 layers.

The three dominant DRAM suppliers (Samsung, SK hynix, and Micron) will gradually kick off mass production of next-gen DDR5 products, and will also continue to increase the penetration rate of LPDDR5 in the smartphone market in response to demand for 5G smartphones. With memory speeds in excess of 4800Mbit/sec, DDR5 DRAM can massively improve computing performance through higher speed and lower power consumption.

As Intel releases its new CPUs that support DDR5 memory, with Alder Lake for the PC segment, followed by Eagle Stream for the server segment, DDR5 is expected to account for about 10–15 per cent of DRAM suppliers’ total bit output by the end of 2022. Regarding process technologies, Samsung and SK hynix will kick off mass production of 1 alpha nm products manufactured with EUV lithography. These products’ market shares will likely increase on a quarterly basis next year.

Turning to NAND Flash products, their stacking technologies have yet to reach a bottleneck. Hence, after 176L products entered mass production in 2021, suppliers will continue to migrate towards 200L and above in 2022, although these upcoming products’ chip densities will remain at 512Gb/1Tb. 

Regarding storage interfaces, the market share of PCIe Gen-4 SSDs will likely skyrocket in the consumer PC segment next year. In the server segment, as Intel Eagle Stream CPUs, which support PCIe Gen-5, enter mass production, the enterprise SSD market will also see the release of products that support this interface. Compared to the previous generation, PCIe Gen-5 features double the data transfer rate at 32GT/sec and an expanded storage capacity for mainstream products at 4/8TB in order to meet the HPC demand of servers and data centers. 

Additionally, the release of PCIe Gen-5 SSDs is expected to quickly raise the average data storage capacity per server unit.

Shorts

Cloud backup and storage service provider Backblaze has written a Multi-cloud Architecture Guide. Read about it in a blog. The blog lists several multi-cloud advantages: better reliability and lower latency; redundancy; more freedom and flexibility; affordability; and best-of-breed services.

A Databarracks 2021 Data Health Check survey shows that 15 per cent of organisations are still using a combination of disk and tape backups, with 51 per cent now using online or cloud backups. Cloud and online backups have continued to increase in popularity, climbing from 23 per cent in 2008 to 51 per cent in 2021. Four per cent still use tape as their only backup medium — unchanged since 2012. Combined disk and tape use has declined from a peak of 29 per cent in 2012, to 15 per cent.

Unstructured data migration specialist Datadobi has completed certification by KPMG in accordance with Service Organization Control (SOC) 2 Type I requirements for DobiMigrate’s operations, support, and engineering processes. This helps strengthen Datadobi’s appeal to enterprises. Get a copy of the KPMG report here.

Cloud data protection service provider Datto has announced Datto Continuity for Microsoft Azure, a simple, secure, and reliable business continuity and disaster recovery solution for business infrastructures in the Azure cloud. It’s built on Datto’s flagship BCDR technology, and will provide MSPs the ability to offer data protection, management, and streamlined recovery for their SME clients’ business workloads — at a predictable cost and without the need to piece together individual technologies or depend solely on Microsoft’s data backup services.

DCIG has a report looking at the top five vendors for on-premises NAS consolidation: CTERA, iXsystems, Nasuni, StorONE and WekaIO.

File sharing/collaboration supplier Panzura has launched a Service Hub, complimentary for all Panzura customers, that expands access to customer support and other Panzura services. The company also introduced a bold new corporate brand image, redesigned logo, and web site. Its Chief Services Officer, James Seay, burbled smoothly about this: “For us, there’s no such thing as business as usual. We’re committed to going above and beyond to provide the best customer experience on the planet.” The debut of the Panzura Service Hub is part of an effort to revamp customer programs and processes.

Pliops, developing  a storage processor to offload server CPUs, has set up a global channel program. It has implemented a partner assist, channel-centric go-to-market strategy and will strategically align with systems integrators and value-added resellers offering database, analytics, ML/AI, HPC and web-scale solutions, as well as cloud deployments. TD SYNNEX will distribute Pliops’ XDP product in North America. The XDP is now broadly available.

PNY Technologies, a European provider of systems for artificial intelligence, HPC, datacenter and professional visualization markets, recently announced that it has signed a distribution agreement with Nvidia. It has become an Nvidia direct global distributor for the entire range of InfiniBand and Ethernet networking switches, adapter cards/NICs and cables for resellers in EMEA.

US Signal, a Michigan-based IT systems company, has chosen SoftIron as a partner to expand its Ceph-based Storage-as-a-Service infrastructure. The initial implementation, which immediately increases US Signal’s storage capacity by over a petabyte, leverages SoftIron’s Ceph-based HyperDrive storage appliances and is being facilitated with a no-downtime migration across multiple distributed datacenter sites in four midwest states: Illinois, Indiana, Michigan, and Wisconsin.

As a result of a strong Q3, open-source supplier SUSE expects to report full year adjusted revenue in the top half of the guidance range ($554M to $574M) and adjusted cash EBITDA above the top of the guidance range ($246M to $266M), and still expects to achieve an adjusted EBITDA margin for the year of mid-30s per cent.

Cloud-native data protector Trilio announced the release of TrilioVault for Kubernetes (TVK) v2.5, which offers a comprehensive approach to ransomware protection and recoverability in alignment with the National Institute of Standards and Technology (NIST) Cybersecurity Framework and in support of Zero-Trust architectures. It features backup immutability and encryption. TVK v2.5 also gains multi-namespace backup support, Azure Blob and GCP Object Storage as backup targets, and authentication support for OIDC, LDAP and cloud authentication providers.


People

DevOps data supplier Delphix has appointed Josh Harbert as its Chief Marketing Officer. He will be responsible for building a world-class marketing organization, accelerating pipeline generation, amplifying product and brand awareness, and increasing customer engagement with Delphix. His CV shows tenures at Apptio, Qumulo, and most recently, Tanium.

Liqid has hired Steve Tucker as its CFO. The company has doubled its staff year-over-year. He joins Liqid from insurance software provider Vertafore. Liqid has also hired Beth Turman as Executive Director for Public Sector Sales, Gary Billingsley as Director of Federal Sales, and Ben Bolles as Executive Director for Product Management.

Bernie Wu.

MemVerge has hired Bernie Wu as VP of Business Development. He will lead MemVerge’s global alliance, software partnerships and industry ecosystem efforts as well as working to expand vertical market use cases for Big Memory computing using MemVerge’s memory virtualisation technology. Wu comes from Trend Micro with stints at FalconStor, MetalSoft, Levyx, Prophetstor, PiCoral, Cheyenne Software and Conner Peripherals.   

Customers

MariaDB Corporation announced that cloud-based Student Information System provider Campus Cloud Services has migrated to MariaDB SkySQL, running on Google Cloud Platform (GCP), as its cloud database, along with the SkyDBA service for fractional DBAs with A+ proactive care. Campus Cloud is a cloud service that manages all student data in one place — across admissions, academics, financial aid, accounts and billing, graduation and career placement — for over a hundred thousand students each day and growing.

Platform9, which supplies multi-cloud Kubernetes-as-a-Service, announced that Norna, an applied Artificial Intelligence company, experienced a ten-fold productivity improvement and a 78 per cent total cost of operations (TCO) reduction after implementing Platform9’s Managed Kubernetes-as-a-Service to power the company’s retail fashion AI technology.

Toulouse-based PortAlliance Engineering has become a Qumulo File Data Platform customer. It deployed its first Qumulo array in September 2020 and currently has a capacity of 160TB. PortAlliance Engineering is part of the Airbus group.

The San Francisco 49ers professional football team announced a multi-year partnership with Qumulo to serve as the team’s data storage provider. Qumulo storage will support the 49ers’ operations at Levi’s Stadium and its SAP Performance Facility. A primary use of Qumulo’s storage platform will be extending the storage capacity for security camera feeds captured in and around the 49ers’ home venue and training facility. What previously required 54 individual storage arrays will now be consolidated into a single namespace.

Liqid composability flows into VMware through vCenter plugin

VMware vCenter users can now dynamically compose servers on which to run virtual machine (VM) workloads with a Liqid software plug-in.

Traditionally vCenter users create virtual machines which run on physical server hardware configurations. Such servers are virtualized by VMware and run VMs inside an ESXi hypervisor environment. Now the underlying physical servers themselves can be dynamically composed by Liqid’s Matrix CDI software using a vCenter plugin. This capability was first explored in June.

Sumit Puri.

Sumit Puri, CEO & Cofounder, Liqid, put out a somewhat contentious statement: “The static nature of traditional server architecture is the primary driver for countless companies moving to the cloud.” Others might say: “No it’s not. It’s the lower cost and faster deployment in the cloud.”

His next point is more realistic: “We complement VMware nicely in that we accelerate host deployments and scaling so customers can deliver their virtualised systems faster and more efficiently. … Liqid’s composable software features can be managed in vCenter via a single pane of glass, enabling IT to dynamically create and manage both virtual machines and the bare metal hosts they reside on, maximising efficiency.”

Matrix CDI composes a server configuration from elements including a processor and DRAM, GPUs, FPGAs, and NVMe storage which are connected across a PCIe or other fabric. The aim of this is to increase the utilisation of these resources because fixed physical server configurations can have some of their resources under- or over-utilised — stranded, as it were. 

Liqid’s composability software makes dynamically software-configured servers out of resource elements, which can then be treated as bare metal servers on which to run ESXi and virtual machines. Liqid and VMware say these software-defined servers can better match the resource requirements of VM workloads. Such software-defined servers can be set up and deployed in minutes, contrasting with the days or even weeks needed for a new physical server acquisition and deployment.
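The composability idea can be pictured with a toy resource-pool model: a composed server is just a grouping of devices drawn from shared pools over the fabric and handed back when the workload finishes. The Python below is purely illustrative and is not Liqid’s Matrix CDI API.

```python
# Toy model of composable disaggregated infrastructure: devices sit in shared
# pools on a fabric and are grouped into "servers" on demand, then returned.
# Purely illustrative; not Liqid's actual API.
from dataclasses import dataclass, field

@dataclass
class Pool:
    name: str
    free: list = field(default_factory=list)

    def take(self, n):
        if len(self.free) < n:
            raise RuntimeError(f"not enough free {self.name} devices")
        taken, self.free = self.free[:n], self.free[n:]
        return taken

    def give_back(self, devices):
        self.free.extend(devices)

gpus = Pool("GPU", [f"gpu{i}" for i in range(8)])
ssds = Pool("NVMe", [f"nvme{i}" for i in range(16)])

# Compose a bare-metal host for a GPU-heavy workload: four GPUs, two NVMe drives.
host = {"gpus": gpus.take(4), "ssds": ssds.take(2)}
print("composed host:", host)

# When the workload ends the devices return to the pools instead of sitting idle.
gpus.give_back(host["gpus"])
ssds.give_back(host["ssds"])
```

The point of the model is the last step: freed devices go back into the shared pools rather than staying stranded in one box.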

The Liqid vCenter Plug-in for vCenter Server provides a web-based tool integrated with the vSphere Web Client UI that allows customers to compose bare-metal hosts, add and remove resources, and view configuration information in vCenter. Liqid’s software fits into the existing VMware server environment, making its adoption straightforward.

Flow baby, flow — Datacenter-as-a-Service from Mirantis

SaaS, PaaS, IaaS — no, think bigger. How about an entire datacenter as-a-Service? OpenStack cloud supplier and Kubernetes-focussed Mirantis has launched just such an offering, called Flow.

Flow is cloud-native, vendor-agnostic and supports both virtual and containerised workloads. New customers can have the service up and running in five days. They deploy and run a centrally managed, scalable cloud infrastructure on a public cloud and out to the edge.

Adrian Ionel.

Adrian Ionel, CEO and co-founder of Mirantis, said: “As more businesses pursue digital transformation, enterprise datacenters are challenged to deliver a true cloud experience to their users while also reducing costs. Until now, cloud-native was sold and marketed as piece parts for enterprises to assemble. Mirantis has already helped hundreds of today’s tech savvy companies, including Booking.com, Reliance Jio, Netskope, and Societe Generale, implement an open source, cloud-native approach to infrastructure.

“For the first time, Flow takes that software, knowledge, and support expertise and packages it up for easy deployment, enabling enterprises to replace their legacy infrastructure, or even begin their cloud journey, with an open-source, cloud-native datacenter that supports their most valuable use cases and brings a stream of innovations to developers and application owners — at significantly less cost.”

A Mirantis blog reads: “Combining Mirantis Container Cloud with 24x7x365 monitoring and operations support, Mirantis Flow provides a Datacenter-as-a-Service experience that enables your developers to focus on creating and using Kubernetes and OpenStack clusters rather than managing infrastructure.”

Mirantis Flow integrates many open source technologies in a flexible way and packages those as a subscription service. Flow can utilise existing computing hardware and includes:

  • Mirantis Container Cloud, providing deployment and lifecycle management of Kubernetes clusters across multiple infrastructure platforms;
  • Mirantis Kubernetes Engine certified distribution;
  • Mirantis OpenStack (on Kubernetes); 
  • Lens Spaces; 
  • Mirantis StackLight monitoring and alerting; 
  • OpsCare 24×7 proactive support or OpsCare managed services, which includes deployment.

Pricing and Availability

Mirantis Flow is available immediately and priced at $15,000 per month or $180,000 annually, which includes:

  • 1000 core/vCPU licenses for access to all products in the Mirantis software suite;
  • No additional charge for control plane and management software licenses;
  • Support for initial 20 virtual machine (VM) migrations or application onboarding;
  • Unlimited 24×7 OpsCare support

A Mirantis blog provides background information.

Go go GigaIO as it composes a funding raise

Data centre network fabric startup GigaIO has raised $14.7 million in a B-round, following its $4.5 million A-round in 2018. That came six years after its 2012 founding and one year after a 2017 seed round. GigaIO is developing FabreX software to compose — dynamically group together — rack-scale pooled accelerators such as GPUs. The initial FabreX product was launched in 2019.

The B-round was over-subscribed and led by Impact Venture Capital. It included participation from Mark IV Capital, Lagomaj Capital, SK hynix, and Four Palms Ventures.

Alan Benjamin.

Alan Benjamin, President and CEO of GigaIO, issued a statement: “Today, by completing this funding round, we are better positioned to get [our] technology into the hands of more customers and channel partners and to increase traction among commercial and other customers.”

FabreX is said to be the world’s only enterprise-class, universal composable fabric and marketed to HPC and AI-using customers. Workloads run as if they were using components inside one server but harness the power of many nodes, all communicating within one universal fabric. The idea is that composed resources are utilised more efficiently and not stranded inside a server with periods of idleness.

Benjamin said: “Due to the pandemic, hardware testing has been difficult. Since the start of the year however, we’ve been able to get equipment into facilities and the results have been fantastic for us. Our customers are thrilled with the results and impressed by what they can do with the technology, and to be blunt, they’re amazed that we’re getting results that the industry has been striving to achieve for more than a decade.”

FabreX is based on the PCIe bus, and PCIe Gen-4 server systems and storage drives have started appearing this year. PCIe Gen-5, doubling PCIe Gen-4 speed, is on the near horizon, and the CXL bus will be based on it, bringing memory pooling capability.

This technology development will increase FabreX capabilities by increasing network fabric speed and the size of the composable element pool.

GigaIO will use the funding to accelerate sales and marketing efforts. It will expand its market and channel development by recruiting more partners and expanding channel programs.

Google Cloud gains ease of use and enterprise enhancements

Google Cloud is getting reliability, availability and protection additions to its object, file and container functions that make it more firmly enterprise-class and also easier to use.

There are four individual announcements detailed in a blog co-authored by Guru Pangal, GM Storage, and Brian Schwarz, Director Product Management.

They write: “Today, we are adding extensions to our popular Cloud Storage offering, and introducing two new services: Filestore Enterprise, and Backup for Google Kubernetes Engine (GKE). Together, these new capabilities will make it easier for you to protect your data out-of-the box, across a wide variety of applications and use cases.” 

Google announced:

  • Dual-region bucket extension with
    • Custom regions
    • Turbo Replication
  • Backup for GKE
  • Filestore Enterprise

A dual-region bucket is a “single namespace (aka bucket) that spans regions. A dual-region bucket is not a simple load balancer or access point in the network tier sitting on top of two independent buckets. It is a true single namespace bucket, active-active for read/write/delete, which offers some important strong consistency properties. … Google Cloud is unique in offering this capability among major public cloud vendors.”

The custom bit means that, now, customers can select the two regions they want to combine in a dual-region, such as Frankfurt and London (UK), or Los Angeles and Las Vegas.

Turbo Replication provides an optional 15-minute RPO capability which replicates 100 per cent of a customer’s data between regions in 15 minutes or less. This is backed by a Service Level Agreement, and the bloggers claim it is a first from a leading cloud provider.
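For readers who want to try the combination, a minimal sketch using the google-cloud-storage Python client might look like the following. It assumes a recent client version that exposes the bucket recovery point objective (rpo) setting for Turbo Replication; the project and bucket names are placeholders, and the predefined NAM4 dual-region is used rather than a custom pairing.

```python
# Minimal sketch: create a dual-region bucket and opt in to Turbo Replication.
# Assumes a recent google-cloud-storage client that exposes the bucket `rpo`
# setting; project and bucket names are placeholders.
from google.cloud import storage

client = storage.Client(project="my-project")        # placeholder project
bucket = client.bucket("my-dual-region-bucket")       # placeholder bucket name

# "NAM4" is a predefined dual-region (Iowa plus South Carolina); the custom
# region pairs described above are configured in a similar way.
client.create_bucket(bucket, location="NAM4")

# Opt the bucket in to the 15-minute RPO (Turbo Replication) tier.
bucket.rpo = "ASYNC_TURBO"
bucket.patch()

print(bucket.name, bucket.location, bucket.rpo)
```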

Containers backup

Google is introducing Backup for its GKE (Google Kubernetes Engine) and another blog describes this. In fact it is announcing Preview for Backup for GKE, but the service is not ready yet.

It is a cloud-native way to protect, manage, and restore containerised applications and data. Google says it is “the first cloud provider to offer a simple, first-party backup for Kubernetes.”

Customers “can create a backup plan to schedule periodic backups of both application data and GKE cluster state data. [They] can also restore each backup to a cluster in the same region or, alternately, to a cluster in a different region.”

To sign up for the Backup for GKE Preview, customers should reach out to their account team or contact a Google Cloud sales rep. 

Filestore and HA

The new Filestore Enterprise service is a fully managed, cloud-native NFS system that includes high availability (HA) courtesy of synchronous replication across multiple zones in a region.

Customers can provision NFS shares that are seamlessly synchronously replicated across three zones within a region. If a zone fails the other zones take over and there is no service interruption. 

The two Google execs argue that this makes it a good fit for traditional tier-one enterprise applications (such as SAP) that need to share files.

A third blog explains more about it. It says Filestore Enterprise is backed by a Service Level Agreement that delivers 99.99 per cent regional availability.

The Filestore product family now includes:

  • Filestore Basic for file sharing, software development, and GKE workloads;
  • Filestore High Scale for high performance computing (HPC) application requirements such as genome sequencing, and financial-services trading analysis;
  • Filestore Enterprise for critical applications (eg, SAP) and GKE workloads.

This blog reads: “Filestore also lets you take periodic snapshots of the file system and retain a desired number of recovery points. With Filestore, you can easily recover an individual file or an entire file system in less than ten minutes from any of the prior snapshot recovery points.”

This capability could well lessen the appeal of third-party file backup products for Google Cloud file users.

Cisco UCS servers: nowhere to go, except out of Cisco?

Firing Squad painting by Goya.

Cisco’s UCS servers were downplayed in yesterday’s Cisco Investor Day presentation to financial analysts, suggesting servers are not a high priority in Chuck Robbins’s company. So, what next for the UCS server line — disposal?

Update: William Blair analyst Jason Ader’s view added. 17 Sep 2021.

This was Cisco’s first Investor Day since 2017, hence a major event. In his presentation, CEO Chuck Robbins outlined the six key pillars of Cisco’s strategy: 

Robbins slide.

Wells Fargo analyst Aaron Rakers told subscribers: “Cisco … will be adjusting its reportable segments to… :

  • Secure, Agile Networks — Campus Switching, Datacenter Switching, SD-Branch Routing, Compute & Wireless;
  • Hybrid Work — Collaboration & Contact Center solutions;
  • End-to-End Security — SASE + NetSec, Zero Trust, Detection & Response, Application Security;
  • Internet of Future — Routing Optical Networking, Public 5G, Silicon, & Photonics;
  • Optimized Application Experiences — Full Stack Observability and Cloud Native Platforms.

Servers (compute) are included in the Secure, Agile Networks segment. Todd Nightingale, EVP and GM for Enterprise Networking and Cloud, presented on the Secure, Agile Networks topic and confirmed this in one of his slides:

Nightingale slide.

Rakers pointed out that there were “No Meaningful Comments on Cisco’s UCS x86 Servers. One [topic] that was not meaningfully discussed during Mr Nightingale’s presentation was the strategic role/importance of Cisco’s x86 UCS solutions to the overall enterprise datacenter strategy.”

For Rakers, the key points of Nightingale’s pitch were the installed base Catalyst 9000 campus switch upgrade opportunity, the $90 billion core enterprise networking total addressable market, and subscription renewal traction.

William Blair analyst Jason Ader told his subscribers that servers were “completely ignored during the analyst day despite the data center segment being around 7 per cent of product revenue, based on our estimates.”

Servers no longer core

Let’s make a statement: UCS servers are no longer a core part of Cisco’s strategy. Could it exit the business?

Cisco exited the building energy management business in 2011. It disposed of its Home Networking division, including Linksys, to Belkin in 2013. In its fourth fiscal 2020 quarter results, which were poor — nine per cent down year-on-year — Robbins said: “Over the next few quarters, we will be taking out over $1 billion on an annualised basis to reduce our cost structure.”

Its servers don’t make enough revenue for Cisco to be included in IDC’s top five server companies [IDC Worldwide Quarterly Server Tracker, 2021Q1] which were Dell, HPE, Inspur, Lenovo and IBM. Big Blue had the lowest share of the five, at 5.3 per cent of a $20.9 billion market. Enlyft estimated Cisco had a 3.71 per cent market share earlier this year.

The company is under cost pressure, its servers have a low market share, and it has disposed of under-performing, non-core businesses in the past.

We might imagine that both Dell and HPE would like to pick up the UCS business and allied HyperFlex HCI products, and so add to their respective market shares. Dell might be particularly keen as UCS servers form part of its VxBlock converged infrastructure systems, and to have HPE pick up the UCS business would be embarrassing.

It is, we think, quite possible that Cisco could have exited the server business by the end of 2022.

PowerNIC picnic? Dell preparing SmartNIC’d VxRail

We should soon see VxRail hyperconverged systems fitted with CPU-offloading SmartNICs, according to a blog by a Dell exec.

Ihab Tarazi.

Ihab Tarazi is the CTO and SVP at Dell Technologies Networking and Solutions, and looks after technology strategy and roadmap, and next generation products and platforms. He writes: “VxRail will be the first Dell Technologies solution with SmartNIC/DPU technology to launch in the Spring of 2022” due to “the tight integration with VMware and Dell EMC.”

The card could be branded PowerNIC or PowerDPU to fit in with Dell’s “Power” branding.

The SmartNIC/DPU will be based on Project Monterey in which VMware’s ESXi hypervisor runs on an Arm processor fitted to a smart network interface card plugged into a server’s PCIe slot. The VxRail’s system software will provide the automation, orchestration, and lifecycle management of the SmartNIC/DPU firmware and ESXi software.

The SmartNIC card can replace standalone, purpose-built hardware devices in datacenters or in space-constrained edge locations for enterprises and also telcos. He says the SmartNIC can be used for network monitoring, telemetry and observability functions in a distributed zero-trust environment across cloud and edge instances.

Tarazi says the VxRail system will support P4 programming capabilities to aid adding custom features to the card. It will be the first SmartNIC-equipped Dell server system. We expect PowerEdge options to follow close behind. But not its PowerFlex non-vSphere HCI systems — not unless its hypervisor gets ported to a SmartNIC.

Dell’s converged infrastructure systems, the PowerONE and Dell/Cisco VxBlock, are candidates for SmartNIC use but the latter uses Cisco UCS servers and they would house and interoperate with the SmartNIC. That’s not under Dell’s control. PowerONE is, but its system software would need modifying to interoperate with a SmartNIC running ESXi. The benefits would need to be clearly spelt out too.

And we expect HPE servers and HCI systems and also Nutanix systems to be exploring SmartNIC usage as well. They won’t want to gift Dell a free ride if they can help it.

Tintri products refreshed with DDN controllers

DDN-owned Tintri has updated its IntelliFlash all-flash N-Series and hybrid flash/disk H-Series arrays, adding DDN fault-tolerant controllers while reducing the effective capacity levels of the H-Series. This continues its strategy of unifying the hardware used in its DDN and Tintri product ranges.

Update. Additional and corrected information from Tintri added. 21 Sep 2021.

The performance-optimised N-Series is an all-flash NVMe array supporting SAN block (iSCSI, Fibre Channel) and NAS file (NFS, SMB) protocols. The existing three-model range (N5100, N5200, N5800) is replaced by a two-model one (N6100, N6200). The performance and capacity-optimised H-series is a two-tier system with an NVMe flash performance tier and disk capacity tier. It supports the same SAN and NAS protocols as the N-Series.

DDN’s SVP of Products, Dr James Coomer, said: “With the new IntelliFlash N6000 series and enhanced H-Series, DDN has made Enterprise data features and Intelligent Infrastructure and data management solutions accessible for at-scale customers.”

Tintri H-Series with controller on top and expansion unit below.

The H-Series has a two-rack unit controller with 24 NVMe SSD slots. Up to four disk drive expansion trays are supported. By comparing the old and new datasheets we have built a table which shows how both the raw disk drive capacity and effective hybrid (flash+disk) capacity have been reduced:

The previous H6200 scaled out to the 26PB level, but the new one can only manage 20PB. We have asked Tintri why these reductions have been made and a spokesperson said: “The maximum raw and effective capacities referred to above are from an early version of the H-Series datasheet and were calculated based on 18TB HDDs, which were intended to be the highest capacity drives for the H-Series.

“However a decision was made, prior to the H6200 GA, to limit the H-Series configurations to support 8TB and 14TB HDDs only and introduce support for the 18TB drives at a later date. The new H-Series data sheet associated with the Sept. 15 announcement correctly reflects the maximum raw and hybrid capacities (based on 14TB HDDs). Unfortunately, it appears the corrected version of the original H-Series datasheet was not posted at the time of the October 2020 GA, so what is being referred to as the “existing H6200” shows early, incorrect maximum capacities.”

N-Series

Tintri’s announcement talks of the N-Series having an “expanded unified platform, [so] customers managing massive data repositories can increase read and write performance driven by the latest NVMe technology and a combination of zero-impact inline data reduction and patented intelligent caching. Customer applications run faster and respond quicker while reducing complexity and cost in their environments.” But no performance numbers are revealed to back these claims up or put them in context.

Tintri IntelliFlash [controller] head unit.

The N-Series controller has the same 2RU x 24-NVMe SSD slot form factor as the H-Series and a second table built from the old and new data sheets shows the capacity changes:

It looks at first as if the new N-Series has much lower effective capacities than the old ones — but poring over the data sheets revealed major description changes in the capacity area. Both the old and the new series can have SAS expansion shelves. The old N-Series data sheet lists both the maximum raw flash capacity and the maximum expansion raw flash capacity, as well as the maximum effective capacity, not distinguishing between NVMe and SAS.

The new N-Series data sheet changes things, omitting the expansion capacity and only supplying the maximum NVMe effective capacity, thereby making the comparison between the old and new N-Series systems impossible.

A Tintri spokesperson said: “The effective capacities for the N5100/N5200/N5800 systems include SAS SSD expansion shelves. The N6100/N6200 systems do not currently support SAS SSD expansion shelves, hence the lower capacities. However, N-Series support for SAS expansion shelves is planned with an upcoming release.”

Apart from that it seems apparent that the N6100 replaces both the N5100 and N5200, with the N6200 replacing the N5800. We asked Tintri why there is no N6800, and a spokesperson replied: “We are currently evaluating the business justification for the N6800 – in terms of production cost and the required price premium.”

The IntelliFlash N6000 series will be available in Q4 2021. Enhancements to the IntelliFlash H-Series are available now.

Dell EMC Isilon PowerScalisation — new models, ransomware protection and faster backup

Dell EMC racing towards multi-cloud IT

Dell EMC has added new hybrid and archive nodes to its PowerScale filer range, provided ransomware protection services and accelerated PowerScale backup and restores. There are four new hardware products: H700 and H7000 hybrid nodes, and A300 and A3000 nearline/archive nodes.

We have here steady incremental improvements following on from the F900 addition to the all-flash PowerScale/Isilon range in May.

David Noy, Dell’s VP Product Management for Unstructured Data Solutions, wrote in a blog post: “These … enhancements provide more flexible consumption, management, protection and security capabilities to eliminate data silos and help you effectively use unstructured data to innovate with confidence.” 

We’ve tabulated the new and existing systems to show the differences. First, the hybrid series:

It appears the H700 is positioned to supersede the H400 and H500, with the H7000 set to do the same to the H5600. We have not seen throughput numbers for these two new hybrid PowerScale boxes yet, and IOPS have only been revealed for the H600.

Next, the nearline/archive products:

A look at the table indicates that the A300 is positioned to supersede the A200 and the A3000 to replace the A2000. It all seems straightforward enough. We have no performance numbers for these products though.

The four new systems require v9.2.1 of the OneFS operating system or a later version. Dell says this delivers writeable snapshots, faster upgrades, secure boot, HDFS ACL support, and improved data reduction and small file efficiency. 

A new version of the DataIQ management software provides an improved user experience for large scale clusters, UI enhancements for ease of navigation and the ability to run reports to analyse volumes by time stamps.

Data protection

Dell has turned to Superna services and Faction, the US-based MSP, again — this time to add ransomware protection to PowerScale. A Cyber Protection and Recovery solution from Superna for PowerScale now includes hosting the Superna Ransomware Defender solution for multi-cloud deployments within Multi-Cloud Data Services enabled by Faction.

Customers can recover their data from a malware event by using the public cloud. A Superna AirGap Enterprise feature automates the air gap.

A Dynamic NAS Protection function is available with PowerProtect Data Manager v19.9. This, we are told, intelligently and automatically scales to optimise performance, enabling protection and recovery for any NAS that supports NFS or CIFS, including PowerScale, PowerStore and Unity systems.

It provides up to 3x faster backups compared to NDMP backups with Avamar, and up to 2x faster NDMP restore performance with Avamar.

Drew Hills, Infrastructure Analyst, IT Systems, Information Technology, USC Australia, provided a quote about this: “With PowerProtect Data Manager, Dynamic NAS Protection automatically slices shares, filesystems and volumes into multiple streams that run in parallel within the same policy. It also automatically balances and scales across resources, simplifying management while accelerating backups faster than ever before.”
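To make the slicing idea concrete, here is a small illustrative Python sketch that partitions a file tree into several slices and processes them in parallel worker threads. It only illustrates the concept; it is not how PowerProtect Data Manager is implemented, and the share path and slice count are arbitrary.

```python
# Illustration of slicing a NAS share into parallel backup streams.
# Concept sketch only; not PowerProtect Data Manager code. The share path and
# slice count are arbitrary.
import os
from concurrent.futures import ThreadPoolExecutor

SHARE = "/mnt/nas_share"   # hypothetical mounted NFS/SMB share
SLICES = 4                 # number of parallel streams

def walk_files(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            yield os.path.join(dirpath, name)

# Round-robin the files into slices so each stream gets a balanced share of work.
slices = [[] for _ in range(SLICES)]
for i, path in enumerate(walk_files(SHARE)):
    slices[i % SLICES].append(path)

def backup_slice(slice_id, paths):
    # Placeholder for sending this slice's files to the backup target.
    total = sum(os.path.getsize(p) for p in paths)
    return f"stream {slice_id}: {len(paths)} files, {total} bytes"

with ThreadPoolExecutor(max_workers=SLICES) as pool:
    for result in pool.map(backup_slice, range(SLICES), slices):
        print(result)
```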

A Dell webinar on BrightTALK discusses these announcements.