
Panasas takes on board new investors in push for growth

In its nineteenth year, the HPC storage supplier Panasas has secured a fresh funding round and given its board a makeover.

The cash infusion is less than $50m and “tranched”, meaning stage payments are dependent upon performance, CEO Faye Pairman told us. The new investors will occupy four of the seven seats on the enlarged board.

The funding comes entirely from two new investors: KEWA Financial, a US insurance company, and Dowroc Partners, a private fund owned by investment professionals with expertise in storage technology, energy, and cloud computing.

Panasas’s last funding in 2013 comprised a $15m private equity investment by Mohr Davidow Ventures and a $25.2m venture capital F-round, with participation from Samsung Ventures, Intel Capital and others.

Enterprise HPC players

So why does Panasas need more money now?

Robert Cihra, partner at Dowroc Partners, said Panasas is a “small company that’s looking to grow… This is an investment in growth.”

The money will fund technology and product development, support expansion into new markets and pay for the exploration of OEM relationships. This comes at a time of growing competition in Panasas’s core market.

General enterprises are driving this competition with their need for HPC-like storage systems that provide fast access to millions, and sometimes billions, of files for big data analytics and machine learning applications.

This technology development has attracted new entrants alongside the classic quintet of Ceph (now Red Hat), Dell EMC’s Isilon, DDN, IBM’s Spectrum Scale and Panasas.

Qumulo and WekaIO with NetApp have also entered the enterprise HPC scale-out file access business.

The propriety of proprietary

Panasas historically shipped its HPC storage software running on its own proprietary arrays until November 2018, when it made the software portable. The file system was updated and the ActiveStor hardware requirement became commodity x86 server-based.

This opens up partnership options such as running the software on other suppliers’ hardware. Jim Donovan, Panasas chief marketing officer, said at the time of the announcement that the move gives “us the opportunity to port PanFS to other companies’ hardware. That’s the potential. We’re not announcing any OEM deals at the moment.”

Blocks & Files thinks Panasas will announce new partnerships fairly quickly. We imagine HPE is a company Panasas will talk to and, looking further afield, Cisco and possibly Lenovo.

All aboard the Panasas…Express?

Panasas CEO Faye Pairman

There is now a seven-member board, compared to the four-member one last year. The board directors are:

  • Faye Pairman, President and CEO
  • Elliot Carpenter, CFO
  • Andre Hakkak – founder and managing partner, White Oak Global Advisors
  • David Wiley, founder and CEO of KEWA – new member
  • Robert Cihra, partner at Dowroc Partners and 20-year Wall Street analyst – new member
  • Jorge Titinger, CEO of Titinger Consulting and ex-SGI CEO – new member
  • Jonathan Lister, VP Global Sales Solutions at LinkedIn – new member

As you can see, the four new members could outvote the three legacy members.

Pairman told us Titinger’s HPC experience and Lister’s sales and business growth experience will help drive Panasas’s roadmap and re-accelerate growth.

VAST Data: The first thing we do, let’s kill all the hard drives

VAST Data wants to direct an extinction event for hard drives.

This is a bold ambition, but VAST Data is a remarkable startup, certainly in the scope of its claims and its use of new technology. Without QLC flash, 3D XPoint, NVMe over Fabrics, data reduction and new metadata management, its product simply would not exist.

The company has accumulated $80m in funding through two VC rounds and officially launches its technology today. But it is already shipping product on the quiet and says customers are cutting multi-million dollar checks for petabytes of storage. Existing customers include Ginkgo Bioworks, General Dynamics Information Technology and Zebra Medical Vision.

It also claims to have earned more revenue in its first 90 days of general availability than any company in IT infrastructure history, including Cohesity, Data Domain, and Rubrik combined.

So what is the fuss all about?

VAST opportunities

This may sound unreal: VAST Data has found a way to collapse all storage tiers onto one. Decoupled compute nodes use NVMe-oF to access 2U databoxes filled with QLC flash data drives plus Optane XPoint metadata and write-staging drives, giving up to 2PB or more of capacity per databox after data reduction.

How do VAST’s claims stack up? They seem plausible but it is, of course, early days. Our subsidiary articles look at the various elements of VAST Data’s story: click on the links below.

Existing storage tiers

Let’s try it from another angle. VAST Data has created an exabyte-capable, single-tier, flash-based capacity store that can cover data storage needs from performance through to archive at a cost similar to or lower than hard disk drives. This equates to about $0.03/GB, rather than the $0.60/GB of dual-port NVMe enterprise SSDs.

Single VAST Data tier

At its heart the technology depends upon a new form of data reduction to turn 600TB of raw flash and 18TB of Optane SSD into 2PB or more of effective capacity. Without this reduction technology, the exabyte scaling doesn’t happen.

In addition, extremely wide data stripes provide for global erasure coding and fast drive recovery. 

There is a shared-everything architecture embodied in separate and independently scalable compute nodes in a loosely-coupled cluster running Universal Filesystem logic. These connect across an NVMe fabric to databoxes (DBOX3 storage nodes), which are dumb boxes filled with flash drives for data and Optane XPoint for metadata.

Exploded view of VAST Data’s DBOX3, showing side-mounted drives. Ruler-format drives would fit quite nicely in this design.

The flash drives have extraordinary endurance – 10 years – because of VAST’s ability to reduce the number of writes; the opposite of write amplification.

The pitch is that VAST’s data reduction and minimised write amplification enable effective flash pricing at capacity disk drive levels, with performance at NVMe SSD levels, for all your data: performance, nearline, analytics, AI/machine learning, and archive.

Benefits

  • With a flat, single-tier, self-protecting data structure there is no need for separate data protection measures.
  • There is no need for data management services for secondary, unstructured data.
  • There is no need for a separate archival disk storage infrastructure.
  • There is no need to tier data between fast, small devices and slow, cheap and deep devices.
  • There is no need to buy, operate, power, cool, manage, support and house such devices.
  • With a single huge data store offering NVMe access and, effectively, parallel access to the data, analytics can be run without any need for a separate distributed Hadoop server infrastructure.


VAST striping and data protection

Illustration of a zebra by Ludolphus

The way data is written and erasure coded is crucial to VAST’s data protection capabilities and centres on striping.

Suppose a QLC drive fails. Its data contents have to be rebuilt using the data stripes on the other drives. Here things get clever. VAST Data’s system has clusters of databoxes with 20-30 drives each. There can be 1,000 databoxes and so 20,000 to 30,000 drives. Because it has so many drives under a global view, VAST uses its own global erasure coding scheme with a low overhead. With a 150+4 stripe, for example, you can lose four drives without losing data and incur just a 2.7 per cent capacity overhead.

VAST says its 150+4 scheme rebuilds three times faster than a smaller stripe, such as 8+2, with classic Reed-Solomon rebuilds.

VAST Data DBOX3 databox.

The stripe erasure code parameters are dynamic. As databoxes are added the stripe lengths can be extended, increasing resilience. We could envisage a 500+10 stripe structure; ten drives could be lost and the overhead would be even lower, at two per cent. This scheme is twice as fast as disk drives at erasure rebuilds.

With a 150+4 structure a rebuild reads from all the surviving drives, but only an amount of data equivalent to a quarter of them. There are specific areas in XPoint for the resiliency scheme, and these tell a compute node to read this part from one drive and that part from another, using locally decodable codes.

Because of these locally decodable codes VAST does not need to read from the entirety of the wide stripe in order to perform a recovery.

So each CPU reads its part of the stripes on the failed drives and recovery is, VAST says, fast. It’s roughly like Point-in-Time recovery from synthetic backups.
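
To put those percentages in context, here is a back-of-the-envelope sketch in Python (our own illustration, not VAST code) of how stripe geometry drives capacity overhead. The quarter-of-the-data rebuild read is VAST’s claim for its locally decodable codes, not something that falls out of the geometry alone.

```python
def parity_overhead(data_drives: int, parity_drives: int) -> float:
    """Capacity overhead of an N+P erasure-coded stripe as a fraction of data capacity."""
    return parity_drives / data_drives

for n, p in [(8, 2), (150, 4), (500, 10)]:
    print(f"{n}+{p}: overhead {parity_overhead(n, p):.1%}, tolerates {p} drive failures")

# 8+2:    overhead 25.0%, tolerates 2 drive failures
# 150+4:  overhead 2.7%,  tolerates 4 drive failures
# 500+10: overhead 2.0%,  tolerates 10 drive failures
```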

Next

Move on and explore more parts of the VAST Data storage universe:

VAST Data’s business sitrep

VAST Data was founded in 2016 by CEO Renen Hallak, an ex-VP for R&D at XtremIO and its first engineer. Before that he did computer science research on computational complexity, which seems appropriate.

He left XtremIO in 2015 and had a round of talks with potential customers and storage companies, talking about storage needs and technology roadmaps before founding VAST Data.

Hallak’s key insight was that customers did not want tiered storage systems but had to use them because flash storage was too expensive and not capacious enough, while disk was too slow.

He came round to thinking that QLC (4 bits/cell) flash and better data reduction could provide the capacity and XPoint the fast write persistence and metadata storage. NVMe over fabrics could provide a base for decoupled compute nodes running storage controller-type logic, accessing storage drives fast, and enabling huge scalability.

VAST Data founder and CEO Renen Hallak

Ex-Dell EMC exec Mike Wing is VAST Data’s president and owns sales. He spent 15 years at EMC and then Dell EMC, running the XtremIO business and packaging VMAX and Unity all-flash systems.

Shachar Fienblit is the VP for R&D, having been CTO at all-flash array vendor Kaminario. Jeff Denworth is VP for Product Management and joined from CTERA. Analyst Howard Marks, the DeepStorage.net guy, is a tech evangelist; it’s his first full-time post since 1987.

VAST Data’s R&D is in Israel. Everything else is in the USA, including the New York headquarters and mostly on the East Coast, because that’s where most of the target customers are to be found. There is a support office in Silicon Valley.

It received $40m in A-round funding in 2017 and has just raised $40m in a Series B round led by TPG Growth, with participation from existing investors including Norwest Venture Partners, Dell Technologies Capital, 83 North and Goldman Sachs. Total funding is $80m.

Now VAST has come out of stealth, confident it has world-beating storage technology. 

It sells its product three ways:

  • As a 2U quad server appliance, with 8 x 50Gbit/s Ethernet links, running VAST containers, presenting NFS and S3 out to client servers, and the databox enclosure,
  • As a Docker container image and databox enclosures, with the host servers running apps as well as the VAST containers,
  • As a Docker container, meaning software-only.

The last point means that VAST could run in the public cloud, were the public cloud able to configure databoxes in the way VAST needs.

Every startup loves its technology baby and is convinced it is the most beautiful new child in the world. Hallak and his team are certain VAST will become a vast business.

If they are right, VAST will not just be an extinction event for hard disk drives but also a threat to virtually all existing hardware and software storage companies. Can its tech live up to this potential, or will it, like other startups, be a shooting star with a bright but brief burst of light before burning up? We’ll see. It’s going to be an interesting ride, that’s for sure.

Next

Explore more parts of the VAST Data storage universe:

VAST Data’s Universal filesystem

The VAST Data system employs a Universal Storage filesystem. This has a DASE (Disaggregated Shared Everything) datastore which is byte-granular, thin-provisioned and sharded, and has an infinitely-scalable global namespace.

It involves the compute nodes being responsible for so-called element stores.

These element stores use V-tree metadata structures, seven layers deep, with each layer 512 times larger than the one above it. This V-tree structure is capable of supporting 100 trillion objects. The data structures are self-describing with regard to lock and snapshot states and directories.

VAST Data tech evangelist Howard Marks said: “Since the V-trees are shallow finding a specific metadata item is 7 or fewer redirection steps through the V-Tree. That means no persistent state in the compute-node (small hash table built at boot) [and] adding capacity adds V-Trees for scale.”

The colour wheel in the server is the consistent hash table in the compute-node that provides the first set of pointers to the blue, cyan and magenta V-trees in the data-nodes.

There is a consistent hash in each compute node which tells that compute node which V-tree to use to locate a data object.

The 7-layer depth means V-trees are broad and shallow. Marks says: “More conventional B-tree, or worse, systems need tens or hundreds of redirections to find an object. That means the B-Tree has to be in memory, not across an NVMe-oF fabric, for fast traversal, and the controllers have to maintain coherent state across the cluster. Since we only do a few I/Os we can afford the network hop to keep state in persistent memory where it belongs.”
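
To make the shape of that concrete, here is a minimal sketch in Python (our illustration, not VAST code): a consistent-hash-style step picks a V-tree, and a lookup then takes at most seven redirections because each level fans out 512 ways.

```python
import hashlib

FANOUT = 512   # each layer is 512 times larger than the one above, per VAST
DEPTH = 7      # seven layers deep

print(f"Leaf slots per tree: {FANOUT ** DEPTH:.2e}")   # ~9.2e18 addressable slots

def pick_tree(object_id: str, num_trees: int) -> int:
    """Map an object ID to one of the V-trees. A real consistent hash ring
    minimises remapping when trees are added; modulo just shows the routing idea."""
    digest = hashlib.sha256(object_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_trees

def descent_path(leaf_slot: int) -> list:
    """The child index chosen at each of the seven levels - the 'seven or fewer
    redirections' needed to reach a metadata leaf."""
    path = []
    for _ in range(DEPTH):
        path.append(leaf_slot % FANOUT)
        leaf_slot //= FANOUT
    return list(reversed(path))

print(pick_tree("bucket/file-000123", num_trees=64))
print(descent_path(123_456_789_012))
```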

V-trees are shared-everything and there is no need for cross-talk between the compute nodes. Jeff Denworth, product management VP, said: “Global access to the trees and transactional data structures enable a global namespace without the need of cache coherence in the servers or cross talk. Safer, cheaper, simpler and more scalable this way.”

There is no need for a lock manager either as lock state is read from XPoint.

A global flash translation system is optimised for QLC flash with four attributes:

  1. Indirect-on-write system writing full QLC erase blocks which avoids triggering device-level garbage collection, 
  2. 3D XPoint buffering ensures full-stripe writes to eliminate flash wear caused by read-modify-write operations,
  3. Universal wear levelling amortises write endurance to work toward the average of overwrites when consolidating long-term and short-term data,
  4. Predictive data placement to avoid amplifying writes after an application has persisted data.

The split between long-term and short-term data enables long-term data, with low rewrite potential, to be written to erase blocks with little endurance left, while short-term data with higher rewrite potential goes to blocks with more write cycles remaining.
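
As a rough illustration of that placement idea (our own sketch, not VAST’s implementation), the allocator can be thought of as matching expected data lifetime against the remaining programme/erase budget of each erase block:

```python
from dataclasses import dataclass

@dataclass
class EraseBlock:
    block_id: int
    pe_cycles_left: int   # remaining programme/erase cycles

def place(blocks: list, expected_lifetime: str) -> EraseBlock:
    """Toy placement policy: long-lived (cold) data, which is unlikely to be
    rewritten, goes to the most worn blocks; short-lived (hot) data goes to
    blocks with the most endurance left."""
    worn_first = sorted(blocks, key=lambda b: b.pe_cycles_left)
    return worn_first[0] if expected_lifetime == "long" else worn_first[-1]

blocks = [EraseBlock(0, 120), EraseBlock(1, 900), EraseBlock(2, 40)]
print(place(blocks, "long").block_id)    # 2 - most worn block gets cold data
print(place(blocks, "short").block_id)   # 1 - freshest block gets hot data
```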

VAST buys the cheapest QLC SSDs it can find for its databoxes as it does not need any device-level garbage collection and wear-levelling, carrying out these functions itself.

VAST guarantees its QLC drives will last for 10 years, thanks to these ways of reducing write amplification. That lowers the TCO.
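
The endurance claim can be sanity-checked with standard TBW-style arithmetic. The sketch below uses illustrative figures we have assumed, not VAST’s published numbers; the point is just how strongly the write amplification factor moves the result.

```python
def drive_lifetime_years(capacity_tb, rated_pe_cycles, host_writes_tb_per_day, waf):
    """Back-of-the-envelope endurance: total writable data (capacity x P/E cycles)
    divided by physical writes per year (host writes x write amplification factor)."""
    return (capacity_tb * rated_pe_cycles) / (host_writes_tb_per_day * waf * 365)

# Illustrative 15.36TB QLC drive rated at ~1,000 P/E cycles, seeing 2TB/day of host writes.
print(f"WAF 4.0: {drive_lifetime_years(15.36, 1000, 2.0, 4.0):.1f} years")  # ~5.3 years
print(f"WAF 1.1: {drive_lifetime_years(15.36, 1000, 2.0, 1.1):.1f} years")  # ~19.1 years
```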

Remote Sites

NVMe-oF is not a wide-area network protocol as it is LAN distance-limited. That means distributed VAST Data sites need some form of data replication to keep them in sync. The good news, Blocks & Files envisages, is that only metadata and reduced data need to be sent over the wire to remote sites. VAST Data confirms that replication is on the short-term roadmap.

Next

Explore more aspects of the VAST Data storage universe:

VAST decouples compute and storage

Separate scaling of compute and data is central to VAST’s performance claims. Here is a top-level explanation of how it is achieved.

In VAST’s scheme there are compute nodes and databoxes – high-availability DF-5615 NVMe JBOFs, connected by NVMe-oF running across Ethernet or InfiniBand. The x86 servers run VAST Universal File System (UFS) software packaged in Docker containers. The 2U databoxes are dumb, and contain 18 x 960GB of Optane 3D XPoint memory and 583TB of NVMe QLC (4bits/cell) SSD (38 x 15.36TB; actually multiple M.2 drives in U.2 carriers). Data is striped across the databoxes and their drives.

The XPoint is not a tier. It is used for storing metadata and as an incoming write buffer.

The QLC flash does not need a huge amount of over-provisioning to extend its endurance because VAST’s software reduces write amplification and data is not rewritten that much.

All writes are atomic and persisted in XPoint; there is no DRAM cache. Compute nodes are stateless and see all the storage in the databoxes. If one compute node fails another can pick up its work with no data loss.

There can be up to 10,000 compute nodes in a loosely-coupled cluster and 1,000 databoxes in the first release of the software, although these limits are, we imagine, arbitrary.

A 600TB databox is part of a global namespace and there is global compression and protection. It provides up to 2PB of effective capacity and, therefore, 1,000 of them provide 2EB of effective capacity.
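
The scaling arithmetic behind that is simple enough to restate; note that the roughly 3.3:1 reduction ratio is implied by the 600TB-raw-to-2PB-effective figures rather than quoted directly by VAST.

```python
RAW_FLASH_TB_PER_DATABOX = 600        # ~38 x 15.36TB QLC per 2U databox
EFFECTIVE_TB_PER_DATABOX = 2_000      # "up to 2PB" after data reduction
MAX_DATABOXES = 1_000                 # first-release cluster limit

implied_reduction = EFFECTIVE_TB_PER_DATABOX / RAW_FLASH_TB_PER_DATABOX
cluster_effective_eb = EFFECTIVE_TB_PER_DATABOX * MAX_DATABOXES / 1_000_000

print(f"Implied data reduction ratio: {implied_reduction:.1f}:1")    # ~3.3:1
print(f"Cluster effective capacity:   {cluster_effective_eb:.0f}EB") # 2EB
```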

The compute nodes present either NFS v3 file access to applications or S3 object access, or both simultaneously. A third access protocol may be added; possibly this could be SMB. Below this protocol layer the software manages a POSIX-compliant Element Store containing self-describing data structures, with the databoxes accessed across the NVMe fabric running on 100Gbit/s Ethernet or InfiniBand.

Compute nodes can be pooled so as to deliver storage resources at different qualities of service out to clients. 

Next

Explore other parts of the VAST Data storage universe:

VAST data reduction

VAST Data’s system depends upon data reduction technology which discovers and exploits patterns of data similarity across a global namespace, at a level of granularity 4,000 to 128,000 times smaller than today’s deduplication approaches.

Here, in outline, is how it works.

A hashing function is applied to fingerprint each GB-sized block of data being written, and this measures, in some way, its similarity to other blocks – the distance between them in terms of byte-level contents.

When blocks are similar – near together, as it were, with few byte differences – one block, let’s call it the master block, can be stored raw. Similar blocks are then stored only as their differences from the master block.

The differences are stored in a way roughly analogous to incremental backups and the original full backup.

The more data blocks there are in the system, the more chances there are of finding similar blocks to incoming ones.
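
In spirit, similarity reduction looks something like the heavily simplified sketch below (our own illustration; VAST has not published its algorithm): group blocks by a similarity fingerprint, keep one master block per group stored raw, and store the rest as byte deltas against their master.

```python
import hashlib

def similarity_fingerprint(block: bytes, sample: int = 64) -> str:
    """Toy stand-in for a similarity hash: fingerprint only a sample of the block,
    so near-identical blocks land in the same group. Real similarity hashing is
    far more sophisticated than this."""
    return hashlib.sha256(block[:sample]).hexdigest()

def delta(master: bytes, block: bytes) -> list:
    """Record (offset, new_byte) pairs where the block differs from its master."""
    return [(i, b) for i, (a, b) in enumerate(zip(master, block)) if a != b]

incoming_blocks = [
    b"A" * 4096,                 # first of its kind - becomes a master block
    b"A" * 4095 + b"B",          # nearly identical - stored as a one-byte delta
    b"C" * 4096,                 # dissimilar - becomes a new master block
]

masters: dict = {}               # fingerprint -> master block stored raw
reduced = []                     # what actually gets written to flash

for block in incoming_blocks:
    fp = similarity_fingerprint(block)
    if fp not in masters:
        masters[fp] = block
        reduced.append(("raw", fp, len(block)))
    else:
        reduced.append(("delta", fp, len(delta(masters[fp], block))))

print(reduced)   # [('raw', ..., 4096), ('delta', ..., 1), ('raw', ..., 4096)]
```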

Once data is written subsequent reads are serviced within 1ms using locally decodable compression algorithms.

Jeff Denworth, VAST Data product management VP, says that some customers with Commvault software, which dedupes and compresses backups, have seen a further 5 to 7 times more data reduction after storing these backups on VAST’s system. VAST’s technology compounds the Commvault reduction and will, presumably, work with any other data-reducing software, such as Veritas’ Backup Exec.

If Commvault reduces at 5:1 and VAST Data reduces that at 5:1 again, then the net reduction for the source data involved is 25:1.
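
The compounding arithmetic is simply multiplicative, as this quick check shows:

```python
def compound(*ratios: float) -> float:
    """Net reduction from reducers applied in sequence is the product of their ratios."""
    result = 1.0
    for r in ratios:
        result *= r
    return result

print(f"{compound(5, 5):.0f}:1")   # 25:1, per the example above
print(f"{compound(5, 7):.0f}:1")   # 35:1 at the top of the quoted 5-7x range
```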

Obviously reduction mileage will vary with the source data type.


Smartphones get NVMe for faster flash card data access

The SD Association is adopting the NVMe protocol to speed data access to add-in tablet and phone flash cards.

The industry standards-setting group of around 900 companies has agreed the microSD Express standard, SD v7.1, to link external flash storage cards to a phone or tablet’s processor and memory. This uses the PCIe 3.1 bus and NVMe v1.3 protocol, enabling transfers at up to 985MB/sec.

The new format removes a storage bottleneck that slows performance as mobile and IoT device data usage increases.

MicroSD Express card format branding

PCIe 3.1 has low power sub-states (L1.1, L1.2) enabling low power implementations of SD Express for the mobile market. The microSD Express cards should transfer data faster than previous microSD formats while using less electricity, thus improving battery life.

By way of comparison, Western Digital’s latest embedded flash drive for phones and tablets, the MC EU511, runs at 780MB/sec, making microSD Express about 26 per cent faster.

The EU511 can store a 4.5GB movie, compressed to 2.7GB, in 3.6 seconds. MicroSD Express would write that amount of data to its card in 2.7 seconds, courtesy of lower latency and higher bandwidth.
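
Those numbers are consistent with the raw link rates. A quick check (our arithmetic, using the 750MB/sec write rate WD quotes for the EU511 in the next article): a PCIe 3.x lane runs at 8GT/s with 128b/130b encoding, which is where the 985MB/sec ceiling comes from.

```python
# PCIe 3.x single lane: 8 GT/s with 128b/130b encoding
lane_mb_per_sec = 8e9 * (128 / 130) / 8 / 1e6
print(f"PCIe 3.x x1 bandwidth: {lane_mb_per_sec:.0f}MB/sec")   # ~985MB/sec

movie_mb = 2_700   # 4.5GB movie compressed to 2.7GB
for name, rate_mb_per_sec in [("microSD Express", 985), ("iNAND MC EU511", 750)]:
    print(f"{name}: {movie_mb / rate_mb_per_sec:.1f} seconds")  # 2.7s vs 3.6s
```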

There are three varieties of microSD Express:

  • microSDHC Express – SD High Capacity – more than 2GB up to 32GB,
  • microSDXC Express – SD eXtended Capacity – more than 32GB up to 2TB,
  • microSDUC Express – SD Ultra Capacity – more than 2TB up to 128TB.

The SD Association upgraded its SD card standard to SD Express in June 2018, bringing PCIe and NVMe to that card format, but this was not taken up by SD card suppliers.

MicroSD cards are the smallest version of the three SD card physical size options:

  • Standard: 32.0 × 24.0 × 2.1 mm,
  • Mini: 21.5 × 20.0 × 1.4 mm,
  • Micro: 15.0 × 11.0 × 1.0 mm.

The larger SD flash memory cards for portable devices, including phones, are much less popular than microSD cards.

The microSD Express card format is backwards-compatible with microSD card slots, but older devices are unable to use the NVMe speeds. The new format is in place to adopt faster PCIe standards as they arrive, such as PCIe gen 4 with 2GB/sec raw bandwidth, and PCIe gen 5 with 4GB/sec.

An SDA white paper discusses the new standard if you want to find out more.

WD smartphone flash drive stores two hour movie in under four seconds

Western Digital is sampling a smartphone flash drive that can store a two-hour movie in 3.6 seconds.

The embedded flash drive writes data at up to 750MB/sec and goes by the name of the iNAND MC EU511. The 3.6-second write time quoted for a two-hour movie is for a 4.5GB video file coming into the phone across a 5G link and using H.266 compression to reduce the data to a 2.7GB file.

The device uses WD and Toshiba’s 96-layer 3D NAND formatted with an SLC (1bit/cell) buffer. WD calls this SmartSLC Generation 6.

The drive supports Universal Flash Storage (UFS) 3.0 Gear 4/2 Lane specifications and is intended for use in all smart mobile devices.

Capacities range from 64GB, through 128GB and 256GB, to 512GB. Its package size is 11.5x13x1.0mm. The drive uses TLC (3bits/cell) apart from the SLC buffer.

The EU511 provides a top end to WD’s existing pair of EFD drives, the iNAND MC EU311 and EU321, which max out at 256GB and support UFS 2.1 Gear3 2-Lane.

My desktop with a broadband internet link takes 20-30 minutes to download a 4.5GB movie. Reducing this to less than four seconds sounds like a fantasy, and may not be possible given the landline phone infrastructure underlying the broadband link. Mobile wireless connectivity is set to overtake landline-based internet access where there is no end-to-end fibre connection.

Goodbye landline and, if it’s a broadband internet access bottleneck, good riddance too.

Western Digital is shipping MC EU511 samples to prospective customers.

Nutanix kicks Buckets into life

Nutanix is within a month of launching Buckets, a hyperconverged infrastructure, S3-compatible object service, according to our sources.

The technology gives the company a full house of storage services for VM, file, block and object.

In November 2017 Nutanix said it would develop its Acropolis Object Storage Service, based on data containers called buckets. Object services would form part of an Enterprise Cloud OS (ECOS) and sit alongside Virtual Machine and Block services and File Services.

ECOS is a suite of services for Nutanix hyperconverged infrastructure (HCI) software that runs across on-premises servers to form private clouds, and public clouds such as Amazon, Azure and Google Cloud Platform. Example services are the Acropolis Compute Cloud, Calm, a multi-cloud application automation and orchestration offering, and various Xi Cloud Services such as Xi Leap disaster recovery.

Nutanix’s complete set of top-level storage services.

Files has been delivered and Buckets is weeks away from launch.

Object storage

Buckets stores large unstructured datasets for use in applications such as big data analytics, archiving, compliance, database backups, data warehousing and devops. WORM (write once; read many) buckets are supported.

Billions of objects can be stored in buckets, which exist in a single global namespace that works across on-premises data centres (Nutanix clusters) and regions within multiple public clouds. Objects can be replicated between and tiered across clouds, and the service handles petabytes of data, with objects up to TB-sized.

Erasure coding means there’s no hole in a Nutanix bucket.

Access is via an S3-compatible REST API. The data in the buckets is deduplicated, compressed and protected with erasure coding. The service is natively scale-out: add more Nutanix compute nodes or storage capacity as needed.
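
Because the API is S3-compatible, standard S3 tooling should work against it. Here is a minimal sketch using the AWS SDK for Python; the endpoint URL and credentials are placeholders of our own, not Nutanix-documented values.

```python
import boto3

# Hypothetical endpoint and credentials, for illustration only.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example-nutanix-cluster.local",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="db/2019-02-25.dump", Body=b"backup bytes")

for obj in s3.list_objects_v2(Bucket="backups").get("Contents", []):
    print(obj["Key"], obj["Size"])
```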

There is an Object Volume Manager and it handles:

  • Front end adapter for the S3 interface, REST API and client endpoint,
  • Object controller acting as the data management layer to interface to AOS and coordinate with the metadata service,
  • Metadata services with management layer and key-value store,
  • Atlas controller for lifecycle management, audits, and background maintenance activities.

Nutanix says Buckets is operationally easy to set up and use, talking of single click simplicity via its Prism management tool. Prism is said to use machine learning to help with task automation and resource optimisation.

We understand that Buckets will be consumable in two ways, either as Nutanix hyperconverged clusters with apps in VMs running alongside or as dedicated clusters.

Nutanix engineer Sharon Santana discusses Nutanix’s coming object service.

Check out a Nutanix video about its object storage service here and a data sheet here.

Nutanix’s object storage competitors include Caringo, Cloudian, DDN, Scality, SwiftStack, IBM, HDS and NetApp.


Trust the public cloud Big Three to make non-volatile storage volatile

AWS and Google Cloud virtual machine instances – and as of this month, Azure’s – have NVMe flash drive performance, but be warned: drive contents are wiped when the VMs are killed.

NVMe-connected flash drives can be accessed substantially faster than SSDs with SAS or SATA interfaces.

The Azure drives – which have been generally available since the beginning of February – are 1.92TB M.2 SSDs, locally attached to virtual machines, like direct-attached storage (DAS). There are five varieties of these Lsv2 VMs, with 8 to 80 virtual CPUs.

The temp disk is an SCSI drive for OS paging/swap file use.

The stored data on the NVMe flash drives, laughably called disks in the Azure documentation, is wiped if you power down or de-allocate the VM, quite unlike what happens in an on-premises data centre. In other words Azure turns non-volatile storage into volatile storage – quite a trick.

That suggests you’ll need a high-availability/failover setup to prevent data loss and a strategy for dealing with your data when you turn these VMs off.

AWS and GCP

Seven AWS EC2 instance types support NVMe SSDs – C5d, I3, F1, M5d, p3dn.24xlarge, R5d and z1d.

They are ephemeral, like the Azure drives. AWS said: “The data on an SSD instance volume persists only for the life of its associated instance.”

Google Cloud Platform also offers NVMe-accelerated SSD storage, with 375GB capacity and a maximum of eight per instance. The GCP documentation warns intrepid users: “The performance gains from local SSDs require certain trade-offs in availability, durability, and flexibility. Because of these trade-offs, local SSD storage is not automatically replicated and all data on the local SSD may be lost if the instance terminates for any reason.”

All three cloud providers treat the SSDs as scratch volumes, so you have to preserve the data on them yourself once it has been loaded and processed.
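
A minimal sketch of that preservation step, assuming an AWS instance with a hypothetical scratch mount point and destination bucket (our illustration; in practice you might prefer the provider’s own snapshot or managed-disk services):

```python
import pathlib
import boto3

SCRATCH = pathlib.Path("/mnt/nvme-scratch")   # hypothetical local NVMe mount point
s3 = boto3.client("s3")

def drain_scratch(bucket: str, prefix: str) -> None:
    """Copy everything on the ephemeral NVMe volume to durable object storage
    before the instance is stopped or deallocated."""
    for path in SCRATCH.rglob("*"):
        if path.is_file():
            key = f"{prefix}/{path.relative_to(SCRATCH)}"
            s3.upload_file(str(path), bucket, key)

drain_scratch("my-durable-results", "run-2019-02-25")
```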

This article was first published on The Register.

Lenovo moves HCI partnerships closer to the edge

Lenovo has teamed up with Pivot3 and Scale Computing for its HCI needs, and is aiming full tilt at edge computing with a new secure SE350 server.

The ThinkSystem SE350 is the first of a family of edge servers. It is a half-width, short-depth, 1U server with a Xeon-D processor, up to 16 cores, 256GB of RAM, and 16TB of internal solid-state storage.

Which way is up?

How secure is the SE350? “You touch it and it self-destructs the data,” Wilfrid Sotolongo, VP for IoT at Lenovo’s Data Centre Group said.

It can be hung on a wall, stacked on a shelf, or mounted in a rack. This server has a zero-touch deployment system, is tamper-resistant and data is encrypted. 

Lenovo ThinkSystem SE350

Networking options include wired 10/100Mb/1GbitE, 1GbitE SFP, 10GBASE-T, and 10GbitE SFP+, secure wireless Wi-Fi and cellular LTE connections. 

Expect to see this in Lenovo IoT hyperconverged systems using Pivot3 and Scale Computing software.

Its partnership with Pivot3 is aimed at video security and surveillance use cases in smart city and safe campus deployments. For instance, a Middle East customer used the combo with SecureTech to secure a sprawling hotel complex, the company said.

Lenovo loves edge computing

Lenovo quotes Gartner numbers showing that about 10 per cent of enterprise-generated data is created and processed today outside a centralised data centre or public cloud. By 2022 that will jump to 75 per cent, creating a much bigger edge market.

Lenovo positions its Scale Computing hookup for retail HCI with customers such as Delhaize that deploy mini-data centres. It provides edge infrastructure that can run IT and IoT workloads, and detect and correct infrastructure errors to maximise application uptime. The technology can be managed remotely or on-site by non-IT teams.

Lenovo has a third card to play. It is working with VMware on Project Dimension – on-premises VMware Cloud. The idea here is to provide infrastructure-as-a-service to locations on-premises, particularly those on the edge.

Lenovo demonstrates IoT and edge solutions next week at Mobile World Congress in Barcelona.