Scale Computing: Idea that customers should run multiple edge compute platforms is ‘absurd’

Hyperconverged system vendor Scale Computing has an edge computing system to sell and wants to define edge computing in its own way.

The Edge Computing concept needs simplifying, and Scale Computing reckons it knows how. Simplification means having a single platform for mission-critical applications.

It argues that edge computing cannot encompass every localised internet of things and other IT activity outside the data center: simple embedded industrial control systems, retail point-of-sale systems, remote-controlled video cameras on building walls, remote office VDI, vehicular and airplane computing from engine management to entertainment screens, and everything in between.

Co-founder and CEO Jeff Ready tells us about it. It turns out it’s mostly, as we understand it, about classic ROBO – remote and branch offices – with IoT added on.

Blocks & Files: What does the term ‘Edge Computing’ mean to you?

Scale Computing co-founder and CEO Jeff Ready.

Jeff Ready: Edge computing is about running mission critical applications outside the data center. Full stop. There are varying use cases, workloads, and needs within that envelope, but at the root of the issue is the need to run applications somewhere other than the data center or cloud.

The definition itself is important as it defines the business need without falling into a trap of buzzwords, widgets, speeds and tech specs.  

Blocks & Files: How do you define ‘mission critical applications’?

Jeff Ready: A mission critical application means, of course, any app that ‘must run’ to make the business successful. High availability is implied, and critical. After all, a downed mission-critical app makes for a bad day; these apps need to run.

‘Mission critical’ does not automatically define exactly how the application will run, in what format it will run, or on what architecture it will run.  

This is where big tech is getting it wrong. Containers don’t matter. Virtual Machines don’t matter. Hyperconvergence doesn’t matter.

These are technologies – all of them useful in some way – but the technologies themselves are not the business objective. A failed containerised application is certainly not better than a running virtual machine, even if containers represent newer technology.

To the business, all that matters is whether the application is online. It’s the business objectives, not the bits and bytes, that matter: the application needs to be online and working correctly.

Blocks & Files: What does ‘outside the data center’ mean for you?

Jeff Ready: When you are outside those data center walls, the world of IT is very different. Things taken for granted in the data center may no longer exist and may not even apply. Reliable, redundant power? Unlikely to be found in the back of a coffee shop. Always-on internet connectivity? Think about how often your home ISP goes down, or you’ve rebooted your router. IT staff who can walk over to a rack of gear and visually see what’s going on? You won’t find that inside your local retail store, or on a utility pole.

Many things in the data center were created for the convenience of the IT professionals who run those data centers. Rack-mounted gear doesn’t make sense if there isn’t a rack. Noise is expected in a data center, but loud servers won’t work if the environment is also somewhere people will be working, relaxing, or conversing.  

Physical size, noise, and power consumption can make a tremendous difference at the edge, even when these things aren’t major considerations in most data centers. 

Blocks & Files: You don’t expect edge locations to have IT admin staff?

Jeff Ready: No, not at all. This is the manageability angle. In the data center, a task that takes an IT administrator 30 seconds isn’t given a second thought. But if that same task needs to be repeated across 5,000 retail locations at 30 seconds each, it’s a huge undertaking.

Further, troubleshooting problems and even basic deployment take on a whole new meaning when dealing with remote edge locations. Spending an hour to deploy a new server in the data center? No big deal. Spending a day or two getting VMware configured in your new environment? Par for the course. But multiply those tasks by hundreds, thousands, or tens of thousands of locations and the situation is wildly out of control.
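To put Ready’s multiplication in concrete terms, here is a quick back-of-the-envelope calculation in Python. The task time and site counts are our own illustrative assumptions, not Scale Computing figures.

# Back-of-the-envelope: how a 30-second admin task grows with the number of sites.
# All figures are illustrative assumptions, not vendor data.
task_seconds = 30                      # one trivial task at one location
for sites in (100, 1_000, 5_000, 20_000):
    hours = sites * task_seconds / 3600
    print(f"{sites:>6} sites: {hours:,.1f} hours of repeated work")
# 5,000 sites at 30 seconds each is roughly 42 hours – more than a working
# week of effort for one administrator, for a single trivial change.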

Blocks & Files: What’s the lesson you draw from this?

Jeff Ready: When you factor in these complexities, the idea that customers should run multiple types of edge infrastructure becomes absurd. One platform for hybrid-cloud containers, a second for legacy applications, a third for IoT device management, a fourth for digital video and analytics, a fifth to run the drones and robots… where does it end?

Blocks & Files: Okay. Then we have no IT-skilled admin at the edge locations but we do need to run different applications. How would you organise it?

Jeff Ready: We need four things: a single platform, self-healing, centralised management, and equipment that doesn’t assume it’s running in a data center.

There should be one platform to run those applications, regardless of whether those are legacy virtual machines, containerised apps, hybrid-cloud apps, IoT controls, etc. Multiple platforms are a management disaster, so one platform for all the apps, regardless of age or deployment technology.

The platform must keep those applications running even when there are hardware and infrastructure software failures, and it must do so without drowning IT in troubleshooting or application management tasks. Self-healing is an absolute requirement, where automation can replace the need for on-site IT staff. 
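As an illustration of what ‘self-healing’ might look like in practice, here is a minimal sketch of a reconciliation loop that notices a failed node and restarts its workload elsewhere, with no on-site intervention. The node and workload names are invented and this is not Scale Computing’s actual implementation – just a generic sketch of the pattern.

# Minimal self-healing sketch: compare desired state with reality and repair it.
# Node and workload names are hypothetical.
desired_state = {"pos-app": "node-1", "camera-analytics": "node-2"}
healthy_nodes = {"node-1", "node-2", "node-3"}

def is_running(workload: str, node: str) -> bool:
    # Placeholder health check; a real platform would probe the node or hypervisor.
    return node in healthy_nodes

def restart_on_healthy_node(workload: str) -> str:
    # Pick any surviving node and (notionally) restart the workload there.
    target = sorted(healthy_nodes)[0]
    print(f"restarting {workload} on {target}")
    return target

def reconcile_once() -> None:
    for workload, node in list(desired_state.items()):
        if not is_running(workload, node):
            desired_state[workload] = restart_on_healthy_node(workload)

healthy_nodes.discard("node-2")   # simulate a hardware failure at the edge site
reconcile_once()                  # camera-analytics is restarted on a surviving node
# A real platform would run this loop continuously, so no IT staff need be on site.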

This automation must be combined with centralised management. The 30-second task inside the data center should still be a 30-second task, and not 30 seconds multiplied by the number of locations. Deploying applications, changing those applications, and managing environments need to be made orders of magnitude easier than they are in a data center environment, because we’re dealing with orders of magnitude more locations. This is a perfect scenario for cloud-based management controlling on-premises environments.
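To illustrate how cloud-based management can keep the 30-second task at 30 seconds regardless of location count, here is a rough sketch in which a change is declared once at the centre and fanned out to every registered site. The site names, endpoints and push_config function are hypothetical, not any vendor’s real API.

# Sketch of centralised fleet management: declare a change once, apply it everywhere.
# Sites, endpoints and the push mechanism are all hypothetical.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    endpoint: str   # where the on-premises cluster would be reached

def push_config(site: Site, change: dict) -> None:
    # A real system would make an authenticated API call to the on-prem cluster;
    # here we just show what would be sent where.
    print(f"applying {change} to {site.name} via {site.endpoint}")

sites = [Site(f"store-{i:04d}", f"https://store-{i:04d}.example/api") for i in range(1, 6)]
change = {"app": "pos-app", "version": "2.4.1"}   # decided once, centrally

for site in sites:
    push_config(site, change)   # the same 30-second decision, fanned out automatically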

The computing hardware used to run the platform must meet the requirements of the non-data center environment it is being deployed into. Size, power, noise, etc. need to match the realities of that non-data center environment. We need the benefits of the data center without attempting to recreate a data center-like environment at the edge. 

Comment

Ready says: “There should be one platform to run those applications, regardless of whether those are legacy virtual machines, containerised apps, hybrid-cloud apps, IoT controls, etc.” Basically, it’s a one-size-fits-all kind of argument, with the single platform as the one size.

Ready narrows the limits of what Edge Computing means (mission-critical outside the data centre) to suit the product Scale has to sell.

We can see there will be benefits – procurement, maintenance, deployment, manageability – from running a single platform across all edge locations, but what happens if the platform is unsuitable for the application?

We’re thinking of a utility-pole, video-cam, industrial-control type of deployment where a high-availability x86 server might be a second-best choice to a smaller ARM-based system. Another example: a vehicle’s engine bay might be so shaky and hot that it fries an x86 platform, whereas a hardened ARM system can survive quite nicely.

Is this mission-critical? It is to the vehicle and a business selling a vehicle-based offering.

We don’t think a one-platform-fits-all argument is valid across the board, because the edge computing board is too wide. Within Scale’s self-set limits, its arguments may well be valid.