The time-travelling CTO from 2005 would be envious, and slightly overwhelmed, by the number of ways in which software can now be hosted and run. Setting up a hosting platform used to be a dull, tedious and expensive process, but it was straightforward and well-understood by any senior technologist worth their salt.
Every couple of years since then, a new technique has arrived to disrupt this previously stable area of IT, and each new innovation trends towards greater convenience for developers and operations staff. But CTOs should be wary: the price of convenience is almost always a loss of control.
Convenience vs control
In this context, ‘convenience’ means:
- Reducing the time and effort required to take software built by your developers and make it accessible to your customers;
- Minimising the time and effort required to maintain that software once it has been deployed; and
- Having scalability and reliability built into your hosting environment rather than having to design and implement it yourself.
And ‘control’ means:
- Your ability to influence the individuals and organisations involved in running your software;
- Your ability to make guarantees to your stakeholders about the performance, reliability and security of your software; and
- Your ability to choose when and how new technologies are integrated into your application.
Virtualisation and Infrastructure-as-a-Service (IaaS)
Just over ten years ago, the options for hosting the software built by your company were limited — one way or another you’d be buying or renting physical servers to run your applications and paying for those servers to be maintained. Then came the widespread adoption of virtualisation, allowing one server to be treated as many software-defined virtual machines (VMs).
Virtualisation was the enabling technology that allowed the cloud-hosting model (AWS, Azure, etc.) to become viable, and the concept of ‘Infrastructure-as-a-Service’ was born. Hosting capacity could be made available and discarded in seconds via an API call, instead of the months taken to procure, install and configure physical servers.
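To make that concrete, here is a minimal sketch of ‘capacity via an API call’, written in Python against boto3, the AWS SDK; the region, AMI ID and instance type below are placeholder values for illustration rather than recommendations.

```python
# Minimal sketch: provisioning and discarding a virtual server via an
# API call, using boto3 (the AWS SDK for Python). The AMI ID is a
# placeholder; real IDs vary by region and change over time.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Request a single small VM. This returns in seconds, where procuring
# a physical server would take weeks or months.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned {instance_id}")

# Discard the capacity just as quickly when it is no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])
```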
IaaS kick-started a revolution in how applications were designed and deployed: it blurred the lines between developers and technical-operations staff to create the new role of DevOps, and led to new tools for defining entire hosting environments and deployment processes in software. Fundamentally, however, most companies were still building applications to be deployed on to dedicated servers (albeit virtual ones) and were still responsible for building and maintaining those new tools and processes.
Platform-as-a-Service (PaaS)
The next level of abstraction away from running your own servers and deployment processes was ‘Platform-as-a-Service’. In this model you package your software and upload it to your PaaS provider, who then takes care of provisioning capacity and deploying your application.
Running on top of a cloud-hosting company’s IaaS system, PaaS platforms remove many of the ‘boilerplate’ tasks involved with building and running a production hosting environment, such as load-balancing and managing OS and security updates. In PaaS environments, application owners have no visibility of the VM or OS layer on which their software is running.
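To illustrate how little the application owner supplies in this model, the entire deployable artefact for a simple service can be little more than the application code itself. Here is a minimal sketch, assuming a Python runtime and the Flask framework; the platform supplies everything beneath it.

```python
# Minimal sketch of a complete PaaS deployable: on a typical platform,
# this file (plus a dependency list and a one-line process declaration)
# is everything you upload. Provisioning, load-balancing, OS patching
# and TLS termination are the platform's responsibility, and invisible
# to you.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from someone else's infrastructure"

if __name__ == "__main__":
    # Locally this runs Flask's development server; in production the
    # platform chooses and manages the process that serves the app.
    app.run()
```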
Function-as-a-Service (FaaS)
The most recently defined layer in the stack of hosting techniques is ‘Function-as-a-Service’ (also often referred to as ‘serverless’). FaaS is conceptually similar to PaaS, but allows for smaller fragments of code to be deployed in a way that allows for self-scaling applications to be built. AWS popularised the concept of FaaS with its own Lambda product, but now third-party platforms such as Serverless provide an additional layer of abstraction.
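To give a sense of the granularity involved, here is a minimal sketch of a single deployable function, using the handler signature AWS Lambda expects from Python code; the event shape shown is an assumption, and the response format follows the shape an HTTP trigger typically expects.

```python
# Minimal sketch of a FaaS deployment unit: one function, using the
# handler signature AWS Lambda expects for Python. There is no server,
# process or framework to manage; the platform invokes the function
# directly in response to each event and scales it for you.
import json

def handler(event, context):
    # 'event' carries the trigger's payload (an HTTP request, a queue
    # message, etc.); its exact shape depends on the event source.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```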
A FaaS platform controls when and how to add capacity to your application by rapidly creating and destroying instances of your code in response to demand. In the FaaS model you no longer have visibility of the runtime environment on which your software runs, let alone the OS, VM or physical server. Today, the stack of technologies between your code and the actual hardware looks a little like this:
- Your code
- FaaS platform
- PaaS layer
- IaaS (virtual machines)
- Hypervisor and host operating system
- Physical servers
A quick note on containers
Those familiar with this subject might argue that ‘containerisation’ (using tools like Docker and Kubernetes) deserves acknowledgement as a major development in hosting, and it has been one. For the purposes of this article, however, we’re treating containerisation as a refinement and intersection of virtualisation and configuration management.
How this affects you as CTO
At the higher levels of abstraction, it is tempting to view ‘hosting’ as just another step in your application’s deployment pipeline, like packaging or testing. As such, it is also tempting to allow decisions around hosting to be developer-led, but that approach is not without risks to you or your company.
It is undeniable that the further up the abstraction stack you go, the more convenient hosting becomes. Most PaaS and FaaS products have tutorials that show you how to get your code running on the Internet in five minutes, and these tutorials are often fair reflections of the reality of using those products. What is equally undeniable is that if something goes wrong further down the stack, you will have little influence in making sure that your application is at the front of the queue when priority calls are being made about recovery.
Although it is almost unheard of now, six or seven years ago there were periods in which AWS suffered multi-day outages of entire availability zones (virtual data centres). If that were to happen again, and your application were running on a PaaS or FaaS platform built on top of a cloud hosting platform that was unavailable, how would you cope?
Your PaaS/FaaS provider may or may not have assigned you a dedicated account executive; you might be limited to chasing them through their ticketing system or forum. If you’re really unlucky, their ticketing system might itself be deployed on their own platform and therefore be unavailable too.
Once you’ve ensured that recovering your application is a priority for your PaaS/FaaS provider, you have exhausted all of your influence. You have no control at all over how they petition their cloud-hosting partner, who is notoriously difficult to influence directly even if you are a big spender, which your PaaS/FaaS provider may not be.
The same scenario could unfold with a major Heartbleed-like security vulnerability or a sustained denial of service attack on the underlying cloud hosting platform. How will you make sure your company’s interests are looked after? In circumstances like these, you will have to return to your customers, board and investors and tell them that the timescale for having your application up and running again is indeterminate and out of your hands.
A reduction in control manifests itself in other, less critical ways. Imagine scenarios where:
- Your developers are trying to track down performance problems but can’t gain access to the underlying OS to install the monitoring tools they need;
- Your developers want to upgrade to the latest version of a language or switch to a new language, but your PaaS/FaaS provider isn’t ready; and/or
- You are trying to sell your company, and the purchaser insists on conducting an audit of your low-level security practices and wants server-level access.
All of this may come across as criticism of the more abstracted hosting models available today, but that’s not the intention. The intention is to reinforce that the decisions you take around hosting have an impact beyond questions of cost and time-to-market, which in the early days of many companies feel all-consuming. The intention is also to remind you that fully delegating decisions around hosting to your development team may be inappropriate, as they may not always consider the wider ramifications of their choices.
Whatever decision you come to, you must be very clear with all of your internal stakeholders that decisions around hosting are important, subtle and not without risk.