Building and running applications on Kubernetes
Hopefully by now some of the following won’t come as too much of a shock, but it’s worth spending a couple of minutes clarifying how applications hang together on a cloud-based infrastructure such as Kubernetes. I should say, especially one like Kubernetes, given that containers are what we’re orchestrating here.
These entities don’t come with the heavyweight baggage of a virtual machine, but neither are they as resilient. Applications that run on containers need to take account of, and plan for, elements of the infrastructure appearing and disappearing from under them. For a static website this doesn’t really matter, but for any application that needs to maintain some idea of state it is crucial. That state cannot be kept within the container itself in a non-volatile way, because containers themselves are volatile: when a container is restarted or rescheduled, anything written to its filesystem goes with it. Data that needs to be kept should be written out to some kind of data store – this might be a regular database (or a managed cloud version of one, like Amazon’s RDS) or a shared data store like Redis, which I’ve mentioned on these pages previously.
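To make that concrete, here is a minimal sketch of how a stateless workload might point at an external store. All the names here are illustrative – it assumes a hypothetical application image and a Redis instance exposed as a Service called `redis` in the same namespace:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0   # hypothetical image
          env:
            - name: REDIS_HOST     # the app reads its state store's location from here
              value: redis         # assumes a Service named "redis" in this namespace
```

The point is that the Pods carry no state of their own: if one disappears, a replacement comes up, reads `REDIS_HOST`, and carries on where its predecessor left off.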
With this kind of architecture, scaling up should be a matter of starting new copies of a service. In theory, at least; the real world has a habit of ruining the utopias imagined by infrastructure architects. Still, these ideas have brought us some of the largest running systems in the world (assuming Google hasn’t stolen a march on everyone again by doing something different in its data centres by now).
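In Kubernetes terms, “starting new copies” is just a matter of changing one number. A sketch, assuming a Deployment named `web` as above:

```yaml
# Bump the replica count and re-apply the manifest, or equivalently run
# `kubectl scale deployment web --replicas=5`.
spec:
  replicas: 5   # Kubernetes starts or stops copies to converge on this number
```

Because the copies are stateless, none of them needs to know the others exist – the shared data store is the only thing that has to cope with the extra traffic, which is usually where the real-world scaling pain turns up.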