Wednesday, October 14, 2015

Container Patterns

Containers can be deployed in several ways, depending on goals and usage scenarios. Let's look at typical patterns and view them in the light of z Systems:





a) Scale up. Have one or a few large Linux images, e.g. in an LPAR, and cram as many containers as possible into each host. This allows for maximum resource utilization (CPU, memory), since the Linux kernel can hand all available cycles to whichever containers need them. Think of a large PaaS platform where you simply want high efficiency.
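As a minimal sketch of this pattern (assuming a Docker host and a hypothetical image name "myapp"): start many containers on one large Linux image without hard partitioning, and optionally use soft CPU weights rather than caps, so the kernel scheduler keeps utilization high.

```shell
# Scale-up sketch: many containers, one big Linux host.
# "myapp" is a placeholder image name, not from the original post.
for i in $(seq 1 50); do
    docker run -d --name "myapp-$i" myapp
done

# --cpu-shares is a soft, proportional weight: it only takes effect
# under CPU contention, so idle cycles still flow to busy containers.
docker run -d --name myapp-prio --cpu-shares 2048 myapp
```

The point of the soft weight is that no cycles are reserved and left idle; the host stays fully utilized while relative priorities are still honored under load.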





b) Scale out using many virtual servers. The hypervisor can provide high utilization of all resources, but also allows for well-defined resource allocation at virtual-machine granularity. Second-level virtualization can provide tenant isolation as well as a clean separation of tiers. Scaling up a tier can be achieved by adding containers or by dedicating more resources to a virtual machine -- providing a kind of QoS management.
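A sketch of the per-tier variant, assuming one virtual server per tier and a hypothetical "webtier" image: inside the VM, containers get explicit limits so the VM's hypervisor-assigned share is subdivided predictably; scaling the tier then happens at either level.

```shell
# Inside the web-tier virtual machine: run the tier with explicit,
# well-defined resource limits ("webtier" is a placeholder image name).
docker run -d --name web-1 --memory 1g --cpu-shares 1024 webtier
docker run -d --name web-2 --memory 1g --cpu-shares 1024 webtier

# Scaling this tier up means either adding another container here ...
docker run -d --name web-3 --memory 1g --cpu-shares 1024 webtier

# ... or giving the whole virtual machine more resources from the
# hypervisor, e.g. raising a z/VM guest's relative CPU share:
#   CP SET SHARE <guest> RELATIVE 300
```

Which knob to turn is a QoS decision: container limits manage contention within a tier, while the hypervisor share manages contention between tenants or tiers.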






c) The group model. Roughly the orthogonal approach to b), this scheme groups containers into identical chunks. Each group comes with all the components required for the solution. Scaling for larger capacity can simply be achieved by spinning up additional groups -- either in the same Linux host or a different one. Virtualization can be added independently for tenant isolation or availability reasons.

This is how Kubernetes (which calls such a group a "pod") and docker-compose (which calls it an "application") work. Depending on the orchestration, scaling out an individual component of the group is very simple.
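A sketch of the group model with docker-compose (2015-era v1 file format; the "webfront" and "appserver" image names are hypothetical placeholders): one file defines the whole group, and a single component can be scaled independently.

```shell
# Define the group: all components of the solution in one file.
cat > docker-compose.yml <<'EOF'
web:
  image: webfront
  links:
    - app
app:
  image: appserver
EOF

# Bring up one complete group ...
docker-compose up -d

# ... and scale out just one component of the group independently:
docker-compose scale app=3
```

Spinning up an additional complete group on another host is then just a matter of copying the same file there and running `docker-compose up -d` again.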


In all these cases, scaling through containers helps applications with limited software scalability: running many smaller instances keeps each one in the efficient range of its n-way (SMP scaling) curve, while the Linux host or hypervisor scales out underneath and exploits the efficiency of large systems.

Do you see other dominant deployment models? Looking forward to your comments.

Thanks to Michael H, Stefan R, Andreas S, Dominic R and others for inspiring discussions triggering this writeup.
