
The Container Colloquialism Translator for NetOps

Lori MacVittie
Published July 12, 2018

There are days when the jargon coming out of container land makes your head spin. Each new capability or feature offered by related solutions – service mesh, orchestrators, registries – seems to mandate a new term or phrase. That phrase often makes sense to DevOps but evokes a squinty, confused expression from NetOps.

Kind of like the one you make when I ask where the nearest bubbler is. You call it a water fountain. In Wisconsin, we call it a bubbler. Same thing, different term.

It turns out that a lot of the ‘new’ capabilities and features related to scaling containers internally and in multi-cloud scenarios are really just water fountains that DevOps is calling a bubbler. This clash of colloquialisms can cause friction with NetOps as containers continue to move into the mainstream unabated. Even if container clusters maintain an isolated, mini-cloud-like existence in production, there are still points of contact with the corporate network over which NetOps continues to reign. And invariably NetOps and DevOps are going to have to work together to scale those clusters securely in a multi-cloud world.


Ingress Controller

  • As has been noted, the term “ingress controller” puts a fresh coat of paint on layer 7 load balancing (content switching, content routing, etc.). An ingress controller is a layer 7 (HTTP)-aware proxy that routes requests to the appropriate resource inside a container cluster. The significant difference between the HTTP proxies used by NetOps in the network and those serving as entry points to container clusters is that an ingress controller must be container aware. By ‘container aware’ I mean that it is configured and managed automatically based on changes taking place in the container environment – particularly the resource file that describes how the ingress should route incoming requests, as the sketch below illustrates.
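
To make ‘container aware’ concrete, here is a minimal sketch (Python, with hypothetical hostnames and service names) of the layer 7 routing an ingress controller performs once it has read the routing rules from the cluster’s ingress resource. A real ingress controller would watch the orchestrator’s API and rebuild this table automatically whenever that resource changes.

```python
# Minimal sketch of ingress-style layer 7 routing (hypothetical names).
# A real ingress controller watches the orchestrator's API and rebuilds
# this routing table whenever the ingress resource changes.

ingress_rules = [
    {"host": "shop.example.com", "path": "/cart",    "service": "cart-svc:8080"},
    {"host": "shop.example.com", "path": "/catalog", "service": "catalog-svc:8080"},
]

def route(host, path):
    """Return the backend service for an incoming HTTP request, or None."""
    for rule in ingress_rules:
        if host == rule["host"] and path.startswith(rule["path"]):
            return rule["service"]
    return None  # a real controller would fall back to a default backend

print(route("shop.example.com", "/cart/items"))  # -> cart-svc:8080
```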


Latency-Aware Load Balancing

  • Sounds cool and fresh, doesn’t it? But when you pull back the kimono, NetOps will nod emphatically upon discovering this is not really anything other than leveraging a ‘fastest response’ load balancing algorithm. The intention is to improve app performance by “shifting traffic away from slow instances”. The reason this is called out is that, generally speaking, the native load balancing algorithms used by container orchestrators are apathetic. Round Robin is pretty much the standard, which we know is just about the last algorithm you should choose if you’re trying to optimize performance. Being able to route requests based on best performance is pretty important considering that every microservice-to-microservice call made to fulfill a single client request adds latency of its own – the sketch below contrasts the two approaches.
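
For the curious, here is a minimal sketch (Python, with hypothetical instance names and made-up response times) contrasting round robin with a ‘fastest response’ selection that shifts traffic toward the instance with the lowest observed latency.

```python
import itertools

# Hypothetical observed average response times (in seconds) per instance.
response_times = {"cart-1": 0.120, "cart-2": 0.045, "cart-3": 0.310}

# Round robin: rotates through instances regardless of how slow they are.
_rr = itertools.cycle(response_times)

def pick_round_robin():
    return next(_rr)

# 'Fastest response': send the request to the instance with the lowest
# observed latency, shifting traffic away from slow instances.
def pick_fastest_response():
    return min(response_times, key=response_times.get)

print(pick_round_robin())       # cart-1, then cart-2, then cart-3, ...
print(pick_fastest_response())  # cart-2 (0.045s), until measurements change
```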


Multi-Cluster Ingress

  • I’m going to start by saying that this sounds way cooler than the term the industry has been using for almost twenty years now. Essentially this is GSLB (Global Server Load Balancing). Yeah, I know you’re disappointed, but under the hood that’s what multi-cluster ingress is doing. You take an ingress controller, you sprinkle some global traffic management goodness around it, and voilà! You’ve got GSLB for container clusters in a multi-cloud configuration. I’m voting to replace GSLB with this term on the NetOps side, because it just sounds more impressive.
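
Under the hood this typically comes down to a GSLB-style decision: answer each client with the address of a healthy cluster ingress, preferring the one closest to the client. Here is a minimal sketch of that decision logic (Python, with hypothetical cluster names and example documentation-range IPs).

```python
# Hypothetical clusters, each fronted by its own ingress controller.
clusters = [
    {"name": "us-east", "ingress_ip": "203.0.113.10", "healthy": True,  "region": "us"},
    {"name": "eu-west", "ingress_ip": "203.0.113.20", "healthy": True,  "region": "eu"},
    {"name": "us-west", "ingress_ip": "203.0.113.30", "healthy": False, "region": "us"},
]

def resolve(client_region):
    """GSLB-style answer: prefer a healthy cluster in the client's region,
    otherwise fall back to any healthy cluster."""
    healthy = [c for c in clusters if c["healthy"]]
    local = [c for c in healthy if c["region"] == client_region]
    chosen = (local or healthy)[0]
    return chosen["ingress_ip"]

print(resolve("us"))  # -> 203.0.113.10 (us-east; us-west is marked down)
print(resolve("eu"))  # -> 203.0.113.20 (eu-west)
```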


These aren’t the only terms to crop up, nor will they be the last. They are the most relevant in terms of functionality and capabilities “in the network” being subsumed by DevOps. Some of these will need the attention of NetOps as they move into production environments (like Multi-Cluster Ingress) and others will not – latency-aware load balancing inside container environments is likely to remain the purview of DevOps, though it’s good to have an understanding of it during discussions on improving performance or availability.

There’s a cultural component to DevOps that’s often overlooked or outright ignored. As the movement continues to make its mark on NetOps and traditional network operations slowly but surely adopts its principles to achieve an agile network, communication becomes critical. That means finding common ground. Understanding each other’s jargon can be a good first step toward building the more collaborative culture necessary to ensure that application deployments are as fast, secure, and reliable as their delivery.