KubeCon EU 2022 Wrap-Up

For a little bit of background, I’ve been attending KubeCon off and on for a while. I attended last year in LA. It was quite the let-down for several reasons, but the most important was the surging Delta variant, which prompted a new batch of fears and resulted in sponsors and attendees cancelling. The result was a very small conference given the space allocated, but at least I met a couple of new people.

Fast forward to this past week in Valencia: the fear was just as strong, but the environment felt safe enough for everyone to travel, from the low infection rates to the masking (even though masking was the subject of online debate). It was a reminder of what had been missing, and of how much we are social creatures. This meant lots of interactions and talks, one of which I presented.

What I do want to share are my two takeaways, drawn both from what was being emphasized at the conference level and from a couple of the vendor conversations I had during the week:

  1. Security is still a big deal. For the first time, there was a two-day “SecurityCon” as a co-located event. This is in addition to a few keynotes and talks looking at what happened with the recent Log4j vulnerability, and how to address it.
  2. FinOps – Applying the “operations” mind-set to addressing a company’s cloud spend.

I apologize in advance: this will be a long post, but the most important thing to keep in mind is that traceability is your friend, regardless of whether you are tracking what ships with your software or tracking what cloud-based resources are being used.

Security Focus

There were quite a few mentions during the week looking specifically at what is being termed the software supply chain, tying in a couple of today’s bogeymen: the Log4j compromise and the supply-chain woes now being applied to software. It took a couple of events for awareness to be raised. First came Log4j, and the realization that it is included everywhere. There was also mention of another case involving a commonly used NPM package, where the author protested how it was being used and removed it from the registry, thereby breaking everything that pulled it in, including dependent libraries upon more dependent libraries.

Some of the conversations focused on the need to build out the infrastructure to support a common software bill of materials (SBOM). This is something that has been in the works in the Java community for a while, but the adoption and tooling are rather haphazard, and it needs to apply to more than just Java applications: NodeJS packages, pips, gems, and, most importantly, containers. Without such a list, or an audit in place, you are likely to be in the dark about what your software uses and what your footprint looks like.
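As a sketch of what that kind of audit looks like in practice, here is a minimal example of querying an SBOM for a known-bad dependency. The document below is a hypothetical, heavily trimmed CycloneDX-style SBOM — the component names and versions are illustrative, not taken from any real build:

```python
import json

# A minimal CycloneDX-style SBOM (hypothetical components, trimmed to
# the fields that matter for a dependency audit).
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"type": "library", "name": "jackson-databind", "version": "2.13.2",
     "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.13.2"}
  ]
}
"""

def find_component(sbom: dict, name: str):
    """Return (name, version) pairs for components matching a name."""
    return [(c["name"], c["version"])
            for c in sbom.get("components", [])
            if c["name"] == name]

sbom = json.loads(SBOM_JSON)
# When the next Log4j-style vulnerability hits, this is the question
# you want to be able to answer in seconds, not days:
print(find_component(sbom, "log4j-core"))  # -> [('log4j-core', '2.14.1')]
```

In the real world you would not hand-write the SBOM; tooling generates it as part of the build, and the point is that the query side stays this simple once the document exists.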

There was one talk that took a slightly different approach to this problem, with a focus on WebAssembly. The best way for me to describe the idea: pull out all of the dependent libraries and put them in a centralized registry/service that your application can reference. All applications would use this single artifact, and if a problem is discovered, you update that single location to “automatically” apply the fix to all running components. Granted, this re-introduces the concept of OS-level shared libraries to the web, but with the ugly performance implication of performing the equivalent of an RPC call to use the library functionality. It also creates a hard dependence on having access to the registry, making offline mode impossible. As the saying goes, what was old is new again.

If we are to take the software supply chain abstraction to its end, then the packaging and bill of materials is just one aspect of it. The build process that generates your apps and pushes them to your internal registry is another part, as is the vetting and inclusion of open-source libraries, and it is that last piece that Google announced during KubeCon (or was it Google I/O, which happened at about the same time). For this service, Google opens up its internal tooling to review and audit open-source software, looking for accidental or intentional defects before allowing its use in the ecosystem.

FinOps Focus

This is the one area that I’ve seen talked about a lot lately. There is a sub-group of the Linux Foundation that recently formed not only to build awareness of the FinOps concept, but also to provide guidelines on how to implement it within your own organization. They don’t provide code or a silver bullet for reining in your cloud costs, but they do provide the training and framework to let an organization pick this up, in stark contrast to your favorite IT consultancy offering up the tracking as a service.

I can go on (and may turn it into a separate post later on), but the gist revolves around two fundamental issues:

  1. Whether through mandate or choice, the decision has been made to switch to
    the cloud, and the cloud vendors make it so easy to get started. Just create
    an account and provide a credit card.
  2. You no longer own the underlying infrastructure. You are renting it!

The first issue will get you into trouble very fast (which is why you’re here). It is really easy to create resources, and very easy to go overboard. Likewise, the vendors usually provide training to help your team grasp the full power of the new environment. The result: when you get the bill, you see the services, but have no idea who did what or why. Throw in the wonkiness that is their usage explorer, and it can be a daunting exercise in accountability.

The second issue is just as important as the first, but will be the hardest for your organization to wrap its head around. Think of it along the lines of renting a home vs. buying it (without the deductions aspect). When you own, you are paying for a datacenter and providing heating/cooling. You are purchasing physical hardware, and you have a staff to ensure it gets racked, stacked, and configured for your applications. Then you have to purchase and install said applications, tune them, and lastly debug the issues as they crop up, from a disk going bad to the backhoe. When you use the cloud, you make most of this someone else’s problem, with the added flexibility of bringing up and tearing down resources as needed, all without going through the traditional procurement and bring-up process.

And that’s where FinOps comes in: putting processes in place to ensure your resources are tagged properly. This allows you to associate each cost with a group, so you can figure out whether a savings plan would make more sense, or remind that group to watch its spend.
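As a minimal sketch of why tagging matters, here is what a cost roll-up by team tag could look like. The records, tag names, and dollar amounts are all hypothetical; in practice they would come from your cloud provider’s billing export:

```python
from collections import defaultdict

# Hypothetical billing records: (resource_id, tags, monthly_cost_usd).
records = [
    ("i-0a1", {"team": "payments", "env": "prod"}, 412.50),
    ("i-0b2", {"team": "payments", "env": "dev"},   98.10),
    ("i-0c3", {"team": "search"},                  230.00),
    ("i-0d4", {},                                  175.25),  # untagged!
]

def cost_by_team(records):
    """Roll up spend per 'team' tag; untagged resources land in 'UNTAGGED'."""
    totals = defaultdict(float)
    for _, tags, cost in records:
        totals[tags.get("team", "UNTAGGED")] += cost
    return dict(totals)

print(cost_by_team(records))
```

The untagged bucket is the interesting one: any spend that lands there is money nobody is accountable for, which is exactly the conversation FinOps is meant to force.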

Those were my two biggest takeaways from the week. I do have another one from the first day of KubeCon, but will save that for later.