How to Establish and Collect Data from Your Kubernetes Environment


Deepfactor is purpose-built for cloud-native applications, meaning applications that are developed using containers and deployed to orchestration platforms such as Kubernetes. We certainly do work with legacy systems, such as virtual machines and physical machines, but today I'll give you a quick walkthrough of how easy it is to get up and running and collect data from your Kubernetes infrastructure.

Getting an application to send its telemetry into the Deepfactor portal is quite straightforward. We have an admission webhook, which gets deployed one time into the Kubernetes cluster and accepts a number of parameters. In its configuration file you can do things like monitor specific namespaces, or segregate applications based on labels. It's quite flexible. Today I'm going to give you an example of monitoring entire namespaces.
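As a rough sketch, installing and configuring the webhook might look like the following. The chart name, release name, and `--set` keys here are illustrative assumptions for this walkthrough, not Deepfactor's documented Helm schema:

```shell
# Deploy the admission webhook once per cluster.
# NOTE: the chart name and value keys below are hypothetical placeholders.
helm install df-webhook deepfactor/webhook \
  --namespace deepfactor --create-namespace \
  --set instrument.namespaces="{development,production}"   # watch these namespaces

# Alternatively, segregate applications by label instead of namespace:
#   --set instrument.podLabelSelector="deepfactor.io/instrument=true"
```

Because the webhook intercepts pod creation at the cluster level, application teams don't have to change their own deployment manifests at all.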

Here on the left-hand side, you can see the Deepfactor portal. When you first sign in, we present your applications, organized based on how you've determined you would like them to show up in the portal. Here I've got four applications: a C app, a Node app, a Python app, and a Bank of Anthos application, which is actually made up of multiple microservices; you can see it has a total of six components. All of these have been deployed into the development namespace. I've already pre-deployed that webhook, which is a simple Helm command, and now I'll simulate a CI/CD pipeline: from the command line, I'll deploy another copy of this Bank of Anthos application, but this time into the production namespace. Before I do that, let me just expand this and show you that we're collecting all the information. It's already loaded, and we'll walk through each of these in a second.

If I head over to my Kubernetes cluster, you can see that I've got the cluster up and running. This is the production namespace up here; it doesn't have any applications deployed into it yet. What I'll do is a simple kubectl apply. Again, I haven't modified any of these deployment files; they're the standard deployment files that the DevOps teams create. I've simply added configuration to the webhook that says, "Hey, monitor the production namespace and instrument any applications that spin up within that namespace." So this time I'll deploy the application into the production namespace. When I do that, the application starts getting deployed, and if you look at the screen up here, you can see the pods spinning up and the containers initializing. Within a few seconds you'll see the data being collected and transmitted into our portal.
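Concretely, that deploy and the pod check might look something like this. The manifest path is a placeholder for wherever the standard Bank of Anthos manifests live in this demo, not a fixed location:

```shell
# Deploy the unmodified Bank of Anthos manifests into the production
# namespace; the admission webhook intercepts pod creation there and
# injects the Deepfactor instrumentation automatically.
kubectl apply -n production -f ./bank-of-anthos/kubernetes-manifests/

# Watch the pods spin up and their init containers run.
kubectl get pods -n production --watch
```

Note that nothing in the apply command references Deepfactor; the namespace match in the webhook configuration is what triggers instrumentation.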

When I refresh the page, you can see that the number of namespaces has increased; before, it was showing one. If I click on the namespace count of two, you'll see that the application is now deployed into the production namespace. It was deployed just a few seconds ago, and its telemetry is already being collected and sent into the portal. So that's how easy it is to get data from your applications into the Deepfactor portal.