Now that our cluster is up and running (please see part 1 to set up your K8s cluster), let's take a look inside!
Once we click on the cluster, we immediately see a useful overview: the cluster's size, nodes, networking, and version.
Scrolling down, we can see information about our node pools: size, version, name, and redundancy.
Moving over to Workloads, let's go ahead and spin up a containerized application.
Here we have a very simple Nginx application that will spin up as "mikes-nginx" and pull the latest version of Nginx from Docker Hub.
We can even take a look at the YAML by clicking "view YAML" to see the code for ourselves.
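For reference, the manifest behind a deployment like this looks roughly like the following. This is a minimal sketch, not necessarily what the GCP console generates verbatim: the name "mikes-nginx" and the `nginx:latest` image come from this post, while the labels and replica count are assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mikes-nginx
  labels:
    app: mikes-nginx          # label is an assumption; the console picks its own
spec:
  replicas: 3                 # replica count is illustrative
  selector:
    matchLabels:
      app: mikes-nginx
  template:
    metadata:
      labels:
        app: mikes-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest   # pulls the latest Nginx image from Docker Hub
        ports:
        - containerPort: 80   # the port Nginx listens on by default
```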
Now let's go ahead and click the blue "Deploy" button. You should see something similar to what I see below.
As you can see above, there's a ton of great information about our pod: the application it's running, logs, labels, active pods, and options for managing our pods.

Let's scroll back up on the same "Workloads" page and click that Expose button. Exposing an application creates a Kubernetes Service and allows the application to be reached publicly. We'll go ahead and leave it on port 80 as an unsecured connection for the purposes of this post.
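Under the hood, exposing the deployment this way creates a Service of type LoadBalancer, which is what provisions the public endpoint. A sketch of what that manifest might look like (the service name and selector label here are assumptions; the console derives them from the deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mikes-nginx-service   # name is hypothetical
spec:
  type: LoadBalancer          # asks GCP to provision a public load balancer
  selector:
    app: mikes-nginx          # routes traffic to pods carrying this label
  ports:
  - protocol: TCP
    port: 80                  # external port (unsecured HTTP, as in this post)
    targetPort: 80            # container port Nginx listens on
```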
Now that our service is up, let's go ahead and take a look at it. We see some great info here, but we want to focus on that "External endpoints" IP and port; that's what we're going to use to hit our application. (We also see monitoring and graphs. I'll be going into monitoring in part 3 of this blog series.)
Let's go ahead and click on that external endpoint.
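If you prefer the command line to clicking around the console, you can look up the same endpoint and hit it with curl. The service name below is hypothetical (substitute whatever the console created), and these commands assume kubectl is already authenticated against your cluster:

```shell
# Show the service, including the EXTERNAL-IP column once GCP assigns it
kubectl get service mikes-nginx-service

# Hit the application on port 80 using the external IP from the output above;
# you should get back the default Nginx welcome page
curl http://<EXTERNAL-IP>:80
```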
There you have it! Our application is up and public-facing, our pods are active, and we have successfully spun up our pod in Kubernetes on GCP!

In the third and final part of the GCP Kubernetes series, we will go into monitoring our cluster, pods, and applications within GCP.