
Kubernetes – Intro

You’ve learned about containers, more precisely Docker: completely isolated environments in which you can run your software, abstracted from the underlying OS. Now, once you have a great number of containers, you need proper orchestration. For example, if there is a failure, what will you do? How will you handle it? Multiple containers can stop operating at once, and handling that manually would be very hard.

Thus a good option is to use an orchestration tool. There are several, but three of the best known are Docker Swarm, Kubernetes, and Mesos. In this article you will learn how to work with Kubernetes and how to orchestrate the deployment and management of hundreds of containers.


First, let’s start with the lingo; there are some elements whose names and meanings we must know.

Node – a machine (physical or virtual) that runs containers

Cluster – a set of nodes that share the load

Master Node – the node that manages the worker nodes

When you install K8s on a system, you are adding a set of key components: the API server (to communicate with the outside world), etcd (a key-value store), the kubelet (an agent that runs on each node and makes sure the node is operating as expected), the container runtime (the software that actually runs the containers), controllers (the brain that knows how to react when a container goes down), and the scheduler (which distributes work across the nodes).

There are some differences between the master node and the worker nodes: the master node runs the API server, etcd, the controllers, and the scheduler, while each worker node runs the kubelet and the container runtime that host the application containers.

Setting up Kubernetes

For production deployments of Kubernetes, you should use a more robust and scalable solution, such as hosted Kubernetes services from cloud providers (e.g., Google Kubernetes Engine, Amazon EKS, Azure Kubernetes Service) or setting up your own cluster using tools like kubeadm, Rancher, or Kops, depending on your specific requirements.

But if you need to test your application in a production-like environment before deploying it to a real Kubernetes cluster, consider setting up a staging or pre-production environment that closely resembles your production environment instead of using Minikube. These environments should mirror the production cluster architecture and configurations as closely as possible to ensure a smoother transition to production.

Here we will go with Minikube to give an idea of the tasks to perform. Minikube provides an already configured image; to run it, you need to have installed:

  • A hypervisor (VirtualBox or KVM)
  • kubectl
  • The Minikube installer


The official Kubernetes documentation covers installing and setting up the kubectl tool, installing Minikube, and downloading VirtualBox, and it also includes a Minikube tutorial.

Basic Commands

To check that kubectl was successfully installed, you can run:

kubectl version

To start Minikube, run:

minikube start --driver=<driver name>

To check the status of minikube:

minikube status

To get a list of all nodes in the cluster you can run:

kubectl get nodes

To create a deployment:

kubectl create deployment <name> --image=<name of the image/link>

Note: A deployment is a Kubernetes resource that defines how an application or service should run, including the number of replicas, the container image to use, and how to manage updates and rollbacks.
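The same deployment can also be defined declaratively in a manifest file instead of via `kubectl create`. Below is a minimal sketch of what such a manifest might look like; the name `my-app` and the `nginx` image are purely illustrative, and you would apply it with `kubectl apply -f deployment.yaml`:

```yaml
# Hypothetical deployment manifest (names and image are examples, not from this article)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                # number of pod copies to keep running
  selector:
    matchLabels:
      app: my-app            # must match the pod template labels below
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25  # container image to run
```

Keeping the definition in a file like this makes it easy to version-control and reapply.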

If the image is not available locally, Kubernetes will pull it from a registry such as Docker Hub.

To make the deployment available on the network, you can run:

kubectl expose deployment <name> --type=NodePort --port=<portNumber>
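The `expose` command above creates a Service object behind the scenes. As a sketch, an equivalent NodePort service could be written as a manifest like the following (the name `my-app` and port `8080` are assumptions for illustration):

```yaml
# Hypothetical service manifest equivalent to `kubectl expose ... --type=NodePort`
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort             # exposes the service on a port of each node
  selector:
    app: my-app              # routes traffic to pods with this label
  ports:
    - port: 8080             # port the service listens on inside the cluster
      targetPort: 8080       # port the container accepts traffic on
```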

To get information about the service running on minikube you can run:

minikube service <name> --url

It returns a URL that you can open in a browser to access the service.

To delete the service you can run:

kubectl delete services <name>

To delete the deployment:

kubectl delete deployment <name>


Here you’ve got an introduction to what Kubernetes is and how to set it up.
