Kubernetes Cluster Setup: Ubuntu 20.04 Step-by-Step Guide
Hey guys! Ever wanted to dive into the awesome world of Kubernetes and set up your very own cluster? Well, you’ve come to the right place! Today, we’re going to embark on an exciting journey to set up a Kubernetes cluster on Ubuntu 20.04, walking through each step so you can get your container orchestration engine humming along in no time. Kubernetes, often affectionately called K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. It’s seriously powerful, helping you manage everything from simple web apps to complex microservices architectures with ease. If you’re looking to run your applications reliably, scale them efficiently, and deploy new features without breaking a sweat, Kubernetes is your best friend. This guide is crafted to be super easy to follow, even if you’re relatively new to K8s, focusing on a clear, step-by-step approach. We’ll be using Ubuntu 20.04, a popular and stable choice for many developers and system administrators, making it a fantastic platform for our cluster. By the end of this tutorial, you’ll have a fully functional Kubernetes cluster ready to host your amazing applications. So, let’s roll up our sleeves and get started with this Kubernetes cluster setup adventure!
Table of Contents
- Understanding the Kubernetes Cluster Architecture
- Prerequisites: What You Need Before We Start
- Step 1: Prepare All Nodes (Control Plane & Worker Nodes)
- Step 2: Install Container Runtime (Containerd) on All Nodes
- Step 3: Install Kubeadm, Kubelet, and Kubectl on All Nodes
- Step 4: Initialize the Control Plane Node
- Step 5: Join Worker Nodes to the Cluster
- Step 6: Verify Your Kubernetes Cluster and Deploy a Sample Application
Understanding the Kubernetes Cluster Architecture
Before we jump into the nitty-gritty of the Kubernetes cluster setup on Ubuntu 20.04, let’s quickly chat about what a Kubernetes cluster actually looks like, architecturally speaking. Understanding the core components will make the entire setup process much clearer, trust me! At its heart, a Kubernetes cluster consists of a set of machines, typically referred to as nodes. These nodes are divided into two main categories: the Control Plane (formerly known as the Master node) and Worker Nodes. The Control Plane is the brain of your cluster; it’s responsible for managing the state of the cluster, making decisions, and orchestrating everything. Think of it as the conductor of an orchestra, making sure all the musicians (worker nodes) play in harmony. Key components running on the Control Plane include the kube-apiserver, which exposes the Kubernetes API; the etcd key-value store, which serves as the cluster’s database; the kube-scheduler, which assigns pods to worker nodes; and the kube-controller-manager, which runs various controllers to manage different aspects of the cluster. Meanwhile, Worker Nodes are where your actual applications (in the form of pods and containers) run. Each worker node has a kubelet, an agent that communicates with the Control Plane, and a kube-proxy, which maintains network rules on the node. Plus, a container runtime (like containerd, which we’ll be using) is installed on each node to execute containers. For our Kubernetes cluster setup on Ubuntu 20.04, we’ll start with one Control Plane node and at least one Worker Node, but you can always add more worker nodes later to scale up your cluster’s capacity. This distributed architecture is what makes Kubernetes so resilient and scalable: it allows your applications to be highly available, as Kubernetes can automatically restart failed containers or move them to healthy nodes. Grasping these fundamental roles will help you troubleshoot and understand what’s happening behind the scenes as we configure our cluster, ensuring you’re not just following steps blindly but truly understanding the underlying mechanics.
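As a quick preview, once the cluster is up (from Step 4 onward) you can actually see these Control Plane components for yourself – they run as pods in the kube-system namespace:
kubectl get pods -n kube-system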
Prerequisites: What You Need Before We Start
Alright, before we get our hands dirty with the actual Kubernetes cluster setup, let’s make sure we have all the necessary ingredients. Think of it like preparing for a big cooking adventure – you need to have everything laid out first! For this tutorial, we’ll need at least two machines (virtual or physical) running Ubuntu 20.04 LTS (Focal Fossa). One will serve as our Control Plane node, and the other (or others, if you want more worker nodes from the start) will be our Worker Nodes. Each of these machines should meet some minimum requirements. For the Control Plane node, I recommend at least 2 vCPUs and 2GB of RAM, though 4GB is even better if you can swing it. Worker nodes can be a bit leaner, but aim for at least 1 vCPU and 1.5GB of RAM per node. Of course, more resources will always give you better performance, especially when you start deploying resource-intensive applications. Ensure that all your nodes have stable network connectivity and can communicate with each other, and give each machine a unique hostname; this helps in identifying nodes within the cluster and avoids naming conflicts. You should also have sudo privileges on all these machines – we’ll be performing a lot of administrative tasks, so sudo access is essential. A working internet connection is also a must, as we’ll be downloading various packages and container images. Finally, make sure your firewall rules (if any) allow the necessary ports for Kubernetes communication. Specifically, the Control Plane needs ports 6443 (API server), 2379-2380 (etcd), 10250 (kubelet), 10251 (kube-scheduler), and 10252 (kube-controller-manager) open. Worker nodes primarily need port 10250 (kubelet) and the NodePort service range (30000-32767) open, along with the ports for your chosen CNI (Container Network Interface) – for Calico, this usually means UDP port 4789 (VXLAN) or IP protocol 4 (IP-in-IP), depending on which encapsulation you use. Don’t worry too much about the exact CNI ports right now; we’ll cover that when we install it. Having these prerequisites sorted out beforehand will ensure a smooth Kubernetes cluster setup on Ubuntu 20.04 and help us avoid frustrating roadblocks down the line. It’s truly foundational work that pays off in spades: this preparation phase is often overlooked but is critical for a successful deployment, letting you focus on the exciting parts of application deployment rather than chasing connectivity issues or resource bottlenecks.
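If you happen to be using ufw on your nodes, the rules might look roughly like this (a sketch assuming ufw is your firewall; adapt to your own network and CNI choice):
# On the Control Plane node:
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd
sudo ufw allow 10250/tcp       # kubelet
sudo ufw allow 10251/tcp       # kube-scheduler
sudo ufw allow 10252/tcp       # kube-controller-manager
# On each Worker Node:
sudo ufw allow 10250/tcp       # kubelet
sudo ufw allow 30000:32767/tcp # NodePort services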
Step 1: Prepare All Nodes (Control Plane & Worker Nodes)
Okay, team, it’s time to get our hands dirty! This first major step involves preparing every single node in our upcoming Kubernetes cluster on Ubuntu 20.04. This means you’ll need to perform the following actions on both your Control Plane node and all your Worker Nodes. Consistency here is key to a smooth Kubernetes cluster setup. Let’s break it down.
First things first, we need to ensure all our systems are up-to-date. This is crucial for security and compatibility. Open a terminal on each node and run these commands:
sudo apt update
sudo apt upgrade -y
sudo apt autoremove -y
These commands will refresh your package lists, upgrade any outdated packages, and remove any unnecessary ones. After this, a reboot is a good idea, especially if the kernel was updated: sudo reboot.
Next, disable swap space. Kubernetes works best with swap disabled, and the kubelet (the agent that runs on each node) will actually refuse to start if swap is enabled. To disable it temporarily, run sudo swapoff -a. To make the change permanent across reboots, you’ll need to edit the /etc/fstab file and comment out the swap line by adding a # at the beginning of the line that refers to your swap partition or swap file. It usually looks something like /dev/mapper/ubuntu--vg-swap_1 none swap sw 0 0 or similar. Use your favorite text editor, like nano or vi:
sudo nano /etc/fstab
Find the line(s) related to swap, put a # in front of them, then save and exit. This is a very important step for a stable Kubernetes cluster setup on Ubuntu 20.04.
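If you prefer a non-interactive route, a sed one-liner can comment out the swap entries for you (a convenience sketch – review /etc/fstab afterwards to confirm only the intended line was touched):
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab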
Now, let’s enable some kernel modules that Kubernetes needs, specifically overlay and br_netfilter. These are essential for container networking and the proper functioning of the cluster. Run these commands on all nodes:
sudo modprobe overlay
sudo modprobe br_netfilter
To make sure these modules are loaded automatically at boot, add them to /etc/modules-load.d/k8s.conf:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
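You can confirm that both modules are actually loaded with:
lsmod | grep -E 'overlay|br_netfilter'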
Continuing our node preparation, we need to configure sysctl parameters for Kubernetes networking. This ensures that IP forwarding is enabled and that iptables sees bridged traffic, which is vital for how Kubernetes handles network policies and services. Execute the following commands on all nodes:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
This creates a new sysctl configuration file and then applies the changes immediately. You can verify the settings are active by running sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward; you should see 1 for both values. This configuration is absolutely essential for the networking side of your Kubernetes cluster setup, allowing pods to communicate across nodes and external services to reach your applications. Without these sysctl settings you’ll encounter a myriad of networking issues that can be tricky to debug, so ensuring they are correctly set and persist across reboots is a foundational element of a robust Kubernetes cluster on Ubuntu 20.04.
Step 2: Install Container Runtime (Containerd) on All Nodes
Alright, guys, moving on! A Kubernetes cluster setup needs a container runtime – it’s the engine that actually runs your containers. While Docker is popular, Kubernetes officially moved away from Docker as a container runtime in favor of containerd or CRI-O. For our Kubernetes cluster on Ubuntu 20.04, we’ll go with containerd, which is lightweight, robust, and specifically designed for Kubernetes. This step also needs to be performed on all your nodes (Control Plane and Worker Nodes).
First, we need to install some packages that containerd relies on:
sudo apt install -y ca-certificates curl gnupg lsb-release
Next, let’s add Docker’s official GPG key. Even though we’re not installing Docker itself, the containerd.io package is distributed through Docker’s repositories:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
Now, add the Docker repository to APT sources. This will allow us to install containerd.io:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Update your apt package index once more to include the new repository:
sudo apt update
Finally, we can install containerd.io, the core container runtime we need:
sudo apt install -y containerd.io
After installation, we need to configure containerd to work correctly with Kubernetes. Kubernetes expects containerd to use the systemd cgroup driver, but by default containerd uses the cgroupfs driver, so we’ll switch it. First, generate a default containerd configuration file:
sudo containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
Then, we need to modify this config.toml file to change the cgroup driver. Open the file with a text editor:
sudo nano /etc/containerd/config.toml
Inside the file, search for [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] and, within that section, change SystemdCgroup = false to SystemdCgroup = true. It will look something like this:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  BinaryName = "runc"
  CriuImagePath = ""
  CriuPath = ""
  CriuPtrace = false
  NoNewPrivileges = false
  OomScore = 0
  Root = ""
  ShimCriuPath = ""
  ShimRuncPath = ""
  SystemdCgroup = true # Change this line from false to true
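If you’d rather not edit the file by hand, a sed one-liner can flip the setting for you (assuming the freshly generated default config, where SystemdCgroup appears exactly once):
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml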
Make sure to save the file (Ctrl+O, then Enter, then Ctrl+X if using nano). After modifying the configuration, restart containerd to apply the changes and enable it at boot:
sudo systemctl restart containerd
sudo systemctl enable containerd
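Before moving on, it’s worth a quick sanity check that the service is active and the daemon responds (ctr ships with containerd):
sudo systemctl is-active containerd
sudo ctr version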
This containerd installation and configuration is a critical piece of our Kubernetes cluster setup on Ubuntu 20.04. Without a properly configured container runtime, Kubernetes won’t be able to launch or manage any of your applications; this step ensures every node is ready to host containers and interact with the kubelet agent effectively. Getting the cgroup driver right is a common stumbling block for beginners, so pay close attention to that config.toml modification. A correctly set up containerd provides a robust foundation for the entire Kubernetes cluster and makes the subsequent kubeadm steps much smoother.
Step 3: Install Kubeadm, Kubelet, and Kubectl on All Nodes
Alright, folks, we’re making excellent progress on our Kubernetes cluster setup! Now that our container runtime (containerd) is set up on every node, it’s time to install the core Kubernetes tools: kubeadm, kubelet, and kubectl. Just like the previous steps, this needs to be done on all your nodes – that means your Control Plane node and all your Worker Nodes. These three tools are fundamental to managing your Kubernetes cluster on Ubuntu 20.04.
- Kubeadm is a tool designed to bootstrap a minimum viable Kubernetes cluster. It handles all the complex initialization steps for the Control Plane and generates the necessary join commands for Worker Nodes. It truly simplifies the initial setup.
- Kubelet is the agent that runs on each node in the cluster. It communicates with the Control Plane, ensures that containers are running in a Pod, and registers the node with the cluster. This is the workhorse of your Kubernetes nodes.
- Kubectl is the command-line tool that lets you run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect cluster resources, and view logs. This will be your primary interface for interacting with your cluster once it’s up and running.
First, we need to add the Kubernetes APT repository; this is where the kubeadm, kubelet, and kubectl packages live. Note that the legacy repository (apt.kubernetes.io, backed by packages.cloud.google.com) has been deprecated and frozen, so we’ll use the community-owned pkgs.k8s.io repository instead. Start by downloading its public signing key (the v1.28 in the URL selects a release line; substitute a newer minor version if you prefer):
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Next, add the Kubernetes APT repository to your system’s sources.list.d directory:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Now, update your apt package index again to pull information from the newly added Kubernetes repository:
sudo apt update
With the repository added and updated, we can now install the magical trio: kubeadm, kubelet, and kubectl! It’s important to pin a version if you want to avoid surprises from future updates. For this guide we’ll install the latest stable version available in the repository; if you need a specific version, check what’s available with apt-cache madison kubeadm and append it to each package name (for example, kubelet=1.28.2-1.1).
sudo apt install -y kubelet kubeadm kubectl
After installing, it’s a best practice to hold these packages at their current version. This prevents them from being automatically updated during a system apt upgrade, which could potentially break your cluster if there are breaking changes between Kubernetes versions. This controlled upgrade approach is vital for maintaining a stable Kubernetes cluster on Ubuntu 20.04.
sudo apt-mark hold kubelet kubeadm kubectl
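A quick way to confirm what you installed and that the hold is in place:
kubeadm version
kubectl version --client
apt-mark showhold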
Now enable kubelet and check its status (it won’t be fully functional until the cluster is initialized):
sudo systemctl enable --now kubelet
systemctl status kubelet
You might see kubelet in a waiting or restarting loop at this stage; that’s normal, because it can’t connect to a Kubernetes API server yet. It will keep retrying until the Control Plane is initialized. This completes the installation of the core Kubernetes tools on all your nodes, bringing us much closer to a fully operational Kubernetes cluster. Getting these tools correctly installed and held at known versions is a fundamental part of a stable and predictable Kubernetes cluster setup, minimizing upgrade headaches and compatibility issues down the line.
Step 4: Initialize the Control Plane Node
Alright, guys, this is the exciting part! We’re finally going to initialize our Control Plane node and bring our Kubernetes cluster on Ubuntu 20.04 to life. Remember, this step is only performed on the designated Control Plane machine, not on your worker nodes. Make sure you’re on the correct server for this part of the Kubernetes cluster setup.
The command we’ll use is kubeadm init. It performs a series of preflight checks, then installs and configures the Control Plane components (API Server, Controller Manager, Scheduler, etcd). We need to specify a Pod Network CIDR for our cluster – the range of IP addresses that will be assigned to Pods. This range should not overlap with your physical network’s IP range. A common choice is 10.244.0.0/16 for Flannel, or 192.168.0.0/16 for Calico. Since we’re going with Calico later, 192.168.0.0/16 is a good choice. You might also need to pass --apiserver-advertise-address if your Control Plane node has multiple network interfaces or you want the API server to use a specific IP address.
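In that case, the variant with an explicit advertise address might look like this (10.0.0.10 is a hypothetical address – use your node’s actual IP):
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=10.0.0.10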
Here’s the command to initialize the Control Plane:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
This command will take a few minutes to complete. During this process, kubeadm downloads the necessary container images, sets up the core components, and performs various configurations. If it encounters any issues (like swap still being enabled, or networking misconfigurations), it will stop and provide helpful error messages – pay close attention to the output! If all goes well, you’ll see a success message at the end, along with some instructions.
Once kubeadm init finishes successfully, you’ll see output similar to this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a Pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join <control-plane-ip>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Do not close your terminal yet! The kubeadm join command, including the token and hash, is extremely important for adding worker nodes later. Copy the entire kubeadm join command and save it somewhere safe – you’ll need it soon!
Now, as the output suggests, we need to configure kubectl to interact with our new cluster. Run the commands provided in the output as a regular user (not as root):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
These commands create the .kube directory in your home folder, copy the admin.conf file (which contains your cluster’s credentials) into it, and set the correct ownership. Now kubectl knows how to talk to your cluster!
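A quick smoke test that kubectl can actually reach the API server:
kubectl cluster-info
kubectl get nodes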
At this point, if you run kubectl get nodes, you’ll see your Control Plane node listed, but its status will probably be NotReady. This is because we haven’t installed a Pod Network Add-on yet. A CNI (Container Network Interface) plugin is absolutely essential for pods to communicate with each other across different nodes. For our Kubernetes cluster on Ubuntu 20.04, we’ll use Calico, a popular and robust CNI solution that also provides network policy enforcement, making it a great choice for production environments. To install Calico, execute the following command:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
Note: always check the Calico website (projectcalico.org) for the latest stable release link; version v3.26.1 is used here as an example.
After applying the Calico manifest, it will take a few minutes for the Calico pods to start up and configure the network. You can monitor progress with watch kubectl get pods -n kube-system; look for all Calico pods to reach the Running state. Once they do, your Control Plane node should transition to Ready. This initialization of the Control Plane node is the heart of your Kubernetes cluster setup, defining its control and management capabilities, and installing Calico as the Pod Network Add-on is equally critical: it enables inter-pod communication and network policy enforcement. Without a CNI, your pods would be isolated and the cluster would be effectively non-functional for distributed applications. Together, kubeadm and Calico give your Kubernetes cluster on Ubuntu 20.04 a robust foundation for all your containerized workloads.
Step 5: Join Worker Nodes to the Cluster
Fantastic job, everyone! We’ve successfully initialized our Control Plane node, installed our CNI, and configured kubectl. Now it’s time to expand our Kubernetes cluster on Ubuntu 20.04 by adding our Worker Nodes. This step is much simpler than initializing the Control Plane, thanks to kubeadm’s straightforward join command. Remember that command I told you to save after the kubeadm init step? Now’s the time to use it! If you accidentally lost it, don’t worry – you can regenerate it on the Control Plane node with sudo kubeadm token create --print-join-command, which prints a new command with a fresh token and hash, ready to use on your worker nodes.
Log in to each of your Worker Nodes via SSH. On each Worker Node, execute the kubeadm join command that was provided after the Control Plane initialization (or the one you just regenerated). The command will look something like this:
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
Replace <control-plane-ip> with the actual IP address of your Control Plane node, and substitute <token> and <hash> with the values you saved earlier (or regenerated). This command instructs the worker node to connect to the Control Plane’s API server, authenticate using the provided token, and verify the cluster’s identity using the CA certificate hash – a secure way for new nodes to join an existing cluster. Once executed, kubeadm on the worker node connects to the Control Plane, fetches its configuration, and registers the node with the cluster. This typically takes a few moments to complete.
After running the kubeadm join command on a Worker Node, you should see output indicating that the node has successfully joined the cluster, for example:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Now, head back to your Control Plane node. From there, you can verify that your worker node(s) have successfully joined the cluster. Run the following command:
kubectl get nodes
You should now see your Control Plane node and all your newly joined Worker Nodes listed. Initially, the Worker Nodes might show a status of NotReady for a minute or two while Calico and the kubelet finish initializing and configuring themselves. Give it a bit of time, then run kubectl get nodes again; eventually all your nodes, Control Plane and Workers alike, should report a status of Ready, indicating they are healthy and ready to accept workloads. This step is essential for expanding the capacity and resilience of your Kubernetes cluster on Ubuntu 20.04: each worker node you add increases the resources available for running your applications, allowing the cluster to handle more traffic and deploy more services. The kubeadm join process simplifies what could be a very complex task, making the Kubernetes cluster setup accessible even for those relatively new to container orchestration. With worker nodes successfully added, your cluster is now truly distributed, scalable, and fault-tolerant.
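One optional nicety: freshly joined workers show <none> in the ROLES column of kubectl get nodes. If you’d like them labeled, you can add the conventional worker role label yourself from the Control Plane (purely cosmetic; worker-01 is a hypothetical node name):
kubectl label node worker-01 node-role.kubernetes.io/worker=worker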
Step 6: Verify Your Kubernetes Cluster and Deploy a Sample Application
Alright, team, congratulations! Your Kubernetes cluster on Ubuntu 20.04 is now up and running! But what’s a shiny new cluster without some verification and a quick test drive? In this final step of our Kubernetes cluster setup, we’ll confirm everything is working as expected and deploy a simple sample application to see it in action. This is the moment where all our hard work pays off, and we get to see our distributed system gracefully handling our first containerized workload. This verification phase is crucial to ensure that all the components we painstakingly set up are communicating correctly and that the cluster is truly ready for prime time.
First, let’s do a final check of our nodes. From your Control Plane node, run:
kubectl get nodes
You should see all your nodes (Control Plane and Worker Nodes) listed with a STATUS of Ready. If any node is NotReady, go back and review the previous steps for that specific node, checking containerd status (sudo systemctl status containerd) and kubelet status (sudo systemctl status kubelet). You can also get more details on a specific node’s issues by running kubectl describe node <node-name>.
Next, let’s verify that all the core Kubernetes system pods are running. These are the pods that make Kubernetes itself function, including kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kube-proxy, and your CNI pods (like Calico). You can check their status by running:
kubectl get pods -n kube-system
All pods in the kube-system namespace should ideally be in a Running or Completed state. If you see any pods stuck in Pending, CrashLoopBackOff, or other error states, that indicates a problem needing investigation – common causes are resource limitations, incorrect CNI setup, or unbound persistent volume claims if you’re using stateful applications. For Calico specifically, you should see the calico-node and calico-kube-controllers pods in the Running state.
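If any pod is misbehaving, these are the go-to commands for digging into it (substitute the actual pod name from the previous output):
kubectl describe pod <pod-name> -n kube-system
kubectl logs <pod-name> -n kube-system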
Now, for the fun part: deploying a sample application! We’ll deploy a simple Nginx web server. This will demonstrate that Kubernetes can schedule pods, that the container runtime is working, and that the network is configured correctly to expose services.
Create a deployment for Nginx:
kubectl create deployment nginx --image=nginx
This command tells Kubernetes to create a deployment named nginx using the nginx Docker image. Kubernetes will then create a ReplicaSet to ensure the specified number of nginx pods is always running. You can check the status of your deployment and pods with:
kubectl get deployment nginx
kubectl get pods -l app=nginx
You should see one Nginx pod in a Running state. To make this Nginx web server accessible, we need to expose it as a Kubernetes Service. We’ll use a NodePort service, which exposes the service on a static port on each node’s IP address – a common way to test external access in a basic Kubernetes cluster setup.
kubectl expose deployment nginx --type=NodePort --port=80
Now, let’s find out which port Kubernetes assigned to our Nginx service. This will be a port in the 30000-32767 range:
kubectl get service nginx
Look for the PORT(S) column. It will show something like 80:3xxxx/TCP; the 3xxxx part is your NodePort – for example, 30080. To access your Nginx web server, use any of your Worker Nodes’ IP addresses (or even the Control Plane’s IP, if no firewall is blocking it) combined with this NodePort. For example, if a worker node’s IP is 192.168.1.101 and the NodePort is 30080, open your web browser and navigate to http://192.168.1.101:30080. You should see the familiar “Welcome to nginx!” page – congratulations, your Kubernetes cluster on Ubuntu 20.04 is scheduling pods, running containers, and serving traffic end to end!
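You can also verify from the command line, and then scale the deployment to watch Kubernetes spread pods across your workers (substitute your own node IP and assigned NodePort):
curl http://192.168.1.101:30080
kubectl scale deployment nginx --replicas=3
kubectl get pods -l app=nginx -o wide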