#90daysofdevops #33 : Working with Namespaces and Services in Kubernetes
What are Namespaces and Services in k8s?
In Kubernetes, Namespaces are used to create isolated environments for resources. Each Namespace is like a separate cluster within the same physical cluster. Services are used to expose your Pods and Deployments to the network.
Kubernetes NAMESPACE is a virtual cluster for organising and structuring Kubernetes objects to ensure smooth operations and project management.
What Is a Kubernetes Namespace?
A Namespace is a Kubernetes object that helps group and structure other Kubernetes objects and partitions them in a Kubernetes cluster. This concept allows you to organize or isolate your Kubernetes resources in a box-like form according to their purpose across multiple users and projects in a cluster.
Characteristics of Kubernetes Namespaces:
It provides a scope for names
Resource names must be unique within a Namespace, but not across Namespaces
Namespaces cannot be nested inside each other
Namespaces are intended for environments with many users spread across multiple teams and projects
A namespaced Kubernetes object can belong to only one Namespace
Default Namespace: Every namespaced Kubernetes object created without an explicit namespace goes into the Namespace set in your client's configuration (out of the box, the Namespace called default).
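As a quick sketch (assuming an active kubeconfig context), you can change the Namespace your client uses by default like this:

```bash
# Set the default Namespace for the current kubeconfig context
$ kubectl config set-context --current --namespace=dev

# Verify which Namespace the current context now points to
$ kubectl config view --minify --output 'jsonpath={..namespace}'
```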
Kube-System Namespace: This Namespace is used for system processes like etcd, kube-scheduler, etc. Do not create or modify objects in this Namespace; it is not meant for users, and doing so risks accidentally changing or deleting cluster components.
Kube-public Namespace: This namespace houses publicly accessible data, including a ConfigMap which stores cluster information like the cluster’s public certificate for communicating with the Kubernetes API.
How to Create a Namespace
Step 1: Create a YAML file with your preferred editor:

```bash
$ vim name-space.yaml
```

Step 2: Copy and paste the configuration below into the YAML file:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev   ## name of the namespace
```

Step 3: Use the `kubectl create` command to create the Namespace:

```bash
$ kubectl create -f name-space.yaml
namespace/dev created
```
Alternatively, you can also create it imperatively on the command line with the command below:

```bash
$ kubectl create namespace prod   ## prod is the Namespace name
namespace/prod created
```
Step 4: Check the status of the Namespaces with the `kubectl get` command:

```bash
$ kubectl get namespaces
NAME          STATUS   AGE
default       Active   16m
dev           Active   6m23s
kube-public   Active   16m
kube-system   Active   16m
prod          Active   5m50s
```
The output shows that we have five Namespaces - the three pre-configured Namespaces and the two we just created.
Step 5: Check the detailed description of the Namespaces with the `kubectl describe` command:

```bash
$ kubectl describe namespace dev
Name:         dev
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.
No LimitRange resource.

$ kubectl describe namespace prod
Name:         prod
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.
No LimitRange resource.
```
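The describe output above mentions resource quotas. As a hedged sketch of what one looks like, a ResourceQuota capping the dev Namespace might be written like this (the name and limits here are illustrative, not from this post):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota        # illustrative name
  namespace: dev
spec:
  hard:
    pods: "10"           # at most 10 Pods in the dev Namespace
    requests.cpu: "2"    # total CPU requested across all Pods
    limits.memory: 2Gi   # total memory limit across all Pods
```

Once applied with `kubectl apply -f`, the quota appears in the `kubectl describe namespace dev` output instead of "No resource quota."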
How to create Kubernetes Objects in a Namespace
Example 1: Create a Deployment with two replicas in the dev Namespace:

```bash
$ kubectl create deployment my-app --image=redis --replicas=2 -n dev
deployment.apps/my-app created
```

or

```bash
$ kubectl create deployment my-app --image=redis --replicas=2 --namespace dev
deployment.apps/my-app created
```
You can pass the Namespace either with the `-n` flag or with `--namespace`; they function the same way. Here, `dev` is the name of the Namespace.

Example 2: A Deployment can also be created declaratively by specifying a `namespace` property in its manifest YAML file; this example places it in the prod Namespace. The configuration will look like this:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: prod   ## where prod is the name of the namespace
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - image: nginx
        name: nginx-img
```
When you create the Deployment with the `kubectl create` command, it is created in the prod Namespace. Now check the Pods in their respective Namespaces:

```bash
$ kubectl get pods -n prod   # Only the Pods in the prod Namespace are returned.
NAME                      READY   STATUS    RESTARTS   AGE
my-app-667cdc9ffb-bkrv4   1/1     Running   0          2m32s
my-app-667cdc9ffb-mwn9k   1/1     Running   0          2m32s

$ kubectl get pods -n dev    # Only the Pods in the dev Namespace are returned.
NAME                      READY   STATUS    RESTARTS   AGE
my-app-667cdc9ffb-lmqvn   1/1     Running   0          2m40s
my-app-667cdc9ffb-s7sm5   1/1     Running   0          2m40s
```
Services, Load Balancing, and Networking
The Kubernetes network model
Every Pod in a cluster gets its own unique cluster-wide IP address. This means you do not need to explicitly create links between Pods, and you almost never need to deal with mapping container ports to host ports.

This creates a clean, backwards-compatible model where Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
pods can communicate with all other pods on any other node without NAT
agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
Note: For those platforms that support Pods running in the host network (e.g. Linux), when Pods are attached to the host network of a node they can still communicate with all Pods on all nodes without NAT.
This model is not only less complex overall, but it is principally compatible with the desire for Kubernetes to enable low-friction porting of apps from VMs to containers. If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.
Kubernetes IP addresses exist at the Pod scope: containers within a Pod share their network namespace, including their IP address and MAC address. This means that containers within a Pod can all reach each other's ports on localhost. It also means that containers within a Pod must coordinate port usage, but this is no different from processes in a VM. This is called the "IP-per-pod" model.
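A minimal sketch of the IP-per-pod model (container names and images here are illustrative): two containers in one Pod share a network namespace, so one can reach the other on localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers      # illustrative name
spec:
  containers:
  - name: web
    image: nginx            # listens on port 80
  - name: sidecar
    image: busybox
    # The sidecar reaches nginx on localhost because both
    # containers share the Pod's network namespace.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 5; done"]
```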
How this is implemented is a detail of the particular container runtime in use.
It is possible to request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation. How that forwarding is implemented is also a detail of the container runtime. The Pod itself is blind to the existence or non-existence of host ports.
Kubernetes networking addresses four concerns:
- Containers within a Pod use networking to communicate via loopback.
- Cluster networking provides communication between different Pods.
- The Service API lets you expose an application running in Pods to be reachable from outside your cluster.
  - Ingress provides extra functionality specifically for exposing HTTP applications, websites and APIs.
- You can also use Services to publish services only for consumption inside your cluster.
The Connecting Applications with Services tutorial lets you learn about Services and Kubernetes networking with a hands-on example.
Cluster Networking explains how to set up networking for your cluster, and also provides an overview of the technologies involved.
Service
Expose an application running in your cluster behind a single outward-facing endpoint, even when the workload is split across multiple backends.
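As a hedged sketch, a Service fronting the my-app Deployment from earlier might look like this (the port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc       # illustrative name
  namespace: prod
spec:
  selector:
    app: my-app          # matches the Pod labels from the Deployment above
  ports:
  - port: 80             # port the Service exposes inside the cluster
    targetPort: 80       # port the container listens on (nginx default)
  type: ClusterIP        # internal-only; NodePort/LoadBalancer expose it externally
```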
Ingress
Make your HTTP (or HTTPS) network service available using a protocol-aware configuration mechanism that understands web concepts like URIs, hostnames, paths, and more. The Ingress concept lets you map traffic to different backends based on rules you define via the Kubernetes API.
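For illustration (the hostname and Service name are assumptions), an Ingress rule mapping a host and path to a backend Service could look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress          # illustrative name
  namespace: prod
spec:
  rules:
  - host: myapp.example.com     # assumed hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-svc    # assumed Service name
            port:
              number: 80
```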
Ingress Controllers
In order for an Ingress to work in your cluster, there must be an ingress controller running. You need to select at least one ingress controller and make sure it is set up in your cluster. This page lists common ingress controllers that you can deploy.
EndpointSlices
The EndpointSlice API is the mechanism that Kubernetes uses to let your Service scale to handle large numbers of backends, and allows the cluster to update its list of healthy backends efficiently.
Network Policies
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), NetworkPolicies allow you to specify rules for traffic flow within your cluster, and also between Pods and the outside world. Your cluster must use a network plugin that supports NetworkPolicy enforcement.
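As a sketch (the labels are illustrative), a NetworkPolicy that only allows Pods labelled role: frontend to reach Pods labelled app: my-app on TCP port 80 might look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend        # illustrative name
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: my-app             # the Pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend      # only these Pods may connect
    ports:
    - protocol: TCP
      port: 80
```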
DNS for Services and Pods
Your workload can discover Services within your cluster using DNS; this page explains how that works.
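For example (assuming a Service named my-app-svc in the prod Namespace, and the default cluster.local domain), in-cluster DNS names follow the pattern service.namespace.svc.cluster.local:

```bash
# From any Pod in the cluster, resolve a Service by its full DNS name
$ nslookup my-app-svc.prod.svc.cluster.local

# Within the same Namespace, the short name also resolves
$ nslookup my-app-svc
```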
IPv4/IPv6 dual-stack
Kubernetes lets you configure single-stack IPv4 networking, single-stack IPv6 networking, or dual stack networking with both network families active. This page explains how.
Topology Aware Routing
Topology Aware Routing provides a mechanism to help keep network traffic within the zone where it originated. Preferring same-zone traffic between Pods in your cluster can help with reliability, performance (network latency and throughput), or cost.
Networking on Windows
Service ClusterIP allocation
Service Internal Traffic Policy
If two Pods in your cluster want to communicate, and both Pods are actually running on the same node, use Service Internal Traffic Policy to keep network traffic within that node. Avoiding a round trip via the cluster network can help with reliability, performance (network latency and throughput), or cost.
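As a sketch, this behaviour is controlled by the Service's internalTrafficPolicy field (the Service name here is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc               # illustrative name
spec:
  selector:
    app: my-app
  ports:
  - port: 80
  internalTrafficPolicy: Local   # route in-cluster traffic only to endpoints on the same node
```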