
Learn how to work with Multiple Virtual Machines

Kubernetes is responsible for managing a large number of computers so that they function as a single unit. It provides an abstraction that simplifies deployment by decoupling containerized applications from the machines they run on. With this flexibility, a containerized application is not tied to any specific machine; Kubernetes automatically handles its placement on individual cluster machines.

A cluster consists of a master, which performs coordination, and nodes, which run applications. The master's roles include scheduling applications, ensuring each application stays in its specified state, and managing updates. Every node in the cluster runs a kubelet, which is responsible for node management and communication with the master. Besides a kubelet, every node requires a container runtime such as Docker. For fault tolerance in a production environment, it is advisable to have at least three nodes.

Kubernetes is a very flexible tool that can run in a wide range of environments, so it is important to understand the available options before settling on a deployment environment. To explore the capabilities of Kubernetes, a local environment powered by Docker and Minikube is adequate. When you need more scalability and availability, a hosted solution offers a high degree of simplicity. Public clouds make it easy to set up a cluster, while on-premise solutions offer an added level of security. This tutorial does not cover every available deployment environment; the reader is referred to the Kubernetes documentation for the full list.

Within Kubernetes, there is a Cloud Provider feature that enables control of TCP load balancers, network routes and nodes. Kubernetes does not require you to implement every part of the Cloud Provider interface; it is possible to use only selected features.

When working with a cluster, it is important to be aware of the node concepts listed below:

  • A node can be either a physical or a virtual machine.
  • Although there are attempts to differentiate master and worker nodes, this is not necessary.
  • Nodes can run on x86 architectures, 32- or 64-bit, using different operating systems.
  • A machine with just 1 core and 1 GB of RAM can adequately serve the apiserver and etcd in a cluster with several tens of nodes. For higher scalability, more resources are needed.
  • Memory and CPU are not required to be the same on all nodes.

Port coordination and dynamic port allocation increase the complexity of a networking model and make it harder to manage. Kubernetes avoids this complexity by imposing the following policies:

  • Container-to-container communication does not require NAT.
  • Node-to-container communication, in either direction, does not require NAT.
  • The IP address a container sees for itself is the same IP that other containers see.

The networking model in Kubernetes assigns each pod an IP address. During cluster creation, you specify an IP block from which pods are allocated IPs. For simplicity, a block can also be allocated when a node is added to the cluster. IP block allocation can be used with or without an overlay network. When an overlay network is used, the architecture of the underlying network is hidden from the pod network using a technique referred to as network encapsulation. Using an overlay network reduces performance, though by how much varies from one solution to another. When an overlay network is not used, the underlying network is set up so that it knows how to route pod IP addresses. This approach avoids encapsulation, which may result in improved performance.

There are different ways in which networking can be implemented in Kubernetes. One approach is a network plugin implementing the CNI interface. Examples of such plugins include Calico, Flannel and Romana, among others; a detailed list is available in the Kubernetes documentation. If the available plugins do not meet your needs, you can also develop a custom plugin.

The second approach is networking support compiled directly into Kubernetes. This is the approach used in the Google and Amazon clouds.

The third approach is setting up an external network through manual commands or external scripts. This approach offers flexibility, but it places the burden of implementation on the developer.

When allocating addresses, there are different approaches that can be used, and it is important to be aware that only IPv4 addresses are supported. On Google Compute Engine, every project is allocated a block from which you can assign each cluster a /16, leaving room for multiple clusters. From the /16 allocated to a cluster, each node gets its own allocation. On Amazon Web Services, a VPC is available from which you can assign a block to each cluster.
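The carving described above can be sketched with Python's standard `ipaddress` module. The `10.200.0.0/16` block here is purely illustrative; the actual range depends on what your project or VPC has been allocated:

```python
import ipaddress

# Hypothetical cluster pod CIDR, as might be carved out of a project's
# allocation; the real block depends on your cloud environment.
cluster_cidr = ipaddress.ip_network("10.200.0.0/16")

# Split the cluster /16 into /24 blocks, one per node.
node_blocks = list(cluster_cidr.subnets(new_prefix=24))

print(len(node_blocks))   # 256 node-sized blocks fit in a /16
print(node_blocks[0])     # 10.200.0.0/24 goes to the first node
```

A /16 yields 256 such /24 blocks, which bounds the number of nodes this scheme can serve per cluster.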

Another approach that can be used to allocate addresses is to size a CIDR subnet directly. The number of IPs needed is the maximum number of pods per node multiplied by the maximum number of nodes. For example, a /24 on every node can carry 254 pods per machine. The pod IP addresses need to be routable; otherwise, a network overlay is required. It is important to note that services are also allocated IP addresses, but unlike pod IPs, service IPs are not required to be routable.
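The arithmetic above can be checked directly. The per-node pod limit and node count below are illustrative figures chosen for the example, not Kubernetes defaults:

```python
import ipaddress

# A /24 has 256 addresses; subtracting the network and broadcast
# addresses leaves 254 usable pod IPs per node.
node_block = ipaddress.ip_network("10.200.1.0/24")
usable_pod_ips = node_block.num_addresses - 2
print(usable_pod_ips)  # 254

# Total pod IPs the cluster CIDR must cover:
max_pods_per_node = 110   # illustrative per-node limit
max_nodes = 50            # illustrative cluster size
required_ips = max_pods_per_node * max_nodes
print(required_ips)       # 5500
```

Sizing the cluster CIDR from this product up front avoids running out of pod addresses as the cluster grows.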

Kubernetes enables fine-grained control of networking through network policies. A network policy specifies how groups of pods may communicate with each other and with other network endpoints. To use network policies, a network plugin that supports NetworkPolicy is required. Pods can be either isolated or non-isolated. By default, pods are non-isolated and do not reject any traffic; once a network policy selects them, only the connections it allows are permitted.
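As a sketch, a NetworkPolicy that isolates pods labelled `app: db` and admits ingress only from pods labelled `app: api` might look like the manifest below. The policy name, namespace and labels are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api        # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db               # the pods this policy isolates
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api      # only these pods may connect
```

Once this policy is applied, the selected `app: db` pods become isolated: traffic from any pod not matching the `app: api` selector is dropped.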

In this tutorial, we discussed the complexities encountered in networking and how the Kubernetes model solves them. We covered the important characteristics of a node in a cluster and the ways networks can be implemented, and ended by discussing pod IP address allocation.

