Learn How Applications Are Updated and Scaled in Kubernetes



After deploying your application to a production environment, you will find that resource needs are dynamic. An application will require more resources at times, while at other times not all of the resources allocated to it are needed. A smooth way to scale application resources up or down is therefore required, and Kubernetes provides one in the form of the scale command.

Before demonstrating how to scale and update applications, let us create a deployment using the command below.

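The original command screenshot is not preserved here; a minimal sketch, assuming the resources are defined in a manifest file named nginx-app.yaml (the file name is an assumption):

```shell
# Create the resources described in the manifest file (assumed name: nginx-app.yaml)
kubectl create -f nginx-app.yaml
```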
The resource description used to create the deployment is shown below:
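A plausible reconstruction of that resource description. It is written as a ReplicationController because the rolling-update command discussed later in the article only works with replication controllers; the resource name and labels are assumptions:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx            # assumed name
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```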

The deployment above will run the nginx:1.7.9 Docker image.

To get information on our deployment, we can use the following command.

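A hedged example of such a command, assuming the controller created earlier is named my-nginx:

```shell
# Show the current state of the controller, including its replica count
kubectl get rc my-nginx
```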
From our application information, we can observe that the application is running 2 replicas. To scale up, we will use the scale command and specify the new number of replicas and the path to our deployment object. The command below scales the application up to 5 replicas. After running the command, you will get a "scaled" output.

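A sketch of the scale command, assuming the controller is named my-nginx:

```shell
# Set the desired replica count to 5 (controller name my-nginx is assumed)
kubectl scale rc my-nginx --replicas=5
```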
To confirm that the scaling was performed correctly, get the application information by using the command below. You will notice the number of replicas is now 5.

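Again assuming the controller name my-nginx, the confirmation command would look like this:

```shell
# The DESIRED and CURRENT columns should now both show 5
kubectl get rc my-nginx
```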
Another approach to scaling is modifying your resource description so that the number of replicas is set to the new number you need. After modifying the resource file, you can use the apply command, specifying the location of the file. For example, consider the previous resource description. We will now modify it so that we have 4 replicas, and then run the apply command as shown below. After running the apply command, you will get a "configured" output. We can then get information on our deployment to see the new number of replicas.

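A sketch of this approach, assuming the resource description lives in a file named nginx-app.yaml (the file name is an assumption):

```shell
# After editing nginx-app.yaml so that spec.replicas is 4,
# apply the updated resource description:
kubectl apply -f nginx-app.yaml
```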
Whichever scaling approach you prefer, you can either increase or decrease the number of replicas. Updating and load balancing across the replicas is handled automatically by Kubernetes.

Besides scaling, another challenge you will face after deploying your application to a production environment is updating it. Updates come in the form of new features, patches, and code changes. When updating applications, we need an approach that does not interrupt access to them. The model Kubernetes uses to update applications without interrupting access is known as a rolling update. In the rolling update model, instead of stopping the entire service, pods are updated one at a time.

When implementing a rolling update, the recommended approach to deploying applications is to use the deployment model. However, you can still use replication controllers to deploy your applications. The rolling update works in three steps. In the first step, a new replication controller is created using the updated configuration. In the second step, the number of replicas is increased on the new controller and decreased on the existing one until the required number of replicas is reached. The third step is deleting the old replication controller.

The kubectl rolling-update command is used to implement updates. The first approach you can use to start a rolling update is to create a configuration file and pass it to the rolling-update command. The configuration used for the rolling update needs to meet three criteria. The first criterion is that a new metadata.name value must be specified. The second criterion is that at least one common label in the spec.selector field must be overwritten. The third criterion is that the previous metadata.namespace must be used.

To demonstrate how to use a configuration file to start an update, let us consider the deployment we presented earlier in the article. In that deployment, we were running version 1.7.9 of the image, and we would like to update to version 1.9.1. We need to change the metadata name and update the image tag. This is a simple example; in a production environment you may need to change more configuration settings. Save the configuration below as nginx-app-v1.91.yaml.
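A sketch of such a configuration, meeting the three criteria described above; the resource names and labels are assumptions:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx-v2          # criterion 1: a new metadata.name (assumed name)
  # criterion 3: the previous namespace is kept (here, the default namespace)
spec:
  replicas: 5
  selector:
    app: nginx
    deployment: v2           # criterion 2: at least one selector label overwritten
  template:
    metadata:
      labels:
        app: nginx
        deployment: v2
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1   # the updated image tag
        ports:
        - containerPort: 80
```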

To start our update, we pass the name of the running controller and the configuration we would like to use for the update. The command below will initiate the update.
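A hedged example of that command, assuming the running controller is named my-nginx:

```shell
# Replace the controller my-nginx with the one defined in the new configuration
kubectl rolling-update my-nginx -f nginx-app-v1.91.yaml
```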

If we are only interested in updating the container image, we can use the rolling-update command and pass the name of the replication controller that requires an update, together with either a replication controller configuration file or the image name and tag. There are other optional flags; refer to the documentation to see how they are passed. It is important to note that you can only update the image of a single-container pod; otherwise you will get an error.
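A sketch of the image-only form, again assuming the controller name my-nginx:

```shell
# Roll the single-container pods of my-nginx over to the new image tag
kubectl rolling-update my-nginx --image=nginx:1.9.1
```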

In this article, we introduced two challenges of running applications in a production environment: scaling and updating. We demonstrated how to use the scale command and a configuration file to scale applications, and then how to use the rolling-update command to initiate application updates.

