Replication supports the scalability and resiliency of containerized applications, generally on cloud infrastructure. Replication hasn't been without operational issues, but it's getting better.
Kubernetes, the container orchestration platform, automates basic replication and offers features that extend that capability.
Kubernetes replicas are duplicate instances of an application's container image, each running in its own Pod, the logical organization unit for containers. Replication is compromised if the replicas all land on the same host. If possible, distribute replica Pods across multiple server complexes -- a group of servers that shares a common switch within a data center -- or across multiple data center sites. Replication control in software, however you exercise it, can't protect workload uptime if the administrator fails to assign Pods to truly independent IT resources to prevent common failures and cross-talk.
Kubernetes replication technology
Kubernetes Replica Sets and Deployments replaced the container orchestration project's original technology, Replication Controllers. Enterprises adopting containers with no previous Kubernetes commitment or no current replication setup should choose Kubernetes Replica Sets. Containers and DevOps are evolving rapidly, and adherents would be wise to stick to the mainstream trends unless there's a compelling reason to buck them. Organizations already familiar with Replication Controllers must understand the basic differences between Replica Sets and the older technology, and how Replica Sets control availability and scalability.
Kubernetes Replication Controllers are essentially lifecycle monitoring tasks that ensure that an application container's replicas are all running. The administrator specifies the number of Pods desired, as well as any load balancing on the service. Replication Controllers support basic and external load balancers for scalability. The Replication Controller setup resembles an imperative DevOps sequence, which means the technology is given an explicit set of steps to follow. When things change, the administrator updates the application under the Replication Controller with the kubectl rolling-update command, which changes each Pod individually rather than all at once. Rolling updates introduce a risk: If the system fails during the process, the Pods could be left running inconsistent versions.
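As a sketch, a minimal Replication Controller manifest looks like the following; the name, labels and image are illustrative, not prescribed:

```yaml
# Illustrative Replication Controller: keeps three copies of the Pod running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc            # hypothetical name
spec:
  replicas: 3             # desired number of Pods
  selector:
    app: web              # exact-match label selector
  template:
    metadata:
      labels:
        app: web          # Pods created from this template carry the label
    spec:
      containers:
      - name: web
        image: nginx:1.14 # illustrative image and tag
```

An imperative update would then run something like kubectl rolling-update web-rc --image=nginx:1.15, which replaces the Pods one at a time.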
Kubernetes Replica Sets are a declarative form of replication technology, which means that the administrator specifies what the correct operational state looks like, and the system takes steps to achieve it. The declarative model is easier to follow for most IT teams than imperative, and it fits with the concept of end-state modeling, where a DevOps team envisions the desired result.
Replica Sets are only half the story. The Set is the structure that the user declares, which preps the way for a Deployment. A Deployment creates Pods populated with the correct images, identifies the way that Kubernetes updates Pods as either rolling or replace, and sets limits on the number of Pods that can be out of service during a rollout and the number of extra Pods that can be created while the update runs.
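A hedged sketch of a Deployment that declares those choices -- replica count, update style and the rollout limits; names, labels and the image are illustrative:

```yaml
# Illustrative Deployment: declares the desired state; Kubernetes
# creates and manages a Replica Set to achieve it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy        # hypothetical name
spec:
  replicas: 3             # desired number of Pods
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate   # or Recreate to replace all Pods at once
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod out of service during the rollout
      maxSurge: 1         # at most one extra Pod created during the rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.14 # illustrative image and tag
```

Changing the image in this manifest and reapplying it triggers the rollout, rather than the administrator issuing an explicit update command.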
The update process is perhaps the biggest difference between Kubernetes' two replication mechanisms. With a Deployment, the system updates by rolling out a new version: Kubernetes creates a new Replica Set and shifts Pods from the old one to the new. Because Kubernetes tracks the rollout and can roll it back, there's no risk of Pods stranded at inconsistent versions. Deployments also allow the user to pause or undo an update if it results in Pod failures.
Another difference between Kubernetes Replica Sets and the older Controller technology could significantly affect large-scale container environments. Selectors improved from Replication Controllers to Replica Sets. The administrator no longer has to identify a set of Pods by a single exact-match label. Instead, the user can create a more complex grouping based on multiple attributes. The container deployment, therefore, is more finely parsed, with the operator gaining better ways to control a subset of Pods.
Use Kubernetes Replica Sets and Deployments
To create container replicas, define the Replica Set, including the number of copies to deploy, the application and its metadata. The parameters this process creates will look familiar to anyone who has defined a Replication Controller. However, the more complex selection set differentiates Replica Sets. Instead of only a single metadata value, such as Production-Version, name a number of key attributes and values. For example, test the deployment for a tier label, which identifies the application lifecycle management state, as well as an app label that specifies the application and its version. The Deployment, therefore, applies more flexibly to a set of Pods, depending on how the admin labels them. Replica Sets have more latitude to reference Pods during deployment or version changes than the Controller.
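For instance, a Replica Set selector might test both labels at once. A sketch of the selector fragment, with illustrative label values:

```yaml
# Fragment of a Replica Set spec: selects only Pods carrying both labels.
selector:
  matchLabels:
    tier: production    # lifecycle management state (illustrative value)
    app: web            # application identifier (illustrative value)
```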
It is common to use the labels to control deployment of new software versions. For example, choose to deploy a given version only to production units -- or to every system except production units. Other options include specifying an application among several or excluding one or more and selecting all the rest.
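Set-based selectors express those exclusions directly. A sketch that selects one application in every environment except production; keys and values are illustrative:

```yaml
# Fragment of a Replica Set spec using set-based match expressions.
selector:
  matchExpressions:
  - key: tier
    operator: NotIn   # match Pods whose tier label is anything but production
    values:
    - production
  - key: app
    operator: In      # restrict the match to one application among several
    values:
    - web
```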
The defined Replica Set is sent live, via Deployment, to make it available, update the applications or manually add or delete Pods. If a Pod fails, Kubernetes creates a replacement to maintain the declared number of replicas. Pods can also autoscale, if that option is configured.
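Autoscaling is configured separately, through a Horizontal Pod Autoscaler that targets the Deployment. A sketch, assuming a hypothetical Deployment named web-deploy:

```yaml
# Illustrative Horizontal Pod Autoscaler: scales the target Deployment
# between a floor and a ceiling based on observed CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deploy   # hypothetical Deployment to scale
  minReplicas: 3       # never drop below three Pods
  maxReplicas: 10      # cap growth under load
  targetCPUUtilizationPercentage: 80
```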
A user can expose the Pods that a Kubernetes Replica Set manages so that they are reachable at a stable -- optionally public -- address. That exposure covers the whole collection of Pods, with traffic load balanced across them.
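Exposure is typically done through a Service that selects the same labels as the Replica Set. A sketch, assuming the Pods carry the illustrative label app: web:

```yaml
# Illustrative Service: gives the labeled Pods a stable, load-balanced address.
apiVersion: v1
kind: Service
metadata:
  name: web-svc       # hypothetical name
spec:
  type: LoadBalancer  # request a publicly reachable, load-balanced address
  selector:
    app: web          # route traffic to all Pods with this label
  ports:
  - port: 80          # port the Service listens on
    targetPort: 8080  # illustrative container port
```

The command line offers a shortcut: kubectl expose with the appropriate type and port flags creates an equivalent Service for a Deployment or Replica Set.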
Most of the operational work an administrator is likely to do in Kubernetes will occur on Replica Sets. Think of the Set as the deployed unit, instead of the individual Pods taking that role. The flexibility afforded by Replica Sets means that the admin can still access the Pods and their state to make changes as needed.