Recently, I was looking for a simpler way to deploy and maintain Kubernetes clusters across various cloud providers. The goal was to use them for development, with the ability to manage the infrastructure and its costs with minimal effort. After exploring several options, I decided to experiment with Rancher.
Rancher offers a comprehensive software stack for teams implementing container technology. It tackles both the operational and security hurdles associated with managing numerous Kubernetes clusters. Additionally, it equips DevOps teams with integrated tools essential for managing containerized workloads. Rancher also offers an open-source version, allowing free deployment within one's infrastructure.
The Rancher platform can be deployed either as a Docker container or within a Kubernetes cluster running the K3s engine. The official documentation describes how to install Rancher on K3s using Helm. Rancher itself enables the creation and provisioning of Kubernetes clusters and their nodes using either the Rancher Kubernetes Engine (RKE) or the K3s engine.
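As a rough sketch of the two deployment options (command details may vary between Rancher versions, and `rancher.example.com` is a placeholder hostname):

```shell
# Option 1: single-node evaluation, Rancher as a Docker container.
# Recent Rancher versions require the --privileged flag.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest

# Option 2: install on an existing K3s cluster via Helm.
# cert-manager must already be installed in the cluster.
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm install rancher rancher-latest/rancher \
  --namespace cattle-system --create-namespace \
  --set hostname=rancher.example.com
```

The Docker variant is convenient for testing, while the Helm-on-K3s variant is the one recommended for anything longer-lived, since the Kubernetes cluster underneath makes upgrades and backups easier.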
Rancher has built-in support for many popular cloud providers, such as Amazon, Google, Linode, and DigitalOcean. We only need to store credentials, such as an API key issued by the cloud provider. If built-in support for the desired provider is not available, we can deploy the nodes manually and run the Rancher installation agent on them. As an example, let's deploy a Kubernetes cluster on DigitalOcean. First, we have to store the API token.
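The token is normally stored through the web UI, but Rancher also exposes cloud credentials through its v3 API. A hedged sketch of doing the same thing with `curl`; `RANCHER_URL`, `RANCHER_TOKEN`, and `DO_TOKEN` are placeholder environment variables you would set yourself:

```shell
# Store a DigitalOcean API token as a Rancher cloud credential.
# -k skips TLS verification, acceptable only for a test setup
# with a self-signed certificate.
curl -sk -X POST "${RANCHER_URL}/v3/cloudcredentials" \
  -H "Authorization: Bearer ${RANCHER_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "do-credential",
        "digitaloceancredentialConfig": {
          "accessToken": "'"${DO_TOKEN}"'"
        }
      }'
```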
Next, we can proceed to create the cluster. The architecture of the cluster includes three primary components: the control plane, etcd, and worker nodes. Additionally, we can configure various parameters such as the Kubernetes engine, networking, and security features.
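Once the nodes are up, the role split can be verified from any terminal with access to the cluster. A small sketch, assuming a working kubeconfig; note that etcd needs an odd number of members (typically three) to keep quorum:

```shell
# The ROLES column shows how control plane, etcd, and worker
# roles are distributed across the provisioned nodes.
kubectl get nodes -o wide

# The roles are also visible as node labels, e.g.
# node-role.kubernetes.io/etcd=true.
kubectl get nodes --show-labels | grep node-role
```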
Then, we wait until the provisioning process completes.
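Instead of refreshing the web UI, the provisioning state can also be polled from a terminal, assuming the Rancher CLI is installed and logged in to the server:

```shell
# List clusters and their current state; a new cluster moves
# from "provisioning" to "active" when the nodes are ready.
rancher clusters ls
```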
After completing the node deployment and setup, we can visit the main panel straight away and see the status of our cluster.
The web-based control panel comes with numerous tools and views, such as a resource list, log monitoring, cluster configuration, and an in-browser terminal for executing kubectl commands.
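A few commands one might typically run in that embedded terminal; the `coredns` deployment is only an example of a workload that exists in most clusters:

```shell
kubectl get nodes        # node health and roles
kubectl get pods -A      # workloads across all namespaces

# Stream logs from a system deployment, e.g. CoreDNS.
kubectl logs -n kube-system deploy/coredns
```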
I have yet to test its effectiveness in supporting the DevOps process. Perhaps I will discuss this in a future post.