RKE2, also known as RKE Government, is Rancher’s next-generation Kubernetes distribution. As the successor to RKE1, it was developed to address the limitations of the original Rancher Kubernetes Engine and to align more closely with modern Kubernetes standards. Most importantly, RKE2 does not rely on Docker the way RKE1 does: RKE1 used Docker both to deploy and manage the control plane components and as the container runtime for Kubernetes, whereas RKE2 launches control plane components as static pods managed by the kubelet, with containerd as the embedded container runtime.
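You can see this architecture on a running server node. A minimal check, assuming the default installation paths used later in this guide:

```sh
# Control plane components run as static pods; their manifests live on disk:
ls /var/lib/rancher/rke2/agent/pod-manifests/
# Typical contents: etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml,
# kube-scheduler.yaml, kube-proxy.yaml

# The embedded containerd listens on its own socket, independent of any Docker install:
sudo /var/lib/rancher/rke2/bin/crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock ps
```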
Here are key reasons why RKE2 is preferred over RKE1 for most use cases:
1. Full Kubernetes Compliance
- RKE2 is a CNCF-certified Kubernetes distribution, meaning it strictly adheres to upstream Kubernetes standards. This ensures better compatibility with the Kubernetes ecosystem.
- RKE1, while functional, had some proprietary implementations and bundled configurations, making it less compliant with upstream Kubernetes practices.
2. Improved Security
- RKE2 includes SELinux support and uses containerd (instead of Docker) as the container runtime by default, which enhances security and aligns with Kubernetes’ deprecation of Docker as a runtime.
- Built-in support for FIPS 140-2 compliance is available in RKE2, making it a better choice for organizations with strict security requirements.
- All system components in RKE2 (e.g., kubelet, etcd, controller-manager) run as non-root where possible, reducing attack surfaces.
3. Simplified Architecture
- RKE2 builds on k3s: it reuses k3s’ lightweight, modular design while adding the enterprise-grade features needed for production, so the architecture stays consolidated and simple.
- RKE1 runs every Kubernetes component as a Docker container orchestrated by the rke CLI, which is more complex and error-prone, especially in large-scale deployments.
4. Native High Availability (HA)
- RKE2 natively supports HA out of the box with minimal configuration. It integrates seamlessly with multiple control plane nodes and distributed etcd setups.
- In RKE1, setting up HA requires manual configuration, external load balancers, and more effort.
5. Extensible Add-ons
- RKE2 uses Helm Charts and manifests for managing built-in components and optional features (e.g., ingress controllers, CNI, etc.).
- RKE1 has a more rigid structure for integrating add-ons, making customization more challenging.
6. Long-Term Viability
- Rancher is focusing its development and support efforts on RKE2 as the go-to Kubernetes distribution for its ecosystem. RKE1 is no longer actively developed and will not receive future updates or new features.
7. Better Cloud-Native Integrations
- RKE2 is designed with cloud-native patterns in mind, making it easier to integrate with cloud providers, external storage, and networking solutions.
- RKE1 predates many of these advancements and is less flexible in modern environments.
8. Edge and IoT Use Cases
- Leveraging its k3s heritage, RKE2 is highly optimized for edge computing and resource-constrained environments, while still supporting traditional data centers and cloud setups.
- RKE1 is heavier and more suitable for on-premises or traditional data center use.
When to Use RKE1?
RKE1 may still be a good fit in limited scenarios:
- Legacy Systems: If your infrastructure is already using RKE1 and migrating to RKE2 isn’t immediately feasible.
- Docker Dependency: If your workflows depend heavily on Docker as the container runtime (though this should change, since Kubernetes removed dockershim, its built-in Docker integration, in v1.24).
Key Differences Summary
Feature | RKE1 | RKE2 |
---|---|---|
Kubernetes Compliance | Partial | Fully CNCF-certified |
Security | Basic | Enhanced (SELinux, FIPS 140-2, non-root components) |
Container Runtime | Docker | containerd (default) |
Architecture | Components run as Docker containers | Static pods managed by the kubelet |
HA Support | Manual | Native, out of the box |
Target Use Cases | Data center focus | Cloud, edge, and on-premises |
Long-Term Support | Deprecated; no new features | Actively developed and supported |
Why Choose RKE2 Today?
RKE2 offers a modern, secure, and efficient Kubernetes platform that aligns with both current and future Kubernetes standards. With active development, better performance, and enhanced security, RKE2 is the logical choice for production deployments and new projects.
Prerequisites:
- A Linux-based OS on every node (e.g., Ubuntu or CentOS)
- At least two nodes: one master (server) node and one or more worker (agent) nodes
- Sudo/root privileges on all nodes
- Network connectivity between the nodes
1. Install RKE2 on the Nodes
RKE2 is installed via a simple script. Here’s how to install RKE2 on the nodes.
On Master Node:
Run the following commands to install RKE2:
curl -sfL https://get.rke2.io | sh -
This installs RKE2 (the server variant, which is the default) and its required components.
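By default the script installs the latest stable release. If you need a different channel or an exact version, the script honors two environment variables (the version string below is only an example):

```sh
# Track the latest channel instead of stable:
curl -sfL https://get.rke2.io | INSTALL_RKE2_CHANNEL=latest sh -
# Or pin an exact release:
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=v1.28.9+rke2r1 sh -
```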
On Worker Nodes:
Worker nodes run the agent variant, so set INSTALL_RKE2_TYPE when running the same script on each worker:
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
This installs the rke2-agent service instead of rke2-server.
2. Start RKE2 on the Master Node
Once the installation is complete, enable and start the rke2-server service on the master node:
sudo systemctl enable rke2-server.service
sudo systemctl start rke2-server.service
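The first start can take a few minutes while container images are pulled. You can follow progress in the service logs:

```sh
# Follow the rke2-server logs until the node reports Ready:
sudo journalctl -u rke2-server -f
```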
3. Join Worker Nodes to the Cluster
To add worker nodes to the cluster, each agent needs the master node’s join token and the master’s supervisor URL (port 9345).
On the Worker Node:
- First, retrieve the RKE2 token from the master node:
sudo cat /var/lib/rancher/rke2/server/node-token
- Create /etc/rancher/rke2/config.yaml on the worker, pointing it at the master (replace <MASTER_NODE_IP> with the actual IP address of the master node and <NODE_TOKEN> with the token from the previous step):
server: https://<MASTER_NODE_IP>:9345
token: <NODE_TOKEN>
- Then enable and start the agent:
sudo systemctl enable rke2-agent.service
sudo systemctl start rke2-agent.service
(For a quick one-off test you can instead run the agent in the foreground: sudo rke2 agent --server https://<MASTER_NODE_IP>:9345 --token <NODE_TOKEN>.)
4. Check the Cluster Status
Once the worker nodes have joined the cluster, you can verify its status. RKE2 ships its own kubectl under /var/lib/rancher/rke2/bin and writes an admin kubeconfig, so on the master node run:
sudo /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
This will show all the nodes in the cluster, including the master and worker nodes.
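To avoid typing those full paths on the master node, you can export them for the current shell (the locations below are the defaults used by the install script):

```sh
# Convenience setup on the master node:
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get nodes
```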
5. Configure kubectl Access
To interact with the Kubernetes cluster from your local machine or a different node, you need to configure kubectl access. The master node stores the kubeconfig file, which contains the credentials for accessing the cluster. You can read it on the master node:
sudo cat /etc/rancher/rke2/rke2.yaml
Copy the contents and save them as config in the ~/.kube directory on your local machine. The file points at https://127.0.0.1:6443, so replace 127.0.0.1 with the master node’s IP address. Then set the KUBECONFIG environment variable to point to the file:
export KUBECONFIG=~/.kube/config
Now you can use kubectl to interact with the cluster.
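Putting the whole transfer together, a minimal sketch assuming SSH access as root and <MASTER_NODE_IP> as a placeholder:

```sh
# Copy the admin kubeconfig to a workstation and point it at the master:
mkdir -p ~/.kube
scp root@<MASTER_NODE_IP>:/etc/rancher/rke2/rke2.yaml ~/.kube/config
sed -i 's/127.0.0.1/<MASTER_NODE_IP>/' ~/.kube/config   # rewrite the server address
export KUBECONFIG=~/.kube/config
kubectl get nodes
```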
6. Managing the Cluster
RKE2 is managed through systemd services: rke2-server on the master node and rke2-agent on worker nodes (see the example after this list).
- Check the status of RKE2 services:
sudo systemctl status rke2-server
- Restart RKE2 services:
sudo systemctl restart rke2-server
- Stop RKE2 services:
sudo systemctl stop rke2-server
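On worker nodes the same commands apply to the agent unit, for example:

```sh
# Workers run rke2-agent rather than rke2-server:
sudo systemctl status rke2-agent
sudo journalctl -u rke2-agent -f   # follow the agent logs
```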
7. Deploy Applications
Once your cluster is up and running, you can deploy applications using kubectl. For example, to deploy a simple Nginx pod:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
This will create a deployment and expose it on a NodePort.
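To verify the deployment, look up the port that was assigned to the service and test it from any node (<NODE_IP> and <NODE_PORT> are placeholders to fill in):

```sh
# Find the NodePort Kubernetes assigned to the nginx service:
kubectl get svc nginx
# Then test it against any node's IP:
curl http://<NODE_IP>:<NODE_PORT>
```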
8. RKE2 Configuration (Optional)
You can customize RKE2’s settings by editing the /etc/rancher/rke2/config.yaml file. Any flag accepted by rke2 server or rke2 agent can be set here as a YAML key, for example the join token, SELinux enforcement, TLS SANs, or node labels.
Example:
token: <NODE_TOKEN>
selinux: false
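A slightly fuller example; these keys are documented RKE2 options, while the values are illustrative assumptions:

```yaml
# /etc/rancher/rke2/config.yaml
write-kubeconfig-mode: "0644"   # make rke2.yaml readable without sudo
tls-san:
  - <MASTER_NODE_IP>            # extra SAN on the API server certificate
node-label:
  - "environment=dev"           # label applied to this node at registration
```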
After modifying the configuration, restart RKE2 to apply changes:
sudo systemctl restart rke2-server
9. Updating RKE2
To upgrade RKE2 to a new version, re-run the install script (optionally pinning a channel or version, as in step 1):
curl -sfL https://get.rke2.io | sh -
This replaces the RKE2 binaries while preserving the cluster’s state; restart the rke2-server (or rke2-agent) service afterwards so the node picks up the new version.
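For example, to move to a specific release (the version string below is only an illustration):

```sh
# Upgrade the binaries to a pinned release, then restart to apply it:
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=v1.28.10+rke2r1 sh -
sudo systemctl restart rke2-server
```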
10. Uninstalling RKE2
The install script places helper scripts on each node for exactly this. To remove RKE2 completely, run:
sudo systemctl stop rke2-server
sudo rke2-killall.sh
sudo rke2-uninstall.sh
On worker nodes, stop rke2-agent instead. The uninstall script removes the binaries along with the data under /etc/rancher/rke2 and /var/lib/rancher/rke2.
Additional Resources
- Official Documentation: https://docs.rke2.io
- RKE2 GitHub: https://github.com/rancher/rke2
- Rancher Forums: https://forums.rancher.com
- Using Multipass for VM creation: https://multipass.run
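If you want to try this guide locally, Multipass can create the two nodes in a couple of commands (VM names and sizes are arbitrary choices; flag names are per recent Multipass releases):

```sh
# Create a master and a worker VM (Ubuntu is Multipass's default image):
multipass launch --name rke2-master --cpus 2 --memory 4G --disk 20G
multipass launch --name rke2-worker --cpus 2 --memory 4G --disk 20G
multipass shell rke2-master   # open a shell and run the install steps above
```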