The Kubernetes Vertical Pod Autoscaler (VPA) provides near-instantaneous recommendations for the CPU and memory requests of a pod. It can be used either as a read-only recommender or in a fully automated mode, in which pods are mutated with the recommended requests.
When a cluster operator is considering whether to use VPA for a specific workload, it is helpful to first monitor and visualize VPA recommendations alongside actual resource usage over a test period, before enabling VPA in an automated fashion. In this blog, we illustrate how to track VPA operation over such a test period using a popular open-source monitoring and visualization stack for Kubernetes, which includes Prometheus and Grafana.
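As a rough sketch of what such tracking involves, the two PromQL queries below can be charted side by side in Grafana: one for actual CPU usage and one for the VPA target recommendation. The recommendation metric name is an assumption here; it depends on how VPA objects are exported to Prometheus (for example via kube-state-metrics), while container_cpu_usage_seconds_total is the usual cAdvisor usage metric exposed by the kubelet.

# Actual CPU usage per container (in cores), averaged over 5 minutes
sum by (namespace, pod, container) (
  rate(container_cpu_usage_seconds_total{container!="", container!="POD"}[5m])
)

# VPA target CPU recommendation per container (in cores); metric name assumes
# VPA objects are exported to Prometheus, e.g. via kube-state-metrics
kube_verticalpodautoscaler_status_recommendation_containerrecommendations_target{resource="cpu", unit="core"}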
Motivation for VPA tracking

Kubernetes VPA can be used in two primary update modes: Off (read-only mode) and Auto (also known as Recreate). In the Off mode, the VPA custom resource provides near-instantaneous recommendations for suitable CPU and memory requests for pods in various types of Kubernetes workload resources, such as Deployments, Jobs, and DaemonSets. Workload administrators can use these recommendations to manually update pod requests. Given below is an example of the CPU and memory recommendations within a VPA custom resource object.
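The snippet below is an illustrative sketch rather than output from a real cluster: the object and container names and the recommended values are placeholders, but the structure mirrors the status.recommendation field that VPA populates.

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa              # hypothetical VPA object name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # hypothetical target workload
  updatePolicy:
    updateMode: "Off"           # read-only: recommendations only, no pod mutation
status:
  recommendation:
    containerRecommendations:
    - containerName: my-app
      lowerBound:               # minimum requests VPA considers reasonable
        cpu: 100m
        memory: "262144k"
      target:                   # recommended CPU and memory requests
        cpu: 250m
        memory: "524288k"
      uncappedTarget:           # recommendation ignoring any resource policy limits
        cpu: 250m
        memory: "524288k"
      upperBound:               # maximum requests VPA considers reasonable
        cpu: "1"
        memory: "1048576k"

Here, target is the value a workload administrator would typically copy into the pod's resource requests when applying recommendations manually.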
The Luna cluster autoscaler can now run with SUSE's RKE2 clusters on AWS EC2 nodes.
Compared to EKS, RKE2 on EC2 offers more operational control, better customization, improved flexibility, and federation across different infrastructures: EC2, on-prem, and edge. Luna 1.2.19 can create and manage RKE2 worker nodes, allowing you to scale your RKE2 compute resources more efficiently than with the standard Kubernetes cluster autoscaler.