Supercharge your Cluster Autoscaling with VPA

5/13/2025

 
Choosing accurate CPU and memory request values for Kubernetes workloads is a difficult endeavor. As a result, application developers often overprovision their workloads to ensure that application performance will not be affected, which increases cloud costs and wastes resources. Workloads can also be inadvertently underprovisioned, which can degrade application performance and even lead to service disruptions.

In this blog, we describe how the Kubernetes Vertical Pod Autoscaler (VPA) can be leveraged in conjunction with Luna, a powerful cluster autoscaler: VPA right-sizes the Kubernetes workloads, while Luna right-sizes the cluster and its nodes, resulting in cost-effective and performant operations.


Overview of VPA

The Vertical Pod Autoscaler in Kubernetes leverages the CPU and memory usage history of managed workloads to recommend resource request values for containers and, optionally, to update a container’s resource requests in an automated fashion. Workloads that can be vertically scaled using VPA include Deployments, StatefulSets, DaemonSets, as well as Custom Resources (that have the scale subresource defined). VPA uses the Kubernetes metrics server to monitor and track CPU and memory resource usage.

VPA is implemented as a Custom Resource in Kubernetes. An instance of the custom resource needs to be created for each workload that the user would like to manage or vertically autoscale. VPA can operate in the following update modes:

  1. Off: In this mode, VPA only provides recommendations for pod resource request values. These recommended values can be read from the VPA custom resource object (see the sketch after this list); applying them requires human intervention.
  2. Initial: In this mode, VPA provides recommendations in the VPA custom resource object just as in the “Off” mode. In addition, these recommendations are applied to pods at pod creation only and do not change during the lifetime of the pod. Such pod creations could have been triggered either by prior pod restarts or via horizontal pod scaling.
  3. Auto/Recreate: In these modes, VPA assigns resource requests on pod creation and also updates them over the lifetime of the pod by evicting pods so that they are recreated with the new requests.
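
As a minimal sketch of how recommendations can be read (in “Off” mode, or any other mode), the values surfaced in the VPA object’s status can be queried with kubectl; the VPA object name below is hypothetical:

% kubectl describe vpa my-app-vpa
% kubectl get vpa my-app-vpa -o jsonpath='{.status.recommendation.containerRecommendations}'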

Note: This blog focuses on the traditional behavior of the Vertical Pod Autoscaler (VPA), which involves evicting and restarting pods to apply new resource recommendations. It does not cover the newer in-place pod resizing feature, beta as of Kubernetes v1.33, which allows certain resource updates without pod restarts. If you're using Kubernetes 1.33 or later and are interested in in-place resizing, be aware that it introduces different behavior and considerations not discussed in detail here. Please review the section “VPA and In-place Pod Resizing” at the end of this blog post to learn more.

VPA: Under the hood

The Vertical Pod Autoscaler consists of 3 components in the kube-system namespace:
  1. Recommender, vpa-recommender: The recommender utilizes past and current CPU and memory usage values to calculate recommendations for resource requests for containers within a managed pod. The recommendations are made available within the Status field of the VerticalPodAutoscaler custom resource object.
  2. Updater, vpa-updater: The updater is responsible for checking the current resource requests of containers in a pod and evicting pods whose recommended resources vary significantly from the current allocation. Pod disruption budgets are respected during evictions, and the vpa-updater only comes into effect if the VPA is operated in Auto mode. Note that for pods whose resources need to be updated, the vpa-updater is only responsible for evicting the pod; the pod's controller then recreates it.
  3. Admission Controller, vpa-admission-controller: The admission controller sets correct resource requests on newly created pods. This includes pods created for the first time as well as pods recreated after eviction by the vpa-updater.

VPA also includes supporting components like a Mutating webhook configuration named vpa-webhook-config, and a ClusterIP service called vpa-webhook, which work together to apply recommended resource updates.
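
As a quick sanity check after installing VPA, these components and the supporting webhook objects can be listed as follows (a sketch; the resource names are taken from the component names above):

% kubectl get deployments -n kube-system | grep vpa
% kubectl get mutatingwebhookconfiguration vpa-webhook-config
% kubectl get service vpa-webhook -n kube-system
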
The VPA recommender calculates resource requests using a decaying histogram of monitored CPU and memory usage metrics. In a decaying histogram, the weight of each metric sample decreases over time. By default, a historical CPU usage sample loses half of its weight in 24 hours; this default can be changed using the flag --cpu-histogram-decay-half-life. The frequency at which CPU and memory metrics are fetched defaults to 1 minute and can be changed using the flag --recommender-interval. An extensive list of other flags to customize the vpa-recommender is documented here: VPA-recommender flags. A detailed description of the margins and confidence intervals that are applied on top of the decaying histogram technique can be found in this CNCF blog post: Optimizing VPA responsiveness and here.
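
As a sketch, these flags are supplied as args on the vpa-recommender container in its Deployment; the fragment below is illustrative (the values are examples, not recommendations):

  # Fragment of the vpa-recommender container spec (illustrative values)
  args:
  - --cpu-histogram-decay-half-life=12h0m0s   # halve a sample's weight every 12 hours instead of the 24-hour default
  - --recommender-interval=30s                # fetch metrics every 30 seconds instead of the 1-minute default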

The VPA object for each Kubernetes resource can also be configured to provide recommendations for both CPU and memory, or for just one of these resources, using the controlledResources parameter (shown in the example below). It is important to note that using VPA together with the Horizontal Pod Autoscaler on the same resource metrics (CPU or memory) for the same workload is not recommended. More details about this limitation can be found in these references: VPA design docs, Known Limitations of VPA and VPA on GKE Limitations.

Let’s look at an example of a VPA object:

apiVersion: "autoscaling.k8s.io/v1"
kind: VerticalPodAutoscaler
metadata:
  name: workload-c-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: workload-c
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: '*'
      minAllowed:
        cpu: 100m
        memory: 50Mi
      maxAllowed:
        cpu: 2
        memory: 500Mi
      controlledResources: ["cpu", "memory"]
...
    
The targetRef field refers to the Kubernetes resource that this VPA object manages, which in this case is a Deployment named “workload-c”. The updateMode field within updatePolicy can be one of the four modes listed in the Overview section: Off, Initial, Auto or Recreate. The minAllowed and maxAllowed fields set the absolute minimum and maximum values that the VPA can recommend. This prevents excessive resource usage as well as resource starvation for pods and helps keep performance and cost within acceptable bounds.

Let’s now look at an example of a recommendation within the VPA object after it begins operation:

Recommendation:
  Container Recommendations:
    Container Name:  workload-c
    Lower Bound:
      Cpu:     382m
      Memory:  262144k
    Target:
      Cpu:     587m
      Memory:  262144k
    Uncapped Target:
      Cpu:     587m
      Memory:  262144k
    Upper Bound:
      Cpu:     1
      Memory:  500Mi
    
In the above snippet, Target refers to the recommended values of the CPU and memory requests for the container named “workload-c”. It corresponds (by default) to the 90th percentile of the decaying histogram of observed peak usage values. This percentile can be configured using the flags --target-cpu-percentile and --target-memory-percentile when starting up the vpa-recommender.

Uncapped Target refers to the recommended values of the CPU and memory requests for the same container without taking into consideration the maxAllowed value in the spec section of the VPA custom resource object. The lower bound and upper bound values correspond to the 50th and 95th percentiles of the decaying histogram, respectively, and can be configured with the flags --recommendation-lower-bound-cpu-percentile and --recommendation-upper-bound-cpu-percentile (with corresponding flags for memory).

VPA: Better Together with Cluster Autoscaling

In this section, let’s look at how Vertical Pod Autoscaling and cluster autoscaling complement each other. VPA can be utilized to right-size pods that are initially either overprovisioned or underprovisioned. We delve into each of these cases below and show how a cluster autoscaler helps with both.

Application Under-Provisioning

When a pod is underprovisioned, VPA recommends larger resource values than its current allocation. In this case, the current cluster nodes may not be able to accommodate the updated pod, which can result in pods remaining in the Pending state. In such a case, having an Intelligent Kubernetes Cluster Autoscaler, like Luna, becomes critical to keep the application or service running without interruptions. Luna automates the addition of a right-sized cluster node to accommodate these pending pods (that were recreated because of the actions of the vpa-updater).

Additionally, Luna places pods on nodes via two techniques: 
  1. Bin-packing: In this placement mode, pods with modest resource requirements are placed along with other pods sharing the same node. 

  2. Bin-selection: In this placement mode, pods with larger resource requirements are placed on their own nodes. 
The resource thresholds that determine whether a pod will be bin-packed or bin-selected are configurable via these Luna parameters: binSelectPodCpuThreshold, binSelectPodMemoryThreshold and binSelectPodGPUThreshold. Any pod whose resource request equals or exceeds these thresholds will be bin-selected.
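
As a sketch, these thresholds can be set when deploying Luna; the snippet below assumes a Helm-style values layout, so consult the Luna documentation for the exact configuration mechanism and defaults:

binSelectPodCpuThreshold: "2"        # pods requesting 2 or more CPUs get their own node (the 2-CPU default is referenced in the experiments below)
binSelectPodMemoryThreshold: "8Gi"   # illustrative value only
binSelectPodGPUThreshold: "1"        # illustrative value only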

When an underprovisioned pod’s resources are increased by VPA, a bin-pack designated pod may become a bin-select designated pod. In this case, Luna automatically detects this change and places the pod appropriately on a bin-select node. We illustrate this via an experiment in the section: “Experiment 4: VPA and Luna Interoperation to Handle Pod Under-provisioning”.

Application Over-Provisioning

When a pod is overprovisioned, VPA recommends smaller resource values than its current allocation. In this case, since the pod’s resource request is smaller, total cluster capacity will not need to change - i.e. the cluster will continue to be able to accommodate the updated pod.

However, the decrease in resource requests could result in a change in the designation of a pod from bin-select to bin-pack. In this case, the pod, after restart, will be placed on a bin-pack node by Luna. The bin-select node will automatically get scaled-in (or deleted) if no other pods were also running on that node. A detailed experiment of this scenario is described in the section: “Experiment 3: VPA and Luna Interoperation to Handle Pod Over-provisioning”.

VPA & Luna Interoperability Experiments 

In this section, we detail a number of experiments to showcase how VPA and Luna interoperate under different operational conditions and modes.

Experiment 1: Interoperation of VPA in “Auto mode” and Luna

In this experiment, we illustrate an example where VPA recommends increased resources to a managed deployment. Luna promptly detects that the pod recreated by the vpa-updater cannot be accommodated as-is in the current cluster and hence adds a new node to the cluster and places the restarted pod on this new node.

When Luna and VPA (in Auto mode) are used together, their admission webhooks need to execute in the correct order: the VPA admission controller must adjust a pod’s resource values first, and Luna’s admission webhook must run afterwards, so that Luna chooses an appropriate node based on the updated resource values. Luna provides a configuration parameter, called webhookConfigPrefix, to enable this ordering.
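
One way to sanity-check the ordering is to list the mutating webhook configurations and confirm that Luna’s configuration name (set via webhookConfigPrefix) sorts where intended relative to vpa-webhook-config (a sketch):

% kubectl get mutatingwebhookconfigurations
# Confirm that Luna's webhook configuration (named using webhookConfigPrefix) is ordered
# so that it runs after "vpa-webhook-config", i.e., after VPA has set the pod's requests.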

1. Initial Setup

Two deployments, “workload-A” and “workload-B”, are running on 2 nodes in a Luna-enabled EKS cluster.

% kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE   NODE                                      
workload-a-746c7d676c-g6fvm   1/1     Running   0          14m   ip-192-168-29-254.us-west-1.compute.internal
workload-b-748848d855-8xq2x   1/1     Running   0          14m   ip-192-168-20-122.us-west-1.compute.internal
    

2. Starting a VPA managed workload

A third deployment, workload-C, managed by VPA is created on this cluster.

% kubectl apply -f workload-C.yaml
verticalpodautoscaler.autoscaling.k8s.io/workload-c-vpa created
deployment.apps/workload-c created
    
The VPA custom-resource is seen below.

% kubectl get vpa  
NAME             MODE   CPU   MEM   PROVIDED   AGE
workload-c-vpa   Auto                          5s
    
We see that the CPU and memory request values are not immediately available.

Initially, workload-C is placed by Luna on an existing Luna-managed node, ip-192-168-20-122, because there is sufficient capacity on that node.

% kubectl get pods -o wide               
NAME                          READY   STATUS    RESTARTS   AGE   NODE
workload-c-7758ccbf84-crgqm   1/1     Running   0          4s    ip-192-168-20-122.us-west-1.compute.internal
workload-c-7758ccbf84-qgv9d   1/1     Running   0          4s    ip-192-168-20-122.us-west-1.compute.internal
workload-a-746c7d676c-g6fvm   1/1     Running   0          20m   ip-192-168-29-254.us-west-1.compute.internal
workload-b-748848d855-8xq2x   1/1     Running   0          20m   ip-192-168-20-122.us-west-1.compute.internal
    
Workload-C was chosen such that its CPU usage can be configured to spike up or down as needed: cpu-stressor-pod.
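
For reference, a minimal CPU-stressor deployment along these lines might look like the sketch below; this is an illustrative stand-in, not the exact cpu-stressor-pod manifest linked above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: workload-c
spec:
  replicas: 2
  selector:
    matchLabels:
      app: workload-c
  template:
    metadata:
      labels:
        app: workload-c
    spec:
      containers:
      - name: workload-c
        image: busybox
        # Busy-loop that pegs one CPU; the load can be dialed up or down by changing the loop.
        command: ["sh", "-c", "while true; do :; done"]
        resources:
          requests:
            cpu: 100m
            memory: 50Mi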

We then see that the original pods’ usage begins to spike up, as captured below: 
  
% kubectl top pods       
NAME                          CPU(cores)   MEMORY(bytes)   
workload-c-7758ccbf84-crgqm   346m         1Mi           
workload-c-7758ccbf84-jvcxz   3303m        1Mi           
workload-a-746c7d676c-g6fvm   2588m        1Mi           
workload-b-748848d855-8xq2x   275m         1Mi 
    
We see that the VPA updater evicts one of the pods, and a newly created replacement pod enters into Pending state:

% kubectl get pods       
NAME                          READY   STATUS    RESTARTS   AGE
workload-c-7758ccbf84-btrcn   0/1     Pending   0          41s
workload-c-7758ccbf84-jvcxz   1/1     Running   0          101s
workload-a-746c7d676c-g6fvm   1/1     Running   0          22m
workload-b-748848d855-8xq2x   1/1     Running   0          22m
    
At the same time, we see that the CPU and memory recommendations are updated within the VPA custom resource object. 

% kubectl get vpa        
NAME             MODE   CPU   MEM       PROVIDED   AGE
workload-c-vpa   Auto   2     262144k   True       2m58s
    
Within a minute, we see that the pending pod successfully starts running on a newly created node (ip-192-168-30-113) provisioned by Luna. We verify that the node creation was in fact initiated by Luna by checking that the node’s labels include node.elotl.co/created-by=luna.
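
One quick way to perform this check (a sketch using standard kubectl) is:

% kubectl get nodes -l node.elotl.co/created-by=luna
% kubectl get node ip-192-168-30-113.us-west-1.compute.internal --show-labels | grep luna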

% kubectl get pods -o wide
NAME                          READY   STATUS   RESTARTS   AGE   NODE                                       
workload-c-7758ccbf84-btrcn   1/1     Running  0          81s   ip-192-168-30-113.us-west-1.compute.internal
    
This experiment showcases that for Vertical Pod Autoscaling to be used in a fully automated fashion, an intelligent cluster autoscaler like Luna is critical to scale out nodes when necessary.

Experiment 2: Interoperation of VPA in “Initial mode” and Luna

In this experiment, we illustrate an example where VPA recommends increased resources for a managed deployment. However, since VPA is configured in “Initial” mode, resource requests are not automatically applied to running containers; in this mode, requests are applied only during pod creation. Application administrators can therefore restart the pods manually to update their requests, as sketched below.
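
A minimal way to do this for the deployment in this experiment (a sketch) is a rolling restart, so that the replacement pods pass through the VPA admission controller and receive the recommended requests:

% kubectl rollout restart deployment workload-d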

Pods initially run on the existing Luna-managed node, ip-192-168-20-122.

% kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE     NODE
workload-d-79f5997949-cdxv4   1/1     Running   0          8m12s   ip-192-168-20-122.us-west-1.compute.internal
workload-d-79f5997949-gs7tb   1/1     Running   0          8m12s   ip-192-168-20-122.us-west-1.compute.internal
workload-a-746c7d676c-g6fvm   1/1     Running   0          3d8h    ip-192-168-29-254.us-west-1.compute.internal
workload-b-748848d855-8xq2x   1/1     Running   0          3d8h    ip-192-168-20-122.us-west-1.compute.internal
    
After new VPA recommendations have been calculated and published in the VPA object, we delete the pods manually.

% kubectl get vpa                                                
NAME             MODE      CPU   MEM       PROVIDED   AGE
workload-d-vpa   Initial   2     262144k   True       22m


% kubectl delete pod workload-d-79f5997949-cdxv4 workload-d-79f5997949-gs7tb
    
We see that 1 replica of the workload gets started on a new Luna-triggered node (ip-192-168-3-62) taking into account the pod’s newly assigned resource request.

% kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE    NODE
workload-d-79f5997949-4zpxw   1/1     Running   0          5m3s   ip-192-168-3-62.us-west-1.compute.internal
workload-d-79f5997949-kvwt5   1/1     Running   0          5m3s   ip-192-168-20-122.us-west-1.compute.internal
workload-a-746c7d676c-g6fvm   1/1     Running   0          3d9h   ip-192-168-29-254.us-west-1.compute.internal
workload-b-748848d855-8xq2x   1/1     Running   0          3d9h   ip-192-168-20-122.us-west-1.compute.internal
    
With the updated assignment, both the existing node (ip-192-168-20-122) and the new Luna provisioned node (ip-192-168-3-62) are operating at full capacity.

% kubectl top nodes      
NAME                                           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
ip-192-168-20-122.us-west-1.compute.internal   3977m        101%   588Mi           3%            
ip-192-168-29-254.us-west-1.compute.internal   2261m        57%    1065Mi          7%            
ip-192-168-3-62.us-west-1.compute.internal     4000m        102%   531Mi           3% 
    

Experiment 3: VPA and Luna Interoperation to Handle Pod Over-provisioning

When a pod is initially over-provisioned, VPA can recommend lower resource request values by observing resource usage over a period of time. These lower recommended values can result in a pod that Luna initially categorized as a bin-select pod later being categorized as a bin-pack pod. In the experiment described below, we showcase how VPA and Luna work together to handle this appropriately.

1. Creation of an over-provisioned workload

We create a workload, workload-g, that is overprovisioned. A VPA object is created for this deployment. Initially, VPA does not have a resource recommendation since there is insufficient historical data.

% kubectl get vpa
NAME             MODE   CPU   MEM   PROVIDED   AGE
workload-g-vpa   Auto                          34s
    

2. Pod placement as bin-select on separate nodes 

Initially, the pods of this deployment request 2 CPUs each, as specified in the deployment manifest. Luna marks these pods as bin-select pods since this CPU request meets Luna’s default bin-select threshold of 2 CPUs.

As can be seen below, Luna places the pods on two separate nodes, ip-192-168-11-117 and ip-192-168-31-139. 

% kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE    NODE
workload-a-746c7d676c-g6fvm  1/1     Running   0          10d    ip-192-168-29-254.us-west-1.compute.internal
workload-b-9c878584d-fv7tq   1/1     Running   0          3d2h   ip-192-168-20-122.us-west-1.compute.internal 
workload-g-6bd4dc4c66-4l9kr  1/1     Running   0          96s    ip-192-168-11-117.us-west-1.compute.internal
workload-g-6bd4dc4c66-pzq6m  1/1     Running   0          96s    ip-192-168-31-139.us-west-1.compute.internal
    

3. Pod right-sizing by VPA

After a few minutes of operation, VPA utilizes the usage metrics and recommends the following resource requests. We see that the recommended CPU request for the pod is only 163m of CPU while the original CPU request in the pod’s manifest was for 2 CPUs. 

% kubectl get vpa
NAME             MODE   CPU    MEM       PROVIDED   AGE
workload-g-vpa   Auto   163m   262144k   True       101s
    

4. Right-sized Pod placement via bin-packing by Luna

We use VPA in Auto mode in this experiment, so the pods get restarted and updated with the recommended lower resource values automatically. Luna detects the new resource values on the restarted pods and places them as bin-pack pods on an existing bin-pack node, ip-192-168-20-122, as seen below.

% kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE    NODE
workload-a-746c7d676c-g6fvm   1/1     Running   0          10d    ip-192-168-29-254.us-west-1.compute.internal
workload-b-9c878584d-fv7tq    1/1     Running   0          3d2h   ip-192-168-20-122.us-west-1.compute.internal
workload-g-6bd4dc4c66-pjs6x   1/1     Running   0          4s     ip-192-168-20-122.us-west-1.compute.internal
workload-g-6bd4dc4c66-qq9dl   1/1     Running   0          64s    ip-192-168-20-122.us-west-1.compute.internal
    
From this experiment, we see that using Luna with VPA can help handle overprovisioned pods by right-sizing them and placing them on appropriate nodes automatically.

Experiment 4: VPA and Luna Interoperation to Handle Pod Under-provisioning

Just as applications can be overprovisioned, as we saw in Experiment 3, applications can also be under-provisioned. This can result in a degradation of application performance and necessitates prompt remediation. In the following example, we show how VPA and Luna operate together to handle this situation without any manual intervention. 

1. Creation of an under-provisioned workload

An under-provisioned Kubernetes deployment, workload-f, is created. The workload’s CPU request is set to 100m in its manifest. We use the cpu-stressor-pod to configure its actual CPU usage to be much larger than this 100m request. A VPA object is also created for this deployment. Initially, the VPA object managing this deployment does not have any resource recommendations due to insufficient historical data:

% kubectl get vpa
NAME             MODE   CPU   MEM   PROVIDED   AGE
workload-f-vpa   Auto                          21s
    

2. Pod placement as bin-pack by Luna

Since the pod’s CPU request of 100m falls below Luna’s default bin-select threshold of 2 CPUs, Luna places both replicas of workload-f on a bin-pack node, ip-192-168-20-122.

% kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE     NODE
workload-a-746c7d676c-g6fvm   1/1     Running   0          10d     ip-192-168-29-254.us-west-1.compute.internal
workload-b-9c878584d-fv7tq    1/1     Running   0          3d22h   ip-192-168-20-122.us-west-1.compute.internal
workload-f-c9cd8df4-kkrrf     1/1     Running   0          26s     ip-192-168-20-122.us-west-1.compute.internal
workload-f-c9cd8df4-lfq2m     1/1     Running   0          26s     ip-192-168-20-122.us-west-1.compute.internal
    

3. Pod right-sizing by VPA

Using the kubectl top command, we see that workload-f’s CPU usage is much higher than its original request value of 100m for CPU.

% kubectl top pods
NAME                          CPU(cores)   MEMORY(bytes)
workload-a-746c7d676c-g6fvm   2516m        1Mi
workload-b-9c878584d-fv7tq    101m         1Mi
workload-f-c9cd8df4-774bv     1922m        1Mi
workload-f-c9cd8df4-b9fmr     1910m        1Mi
    
Soon, VPA utilizes the observed CPU usage values and recommends a higher CPU request of 2406m, as seen below:

% kubectl get vpa
NAME             MODE   CPU     MEM       PROVIDED   AGE
workload-f-vpa   Auto   2406m   262144k   True       2m2s
    

4. Right-sized Pod placement via bin-select by Luna

Since VPA is in Auto mode for this experiment, workload-f’s pods are recreated with the recommended higher CPU request values. The new CPU values now exceed Luna’s bin-select threshold of 2 CPUs. Luna, in turn, responds by placing these pods on newly created bin-select nodes, ip-192-168-28-198 and ip-192-168-11-176.

% kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE     NODE
workload-a-746c7d676c-g6fvm   1/1     Running   0          11d     ip-192-168-29-254.us-west-1.compute.internal
workload-b-9c878584d-fv7tq    1/1     Running   0          4d6h    ip-192-168-20-122.us-west-1.compute.internal
workload-f-c9cd8df4-6wvc9     1/1     Running   0          5h44m   ip-192-168-28-198.us-west-1.compute.internal
workload-f-c9cd8df4-l554v     1/1     Running   0          5h42m   ip-192-168-11-176.us-west-1.compute.internal
    
From this experiment, we see that Luna and VPA work well together to manage under-provisioned resource requests of pods without any manual intervention.

VPA and In-place Pod Resizing 

In-place pod update is a feature in Kubernetes that allows a pod’s resource requests to be updated without having to evict and restart the pod. It has been available as an alpha feature since Kubernetes 1.27 (behind a feature gate) and became a beta feature in Kubernetes 1.33.

Currently, in released versions of VPA (as of April 2025), the vpa-updater component does not utilize in-place pod resizing. However, VPA is being extended to leverage this feature; details of this development are tracked here: AEP-4016. It is important to note that VPA with in-place updates is not guaranteed to prevent pod disruptions since the actuating resize operation depends on the underlying container runtime. The end-user expectation is for pod disruptions to be minimal.

When VPA is able to utilize the in-place pod resizing feature, Luna’s hot node mitigation feature may be able to help handle those cases where pods with increased resource requests cause excessive node utilization. Hot node mitigation is described in detail in this blog post: Luna Hot Node Mitigation: A chill pill to cure pod performance problems.

Conclusion

In summary, when considering Vertical Pod Autoscaling for your Kubernetes workloads, leveraging an intelligent Kubernetes cluster autoscaler like Luna can ensure that restarted, scaled-up or scaled-down pods in your cluster are placed on just-in-time, right-sized nodes in a fully automated fashion. If you would like to try VPA with an intelligent cluster autoscaler, please download Luna and reach out to us with questions or comments at [email protected].


Author:

Selvi Kadirvel (VP Engineering, Elotl)

