Spotinst’s Kubernetes controller supports Kubernetes’ Persistent Volumes (PV) and Persistent Volume Claims (PVC). Persistent Volumes are Kubernetes storage resources, while Persistent Volume Claims are requests for storage that consume PVs. Pods that use Persistent Volume Claims to request Persistent Volumes are placed only on nodes in the same Availability Zone as the Persistent Volume. You can use Persistent Volumes that have already been created, or create them dynamically by using storage classes.

Note: Currently, we recommend using storageClass PVCs, which provision PVs dynamically as needed and ensure that Pods can always be scheduled.
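As an illustration, a storageClass PVC that provisions an EBS-backed PV dynamically might look like the following. The names and storage size are hypothetical, and the provisioner assumes the in-tree AWS EBS driver; `WaitForFirstConsumer` is what ties the volume's AZ to the node that actually runs the Pod:

```yaml
# Hypothetical StorageClass for dynamically provisioned EBS volumes.
# WaitForFirstConsumer delays provisioning until a Pod is scheduled,
# so the volume is created in the AZ of the node running the Pod.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2            # illustrative name
provisioner: kubernetes.io/aws-ebs
volumeBindingMode: WaitForFirstConsumer
---
# A PVC that requests storage from the class above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim           # illustrative name
spec:
  storageClassName: ebs-gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```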
How it Works

The Kubernetes Controller monitors Pods for Persistent Volume Claims, ensuring that the Autoscaler respects the PVCs during scaling events.

When Scaling Up
  • The spotinst-controller routinely searches for Pods that are pending scheduling.
  • The spotinst-controller looks for any Persistent Volume Claims associated with the Pods.
  • If PVCs are found, scaling is limited to the Availability Zones (AZs) in which the requested PVs are located.
  • Scale-up activities limited to specific AZs by Persistent Volume Claims are logged in the Spotinst Console’s Elastilog.
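As a concrete illustration of the steps above, the controller inspects pending Pods for PVC references of the following shape (all names here are illustrative):

```yaml
# Hypothetical Pod whose PVC ties scheduling to the AZ of the bound PV.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage   # illustrative name
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim   # the PVC reference the controller detects
```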

When Scaling Down

Kubernetes over AWS

By default, Ocean will scale down nodes running pods that have Persistent Volume Claims scheduled on them.

  • If the PVC is bound to an EBS-backed PV, an additional condition for scale down is that another node exists within the same AZ (since EBS volumes are AZ-dependent).
  • However, when the PVC is bound to an EBS-backed PV, some downtime is expected, since the volume must be detached from the scaled-down node and reattached to the node the pod moves to after scale down.

Therefore, Ocean de-prioritizes scale down for nodes hosting pods with PVCs. In other words, Ocean will try to fulfill the scale-down potential with nodes that don’t host pods with PVCs.

Note: This also applies to other PV types (EFS, NFS, third party), although no downtime is expected for those. Ocean prefers to optimize performance and reduce unnecessary activity in the cluster.

  • For mission-critical workloads that cannot tolerate downtime, use the Spot restrict-scale-down label. This prevents the node running these workloads from being scaled down.
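Assuming the label key is `spotinst.io/restrict-scale-down` (verify the exact key against the current Spot documentation), it is applied as a pod label, for example:

```yaml
# Hypothetical Pod labeled so Ocean will not scale down its node.
apiVersion: v1
kind: Pod
metadata:
  name: critical-workload          # illustrative name
  labels:
    spotinst.io/restrict-scale-down: "true"   # assumed label key
spec:
  containers:
    - name: app
      image: nginx
```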

Kubernetes over GCP

Currently, Ocean will not scale down nodes with pods that have Persistent Volume Claims scheduled on them.