Constraints and Labels
To make scheduling more efficient and compatible with Kubernetes, Ocean supports all of the Kubernetes constraint mechanisms for scheduling pods:
- Node Selector – Constrain Pods to nodes with particular labels.
- Node Affinity – Constrain which nodes your Pod is eligible to be scheduled on, based on node labels. Both hard and soft affinity are supported (`requiredDuringSchedulingIgnoredDuringExecution` / `preferredDuringSchedulingIgnoredDuringExecution`).
- Pod Affinity and Pod Anti-Affinity – Schedule a Pod based on which other Pods are (or are not) running on a node.
- Pod Port Restrictions – Ocean validates that each pod has its required ports available on the machine.
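For instance, pod anti-affinity can be used to spread replicas of the same application across nodes. The sketch below uses standard Kubernetes `podAntiAffinity`; the pod name, `app` label, and image are illustrative placeholders, not values from Ocean:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-anti-affinity   # example name
  labels:
    app: web                     # example label used by the anti-affinity rule
spec:
  affinity:
    podAntiAffinity:
      # Hard rule: do not schedule this Pod on a node that already
      # runs a Pod labeled app=web.
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - web
          topologyKey: kubernetes.io/hostname
  containers:
    - name: web
      image: k8s.gcr.io/pause:2.0
```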
Spot labels let you adjust Ocean's default scaling behavior. By adding Spot labels to your pods, you can control the node termination process or the node's life cycle.
| Label Key | Accepted Values | Description |
|---|---|---|
| `spotinst.io/restrict-scale-down` | `true` | When a node is running a pod with this label, it will not be scaled down by the Spot Auto-Scaler |
| `spotinst.io/node-lifecycle` | `od` | Pods which contain this node selector/affinity are forced to run on an on-demand instance |
| `spotinst.io/gpu-type` | GPU accelerator type | Sets the GPU accelerator. Note: this setting applies only to GKE clusters |
Using an OD nodeSelector:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-selector
spec:
  containers:
    - name: with-node-selector
      image: k8s.gcr.io/pause:2.0
      imagePullPolicy: IfNotPresent
  nodeSelector:
    spotinst.io/node-lifecycle: od
```
Using OD nodeAffinity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: spotinst.io/node-lifecycle
                operator: In
                values:
                  - od
  containers:
    - name: with-node-affinity
      image: k8s.gcr.io/pause:2.0
```
Using the restrict-scale-down label:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        spotinst.io/restrict-scale-down: 'true'
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "2Gi"
              cpu: "2"
            limits:
              memory: "4Gi"
              cpu: "4"
```