The following tutorial covers importing existing Elastigroups running a Kubernetes cluster into Ocean.
If your cluster has only one Elastigroup, refer to Use Case 1.
To import a cluster that consists of multiple Elastigroups, refer to Use Case 2.

Use Case 1 – Single Elastigroup

If you have one Elastigroup that you would like to upgrade to Ocean, follow these steps:

  • Open the Elastigroup in question and click the icon at the top right.
  • Select Upgrade in the pop-up window.
  • Wait for the confirmation message indicating that the upgrade process is complete.

For Kubernetes Elastigroups created using KOPS, refer to the dedicated KOPS import guide.

Use Case 2 – Multiple Elastigroups

If you have multiple Elastigroups serving the same Kubernetes cluster, follow the instructions below:

Considerations

  • Ocean manages the entirety of the Kubernetes cluster’s nodes. If the nodes are distributed across multiple Elastigroups, all of these Elastigroups should be migrated at once.

Prerequisites

To complete this tutorial, gather the following information:

  1. List the Elastigroups connected to the target Kubernetes cluster.
  2. For each of these Elastigroups, note the following data for later use:
    1. Autoscaler labels
    2. User-data script
    3. AMI ID
  3. Ensure all the Elastigroups in the cluster use the same Cluster Identifier (ID). This identifier is used to connect the existing Spotinst Controller installed on the Kubernetes cluster to Ocean.
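If you prefer to gather this data via the API rather than the console, the following Python sketch pulls each group's configuration from the Elastigroup API. The endpoint, response shape, and field paths (especially the autoscaler labels under the Kubernetes integration block) are assumptions to verify against the Elastigroup API reference; the token, account ID, and group IDs are placeholders.

```python
# A minimal sketch for gathering the prerequisite data via the Spot API.
# Endpoint, response shape, and field paths are assumptions; verify them
# against the Elastigroup API reference before relying on the output.
import base64
import requests

TOKEN = "<SPOT_API_TOKEN>"          # placeholder
ACCOUNT_ID = "<SPOT_ACCOUNT_ID>"    # placeholder
GROUP_IDS = ["sig-xxxxxxx1", "sig-xxxxxxx2"]  # Elastigroups of the target cluster

for group_id in GROUP_IDS:
    resp = requests.get(
        f"https://api.spotinst.io/aws/ec2/group/{group_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"accountId": ACCOUNT_ID},
    )
    resp.raise_for_status()
    group = resp.json()["response"]["items"][0]
    launch = group["compute"]["launchSpecification"]

    print("Group:", group_id)
    print("  AMI ID:", launch.get("imageId"))
    # User data is returned base64-encoded; decode it for the record.
    user_data = launch.get("userData", "")
    print("  User data:", base64.b64decode(user_data).decode() if user_data else "<none>")
    # Assumed location of the autoscaler labels (Kubernetes integration block).
    k8s = group.get("thirdPartiesIntegration", {}).get("kubernetes", {})
    print("  Autoscaler labels:", k8s.get("autoScale", {}).get("labels"))
```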

Step 1: Create the Ocean Cluster

  • Choose one of your Kubernetes Elastigroups and upgrade it to Ocean using the button at the top-right corner, as described in Use Case 1. This will be your source Elastigroup, and Ocean will take over its management.

Step 2: Configure Ocean Custom Launch Specifications

The next step is to configure the Ocean cluster to handle all the different label sets configured on the remaining Elastigroups.

  • Navigate to your Ocean cluster.
  • Click on the Actions menu and select Custom Launch Specifications.
  • For each of the remaining Elastigroups running worker nodes, do the following:
    • Click Add Specification.
    • Add the label sets gathered as part of the prerequisites.
    • Set the matching user data and AMI.
  • Click Save Changes to commit the changes.
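If you would rather create the launch specifications programmatically, the sketch below posts one Ocean launch specification per remaining Elastigroup, reusing the labels, user data, and AMI gathered in the prerequisites. The endpoint and payload field names follow the Ocean AWS launch-spec API as commonly documented, but treat them as assumptions and confirm them against the Ocean API reference; all IDs and values are placeholders.

```python
# A minimal sketch for creating Ocean custom launch specifications via the API.
# Endpoint and payload field names are assumptions to confirm against the
# Ocean API reference; all IDs and values below are placeholders.
import requests

TOKEN = "<SPOT_API_TOKEN>"        # placeholder
ACCOUNT_ID = "<SPOT_ACCOUNT_ID>"  # placeholder
OCEAN_ID = "o-xxxxxxxx"           # the Ocean cluster created in Step 1

# One entry per remaining Elastigroup, using the data gathered in the prerequisites.
specs = [
    {
        "imageId": "ami-0123456789abcdef0",
        "userData": "<base64-encoded user-data script>",
        "labels": [{"key": "workload", "value": "batch"}],
    },
]

for spec in specs:
    resp = requests.post(
        "https://api.spotinst.io/ocean/aws/k8s/launchSpec",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"accountId": ACCOUNT_ID},
        json={"launchSpec": {"oceanId": OCEAN_ID, **spec}},
    )
    resp.raise_for_status()
    print("Created launch specification:", resp.json()["response"]["items"][0].get("id"))
```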

Step 3: Disable Autoscaling on the Imported Elastigroups

Disable autoscaling in the Elastigroups that were converted to Launch Specifications so that Ocean can take over scaling.

  • For each of the groups, perform the following steps:
    • Navigate to the Elastigroup.
    • Under the Actions menu, click Edit Configuration.
    • Scroll down to the Advanced section.
    • Remove the Autoscaler selection.
    • Continue to the review page and click Update to commit the changes.
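As an alternative to the console steps above, here is a minimal sketch of disabling the Kubernetes autoscaler integration on each converted group via the Elastigroup update API. The autoScale field path is an assumption based on the Elastigroup Kubernetes integration schema; verify it before running. The token, account ID, and group IDs are placeholders.

```python
# A minimal sketch for disabling the Kubernetes autoscaler on each converted
# Elastigroup via the group update API. The autoScale field path is an
# assumption; verify it against the Elastigroup API reference before running.
import requests

TOKEN = "<SPOT_API_TOKEN>"        # placeholder
ACCOUNT_ID = "<SPOT_ACCOUNT_ID>"  # placeholder
GROUP_IDS = ["sig-xxxxxxx1", "sig-xxxxxxx2"]  # groups converted to launch specs

payload = {
    "group": {
        "thirdPartiesIntegration": {
            "kubernetes": {"autoScale": {"isEnabled": False}}
        }
    }
}

for group_id in GROUP_IDS:
    resp = requests.put(
        f"https://api.spotinst.io/aws/ec2/group/{group_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"accountId": ACCOUNT_ID},
        json=payload,
    )
    resp.raise_for_status()
    print("Autoscaler disabled for", group_id)
```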

Step 4: Downscale secondary Elastigroups converted to Launch Specifications

The last step is to downscale the existing worker nodes and let Ocean launch the instances that best fit the cluster's needs.

  • Navigate to the Elastigroups configured for your Kubernetes cluster.
  • On the Actions menu, click Manage Capacity.
  • Reduce the target capacity.
    • Note: It is highly recommended to reduce the target capacity in batches of 10-20% and to repeat this step until the groups are fully scaled down.
  • Navigate to your Ocean cluster and verify that it is spinning up the required resources and handling the cluster's pending pods.
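For those who want to script the batched downscale, the sketch below reduces a group's target capacity by roughly 20% at a time and pauses between batches so Ocean can reschedule the displaced pods. The endpoint and capacity field path are assumptions to verify against the Elastigroup API; the group ID, current target, and pause interval are placeholders.

```python
# A minimal sketch of a batched downscale: lower the target capacity by
# roughly 20% per batch and pause between batches so Ocean can reschedule the
# displaced pods. Endpoint and capacity field path are assumptions to verify.
import math
import time
import requests

TOKEN = "<SPOT_API_TOKEN>"        # placeholder
ACCOUNT_ID = "<SPOT_ACCOUNT_ID>"  # placeholder
GROUP_ID = "sig-xxxxxxx1"         # one of the secondary Elastigroups
CURRENT_TARGET = 20               # the group's current target capacity
BATCH_FRACTION = 0.2              # reduce ~20% per batch
PAUSE_SECONDS = 600               # wait between batches while pods reschedule

step = math.ceil(CURRENT_TARGET * BATCH_FRACTION)
target = CURRENT_TARGET
while target > 0:
    target = max(0, target - step)
    resp = requests.put(
        f"https://api.spotinst.io/aws/ec2/group/{GROUP_ID}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"accountId": ACCOUNT_ID},
        json={"group": {"capacity": {"target": target, "minimum": 0}}},
    )
    resp.raise_for_status()
    print("Target capacity set to", target)
    if target > 0:
        time.sleep(PAUSE_SECONDS)
```

While the script pauses between batches, you can watch the migration with `kubectl get pods --all-namespaces --field-selector=status.phase=Pending` and confirm that Ocean is launching replacement nodes for the evicted workloads.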

Step 5: Sit back and relax, Ocean’s got you covered!

If you encounter any issues, please contact our support engineers via chat or email.