vCommander Build: 7.0.2
Scenario Download Link: Download from GitHub


Kubernetes is an open-source system for deploying and managing containerized applications within a hybrid or cloud environment. Using the Embotics vCommander cloud management platform, you can instantiate a Kubernetes cluster, and then use vCommander’s orchestration, self-service, cloud governance and cost optimization features to manage the cluster.

This article shows you how to use vCommander 7.0.2 and greater to get a Kubernetes cluster up and running quickly on AWS, and to add the deployed cluster to vCommander’s inventory as a cloud account (managed system). While there are many ways to deploy Kubernetes, this solution uses the kubeadm deployment and installation method on CentOS 7 Linux, 64-bit architecture.

This article is intended for systems administrators, engineers and IT professionals. Previous experience with Linux, Docker and AWS is required.

Changelog

Version 1.0: Initial version

Prerequisites

  • vCommander release 7.0.2 or greater

Before you begin, you must:

  • Add an AWS account as a vCommander cloud account (managed system). See Adding a Cloud Account (Managed System).
  • Create a vCommander deployment destination, targeting the location in AWS where the Kubernetes cluster will be deployed. See Configuring Automated Deployment for Approved Service Requests.
  • Ensure that the user requesting the Kubernetes cluster has permission to deploy to the targeted location in AWS.

  • Ensure that the root Linux user can log in through SSH, because Docker and Kubernetes require the configuration of system-level resources.

Overview

To provision a Kubernetes cluster on AWS with vCommander, the following steps are required. Further details for each step are provided after this overview.

  1. Create a CentOS 7 AMI in AWS.

  2. Test an SSH connection to the deployed instance.

  3. Create guest OS credentials for the “centos” user; these credentials are referenced by the workflows you will import.

  4. Install a workflow plug-in step that automatically adds the deployed cluster to vCommander’s inventory.

  5. Import completion workflows from the Embotics GitHub repository; these workflows will run once the cluster is deployed.

  6. Create a custom attribute for the Kubernetes version.

  7. Create a custom attribute for the cloud account (managed system) name.

  8. Synchronize the inventory for your AWS cloud account (managed system).

  9. Create a service catalog entry for users to request a Kubernetes cluster.

  10. Submit a service request.

Create a CentOS 7 AMI in AWS

Create a generic AMI in AWS to use as the base image for all nodes in the Kubernetes cluster.

  1. Log into the AWS console, navigate to EC2, and click Launch Instance.
  2. Choose an Amazon Machine Image (AMI): Go to the AWS Marketplace tab, search for “centos”, and select CentOS 7 (x86_64) - with Updates HVM. This image has no software cost, but will incur AWS usage fees.
  3. Review the AMI details and click Continue.
  4. Instance Type page: Select t2.medium, which is a good starting point for Kubernetes deployments. You may want to choose a larger instance type, depending on your application workloads. Click Next: Configure Instance Details.
  5. Configure Instance Details page: Configure options appropriate for your organization. Click Next: Add Storage.
  6. Add Storage page: Kubernetes can run on any storage class or volume type. Keep the default size of 8 GiB. Click Next: Add Tags.
  7. On the Add Tags page, add tags as required. Click Next: Configure Security Group.
  8. On the Configure Security Group page, configure the following firewall rules:
    • SSH: TCP port 22
    • Custom TCP: TCP port 6443
  9. Click Review and Launch.
  10. A dialog appears, prompting you to select an existing key pair or create a new one. If you already have an AWS key pair, select it in the list. If not, select Create a new key pair. Enter a key pair name, such as "kubernetes-aws-vcommander", and click Download Key Pair. See Managing Key Pairs in the vCommander documentation to learn more.
  11. Save the .pem file to a known location.

    Important: Do not lose your SSH private key file! This PEM-encoded file is required to connect the vCommander workflow to the deployed EC2 instances.

  12. Click Launch Instances.
  13. Under Instances, right-click the instance and select Image > Create Image.

Once AWS has created the image, which may take up to five minutes, your AMI is available for use.
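The same launch-and-capture steps can also be scripted with the AWS CLI. The following is a hedged sketch, not part of the vCommander scenario: every ID, key pair name, and AMI name below is a placeholder, and it assumes the AWS CLI is already configured with credentials for your account.

```shell
# Launch a t2.medium instance from the CentOS 7 marketplace AMI.
# ami-xxxxxxxx stands in for the CentOS 7 AMI ID in your region,
# and sg-xxxxxxxx for a security group opening TCP 22 and 6443.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.medium \
    --key-name kubernetes-aws-vcommander \
    --security-group-ids sg-xxxxxxxx

# Once the instance is running, capture it as a reusable base image
# (the instance ID and image name are placeholders):
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "kubernetes-centos7-base"
```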

Test an SSH connection to the deployed instance

Ensure that you can open an SSH connection to the instance you just deployed, using the PEM-encoded SSH key you saved earlier. vCommander workflows will use this key to authenticate to AWS. For example:

ssh -i /path/to/my-key-pair.pem centos@ec2-198-51-100-1.compute-1.amazonaws.com

To learn more, see Connect to Your Linux Instance in the AWS documentation.
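If the connection is rejected, check the key file's permissions first: ssh refuses private keys that are group- or world-readable. A quick sketch (the key path and hostname are placeholders):

```shell
# Lock the private key down to owner-read-only before connecting;
# ssh rejects keys with more permissive modes.
chmod 400 /path/to/my-key-pair.pem

# The CentOS marketplace image logs in as the "centos" user:
ssh -i /path/to/my-key-pair.pem centos@ec2-198-51-100-1.compute-1.amazonaws.com
```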

Create guest OS credentials for the “centos” user

The completion workflows use “centos” user credentials to open an SSH connection to the deployed instances. Before importing the workflows, you must create a set of credentials for the centos user, using the PEM-encoded key from the instance you just created.

  1. In vCommander, go to Configuration > Credentials.
  2. Click Add.
  3. In the Add Credentials dialog, select RSA Key for the Credentials Type.
  4. Enter "aws" for the Name.

    This name is hard-coded in the completion workflows, so you must use this exact name.

  5. For Username, enter "centos".
  6. Open the key.pem file from the instance in a text editor, copy the entire contents, and paste the contents into the RSA Key field.
  7. For Description, enter "Kubernetes-AWS".
  8. For Category, keep the default setting, Guest OS Credentials.
  9. Click OK.
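Before pasting, it can help to confirm that the file you saved really is a PEM-encoded private key, since the RSA Key field expects the full PEM body including the header line. A minimal sketch, using the key.pem filename from the step above (adjust the path as needed):

```shell
# Check that a key file starts with the PEM header that AWS-generated
# RSA key pairs use. Paste the whole file, headers included, into the
# RSA Key field.
check_pem() {
    head -n 1 "$1" 2>/dev/null | grep -q -- "-----BEGIN RSA PRIVATE KEY-----"
}

if check_pem key.pem; then
    echo "key.pem looks like a PEM-encoded RSA private key"
else
    echo "key.pem is missing or does not start with the expected PEM header" >&2
fi
```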

Install the plug-in workflow step package

Go to Embotics GitHub / Plug-in Workflow-Steps and clone or download the k8s repository. Then install the Kubernetes plug-in workflow step package, which contains a plug-in workflow step to add the deployed Kubernetes cluster to vCommander’s inventory as a cloud account (managed system). The completion workflows in this scenario reference this plug-in step.

To learn how to download and install workflow plug-in steps, see Adding Workflow Plug-In Steps in the vCommander User Guide. 

Import the completion workflows

Import the following two vCommander completion workflows to complete the provisioning and configuration of the cluster:

  • aws-post-deploy-k8s-kubeadm-component.yaml: a component-level completion workflow that runs on each provisioned node and provides common utilities (like Docker)

  • aws-post-deploy-k8s-kubeadm-svc.yaml: a service-level completion workflow that facilitates configuration of the Kubernetes cluster

  1. Go to Embotics GitHub / Scenarios and clone or download the repository.

  2. In vCommander, go to Configuration > Service Request Configuration > Completion Workflow and click Import.

  3. Go to the Scenarios repo that you cloned or downloaded, then from the Deploying-Kubernetes-Cluster-AWS-kubeadm directory, select the aws-post-deploy-k8s-kubeadm-component.yaml file, and click Open.

    vCommander automatically validates the workflow and displays the validation results in the Messages area of the Import Workflow dialog.

  4. Enter a comment about the workflow in the Description of Changes field, and click Import.

  5. Repeat this process to import the second downloaded workflow, aws-post-deploy-k8s-kubeadm-svc.yaml.
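The exact logic lives in the workflow YAML files, but at a high level a kubeadm-based bring-up performs steps like the following on the nodes. This is a hedged sketch of the standard kubeadm procedure on CentOS 7, not a copy of the workflow contents; it assumes the Kubernetes yum repository is already configured, and the join token and master address are placeholders printed by kubeadm init.

```shell
# On every node: install Docker and the Kubernetes tooling, then start them.
yum install -y docker kubelet kubeadm kubectl
systemctl enable --now docker kubelet

# kubeadm requires iptables to see bridged traffic:
sysctl -w net.bridge.bridge-nf-call-iptables=1

# On the master only: initialize the control plane (listens on TCP 6443,
# the port opened in the security group earlier).
kubeadm init --pod-network-cidr=10.244.0.0/16

# On each worker: join the cluster using the token from kubeadm init
# (all values below are placeholders).
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```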

Create a custom attribute for the Kubernetes version

To enable requesters to select which version of Kubernetes to install, create a custom attribute.

  1. In vCommander, go to Configuration > Custom Attributes.
  2. Click Add.
  3. Name the attribute "kubernetes_version" and keep the default values for all other settings on this page.
    This name is hard-coded in the completion workflows, so you must use this exact name.
  4. Click Next, add the appropriate Kubernetes versions as shown in the following image, and click OK.

Create a custom attribute for the cloud account (managed system) name

To store the name of the Kubernetes cloud account (managed system), create another custom attribute.

  1. In vCommander, go to Configuration > Custom Attributes.
  2. Click Add.
  3. Name the attribute "kubernetes_name".
    This name is hard-coded in the completion workflows, so enter the name exactly as shown.
  4. From the Type drop-down list, select Text.
  5. Keep the default values for all other settings on this page.
  6. Click Next, choose Free Form, and click Finish.

Synchronize the inventory for your AWS cloud account (managed system)

To ensure that your newly created AMI is available to add to the service catalog, synchronize the inventory for your AWS cloud account (managed system).

  1. In vCommander, go to Views > Operational.
  2. Right-click your AWS cloud account (managed system) and select Synchronize Inventory.

Create a service catalog entry

Next, create an entry in the service catalog that:

  • Allows the requester to choose which Kubernetes version to deploy (optional)

  • Allows the requester to specify the name of the vCommander cloud account (managed system)

  • Provisions three instances from the previously created EC2 AMI

  • Applies the component-level completion workflow to each deployed instance

  • Applies the service-level completion workflow to the deployed cluster

  1. In vCommander, go to Configuration > Service Request Configuration > Service Catalog, then click Add Service.
  2. Enter a name and description for the service, and optionally apply a custom icon and categories, then click Next.

  3. On the next page, add the AMI for provisioning the base instances for the cluster. Click Add > Template, Image or AMI and navigate to the AMI you created earlier.

    The workflows support any number of nodes, but in this example, we’re creating a cluster with one master and two worker nodes, so you must click Add to Service three times. When you click Close, the three components are visible.

  4. Create a custom component to store the value for the Kubernetes version custom attribute. On the Component Blueprints page, click Add > New Component Type.

  5. For the Name, enter "kubernetes_version", enter an annual cost of 0, and then click Add to Service.

    This name is hard-coded in the completion workflows, so enter the name exactly as shown.

  6. Create a second custom component to store the value for the name of the Kubernetes cluster when it’s added to vCommander as a cloud account (managed system). On the Component Blueprints page, click Add > New Component Type.
  7. In the Create New Component Type dialog, enter "kubernetes_name" for the Name and an annual cost of 0, then click Add to Service.

    This name is hard-coded in the completion workflows, so you must use this exact name.

  8. Next, configure the blueprint for each of the VM components. On the Infrastructure tab:
    • For Completion Workflow, select aws-post-deploy-k8s-kubeadm-component.
    • Customize the Deployed Name to match your enterprise naming convention, using vCommander variables. In the image below, the variable #{uniqueNumber[3]} is used to add a three-digit unique number to the VM name.

  9. On the Resources tab:
    • Set the Instance Type to t2.medium (at minimum).

      Note: Increase the resources to support more concurrent pods/containers per host, if needed.

    • From the Key Pair list, select the key pair created in AWS earlier.  
  10. Perform this configuration for the remaining two VM components.

  11. Once you have configured all three VM components, configure the first custom component. 
    • On the Component Blueprint page for kubernetes_version, click the Attributes tab, then click Add Attributes. In the Add Attributes dialog, select kubernetes_version in the list and click OK.

    • Back on the Attributes tab, choose a default value for kubernetes_version from the drop-down list.

  12. If you want to allow requesters to choose the Kubernetes version, add the custom attribute to the request form. On the Form tab, in the Toolbox on the right, click the kubernetes_version form element.

  13. Click Edit to enable the Required flag if desired and click OK.
  14. Configure the blueprint for the second custom component. On the Component Blueprint page for kubernetes_name:
    • Go to the Attributes tab and click Add Attributes.
    • Select kubernetes_name in the list and click OK.
  15. If you want to allow requesters to choose the name of the Kubernetes cloud account (managed system), add the custom attribute to the request form. On the Form tab, in the Toolbox on the right, click the kubernetes_name form element.
  16. Click Edit to enable the Required flag and click OK.
  17. On the Deployment page, for Completion Workflow, select aws-post-deploy-k8s-kubeadm-svc, then click Next.

  18. For the purposes of this walk-through, we’ll skip the Intelligent Placement page. Click Next. To learn more, see Intelligent Placement.
  19. On the Visibility page, specify who can request this service, then click Next.
  20. On the Summary page, click Finish.

The service catalog entry is now published.

Submit a service request

The service is now configured and ready to test. In vCommander or the Service Portal, go to the Service Catalog and request the Kubernetes service. Notice that you can specify the cluster name and select the Kubernetes version on the request form.

Once the service request has completed, the new cluster is added to vCommander’s inventory as a Kubernetes cloud account (managed system).
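To confirm the cluster itself came up correctly, you can open an SSH session to the master node and query it with kubectl. A brief sketch (run on the master, assuming kubectl is configured there by the completion workflow):

```shell
# List cluster members; every node should report Ready once the
# pod network is in place.
kubectl get nodes

# Check that the control-plane components are running:
kubectl get pods --namespace kube-system
```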