# Step 1: Terraform Cloud workspaces

We need to maintain two workspaces - one for the Fabric Kubernetes cluster and one for the openIDL applications.

To create the workspaces use the tool located in senofi/openidl-devops:

  1. Go to openidl-devops/aws-infrastructure/environments/<env-folder>/terraform-cloud and run 

    terragrunt plan

    If the plan looks correct, execute terragrunt apply. This should create two workspaces and a variable set in Terraform Cloud.

  2. The apply from step 1 is expected to fail: the script executes on the Terraform Cloud servers, while the local config in ~/.terraformrc only applies to communication from the localhost to Terraform Cloud. To fix this, find the new workspace in Terraform Cloud named in the format <org_id>-<env>-tf_cloud, go to its Variables section, and add a new variable named TFE_CLOUD of type environment variable, with the value set to the Terraform token previously saved in ~/.terraformrc
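
    For reference, the token sits in the credentials block of ~/.terraformrc; the file looks roughly like this (the token value below is a placeholder):

    credentials "app.terraform.io" {
      token = "xxxxxx.atlasv1.placeholder"
    }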
  3. Create a new KMS key (symmetric, encrypt/decrypt) in the AWS console. The name is not important, but use a meaningful name that associates it with this environment. Use it to populate the property in the next step

  4. Go to openidl-devops/automation/terraform-cloud and update configuration.properties. Make sure that the variable set name matches what is in Terraform Cloud

  5. Create SSH keys 

    ssh-keygen -t rsa -f app_eks_worker_nodes_ssh_key.pem

    ssh-keygen -t rsa -f blk_eks_worker_nodes_ssh_key.pem

    ssh-keygen -t rsa -f bastion_ssh_key.pem

  6. Populate the variable set by executing the following commands in openidl-devops/automation/terraform-cloud 

    pip install -r requirements.txt
    python populate-variable-set.py
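
    The script presumably pushes each property to the Terraform Cloud variable-set API; a minimal sketch of the kind of single-variable payload it might send (the key and value below are hypothetical, not taken from the real script):

    ```shell
    # Hypothetical payload for one Terraform Cloud variable; the real script
    # derives keys and values from configuration.properties.
    printf '%s' '{"data": {"type": "vars", "attributes": {"key": "aws_kms_key_arn", "value": "arn:aws:kms:example", "category": "terraform", "sensitive": true}}}' > /tmp/var_payload.json
    python3 -m json.tool /tmp/var_payload.json   # sanity-check the JSON shape
    ```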

  7. Copy the contents of the public keys into the Terraform Cloud UI under Variable Sets → <the newly created varset>

    1. app_eks_worker_nodes_ssh_key.pem.pub
    2. bastion_ssh_key.pem.pub
    3. blk_eks_worker_nodes_ssh_key.pem.pub
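
Steps 5-7 can also be scripted; a sketch, assuming RSA keys with an empty passphrase (adjust to your key policy):

```shell
# Generate the three key pairs non-interactively; -N '' sets an empty
# passphrase (an assumption -- supply one if your policy requires it).
for name in app_eks_worker_nodes_ssh_key blk_eks_worker_nodes_ssh_key bastion_ssh_key; do
  ssh-keygen -t rsa -b 4096 -f "${name}.pem" -N '' -q
done
# Print the public halves for pasting into the Terraform Cloud variable set
cat ./*_ssh_key.pem.pub
```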
# Step 2: Configure Jenkins
  1. Set Jenkins node label ‘openidl’ in Kubernetes Cloud by going to Manage Jenkins → Manage Nodes and Clouds → Configure Clouds. Make sure that under Pod Template details the labels field contains the value ‘openidl’.

    Also, remove the prepopulated ‘sleep’ command if it is set on the pod template.

  2. Create the Terraform Job Template

    1. Terraform Token Secret - Log in to Jenkins and go to Manage Jenkins → Manage Credentials → Stores scoped to Jenkins (Jenkins) → Global Credentials (unrestricted) → Add credentials

    2. Choose “Secret text” as the Kind, paste the Terraform token into the Secret field, and give the credential a unique ID, since that ID is referenced in the pipeline code.

    3. Git Credentials - Add a new credential for accessing the Git repository

  3. Terraform Job

    1. Go to Jenkins → New Item. Use a name such as Terraform Job

    2. Select job type as PIPELINE and proceed.

    3. Select Definition as Pipeline Script from SCM

    4. Select SCM as Git

    5. Enter the infrastructure code repository (openidl-gitops) URL.

    6. Select the Git credential created above

    7. Specify the relevant branch “refs/heads/<branch-name>”.

    8. Set script path to jenkins-jobs/jenkinsfile-tf

# Step 3: Run Terraform Job
  1. Run the Jenkins Terraform Job

  2. Open the console log for the job. Once the job asks for input, accept it and choose the apply option

  3. The job runs a second plan against the Kubernetes workspace in Terraform Cloud. When asked, accept and apply the changes

  4. Go to the AWS Console and open EKS (Elastic Kubernetes Service). Choose the blk cluster, go to Add-ons, find the EBS CSI plugin, and add it. The plugin ensures that volumes can be created in Kubernetes (see the next section for detailed steps)

# Step 4: Add Amazon EBS CSI Driver Add-on
  1. Go to AWS Console → Elastic Kubernetes Service
  2. Find the app and blk clusters that were created in the previous step
  3. Open each of them and go to Add-ons tab
  4. Select Get more add-ons and select the Amazon EBS CSI Driver
  5. Choose Next on the bottom of the page
  6. Review and click Create button
  7. Repeat for the other cluster
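
The console steps above can also be scripted with the AWS CLI, assuming credentials for the target account and shell variables holding the two cluster names (both assumptions; not run here since it needs live AWS access):

```
# APP_CLUSTER / BLK_CLUSTER are placeholders for the real cluster names
for cluster in "$APP_CLUSTER" "$BLK_CLUSTER"; do
  aws eks create-addon \
    --cluster-name "$cluster" \
    --addon-name aws-ebs-csi-driver
done
```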





