Install openIDL Node on AWS
Prerequisites
AWS Account
...
- The user should have access to IAM and permissions to create new users and roles. This user will only be used to create a Terraform user and role with the following policy contents.
Code Block
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "sns:*",
        "cloudformation:ListStacks",
        "application-autoscaling:*",
        "s3:*",
        "logs:*",
        "cloudtrail:*",
        "dynamodb:*",
        "sts:*",
        "elasticloadbalancing:*",
        "autoscaling:*",
        "iam:*",
        "route53domains:*",
        "secretsmanager:*",
        "cloudwatch:*",
        "ses:*",
        "kms:*",
        "route53:*",
        "lambda:*",
        "ec2:*",
        "cognito-idp:*",
        "ebs:*",
        "eks:*",
        "acm:*",
        "rds:*"
      ],
      "Resource": "*"
    }
  ]
}
Terraform Cloud Account
Preconfigured access in ~/.terraformrc. Get the token from https://app.terraform.io by going to Settings → Teams → Team API Token. Generate a new token and create the file ~/.terraformrc:
Code Block
credentials "app.terraform.io" {
  token = "iz5o8MNxgBBPwQ...."
}
...
Create a new folder under openidl-devops/aws-infrastructure/environments/ by copying the sample folder openidl-devops/aws-infrastructure/environments/
Info
Make sure there are no other credentials in the ~/.terraform/ folder (if it exists) as they will take precedence over the ones in the file ~/.terraformrc.
Procedure
The Terraform/Terragrunt scripts in the openidl-devops Git repository are used to create the required IAM roles and users, and the Kubernetes cluster that will host Jenkins and Ansible Tower. The modules' backend is configured to use Terraform Cloud.
Terragrunt executes the scripts via the preconfigured backend. The backend is configured to run Terraform from Terraform Cloud (Execution mode: remote), so the AWS credentials have to be set as Environment variables under the workspace's variables.
The backend (Terraform Cloud workspace) is created and initialized by Terragrunt automatically.
Pull the AWS credentials from the AWS Console for the AWS account you have access to. The AWS IAM user needs access to IAM to create roles and other users.
Code Block
export AWS_ACCESS_KEY_ID="ASIAWRO3..."
export AWS_SECRET_ACCESS_KEY="6gLd..."
export AWS_SESSION_TOKEN="IQoJpZ2l..."
- Go to openidl-devops/aws-infrastructure/environments/<env-folder> as copied in the previous section
- Configure openidl-devops/aws-infrastructure/environments/<env-folder>/org-vars.yaml (a sketch follows below):
  - Fill in the IAM AWS access and secret keys under the iam section of the YAML file
  - Configure the org ID and the environment ID (dev, test or prod)
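A minimal sketch of the relevant org-vars.yaml fields; the key names here are assumptions, so check the sample file in the repository for the exact ones:
Code Block
# org-vars.yaml (sketch; key names are assumptions)
org_id: d3                  # your org ID
env_id: dev                 # dev, test or prod
iam:
  aws_access_key: AKIA...   # IAM user access key ID
  aws_secret_key: "..."     # IAM user secret access key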
- Go to <env-folder>/iam and run terragrunt plan. After a review, apply the changes with terragrunt apply.
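For reference, a minimal sketch of the commands, run from the environment folder:
Code Block
cd openidl-devops/aws-infrastructure/environments/<env-folder>/iam
terragrunt plan
terragrunt apply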
The script creates:
...
Ops Kubernetes Cluster
- Register manually a new SSH key pair in AWS by going to EC2 → Key pairs (RSA, pem file). Create a new key with the name awx-target.
- Keep the private key in the environments folder or anywhere on the file system you prefer.
- Go to the Terraform Cloud workspace that was just created in the previous section and open the States tab. Open the top state in the list, find outputs, and copy the access_key and secret_key values; they will be used in the next step.
- Go to <env-folder>/k8s-cluster and run terragrunt plan.
- The previous step should fail, but it should have created a new workspace in Terraform Cloud - e.g. devnet-d3-k8s-cluster.
- Make sure the AWS variables are set in org-vars.yaml under the terraform: property (see the sketch after this list):
  - aws_access_key = terraform user's access key ID
  - aws_secret_key = terraform user's secret access key
  - region = us-east-2 or any other region you prefer
  - aws_role_arn = terraform role ARN
  - aws_external_id = terraform
- Run terragrunt plan. Review, and if things look OK, run terragrunt apply. Acknowledge the run with yes in the prompt.
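A minimal sketch of the terraform section of org-vars.yaml, assuming the key names match the list above (verify against the sample file):
Code Block
# org-vars.yaml (sketch; key names are assumptions)
terraform:
  aws_access_key: AKIA...        # terraform user's access key ID
  aws_secret_key: "..."          # terraform user's secret access key
  region: us-east-2
  aws_role_arn: arn:aws:iam::<aws-account-number>:role/tf_automation
  aws_external_id: terraform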
The script creates:
4 | Import the Kubernetes Cluster connection config |
Set up an AWS profile in your ~/.aws/config and ~/.aws/credentials.
~/.aws/config:
Code Block
[profile tf-user]
region = us-east-2
external_id = terraform

[profile tf-role]
external_id = terraform
source_profile = tf-user
role_arn = arn:aws:iam::<aws-account-number>:role/tf_automation
region = us-east-2
~/.aws/credentials:
Code Block
[tf-user]
aws_access_key_id = AKI...
aws_secret_access_key = r3AB...
Find the name of the Kubernetes cluster and update the local config with it:
Code Block
export AWS_PROFILE=tf-role
aws eks update-kubeconfig --name ops-k8s
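To verify the import, a quick check (a sketch, assuming the cluster name ops-k8s from the command above):
Code Block
export AWS_PROFILE=tf-role
aws eks list-clusters   # confirm the cluster name if it differs
kubectl get nodes       # should list the ops cluster worker nodes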
5 | Install Nginx |
Install the Nginx Ingress controller:
Code Block
kubectl create ns ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install -n ingress-nginx lb ingress-nginx/ingress-nginx
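To check whether the ingress load balancer was provisioned:
Code Block
kubectl get svc -n ingress-nginx
# the EXTERNAL-IP column of the LoadBalancer service should show the ELB DNS name once assigned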
Info
It is possible that the nginx LB will not be assigned a DNS name and IP because the security groups for the cluster and the nodes are tagged with the same annotation. To fix that, find the security group for the nodes (e.g. ops-k8s-node) and remove the owned tag.
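A sketch of doing the same from the AWS CLI, assuming the node security group name contains ops-k8s-node and the usual kubernetes.io/cluster/<cluster-name> tag key (verify both in your account before deleting anything):
Code Block
# find the node security group and inspect its tags
aws ec2 describe-security-groups \
  --filters "Name=group-name,Values=*ops-k8s-node*" \
  --query "SecurityGroups[].[GroupId,Tags]"
# remove the owned tag from the security group found above
aws ec2 delete-tags --resources <sg-id> \
  --tags Key=kubernetes.io/cluster/ops-k8s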
6 | Install Jenkins |
Use the Helm chart to install Jenkins onto the Kubernetes cluster created above:
Code Block
cd <devops-repo>/jenkins
kubectl create ns jenkins
helm repo add jenkins https://charts.jenkins.io
helm upgrade --install -n jenkins jenkins jenkins/jenkins --values values.yaml
Wait for Jenkins to start up. To view the Jenkins admin password:
Code Block
kubectl exec --namespace jenkins -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/additional/chart-admin-password && echo
Set up a cloud-provisioned Jenkins node as defined in the Kubernetes plugin config in Jenkins.
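If the Jenkins service is not yet exposed externally, a port-forward is a quick way to reach the UI (a sketch, assuming the chart's default service name and port):
Code Block
kubectl -n jenkins port-forward svc/jenkins 8080:8080
# then open http://localhost:8080 and log in as admin with the password above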
7 | Install Ansible Tower (AWX) |
Create the AWX DB by connecting to the RDS PostgreSQL instance created via Terraform. Create an SSH tunnel: look up the RDS DB DNS name and the public DNS name of the EC2 instance that is the AWX target, and substitute them into the command line template:
Code Block
ssh -i <env-folder>/awx-target.pem -N -L 5432:ops-tools-db.<instance-id>.us-east-2.rds.amazonaws.com:5432 ubuntu@<awx-target-ec2>.us-east-2.compute.amazonaws.com -vv
Connect with DBeaver (or another PostgreSQL client) on localhost port 5432 and run the following SQL after replacing <pass> with an actual password (as defined under environments/<env>/org-vars.yaml):
Code Block
create database awx;
create user awxuser with encrypted password '<pass>';
grant all privileges on database awx to awxuser;
Configure the Kustomize script awx-custom.yaml by replacing the DB settings in the awx-operator folder under the openidl-devops Git repository (a sketch of typical DB settings follows the code block below).
Install AWX with the Kustomize command.
Code Block
cd awx-operator
helm repo add awx-operator https://ansible.github.io/awx-operator/
kustomize build . | kubectl apply -f -
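The exact contents of awx-custom.yaml live in the openidl-devops repository; for reference, the AWX operator typically reads external DB settings from a Kubernetes Secret referenced by the AWX resource. A minimal sketch, assuming the hypothetical secret name awx-postgres-configuration and the DB created above:
Code Block
apiVersion: v1
kind: Secret
metadata:
  name: awx-postgres-configuration
stringData:
  host: ops-tools-db.<instance-id>.us-east-2.rds.amazonaws.com
  port: "5432"
  database: awx
  username: awxuser
  password: <pass>
  type: unmanaged        # tells the operator the DB is externally managed
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  postgres_configuration_secret: awx-postgres-configuration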
Watch for the script failing; if it does, run it again (a timing issue due to the creation of the AWX RBAC).
8 | Update DNS record (optional) |
Go to the AWS Account → Route53. Create a new Hosted Zone (e.g. d1.test.openidl-org-test.net) to point to a Classic Load Balancer.
Now Jenkins and AWX should be available via ...
Terraform Cloud Workspaces
We need to maintain two workspaces - one for the Fabric Kubernetes cluster and one for the openIDL applications.
To create the workspaces use the tool located at https://bitbucket.org/senofi/openidl-devops/src/develop/
- Go to openidl-devops/aws-infrastructure/environments/<env-folder>/terraform-cloud and run terragrunt plan
If everything looks ok, execute terragrunt apply. This should create two workspaces and a var set in Terraform Cloud.
- Create a new KMS key (symmetric, encrypt/decrypt) in the AWS console. The name is not important, but use a meaningful name that will associate it with this environment. Use it to populate the property in the next step
- Go to openidl-devops/automation/terraform-cloud and update configuration.properties. Make sure that the varset name matches the one created in Terraform Cloud
- Create SSH keys
Code Block
ssh-keygen -t rsa -f app_eks_worker_nodes_ssh_key.pem
ssh-keygen -t rsa -f blk_eks_worker_nodes_ssh_key.pem
ssh-keygen -t rsa -f bastion_ssh_key.pem
- Populate the variable set by executing the following commands in openidl-devops/automation/terraform-cloud
Code Block
pip install -r requirements.txt
python populate-variable-set.py
- Copy the contents of the public keys and populate them in Terraform Cloud UI under Variable Sets → <the newly created varset>
Configure Jenkins
- Set Jenkins node label 'openidl' in Kubernetes Cloud by going to Manage Jenkins → Manage Nodes and Clouds → Configure Clouds. Make sure that under Pod Template details the labels field contains the value 'openidl'.
Also, remove the prepopulated 'sleep' command if it is set on the pod template.
- Create the Terraform Job Template
- Terraform Token Secret - Log in to Jenkins and go to Manage Jenkins → Manage Credentials → Stores scoped to Jenkins (Jenkins) → Global Credentials (unrestricted) → Add credentials
Choose "Secret text" as the Kind, enter the Terraform token in the "Secret" field, and give the secret a unique ID, since it will be referenced in the pipeline code.
- Git Credentials - Add a new credential
- Terraform Job
- Go to Jenkins → New Item. Use a name such as Terraform Job
- Select job type as PIPELINE and proceed.
- Select Definition as Pipeline Script from SCM
- Select SCM as Git
- Key in the Infrastructure code repository (openidl-gitops) URL.
- Select the Git credential created above
- Specify the relevant branch "refs/heads/<branch-name>".
- Set script path to jenkins-jobs/jenkinsfile-tf
Run Terraform Job
- Run the Jenkins Terraform Job
- Open the console log for the job. Once the job asks for input, accept it and choose the apply option
- The job runs a second plan in the Kubernetes workspace in Terraform Cloud. When asked, accept and apply the changes
- Go to the AWS Console and find EKS (Elastic Kubernetes Service). Choose the blk cluster and go to Add-Ons. Find the EBS plugin and add it to the list. The plugin ensures volumes can be created in Kubernetes
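The same add-on can also be enabled from the CLI; a sketch, assuming <blk-cluster-name> is replaced with the actual EKS cluster name:
Code Block
aws eks create-addon --cluster-name <blk-cluster-name> --addon-name aws-ebs-csi-driver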
Application Deployment
Application TLS Cert
- Deploy cert-manager using Helm; include CRD deployment as part of the helm install command (note: 1.11.0 doesn't work on k8s 1.20) - https://cert-manager.io/docs/installation/helm/
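A sketch of the install, following the cert-manager Helm docs linked above; the version pin is an assumption, so pick one compatible with your Kubernetes version:
Code Block
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true --version v1.10.2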
- Deploy an issuer (kubectl apply):
Code Block
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: openidl@senofi.ca
    privateKeySecretRef:
      name: letsencrypt-prod
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    ...