AWS Account
...
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "sns:*",
        "cloudformation:ListStacks",
        "application-autoscaling:*",
        "s3:*",
        "logs:*",
        "cloudtrail:*",
        "dynamodb:*",
        "sts:*",
        "elasticloadbalancing:*",
        "autoscaling:*",
        "iam:*",
        "route53domains:*",
        "secretsmanager:*",
        "cloudwatch:*",
        "ses:*",
        "kms:*",
        "route53:*",
        "lambda:*",
        "ec2:*",
        "cognito-idp:*",
        "ebs:*",
        "eks:*",
        "acm:*",
        "rds:*"
      ],
      "Resource": "*"
    }
  ]
}
Configure the Terraform Cloud credentials in ~/.terraformrc:
credentials "app.terraform.io" {
token = "iz5o8MNxgBBPwQ...."
}
Make sure there are no other credentials in the ~/.terraform/ folder, as they will override the ones in the file ~/.terraformrc.
The terraform/terragrunt scripts in the openidl-devops Git repository are used to create the required IAM roles and users, and the Kubernetes cluster that will host Jenkins and Ansible Tower (AWX). The modules' backend is configured to use Terraform Cloud.
The backend (Terraform Cloud workspace) is created and initialized by Terragrunt automatically. Terragrunt executes the scripts via the preconfigured backend. Because the backend runs Terraform remotely from Terraform Cloud (Execution mode: Remote), the AWS credentials have to be set under the workspace's variables as Environment variables:
export AWS_ACCESS_KEY_ID="ASIAWRO3..."
export AWS_SECRET_ACCESS_KEY="6gLd..."
export AWS_SESSION_TOKEN="IQoJpZ2l..."
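With the credentials in place, a typical Terragrunt run from a module folder looks like the following sketch; the module path is illustrative, so use the actual folder in the openidl-devops repository:

```shell
# Illustrative module path -- substitute the real one from openidl-devops
cd openidl-devops/<module-folder>
terragrunt init    # creates/initializes the Terraform Cloud workspace
terragrunt plan    # review the planned changes in the remote run
terragrunt apply   # apply via Terraform Cloud (Execution mode: Remote)
```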
The script creates:
Make sure you have an AWS profile or key/secret/session environment variables set in the terminal session, then find the name of the Kubernetes cluster and update the local config with it:
aws eks update-kubeconfig --name ops-k8s
kubectl create ns ingress-nginx
helm install -n ingress-nginx lb ingress-nginx/ingress-nginx
It is possible that the nginx load balancer will not be assigned a DNS name and IP because the security groups for both the cluster and the nodes are tagged with the same ownership annotation. To fix that, find the security group for the nodes (e.g. ops-k8s-node) and remove the "owned" tag from it.
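Assuming the standard EKS ownership tag (key kubernetes.io/cluster/&lt;cluster-name&gt;, value owned), the tag can be located and removed with the AWS CLI; the group name and IDs below are illustrative:

```shell
# Find the node security group and its tags (group name is illustrative)
aws ec2 describe-security-groups \
  --filters Name=group-name,Values=ops-k8s-node \
  --query 'SecurityGroups[].[GroupId,Tags]'

# Remove the ownership tag from the node security group (sg id is illustrative)
aws ec2 delete-tags \
  --resources sg-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/ops-k8s,Value=owned
```

Afterwards, check that the load balancer service gets an external address with kubectl get svc -n ingress-nginx.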
Use the helm chart for installing Jenkins onto the Kubernetes cluster created above.
cd <devops-repo>/jenkins
kubectl create ns jenkins
helm repo add jenkins https://charts.jenkins.io
helm upgrade --install -n jenkins jenkins jenkins/jenkins --values values.yaml
Wait for Jenkins to start up.
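One way to wait is to watch the rollout; assuming the chart's default names, the Jenkins controller is a StatefulSet called jenkins:

```shell
# Block until the Jenkins controller pod reports ready (names assume chart defaults)
kubectl rollout status statefulset/jenkins -n jenkins --timeout=10m
```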
To view the Jenkins admin password:
kubectl exec --namespace jenkins -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/additional/chart-admin-password && echo
Set up a cloud provisioned Jenkins node as defined in the Kubernetes plugin config in Jenkins.
Create the AWX DB by connecting to the RDS PostgreSQL instance created via Terraform.
ssh -i <env-folder>/awx-target.pem -N -L 5432:ops-tools-db.<instance-id>.us-east-2.rds.amazonaws.com:5432 ubuntu@<awx-target-ec2>.us-east-2.compute.amazonaws.com -vv
create database awx;
create user awxuser with encrypted password '<pass>';
grant all privileges on database awx to awxuser;
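The SQL statements above can be executed through the tunnel with psql; the master user name below is illustrative (use the one configured on the RDS instance):

```shell
# Connect via the local end of the SSH tunnel opened above
psql -h localhost -p 5432 -U postgres -d postgres
# then run the create database / create user / grant statements above
```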
Install AWX with the Kustomize script. If the script fails, run it again (timing issue due to the creation of the AWX RBAC).
helm repo add awx-operator https://ansible.github.io/awx-operator/
# clone the gitops repo and go to the awx-operator kustomize folder
cd awx-operator
kustomize build . | kubectl apply -f -
Now Jenkins and AWX should be available via http://ops.d1.test.senofi.net/ and http://ops.d1.test.senofi.net/jenkins.
We need to maintain two workspaces - one for the Fabric Kubernetes cluster and one for the openIDL applications.
To create the workspaces use the tool located in https://bitbucket.org/senofi/openidl-devops/src/develop/:
terragrunt plan
If everything looks ok, execute terragrunt apply. This should create two workspaces and a var set in Terraform Cloud.
ssh-keygen -t rsa -f app_eks_worker_nodes_ssh_key.pem
ssh-keygen -t rsa -f blk_eks_worker_nodes_ssh_key.pem
ssh-keygen -t rsa -f bastion_ssh_key.pem
pip install -r requirements.txt
python populate-variable-set.py
Also, remove the prepopulated 'sleep' command if it is set on the pod template.
When creating the Jenkins credential, choose the kind "Secret text", enter the secret (e.g. a token) in the Secret field, and give the credential a unique ID, since it will be referenced from the pipeline code.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: openidl@senofi.ca
    privateKeySecretRef:
      name: letsencrypt-prod
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    ...
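The issuer can then be referenced from an Ingress through the standard cert-manager annotation; the resource name, host, backend service, and port below are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ops-ingress                     # illustrative name
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - ops.d1.test.senofi.net
      secretName: ops-tls               # cert-manager stores the issued cert here
  rules:
    - host: ops.d1.test.senofi.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jenkins           # illustrative backend service
                port:
                  number: 8080
```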
Terraform Cloud Account
Preconfigured access in ~/.terraformrc is required. Get the token from https://app.terraform.io by going to Settings → Teams → Team API Token. Generate a new token and create the file ~/.terraformrc:
credentials "app.terraform.io" {
token = "iz5o8MNxgBBPwQ...."
} |
# | Step | Notes
---|---|---
1 | Setup | Create ~/.terraformrc with the Terraform Cloud token; make sure there are no other credentials in the ~/.terraform/ folder
2 | Create IAM User & Role | The script creates the required IAM user and role (see above)
3 | Create Ops Kubernetes Cluster | The script creates the ops Kubernetes cluster (see above)
4 | Import the Kubernetes Cluster connection config | Make sure an AWS profile or key/secret/session environment variables are set in the terminal session; find the name of the Kubernetes cluster and update the local config with it: aws eks update-kubeconfig --name ops-k8s
5 | Install Nginx | Create the ingress-nginx namespace and install the ingress-nginx helm chart (see above)
6 | Install Jenkins | Use the helm chart to install Jenkins onto the Kubernetes cluster created above; wait for Jenkins to start, view the admin password, and set up a cloud provisioned Jenkins node as defined in the Kubernetes plugin config
7 | Install Ansible Tower (AWX) | Create the AWX DB by connecting to the RDS PostgreSQL instance created via Terraform, then install AWX with the Kustomize script (from the awx-operator folder: kustomize build . \| kubectl apply -f -); if the script fails, run it again (timing issue due to the creation of the AWX RBAC)
8 | Update DNS record | Jenkins and AWX should then be available via http://ops.d1.test.senofi.net/ and http://ops.d1.test.senofi.net/jenkins
9 | Terraform Cloud workspaces | Maintain two workspaces, one for the Fabric Kubernetes cluster and one for the openIDL applications; create them with the tool in senofi/openidl-devops (terragrunt plan, then terragrunt apply)
10 | Configure Jenkins | Add the required credentials and adjust the pod template (see the notes above)
11 | Run Terraform Job |
AWX is installed and operational
AWS infrastructure is provisioned
Configuration is done and available at a private git repository
Credentials information is defined and available
AWS IAM user
Git private repo deploy key
Bastion SSH private key
HDS DB access
Fabric console user and password
Bastion machine (gateway) host address is available
Step | Notes | User | Checklist
---|---|---|---
Create new Organization | Create a new organization with the org_id | admin |
Setup new org AWX user | Create a new user named with the org name; assign admin permissions to the organization created above | admin |
Setup org project | Create a new project named with the org name; use the openidl ansible git URL and the appropriate branch | org user |
Setup inventory | Create a new inventory named with the org name (bastion-org_id); add a host using the bastion machine address; add a group named ansible_provisioners; add the bastion host to the group | org user |
Create Credentials | Create the credential types as specified below (see credentials table) | admin |
Create AWX job templates | Create the AWX job templates as specified below (see AWX job templates table) | org user |
Credentials (create each in AWX as specified below):

aws-git-actions
Description: An AWS credential used to access the AWS APIs. The IAM user is created during the AWS provisioning step and should usually have access to the AWS resources and the provisioned k8s clusters (HLF and applications k8s). The user name is usually suffixed with git-actionsadmin; the external AWS id is usually git-actions. The playbooks use this user to perform the deployment and setup actions.
Definition (custom credential type):
fields:
  - id: aws_access_key
    type: string
    label: aws_access_key
    secret: true
    help_text: AWS IAM user access key for aws
  - id: aws_secret_key
    type: string
    label: aws_secret_key
    secret: true
    help_text: AWS IAM user secret key for aws
  - id: aws_external_id
    type: string
    label: aws_external_id
  - id: aws_assume_role_arn
    type: string
    label: AWS IAM user role to assume
required:
  - aws_access_key
  - aws_secret_key
  - aws_external_id
  - aws_assume_role_arn
extra_vars:
  aws_access_key: '{{ aws_access_key }}'
  aws_secret_key: '{{ aws_secret_key }}'
  aws_external_id: '{{ aws_external_id }}'
  aws_assume_role_arn: '{{ aws_assume_role_arn }}'

git-config
Description: Git credentials used to pull configuration from the private repository.
Definition (custom credential type):
fields:
  - id: sshkey
    type: string
    label: Base64 encoded deploy private key string
    secret: true
  - id: repourl
    type: string
    label: GIT repo URL
  - id: repobranch
    type: string
    label: Git repo branch
extra_vars:
  ssh_key: '{{ sshkey }}'
  git_configs_repo_url: '{{ repourl }}'
  git_configs_repo_branch: '{{ repobranch }}'

bastion
Description: Bastion machine SSH credential. The machine is bootstrapped during the AWS infrastructure provisioning step and is used as a remote agent for the ansible playbooks; it is the entry point (gateway) for accessing the AWS infrastructure in order to set up and deploy the network.
Definition: Machine - an existing standard credential type in AWX.

hds-access
Description: Access information for the application HDS DB. This credential is injected by the playbooks to configure the openIDL applications for access to the local carrier HDS database; the ansible playbooks do not use it to establish a connection to the HDS or perform operations on it.
Definition (custom credential type):
fields:
  - id: hds_host
    type: string
    label: HDS host
    help_text: HDS host address
  - id: hds_port
    type: string
    label: hds_port
    help_text: HDS port
  - id: hds_username
    type: string
    label: hds_username
    secret: true
  - id: hds_password
    type: string
    label: hds_password
    secret: true
  - id: hds_dbname
    type: string
    label: hds_dbname
required:
  - hds_host
  - hds_port
  - hds_username
  - hds_password
  - hds_dbname
extra_vars:
  hds_host: '{{ hds_host }}'
  hds_port: '{{ hds_port }}'
  hds_dbname: '{{ hds_dbname }}'
  hds_password: '{{ hds_password }}'
  hds_username: '{{ hds_username }}'

fabric-console
Description: Default user/password for Fabric Operator Console access. The playbooks use it to inject the default user and password for the fabric console deployment, and also to connect to the console in order to perform operations on the HLF nodes. Make sure to generate a strong password, as it secures access to the managed HLF nodes. Take note of this credential: the provided user and password will be required to log in to the fabric operator console.
Definition (custom credential type):
fields:
  - id: console_username
    type: string
    label: console_username
    help_text: Fabric Operator Console Username
  - id: console_password
    type: string
    label: console_password
    secret: true
    help_text: Fabric Operator Console Password
required:
  - console_username
  - console_password
extra_vars:
  console_password: '{{ console_password }}'
  console_username: '{{ console_username }}'
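The git-config credential takes the deploy private key as a single-line base64 string. Assuming GNU coreutils, it can be produced as below; the file name and contents are stand-ins for the real deploy key:

```shell
# Create a stand-in key file (use your real deploy private key instead)
printf '%s' 'stand-in-key-material' > deploy_key.pem

# Encode as a single line (-w0 disables line wrapping; GNU coreutils)
base64 -w0 deploy_key.pem
```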
Playbook | Template Name | Credential | Description | Checklist
---|---|---|---|---
ansible/environment-setup.yaml | <env_id>-<org_id>-environment-setup | aws-git-actions, bastion, git-config | Install open source tools on the bastion host; set up access to the cloud APIs |
ansible/deploy-fabric-ingress.yml | <env_id>-<org_id>-deploy-fabric-ingress | aws-git-actions, bastion, git-config | Deploy ingress controllers (classes) and cloud load balancers for the HLF k8s cluster |
ansible/deploy-vault.yml | <env_id>-<org_id>-deploy-vault | aws-git-actions, bastion, git-config | Deploy the Vault raft cluster for storing HLF identities (application and HLF node admins) |
ansible/deploy-fabric-operator.yml | <env_id>-<org_id>-deploy-fabric-operator | aws-git-actions, bastion, git-config | Deploy the HLF fabric operator |
ansible/deploy-fabric-console.yml | <env_id>-<org_id>-deploy-fabric-console | aws-git-actions, bastion, git-config, fabric-console | Deploy the HLF operator console |
ansible/deploy-openidl-app-identities.yml | <env_id>-<org_id>-deploy-app-identities | aws-git-actions, bastion, git-config, fabric-console | Register and enroll the openidl application identities used to transact on the openidl fabric network |
ansible/deploy-openidl-app-ingress.yml | <env_id>-<org_id>-deploy-app-ingress | aws-git-actions, bastion, git-config | Deploy the application ingress controller and class; create the application load balancers for the applications k8s cluster |
ansible/deploy-mongodb.yml | <env_id>-<org_id>-deploy-mongodb | aws-git-actions, bastion, git-config | Deploy MongoDB as the application database |
ansible/deploy-openidl-app-config.yaml | <env_id>-<org_id>-deploy-app-config | aws-git-actions, bastion, git-config, fabric-console, hds-access (carrier node) | Generate and deploy the application config as secrets |
ansible/deploy-openidl-app.yaml | <env_id>-<org_id>-deploy-app | aws-git-actions, bastion, git-config | Deploy the openidl applications |