Architecture 101

Access Management

  1. Principal: a person or application that can make an authenticated or anonymous request to perform an action on a system
  2. Authentication: the process of verifying a principal against an identity, e.g. via username and password or API keys
  3. Identity: objects that require authentication and are authorized to access resources
  4. Authorization: the process of checking and allowing or denying access to a resource for an identity

Shared responsibility

Security model

  • Customer
    • Customer data
    • Platform, application, identity, and access management
    • Operating system, Network and Firewall configuration
    • Encryption and network protection
  • AWS
    • Software
      • Compute
      • Storage
      • Database
      • Network
    • Hardware / AWS Global Infrastructure
      • Regions
      • Availability zones
      • Edge locations

Service models

  • IaaS: Infrastructure as a Service
  • PaaS: Platform as a Service
  • SaaS: Software as a Service
  • FaaS: Function as a Service (a single AWS product: AWS Lambda)
| System stack \ model | IaaS | PaaS | SaaS |
| --- | --- | --- | --- |
| Data | You | You | You |
| Applications | You | You | AWS |
| Runtime | You | AWS | AWS |
| Operating system | You | AWS | AWS |
| Virtualization | AWS | AWS | AWS |
| Host/Server | AWS | AWS | AWS |
| Network storage | AWS | AWS | AWS |
| Data center | AWS | AWS | AWS |

Availability

  • High availability: hardware, software and configuration allowing a system to recover quickly in the event of a failure -> some downtime during recovery is acceptable
graph LR

A[Users]
B[Instance - ok]
C[Instance - ko]
D[Recovery - ok]

A --> B;
B --> C;
C --> D;
A --> D;
  • Fault tolerance: system designed to operate through a failure with no user impact -> Expensive, no downtime
graph LR

A[Users]
B[load balancer]
C[Instance - ok]
D[Instance - ko]
E[Instance - ok]

A --> B
B --> C
B --> D
B --> E

RPO vs. RTO

  • Recovery Point Objective (RPO): how much data a business can tolerate losing, expressed in time: the maximum time between a failure and the last successful backup (e.g. with backups every 4 hours, up to 4 hours of data may be lost)
  • Recovery Time Objective (RTO): the maximum amount of time a system can be down; how long a solution takes to recover
graph LR

A[Backup]
B[Disaster event]
C[Recovery]

A -- RPO --> B;
B -- RTO --> C;

Scaling

  • Vertical scaling: (a bigger machine) achievable by adding additional resources in the form of CPU or memory to extend a machine so it can serve additional customers or run faster
    • eventually, maximum machine sizes will constrain your ability to scale (technically or by cost -> exponential cost increase)
  • Horizontal scaling: (parallel systems) adding additional machines into a pool of resources
    • does not suffer the limitations of vertical scaling, but needs application support to scale effectively (see the CLI sketch below)
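
A minimal AWS CLI sketch of the two approaches (the instance ID and Auto Scaling group name are placeholders; a vertical resize requires stopping the instance first):

```bash
## vertical scaling: resize a stopped instance to a larger type
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
  --instance-type "{\"Value\": \"m5.xlarge\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0

## horizontal scaling: grow an Auto Scaling group to 4 instances
aws autoscaling set-desired-capacity --auto-scaling-group-name my-asg --desired-capacity 4
```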

Tiered application design

  • Architectural application tiers (if all code is mixed -> monolithic)

    • Presentation tier: interacts with the consumer
    • Logic tier: delivers functionality
    • Data tier: controls interactions with DB
  • Tier

    • Isolated component
    • Independent performance -> may be provisioned on separate machines

Encryption

  • types
    • symmetric: same key for encryption and decryption
    • asymmetric: different keys for encryption and decryption (public and private)
echo "Cats are Amazing" > hiddenmessage.txt

## symmetric encryption
# encrypt
gpg -c hiddenmessage.txt
cat hiddenmessage.txt.gpg
# this clears the cached password
echo RELOADAGENT | gpg-connect-agent
# decrypt
gpg -o output.txt hiddenmessage.txt.gpg
#clear
rm hiddenmessage.txt.gpg
rm output.txt

## asymmetric encryption
gpg --gen-key
# check keys
gpg --armor --output pubkey.txt --export 'User'
gpg --armor --output privkey.asc --export-secret-keys 'User'
# encrypt
gpg --encrypt --recipient 'User' hiddenmessage.txt
# decrypt
gpg --output decrypted.txt --decrypt hiddenmessage.txt.gpg

Architecture odds and ends

  • Cost efficient / cost effective: implementing a solution within AWS using products or features that provide the required service for as little initial and ongoing cost as possible. Using your funds effectively and knowing whether product X is better or worse than product Y for a given solution.
  • Secure: in systems architecture context, implementing a given solution that secures data and operations as much as possible from an internal or external attack.
  • Application session state: data that represents what a customer is doing, what they have chosen, or what they have configured.
  • Undifferentiated heavy lifting: a part of an application, system or platform that is not specific to your business. Allowing a vendor (AWS) to handle that part frees your staff to work on adding direct value to your customers.

AWS Architecture 101

AWS accounts

  • Authentication domain
    • AWS accounts are isolated
    • Create account = root user for that account -> the only identity that can use (authenticate to) the account
    • Account credentials leaked -> impact is limited to that account
  • Authorization
    • Controlled on a per-account basis
    • Root = full control
    • Additional identities can be created, and external identities may be granted access
    • Unless defined otherwise, only the root user can access a service/resource
  • Billing
    • Accounts can be linked to allow consolidated billing, where a master account is charged for all member accounts' usage
    • Every AWS account has its own isolated billing information -> default: attached credit card, can be changed to term invoice

AWS physical and networking layer

Terms

  • Region: has at least 2 Availability Zones (isolated networks)
    • AZs are connected with redundant, high-speed, low-latency network connections
    • Edge locations: small pockets of AWS compute, storage and networking close to major populations and generally used for edge computing and content delivery
    • Points of Presence: Edge Locations that, by being closer to remote users, provide better performance for them

Well-architected framework

  • Security: ability to protect information, systems and assets
    • implement strong identity foundation
    • enable traceability
    • apply security at all layers
    • automate security best practices
    • protect data in transit and at rest
    • prepare for security events
  • Reliability: ability to recover from infrastructure disruptions, dynamically acquire computing resources to meet demand, and mitigate those disruptions
    • test recovery procedures
    • automatically recover from failure
    • scale horizontally to increase aggregate systems availability
    • stop guessing capacity
    • manage change in automation
  • Performance efficiency: ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technology evolves
    • democratize advanced technologies
    • go global in minutes
    • experiment more often
    • mechanical sympathy
  • Operational excellence: ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures
    • perform operations as code
    • annotate documentation
    • make frequent, small, reversible changes
    • refine operations procedures frequently
    • anticipate failure
    • learn from all operational failures
  • Cost optimization: ability to avoid or eliminate unneeded cost or suboptimal resources
    • adopt a consumption model
    • measure overall efficiency
    • stop spending money on data center operations
    • analyse and attribute expenditure
    • use managed services to reduce cost of ownership

More info at AWS well-architected framework

Elasticity

  • Vertical scaling: increase size of servers
  • Horizontal scaling: increase number of server
  • Elastic: automation and horizontal scaling are used in conjunction to match capacity with demand
    • demand is rarely linear: it can increase and decrease -> an efficient platform should scale OUT and IN

AWS product fundamentals

Introduction to S3

  • S3 (Simple Storage Service): global object storage
  • Region -> Bucket -> Object
  • Object
    • similar to a file
    • has a key (name) and a value (data)
    • can be empty: object size ranges from 0 bytes to 5 TB
    • its key is unique within the bucket
  • Bucket
    • name: 3-63 characters, must start with a lowercase letter or number, and can't look like an IP address
    • default: 100 buckets per account, hard limit = 1000
    • unlimited objects per bucket and unlimited total capacity (a CLI sketch follows below)
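
A quick AWS CLI sketch of the Region -> Bucket -> Object hierarchy (the bucket name is a placeholder and must be globally unique):

```bash
# create a bucket in a region, upload an object, list it, then remove everything
aws s3 mb s3://my-example-bucket-1234 --region us-east-1
echo "hello" > object.txt
aws s3 cp object.txt s3://my-example-bucket-1234/object.txt
aws s3 ls s3://my-example-bucket-1234
aws s3 rb s3://my-example-bucket-1234 --force
```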

Introduction to CloudFormation

  • CloudFormation (CFN): IaC product, used to create, manage and remove infrastructure via JSON or YAML templates

  • Template -> Stack -> Physical objects

    • Template: contains logical resources and configuration
    • Stack: created and modified based on templates, which can be changed and used to update a stack
    • Physical object: stacks take logical resources from a template and create, update, or delete the physical resources in AWS
  • CFN is effective if you frequently deploy the same infrastructure or require guaranteed consistent configuration

  • Template format

    ---
    AWSTemplateFormatVersion: "2010-09-09"

    Description:
      this template does XXXX

    Metadata:
      template metadata

    Parameters:
      set of parameters

    Mappings:
      set of mappings

    Conditions:
      set of conditions

    Transform:
      set of transforms

    Resources:
      set of resources

    Outputs:
      set of outputs

  • Resource format

    {
      "Resources": {
        "demoBucket": {
          "Type": "AWS::S3::Bucket"
        }
      }
    }
  • Facts

    • Template: max=200 resources
    • Stack deleted -> resources deleted
    • Stack update -> upload a new template
    • New logical resources -> new physical resources
    • Removed logical resource -> deleted physical resource
    • Changed logical resources update the corresponding physical resources, either in place (possibly with some disruption) or by replacement (a CLI sketch of the stack lifecycle follows)
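
A minimal sketch of the template -> stack -> physical resources lifecycle with the AWS CLI, assuming the resource snippet above is saved as demo.json:

```bash
# create a stack from the template, wait for it, inspect its resources, then delete it
aws cloudformation create-stack --stack-name demo-stack --template-body file://demo.json
aws cloudformation wait stack-create-complete --stack-name demo-stack
aws cloudformation describe-stack-resources --stack-name demo-stack
aws cloudformation delete-stack --stack-name demo-stack
```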

Terraform: Terraform 0.12

Setup and disclaimer

  1. Install Terraform 0.12

    cd /tmp
    sudo curl -O https://releases.hashicorp.com/terraform/0.12.2/terraform_0.12.2_linux_amd64.zip
    sudo unzip terraform_0.12.2_linux_amd64.zip
    sudo cp terraform /usr/bin/terraform12
    # check up
    terraform12 version
  2. Setup a Terraform 0.12 directory

    mkdir /home/your_user/terraform/t12
    cd /home/your_user/terraform/t12
    cp -r /home/your_user/terraform/basics .
    cd basics
    rm -r .terraform
    # test
    terraform12 init

    # also copy AWS/storage
    cd /home/your_user/terraform/t12
    cp -r ../AWS/storage .
    cd storage
  3. Edit main.tf

    #---------storage/main.tf---------

    # Create a random id
    resource "random_id" "tf_bucket_id" {
      byte_length = 2
    }

    # Create the bucket
    resource "aws_s3_bucket" "tf_code" {
      bucket        = "${var.project_name}-${random_id.tf_bucket_id.dec}"
      acl           = "private"
      force_destroy = true

      tags = {
        Name = "tf_bucket"
      }
    }
  4. Work with Terraform

    • Terraform 12
      # setup AWS keys
      export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
      export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
      export AWS_DEFAULT_REGION="us-east-1"

      terraform12 init
      # deploy
      terraform12 apply -var project_name=la-terraform -auto-approve
      # clean up
      terraform12 destroy -var project_name=la-terraform -auto-approve
      ls -la
      rm -r .terraform terraform.tfstate*
    • Older version
      terraform init
      terraform apply -var project_name=la-terraform -auto-approve
      terraform destroy -var project_name=la-terraform -auto-approve

Working with resources

  • environment: /home/your_user/terraform/t12/storage
  • example with storage
    1. Edit main.tf
      # Create a random id
      resource "random_id" "tf_bucket_id" {
        byte_length = 2
      }

      # Create the bucket
      resource "aws_s3_bucket" "tf_code" {
        bucket        = format("la-terraform-%d", random_id.tf_bucket_id.dec)
        acl           = "private"
        force_destroy = true

        tags = {
          Name = "tf_bucket"
        }
      }
    2. Work with Terraform
      # Setup AWS access key
      export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
      export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
      export AWS_DEFAULT_REGION="us-east-1"
      terraform12 init
      terraform12 apply -auto-approve

      # clean up
      terraform12 destroy -auto-approve

Input variables

Refactor the previous storage module by adding in variables

  1. Edit variables.tf
    variable "project_name" {
      type = string
    }
  2. Edit main.tf
    # Create a random id
    resource "random_id" "tf_bucket_id" {
      byte_length = 2
    }

    # Create the bucket
    resource "aws_s3_bucket" "tf_code" {
      bucket        = format("%s-%d", var.project_name, random_id.tf_bucket_id.dec)
      acl           = "private"
      force_destroy = true

      tags = {
        Name = "tf_bucket"
      }
    }
  3. Work with Terraform
    # setup AWS access key
    export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
    export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
    export AWS_DEFAULT_REGION="us-east-1"
    terraform12 plan -var project_name=la-terraform
    terraform12 apply -var project_name=la-terraform -auto-approve

    # clean up
    terraform12 destroy -var project_name=la-terraform -auto-approve
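
As an alternative to passing -var on every command, the same value could be placed in a terraform.tfvars file, which Terraform loads automatically (a sketch, not part of the original lab):

```bash
echo 'project_name = "la-terraform"' > terraform.tfvars
terraform12 plan
```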

Output values

Refactor the previous storage module by adding outputs of the S3 bucket name and project_name variable.

  1. Edit outputs.tf
    output "bucketname" {
      value = aws_s3_bucket.tf_code.id
    }

    output "project_name" {
      value = var.project_name
    }
  2. Work with Terraform
    terraform12 init
    terraform12 apply -var project_name=la-terraform -auto-approve
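
    # (not in the original lab) read back the declared outputs after the apply
    terraform12 output
    terraform12 output bucketname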

    # clean up
    terraform12 destroy -var project_name=la-terraform -auto-approve

Expressions

Dynamic nested blocks ‘for-each’

  • environment: ~/terraform/t12/loops
  1. Edit main.tf
    variable "vpc_cidr" {
      default = "10.123.0.0/16"
    }

    variable "accessip" {
      default = "0.0.0.0/0"
    }

    variable "service_ports" {
      # SSH and HTTP
      default = ["22", "80"]
    }

    resource "aws_vpc" "tf_vpc" {
      cidr_block           = var.vpc_cidr
      enable_dns_hostnames = true
      enable_dns_support   = true

      tags = {
        Name = "tf_vpc"
      }
    }

    resource "aws_security_group" "tf_public_sg" {
      name        = "tf_public_sg"
      description = "Used for access to the public instances"
      vpc_id      = aws_vpc.tf_vpc.id

      # this defines a for-each loop over the list of ports
      dynamic "ingress" {
        for_each = var.service_ports
        content {
          from_port   = ingress.value
          to_port     = ingress.value
          protocol    = "tcp"
          cidr_blocks = [var.accessip]
        }
      }
    }
  2. Work with Terraform
    export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
    export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
    export AWS_DEFAULT_REGION="us-east-1"
    terraform12 init
    terraform12 plan
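
To sanity-check the loop logic without planning against AWS, a for expression over the same two ports can be evaluated in terraform console (a sketch using literal values; expect a two-element list back):

```bash
echo '[ for port in ["22", "80"] : "tcp/${port}" ]' | terraform12 console
```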

Dynamic nested blocks ‘for’

  • environment: ~/terraform/t12/dynamic
  1. Edit main.tf:

    variable "vpc_cidr" {
      default = "10.123.0.0/16"
    }

    variable "accessip" {
      default = "0.0.0.0/0"
    }

    variable "service_ports" {
      default = [
        {
          from_port = "22",
          to_port   = "22"
        },
        {
          from_port = "80",
          to_port   = "80"
        }
      ]
    }

    resource "aws_vpc" "tf_vpc" {
      cidr_block           = var.vpc_cidr
      enable_dns_hostnames = true
      enable_dns_support   = true

      tags = {
        Name = "tf_vpc"
      }
    }

    resource "aws_security_group" "tf_public_sg" {
      name        = "tf_public_sg"
      description = "Used for access to the public instances"
      vpc_id      = aws_vpc.tf_vpc.id

      dynamic "ingress" {
        for_each = [for s in var.service_ports : {
          from_port = s.from_port
          to_port   = s.to_port
        }]

        content {
          from_port   = ingress.value.from_port
          to_port     = ingress.value.to_port
          protocol    = "tcp"
          cidr_blocks = [var.accessip]
        }
      }
    }

    output "ingress_port_mapping" {
      value = {
        # for loop over the created ingress rules
        for ingress in aws_security_group.tf_public_sg.ingress :
        format("From %d", ingress.from_port) => format("To %d", ingress.to_port)
      }
    }
  2. Work in Terraform:

    export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
    export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
    export AWS_DEFAULT_REGION="us-east-1"
    terraform12 init
    terraform12 plan
    terraform12 apply -auto-approve
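
    # (not in the original lab) verify the ingress rules created by the dynamic block
    aws ec2 describe-security-groups --filters Name=group-name,Values=tf_public_sg \
      --query 'SecurityGroups[].IpPermissions[].[FromPort,ToPort,IpProtocol]'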

    # clean up
    terraform12 destroy -auto-approve

Functions

  • Terraform 0.12 and later have built-in functions, previous versions have Interpolation Syntax
  • example using cidrsubnet function to calculate a subnet address within a given IP network address prefix
  • environment: ~/terraform/t12/functions
  1. Edit main.tf
    variable "vpc_cidr" {
      default = "10.123.0.0/16"
    }

    variable "accessip" {
      default = "0.0.0.0/0"
    }

    variable "subnet_numbers" {
      default = [1, 2, 3]
    }

    resource "aws_vpc" "tf_vpc" {
      cidr_block           = var.vpc_cidr
      enable_dns_hostnames = true
      enable_dns_support   = true

      tags = {
        Name = "tf_vpc"
      }
    }

    resource "aws_security_group" "tf_public_sg" {
      name        = "tf_public_sg"
      description = "Used for access to the public instances"
      vpc_id      = aws_vpc.tf_vpc.id

      ingress {
        from_port = "22"
        to_port   = "22"
        protocol  = "tcp"
        cidr_blocks = [
          for num in var.subnet_numbers :
          # num = netnum; 8 new bits added to the /16 vpc_cidr give /24 subnets (16 + 8 = 24)
          cidrsubnet(aws_vpc.tf_vpc.cidr_block, 8, num)
        ]
      }
    }
  2. Work with Terraform
    export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
    export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
    export AWS_DEFAULT_REGION="us-east-1"
    terraform12 init
    terraform12 plan
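
The cidrsubnet arithmetic can be checked directly in terraform console (a sketch; expected results shown as comments):

```bash
echo 'cidrsubnet("10.123.0.0/16", 8, 1)' | terraform12 console   # 10.123.1.0/24
echo 'cidrsubnet("10.123.0.0/16", 8, 3)' | terraform12 console   # 10.123.3.0/24
```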

Upgrade process example

  1. Refactor variables.tf
    variable "vpc_cidr" {
      default = "10.123.0.0/16"
    }

    variable "accessip" {
      default = "0.0.0.0/0"
    }

    # add the service ports, to populate the block
    variable "service_ports" {
      default = [
        {
          from_port = "22",
          to_port   = "22"
        },
        {
          from_port = "80",
          to_port   = "80"
        }
      ]
    }
  2. Refactor main.tf
    resource "aws_vpc" "tf_vpc" {

      ## interpolation symbols are no longer needed
      ## cidr_block = "${var.vpc_cidr}"
      cidr_block           = var.vpc_cidr
      enable_dns_hostnames = true
      enable_dns_support   = true

      ## tags now need an equals sign
      ## tags {
      tags = {
        Name = "tf_vpc"
      }
    }

    resource "aws_security_group" "tf_public_sg" {
      name        = "tf_public_sg"
      description = "Used for access to the public instances"
      ## interpolation symbols are no longer needed
      ## vpc_id = "${aws_vpc.tf_vpc.id}"
      vpc_id = aws_vpc.tf_vpc.id

      ## refactor as a loop, and remove interpolation symbols
      ## #SSH
      ## ingress {
      ##   from_port   = 22
      ##   to_port     = 22
      ##   protocol    = "tcp"
      ##   cidr_blocks = ["${var.accessip}"]
      ## }
      ## #HTTP
      ## ingress {
      ##   from_port   = 80
      ##   to_port     = 80
      ##   protocol    = "tcp"
      ##   cidr_blocks = ["${var.accessip}"]
      ## }

      ## set as a dynamic block, with a map for each entry,
      ## whose values are stored in `variables.tf`
      dynamic "ingress" {
        for_each = [for s in var.service_ports : {
          from_port = s.from_port
          to_port   = s.to_port
        }]

        content {
          from_port   = ingress.value.from_port
          to_port     = ingress.value.to_port
          protocol    = "tcp"
          cidr_blocks = [var.accessip]
        }
      }

      egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }
  3. Refactor outputs.tf
    output "public_sg" {
      value = aws_security_group.tf_public_sg.id
    }

    ## add mapping for ingress ports
    output "ingress_port_mapping" {
      value = {
        for ingress in aws_security_group.tf_public_sg.ingress :
        format("From %d", ingress.from_port) => format("To %d", ingress.to_port)
      }
    }
  4. Launch Terraform to test it
    terraform init
    terraform validate
    terraform plan
    terraform apply -auto-approve
    # select a region and go

Terraform: State

Terraform formatting and remote state

  • Version the Terraform state using an S3 bucket.
  • Create an S3 Bucket
    1. Search for S3 in Find Services -> Create Bucket
      • Enter a unique bucket name
      • Choose region (e.g. US East (N. Virginia)) -> Next, next…
    2. On Review page, ‘Create bucket’
      • Add the Terraform Folder to the Bucket
      • Create terraform-aws folder on the bucket and save
  • Add Backend to Scripts
    1. Setup from the Docker Swarm Manager
      cd ~/terraform/AWS
      # set environment vars
      export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
      export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
      export AWS_DEFAULT_REGION="us-east-1"
    2. Create terraform.tf
      terraform {
        backend "s3" {
          key = "terraform-aws/terraform.tfstate"
        }
      }
    3. Work with Terraform
      terraform init -backend-config "bucket=[BUCKET_NAME]"
      terraform validate
      terraform plan
      terraform apply -auto-approve
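
      # (not in the original lab) confirm the remote state object landed in the bucket
      aws s3 ls s3://[BUCKET_NAME]/terraform-aws/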
      terraform destroy -auto-approve

Using Remote State with Jenkins

  • Update the CI/CD process to use remote state with our Jenkins Pipelines: two separate pipelines, deployInfrastructure and destroyInfrastructure
  • Create S3 Bucket
    1. Search for S3 in Find Services -> ‘Create Bucket’
      • Enter a unique bucket name
      • Choose region (e.g. US East (N. Virginia)) -> Next, next…
    2. On Review page, ‘Create bucket’
      • Add the Terraform Folder to the Bucket
      • Create terraform-aws folder on the bucket and save
    3. Create the Jenkins DeployInfrastructure job
      • Item name = “DeployDockerService”, Pipeline,
      • ‘Add Parameter’ -> String Parameter
        • Name = “access_key_id”
        • Default Value = “Access Key Id”
      • ‘Add Parameter’ -> String Parameter
        • Name = “secret_access_key”
        • Default Value = “Secret Access Key”
      • ‘Add Parameter’ -> String Parameter
        • Name = “bucket_name”
        • Default Value = “S3 Bucket”
      • ‘Add Parameter’ -> Choice Parameter
        • Name = “image_name”
        • Choices = “ghost:latest” and “ghost:alpine” (make sure they are on separate lines)
      • ‘Add Parameter’ -> String Parameter
        • Name = “ghost_ext_port”
        • Default Value = 80
      • In Pipeline section -> add to Script:
        env.AWS_ACCESS_KEY_ID = "${access_key_id}"
        env.AWS_SECRET_ACCESS_KEY = "${secret_access_key}"
        env.AWS_DEFAULT_REGION = 'us-east-1'

        node {
          git (
            url: 'https://github.com/linuxacademy/content-terraform-docker-service.git',
            branch: 'remote-state'
          )
          stage('init') {
            sh label: 'terraform init', script: "terraform init -backend-config \"bucket=${bucket_name}\""
          }
          stage('plan') {
            sh label: 'terraform plan', script: "terraform plan -out=tfplan -input=false -var image_name=${image_name} -var ghost_ext_port=${ghost_ext_port}"
          }
          stage('apply') {
            sh label: 'terraform apply', script: "terraform apply -lock=false -input=false tfplan"
          }
        }
    4. Create the Jenkins DestroyInfrastructure job
      • Item name = “DestroyDockerService”, Pipeline,
      • ‘Copy from’ = “DeployDockerService”, Ok.
      • In Pipeline section -> edit Script to this:
        env.AWS_ACCESS_KEY_ID = "${access_key_id}"
        env.AWS_SECRET_ACCESS_KEY = "${secret_access_key}"
        env.AWS_DEFAULT_REGION = 'us-east-1'

        node {
          git (
            url: 'https://github.com/linuxacademy/content-terraform-docker-service.git',
            branch: 'remote-state'
          )
          stage('init') {
            sh label: 'terraform init', script: "terraform init -backend-config \"bucket=${bucket_name}\""
          }
          stage('plan_destroy') {
            sh label: 'terraform plan', script: "terraform plan -destroy -out=tfdestroyplan -input=false -var image_name=${image_name} -var ghost_ext_port=${ghost_ext_port}"
          }
          stage('destroy') {
            sh label: 'terraform apply', script: "terraform apply -lock=false -input=false tfdestroyplan"
          }
        }
    5. Once Jenkins is running, check it
      docker container ls
      docker exec -it 73575a9ee4ac /bin/bash

Terraform: AWS

Our Architecture: What We’re Going to Build

  • Root (orchestrate all)
    • Storage (S3)
    • Networking (gateway, route tables, security group)
    • Compute (2 EC2)

Storage

S3 bucket and random ID

  • environment setup: ~/terraform/AWS/storage
    mkdir -p ~/terraform/AWS/storage
    cd ~/terraform/AWS/storage
  • example
    1. Edit main.tf
      #---------storage/main.tf---------

      # Create a random id
      resource "random_id" "tf_bucket_id" {
        byte_length = 2
      }

      # Create the bucket
      resource "aws_s3_bucket" "tf_code" {
        bucket        = "${var.project_name}-${random_id.tf_bucket_id.dec}"
        acl           = "private"
        force_destroy = true

        tags {
          Name = "tf_bucket"
        }
      }
    2. Edit variables.tf
      #----storage/variables.tf----
      variable "project_name" {}
    3. Edit outputs.tf
      #----storage/outputs.tf----
      output "bucketname" {
        value = "${aws_s3_bucket.tf_code.id}"
      }
    4. Work with Terraform
      terraform init
      terraform validate
      # plan the deployment
      export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
      export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
      export AWS_DEFAULT_REGION="us-east-1"
      terraform plan -out=tfplan -var project_name=la-terraform
      # deploy
      terraform apply tfplan

      # clean up
      terraform destroy -auto-approve -var project_name=la-terraform

Root module

  • environment: ~/terraform/AWS
    cd ~/terraform/AWS
    touch {main.tf,variables.tf,outputs.tf,terraform.tfvars}
  • example
    1. Edit main.tf
      #----root/main.tf-----
      provider "aws" {
        region = "${var.aws_region}"
      }

      # Deploy Storage Resources
      module "storage" {
        source       = "./storage"
        project_name = "${var.project_name}"
      }
    2. Edit variables.tf
      #----root/variables.tf-----
      variable "aws_region" {}

      #------ storage variables
      variable "project_name" {}
    3. Edit terraform.tfvars:
      aws_region   = "us-east-1"
      project_name = "la-terraform"
    4. Edit outputs.tf
      #----root/outputs.tf-----

      #----storage outputs------
      output "Bucket Name" {
        value = "${module.storage.bucketname}"
      }
    5. Work with Terraform
      export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
      export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
      terraform init
      terraform validate
      # deploy the S3 bucket
      terraform apply -auto-approve

      # clean up
      terraform destroy -auto-approve

Networking

VPC, Internet Gateway, and Route Tables

  • environment: ~/terraform/AWS/networking
    mkdir -p  ~/terraform/AWS/networking
    cd ~/terraform/AWS/networking
    touch {main.tf,variables.tf,outputs.tf,terraform.tfvars}
  • example
    1. Edit main.tf

      #----networking/main.tf----

      data "aws_availability_zones" "available" {}

      resource "aws_vpc" "tf_vpc" {
        cidr_block           = "${var.vpc_cidr}"
        enable_dns_hostnames = true
        enable_dns_support   = true

        tags {
          Name = "tf_vpc"
        }
      }

      resource "aws_internet_gateway" "tf_internet_gateway" {
        vpc_id = "${aws_vpc.tf_vpc.id}"

        tags {
          Name = "tf_igw"
        }
      }

      resource "aws_route_table" "tf_public_rt" {
        vpc_id = "${aws_vpc.tf_vpc.id}"

        route {
          cidr_block = "0.0.0.0/0"
          gateway_id = "${aws_internet_gateway.tf_internet_gateway.id}"
        }

        tags {
          Name = "tf_public"
        }
      }

      resource "aws_default_route_table" "tf_private_rt" {
        default_route_table_id = "${aws_vpc.tf_vpc.default_route_table_id}"

        tags {
          Name = "tf_private"
        }
      }

      resource "aws_subnet" "tf_public_subnet" {
        count                   = 2
        vpc_id                  = "${aws_vpc.tf_vpc.id}"
        cidr_block              = "${var.public_cidrs[count.index]}"
        map_public_ip_on_launch = true
        availability_zone       = "${data.aws_availability_zones.available.names[count.index]}"

        tags {
          Name = "tf_public_${count.index + 1}"
        }
      }

      resource "aws_route_table_association" "tf_public_assoc" {
        count          = "${aws_subnet.tf_public_subnet.count}"
        subnet_id      = "${aws_subnet.tf_public_subnet.*.id[count.index]}"
        route_table_id = "${aws_route_table.tf_public_rt.id}"
      }

      resource "aws_security_group" "tf_public_sg" {
        name        = "tf_public_sg"
        description = "Used for access to the public instances"
        vpc_id      = "${aws_vpc.tf_vpc.id}"

        #SSH
        ingress {
          from_port   = 22
          to_port     = 22
          protocol    = "tcp"
          cidr_blocks = ["${var.accessip}"]
        }

        #HTTP
        ingress {
          from_port   = 80
          to_port     = 80
          protocol    = "tcp"
          cidr_blocks = ["${var.accessip}"]
        }

        egress {
          from_port   = 0
          to_port     = 0
          protocol    = "-1"
          cidr_blocks = ["0.0.0.0/0"]
        }
      }
    2. Edit variables.tf

      #----networking/variables.tf----
      variable "vpc_cidr" {}

      variable "public_cidrs" {
        type = "list"
      }

      variable "accessip" {}
    3. Edit outputs.tf

      #-----networking/outputs.tf----

      output "public_subnets" {
        value = "${aws_subnet.tf_public_subnet.*.id}"
      }

      output "public_sg" {
        value = "${aws_security_group.tf_public_sg.id}"
      }

      output "subnet_ips" {
        value = "${aws_subnet.tf_public_subnet.*.cidr_block}"
      }
    4. Edit terraform.tfvars

      vpc_cidr     = "10.123.0.0/16"
      public_cidrs = [
        "10.123.1.0/24",
        "10.123.2.0/24"
      ]
      accessip = "0.0.0.0/0"
    5. Work with Terraform

      export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
      export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
      terraform init
      terraform validate
      # deploy network
      terraform apply -auto-approve
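
      # (not in the original lab) spot-check the created VPC and subnets before cleaning up
      aws ec2 describe-vpcs --filters Name=tag:Name,Values=tf_vpc
      aws ec2 describe-subnets --filters Name=tag:Name,Values='tf_public_*'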

      # clean up
      terraform destroy -auto-approve
      rm terraform.tfvars

Security, and the count attribute

  • environment: ~/terraform/AWS/networking
    mkdir -p  ~/terraform/AWS/networking
    cd ~/terraform/AWS/networking
    touch {main.tf,variables.tf,outputs.tf,terraform.tfvars}
  • example
    1. Edit main.tf
      #----networking/main.tf----

      data "aws_availability_zones" "available" {}

      resource "aws_vpc" "tf_vpc" {
        cidr_block           = "${var.vpc_cidr}"
        enable_dns_hostnames = true
        enable_dns_support   = true

        tags {
          Name = "tf_vpc"
        }
      }

      resource "aws_internet_gateway" "tf_internet_gateway" {
        vpc_id = "${aws_vpc.tf_vpc.id}"

        tags {
          Name = "tf_igw"
        }
      }

      resource "aws_route_table" "tf_public_rt" {
        vpc_id = "${aws_vpc.tf_vpc.id}"

        route {
          cidr_block = "0.0.0.0/0"
          gateway_id = "${aws_internet_gateway.tf_internet_gateway.id}"
        }

        tags {
          Name = "tf_public"
        }
      }

      resource "aws_default_route_table" "tf_private_rt" {
        default_route_table_id = "${aws_vpc.tf_vpc.default_route_table_id}"

        tags {
          Name = "tf_private"
        }
      }

      resource "aws_subnet" "tf_public_subnet" {
        count                   = 2
        vpc_id                  = "${aws_vpc.tf_vpc.id}"
        cidr_block              = "${var.public_cidrs[count.index]}"
        map_public_ip_on_launch = true
        availability_zone       = "${data.aws_availability_zones.available.names[count.index]}"

        tags {
          Name = "tf_public_${count.index + 1}"
        }
      }

      resource "aws_route_table_association" "tf_public_assoc" {
        count          = "${aws_subnet.tf_public_subnet.count}"
        subnet_id      = "${aws_subnet.tf_public_subnet.*.id[count.index]}"
        route_table_id = "${aws_route_table.tf_public_rt.id}"
      }

      resource "aws_security_group" "tf_public_sg" {
        name        = "tf_public_sg"
        description = "Used for access to the public instances"
        vpc_id      = "${aws_vpc.tf_vpc.id}"

        #SSH
        ingress {
          from_port   = 22
          to_port     = 22
          protocol    = "tcp"
          cidr_blocks = ["${var.accessip}"]
        }

        #HTTP
        ingress {
          from_port   = 80
          to_port     = 80
          protocol    = "tcp"
          cidr_blocks = ["${var.accessip}"]
        }

        egress {
          from_port   = 0
          to_port     = 0
          protocol    = "-1"
          cidr_blocks = ["0.0.0.0/0"]
        }
      }

Variables and outputs

  1. Edit variables.tf
    #----networking/variables.tf----
    variable "vpc_cidr" {}

    variable "public_cidrs" {
      type = "list"
    }

    variable "accessip" {}
  2. Edit outputs.tf
    #-----networking/outputs.tf----

    output "public_subnets" {
      value = "${aws_subnet.tf_public_subnet.*.id}"
    }

    output "public_sg" {
      value = "${aws_security_group.tf_public_sg.id}"
    }

    output "subnet_ips" {
      value = "${aws_subnet.tf_public_subnet.*.cidr_block}"
    }
  3. Edit terraform.tfvars
    vpc_cidr     = "10.123.0.0/16"
    public_cidrs = [
      "10.123.1.0/24",
      "10.123.2.0/24"
    ]
    accessip = "0.0.0.0/0"
  4. Work with Terraform
    export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
    export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
    terraform init
    terraform validate
    # deploy network
    terraform apply -auto-approve

    # clean up
    terraform destroy -auto-approve
    rm terraform.tfvars

Root module

  • environment: ~/terraform/AWS
  • example
    1. Edit main.tf
      provider "aws" {
        region = "${var.aws_region}"
      }

      # Deploy Storage Resources
      module "storage" {
        source       = "./storage"
        project_name = "${var.project_name}"
      }

      # Deploy Networking Resources
      module "networking" {
        source       = "./networking"
        vpc_cidr     = "${var.vpc_cidr}"
        public_cidrs = "${var.public_cidrs}"
        accessip     = "${var.accessip}"
      }
    2. Edit variables.tf
      #----root/variables.tf-----
      variable "aws_region" {}

      #------ storage variables
      variable "project_name" {}

      #-------networking variables
      variable "vpc_cidr" {}
      variable "public_cidrs" {
        type = "list"
      }
      variable "accessip" {}
    3. Edit terraform.tfvars
      aws_region   = "us-east-1"
      project_name = "la-terraform"
      vpc_cidr     = "10.123.0.0/16"
      public_cidrs = [
        "10.123.1.0/24",
        "10.123.2.0/24"
      ]
      accessip = "0.0.0.0/0"
    4. Work with Terraform
      export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
      export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
      terraform init
      terraform validate
      terraform apply -auto-approve

      # clean up
      terraform destroy -auto-approve

Compute

AMI data, key pair, and the file function

  • environment: ~/terraform/AWS/compute
    mkdir -p  ~/terraform/AWS/compute
    cd ~/terraform/AWS/compute
    touch {main.tf,variables.tf,outputs.tf}
  • create SSH key
    ssh-keygen
  • example
    1. Edit main.tf
      #----compute/main.tf----
      data "aws_ami" "server_ami" {
        most_recent = true

        owners = ["amazon"]

        filter {
          name   = "name"
          values = ["amzn-ami-hvm*-x86_64-gp2"]
        }
      }

      resource "aws_key_pair" "tf_auth" {
        key_name   = "${var.key_name}"
        public_key = "${file(var.public_key_path)}"
      }
    2. Edit variables.tf
      #----compute/variables.tf----
      variable "key_name" {}

      variable "public_key_path" {}
    3. Work with Terraform
      export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
      export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
      terraform init
      terraform validate
      # replace key_name and public_key_path with your own values
      terraform plan -out=tfplan -var 'key_name=tfkey' -var 'public_key_path=/home/myUser/.ssh/id_rsa.pub'
      terraform apply tfplan

      # clean up
      terraform destroy -auto-approve

EC2 Instance

  1. Edit main.tf
    #-----compute/main.tf-----
    data "aws_ami" "server_ami" {
      most_recent = true

      owners = ["amazon"]

      filter {
        name   = "name"
        values = ["amzn-ami-hvm*-x86_64-gp2"]
      }
    }

    resource "aws_key_pair" "tf_auth" {
      key_name   = "${var.key_name}"
      public_key = "${file(var.public_key_path)}"
    }

    data "template_file" "user-init" {
      count    = 2
      template = "${file("${path.module}/userdata.tpl")}"

      vars {
        firewall_subnets = "${element(var.subnet_ips, count.index)}"
      }
    }

    resource "aws_instance" "tf_server" {
      count         = "${var.instance_count}"
      instance_type = "${var.instance_type}"
      ami           = "${data.aws_ami.server_ami.id}"

      tags {
        Name = "tf_server-${count.index + 1}"
      }

      key_name               = "${aws_key_pair.tf_auth.id}"
      vpc_security_group_ids = ["${var.security_group}"]
      subnet_id              = "${element(var.subnets, count.index)}"
      user_data              = "${data.template_file.user-init.*.rendered[count.index]}"
    }
  2. Edit userdata.tpl
    #!/bin/bash
    yum install httpd -y
    echo "Subnet for Firewall: ${firewall_subnets}" >> /var/www/html/index.html
    service httpd start
    chkconfig httpd on
  3. Edit variables.tf
    #-----compute/variables.tf-----

    variable "key_name" {}

    variable "public_key_path" {}

    variable "subnet_ips" {
      type = "list"
    }

    variable "instance_count" {}

    variable "instance_type" {}

    variable "security_group" {}

    variable "subnets" {
      type = "list"
    }
  4. Edit outputs.tf
    #-----compute/outputs.tf-----

    output "server_id" {
      value = "${join(", ", aws_instance.tf_server.*.id)}"
    }

    output "server_ip" {
      value = "${join(", ", aws_instance.tf_server.*.public_ip)}"
    }

Root module

  1. Edit main.tf
    provider "aws" {
      region = "${var.aws_region}"
    }

    # Deploy Storage Resources
    module "storage" {
      source       = "./storage"
      project_name = "${var.project_name}"
    }

    # Deploy Networking Resources
    module "networking" {
      source       = "./networking"
      vpc_cidr     = "${var.vpc_cidr}"
      public_cidrs = "${var.public_cidrs}"
      accessip     = "${var.accessip}"
    }

    # Deploy Compute Resources
    module "compute" {
      source          = "./compute"
      instance_count  = "${var.instance_count}"
      key_name        = "${var.key_name}"
      public_key_path = "${var.public_key_path}"
      instance_type   = "${var.server_instance_type}"
      subnets         = "${module.networking.public_subnets}"
      security_group  = "${module.networking.public_sg}"
      subnet_ips      = "${module.networking.subnet_ips}"
    }
  2. Edit variables.tf
    #----root/variables.tf-----
    variable "aws_region" {}

    #------ storage variables
    variable "project_name" {}

    #-------networking variables
    variable "vpc_cidr" {}
    variable "public_cidrs" {
      type = "list"
    }
    variable "accessip" {}

    #-------compute variables
    variable "key_name" {}
    variable "public_key_path" {}
    variable "server_instance_type" {}
    variable "instance_count" {
      default = 1
    }
  3. Edit outputs.tf
    #----root/outputs.tf-----

    #----storage outputs------
    output "Bucket Name" {
      value = "${module.storage.bucketname}"
    }

    #---Networking Outputs -----
    output "Public Subnets" {
      value = "${join(", ", module.networking.public_subnets)}"
    }

    output "Subnet IPs" {
      value = "${join(", ", module.networking.subnet_ips)}"
    }

    output "Public Security Group" {
      value = "${module.networking.public_sg}"
    }

    #---Compute Outputs ------
    output "Public Instance IDs" {
      value = "${module.compute.server_id}"
    }

    output "Public Instance IPs" {
      value = "${module.compute.server_ip}"
    }
  4. Edit terraform.tfvars
    aws_region   = "us-west-1"
    project_name = "la-terraform"
    vpc_cidr     = "10.123.0.0/16"
    public_cidrs = [
      "10.123.1.0/24",
      "10.123.2.0/24"
    ]
    accessip             = "0.0.0.0/0"
    key_name             = "tf_key"
    public_key_path      = "/home/cloud_user/.ssh/id_rsa.pub"
    server_instance_type = "t2.micro"
    instance_count       = 2
  5. work with Terraform:
    export AWS_ACCESS_KEY_ID="[ACCESS_KEY]"
    export AWS_SECRET_ACCESS_KEY="[SECRET_KEY]"
    terraform init
    terraform validate
    terraform plan
    terraform apply
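
    # (not in the original lab) list the outputs and hit one of the public instance IPs
    terraform output
    curl http://[INSTANCE_PUBLIC_IP]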

    # clean up
    terraform destroy

Terraform: CI/CD environment

Building a custom Jenkins image

  1. Setup environment

    mkdir -p jenkins
    cd jenkins
    vi Dockerfile
  2. Edit the Dockerfile

    FROM jenkins/jenkins:lts
    USER root
    RUN apt-get update -y && apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
    RUN curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey
    RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
    RUN apt-get update -y
    RUN apt-get install -y docker-ce docker-ce-cli containerd.io
    RUN curl -O https://releases.hashicorp.com/terraform/0.11.13/terraform_0.11.13_linux_amd64.zip && unzip terraform_0.11.13_linux_amd64.zip -d /usr/local/bin/
    USER ${user}
  3. Build the image

    docker build -t jenkins:terraform .
    # check images
    docker image ls
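
Optionally, the image can be smoke-tested on its own before wiring it into Terraform in the next section (a sketch; assumes port 8080 is free on the host):

```bash
# run the custom image in the foreground; Ctrl+C stops and removes it
docker run --rm -p 8080:8080 jenkins:terraform
```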

Setting up Jenkins

  1. Edit main.tf

    # Jenkins Volume
    resource "docker_volume" "jenkins_volume" {
      name = "jenkins_data"
    }

    # Start the Jenkins Container
    resource "docker_container" "jenkins_container" {
      name  = "jenkins"
      image = "jenkins:terraform"
      ports {
        internal = "8080"
        external = "8080"
      }

      volumes {
        volume_name    = "${docker_volume.jenkins_volume.name}"
        container_path = "/var/jenkins_home"
      }

      volumes {
        host_path      = "/var/run/docker.sock"
        container_path = "/var/run/docker.sock"
      }
    }
  2. Deploy via Terraform

    terraform init
    terraform plan -out=tfplan
    # deploy Jenkins
    terraform apply tfplan
    # get the admin password
    docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword

Creating a Jenkins simple job

  • Job will deploy a Docker container using Terraform, list the container, and then destroy it
  • example
    1. In the Jenkins dashboard: ‘New Item’
    2. Select ‘Freestyle Project’, item name = “DeployGhost”
    3. Source Code Management -> Git. Repository URL = https://github.com/linuxacademy/content-terraform-docker.git
    4. Build section -> Add build step -> Execute shell
      terraform init
      terraform plan -out=tfplan
      terraform apply tfplan
      docker container ls
      terraform destroy -auto-approve
    5. ‘Build Now’ (left-hand menu) -> arrow next to #1 -> Console Output, wait for result

Building a Jenkins pipeline

Deploy out a Ghost blog

  1. Jenkins dashboard -> ‘New Item’ -> item name = “PipelinePart1”, Pipeline, Ok
  2. Check the box for ‘This project is parameterized’
    • ‘Add Parameter’ -> Choice Parameter
      • Name = “action”
      • Choices = “Deploy”, “Destroy” (make sure they are on separate lines)
      • Description “The action that will be executed”
    • ‘Add Parameter’ -> Choice Parameter
      • Name = “image_name”
      • Choices = “ghost:latest”, “ghost:alpine” (make sure they are on separate lines)
      • Description = “Enter The image Ghost Blog will deploy”
    • ‘Add Parameter’ -> String Parameter
      • Name = “ext_port”
      • Default Value = “80”
      • Description = “The Public Port”
  3. In the Pipeline section, set ‘Definition’ to ‘Pipeline script’ and add:
    node {
      git 'https://github.com/linuxacademy/content-terraform-docker.git'
      if(action == 'Deploy') {
        stage('init') {
          sh """
            terraform init
          """
        }
        stage('plan') {
          sh label: 'terraform plan', script: "terraform plan -out=tfplan -input=false -var image_name=${image_name} -var ext_port=${ext_port}"
          script {
            timeout(time: 10, unit: 'MINUTES') {
              input(id: "Deploy Gate", message: "Deploy environment?", ok: 'Deploy')
            }
          }
        }
        stage('apply') {
          sh label: 'terraform apply', script: "terraform apply -lock=false -input=false tfplan"
        }
      }

      if(action == 'Destroy') {
        stage('plan_destroy') {
          sh label: 'terraform plan destroy', script: "terraform plan -destroy -out=tfdestroyplan -input=false -var image_name=${image_name} -var ext_port=${ext_port}"
        }
        stage('destroy') {
          script {
            timeout(time: 10, unit: 'MINUTES') {
              input(id: "Destroy Gate", message: "Destroy environment?", ok: 'Destroy')
            }
          }
          sh label: 'Destroy environment', script: "terraform apply -lock=false -input=false tfdestroyplan"
        }
      }
    }

Deploy out a Swarm service

  • Setup of a Docker Swarm

    1. On the manager node get the join token
      docker swarm join-token worker
    2. On the worker node run the join command (pasting the join token)
      docker swarm join --token [JOIN_TOKEN] [IP]:2377
  • Get Jenkins Running

  1. Get the Jenkins password
    sudo cat /var/lib/jenkins/secrets/initialAdminPassword
  2. Browse to http://<Swarm_Manager_Public_IP>:8080/ and use the password
  3. Install the suggested plugins
  4. Create a user. Done
  5. If there are issues with updates or dependencies, go to Dashboard -> ‘Manage Jenkins’ and fix them there
  • Set up a pipeline
    1. Jenkins dashboard -> ‘New Item’. Item name = “PipelinePart2”, Pipeline, Ok
    2. Check box for “This project is parameterized”
      • ‘Add Parameter’ -> Choice Parameter
        • Name = “action”
        • Choices = “Deploy” and “Destroy” (make sure they are on separate lines)
        • Description = “The action that will be executed”
      • ‘Add Parameter’ -> Choice Parameter
        • Name = “image_name”
        • Choices = “ghost:latest” and “ghost:alpine” (make sure they are on separate lines)
        • Description = “The image Ghost Blog will deploy”
      • ‘Add Parameter’ -> String Parameter
        • Name = “ghost_ext_port”
        • Default Value = “80”
        • Description = “The Public Port”
    3. In the Pipeline section, set ‘Definition’ to ‘Pipeline script’ and add:
      node {
        git 'https://github.com/linuxacademy/content-terraform-docker-service.git'
        if(action == 'Deploy') {
          stage('init') {
            sh label: 'terraform init', script: "terraform init"
          }
          stage('plan') {
            sh label: 'terraform plan', script: "terraform plan -out=tfplan -input=false -var image_name=${image_name} -var ghost_ext_port=${ghost_ext_port}"
            script {
              timeout(time: 10, unit: 'MINUTES') {
                input(id: "Deploy Gate", message: "Deploy environment?", ok: 'Deploy')
              }
            }
          }
          stage('apply') {
            sh label: 'terraform apply', script: "terraform apply -lock=false -input=false tfplan"
          }
        }

        if(action == 'Destroy') {
          stage('plan_destroy') {
            sh label: 'terraform plan', script: "terraform plan -destroy -out=tfdestroyplan -input=false -var image_name=${image_name} -var ghost_ext_port=${ghost_ext_port}"
          }
          stage('destroy') {
            script {
              timeout(time: 10, unit: 'MINUTES') {
                input(id: "Destroy Gate", message: "Destroy environment?", ok: 'Destroy')
              }
            }
            sh label: 'terraform apply', script: "terraform apply -lock=false -input=false tfdestroyplan"
          }
          stage('cleanup') {
            sh label: 'cleanup', script: "rm -rf terraform.tfstate*"
          }
        }
      }

Create a MySQL Swarm service that uses Docker Secrets

  1. Jenkins dashboard -> ‘New Item’. Item name = “PipelinePart3”, Pipeline, Ok
  2. Check the box for ‘This project is parameterized’
    • ‘Add Parameter’ -> Choice Parameter.
      • Name = “action”
      • Choices = “Deploy” and “Destroy”, (make sure they are on separate lines)
      • Description = “The action that will be executed”
    • ‘Add Parameter’ -> String Parameter
      • Name = “mysql_root_password”
      • Default Value = “P4ssW0rd0!”
      • Description = “MySQL root password”
    • ‘Add Parameter’ -> String Parameter
      • Name = “mysql_user_password”
      • Default Value = “paSsw0rd0!”
      • Description = “MySQL user password”
  3. In the Pipeline section, set ‘Definition’ to ‘Pipeline script’ and add:
    node {
      git 'https://github.com/linuxacademy/content-terraform-docker-secrets.git'
      if(action == 'Deploy') {
        stage('init') {
          sh label: 'terraform init', script: "terraform init"
        }
        stage('plan') {
          def ROOT_PASSWORD = sh (returnStdout: true, script: """echo ${mysql_root_password} | base64""").trim()
          def USER_PASSWORD = sh (returnStdout: true, script: """echo ${mysql_user_password} | base64""").trim()
          sh label: 'terraform plan', script: "terraform plan -out=tfplan -input=false -var mysql_root_password=${ROOT_PASSWORD} -var mysql_db_password=${USER_PASSWORD}"
          script {
            timeout(time: 10, unit: 'MINUTES') {
              input(id: "Deploy Gate", message: "Deploy ${params.project_name}?", ok: 'Deploy')
            }
          }
        }
        stage('apply') {
          sh label: 'terraform apply', script: "terraform apply -lock=false -input=false tfplan"
        }
      }

      if(action == 'Destroy') {
        stage('plan_destroy') {
          def ROOT_PASSWORD = sh (returnStdout: true, script: """echo ${mysql_root_password} | base64""").trim()
          def USER_PASSWORD = sh (returnStdout: true, script: """echo ${mysql_user_password} | base64""").trim()
          sh label: 'terraform plan', script: "terraform plan -destroy -out=tfdestroyplan -input=false -var mysql_root_password=${ROOT_PASSWORD} -var mysql_db_password=${USER_PASSWORD}"
        }
        stage('destroy') {
          script {
            timeout(time: 10, unit: 'MINUTES') {
              input(id: "Destroy Gate", message: "Destroy ${params.project_name}?", ok: 'Destroy')
            }
          }
          sh label: 'terraform apply', script: "terraform apply -lock=false -input=false tfdestroyplan"
        }
        stage('cleanup') {
          sh label: 'cleanup', script: "rm -rf terraform.tfstate*"
        }
      }
    }