Definition

  • Semantic HTML
  • Tabindex
  • ARIA attributes
  • ARIA roles
  • Keyboard navigation and screen readers

Semantic HTML

Assistive technologies such as screen readers interpret what’s on the page by parsing its HTML. They enable users to take actions based on the elements (e.g. a button signals “you can click this”).

Button, not div

  • Buttons get accessibility “for free”: they are focusable and announced as interactive.
  • divs are container elements, so when a screen reader encounters a div, it treats it as purely presentational.
  • If a div has content or children within it, the screen reader may announce role="group", and the user will completely miss that the div is interactive.
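A minimal illustration of the difference (hypothetical markup):

```html
<!-- Announced as "Save, button"; focusable and activatable by default -->
<button type="button">Save</button>

<!-- Announced as plain text or a group; not focusable, invisible to keyboard users -->
<div class="save-button">Save</div>
```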

Headings with h1-6 tags, not CSS

  • Heading tags such as <h1> and <h2> let assistive technology know that this is important text, and the screen reader will announce “Heading”.
  • Styling text to look like a heading with CSS means losing that significance for screen readers.

Tabindex

It makes interactive elements keyboard-navigable.

  • tabindex="0": adds an element to the default tabbing order.
  • tabindex="-1": makes an element focusable only programmatically, via JavaScript.
  • Do not assign positive values (tabindex > 0): they override the natural tab order and are hard to maintain.
  • Only add tabindex to interactive elements. Don’t add it to a div; use a semantic element such as button instead.
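A sketch of the two useful values (the element ids are hypothetical):

```html
<!-- tabindex="0": places the element in the natural tab order
     (a legacy custom widget; a real <button> is still preferable) -->
<div role="button" tabindex="0">Legacy widget</div>

<!-- tabindex="-1": reachable only from JavaScript, e.g. when opening a modal -->
<div id="modal" role="dialog" tabindex="-1">…</div>
<script>
  // move focus into the modal when it opens
  document.getElementById('modal').focus();
</script>
```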

ARIA attributes

ARIA attributes are a set of HTML attributes used to provide additional information about the purpose and state of elements on a web page. They are especially beneficial to assistive technologies, providing more context and better navigation for users.

  • aria-label: used to provide a label or name for an element.
  • aria-hidden: used to indicate that an element should be hidden from assistive technologies. This can be useful for elements that are used for layout purposes but are not relevant to the content of the page.
  • aria-describedby: used to associate an element with a description, which helps to provide context for the element.
  • aria-live: used to indicate that an element’s content may change dynamically, and that assistive technologies should pay attention to changes in the element’s content.
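The four attributes above in context (ids and text are hypothetical):

```html
<!-- aria-label names an icon-only control -->
<button type="button" aria-label="Close dialog">×</button>

<!-- aria-hidden removes decorative markup from the accessibility tree -->
<span aria-hidden="true">★</span>

<!-- aria-describedby links a field to its help text -->
<input id="pwd" type="password" aria-describedby="pwd-help">
<p id="pwd-help">Must be at least 12 characters.</p>

<!-- aria-live announces dynamic content changes -->
<div id="status" aria-live="polite"></div>
```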

ARIA roles

The role attribute defines the purpose of an HTML element, giving it an explicit semantic meaning.

  • button: an element should be treated as a button.
  • alert: an element is an alert box.
  • presentation: an element is only presentational.
  • grid: in a grid component built with CSS and divs, you can use role="grid" to let assistive technologies know about the semantics of the component. Use with caution.
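A sketch of role usage; the grid case shows why caution is needed, since every descendant role and the keyboard handling become your responsibility:

```html
<!-- presentation strips the element's implicit list semantics -->
<ul role="presentation">
  <li>Purely visual item</li>
</ul>

<!-- a div-based grid needs explicit row/gridcell roles (plus arrow-key handling in JS) -->
<div role="grid">
  <div role="row">
    <div role="gridcell">A1</div>
    <div role="gridcell">A2</div>
  </div>
</div>
```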

Keyboard navigation and screen readers

Many users with motor disabilities rely on their keyboard and assistive technologies to navigate the web. So it’s critical that every component be navigable using a keyboard and screen reader.

  • tab key to navigate to different sections of the site.
  • spacebar to select elements, such as a checkbox.
  • enter to press buttons.

Tests

  • Focus remains visible: ensure that you can clearly see which element is being focused on the page. Focus should always remain visible.
  • Tab order: when tabbing through sections, the order of tabbing should follow the natural flow and logical structure of the website. It should not jump back and forth between sections.
  • Keyboard traps: ensure that when navigating with the keyboard, the focus doesn’t get trapped on an element. e.g. modals, widgets: ensure you are able to navigate back to the site.

Shell (zsh)

ZSH provides:

  • Auto-correct of misspelled commands.
  • Easy drop-in replacement of bash.
  • Better cd completion using <tab>.
  • Path expansion: cd /u/t/d/d + <tab> = cd /user/thalion/dev/demo.

Configuration framework: Oh My Zsh

Oh My Zsh includes 200+ plugins and 140+ themes. Among the most useful are:

  • git: tons of aliases and useful functions for git.
  • tmux: alias and settings for integrating zsh with tmux.
  • node: adds a node-docs command that opens the Node.js documentation site.
  • web-search: initialize web searches from command line.
  • auto-suggestions: fast, unobtrusive suggestions as you type based on history.
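A minimal ~/.zshrc sketch enabling those plugins (assumes Oh My Zsh is already installed; the autosuggestions plugin ships separately as zsh-autosuggestions and must be cloned into the custom plugins folder first):

```shell
# ~/.zshrc
export ZSH="$HOME/.oh-my-zsh"
plugins=(git tmux node web-search zsh-autosuggestions)
source "$ZSH/oh-my-zsh.sh"
```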

Session Management (tmux)

It allows you to create sessions, so you can work on several projects at once.

  • Each session can be customized to the exact layout you need.
  • You can name sessions for easy switching, and even save and restore sessions if your terminal is closed.
  • It has its own customizable status line that lets you display things like time, date, CPU usage, and more.
  • It even has a plugin manager and a whole slew of awesome plugins & features.

Search (ripgrep)

“Use ripgrep if you like speed, filtering by default (it ignores anything your .gitignore ignores and skips binaries and hidden files), fewer bugs, and Unicode support.”

Fuzzy Finding (fzf)

fzf is a general-purpose command-line fuzzy finder. It can be used with any list: files, command history, processes, hostnames, bookmarks, git commits… There are many examples on the fzf wiki.

Terminal Prompt (Spaceship)

Spaceship is simple, clean, and provides only relevant information:

  • git/mercurial integration.
  • battery level indicator.
  • clever host name and user data.
  • version numbers for a variety of libraries.
  • icons.

Changing directories (z)

Once installed, z will start learning which directories you visit. Then, you can give it a regex (or simple folder name) to hop to the most likely candidate.

# before
cd ~/dev/code-notepad
# using z
z code-notepad

Bonus

Weather

curl wttr.in

Star wars

telnet towel.blinkenlights.nl

Types of tests

  • Unit tests: ensure the logical part of your code is working
  • Integration/service tests: ensure that your DAO (database access) and BO (Business objects) layers work as expected
  • UI tests: automate your UI

Behavior driven development (BDD)

  • Story: should have a clear, explicit title.
  • Scenario: acceptance criteria or scenario description of each specific case of the narrative. Its structure is:
    • Given: it starts by specifying the initial condition that is assumed to be true at the beginning of the scenario. This may consist of a single clause, or several.
    • When: it then states which event triggers the start of the scenario.
    • Then: it states the expected outcome, in one or more clauses.

Unit and Integration tests, with UnitTest

  • unittest is similar in structure to JUnit.
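A minimal sketch of that JUnit-like lifecycle in unittest (setUp/tearDown run around every test, like JUnit’s @Before/@After):

```python
import unittest

class TestMath(unittest.TestCase):
    def setUp(self):
        # runs before each test (JUnit: @Before)
        self.values = [1, 2, 3]

    def tearDown(self):
        # runs after each test (JUnit: @After)
        self.values = None

    def test_sum(self):
        self.assertEqual(sum(self.values), 6)

# run the case programmatically (CLI equivalent: python -m unittest)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMath)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```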

Project structure

  • Example project structure
    * scripts
      * process.py
    * tests
      * test_process.py
    * README.md
    * requirements.txt
    * .pylintrc
  • Naming: the test file for process.py may be test_process.py.
  • Location: you may put your test files in a tests folder under the root folder of the project.
    • You should try to replicate under tests the same structure of your scripts folder.

File structure

  • Example, not real code

    # imports
    import unittest
    import unittest.mock
    # if we are working on AWS
    import boto3
    # external mock library for AWS
    from moto import mock_ec2
    # file to test
    from scripts.process import Process

    # activate mock from moto
    @mock_ec2
    class TestProcess(unittest.TestCase):

        # VARIABLES
        # private constants
        __LOCATION = "eu-west-1"
        __DUMMY_ROLE_DICT = {"demo-dev-1": "arn:demo_dev_1"}

        # instance to test
        process = Process()

        # SET UP TEST INTEGRITY
        # test setup
        def setUp(self):
            # initialize, generic for tests, to keep integrity
            self.started_ec2_data = {}

        def tearDown(self):
            # clean up after each test, to keep integrity
            self.started_ec2_data = {}

        # SET UP MOCKS
        def get_mock_helper(self):
            # mock code from another class
            helper = unittest.mock.Mock()
            helper.get_dict.return_value = self.__DUMMY_ROLE_DICT
            return helper

        # TESTS
        def test_report_ec2_has_instance(self):
            # set up input
            dict_profiles = {"demo-dev-1": "arn:demo_dev_1"}

            # patch mocks for external calls in the file under test
            with unittest.mock.patch(
                "scripts.process.Helper",
                return_value=self.get_mock_helper()
            ):
                # invoke method to test
                response = self.process.report_ec2(dict_profiles)
                # assertion to check the results
                assert response["report"] != [] and response["errors"] == []

Run tests

  • You can run the tests manually via

    python -m unittest discover -s "./tests/"
  • You may also check the code coverage for those unit tests.

    # install
    python -m pip install coverage
    # run
    python -m coverage run --source="scripts" -m unittest discover
    # get coverage report on CLI
    python -m coverage report
    # get html coverage
    python -m coverage html

E2E tests on the UI, with Gherkin and Python

Write Gherkin files

Project structure

  • Example project structure
    * scripts
      * process.py
    * tests
      * test_process.py
    * features
      * features
        * application.feature
      * images
        * application_running.png
      * steps
        * application.py
        * application_walkthrough.py
    * README.md
    * requirements.txt
    * .pylintrc
    • Naming: source files have the .feature extension.
    • Location: under the features folder; a single Gherkin source file contains the description of a single feature.
      • features: .feature files
      • images: screenshots, sorted out by platform folder
      • steps: .py test steps

File structure

.feature files

  • Every .feature file must consist of a single feature.
  • A feature starts with the Feature keyword, followed by its title and an optional indented description.
  • A feature usually contains a list of scenarios, each starting with the Scenario keyword on a new line.
    • Every scenario starts with the Scenario: keyword (or localized one), followed by an optional scenario title.
    • Every scenario has steps:
      • Given.
      • When.
      • Then.
    • If scenarios are repetitive, you may use scenario outlines, using a template with placeholders.
  • You can use tags to group features and scenarios together, independent of your file and directory structure.
  • Background: it is like an untitled scenario, containing a number of steps. The background is run before each of your scenarios, but after your BeforeScenario hooks.

Examples

  • Basic example

    Feature: Serve coffee
      In order to earn money, customers should be able to buy coffee at all times

      Scenario: Buy last coffee
        Given there are 1 coffees left in the machine
        And I have deposited 1 dollar
        When I press the coffee button
        Then I should be served a coffee
  • Outline example

    Scenario Outline: Eating
      Given there are <start> cucumbers
      When I eat <eat> cucumbers
      Then I should have <left> cucumbers

      Examples:
        | start | eat | left |
        | 12    | 5   | 7    |
        | 20    | 5   | 15   |
  • Background example

    Feature: Multiple site support

      Background:
        Given a global administrator named "Greg"
        And a blog named "Greg's anti-tax rants"
        And a customer named "Wilson"
        And a blog named "Expensive Therapy" owned by "Wilson"

      Scenario: Wilson posts to his own blog
        Given I am logged in as Wilson
        When I try to post to "Expensive Therapy"
        Then I should see "Your article was published."

      Scenario: Greg posts to a client's blog
        Given I am logged in as Greg
        When I try to post to "Expensive Therapy"
        Then I should see "Your article was published."

Get Python files

Quick how-to

  • Gherkin features are executed through Python step implementations using Behave; pyautogui can be used to drive the UI from those steps.

    # install
    python -m pip install behave
    python -m pip install pyautogui
    # use
    python -m behave
    # python -m behave features/

Code example

  • original features/features/example.feature file

    Feature: Showing off behave

      Scenario: Run a simple test
        Given we have behave installed
        When we implement 5 tests
        Then behave will test them for us!
  • resulting features/steps/example.py file

    from behave import given, when, then, step

    @given('we have behave installed')
    def step_impl(context):
        pass

    @when('we implement {number:d} tests')
    def step_impl(context, number):
        # number is converted into an integer
        assert number > 1 or number == 0
        context.tests_count = number

    @then('behave will test them for us!')
    def step_impl(context):
        assert context.failed is False
        assert context.tests_count >= 0

Introduction

  • Terraform as a tool is cloud agnostic: it supports anything that exposes an API and has enough developer support for a “provider” to exist for it.
  • Terraform by itself does not abstract across clouds at all; consider carefully whether doing so is a good idea unless you have a really strong use case.
    • If you did need to do this you would need to:
      • build a set of modules that abstract the cloud layer away from the module users.
      • allow them to specify the cloud provider as a variable (potentially controllable from some outside script).
  • You can check the registry.

Example: abstract when there is resource parity

Original DNS modules

Google Cloud DNS

  • modules/google/dns/record/main.tf

    variable "count" {}

    variable "domain_name_record" {}
    variable "domain_name_zone" {}
    variable "domain_name_target" {}

    resource "google_dns_record_set" "frontend" {
      count = "${var.count}"
      name  = "${var.domain_name_record}.${var.domain_name_zone}"
      type  = "CNAME"
      ttl   = 300

      managed_zone = "${var.domain_name_zone}"

      rrdatas = ["${var.domain_name_target}"]
    }

AWS

  • modules/aws/dns/record/main.tf

    variable "count" {}

    variable "domain_name_record" {}
    variable "domain_name_zone" {}
    variable "domain_name_target" {}

    data "aws_route53_zone" "selected" {
      count = "${var.count}"
      name  = "${var.domain_name_zone}"
    }

    resource "aws_route53_record" "www" {
      count   = "${var.count}"
      zone_id = "${data.aws_route53_zone.selected.zone_id}"
      name    = "${var.domain_name_record}.${data.aws_route53_zone.selected.name}"
      type    = "CNAME"
      ttl     = "60"
      records = ["${var.domain_name_target}"]
    }

Generic module including both cloud modules

  • modules/generic/dns/record/main.tf

    variable "cloud_provider" { default = "aws" }

    variable "domain_name_record" {}
    variable "domain_name_zone" {}
    variable "domain_name_target" {}

    module "aws_dns_record" {
      source             = "../../aws/dns/record"
      count              = "${var.cloud_provider == "aws" ? 1 : 0}"
      domain_name_record = "${var.domain_name_record}"
      domain_name_zone   = "${var.domain_name_zone}"
      domain_name_target = "${var.domain_name_target}"
    }

    module "google_dns_record" {
      source             = "../../google/dns/record"
      count              = "${var.cloud_provider == "google" ? 1 : 0}"
      domain_name_record = "${var.domain_name_record}"
      domain_name_zone   = "${var.domain_name_zone}"
      domain_name_target = "${var.domain_name_target}"
    }

Providers

AWS: Set up Apache server

  • Code

    • main.tf
      # Create and bootstrap webserver
      # create EC2 server, parameters come from setup.tf file
      resource "aws_instance" "webserver" {
        ami                         = data.aws_ssm_parameter.webserver-ami.value
        instance_type               = "t3.micro"
        key_name                    = aws_key_pair.webserver-key.key_name
        associate_public_ip_address = true
        vpc_security_group_ids      = [aws_security_group.sg.id]
        subnet_id                   = aws_subnet.subnet.id

        # run this code on the remote instance, using the embedded connection parameters
        provisioner "remote-exec" {
          inline = [
            "sudo yum -y install httpd && sudo systemctl start httpd",
            "echo '<h1><center>My Test Website With Help From Terraform Provisioner</center></h1>' > index.html",
            "sudo mv index.html /var/www/html/"
          ]
          connection {
            type        = "ssh"
            user        = "ec2-user"
            private_key = file("~/.ssh/id_rsa")
            host        = self.public_ip
          }
        }

        tags = {
          Name = "webserver"
        }
      }
    • setup.tf
      provider "aws" {
        region = "us-east-1"
      }

      # Create key-pair for logging into EC2 in us-east-1
      resource "aws_key_pair" "webserver-key" {
        key_name   = "webserver-key"
        public_key = file("~/.ssh/id_rsa.pub")
      }

      # Get Linux AMI ID using SSM Parameter endpoint in us-east-1
      data "aws_ssm_parameter" "webserver-ami" {
        name = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
      }

      # Create VPC in us-east-1
      resource "aws_vpc" "vpc" {
        cidr_block           = "10.0.0.0/16"
        enable_dns_support   = true
        enable_dns_hostnames = true
        tags = {
          Name = "terraform-vpc"
        }
      }

      # Create IGW in us-east-1
      resource "aws_internet_gateway" "igw" {
        vpc_id = aws_vpc.vpc.id
      }

      # Get main route table to modify
      data "aws_route_table" "main_route_table" {
        filter {
          name   = "association.main"
          values = ["true"]
        }
        filter {
          name   = "vpc-id"
          values = [aws_vpc.vpc.id]
        }
      }

      # Create route table in us-east-1
      resource "aws_default_route_table" "internet_route" {
        default_route_table_id = data.aws_route_table.main_route_table.id
        route {
          cidr_block = "0.0.0.0/0"
          gateway_id = aws_internet_gateway.igw.id
        }
        tags = {
          Name = "Terraform-RouteTable"
        }
      }

      # Get all available AZ's in VPC for master region
      data "aws_availability_zones" "azs" {
        state = "available"
      }

      # Create subnet # 1 in us-east-1
      resource "aws_subnet" "subnet" {
        availability_zone = element(data.aws_availability_zones.azs.names, 0)
        vpc_id            = aws_vpc.vpc.id
        cidr_block        = "10.0.1.0/24"
      }

      # Create SG for allowing TCP/80 & TCP/22
      resource "aws_security_group" "sg" {
        name        = "sg"
        description = "Allow TCP/80 & TCP/22"
        vpc_id      = aws_vpc.vpc.id
        ingress {
          description = "Allow SSH traffic"
          from_port   = 22
          to_port     = 22
          protocol    = "tcp"
          cidr_blocks = ["0.0.0.0/0"]
        }
        ingress {
          description = "allow traffic from TCP/80"
          from_port   = 80
          to_port     = 80
          protocol    = "tcp"
          cidr_blocks = ["0.0.0.0/0"]
        }
        egress {
          from_port   = 0
          to_port     = 0
          protocol    = "-1"
          cidr_blocks = ["0.0.0.0/0"]
        }
      }

      output "Webserver-Public-IP" {
        value = aws_instance.webserver.public_ip
      }
  • On CLI

    terraform init
    terraform validate
    terraform plan
    terraform apply
    # terraform provisioner tries to connect to the EC2 instance
    # then runs the bootstrapped code

Azure: Deploy WebApp

  • Code

    • Basic main.tf for Azure

      # Configure the Azure provider
      terraform {
        required_providers {
          azurerm = {
            source  = "hashicorp/azurerm"
            version = ">= 2.26"
          }
        }

        required_version = ">= 0.14.9"
      }

      provider "azurerm" {
        features {}
        skip_provider_registration = true
      }

      # Create a virtual network
      resource "azurerm_virtual_network" "vnet" {
        name                = "BatmanInc"
        address_space       = ["10.0.0.0/16"]
        location            = "Central US"
        resource_group_name = "<ADD YOUR RESOURCE GROUP NAME>"
      }
    • main.tf for the webapp on Azure

      provider "azurerm" {
        version = "1.38"
      }

      resource "azurerm_app_service_plan" "svcplan" {
        name                = "Enter App Service Plan name"
        location            = "eastus"
        resource_group_name = "Enter Resource Group Name"

        sku {
          tier = "Standard"
          size = "S1"
        }
      }

      resource "azurerm_app_service" "appsvc" {
        name                = "Enter Web App Service Name"
        location            = "eastus"
        resource_group_name = "Enter Resource Group Name"
        app_service_plan_id = azurerm_app_service_plan.svcplan.id

        site_config {
          dotnet_framework_version = "v4.0"
          scm_type                 = "LocalGit"
        }
      }
  • CLI

    terraform init
    terraform validate
    terraform plan

Kubernetes deployment

  • Code

    • kubernetes.tf
      terraform {
        required_providers {
          kubernetes = {
            source = "hashicorp/kubernetes"
          }
        }
      }

      variable "host" {
        type = string
      }

      variable "client_certificate" {
        type = string
      }

      variable "client_key" {
        type = string
      }

      variable "cluster_ca_certificate" {
        type = string
      }

      provider "kubernetes" {
        host = var.host

        client_certificate     = base64decode(var.client_certificate)
        client_key             = base64decode(var.client_key)
        cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
      }
    • terraform.tfvars
      host                   = "DUMMY VALUE"
      client_certificate     = "DUMMY VALUE"
      client_key             = "DUMMY VALUE"
      cluster_ca_certificate = "DUMMY VALUE"
    • kind-config.yaml
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      nodes:
      - role: control-plane
        extraPortMappings:
        - containerPort: 30201
          hostPort: 30201
          listenAddress: "0.0.0.0"
  • CLI

    # PREPARE KUBERNETES
    # create Kubernetes cluster, using kind-cli
    kind create cluster --name lab-terraform-kubernetes --config kind-config.yaml
    kubectl cluster-info --context kind-lab-terraform-kubernetes
    # verify
    kind get clusters
    # get the server data
    kubectl config view --minify --flatten --context=kind-lab-terraform-kubernetes
    # put the server address, client-key-data into terraform.tfvars

    # DEPLOY IT
    terraform init
    terraform validate
    terraform plan
    terraform apply
    # validate "long-live-the-bat" exists
    kubectl get deployments

AWS EKS deployment

  • Code

    • kubernetes.tf
      provider "kubernetes" {
        host                   = data.aws_eks_cluster.cluster.endpoint
        token                  = data.aws_eks_cluster_auth.cluster.token
        cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
      }
    • eks-cluster.tf
      module "eks" {
        source          = "terraform-aws-modules/eks/aws"
        version         = "17.24.0"
        cluster_name    = local.cluster_name
        cluster_version = "1.20"
        subnets         = module.vpc.private_subnets

        tags = {
          Environment = "training"
          GithubRepo  = "terraform-aws-eks"
          GithubOrg   = "terraform-aws-modules"
        }

        vpc_id = module.vpc.vpc_id

        workers_group_defaults = {
          root_volume_type = "gp2"
        }

        worker_groups = [
          {
            name                          = "worker-group-1"
            instance_type                 = "t2.small"
            additional_userdata           = "echo foo bar"
            asg_desired_capacity          = 2
            additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
          },
          {
            name                          = "worker-group-2"
            instance_type                 = "t2.medium"
            additional_userdata           = "echo foo bar"
            additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]
            asg_desired_capacity          = 1
          },
        ]
      }

      data "aws_eks_cluster" "cluster" {
        name = module.eks.cluster_id
      }

      data "aws_eks_cluster_auth" "cluster" {
        name = module.eks.cluster_id
      }
    • security-groups.tf
      resource "aws_security_group" "worker_group_mgmt_one" {
        name_prefix = "worker_group_mgmt_one"
        vpc_id      = module.vpc.vpc_id

        ingress {
          from_port = 22
          to_port   = 22
          protocol  = "tcp"

          cidr_blocks = [
            "10.0.0.0/8",
          ]
        }
      }

      resource "aws_security_group" "worker_group_mgmt_two" {
        name_prefix = "worker_group_mgmt_two"
        vpc_id      = module.vpc.vpc_id

        ingress {
          from_port = 22
          to_port   = 22
          protocol  = "tcp"

          cidr_blocks = [
            "192.168.0.0/16",
          ]
        }
      }

      resource "aws_security_group" "all_worker_mgmt" {
        name_prefix = "all_worker_management"
        vpc_id      = module.vpc.vpc_id

        ingress {
          from_port = 22
          to_port   = 22
          protocol  = "tcp"

          cidr_blocks = [
            "10.0.0.0/8",
            "172.16.0.0/12",
            "192.168.0.0/16",
          ]
        }
      }
    • vpc.tf
      variable "region" {
        default     = "us-east-1"
        description = "AWS region"
      }

      provider "aws" {
        region = var.region
      }

      data "aws_availability_zones" "available" {}

      locals {
        cluster_name = "education-eks-${random_string.suffix.result}"
      }

      resource "random_string" "suffix" {
        length  = 8
        special = false
      }

      module "vpc" {
        source  = "terraform-aws-modules/vpc/aws"
        version = "2.66.0"

        name                 = "education-vpc"
        cidr                 = "10.0.0.0/16"
        azs                  = data.aws_availability_zones.available.names
        private_subnets      = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
        public_subnets       = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
        enable_nat_gateway   = true
        single_nat_gateway   = true
        enable_dns_hostnames = true

        tags = {
          "kubernetes.io/cluster/${local.cluster_name}" = "shared"
        }

        public_subnet_tags = {
          "kubernetes.io/cluster/${local.cluster_name}" = "shared"
          "kubernetes.io/role/elb"                      = "1"
        }

        private_subnet_tags = {
          "kubernetes.io/cluster/${local.cluster_name}" = "shared"
          "kubernetes.io/role/internal-elb"             = "1"
        }
      }
    • versions.tf
      terraform {
        required_providers {
          aws = {
            source  = "hashicorp/aws"
            version = ">= 3.20.0"
          }

          random = {
            source  = "hashicorp/random"
            version = "3.0.0"
          }

          local = {
            source  = "hashicorp/local"
            version = "2.0.0"
          }

          null = {
            source  = "hashicorp/null"
            version = "3.0.0"
          }

          template = {
            source  = "hashicorp/template"
            version = "2.2.0"
          }

          kubernetes = {
            source  = "hashicorp/kubernetes"
            version = ">= 2.0.1"
          }
        }

        required_version = "> 0.14"
      }
    • outputs.tf
      output "cluster_id" {
        description = "EKS cluster ID."
        value       = module.eks.cluster_id
      }

      output "cluster_endpoint" {
        description = "Endpoint for EKS control plane."
        value       = module.eks.cluster_endpoint
      }

      output "cluster_security_group_id" {
        description = "Security group ids attached to the cluster control plane."
        value       = module.eks.cluster_security_group_id
      }

      output "kubectl_config" {
        description = "kubectl config as generated by the module."
        value       = module.eks.kubeconfig
      }

      output "config_map_aws_auth" {
        description = "A kubernetes configuration to authenticate to this EKS cluster."
        value       = module.eks.config_map_aws_auth
      }

      output "region" {
        description = "AWS region"
        value       = var.region
      }

      output "cluster_name" {
        description = "Kubernetes Cluster Name"
        value       = local.cluster_name
      }
    • lab_kubernetes_resources.tf (for nginx, added to the directory when required by the CLI steps)
      resource "kubernetes_deployment" "nginx" {
        metadata {
          name = "long-live-the-bat"
          labels = {
            App = "longlivethebat"
          }
        }

        spec {
          replicas = 2
          selector {
            match_labels = {
              App = "longlivethebat"
            }
          }
          template {
            metadata {
              labels = {
                App = "longlivethebat"
              }
            }
            spec {
              container {
                image = "nginx:1.7.8"
                name  = "batman"

                port {
                  container_port = 80
                }

                resources {
                  limits = {
                    cpu    = "0.5"
                    memory = "512Mi"
                  }
                  requests = {
                    cpu    = "250m"
                    memory = "50Mi"
                  }
                }
              }
            }
          }
        }
      }
  • CLI

    # DEPLOY THE EKS CLUSTER
    terraform init
    terraform plan
    terraform apply
    # Configure kubectl to interact with the cluster
    aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)
    # verify deployment
    kubectl get cs

    # DEPLOY NGINX PODS
    # add lab_kubernetes_resources.tf file
    terraform plan
    terraform apply
    # verify 2 "long-live-the-bat" pods are up and running
    kubectl get deployments

    # CLEAN UP
    terraform destroy
    # verify
    terraform show

Troubleshooting

graph LR

subgraph Primary UI
A[configuration language]
end

subgraph Metadata
B[state]
end

subgraph Resource graph comms
C[TF core application]
end

subgraph Auth mapping
D[Cloud Provider]
end

A --> B
B --> C
C --> D
  • Detect formatting issues (and surface basic syntax errors) in files

    terraform fmt
  • Detect errors in the configuration (e.g. cycles, invalid references, unsupported arguments)

    terraform validate
  • Detect version mismatches

    terraform version
  • Get TF trace, setting environment variables

    export TF_LOG_CORE=TRACE
    export TF_LOG_PROVIDER=TRACE
    export TF_LOG_PATH=logs.txt
    terraform refresh
  • Detect state discrepancies

    terraform apply