Syslog (container logging)

  1. Find rsyslog.conf
    nano /etc/rsyslog.conf
  2. Edit the file: uncomment the two lines under “Provides UDP syslog reception” by removing ‘#’
    $ModLoad imudp
    $UDPServerRun 514
  3. Start the syslog service and configure docker to use it via daemon.json
    systemctl start rsyslog
    sudo mkdir /etc/docker
    nano /etc/docker/daemon.json
  4. Edit daemon.json
    {
      "log-driver": "syslog",
      "log-opts": {
        "syslog-address": "udp://<PRIVATE_IP>:514"
      }
    }
  5. Start the Docker service
    systemctl start docker
    # are there docker logs?
    tail /var/log/messages
  6. Create two new containers using the httpd image
    • Syslog as log driver
      docker container run -d --name syslog-logging httpd
      docker logs syslog-logging
      # Error response from daemon: configured logging does not support reading
      # check the content of '/var/log/messages':
      # verify that the syslog-logging container is sending its logs to syslog
      tail /var/log/messages
      # the output shows us the logs that are being input to syslog
    • JSON file as log driver
      docker container run -d --name json-logging --log-driver json-file httpd
      docker logs json-logging
      # the logs do not appear in /var/log/messages

Watchtower (updating containers)

  • Watchtower: a container that monitors other containers and updates them when their image is updated
  • Watchtower needs images pushed to a repository (see DockerHub)
    1. Create a dockerfile for a nodejs ‘express’ app
      # set base image
      FROM node

      # create the directory where the app will be copied to
      RUN mkdir -p /var/node
      # add our express content to that directory
      ADD content-express-demo-app /var/node/
      # set the working directory
      WORKDIR /var/node/

      # scripts to build the app
      RUN npm install

      # execute
      CMD ./bin/www
    2. Log into DockerHub, build and push the image
      docker login
      # -f = specify the Dockerfile to use
      # . = build context (this directory)
      docker build -t myDockerHubUser/express -f Dockerfile .
      docker push myDockerHubUser/express
    3. Execute dockerized app and watchtower
      ## express app
      # -d = run in the background
      # -p = port, 80 on local, 3000 on container
      docker run -d --name demo -p 80:3000 --restart always myDockerHubUser/express
      docker ps

      ## watchtower
      # the last parameter is the refresh period (30 seconds)
      docker run -d --name watchtower --restart always -v /var/run/docker.sock:/var/run/docker.sock v2tec/watchtower -i 30
    4. Make a small change to the ‘express’ app image and push it to the repository; Watchtower should auto-update the running container (see the sketch below)
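      A minimal sketch of that update cycle, assuming the same myDockerHubUser/express image name used above:
      # edit the app, then rebuild and push the same tag
      docker build -t myDockerHubUser/express -f Dockerfile .
      docker push myDockerHubUser/express
      # within ~30 seconds Watchtower should pull the new image and restart the 'demo' container
      docker ps
      docker logs watchtower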

Metadata and labels

  • Use 2 different consoles to avoid issues (we will name them ‘docker workstation’ and ‘docker server’)
  1. Create a Dockerfile (on the docker workstation)
    FROM node

    LABEL maintainer=user@mail.com

    ARG BUILD_VERSION
    ARG BUILD_DATE
    ARG APPLICATION_NAME

    LABEL org.label-schema.build-date=$BUILD_DATE
    LABEL org.label-schema.application=$APPLICATION_NAME
    LABEL org.label-schema.version=$BUILD_VERSION
    RUN mkdir -p /var/node
    ADD weather-app/ /var/node/
    WORKDIR /var/node
    RUN npm install
    EXPOSE 3000
    CMD ./bin/www
  2. Build the Docker image (on the docker workstation)
    # log in to DockerHub
    docker login
    # build the image with parameters
    docker build -t myDockerHubUser/weather-app --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
    --build-arg APPLICATION_NAME=weather-app --build-arg BUILD_VERSION=v1.0 -f Dockerfile .
    # show image id (IMAGE_ID), and use it to inspect it
    docker images
    docker inspect IMAGE_ID
    # push the image to DockerHub
    docker push myDockerHubUser/weather-app
  3. Create the weather-app container (on the docker server)
    docker run -d --name demo-app -p 80:3000 --restart always myDockerHubUser/weather-app
    docker ps
  4. Check out version v1.1 of the weather app (on the docker workstation)
    cd weather-app
    git checkout v1.1
    cd ../
  5. Rebuild the weather-app image (on the docker server)
    docker build -t myDockerHubUser/weather-app --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
    --build-arg APPLICATION_NAME=weather-app --build-arg BUILD_VERSION=v1.1 -f Dockerfile .
    docker push myDockerHubUser/weather-app
    # show the image id (IMAGE_ID), and use it to inspect it
    docker images
    docker inspect IMAGE_ID

Load balancing containers

  • 2 servers: swarm manager and swarm worker (run commands on the swarm manager unless told otherwise)
  1. Create a Docker Compose file on Swarm Server 1 (in the lb-challenge directory)
    # change to the lb-challenge directory first
    version: '3.2'
    services:
      weather-app1:
        build: ./weather-app
        tty: true
        networks:
          - frontend
      weather-app2:
        build: ./weather-app
        tty: true
        networks:
          - frontend
      weather-app3:
        build: ./weather-app
        tty: true
        networks:
          - frontend

      loadbalancer:
        build: ./load-balancer
        image: nginx
        tty: true
        ports:
          - '80:80'
        networks:
          - frontend
    networks:
      frontend:
  2. Update nginx.conf (in the load-balancer directory)
    events { worker_connections 1024; }

    http {
      upstream localhost {
        server weather-app1:3000;
        server weather-app2:3000;
        server weather-app3:3000;
      }
      server {
        listen 80;
        server_name localhost;
        location / {
          proxy_pass http://localhost;
          proxy_set_header Host $host;
        }
      }
    }
  3. Execute docker-compose up
    cd ../
    docker-compose up --build -d
    docker ps
  4. Create a Docker service using Docker Swarm
    cd ~/
    # review the token
    cat swarm-token.txt
    # Copy the 'docker swarm join' command from the previous step
  5. On swarm worker: execute the command that was copied from the previous step (general shape shown below)
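    A sketch of that command's shape; the token and IP are placeholders taken from swarm-token.txt, not real values:
    # run on the swarm worker
    docker swarm join --token <WORKER_TOKEN> <MANAGER_PRIVATE_IP>:2377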
  6. Back on the swarm manager: create a Docker service
    docker service create --name nginx-app --publish published=8080,target=80 --replicas=2 nginx
    docker ps
    #verify that the default nginx page loads in the browser (PUBLIC_IP_ADDRESS:8080)

Compose: building services

  1. Create a Ghost Blog and MySQL Service
    version: '3'
    services:
      ghost:
        image: ghost:1-alpine
        container_name: ghost-blog
        restart: always
        ports:
          - 80:2368
        environment:
          database__client: mysql
          database__connection__host: mysql
          database__connection__user: root
          database__connection__password: P4sSw0rd0!
          database__connection__database: ghost
        volumes:
          - ghost-volume:/var/lib/ghost
        depends_on:
          - mysql
      mysql:
        image: mysql:5.7
        container_name: ghost-db
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: P4sSw0rd0!
        volumes:
          - mysql-volume:/var/lib/mysql
    volumes:
      ghost-volume:
      mysql-volume:
  2. Start Ghost blog service
    docker-compose up -d

Docker basics

  • Running a container

    1. Install and setup docker
      # install
      sudo yum -y install docker
      ## set up permissions
      # drop down to root
      sudo -i
      # set up a docker group
      groupadd docker
      # add myUser to docker group
      usermod -aG docker myUser
      ## enable and start docker
      systemctl enable --now docker
      # log out from root
      logout
    2. Run a standard docker image
      # verify the installation
      docker run docker.io/hello-world
    3. Pull docker images
      docker pull user1repo/catImage
      docker pull user2repo/dogImage
  • Deploying a static website to the container

    # check local images
    docker images
    # -d = detached mode
    # -p fromPort : toPort = ports
    docker run -d --name pipo -p 80:80 user2repo/dogImage
    # verify the container is running
    docker ps
  • Building container images

    ## Get the image
    # check the local images
    docker images
    # get an OS image
    docker pull centos:6
    # start the Docker container in interactive mode
    # -i = interactive
    # -t = allocate a pseudo-TTY
    docker run -it --name websetup centos:6 /bin/bash

    ## Prepare the system
    # update the system
    yum -y update
    # install apache
    yum -y install httpd git
    ## clone a repository, and set it in the Apache html folder
    git clone https://github.com/linuxacademy/content-dockerquest-spacebones
    cp content-dockerquest-spacebones/doge/* /var/www/html
    # to display correctly, rename the default 'welcome.conf' file to 'welcome.conf.bak'
    mv /etc/httpd/conf.d/welcome.conf /etc/httpd/conf.d/welcome.conf.bak

    ## Test everything works
    # enable & start the Apache service
    chkconfig httpd on && service httpd start
    # exit the container
    exit

    ## Save the edited image
    docker commit websetup spacebones:thewebsite
  • Dockerizing an application

    1. Clone repo and set it in subdir
      git clone https://github.com/linuxacademy/content-dockerquest-spacebones
      cd ~/content-dockerquest-spacebones/nodejs-app
    2. Use the Dockerfile below to build a new image
      # base for new container
      FROM node:7
      # working directory
      WORKDIR /app
      # copy package.json into /app
      COPY package.json /app
      RUN npm install
      COPY . /app
      CMD node index.js
      # expose port
      EXPOSE 8081
    3. Build and run
      ## build container image
      # . = build context (current directory)
      docker build -t baconator:dev .
      # (optional) Run the image to verify functionality
      docker run -d -p 80:8081 baconator:dev

Docker optimization

  • Optimizing docker builds with onbuild

    ONBUILD = instructions that run only when the image is used as the base of a “child image”
    Building with this Dockerfile creates a new image, but the ONBUILD instructions are not applied to it; they trigger when a child image is built (see the sketch after step 3 below)

    1. Find the docker file and prepare to edit it
      cd content-dockerquest-spacebones/salt-example/salt-master
      nano dockerfile
    2. Edit the dockerfile
      FROM jarfil/salt-mastermini:debian-stretch

      MAINTAINER Jaroslaw Filiochowski <jarfil@mail.com>

      COPY ./ /

      RUN apt-get -y update && \
      apt-get -y upgrade && \
      apt-get -y install \
      salt-minion \
      salt-ssh \
      salt-cloud && \
      apt-get -y autoremove && \
      apt-get clean && \
      rm -rf /var/lib/apt/lists/*

      ONBUILD RUN chmod +x \
      /docker-entrypoint.sh

      EXPOSE 4505 4506

      ONBUILD CMD /docker-entrypoint.sh
    3. Build the image
      # build "tablesalt:master"
      # . = using the dockerfile in my current directory
      docker build -t tablesalt:master .
      # check it worked
      docker images
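      A minimal sketch of a child image that triggers those ONBUILD steps; the tablesalt:child tag is a hypothetical example:
      # Dockerfile of the child image
      FROM tablesalt:master
      # building it fires the parent's ONBUILD instructions:
      #   RUN chmod +x /docker-entrypoint.sh
      #   CMD /docker-entrypoint.sh
      # docker build -t tablesalt:child .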
  • Ignoring files during docker build
    • .dockerignore file
    1. Find the .dockerignore file and prepare to edit it
      cd content-dockerquest-spacebones/salt-example/salt-master
      nano .dockerignore
    2. Edit the dockerignore
      badscript.sh
      *.conf
      README.md

Storing data and Networking in Docker

Storage

  • Creating data containers
    1. Create postgres data container image
      # is docker running?
      docker ps
      # -v volume to bind on: /data
      # /bin/true = if it runs properly, it won't return anything
      docker create -v /data --name postgresData spacebones/postgres /bin/true
    2. Mount image in several containers
      docker run -d --volumes-from postgresData --name postgresContainer1 spacebones/postgres
      docker run -d --volumes-from postgresData --name postgresContainer2 spacebones/postgres
      # check it worked
      docker ps
      # check the volume IDs
      docker volume list
      # check that postgresContainer1 uses the same volume ID from the list
      docker inspect postgresContainer1

Networking

  • Container networking with links (legacy)
    # is docker running?
    docker ps

    ## create the website container
    docker run -d -p 80:80 --name spacebones spacebones/spacebones:thewebsite
    # create the database container
    # -P publish exposed ports to free ports on my host
    # --link = name of the container we want to link to
    docker run -d -P --name postgresContainer --link spacebones:spacebones spacebones/postgres

    ## verify that the link works
    # docker inspect -f "{{ .HostConfig.Links }}" $CONTAINERNAME
    docker inspect -f "{{ .HostConfig.Links }}" postgresContainer
  • Container networking with networks (bridges)
    1. Create network

      docker network create --driver=bridge --subnet=192.168.10.0/24 --gateway=192.168.10.250 borkspace
      # verify existing bridges
      docker network ls
      docker network inspect borkspace
    2. Launch mytransfer container using the borkspace network

      docker run -it --name mytransfer --network=borkspace spacebones/cat
      exit
  • Persistent data volumes
    • volume = directories (or files) that are outside of the default Union File System and exist as normal directories and files on the host filesystem
    • volumes allow data from processes inside Docker to persist
      # is docker running?
      docker ps
      # get the code
      git clone https://github.com/linuxacademy/content-dockerquest-spacebones.git

      ## Create a volume
      docker volume create missionstatus
      # check results
      docker volume ls
      docker volume inspect missionstatus
      # keep the Mountpoint information

      ## Copy website data to the volume
      sudo cp -r /home/cloud_user/content-dockerquest-spacebones/volumes/* /var/lib/docker/volumes/missionstatus/_data/
      # check it worked
      ls /var/lib/docker/volumes/missionstatus/_data/
      exit

      ## Create a container
      # source = what to mount
      # target = where to mount it
      # finally add the name of the image
      docker run -d -p 80:80 --name fishin-mission --mount source=missionstatus,target=/usr/local/apache2/htdocs httpd
      # check it worked
      docker ps

Secure Global Infrastructure and Compliance

  • Regions, Availability Zones, and Endpoints
    • Regions: (dotted line in diagram) compliance, latency
    • Availability zones: (letter after region id) independent datacenters in regions
    • Endpoints: webconsole and AWS-CLI
  • VPC Endpoints
    • Methods of accessing AWS environments, connect resources without going through the Internet
    • 2 Types
      • Interface (Elastic Network Interface) -> EC2
      • Gateway (target for Route Table environment) -> DBs
    • Limitations
      • Same region only
      • IPv4 traffic only
      • Only direct connections, no VPN or VPC peering connection
  • IAM and Compliance
    • global scope across AWS
    • allows central management
    • IAM = 1 of main topics

Shared responsibility

  • Shared Responsibility Model (not include S3)
    • Infrastructure services (VPC, EC2, autoscaling)
      • AWS: security of the cloud, foundations
      • User: code deployed, data, OS and network configurations, IAM. Encryption + integrity
    • Container (RDS, EMR, ED2)
      • AWS: platform, foundations
      • User: customer data, customer IAM, encryption + integrity
    • Abstracted services (lambda)
      • Network traffic protection

Trusted Advisor

  • Reports on our environment
    • cost optimization, performance, security, fault tolerance, service limits
    • available to all customers: core checks (6)
    • available to business/enterprise (all)

Identity and Access Management (IAM)

  • Root User
    • credentials: email and password on sign up for AWS account
    • should not be used for daily work; remove its access keys and enable MFA
  • Users and Groups
    • Users
      • admin rights (daily work)
      • users need policies to access resources
      • deny policies override grant policies
      • credentials should not be shared
      • do not store EC2 credentials, forward them
    • Groups
      • users can be members of groups, which have policies attached
      • organize users by functions (DB admin, SysAdmin…)
  • Roles
    • IAM roles
      • Temporary security credentials issued by the Security Token Service (STS; single global endpoint sts.amazonaws.com in us-east-1 (N. Virginia); reduce latency by using regional endpoints; default duration: 1h)
      • For AWS resources or users outside AWS
    • Roles and AWS
      • used because policies can not be attached to AWS resources
      • 1 service -> ONLY 1 role
      • roles are attached to resources, no credentials
      • can be changed on running instances via CLI or console
    • Other uses
      • cross-account access (delegation)
      • identity federation (non-AWS: link identities across different systems - AWS Cognito is recommended; use SAML (Security Assertion Markup Language) for corporate domain accounts)
  • Policies
    • JSON document that states permissions (minimal CLI sketch at the end of this section)
    • deny overrides allow
    • templates (admin, power-user (no user creation), view-only)
    • you may use the policy generator, create them from scratch, or use the visual editor
    • users can have more than 1 policy
    • not for resources
  • Access Advisor
    • a user should have as few permissions as possible
    • unused permissions can be detected with Access Advisor (audit user, group or role)
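  A minimal sketch of such a policy, created via the CLI; the policy name and S3 bucket below are hypothetical placeholders:
    # read-only access to one bucket
    aws iam create-policy --policy-name example-read-only-policy --policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        { "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:ListBucket"],
          "Resource": ["arn:aws:s3:::example-bucket", "arn:aws:s3:::example-bucket/*"] }
      ]
    }'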

Encryption essentials

  • Overview
    • Key: cipher (AES-256, block ciphers): Message -> Scrambler with key -> Encryption
      • Server side encryption (at rest): on disk (read/write)
      • Client side encryption (in transit): on message (sent/received)
    • Enveloping: using keys to encrypt other keys (master key, KEK = key encryption key); see the CLI sketch below the diagram
  • Symmetric encryption
    • same key to encrypt-decrypt
  • Asymmetric encryption (SSL, SSH)
    • different keys to encrypt and decrypt (encrypt=public, decrypt=private)
  • HSM and KMS
    • Hardware security model (HSM)
      • Physical device to store keys on premise
      • AWS-cloudHSM: HSM can be in multiple regions (clusters)
      • Load balancer replicates keys
    • Key Management Service (KMS) -> KMS service
      • Create and control encryption keys
      • Advantage over HSM: integration on AWS, can use IAM policies for access
      • Customer Master Keys are stored in KMS
      • Both data and encrypted key are stored
        graph LR
        A[Plain text data]
        B(key-DataKey)
        C[Cypher text]
        D[Storage]
        E(EncryptedKey)
        F(MasterKey)
        G((DataKey))
        A--use-->B;
        B--encrypt-->C;
        C-->D;
        E-->D;
        F--encrypt-->E;
        G--generate-->F;
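  A rough CLI sketch of that envelope flow, assuming a KMS key already exists (the key alias is a placeholder):
    # ask KMS for a data key: it returns the plaintext key plus an encrypted copy
    aws kms generate-data-key --key-id alias/example-master-key --key-spec AES_256
    # encrypt the data locally with the plaintext data key, store the ciphertext
    # together with the *encrypted* data key, then discard the plaintext key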

S3 bucket encryption policies override the settings of the folders within them.
If you need to use separate encryption keys for some documents within a bucket, you will need to change the settings on each document individually.
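A hedged example of setting a per-object key at upload time; the bucket, object and key id are placeholders:
  # upload one document with its own KMS key, overriding the bucket default
  aws s3 cp report.pdf s3://example-bucket/report.pdf --sse aws:kms --sse-kms-key-id <KMS_KEY_ID>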

OS-level access

  • Overview
    • EC2 is under infrastructure model: user thinks about IAM, encryption, security groups and NACL
    • Unix -> Linux = (cloud-init) terminal SSH, auth keys
    • Windows -> (ec2config) Windows = remote desktop
    • Windows -> Linux = PuTTY
  • SSH
    • connection (symmetric): faster
    • key pair authentication (asymmetric)
      • RSA in ~/.ssh/authorized_keys, chmod 400 <keyname>.pem
    • process
      1. Client: connection request
      2. Server: public key
      3. Both: Cipher negotiation
      4. Both: Key exchange algorithm
      5. Both: Connection encrypted using SSH key
  • Bastion Host
    • “jump box” (go to security groups, configure inbound & outbound)
    • deploy in 2 availability zones, with autoscaling, in public subnets; restrict access to a list of addresses
graph LR
A[User]
B(Internet gateway)
C[Autoscaling]
D[Bastion]
E((Private subnet))

subgraph Public subnet
    C
    D
end

C-->D;

A-->B;
B--NAT gateway-->C;
D-->E;
  • Linux example

    • Use SSH agent forwarding whenever possible

      chmod 400 <path-to-key>.pem
      ssh-agent bash
      ssh-add <path-to-key>.pem
      # connect to the first host (-A enables agent forwarding)
      ssh -A ec2-user@<ip-address>
      # second host, we are already there
      # due to ssh forwarding
      ssh ec2-user@<ip-private-address>
  • Windows remote desktop example (RDP protocol)

    • Get the RDP file from the AWS console, use your key to decrypt the admin password, double-click and go
  • Windows Bash example

    1. Go to update and security - for developers - bash shell (beta)
    2. Add or remove windows features -> Windows subsystem for Linux
    3. Windows store -> choose distro Linux on Windows (Ubuntu)
  • Windows PuTTY example

    • Download the .pem key, convert it to .ppk, and connect

Data Security

  • Securing data at rest
    • Concerns
      • accidental information disclosure
      • data integrity compromised
      • accidental deletion
      • availability
    • S3
      • permissions: bucket and object level, IAM policies, MFA delete
      • versioning: helps against accidental deletion
      • replication: automatic across availability zones
      • backup: replication and versioning make separate backups largely unnecessary; lifecycle rules can store data in another region
      • server-side encryption: S3 master key or KMS
      • VPC endpoint: use data inside VPC without making it public
    • Glacier
      • server-side encryption: encrypted in AES-256, 1 archive = 1 unique key, there is a master key created and stored securely
    • EBS
      • replication: 2 copies of each volume in each availability zone (for disk failure)
      • backup: snapshots of volumes + IAM for access
      • server-side encryption: AWS KMS master key, OS tools
    • RDS
      • permission: IAM policies
      • encryption: KMS (except micro-instances), DB cryptographic options (reference on DB fields)
    • DynamoDB
      • permissions: IAM
      • encryption: Application level encryption, same as RDS,
      • VPC endpoint: can use data inside VPC without making it public
    • EMR
      • Amazon managed service: AWS provides AMIs, no custom
      • data store: S3 or DynamoDB, HDFS (Hadoop Distributed File System -> defaults to Hadoop KMS)
      • techniques to improve data security: SSL, application level encryption, hybrid
  • Decommissioning data and Media
    • different from on-prem decommissioning
      • delete -> blocks become unallocated, reassigned somewhere
    • reading and writing to blocks
      • write = overwrite existing
      • read = data or hypervisor returns 0
    • end of life
      • DoD 5220.22-M (National Industrial Security Program Operating Manual)
      • NIST SP 800-88 (Guidelines for media sanitization)
      • Both previous
      • None of previous = destroy device
  • Securing data in transit
    • Concerns with communicating over public links (Internet)
    • Approaches
      • use HTTPS
      • offload traffic to ELB
      • use SSH
      • database traffic and AWS console traffic use SSL/TLS
    • X.509 certificates (client browser, use public key)
    • AWS certificate manager (free)
      • SSL/TLS certificates (ELB, CloudFront, API Gateway, CloudFormation)
      • automatic renewal, import 3rd party

OS Security

  • Recommendations
    • Disable root user API access keys
    • Use limited source IPs in security groups
    • Password protect pem files
    • Keep authorized_key file up to date
    • Rotate credentials (access keys)
    • Use Access Advisor to identify and remove unnecessary permissions
    • Bastion hosts
  • Custom AMIs
    • Base configuration, “snapshots”
    • Clean up/hardening tasks before upload
      • protect credentials (disable insecure apps, software should not use default accounts, SSH keys must not be published, disable guest account)
      • protect data (delete shell history)
      • remove shared devices (e.g. printers)
      • Do not violate AWS Acceptable Use Policy (example: SMTP/proxy server)
  • Bootstrapping
    • cloud-init, cfn-init, tools like Puppet and Chef
    • patching/updates: update AMIs frequently!
      • consider dependencies
      • security software updates might update beyond the patch level of AMI
      • application updates might patch beyond the build in the AMI
    • take into account environment differences (production, test…)
    • instance updates might break external management and security monitoring (test first on non-critical instances)
  • AWS Systems Manager - Patching/Automation
    • Resource groups (logically)
    • Insights (CloudTrail, CloudWatch, Trusted Advisor…)
    • Inventory (can collect data on apps, files, network configs, services…)
    • Automation (via scheduling, alarm triggering…)
    • Run command (secure remote management replacing bastion host or ssh)
    • Patch manager (deploy OS and software patches on demand)
    • Maintenance Window (scheduling administrative and maintenance tasks)
    • State manager and parameter store (for config management)
  • Mitigating problems
    • Malware
      • use only trusted AMIs
      • principle of least privilege
      • keep patches up to date
      • antivirus/antispam software
      • host-based IDS
    • Abuse
      • AWS will shut down malicious abusers
      • compromised resource, unintentional abuse (web crawlers may be confused with DDOS), secondary abuse (user of your system uploaded infected file), false complaints
      • Best practices: do not ignore AWS communications, follow security best practices, mitigate identified compromises

Infrastructure security

  • VPC Security
    • Internet only
      • Use SSL/TLS
      • Build your own VPN solution
      • Planned routing and placement
      • Security groups and NACLs
    • IPsec tunnel over Internet
      • Deploy VPN (AWS or other)
      • VPC networking (subnets, security groups, NACLs)
    • AWS direct connect (links to peer AWS)
      • No additional security, check organization requirements
      • Terminates at Availability Zones in a region
      • VPC networking (subnets, security groups, NACLs)
    • Hybrid (direct + IPsec)
      • Best practices of the previous ones
      • VPC networking (subnets, security groups, NACLs)
  • Network segmentation
    • VPC (isolate workload, e.g. departments)
    • Security groups: stateful (TCP UDP ports in both directions)
    • NACLs: stateless, granular control over protocols, complement security groups, mind ephemeral ports (client ports depend on the OS)
    • Host based firewalls (OS level)
  • Strengthening
    • Customer side of the shared responsibility model (control access, network security, secure traffic)
    • Best practices
      • Security groups
      • NACLs
      • Direct Connect or IPsec for other sites
      • Encrypt data in transit often
      • Layer network security
      • Logs
    • Secure periphery systems
      • DNS: use SSL/TLS to prevent spoofing
      • Active directory/LDAP
      • Time servers (sync from a trusted source)
      • Repositories (do not post credentials)
  • Threat Protection Layer
    • Concern: untrusted connections
    • Layers
      1. Threat protection (IDS, IPS, firewall)
      2. DMZ presentation (NACL and security groups)
      3. Application (NACL and Security groups)
      4. Data (NACL and Security groups)
  • Testing and measurement
    • Vulnerability (risk assessment)
      • 3rd party evaluation with little inside knowledge
    • Penetration testing
      • AWS must be notified beforehand
      • AWS vulnerability/penetration testing request form
      • m1.small or micro instances can not be tested
    • Measuring risk management
      • Monitor procedures
      • Measure effectiveness
      • Review effectiveness
      • Internal audit
      • Management reviews (scope)
  • AWS Web Application Firewall (WAF)
    • Conditions/rules set on CloudFront or Application Load Balancer
    • Watch for cross-site scripting, IP addresses, request locations, query strings and SQL injection
    • Multiple conditions = ANDed (all must be true)
  • AWS Shield (DDOS protection)
    • Basic = included with WAF
    • Advanced ($3,000/month per organization)
      • Expands WAF services to ELB, CloudFront, Route 53, resources with Elastic IPs
      • Contact 24x7 DDoS Response Team (DRT)
      • Expanded protection against DDoS and other attacks

Monitoring, alerting, and auditing

  • Monitoring Basics
    • Questions
      • What parameters
      • How to measure them
      • Threshold
      • Can they be escalated?
      • Storage
    • Log
      • Individual actions
      • Trail access
      • Invalid access attempt
      • IAM
      • Creation of new logs
      • Create/delete system elements
  • AWS Config (resources, list of supported services)
    • you may
      • evaluate resources
      • snapshot of config
      • retrieve config (resources, historical)
      • get changes notifications
      • view relationships between resources
    • uses
      • administer resources
      • audit compliance
      • config troubleshooting via history
      • security analysis
  • AWS Systems Manager - Inventory and insights (for resource groups)
    • Insights: aggregate outputs
    • Inventory: -> CloudWatch dashboards
  • AWS Inspector
    • Analyzes behavior to identify potential security issues
    • Target: collection of AWS resources
    • Assessment template: security rules -> findings
    • Assessment run: apply assessment template to target
    • Features
      • configure scanning and activity monitor engine
      • built-in library content (rules, reports, recommendations)
      • API automation (allow security testing on design and dev)
    • Using AWS inspector: console, API, SDK, CLI
  • AWS GuardDuty
    • continuous security monitoring solution
    • checks
      • VPC flow logs
      • CloudTrail event logs
      • Event logs
    • Uses threat intelligence feeds and machine learning to determine unauthorized or malicious activity
    • Behavior analysis
      • monitor behavior for signs of compromise
      • unauthorized infrastructure deployment
      • unexpected API calls

Linux Foundation Certified SysAdmin (LFCS): Storage management

Manage physical storage partitions

Always create a backup before doing this

  • fdisk
    # list mounted devices
    lsblk
    # make partition: add device (dev) and partition name 'xvdf'
    sudo fdisk /dev/xvdf
    # now use single-letter fdisk commands; the 'a' (bootable) flag is from pre-UEFI times
    g # create a new GPT partition table
    p # print the partition table
    n # new partition
    # accept the first sector and set the partition size
    w # write the changes and exit
    # check list of mounted devices again
    lsblk
    # remove that new partition (number 2)
    sudo fdisk /dev/xvdf
    d # delete
    2 # partition id, to delete it
    w # write
    q # exit
  • gparted and parted CLI
    sudo parted /dev/xvdf
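    A short sketch of an interactive parted session, assuming the same /dev/xvdf disk as above:
    (parted) print                            # show the current partition table
    (parted) mklabel gpt                      # create a new GPT label
    (parted) mkpart primary ext4 1MiB 500MiB  # create a partition
    (parted) quit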

    Maximum number of primary partitions on an MBR disk device = 4

LVM storage

Logical Volume Manager (LVM) -> groups physical devices together and presents them as a single device

  • Create partitions

    yum install lvm2
    # let's join multiple partitions or devices
    # file type 8e = Linux LVM
    sudo fdisk /dev/xvdf
    # create multiple partitions, primary type, type Linux LVM
    # add data about that partition
    l # show list of partition types
    t # change the partition type
    8e # LVM
    # create physical volumes
    pvcreate /dev/xvdf1 /dev/xvdf2

    ## create volume group
    vgcreate tinydata /dev/xvdf1 /dev/xvdf2
    # create logical volume, last value is the LVM name
    lvcreate --name logical-tiny --size 600M tinydata
    # show what we have
    lvdisplay

    # use expandable file system (ext4)
    mkfs -t ext4 /dev/tinydata/logical-tiny
    # mount it
    cd /mnt
    mkdir teeny
    mount /dev/tinydata/logical-tiny /mnt/teeny
    # show results
    df -h
  • Extend a previously existing volume

    # create backup before this!
    fdisk /dev/xvdf # create a new partition (e.g. /dev/xvdf3), type 8e (Linux LVM)
    # reboot
    # create a new physical volume
    pvcreate /dev/xvdf3

    # extend the volume group with the new partition
    vgextend tinydata /dev/xvdf3
    vgdisplay # check empty space to add (e.g. 105)
    # resize logical
    lvextend -l +105 /dev/tinydata/logical-tiny
    # resize file system
    # check status
    e2fsck -f /dev/tinydata/logical-tiny
    # resize
    resize2fs /dev/tinydata/logical-tiny
    mount /dev/tinydata/logical-tiny /mnt/teeny
  • Check volumes

    pvs # physical volume list
    vgs # group volume list
    lvs # logical volume list

Encrypted storage

  • Format
    # is the encryption module loaded?
    grep -i config_dm_crypt /boot/config-$(uname -r)
    yum install cryptsetup
    # check partitions
    lsblk

    # create the encrypted partition
    cryptsetup -y luksFormat /dev/xvdf1
    # add passphrase
  • Use
    # open it, and use password
    cryptsetup luksOpen /dev/xvdf1 mySecret
    # list devices
    lsblk
    # is it mounted?
    df -h
    # create file system
    mkfs -t ext4 /dev/mapper/mySecret
    # mount it
    mount /dev/mapper/mySecret /mnt/encrypted
    # walk on by
    cd encrypted/
    ls -la
    df -h
  • Close
    umount /mnt/encrypted
    cryptsetup luksClose mySecret

Mount FS at or during boot

  1. Find information
    # file system table: check the manual
    man fstab
    lsblk
    # get the UUID
    sudo blkid
    # add to tables
    sudo nano /etc/fstab
  2. Edit table file and it is done
    # fields: what (UUID) __ where (mount point) __ type __ options __ dump __ fsck pass
    UUID="f23e9b01-fdb4-4d40-997e-e85b0afa0bb8" /mnt/ext4 ext4 defaults 0 2

Swap space

  • swap = when there is not enough RAM, inactive pages are moved to disk
  • Never below 32MB!
  • Turn off with `swapoff -a`, turn on with `swapon -a`
  • Configured at boot time at fstab
    nano /etc/fstab
    #check this line '/root/swap swap swap sw 0 0'

The swap file must have 0644 permissions at most (0600 is recommended) in order to be enabled with the ‘mkswap’ and ‘swapon’ commands

# example
sudo su
dd if=/dev/zero of=/root/extraswap.swp bs=1024 count=524288
chmod 600 /root/extraswap.swp
mkswap /root/extraswap.swp
swapon /root/extraswap.swp
cat /proc/swaps
# edit /etc/fstab to include the line: /root/extraswap.swp swap swap defaults 0 0

RAID devices

Redundant Array of Independent Disks -> unify presentation of devices + use that space for file durability

  • Create
    fdisk /dev/xvdf
    # create all partitions, type: fd
    # install multidisk admin
    apt-get install mdadm
    # in case the install is interrupted (e.g. an update running at the same time)
    dpkg --configure -a
    ## creation
    mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/xvdf1 /dev/xvdf2
    # check if everything went fine
    cat /proc/mdstat
    mdadm --detail /dev/md0
    # file system
    mkfs -t ext4 /dev/md0
    # mount
    mount /dev/md0 /mnt
  • Manage
    # make it permanent
    # get the ARRAY line
    mdadm --detail --scan
    # edit file adding the ARRAY line at the end of file
    nano /etc/mdadm/mdadm.conf
    mdadm --assemble --scan
    # update on ubuntu
    update-rc.d mdadm defaults
    ## mdmonitor for CentOS
    nano /etc/default/mdadm
    # AUTOSTART=true

    # check if you get failures
    mdadm --detail /dev/md0
    # add a safeguard: "if md0 fails use md2"
    mdadm /dev/md0 --add /dev/md2

Mount file systems on demand

  • connect to a samba share at will
    yum install samba-client samba-common cifs-utils
    ## on your usual private network, -L for list
    # smbclient -U user -L share
    ## use IP on public network
    smbclient -I 172.31.2.893 -U user -L share

    # create a samba share
    mkdir samba
    # create credentials
    echo "username=user" > .smbcredentials
    echo "password=p4ss" >> .smbcredentials
    # it is plain text, so secure it with "no access"
    chmod 600 .smbcredentials

    sudo nano /etc/fstab
  • edit fstab: use IPs
    # cifs always
    //172.31.2.893/myshare /mnt/samba cifs credentials=/mnt/.smbcredentials,defaults 0 0
  • mount
    # mount everything in the fstab
    mount -a

Advanced file system permissions

  • Check with ls -la, use chmod
  • ‘Sticky’ bit prevents users from deleting files they do not own (value=1, shown as T/t)
    # check my user groups and permissions
    whoami
    # then use `ls -la`

    ## set the sticky bit (e.g. so others can not delete)
    # the 'sticky' bit goes before the permissions (now 4 digits)
    sudo chmod 1770 adv-perm/
    # with the sticky bit set, you see a 'T' in the `ls -la` output
  • Set gid bit (group ownership: value=2)
    sudo chmod 2750 adv-perm/
  • Both sticky bit and gid bit (value=3)
    sudo chmod 3770 adv-perm/
  • Find directories with this kind of permissions
    sudo find -type d -perm -2000
  • Run an app with the file owner’s permissions (e.g. the passwd app -> value=4)
    ls -la passwd # -rw-r--r-- 1
    # the change-password command works!
    passwd
    # set uid -> execute with the file owner's permissions instead of mine
    which passwd # where is it
    cd /usr/bin
    ls -la passwd # -rwsr-xr-x 1 -> the 's' marks the setuid bit
    # I can change my own password, root too, everyone else can not
    sudo chmod 4755 passwd

Setup user and group disk quotas for file systems

  1. Install quota
    sudo apt-get install quota
    # edit fstab
    nano /etc/fstab
  2. Edit fstab
    # add usrquota on the field after ext4
    LABEL=cloudimg-rootfs / ext4 defaults,discard,usrquota 0 0
  3. Remount the root and check the quota
    mount -o remount /
    ## try to avoid users writing files while doing this
    ## -c=create, -u=user quota file, -g=group quota file, -m=do not remount read-only
    quotacheck -cugm /
    # edit the quota file for user1
    edquota user1
  4. Edit the quota file (0=no limits)
    # 200MB = 20000000
    Filesystem blocks soft hard inodes soft hard
    /dev/xvda1 24 20000000 25000000 8 0 0
  5. Check for user
    # quota for a certain user
    quota user1
    # get report
    repquota -a
  6. Setup grace period (time allowed above the soft limit before it is enforced)
    edquota -t
  7. Edit the grace quota file
    Filesystem   block grace period  Inode grace period
    /dev/xvda1 7 days 7 days

Create and configure file systems

  1. Create ext4
    ## check status: there is '/dev/xvf1' partition
    lsblk
    ## create partition file system
    # -t=type, -V=verbose, -v=version
    # you may also add partition size, else it is default value
    sudo mkfs -t ext4 /dev/xvf1
    # create a mount directory
    sudo mkdir /mnt/ext4
    # mount the partition
    sudo mount /dev/xvf1 /mnt/ext4
    ## check status: '/dev/xvf1' is now mounted
    lsblk
  2. Create btrfs
    # prepare a second partition on '/dev/xvf2'
    sudo mkdir /mnt/btrfs
    sudo mkfs -t btrfs /dev/xvf2
    sudo mount /dev/xvf2 /mnt/btrfs
    ## check status: there is '/dev/xvf2' partition
    lsblk

Linux Foundation Certified SysAdmin (LFCS): Service Configuration

Configure a caching DNS Server

  1. Install required tools
    yum install bind bind-utils
    nano /etc/named.conf
  2. Edit the named.conf file in order to be able to cache
    ## add 'any' so other hosts can use it as a cache
    allow-query {localhost; any; };
    # also add the following line
    allow-query-cache {localhost; any; };
  3. Check the security contexts
    cd /etc/
    ls -la named.conf
    # check that the 'named_conf_t' context is on it
    ls -lZ named.conf
    # if not, set it
    semanage fcontext -a -t named_conf_t /etc/named.conf
    # check the security context of the zones file too
    ls -lZ named.rfc1912.zones
    # check our configuration, to avoid typos. No news = good news
    named-checkconf /etc/named.conf
    # restart and enable it for the next boot
    systemctl restart named
    systemctl enable named
    systemctl status named
    # usually you may open port 53 for this kind of service

Maintain a DNS zone

  1. Install required tools
    yum install bind bind-utils
    cat /etc/named.conf
  2. Edit the named.conf file to add the zone
    ## define the zone
    zone "la.local" in {
    type master;
    file "la.local.zone";
    };
  3. Create your zone file
    nano la.local.zone
  4. Edit the file content
    ; local network
    $ORIGIN la.local.
    ; time to live, bigger reduces the number of queries
    ; in seconds (10 minutes)
    $TTL 600

    ; Start of Authority resource record (SOA)
    ; dnsServer primaryEmail
    @ IN SOA dns1.la.local. mail.la.local. (
    ; serial number, to refresh the zone, always increment
    1
    ; slave servers wait this long before asking the master for updates (refresh)
    21600
    ; retry
    3600
    ; expire
    604800
    ; minimum time to live
    86400
    )

    ; A records
    ; name IN - recordType - IPADDRESS
    webserver IN A 10.9.8.80
    user1client IN A 10.9.8.25
    mail IN A 10.9.8.150
    dns1 IN A 10.9.8.53
    ; alias/canonical records (CNAME)
    www IN CNAME webserver
    ; mail exchange records (MX)
    IN MX 10 mail.la.local.
    IN MX 20 labackup.ca.local.

Connect to network shares

  • Server
    1. Install and set up nfs-utils
      yum install nfs-utils
      # create the share directory
      mkdir /share
      # set rights
      chmod -R 755 /share
      # owned by 'nfsnobody' to avoid issues
      chown nfsnobody:nfsnobody /share
      # enable and start services
      systemctl enable rpcbind
      systemctl enable nfs-server
      systemctl enable nfs-idmap
      systemctl start rpcbind
      systemctl start nfs-server
      systemctl start nfs-idmap
      # edit configuration
      nano /etc/exports
    2. Edit configuration
      # who we share with, plus rights
      /share 172.31.96.178(rw,sync,no_root_squash,no_all_squash)
    3. Reboot
      systemctl restart nfs-server
  • Client
    1. Install and set up nfs-utils
      yum install nfs-utils
      # make a mount point (to mount shared)
      cd /mnt
      mkdir -p /mnt/remote
      # is it connected? - probably not
      df -h
      # mount nfs: IP:directoryToMount mountPoint
      mount -t nfs 172.31.124.130:/share /mnt/remote
      # is it connected? - probably yes
      df -h
      # test it
      cd remote

Configure email aliases

  • Simple POSTFIX
    1. Find configuration folder
      cd /etc/postfix
      nano aliases
    2. Configure it
      # alias: webmaster mail goes to user mail
      # webmaster will have 0 emails
      webmaster: user1
      # redirect mail to several accounts
      # boss will receive a copy, user1 will still get it
      user1: user1, boss
    3. Run with the alias configuration
      sudo postalias /etc/postfix/aliases

Configure SSH servers and clients

  • Classical server setup

    sudo apt install openssh-server
    # check configuration
    less /etc/ssh/sshd_config
    # create key pair
    ssh-keygen
    ## you get `id_rsa` and `id_rsa.pub`
    # copy the public key to the other server
    ssh-copy-id user@remoteHost.lab.com
    # connect to the remote machine, no password needed
    ssh user@remoteHost.lab.com
    ## check keys
    # if ssh-copy-id does not work, you should add your public key here
    cat ~/.ssh/authorized_keys
  • Script to copy the key manually

    # if ssh-copy-id does not work
    cat ~/.ssh/id_rsa.pub | ssh user@remoteHost.lab.com "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

    Root login should not be allowed.
    Remember to prevent root login via the directive PermitRootLogin no in /etc/ssh/sshd_config (see the sketch below)
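    A minimal sketch of applying that directive; the sed pattern assumes the default commented line in sshd_config:
    # disable root SSH logins and restart the daemon
    sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
    sudo systemctl restart sshd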

Restrict access to HTTP proxy servers

  1. Find the squid configuration
    cd /etc/squid
    nano squid.conf
  2. Edit text file
    acl SSL_ports   port  443
    acl Safe_ports port 80 # http
    http_access deny !Safe_ports
    http_access allow localhost manager
    # some custom denials, '!' means not
    http_access allow !nomachine
    http_access allow !nonetwork
    ## remember to define the referenced ACLs
    # a single machine
    acl nomachine src 192.168.1.0
    # a network
    acl nonetwork src 192.168.1.0/24
  3. Restart the squid server
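    A one-line sketch of that restart, assuming the service unit is named squid:
    sudo systemctl restart squid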

Configure an IMAP and IMAPS service (and Pop3 and Pop3S!)

  1. Install and configure core
    # check postfix permissions
    cat /etc/group | grep postfix
    # check mail
    cd /var/mail
    # install dovecot
    sudo apt install dovecot-core
    # check configuration
    cd /etc/dovecot/conf.d
      sudo nano 10-mail.conf
  2. Edit the configuration
    mail_location = mbox:~/mail:INBOX=/var/mail/%u
    # who may get access to the mail directory?
    # mail_privileged_group = mail
  3. Install and configure pop3 server
    # install server
    sudo apt install dovecot-pop3d dovecot-imapd
    # check configuration
    cd /etc/dovecot/conf.d
    # imap and pop3 configuration files were created
    nano 20-imap.conf
    nano 20-pop3.conf
    # check certificates
    cd /usr/share/dovecot
    ## script to create certificates
    sudo ./makecert.sh
    # point the right certificates
    cd /etc/dovecot/private
    # check dovecot.pem is there
    cd ../conf.d
    sudo nano 10-ssl.conf
  4. Edit the configuration
    # update the SSL value
    ssl = yes
    # uncomment the keys
    ssl_cert = </etc/dovecot/dovecot.pem>
    ssl_key = </etc/dovecot/private/dovecot.pem>
  5. Restart the service so everything takes effect
    sudo systemctl restart dovecot
    # check it is running
    ps aux
    # check the ports to listen to are correct
    sudo netstat -ntplu | grep dove

Configure an HTTP server

  • CentOS

    1. Install and setup
      # install Apache
      yum -y install httpd
      # install a text-based browser
      yum -y install lynx
      lynx http://localhost
      # not started out-of-the-box, start apache
      systemctl status httpd
      systemctl start httpd
      systemctl status httpd
      lynx http://localhost
      # check configuration
      cd /etc/httpd
      ls -la
      # check conf, conf.d
      cd conf
      nano httpd.conf
    2. Edit the configuration
      ## load config files
      IncludeOptional conf.d/*.conf
      # virtual hosts
      IncludeOptional vhost.d/*.conf
    3. Restart
      # create the virtual host directory
      mkdir ../vhost.d
      systemctl restart httpd
      # check for errors
      systemctl status httpd.service
    4. Example for virtual host content
      On nano www.transapi.com_http.conf
      <VirtualHost *:80>
          ServerName www.transapi.com
          ServerAlias www
          DocumentRoot /var/www/html/transapi
      </VirtualHost>
  • Debian

    1. Install and setup
      # install Apache
      sudo apt install apache2
      # install text based browser
      sudo apt install lynx
      lynx http://localhost
      service apache2 restart
      # check configuration
      cd /etc/apache2
      ls -la
      cd conf-available/
      cd ..
      # symlinks to available configs go in conf-enabled
      cd conf-enabled/
      cd ..
      # something similar happens with sites
      cd sites-available/
      cd ..
      cd sites-enabled/
      cd ..
      less apache2.conf
      # the binary is apache2ctl
    2. Helper applications for symlinks
    • a2enmod/a2dismod
    • a2ensite/a2dissite
    • a2enconf/a2disconf
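    A brief usage sketch, assuming a site file named www.transapi.com_http.conf as in the CentOS example above:
      # enable a site (creates the symlink in sites-enabled) and reload Apache
      sudo a2ensite www.transapi.com_http.conf
      sudo systemctl reload apache2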

Configure HTTP server log files

  1. Find configuration
    sudo nano httpd.conf
  2. Check configuration
    # check modules
    <IfModule log_config_module>
    # LogFormat "<format string>" logName
    # error log is not customizable
    # access log is customizable
    # %h=host %l=login %u=user %t=dateAndTime
    # %r=firstLineOfRequest %s=finalStatus
    CustomLog "logs/access_log" combined
  3. Find configuration
    cd /etc/httpd/logs
    less access_log
    nano ../conf/httpd.conf
  4. Change log format on configuration file
    <IfModule logio_module>
    LogFormat "Host: %h - Dateand time: %t - Requested %r" userCustom
    </IfModule>

    # then search for this and edit
    <IfModule log_config_module>
    customLog "logs/access_log" userCustom
    </IfModule>
  5. Restart to get configuration
    systemctl restart httpd

Restrict access to a web page

  1. Find the configuration
    # check access
    lynx http://localhost
    # only local browsers should view that page
    cd /etc/httpd/
    nano httpd.conf
  2. Edit the configuration
    <Directory "/var/www">
    AllowOverride None
    # Allow open access:
    Require all granted
    </Directory>

    <Directory "/var/www/html/test/">
    Order allow,deny
    # allow from my machine - public address IPv4, IPv6
    Allow from 52.123.123.123
    # you may also allow from your private IP
    # allow from localhost at last (IPv4, IPv6)
    Allow from 127
    Allow from ::1
    </Directory>
  3. Test everything went fine
    lynx http://localhost
    # you may check access logs

Configure a database server

There are many different DBs, we use MariaDB (MySQL free implementation)

  1. Find the configuration
    # install
    apt-get install mariadb-server mariadb-client
    # secure the installation
    mysql_secure_installation
    ## shell for mysql
    mysql -u root -p
    show databases;
    # you may run scripts
    create database test;
    show databases;
    use test;
    exit;
    # mariadb is the package name - the service runs as mysql
    systemctl status mysql

Manage and configure containers

Docker

  1. Build a server in container
    # show list
    docker ps
    # run a container: detached, interactive, with a TTY
    # -p is for computerPort:containerPort
    # -v volume directoryMachine:directoryContainer
    # imageName:version (no version = latest version)
    docker run -dit --name my-test-web -p 8080:80 -v /home/user1/webstuff:/usr/local/apache2/htdocs/ httpd:2.4
    # it is live, so anything added to webstuff after container creation is served too
    # stop a container
    docker stop my-test-web
    # start a container
    docker start my-test-web
  2. Remove a server in container
    # remove container
    docker stop my-test-web
    docker rm my-test-web
    # remove docker image
    docker image ls
    docker image rm httpd:2.4

Manage and configure VMs

  1. Install

    # install virtual machines
    yum install qemu-kvm libvirt libvirt-client virt-install virt-viewer
    # check your server has hardware virtualization support
    # intel=vmx , amd=svm
    cat /proc/cpuinfo | grep vmx
    # if you get nothing there is no hardware support; qemu falls back to slower software emulation
    # create the VM; inside a nested VM, 64-bit guests may not be available
    virt-install --name=tinyalpine --vcpus=1 --memory=1024 --cdrom=alpine-standard-3.7.0-x86.iso
  2. Virtual shell

    virsh list --all
    # edit setup
    virsh edit tinyalpine
    # autostart when you start the machine
    virsh autostart tinyalpine
    # disable autostart
    virsh autostart --disable tinyalpine
    ## clone machine and change configuration
    # pause or stop before cloning
    virt-clone --original=tinyalpine --name=tiny2 --file=/var/lib/libvirt/images/tiny2.qcow2