Vocabulary

| Name | Description |
| --- | --- |
| Layer | A set of read-only files or commands that describe how to set up the underlying system beneath the container. Layers are built on top of each other, and each one represents a change to the filesystem. |
| Image | An immutable layer that forms the base of the container. |
| Container | An instance of the image that can be executed as an independent application. The container has a mutable layer that lies on top of the image and that is separate from the underlying layers. |
| Registry | A storage and content delivery system used for distributing Docker images. |
| Repository | A collection of related Docker images, often different versions of the same application. |

❗ Notes

  • Try to keep your images as small as possible, so containers start faster
  • Don’t include libraries and dependencies unless they’re an absolute requirement for the application to run.

Commands

Developing with Docker Containers

| Command | Description |
| --- | --- |
| docker create [image] | Create a new container from a particular image. |
| docker login | Log in to the Docker Hub registry. |
| docker pull [image] | Pull an image from the Docker Hub registry. |
| docker push [username/image] | Push an image to the Docker Hub registry. |
| docker search [term] | Search the Docker Hub registry for a particular term. |
| docker tag [source] [target] | Create a target tag or alias that refers to a source image. |

Running Docker Containers

| Command | Description |
| --- | --- |
| docker start [container] | Start a particular container. |
| docker stop [container] | Stop a particular container. |
| docker exec -ti [container] [command] | Run a shell command inside a particular container. |
| docker run -ti --name [container] [image] [command] | Create and start a container in one step, then run a command inside it. |
| docker run -ti --rm --name [container] [image] [command] | Create and start a container in one step, run a command inside it, and remove the container after the command finishes. |
| docker pause [container] | Pause all processes running within a particular container. |

Using Docker Utilities

| Command | Description |
| --- | --- |
| docker history [image] | Display the history of a particular image. |
| docker images | List all of the images that are currently stored on the system. |
| docker inspect [object] | Display low-level information about a particular Docker object. |
| docker ps | List all of the containers that are currently running. |
| docker version | Display the version of Docker that is currently installed on the system. |

Cleaning Up Your Docker Environment

| Command | Description |
| --- | --- |
| docker kill [container] | Kill a particular container. |
| docker kill $(docker ps -q) | Kill all containers that are currently running. |
| docker rm [container] | Delete a particular container that is not currently running. |
| docker rm $(docker ps -a -q) | Delete all containers that are not currently running. |
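A minimal session tying these commands together (the nginx image is just an example):

```bash
# pull an image, run it detached as a named container, open a shell, then clean up
docker pull nginx
docker run -d --name web -p 80:80 nginx
docker exec -ti web sh
docker stop web
docker rm web
```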

The conditional statement

```bash
# Check if Java is installed and set the path
if [ -d "$JAVA_HOME/bin" ]; then
    PATH="$JAVA_HOME/bin:$PATH"
fi
```

Limit execution

Place logic within a script using Linux environment variables

| Env var | Description |
| --- | --- |
| $USER | Username of the executing user |
| $UID | User identification number (UID) of the executing user |
  • User

    ```bash
    # only user JBOSS1 can run this script
    if [ "$USER" != 'jboss1' ]; then
        echo "Sorry, this script must be run as JBOSS1!"
        exit 1
    fi
    echo "continue script"
    ```
  • User identification number

    ```bash
    # only user ROOT can run this script, whose UID is 0
    if [ "$UID" -gt 0 ]; then
        echo "Sorry, this script must be run as ROOT!"
        exit 1
    fi
    echo "continue script"
    ```

Use arguments

  • No arguments

    ```bash
    # Don't do anything if there is no argument
    if [ $# -eq 0 ]; then
        echo "No arguments provided"
        exit 1
    fi
    echo "arguments found: $#"
    ```
  • Multiple arguments

    ```bash
    echo $1 $2 $3
    ```
  • Argument 0 is the name of the script being executed

    ```bash
    # log the name of the script executed
    echo test >> "$0.log"
    ```

User input

  • Prompt for user input

    ```bash
    echo "enter a word please:"
    read word
    echo "$word"
    ```
  • Offer choices to the user

    ```bash
    read -p "Install Software ?? [Y/n]: " answ
    if [ "$answ" == 'n' ]; then
        exit 1
    fi
    echo "Installation starting..."
    ```

Exit on failure

```bash
# install JDK 8: extract the archive and check success before setting symbolic links
tar kxzmf jdk-8u221-linux-x64.tar.gz -C /jdk --checkpoint=.500; ec=$?
if [ $ec -ne 0 ]; then
    echo "Installation failed - exiting."
    exit 1
fi
```
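The same guard can be written more compactly with `||`; a sketch of the equivalent pattern:

```bash
# exit immediately if the extraction fails
tar kxzmf jdk-8u221-linux-x64.tar.gz -C /jdk --checkpoint=.500 \
    || { echo "Installation failed - exiting."; exit 1; }
```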

Introduction

  • Account basic overview

    • AWS free tier (for learning): different services, different free periods
    • Create AWS account: email / pass / unique account name / credit card info -> access to AWS console
  • Navigate AWS console

    • One-click navigation (shortcuts): go to the arrow, then drag and drop what you want onto the navbar
    • AWS services: use the search bar
    • Resource groups (tag/label): manage production groups
    • System alerts: (know if your resources were affected by outage) list of issues, status, regions…
    • Check / switch regions
    • Support
    • Documentation

Note

  • Create billing alarm (dashboard/billing)
  • email: billing preferences -> receive free tier usage alerts -> enabled
  • on CloudWatch: (left side) under Alarms -> Billing -> create an alarm -> select metric (>= $1), use SNS (Simple Notification Service)
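A sketch of the same billing alarm via the AWS CLI (the account ID and topic ARN are placeholders; billing metrics live in us-east-1):

```bash
# alarm fires when estimated charges reach $1 and notifies an SNS topic
aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name billing-alarm \
    --namespace AWS/Billing --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum --period 21600 --evaluation-periods 1 \
    --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-topic
```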

Identity and Access Management (IAM)

  • Users / accounts / services / groups / roles / policies (you get root automatically: admin for everything)
  • Best practices (settings, configuration and architecture for high security, accessibility and availability)
    • delete root access keys (use IAM keys)
    • activate MFA (Multi-Factor Authentication)
    • create individual IAM users (by default no access)
    • use groups to assign permissions (manage accounts permissions easily)
    • apply an IAM password policy
  • Users and policies
    • create users
      • programmatic access (key + secret key, no password)
      • AWS management console access (password, less secure)
    • attach an existing policy (e.g. AWS S3 full access) -> easy, but bad practice: use groups!
  • Groups and policies
    • on a group several users get access to the same policies
    • a user can be part of several groups
  • IAM roles
    • services get roles: e.g. “role for EC2 to grant it access to S3”
    • set under “Roles” on the IAM dashboard -> create role (for an AWS service, e.g. EC2), then add a policy and name it
    • “Deny” always overrides “Allow”
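A sketch of the “use groups to assign permissions” best practice with the AWS CLI (user and group names are made up):

```bash
# create a group, attach a managed policy to it, then add a user to the group
aws iam create-group --group-name s3-admins
aws iam attach-group-policy --group-name s3-admins \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam create-user --user-name jdoe
aws iam add-user-to-group --group-name s3-admins --user-name jdoe
```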

Network and connectivity

  • VPC (private cloud which allows access to services)
    • A private subsection of AWS
    • Management console -> “Network & content delivery” -> VPC
```mermaid
graph TD

A(Internet)
B[Cable/Modem]
C[Router/Switch]
D[Firewall]
E[Device 1]
F[Device 2]

G(Internet)
H[Internet Gateway]
I[Route Table]
J[Network Access Control List - NACL]
K[EC2 instance 1]
L[EC2 instance 2]

subgraph Home
  A
  B
  C
  D
  E
  F
end

subgraph VPC
  G
  H
  I
  J
  K
  L
end

A-->B;
B-->C;
C-->D;
D-->E;
D-->F;

G-->H;
H-->I;
I-->J;
J-->K;
J-->L;
```

Note

  • Both EC2 instances may be on the same subnet
  • You get a default VPC when you create an Amazon AWS account
  • Internet Gateways (IGW)

    • Hardware and software to connect to the Internet
    • Horizontally scaled
    • Redundant and highly available VPC component
    • Allows communication between VPC instances and the Internet
  • Route Tables (RTs)

    • VPC -> Route tables -> has routes
    • Routes can still exist even if there is no IGW (shown as a “black hole”)
    • Default route for any non-local address: 0.0.0.0/0
    • If there is no route, it doesn’t matter whether we have an IGW
  • Network Access Control List

    • Default NACL on default VPC: 6 default subnets (1 per availability zone)
    • Stateless (allowing traffic in one direction does not automatically allow the response in the other)
    • Security -> “Network ACL”
    • Rules
      • Types
        • Inbound (Internet -> Subnet)
        • Outbound (Subnet -> Internet)
      • Default for custom NACLs: “DENY everything” (the default VPC’s NACL allows all traffic)
      • Rules are matched top-down; the first match applies and overrides everything below it
      • Be careful, you may cut all communication here!
  • Subnets

    • Sub-network: public/private
    • Default: subnets go to main route table
  • Availability zones (VPC specific)

    • Add subnets to availability zones
    • Utilize several for high availability (redundancy) + fault tolerance

Compute Services (EC2)

  • Elastic Compute Cloud (EC2)
    • Basics: server on cloud, scalable
    • Purchase options
      • On demand (most expensive, most efficient)
      • Reserved (period of time: pricing discount)
      • Spot (“bid for instance”, if it fits your price you got it while it is available)
      • Free tier
    • Tier variations
      • Instance type
        • general
        • compute optimized
        • accelerated computing
        • memory optimized
        • storage optimized
      • EBS oriented
      • AMI type (image = OS + extra modules)
      • Data transfer
      • Region

❗ You may compare AWS structure to a local computer structure

| AWS | Local PC |
| --- | --- |
| AMI | OS |
| Instance Type | Processor |
| EBS (Elastic Block Storage) | Local storage |
| IP Addressing | Network Adapter |
| Security Groups | Firewall |
| RAM | Memory |
  • Amazon Machine Images (AMI)

    • Template to deploy EC2 instances (e.g. Linux + Apache)
    • Prevent human error
    • Categories
      • Community
      • AWS marketplace images
      • Custom
  • Instance Types (“CPU of your instance”)

    • Family (category on what they are optimized to do)
    • Type (subcategory)
    • CPU (number of virtual CPUs)
    • Memory (amount of RAM)
    • Instance storage (storage type - hard drive: SSD, classic)
    • EBS optimized available
    • Network performance (based on transfer rate)
  • Elastic Block Storage (EBS)

    • “Block level storage for use on EC2 instances”
    • On the same availability zone as EC2 instance (consider it its hard drive)
    • IOPS (Input-output Operations per second)
    • EC2 -> instance -> select AMI -> delete storage on termination?
    • Root (EC2 must have one) + additional (you may swap it between EC2 instances, like a pendrive)
    • Snapshots (image/backup/duplicate)
  • Security groups

    • Similar to NACL: allow/deny traffic on instance level (virtual firewall)
    • Stateful (an allowed inbound connection is automatically allowed back out)

Note
Elastic Load Balancer (ELB) sends traffic to either subnet 1 or 2 depending on usage/load
If you are using an ELB, the rules of the 2 security groups must be the same

```mermaid
graph TD

A[Internet]
B[Gateway]
C[ELB]
D[EC2 - 1]
E[EC2 - 2]

subgraph Sec group 1
  D
end

subgraph Sec group 2
  E
end

A-->B;
B-->C;
C--NACL-->D;
C--NACL-->E;
```
  • IP addressing

    • Providing a public IP Address to an EC2 instance (address on the network)
    • Default: all EC2 instances have a private IP address (to communicate with each other inside the VPC)
    • After selecting instance type -> “auto-assign public address?”
  • Launching and using EC2 instances

    • Launch

      1. select AMI
      2. select instance type
      3. configure instance details
      4. add storage
      5. add tags (e.g. name the instance)
      6. configure/assign security group
      7. review and launch
      8. create & download a key pair
    • Connect (Linux SSH)

      1. select instance
      2. under actions, choose connect
      3. follow terminal instructions
        3.1. open a terminal to access the command line
        3.2. navigate to the directory where you downloaded the key pair
        3.3. run chmod 400 essentialkp.pem on the key pair to restrict its permissions (see the sketch after this list)
        3.4. run the example command provided by the AWS management console
    • Connecting to an EC2 instance with a windows PC

      • Use the PuTTY / PuTTYgen applications to convert key files
      • You may use RDP (Remote Desktop, GUI, for Windows instances)
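A hypothetical terminal session for the Linux SSH steps above (key file name, user and hostname are placeholders):

```bash
# restrict the key's permissions, then connect with it
chmod 400 essentialkp.pem
ssh -i essentialkp.pem ec2-user@ec2-54-0-0-1.compute-1.amazonaws.com
```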

Storage services

  • Basics

    • S3 (Simple Storage Service): online bulk storage service
    • Bucket: root-level folder in S3. Buckets live in regions; the closer the region, the faster the access
    • Folder: a bucket subfolder
    • Object: file in a bucket
    • Pricing: by storage or by requests (operations: get, put, copy…)
  • Buckets and objects

    • Bucket names must be unique, lowercase, may contain numbers and hyphens, and should avoid IP address format
    • Use GUI: Upload/create folder
    • Properties (bucket level and object level)
  • Storage classes

    • “classification assigned to an object” (from highest availability and price to lower rates)
      • Standard
      • Intelligent-Tiering
      • Standard Infrequent Access (Standard-IA)
      • One Zone Infrequent Access (One Zone-IA)
      • Glacier
      • Glacier Deep Archive
      • Reduced Redundancy (not recommended)
    • Classes differ in cost, availability (% of a year the file will be accessible), durability (% chance over a year that the file is not lost) and intended frequency of access
    • Each object is assigned a class when added to a bucket
    • You can change an object’s storage class
    • Settings / changing storage class
      • Default: standard
      • Change
        • Select another type during upload
        • using object lifecycle policies (glacier = only this method, takes 2 days for effect)
        • manual change (allow multipart for big files)
  • Object lifecycles

    • Rules that automate migrating an object to a different storage class, or deleting it, based on time intervals
    • Located at bucket level, but you can decide which elements you want to change
```mermaid
graph LR
A[Standard]
B((30 days))
C[Infrequent]
D((60 days))
E[Glacier]

A --> B;
B --> C;
C --> D;
D --> E;
```
  • Permissions
    • Allow granular control over who can view, access and use specific buckets and objects
    • 2 levels
      • Bucket
        • List (bucket name)
        • Upload/delete (objects)
        • Permissions (add, edit, view)
      • Object
        • open/download
        • view permissions
        • edit permissions
    • S3 permissions: “share an object with the world”
      1. on the object: set grantee (everyone), check (open/download)
      2. under actions: “make public”
      3. link under properties is now live
      4. remove access (delete permissions, remove bucket policy that provided public access)
  • Object versioning
    • “Feature which keeps track of and stores all versions of an object so that you can access and use an older version if you like”
    • ON/OFF, once ON you may only suspend (previous versions will still exist)
    • Set only on bucket level, applies to all objects in bucket
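A hedged AWS CLI sketch of the bucket/object basics above (the bucket name is a placeholder and must be globally unique):

```bash
# create a bucket, upload an object in a cheaper storage class, enable versioning
aws s3 mb s3://my-example-bucket-123456
aws s3 cp backup.tar.gz s3://my-example-bucket-123456/backups/ --storage-class STANDARD_IA
aws s3api put-bucket-versioning --bucket my-example-bucket-123456 \
    --versioning-configuration Status=Enabled
```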

Database Services

  • RDS & Dynamo basics
    • RDS: SQL relational

      • very structured (tables)
      • engines available: Aurora, MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL Server
      • pricing depends on:
        • on demand/reserved
        • instance type
        • RDS engine
        • DB storage
        • Data transfer to RDS
    • Dynamo DB

      • JSON-like documents
      • no alternative engines
      • pricing depends on:
        • provisioned throughput capacity
        • indexed data storage
        • DynamoDB storage
        • reserved capacity
        • data transfer
    • Provisioning RDS (MySQL)

      • needs a private subnet group containing the EC2 subnets (navigate to subnet groups, click “create” and complete the form)
```mermaid
graph TD
A[EC2 - 1]
B[EC2 - 2]
C[Route table]
D[RDS]

A --- C;
B --- C;
C --> D;
```
  • Launching a RDS DB

    1. select engine
    2. Specify DB details (instance specs)
    3. Configure advanced settings (network security (private subnet groups, do not create publicly accessible), DB options, backup, monitoring, maintenance)
    4. Launch
  • Connect to MySQL RDS DB

    • Download and install a third-party app (MySQL Workbench), then open it and set up a connection:
      • Name it
      • Standard TCP/IP over SSH
      • SSH hostname (public IP address of EC2 to tunnel to)
      • SSH username (default user for EC2)
      • SSH Keyfile (.pem from EC2)
      • Hostname (writer cluster endpoint from RDS console)
      • Port: 3306 (important! check it!)
      • Username & password are stored in the keychain (they were set when you created the DB)
      • Click “Test Connection” and, if it is successful, then “Connect”
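A hypothetical command-line equivalent of the Workbench setup, tunnelling through the EC2 instance (endpoints and key name are placeholders):

```bash
# forward local port 3306 to the RDS endpoint through the EC2 host
ssh -i essentialkp.pem -N -L 3306:mydb.cluster-abc123.us-east-1.rds.amazonaws.com:3306 \
    ec2-user@ec2-54-0-0-1.compute-1.amazonaws.com &
# connect the MySQL client through the tunnel
mysql -h 127.0.0.1 -P 3306 -u admin -p
```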

Monitoring, Alerts and Notifications

  • Simple Notification Service (SNS)
    • SNS automates sending emails and notifications (PUSH) according to events in AWS
      • topic: “group tag”
      • publisher (producer, endpoints)
      • subscribers (consumer, e.g. SQS Queues)
      • pricing: depends on number of publishers, notification deliveries, data transfer
    • Important data
      • max subscriptions per topic: 12,500,000
      • max topics per account: 10,000
      • integrates with services such as EC2, S3 and Lambda
    • Using SNS
      • create a topic (name, display name for SMS, create)
      • add subscriptions
        1. select topic
        2. create subscription
        3. select protocol
        4. enter endpoint (e.g. email address)
        5. create subscription
      • publishing to topic
        1. click on “publish to topic”
        2. enter a subject and message
        3. click on “publish message”
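The same topic/subscription/publish flow as an AWS CLI sketch (topic name, ARN and email are placeholders):

```bash
# create a topic, subscribe an email endpoint, then publish a message
aws sns create-topic --name my-demo-topic
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:my-demo-topic \
    --protocol email --notification-endpoint user@example.com
aws sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:my-demo-topic \
    --subject "Test" --message "Hello from SNS"
```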

Management tools

  • CloudWatch (what happened?)
    • “service which allows you to monitor services in AWS account”
    • Dashboard with metrics from EC2, S3, billing… Set threshold to trigger alarms
    • Pricing (depends on regions)
      • per dashboard
      • detailed monitoring EC2 instance
      • custom metrics
      • CloudWatch API requests
      • CloudWatch logs
      • CloudWatch events/custom events
    • Metrics and alarms
      • Dashboard: create -> add widget -> explore metrics -> set timer -> create
      • Alarms: create -> choose category -> explore metrics -> name + description + threshold + action (e.g. connect to SNS topic :smile:) + period statistics -> create
  • CloudTrail (who did it?)
    • service which tracks actions taken in your AWS account (governance, compliance)
    • logs actions as they happen (stored in an S3 bucket)
    • pricing
      • management events (save logs in bucket)
      • data events
      • usage charges (e.g. encryption)
    • create trail -> name it -> all regions? encryption? -> select events and an S3 bucket for the logs (create one if it doesn’t exist)
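A minimal CLI sketch of the trail-creation step (trail and bucket names are placeholders; the bucket needs a policy allowing CloudTrail to write to it):

```bash
# create a trail that logs to an existing S3 bucket, then start logging
aws cloudtrail create-trail --name my-trail --s3-bucket-name my-log-bucket
aws cloudtrail start-logging --name my-trail
```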

Load balancing, Elasticity and Scalability

  • Elastic Load Balancer (ELB)
    • “evenly distributes traffic between multiple EC2 instances”
    • takes into account availability zones and instance health
    • security groups must have the same configuration in order to balance the instances equally
    • pricing: per hour, per GB of data transferred through it
    • creating an ELB (Application ELB; the Classic ELB is legacy)
      • Basic configuration (name, scheme internet-facing, IP address type, protocol, port, availability zones)
      • configure security settings
      • configure routing (target group with name, protocol, port)
      • configure health checks (protocol and thresholds: e.g. 5 consecutive successes = healthy, 2 failures = unhealthy)
      • register targets (where you want to serve traffic)
      • add tags, review and create
  • Autoscaling
    • “Automates the process of adding/removing instances as demand increases/decreases”
    • Autoscaling groups: collections of EC2 (can contain instances from different subnets)
    • Components
      • Launch configuration: EC2 template used when autoscaling needs to add an additional server to scaling group
      • Autoscaling group: rules + settings that govern when EC2 server is added/removed (min: 2 EC2s)
    • Pricing: free, but you need to pay for the resources provisioned
    • Using Autoscaling
      • Create a launch configuration
        1. Select AMI
        2. select instance type
        3. create launch configuration (name, public IP, optional bash script)
        4. select add storage type
        5. configure security group (check ports)
        6. Review and create
      • Create an auto-scaling group
        1. Create autoscaling groups using the launch configuration
        2. Configure details (name, number of instances, VPC and subnets you want to autoscale, Advanced: ELB + health checks)
        3. Configure policies (min, max, execute)
        4. Configure notifications (SNS topic)
        5. Configure tags
        6. Review and create
```mermaid
graph TD
A[User]
B[ELB]
C[EC2-1]
D[EC2-2]
E[EC2-3 - added on demand]

subgraph Autoscaling group
  C
  D
  E
end

A --- B;
B --- C;
B --> D;
B --> E;
```
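A hedged CLI sketch of the two autoscaling components described above (AMI ID, key name and subnet IDs are placeholders):

```bash
# launch configuration: the EC2 template cloned when autoscaling adds a server
aws autoscaling create-launch-configuration --launch-configuration-name web-lc \
    --image-id ami-0123456789abcdef0 --instance-type t2.micro --key-name essentialkp
# autoscaling group: the rules governing when instances are added/removed
aws autoscaling create-auto-scaling-group --auto-scaling-group-name web-asg \
    --launch-configuration-name web-lc --min-size 2 --max-size 4 \
    --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"
```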
  • Route 53 (Domain and DNS - Domain Name System)
    • “Configure and manage web domains of websites and apps on AWS”
      • register domain names
      • Domain Name System (DNS) service
      • Health checking
    • Pricing
      • hosted zones
      • traffic flow (per policy)
      • latency based routing
      • geo DNS queries
      • health checks
      • domain registrations and transfers
    • Using Route 53 (A-records or address records)
      • domain registration
        1. search and select domain name available
        2. fill out contact details page
        3. review details and purchase
        4. complete and wait for “domain registration process to complete”
      • hosted zones and record sets
        1. navigate to Hosted zones and select the domain name you just registered
        2. create 2 type A record sets for your domain to route to the ELB
        3. create record set, and done
    • Cloudfront
      • “AWS service which replicates data, video and apps around the world to reduce latency (speed up distribution)”
      • Set up S3 objects distribution
        1. upload data to S3 bucket
        2. go to CloudFront console -> distribution
        3. choose delivery method (e.g. web)
        4. create distribution -> origin settings -> choose S3 bucket
        5. default cache behaviour settings (forward requests, HTTP access, cache behaviour (Edge location))
        6. distribution settings: price, class, alternate domain names, distribution state
        7. Choose “Create Distribution”
        8. Ensure alternate domain names are configured in Route53 if they are set in CloudFront (alias field = ELB)

Serverless compute: lambda

```mermaid
graph LR
A[User]
B[Internet]
C[Route 53]
D[Lambda]

subgraph AWS
  D
end

A --> B;
B --> C;
C --> D;
```
  • Basics

    • “Lambda is SERVERLESS computing; forget about EC2. It executes your code and autoscales on its own. You pay only while your code runs.”
    • Supports different runtimes, such as Java, NodeJS, Go, C#…
    • Pricing (min: 100ms)
      • requests (to execute your code)
      • duration (length of time it takes the code to execute)
      • accessing data from other AWS services/resources (trigger = CloudWatch alarm to monitor EC2 instances, so if they are shut down, they can be rebooted)
  • Test

    • How to create a lambda function
      1. select a lambda blueprint that fits your needs, or “author from scratch”
      2. configure the function (name, runtime, role)
      3. lambda function code
      4. lambda execution role (permissions to interact with other AWS services)
      5. advanced settings (amount of memory, run on VPC, encrypt variables with AWS KMS (Key Management Service), tags, debugging and error handling, concurrency (limit scalability), audit & compliance with CloudTrail)
    • How to execute (test) a lambda function
      1. select function and click on “test”
      2. enter a “test event” (if required)
      3. click “Save”
      4. click “Test” and review the result
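Step 4 of the test flow can also be done from the CLI; a sketch (function name and payload are placeholders; AWS CLI v2 needs the binary-format flag shown):

```bash
# invoke the function with a test event and print the result
aws lambda invoke --function-name my-demo-function \
    --cli-binary-format raw-in-base64-out \
    --payload '{"key": "value"}' response.json
cat response.json
```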

SCRUM

Framework: teams with roles, events, artifacts and rules

  • begin: start with what can be seen or known
  • track the progress and tweak as needed
  • it is based on pillars and values
    • values: commitment, courage, focus, openness and respect
    • pillars: transparency, inspection, adaptation
  • team members learn and explore those values with events, roles and artifacts

Pillars

  • transparency: all stakeholders understand the progress status of features or the product
  • inspection: progress status and the tools used should be reviewed regularly to identify undesirable gaps
  • adaptation: the process or material should be adjusted as soon as possible when aspects of the process cross acceptable boundaries and the resulting product would not be acceptable

Values

  • commitment: commit to achieve goals
  • courage: do the right thing
  • focus: on work and goals
  • openness: open about work and challenges
  • respect: members are capable and independent

Events

Time-boxed; the sprint duration cannot be changed once it is defined

  • sprint
  • sprint planning
  • daily scrum
  • sprint review
  • sprint retrospective

Sprint

  • the heart of scrum
  • time boxed in a month or less
  • during the sprint
    • no changes are made which could endanger the goals
    • quality goals do not decrease
    • scope may be clarified and re-negotiated between the PO and dev team as more is learned
  • considerations
    • each sprint may be considered a project with no more than 1 month horizon
    • are used to accomplish something
    • each sprint has a definition, a design and a planning
    • limited to 1 month top: enable predictability and limit risk

Sprint planning

  • plan with the whole team
  • 8 hours top
  • scrum master ensures the event happens
  • define goals
  • what can be delivered/how can that be delivered
  • what can be done
    • work selected from product backlog
    • the number of items is solely up to the scrum team
    • only developers can assess what can be accomplished
    • backlog = forecast, not commitment
    • the goal is defined on planning
    • the goal is the objective to be met within the sprint through implementation of the backlog; it provides guidance to the devs on why this increment is being built
  • how the chosen work gets done
    • once items are selected, the scrum team decides how the work will be done during that time
    • product backlog items + plan to deliver = sprint backlog
    • after the meeting, the dev team self-organizes to undertake the work in the backlog and decompose it into units of 1 day or less
    • by the end of sprint planning, the development team should be able to explain to the PO and Scrum Master how they will organize

Daily SCRUM

  • 15-minute internal meeting for devs; other people should not disrupt it
  • every day, plan the work for the next 24 hours
  • optimize collaboration
  • after that, devs may meet for detailed discussions

Sprint review

  • at the end of sprint
  • inspect Increment and backlog
  • informal meeting
  • result: a revised product backlog that defines probable backlog items for the next sprint
  • SCRUM team + key stakeholders invited by PO
  • PO explains what is being done and what is not
  • dev team: what went well, what was solved
  • dev team demo
  • PO discuss backlog as it stands
  • everyone gives valuable input for the next planning
  • review timeline, budget…

Sprint retrospective

  • team inspects itself and creates plan for improvements
  • held after the review and prior to the next planning; the Scrum Master ensures it takes place and that it is positive and productive

SCRUM team

  • Definition

    • 3 types of members
      • Product Owner (PO)
      • Development Team (DT)
      • Scrum Master (SM)
    • self organized and cross-functional
    • designed to optimize flexibility, creativity and productivity
    • delivers product iteratively and incrementally, maximizing opportunities for feedback.
  • Roles

    • PO

      • maximize value of the product resulting from DT work
      • clearly express and order the backlog to optimize DT work
      • makes the backlog visible, transparent and clear
      • 1 person, not a committee
    • SM

      • responsible for SCRUM being understood and enacted
      • facilitator for PO and DT
      • organizes events and helps the team improve
    • DT

      • deliver “done” product
      • 3 to 9 persons

SCRUM artifacts

  • Product backlog
    • ordered list with everything needed by the product
    • PO is responsible for it
    • living artifact: it is never complete, as it evolves with the product
    • changes on business requirements, market conditions or tech = changes on backlog
    • product backlog refinement: adding details, estimates, order, done by PO + DT
  • Sprint backlog
    • items from product backlog selected for the sprint + plan to deliver them
    • forecast
    • highly visible, real time picture
  • Increment
    • sum of all items of sprint backlog completed on the sprint
    • 1 step towards a vision or goal
```mermaid
graph LR;
A[Product backlog]
B(sprint planning)
C[Sprint backlog]
D(Scrum team daily scrum)
E(sprint review)
F[Increment]
G(sprint retrospective)

A --> B;
B --> C;
C --> D;
D --> E;
E --> F;
E --> G;
G --> B;
```

Basic

| Action | Command example |
| --- | --- |
| Run a container named “web” and expose the service on port 80 | docker run -d -p 80:80 --name web nginx |
| Stop the container | docker stop web |
| Remove the container | docker rm web |
| Run the same “web” container, mounting a local directory (read-only) as the web content | docker run -d -v $PWD/web:/usr/share/nginx/html:ro -p 80:80 --name web nginx |

Build images

| Action | Command example |
| --- | --- |
| Build an image | docker build -t alpine-mongo . |
| List built images | docker images |

Compose and swarm

Swarm

  • The manager node creates the swarm
  • A second node joins by executing the join command that the manager outputs
| Action | Command example |
| --- | --- |
| Init manager node | docker swarm init |
| New node orchestrated by the manager | docker swarm join --token SWMTKN-1-69yjut8vfhelsyujw0kayrifdj42a4dj74j29mykwekkyuczbq-6xw587ux0fr1gykx8rddy6rmd 192.168.1.205:2377 |

Compose

  • Compose groups microservice content to set up a stack
| Action | Command example |
| --- | --- |
| Deploy services in a stack | docker stack deploy -c wordpressv3.yml wordpress |
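A hedged end-to-end sketch tying swarm and stack deployment together (the join token and manager IP come from your own `docker swarm init` output):

```bash
# on the manager node: initialize the swarm (prints the worker join command)
docker swarm init
# on each worker node: join using the token printed by the manager
docker swarm join --token <token> <manager-ip>:2377
# back on the manager: deploy the stack and check its services
docker stack deploy -c wordpressv3.yml wordpress
docker stack services wordpress
```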