systemctl start docker
# are there docker logs?
tail /var/log/messages
Create two new containers using the httpd image
Syslog as the log driver
docker container run -d --name syslog-logging httpd
docker logs syslog-logging
# Error response from daemon: configured logging does not support reading
# check the content of '/var/log/messages':
# verify that the syslog-logging container is sending its logs to syslog
tail /var/log/messages
# the output shows the logs that are being sent to syslog
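The `docker logs` error above happens because the container inherits the daemon's default log driver, which in this lab is syslog, and the syslog driver cannot be read back through `docker logs`. A minimal sketch of how such a default could be set (an assumption; the daemon may also be configured through systemd flags):

# assumed daemon configuration in /etc/docker/daemon.json
cat /etc/docker/daemon.json
# {
#   "log-driver": "syslog"
# }
# after changing it, restart the daemon
sudo systemctl restart docker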
JSON file as log driver
docker container run -d --name json-logging --log-driver json-file httpd
docker logs json-logging
# the logs do not appear in /var/log/messages
Watchtower (updating containers)
Watchtower: a container that monitors other containers and updates them when their image is updated
Watchtower needs images pushed to a repository (see DockerHub)
Create a dockerfile for a nodejs ‘express’ app
# set base image
FROM node

# create the directory where the app will be copied to
RUN mkdir -p /var/node

# add our express content to that directory
ADD content-express-demo-app /var/node/

# set the working directory
WORKDIR /var/node/

# scripts to build the app
RUN npm install

# execute
CMD ./bin/www
Log into DockerHub, build and push the image
docker login
# -f = specify the Dockerfile
# .  = use the current directory as the build context
docker build -t myDockerHubUser/express -f Dockerfile .
docker push myDockerHubUser/express
Execute dockerized app and watchtower
## express app
# -d = run in the background
# -p = port mapping, 80 on the host, 3000 in the container
docker run -d --name demo -p 80:3000 --restart always myDockerHubUser/express
docker ps

## watchtower
# no published port is needed; port 80 is already taken by the demo container
# -v = mount the Docker socket so Watchtower can manage containers
# the last parameter is the refresh period (30 seconds)
docker run -d --name watchtower --restart always -v /var/run/docker.sock:/var/run/docker.sock v2tec/watchtower -i 30
Make a small change to the 'express' app Docker image and push it to the repository. Watchtower should update the running container automatically.
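A minimal sketch of that update cycle, assuming the same image name as above:

# edit the app, then rebuild and push; Watchtower polls every 30 seconds
docker build -t myDockerHubUser/express -f Dockerfile .
docker push myDockerHubUser/express
# after the next poll, the 'demo' container should be recreated from the new image
docker ps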
Metadata and labels
Use 2 different consoles to avoid issues (we will name them ‘docker workstation’ and ‘docker server’)
Build the Docker image (on the docker workstation)
# log in to DockerHub
docker login
# build the image with parameters
docker build -t myDockerHubUser/weather-app \
  --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
  --build-arg APPLICATION_NAME=weather-app \
  --build-arg BUILD_VERSION=v1.0 \
  -f Dockerfile .
# show the image ID (IMAGE_ID), and use it to inspect the image
docker images
docker inspect IMAGE_ID
# push the image to DockerHub
docker push myDockerHubUser/weather-app
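To look at just the labels instead of the whole inspect output, a format filter helps (IMAGE_ID is a placeholder, as above):

# print only the image labels as JSON
docker inspect --format '{{ json .Config.Labels }}' IMAGE_ID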
Create the weather-app container (on the docker server)
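The lab's exact command is not reproduced here; a minimal sketch, assuming the app listens on port 3000 like the earlier express example:

docker run -d --name weather-app -p 80:3000 --restart always myDockerHubUser/weather-app
docker ps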
Check out version v1.1 of the weather app (on the docker workstation)
cd weather-app
git checkout v1.1
cd ../
Rebuild the weather-app image (on the docker server)
docker build -t myDockerHubUser/weather-app \
  --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
  --build-arg APPLICATION_NAME=weather-app \
  --build-arg BUILD_VERSION=v1.1 \
  -f Dockerfile .
docker push myDockerHubUser/weather-app
# show the image ID (IMAGE_ID), and use it to inspect the image
docker images
docker inspect IMAGE_ID
Load balancing containers
2 servers: a swarm manager and a swarm worker (work on the swarm manager unless told otherwise)
Create a Docker Compose file and an nginx.conf on Swarm Server 1 (in the lb-challenge directory)
events {
  worker_connections 1024;
}

http {
  upstream localhost {
    server weather-app1:3000;
    server weather-app2:3000;
    server weather-app3:3000;
  }

  server {
    listen 80;
    server_name localhost;

    location / {
      proxy_pass http://localhost;
      proxy_set_header Host $host;
    }
  }
}
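The nginx configuration above expects three weather-app containers behind it. A minimal sketch of what the accompanying docker-compose.yml might look like (an assumption; the lab's actual file, image names and build contexts may differ):

cat > docker-compose.yml <<'EOF'
version: '3'
services:
  weather-app1:
    image: myDockerHubUser/weather-app
  weather-app2:
    image: myDockerHubUser/weather-app
  weather-app3:
    image: myDockerHubUser/weather-app
  nginx:
    build: ./nginx
    ports:
      - "80:80"
EOF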
Execute docker-compose up
cd ../
docker-compose up --build -d
docker ps
Create a Docker service using Docker Swarm
cd ~/
# review the token
cat swarm-token.txt
# copy the 'docker swarm join' command from the previous step
On swarm worker: execute the command that was copied from the previous step
Back on the swarm manager: create a Docker service
docker service create --name nginx-app --publish published=8080,target=80 --replicas=2 nginx
docker ps
# verify that the default nginx page loads in the browser (PUBLIC_IP_ADDRESS:8080)
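To check the service itself rather than the local containers, the swarm-level commands below may also be useful (a small addition to the lab steps):

# list services and where their tasks are running
docker service ls
docker service ps nginx-app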
Install Docker

# install
sudo yum -y install docker

## set up permissions
# drop down to root
sudo -i
# set up a docker group
groupadd docker
# add myUser to the docker group
usermod -aG docker myUser
## enable and start docker
systemctl enable --now docker
# log out from root
logout
Run a standard docker image
# verify the installation
docker run docker.io/hello-world

## Get the image
# check the local images
docker images
# get an OS image
docker pull centos:6
# start the Docker container in interactive mode
# -i = interactive
# -t = allocate a terminal (TTY)
docker run -it --name websetup centos:6 /bin/bash

## Prepare the system
# update the system
yum -y update
# install apache
yum -y install httpd git
## clone a repository, and put its content in the Apache html folder
git clone https://github.com/linuxacademy/content-dockerquest-spacebones
cp content-dockerquest-spacebones/doge/* /var/www/html
# to display correctly, rename the default 'welcome.conf' file to 'welcome.conf.bak'
mv /etc/httpd/conf.d/welcome.conf /etc/httpd/conf.d/welcome.conf.bak

## Test everything works
# enable & start the Apache service
chkconfig httpd on && service httpd start
# exit the container
exit

## Save the edited image
docker commit websetup spacebones:thewebsite
Dockerizing an application
Clone the repo and change into the app subdirectory
git clone https://github.com/linuxacademy/content-dockerquest-spacebones
cd ~/content-dockerquest-spacebones/nodejs-app
Use the Dockerfile below to build a new image
# base for the new container
FROM node:7

# working directory
WORKDIR /app

# copy package.json into the working directory and install dependencies
COPY package.json /app
RUN npm install

# copy the rest of the app
COPY . /app

CMD node index.js

# expose port
EXPOSE 8081
Build and run
## build the container image
# . = use the current directory as the build context
docker build -t baconator:dev .
# (optional) Run the image to verify functionality
docker run -d -p 80:8081 baconator:dev
Docker optimization
Optimizing docker builds with onbuild
ONBUILD = instructions that run only when the image is used as a base for a "child image". Building the Dockerfile creates a new image, but the ONBUILD instructions are not applied to that image itself; they execute later, in builds that use it as their base.
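A minimal illustration of the idea (a generic sketch, not the lab's salt-master Dockerfile):

# parent Dockerfile: the ONBUILD steps are recorded, not executed, when this image is built
FROM node:7
ONBUILD COPY . /app
ONBUILD RUN cd /app && npm install

# child Dockerfile (a separate file): building an image FROM the parent
# triggers the recorded ONBUILD steps before the child's own instructions
# FROM my-onbuild-parent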
Find the docker file and prepare to edit it
cd content-dockerquest-spacebones/salt-example/salt-master
nano dockerfile

# build "tablesalt:master"
# . = using the dockerfile in my current directory
docker build -t tablesalt:master .
# check it worked
docker images
Ignoring files during docker build
.dockerignore file
Find the .dockerignore file and prepare to edit it
cd content-dockerquest-spacebones/salt-example/salt-master
nano .dockerignore
Edit the dockerignore
badscript.sh
*.conf
README.md
Storing data and Networking in Docker
Storage
Creating data containers
Create postgres data container image
# is docker running?
docker ps
# -v = volume to bind on: /data
# /bin/true = if it runs properly, it won't return anything
docker create -v /data --name posgresData spacebones/postgres /bin/true
Mount the data container's volumes in several containers
docker run -d --volumes-from posgresData --name posgresContainer1 spacebones/postgres
docker run -d --volumes-from posgresData --name posgresContainer2 spacebones/postgres
# check it worked
docker ps
# check the volume IDs
docker volume list
# check that posgresContainer1 uses the same volume from the list
docker inspect posgresContainer1
Networking
Container networking with links (legacy)
# is docker running?
docker ps

## create the website container
docker run -d -p 80:80 --name spacebones spacebones/spacebones:thewebsite

## create the database container
# -P = publish exposed ports to free ports on my host
# --link = the name of the container we want to link to (name:alias)
docker run -d -P --name posgresContainer --link spacebones:spacebones spacebones/postgres

## verify that the link works
# docker inspect -f "{{ .HostConfig.Links }}" $CONTAINERNAME
docker inspect -f "{{ .HostConfig.Links }}" posgresContainer
Docker volumes
# is docker running?
docker ps
# get the code
git clone https://github.com/linuxacademy/content-dockerquest-spacebones.git

## Create a volume
docker volume create missionstatus
# check results
docker volume ls
docker volume inspect missionstatus
# keep the Mountpoint information

## Copy website data to the volume
sudo cp -r /home/cloud_user/content-dockerquest-spacebones/volumes/* /var/lib/docker/volumes/missionstatus/_data/
# check it worked
ls /var/lib/docker/volumes/missionstatus/_data/
exit

## Create a container
# source = what to mount
# target = where to mount it
# finally add the name of the image
docker run -d -p 80:80 --name fishin-mission --mount source=missionstatus,target=/usr/local/apache2/htdocs httpd
# check it worked
docker ps
AWS Trusted Advisor
cost optimization, performance, security, fault tolerance, service limits
available to all customers: core checks (6)
available to business/enterprise: all checks
Identity and Access Management (IAM)
Root User
credentials: the email and password used to sign up for the AWS account
should not be used for daily work; remove its access keys and enable MFA
Users and Groups
Users
admin rights (daily work)
users need policies to access resources
deny policies override grant policies
credentials should not be shared
do not store credentials on EC2 instances; forward them instead (use roles)
Groups
users can be members of groups, which have policies attached
organize users by functions (DB admin, SysAdmin…)
Roles
IAM roles
Temporary security credentials issued by the Security Token Service (STS; single endpoint sts.amazonaws.com in us-east-1 (N. Virginia), regional endpoints reduce latency when using the APIs, default duration: 1 h)
For AWS resources or users outside AWS
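A minimal sketch of requesting temporary credentials with the CLI (the role ARN and session name are placeholders):

# assume a role and receive temporary credentials (default duration 1 hour)
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/example-role \
  --role-session-name example-session
# the response contains AccessKeyId, SecretAccessKey, SessionToken and Expiration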
Roles and AWS
used because policies can not be attached directly to AWS resources
1 service -> only 1 role
roles are attached to resources, no stored credentials
can be changed on running instances via the CLI or console
Other uses
cross-account access (delegation)
identity federation (non-AWS: link identities across different systems - AWS Cognito is recommended for web/mobile identities, SAML (Security Assertion Markup Language) for corporate domain accounts)
Policies
JSON document that states permissions
deny overrides allow
templates (admin, power-user (no user creation), view-only)
you may use the policy generator, create them from scratch, or use the visual editor
users can have more than 1 policy
attached to identities, not to resources
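A minimal sketch of attaching an inline policy to a user from the CLI (user name, policy name and bucket are placeholders):

# write a small policy document, then attach it as an inline user policy
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::example-bucket"]
    }
  ]
}
EOF
aws iam put-user-policy --user-name example-user \
  --policy-name example-policy --policy-document file://policy.json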
Access Advisor
users should have as few permissions as possible
unused permissions can be detected with Access Advisor (audit a user, group, or role)
Encryption
Server-side encryption (at rest): on disk (read/write)
Client-side encryption (in transit): on message (sent/received)
Enveloping: using keys to encrypt other keys (master key, KEK = key encryption key)
Symmetric encryption
same key to encrypt and decrypt
Asymmetric encryption (SSL, SSH)
different keys to encrypt and decrypt (encrypt = public, decrypt = private)
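A quick illustration of both with OpenSSL (file names and passphrase are placeholders):

# symmetric: the same passphrase encrypts and decrypts
openssl enc -aes-256-cbc -in secret.txt -out secret.enc -k 'passphrase'
openssl enc -d -aes-256-cbc -in secret.enc -out secret.txt -k 'passphrase'

# asymmetric: encrypt with the public key, decrypt with the private key
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem
openssl rsautl -encrypt -pubin -inkey public.pem -in secret.txt -out secret.rsa
openssl rsautl -decrypt -inkey private.pem -in secret.rsa -out secret.txt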
HSM and KMS
Hardware Security Module (HSM)
Physical device to store keys on premises
AWS CloudHSM: managed HSMs that can be clustered (across availability zones)
A load balancer distributes requests, and keys are replicated across the cluster
Key Management Service (KMS)
Create and control encryption keys
Advantage over HSM: integration on AWS, can use IAM policies for access
Customer master keys (CMKs) are stored in KMS
Both the data and the encrypted data key are stored together
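A minimal sketch of envelope encryption with the CLI (the key alias is a placeholder); KMS returns a plaintext data key for local encryption plus an encrypted copy to store alongside the data, which is what the diagram below shows:

# ask KMS for a data key under a given customer master key
aws kms generate-data-key --key-id alias/example-key --key-spec AES_256
# the response contains:
#   Plaintext      -> use it locally to encrypt the data, then discard it
#   CiphertextBlob -> store it next to the encrypted data
# later, decrypt the stored blob to recover the data key
aws kms decrypt --ciphertext-blob fileb://encrypted-key.bin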
graph LR
A[Plain text data]
B(DataKey)
C[Cypher text]
D[Storage]
E(EncryptedKey)
F(MasterKey)
F--generate-->B;
A--encrypt with DataKey-->C;
B--encrypt with MasterKey-->E;
C-->D;
E-->D;
S3 bucket encryption policies override the settings of the folders within them. If you need to use separate encryption keys for some documents within a bucket, you will need to change the settings on each document individually.
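A minimal sketch of setting encryption per object at upload time (bucket, object and key alias are placeholders):

# the bucket's default encryption applies unless the object specifies otherwise
aws s3 cp report.pdf s3://example-bucket/report.pdf --sse aws:kms --sse-kms-key-id alias/example-key
# re-encrypt an existing object under a different key by copying it onto itself
aws s3 cp s3://example-bucket/report.pdf s3://example-bucket/report.pdf --sse aws:kms --sse-kms-key-id alias/other-key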
OS-level access
Overview
EC2 falls under the infrastructure services model: the user is responsible for IAM, encryption, security groups and NACLs
Unix/Linux -> Linux = (cloud-init) terminal SSH, key pairs
Windows -> Windows = (ec2config) remote desktop
Windows -> Linux = PuTTY
SSH
connection (symmetric): faster
key pair authentication (asymmetric)
RSA in ~/.ssh/authorized_keys, chmod 400 <keyname>.pem
process
Client: connection request
Server: public key
Both: Cipher negotiation
Both: Key exchange algorithm
Both: Connection encrypted using SSH key
Bastion Host
“jump box” (go to security groups, configure inbound & outbound)
deploy in 2 availability zones, with autoscaling, in public subnets; allow access only from a list of addresses
graph LR
A[User]
B(Internet gateway)
C[Autoscaling]
D[Bastion]
E((Private subnet))
subgraph Public subnet
C
D
end
C-->D;
A-->B;
B--NAT gateway-->C;
D-->E;
Linux example
SSH agent forwarding whenever possible
chmod 400 <path-to-key>.pem
ssh-agent bash
ssh-add <path-to-key>.pem
# first host (-A = forward the agent so we can hop from the instance)
ssh -A ec2-user@<ip-address>
# second host: the key is already available there
# thanks to ssh agent forwarding
ssh ec2-user@<ip-private-address>
Windows remote desktop example (RDP protocol)
AWS get script, add key on console, double click and go
Windows Bash example
Go to update and security - for developers - bash shell (beta)
Add or remove windows features -> Windows subsystem for Linux
Windows store -> choose distro Linux on Windows (Ubuntu)
Windows PuTTY example
Download the .pem key, convert it to .ppk, and connect
Data Security
Securing data at rest
Concerns
accidental information disclosure
data integrity compromised
accidental deletion
availability
S3
permissions: bucket and object level, IAM policies, MFA Delete
versioning: helps against accidental deletion
replication: automatic across availability zones
backup: with replication and versioning often unnecessary; rules can store copies in another region
server-side encryption: S3 master key or KMS
VPC endpoint: use data inside VPC without making it public
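A minimal sketch of the versioning and server-side encryption settings from the list above (the bucket name is a placeholder):

# protect against accidental deletion and overwrites
aws s3api put-bucket-versioning --bucket example-bucket \
  --versioning-configuration Status=Enabled
# encrypt new objects by default with the S3 master key (SSE-S3)
aws s3api put-bucket-encryption --bucket example-bucket \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'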
Glacier
server-side encryption: encrypted in AES-256, 1 archive = 1 unique key, there is a master key created and stored securely
EBS
replication: each volume is replicated within its availability zone (protects against disk failure)
backup: snapshots of volumes + IAM for access
server-side encryption: AWS KMS master key, OS tools
RDS
permission: IAM policies
encryption: KMS (except micro-instances), DB cryptographic options (reference on DB fields)
DynamoDB
permissions: IAM
encryption: Application level encryption, same as RDS,
VPC endpoint: can use data inside VPC without making it public
EMR
Amazon managed service: AWS provides AMIs, no custom
data store: S3 or DynamoDB, HDFS (Hadoop Distributed File System -> defaults to Hadoop KMS)
techniques to improve data security: SSL, application level encryption, hybrid
Decommissioning data and Media
different from on-prem decommissioning
delete -> blocks become unallocated, reassigned somewhere
reading and writing to blocks
write = overwrite existing
read = data or hypervisor returns 0
end of life
DoD 5220.22-M (National Industrial Security Program Operating Manual)
NIST SP 800-88 (Guidelines for media sanitization)
AWS uses both of the previous standards
if a device can not be sanitized with either = destroy the device
Securing data in transit
Concerns with communicating over public links (Internet)
Approaches
use HTTPS
offload traffic to ELB
use SSH
database traffic and AWS console traffic use SSL/TLS
X.509 certificates (client browser, use public key)
AWS certificate manager (free)
SSL/TLS certificates (ELB, CloudFront, API Gateway, CloudFormation)
automatic renewal, import 3rd party
OS Security
Recommendations
Disable root user API access keys
Use limited source IPs in security groups
Password protect pem files
Keep authorized_key file up to date
Rotate credentials (access keys) regularly (see the sketch after this list)
Use Access Advisor to identify and remove unnecessary permissions
Bastion hosts
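A minimal sketch of access key rotation with the CLI (user name and key ID are placeholders):

# list existing keys and create a new one
aws iam list-access-keys --user-name example-user
aws iam create-access-key --user-name example-user
# after switching applications to the new key, deactivate and delete the old one
aws iam update-access-key --user-name example-user --access-key-id AKIAOLDKEYID --status Inactive
aws iam delete-access-key --user-name example-user --access-key-id AKIAOLDKEYID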
Custom AMIs
Base configuration, “snapshots”
Clean up/hardening tasks before upload
protect credentials (disable insecure apps, software should not use default accounts, SSH keys must not be published, disable guest account)
protect data (delete shell history)
remove shared devices (e.g. printers)
Do not violate AWS Acceptable Use Policy (example: SMTP/proxy server)
Bootstrapping
cloud-init, cfn-init, tools like Puppet and Chef
patching/updates: update AMIs frequently!
consider dependencies
security software updates might update beyond the patch level of AMI
application updates might patch beyond the build in the AMI
take into account environment differences (production, test…)
instance updates might break external management and security monitoring (test first on non-critical instances)
AWS Systems Manager
Inventory (can collect data on apps, files, network configs, services…)
Automation (via scheduling, alarm triggering…)
Run command (secure remote management replacing a bastion host or SSH; see the sketch after this list)
Patch manager (deploy OS and software patches on demand)
Maintenance Window (scheduling administrative and maintenance tasks)
State manager and parameter store (for config management)
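A minimal sketch of Run Command from the CLI (the instance ID is a placeholder):

# run a shell command on a managed instance without SSH
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=instanceids,Values=i-0123456789abcdef0" \
  --parameters commands="yum -y update"
# check the result
aws ssm list-command-invocations --details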
Mitigating problems
Malware
use only trusted AMIs
principle of least privilege
keep patches up to date
antivirus/antispam software
host-based IDS
Abuse
AWS will shut down malicious abusers
compromised resource, unintentional abuse (web crawlers may be confused with DDOS), secondary abuse (user of your system uploaded infected file), false complaints
Best practices: do not ignore AWS communications, follow security best practices, mitigate identified compromises
Infrastructure security
VPC Security
Internet only
Use SSL/TLS
Build your own VPN solution
Planned routing and placement
Security groups and NACLs
IPsec tunnel over the Internet
Deploy VPN (AWS or other)
VPC networking (subnets, security groups, NACLs)
AWS direct connect (links to peer AWS)
No additional security, check organization requirements
Terminates at Availability Zones in a region
VPC networking (subnets, security groups, NACLs)
Hybrid (direct + IPsec)
Best practices of the previous ones
VPC networking (subnets, security groups, NACLs)
Network segmentation
VPC (isolate workload, e.g. departments)
Security groups: stateful (TCP UDP ports in both directions)
NACLs: stateless, granular control over protocols, work alongside security groups, remember ephemeral ports (the client's range depends on the OS); see the sketch after this list
Host based firewalls (OS level)
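A minimal sketch of the difference in practice (the security group and NACL IDs are placeholders): a stateful security group rule only needs the inbound side, while a stateless NACL needs explicit inbound and outbound entries, including the ephemeral reply ports:

# stateful security group: allow inbound HTTPS, the reply traffic is allowed automatically
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# stateless NACL: inbound HTTPS plus an outbound rule for ephemeral reply ports
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
  --rule-number 100 --protocol tcp --port-range From=443,To=443 \
  --cidr-block 0.0.0.0/0 --rule-action allow --ingress
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
  --rule-number 100 --protocol tcp --port-range From=1024,To=65535 \
  --cidr-block 0.0.0.0/0 --rule-action allow --egress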
Strengthening
Customer side of the shared responsibility model (control access, network security, secure traffic)
Best practices
Security groups
NACLs
Direct Connect or IPsec for links to your other sites
Encrypt data in transit often
Layer network security
Logs
Secure periphery systems
DNS use SSL/TLS to prevent spoofing
Active directory/LDAP
Time servers (synch from trusted source)
Repositories (do not post credentials)
Threat Protection Layer
Concern: untrusted connections
Layers
Threat protection (IDS, IPs, firewall)
DMZ presentation (NACL and security groups)
Application (NACL and Security groups)
Data (NACL and Security groups)
Testing and measurement
Vulnerability (risk assessment)
3rd party evaluation with little inside knowledge
Penetration testing
AWS must be notified before
AWS vulnerability penetration form
m1.small or micro instances can not be tested
Measuring risk management
Monitor procedures
Measure effectiveness
Review effectiveness
Internal audit
Management reviews (scope)
AWS Web Application Firewall (WAF)
Conditions/rules applied to CloudFront or an Application Load Balancer
Watch for cross-site scripting, IP addresses, request locations, query strings and SQL injection
Multiple conditions = AND (all must be true)
AWS Shield (DDOS protection)
Basic = included with WAF
Advanced ($3,000/month per organization)
Expands WAF protection to ELB, CloudFront, Route 53, and resources with Elastic IPs
Contact the 24x7 DDoS Response Team (DRT)
Expanded protection against DDoS and other attacks
Monitoring, alerting, and auditing
Monitoring Basics
Questions
What parameters
How to measure them
Threshold
Can they be escalated?
Storage
Log
Individual actions
Trail access
Invalid access attempt
IAM
Creation of new logs
Create/delete system elements
AWS Config (resources, list of supported services)
you may
evaluate resources
snapshot of config
retrieve config (resources, historical)
get changes notifications
view relationships between resources
uses
administer resources
audit compliance
config troubleshooting via history
security analysis
AWS Systems Manager - Inventory and insights (for resource groups)
Linux Foundation Certified SysAdmin (LFCS): Storage management
Manage physical storage partitions
Always create a backup before doing this
fdisk
# list mounted devices
lsblk
# make a partition: add device (dev) and partition name 'xvdf'
sudo fdisk /dev/xvdf
# now you use single-letter commands ('a' -> that flag is from "before UEFI" times)
g   # gpt
p   # print partitions
n   # new partition
    # add sector, partition size
q   # exit
# check the list of mounted devices again
lsblk
# remove that new partition (number 2)
sudo fdisk /dev/xvdf
d   # delete
2   # partition id, to delete it
w   # write
q   # exit
gparted and parted CLI
sudo parted /dev/xvdf
Maximum number of primary partitions on an MBR disk device = 4
LVM storage
Logical Volume Manager (LVM) -> groups physical devices together and presents them as a single device
yum install lvm2
## let's join multiple partitions or devices
# file type 8e = Linux LVM
sudo fdisk /dev/xvdf
# create multiple partitions, primary type, type Linux LVM
l    # show the list of partition types
t    # change the partition type
8e   # Linux LVM
# create physical volumes
pvcreate /dev/xvdf1 /dev/xvdf2

## create a volume group
vgcreate tinydata /dev/xvdf1 /dev/xvdf2
# create a logical volume; --name sets the LV name, the last value is the volume group
lvcreate --name logical-tiny --size 600M tinydata
# show what we have
lvdisplay

# use an expandable file system (ext4)
mkfs -t ext4 /dev/tinydata/logical-tiny
# mount it
cd /mnt
mkdir teeny
mount /dev/tinydata/logical-tiny /mnt/teeny
# show results
df -h
Extend a previously existing volume
# create a backup before this!
fdisk /dev/xvdf
# reboot
# create a new physical volume
pvcreate /dev/xvdf3

# extend the volume group with the new partition
vgextend tinydata /dev/xvdf3
vgdisplay
# check the empty space to add (e.g. 105)
# resize the logical volume
lvextend -l +105 /dev/tinydata/logical-tiny
## resize the file system
# check status
e2fsck -f /dev/tinydata/logical-tiny
# resize
resize2fs /dev/tinydata/logical-tiny
mount /dev/tinydata/logical-tiny /mnt/teeny
Check volumes
pvs   # physical volume list
vgs   # group volume list
lvs   # logical volume list
Encrypted storage
Format
# is the encryption module loaded?
grep -i config_dm_crypt /boot/config-$(uname -r)
yum install cryptsetup
# check partitions
lsblk

# create the encrypted partition
cryptsetup -y luksFormat /dev/xvdf1
# add a passphrase
Use
# open it, and use the password
cryptsetup luksOpen /dev/xvdf1 mySecret
# list devices
lsblk
# is it mounted?
df -h
# create a file system
mkfs -t ext4 /dev/mapper/mySecret
# create a mount point and mount it
mkdir -p /mnt/encrypted
mount /dev/mapper/mySecret /mnt/encrypted
# walk on by
cd /mnt/encrypted/
ls -la
df -h
Swap
swap = when there is not enough RAM, inactive pages are moved to disk
Never below 32MB!
Turn off with swapoff -a, turn on with swapon -a
Configured at boot time at fstab
nano /etc/fstab
# check this line: '/root/swap swap swap sw 0 0'
The swap file must have at most 0644 permissions (0600 is recommended) in order to be enabled with the 'mkswap' and 'swapon' commands
# example
sudo su
dd if=/dev/zero of=/root/extraswap.swp bs=1024 count=524288
chmod 600 /root/extraswap.swp
mkswap /root/extraswap.swp
swapon /root/extraswap.swp
cat /proc/swaps
# edit /etc/fstab to include the line:
# /root/extraswap.swp swap swap defaults 0 0
RAID devices
Redundant Array of Independent Disks -> unify the presentation of devices + use that space for file durability
Create
fdisk /dev/xvdf
# create all partitions, type: fd
# install the multi-disk admin tool
apt-get install mdadm
# in case something goes wrong (updating at the same time)
dpkg --configure -a
## creation
mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/xvdf1 /dev/xvdf2
# check if everything went fine
cat /proc/mdstat
mdadm --detail /dev/md0
# file system
mkfs -t ext4 /dev/md0
# mount
mount /dev/md0 /mnt
Manage
# make it permanent
# get the ARRAY line
mdadm --detail --scan
# edit the file, adding the ARRAY line at the end
nano /etc/mdadm/mdadm.conf
mdadm --assemble --scan
# update on ubuntu
update-rc.d mdadm defaults
## mdmonitor for CentOS
nano /etc/default/mdadm
# AUTOSTART=true

# check if you get failures
mdadm --detail /dev/md0
# add a safeguard: "if md0 fails use md2"
mdadm /dev/md0 --add /dev/md2
Mount file systems on demand
connect to a samba share at will
yum install samba-client samba-common cifs-utils
## on your usual private network, -L for list
# smbclient -U user -L share
## use the IP on a public network
smbclient -I 172.31.2.893 -U user -L share

# create a directory for the samba share
mkdir samba
# create credentials
echo "username=user" > .smbcredentials
echo "password=p4ss" >> .smbcredentials
# it is plain text, so secure it with "no access"
chmod 600 .smbcredentials
'Sticky' bit prevents users from deleting files they do not own (value=1, shown as T)
# check my user, groups and permissions
whoami
ls -la

## set the sticky bit (e.g. others can not delete)
# the 'sticky' bit goes before the permissions (now 4 digits)
sudo chmod 1770 adv-perm/
# with the sticky bit set, you see a 'T' in the `ls -la` output
Set gid bit (group ownership: value=2)
sudo chmod 2750 adv-perm/
Both sticky bit and gid bit (value=3)
sudo chmod 3770 adv-perm/
Find directory by this kind of permissions
sudo find -type d -perm -2000
Run an app with someone else's permissions (e.g. the passwd app -> value=4)
ls -la passwd
# -rw-r--r-- 1
# the change password command works!
passwd
# set uid -> execute with the file owner's permissions instead of mine
which passwd   # where is it
cd /usr/bin
ls -la passwd
# -rwsr-xr-x 1 -> the 's' marks this
# I can change my own password, root can too, everyone else can not
sudo chmod 4755 passwd
User quotas
# add usrquota to the options field after ext4
LABEL=cloudimg-rootfs   /   ext4   defaults,discard,usrquota   0 0
Remount the root and check the quota
mount -o remount /
## try to avoid users uploading anything while doing this
# -c=create, -u=user quota file, -g=group quota file, -m=no read-only remount required
quotacheck -cugm /
# edit the quota file for user1
edquota user1
Edit the quota file (0=no limits)
# 200MB = 20000000
Filesystem   blocks   soft       hard       inodes   soft   hard
/dev/xvda1   24       20000000   25000000   8        0      0
Check for user
# quota for a certain user
quota user1
# get report
repquota -a
Set up a grace period (you have some time to go back under the limit)
edquota -t
Edit the grace quota file
Filesystem    Block grace period    Inode grace period
/dev/xvda1    7 days                7 days
Create and configure file systems
Create ext4
## check status: there is a '/dev/xvf1' partition
lsblk
## create the partition file system
# -t=type, -V=verbose, -v=version
# you may also add the partition size, else the default value is used
sudo mkfs -t ext4 /dev/xvf1
# create a mount directory
sudo mkdir -p /mnt/ext4
# mount the partition
sudo mount /dev/xvf1 /mnt/ext4
## check status: '/dev/xvf1' now shows a mount point
lsblk
Create btrfs
# prepare a second partition on '/dev/xvf2'
sudo mkdir -p /mnt/btrfs
sudo mkfs -t btrfs /dev/xvf2
sudo mount /dev/xvf2 /mnt/btrfs
## check status: '/dev/xvf2' is mounted
lsblk
Linux Foundation Certified SysAdmin (LFCS): Service Configuration
Configure a caching DNS Server
Install required tools
yum install bind bind-utils
nano /etc/named.conf
Edit the named.conf file in order to be able to cache
## add 'any' so you can use it as a cache
allow-query { localhost; any; };
# also add the following line
allow-query-cache { localhost; any; };
Check the security contexts
cd /etc/
ls -la named.conf
# check 'named_conf_t' is in the security context
ls -lZ named.conf
# if not
semanage fcontext -a -t named_conf_t /etc/named.conf
# check the security context of the zones file too
ls -lZ named.rfc1912.zones
# check our configuration, to avoid typos. No news = good news
named-checkconf /etc/named.conf
# restart and enable it for the next boot
systemctl restart named
systemctl enable named
systemctl status named
# usually you must open port 53 for this kind of service

Zone file example
; local network
$ORIGIN la.local.
; time to live; a bigger value reduces the number of queries
; in seconds (10 minutes)
$TTL 600

; Start of authority resource record (SOA)
; dnsServer primaryEmail
@ IN SOA dns.la.local. mail.la.local. (
    1        ; serial number, always increment it when the zone changes
    21600    ; refresh - slave servers wait this long before asking the master again
    3600     ; retry
    604800   ; expire
    86400    ; minimum time to live
)

; A records
; name IN recordType IPADDRESS
webserver    IN A 10.9.8.80
user1client  IN A 10.9.8.25
mail         IN A 10.9.8.150
dns1         IN A 10.9.8.53
; alias/canonical records (CNAME)
www IN CNAME webserver
; mail exchange records (MX)
    IN MX 10 mail.la.local.
    IN MX 20 labackup.ca.local.
Connect to network shares
Server
Install and set up nfs-utils
yum install nfs-utils
# create the share directory
mkdir /share
# set rights
chmod -R 755 /share
# owned by 'nfsnobody' to avoid issues
chown nfsnobody:nfsnobody /share
# enable and start the services
systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-idmap
# edit the configuration
nano /etc/exports
Edit configuration
# who we share with, plus rights
/share 172.31.96.178(rw,sync,no_root_squash,no_all_squash)
Restart the NFS server
systemctl restart nfs-server
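To apply and verify the export without a full restart, these standard NFS commands may also be useful (a small addition to the lab steps):

# re-export everything in /etc/exports and show what is being shared
exportfs -rav
showmount -e localhost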
Client
Install and set up nfs-utils
yum install nfs-utils
# make a mount point (to mount the share)
cd /mnt
mkdir -p /mnt/remote
# is it connected? - probably not
df -h
# mount nfs: ip:directoryToMount and where to mount it
mount -t nfs 172.31.124.130:/share /mnt/remote
# is it connected? - probably yes
df -h
# test it
cd remote
Configure email aliases
Simple POSTFIX
Find configuration folder
cd /etc/postfix
nano aliases
Configure it
# alias: webmaster mail goes to user1's mail
# webmaster will have 0 emails
webmaster: user1
# redirect mail to several accounts
# boss will receive a copy, user1 will still get it
user1: user1, boss
Run with the alias configuration
sudo postalias /etc/postfix/aliases
Configure SSH servers and clients
Classical server setup
sudo apt install openssh-server
# check the configuration
less /etc/ssh/sshd_config
# create a key pair
ssh-keygen
## you get `id_rsa` and `id_rsa.pub`
# copy the public key to the other server
ssh-copy-id user@remoteHost.lab.com
# connect to the remote machine, no password needed
ssh user@remoteHost.lab.com
## check keys (on the remote host)
# if ssh-copy-id does not work, you should add your public key here
cat ~/.ssh/authorized_keys
Script to copy the key manually
# if ssh-copy-id does not work
cat ~/.ssh/id_rsa.pub | ssh user@remoteHost.lab.com "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
Root login should not be allowed. Remember to prevent root login via the directive PermitRootLogin no
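A minimal sketch of applying that directive (assuming a systemd host; the service is 'sshd' on CentOS and 'ssh' on Debian/Ubuntu):

# disable root login over SSH and reload the daemon
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo sshd -t                    # syntax check
sudo systemctl restart sshd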
Restrict access to HTTP proxy servers
Install server
cd /etc/squid
nano squid.conf
Edit text file
acl SSL_ports port 443
acl Safe_ports port 80   # http
http_access deny !Safe_ports
http_access allow localhost manager
# some custom denials, '!' means not
http_access allow !nomachine
http_access allow !nonetwork
## remember to define the referenced acls
# a machine
acl nomachine src 192.168.1.0
# a network
acl nonetwork src 192.168.1.0/24
Restart the squid server
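The restart itself (assuming a systemd host; the service name may vary by distribution):

sudo systemctl restart squid
# confirm it is listening (default port 3128)
sudo netstat -ntplu | grep squid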
Configure an IMAP and IMAPS service (and Pop3 and Pop3S!)
Install and configure core
# check postfix permissions
cat /etc/group | grep postfix
# check mail
cd /var/mail
# install dovecot
sudo apt install dovecot-core
# check the configuration
cd /etc/dovecot/conf.d
sudo nano 10-mail.conf
Edit the configuration
mail_location = mbox:~/mail:INBOX=/var/mail/%u
# who may get access to the mail directory?
mail_privileged_group = mail
Install and configure pop3 server
# install the servers
sudo apt install dovecot-pop3d dovecot-imapd
# check the configuration
cd /etc/dovecot/conf.d
# 10-imap.conf and 10-pop3.conf were created
cat 10-imap.conf
cat 10-pop3.conf
# check certificates
cd /usr/share/dovecot
## script to create certificates
./makecert.sh
# point to the right certificates
cd /etc/dovecot/private
# check dovecot.pem is there
cd ../conf.d
sudo nano 10-ssl.conf
Edit the configuration
# update the SSL value
ssl = yes
# uncomment the keys
ssl_cert = </etc/dovecot/dovecot.pem
ssl_key = </etc/dovecot/private/dovecot.pem
Restart the service so everything takes effect
sudo systemctl restart dovecot
# check it is running
ps aux
# check the ports being listened on are correct
sudo netstat -ntplu | grep dove
Configure an HTTP server
CentOS
Install and setup
# install Apache
yum install httpd
# install a text based browser
yum install lynx
lynx http://localhost
# not started out-of-the-box, start apache
systemctl start httpd
systemctl status httpd
lynx http://localhost
# check the configuration
cd /etc/httpd
ls -la
# check conf, conf.d
cd conf
nano httpd.conf

Ubuntu

# install Apache
sudo apt install apache2
# install a text based browser
sudo apt install lynx
lynx http://localhost
service apache2 restart
# check the configuration
cd /etc/apache2
ls -la
cd conf-available/
cd ..
# symlinks to the 'available' files that are enabled
cd conf-enabled/
# something similar happens with sites
cd sites-available/
cd ..
cd sites-enabled/
less apache2.conf
# the binary is apache2ctl
# then search for this and edit
<IfModule log_config_module>
    CustomLog "logs/access_log" userCustom
</IfModule>
Restart to apply the configuration
systemctl restart httpd
Restrict access to a web page
Find the configuration
# check access
lynx http://localhost
# only local browsers should view that page
cd /etc/httpd/
nano conf/httpd.conf
Edit the configuration
<Directory "/var/www">
    AllowOverride None
    # Allow open access:
    Require all granted
</Directory>

<Directory "/var/www/html/test/">
    Order allow,deny
    # allow from my machine - public address IPv4, IPv6
    Allow from 52.123.123.123
    # you may also allow from your private IP
    # allow from localhost last (IPv4, IPv6)
    Allow from 127
    Allow from ::1
</Directory>
Test everything went fine
lynx http://localhost
# you may check the access logs
Configure a database server
There are many different DBs; we use MariaDB (a free MySQL implementation)
Find the configuration
# install
apt-get install mariadb-server mariadb-client
# secure the installation
mysql_secure_installation
## shell for mysql
mysql -u root -p
show databases;
# you may run scripts
create database test;
show databases;
use test;
exit;
# mariadb is only the package name - the service is mysql
systemctl status mysql
Manage and configure containers
Docker
Build a server in container
# show list
docker ps
# run a container, interactive, detached from the terminal
# -p is for computerPort:containerPort
# -v mounts a volume: directoryOnMachine:directoryInContainer
# imageName:version (no version = latest version)
docker run -dit --name my-test-web -p 8080:80 -v /home/user1/webstuff:/usr/local/apache2/htdocs/ httpd:2.4
# it is live, so if I add something to webstuff after container creation, it is served too
# stop a container
docker stop my-test-web
# start a container
docker start my-test-web

KVM virtual machines

# install virtual machine tooling
yum install qemu-kvm libvirt libvirt-client virt-viewer
# check your server has hardware virtualization options
# intel=vmx , amd=svm
cat /proc/cpuinfo | grep vmx
# if you get nothing, qemu will give you software support, which is slower
# install; inside VMs no 64-bit architecture is allowed here
virt-install --name=tinyalpine --vcpus=1 --memory=1024 --cdrom=alpinestandard-3.7.0-x86.iso
Virtual shell
virsh list --all
# edit setup
virsh edit tinyalpine
# autostart when you start the machine
virsh autostart tinyalpine
# disable autostart
virsh autostart --disable tinyalpine
## clone a machine and change its configuration
# pause or stop it before cloning
virt-clone --original=tinyalpine --name=tiny2 --file=/var/lib/libvirt/images/tiny2.qcow2