Introduction

  • Characteristics

    • decentralized - no unifying schema
    • polyglot - heterogeneous approach
    • independent - they don’t affect each other
    • do 1 thing well - if it is too complex, split it
    • black box - each microservice is a box, communication is done via API
    • you build it, you run it
  • Benefits

    • Agility - different teams own different services
    • Innovation - any service can be written in any language
    • Quality - gets OOP advantages: reusability, composability, maintainability
    • Scalability - vertical (bigger machines) and horizontal (more machines), plus replacing failing components
    • Availability - easier failure isolation
  • Challenges

    • distributed systems (bandwidth, network usage)
    • migration (define patterns)
    • versions (define pipeline release)
    • organization (build effective team structure)

Migration to Microservices

  • Brownfield applications are common (old applications not built with the latest approaches)
  • Use the structure of the web application (individual URIs) to divide it into functional domains, and replace those domains with new microservices
  • Strangler application pattern: like a vine that strangles the tree it is wrapped around
    • Strangler application steps:
      • 1- Transform: create a parallel site
      • 2- Coexist: leave the old one, but redirect to each new functionality as it is implemented incrementally
      • 3- Eliminate: remove the old functionality
  • Pre-requisites
    • Web or API based monolith: base on URL structure
    • Standardized URL structure: careful with intermediate layers
    • Meta UIs: larger chunks, so the UI is constructed on the fly
  • Bad patterns to apply
    • Not one page at a time: it may result in consistency problems; the smallest sliver should be a microservice
    • All at once
  • Good patterns
    • Refactor the back end (inside part)
      • identify bounded contexts in the application design (domain-driven design)
      • choose the smallest bounded context, which is easier to refactor
      • conceptual plan of microservices within the context (rough URL structure)
    • Refactor the front end to accommodate the microservices on the back end (outside part)
      • analyze relationships between existing screens
      • apply the principle of least astonishment to the aspects of model manipulation
      • conceptually plan the microservices within the context
      • choose whether to release an entire chunk at a time, or each chunk as a series of slivers
  • Release process (Agile)
    • MVP (Minimum Viable Product) -> User Stories
    • User Stories may be grouped into epics (high-level functionality)
    • Squads implement the epics (the smallest elements of implementation)

Complexity

  • Architectural
| Model | Complexity | Measure |
| --- | --- | --- |
| Monolithic | Code complexity | Dependencies |
| Microservices | Complexity in interactions | Interactions of the individual services’ domains |
  • Operational
    • scale in a cost-efficient way
    • operate 100 microservice components without multiplying effort
    • keep track of pipelines
    • monitor system health
    • track and debug interactions
    • analyze high amounts of log data in distributed applications
    • deal with the lack of standards
    • value diversity without locking in technologies
    • versioning
    • ensure consistent pattern usage across services
    • ensure decoupling and communication

Microservices and the Cloud

  • AWS: Cloud services provider
    • Advantages
      • High availability (access from any device)
      • Fault tolerant (version storage)
      • Scalability (add/grow services easily)
      • Elasticity (remove/shrink services easily)
    • Services provided
      • Amazon S3 (large “unlimited” storage)
      • Amazon EC2 (compute power) - part of the VPC, the user connects here
      • Amazon RDS (databases) - part of VPC
    • Resolves most important challenges
      • On demand-resources
      • Programmability
      • Experiment with low cost and risk
      • Infrastructure as code
      • Continuous delivery
      • Service orientation
      • Managed services
      • Polyglot

Simple Microservices Architecture

Split functionalities into cohesive “verticals”

| User Interface | Microservices | Datastore |
| --- | --- | --- |
| CloudFront (static content) | Application Load Balancer | ElastiCache |
| S3 | ECS | RDS |
|  |  | DynamoDB |

User Interface (CDN and WAF)

  • Route 53 provides features that can be leveraged for service discovery (user -> DNS resolver (doesn’t have the record cached) -> Route 53 (the authority))

    • Adds health checks and simple failover recovery (if a health check fails, activate the secondary server, chosen either by calculation or via latency-measurement health checks)
    • Routing policies: multiple records for a single DNS name (different weights); see the weighted-routing sketch at the end of this list
  • Virtual private clouds (VPCs)

  • CloudFront is a global Content Delivery Network that accelerates delivery of websites, APIs, video content… (for example, the images and CSS referenced by JS calls are served from CDNs)

    • Fastest access
    • Reduce risks
    • Possibility to cache contents in an easy way
    • Using a CDN is standard among companies that sell over the Internet
    • Enables the use of HTTP/2, a more performant protocol adapted to the bandwidth consumption and usage patterns of smartphones and other devices
  • WAF (Web Application Firewall) - controls traffic + custom rules for common attack patterns + API to manage rules; AWS WAF pricing depends on the number of rules.

    • Part of CDN solutions or of an Application Load Balancer (ALB) that fronts web servers or origin servers running on EC2.
    • Increased protection against web attacks
    • Security integrated with how you develop applications
    • Ease of payment and maintenance
    • Improved web traffic visibility
    • Cost effective web application protection
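
As a sketch of the weighted routing policy mentioned above (the hosted zone ID, record name and IP address are placeholders), each record for the same DNS name gets its own SetIdentifier and weight:

# upsert one weighted record; a second record with a different
# SetIdentifier and weight completes the policy
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "primary",
        "Weight": 80,
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'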

Microservices

  • OSI model (remember)
    • 7: Application
    • 6: Presentation
    • 5: Session
    • 4: Transport
    • 3: Network
    • 2: Data-link
    • 1: Physical

graph LR
A[Amazon Route 53]
B[ELB 1]
C[ELB2]
D[EC2 instance 1]
E[EC2 instance 2]
subgraph ELB-VPC
  B
  C
end
subgraph Customer-VPC
  D
  E
end
A --> B;
B --> D;
A --> C;
C --> E;
  • Components used
    • API of a microservice: REST
    • ELB (Elastic Load Balancer) distributes traffic
    • detects which instances are online and distributes traffic among them, mitigates DDoS
    • Idle timeouts (60-36000 seconds): the load balancer closes connections when they are no longer used
    • Listeners (1-10 per ELB) with routing rules (actions to forward requests, using path-pattern formats)
    • Multiple availability zones: 2 or more subnets in different zones behind the load balancer
    • ELB sends latency information to CloudWatch -> which requests auto-scaling (AS) via an AS policy -> which sends the action to the ELB
      • ELB metrics available
      • Healthy Host Count: healthy instances in each availability zone
      • Latency: elapsed time from when the request leaves the load balancer until the response is received
      • Rejected Connection Count: when the ELB cannot establish a connection with a healthy target in order to route the request
    • ALB (Application Load Balancer) -> level 7 of the OSI model: rules to redirect + target groups within VPCs; see the rule-creation sketch after the diagram below
    • ECS (Container Service) + autoscaling
graph TD
A[Load balancer]
B[Listener]
C[Rule 1]
D[Rule 2]
E[EC2]
F[EC2]
G(Health check)
H(Health check)
I[Listener]
J[Rule 3]
K[EC2]
L(Health check)
subgraph Listener layer A
  B
  C
  D
end
subgraph Login
  E
  G
end
subgraph Img
  F
  H
end
subgraph Listener layer B
  I
  J
end
subgraph Payment
  K
  L
end
A --> B;
B --> C;
C --> E;
B --> D;
D --> F;
A --> I;
I --> J;
J --> K;
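
To make the listener/rule structure in the diagram concrete, here is a hedged AWS CLI sketch that adds a path-based rule to an ALB listener (the ARNs are placeholders):

# forward /img/* requests to the "img" target group
aws elbv2 create-rule \
  --listener-arn <listener-arn> \
  --priority 10 \
  --conditions Field=path-pattern,Values='/img/*' \
  --actions Type=forward,TargetGroupArn=<img-target-group-arn>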
  • Key features

    • High availability
    • Health checks
    • Security groups
    • SSL Termination
    • Sticky sessions
    • VPCs
    • Idle connection timeout
    • Connection draining
    • Dynamic Port Mapping
    • Protocols
    • Backend server Auth (on classic, not ALB)
    • Cloudwatch Metrics
    • Access Logs
    • Path-based routing
    • Deletion Protection
  • Amazon ECS (Amazon EC2 Container Service): processing activities

    • Supports Docker containers and lets you easily run applications on a managed cluster of Amazon EC2 instances

    • Scaled in/out, use Auto-scaling

    • Amazon EKS provides container services for instance orchestration via Kubernetes

    • Docker on AWS

      • Configuration and deployment
      • Microservices
      • Batch processing

    • Amazon ECS Container Agent: runs on each container instance and reports its state to the cluster

    • Clusters: regional resource pool for grouping container instances, which start empty and can be scaled

    • Typical workflow

      • 1- User pushes an image to DockerHub
      • 2- User creates a task definition on Amazon ECS (declares resource requirements)
      • 3- User runs instances on EC2 (custom AMI with Docker support and ECS Agent. Instances will be registered with default cluster)
      • 4- User describes cluster on Amazon ECS (get information about cluster state and available resources)
      • 5- User runs task on amazon ECS (using task definition from step 2, which schedules it)
      • 6- User describes cluster again and checks the cluster has the docker image loaded
    • Ways to start a task

      • StartTask
        aws ecs start-task --cluster default --task-definition sleep360:1 --container-instances <instance arn>
      • RunTask
        aws ecs run-task --cluster default --task-definition sleep360:1 --count 1
      • Bring your own scheduler (Mesos, Marathon, custom)
    • Use of AWS CLI:

| Action | Example |
| --- | --- |
| List all clusters | aws ecs list-clusters --profile myProfile |
| List all container instances in a cluster | aws ecs list-container-instances --cluster default --profile myProfile |
| List all task definitions | aws ecs list-task-definitions --profile myProfile |
| List running tasks | aws ecs list-tasks --cluster default --profile myProfile |
| Describe a task via taskArn | aws ecs describe-tasks --cluster default --profile myProfile --tasks f94dbd87-7d84-4e27-ab70-0461d455d1ba |
| Describe a container instance via containerInstanceArn | aws ecs describe-container-instances --cluster default --profile myProfile --container-instances 25340-a6ff-45de-b3eb-fa43a88e9313 |
| Run a task | aws ecs run-task --task-definition sleep360:2 --cluster default --profile myProfile |
  • Amazon Elastic File System (Amazon EFS): storage for EC2, can be mounted as-is when connected to an Amazon VPC (Virtual Private Cloud); see the mount sketch below
    • NFSv4 protocol, multiple EC2s can share it
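
A minimal mount sketch, assuming an NFSv4.1 client on the EC2 instance (the file system ID, region and mount point are placeholders):

# mount the EFS file system over NFSv4.1
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs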

Data storage

Persist data:

  • Amazon ElastiCache Service
  • Amazon Relational Database Service (MS SQL server, Oracle, MySQL, MariaDB, PostgreSQL, Amazon Aurora)
  • Amazon DynamoDB: NoSQL: no strict schema (cannot join tables; info must be merged in the application)

Learning about RDS

  • AWS provisioned database storage
  • Amazon DynamoDB
    • read consistency -> eventually consistent by default, strongly consistent on request
    • provisioned throughput capacity (read/write throughputs); see the sketch below
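
A hedged AWS CLI sketch of both ideas (the table name and key are placeholders): provisioned throughput is declared when the table is created, and a strongly consistent read is requested per call:

# create a table with provisioned read/write capacity
aws dynamodb create-table \
  --table-name Users \
  --attribute-definitions AttributeName=userId,AttributeType=S \
  --key-schema AttributeName=userId,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

# request a strongly consistent read instead of the eventually consistent default
aws dynamodb get-item --table-name Users --key '{"userId": {"S": "42"}}' --consistent-read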

Tools

Spring Boot

Spring framework

  • Dependency injection and integration (@Component, @Autowired, @Context)

  • Super easy central configuration service creation (@EnableConfigServer creates new SpringBoot Application)

  • Straightforward service discovery registry (Registry = also a microservice, other services can register themselves)

    • Integration with Eureka REST service registry by Netflix
    • Spring Security: authentication and authorization
    • Spring Boot: on top of the Framework: (Spring Boot = (Spring Framework) + (Embedded HTTP Servers) - (XML <bean> Configuration or @Configuration))
  • Benefits

    • Easy to develop Java or Groovy
    • Reduces dev time
    • Avoid boilerplate code, annotations, configuration
    • Easy integration with other Spring resources
    • Provides a CLI to develop and test Spring Boot apps (see the sketch after this list)
    • Provides plugins to test, build (Maven, Gradle) or work with embedded DBs
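
A minimal sketch of that CLI, assuming it is installed (the project name and dependency ids are assumptions; check spring init --list for the ids available in your version):

# generate a web project skeleton and run it
spring init --dependencies=web demo
cd demo
./mvnw spring-boot:run   # or mvn spring-boot:run if no wrapper was generated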

Docker

“Docker doesn’t invent containers, but it manages them for you, making them easier to use through a standard.”

  • Linux server
  • Docker daemon
  • Log in to / interact with DockerHub
  • Use Docker Client to connect to Docker Daemon

Example: go to DockerHub, find the Apache image (the official image is named httpd), then send it to the daemon, which will send it to the Linux kernel (a fuller command sketch appears after the benefits list below):

docker run httpd
  • Advantages

    • Uses LXCs (Linux Containers), which are user space interface for the Linux Kernel Containment
    • Several LXCs run on one control host
    • LXCs are an alternative to hypervisors (Virtualbox, VMware…)
    • Prevents “it runs on my machine, but not on yours”
  • Benefits

    • Portability
    • Productivity
    • Efficiency
    • Control
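
A hedged sketch of the client/daemon workflow described above, using the official httpd image:

docker pull httpd                 # the client asks the daemon to fetch the image from DockerHub
docker run -d -p 8080:80 httpd    # the daemon starts a container from the image
docker ps                         # list running containers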

Therefore, on Amazon AWS:

  • Amazon ECS eliminates the need to install, operate and scale cluster management infrastructure
  • Use the API to launch Docker-enabled applications, query their status…
  • Amazon ECS is scalable and elastic
  • Amazon EKS builds on the same idea and is used instead of ECS when running under the Kubernetes orchestrator

Kubernetes

  • Introduction
    • Container orchestration to deploy containers

    • Portable
    • Extensible
    • Self-healing
  • Benefits

    • Deploy is fast and predictive
    • Scale on the fly
    • Roll out new features seamlessly
    • Limit hardware usage
  • Common needs satisfied: it provides the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS)

    • Co-location helper processes
    • Mounting storage systems
    • Distributing secrets
    • Checking application health
    • Monitoring resources
    • Using horizontal pod autoscaling
    • Naming and discovery
    • Balancing loads
    • Rolling updates
    • Accessing and ingesting logs
    • Debugging applications
    • Providing authentication and authorization
    • Replicating application instances
  • Amazon EKS

    • Amazon Elastic Container Service for Kubernetes: install and manage Kubernetes clusters

      • Integrated with many AWS services
      • Elastic load balancing
      • IAM authentication
      • Amazon VPC for isolation
      • AWS PrivateLink for private network access
      • AWS CloudTrail for logging
    • Benefits

      • Fully managed and highly available
      • Secure
      • Fully compatible with Kubernetes Community Tools

Installing the Kubernetes Command Line

The Kubernetes command-line (CLI) tool is called kubectl (pronounced: cube-cuttle) and interacts with any Kubernetes cluster, not just a local one. If you don’t want a local Kubernetes installation but still want to operate a Kubernetes cluster, kubectl is all you’ll use. Run the following command to verify the installation:

kubectl version

Installing locally

Start by installing Docker. Kubernetes does support other container systems, but I think using Docker is the easiest way to learn Kubernetes.

Depending on your OS, I’d recommend a specific install method:

  • Linux: install minikube. You’ll need a hypervisor like VirtualBox or KVM.
  • Windows or Mac: latest version of Docker for desktop. You might need to enable Kubernetes if you are already using the newest version.

Verify that Kubernetes is running with the following command:

kubectl get svc

Core concepts

Use the API via CLI to tell Kubernetes how your application should run.

  • Objects: way to set a desired state, defined in a YAML format.
  • Pods: the smallest compute unit object you create in Kubernetes, which groups containers (1 container = 1 responsibility) for performance and co-scheduling purposes. Containers in a pod share the same networking, the same storage, and are scheduled on the same host.

Define the pod by creating a YAML file definition named pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  containers:
  - name: helloworld
    image: christianhxc/helloworld:1.0
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 50m
      limits:
        cpu: 100m

Then create it by running the following command:

kubectl apply -f pod.yaml

Finally verify that the pod is running:

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
helloworld 1/1 Running 0 41s

If for some reason the pod dies, you don’t have to tell Kubernetes to create the pod again. Kubernetes will recreate one automatically because the pod’s state is not the same as the one you defined.

Deployments

  • Pods are mortal, and that means that your application is not resilient.
  • A deployment object is how you can give immortality to pods. You define how many pods you want to have running all the time, the scaling policies, and the policy for zero-downtime deployments.
  • If a pod dies, Kubernetes will spin up a new pod. Kubernetes continually verifies that the desired state matches with the object’s definition.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: christianhxc/helloworld:1.0
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 50m
          limits:
            cpu: 100m

Create the deployment object:

$ kubectl apply -f deployment.yaml
deployment.apps/helloworld created

Verify that the deployment exists:

$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
helloworld 3 3 3 3 40s

You’ll see that now you have three pods running plus the one we created previously.

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
helloworld 1/1 Running 0 8h
helloworld-75d8567b94-7lmvc 1/1 Running 0 24s
helloworld-75d8567b94-n7spd 1/1 Running 0 24s
helloworld-75d8567b94-z9tjq 1/1 Running 0 24s
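
As a follow-up sketch, the desired replica count can also be changed on the fly without editing the YAML (assuming the helloworld deployment above):

kubectl scale deployment helloworld --replicas=5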

How to kill a pod

Killing a pod actually restarts it, as it will be recreated to match the deployment definition. To update the container image, you need to update deployment.yaml and re-apply it with the apply command.

kubectl apply -f deployment.yaml
kubectl delete pod helloworld-75d8567b94-7lmvc

Services

You need to expose pods via the service object to communicate with them. Pods in the cluster can communicate with each other by using internal IP addresses, but you can’t rely on an IP to reach a pod because pod IPs change dynamically, which is why the solution is a service.

A service then is a representation of a load balancer for pods. You can configure the load balancing algorithm, and if Kubernetes is integrated with a cloud provider, you’ll use the native load balancers from the cloud provider.

Example service.yaml using labels and selectors:

apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: helloworld

Create the service:

$ kubectl apply -f service.yaml
service/helloworld created

To verify that the service is running, and to get the IP address to test it:

$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
helloworld LoadBalancer 10.105.139.72 localhost 80:30299/TCP 21s
kubernetes ClusterIP 10.96.0.1 443/TCP 12h

You can test the container image with http://localhost/api/values.

Functional interfaces

  • static: behave like final methods
  • default: similar to abstract methods: “default behaviour, but it can be overridden”
public interface Vehicle {
    static String producer() {
        return "N&F Vehicles";
    }

    default String getOverview() {
        return "ATV made by " + producer();
    }
}

Stream API

  • Reference to static methods
// with lambdas
boolean isReal = list.stream().anyMatch(u -> User.isRealUser(u));
// avoiding lambdas
boolean isReal = list.stream().anyMatch(User::isRealUser);
  • Reference to an instance method
long count = list.stream().filter(String::isEmpty).count();
  • Reference to a constructor
Stream<User> stream = list.stream().map(User::new);

Optional as null-pointer protection

  • Definition

    Optional<String> optional = Optional.of("value");
  • Simplifying conditionals

    // conditional
    List<String> listOpt = list != null ? list : new ArrayList<>();
    // optional
    List<String> listOpt = getList().orElse(new ArrayList<>());
    List<String> listOpt = getList().orElseGet(ArrayList::new);
  • Handling nullables with default values

    Optional<OptionalUser> optionalUser = Optional.ofNullable(getOptionalUser());
    String result = optionalUser
    .flatMap(OptionalUser::getAddress)
    .flatMap(OptionalAddress::getStreet)
    .orElse("not specified");
  • Handling nullable exceptions

    String value = null;
    Optional<String> valueOpt = Optional.ofNullable(value);
    String result = valueOpt.orElseThrow(CustomException::new).toUpperCase();

Apache Maven basics

Basic concepts

Introduction

  • Automates the project build process
  • Alternative to ANT (older) or Gradle (newer)
  • A Maven repository stores Maven packages on a remote mirror
| Command | Action |
| --- | --- |
| mvn -v | Get Maven version info |
| mvn archetype:generate -DgroupId=org.test -DartifactId=test-maven -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false | Create a project for a group, with a name, using an archetype |
| mvn <phase> | Execute a phase |
| mvn <phase> -P <profile> | Execute a phase for a certain profile |

How does it work?

graph LR;
A[Java project]
B[Spring library];
C[Servlet library];
D[pom.xml];
E((compile and build));
F(MAVEN install);
G[Java apps]
H(Maven local repository)
I(Maven central repository)
A --> E;
E --> G;
E --> F;
F --> E;
H --> F;
I --> F;
B --> D;
C --> D;
D --> F;

Lifecycles

  • Lifecycle: flow to build software
  • Phase: step in the lifecycle flow
  • Goal: granular task in a phase
  • Plugin: group of related goals

pom.xml content

  • Content subsets

    • POM relationships
    • General product information
    • Build settings
    • Build environment
  • Goals and phases (can be configured via plugins)

    • Goal: what should Maven do?
    • Phase: execute a phase and those before it
| Phase | Action |
| --- | --- |
| clean | Removes the artifacts from previous builds |
| validate | Validates that the needed elements are present |
| compile | Compiles the code |
| test | Launches the unit tests |
| site | Generates the documentation (e.g. Javadoc) |
| package | Packages the compiled code (e.g. into a JAR) |
| integration-test | Deploys the package into an environment and runs the integration tests |
| verify | Quality validations (e.g. SONAR rules) |
| install | Installs into the local repository to resolve dependencies |
| deploy | Installs an integration or release version into a remote repository |
  • Dependencies and scopes
    • Packages may reference other packages
    • The scope defines in which phase that dependency is needed
    • Dependencies can be excluded via <exclusions><exclusion></exclusion></exclusions>
| Scope | Action |
| --- | --- |
| compile | Needed for the compile phase. Includes the packages |
| test | Needed for the test phase. Doesn’t include the packages |
| provided | Needed for the compile phase, but not added for the execution phase |
| system | Needed for the compile phase, added as a path |
| import | Includes an artifact with pom format |
  • Build plugins
| Plugin | Action |
| --- | --- |
| surefire | Launches the unit tests |
| checkstyle | Checks the source style |
| clover | Evaluates the code coverage |
| enforcer | Verifies environment settings |
| assembly | Creates ZIP files and other packages with their dependencies (JARs) |
  • Properties
| Property | Description |
| --- | --- |
| ${env.PATH} | OS environment variable |
| ${project.groupId} | Project group identifier |
| ${project.artifactId} | Project artifact identifier |
| ${project.basedir} | Path of the pom.xml file (project base directory) |
| ${settings.localRepository} | Path of the local user repository |
| ${java.home} | Java system property |
| ${java.vendor} | JRE provider |
| ${my.somevar} | User-defined property |
  • Profiles configuration: provides Maven configuration which can be activated via command line (or triggered automatically)

    <project>
      <profiles>
        <profile>
          <id>YourProfile</id>
          <dependencies>
            <dependency>
              <groupId>com.yourcompany</groupId>
              <artifactId>yourlib</artifactId>
            </dependency>
          </dependencies>
        </profile>
      </profiles>
    </project>
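
    A minimal usage sketch: activating the profile above from the command line (YourProfile is the id defined in the snippet):

    mvn package -P YourProfile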
  • Reporting configuration

    <reporting>
      <plugins>
        <plugin>
          <artifactId>maven-javadoc-plugin</artifactId>
        </plugin>
      </plugins>
    </reporting>

POM basic example

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/maven-v4_0_0.xsd">
  <!-- POM relationships -->
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.everis</groupId>
  <artifactId>test-maven</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>test-maven</name>
  <url>http://maven.apache.org</url>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
      <exclusions>
        <exclusion>
          <artifactId>hamcrest-core</artifactId>
          <groupId>org.hamcrest</groupId>
        </exclusion>
      </exclusions>
    </dependency>
  </dependencies>

  <repositories>
    <repository>
      <id>Central</id>
      <url>http://mvnrepository.com</url>
    </repository>
  </repositories>
</project>

Advanced projects

Modular projects

A parent project may contain multiple modules

  • On a module project you need to set the parent data

    <parent>
      <groupId>org.test.mygroup</groupId>
      <artifactId>my-parent</artifactId>
      <version>2.0</version>
      <relativePath>../my-parent</relativePath>
    </parent>
  • On a parent, you need to specify the modules in the right compilation order

    <modules>
      <module>util</module>
      <module>app</module>
    </modules>
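
A minimal usage sketch, assuming the util and app modules above: building from the parent directory builds every module in the declared order, and a single module can be built together with the local modules it depends on:

# from the parent directory: build all modules in order
mvn install

# build only the app module plus the local modules it depends on
mvn -pl app -am install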

Source Control Manager (SCM)

System for tracking changes to source code

graph TD;
A[Remote repository];
B[User_1 local working copy];
C[User_2 local working copy];
A --> B;
A --> C;

Advantages

  • Reliable
  • Backup
  • Parallel development of different issues on the same project

Operations

  • update (update the local version with the server version)
  • checkout (download a file to local in order to modify it)
  • commit/check in (add to the server)
  • discard/revert (undo local changes)

Git

Definition

  • Official Git page
  • The system is distributed
  • Free/Libre Software
  • Branches (origin/master): the HEAD points to the current working branch
graph TD;
A[Remote repository];
B[User_1 local repository];
C[User_1 local working copy];
D[User_2 local repository];
E[User_2 local working copy];
A --> B;
B --> C;
A --> D;
D --> E;

Operations

  • clone
  • fetch
  • stage
  • commit
  • push

Commands

| Command | Action |
| --- | --- |
| git init | Initialize a local repository |
| git remote add origin <url_remote_repo> | Link the local repository to a remote repository |
| git config [--global] user.name "UserNameRemote" | Configure user credentials |
| git config [--global] user.email "myMail@mail.com" | Configure the email for the repository service |
| git config [--global] user.name | Check the configuration |
| git tag 1.0.0 <commitId> | Name a status on the repository |
| git tag | Retrieve the list of tags |
| git status | Check the latest status |
| git clone userName@host:/repository/path | Clone a remote repository |
| git add myFile.txt | Add a file to the stage (version control) |
| git commit -m "My first commit" | Persist changes on the local repository |
| git push origin master | Persist the changes stored in the local repository to the remote repository |
| git pull origin master | Retrieve the code from the remote repository and merge it with the local branch |
| git branch myIssueFix | Create an alternative path (branch) for your data (use git checkout to move the HEAD onto it) |
| git checkout myPartnerFix | Change the HEAD to the selected branch |
| git merge <myBranchContentNeeded> | Merge the content of the given branch into the current branch |
| git request-pull <start> <url> [<end>] | Generate a summary of pending changes |

Useful alias to render logging

[alias]
lg = log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit
lg2 = log --graph --abbrev-commit --decorate --format=format:'%C(bold blue)%h%C(reset) - %C(bold cyan)%aD%C(reset) %C(bold green)(%ar)%C(reset)%C(bold yellow)%d%C(reset)%n'' %C(white)%s%C(reset) %C(dim white)- %an%C(reset)' --all