Install

Writing slides

  • Pages can be split with horizontal rulers
    # Slide 1
    foo

    ---

    # Slide 2
    bar

Directives

  • Insert front matter at the top of the markdown
    ---
    theme: default
    ---
  • You may also set directives with an HTML comment anywhere
    <!-- theme: default -->
  • Directives apply from the page where they are defined onwards.

  • If a directive name starts with an underscore, it only applies to the current page.

Global

Directive        Action
theme            choose a theme (built-in: default, uncover and gaia)
size             choose the slide size (16:9 or 4:3)
headingDivider   split slide pages before the specified heading levels

---
theme: gaia
size: 4:3
---

Local

Directive            Action
paginate             show page numbers when set to true
header               specify the header contents
footer               specify the footer contents
class                set the HTML class for the current slide (invert inverts the color scheme)
color                set the text color
backgroundColor      set the background color
backgroundImage      background-image style
backgroundPosition   background-position style
backgroundRepeat     background-repeat style
backgroundSize       background-size style

---
backgroundColor: black
---

# Slide 1

---

<!-- backgroundColor: yellow -->
# Slide 2

---

<!-- _color: white -->
# Slide 3

---

# Slide 4

Image syntax

  • You can resize an image with the width (w) and height (h) keywords.
    ![width:100px height:100px](image.png)
  • You can add CSS filters such as blur and sepia.
    ![blur sepia:50%](image.png)
  • You can define background images (you can add vertical to change the alignment).
    ![bg opacity](image.png)
    ![bg blur:3px](image2.png)
  • You can split the background (e.g. text left, image right).
    ![bg right](image.png)

Fragmented list

# Bullet list
- One
- Two
- Three

---

# Fragmented list

* One
* Two
* Three

Math typesetting

  • KaTeX or pandoc
    $ax^2+bx+c$
    $$I_{x}=\int\int_R y^2f(x,y)\cdot{}dydx$$

Autoscaling

  • Some themes support auto-fitting headers via the fit comment.
    ## <!-- fit --> Auto-fitting header

Themes

CSS

Marp uses <section> as the container of each slide.

/* @theme my-custom-theme */

@import 'default';

section {
  /* Specify slide size */
  width: 960px;
  height: 720px;
}

h1 {
  font-size: 30px;
  color: #c33;
}

Tweaks on Markdown

  • You may use the style tag
    ---
    theme: default
    ---

    <style>
    section {
      background: yellow;
    }
    </style>

    The background is yellow now
  • You may assign a custom class to a slide and style it from the theme CSS
    <!-- _class: custom-class -->

Scoped style

  • One-shot styling applied only to the current slide
    <style scoped>
    a {
      color: green;
    }
    </style>
    ![Green link](https://marp.app)

Bangs

!Bangs are shortcuts that let you target searches at specific sites. If DuckDuckGo is your browser's default search engine, you can use them directly from the address bar.

General

Apps

Android

Haskell

Java

PHP

Python

Ruby

Repositories

SysAdmin

Humor

Binary conversion

Type a binary number and add the word decimal.

Calendar

  • Simple calendar: type calendar.
  • Calendar from date: type calendar 17 August 1345.

Change casing

Type lowercase or uppercase in front of your search.

Cheatsheets

Add cheatsheet at the end of your search.

Colour codes

Type color codes

Generate lorem ipsum text

Type “lorem ipsum” in the browser.

Generate passwords

Type password 50, and it will automatically generate a secure 50-character password.

Generate passphrases

Type random passphrase.

HTML codes

Type html chars.

Links

  • Expand: type expand in front of a shortened link.
  • Shorten: type shorten in front of a long link.

QR Codes

Type qr https://angelesbroullon-codenotepad.statichost.eu/.

Social media information

Type “@” before a user or organization.

Stopwatch

Type stopwatch.

URL encoding

Type url encode.

Type annotations

Since Python 3.6 there is support for annotations of variable types, class fields, function arguments, and return values. However, using them breaks backwards compatibility with older Python versions. If you want to keep it, write the types in docstrings instead.
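For instance, a minimal sketch of the docstring fallback, assuming the Google docstring style shown later in this document (the function is purely illustrative):

def add(a, b):
    """Add two integers.

    Args:
        a (int): The first operand.
        b (int): The second operand.

    Returns:
        int: The sum of a and b.
    """
    return a + b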

Basics

  • Variable annotations are written with a colon after the identifier. This can be followed by value initialization.

    price: int = 5
    title: "str"
  • Function parameters are annotated in the same way as variables, and the return value is specified after the arrow -> and before the trailing colon.

    def func(a: int, b: float) -> str:
        a: str = f"{a}, {b}"
        return a
  • For class fields, annotations must be specified explicitly when the class is defined. Analyzers can also infer them automatically from the __init__ method, but in that case they will not be available at runtime (a runtime check follows the example below):

    class Book:
        title: "str"
        author: str

        def __init__(self, title: str, author: str) -> None:
            self.title = title
            self.author = author

    b: Book = Book(title="Fahrenheit 451", author="Bradbury")
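
Explicitly declared class annotations remain available at runtime through __annotations__; a minimal sketch using the Book class above:

    print(Book.__annotations__)
    # {'title': 'str', 'author': <class 'str'>}
    # Attributes assigned only inside __init__ (with no class-level
    # annotation) do not show up here.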

Built-in types (sub-modules)

Optional

from typing import Optional

amount: int
amount = None
# ERROR: Incompatible types in assignment
# (expression has type "None", variable has type "int")

price: Optional[int]
price = None
# It will work!

Any

from typing import Any

# Any does not restrict the possible types
some_item: Any = 1
print(some_item)
print(some_item.startswith("hello"))
print(some_item // 0)

# object may have some issues, avoid it
some_object: object
print(some_object)
print(some_object.startswith("hello"))
# ERROR: "object" has no attribute "startswith"
print(some_object // 0)
# ERROR: Unsupported operand types for // ("object" and "int")

Union

from typing import Union

# allow only some types
def hundreds(x: Union[int, float]) -> int:
    return (int(x) // 100) % 100

hundreds(100.0)
hundreds(100)
hundreds("100")
# ERROR: Argument 1 to "hundreds" has incompatible type "str";
# expected "Union[int, float]"

# Also, `Optional[T]` is equivalent to `Union[T, None]`

Collections

See PEP484 - Generics.

  • Generics

    from typing import Mapping, Set

    # Employee is assumed to be a class defined elsewhere
    def notify_by_email(
        employees: Set[Employee],
        overrides: Mapping[str, str]
    ) -> None: ...
  • Generics with parameters

    from typing import Sequence, TypeVar

    # Declare type variable
    T = TypeVar('T')

    # Generic function
    def first(l: Sequence[T]) -> T:
        return l[0]
  • User defined

    from typing import TypeVar, Generic, Iterable
    from logging import Logger

    T = TypeVar('T')

    class LoggedVar(Generic[T]):
        def __init__(self, value: T, name: str, logger: Logger) -> None:
            self.name = name
            self.logger = logger
            self.value = value

        def set(self, new: T) -> None:
            self.log('Set ' + repr(self.value))
            self.value = new

        def get(self) -> T:
            self.log('Get ' + repr(self.value))
            return self.value

        def log(self, message: str) -> None:
            self.logger.info('{}: {}'.format(self.name, message))

    def zero_all_vars(vars: Iterable[LoggedVar[int]]) -> None:
        for var in vars:
            var.set(0)

Lists

from typing import List

titles: List[str] = ["hello", "world"]
titles.append(100500)
# ERROR: Argument 1 to "append" of "list" has incompatible type "int";
# expected "str"

titles = ["hello", 1]
# ERROR: List item 1 has incompatible type "int"; expected "str"

items: List = ["hello", 1]
# Everything is good!

Note: there are similar annotations for sets: typing.Set and typing.FrozenSet.
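
A minimal sketch of those set annotations (the mypy error text is paraphrased):

from typing import FrozenSet, Set

tags: Set[str] = {"python", "typing"}
tags.add(42)
# ERROR: incompatible type "int"; expected "str"

frozen_tags: FrozenSet[str] = frozenset(tags)
# Everything is good!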

Tuples

from typing import Tuple

price_container: Tuple[int] = (1,)
price_container = "hello"
# ERROR: Incompatible types in assignment (expression has type "str",
# variable has type "Tuple[int]")

price_container = (1, 2)
# ERROR: Incompatible types in assignment (expression has type "Tuple[int, int]",
# variable has type "Tuple[int]")

price_with_title: Tuple[int, str] = (1, "hello")
# Everything is good!

# you can use an ellipsis to define an "unknown number of elements"
prices: Tuple[int, ...] = (1, 2)
prices = (1,)
prices = (1, "str")
# ERROR: Incompatible types in assignment (expression has type
# "Tuple[int, str]", variable has type "Tuple[int, ...]")

# no type specification is equal to Tuple[Any, ...]
something: Tuple = (1, 2, "hello")
# Everything is good!

Dictionaries

from typing import Dict

# key and value types are specified separately
book_authors: Dict[str, str] = {"Fahrenheit 451": "Bradbury"}
book_authors["1984"] = 0
# ERROR: Incompatible types in assignment
# (expression has type "int", target has type "str")

book_authors[1984] = "Orwell"
# ERROR: Invalid index type "int" for "Dict[str, str]";
# expected type "str"

There are also typing.DefaultDict and typing.OrderedDict.
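
A minimal sketch of both (the variable names are purely illustrative):

import collections
from typing import DefaultDict, OrderedDict

# missing keys get a default value of int() == 0
word_counts: DefaultDict[str, int] = collections.defaultdict(int)
word_counts["spam"] += 1

# an insertion-ordered mapping
ranking: OrderedDict[str, int] = collections.OrderedDict([("first", 1), ("second", 2)])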

Function execution results

  • Function returns nothing:

    def nothing(a: int) -> None:
        if a == 1:
            return
        elif a == 2:
            return
        elif a == 3:
            # No return value expected
            return ""
        else:
            pass
  • Function never returns control (e.g. sys.exit)

    from typing import NoReturn

    def forever() -> NoReturn:
        while True:
            pass
  • Generator function (its body contains a yield statement); see also the typing.Generator sketch after this example

    from typing import Iterable

    def generate_two() -> Iterable[int]:
        yield 1
        yield "2"
        # ERROR: Incompatible types in "yield"
        # (actual type "str", expected type "int")
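
If a generator also returns a final value or accepts values via send(), typing.Generator can describe all three type parameters; a minimal sketch (countdown is purely illustrative):

    from typing import Generator

    # Generator[YieldType, SendType, ReturnType]
    def countdown(n: int) -> Generator[int, None, str]:
        while n > 0:
            yield n
            n -= 1
        return "done"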

Documentation

README file

Inline documentation

  • Use the Google docstring format.
    """Module for Google style docstrings example.

    This module demonstrates documentation as specified by the `Google Python
    Style Guide`_. Docstrings may extend over multiple lines. Sections are created
    with a section header and a colon followed by a block of indented text.

    Example:
    Examples can be given using either the ``Example`` or ``Examples``
    sections. Sections support any reStructuredText formatting, including
    literal blocks::

    $ python example_google.py

    Section breaks are created by resuming unindented text. Section breaks
    are also implicitly created anytime a new section starts.

    Attributes:
    module_level_variable1 (int): Module level variables may be documented in
    either the ``Attributes`` section of the module docstring, or in an
    inline docstring immediately following the variable.

    Either form is acceptable, but the two should not be mixed. Choose
    one convention to document module level variables and be consistent
    with it.

    Todo:
    * For module TODOs
    """
    module_level_variable1 = 12345
    module_level_variable2 = 98765

    def function_with_types_in_docstring(param1, param2):
    """Example function with types documented in the docstring.

    `PEP 484`_ type annotations are supported. If attribute, parameter, and
    return types are annotated according to `PEP 484`_, they do not need to be
    included in the docstring:

    Args:
    param1 (int): The first parameter.
    param2 (str): The second parameter.

    Returns:
    bool: The return value. True for success, False otherwise.

    .. _PEP 484:
    https://www.python.org/dev/peps/pep-0484/
    """
    return param1 == param2

Generate code doc

  • Install pdoc3.
    python -m pip install pdoc3
  • Run pdoc.
    python -m pdoc --html --output-dir .doc src_file.py

Convert to docx

  • Install Pandoc.

  • Add a metadata header to the *.md file; it must be in YAML format:

    ---
    title: "Main Title"
    subtitle: "Sub Title"
    author: [Angeles Broullón]
    date: "2020-10-13"
    keywords: [Markdown]
    ...

    <!-- \newpage -->
  • Convert the document

    # no template
    pandoc README.md -o output-readme.docx
    # with template, you must generate one first
    pandoc -o custom-reference.docx --print-default-data-file reference.docx
    pandoc -t docx README.md --reference-doc=template.docx -o output-template.docx

    You may check other templates in the Pandoc LaTeX Template project

  • Hack for adding page breaks

    • You will need a Lua script (pagebreak.lua) in the pandoc folder, so you can use <!-- \newpage --> in your document to split it into pages without further issues.
      --- Return a block element causing a page break in the given format.
      local function newpage(format)
        if format == 'docx' then
          local pagebreak = '<w:p><w:r><w:br w:type="page"/></w:r></w:p>'
          return pandoc.RawBlock('openxml', pagebreak)
        elseif format:match 'html.*' then
          return pandoc.RawBlock('html', '<div style=""></div>')
        elseif format:match 'tex$' then
          return pandoc.RawBlock('tex', '\\newpage{}')
        elseif format:match 'epub' then
          local pagebreak = '<p style="page-break-after: always;"> </p>'
          return pandoc.RawBlock('html', pagebreak)
        else
          -- fall back to insert a form feed character
          return pandoc.Para{pandoc.Str '\f'}
        end
      end

      -- Filter function called on each RawBlock element.
      function RawBlock (el)
        -- check that the block is TeX or LaTeX and contains only \newpage or
        -- \pagebreak.
        if el.text:match '\\newpage' then
          -- use format-specific pagebreak marker. FORMAT is set by pandoc to
          -- the targeted output format.
          return newpage(FORMAT)
        end
        -- otherwise, leave the block unchanged
        return nil
      end
    • Call pandoc with extra parameter
      pandoc -t docx README.md --reference-doc=custom-reference.docx \
      -o output-template.docx --highlight-style=breezedark \
      --lua-filter=pagebreak.lua

Minify code

  • Install pyminifier.
    python -m pip install pyminifier
  • Reduce the size and remove the documentation.
    pyminifier -o "dist/min.py" "MY_FILE.py"

General script

  • You may put all these steps together in a bash script
    python -m pdoc --html --output-dir .doc src_file.py
    pandoc -t docx README.md --reference-doc=custom-reference.docx \
    -o output-template.docx --highlight-style=breezedark
    python -m pyminifier -o "dist/min.py" "MY_FILE.py"

Extra: .docx to .md

  • You may reverse the operation, extracting all the images.
    pandoc -o "output.md" --extract-media="." "inputFile.docx"

Assumptions

  1. The script-under-test is simple and short and does not require its own file system (mock it or run in Docker)
  2. Commands to be mocked are not addressed by their full path (such scripts can usually be refactored to utilise $PATH)

Set up

Project structure

  • Tree
    workspace
    ├── script.sh
    └── shunit2
        └── test_script.sh

Running tests

  • Run

    bash shunit2/delete_stack.sh
  • To locate the script under test, add a line like this at the start of every test file:

    script_under_test=$(basename "$0")

Installing shUnit2

  • You may take it from the master branch using something like this:
    curl \
    https://raw.githubusercontent.com/kward/shunit2/6d17127dc12f78bf2abbcb13f72e7eeb13f66c46/shunit2 \
    -o /usr/local/bin/shunit2

Mocks

  • The script can be divided into

    • commands related to the internal logic of the script
    • commands related to the external behaviour of the script (things it changes or whatever it actually does)
      • The commands_log created by the mocks can be queried to make assertions about the script’s actual behaviour
  • A simple mock that just silently intercepts and logs the inputs passed into it (copy/paste style) looks like:

    # ${FUNCNAME[0]} in Bash is the name of a function
    chmod() {
        echo "${FUNCNAME[0]} $*" >> commands_log
    }

    chown() {
        echo "${FUNCNAME[0]} $*" >> commands_log
    }
  • More complicated mocks can respond with fake responses:

    some_command() {
        echo "${FUNCNAME[0]} $*" >> commands_log
        case "${FUNCNAME[0]} $*" in
            "${FUNCNAME[0]} some_arg_a some_arg_b") echo some_response_1 ;;
            "${FUNCNAME[0]} some_arg_c some_arg_d") echo some_response_2 ;;
        esac
    }
  • The tearDown function provided by shUnit2 is later expected to clean up the commands_log

    tearDown() {
        rm -f commands_log
    }

Learn by example

Cloudformation

  • Cloudformation script

    #!/usr/bin/env bash

    usage() {
        echo "Usage: $0 STACK_NAME S3_BUCKET"
        exit 1
    }

    delete_all_artifacts() {
        aws ec2 delete-key-pair \
            --key-name "$stack_name"
        aws s3 rm --recursive --quiet \
            s3://"$s3_bucket"/deployments/"$stack_name"
    }

    resume_all_autoscaling_processes() {
        asgs=$(aws cloudformation describe-stack-resources \
            --stack-name "$stack_name" \
            --query \
            'StackResources[?ResourceType==`AWS::AutoScaling::AutoScalingGroup`].PhysicalResourceId' \
            --output text)

        for asg in $asgs
        do
            aws autoscaling resume-processes \
                --auto-scaling-group-name "$asg"
        done
    }

    [ $# -ne 2 ] && usage
    # validates the inputs
    read -r stack_name s3_bucket <<< "$@"

    # deletes a key pair and deployment artifacts
    delete_all_artifacts
    # resumes any suspended processes in auto-scaling groups
    resume_all_autoscaling_processes

    # deletes the CloudFormation stack specified in the inputs
    aws cloudformation delete-stack \
        --stack-name "$stack_name"
  • Designing the tests

    • Test cases

      • a usage message is expected if incorrect inputs are passed
      • for a stack with no auto-scaling groups:
        • key pairs and deployment artifacts are expected to be deleted
        • aws cloudformation delete-stack should be issued
      • for a stack with multiple auto-scaling groups:
        • a resume-processes command should be issued for each auto-scaling group
      • Issue: it doesn’t try to handle a non-existent S3 bucket and a non-existent CloudFormation stack. So we can:
        • document this as a known issue
        • fix the script to be more defensive (recommended!)
        • write tests to test for and demonstrate the known issue
    • Structure of the tests on shunit2/delete_stack.sh

      1. the variable $script_under_test as mentioned above
      2. a mocks section to replace commands that make calls to AWS
      3. a more general setup/teardown section
      4. some test cases, being the shell functions whose names start with test*
      5. the final call to shUnit2 itself
  • Test cases

    #!/usr/bin/env bash

    # section 1 - the script under test
    script_under_test=$(basename "$0")

    # section 2 - the mocks
    aws() {
        echo "aws $*" >> commands_log
        case "aws $*" in
            "aws ec2 delete-key-pair --key-name mystack") true ;;
            "aws s3 rm --recursive --quiet s3://mybucket/deployments/mystack") \
                true ;;

            "aws cloudformation describe-stack-resources \
                --stack-name mystack \
                --query "'StackResources[?ResourceType== \
                `AWS::AutoScaling::AutoScalingGroup`].PhysicalResourceId'" \
                --output text")
                echo mystack-AutoScalingGroup-xxxxxxxx
                ;;

            "aws autoscaling resume-processes \
                --auto-scaling-group-name mystack-AutoScalingGroup-xxxxxxxx")
                true
                ;;

            "aws cloudformation delete-stack --stack-name mystack") true ;;
            *) echo "No response for >>> aws $*" ;;
        esac
    }

    # section 3 - other setup or teardown
    tearDown() {
        rm -f commands_log
        rm -f expected_log
    }

    # section 4 - the test cases
    testSimplestExample() {
        . "$script_under_test" mystack mybucket

        cat > expected_log <<'EOF'
    aws ec2 delete-key-pair --key-name mystack
    aws s3 rm --recursive --quiet s3://mybucket/deployments/mystack
    aws cloudformation describe-stack-resources --stack-name mystack \
    --query StackResources[?ResourceType== \
    `AWS::AutoScaling::AutoScalingGroup`].PhysicalResourceId --output text
    aws autoscaling resume-processes \
    --auto-scaling-group-name mystack-AutoScalingGroup-xxxxxxxx
    aws cloudformation delete-stack --stack-name mystack
    EOF

        # diff -wu ensures that during failures, a nice readable unified diff
        # of "expected" compared to "actual" is seen
        assertEquals "unexpected sequence of commands issued" \
            "" "$(diff -wu expected_log commands_log | colordiff | DiffHighlight.pl)"
    }

    # section 5 - the call to shUnit2 itself
    . shunit2

Bad inputs

  • Test cases
    # check if the input on method call is wrong
    testBadInputs() {
        # STDOUT is captured using command substitution $( ... )
        actual_stdout=$(. "$script_under_test" too many arguments passed)
        assertTrue "unexpected response when passing bad inputs" \
            "echo $actual_stdout | grep -q ^Usage"
    }

No auto-scaling

  • Additional mocks

    aws() {
        ...
        # responses for myotherstack
        "aws ec2 delete-key-pair --key-name myotherstack") true ;;
        "aws s3 rm --recursive --quiet s3://mybucket/deployments/myotherstack") \
            true ;;

        "aws cloudformation describe-stack-resources \
            --stack-name myotherstack \
            --query "'StackResources[?ResourceType== \
            `AWS::AutoScaling::AutoScalingGroup`].PhysicalResourceId'" \
            --output text")
            ## Manual tests show that this command returns an empty string for this
            echo ""
            ;;

        "aws cloudformation delete-stack --stack-name myotherstack") true ;;
    }
  • Test cases

    testNoASGs() {
        . "$script_under_test" myotherstack mybucket
        assertFalse "a resume-processes command was unexpectedly issued" \
            "grep -q resume-processes commands_log"
    }

Running the tests

  • Command plus output
    bash shunit2/test_delete_stack.sh