Python RESTful APIs running on containers, the easy way

Writing a public article can be a really difficult task. Trying to connect with readers of all kinds is tricky, and offering something worth reading to both advanced and entry-level users, bearing in mind how fast technology ecosystems change, is definitely a challenge. But, on the other hand, if nobody writes down his or her story, at the end of the day nothing gets written. So, I’ll try to describe here a personal situation I had some time ago. It may or may not fit today, but at least it was a real use case. Let’s start from the very beginning.

Table of Contents

  1. Why Python?
  2. Which tools do I use to program in Python?
  3. Microservices and Python
  4. What a microservice looks like in Python
  5. Structure of a Python microservice
  6. An example “echo” microservice in Python
  7. Testing and analysing the code
  8. Dockerfile: the containerization process

Why Python?

For a security practitioner (which is my case), Python represents more than a simple interpreted language. Python is a Swiss Army knife. It is present everywhere: from code snippets, to Wireshark dissectors, to full GUI applications and front-ends.

If your skills allow you to tackle programming, or at least if you can review and modify code, Python is the screwdriver that lets you handle all the cloud/systems/applications/software/security machinery without a huge effort. Serve as an example the fact that AWS and GCP, among others, use it heavily to build their cloud offerings and tool chains. We even have it in use in-house, with Ansible, a configuration management and orchestration engine built on top of Python.

With the current market trends, which impose an API-first directive together with freedom of choice in the selection of the programming language, Python is probably one of the most flexible tools for an enterprise architect. It’s powerful, it’s relatively simple to learn (the learning curve is not so hard), it’s really well documented, it can be used bare, through micro-frameworks or with fat frameworks, and, surely the most important detail, it has a magnificent community around it.

Personally, and this is a matter of likes and dislikes, I prefer to use Python as bare as I can, but without losing my time on rather generic and very low-level actions. For that reason, and depending directly on the type of software piece that needs to be written, my first option is to use Python with a micro-framework. Among the available ones, my preference is Flask. Flask allows me to program Python at a very low level, as I want, but, at the same time, it lets me skip all the hard work needed for generic things, such as handling an HTTP or TCP connection. The connection is handled by the micro-framework, but if I want to interact with the session, I still can. That’s fantastic. It lets me concentrate on programming the business logic, and I only need to mess with the very low-level actions if I really want or need to.

WHICH TOOLS DO I USE TO PROGRAM IN PYTHON?

Probably one of the most asked questions: which tools do you use when you program? There’s no simple, single, valid answer here. Again, this is a matter of likes and dislikes, but, as I need to present the tools you are going to see later in this article’s screenshots, this is the list:

· Python interpreter, currently in version 3.7
· PyCharm Community IDE.
· Postman API Development Environment.
· Fork git client or Atlassian’s SourceTree git client.
· Sequel Pro (Mac) or SQLYog Community Edition (Windows, Linux).
· Firefox Developer Edition.
· Docker Community Edition, stable channel.

All the above tools are conscientiously selected, as (almost) all of them are multi-platform (Windows, Linux, Mac).

MICROSERVICES AND PYTHON

I know that you already know what a microservice is, but I would like to define it again in a less common way:

· Microservices are the highest exponent of a flexible software architecture.
· Microservices are the empiric demonstration that software can be delivered:
o Being strongly modularized.
o Providing easy replaceability capabilities.
o Enabling sustainable development efforts.
o Simplifying legacy software refactoring, rearchitecting, rebuilding and replacement.
o Speeding time to market (this is an interesting topic that could be worth an article of its own illustrating the reasons and procedures).
o Enabling independent scaling.
o Providing free of choice of technologies.
o Finally, all of the above, but on a Continuous Delivery manner.
· Microservices are typically developed by two-pizza teams (6–8 people at most).
· Microservices have no more than a few hundred lines of code, let’s say ~200 lines at most. We should not cling to this, but if you want to use the “Lines of Code” (LoC) metric to estimate how fast a microservice architecture deployment can be, as a rough example, consider that a good programmer typically writes in the range of 10 to 30 good lines of code per day. There’s an explanation for this: depending on the language, and especially with interpreted languages, a single line of Python carries about 40% more mental load than a single line of C, Java, JavaScript or CSS, and it probably implements at least twice as much functionality. I’ll summarize it for you: it’s blazing fast.
· Microservices are totally decoupled from one another. Databases, typically used in the software industry as a coupling mechanism, are here only an implementation detail.
· Microservices require a test-oriented development approach (like DDD, TDD, ATDD, BDD) to maximize their usability and reduce coding errors.
· Microservices can be versioned at will, as long as a common interface (an API) remains available.
· Microservices can be written in different languages. As long as they can interact with each other through a common interface (an API), there’s no problem.

WHAT A MICROSERVICE LOOKS LIKE IN PYTHON

With a clear dependence on the language in use, normally a microservice is exactly that: a “micro” program acting as a server or backend service. It can be as small as the following:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return "Hello, World!"

app.run(debug=True)

Let’s examine the above code:

from flask import Flask

This is a Python import directive. Flask is a micro-framework, that is, a set of Python files written by others that we want to import into our main program file. This line is necessary in order to use the Flask micro-framework.

app = Flask(__name__)

Here we declare that our app is going to be a Flask app. The special variable __name__ holds the name of the current module: the module name (in our case app, from app.py) when the file is imported, and ‘__main__’ when the file is run directly.
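As a quick illustration, here is a hypothetical two-line file (whoami.py is an invented name, used only to observe the behavior):

# whoami.py -- hypothetical file, for illustration only
print(__name__)  # prints '__main__' when run directly: python whoami.py
                 # prints 'whoami' when imported:       import whoami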

@app.route('/')

This is a decorator. Decorators are a technique professional programmers use to keep boilerplate off the screen, wrapping a function with extra behavior that lives in a library. In this case, the decorator belongs to the Flask micro-framework, and it binds a portion of the URL to the function whose code will be executed.

def index():

This statement defines a function. This function is going to be executed every time the URL route defined by the Flask decorator above is requested.

return "Hello, World!"

This is what is going to be sent back to the client when the route decorator path is called on a URL.

app.run(debug=True)

This is the statement that effectively runs the Python program defined above, in this case with debugging details printed to stdout and stderr.
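To try it, run the file and query it. Assuming the file is named app.py and Flask’s development server defaults are in effect (it listens on port 5000), the “Hello, World!” body comes straight from the index() function above:

$ python app.py
$ curl http://127.0.0.1:5000/
Hello, World!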

STRUCTURE OF A PYTHON MICROSERVICE

In contrast to traditional monoliths, microservices have a very simplistic approach. Most of them are programs contained in a single file, with a short list of calls to other programs or libraries (what is usually called “includes”) and with a rather simple folder structure, or even none at all.

The example program we are going to build is kept, for learning purposes, in a public repository on GitHub. The project is this:

https://github.com/ea1het/PythonAPIContainer

You can clone it using one of the following two command lines:

a) git clone https://github.com/ea1het/PythonAPIContainer.git
b) git clone git@github.com:ea1het/PythonAPIContainer.git

When you are programming larger applications on fat frameworks like Django, the folder structure and the application configuration file are key elements that will make your program behave smoothly; otherwise you would end up with a mess in the code. In that sense, and even though a microservice needs to be small in essence, I still think a good folder structure, if needed, and a configuration file are very convenient features. Let’s see how a microservice is organized and what a configuration file looks like.

a) Folder / File structure

Due to the fact that a microservice is small enough, in almost all cases the folder structure won’t exist. All the potential files in use will sit flat in a single folder or directory. This can be a problem or a virtue, depending on where you intend to run the code. For example, running a microservice in AWS as a Lambda function imposes the obligation to have the file with the main program in the application’s folder root. In contrast, a very typical pattern in Python is the use of application factories. An application factory is a useful pattern where you create your application using a function that returns the application object. This allows you to pass different configurations to the main program, for example for unit testing, and also allows you to run multiple instances of the same application in the same application process. That is a nice trick senior programmers tend to use to avoid circular imports and to reuse code and server resources, albeit it can be difficult to understand for novice programmers. Anyway, the important point here is the fact that Lambda functions and application factories are almost incompatible. That will explain design decisions taken further on; a minimal factory sketch follows.
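For illustration only, a minimal Flask application factory, which this project deliberately does not use, could look like this (the config_name parameter is a hypothetical example):

from flask import Flask

def create_app(config_name='Development'):
    """ Hypothetical application factory: builds and returns the app """
    app = Flask(__name__)
    app.config.from_object('settings.' + config_name)
    return app

# Each call produces an independent, differently-configured instance:
# app = create_app('Testing')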

./
├── .env
├── Dockerfile
├── __init__.py
├── app.py
├── boot.sh
├── requirements.txt
├── settings.py
└── venv

The above contents aren’t compulsory “as-is”. They just reflect the way I create a microservice. Let me explain the usage of each file:

  • .env: This is a very special file. It holds the configuration variables your program requires in order to start working. This file is special because just by changing its contents you can change the way your program behaves. If you configure this file properly, you can use your program in the QA stage, and with minor adjustments to this file alone your program can be ready to be deployed in production. The name reflects the configuration strategy followed, that is, environment variables. Environment variables are one of the “12 Factor App” principles of software development. Sensitive parameters should never be hardcoded in programs, and loading them from the environment is a clever way to avoid hardcoding. In addition, the name of the file is “.env”, with a dot (“.”) at the beginning. On UNIX-like systems, a file starting with a dot is a hidden file, and by near-universal convention a .env file is listed in .gitignore, so Git won’t stage it during development, avoiding the hassle of pushing sensitive credentials to a git repository.
  • Dockerfile: This file is the script that will convert this microservice written in Python into a container image that later on we can run in production with, e.g., Rancher Labs, our container orchestrator.

  • __init__.py: This is a very important file in the Python language. The __init__.py files are required to make Python treat directories as containing packages; this is done to prevent directories with a common name, such as “string”, from unintentionally hiding valid modules that occur later, deeper, on the module search path. In the simplest case, __init__.py can just be an empty file, or it may not even be needed (our case can be a good candidate to remove this file), but it can also be used to export selected portions of the package under a more convenient name, hold convenience functions, and so on (see the short sketch after this list). It can also execute initialization code for the package or set variables. If you remove the __init__.py file, Python will no longer look for submodules inside that directory, so attempts to import one of those submodules will fail.

  • app.py: This is the microservice itself. Here is where the business logic is coded.

  • boot.sh: This file holds the initialization script the microservice requires once it has been converted into a container. In a few words, this is the script that will be executed inside the container image once the container orchestrator, or Docker, calls it into execution.

  • requirements.txt: This file holds the list of dependencies your code has accumulated. Basically, it’s the list of additional packages your code requires in order to work as expected (see the short sketch after this list).

  • settings.py: This file is a configurator file. It holds the different execution profiles your program requires for, e.g., the development, QA or production stages.

  • venv/: This is a folder with Python’s virtual environment, where Flask and other dependencies are installed.
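Two of the files above deserve a quick sketch. A package __init__.py that re-exports a convenient name could look like this (mypackage and core are hypothetical names used only for illustration):

# mypackage/__init__.py -- hypothetical example
from .core import useful_function  # re-export under a shorter, nicer path
__all__ = ['useful_function']

And requirements.txt is typically generated from inside the virtual environment, then replayed elsewhere:

(venv) $ pip freeze > requirements.txt    # record exact dependency versions
(venv) $ pip install -r requirements.txt  # reproduce them on another machine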

b) Microservice configuration file in Python

As explained before, configuration files and application factories are quite typical practice for senior programmers. Unfortunately, in the scope of a microservice implementation, I don’t consider an application factory essential, given the fact that each microservice is going to run independently, in the form of a Lambda function or container, and is going to be executed several times under auto-scaled orchestration or choreography. For that reason, the application factory pattern per se is not in use, albeit a configuration file with that appearance is.

A good configuration file follows the “12 Factor App” methodology and principles, as the microservice itself should. Done the right way, it enables applications to be built with portability and resiliency when deployed to the Web. Other benefits of the “12 Factor App” are:

  • Uses declarative formats for setup automation, to minimize time and cost for new developers joining the project.
  • Have a clean contract with the underlying operating system, offering maximum portability between execution environments.
  • Are suitable for deployment on modern cloud platforms, obviating the need for servers and systems administration.
  • Minimize divergence or drift between development and production, enabling continuous deployment for maximum agility.
  • And can scale up without significant changes to tooling, architecture, or development practices.

Specifically, on configuration file matters, Factor III of the “12 Factor App” methodology states that “[…] configuration that varies between deployments should be stored in the environment […]”, so this is what we are going to do in our configuration file, given that, from an architecture standpoint, every piece of code will always traverse at least three different stages: 1/ development, 2/ test and 3/ production.

At this point, a clarification is needed: an application’s configuration is everything that is likely to vary between deploys. This includes, among possibly others:

  • Resource handles to the database, caches, and other backing services.
  • Credentials to external services such as Amazon S3 or AAA mechanisms.
  • Per-deploy values such as the canonical hostname for the deploy.

Applications sometimes store configuration as constants in the code. This is what is known as “hardcoding”, and it is a violation of basic security measures applied in development: accessing the code, whether in development, test or production, means compromising any credentials used, as well as the related systems. On the other hand, note that this definition of “configuration” does not include internal application configuration, or how code modules are connected to one another. That type of configuration does not vary between deploys; therefore, it is not what we want to control.
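The contrast is easy to see in two lines. The URI below is an invented example; the point is where the value lives:

# Hardcoded (bad): the credential travels with the source code
DATABASE_URI = 'mysql+pymysql://user:secret@127.0.0.1:3306/database'

# From the environment (good): the code ships, the secret stays behind
from os import environ
DATABASE_URI = environ.get('DATABASE_URI')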

The “12 Factor App” methodology proposes storing configuration in environment variables (often shortened to env vars or env). Env vars are easy to change between deploys without changing any code. Unlike configuration files, there is little chance of them being checked into the Git code repository accidentally (as a trick, they are normally kept in hidden files that .gitignore keeps out of Git’s sight), and, unlike custom configuration files or other configuration mechanisms such as Java System Properties, they are a language-agnostic and OS-agnostic standard. Env vars are granular controls, each fully orthogonal to the other env vars. Ideally, they are never grouped together as “environments”. This is a model that scales up smoothly as the application expands into more deploys over its lifetime. Albeit the general recommendation is good, working with thousands of developers at the same time imposes the obligation to standardize the environments used in the DevSecOps pipeline. Therefore, we’ll treat that last recommendation as just that, a recommendation, and our stance on env vars and grouped environments will be rather “relaxed”.

Finally, after all this long description, this is what a microservice configuration file looks like in Python. Pay special attention to the line that builds the path to the ‘.env’ file:

""" Microservice configurator file """
##
#
# This file configures the microservice.
#
# This microservice uses 'dotenv' (.env) files to get environment
# variables. This is done to configure sensitive parameters, like
# database connections, application secret keys and the like. In case
# the 'dotenv' file does not exist, an error is raised at run-time.
#
# This application implements the '12 Factor App' principles:
# https://12factor.net and https://12factor.net/config
#
# Note about PyLint static code analyzer: items disabled are false
# positives.
#
##
# pylint: disable=too-few-public-methods;
# In order to avoid false positives with Flask

from os import environ, path
from environs import Env

ENV_FILE = path.join(path.abspath(path.dirname(__file__)), '.env')

if path.exists(ENV_FILE):
    ENVIR = Env()
    ENVIR.read_env()
else:
    print('Error: .env file not found')
    exit(code=1)


class Config:
    """ This is the generic loader that sets common attributes """
    JSON_SORT_KEYS = False
    DEBUG = True
    TESTING = True


class Development(Config):
    """ Development loader """
    ENV = 'development'
    if environ.get('KEY_DEVL'):
        SECRET_KEY = ENVIR('KEY_DEVL')
    if environ.get('DATABASE_URI_DEVL'):
        DATABASE_URI = ENVIR('DATABASE_URI_DEVL')
    TESTING = False


class Testing(Config):
    """ Testing loader """
    ENV = 'testing'
    if environ.get('KEY_TEST'):
        SECRET_KEY = ENVIR('KEY_TEST')
    if environ.get('DATABASE_URI_TEST'):
        DATABASE_URI = ENVIR('DATABASE_URI_TEST')
    DEBUG = False


class Production(Config):
    """ Production loader """
    ENV = 'production'
    if environ.get('KEY_PROD'):
        SECRET_KEY = ENVIR('KEY_PROD')
    if environ.get('DATABASE_URI_PROD'):
        DATABASE_URI = ENVIR('DATABASE_URI_PROD')
    DEBUG = False
    TESTING = False

If you noticed the highlighted line above, it is a reference to a hidden file named “.env” (“dot” “env”). A “dot” “env” file is a file holding “env vars”. Get the trick? Remember I said a file starting with a “dot” is a hidden file, one that Git is told to ignore? Hey! You’ve got it now!

This “dot” “env” file, and not the “configuration file” listed before, is the real configuration file holding environment variables as the “12 Factor App” defines them. That’s the reason why I said before that the recommendation on env vars and grouped environments would be “relaxed” in most cases.

The content of this “dot” “env” file holding env vars looks, for example, like this:

#
#
# This is the configuration file for this microservice
#
# =================================================================
#
# The MODE_CONFIG variable will affect the way the application is run.
# Available running modes are:
#
# - Development
# - Testing
# - Production
#
export MODE_CONFIG='Production'
export KEY_PROD='example-0000-7890-aaaa-54a063e6d5a3'
export KEY_DEVL='example-1111-7890-bbbb-5ddc2947910c'
export KEY_TEST='example-2222-7890-cccc-0da6c41293a8'
export DATABASE_URI_PROD='mysql+pymysql://user:pass@127.0.0.1:3306/database'
export DATABASE_URI_DEVL='mysql+pymysql://user:pass@127.0.0.1:3306/database'
export DATABASE_URI_TEST='sqlite:///test_app.db'
export REDIS_CACHE_URI=''
export AWS_ACCESS_KEY=''
export AWS_SECRET_KEY=''

Before moving on to other related topics, I want to cycle back for a while to security aspects, the “12 Factor App” and env vars.

c) Is it secure to use env vars?

Answering sharply “yes” or “no”, without some supporting arguments, would, for me, be erroneous. Let me elaborate a bit on this matter, but also let me start my argumentation with a clear statement: a process’s environment is only accessible to the user that owns the process, and to root, of course.

Environment variables are plenty secure. The question itself is loosely related to the real concern behind using env vars, which is: what happens if the system is compromised? Well, in such a case, with the system already compromised, the use of env vars is the least of your problems. In fact, it’s funny, because even in that scenario there’s a silly security benefit of using env vars over configuration files and, amazingly, it is obscurity. Meaning that, if someone has gained root access, (s)he can get to both files and env vars, but env vars will be less apparent at a glance.

On a modern UNIX-like system, you can only access the data stored in an environment variable in 5 different situations, all of them with native controls:

1. The running environment of the process:

When the process is running, the environment variables of that process can be accessed through /proc/$PID/environ. However, only the user who owns the process, or root, can read that file.
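A minimal sketch of that mechanism, assuming a Linux box and a process you own (here the current process itself, via /proc/self):

# env_peek.py -- illustrative sketch, Linux only
# /proc/<pid>/environ stores the environment as NUL-separated entries
with open('/proc/self/environ', 'rb') as proc_env:
    entries = proc_env.read().split(b'\0')
for entry in entries:
    if entry:
        print(entry.decode(errors='replace'))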

2. The source of the environment variables:

If you’re using an init script, and the variables are stored in that init script, the variables can of course be obtained by reading that script or, if the env vars come from somewhere else, then from wherever that is. File permissions are the key factor to consider here.

3. Executing the ‘ps’ command and reviewing the output:

If the process is (badly) launched via something like…

sh -c 'cd /foo/bar; POP=star /my/executable'

…then that “sh” process will be visible in the “ps” command output. The POP=star portion of the above command line is an env var assignment (badly) done at run-time.

4. During the “execve” system call:

The “execve” system call does not directly leak the environment. However, there is a generic audit mechanism that can log the arguments of every system call, including execve. So, if UNIX system auditing is enabled, the environment can be sent through the audit mechanism and end up in a log file. On a decently configured system, only root has this access; that means data is sent to, e.g., /var/log/audit/audit.log, which is only readable by root and written by the auditd daemon, which also runs as root.

5. During a “ptrace” system call:

This system call allows a process to inspect the memory, and even execute code, in the context of another process. It’s what allows debuggers to exist. A given UID can only trace its own processes. Furthermore, if a process is privileged (setuid or setgid), only root can trace it.

So, as a succinct summary, an unprivileged user can’t observe the env vars of another user through the process table (the “ps auxwwe” command). If somebody is observing another user’s process, that somebody is root. The commands that set environment variables (e.g., export) are shell builtins which don’t make it onto the process table and, by extension, aren’t in /proc/$pid/cmdline. Finally, /proc/$pid/environ is only readable by the UID of the process owner.

Env vars are an essential part of a UNIX-like system. That doesn’t mean they must be used for everything, or that they are better suited than any other technique. That wouldn’t be true. The problem with env vars arises when anyone is allowed to modify environment variables of significance. And that doesn’t happen through weird hacking techniques, no, that happens through wrong programming actions. Programmers should always be cautious about what data their programs accept and use for subsequent processing/directives.

AN EXAMPLE “ECHO” MICROSERVICE IN PYTHON

In order to demonstrate how a microservice can be quickly crafted in Python, we’re going to build an “echo” program. Our echo program will be a simple microservice containing two namespaces:

  • One namespace acting as URL index, the well-known “/”.
  • One namespace accepting POSTs on the “/echo” and “/echo/<value>” URLs.

The example code follows:

""" Microservice main program file """
##
#
# This file is the microservice itself.
#
##
# pylint: disable=invalid-name;
# In order to avoid false positives with Flask

from os import environ
from datetime import datetime
from flask import Flask, jsonify, make_response, url_for, request
import settings

# -- Application initialization -------------------------------------
__modeConfig__ = environ.get('MODE_CONFIG') or 'Development'
APP = Flask(__name__)
APP.config.from_object(getattr(settings, __modeConfig__.title()))


# -- These functions control how to respond to common errors --------
@APP.errorhandler(404)
def not_found(error):
    """ HTTP Error 404 Not Found """
    headers = {}
    return make_response(
        jsonify({'error': 'true', 'msg': str(error)}), 404, headers)


@APP.errorhandler(405)
def not_allowed(error):
    """ HTTP Error 405 Not Allowed """
    headers = {}
    return make_response(
        jsonify({'error': 'true', 'msg': str(error)}), 405, headers)


@APP.errorhandler(500)
def internal_error(error):
    """ HTTP Error 500 Internal Server Error """
    headers = {}
    return make_response(
        jsonify({'error': 'true', 'msg': str(error)}), 500, headers)


# -- This piece of code controls what happens during the whole
#    HTTP transaction -----------------------------------------------
@APP.before_request
def before_request():
    """ This function handles HTTP requests arriving at the API """
    pass


@APP.after_request
def after_request(response):
    """ This function handles HTTP responses to the client """
    return response


# -- This is where the API effectively starts -----------------------
@APP.route('/', methods=['GET'])
def index():
    """
    This is the API index endpoint with HATEOAS support
    :param: none
    :return: a JSON (application/json)
    """
    headers = {}
    return make_response(
        jsonify({'msg': 'this is index endpoint',
                 'tstamp': datetime.utcnow().timestamp(),
                 'endpoints': {
                     'url_echo': url_for('echo', _external=True)}}),
        200, headers)


@APP.route('/echo', methods=['POST'])
@APP.route('/echo/<string:item>', methods=['POST'])
def echo(**kwargs):
    """
    This is the ECHO endpoint with HATEOAS support
    :param kwargs: gets an item from the URL as a string of any size and format
    :return: a JSON (application/json)
    """
    if kwargs:
        content = kwargs['item']
    else:
        content = 'none'

    if request.args.get('lang', type=str) is None:
        lang = 'none'
    else:
        lang = request.args.get('lang', type=str)

    headers = {}
    return make_response(
        jsonify({'msg': 'this is an echo endpoint',
                 'tstamp': datetime.utcnow().timestamp(),
                 'namespace_params': {
                     'content_received': content,
                     'language': lang},
                 'endpoints': {
                     'url_index': url_for('index', _external=True)}}),
        200, headers)


# -- Finally, the application is run, more or less ;) ---------------
if __name__ == '__main__':
    APP.run()

Now, let’s review the code to understand what happens in each block:

Python imports

The above statements are the main application imports. At the end of the block, the settings module, the configuration file with the env vars that we defined in separate files, is imported.

Python application initialization from env vars

Later, the default configuration is effectively loaded. If it cannot be loaded, because the variable is not defined or because the option in the variable is erroneous, then a default value is used. The default value for this microservice is “Development”. This option acts as a failsafe in our scenario, but in other scenarios other options can fit better, like Production, or even refusing to run the microservice without a proper configuration, which makes more sense.

Typical HTTP errors managed in APIs

This block controls the typical HTTP errors that we can encounter while the microservice program executes. It may sound silly, but by controlling these three errors we are basically covering around 90% of the situations we can face with an API in production.

Pay special attention to the first clause after the docstring, the one that says “headers = {}”. This clause allows adding more headers, if needed, to the default HTTP headers the web server presents to clients. You’ll see this in more detail a bit later.

The “jsonify” clause generates a JSON-formatted response based on the keys “error” and “msg”. The JSON is sent back to the client.
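For instance, a 404 handled by the code above would produce a body along these lines (the exact message text comes from Flask/Werkzeug and may vary by version):

{
  "error": "true",
  "msg": "404 Not Found: The requested URL was not found on the server."
}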

Special functions controlling HTTP request/response transactions

The “before request” and “after request” hooks are one of the reasons why Flask is so powerful. These are two predefined decorators in the micro-framework. In combination with a function, they allow modifying, on the fly, the HTTP request sent by the client to the server (“before request”) and the HTTP response originated by the API (“after request”); a short sketch follows the list. Typical uses for this include:

  • Add headers for server-side purposes, like caching or proxying.
  • Add personalized eTag.
  • Open a database connection.
  • Initialize the logging facility.
  • Alter the server response to adapt it to the client, e.g., to the User-Agent or viewport.
  • Adapt HTTP responses to clients in the interim periods.
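As a minimal sketch of the idea (the header name X-API-Version is an invented example), the after_request hook from the code above could decorate every outgoing response like this:

@APP.after_request
def after_request(response):
    """ Hypothetical variant: stamp every outgoing response """
    response.headers['X-API-Version'] = 'v1'
    return response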

An index namespace on a RESTful API

Finally, we get to the first piece of the microservice business logic itself. This piece of code configures the response a client will receive when the index of the domain is called using the HTTP GET verb/method. This response is a JSON (sampled after the list) containing:

  • A text literal indicating where the client is exactly on this API
  • A timestamp (server-side time)
  • A list of the browsable namespaces and endpoints this API has available (HATEOAS)
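A GET on the index would therefore return something like this (timestamp and hostname are illustrative values):

{
  "msg": "this is index endpoint",
  "tstamp": 1538212800.0,
  "endpoints": {
    "url_echo": "http://127.0.0.1:5000/echo"
  }
}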

Very basic handling of URL parameters received on a POST

And this is the second piece of the microservice business logic. The URL endpoint is different. Before, it was “/” (the domain index), whereas now the endpoint is named “/echo”. There’s an additional line that accepts any string literal after the “/echo”, in the form “/echo/<string_literal>”. In both cases the HTTP verb/method in use is POST, so we accept data sent from the client to the API, but not just in any form.

After the routing options:

  • First, we control the arguments we receive from the client. Unauthenticated clients are always treated as non-trusted, so anything coming from clients needs to be quarantined. And this is exactly what we are doing with the kwargs handling: from the POST URI, we load everything received from the client as a plain string. That way, we defuse potential trouble with SQLi or XSS, as we have converted the whole URI, which could contain scriptable code, into a text literal. Moreover, we can quickly limit the length of the string and filter out unwanted special characters, the source of the problem in SQL injection and XSS.
  • Second, we run along the POST URI looking for fixed parameters, like the language clause. We may be receiving a SQLi or an XSS, but we won’t pay attention to those pieces of the POST URI, only to the elements we want to pick. If we identify them, they are captured and again converted into strings in new variables. The rest is garbage.

On this occasion, the response sent back to the client after his/her POST is another JSON (sampled after the list), this time containing:

  • A text literal indicating where the client is exactly on this API
  • A timestamp (server-side time)
  • The string literals to be echoed
  • A list of the browsable namespaces and endpoints this API has available (HATEOAS)
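For example, POSTing to /echo/hello?lang=en would return something along these lines (timestamp illustrative):

{
  "msg": "this is an echo endpoint",
  "tstamp": 1538212800.0,
  "namespace_params": {
    "content_received": "hello",
    "language": "en"
  },
  "endpoints": {
    "url_index": "http://127.0.0.1:5000/"
  }
}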

Finally, the application is run if the file containing the program is the main program file, and not a file included (called) by another program:

Main program execution

Most of the “echo” microservice’s lines of code are boilerplate, but they were included in order to document the code and demonstrate good programming practice. Programmer skills, as well as code, can be tested, and in fact should always be tested, and testing is what the next section is about.

TESTING AND ANALYSING THE CODE

I’m mentioning here the words “testing” and “analysing” because, for me, both terms have slightly different meanings:

Testing (the code), for me, means an intrinsic development sub-process that can take place before or after the business logic is created as code. This will depend on the type of programming paradigm being followed, e.g., Waterfall, Extreme, Agile, Lean, DDD, BDD, TDD, ATDD, etc.

DevSecOps, the model we are following in the cloud industrialization sub-programme of the ongoing cloud programme, is about DEVelopment, SECurity, OPerationS and, of course, QA. The fact that the letters Q & A aren’t in the name doesn’t change the fact that QA is essential to DevSecOps. QA is an enabler.

In DevSecOps, QA is about trying to push tests with automation into the continuous integration systems. Tests must have zero human intervention and should generate their own test data. QA also works with operations on establishing monitoring tools and, perhaps, on continuously running chaos monkey tests in production to ensure a high level of maturity is achieved.

Analysing (the code), for me, means an additional security layer on the way the code, and the tests, are built, in order to ensure that the code holding the business logic, and the testing process, are secure enough.

Source code analysis can be either static or dynamic. In static analysis, debugging is done by examining the code without actually executing the program. This can reveal errors at an early stage in program development, often eliminating the need for multiple revisions later. After static analysis has been done, dynamic analysis is performed in an effort to uncover more subtle defects or vulnerabilities. Dynamic analysis consists more of real-time program testing.

NOTE: The most appropriate approach in a working environment is to perform both static and dynamic code analysis. While static code analysis is dead simple to achieve, dynamic code analysis might require additional external tooling, so that part of the analysis won’t be documented extensively.

In Python, and with any micro-framework, the testing stage is relatively simple. I won’t discuss here and now which development process/paradigm is best or when tests need to be written. Of course, I’m human and I have a preference, but my preference is just an opinion. My preference is to follow BDD, thus writing meaningful tests before writing the code, so that when you start coding the business logic you stay concentrated on building what you specifically need to build.

With all this introduction already set, I’ll start by approaching the testing stage, and later we’ll address the code analysis stage.

a) Testing code with Python tools

Python provides all the means for code testing. I’m not going to go into very profound detail, but it’s possible to perform all types of testing with a single unified tool or a few distributed sets of tools. All of the open-source tools are detailed at https://wiki.python.org/moin/PythonTestingToolsTaxonomy. In order to select a tool, you have to be clear about what you are trying to test:

  • Unit test: Typically, “mock” or “dummy” implementations. Specify and test one point of the contract of single method of a class. This should have a very narrow and well-defined scope. Complex dependencies and interactions to the outside world are stubbed or mocked.
  • Integration test: Test the correct inter-operation of multiple subsystems. There is whole spectrum there, from testing integration between two classes, to testing integration with the production environment.
  • Smoke test (aka Sanity check): A simple integration test where we just check that when the system under test is invoked it returns normally and does not blow up suddenly:
  • Smoke testing is both an analogy with electronics, where the first test occurs when powering up a circuit. If it smokes, of course, it’s bad :) …
  • … and, apparently, with plumbing, where a system of pipes is literally filled by smoke and then checked visually. If anything smokes, the system is leaky, which is also bad :)
  • Regression test: A test that was written when a bug was fixed. It ensures that this specific bug will not occur again. The full name is “non-regression test”. It can also be a test made prior to changing an application to make sure the application provides the same outcome.

To the above, I will add:

  • Acceptance test: Test that a feature or use case is correctly implemented. It is similar to an integration test, but with a focus on the use case to provide rather than on the components involved.
  • System test: Tests a system as a black box. Dependencies on other systems are often mocked or stubbed during the test, otherwise, it would be more of an integration test.
  • Pre-flight check: Tests that are repeated in a production-like environment, to alleviate the “it builds/it works on my machine” syndrome; by the way, a syndrome that will entirely disappear with containerization.
  • Canary test: Test that is automated, non-destructive and that is run on a regular basis in a live environment, such that if it ever fails, something really bad has happened in reality.

So, to sum up, the toolbox that Python can provide for testing is huge and well documented. For the sake of better understanding, I will stick to Pytest in my examples, in order to exemplify unit testing and test runs.

b) Analysing code with Python tools

Python is an awesome programming language that includes support for almost everything you can imagine, even linter tools. A linter, or lint, refers to tools that analyze source code to flag programming errors, bugs, stylistic errors and suspicious constructs. The term originates from a UNIX utility that examined C language source code. Lint-like tools generally perform static analysis of source code.

Specifically, in Python, you can use several lint tools, because they all complement each other nicely:

  • pycodestyle (formerly pep8): for checking the style.
  • pyflakes: for fast static analysis.
  • mccabe: to find code that is too complex and needs refactoring.
  • pylint: for everything.

As this article is about demonstrating that a DevSecOps pipeline can be followed with simplicity, I will concentrate on the use of Pylint for the static code analysis.

c) Tests in execution: PYTEST

Rather than a tool, Pytest is a framework that makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries. The best part is, the overhead for creating unit tests is close to zero.

Pytest makes use of Python’s “assert” statement to compute assertions. Assertions are a systematic way to check that the internal state of a program is as the programmer expected, with the goal of catching bugs. In particular, they’re good for catching false assumptions that were made while writing the code, or abuse of an interface by another programmer. In addition, they can act as in-line documentation to some extent, by making the programmer’s assumptions obvious.

A Pytest check can be as simple as this:

from unnecessary_math import multiply  # demo module providing multiply()

def test_numbers_3_4():
    assert multiply(3, 4) == 12

Pytest works by having a tests folder in your project. Inside this folder, Pytest will look for Python files following these conventions:

  • Files should be named “test_<function>.py”, being <function> the name of the python function that you want to (unit) test.
  • The unit tests inside the test file should be named “test_<type_of_data>_<function>”, <type_of_data> being the data type and <function> the name of the function to test.

In the tests folder of the “echo” microservice I’ve included a couple of dumb/dummy tests to document the testing process with Pytest.

== test_index.py ==

"""
This is a test file for pytest
"""
import pytest
from flask import Flask, request

TESTAPP = Flask(__name__)


@pytest.fixture
def test_client():
    """ This sets testing mode """
    TESTAPP.config['TESTING'] = True


def test_string_index():
    """ This will test the index endpoint for a specific behavior """
    with TESTAPP.test_request_context('/'):
        assert request.path == '/'

== test_echo.py ==

"""
This is a test file for pytest
"""
import pytest
from flask import Flask, request

TESTAPP = Flask(__name__)


@pytest.fixture
def test_client():
    """ This sets testing mode """
    TESTAPP.config['TESTING'] = True


def test_string_echo():
    """ This will test the echo endpoint for a specific behavior """
    with TESTAPP.test_request_context('/echo/1'):
        assert request.path == '/echo/1'

Running Pytest on the above pieces of code will result in something like this:

(venv) jon@Laptop ~/DevOps/PythonAPIContainers $ pytest -vv --disable-warnings
============= test session starts =============
platform darwin — Python 3.7.0, pytest-3.8.1, py-1.6.0, pluggy-0.7.1 –
/Library/Frameworks/Python.framework/Versions/3.7/bin/python3.7
cachedir: .pytest_cache
rootdir: /Users/jon/Documents/DevOps/Github/PythonAPIContainers, inifile:
collected 2 items
tests/test_echo.py::test_string_echo
PASSED [ 50%]
tests/test_index.py::test_string_index
PASSED [100%]
========= 2 passed, 3 warnings in 0.31 seconds =========

Notice the two PASSED lines, where Pytest confirms the files in the “tests” folder have been executed and the result has been satisfactory.

If the executed tests failed, the summary would show a complete Python stack trace with the programming issues found, and a final count of failed tests instead of “2 passed”.

d) Tests in execution: PYLINT

Pylint’s range of features, and the fact that it is highly active, maintained and easily automated (with Apycot, Hudson, Travis or Jenkins), make it the absolute must-have tool for the job. It supports a number of features, from coding standards to error detection, and it also helps with refactoring by detecting duplicated code.

Pylint is overly pedantic out of the box and benefits from a minimal configuration effort, and it is fully customizable through a pylintrc file where you select which errors or conventions are relevant or irrelevant to you. Nonetheless, most messages emitted by the tool are self-explanatory.
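As a minimal sketch, a pylintrc in the project root could silence the checks you disagree with (the two messages below are just examples):

[MESSAGES CONTROL]
# Example: mute style checks this project chooses not to enforce
disable = missing-docstring,
          invalid-name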

If we slightly alter the content of the “echo” app.py microservice included earlier in this article, forcing some irregularities while leaving it operational from a programming perspective, we can see how Pylint works:

(venv) jon@Laptop ~/DevOps/PythonAPIContainers $ pylint app.py
************* Module app
app.py:135:0: C0304: Final newline missing (missing-final-newline)
app.py:28:0: C0111: Missing function docstring (missing-docstring)
app.py:41:0: C0111: Missing function docstring (missing-docstring)
app.py:54:0: C0111: Missing function docstring (missing-docstring)
app.py:69:0: C0111: Missing function docstring (missing-docstring)
app.py:74:0: C0111: Missing function docstring (missing-docstring)
app.py:81:0: C0111: Missing function docstring (missing-docstring)
app.py:100:0: C0111: Missing function docstring (missing-docstring)
app.py:12:0: C0411: standard import "from os import environ" should be placed before "import settings" (wrong-import-order)
app.py:13:0: C0411: standard import "from datetime import datetime" should be placed before "import settings" (wrong-import-order)
app.py:14:0: C0411: third party import "from flask import Flask, jsonify, make_response, url_for, request" should be placed before "import settings" (wrong-import-order)
-----------------------------------
Your code has been rated at 6.76/10

The results of Pylint can be interpreted this way:

  • The first message (missing-final-newline) indicates, as the text between brackets says, that a final newline needs to be added to the file.
  • The next block of messages (missing-docstring) indicates that the docstring is missing for several functions. A docstring is the string literal specified in source code that is used, like a comment, to document a specific segment of code.
  • The last block of messages (wrong-import-order) indicates the imports are unordered and require tidying.
  • Finally, at the end of the report, the source code is rated on a scale from 0 to 10. This is an important value for developers to check how compliant they are with the PEPs (Python Enhancement Proposals).

When the code is corrected, Pylint asserts:

Your code has been rated at 10.00/10 (previous run: 6.76/10, +3.24)

On a DevSecOps (and QA) pipeline, that means automation of tests (static code analysis) can be fairly simple to address, not to mention that the developer should always be interested in coding correctly.

DOCKERFILE: THE CONTAINERIZATION PROCESS

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble a container image. Using docker build, users can create an automated build that executes several command-line instructions in succession.

An example Dockerfile follows:

FROM alpine:3.8
MAINTAINER Jonathan Gonzalez <j@0x30.io> @EA1HET

RUN apk add --no-cache python3 \
&& python3 -m ensurepip \
&& pip3 install -U pip \
&& rm -rf /usr/lib/python*/ensurepip \
&& rm -rf /root/.cache \
&& rm -rf /var/cache/apk/* \
&& find / \
\( -type d -a -name test -o -name tests \) \
-o \( -type f -a -name '*.pyc' -o -name '*.pyo' \) \
-exec rm -rf '{}' + \
&& mkdir /app

EXPOSE 5000
WORKDIR /app
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt
ENTRYPOINT /app/boot.sh
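The ENTRYPOINT points at the boot.sh start-up script described earlier, whose content is not listed in this article. A minimal sketch, assuming the Gunicorn server pinned in requirements.txt is used to serve the Flask application object (APP in app.py), could be:

#!/bin/sh
# boot.sh -- hypothetical start-up script executed inside the container
exec gunicorn -b 0.0.0.0:5000 app:APP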

The Dockerfile used for the containerization of the “echo” microservice is the one above, but how does it work?

  • The first line (FROM) defines the base content the container uses in order to get built. A base can be a set of libraries or a micro operating system. In the present case, a micro BusyBox-based system is used to run the Python microservice.
  • The RUN block defines the different package actions required to prepare that micro BusyBox-based system to run the current version of Python and the microservice written in Python.
  • The EXPOSE line indicates which TCP port will be used to expose IP communications from the container to the container orchestrator.
  • Finally, with WORKDIR and COPY, the code is copied from the developer’s computer to the container image. All the files will be placed in the container image just as they are on the developer’s computer, so if something works on the developer’s computer, it will work in the container image. A final sentence (ENTRYPOINT) describes which executable file needs to be run when the container image is loaded into memory by the container orchestrator.

Now that a Dockerfile is ready, we just need to issue the different commands required to assemble the container image. They are:

BUILD A CONTAINER IMAGE

docker build . -t my_api:v1

where:

  • “.” represents the current directory where you are.
  • “-t” means you want to tag your container image with a compound name.
  • “my_api” is the name of the container image.
  • “v1” is the part of the tag name related to the container version.

RUN A CONTAINER IMAGE

docker run -it -p 5000:5000 --name api_container my_api:v1

where:

  • “-it” runs the container interactively.
  • “-p 5000:5000” is the TCP port mapped internally and externally to the container.
  • “--name api_container” assigns a more readable name to this container, in order to clearly identify it when issuing a “docker ps” command.
  • “my_api:v1” is the local image you want to run as a container.

EXECUTE A SHELL INSIDE THE DOCKER CONTAINER (IF ANY)

docker exec -it <container_id> ash

where:

  • “exec” indicates the intention to run a command associated to a container.
  • “-it <container_id>” means the command will be executed in interactive mode against the given container. The container ID can be retrieved by issuing a “docker ps” command.
  • “ash” is the name of an Alpine Linux shell.

Finally, in order to publish a container in a container registry, a few more commands need to be executed in sequence. They are:

export DOCKER_ID_USER="username"
docker login
docker tag my_api:v1 $DOCKER_ID_USER/my_api:v1
docker push $DOCKER_ID_USER/my_api:v1
  1. An env var is defined with the Id of the Docker Registry user that will register the container in the repository.
  2. Next command will perform a login negotiation with the Docker Registry and will validate the user.
  3. If not done before, the container image will be properly tagged before being uploaded to the Docker Registry.
  4. Finally, the Docker image is pushed to the Docker Registry with the naming convention defined before.

A log of the above actions follows:

jon@MiniHET ~/Documents/DevOps/Github/PythonAPIContainers $ docker build . -t my_api:v1
Sending build context to Docker daemon 50.18kB
Step 1/8 : FROM alpine:3.8
3.8: Pulling from library/alpine
4fe2ade4980c: Pull complete
Digest: sha256:621c2f39f8133acb8e64023a94dbdf0d5ca81896102b9e57c0dc184cadaf5528
Status: Downloaded newer image for alpine:3.8
---> 196d12cf6ab1
Step 2/8 : MAINTAINER Jonathan Gonzalez <j@0x30.io> @EA1HET
---> Running in 9e3607810efb
Removing intermediate container 9e3607810efb
---> 85c02539f662
Step 3/8 : RUN apk add --no-cache python3 && python3 -m ensurepip && pip3 install -U pip && rm -rf /usr/lib/python*/ensurepip && rm -rf /root/.cache && rm -rf /var/cache/apk/* && find / \( -type d -a -name test -o -name tests \) -o \( -type f -a -name '*.pyc' -o -name '*.pyo' \) -exec rm -rf '{}' + && mkdir /app
---> Running in ef7cb26a0cf9
fetch
http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch
http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
(1/11) Installing libbz2 (1.0.6-r6)
(2/11) Installing expat (2.2.5-r0)
(3/11) Installing libffi (3.2.1-r4)
(4/11) Installing gdbm (1.13-r1)
(5/11) Installing xz-libs (5.2.4-r0)
(6/11) Installing ncurses-terminfo-base (6.1_p20180818-r1)
(7/11) Installing ncurses-terminfo (6.1_p20180818-r1)
(8/11) Installing ncurses-libs (6.1_p20180818-r1)
(9/11) Installing readline (7.0.003-r0)(10/11) Installing sqlite-libs (3.24.0-r0)
(11/11) Installing python3 (3.6.6-r0)
Executing busybox-1.28.4-r1.trigger
OK: 67 MiB in 24 packages
Looking in links: /tmp/tmpngqpumos
Requirement already satisfied: setuptools in /usr/lib/python3.6/site-packages (39.0.1)
Requirement already satisfied: pip in /usr/lib/python3.6/site-packages (10.0.1)
Collecting pip
Downloading https://files.pythonhosted.org/packages/5f/25/e52d3f31441505a5f3af41213346e5b6c221c9e086a166f3703d2ddaf940/pip-18.0-py2.py3-none-any.whl (1.3MB)
Installing collected packages: pip
Found existing installation: pip 10.0.1
Uninstalling pip-10.0.1:
Successfully uninstalled pip-10.0.1
Successfully installed pip-18.0
Removing intermediate container ef7cb26a0cf9
---> e578960ef329
Step 4/8 : EXPOSE 5000
---> Running in 533208720ceb
Removing intermediate container 533208720ceb
---> 889a718aae42
Step 5/8 : WORKDIR /app
---> Running in 2ab712110618
Removing intermediate container 2ab712110618
---> b6fe8d313419
Step 6/8 : COPY . /app
---> 7eb793bbe089
Step 7/8 : RUN pip install --no-cache-dir -r requirements.txt
---> Running in ae95eb68de95
Collecting aniso8601==3.0.2 (from -r requirements.txt (line 1))
Downloading
https://files.pythonhosted.org/packages/17/13/eecdcc638c0ea3b105ebb62ff4e76914a744ef1b6f308651dbed368c6c01/aniso8601-3.0.2-py2.py3-none-any.whl
Collecting click==6.7 (from -r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl (71kB)
Collecting environs==4.0.0 (from -r requirements.txt (line 3))
Downloading
https://files.pythonhosted.org/packages/64/19/c1b8df73d2b2e4c704e65e1ec1423714f10cf2bf5489e7dac724eda62218/environs-4.0.0-py2.py3-none-any.whl
Collecting Flask==1.0.2 (from -r requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl (91kB)
Collecting gunicorn==19.9.0 (from -r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/8c/da/b8dd8deb741bff556db53902d4706774c8e1e67265f69528c14c003644e6/gunicorn-19.9.0-py2.py3-none-any.whl (112kB)
Collecting itsdangerous==0.24 (from -r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/dc/b4/a60bcdba945c00f6d608d8975131ab3f25b22f2bcfe1dab221165194b2d4/itsdangerous-0.24.tar.gz (46kB)
Collecting Jinja2==2.10 (from -r requirements.txt (line 7))
Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
Collecting MarkupSafe==1.0 (from -r requirements.txt (line 8))
Downloading
https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz
Collecting marshmallow==2.15.6 (from -r requirements.txt (line 9))
Downloading https://files.pythonhosted.org/packages/3f/4d/cb555dfc2e2f926179884665fa1e6ae6b8f8102e4f8228b73e2a30eb0ee0/marshmallow-2.15.6-py2.py3-none-any.whl (44kB)
Collecting python-dotenv==0.9.1 (from -r requirements.txt (line 10))
Downloading
https://files.pythonhosted.org/packages/24/3d/977140bd94bfb160f98a5c02fdfbb72325130f12a325cf993182956e9d0e/python_dotenv-0.9.1-py2.py3-none-any.whl
Collecting pytz==2018.5 (from -r requirements.txt (line 11))
Downloading https://files.pythonhosted.org/packages/30/4e/27c34b62430286c6d59177a0842ed90dc789ce5d1ed740887653b898779a/pytz-2018.5-py2.py3-none-any.whl (510kB)
Collecting Werkzeug==0.14.1 (from -r requirements.txt (line 12))
Downloading https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl (322kB)
Installing collected packages: aniso8601, click, python-dotenv, marshmallow, environs, itsdangerous, Werkzeug, MarkupSafe, Jinja2, Flask, gunicorn, pytz
Running setup.py install for itsdangerous: started
Running setup.py install for itsdangerous: finished with status 'done'
Running setup.py install for MarkupSafe: started
Running setup.py install for MarkupSafe: finished with status 'done'
Successfully installed Flask-1.0.2 Jinja2-2.10 MarkupSafe-1.0 Werkzeug-0.14.1 aniso8601-3.0.2 click-6.7 environs-4.0.0 gunicorn-19.9.0 itsdangerous-0.24 marshmallow-2.15.6 python-dotenv-0.9.1 pytz-2018.5
Removing intermediate container ae95eb68de95
---> 4e18577c050b
Step 8/8 : ENTRYPOINT /app/boot.sh
---> Running in 9006f33efa75
Removing intermediate container 9006f33efa75
---> 59ad5a3902a3
Successfully built 59ad5a3902a3
Successfully tagged my_api:v1
jon@MiniHET ~/Documents/DevOps/Github/PythonAPIContainers $ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
my_api v1 59ad5a3902a3 6 seconds ago 43.2MB
alpine 3.8 196d12cf6ab1 13 days ago 4.41MB
jon@MiniHET ~/Documents/DevOps/Github/PythonAPIContainers $
jon@MiniHET ~/Documents/DevOps/Github/PythonAPIContainers $
jon@MiniHET ~/Documents/DevOps/Github/PythonAPIContainers $ export DOCKER_ID_USER="ea1het"
jon@MiniHET ~/Documents/DevOps/Github/PythonAPIContainers $ docker login
Authenticating with existing credentials...
Login Succeeded
jon@MiniHET ~/Documents/DevOps/Github/PythonAPIContainers $ docker tag my_api:v1 $DOCKER_ID_USER/my_api:v1
jon@MiniHET ~/Documents/DevOps/Github/PythonAPIContainers $ docker push $DOCKER_ID_USER/my_api:v1
The push refers to repository [docker.io/ea1het/my_api]
578236ec6f44: Pushed
f4bab11cc8bc: Pushed
58e68352ccbd: Pushed
df64d3292fd6: Mounted from library/alpine
v1: digest: sha256:be0cbe0fd7290ac017745473fafabf611eaaf026f6a5d5b5fbd7a21a6fa7754f size: 1158

And this is the reflection of the above action on the Docker Hub (the public Docker Registry):

Docker Hub stores public images of containers

Detail on the container creation. Pay attention to the badges at the bottom left

Size detail on the content recently created

Metadata detail on container composition

Colophon

Writing this article was an interesting experience. Writing for newcomers is more difficult than writing for experienced professionals, but it is also more rewarding, in my humble opinion. I hope this article served you well.