This is part two of a series that aims to shed light on key topics in the modern software industry, namely cloud native applications. This post covers containers and serverless applications. Part one covers architecture for micro-services and cloud native applications.
Software containers are the next key technology that needs to be discussed to explain cloud native applications. A container is simply the idea of encapsulating some software inside an isolated user space, or “container.”
For example, a MySQL database can be isolated inside a container where the environmental variables and the configurations that it needs will live. Software outside the container will not see the environmental variables or configuration contained inside the container by default. Multiple containers can exist on the same local virtual machine, cloud virtual machine, or hardware server.
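To make the configuration idea concrete, here is a minimal Go sketch of a service reading its database settings from the environment it was started in, which is exactly what a container provides per service. The variable names `MYSQL_HOST` and `MYSQL_PORT` are illustrative assumptions, not from the article:

```go
package main

import (
	"fmt"
	"os"
)

// loadDBConfig reads the database settings injected into this container's
// environment. Each container carries its own copy of these variables, so
// two services on the same host never see each other's configuration.
func loadDBConfig() (host, port string) {
	host = os.Getenv("MYSQL_HOST")
	if host == "" {
		host = "localhost" // fallback when no value was injected
	}
	port = os.Getenv("MYSQL_PORT")
	if port == "" {
		port = "3306" // MySQL's default port
	}
	return host, port
}

func main() {
	host, port := loadDBConfig()
	fmt.Printf("connecting to %s:%s\n", host, port)
}
```

The same binary can then run unchanged in any container, with the environment alone deciding which database it talks to.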
Containers provide the ability to run numerous isolated software services, with all their configurations, software dependencies, runtimes, tools, and accompanying files, on the same machine. In a cloud environment, this ability translates into saved costs and effort, since the need to provision and buy server nodes for each micro-service diminishes: different micro-services can be deployed on the same host without disrupting each other. Containers combined with micro-services architectures are powerful tools for building modern, portable, scalable, and cost-efficient software. In a production environment, more than a single server node, combined with numerous containers, would be needed to achieve scalability and redundancy.
Containers also add benefits to cloud native applications beyond micro-services isolation. With a container, you can move your micro-services, with all the configuration, dependencies, and environmental variables they need, to fresh server nodes without reconfiguring the environment, achieving powerful portability.
Due to the power and popularity of software container technology, some new operating systems, like CoreOS and Photon OS, are built from the ground up to function as hosts for containers.
One of the most popular software container projects in the software industry is Docker. Major organizations such as Cisco, Google, and IBM utilize Docker containers in their infrastructure as well as in their products.
Another notable project in the software containers world is Kubernetes. Kubernetes is a tool that allows the automation of deployment, management, and scaling of containers. It was built by Google to facilitate the management of its containers, which number in the billions per week. Kubernetes provides powerful features such as load balancing between containers, restarting failed containers, and orchestrating the storage utilized by the containers. The project is part of the Cloud Native Computing Foundation, along with Prometheus.
As with micro-services, the task of managing containers can get rather complex as their numbers expand. As containers or micro-services grow in number, there needs to be a mechanism to identify where each container or micro-service is deployed, what its purpose is, and what resources it needs to keep running.
Serverless architecture is a new software architectural paradigm that was popularized with the AWS Lambda service. To fully understand serverless applications, it helps to go over an important concept known as function-as-a-service, or FaaS for short. FaaS is the idea that a cloud provider such as Amazon, or even a local piece of software such as Fission.io or funktion, can provide a service where a user requests a function to run remotely in order to perform a very specific task. After the function concludes, the results are returned to the user. No services or stateful data are maintained, and the function code is provided by the user to the service that runs it.
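A FaaS handler can be pictured as a plain function that receives an event, performs one specific task, and returns a result, retaining nothing between invocations. The Go sketch below illustrates that shape; the event type and names are hypothetical, and a real platform such as AWS Lambda would wrap such a function with its own SDK and invoke it on your behalf:

```go
package main

import (
	"fmt"
	"strings"
)

// GreetEvent is a hypothetical input event delivered by a FaaS platform.
type GreetEvent struct {
	Name string
}

// HandleGreet is a sketch of a FaaS handler: it receives an event, performs
// one specific task, and returns a result. It keeps no state between
// invocations -- each call is fully independent.
func HandleGreet(evt GreetEvent) (string, error) {
	if strings.TrimSpace(evt.Name) == "" {
		return "", fmt.Errorf("empty name in event")
	}
	return "Hello, " + evt.Name, nil
}

func main() {
	out, err := HandleGreet(GreetEvent{Name: "cloud"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Because the handler owns no long-lived state or connections, the platform is free to spin instances of it up and down as requests arrive.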
The idea behind properly designed cloud native production applications that utilize the serverless architecture is this: instead of building multiple micro-services expected to run continuously in order to carry out individual tasks, build an application with fewer micro-services combined with FaaS, where FaaS covers the tasks that don’t need a continuously running service.
FaaS is a smaller construct than a micro-service. For example, in the case of the event booking application we covered earlier, there were multiple micro-services covering different tasks. If we use a serverless application model, some of those micro-services would be replaced with a number of functions that serve the same purposes.
Here’s a diagram that showcases the application utilizing a serverless architecture:
In this diagram, the event handler micro-service as well as the booking handler micro-service were replaced with a number of functions that provide the same functionality. This eliminates the need to run and maintain those two micro-services.
Serverless architectures have the advantage that no virtual machines and/or containers need to be provisioned to build the part of the application that utilizes FaaS. The computing instances that run the functions cease to exist from the user point of view once their functions conclude. Furthermore, the number of micro-services and/or containers that need to be monitored and maintained by the user decreases, saving cost, time, and effort.
Serverless architectures provide yet another powerful software building tool in the hands of software engineers and architects to design flexible and scalable software. Well-known FaaS offerings include AWS Lambda from Amazon, Azure Functions from Microsoft, and Cloud Functions from Google, among many others.
Another definition of serverless applications covers applications that utilize the backend-as-a-service, or BaaS, paradigm. BaaS is the idea that developers write only the client code of their application, which then relies on several pre-built software services hosted in the cloud and accessible via APIs. BaaS is popular in mobile app programming, where developers rely on a number of backend services to drive the majority of the application’s functionality. Examples of BaaS services are Firebase and Parse.
Disadvantages of serverless applications
As with micro-services and cloud native applications, the serverless architecture is not suitable for all scenarios.
The functions provided by FaaS don’t keep state by themselves, which means special considerations need to be observed when writing the function code. This is unlike a full micro-service, where the developer has full control over the state. One approach to keeping state with FaaS, in spite of this limitation, is to propagate the state to a database or a memory cache like Redis.
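The approach of pushing state out of the function can be sketched as follows in Go. The in-memory store below is a stand-in used purely for illustration; a real function would talk to Redis or a database through a client library, and the interface and names here are assumptions, not part of any FaaS SDK:

```go
package main

import "fmt"

// Store abstracts the external state store (for example Redis). Stateless
// function code writes its state out through an interface like this instead
// of keeping it in process memory.
type Store interface {
	Incr(key string) int
}

// memStore is a stand-in for Redis used only in this sketch; a real handler
// would issue a Redis INCR command through a client library instead.
type memStore struct{ counts map[string]int }

func (m *memStore) Incr(key string) int {
	m.counts[key]++
	return m.counts[key]
}

// countVisit is written like a FaaS handler: all state it touches lives in
// the external store, so it survives even though the function instance
// disappears after each invocation.
func countVisit(s Store, page string) int {
	return s.Incr("visits:" + page)
}

func main() {
	s := &memStore{counts: map[string]int{}}
	countVisit(s, "home")
	// The second invocation sees the state left behind by the first.
	fmt.Println(countVisit(s, "home"))
}
```

The design choice is that the store, not the function, is the durable component; any instance of the function, on any machine, observes the same state.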
Startup times for the functions are not always fast: there is the time needed to send the request to the FaaS provider, plus, in some cases, the time needed to start a computing instance that runs the function. These delays have to be accounted for when designing serverless applications.
FaaS functions do not run continuously the way micro-services do, which makes them unsuitable for any task that requires continuously running software.
Serverless applications have the same limitation as other cloud native applications: porting the application from one cloud provider to another, or from the cloud to a local environment, becomes challenging because of vendor lock-in.
Cloud computing architectures have opened avenues for developing efficient, scalable, and reliable software. This article covered some significant concepts in the world of cloud computing, such as micro-services, cloud native applications, containers, and serverless applications. Micro-services are the building blocks of most scalable cloud native applications; they decouple the application’s tasks into various efficient services. Containers are how micro-services can be isolated and deployed safely to production environments without polluting them. Serverless applications decouple application tasks into smaller constructs, mostly called functions, that can be consumed via APIs. Cloud native applications make use of all those architectural patterns to build scalable, reliable, and always available software.
If you are interested in learning more, check out Mina’s book, Cloud Native Programming with Golang, to explore practical techniques for building cloud-native apps that are scalable, reliable, and always available.
Mina Andrawos is an experienced engineer who has developed deep expertise in Go by using it personally and professionally. He regularly authors articles and tutorials about the language, and also shares open source Go projects. He has written numerous Go applications of varying degrees of complexity.
Besides Go, he is skilled in Java, C#, Python, and C++. He has worked with various databases and software architectures, and is well versed in the agile methodology for software development. Beyond software development, he has working experience in scrum mastering, sales engineering, and software product management.