Serverless Computing

In software engineering, abstraction is a powerful tool for hiding what varies, and serverless computing is a modern cloud computing paradigm built on exactly that power. It is gaining popularity quickly. In this chapter, we will learn what serverless computing is, along with its features, use cases, pros and cons, and other related aspects. Let's start with what serverless computing is.


What is Serverless Computing?

Serverless computing is a subtype of cloud computing. Interestingly, the word serverless in the term is a misnomer: servers remain an essential component of the computing process in serverless computing. Servers still process requests in the usual fashion; there is nothing special about serverless computing in the processing sense.

How, then, does it differ from regular computing? In serverless computing, we work behind an abstraction layer: we don't deal with or manage the server or operating system directly, as we would in traditional computing or conventional cloud computing. This abstraction layer creates the illusion of being serverless.

Let's explore that further. Suppose we have a use case to create a thumbnail whenever an image is uploaded to a particular folder on the server. We develop a web service, deploy the code on the server, and the service works fine: it creates thumbnails of the uploaded images.

The thumbnail application works, but there is a problem: the cloud provider bills us for the instance even though the server is idle most of the time. The average number of images processed is around five per hour, and processing each one takes about a second. As software engineers, we would therefore want a solution where we pay only for the actual processing time, not for the server's idle time.
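A quick back-of-envelope calculation makes the problem concrete. The numbers below come straight from the scenario above (five images per hour, one second of compute each) and are purely illustrative:

```python
# Back-of-envelope utilization for the thumbnail use case.
# Figures are illustrative: 5 images/hour, 1 second of compute each.
IMAGES_PER_HOUR = 5
SECONDS_PER_IMAGE = 1
HOURS_PER_MONTH = 24 * 30

# Compute-seconds we actually need per month.
busy_seconds = IMAGES_PER_HOUR * SECONDS_PER_IMAGE * HOURS_PER_MONTH  # 3,600 s

# Compute-seconds we pay for with an always-on server.
provisioned_seconds = HOURS_PER_MONTH * 3600  # 2,592,000 s

utilization = busy_seconds / provisioned_seconds
print(f"Utilization of an always-on server: {utilization:.2%}")  # ~0.14%
```

In other words, an always-on server would sit idle more than 99% of the time for this workload, which is exactly the waste a pay-per-use model eliminates.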

And that's where serverless computing fits in. In serverless computing, servers are made available on demand to process requests, and you are billed only for the processing time. Serverless vendors offer a range of cloud services that qualify as serverless; we will look at those services later in this chapter to get more clarity about serverless computing.
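To make this concrete, here is a minimal sketch of what the thumbnail use case might look like as a FaaS handler. The event shape follows the S3 upload-notification format; the resizing step is stubbed out, since a real implementation would use an imaging library such as Pillow and a storage client. The `thumbnails/` destination prefix is an assumption for illustration:

```python
# Minimal sketch of a FaaS handler for the thumbnail use case.
# The event shape follows the S3 notification format; the resize step is
# a stub — a real function would download, resize, and re-upload the image.

def make_thumbnail(image_bytes):
    # Placeholder for real image resizing (e.g. with Pillow).
    return image_bytes

def handler(event, context=None):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # In a real function: download the object, resize it, and upload
        # the thumbnail to a destination prefix (assumed "thumbnails/" here).
        results.append({"bucket": bucket, "thumbnail_key": "thumbnails/" + key})
    return results
```

The key point is that this is all the code we write: the platform invokes `handler` once per upload event and bills us only for that invocation.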

Serverless Computing Features

Automatic Provisioning of Computing Resources

Serverless vendors automatically provision the computing resources needed to run code, either on demand or in response to an event trigger in an event-driven programming model.

Elastic Scalability

Beyond automatic provisioning, serverless computing offers elastic scalability. Computing resources scale up to meet increased demand, maintaining service-level agreements (SLAs) without degradation in performance. Conversely, resources scale down as the number of requests drops and shut down completely when there are no requests. Elastic scalability saves costs for customers and helps vendors utilize resources efficiently.

Faster Code Delivery

Another feature is productivity, specifically faster delivery of code. In a serverless architecture, engineers focus on code instead of managing backend cloud infrastructure and operational tasks such as provisioning servers, scaling, and patching; the serverless provider handles those concerns. As a result, code ships faster.

Serverless Computing Backend Service Types

The main types of backend services in serverless computing are databases, storage, and function-as-a-service (FaaS). Serverless architecture is well suited to event-driven and stream-processing applications because of the quality attributes these applications must satisfy: scalability, latency, and, at times, idleness (for example, an online store may not receive any orders for stretches of time).

Another type of backend service in serverless computing is the API gateway. The API gateway delegates HTTP requests to code implemented as function-as-a-service (FaaS) and handles cross-cutting concerns such as routing, rate limits, CORS, and authentication. Essentially, the resulting web services are wrappers over the FaaS code.
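A short sketch shows what "wrapping FaaS behind a gateway" means in practice. The gateway translates the HTTP request into an event dict and hands it to the function, which returns a status code and body; the field names below follow the common proxy-integration shape and the `/users` route is invented for illustration:

```python
import json

# Sketch of a FaaS function fronted by an API gateway. The gateway turns
# the HTTP request into an event dict; the function returns a status code
# and body. Field names follow the common proxy-integration event shape.

def handler(event, context=None):
    method = event.get("httpMethod")
    path = event.get("path", "")
    if method == "GET" and path == "/users":
        return {"statusCode": 200, "body": json.dumps({"users": []})}
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
```

Routing, throttling, CORS headers, and authentication all happen in the gateway before this function ever runs, which is why the function body can stay this small.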

Serverless Computing Stack

We have seen what serverless computing is and which types of backend services qualify as serverless. Now let's see how these backend services can be combined, forming a serverless computing stack, to implement serverless use cases. Understanding the serverless stack helps us reason about how to combine different backend services to design and build serverless architectural solutions. The serverless computing stack mainly includes function-as-a-service (FaaS), database and storage, event-driven and stream processing, and the API gateway. We will look at each of them in this section.

Function-as-a-Service (FaaS)

Function-as-a-service (FaaS) is central to the serverless architecture: it is the part of the serverless stack that deals with application code. Any serverless computing-based application will therefore almost certainly include FaaS.

To understand function-as-a-service within the serverless stack, consider the diagram below. Assume that the web application shown in the diagram is implemented using serverless architecture.

Let's focus on Customer Service and User Service. Customer Service interacts with an external API. User Service interacts with an external database for user information, for example, user verification and new user registration. Since these two services are called infrequently (let's assume five requests per hour), we can implement them with function-as-a-service, for example, AWS Lambda if we use the AWS cloud platform. That way, we pay only for the processing time instead of running a server that sits idle most of the time.

Function-as-a-service is a massive plus from a time-saving and productivity perspective. Using FaaS, we only have to write code for the functional aspect, the business logic, of a service, for example, to implement Customer Service and User Service. The serverless provider, such as AWS, takes care of the execution and scalability aspects of the application.

Database and Storage

Let's look into other components of the serverless stack. In most enterprise-grade applications, database and storage are the foundation. We usually run one or more database instances (or storage such as AWS S3, commonly used in cloud-based data engineering applications) on a separate server and build an abstraction layer to connect to the database. This abstraction layer is called the data layer or database layer.

When applying serverless architecture to the database and storage tier, instead of provisioning database instances with defined capacity or fixed storage space, we can use serverless database and storage services and pay for what we use. These services also scale automatically. For example, User Service can store user registrations in a serverless database such as DynamoDB or Amazon Aurora to optimize its cost and scalability.
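As a small illustration of what a serverless NoSQL store like DynamoDB works with, items are stored as typed attribute maps. Higher-level SDK interfaces do this conversion for you, so the helper below exists purely to show the low-level item format; the field names are invented for the User Service example:

```python
# Illustration of DynamoDB's typed attribute-value item format. Higher-level
# SDK interfaces perform this conversion automatically; this helper only
# shows what the low-level representation looks like.

def to_dynamodb_item(record):
    item = {}
    for key, value in record.items():
        if isinstance(value, bool):          # check bool before int
            item[key] = {"BOOL": value}
        elif isinstance(value, (int, float)):
            item[key] = {"N": str(value)}    # numbers travel as strings
        else:
            item[key] = {"S": str(value)}
    return item
```

A registration record like `{"user_id": "u1", "age": 30, "active": True}` becomes a map of `S`/`N`/`BOOL` attribute values, which is what the database actually persists.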

Event-Driven & Stream Processing

Another type of component in the serverless stack is event-driven and stream processing. If your application is event-driven or a stream processing application, you could use serverless architecture.  

One of the challenges of these applications is the unpredictability of load, which makes planning and managing scalability difficult; as a result, you may over-provision or under-provision resources. Serverless architecture helps these applications handle scalability and runtime resource needs. AWS offers several serverless services for event-driven and stream-processing applications, for example, AWS SNS (Simple Notification Service), AWS SQS (Simple Queue Service), and AWS Kinesis.
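In the event-driven model, the queue or notification service invokes a function with a batch of records, and each record's body carries the message payload. The sketch below assumes the common SQS-style event shape; the `order_id` field is an invented example:

```python
import json

# Sketch of an event-driven FaaS consumer. When a queue service such as
# SQS triggers the function, the event carries a batch of records; each
# record's body is the message payload. Field names follow the common
# SQS event shape; the message contents are illustrative.

def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        message = json.loads(record["body"])
        # Real code would act on the message (write to a DB, call an API, ...).
        processed.append(message.get("order_id"))
    return processed
```

Because the platform scales the number of concurrent function instances with the queue depth, the unpredictable-load problem described above is handled by the provider rather than by capacity planning.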

API Gateway

API gateways are another serverless architectural component. An API gateway acts as a front controller, routing HTTP requests to the appropriate FaaS components. Additionally, API gateways can handle rate limits, CORS, and other security and management aspects of API calls.

Coming back to the User Management architecture diagram we are looking at: the API gateway component shown above is a serverless component that handles user authentication and other API management concerns. Imagine building the API gateway in a non-serverless way; handling scalability alone would become a challenging engineering exercise.

AWS Serverless Services

AWS is a cloud provider that offers backend services for serverless computing, so let's discuss AWS's serverless backend services.

In the code execution category, AWS provides AWS Lambda and AWS Step Functions. AWS Lambda is a fully managed FaaS offering in which we write our function code. For example, we can write Lambda functions in Java, Python, Node.js, and many other languages to implement business functions.

AWS Step Functions is a visual workflow service that helps build serverless applications by orchestrating AWS services to automate business processes. 

In the database and storage category, AWS has S3 (Simple Storage Service), Amazon Aurora, and DynamoDB. Amazon Aurora is a relational database service, and DynamoDB is a NoSQL database service. All of these are available as serverless services.

For event-driven and stream-processing backends, AWS has SQS (Simple Queue Service), SNS (Simple Notification Service), and Kinesis. SQS is AWS's queuing service, SNS is its notification service, and Kinesis is its streaming service; functionally, Kinesis is comparable to Kafka.

Other services include Amazon API Gateway and Amazon Cognito. Amazon Cognito provides authentication, authorization, and user management functionality for web and mobile applications. In addition, AWS KMS is a key management service for encryption and decryption.

These are a few examples of AWS serverless services to give you an overview of serverless services from a serverless provider perspective.

Serverless Computing Pros & Cons

The first is cost. In general, serverless computing is a cost-effective solution for many application types, particularly web applications with an unpredictable number of requests. With a traditional cloud setup, we pay for the entire server resource, including unused or idle capacity. In serverless computing, we don't pay for idle resources.

Next is scalability, one of the main advantages of serverless computing. Engineers don't have to give much thought to scaling infrastructure. We must still design our code to be scalable and stateless, but the serverless vendor takes care of ensuring the system doesn't degrade as load increases.

Next on the list is simpler backend code. As we know, function-as-a-service is one of the central components of serverless architecture. Using FaaS, we can write highly cohesive code that implements one and only one functional aspect. Since non-functional concerns are offloaded to the provider, the FaaS code itself stays simple.

And the last one is reduced time-to-market. Serverless architecture cuts time-to-market significantly: instead of planning and setting up servers for dev, test, and production, we can leverage serverless offerings from providers, saving substantial time.

You may wonder whether serverless computing has any drawbacks. Yes, there are a few.

Before completing this topic, let's talk about the cons of serverless, mainly start-up latency and the difficulty of monitoring and debugging.

To run FaaS, a server or container must be running, but if it is not processing requests, the provider shuts it down to save energy and computing resources. When the next request arrives, the container must be started fresh, and this cold start adds latency. The latency can be an issue if requests arrive with long gaps between them; if requests are continuous, restarts are not a problem. Note also that serverless doesn't provide much cost savings for consistent, predictable workloads, but it handles sudden spikes and unpredictable load patterns very well.
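One common way to soften the cold-start penalty is to perform expensive initialization once, at module load, outside the handler, so that warm invocations in the same container reuse it. A minimal sketch, with the setup stubbed out:

```python
# Sketch of the usual cold-start mitigation: do expensive setup once at
# module load (outside the handler) so warm invocations reuse it. Only the
# first invocation in a fresh container pays the initialization cost.

INIT_COUNT = 0

def expensive_setup():
    # Placeholder for loading config, opening DB connections, etc.
    global INIT_COUNT
    INIT_COUNT += 1
    return {"ready": True}

RESOURCES = expensive_setup()  # runs once per container, at cold start

def handler(event, context=None):
    # Warm invocations reuse RESOURCES instead of re-initializing.
    return {"ready": RESOURCES["ready"], "inits": INIT_COUNT}
```

This doesn't eliminate the first cold start, but it keeps every subsequent warm invocation fast.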

Another issue with serverless computing relates to monitoring and debugging. Monitoring and debugging are generally challenging in any distributed environment, and low-level debugging is especially tedious here because of the abstraction layers of distributed cloud computing.

Serverless Computing Use Cases

The first typical use case is API-based applications. Suppose we have a web application with an API-driven backend. We can use FaaS to write the functional code for the APIs that interact with the database, and front the FaaS code with an API gateway. In API-based web applications on the cloud, serverless is especially attractive when the load is unpredictable or prone to sudden spikes.

The next use case is microservices. Many microservices applications today are built using containers; however, serverless can offer a faster turnaround, as we can write highly cohesive FaaS code to implement microservices.

Another potential use case is ETL-style applications. Serverless computing is a good fit for ETL work such as data cleaning, transformation, enrichment, and validation because data volumes are often unpredictable. By the same token, building a real-time ETL application is also a good use case for serverless: we can leverage FaaS to write code for enrichment, validation, cleaning, and orchestration. Applications that are inherently parallel in computation, such as map-reduce-style applications, are also a good match for serverless computing.
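A FaaS-style ETL step is usually just a pure function over a batch of records. The sketch below cleans and validates incoming records and drops the invalid ones; the field names and validation rules are invented for illustration:

```python
# Sketch of a FaaS-style ETL step: clean and validate incoming records,
# keeping only the valid ones. Field names and rules are illustrative.

def transform(records):
    cleaned = []
    for rec in records:
        email = (rec.get("email") or "").strip().lower()
        name = (rec.get("name") or "").strip()
        if "@" not in email or not name:
            continue  # drop records that fail validation
        cleaned.append({"name": name, "email": email})
    return cleaned
```

Because the function is stateless and operates record by record, the platform can fan out many instances in parallel when a large batch arrives, which is exactly the map-reduce-friendly property mentioned above.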

Summary

The serverless architecture stack provides a function-as-a-service component for writing services without worrying about scalability. It likewise includes database and storage services, freeing us from working out how many extra servers we need to handle additional load. The serverless provider takes care of provisioning and scaling, and we pay only for usage.

The stack also provides services for event-driven and stream processing. In a non-serverless environment, we would need to monitor and troubleshoot continuously and handle scalability ourselves, with the ever-present risk of over-provisioning or under-provisioning.

If you have worked on an event-driven or stream-processing application in a non-serverless environment, you can easily appreciate how helpful serverless architecture is: we focus on writing code for business logic and functional requirements and let the serverless provider manage runtime computing resources. Finally, an API gateway fronts everything, handling request routing, rate limits, CORS, and authentication.
