Going Serverless with AWS: Part 2


In our previous post, we discussed the history of serverless, Platform as a Service (PaaS), Backend as a Service (BaaS), and Functions as a Service (FaaS). We also introduced AWS Lambda, a service that runs code in response to events from across your application, and examined its limitations. In this second part of our series on Going Serverless with AWS, we will discuss how to integrate the various components of a serverless application, walk through a stateless, event-based flow, and look at what is next for AWS in the serverless space.

Integrating Components

If you have ever worked with microservices or taken part in rebuilding an existing monolith app into microservices, it should be relatively easy for you to understand the concepts and apply your experience to creating serverless applications. In any case, the main idea is that instead of building a single application, you split the logic into small, independent pieces. 

Assume you have a simple application that enables users to create and edit their profiles, create boards, and upload images to those boards. You can divide the functionality into three main parts: user management, board management, and images. Then you can implement each part as a function, or split each part into even smaller pieces (one function per CRUD operation) and develop them separately.
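As a minimal sketch of the "one function per CRUD operation" idea, here is what a standalone "create profile" Lambda handler might look like, using the API Gateway proxy event shape. The function name, response fields, and profile ID are illustrative assumptions, and persistence is omitted:

```python
import json

def create_profile(event, context):
    """One small, independent function: it handles only 'create profile'.

    A separate function would handle each other CRUD operation
    (get, update, delete), so each piece can be developed and
    deployed on its own.
    """
    body = json.loads(event.get("body") or "{}")
    if not body.get("name"):
        return {"statusCode": 400,
                "body": json.dumps({"error": "name is required"})}
    # Persistence (e.g. a DynamoDB put_item call) would go here;
    # the ID below is a hypothetical placeholder.
    profile = {"id": "profile-123", "name": body["name"]}
    return {"statusCode": 201, "body": json.dumps(profile)}
```

Because the function owns a single operation, it can be tested in isolation by passing it a plain event dictionary, with no framework involved.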

Some functions can be invoked directly by another function, but the usual approach is to invoke a function in response to an event that occurred.

Stateless and Event-Based Flow

Let’s continue with the photo-board application mentioned above. You would like your users to upload pictures, so, of course, all uploaded files should be resized before they are placed on the board. To do this, you’ll want to trigger a resizing mechanism each time a new file appears in storage. In a monolithic application, such a mechanism can be very straightforward, but also inefficient. For example, the user uploads the file to the application server; then the image is resized, uploaded to S3, and, finally, removed from the server’s disk. There are other ways to do this, but all of them share the same drawback: extra load on the main server, leading to degraded performance or unnecessary scaling.

Using the serverless approach, however, this can be done with two functions: an upload token generator and an image processor. The upload token generator is triggered by an HTTP event and is responsible for checking the user’s access rights, generating a one-time S3 upload token, and sending it back to the client. The image processor is triggered by an S3 “create object” event: it fetches the new image from S3, resizes it, and saves the resized copy.
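One way to implement the token generator is with a presigned S3 POST, which acts as the one-time token: it expires and is scoped to a single object key. The sketch below assumes an API Gateway authorizer supplies the user ID; the bucket name, key layout, and `userId` field are illustrative assumptions (the injectable `s3_client` parameter is just a testing convenience, not part of the Lambda contract):

```python
import json

BUCKET = "photo-board-uploads"  # hypothetical bucket name

def upload_token_handler(event, context, s3_client=None):
    """Triggered by an HTTP event: check access, return a one-time upload token."""
    user_id = (event.get("requestContext", {})
                    .get("authorizer", {})
                    .get("userId"))
    if not user_id:
        return {"statusCode": 403, "body": json.dumps({"error": "forbidden"})}

    if s3_client is None:  # build the real client only when one isn't injected
        import boto3
        s3_client = boto3.client("s3")

    # A presigned POST is scoped to one key and expires after 5 minutes,
    # so it behaves like a one-time upload token for the client.
    post = s3_client.generate_presigned_post(
        Bucket=BUCKET,
        Key=f"uploads/{user_id}/original.jpg",
        ExpiresIn=300,
    )
    return {"statusCode": 200, "body": json.dumps(post)}
```

The client then uses the returned `url` and `fields` to upload the file straight to S3, so the image bytes never pass through your function.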

So, the general flow should look like this:

  1. The client-side application requests a one-time token by calling the first Lambda.
  2. The first Lambda performs a user check, generates a token, and sends it back to the client.
  3. The client uploads the file directly to the S3 bucket using the generated token.
  4. After the object is uploaded, the second Lambda is automatically triggered.
  5. The second Lambda reads the file info from the “create object” event, downloads the image, resizes it, and saves the copy back to S3.
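The second Lambda in the flow above could be sketched like this. Parsing the S3 event payload is shown in full (its structure follows the documented S3 event record format); the bucket name, output prefix, and target size are illustrative assumptions, and Pillow is assumed to be bundled in the deployment package or a Lambda layer:

```python
import io

def parse_s3_event(event):
    """Extract (bucket, key) pairs from an S3 'create object' event payload."""
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event.get("Records", [])
    ]

def image_processor(event, context):
    """Triggered by an S3 'create object' event: download, resize, save a copy."""
    import boto3           # imported lazily so the parser is testable offline
    from PIL import Image  # Pillow, bundled with the function or via a layer

    s3 = boto3.client("s3")
    for bucket, key in parse_s3_event(event):
        obj = s3.get_object(Bucket=bucket, Key=key)
        img = Image.open(io.BytesIO(obj["Body"].read()))
        img.thumbnail((1024, 1024))  # resize in place, keeping aspect ratio
        out = io.BytesIO()
        img.save(out, format=img.format or "JPEG")
        # Write under a different prefix so the upload of the resized copy
        # does not re-trigger this same function.
        s3.put_object(Bucket=bucket, Key=f"resized/{key}", Body=out.getvalue())
```

Note that the function is stateless: everything it needs arrives in the event, so any container AWS spins up can handle any invocation.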

In this approach, both functions are stateless (they don’t rely on previous invocations in any way), and neither one puts any additional load on other parts of the application. Moreover, even if hundreds of users start uploading files simultaneously, AWS will absorb that load spike without issue by automatically creating new containers and reusing old ones as they finish their work. The great thing about this is that you will be charged only for the total execution time of all functions. After the load normalizes, AWS will “kill” all unnecessary containers and release the resources they used.

What’s Next?

In the few years since Lambda functions were released, AWS has developed and released more on-demand computing services that require minimum configuration and offer a “pay-as-you-go” billing model. For example, AWS Fargate is another cost-effective service designed to free you from managing containers and infrastructure.

Every year, more and more AWS services become serverless, or at least easier to integrate with serverless applications. Amazon Aurora Serverless, which launched in 2018, is a great example of that trend. It’s auto-scaling, self-managed, and cost-effective, yet it is still a plain relational database, suitable for infrequently used applications or applications with a spiky load.

Summary

Today, it is possible to build fully serverless applications that cost you almost nothing when traffic is low, yet remain ready to handle huge loads. Almost every AWS service has a free tier that refreshes each month. For example, Lambda’s free tier includes 1 million invocations per month, which means low-usage applications can run for free.

While it is still quite difficult to build, support, and monitor complex applications consisting of hundreds of functions, tools like Serverless Framework and Epsagon make life much easier.
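With the Serverless Framework, for instance, the two functions from the photo-board example could be declared and wired to their triggers in a few lines of configuration. This is a sketch; the service name, handler paths, runtime, and bucket name are illustrative assumptions:

```yaml
# serverless.yml (illustrative sketch)
service: photo-board
provider:
  name: aws
  runtime: python3.12
functions:
  uploadToken:
    handler: handlers.upload_token_handler
    events:
      - httpApi: POST /upload-token   # HTTP trigger for the token generator
  imageProcessor:
    handler: handlers.image_processor
    events:
      - s3:
          bucket: photo-board-uploads
          event: s3:ObjectCreated:*   # S3 trigger for the image processor
```

A single deploy command then packages both functions and provisions their event sources, which is exactly the kind of bookkeeping that gets painful by hand at hundreds of functions.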

So, is moving an existing app to a serverless architecture worth it? The general answer is yes. Even moving a small part of your functionality to serverless can improve overall performance and decrease your monthly bill. Start with the parts that are truly independent and continue with bigger ones where possible. If you’d like to learn more about serverless, check out these great courses for beginners and more experienced developers alike.

 

As an AWS Advanced Consulting Partner, Media Temple can help you get the most from your AWS cloud. Reach out anytime.
