The term serverless generates a lot of discussion. What exactly does it mean, and how can it help developers move from a monolithic architecture to a distributed one? There is also confusion about the respective benefits of containers and serverless architectures. Both are modern approaches to application management, and each has specific benefits.

The best way to understand the difference between containers and serverless architecture is to look at the developer communities around each. Most documentation for Docker’s container approach addresses how to manage your infrastructure: the tools are designed to make it easier to manage the underlying hardware or virtual machines and to spread containers across multiple servers or instances in AWS. Documentation for serverless frameworks, and activity within the serverless community, tends to focus instead on building serverless applications.

Approaches to serverless development
There are several approaches to serverless development. Developers who transition from a traditional framework, such as Flask, Rails or Express, might choose a serverless framework, such as Chalice for Python or Serverless for Node.js. These frameworks resemble the traditional ones, which eases the transition for those developers.
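To illustrate why the transition feels familiar, here is a minimal Chalice sketch; the app name, route and handler are placeholder assumptions rather than code from the article, but the decorator-based routing closely mirrors Flask:

    # Minimal Chalice app; the app name, route and handler are illustrative.
    from chalice import Chalice

    app = Chalice(app_name='email-marketing')

    @app.route('/recipients/{list_id}')
    def get_recipients(list_id):
        # A real handler would query a data store such as DynamoDB here.
        return {'list_id': list_id, 'recipients': []}

Deploying an app like this with the framework’s own tooling provisions the function and API route, which is the operational work a traditional framework would otherwise leave to the team running the servers.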

Additionally, developers who use AWS CloudFormation will discover there is a limit to how complex the APIs can be, so they will need to split an API apart once it has too many endpoints or operations. Furthermore, all of the same pitfalls of any monolithic service apply: it becomes harder to upgrade, harder to maintain and a single point of failure for the environment. However, with a single function, cold starts are easier to manage.

Serverless functions are often chained together, a common pattern that can help mitigate the five-minute runtime limit, as well as the 50 MB deployment package size limit. In the email marketing system example, the first function, which handles building the recipient list, needs access to Amazon DynamoDB to pull down recipients, but it doesn’t need the code that processes the email template or sends the actual email messages.
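As a rough sketch of that chaining pattern (the target function name and payload shape here are assumptions, not details from the example), the recipient-list function could hand off to the next stage asynchronously with boto3:

    # Sketch of chaining: pass the recipient list to the next function asynchronously.
    # The target function name and payload shape are illustrative assumptions.
    import json
    import boto3

    lambda_client = boto3.client('lambda')

    def hand_off(recipients):
        lambda_client.invoke(
            FunctionName='render-email-template',  # hypothetical next stage in the chain
            InvocationType='Event',                # asynchronous; don't wait for a response
            Payload=json.dumps({'recipients': recipients}).encode('utf-8'),
        )

Because each stage ships only the code and dependencies it actually needs, each deployment package stays small and each function stays comfortably inside the runtime limit.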

Tools for debugging serverless architectures
Traditionally, developers could simply log into the system, run the application, tail logs and test input to debug. In a serverless architecture, there is no server to log into, and running the application locally can be a lot more complicated. Some AWS plug-ins, such as Serverless Offline and SAM Local, support running a majority of applications offline. However, they don’t work as well when an authorization step happens in another repository or when multiple functions need to be chained together. In many cases, developers must run their own stack for development and test, and then push changes to a development AWS account.
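One low-tech workaround, when the offline plug-ins fall short, is to call the handler directly with a hand-built event; the module path and event fields below are simplified, hypothetical examples rather than a complete API Gateway schema:

    # Invoke a Lambda handler locally with a minimal fake event for debugging.
    # The module path and event fields are simplified, hypothetical examples.
    from my_service.app import handler  # hypothetical module containing the handler

    fake_event = {
        'httpMethod': 'GET',
        'path': '/recipients/123',
        'queryStringParameters': None,
    }

    if __name__ == '__main__':
        # A real API Gateway event carries many more fields; add them as the handler needs.
        print(handler(fake_event, None))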

The serverless tradeoff
Overall, serverless lets development teams focus more on the product and the output of an organization, but it does require more planning to handle testing and monitoring. Organizations that plan to use serverless should first build out a project map that helps them decide whether to use a microservices architecture or rely on a single-function router to handle API requests; a sketch of the latter follows below. If executed correctly, a serverless architecture can save development teams time when they push out new features and can scale nearly infinitely. If developers skip advance planning and precautions, it can lead to problems down the road.
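For reference, a single-function router looks roughly like the following: one Lambda receives every API request and dispatches on method and path, whereas a microservices design gives each route its own function. The routes and handlers here are hypothetical placeholders:

    # Sketch of a single-function router: one Lambda dispatches every API request.
    # The routes and handler functions are hypothetical placeholders.
    def list_recipients(event):
        return {'statusCode': 200, 'body': '[]'}

    def send_campaign(event):
        return {'statusCode': 202, 'body': 'queued'}

    ROUTES = {
        ('GET', '/recipients'): list_recipients,
        ('POST', '/campaigns'): send_campaign,
    }

    def handler(event, context):
        route = ROUTES.get((event['httpMethod'], event['path']))
        if route is None:
            return {'statusCode': 404, 'body': 'not found'}
        return route(event)

The router keeps deployment simple but reintroduces some of the monolithic traits described earlier, while one function per route scales and fails independently; that is the tradeoff the project map should settle up front.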

Understanding the benefits of serverless functions

The task of simultaneously managing server infrastructure and writing code is exhausting. For overworked developers, today’s RESTful APIs and serverless platforms may be a match made in heaven.

Despite the name, serverless computing doesn’t actually mean that there aren’t any servers involved. It simply means that developers aren’t required to think about the servers. This characteristic is what allows developers to build powerful, single-purpose applications without having to deal with resource management. This can certainly make developers’ lives easier, while at the same time providing helpful abstractions away from the complexities of API integrations.

Beware of the caveats
It is important to note that the above scenarios focus on a very specific type of serverless infrastructure, called function as a service (FaaS). This is the model made popular by AWS Lambda. The other type of serverless infrastructure removes all consideration of servers entirely. These platforms tend to offer highly vertical functionality, such as supporting only one language or only one type of application.

Benefits of serverless APIs
Building and deploying the right type of API on a serverless platform brings many benefits, including reducing repetitive, low-value tasks in software development. Even better, it relieves worries about scaling and managing APIs. For a business, serverless computing can lower development costs and increase monetization opportunities without the risk of vendor lock-in.

Some key benefits to deploying serverless APIs include the following:

  • Security. Serverless APIs run in a completely trusted and secure environment, Reynolds said. Serverless is a way to deploy proprietary APIs and code in almost any environment without anybody being able to see what’s going on. It promises, and has so far delivered, that “nobody can actually attack that code,” he said.
  • Scalability. The traditional way to build safe applications doesn’t scale well, Reynolds said. Serverless scales automatically, which pays off when APIs and apps must scale to meet the needs of many customers at once. Reynolds cited Nordstrom’s use of Amazon Web Services’ Lambda serverless platform to scale customer requests for product recommendations and cut response times from minutes to seconds. “You can’t keep adding mainframes to do this stuff,” Reynolds said.
  • Cost. Users only pay for the compute time they use. For example, Cannabiz Media has clients nationwide and a highly unpredictable traffic pattern. There are spikes first thing in the morning in each U.S. time zone, but traffic can also spike in the early evening. “It doesn’t make sense for us to run even one server overnight when most of the time there’s not going to be anyone using it,” Moyer said. With requests routed through a serverless API, the company pays per request, not for the time when no one is using the system.
  • Management. It’s automated. “You don’t have to manage anything,” Reynolds said. “If there need to be another 10 instances in Canada, boom, they’re there. You didn’t even have to know about it. They’re just there, as opposed to all the paperwork you’d have to do if you needed a new server.”
  • Monetization. Businesses can sell their APIs as a serverless function, rather than taking the longer route of patenting them, Reynolds said. For example, a developer creates a black box function that helps a retailer sell more frozen peas. The developer could provide it as a serverless interface available in a cloud provider’s API library and then charge per call of that function, Reynolds said. “Serverless opens the opportunity to monetize algorithms, APIs and other ideas that you couldn’t see how to do before,” he said.
  • No lock-in. While Amazon delivered the first modernized cloud-based serverless offering, it’s not the only game in town, Moyer said. Today, changing serverless framework providers is simple. “A DevOps team can choose to re-architect for serverless using a cloud provider’s FaaS platform or taking a simple approach by deploying software on a serverless framework,” Moyer said.
