By Slobodan Stojanovic
Serverless functions are amazing. I remember how impressed I was the first time I created a serverless image resizing tool. Just a few minutes of programming, and it was ready!
Okay, I did spend more than an hour figuring out how to deploy it, but the moment I managed to do so, everything worked seamlessly. I uploaded an image to my S3 bucket, and a few seconds later, a thumbnail was there. Magic.
One month and a few functions later, I realized that the deployment process was a bigger risk than the code of my functions. Each function was often just a few lines of code, but I had to set up everything around it before I could use it. That was a tedious manual process. Did I forget some of my Node.js dependencies? Was the trigger set correctly? What about roles and policies? Debugging a deployed function was a nightmare.
You don’t need to be a serverless expert to realize that help from a sophisticated deployment tool will make your development flow much more pleasant and reliable. In this article, I’ll give you some advice about how to deploy your Lambdas without too much hassle.
Removing the Risk from Deployment
Today there is a wide range of frameworks and deployment tools for serverless applications, but that wasn’t the case in the early days. As Node.js was, and still is, my language of choice, I had just a few options. The Serverless Framework was the de facto number-one tool. It is still one of the leading frameworks on the market, with excellent documentation and examples. However, I am not a big fan of config files, so I decided to try out a small deployment library called Claudia.js.
Claudia.js is not a framework; it’s a deployment library for Node.js functions to AWS Lambda. With its Claudia API Builder, it became the easiest way to build and deploy serverless APIs.
Using Claudia API Builder, I was able to build my APIs faster and then deploy them with a single command. Claudia installs all the dependencies for me, zips and uploads the code, configures roles and Amazon API Gateway, and gives me the API URL. It’s as simple as that.
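To give you an idea of what that looks like, here is a minimal API Builder module, sketched from Claudia’s documented usage (the route and names are illustrative, not from our actual app):

```javascript
// Minimal Claudia API Builder module (illustrative route and names).
const ApiBuilder = require('claudia-api-builder');
const api = new ApiBuilder();

// A route handler just returns a value (or a promise);
// API Builder turns it into an API Gateway response.
api.get('/thumbnails/{id}', (request) => {
  return { id: request.pathParams.id };
});

module.exports = api;
```

With this in place, a single command such as `claudia create --region us-east-1 --api-module api` builds the function, creates the role, and wires up API Gateway.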
Functions triggered by Amazon S3 or Amazon SNS were just slightly more complicated, as I needed to create buckets and topics manually, but it was easier and faster than the old way of building and deploying my serverless app.
With a deployment tool, serverless stopped being a toy and started being a tool that helped me do my job faster. I slowly started using serverless at my day job, then presented it to my team—and they loved it.
A few months later, we had a lot of serverless functions in production. Claudia did even more for us: whenever we tried to deploy a function with a Node.js syntax error, it caught the error before the deployment process started, saving us the time we would have spent debugging it.
However, we still had some issues from time to time. For example, trying to deploy a function subscribed to the wrong Amazon SNS topic still created the function and its role. Each deployment with issues created more garbage that was not collected automatically.
Plan B (or How to Fight Deployment Issues)
As I mentioned above, debugging serverless functions is not fun. Digging through CloudWatch logs can take a lot of time and often yields only partial information. Errors often require redeploying with additional logging and potential fixes.
To save time and lower the risk of errors, our application team used ESLint and similar tools. We even tried TypeScript, but it never made it to production. We also increased test coverage: all of our functions now have at least some unit tests to prevent simple errors, and all essential functions have high integration test coverage.
These actions decreased errors in our code, although errors still pop up from time to time. The deployment issues were more critical: sometimes roles were created but another piece of the deployment failed, so we ended up with a few “garbage collection” scripts and a script that removed older versions of our Lambda functions.
A Rocky Road to Production
As our application and team grew, we worked on our development and deployment process.
If you’ve ever tried to build a serverless app, you know that running it locally is almost impossible. So we looked for an architecture that would make our development flow straightforward.
After a few iterations, we ended up with a hexagonal architecture. Hexagonal architecture, or ports and adapters, is a pattern that allows an app to be equally driven by users, programs, automated tests, or batch scripts. You can develop and test your app in isolation from its future runtime and databases, which makes this pattern a perfect fit for microservices and serverless applications.
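The pattern is easier to see in code. In this sketch (all names hypothetical), the core logic depends only on a “storage” port; tests and local development plug in an in-memory adapter, while production would plug in an S3 adapter:

```javascript
// Ports-and-adapters sketch (all names hypothetical).
// The core business logic depends only on a "storage" port,
// never on the AWS SDK directly.
function makeImageService(storage) {
  return {
    async saveUpload(key, bytes) {
      await storage.put(key, bytes);
      return { key, size: bytes.length };
    },
  };
}

// In-memory adapter used by tests and local development.
function inMemoryStorage() {
  const files = new Map();
  return {
    put: async (key, bytes) => { files.set(key, bytes); },
    get: async (key) => files.get(key),
  };
}

// In production, the same port would be implemented by an adapter
// wrapping S3; the core service code never changes.
const service = makeImageService(inMemoryStorage());
service.saveUpload('cat.png', Buffer.from([1, 2, 3]))
  .then((result) => console.log(result.key, result.size));
```

Swapping adapters at the boundary is what lets you develop and test the whole service without a deployed runtime or a real database.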
Once we adopted hexagonal architecture, our development flow was much more comfortable. However, some things still required a deployed environment (i.e., you need a deployed API to test your front-end application).
Deploying a complex application with many services to multiple environments is an error-prone process. If you forget just one piece of the puzzle, your serverless application won’t be complete, and you’ll end up with unexpected behavior. It was evident that we needed a better way to deploy the whole application.
After some research, we decided to move gradually to the AWS Serverless Application Model (AWS SAM). AWS SAM is an open-source framework consisting of a template specification (an extension of AWS CloudFormation templates) and a command-line interface (CLI).
Once we fully migrated to AWS SAM, adding a new environment was a matter of running a single command. Easier deployment enabled us to have a test environment. However, we didn’t stop there; now we had a separate environment for each developer. Why? Because they are useful, and because development environments cost us nothing, as they have low usage. Remember, in a serverless application, you only pay when someone is using your app. For example, for each request to your serverless API, you pay for the API Gateway request and a Lambda function execution.
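To make this concrete, here is a minimal, illustrative SAM template with a stage parameter (the resource names and paths are hypothetical, not our production setup):

```yaml
# Minimal, illustrative SAM template (names and paths are hypothetical).
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Parameters:
  Stage:
    Type: String
    Default: dev

Resources:
  ThumbnailApi:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: ./src
      Events:
        Api:
          Type: Api
          Properties:
            Path: /thumbnails
            Method: get
```

With a template like this, each developer can stand up a personal environment as a separate stack, for example with `sam deploy --stack-name myapp-alice --parameter-overrides Stage=alice`.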
Automating the Process
Our serverless puzzle was almost complete, and the last piece was automation. The deployment process was simple, but manual deployments through all of our environments would slow us down, and we wanted our deployment process to be fast and independent. To fully automate it, we needed a continuous integration/continuous delivery (CI/CD) service.
Which CI/CD tools can you use for serverless applications? Most of the time, your favorite tool will work seamlessly. In the beginning, we used our favorite, Semaphore CI, as Claudia requires minimal permissions to update functions. Semaphore is a hosted continuous integration and deployment service used for testing and deploying software projects hosted on GitHub and Bitbucket.
We used a git branching model similar to GitFlow. With GitFlow, changes pushed to a specific branch trigger a deployment to the selected environment. For example, a push to the “testing” branch triggers a deployment to the testing environment.
However, migrating to AWS SAM brought a significant new risk: to update our application, we needed to grant many more permissions to the AWS user our CI/CD tool was using. We trusted Semaphore CI, but we were a bit worried about sharing access keys with so many permissions. We needed a better approach.
The best way to reduce the number of permissions that we shared with third-party applications was not to share them at all. Of course, AWS had us covered, and we replaced Semaphore CI with AWS CodePipeline. According to Amazon, “AWS CodePipeline is a continuous delivery service that enables you to model, visualize, and automate the steps required to release your software.”
So You Deployed Your App to Production: What’s Next?
Automation completed our development and deployment process. However, even after you deploy your serverless application to production, the game is not over. Out there in the wild, your app encounters many challenges.
Sure, our apps are well-tested, but we can only test for expected circumstances, and once the app is in production, you see a lot of surprising situations. Users can be creative, and apps fail in weird ways.
Even if you are lucky enough to be spared from unexpected issues, many other things can fail. The infrastructure of our serverless applications is fully managed, but our code and business logic are not. That’s not all; serverless apps heavily depend on integrations, and even if your code works seamlessly, any of the integrations can fail.
So, is there a solution, or are all of our apps doomed to fail at some point? Well, we can’t guarantee that nothing will fail, but we can monitor for failures and make sure we fix them quickly. In serverless apps, tracking down an error can be challenging.
Imagine that an API request contains some incorrect data, but you don’t detect it immediately. Instead, your Lambda function saves the data to a file on Amazon S3, which triggers another Lambda function that does some background processing and saves your data to a DynamoDB table. Eventually, a DynamoDB stream triggers another Lambda function, and you finally detect the error. But how do you track the error back to its origin?
The answer is not simple. However, there are some excellent tools on the market that help us monitor and troubleshoot our serverless application issues. After trying a few of them, my team ended up using this one. It’s simple to integrate, gives us many insights that aren’t visible with native AWS tools like CloudWatch, and helps us understand how information flows through our app. As a bonus, it also helps us understand the cost of our serverless apps and decide what to improve and optimize.