Obviously there will always be servers somewhere that actually run your code, but something called ‘serverless’ is on its way to becoming mainstream. We can keep calm though: as they say about cloud computing, it is just someone else’s computer. And don’t panic, we don’t all need to build serverless architectures now. I mean all this in an objective manner, not a negative one, because I actually see a lot of potential in this new phenomenon. Keeping the above in mind will help you make the most informed choice for your architecture, whether to go serverless or not. Just see it as another tool in your toolbox. Having said that, let’s look at some descriptions by people far smarter and more eloquent than me:
“Serverless architectures refer to applications that significantly depend on third-party services (known as Backend as a Service or “BaaS”) or on custom code that’s run in ephemeral containers (Function as a Service or “FaaS”)”
~ Mike Roberts at https://martinfowler.com/articles/serverless.html
“These are Platform as a Service (PaaS) services that take the “fully managed” aspects of the service to the max.”
~ Chris Pietschmann at https://buildazure.com/2016/11/03/what-is-serverless-architecture/
“Serverless architecture is an approach that replaces long-running virtual machines with ephemeral compute power that comes into existence on request and disappears immediately after use.”
I’m going to borrow the terms BaaS and FaaS from Mike Roberts here. In this article the focus will be on the FaaS part of the quotes. You see, BaaS refers to third-party services (usually in the cloud) for things like authentication (AWS Cognito, Azure AD), which mobile applications for instance use extensively for a range of purposes. This could’ve been the beginning of serverless architectures, but just as much as IaaS (Infrastructure as a Service) and PaaS could’ve been. I see FaaS (or “PaaS to the max”, as Chris Pietschmann called it) as the direction it needed to go to reach adulthood. And with the arrival of a few popular technologies at the FaaS level, like AWS Lambda, Google Cloud Functions (alpha) and Azure Functions (which I’ll be writing about in particular in this article), I thought the time was right for me to dive into the serverless world.
A typical before/after shot of a serverless transition to get to grips with the concept:
A very simplistic diagram just to show the basics. We see some client accessing some authentication/authorization logic, some backend logic and eventually some database, all on servers you configure and host yourself. In a serverless environment this could look as follows:
We see here some client hosted in some cloud (this could be an Azure Web App Service for instance), which integrates with a hosted security service (like Azure AD) and has its backend logic split up into multiple hosted components (think AWS Lambda, Azure WebJobs/Functions). And eventually the backend logic stores/retrieves data in hosted storage solutions (like AWS DynamoDB/Azure DocumentDB or Azure Blob Storage/Amazon S3). The important thing to note here is that we got rid of the servers in the diagram.
So let’s see some pros and cons.
To me a serverless architecture means that I don’t need to worry about the actual machines my code runs on, or about the automatic elastic scaling that can come with them. Why is this important? This becomes clearer later on when we discuss microservices. First let’s see where we come from.
We have seen the shift to DevOps, with cloud computing as catalyst. But did you expect Operations to do more Development work, or developers to do more Operations work? I hope you expected the latter. For developers to actually embrace Continuous Integration/Continuous Delivery (do what you find difficult more often), tighten the feedback loop (have more moments to verify you are building the right thing and building the thing right) and make the software more supple (low coupling/high cohesion), we as developers needed to make the mind shift that our work doesn’t stop at the business-value-adding code. It goes beyond that. We learned all about PowerShell and tools like Visual Studio Team Services, TeamCity and Octopus Deploy to help us manage our application lifecycle, thus getting our CI/CD pipelines up and running. We learned about SOLID principles and architectures like microservices, and perhaps read a book or two from Uncle Bob, just so we know how to get our code in the condition it needs to be in. Heck, you might have even heard of IaC (Infrastructure as Code), bringing your DevOps game to a whole new level!
So amidst all these patterns and principles, why do we need yet another one? Well, how about a world where we can have our cake and eat it too? Didn’t you sometimes feel as though the whole DevOps practice left you less time to focus on adding business value? Isn’t IaaS enough then? Or how about PaaS? These are here to stay and will, in my opinion, always have their use cases. But how much time have you spent setting up a VM in Azure? Trying to figure out the best set of rules for spinning up the right number of instances when needed? And so forth, and so on. FaaS takes away all that pain. How about Docker? What about it? I love Docker! And as long as there are IaaS and PaaS there will also be a place for Docker and other container technologies. But if my use case doesn’t require me to spin up a VM on the Amazon cloud myself, and for instance AWS Lambda combined with BaaS technology like AWS Kinesis for capturing and storing data will do the trick just fine, then why would I?
Of course, before you can make that decision you’ll need to know the disadvantages, so let’s come to those now.
So the biggest advantage, not having to provision or manage your servers, is also one of the potential disadvantages, or at least something you should be aware of. You see, the automatic upgrades, patches, security and configuration mean you are no longer in control. Is this a bad thing by definition? Well, let me just give you a real-world example from the company where I work. We handled the voting for the Junior Eurovision Song Contest in 2015… Live! Let that sink in. We had this awesome Service Oriented Architecture with messaging on the Azure cloud, which used its Storage, Service Bus (NServiceBus) and App Services. The challenge with the votes we handle is always a small window of time (15 minutes in this case) with a gigantic throughput (17 countries pushing data our way), plus a bunch of users monitoring every move our system makes via a web application on Azure. On September 15th 2016 a global DNS outage hit Microsoft Azure (http://www.zdnet.com/article/global-dns-outage-hits-microsoft-azure-customers/)! Sure, this was a year after that project. Sure, Microsoft guarantees uptime percentages of 99% in their SLAs. But to be really unfair about it, we didn’t need this specific system up and running 99% of the year. We needed it for 15 minutes on November 21st 2015. What if that 1% downtime had happened during those 15 minutes of live television? These are just considerations you need to be aware of.
The tooling around serverless architectures is not entirely there yet. I’ve mentioned microservices a couple of times now, but going fully serverless, and thus chopping up your code into tinier components to fit the ephemeral containers, is going to push developers towards architectures mostly akin to microservices. I promise more detail about microservices later in the dedicated section, but not being able to get a helicopter view of all your bits and pieces in some monitoring tool is not going to be great for quality. Not being able to debug your stuff properly is going to hamper productivity. Not being able to have it all play along nicely in your CI/CD pipelines is going to counteract exactly what was a nice big plus of serverless. I’m not saying there isn’t anything there! On the contrary, the Serverless Framework (https://serverless.com/) is doing some amazing work on top of AWS, and Microsoft has had monitoring and debugging in the Azure Portal from the start and recently released support for Visual Studio. But to give a small example: I wanted to publish an Azure Function I made in Visual Studio to my Azure cloud subscription, and that doesn’t work properly at the moment of writing. And I’m sure similar experiences could be shared by AWS and Google users.
Resources are limited, so you’ll be dealing first and foremost with stateless components. This means you can’t share anything: not RAM, not files on disk, nothing. Of course there are ways around this, like Redis Cache or Azure Function Apps (which act as a kind of grouping of functions that can actually share some state if necessary). Secondly, limited resources mean your execution times will be throttled. If for instance your AWS Lambda runs for more than five minutes, it’ll be put down like Old Yeller. The message here is to keep the components short and sweet, which is an inherent trade-off for the convenience of not caring about the underlying machines.
And lastly, how about some major vendor lock-in? At the end of the day you are going to put your data and your code on machines owned by AWS, Google or Microsoft. It’s as simple as that. You will invest a lot of time and money in their products, so it is not easy to switch. The day we can attach an Azure Function to an AWS Kinesis stream is the day you’ll hear me stop whining about this fact. Having said that, isn’t this something that happens anyway? As a developer who has seen a lot of Java and Linux you’ll lean more towards AWS, and when you’ve spent your career mastering .NET you’ll lean towards Azure. Just keep all the available options open when making mission-critical decisions for your architecture, because some vendors will be better at certain things than others, and it’s your job to make the right calls.
Side note: you can actually use .NET Core with AWS Lambda now! Just wanted to leave this here… (http://docs.aws.amazon.com/lambda/latest/dg/lambda-dotnet-coreclr-deployment-package.html). Plus, for these (and other) reasons you hear a lot about polynimbus cloud strategies (https://buildazure.com/2016/12/23/the-polynimbus-cloud-enterprise/).
The convenience of not caring about the actual machines is just the beginning. Let’s focus a bit more on the ‘elastic scaling’ part, because I think that, done right, it’ll actually help you write better structured software! Software that needs to scale horizontally needs to be properly structured. I can perhaps explain this best by making a sidestep to the aforementioned microservices.
Google Cloud Functions is described as “a serverless platform for building event-based microservices” (https://cloud.google.com/functions/). Tim Wagner, AWS Lambda General Manager, did a talk called “Microservices without the servers” in which he demoed AWS Lambda (https://aws.amazon.com/blogs/compute/microservices-without-the-servers/). In a talk for Channel 9, Chris Anderson explains how to leverage Azure Functions (and other technologies like Azure Service Fabric) to build microservice applications (https://channel9.msdn.com/Events/Connect/2016/Microsoft-Azure-2-Panel-with-QA). The desire to marry the two concepts is, not surprisingly, strong. To understand this, first a quick (and far too abbreviated) look at microservices.
I am going to use Sam Newman’s book “Building Microservices” a lot in this section. The following is a liberal interpretation of mine after reading that book, combined with my own experience: microservices are basically the evolution of SOA. DDD emerged and people started talking about Bounded Contexts as boundaries for their services. CI/CD emerged as a practice to make good SOA possible. IaaS (and the like) emerged to catapult SOAs into existence. Add Scrum and its teams promoting Agile development, plus the concept of scalable systems, and all of this together evolved into microservices.
So what enables you to get all the benefits out of a good microservices architecture? To quote Sam Newman: “Can you make a change to a service and deploy it by itself without changing anything else?” If this is not the case, then things will fall apart one by one, leaving you with only the disadvantages of microservices, which are all the disadvantages that come with distributed systems: higher complexity, more moving parts, etc. But what does the question mean exactly? It refers to high cohesion/low coupling. Let’s use the following diagram as a start for our architecture:
As you can see, when you need to scale out a monolithic system you have to copy the entire system. When a system of microservices is scaled out, you can scale it along its well-defined services. What I like to do when I talk about microservices is talk a lot about DDD’s Bounded Contexts to determine the boundaries of the services at the top level, and then discuss Business Components to determine how to divide them across endpoints, creating the microservices if you will, mostly in accordance with the philosophy of Udi Dahan (and others like Greg Young): “A given Bounded Context should be divided into Business Components, where these Business Components have full UI through DB code, and are ultimately put together in composite UI’s and other physical pipelines to fulfill the system’s functionality.” So the architecture may end up looking like:
The lines connecting the components represent the Business Components (which could also be deployed and hosted on their own). The service boundaries are colored green, orange and blue and are scaled out as needed. Now if we go serverless, with stateless, non-sharing functions that are throttled at five minutes of execution time because of the limited resources, you are going to be pushed to think even harder about high cohesion/low coupling, latency and horizontal scaling! Our example architecture as a serverless architecture could look like:
Think about how to integrate and deploy this continuously, without caring about the actual machines. The three major FaaS technologies named so far all have some form of automatic scaling, which, next to independent deployment, is another key benefit you’ll want when building microservices. And this has just been handed to you, automatically! This is what is going to make it easier to reap all the benefits of microservices while mitigating the disadvantages. How’s that for having your cake and eating it?
But the reason this article is about serverless on Azure is that with Azure Functions you can group functions within a Function App, and Azure Functions scale automatically without any configuration. Two key benefits that fit my architectural style better, and I’ll show you why.
Under the hood, Azure Functions are basically App Services, built on top of the WebJobs SDK and run in their own Azure Functions runtime. They make it possible to write small and concise event-triggered software components that run in these ephemeral containers, which in turn makes it possible to charge you only for how long your code actually runs. Because they run in ephemeral containers, and there are no worries about technical infrastructure, they can scale automatically as needed.
Let’s sidestep from all the conceptual talk and make one in a minute, triggered by an HTTP request.
Now find “my1stfunction” or whatever you called it in your resources and when you click on it you’ll see:
By default it created “HttpTriggerCSharp1” this way, which is the actual function. As you can see in the screenshot, you can add multiple functions to one Function App. You start in the “Develop” pane, where you see the actual code of the function. It starts you off with a short tutorial guiding you along the different panes which make up an Azure Function. For now let’s just copy the Function Url (encircled in red), open up a new tab and paste it in. Then add at the end of the Url: &name=Danny, because as you can see the default code expects a parameter with the key “name” in the query string or the body. When you hit enter you should see the following screen:
So next to being able to group functions in a Function App, which makes it a great match for microservices because it helps you avoid ending up with a nanoservices architecture (you want to keep that high cohesion, which is just as important as the low coupling), what else does it offer?
Bindings come in the shape of triggers that start a function, inputs that additionally feed the function, and outputs that the function produces. I prefer to build my microservices architecture on top of a solid messaging technology, like in this case Azure Service Bus or Queue Storage, and I would even rather have a mature framework in place like NServiceBus. Well, functions can be triggered by new messages in a Service Bus queue/topic or even Queue Storage! For me this means I can hook functions in simply by receiving the messages I already have flowing! More on bindings in a separate blog…
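To give an idea of what such a binding looks like: each function carries a function.json file describing its bindings. A minimal sketch for a Service Bus queue trigger could look roughly like this (the queue name and the name of the connection setting are made up for illustration):

```json
{
  "bindings": [
    {
      "name": "orderMessage",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "orders",
      "connection": "MyServiceBusConnection"
    }
  ]
}
```

The function’s code then simply receives the message content under the bound name, without knowing or caring how it arrived.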
You can have Azure AD coupled to your function in three clicks! You go to “Function app settings”, click “Configure authentication” and turn it on. A simple way to secure everything is essential when you have lots of moving parts, as with microservices! More on this in a separate blog…
You can debug in the Azure Portal and also in Visual Studio! You can run the same azure-functions-cli locally that Azure runs on its VMs to run Azure Functions! An easy way to debug your code is always important, and even more so in complex distributed environments. More on this in a separate blog…
You have the monitor pane in the Azure Portal. You can pin Function Apps to your dashboard. You can stream log files to a command-line session on a local workstation using the Azure CLI. Monitoring is one of the success factors of a microservices architecture; without it your moving parts will falter quietly. More on this in a separate blog…
If you go to “Function app settings” and click “Configure continuous integration”, you can deploy your function from GitHub, VSTS and others. The practice of CI/CD has proven to be of the utmost importance in an environment where you have lots of distributed bits hosted in many different places. More on this in a separate blog…
You can use an Azure Function as a step in your Azure Logic App! I don’t even know how to begin to describe how cool this is. That is why there’ll be more on this in a separate blog…
Serverless seems to have evolved to solve the pain of microservices (or architectures like microservices): environments where lots of moving bits and pieces are scattered across different containers (be they ephemeral or more long-lived) and where the operational side is eating away the time you spend creating actual business value. Will we be able to build entire enterprise systems on a technology like Azure Functions? At the moment of writing I don’t think that is wise. Especially the tooling around Azure Functions needs to mature more before you can fully go for it. But it sure does seem promising, and it is a delight to work with for the simple things you just want to add quickly to your existing system without too much hassle.
Want to read more of Danny van der Kraan? You can do so here.