AWS Compute Blog
Developing .NET Core AWS Lambda functions
This post is courtesy of Mark Easton, Senior Solutions Architect – AWS
One of the biggest benefits of Lambda functions is that they isolate you from the underlying infrastructure. While that makes it easy to deploy and manage your code, it’s critical to have a clearly defined approach for testing, debugging, and diagnosing problems.
There are a variety of best practices and AWS services to help you out. When developing Lambda functions in .NET, you can follow a four-pronged approach:
- Unit testing to test and debug functional units in isolation
- Local integration testing using the AWS Serverless Application Model (AWS SAM) CLI
- Logging with Amazon CloudWatch to record events and errors
- Recording in AWS X-Ray to trace execution across services
This post demonstrates the approach by creating a simple Lambda function, fronted by Amazon API Gateway, that returns the current UTC time. The post shows you how to design your code to allow for easy debugging, logging, and tracing.
If you haven’t created Lambda functions with .NET Core before, then the following posts can help you get started:
- AWS Lambda .NET Core 2.0 Support Released
- Using the AWS Lambda Project in Visual Studio
- AWS Serverless Applications in Visual Studio
Unit testing Lambda functions
One of the easiest ways to create a .NET Core Lambda function is to use the .NET Core CLI and create a solution using the Lambda Empty Serverless template.
If you haven’t already installed the Lambda templates and the Lambda tools, run the following commands:
dotnet new -i Amazon.Lambda.Templates::*
dotnet tool install --global Amazon.Lambda.Tools
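If you want to confirm that the templates and tools installed correctly, you can optionally list the installed templates (you should see serverless.EmptyServerless among them):

dotnet new --list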
You can now use the template to create a serverless project and unit test project, and then add them to a .NET Core solution by running the following commands:
dotnet new serverless.EmptyServerless -n DebuggingExample
cd DebuggingExample
dotnet new sln -n DebuggingExample
dotnet sln DebuggingExample.sln add */*/*.csproj
Although you haven’t added any code yet, you can validate that everything’s working by running the unit tests. Run the following commands:
cd test/DebuggingExample.Tests/
dotnet test
One of the key principles of effective unit testing is ensuring that units of functionality can be tested in isolation. It’s good practice to decouple the Lambda function’s business logic from the plumbing code that handles the Lambda requests themselves.
Using your favorite editor, create a new file, ITimeProcessor.cs, in the src/DebuggingExample folder, containing the following basic interface:
using System;

namespace DebuggingExample
{
    public interface ITimeProcessor
    {
        DateTime CurrentTimeUTC();
    }
}
Then, create a new TimeProcessor.cs file in the src/DebuggingExample folder. The file contains a concrete class implementing the interface.
using System;

namespace DebuggingExample
{
    public class TimeProcessor : ITimeProcessor
    {
        public DateTime CurrentTimeUTC()
        {
            return DateTime.UtcNow;
        }
    }
}
Now add a TimeProcessorTest.cs file to the test/DebuggingExample.Tests folder. The file should contain the following code:
using System;
using Xunit;

namespace DebuggingExample.Tests
{
    public class TimeProcessorTest
    {
        [Fact]
        public void TestCurrentTimeUTC()
        {
            // Arrange
            var processor = new TimeProcessor();
            var preTestTimeUtc = DateTime.UtcNow;

            // Act
            var result = processor.CurrentTimeUTC();

            // Assert time moves forwards
            var postTestTimeUtc = DateTime.UtcNow;
            Assert.True(result >= preTestTimeUtc);
            Assert.True(result <= postTestTimeUtc);
        }
    }
}
You can then run all the tests. From the test/DebuggingExample.Tests folder, run the following command:
dotnet test
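If you only want to run the new TimeProcessor tests rather than the whole suite, dotnet test also accepts a filter; for example:

dotnet test --filter FullyQualifiedName~TimeProcessorTest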
Surfacing business logic in a Lambda function
Now that you have your business logic written and tested, you can surface it as a Lambda function. Edit the src/DebuggingExample/Function.cs file so that it calls the CurrentTimeUTC method:
using System;
using System.Collections.Generic;
using System.Net;
using Amazon.Lambda.Core;
using Amazon.Lambda.APIGatewayEvents;
using Newtonsoft.Json;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(
    typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace DebuggingExample
{
    public class Functions
    {
        ITimeProcessor processor = new TimeProcessor();

        public APIGatewayProxyResponse Get(
            APIGatewayProxyRequest request, ILambdaContext context)
        {
            var result = processor.CurrentTimeUTC();
            return CreateResponse(result);
        }

        APIGatewayProxyResponse CreateResponse(DateTime? result)
        {
            int statusCode = (result != null) ?
                (int)HttpStatusCode.OK :
                (int)HttpStatusCode.InternalServerError;

            string body = (result != null) ?
                JsonConvert.SerializeObject(result) : string.Empty;

            var response = new APIGatewayProxyResponse
            {
                StatusCode = statusCode,
                Body = body,
                Headers = new Dictionary<string, string>
                {
                    { "Content-Type", "application/json" },
                    { "Access-Control-Allow-Origin", "*" }
                }
            };

            return response;
        }
    }
}
First, an instance of the TimeProcessor class is created, and a Get() method is then defined to act as the entry point to the Lambda function.
By default, .NET Core Lambda function handlers expect their input in a Stream. This can be overridden by declaring a custom serializer, and then defining the handler’s method signature using a custom request and response type.
Because the project was created using the serverless.EmptyServerless template, it already overrides the default behavior. It does this by including a using reference to Amazon.Lambda.APIGatewayEvents and then declaring a custom serializer. For more information about using custom serializers in .NET, see the AWS Lambda for .NET Core repository on GitHub.
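For comparison, the following is a minimal sketch of what a handler looks like without a custom serializer, where the event arrives as a raw Stream (the RawFunctions class and HandleRaw method are hypothetical names, not part of this post’s project):

using System.IO;
using Amazon.Lambda.Core;

namespace DebuggingExample
{
    // Sketch only: with no serializer registered, the handler receives the
    // raw event payload as a Stream and must parse it itself.
    public class RawFunctions
    {
        public void HandleRaw(Stream input, ILambdaContext context)
        {
            using (var reader = new StreamReader(input))
            {
                // Logs the raw JSON event exactly as Lambda received it.
                context.Logger.LogLine(reader.ReadToEnd());
            }
        }
    }
}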
Get() takes a couple of parameters:
- The APIGatewayProxyRequest parameter contains the request from the API Gateway fronting the Lambda function
- The optional ILambdaContext parameter contains details of the execution context.
The Get() method calls CurrentTimeUTC() to retrieve the time from the business logic.
Finally, the result from CurrentTimeUTC() is passed to the CreateResponse() method, which converts the result into an APIGatewayProxyResponse object to be returned to the caller.
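Although this simple function doesn’t need them, the same two parameters expose useful data. The following is a hypothetical sketch of a variation of Get() inside the same Functions class (GetWithDetails is an illustrative name only):

// Hypothetical sketch: shows the kind of information available on the
// request and context objects; it relies on the processor field and
// CreateResponse() method already defined in the Functions class.
public APIGatewayProxyResponse GetWithDetails(
    APIGatewayProxyRequest request, ILambdaContext context)
{
    // Query string parameters passed through by API Gateway (may be null).
    var queryCount = request.QueryStringParameters?.Count ?? 0;

    // Runtime details supplied by Lambda.
    context.Logger.LogLine(string.Format(
        "{0} invoked with {1} query parameters and {2} remaining",
        context.FunctionName,
        queryCount,
        context.RemainingTime));

    var result = processor.CurrentTimeUTC();
    return CreateResponse(result);
}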
Because the updated Lambda function no longer passes the unit tests, update the TestGetMethod in the test/DebuggingExample.Tests/FunctionTest.cs file by removing the following line:
Assert.Equal("Hello AWS Serverless", response.Body);
This leaves your FunctionTest.cs file as follows:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Xunit;
using Amazon.Lambda.Core;
using Amazon.Lambda.TestUtilities;
using Amazon.Lambda.APIGatewayEvents;
using DebuggingExample;

namespace DebuggingExample.Tests
{
    public class FunctionTest
    {
        public FunctionTest()
        {
        }

        [Fact]
        public void TestGetMethod()
        {
            TestLambdaContext context;
            APIGatewayProxyRequest request;
            APIGatewayProxyResponse response;

            Functions functions = new Functions();

            request = new APIGatewayProxyRequest();
            context = new TestLambdaContext();
            response = functions.Get(request, context);
            Assert.Equal(200, response.StatusCode);
        }
    }
}
Again, you can check that everything is still working. From the test/DebuggingExample.Tests folder, run the following command:
dotnet test
Local integration testing with the AWS SAM CLI
Unit testing is a great start for testing thin slices of functionality. But to check that your API Gateway and Lambda function integrate with each other, you can run local integration tests using the AWS SAM CLI, installed as described in the AWS Lambda Developer Guide.
Note: Since the SAM CLI runs your code locally in a container, you’ll need to ensure Docker is installed, as described in the installation instructions.
Unlike unit testing, which allows you to test functions in isolation outside of their runtime environment, the AWS SAM CLI runs your code in a locally hosted Docker container. It can also simulate a locally hosted API gateway proxy, allowing you to run component integration tests.
After you’ve installed the AWS SAM CLI, edit the serverless.template file in the src/DebuggingExample/ directory: rename the default Get resource to DebuggingExampleFunction, add a FunctionName property, and set the CodeUri property to the publish directory path:
{ "AWSTemplateFormatVersion" : "2010-09-09", "Transform" : "AWS::Serverless-2016-10-31", "Description" : "An AWS Serverless Application.", "Resources" : { "DebuggingExampleFunction" : { "Type" : "AWS::Serverless::Function", "Properties": { "FunctionName": "DebuggingExample”, "Handler": "DebuggingExample:: DebuggingExample.Functions::Get", "Runtime": "dotnetcore2.1", "CodeUri": "bin/Release/netcoreapp2.1/publish", "MemorySize": 256, "Timeout": 30, "Role": null, "Policies": [ "AWSLambdaBasicExecutionRole" ], "Events": { "RootGet": { "Type": "Api", "Properties": { "Path": "/", "Method": "GET" } } } } } }, "Outputs" : { "ApiURL" : { "Description" : "API endpoint URL for Prod environment", "Value" : { "Fn::Sub" : "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/" } } } }
Now that you have updated the template, you can test your code locally. Because the Lambda function expects a request from API Gateway, create a sample API Gateway request by changing to the DebuggingExample/src/DebuggingExample/ directory and running the following command:
sam local generate-event apigateway aws-proxy > testApiRequest.json
You can now publish your DebuggingExample code locally and invoke it by passing in the sample request as follows:
dotnet publish -c Release
sam local invoke "DebuggingExampleFunction" --event testApiRequest.json --template serverless.template
The first time that you run it, it might take some time to pull down the container image in which to host the Lambda function. After you’ve invoked it one time, the container image is cached locally, and execution speeds up.
Finally, rather than testing your function by sending it a sample request, test it with a real API gateway request by running API Gateway locally:
sam local start-api --template serverless.template
If you now navigate to http://127.0.0.1:3000/ in your browser, the locally hosted API Gateway sends a request to your locally hosted Lambda function, and you see the results directly in the browser.
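Alternatively, you can call the locally hosted endpoint from another terminal, for example with curl:

curl http://127.0.0.1:3000/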
Logging events with CloudWatch
Having a test strategy allows you to execute, test, and debug Lambda functions. After you’ve deployed your functions to AWS, you must still log what the functions are doing so that you can monitor their behavior.
The easiest way to add logging to your Lambda functions is to add code that writes events to CloudWatch. To do this, add a new method, LogMessage(), to the src/DebuggingExample/Function.cs file.
void LogMessage(ILambdaContext ctx, string msg)
{
    ctx.Logger.LogLine(
        string.Format("{0}:{1} - {2}",
            ctx.AwsRequestId,
            ctx.FunctionName,
            msg));
}
This takes in the context object from the Lambda function’s Get() method, and sends a message to CloudWatch by calling the context object’s Logger.LogLine() method.
You can now add calls to LogMessage in the Get() method to log events in CloudWatch. It’s also a good idea to add a try/catch block to ensure that exceptions are logged as well.
public APIGatewayProxyResponse Get(APIGatewayProxyRequest request, ILambdaContext context)
{
    LogMessage(context, "Processing request started");

    APIGatewayProxyResponse response;
    try
    {
        var result = processor.CurrentTimeUTC();
        response = CreateResponse(result);
        LogMessage(context, "Processing request succeeded.");
    }
    catch (Exception ex)
    {
        LogMessage(context, string.Format("Processing request failed - {0}", ex.Message));
        response = CreateResponse(null);
    }

    return response;
}
To validate that the changes haven’t broken anything, you can now execute the unit tests again. Run the following commands:
cd test/DebuggingExample.Tests/
dotnet test
Tracing execution with X-Ray
Your code now logs events in CloudWatch, which provides a solid mechanism to help monitor and diagnose problems.
However, it can also be useful to trace your Lambda function’s execution to help diagnose performance or connectivity issues, especially if it’s called by or calling other services. X-Ray provides a variety of features to help analyze and trace code execution.
To enable active tracing on your function, you need to modify the serverless.template file by adding a Tracing attribute:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Transform" : "AWS::Serverless-2016-10-31",
  "Description" : "An AWS Serverless Application.",
  "Resources" : {
    "DebuggingExampleFunction" : {
      "Type" : "AWS::Serverless::Function",
      "Properties": {
        "FunctionName": "DebuggingExample",
        "Handler": "DebuggingExample::DebuggingExample.Functions::Get",
        "Runtime": "dotnetcore2.1",
        "CodeUri": "bin/Release/netcoreapp2.1/publish",
        "MemorySize": 256,
        "Timeout": 30,
        "Tracing" : "Active",
        "Role": null,
        "Policies": [ "AWSLambdaBasicExecutionRole" ],
        "Events": {
          "RootGet": {
            "Type": "Api",
            "Properties": {
              "Path": "/",
              "Method": "GET"
            }
          }
        }
      }
    }
  },
  "Outputs" : {
    "ApiURL" : {
      "Description" : "API endpoint URL for Prod environment",
      "Value" : { "Fn::Sub" : "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/" }
    }
  }
}
For more information about using X-Ray from .NET Core, see the AWS X-Ray Developer Guide. For information about adding support for X-Ray in Visual Studio, see the New AWS X-Ray .NET Core Support post.
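Active tracing gives you traces of the Lambda invocation itself. If your function also calls other AWS services, you can optionally instrument those calls with the X-Ray SDK for .NET. The following is a minimal sketch, assuming you’ve added the AWSXRayRecorder.Core and AWSXRayRecorder.Handlers.AwsSdk NuGet packages (it isn’t required for the simple function in this post):

using Amazon.XRay.Recorder.Core;
using Amazon.XRay.Recorder.Handlers.AwsSdk;

public static class XRayInstrumentation
{
    public static void Configure()
    {
        // Trace every AWS SDK client created after this call, so downstream
        // calls (S3, DynamoDB, and so on) appear in the service map.
        AWSSDKHandler.RegisterXRayForAllServices();
    }

    public static void TracedWork()
    {
        // Wrap custom logic in a subsegment so it shows up in the trace timeline.
        AWSXRayRecorder.Instance.BeginSubsegment("BusinessLogic");
        try
        {
            // ... do work here ...
        }
        finally
        {
            AWSXRayRecorder.Instance.EndSubsegment();
        }
    }
}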
Deploying and testing the Lambda function remotely
Having created your Lambda function and tested it locally, you’re now ready to package and deploy your code.
First of all, you need an Amazon S3 bucket to deploy the code into. If you don’t already have one, create a suitable S3 bucket.
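If you need to create one from the command line, the AWS CLI can do it; for example (bucket names are globally unique, so substitute your own):

aws s3 mb s3://debugging-example-bucket --region eu-west-1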
You can now package the .NET Lambda function and copy it to Amazon S3, using your bucket name in the --s3-bucket argument:
sam package \
--template-file serverless.template \
--output-template-file debugging-serverless.template \
--s3-bucket debugging-example-bucket
Finally, deploy the Lambda function by running the following command:
sam deploy \
--template-file debugging-serverless.template \
--stack-name DebuggingExample \
--capabilities CAPABILITY_IAM \
--region eu-west-1
After your code has deployed successfully, test it from your local machine by running the following command:
dotnet lambda invoke-function DebuggingExample --region eu-west-1
Diagnosing the Lambda function
Having run the Lambda function, you can now monitor its behavior by logging in to the AWS Management Console and then navigating to CloudWatch Logs.
You can now click on the /aws/lambda/DebuggingExample log group to view all the recorded log streams for your Lambda function.
If you open one of the log streams, you see the various messages recorded for the Lambda function, including the two events explicitly logged from within the Get() method.
To review the logs locally, you can also use the AWS SAM CLI to retrieve CloudWatch logs and then display them in your terminal.
sam logs -n DebuggingExample --region eu-west-1
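If you’d rather stream new log entries as they arrive, sam logs also supports a --tail flag:

sam logs -n DebuggingExample --region eu-west-1 --tail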
As a final alternative, you can also execute the Lambda function by choosing Test on the Lambda console. The execution results are displayed in the Log output section.
In the X-Ray console, the Service Map page shows a map of the Lambda function’s connections.
Your Lambda function is essentially standalone. However, the Service Map page can be critical in helping to understand performance issues when a Lambda function is connected with a number of other services.
If you open the Traces screen, you see a list of all the traces that X-Ray has recorded. Open one of the traces to see a breakdown of the Lambda function’s performance.
Conclusion
In this post, I showed you how to develop Lambda functions in .NET Core, how unit tests can be used, how to use the AWS SAM CLI for local integration tests, how CloudWatch can be used for logging and monitoring events, and finally how to use X-Ray to trace Lambda function execution.
Put together, these techniques provide a solid foundation to help you debug and diagnose your Lambda functions effectively. Explore each of the services further, because when it comes to production workloads, great diagnosis is key to providing a great and uninterrupted customer experience.