Aurora Serverless with Entity Framework Core

Affordable, Reliable, Relational Data

Posted by Chase Q. Aucoin on December 7, 2019

Special Thanks

This is part of the 3rd Annual C# Developer Advent Calendar, thanks to Matthew D. Groves for consistently making the developer community a better one. Please check out the rest of the calendar for more great articles on C# and to spread more knowledge throughout the community.

a thank you note with purple flowers

Cloudy Weather

Click here to skip the preamble and go right to the project

Okay, so AWS is not really talked about much within C# circles, mostly because we have Azure, an all-in-one, tightly integrated solution for Microsoft developers. Many larger organizations have contracts with AWS for their general IT components and can typically negotiate lower rates for large volumes than Microsoft is willing or able to provide. The problem with AWS is that, while it does have .NET support for most of its platform, .NET is treated like a second-class citizen. It has gotten better in recent years, but it still remains mostly true.

I have had the mixed blessing of working almost exclusively in AWS for the past year. There are a handful of things I really like about the platform but it definitely doesn’t have the same Microsoft “hit the button and it just works™” mindset. You have to be kind of familiar with its “isms” in order to get the maximum value.

That said, there are a few areas that I really like. One is Lambda, the AWS serverless platform. With the introduction of .NET Core a few years ago, the .NET platform has gotten a lot more robust in its hosting options.


Docker containers, Kubernetes clusters, Azure Functions, AWS Lambda: like it or not, serverless is here to stay. It has crossed the valley from bleeding edge to being a great way for companies to test ideas cheaply and quickly, then scale those ideas rapidly. The long-term cost can be “higher” than some traditional methods if you have sustained load, but in today’s agile marketplace, where companies have to compete on speed to market to stay relevant, the short-term rewards are difficult to ignore.

Within the serverless space, data has always been a bit of a sticky issue. We have serverless APIs and static websites, and when it comes to file storage we can scale to any conceivable size. But when it comes to data, there have only been a handful of contenders that are truly “pay for what you use.” Even products like Cosmos DB haven’t been very friendly from this perspective for innovating cheaply, especially if you are a small organization, not-for-profit, or individual paying out of your own pocket. It gets real easy to get nickel-and-dimed to the tune of hundreds or thousands of dollars, even for small workloads, if you aren’t paying attention.

NoSQL offerings such as DynamoDB, Google Firebase/Firestore, Azure Table Storage, and others have been a great starting place for this. It makes sense that companies started with this technology, as NoSQL trades query complexity for model complexity. Queries are less computationally intensive but ultimately less robust than a traditional RDBMS when it comes to slicing and dicing data. And of course there are always traditional NoSQL vendors with more enterprise-grade feature sets, such as Couchbase. (Note: Matt, who is hosting the Advent Calendar, works with them as a Developer Advocate, so if you’d like to find out more, reach out and tell him Chase sent you.)

Aurora Serverless is AWS’s attempt to bridge the gap between RDBMS and serverless. I’m a little ashamed at Microsoft for not beating them to it, what with them having multiple database systems of their own for, *looks at watch*, three or so decades. If you are a director or higher-up at Microsoft reading this: what the heck is wrong with you? Did you learn nothing from Windows Phone and the Ballmer tenure generally?

Deep breath


Wanna see how to do it? Because it’s pretty cool, but it took me about a week to get working because AWS’s documentation is not great. (That’s right, every company is getting a little shade.)

Time to Make The Donuts

a picture of Fred the Dunkin' Donuts' man

To get started, I’m going to assume you already have an AWS account, but if not you can get an account here.

We need to lay a little foundation for this to work out before we dive into the code.

Setting up Aurora

Once you’re logged in to your AWS console, navigate to the RDS service; this will be your dashboard. From there we want to go to Databases on the left-hand side, the second link from the top.

The RDS page on AWS with databases selected

From here we will click “Create Database” on the top right side of the screen. This presents us with some options, the first of which is what type of creation method we want to use. Select Easy Create. (Serverless isn’t an option under Standard. That took me a depressingly long time to realize.)

The create database screen with the easy create button selected

Next, click Amazon Aurora as the engine type. For the edition, select Amazon Aurora with MySQL 5.6 compatibility (correct at the time of writing this article).

The create database screen with Amazon Aurora selected

For your DB cluster identifier, if you are testing, you can name the database anything you want, but make sure you remember the name for later use. The same is true for the master username.

Lastly, click Create database.

The create database screen with cluster options entered

This will take you back to the Databases screen, and your database will show as “Creating”. You will see a notice at the top; make sure to click “View credential details” and save your password and endpoint (you’ll need them later).

Database creating notice

Now for Something Stupid

Four grown men in radioactive contamination suits pressing ice cream cones into their foreheads.

While our database is provisioning, this is a great time to talk about one of the giant pitfalls of this service: AWS doesn’t allow any non-AWS traffic to reach it. That means, effectively, the only way to connect to the database is from one of your AWS services, such as an EC2 virtual machine, or via a VPN hosted on AWS. This goes back to the nickel-and-dime thing I was talking about earlier. I personally am not going to spend $16 a month in licensing and another $40 in hosting fees just to put OpenVPN in AWS to access a service that is meant to save me money. Fortunately, there is a way around that: we can use a micro EC2 instance, which has a free tier, and SSH-tunnel our MySQL traffic through that instance into our database.

Yeah Science Meme

To get that going, first we need to go to the EC2 console. You’ll see I have a single t2.micro instance called “SSH Tunnel”. This is an Ubuntu instance; fortunately for this demo you don’t need to know anything about Linux other than that we will be using it for our tunnel. t2.micro is on the AWS free tier, so this box, assuming it is the only EC2 instance you have, would not cost you anything to run.

An EC2 instance labeled SSH Tunnel

Click Create Instance, then select Ubuntu as your Amazon Machine Image (AMI). Choose the default settings for all other options, which should include an open port 22 for SSH traffic and a public IP address. This is what we will use to connect to the box and proxy our requests.

A list of Amazon Machine Instances with Ubuntu selected.

If you’ve never used AWS before, after you click Launch you’ll be asked to create a private key. You’ll have to download this key as a .pem file.

The create key dialog from AWS EC2.

Save this .pem file; if you lose it, you won’t be able to access the server without terminating it, starting a new one, and creating a new key. Also, since this file provides a way into your internal network, keep it somewhere secure.

In the immortal words of Gandalf the wandering wizard

picture of Gandalf saying Keep it Secret Keep it Safe

Into the Code

Now that we’ve got everything provisioning, let’s go to Visual Studio and create a new console application.

Create new project dialog in Visual Studio

I’ve named mine ChaseAucoin.Aurora, but you can call your project anything you like. Right-click the .csproj file, click “Edit Project File”, then add the following package references inside an ItemGroup element:

    <PackageReference Include="Pomelo.EntityFrameworkCore.MySql" Version="2.1.0" />
    <PackageReference Include="SSH.NET" Version="2016.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="2.1.1">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>

This will include the NuGet packages we need to make this work.

In your project, include your .pem file and set it to copy to the output directory by right-clicking on it and going to its properties.
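If you’d rather do this in the project file directly, the entry Visual Studio generates is just a None item with CopyToOutputDirectory set. It looks something like this, assuming your key file is named mykey.pem (match whatever name you use in the code later):

```xml
<ItemGroup>
  <None Update="mykey.pem">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
```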

Now, as for data, we are going to use the fantastic joke dataset by Taivo Pungas.

The only one we are interested in for this exercise is wocka.json. The data looks like this:

    [
        {
            "body": "What's black and white and red all over?\r\n\r\nA newspaper.",
            "category": "Other / Misc",
            "id": 4,
            "title": "Black, White and Red"
        },
        {
            "body": "So, this guy walks into a bar.\r\n\r\nAnd says, \"ouch\".",
            "category": "Bar",
            "id": 5,
            "title": "Guy in a Bar"
        }
    ]

As a disclaimer, I should note these jokes were scraped from random websites. Some of the jokes are crass, outdated, and plain not funny. There are, however, some real gems in the mix. Proceed accordingly.

Let’s represent this as a data model.

    public class Joke
    {
        public int Id { get; set; }
        public string Category { get; set; }
        public string Title { get; set; }
        public string Body { get; set; }
    }

And let’s create a data context for our database:

    public class JokeContext : DbContext, IDisposable
    {
        public DbSet<Joke> Jokes { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            // _connection is the MySqlConnection we set up in the constructor below
            optionsBuilder.UseMySql(_connection);
        }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);
        }
    }

But for us to use it with Aurora, we will have to make some changes to the construction of our class:

    SshClient _client;
    ForwardedPortLocal _local;
    MySqlConnection _connection;

    public JokeContext(string auroraAddress, string auroraPassword, string sshTunnelAddress, string sshUserName, string keyFileName, bool isDebug = false)
    {
        var builder = new MySqlConnectionStringBuilder();
        builder.UserID = "admin";
        builder.Password = auroraPassword;
        builder.Database = "jokes";
        builder.PersistSecurityInfo = true;
        builder.Server = auroraAddress;
        builder.Port = 3306;

        if (isDebug)
        {
            // load the .pem key we copied to the output directory
            var dir = Path.GetDirectoryName(Assembly.GetEntryAssembly().Location);
            PrivateKeyFile pkfile = new PrivateKeyFile($@"{dir}/{keyFileName}");

            _client = new SshClient(sshTunnelAddress, 22, sshUserName, pkfile);
            _client.Connect();

            if (_client.IsConnected)
            {
                // forward a dynamically bound local port through the tunnel to Aurora
                _local = new ForwardedPortLocal("127.0.0.1", auroraAddress, 3306);
                _client.AddForwardedPort(_local);
                _local.Start();

                builder.Server = "127.0.0.1";
                builder.Port = _local.BoundPort;
            }
        }

        _connection = new MySqlConnection(builder.ConnectionString);
    }

The magic in what we are doing is here. We use our .pem file to securely connect to our SSH tunnel instance, bind a local port automatically, and forward all traffic coming in on that port to our Aurora cluster.

    _client = new SshClient(sshTunnelAddress, 22, sshUserName, pkfile);
    _client.Connect();

    if (_client.IsConnected)
    {
        _local = new ForwardedPortLocal("127.0.0.1", auroraAddress, 3306);
        _client.AddForwardedPort(_local);
        _local.Start();
    }

If we go to Program.cs we’ll need to set a few parameters.

    class Program
    {
        // you can find this in your database details
        static readonly string _auroraAddress = "{endpoint}";
        static readonly string _auroraPassword = "{randomlyGeneratedPassword}";

        // address of the Ubuntu instance we created, available in the EC2 instance panel
        static readonly string _sshTunnelAddress = "";

        // default user name is ubuntu
        static readonly string _sshUserName = "ubuntu";
        static readonly string _keyFileName = "mykey.pem";
    }

Then from here it’s a hop, skip, and a jump. First we need to deserialize our jokes. Since they are already in JSON format, Json.NET makes short work of it. Simply grab the path of the wocka.json file on your local machine and deserialize it into an enumeration of our Joke class:

    var path = @"E:\wocka.json";
    var jsonJokes = File.ReadAllText(path);
    var jokes = JsonConvert.DeserializeObject<IEnumerable<Joke>>(jsonJokes);
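One detail worth calling out: the JSON keys are lowercase while our Joke properties are PascalCase. Json.NET matches property names case-insensitively by default, so no attributes are needed. Here is a minimal self-contained sketch of that behavior, assuming the Newtonsoft.Json package is referenced (the inline sample stands in for wocka.json):

```csharp
using System;
using System.Collections.Generic;
using Newtonsoft.Json;

public class Joke
{
    public int Id { get; set; }
    public string Category { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
}

static class JsonDemo
{
    static void Main()
    {
        // lowercase keys, just like the wocka.json file
        var sample = @"[{""body"":""So, this guy walks into a bar."",""category"":""Bar"",""id"":5,""title"":""Guy in a Bar""}]";

        var jokes = JsonConvert.DeserializeObject<List<Joke>>(sample);

        // Json.NET mapped "id"/"title" onto Id/Title with no [JsonProperty] attributes
        Console.WriteLine($"{jokes[0].Id}: {jokes[0].Title}"); // 5: Guy in a Bar
    }
}
```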

Next, we’re going to add a little extension method for batching an enumeration. This can live in a file named LinqExtensions.cs:

    static class LinqExtensions
    {
        public static IEnumerable<IEnumerable<T>> Batch<T>(
            this IEnumerable<T> source, int batchSize)
        {
            using (var enumerator = source.GetEnumerator())
            {
                while (enumerator.MoveNext())
                {
                    yield return YieldBatchElements(enumerator, batchSize - 1);
                }
            }
        }

        private static IEnumerable<T> YieldBatchElements<T>(
            IEnumerator<T> source, int batchSize)
        {
            yield return source.Current;

            for (int i = 0; i < batchSize && source.MoveNext(); i++)
            {
                yield return source.Current;
            }
        }
    }
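To sanity-check the extension, here’s a quick standalone sketch: seven items in batches of three yield groups of 3, 3, and 1. Note that because all batches share a single enumerator, each batch must be consumed before moving to the next, hence the inner ToList. (The class name BatchDemo is mine; the Batch implementation is repeated so the snippet compiles on its own.)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class LinqExtensions
{
    public static IEnumerable<IEnumerable<T>> Batch<T>(this IEnumerable<T> source, int batchSize)
    {
        using (var enumerator = source.GetEnumerator())
            while (enumerator.MoveNext())
                yield return YieldBatchElements(enumerator, batchSize - 1);
    }

    private static IEnumerable<T> YieldBatchElements<T>(IEnumerator<T> source, int batchSize)
    {
        yield return source.Current;
        for (int i = 0; i < batchSize && source.MoveNext(); i++)
            yield return source.Current;
    }
}

static class BatchDemo
{
    static void Main()
    {
        var batches = Enumerable.Range(1, 7)
            .Batch(3)
            .Select(b => b.ToList()) // consume each batch immediately
            .ToList();

        foreach (var batch in batches)
            Console.WriteLine(string.Join(", ", batch));
        // prints: "1, 2, 3" then "4, 5, 6" then "7"
    }
}
```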

Let’s let Entity Framework handle the creation of our database objects:

    JokeContext context = GetContext();
    // creates the database and tables if they don't already exist
    context.Database.EnsureCreated();

    public static JokeContext GetContext()
    {
        return new JokeContext(_auroraAddress, _auroraPassword, _sshTunnelAddress, _sshUserName, _keyFileName, true);
    }

Now query the database and see if there are any jokes; if not, insert them. You can play with the batch size, but I had good luck with 5,000. When running in release mode it took about a second to load all 10k jokes (your mileage may vary depending on a number of factors like bandwidth, hardware, etc.). Since we are doing bulk entry, we get a fresh context for each batch. With our SSH tunnel this makes the process a little slower than when it’s running in production directly against the database.

    var jokeCount = context.Jokes.Count();

    if (jokeCount == 0)
    {
        foreach (var batch in jokes.Batch(5000))
        {
            context = GetContext();

            Console.WriteLine("Writing to serverless database");
            context.Jokes.AddRange(batch);
            context.SaveChanges();
        }
    }

Now with all that hard work done, let’s grab ourselves a one-liner to make us laugh.

    var oneLiners = context.Jokes.Where(joke => joke.Category == "One Liners").ToList();
    var randomOneLiner = oneLiners.OrderBy(x => Guid.NewGuid()).First();

    Console.WriteLine($"There are {oneLiners.Count} one liners!");
    Console.WriteLine(randomOneLiner.Body);


I got “All believers in telekinesis raise my hand!”, which got a decent chuckle out of me.


I should note some gotchas. Mainly one: by default there is a minimum capacity of 1, and if you let it run 24/7 it’ll cost you about $30-$40 a month. I have mine set to a minimum of 0, which works for my purposes and infrequent use; I’m only spending about $2 a month on it. You can adjust this by modifying the capacity configuration settings for your database in RDS. If the instance count goes to 0, I have found anecdotally that it takes between 45 and 90-ish seconds to spin back up, so plan accordingly.


I hope you enjoyed this article. This is the first in a series on Aurora with C#; the next article I’m working on will use this same approach to create a serverless API with Lambda, WebAPI, and OData.

If you have any questions or get stuck, please reach out to me on LinkedIn, Twitter, or Facebook. Soon I’ll be doing weekly livestreams on Facebook, so make sure to like my page there for updates.


If you enjoyed this article, please consider subscribing to my blog for future updates. Your email address is only used to send you notifications of new articles.

Unsubscribe anytime using the link included in every email.