Optimizing Entity Framework: Part 1

I was recently involved in a project where I used Entity Framework 4.3 (Code-First) for the first time. I must say I’ve enjoyed it more than I expected: it fit nicely with the Domain-Driven Design (DDD) principles we were following, migrations were quite convenient for keeping our incremental updates under control (most of the time), and most importantly the models were very focused and clean. As such, it didn’t feel like we were going out of our way or bending our architecture/design to fulfil constraints imposed by EF.

In this project – as it should be in most cases – we didn’t optimize prematurely. But towards the end of the project, with real data being used, it became obvious that EF was a bottleneck: the application was noticeably slower than its previous versions (which used a combination of ADO.NET with MSSQL stored procedures and LINQ to SQL). And this is when I began the interesting exercise of optimizing the performance of the application that I am sharing here.

Unfortunately, I can’t share the specific code as I don’t own it. But the application is a public-facing website built with ASP.NET MVC, with DDD completing the picture as mentioned previously. On the front-end side of things, there are typically Controllers, highly ajaxified Views (making use of Knockout and some other cool stuff that I will hopefully discuss in other posts) and Models corresponding to the Code-First models. Controllers talk to Services, which talk to Repositories through their interfaces, which in turn talk to the database using EF as the ORM of choice.
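To make the shape of that layering concrete, here is a minimal sketch; all names (Product, IProductRepository, ProductService) are hypothetical and not from the actual project:

```csharp
// Hypothetical domain entity (a Code-First model).
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The service depends only on the repository abstraction;
// the EF-backed implementation lives behind this interface.
public interface IProductRepository
{
    Product FindById(int id);
}

public class ProductService
{
    private readonly IProductRepository _repository;

    public ProductService(IProductRepository repository)
    {
        _repository = repository;
    }

    public Product GetProduct(int id)
    {
        return _repository.FindById(id);
    }
}
```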

Tools of the trade: Profiling

In my view, if you’re talking about performance and not putting numbers on the table, then it’s just chatter with no substance. With performance, speak numbers or remain silent.

So in this specific case, the aim was to decrease the response time from around 800ms–950ms to the range of 100ms. And this is the main reason why numbers are so important: in many scenarios a response time of 1s might be acceptable, but in our case it just wasn’t good enough, especially when compared to the previous version of the application. I am sure that if you are developing at the scale of Facebook or Twitter, then even 100ms could easily be unacceptable. So measure it and decide.

In order to get started with the measurement exercise, you probably have one of two choices:

  1. Scatter Stopwatch instances around your code and keep track of them (see the sketch after this list),
  2. or you can use a Profiler.
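
For option 1, a minimal sketch of the Stopwatch approach (the timed section is a stand-in for whatever code you suspect):

```csharp
using System;
using System.Diagnostics;

class TimingExample
{
    static void Main()
    {
        var stopwatch = Stopwatch.StartNew();

        // ... the code under suspicion, e.g. a repository/EF call ...

        stopwatch.Stop();
        Console.WriteLine("Elapsed: {0} ms", stopwatch.ElapsedMilliseconds);
    }
}
```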

dotTrace: easily drill into the call stack and find your code bottlenecks

I used dotTrace from JetBrains as my profiler of choice. It is not free (nor cheap), but it provides a 10-day trial, which was more than enough for this exercise. Most of the profilers out there – code execution and performance profilers rather than memory profilers – are commercial and expensive (for personal use at least); the only free option seems to be EQATEC, which I only had a quick look at, but it seems like it could do the trick as well.

So running dotTrace, I could see exactly where the bottlenecks in my code were. They were indeed around database access (don’t take that for granted) and Entity Framework.

Tracing EF generated Sql

EF can surprise you when it comes to generated SQL, and this is probably where most of the optimization will happen. So you need an easy way to check what is being generated by EF. At the end of the day, whatever you’re writing in C# and LINQ, no matter how fancy, testable and well written it is, gets translated to the only language your database understands: SQL.

So here is the other amazing tool that you need to add to your toolset: Entity Framework Profiler. As with most of these tools, it is neither free nor cheap, but you get a 30-day free trial. If you can’t afford it, the other option would be to use SQL Profiler to trace the SQL generated by EF (it is also not free, but your organization probably has it as part of its SQL Server edition anyhow). Worst case scenario, you could go back to basics and use .ToTraceString() to have a peek at the generated SQL, as sketched below.

Entity Framework Profiler: Easy insight into what EF is generating on your behalf
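
As a minimal sketch of that back-to-basics option: with the EF 4.x DbContext API, calling ToString() on a query returns the generated SQL (ToTraceString() is the equivalent on the underlying ObjectQuery). The context name here is hypothetical, reusing the Product entity from the earlier sketch:

```csharp
using System;
using System.Data.Entity;
using System.Linq;

public class ShopContext : DbContext   // hypothetical Code-First context
{
    public DbSet<Product> Products { get; set; }
}

class TraceSqlExample
{
    static void Main()
    {
        using (var db = new ShopContext())
        {
            var query = db.Products.Where(p => p.Name.StartsWith("A"));

            // For a DbContext query, ToString() yields the SQL that
            // EF will send to the database.
            Console.WriteLine(query.ToString());
        }
    }
}
```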

Now, you could probably survive without both of these tools, by peeking into your code through the debugger, Stopwatch, trace messages or any other means. But since performance tuning is a highly iterative exercise, where you need to repeat and compare many times before reaching a satisfactory conclusion, having such tools is highly rewarding. It frees you from one extra point of pain and lets you focus on the task at hand.

