Based on Betteridge's law of headlines: no!

But based on recent Twitter activity, that's no doubt a somewhat controversial opinion, so in this post I look at what a unit test for an API controller might look like, what a unit test is trying to achieve, and why I think integration tests in ASP.NET Core give you far more bang for your buck.

I start by presenting my thesis, about why I don't find unit tests of controllers very useful, acknowledging the various ways controllers are used in ASP.NET Core. I'll then present a very simple (but representative) example of an API controller, and discuss the unit tests you might write for that class, the complexities of doing so, as well as the things you lose by testing the controller outside the ASP.NET Core MVC framework as a whole.

This post is not trying to suggest that unit tests are bad in general, or that you should always use integration tests. I'm only talking about API/MVC controllers here.

Where does the logic go for an API/MVC controller?

The MVC/API controllers people write generally fall somewhere on a spectrum:

  • Thick controllers—The action method contains all the logic for implementing the behaviour. The MVC controller likely has additional services injected in the constructor, and the controller takes care of everything. This is the sort of code you often see in code examples online. You know the sort—where an EF Core DbContext, or IService is injected and manipulated in the action method body:
public class BlogPostController : Controller
{
    // Often this would actually be an EF Core DB Context injected in constructor!
    private readonly IRepository _repository;
    public BlogPostController(IRepository repository) => _repository = repository;

    [HttpPost]
    public ActionResult<Post> Create(InputModel input)
    {
        if(!ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }

        // Some "business logic" 
        if(!_repository.IsSlugAvailable(input.Slug))
        {
            ModelState.AddModelError("Slug", "Slug is already in use");
            return BadRequest(ModelState);
        }

        var model = new Post
        {
            Name = input.Name,
            Body = input.Body,
            Slug = input.Slug
        };
        _repository.Add(model);

        return model;
    }
}
  • Thin controllers—The action method delegates all the work to a separate service. In this case, most of the work is done in a separate handler, often used in conjunction with a library like MediatR. The action method becomes a simple mapper between HTTP-based models/requests/responses and domain-based models/commands/queries/results. Steve Smith's API endpoints project is a good example that pushes this approach.
public class BlogPostController : BaseApiController
{
    [HttpPost]
    public async Task<IActionResult> Create([FromBody]NewPostCommand command)
    {
        var result = await Mediator.Send(command);
        return Ok(result);
    }
}
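
With this pattern, the logic you'd actually want to unit test lives in the handler, not the controller. As a rough sketch (the NewPostCommandHandler below is hypothetical, using MediatR's IRequestHandler abstraction):

```csharp
// Hypothetical MediatR handler—the "real" work lives here, and can be
// unit tested without touching any MVC infrastructure at all
public class NewPostCommandHandler : IRequestHandler<NewPostCommand, Post>
{
    private readonly IRepository _repository;
    public NewPostCommandHandler(IRepository repository) => _repository = repository;

    public Task<Post> Handle(NewPostCommand command, CancellationToken cancellationToken)
    {
        // Business logic (slug checks, mapping, persistence) goes here
        var post = new Post { Slug = command.Slug };
        _repository.Add(post);
        return Task.FromResult(post);
    }
}
```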

So which approach do I use? Well, as always it depends. In general, I think the second option is clearly the more scalable, manageable, and testable option, especially when used in conjunction with conventions or libraries that enforce that practice.

But sometimes, I write the other types of controllers. Sometimes it's because I'm being sloppy. Sometimes it's because I need to do some HTTP related manipulation which wouldn't make sense to do in a command handler. Sometimes the action is so simple it just doesn't warrant the extra level of indirection.

What I don't do (any more 🤦‍♂️), is put important domain logic in action methods. Why? Because it makes it harder to test.

"But you can unit-test controllers!" I hear you cry. Well…yes…but…

What don't you test in controller unit tests?

MVC/API controllers are classes, and actions are just methods, so you can create and invoke them in unit tests the same way you would any other system under test (SUT).

The trouble is, in practice, controllers do most of their useful work as part of a framework, not in isolation. In unit tests, you (intentionally) don't get any of that.

In this section I highlight some of the aspects of MVC controllers that you can't easily test or wouldn't want to test in a unit test.

Routing

This is one of the most important roles of a controller: to serve as a handler for a given incoming route. Unit tests ignore that aspect.

You could certainly argue that's OK for unit tests—routing is a separate concern to the handling of a method. I can buy that, but routing is such a big part of what a controller is for, it feels like it's missing the point slightly.

Technically, it is possible to do some testing of the routing infrastructure for your app, as I showed in a previous post. I think you could argue both ways as to whether that's an integration test or a unit test, but the main point is it's pretty hard work!

Model Binding

When handling a request, the MVC framework "binds" the incoming request to a series of C# models. That all happens outside the controller, so won't be exercised by unit tests. The arguments you pass to a controller's action method in a unit test are the output of the model binding step.

Again, we're talking about unit tests of controllers, but model binding is a key part of the controller in practice, and likely won't be unit tested separately. You could have a method argument that's impossible to bind to a request, and unit tests won't identify that. Effectively, you may be calling your controller method with values that cannot be generated in practice.

For the simple, contrived example model below, you'll get an exception at runtime when model binding tries to create an InputModel instance, as there's no default constructor.

public class InputModel
{
    public string Slug { get; }

    public InputModel(string slug)
    {
        Slug = slug;
    }
}

Granted, a mistake like that, using read-only properties, is very unlikely in practice. But there are also pretty common mistakes like typos in property names that mean model binding would fail. Those won't be picked up in unit tests, where strong typing means you just set the property directly.

Again, I'm not arguing that a unit test of a controller should catch these things, just pointing out how many implicit dependencies on the framework that MVC controllers have.

Model validation

The above example is contrived, but it highlights another important point: validating input arguments and enforcing constraints is an important part of most C# methods, but not of action methods.

Validation occurs outside the controller, as part of the MVC framework. The framework communicates validation failures by setting values on the ModelState property. Controllers should typically check the ModelState property before doing anything.

In some ways, this is good. Validation is moved outside the action method, so you can test it independently of the controller action method. Whether you're using DataAnnotation attributes or FluentValidation, you can ensure the models you're receiving are valid, and you can test those validation rules separately.

It feels a little strange in unit tests though, where passing in an "invalid" model won't cause your action method to take the "sad" path unless you explicitly match the ModelState property to the model.

For example if you have the following model:

public class InputModel
{
    [Required]
    public string Slug { get; set; }
}

and you want to test the "sad" path of the "thick" controller shown previously, then you have to make sure to set the ModelState:

[Fact]
public void InvalidModelTest()
{
    // Arrange
    var model = new InputModel { Slug = "" }; // Invalid model
    var controller = new BlogPostController(repository: null); // repository isn't used on this path

    // Have to explicitly add this
    controller.ModelState.AddModelError("Slug", "Required");

    // Act
    var result = controller.Create(model);

    // Assert etc
}

Again, this isn't necessarily a deal-breaker, but it's extra coupling. If you want to test your controller with "real" inputs, you have to ensure you keep the ModelState in sync with the method arguments, which means you need to keep it in-sync with the validation requirements of your model. If you don't, the behaviour of your controller in practice becomes undefined, or at least, untested.

Filter Pipeline

An exception to the previous validation example might be API controllers that are using the [ApiController] attribute. Requests to these controllers are automatically validated, and a 400 is sent back as part of the MVC filter pipeline for invalid requests. That means an API controller should only be called with valid models.

That helps with unit testing controllers, as it's an aspect your controller can ignore. No need to try and match the incoming model with the ModelState, as you know you should only have to handle valid models. When filters are used like this, to extract common logic, they make controllers easier to test.
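
For example, with the [ApiController] attribute applied, the explicit ModelState check disappears from the action entirely (a minimal sketch):

```csharp
[ApiController]
[Route("api/[controller]")]
public class BlogPostController : ControllerBase
{
    [HttpPost]
    public ActionResult<Post> Create(InputModel input)
    {
        // No ModelState.IsValid check needed: the framework returns a
        // 400 Bad Request before this method is invoked for invalid models
        var post = new Post { Slug = input.Slug };
        return post;
    }
}
```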

But the filter pipeline isn't only used to extract functionality from controllers. It's sometimes used to provide values to your controllers. For example, think of a filter in a multi-tenant environment that sets common values for the tenant on the HttpContext. If you use filters like this, that's something else you're going to have to take into account in your controller unit tests.
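
As a sketch of what I mean (this TenantFilter and its tenant-resolution logic are entirely hypothetical):

```csharp
// Hypothetical filter that stashes the current tenant on the HttpContext.
// Any controller reading HttpContext.Items["Tenant"] now implicitly depends
// on this filter having run—something your unit tests must recreate by hand.
public class TenantFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext context)
    {
        // Naive tenant resolution from the request's sub-domain
        var host = context.HttpContext.Request.Host.Host;
        context.HttpContext.Items["Tenant"] = host.Split('.')[0];
    }

    public void OnActionExecuted(ActionExecutedContext context) { }
}
```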

Surely no one would do that, right? The extra coupling it adds seems obvious. Maybe… but that's essentially how authentication works (though using middleware, rather than a filter).

Still, filters aren't used that often in my experience, except for the [Authorize] attribute of course.

Authorization

If you apply authorization policies declaratively using the [Authorize] attribute, then they'll have no effect on the controller unit tests. That's a good thing really, it's a separate concern. You can test your authorization policies and handlers separately from your controllers.

Except if you have resource-based, imperative, authorization checks in your controller. These are very common in multi-user environments—I shouldn't be able to edit a post that you authored, for example. Resource-based authorization uses the IAuthorizationService interface which you need to inject into your controller. You can mock this dependency pretty easily using a mocking framework, but it's just one more thing to have to deal with.
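
As a sketch, an imperative resource-based check in an action might look like this (the "EditPost" policy name and the post-loading step are assumptions for illustration):

```csharp
public class BlogPostController : Controller
{
    private readonly IAuthorizationService _authorizationService;
    public BlogPostController(IAuthorizationService authorizationService)
        => _authorizationService = authorizationService;

    [HttpPut]
    public async Task<IActionResult> Edit(int id, InputModel input)
    {
        var post = new Post(); // in practice, loaded from a repository by id

        // Resource-based check: is the current user allowed to edit *this* post?
        var authResult = await _authorizationService.AuthorizeAsync(User, post, "EditPost");
        if (!authResult.Succeeded)
        {
            return Forbid();
        }

        // ... apply the edit
        return Ok(post);
    }
}
```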

Each of these aspects on their own are pretty small, and easy to wave off as "not a big deal", but for me they just move the needle for how worthwhile it is to test your controllers.

So, what are you trying to test?

This is the crux of the matter for me—what are you trying to test by unit-testing MVC/API controllers? The answer will likely depend on what "type" of controller you're trying to test.

Testing "thick" controllers

If you're testing the first type of controller, where the action method contains all the business logic for the action, then you're going to struggle. These controllers are doing everything, so the "unit" here is really too large to easily test. You're likely going to have to rely on mocking lots of services, which increases the complexity of tests, while generally reducing their efficacy.

Even ignoring all that, what are you trying to test? For the Create() method, are you going to test that _repository.Add() is called on a stub/mock of the IRepository? Interaction-based tests like these are generally pretty fragile, as they're often specific to the internal implementation of the method. State-based tests are generally a better option, though even those have issues with action methods, as you'll see shortly.

Testing "thin" controllers

Thin controllers are basically just orchestrators. They provide a handler and hooks for interacting with ASP.NET Core, but they delegate the "real" work to a separate handler, independent of ASP.NET Core.

With this approach, you can more-easily unit test your "domain" services, as that work is not happening in the controller. Instead, unit tests of the controller would effectively be testing that any pre-condition checks run correctly, that input models are mapped correctly to "domain service" requests, and that "domain service" responses are mapped correctly to HTTP responses.

But as we've already discussed, most of that doesn't happen in the controller itself. So testing the controller becomes redundant, especially as all your controllers start to look pretty much the same.

Let's just try a unit test

I've ranted a lot in this post, but it's time to write some code. This code is loosely based on the examples of unit testing controllers in the official documentation, but it suffers from a lot of the points I've already covered.

These examples only deal with testing the "thick" controller scenarios, as in the documentation.

In the "thick" controller example from the start of this post, I injected a single service _repository for simplicity, but often you'll see multiple services injected, as well as concrete types (like EF Core's DbContext). The more complicated the method gets, and the more dependencies it has, the harder the action is to "unit" test.

I guess a "unit" test for this controller should verify that if a slug has already been used, you should get a BadRequest result, something like this (for example using the Moq library):

[Fact]
public void Create_WhenSlugIsInUse_ReturnsBadRequest()
{
    // Arrange
    string slug = "Some Slug";
    var mockRepo = new Mock<IRepository>();
    mockRepo.Setup(repo => repo.IsSlugAvailable(slug)).Returns(false);
    var controller = new BlogPostController(mockRepo.Object);
    var model = new InputModel { Slug = slug };

    // Act
    ActionResult<Post> result = controller.Create(model);

    // Assert
    Assert.IsType<BadRequestObjectResult>(result.Result);
}

This test has some value—it tests that calling Create() with a Slug that already exists returns a bad request. There's a bit of ceremony around creating the mock object, but it could be worse. The need to call result.Result to get the IActionResult is slightly odd, but I'll come to that shortly.

Let's look at the happy case, where we create a new post:

[Fact]
public void Create_WhenSlugIsNotInUse_ReturnsNewPost()
{
    // Arrange
    string slug = "Some Slug";
    var mockRepo = new Mock<IRepository>();
    mockRepo.Setup(repo => repo.IsSlugAvailable(slug)).Returns(true);
    var controller = new BlogPostController(mockRepo.Object);
    var model = new InputModel { Slug = slug };

    // Act
    ActionResult<Post> result = controller.Create(model);

    // Assert
    Post createdPost = result.Value;
    Assert.Equal(slug, createdPost.Slug);
}

We still have the mock configuration ceremony, but now we're using result.Value to get the Post result. That Result/Value discrepancy is annoying…

ActionResult<T> and refactorability

ActionResult<T> was introduced in .NET Core 2.1. It uses some clever implicit conversion tricks to allow you to return both an IActionResult or an instance of T from your action methods. Which means the following code compiles:

public class BlogPostController : Controller
{
    [HttpPost]
    public ActionResult<Post> Create(InputModel input)
    {
        if(!ModelState.IsValid)
        {
            return BadRequest(ModelState); // returns IActionResult
        }

        return new Post(); // returns T
    }
}

This is very handy for writing controllers, but it makes testing them a bit more cumbersome. With the example above, the following test would pass:

// Arrange
var controller = new BlogPostController();
var model = new InputModel();

// Act
ActionResult<Post> result = controller.Create(model);

// Assert
Post createdPost = result.Value;
Assert.NotNull(createdPost);

But if we change the last line of our controller to the semantically-identical version:

return Ok(new Post()); // returns OkObjectResult() of T

Then our test fails. The behaviour of the controller is identical in the context of the framework, but we have to update our tests:

// Act
ActionResult<Post> result = controller.Create(model);

// Assert
OkObjectResult objectResult = Assert.IsType<OkObjectResult>(result.Result);
Post createdPost = Assert.IsType<Post>(objectResult.Value);
Assert.NotNull(createdPost);

Yuk. There are things you can do to try and reduce this brittleness (such as relying on IConvertToActionResult) but I just don't know that it's worth the effort.
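
For completeness, a sketch of that approach: ActionResult<T> implements IConvertToActionResult, and its Convert() method returns the Result if one was set, or wraps the Value in an ObjectResult otherwise. Since OkObjectResult derives from ObjectResult, asserting against the common base type works for both forms of the action:

```csharp
// Act
ActionResult<Post> result = controller.Create(model);

// Works whether the action returned the Post directly or via Ok(...)
IActionResult converted = ((IConvertToActionResult)result).Convert();

// Assert
var objectResult = Assert.IsAssignableFrom<ObjectResult>(converted);
var createdPost = Assert.IsType<Post>(objectResult.Value);
Assert.NotNull(createdPost);
```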

Testing other aspects

This post is already way too long, so I'm not going to dwell on the other difficulties of unit testing controllers here.

So what's the alternative?

Integration tests are often simpler, avoid that complexity, and test more

In my experience, writing "integration" tests for controllers is far more valuable than trying to unit test them, and doing so is easier than ever in ASP.NET Core.

Steve Gordon has a new Pluralsight course that describes best practices for integration testing your ASP.NET Core applications.

The in-memory TestServer available in the Microsoft.AspNetCore.TestHost package lets you create small, focused, "integration" tests for testing custom middleware. These are "integration" in the sense that they're executed in the context of a "dummy" ASP.NET Core application, but are still small, fast, and focused.

At the other end, the WebApplicationFactory<T> in the Microsoft.AspNetCore.Mvc.Testing package lets you test things in the context of your real application. You can still add stub services for the database (for example) if you want to keep everything completely in-memory.
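
As a sketch of what such a test might look like (the /api/blogpost route and the Startup entry-point type are assumptions about the application under test):

```csharp
public class BlogPostIntegrationTests : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly WebApplicationFactory<Startup> _factory;
    public BlogPostIntegrationTests(WebApplicationFactory<Startup> factory)
        => _factory = factory;

    [Fact]
    public async Task Create_WithMissingSlug_ReturnsBadRequest()
    {
        // Routing, model binding, validation, and filters all run for real
        var client = _factory.CreateClient();

        var response = await client.PostAsJsonAsync("/api/blogpost", new { Slug = "" });

        Assert.Equal(HttpStatusCode.BadRequest, response.StatusCode);
    }
}
```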

On top of that, the rise of Docker has made using your real database for integration tests far more achievable. A few good end-to-end integration tests can really give you confidence that the overall "plumbing" in your application is correct, and that the happy path, at the very least, is working. And I'm not the only one who thinks that way.

Again, saying that integration tests are more valuable for testing "heavily integrated" components like controllers is not saying you shouldn't unit test. Unit tests should absolutely be used where they add value. Just don't try and force them everywhere for the sake of it.

Summary

I don't find unit testing MVC/API controllers very useful. I find they require a lot of ceremony, are often brittle due to the many mocks and stubs required, and often don't actually catch any errors. I think integration tests (coupled with unit tests of your domain logic) add far more value, with few trade-offs in most cases.