I'm currently in a unit-testing-oriented period. I've had various discussions with different project teams and colleagues about the subject. One of the things I noticed in a piece of unit test code was that some people were verifying mock expectations in the fixture TearDown. I also found some examples of this practice on the net. The test code looked like this:
[TestFixture]
public class SomeTestFixture {
    private MockRepository _mocks;

    [SetUp]
    public void Setup() {
        _mocks = new MockRepository();
    }

    [TearDown]
    public void TearDown() {
        _mocks.VerifyAll();
    }

    [Test]
    public void SomeTest() {
        ISomeInterface interfaceMock = _mocks.CreateMock<ISomeInterface>();
        interfaceMock.SomeMethod();   // record the expectation
        _mocks.ReplayAll();

        TestedObject sut = new TestedObject();
        sut.SomeInterface = interfaceMock;
        sut.DoSomething();            // exercise the SUT (illustrative call)
    }
}
Personally, I don't find this a good practice, and I would not recommend it, for several reasons.
When we look at the basics of automated unit testing, each test case execution happens in four distinct phases, as described in xUnit Patterns: 1- Setup, 2- Exercise, 3- Verify, 4- Tear down. Clearly, the unit testing framework provides a natural place for the code pertaining to each of these steps: the Setup, Teardown and Test methods. As you might have guessed, the test method should contain steps 2 and 3. When you don't follow that practice, you're in fact breaking the nice organization that your framework brings to your test code, as well as the semantics of the Teardown method (which is only supposed to contain step 4 code).
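To make the mapping concrete, here is a minimal sketch of how the four phases land on an NUnit fixture (the fixture name and the comments are mine; the bodies are placeholders):

```csharp
[TestFixture]
public class FourPhaseFixture {
    [SetUp]
    public void Setup()    { /* 1. Setup: build the fixture objects */ }

    [Test]
    public void SomeTest() { /* 2. Exercise the SUT, then 3. Verify the outcome */ }

    [TearDown]
    public void TearDown() { /* 4. Tear down: release resources only */ }
}
```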
Also, when we look at the test method itself, it is not obvious what we're actually trying to verify. Did the developer forget to call VerifyAll? Did he forget to make some assertions? It is not intuitive to have to look at the TearDown to find the end of the test case logic.
But that's not all. Suppose that you have many test cases in a fixture, and one of the tests fails because of unmet expectations. The stack trace will look like this:
Rhino.Mocks.Exceptions.ExpectationViolationException: ISomeInterface.SomeMethod(); Expected #1, Actual #0.
at Be.Aprico.Test.SomeTestFixture.TearDown() in SomeTestFixture.cs:line 24
The problem here is that it is not obvious from the stack trace which test case failed. Of course, the test runner will tell you which test actually failed, but I still prefer a stack trace whose top points at the actual location of the problem, not at some unrelated place.
Finally, what if your fixture has real teardown code that must be executed? Should you, in that case, verify the expectations before or after the cleanup code?
// Should I do this?
[TearDown]
public void TearDown() {
    _mocks.VerifyAll();
    Cleanup();
}

// Or this?
[TearDown]
public void TearDown() {
    Cleanup();
    _mocks.VerifyAll();
}
In the first case, you run the risk that when expectations are not met in one of the test cases, the Cleanup call is never executed because of the thrown exception. This can have nasty consequences (mainly interacting, erratic tests: unrelated tests failing because of the leftovers of the previous failure). You can fix this by wrapping the VerifyAll call in a try…finally block, but in my opinion that makes the teardown logic more complicated than it should be.
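For completeness, here is what that exception-safe variant would look like (a sketch of the try…finally workaround, not something I recommend):

```csharp
[TearDown]
public void TearDown() {
    try {
        _mocks.VerifyAll();   // may throw ExpectationViolationException
    }
    finally {
        Cleanup();            // now guaranteed to run, even when verification fails
    }
}
```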
In the second case, what we're really doing is swapping steps 3 and 4 of our four-phase test. This goes against standard, well-known practice, and doesn't make the test code any easier to understand.
To conclude, I would recommend always putting the verification code close to the exercise code. That way, you ensure the test case is readable and complete. My preferred syntax is the following (using Rhino Mocks):
[Test]
public void SomeTest() {
    ISomeInterface interfaceMock = _mocks.CreateMock<ISomeInterface>();

    using (_mocks.Record()) {
        interfaceMock.SomeMethod();   // define the expectation
    }

    using (_mocks.Playback()) {
        TestedObject sut = new TestedObject();
        sut.SomeInterface = interfaceMock;
        sut.DoSomething();            // exercise; expectations are verified when the block exits
    }
}
I find the using blocks a very elegant, clear way to delimit the expectation definition from the actual exercise. This makes the test code very readable. But this is only a matter of personal preference; there are other equally valid syntaxes!