2013-03-28

6. Efficient TDD: Start closer to the end user



Nobody gets it right every time. Especially not in software design, where... well... everybody gets it wrong most of the time. Even when doing Test Driven Development (TDD). I remember a series of blog posts by Ron Jeffries in which he tried to implement a Sudoku solver using TDD and completely failed to solve the problem. If you read the posts from Ron Jeffries, notice how he jumps straight in and attacks an internal design detail before even understanding the problem. He basically gets stuck there and ends up without a solution.

I remember having had design sessions that look eerily similar. After listening to the product owner for two minutes outlining what's needed, we went on to discuss lots of internal design details. "We would need a Frobinizer." "Also, we need a FlexiPlug." "The Frobinizer and the FlexiPlug can't talk to each other directly - they need to post and read messages via the PlancCamfer." "But the semantics of a Planc dictate that it should not share a component with the Camfer; they should be separated." On more than one occasion, I've found myself preoccupied to a very high degree with the software components, and to a much lesser degree with how they actually implement the features we wanted. And I do believe that the times I've ended up in a huge outburst of frustration, scrapping everything and starting over, are strongly correlated with the times I've become too preoccupied with the internals of what I'm making.

The topic of this post is a strategy for approaching design and implementation that, in my experience, steers me away from the problems Mr. Jeffries ran into. The key is to start with a test that is as close to the user as possible.

All applications originate with some user or some agent that interacts with the application (not too much of a generalization, I presume.) If I write a GUI application (I've blogged about something slightly relevant previously), I'm probably using a GUI framework, such as Qt, wxWidgets, GTK or Swing, so the actual view I'm putting together is composed of components from the framework. I'd like to avoid testing these framework-provided components. But the user clicks a button with the mouse, or types some text in a text view, or exercises a key combo. I'd like to start as close to the user as possible, and for a GUI application this is where the user action leaves the framework code - usually this involves implementing some kind of event handler. So the first test is for the event handler that the user interaction originates with (and its immediate surroundings.)
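As a minimal sketch of what such a first test could look like (the names here - SaveHandler, its document collaborator - are hypothetical, not from any particular framework): the handler is the point where the click leaves the framework, so it is the first thing under test, and everything behind it is mocked away.

```python
import unittest
from unittest.mock import Mock

class SaveHandler:
    """Hypothetical event handler: the first code the framework calls
    when the user clicks 'Save'. It depends only on an abstract
    'document' collaborator, which the test replaces with a mock."""

    def __init__(self, document):
        self.document = document

    def on_save_clicked(self):
        # Invoked by the GUI framework on a button click.
        if self.document.is_dirty():
            self.document.save()

class SaveHandlerTest(unittest.TestCase):
    def test_clicking_save_persists_a_dirty_document(self):
        document = Mock()
        document.is_dirty.return_value = True
        SaveHandler(document).on_save_clicked()
        document.save.assert_called_once()

    def test_clicking_save_ignores_a_clean_document(self):
        document = Mock()
        document.is_dirty.return_value = False
        SaveHandler(document).on_save_clicked()
        document.save.assert_not_called()
```

Note that no framework widget appears in the test at all - only the seam where the user action enters our code.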

For another example, we can look at a web service. When adding something new, I like to start by writing a test for the network access point. The test basically checks that the web endpoint manages to receive requests and react appropriately to them. The endpoint is tested by relaying representations to an interface. The diagram below illustrates the basic design of such a test.

I start by writing the WebServiceEndPointTest and the WebServiceEndPoint, and create an interface that represents the real behavior; the WebServiceEndPoint translates incoming calls into something the ServiceInterface can relate to (e.g. method calls and values.) The test then sets up a mock for the ServiceInterface, exercises the WebServiceEndPoint and validates that the interactions with the mock were correct. There is nothing revolutionary about this idea of course - the main point of this blog post is that this is the right end to start working from. Get the user interactions defined first, before the design of the rest of the application gets so set in stone that it dictates what the user interaction can be.
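A sketch of that test in Python might look like the following. The route, the place_order method and the JSON shapes are made-up illustrations, not a real API - the point is only the shape of the test: mock the ServiceInterface, exercise the endpoint, verify the interactions.

```python
import json
import unittest
from unittest.mock import Mock

class WebServiceEndPoint:
    """Hypothetical endpoint: translates a raw request into a method
    call on the service, and the return value back into a response."""

    def __init__(self, service):
        self.service = service  # anything playing the ServiceInterface role

    def handle(self, method, path, body):
        if method == "POST" and path == "/orders":
            order_id = self.service.place_order(json.loads(body))
            return 201, json.dumps({"id": order_id})
        return 404, ""

class WebServiceEndPointTest(unittest.TestCase):
    def test_posting_an_order_relays_it_to_the_service(self):
        service = Mock()
        service.place_order.return_value = 42
        status, body = WebServiceEndPoint(service).handle(
            "POST", "/orders", '{"item": "book"}')
        # Validate the interaction with the mock, not any internals.
        service.place_order.assert_called_once_with({"item": "book"})
        self.assertEqual(status, 201)
        self.assertEqual(json.loads(body), {"id": 42})
```

The endpoint knows nothing about how orders are actually placed; that decision is deferred until after the user-facing interaction is pinned down.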

You may view this as performing top-down design, as opposed to working bottom-up. And you would be right in pointing out that this leads to a design that pretty much follows the Dependency Inversion Principle (DIP.) By following DIP, we get a design where "lower level" components and packages depend on abstractions exposed by "higher level" components, and where everything relates to everything else via those abstractions. This in turn leads to a nicely decoupled and inherently testable codebase. The figure below is a caricature of the dependency inversion principle. If you let Foo in the picture below play the same role as WebServiceEndPoint above, notice how higher level components don't depend on lower level components, but lower level components depend on abstractions from higher level components (higher translates to closer to the end user.)



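The dependency direction can also be sketched in code. This is a hypothetical illustration (OrderService, OrderRepository and the in-memory implementation are invented names): the high-level component owns the abstraction, and the low-level component depends upward on it.

```python
from abc import ABC, abstractmethod

class OrderRepository(ABC):
    """Abstraction owned by the higher level (closer to the user)."""
    @abstractmethod
    def store(self, order): ...

class OrderService:
    """Higher level: depends only on the abstraction it exposes."""
    def __init__(self, repository: OrderRepository):
        self.repository = repository

    def place(self, order):
        self.repository.store(order)

class InMemoryOrderRepository(OrderRepository):
    """Lower level: the dependency points upward, toward the abstraction."""
    def __init__(self):
        self.orders = []

    def store(self, order):
        self.orders.append(order)
```

Because OrderService only sees OrderRepository, the test can hand it a fake repository, and the real storage mechanism can be decided (or replaced) later without touching the higher level.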
It also turns out that the result of following this advice becomes very similar to behavior driven development (BDD), minus the domain language for specifying the tests. At the beginning I get an outer loop, where I write tests that closely capture the behavior and the interactions of the end user. These tests are also suitable for testing the overall behavior of the system, and can be 'wired' as integration tests (in addition to working as unit tests for the outermost units.) I also get an inner loop, where I can refine and refactor the designs and tests of the individual units that make up the application. So from following one very simple piece of advice - starting as close to the user as possible - we've discovered the dependency inversion principle and we've been doing BDD. Not bad for an afternoon's work!

So, how would I do the Sudoku thing then? When I solved Sudoku by means of TDD, I started with a textual representation of a completed Sudoku board and wrote a test that verified that my solver recognized it as complete and valid. After that I progressed by leaving one, then two, then more fields blank. After a handful of tests, I had a solution that was not too dissimilar from this solution, based on search and constraint propagation. I felt satisfied with the solution, and I never needed any tests for the internal data structures. Starting closer to the user guided me in a good direction.
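That first test could look something like this sketch (not my original code, just the shape of the starting point): a textual, completed board, and a check that it is recognized as solved.

```python
# A known valid, completed Sudoku grid, as text.
SOLVED = """\
534678912
672195348
198342567
859761423
426853791
713924856
961537284
287419635
345286179"""

def is_solved(text):
    """Return True if every row, column and 3x3 box contains 1..9."""
    rows = [[int(c) for c in line] for line in text.splitlines()]
    cols = [[rows[r][c] for r in range(9)] for c in range(9)]
    boxes = [[rows[3 * br + r][3 * bc + c] for r in range(3) for c in range(3)]
             for br in range(3) for bc in range(3)]
    full = set(range(1, 10))
    return all(set(group) == full for group in rows + cols + boxes)

assert is_solved(SOLVED)
```

No grid classes, no cell objects - the first test pins down what the user hands the solver (a board as text) and what "done" means, and the internals stay free to emerge later.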
