Misc musings about software development, agile, TDD, C++ and explorations in other programming languages.
2013-04-08
13. On Getting Things Done - Keeping the weekly review under an hour.
- Collect loose papers and materials
- Process all inboxes so they are empty
- Empty your head (on paper, for example)
- Review your Next Actions list
- Review the previous two weeks in your calendar
- Review the next two weeks in your calendar
- Review everything you are waiting for
- Review your project lists
- Review any relevant checklists
- Review your Someday/Maybe list
- Be creative and courageous
2013-04-06
12. Is Scrum "GTD for teams?"
2013-04-04
11. It's not the resources, it's the priorities.
I remember participating in a project meeting once with our then most important customer. From the customer side, it included the main stakeholder - our key sponsor - and two of his colleagues, who participated in QA and follow-up activities. From our side, it included the project manager, who was a new hire, the CEO of our company, a developer who had met this customer before, and me. It was my first visit, and I was quite new on the team. Due to several factors, such as key people moving on, lack of communication and a couple of very buggy binaries sent to the client, trust was suboptimal, to say the least.
The meeting spanned three days. The first day was "get to know each other" day. The second day was more of a feature discussion day, which turned out to be the day when the customer spent most of their time piling on new stuff they wanted delivered within the scope of the contract.
Ah - the contract. I need to talk about the contract. From the start, it was almost the agilist's dream contract. It was fixed price, with a list of perhaps 20 or so vaguely described features they wanted, delivered over four increments. The contract had fixed, seemingly non-negotiable delivery dates for each phase. We had delivered phase three, and the customer was still not happy with what we had delivered for phase two. So somewhere along the line we had failed to match - or manage - the expectations of the customer. Fair enough. There we sat: we had not closed phase 2, we wanted to close phase 3, and we wanted to get started on the last phase. To add insult to injury, we had spent more time than scheduled on phase 3 - we didn't have the budget to complete phase 4 without taking a monetary loss (not to mention finishing it on time). And still, the customer was piling changes and fixes onto features delivered in phases 1, 2 and 3.
I could rant on about what went wrong with that project, but I don't think that would be the interesting part. The interesting part is how we got out of it. We simply did a variation on the planning game. New and exciting for everybody but me.
On the evening of the second day, the PM and I sat down and wrote every requirement we had gathered during those first two days on A4-sized printer paper (for you purists: sorry - no index cards in the grocery store on the corner by our hotel). One requirement per piece of paper. There was even room for drawings. I committed a grave sin that day by single-handedly putting StoryPoint estimates on every item in the list (the PM was not able to participate). And voila, we had a backlog.
The next day then turned out to be the planning and prioritization day. We started by presenting the key sponsor and his entourage with the following:
- Acknowledged that we had a fixed deadline
- Put forward the pile of paper - the backlog
- Stated the sum of the StoryPoints (we called them apples) and how they came about
- Stated that, given the current velocity of the team, we could complete a third of the backlog within the deadline for the project (our team had an established velocity, and we used it to extrapolate, under the assumption that my estimates were ballpark-ish; see the sketch after this list)
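The extrapolation itself is trivial arithmetic. A minimal sketch, with made-up numbers (the actual velocity and backlog size are not in the story above):

// VelocityForecast.java - hypothetical numbers, the arithmetic is the point.
public class VelocityForecast {
    public static void main(String[] args) {
        int velocityPerSprint = 20;   // established StoryPoints ("apples") per sprint
        int sprintsLeft = 5;          // sprints remaining before the fixed deadline
        int backlogTotal = 300;       // sum of the StoryPoint estimates

        int reachable = velocityPerSprint * sprintsLeft;     // 100 points
        double fraction = (double) reachable / backlogTotal; // roughly a third

        System.out.printf("We can complete about %.0f%% of the backlog.%n",
                fraction * 100);
    }
}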
2013-04-02
10. If it works, fix it anyway
"Why change that? It works. Don't fix it if it works."And in some sense, that's a good advice. It's a good strategy for risk-mitigation, and my hunch is that it's correct 80% of the time. But in this particular case, I disagreed. We had touched upon that piece of code several times the last sprints, we had fixed several bugs in it, and chances were we would have to dig in to it again in the next sprints.
What definition of "work" do you follow when it comes to your code? In the discussion above, my colleague was of the opinion that if the user experienced the feature as working, we should not risk breaking that state of "working". I think that's a bad definition of "work".
Any definition of "work" must include maintainability to some degree. Of course, the required level of maintainability may vary, but in most cases readability and testability - not to mention the existence of adequate tests - must come first. If the code isn't easy to understand, it doesn't work. If the code is difficult to test, it doesn't work. If the code is difficult to exclude from a test that doesn't need it, it doesn't work. If it isn't covered by a test, it definitely doesn't work. And if it doesn't work, it needs to be fixed.
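To make "difficult to exclude from a test that doesn't need it" concrete, here is a minimal sketch with hypothetical names (not the code from our sprint): a class that constructs its own collaborator drags it into every test, while one that receives an abstraction can be tested alone.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// A hypothetical heavyweight dependency, standing in for a real database.
class Database {
    Database(String connection) { /* expensive setup */ }
    int lookupPrice(String item) { return 40; }
}

// Doesn't "work" by the definition above: the database is baked in, so no
// test of the pricing logic can exclude it.
class PriceCalculatorHardwired {
    private final Database db = new Database("prod-connection");
    int priceFor(String item) { return db.lookupPrice(item) * 2; }
}

// Works: the collaborator arrives through an abstraction, so a test can
// hand in a fake and exercise the pricing logic alone.
interface PriceSource { int lookupPrice(String item); }

class PriceCalculator {
    private final PriceSource prices;
    PriceCalculator(PriceSource prices) { this.prices = prices; }
    int priceFor(String item) { return prices.lookupPrice(item) * 2; }
}

class PriceCalculatorTest {
    @Test
    public void doubles_the_looked_up_price() {
        PriceCalculator calc = new PriceCalculator(item -> 40); // lambda as fake
        assertEquals(80, calc.priceFor("gadget"));
    }
}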
The code must stand the test of change in order to be flexible. Maybe the feature works, but the code doesn't. So go and fix it anyway.
2013-03-30
8. A New War on Waterfall
The plan our product manager had outlined contained the steps of a data-driven process for product development, where we could validate the product at every stage - from first concepts, through minimum viable products (MVPs) and prototypes - by interacting with the audience even before implementation and deployment. What my colleague failed to see was that the PM was picturing a "main flow" - not a prescriptive "do this, then that, then finish with this" type of plan. My colleague reacted as if saying that we want to do user studies and prototypes before we start working on the real thing implies that lessons learned while implementing the product, or data from the live product, cannot affect the direction the product takes. Which was obviously not what the PM had in mind. My colleague is not alone in making this mistake, though. An entire industry has made the same error for decades.
From reading Dr. Royce's paper, however, it becomes quite clear that this inflexible process is definitely not what he intended to communicate. Of the general overview of the process he states, and I quote: "I believe in this concept, but the implementation described above is risky and invites failure." The paper is full of examples showing how, if applied blindly, such a "step-by-step" process might fail. He also states quite clearly, translated to more modern terminology, that it is usual to have iterations spanning multiple phases. And there is more modern-sounding gold in the paper, like "Involve the customer - the involvement should be formal, in-depth, and continuing" - still a hallmark of good software development if you ask me! So the first paper that defined the Waterfall process is actually its first loud critic. It describes something that closely resembles a modern agile, iterative methodology. Unfortunately, after Royce's paper, and contrary to his intentions, the term waterfall has been cemented into our minds to mean an inflexible step-by-step process that is doomed to fail due to lack of feedback loops. It is wrongfully used as a contrast by whoever has a new methodology to promote.
2013-03-29
7. Test behavior, not methods
I recently reviewed some code from one of my colleagues. A bird's-eye view of his class looked like the following. There was nothing wrong with it - it looked perfectly fine to me.
// Foo.java
class Foo {
    public Foo(Bar bar) { ... }
    public void methodOne(String str) { ... }
    public String methodTwo() { ... }
    public Integer methodThree(Boolean b) { ... }
}
However, I read the test for this class first, FooTest. Even though it looked like it tested what it should, I had some immediate comments after just a quick glance at the file.
class FooTest {
    @Test
    public void testMethodOne() {
        ...code...
        assert...
        ...reusing and modifying state...
        assert...
        ...more code...
        assert...
    }
    @Test
    public void testMethodTwo() { ...more of the same... }
    @Test
    public void testMethodThree() { ...more of the same... }
}
Like many others, he had written one test for every method in the system under test (SUT). The methods were long. State bled through from the start of each test method to its end. Variables were reused and their state modified as the test went along. I usually prefer to test behaviors, not methods, and I felt I was on safe ground when I suggested he rearrange his tests so that he had one test per behavior.
He thought what I said was a ridiculous idea. As an example he pointed to one of my own tests, which followed this pattern and looked something like the following.
class FooTest {
    @Test
    public void methodOne_leaves_foo_with_clothes_on() { ... }
    @Test
    public void methodTwo_calls_mom() { ... }
    @Test
    public void methodTwo_writes_result_in_book() { ... }
    @Test
    public void methodOne_after_methodTwo_lights_the_firecrackers() { ... }
    ...more tests...
}
"I hear what you say, but I think it is ridiculous. What do you need all those damned tests for? You have twice the amount of code in your tests than what you have in the classes you write. Plus. You are violating our coding standard with those underscores! Think about maintainability, man! And after all, this is just the test-code. It doesn't need that much focus." Awh the pain! I managed to resist the temptation to slap the offending party in the face with a fresh cod I usually carry around for that exact purpose. Instead I resorted to a reasoned debate and conversation.
In some respects he's right. It becomes more code. Much more code. But I've come to realize that organizing tests by methods doesn't tell me anything about the behavior of the SUT. In order to understand the behavior of Foo above, I need to read all the code in the tests. I probably need to read all the code in Foo as well. And from that I might be able to deduce what it does, or what it's supposed to do. But even if I get that far, it's still very difficult to understand which aspects of that behavior actually have a test and which have not.
One test per behavior allows a quick overview of the behavior of a class or subsystem just by collapsing the bodies of the test methods. The names of the tests read as a story. IMO, the layout and cleanliness of the test code is more important than that of the production code itself. If we need more words to communicate, write more words - communicating intent clearly is more valuable than writing less code.
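To make this concrete, here is a minimal sketch of what such a test class can look like - a hypothetical stack example, not code from the review above. Collapse the method bodies and the names alone read as a specification:

import static org.junit.Assert.assertEquals;

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.Test;

public class StackBehaviorTest {
    @Test
    public void push_then_peek_returns_the_pushed_element() {
        Deque<String> stack = new ArrayDeque<>();
        stack.push("apple");
        assertEquals("apple", stack.peek());
    }

    @Test
    public void pop_removes_the_most_recently_pushed_element() {
        Deque<String> stack = new ArrayDeque<>();
        stack.push("apple");
        stack.push("banana");
        stack.pop();
        assertEquals("apple", stack.peek());
    }
}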
As Einstein said, paraphrased: make it as simple as possible, but no simpler.
2013-03-28
6. Efficient TDD: Start closer to the end user
Nobody gets it right every time. Especially not software design, where... well... everybody gets it wrong most of the time. Even when doing Test-Driven Development (TDD). I remember a series of blog posts by Ron Jeffries where he tried to implement a Sudoku solver using TDD and utterly failed to solve the problem. If you read the posts, notice how he jumps straight in and attacks an internal design detail before even understanding the problem. He basically gets stuck there and ends up without a solution.
I remember having had design sessions that look eerily similar. After listening to the product owner for two minutes outlining what's needed, we went on to discuss lots of internal design details. "We would need a Frobinizer." "Also, we need a FlexiPlug." "The Frobinizer and FlexiPlug can't talk to each other directly - they need to post and read messages via the PlancCamfer." "But the semantics of a Planc dictate that it should not share a component with the Camfer; they should be separated." On more than one occasion, I've found myself preoccupied with the software components to a very high degree, and to a much lesser degree with how they actually implement the features we wanted. And I do believe that the times I've ended up with a huge outburst of frustration, scrapping everything and starting over, are strongly correlated with the times I've become too preoccupied with the internals of what I'm making.
The topic for this post is a strategy for approaching design and implementation in a way that, in my experience, steers me away from the problems Mr. Jeffries experienced. The key is to start with a test that is as close to the user as possible.
All applications originate with some user or some agent that interacts with the application (not too much of a generalization, I presume). If I write a GUI application (I've blogged about something slightly relevant previously), I'm probably using a GUI framework, such as Qt, wxWidgets, GTK or Swing, so the actual view I'm putting together is composed of components from the framework. I'd like to avoid testing these framework-provided components. But the user clicks a button with the mouse, types some text in a text view, or exercises a key combo. I'd like to start as close to the user as possible, and for a GUI application this is where the user action leaves the framework code - usually this means implementing some kind of event handler. So the first test is for the event handler where the user interaction originates (and its immediate surroundings).
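As a minimal sketch of what that first test can look like - all names here are hypothetical; the point is that the handler is the first piece of our own code the click reaches after leaving the framework:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class SaveButtonHandlerTest {
    // Hypothetical abstraction the handler talks to.
    interface DocumentStore { void save(String text); }

    // The first piece of our own code after the framework's button callback.
    static class SaveButtonHandler {
        private final DocumentStore store;
        SaveButtonHandler(DocumentStore store) { this.store = store; }
        void onClick(String editorText) { store.save(editorText); }
    }

    @Test
    public void clicking_save_hands_the_editor_text_to_the_document_store() {
        DocumentStore store = mock(DocumentStore.class);
        SaveButtonHandler handler = new SaveButtonHandler(store);

        handler.onClick("hello world"); // what the framework callback would deliver

        verify(store).save("hello world");
    }
}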
For another example, we can look at a web service. When adding something new, I like to start by writing a test for the network access point. The test basically checks that the web endpoint receives requests and reacts appropriately to them. The endpoint is tested by having it relay representations to an interface. The basic design of such a test is as follows.
I start by writing the WebServiceEndPointTest and the WebServiceEndPoint, and create an interface - the ServiceInterface - that represents the real behavior. The WebServiceEndPoint translates incoming calls into something the ServiceInterface can relate to (e.g. method calls and values). The test then sets up a mock for the ServiceInterface, exercises the WebServiceEndPoint, and validates that the interactions with the mock were correct. There is nothing revolutionary about this idea, of course - the main point of this blog post is that this is the right end to start working from. Get the user interactions defined first, before the design of the rest of the application gets so set in stone that it can dictate what the user interaction can be.
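A minimal sketch of such a test, using Mockito for the mock. Only WebServiceEndPoint and ServiceInterface are named above; the handleGet and lookup methods are my assumptions for the sake of illustration:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class WebServiceEndPointTest {
    // The abstraction that represents the real behavior behind the endpoint.
    interface ServiceInterface { String lookup(String id); }

    // Translates web-level representations into method calls and values.
    static class WebServiceEndPoint {
        private final ServiceInterface service;
        WebServiceEndPoint(ServiceInterface service) { this.service = service; }
        String handleGet(String path) {
            String id = path.substring(path.lastIndexOf('/') + 1);
            return service.lookup(id);
        }
    }

    @Test
    public void a_get_request_is_translated_into_a_lookup_on_the_service() {
        ServiceInterface service = mock(ServiceInterface.class);
        when(service.lookup("42")).thenReturn("the answer");

        WebServiceEndPoint endpoint = new WebServiceEndPoint(service);
        String response = endpoint.handleGet("/items/42");

        verify(service).lookup("42");
        assertEquals("the answer", response);
    }
}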
You may view this as performing top-down design, as opposed to doing things bottom-up. And you would be right in pointing out that this leads to a design that pretty much follows the Dependency Inversion Principle (DIP). By following DIP, we get a design where "lower level" components and packages depend on abstractions exposed by "higher level" components, and where everything relates to everything else via those abstractions. This in turn leads to a nicely decoupled and inherently testable codebase. As a caricature of the dependency inversion principle: if you let a class Foo play the same role as WebServiceEndPoint above, notice how higher-level components don't depend on lower-level components, but lower-level components depend on abstractions from higher-level components (where "higher" translates to "closer to the end user").
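In code, the caricature might look like this (hypothetical names again; the only point is the direction of the dependencies):

// Higher level (closer to the user): owns the abstraction it needs.
interface ServiceInterface { String lookup(String id); }

class Foo {                       // plays the role of WebServiceEndPoint above
    private final ServiceInterface service;
    Foo(ServiceInterface service) { this.service = service; }
}

// Lower level: depends upward on the abstraction, never the other way around.
class DatabaseBackedService implements ServiceInterface {
    public String lookup(String id) { return "value for " + id; }
}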
It also turns out that the result of following this advice becomes very similar to behavior-driven development (BDD), without the domain language for specifying the tests. At the outermost level I get an outer loop, where I write tests that closely capture the behavior and the interactions of the end user. These tests are also suitable for testing the overall behavior of the system, and can be 'wired' as integration tests (in addition to working as unit tests for the outermost units). I also get an inner loop, where I can refine and refactor the designs and tests of the individual units that build up the application. So by following a very simple piece of advice - starting as close to the user as possible - we've discovered the dependency inversion principle and we've been doing BDD. Not bad for an afternoon's work!
So, how would I do the sudoku thing, then? When I solved sudoku by means of TDD, I started with a textual representation of a completed sudoku board and wrote a test that verified that my solver recognized it as complete and valid. After that I progressed by leaving one, then two, then more fields blank. After a handful of tests, I had a solution that was not too dissimilar from this solution, based on search and constraint propagation. I felt satisfied with the solution, and I never needed any tests for the internal data structures. Starting closer to the user guided me in a good direction.
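The first couple of tests might have looked something like the following - a sketch of the approach, not the original code, and the SudokuSolver API is assumed for illustration:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class SudokuSolverTest {
    // A well-known completed board, written row by row.
    private static final String SOLVED =
            "534678912" +
            "672195348" +
            "198342567" +
            "859761423" +
            "426853791" +
            "713924856" +
            "961537284" +
            "287419635" +
            "345286179";

    @Test
    public void recognizes_a_completed_board_as_complete_and_valid() {
        assertTrue(new SudokuSolver().isCompleteAndValid(SOLVED));
    }

    @Test
    public void fills_in_a_single_blank_field() {
        String oneBlank = "." + SOLVED.substring(1); // blank out the first cell
        assertEquals(SOLVED, new SudokuSolver().solve(oneBlank));
    }
}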