2013-09-22

Clarify goals and objectives in 20 minutes

I've found that having a clear picture of my overall goals and objectives makes it easier to make decisions about items in my inbox. It makes it easy to answer the question "is this really something I should act on?" Unclear goals make my lists of things I believe I have to do in this world long, unfocused and hard to navigate. But even though I experience real value from the clarity of mind a concrete set of goals gives, I've found it really hard at times to formulate them for myself - leading, of course, to a total breakdown in whatever productivity system I've been using at the time.

A solution came to me in a book called "Smart Choices: A Practical Guide to Making Better Decisions" by Hammond, Keeney and Raiffa. A part of their (somewhat elaborate) decision procedure for making a choice also provides a very good crutch for getting the clarity of mind I need in order to form a set of good objectives for myself.

The process is very simple. Bring out a sheet of paper (I use a two-page spread in my journal) and write down, as a headline, the area, project or problem you'd like to get clarity of mind around - be it a high-level thing such as "career" or a lower-level item such as "the car." Then list all concerns you have with respect to this area. For this phase, I like to use a timer. I set it to four minutes, and pour out everything that enters my mind until the timer tells me time's up. That usually fills 70% of an A5 page (which is the size of my journal) and leaves a little bit of room should I think of anything else during the rest of the process.

Next I transform all those concerns into objectives - which I write down on the right-hand page of the spread. The transformation goes from a concern - say "clunky sound from the rear when turning left" - to a positive objective such as "Decide whether I'm going to keep fixing the old car, or buy a new one." I try not to fuss too much about it, but I regularly stop and think a little bit bigger than my first inclination (which would be to take the car to the repair shop.)

The final step, then, is to separate the means from the ends - some of the objectives tend to be a way to accomplish other objectives. I turn the page in my journal, and give the page an informative heading such as "What I want from my Car". From the previous page, I pick the most important objective that is an end in itself, and write it down. Then I take any objectives that support that objective as a means, and write those down as sub-points for the main objective. Then I take the next 'ends' objective from the previous page, and so on, until all objectives from the previous page have been transferred to my overview.

And voila, I have a nice overview of clearly stated goals that address the concerns I have, and that I can revisit for clarity and focus, and measure progress against, in the future.





2013-07-06

Five questions on Velocity

What is Velocity, really, in software jargon?
  • "Velocity" is a measure on how fast something moves. In, say, Scrum, it translates to how many Story Points we can expect to produce during a fixed time-period. A Story Point is a relative estimate of the effort required to realise a well described User Story, or a user episode, if you like. I have used the term "Gummy Bears" instead of StoryPoints though. The name doesn't really matter.
Why do we want to measure Velocity, in this sense?
  • In order to allow us to optimise our development speed, as our Velocity should be tightly correlated to our ability to construct software. Getting better at what we do should translate to a higher Velocity, and a higher Velocity should reflect an overall higher production speed.
Why do you think Velocity is correlated to development speed?
  • We estimate the size of a User Story in Story Points. We believe we are able to estimate user stories well enough to establish such a correlation. Sure enough, sometimes we are overly optimistic, and at times we are too pessimistic. But we believe that, on average, we are equally wrong all the time.
You only talk about User Stories in relation to Velocity. But I do other tasks as well, such as fixing bugs or upgrading my OS. Shouldn't they count positively towards the Velocity as well - it is necessary work that needs to be done, right?
  • Ideally, no - fixing a bug doesn't count positively towards the Velocity. You see, time spent fixing a bug is time not spent building a new, valuable User Story. So if you spend a lot of time fixing bugs, your Velocity should really be expected to go down.
But that's madness - I need to fix my bugs! Why do you say not to fix the bugs?
  • Please fix the bugs! But Velocity is feedback on how fast we are creating new, valuable stuff. If you have to spend time fixing bugs, you've done something in the past that prevents you from creating new, valuable stuff now. Velocity going down because you need to spend your time fixing bugs is just an indication that your team needs to get better at, say, quality assurance, in order to squash those bugs at an early stage, when they are easy to fix!
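
To make the arithmetic behind these answers concrete, here is a minimal sketch of how velocity and a naive forecast could be computed. It is purely illustrative - the numbers and function names are made up, and a fixed-length sprint is assumed.

    import math

    def average_velocity(completed_points_per_sprint):
        """Average Story Points (or Gummy Bears) completed per sprint."""
        return sum(completed_points_per_sprint) / len(completed_points_per_sprint)

    def sprints_needed(backlog_points, velocity):
        """Naive forecast: sprints needed to finish the remaining backlog."""
        return math.ceil(backlog_points / velocity)

    # Only completed User Stories count. Time spent fixing bugs adds nothing,
    # so a bug-heavy sprint simply shows up as a lower number in the history.
    history = [21, 18, 24, 19]            # points completed in the last four sprints
    velocity = average_velocity(history)  # 20.5 points per sprint
    print(sprints_needed(123, velocity))  # a 123-point backlog -> 6 sprints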





2013-04-08

13. On Getting Things Done - Keeping the weekly review under an hour.

After comparing Getting Things Done (GTD) with Scrum the other day, I got sort of inspired to write more about GTD. I'd like to share two revelations I've made about my personal GTD implementation. As I mentioned, I'm (making an attempt at) following David Allen's productivity system GTD. My goal is to follow it by the book, and I do so to a large degree. But I have to admit: the most difficult habit to form and follow has been to perform the weekly review consistently.

The problem I've been facing is that doing a review has, to me, been a very costly endeavor. According to The Book, it should not take more than an hour. I've often found mine to take three. Spending three hours on a Sunday afternoon reviewing my work-in-progress is a lot of work - both for me and my family. Even when it leaves me feeling totally in control of my destiny for an hour or five afterwards.

Your basic weekly review checklist looks like this:
  1. Collect loose papers and materials
  2. Process all inboxes, so they are empty
  3. Empty your head (on paper, for example)
  4. Review your Next Actions list
  5. Review the two previous weeks in your calendar
  6. Review the two next weeks in your calendar
  7. Review everything you are waiting for
  8. Review project lists
  9. Review any relevant checklists
  10. Review SomedayMaybe
  11. Be Creative and courageous
To me, those ten actions plus the collection primer usually added up to much more than one hour of work! Although, in the periods when I actually did this on a regular basis (without the optimizations I will describe in a second), keeping it below an hour and thirty minutes felt entirely possible. But even then, dropping the last point - reviewing the contents of the SomedayMaybe list - happened more often than not.

So what did I do then, to fix this?

SomedayMaybe is a bad name

My first observation was that, over time, I had come to use my SomedayMaybe folder more like a trashcan. I placed things there, and never looked at them again. It grew huge! And for my own part, it was a problem of semantics. SomedayMaybe is, at least to my brain, too similar to the category of things that it would be nice to be able to do should the universe put them into my lap - but not otherwise. The intention of SomedayMaybe, however, is for things that you intend to do, but can't give top priority as of yet. For SomedayMaybe to work, it must be revisited on a regular basis! I needed a name for the category that was closer to "put it off for now, but make sure to read it again later."

I renamed my SomedayMaybe category to Incubate.

After renaming the list, I was more inclined to actually engage with it. That was half the battle - my Incubate system was working!

There are always too many projects

Reviewing my projects list was another pain point. It took a very long time during the review. And many of the projects were things I didn't really intend to execute on just now. They didn't really have a place on the project list at all! Of course, keeping the project list clean is one of the things one does during the weekly review, and all of those projects should have been moved to the someday-maybe category. But I really did intend to do them some time - and my someday-maybe wasn't working, so they stayed on my project list. A Catch-22 of sorts.

I ended up fixing this by adapting a concept from the Kanban process for agile software development: capping Work In Progress (WIP). I arbitrarily set the number of projects I allowed myself to have open at any time to 15, and moved projects one by one over to my Incubate folder until that limit was reached. Reviewing 15 projects is something I can do in a couple of minutes. In addition, limiting my active projects list to only 15 projects at a time really forced me to get my priorities in order. It became a driver to sort out especially what was happening at 30,000 feet (the short-to-medium-term goals and intentions level.) And then, having my priorities in order enabled me to think clearly and more critically when I reviewed my Incubate lists. So in a sense, limiting the content of my active projects created a positive feedback loop that made me clean up everything I put off - contributing to even more efficient weekly reviews.
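
For the programmers out there, the mechanics of the cap are trivial - something like the following sketch, where the limit of 15 and the list handling are just my own illustration, not a prescription from either GTD or Kanban:

    WIP_LIMIT = 15

    def enforce_wip_limit(active_projects, incubate, limit=WIP_LIMIT):
        """Demote the least important active projects to Incubate until the cap holds.

        Assumes active_projects is sorted with the most important project first.
        """
        while len(active_projects) > limit:
            incubate.append(active_projects.pop())  # move the least important one
        return active_projects, incubate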

Those two things - limiting WIP and renaming SomedayMaybe to Incubate - essentially fixed it for me. There are some more details to address here around review optimization, but these two items turned out to be the gist of the matter for me. I'm now able to do a meaningful review within an hour, and I actually look through my Incubate files when doing it.

2013-04-06

12. Is Scrum "GTD for teams?"

I've been using GTD for ten years plus, after a then colleague of mine sold me on the idea. He was very excited about David Allen's book. I've been doing Agile in some way or another since before that, after going all in on eXtreme Programming with a startup, and Scrum has been the go-to process of choice for the last six years or so. A couple of weeks ago, I heard a podcast (I think it was with Daniel Pink) where a part of the show touched upon the similarity between Scrum and GTD. One of the statements made in that podcast was that "Scrum is GTD for teams." When listening, I agreed that this comparison was not entirely without merit. I've come to appreciate the similarities between them, and I pondered a similar idea myself a couple of years ago. This is a short stab at what's similar and what's different. Is Scrum really GTD for teams?

Scrum (as do other agile processes, such as eXtreme Programming) splits the problem space of software development into Stories, and devises a plan for how to prioritize and track the translation of the Stories into actual software artefacts. GTD focuses on translating "stuff" that's coming your way into Projects and next actions, and on the efficient management of these through the use of contexts.

A Story maps almost directly to a Project. A story is a tiny description of a user interaction with the system, with a clear definition of the What and Why, and a description of the goal state through a recipe for how to demonstrate the completed story. A simplified example may be "User presses the Stroke button in order to draw lines with strokes", where this should be demonstrated by performing a drawing action that does not cause strokes unless the Strokes button is indeed pressed. In a similar way, GTD encourages you to think first about "What's the outcome, and what does doing look like?" and then define your actions in terms of things you need to do in order to reach done. A Project in GTD jargon is any goal state that needs more than one physical action. In this regard, you may say that "Research BitCoin" is an ill-formed project. "Research what economists in general think about BitCoin" is better. "Research ten renowned economists' opinions of BitCoin", however, clearly states what needs to be done, much in the same way a Scrum Story does. In some sense, Scrum and GTD have converged on very similar formats for the actionable items - at least ideally, if not in form.
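
To make the convergence concrete, here is a rough sketch of the shared shape. The field names and the example texts are my own invention - neither Scrum nor GTD prescribes this structure:

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        what: str   # the desired end state
        why: str    # the value it delivers
        done: str   # how to demonstrate that it is done

    # The same three questions describe a Scrum Story and a GTD Project equally well.
    story = Outcome(
        what="User presses the Stroke button in order to draw lines with strokes",
        why="Give the user control over line style",
        done="Drawing only produces strokes while the Strokes button is pressed",
    )
    project = Outcome(
        what="Research ten renowned economists' opinions of BitCoin",
        why="Decide whether BitCoin deserves my attention",
        done="A note summarising the positions of ten named economists",
    )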

Both GTD and Scrum define a kind of "morning ritual". Both ask you to figure out and state somewhat clearly what your goals for the day are. The GTD morning ritual is to look over the calendar for today, find the next actions you need to complete and perhaps group them in a "today" list. The Scrum morning ritual is to gather around the Scrum board, state for each other what goals one has for the day and perhaps grab new work if one is done with the previous task.

But this is where the similarities end. Scrum defines, in addition, a process where you first plan a "sprint" lasting one, two, three or four weeks. The planning consists of building a "Sprint Backlog", which contains all the stories to complete, with estimates. The stories are perhaps broken down in more detail - into the tasks needed to build them. Progress is measured relative to the sprint backlog. One goal is to fill the sprint with exactly the amount of work one can complete within it. The Sprint Backlog is considered "holy", and should not be touched during the sprint. The items on the Sprint Backlog are then tracked and "burned down" towards completion during the execution of the sprint. Then there is an end-of-sprint ritual to evaluate its execution. Lather, rinse, repeat.
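
For the curious, "burning down" is nothing more mysterious than comparing the remaining Story Points with an ideal, straight-line decline. A toy illustration - the numbers are made up, and real tools obviously do more than this:

    def burndown(total_points, sprint_days, completed_per_day):
        """Yield (day, actual_remaining, ideal_remaining) rows for a sprint."""
        remaining = total_points
        for day, done in enumerate(completed_per_day, start=1):
            remaining -= done
            ideal = total_points * (1 - day / sprint_days)
            yield day, remaining, round(ideal, 1)

    # A 40-point sprint backlog over 10 working days, five days into the sprint:
    for row in burndown(40, 10, [4, 0, 6, 3, 5]):
        print(row)  # compare the actual remaining points with the ideal line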

GTD does not come with such a process attached. However, GTD advocates a weekly review, where one clears the deck, looks over all open loops, and checks the calendar for the last two weeks and the next two weeks. This is done in order to see whether there are any required actions that one has failed to record anywhere. It may look like a planning and retrospective meeting, but it lacks the concepts of estimation and of "locking in" any target scope of work. GTD recognizes that all your priorities may change during a two-minute chat beside the water cooler, or when your spouse calls to tell you she has crashed the car. Reacting to such priority-changing events in Scrum requires an act of "terminating the sprint" and starting over with a new planning session.

With the last paragraph in mind, the GTD process looks more like a personal Kanban process. It is a more lightweight process than Scrum. Kanban focuses on continuous, evolutionary change and on how work items flow through the organization from concept to finished feature. The concept of a sprint is gone, and so is the concept of a Sprint Backlog. There is only the product backlog, in addition to the issues that crop up as one moves forward together. The center of a Kanban process is of course the Kanban board, maintaining a living representation of what work is actually being done at any time. Spending some time every week reflecting over the work being done is still a good idea. So are standups.

Another thing that distinguishes GTD from Scrum is that GTD moves beyond the mere execution of a plan. GTD defines some useful thinking tools, the different levels of focus, to guide how projects and actions are prioritized. Areas of focus and responsibility, short-term goals, visions and intentions, and finally purpose and principles - all these levels of thinking unite in a constructive way to help you reflect on yourself and your life's direction. A team working towards delivering a product needs to do the same, and Scrum provides very little guidance here besides giving the key stakeholder a name.

In conclusion, then, I'd say that the execution part of GTD bears more resemblance to Kanban than it does to Scrum. So one may say Kanban is GTD for teams. Or that GTD is a one-man Kanban. But this comparison on the execution level alone leaves out the deeper, more life-encompassing aspects of GTD. GTD is a different beast altogether.


2013-04-04

11. It's not the resources, it's the priorities.

"We don't have enough resources" - if I had an euro for every time I heard that statement, I would be rich. I've heard the sentence used everywhere, from the mouth of a one-man-team, and from leading developers in a seemingly "unlimited resources" situation. And every time, it kind of strikes me how... wrong... the statement is.

I remember participating in a project meeting once with our then most important customer. From the customer side, it included the main stakeholder - our key sponsor - and two of his colleagues, who participated in QA and follow-up activities. From our side, it included the Project Manager for the project, who was a new hire, the CEO of our company, a developer who had participated in meetings with this customer before, and me. It was the first time I visited, and I was quite new on the team. Due to several factors, such as key people moving on, lack of communication and a couple of very buggy binaries sent to the client, trust was suboptimal, to say the least.

The meeting spanned three days. The first day was "get to know each other day". The second day was more of a feature discussion day, which turned out to be the day where the customer spent most of their time piling on new stuff they wanted to have delivered within the scope of the contract.

Ah - the contract. I need to talk about the contract. From the start, it was almost the agilist's dream contract. It was fixed-price, with a list of perhaps 20 or so vaguely described features they wanted, delivered over four increments. The contract had fixed, seemingly non-negotiable delivery dates for each phase. We had delivered phase three, and the customer was still not happy with what we had delivered for phase two. So somewhere along the line we had failed to match - or control - the expectations of the customer. Fair enough. We were sitting there: we had not closed phase 2, we wanted to close phase 3, and we wanted to get started on the last phase. To add insult to injury, we had spent more time than scheduled on phase 3 - we didn't have the budget to complete phase 4 without taking a monetary loss (not to mention finishing it on time.) And still, the customer was piling on changes and fixes to features delivered in phases 1, 2 and 3.

I could rant on about what went wrong in that project, but I don't think that would be the interesting part. The interesting part is how we got out of it. We simply did a variation on the planning game. New and exciting for everybody but me.

On the evening of the second day, the PM and I sat down and wrote all the requirements we had gathered during those first two days on A4-sized printer paper. (For you purists: sorry - no index cards in the grocery store on the corner by our hotel.) One requirement on one piece of paper. There was even room for drawings. I committed a grave sin that day, by single-handedly giving Story Point estimates to every item in the list (the PM was not able to participate.) And voila, we had a backlog.

The next day then turned out to be the planning and prioritization day. We started by presenting the key sponsor and his entourage with the following:

  • An acknowledgement that we had a fixed deadline
  • The pile of paper - the backlog
  • The sum of the Story Points (we called them apples) and how they came about
  • The fact that, given the current velocity of the team, we could complete about a third of the backlog within the deadline for the project (our team had an established velocity, and we used this to extrapolate under the assumption that my estimates were ballpark-ish; a rough sketch of that arithmetic follows below.)
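
The extrapolation itself is nothing fancy. With made-up numbers (the real figures are long gone), it amounts to something like this:

    def feasible_share(backlog_points, velocity_per_sprint, sprints_until_deadline):
        """Rough share of the backlog that fits before the deadline."""
        capacity = velocity_per_sprint * sprints_until_deadline
        return min(1.0, capacity / backlog_points)

    # e.g. 360 "apples" in the backlog, 30 per sprint, 4 sprints left -> about a third
    print(feasible_share(360, 30, 4))  # 0.333...
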
I must admit that the first part of that meeting was difficult. But to cut the story short - when presented with what he had asked for in this way, the sponsor was able to agree to participate in prioritizing the backlog, remove two thirds of its content, and negotiate a realistic scope for the deadline. On which we delivered successfully. Trust was reestablished, and we got a new contract with them.

Lack of resources was not an issue in the discussion. Priorities, on the other hand, were. Priorities, and clearly communicating what priorities the stakeholders have to make. What to keep, and what to remove. When this is communicated clearly, the resource question solves itself: either the project gets the amount of resources it needs, or the project gets shelved. Both are wins, in some sense. In our case, it turned out to be more resources (the next contract.)

2013-04-02

10. If it works, fix it anyway

I remember a day when I'd spent a couple of hours understanding a piece of code. There were no tests for this code, and it was hard to read. When I finally understood it, I thought of a way to change the design of these parts so that it would be easier both to test and to read for the next maintainer coming along. I aired my opinion to one of my teammates, and you have probably already guessed what he said.
"Why change that? It works. Don't fix it if it works."
And in some sense, that's good advice. It's a good strategy for risk mitigation, and my hunch is that it's correct 80% of the time. But in this particular case, I disagreed. We had touched that piece of code several times over the last few sprints, we had fixed several bugs in it, and chances were we would have to dig into it again in the coming sprints.

What definition of "work" do you follow, when it comes to your code? In our discussion above, my colleague was of the opinion that if the user experienced the feature as working, we should not risk breaking that state of "working". I think that's a bad definition for "work".

Any definition of "work" must include maintainability to some degree. Of course, the required level of maintainability may vary, but for most cases readability and testability not to mention the existence of adequate tests must come first. If the code isn't easy to understand, it doesn't work. If the code is difficult to test, it doesn't work. If the code is difficult to exclude from a test that don't need it, it doesn't work. If it isn't covered by a test, it definitely doesn't work. If it doesn't work, it needs to be fixed.

The code must stand the test of change in order to be flexible. Maybe the feature works, but the code doesn't. So go and fix it anyway.

2013-03-30

8. A New War on Waterfall

"I'm concerned, because that looks very much like a waterfall!" a coworker of mine quipped. "We need to communicate across disciplines, and some times things discovered while implementing the solution might affect the final design of the product" he continued. The product manager had just finished outlining a sketch of a process he wanted us to follow when bringing new products or new versions of products to the market. My colleague (a very competent one, I might add) commented on what he perceived to be communicated as a linear step by step process. As he spoke, I felt an agonizing pain somewhere in my soul. As I've come to do ever so often lately.

The plan our product manager had outlined contained the steps of a data-driven process for product development, where we, every step of the way - from first concepts, through minimum viable products (MVPs) and prototypes - could validate the product at different stages by interacting with the audience, even before implementation and deployment. What my colleague failed to see was that the PM was picturing a "main flow" - not a prescriptive "do this, then that, then finish with this" type of plan. My colleague reacted as if saying that we want to do user studies and prototypes before we start working on the real thing implies that lessons learned while implementing the product, or data from the live product, cannot affect the direction the product takes. Which was obviously not what the PM had in mind. My colleague is not alone in making this mistake, though. An entire industry has made the same error for decades.

Even the person credited with defining the waterfall process was clearly aware that any venture into software development would require feedback from parts lower "down" to various parts higher "up". The "waterfall process" is usually credited to Winston W. Royce, although he did not use the term in the original paper. The figure he used to outline his basic process, or workflow, does indeed resemble a waterfall, hence the name. The waterfall is a flow of "steps" from system requirements through analysis, design, coding and testing to deployment. The term Waterfall has since come to mean a process where every activity in that list is performed to completion in isolation, and the result is handed over to the next box for processing. Below is the original, canonical picture of the waterfall process.


From reading Dr. Royce's paper, however, it becomes quite clear that this inflexible process is definitely not what he intended to communicate. Of the general overview of the process, he states, and I quote: "I believe in this concept, but the implementation described above is risky and invites failure." The paper is full of examples showing how, if applied blindly, such a "step-by-step" process might fail. He also states quite clearly, translated to more modern terminology, that it is usual to have iterations spanning multiple phases. And there is modern-sounding gold in the paper too, like "Involve the customer - the involvement should be formal, in-depth, and continuing" - still a hallmark of good software development if you ask me! So the first paper that defined the Waterfall process is actually its first loud critique. It describes something that to a high degree resembles a modern, agile, iterative methodology. Unfortunately, after Royce's paper, and contrary to his intentions, the term waterfall became cemented into our minds as an inflexible step-by-step process that is doomed to fail due to its lack of feedback loops. It is wrongfully used as a contrast by whoever has some new snake oil - a new process - to sell. But that inflexible step-by-step process never existed.

This is the core of my pain: most, if not all, people I meet in the software industry are (somewhat) grown-up persons who are well aware that succeeding in software development requires communication and feedback, and that things we learn along the way almost invariably change what we end up building. Heck, it's those who are fresh out of school who tend to yell the loudest about the need for feedback and communication. Do we forget this when we get older, or something?

Could we please then, even if somebody for the sake of communication paints a simplified picture of a bunch of boxes with arrows between them, take for granted that there are feedback loops? And do that even if, taken literally, it resembles a waterfall? I'm not saying I believe that those feedback loops will pop up all by themselves. They must be nurtured and cared for like any other important aspect of the development process. But sketching a general flow from idea to finished product does not exclude their existence!

It seems to me that yelling "waterfall!" has become the favorite guilt-by-association argument. Some feat, for a process that never really existed anyway! So then. Hereby, I declare war on the term waterfall itself.