Acceptance TDD Explained
Lasse Koskela
The mid-iteration sanity check
I like to have an informal sanity check in the middle of an iteration. At that point, we should have approximately half of the stories scheduled for the iteration running and passing. This might not be the case for the first iteration, because we have to build up more infrastructure than in later iterations; but, especially as we get better at estimating our stories, we should be in the general vicinity of having 50% of the stories passing their tests.
Of course, we’ll be tracking story completion throughout the iteration. Sometimes we realize early on that our estimated burn rate was clearly off, and we must adjust the backlog immediately. By the middle of an iteration, however, we should generally be pretty close to having half the stories for the iteration completed. If not, the chances are that there’s more work than the team has capacity for, or that the stories are too big relative to the iteration length.
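To make the arithmetic of this check concrete, here is a minimal sketch in Python of what it might look like. The story names and point estimates are hypothetical and not taken from the article; the point is simply the comparison of actual progress against the rough 50% expectation at mid-iteration.

# A minimal sketch of the mid-iteration sanity check described above.
# Story names and estimates are hypothetical, for illustration only.
planned_stories = {
    "transfer-history": 5,    # estimate in story points
    "recent-accounts": 3,
    "frequent-accounts": 3,
    "vat-numbers": 2,
    "recipient-names": 2,
}
passing_stories = {"transfer-history", "recent-accounts"}  # acceptance tests green

total = sum(planned_stories.values())
done = sum(points for story, points in planned_stories.items()
           if story in passing_stories)
completion = done / total

# Halfway through the iteration, roughly 50% of the planned work should
# have passing acceptance tests; a large gap is the signal to swap, drop,
# or split stories rather than to work overtime.
print(f"{completion:.0%} of planned work passing (expect roughly 50% at mid-iteration)")
if completion < 0.4:
    print("Burn rate is off: adjust the iteration scope with the customer.")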
Learning from our mistakes, we’ve come to realize that a story’s burn-down rate is consistently a more accurate source of prediction than an inherently optimistic software developer. If it looks like we’re not going to deliver our planned iteration content, we decrease our load.
Decreasing the load
When it looks like we’re running out of time, we decrease the load. We don’t work harder (or smarter). We’re way past that illusion. We don’t want to sacrifice quality, because good quality sustains our productivity, whereas bad quality only creates more rework and grinds our progress to a halt. We also don’t want our developers to burn out from working overtime, especially when we know that overtime doesn’t make any difference in the long run. Tom DeMarco and Timothy Lister have done our industry a great favor with their best-selling books Slack (DeMarco; Broadway, 2001) and Peopleware (DeMarco, Lister; Dorset House, 1999), which explain how overtime reduces productivity. Instead, we adjust the one thing we can adjust: we align the iteration’s scope with reality. In general, there are three ways to do that: swap, drop, and split.
Swapping stories is simple. We trade one story for another, smaller one, thereby decreasing our workload. Again, we must consult the customer in order to assure that we still have the best possible content for the current iteration, given our best knowledge of how much work we can complete.
Dropping user stories is almost as straightforward as swapping them. "This low-priority story right here, we won’t do in this iteration. We’ll put it back into the product backlog." But dropping the lowest-priority story might not always be the best option, considering the overall value delivered by the iteration. That particular story might be of low priority in itself, but it might also be part of a bigger whole that our customer cares about. We don’t want to optimize locally. Instead, we want to make sure that what we deliver at the end of the iteration is a cohesive whole that makes sense and can stand on its own.
The third way to decrease our load, splitting, is a bit trickier compared to dropping and swapping—so tricky that we’d better give the topic its own little section.
Splitting stories
How do we split a story we already tried hard to keep as small as possible during the initial planning game? In general, we can split stories by function or by detail (or both). Consider a story such as "As a regular user of the online banking application, I want to optionally select the recipient information for a bank transfer from a list of most frequently and recently used accounts based on my history so that I don’t have to type in the details for the recipients every time."
Splitting this story by function could mean dividing the story into "…from a list of recently used accounts" and "…from a list of most frequently used accounts." Plus, depending on what the customer means by "most frequently and recently used," we might end up adding another story along the lines of "…from a weighted list of most frequently and recently used accounts" where the weighted list uses an algorithm specified by the customer. Having these multiple smaller stories, we could then start by implementing a subset of the original, large story’s functionality and then add to it by implementing the other slices, building on what we have implemented for the earlier stories.
Splitting it by detail could result in separate stories for remembering only the account numbers, then also the recipient names, then the VAT numbers, and so forth. The usefulness of this approach is greatly dependent on the distribution of the overall effort between the details—if most of the work is in building the common infrastructure rather than in adding support for one more detail, then splitting by function might be a better option. On the other hand, if a significant part of the effort is in, for example, manually adding stuff to various places in the code base to support one new persistent field, splitting by detail might make sense.
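As a rough illustration of that trade-off, the following Python sketch uses hypothetical effort estimates to weigh the two strategies. The numbers and the threshold are invented; the sketch only makes the reasoning above tangible, it is not a prescribed formula.

# Hypothetical effort estimates (in ideal days) for the bank-transfer story.
infrastructure = 6        # shared plumbing: history storage, list UI, selection flow
per_detail = {"account number": 1, "recipient name": 1, "VAT number": 1}

# Splitting by detail only pays off if each detail slice carries enough of
# the total work to stand as a story of its own; the 15% cut-off here is an
# arbitrary illustration, not a rule from the article.
total = infrastructure + sum(per_detail.values())
smallest_detail_share = min(per_detail.values()) / total

if smallest_detail_share < 0.15:
    print("Most effort is in common infrastructure: prefer splitting by function.")
else:
    print("Each detail adds substantial work: splitting by detail can make sense.")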
Regardless of the chosen strategy, the most important thing to keep in mind is that, after the splitting, the resulting user stories should still represent something that makes sense—something valuable—to the customer.
This article was originally published in the Summer 2008 issue of Methods & Tools