Agile Delivery at British Telecom
Ian Evans, British Telecom
Introduction
It is becoming clear, not least from the pages of this publication, that agile development methods are being adopted, or at least considered, by a growing number of software development teams & organisations. Whether you are already an active practitioner of agile development, or considering its adoption on your project, you will be aware of the business benefits that can be derived through faster and more effective software delivery, not to mention the motivational impact it can have on development teams. Alternatively, maybe you work for a large organisation that has yet to make any serious inroads into agile development, and are left wondering how agility could be made to work on a large scale.
If you're in the latter camp, or even if you are not actively considering agile development as such but are struggling to deliver large and/or complex programmes using traditional approaches and wishing there was a better way, then you are probably where British Telecom (BT) found itself in 2004. That was before the arrival at the company of a new CIO, who systematically set about replacing the company's long-standing waterfall-based delivery processes with an approach that embodied the key principles of agile delivery.
This article presents an overview of the approach taken by BT, illustrating how agile development principles can be applied successfully at the enterprise level. Needless to say, the approach taken by BT is not for the faint-hearted - it has involved a high degree of risk, and certainly a lot of pain. Now well into its second year, the transformation is far from complete, but it is already paying dividends.
Background
BT employs some 8,000 IT professionals in a variety of roles including project & delivery management, architecture & design, software engineering, integration & testing, operational support and service management. Much of its internally-focussed development work has traditionally been channelled through a number of business-focussed delivery projects or programmes, ranging from quite small, simple developments to large-scale and complex business solutions, the latter tending to be the norm.
The predominant delivery approach, certainly for the larger delivery programmes, was very much waterfall-based. The use of agile development practices, notably DSDM and Scrum, was limited to a small number of fairly small, self-contained development teams. BT was in fact one of the founding members of the DSDM Consortium and took an active part in shaping the method in its early days.
Despite BT having successfully delivered a number of large, complex solutions into a dynamic, competitive yet highly regulated business environment, many significant transformation programmes were struggling to deliver any notable results in an acceptable timeframe. As part of a CMMI-inspired improvement strategy, efforts had been made to formalise acknowledged best practice processes into a standard delivery methodology. In 2004, this standard methodology was in the process of being rolled out when the new CIO made it clear that an entirely new, agile approach was needed.
Drawbacks of the waterfall
Reinforcement of current waterfall-based practices was not really the answer however. Many of the delivery problems experienced at BT, and no doubt other large organisations, stem from the nature of the waterfall lifecycle itself. Some examples of these problems are given here. For a more complete demolition of waterfall practices, refer to Craig Larman's excellent work [1].
Poor requirements capture
Capturing requirements certainly isn't a bad thing. On typical large programmes, however:
- Individual business stakeholders are anxious to incorporate all of their known requirements into the first / next release
- "Gold users" generate hundreds, if not thousands of detailed requirements that often bear little relationship to the business problems that needs to be addressed
- Most if not all requirements are given a high priority
- The requirements themselves, at best, represent today's view, which will certainly have changed by the time the requirements are actually implemented
Disconnected design
Given the sheer number of requirements, the design community finds itself spending most of its time trying to figure out what they mean. Meanwhile:
- The requirements analysts move on to other projects, taking with them important tacit knowledge
- Some stakeholders become concerned that their requirements are not being adequately addressed, and therefore refuse to sign off the designs
- Other stakeholders unearth more requirements or raise change requests, diverting scarce design expertise onto impact analyses
Development squeeze
With the design stage having slipped, development teams find themselves under intense pressure to deliver components into the integration environment by the originally agreed date. In fact, they often take the decision, reluctantly, to start development against an unstable design, rather than do nothing or divert resources to other programmes. Inevitably, system testing is cut short so that original timescales are met and the programme is seen to be on target.
The integration headache
The integration team has a set number of weeks during which it needs to integrate what it expects to be fully functional and relatively bug-free code. Because of the instability of the component code, and the lack of any effective regression test capability, effort is instead diverted to resolving elementary bugs in the delivered code, liaising with a development team that is now engaged in the next major release. Actual integration therefore runs into months, creating a knock-on effect on other programmes requiring the services of the integration team, not to mention frustration within the business community, which had been busy preparing itself for an on-time delivery.
The deployment nightmare
It is now at least six, or even 12 to 18, months since the business originally identified the need for this particular solution. Compromises and oversights made during the requirements and design phases, followed by de-scoping during development, have resulted in a solution that bears little relationship to what was originally envisaged. Besides, the world has actually moved on in the meantime. The business then finds that the solution is not fit for purpose and refuses to adopt it. Worse, it adopts the solution and soon finds that it is slow, error-prone and lacking key features, and eventually reverts to the old system. The end result - more shelfware!
Delivering in 90-day cycles
Early in each delivery cycle, the programme sets out clear targets for what it expects to achieve for the business during that cycle. These targets invariably include a strong emphasis on the end-customer experience, such as overall response times, transaction success rates, and so on. At the end of the cycle, the programme is assessed against these targets, and the outcome of this assessment will influence the timing of bonus payments for the programme team members. Programmes failing to deliver business value over a series of cycles face being closed down altogether.
This of course places a certain amount of pressure on the (internal) customer to be clear about the business priorities and the features that would provide the greatest return on investment. It also requires that the customer is ready and able to deploy the solutions into the business and realise the intended benefits. In practice, programmes often take two or more 90-day cycles to progress a particular solution to a point where it is fit for deployment. Even so, there is an opportunity at the end of each cycle to assess what has been delivered so far, and to provide feedback based on what has already been developed.