What's Wrong With Agile Methods - Part 3
Some Principles and Values to Encourage Quantification
Tom Gilb, http://www.gilb.com/
Lindsey Brodie, Middlesex University
Case Study of The ‘Confirmit’ Product
Tom Gilb and his son, Kai, taught the Planguage methods to the FIRM (Future Information Research Management, Norway) organization. FIRM subsequently used these methods in the development of their Confirmit product. The results were impressive, so much so that they decided to write up their experiences (Johansen 2004, Johansen and Gilb 2005). This section presents some of the details from the Confirmit product development project.
Use of Planguage Methods
First, the quantified requirements were specified, including the target levels. Next, a list of design ideas (solutions) was drawn up (see Figure 8 for an example of an initial design idea specification).
Recoding:
Type: Design Idea [Confirmit 8.5].
Description: Make it possible to recode a marketing variable, on the fly, from Reportal.
Estimated effort: 4 team days.
Figure 8. A brief specification of the design idea, ‘Recoding’
The impacts of the design ideas on the requirements were then estimated. The most promising design ideas were included in an Evo plan, which was presented using an Impact Estimation (IE) table (see Tables 2 and 3, which show the part of the IE table applying to Evo Step 9. Note these tables also include the actual results after implementation of step 9). The design ideas were evaluated with respect to ‘value for clients’ versus ‘cost of implementation’. The ones with the highest value-to-cost ratio were chosen for implementation in the early Evo steps. Note that value can sometimes be defined by risk removal (that is, implementing a technically challenging solution early can be considered high value if implementation means that the risk is likely to be subsequently better understood). The aim was to deliver improvements to real external stakeholders (customers, users), or at least to internal stakeholders (for example, delivering to internal support people, who use the system daily and so can act as ‘clients’).
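The selection rule described above (implement the design ideas with the highest value-to-cost ratio first) can be sketched as follows. This is a minimal illustration of the ranking step only; the design-idea names and figures below are hypothetical, not FIRM's actual data:

```python
# Rank candidate design ideas by value-to-cost ratio, as in Evo step planning.
# Ideas and numbers are illustrative examples, not from the Confirmit project.

def rank_by_value_to_cost(ideas):
    """Return ideas sorted so the highest value-to-cost ratio comes first."""
    return sorted(ideas, key=lambda i: i["value"] / i["cost"], reverse=True)

candidates = [
    {"name": "Recoding",      "value": 50, "cost": 4},   # % impact vs. team-days
    {"name": "Bulk update",   "value": 30, "cost": 6},
    {"name": "New report UI", "value": 80, "cost": 20},
]

for idea in rank_by_value_to_cost(candidates):
    print(idea["name"], round(idea["value"] / idea["cost"], 2))
```

The highest-ratio idea goes into the earliest Evo step; as the text notes, "value" here can also encode risk removal rather than direct stakeholder benefit.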
| EVO STEP 9: DESIGN IDEA: ‘Recoding’ | Estimated Scale Level | Estimated % Impact | Actual Scale Level | Actual % Impact |
|---|---|---|---|---|
| REQUIREMENTS |  |  |  |  |
| Objectives: |  |  |  |  |
| Usability.Productivity 65 <-> 25 minutes. Past: 65 minutes. Tolerable: 35 minutes. Goal: 25 minutes. | 65 - 20 = 45 minutes | 50% | 65 - 38 = 27 minutes | 95% |
| Resources: |  |  |  |  |
| Development Cost 0 <-> 110 days | 4 days | 3.64% | 4 days | 3.64% |
Table 2. A simplified version of part of the IE table shown in Table 3. It shows only the objective, ‘Productivity’, and the resource, ‘Development Cost’, for Evo Step 9, ‘Recoding’, of the Market Research (MR) project. The aim in this table is to show some extra data, and some detail of the IE calculations. Notice the separation of the requirement definitions for the objectives and the resources. The Planguage keyed icon ‘<->’ means ‘from baseline to target level’. On implementation, Evo Step 9 alone moved the Productivity level to 27 minutes, or 95% of the way to the target level.
The IE table was used as a tool for controlling the qualities: estimated figures and actual measurements were input into it. Each next Evo step was then decided based on the results achieved after implementation and delivery of the previous step. Note that the results were not actually measured with statistical accuracy by a scientifically correct large-scale survey (although FIRM are currently considering doing this). The impacts described for Confirmit 8.0 (the ‘Past’ levels) are based on internal usability tests, productivity tests, performance tests carried out at the Microsoft Windows ISV laboratory in Redmond, USA, and on direct customer feedback.
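The percentage figures in an IE table follow directly from the scale definition: an impact is the fraction of the distance moved from the baseline (‘Past’) level toward the Goal level. A minimal sketch of that calculation, using the Step 9 Productivity and Development Cost numbers from the tables:

```python
def impact_percent(past, goal, achieved):
    """Percentage of the way from the Past (baseline) level to the Goal level.

    Works whether the scale improves downward (e.g. minutes) or upward.
    """
    return (past - achieved) / (past - goal) * 100

# Usability.Productivity: Past 65 min, Goal 25 min.
print(round(impact_percent(65, 25, 45), 1))  # estimated level of 45 min -> 50.0 %
print(round(impact_percent(65, 25, 27), 1))  # actual level of 27 min    -> 95.0 %

# Development Cost: budget 0 <-> 110 days; Step 9 consumed 4 days.
print(round(4 / 110 * 100, 2))               # -> 3.64 % of the budget
```

Values over 100% simply mean the step overshot the Goal level on that scale.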
| Requirement | Current Status (Units) | Improvement (Units) | Improvement (%) | Past | Tolerable | Goal | Step 9 ‘Recoding’ Estimated (Units) | Estimated (%) | Actual (Units) | Actual (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| Usability.Replaceability (feature count) | 1.00 | 1.0 | 50.0 | 2 | 1 | 0 |  |  |  |  |
| Usability.Speed.New Features Impact (%) | 5.00 | 5.0 | 100.0 | 0 | 15 | 5 |  |  |  |  |
|  | 10.00 | 10.0 | 200.0 | 0 | 15 | 5 |  |  |  |  |
|  | 0.00 | 0.0 | 0.0 | 0 | 30 | 10 |  |  |  |  |
| Usability.Intuitiveness (%) | 0.00 | 0.0 | 0.0 | 0 | 60 | 80 |  |  |  |  |
| Usability.Productivity (minutes) | 20.00 | 45.0 | 112.5 | 65 | 35 | 25 | 20.00 | 50.00 | 38.00 | 95.00 |
| Development resources (days) | 101.0 |  | 91.8 | 0 |  | 110 | 4.00 | 3.64 | 4.00 | 3.64 |
Table 3. Details of the real IE table, which was simplified in Table 2.
The two requirements expanded in Table 2 are Usability.Productivity and the development resource. The 112.5% improvement result represents a 20-minute level eventually achieved, beyond the initial 4-day stint (which landed at 27 minutes, 95%). A few extra hours were used to move from 27 to 20 minutes, rather than using the next weekly cycle.
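The 112.5% figure uses the same baseline-to-goal arithmetic as the rest of the IE table; a quick check:

```python
# The 112.5 % Productivity result: a level of 20 minutes against
# Past = 65 minutes and Goal = 25 minutes (i.e. the Goal was exceeded).
past, goal, achieved = 65, 25, 20
print((past - achieved) / (past - goal) * 100)  # -> 112.5
```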
The Results Achieved
The adoption of Evo methods led to focused improvements in the product quality levels. See Table 4, which gives some highlights of the 25 final quality levels achieved for Confirmit 8.5, and Table 5, which gives an overview of the improvements by function (that is, product component) for Confirmit 9.0. No negative impacts are hidden. The targets were largely achieved on time.
| DESCRIPTION OF REQUIREMENT / WORK TASK | PAST | CURRENT STATUS |
|---|---|---|
Usability.Productivity: Time for the system to generate a survey | 7200 sec | 15 sec |
Usability.Productivity: Time to set up a typical specified Market Research (MR) report | 65 min | 20 min |
Usability.Productivity: Time to grant a set of End-users access to a Report set and distribute report login info. | 80 min | 5 min |
Usability.Intuitiveness: The time in minutes it takes a medium experienced programmer to define a complete and correct data transfer definition with Confirmit Web Services without any user documentation or any other aid | 15 min | 5 min |
Workload Capacity.Runtime.Concurrency: Maximum number of simultaneous respondents executing a survey with a click rate of 20 seconds and a response time < 500 milliseconds, given a defined [Survey-Complexity] and a defined [Server Configuration, Typical]. | 250 users | 6000 users |
Table 4. Improvements to product quality levels in Confirmit 8.5
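Read literally, the ‘Past’ and ‘Current Status’ columns of Table 4 imply the following improvement factors for the time-based tasks (a quick sanity-check sketch; task names are abbreviated from the table above):

```python
# Improvement factors implied by Table 4 (Past level vs. Current Status).
# Figures are taken from the table; task names are shortened for readability.
improvements = {
    "Generate a survey (seconds)":            (7200, 15),
    "Set up a typical MR report (minutes)":   (65, 20),
    "Grant end-user report access (minutes)": (80, 5),
    "Define a data transfer (minutes)":       (15, 5),
}

for task, (past, now) in improvements.items():
    print(f"{task}: {past / now:.1f}x faster")
```

The concurrency row scales the other way (higher is better): 250 to 6000 simultaneous respondents is a 24-fold increase.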
| FUNCTION | PRODUCT QUALITY | DEFINITION (quantification) | CUSTOMER VALUE |
|---|---|---|---|
Authoring | Intuitiveness | Probability that an inexperienced user can intuitively figure out how to set up a defined Simple Survey correctly. | Probability increased by 175% (30% to 80%) |
Authoring | Productivity | Time in minutes for a defined advanced user, with full knowledge of Confirmit 9.0 functionality, to set up a defined advanced survey correctly. | Time reduced by 38% |
Reportal | Performance | Number of responses a database can contain if the generation of a defined table should be run in 5 seconds. | Number of responses increased by 1400% |
Survey Engine | Productivity | Time in minutes to test a defined survey and identify 4 inserted script errors, starting from when the questionnaire is finished to the time testing is complete and ready for production. (Defined Survey: Complex Survey, 60 questions, comprehensive JScripting.) | Time reduced by 83% and error tracking increased by 25% |
Panel Management | Performance | Maximum number of panelists that the system can support without exceeding a defined time for the defined task, with all components of the panel system performing acceptably. | Number of panelists increased by 1500% |
Panel Management | Scalability | Ability to accomplish a bulk-update of X panelists within a timeframe of Z seconds. | Number of panelists increased by 700% |
Panel Management | Intuitiveness | Probability that a defined inexperienced user can intuitively figure out how to do a defined set of tasks correctly. | Probability increased by 130% |
Table 5. Some detailed results by function (product component) for Confirmit 9.0
Copyright © 2006 by Tom Gilb