Project Management Book



Simulations

There is a technique we will call simulation that can be very effective at finding the gaps that inspections miss.

Having finished the inspections of the UFD, and believing it to be correct, gather the team in a room and open the UFD at page 1 which, say, shows the initial web page the customer will see. Generate some test data, e.g. post code X74Y-4, and invite someone to be the computer. He slavishly, without adding any of his own knowledge, executes the logic specified in the UFD and says: "ah, if post code X74Y-4 is entered this pop-up would appear saying invalid post code." We may even have the screen layout on a transparency, write in X74Y-4, then show an OHP slide of the pop-up. This low-tech approach works well.

Then try test case number 2. This time it's a valid post code and the simulator (i.e. the person executing the logic specified in the UFD) tells us which web page would then be displayed to the user. Perhaps someone else now takes over the role of the computer and some more test data is 'entered' to see what the system would do. The simulator says that the data would be stored: post code P74 7YQ; Mr Smith; 74 Arcadia Avenue; Part number 34712; quantity 3. Someone takes on the role of keeper of the database and writes this data onto a flip chart on the wall.
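
If it helps to picture the mechanics, here is what the human 'computer' is doing, sketched in Python purely for illustration. The post code rule, page names and data fields below are invented - in a real simulation they would come straight from your UFD:

    import re

    # Hypothetical UFD rule: a valid post code matches this UK-style
    # pattern. (The regex is illustrative, not the real specification.)
    POST_CODE_PATTERN = re.compile(r"^[A-Z]\d{1,2} \d[A-Z]{2}$")

    flip_chart = []  # stands in for the 'keeper of the database' flip chart

    def simulate_order_page(post_code, name, address, part_number, quantity):
        """Execute the page-1 logic exactly as the (hypothetical) UFD says."""
        if not POST_CODE_PATTERN.match(post_code):
            return "POP-UP: invalid post code"        # page-1 error path
        # Valid path: the UFD says the data is stored and the next page shown
        flip_chart.append({"post_code": post_code, "name": name,
                           "address": address, "part": part_number,
                           "quantity": quantity})
        return "DISPLAY: order confirmation page"

    print(simulate_order_page("X74Y-4", "Mr Smith", "74 Arcadia Avenue", 34712, 3))
    print(simulate_order_page("P74 7YQ", "Mr Smith", "74 Arcadia Avenue", 34712, 3))
    print(flip_chart)

The person playing the computer is doing nothing more clever than this: applying the specified rules to the test data and announcing the result.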

When we get to page 392 of the UFD and we are trying to produce an invoice - actually doing the calculations with a calculator using data on the flip charts - the person doing it says: "oops, I can't calculate the invoice amount because the customer discount code, which gives the discount percentage, is blank". And we realise the discount code has to be a mandatory field rather than an optional field on the input screen on page 94. That kind of error tends otherwise to remain dormant until the later stages of system testing.
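
Expressed as code purely for illustration (the discount codes, rates and prices below are invented), the page-392 calculation fails in exactly the way the simulator reports:

    DISCOUNT_RATES = {"A": 0.10, "B": 0.05, "C": 0.00}   # invented codes/rates

    def invoice_amount(unit_price, quantity, discount_code):
        """Page-392 logic: gross amount less the customer's discount."""
        gross = unit_price * quantity
        rate = DISCOUNT_RATES[discount_code]   # blows up if the code is blank
        return gross * (1 - rate)

    # The data captured earlier has no discount code - the page-94 input
    # screen made it optional - so the calculation cannot be completed:
    try:
        print(invoice_amount(unit_price=9.99, quantity=3, discount_code=None))
    except KeyError:
        print("Cannot calculate invoice: discount code is blank")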

You will usually find that the simulation has to be stopped after the first hour or so. Why? The design just doesn't say what's supposed to happen next. So you stop the simulation, plug the gap and come back to it. It can take a couple of weeks to simulate right through a system, but that could save a month of system testing. Give it a try: you will be amazed at what a simulation will find.

This has been described as getting the users to do the user acceptance testing at the design stage: it's much cheaper to find the errors here than later when you have a million lines of code to unpick and rework. The simulation uses test data which can be re-used during system test. Simulations are excellent education for the test team. And those charged with writing user guides will find participating in simulations very informative.

How does this differ from prototyping? Prototyping is often interpreted as showing users a series of screen shots to illustrate how the dialogues flow. A simulation is much more than that. We are pumping test data through the system logic to see if it produces the right outputs. We are exercising not only the visible parts of the system but the invisible parts as well. We are system testing the UFD.

If you're thinking there's no way your design documents are ever at a level of detail or state of completeness that would allow them to be simulated in this way, you have such a fundamental problem that no quality assurance techniques in the world are ever really going to help you. Simulation is a technique for refining silk purses, not a technique for the miraculous transformation of sow's ears. If you haven't got the basics of the software manufacturing process right, everything we are describing in this chapter will sink without trace into the quagmire of your disorder.

"Agile Project Management" is a technique you may have come across. It is perhaps most readily applied to projects that lend themselves to rapid development of small pieces of software. Nevertheless, many of agile project management's principles should be applied in all software projects. For example, a leadership philosophy that encourages teamwork and accountability; joint IT and User teams; and, particularly in the context of this chapter, a disciplined project management process that encourages frequent inspection. Indeed, inspections and simulations as we have described them require disciplined project management (to make them happen) and joint IT and User teams (to make them effective). (And please remember that agile project management is not a technique that suggests you approach large projects with no planning, no disciplined software development process, etc. And agile project management certainly does not mean rushing to get some easy, flashy bits working to impress the users, only to have the project fall apart later because you never got down to the hard, unflashy graft of working out what the complicated bits of the system were supposed to do.)

Simulations require the project manager to make a decision: to do simulations rather than not do them. Getting the bugs out by system testing doesn't require any decision to be made: if there are bugs you will have no choice but to keep testing. Simulations represent a proactive approach to quality management; leaving it all to testing is a reactive one. Strangely, people will sometimes tell you they can't afford to have someone spend two weeks doing a simulation, but they will happily commit that same person for two months to do testing - nobody will argue with the need to throw in limitless resources to sort out the manifest problems in testing - there is no choice. Earn your money, be proactive, get the bugs out early, save money.

If the requirements document is inspected and simulated (yes, the business processes can be simulated too) and the user functions design document is inspected and simulated and the IT guys inspect their IT technical design and do code inspections (a.k.a. walkthroughs or code reviews) there will be many fewer errors left to be found in system and user testing.

If you have no personal experience that leads you to know that inspecting and simulating up front will mean many fewer errors in system test, you may be reluctant to plan significantly less testing time than you normally would. That's OK. Plan in inspections and simulations and the usual amount of testing time - belt and braces. Then when system test finishes early because you have run out of bugs to find, nobody will complain. But this might mean 'quality' shows up as an addition to project cost in the estimates, even though the actual outcome will be a reduced project cost (everything else being equal!).


Expected results

As soon as the user functions design document has been signed off, the system test team can start work. They can produce test data and work through the UFD to determine the expected results. For example, the customer orders 3 of product x at £n and 4 of product y at £m - the tester can then work out what data should appear on the invoice by following the logic specified in the UFD. Suppose that in doing so the tester realises the expected result is very obviously wrong - the logic specified in the UFD is incorrect. He has found a bug months before he has done any actual testing, maybe even before the code is written.
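
The tester's expected-result calculation is no more than this sketch (the prices and discount are invented for illustration):

    def expected_invoice(order_lines, discount_rate=0.0):
        """Follow the UFD logic: sum the line amounts, apply the discount."""
        gross = sum(quantity * unit_price for quantity, unit_price in order_lines)
        return round(gross * (1 - discount_rate), 2)

    # 3 of product x at £12.50 and 4 of product y at £8.00, 10% discount:
    print(expected_invoice([(3, 12.50), (4, 8.00)], discount_rate=0.10))  # 62.55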

When he comes to do the testing it should be quick - all his thinking has been done in advance - he just has to check the system produces what he was expecting; he doesn't have to spend time working out whether the system has produced the right result.


System testing

If the UFD is complete and correct, how much business knowledge would one need to do system testing? None at all. The best testers can read and write but have no other obvious skills. There is only one way they can test - by laboriously generating 'what if' test cases based upon the UFD. After all, system test is fundamentally checking one thing: does the system do what the UFD says it should? And the very best testers of all are university vacation students - they're bright people, they work hard because they want a job when they leave university and they get paid peanuts - it's like employing slave labour to do testing.

In practice a good test team comprises a test team leader (or test team manager in a large project), the junior people to do the donkey work plus one or two wise heads to say if the system is really right despite what the UFD says. Leading system test is a highly skilled job. In a moment we will discuss when the system test leader should join the project team.

In projects that are modifying packages or bolting new functions onto them, the UFD will be the design for the changes or additions to the package and the same principles will apply in system test: do the changes and additions perform as per the UFD? If it is a tried and tested package we will not need to test its underlying functionality. However, if we are the first users of a package or a new version of a package we may want to test it as well as our changes: this can mean the test cost is disproportionately high when compared to the build and unit test cost - something to watch out for if we are using rules of thumb for design cost vs build cost vs system test cost, etc.

Is the following reasonable? The developers get behind schedule, skip the planned quality checks, deliver late to system test, system test overruns because there are so many bugs, the end date slips and the system test leader/manager gets the blame. Not very reasonable.

With good quality management the following becomes possible. At the start of development the test manager and the development manager agree how many errors the developers will deliver to system test to find - let's say 300. (At its simplest, the prediction could be based upon how many bugs per development man-month have been found in system test in the past.) They further agree this would mean that around 50 bugs would be found in the first week of system testing, and further still that if more than 80 are found in the first week that will constitute statistical proof there are many more than 300 bugs in the system. And the system test team will go home and the system will be handed back to the developers until they can show they've done their job properly. This is another nuclear deterrent: we hope we never have to use it, but everyone on the production line has the right to expect that good quality work will be passed to them. Even if, when it comes to the crunch, you decide not to suspend system test, it must be made clear that the development manager has failed and he should be appraised accordingly. If he knows at the outset he will be held accountable for the quality of what he delivers he is more likely to take quality seriously.
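
For the curious, the 'statistical proof' is simple arithmetic. If the system really contained 300 bugs and about one sixth of them surface in week one, week-one finds are roughly Poisson with a mean of 50 - a simplifying assumption made here for illustration; base your own model on your own history:

    from math import exp

    def poisson_tail(k, mean):
        """P(X >= k) for X ~ Poisson(mean)."""
        term, below_k = exp(-mean), 0.0
        for i in range(k):
            below_k += term          # accumulate P(X = 0 .. k-1)
            term *= mean / (i + 1)   # next Poisson term
        return 1 - below_k

    # Probability of finding 80 or more in week one if only 300 bugs exist:
    print(poisson_tail(80, 50))      # roughly 6e-05 - vanishingly unlikely

In other words, 80+ finds in week one is far more plausibly explained by there being many more than 300 bugs than by bad luck.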

With system test completed without the usual panic, and maybe even early, the system is thrown open for a final user acceptance test - which should be something of a non-event. After all, the users know, having done the simulation, that the system is acceptable and they should find very few, if any, errors in this final test. Indeed this should be more a demonstration of how wonderful the system is than a test.

When is a bug a bug? If the system is not performing as per the UFD that's a bug. If the system is performing as per the UFD but the UFD is wrong and you want it changed that's not a bug, that's a change request.

If quality management has been good during development we will know roughly how many bugs will be found in testing: we can plan the number of tests to be done per week, the expected number of bugs per week, the expected number of bugs outstanding at the end of each week, the average time to fix bugs, and then we can track actual performance against that plan. And one could devise several other measures to track both the quality of the system and the efficiency with which bugs are being fixed. A key measure is the bug finding trend: if the number being found each week is above plan and keeps going up each week the alarm bells start to ring. A declining number of bugs being found each week may indicate we're on the home straight.
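
As a sketch (every number below is invented), the weekly plan, actuals, outstanding count and finding trend can all be laid side by side:

    plan_found   = [50, 60, 55, 45, 40, 30, 20]   # bugs expected per week
    actual_found = [48, 65, 70, 72]                # actuals so far
    actual_fixed = [20, 45, 50, 55]

    outstanding = 0
    for week, (p, f, x) in enumerate(zip(plan_found, actual_found, actual_fixed), 1):
        outstanding += f - x
        rising = week > 1 and f > actual_found[week - 2]
        status = "ALARM: above plan and rising" if f > p and rising else "on track"
        print(f"week {week}: found {f} (plan {p}), outstanding {outstanding} - {status}")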

A lack of quality management earlier in the project will mean system test is unpredictable and the project manager is not so much managing it as clinging to a tiger's tail. Manage quality throughout and system test becomes as predictable and manageable as any other part of the project.


