Team members' quality objectives
"Team member, every error found in your work post inspection will count against you in your next appraisal. But the meter only starts running after your work has been inspected." Why might this work really well?
It means that authors become agents for quality. They demand a good inspection of their work - errors found there don't count against them. But perhaps you're thinking people will therefore submit unchecked work to inspection and let their colleagues find all the errors for them? Only once. It is embarrassing to be in a meeting at which your work is being inspected and it is riddled with errors - it's obvious you haven't done your job properly. In really bad cases the moderator may stop the inspection and tell the author to sort the work out and resubmit it when it's in a fit state - which can be quite humbling. Next time, one hopes, the author will do what he can to get it right - and glow with pride when few or no errors are found at the inspection.
We said earlier the target should be zero defects even though we know we'll fail to achieve it. If the target is any other number - say 20, and there are 10 team members - they all know they're allowed 2 errors each. They won't check their work all that well, on the basis that a couple of errors are OK. And in practice there will probably be 5 or 10 or 15 errors in there, not 2.
The team leader's target is also zero: if a member of his team is continually submitting bad work to inspection, that jeopardises the team leader's appraisal too. The team leader will, one hopes, offer support to that team member to help him improve the quality of his work. If there are several development teams and there is a certain amount of rivalry over which team will have the fewest errors found in their work in system test, that team member may also come under some peer pressure to pull his socks up. Generally, the more errors there are in a piece of work prior to inspection the more errors will be missed by the inspection, so anything that can be done to help and encourage authors to get it right themselves will have the biggest influence on the final quality. And when we reach the promised land, inspections and simulations just verify that there are indeed no errors there.
Fundamentally we are trying to reduce the number of bugs in live running. If in the past there have typically been, say, 20 bugs in live running for every 50 man months of project effort, and this time there are only 5 bugs per 50 man months, you know the team have done a good job. There may be other factors such as project complexity and team experience to take into account, but that's the final criterion: did we deliver a good quality system and save a lot of problems and cost in live running? That's the only reason for attending to quality - to save money.
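To make that comparison concrete, here is a minimal sketch in Python (the figures and the function name are hypothetical, chosen only to mirror the 20-versus-5 example above) of normalising live-running bug counts to a common base of 50 man months:

    # A minimal sketch (hypothetical figures) of normalising live-running
    # bug counts to a common base of 50 man months, so projects of
    # different sizes can be compared.
    def bugs_per_50_man_months(bug_count, effort_man_months):
        # Scale the raw bug count to what it would be over 50 man months of effort.
        return bug_count * 50.0 / effort_man_months

    # Historically: 60 bugs across 150 man months -> 20 per 50 man months.
    historical = bugs_per_50_man_months(60, 150)
    # This project: 12 bugs across 120 man months -> 5 per 50 man months.
    this_project = bugs_per_50_man_months(12, 120)

    print(f"Historical rate: {historical:.0f} bugs per 50 man months")
    print(f"This project:    {this_project:.0f} bugs per 50 man months")

The same normalisation lets you compare projects of different sizes before allowing for factors such as complexity and team experience.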
If you are brave you might even want to try this. At the beginning of your next project explain to your team all about inspections and simulations. Tell them you are a quality fanatic. Tell them you'd like them to plan the project for you and tell them they can plan in as much checking time as they like at the requirements and UFD stages. There's a risk they'll take you to the cleaners, but they probably won't - even for the team, the prospect of too many inspections can result in brain death.
But by saying they have carte blanche to plan in as much time as they like in order to get things right at those early stages, how have you almost guaranteed a good quality outcome? You have removed the excuse for bad quality: "we didn't have enough time". The team know that one won't wash with you, so they will check their work that one more time just to make sure it's right.
Consider these two projects:
Project A. The perception is that the schedule is very tight, there isn't enough time to do quality checks in the requirements or design steps. The requirements and design steps 'finish' on schedule but the documents aren't really finished and are in fact riddled with errors. Build is fraught and runs behind schedule. Testing takes a very long time and delivery is late.
Project B. The perception is that the schedule isn't too tight. Quality checks are planned in at the early stages and are done, and the requirements and UFD are top notch - as complete and correct as one could hope for. Build runs like clockwork. Testing goes to plan and delivery is on time.
Project A and project B are the same project.
Objective quality and subjective quality
When the system goes live, counting the number of bugs (by severity), the number of crashes and outages, and the number of mandatory improvements gives us an objective measure of the system's quality. However, users sometimes love a system despite all the bugs - it makes their lives easier, does what they want, and even has some sexy features. Conversely, they may hate a system that is available 24/7 and has no bugs at all. So it's worth doing a user satisfaction survey to gauge this subjective view of the system's 'quality' - more on this in the next chapter.
And finally:
"The bitterness of poor quality lingers long after the sweetness of meeting the date has been forgotten."