Software organizations that want to maximize the yield of their testing effort find that choosing the right testing strategy is hard, and most testing managers are ill-prepared for the task. The organization has to learn to plan testing efforts based on the characteristics of each project and the many ways the software product will be used. This tutorial is intended for software professionals who are likely to be responsible for defining the strategy, planning the testing effort, and managing it through its life cycle. These roles are usually Testing Managers or Project Managers.
1. Webinar: Risk Driven Testing May 5th, 2010 11:00 AM CST Please note: The audio portion of this webinar is only accessible through the telephone dial-in number that you received in your registration confirmation email.
2. Jorge Boria Senior VP International Process Improvement Liveware Inc. [email_address] Michael Milutis Director of Marketing Computer Aid, Inc. (CAI) [email_address]
3. About Presenter’s Firm Liveware is a leader among SEI partners, trusted by small, medium, and large organizations around the world to increase their effectiveness and efficiency by improving the quality of their processes. With collective experience averaging over 20 years in software process improvement, we know how to make our customers succeed. We partner with our clients by focusing on their bottom line and their short- and long-term business goals. With over 70 Introduction to CMMI classes delivered and 40 SCAMPI appraisals performed, you will not find a better consultant for your process improvement needs.
9. The V Model Applied [V-model diagram: the left leg descends from Acceptance Requirements (SRD) through Acceptance Specifications (TSD) to Coding (SDS); the right leg ascends from Unit Test Execution through System Test Execution to UAT Execution and Acceptance, each producing a Test Report; UAT, System Test, and Unit Test Planning and Preparation pair with the corresponding left-leg phases; a Hand Off of Developed Components (SDS), Phase End Reviews, and a Post Mortem Project Review complete the cycle.]
Editor's Notes
The purpose of this webinar is to discuss issues that impact the effectiveness of IT organizations. Our discussion will be limited to IT Service Delivery (problem resolution, consultation requests, enhancements and projects). We will not be addressing Infrastructure or Operations Management issues.
Discuss these versus the class expectations, going over the notes from the introduction slide.
There are many more problems… see what students can add to the list. Other things that are often missing are the quality characteristics - what are the reliability requirements, the availability requirements, maintainability, portability, etc. What platforms are needed? What’s the key problem with today’s system that has to be addressed by this new one? What can go wrong if we don’t plan for these things in testing?
A project is a microcosm within a larger organization. Effective risk management must take into account the business environment in which the project operates. Many, if not most, projects fail not for technology or project management reasons, but because of larger organizational pressures that are typically ignored. These organizational pressures come in many forms, such as competitive pressures, financial health, and organizational culture. Here is a sample list of risk sources and possible consequences. It is interesting to note that the elements of significant risk are not the same across all types of projects. Different types of projects face different kinds of risks and must then pursue entirely different forms of risk control. When you take only a limited amount of time to do risk identification, you might use this list of categories to guide brainstorming of the risks to the projects. For example, if you are working on a small project which will receive minimal risk and reviews focus, you may spend only a few minutes considering the risks. Use the list of categories here to guide that time in a top-down approach to identifying the risks.
When deciding what to test, the crunch between the scarcity of resources and the need to provide comprehensive coverage forces the testing manager into a compromise. To pass between the horns of this dilemma, the best option is to find those aspects of the product that have the most impact on the business, a concept sometimes identified with “good enough”. A product might be defect-free and not good enough, or defect-plagued and good enough for its market. These critical success factors (CSFs) are the quintessential element of a good testing plan.
What are the business drivers for the change? What will make the product a success or a failure? For example, if the business need is headcount reduction based on the goodness of your interface, how can you test that the reduction could be (not will be, because that is outside your scope) achieved? In the above slide, discuss what features might be crucial to the success of the product.
You should research who the buyers of the product are. Every product is expected to bring positive changes that will eventually impact the bottom line. For some buyers, this imperative is seen as a short-term goal. Is this your case? If so, how? Consider that sometimes the problem the product is expected to solve is one of administrative control. Does the product have the functionality to provide this? Is this functionality correct? Would the end-users also see improvement from the installation of the system? How can you get the kinks out of the system before shipping it to them, so that this is true?
What good is a good system if it is not really solving a problem? Would you use eighteen-wheelers for urban delivery of letters and documents? Does that make them bad products? Conversely, would you use motorcycles to ship fresh farm produce across the continental United States? Does that make motorcycles unfit for commercial applications? When you are testing, do you test only against requirements? Whose view of what makes sense for the business are you assuming? Remember that your role is not to check that the software runs, nor to prove it correct, but to expose all aspects that the users will object to!
The testing manager has two dimensions to worry about: being effective, that is, detecting as many defects as possible, and being efficient, that is, doing so under the restrictions of scarce resources. The scarcest resource is, of course, time. We have already discussed that testing is, by definition, always on the critical path. Therefore, the wise manager schedules critical tasks (let us call the testing tasks related to critical success factors that) before others. The purpose of testing is to find defects, but an implied consequence is that these defects get fixed. In that sense, reporting is very much a critical skill of a good tester. One way to measure it is the time developers spend reproducing a defect when trying to fix it. This, and the other measures shown here, are just examples of goal-setting dimensions.
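The reproduction-time measure above can be computed from defect-report data. A minimal sketch, assuming a hypothetical record layout (the field names and numbers are invented for illustration, not taken from the webinar):

```python
# Hypothetical sketch: scoring defect-report quality by how long
# developers spend reproducing each reported defect. A lower average
# suggests clearer, more actionable reports from that tester.
from statistics import mean

reports = [
    {"tester": "A", "repro_minutes": 5},
    {"tester": "A", "repro_minutes": 7},
    {"tester": "B", "repro_minutes": 45},
]

def avg_repro_time(reports, tester):
    """Average minutes developers spent reproducing this tester's defects."""
    times = [r["repro_minutes"] for r in reports if r["tester"] == tester]
    return mean(times) if times else None

print(avg_repro_time(reports, "A"))  # 6
print(avg_repro_time(reports, "B"))  # 45
```

Tracked over time, such an average gives the testing manager one concrete goal-setting dimension among the others mentioned.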
Link this plan to the Project Plan by the schedule constraints; enter this under the Schedule Constraints sub-section. Describe the model being followed by the project (Simple Waterfall, Parallel Waterfall, Evolutionary, Prototyping, Spiral, etc.); enter this under the Project’s Lifecycle Model sub-section. Define the project’s tasks at a high level of granularity, in order to show the schedule dependencies of the testing tasks on the project’s tasks, and use your Testing Process now to interleave the testing tasks without tailoring them yet; enter all this under the Project’s Work Breakdown Structure sub-section. You will have the opportunity later to refine or change the testing tasks, even to drop some as you see fit. If known, enter under the Project’s Design Architecture sub-section the overall design architecture: whether it is batch, event-driven, one-, two-, or three-tiered, etc. Discuss any shortcomings of the project that can have an impact on the business from the viewpoint of the testing team; enter this under the Project’s Shortcomings sub-section.
There are many more problems… see what students can add to the list. Other things that are often missing are the quality characteristics - what are the reliability requirements, the availability requirements, maintainability, portability, etc. Can we test them? Should we? What platforms are needed? What’s the key problem with today’s system that has to be addressed by this new one?
The point here is that testing, always on the critical path, will not be granted the time required to do a thorough job in all but the most mission-critical projects. However, it still has to do a “good enough” job. Therefore, a large part of the strategy is to cleverly budget the time allotted to testing. Mind you, this is not a problem of testing resources: even with a very large number of testers, you can have too little time to run a very large number of tests. Also, the nature of the process is that before you can run many tests, the programs break down and you send them back to be fixed. This is, in fact, the limiting factor: how many defects can be fixed per unit of time? Since you will find ten times as many defects in the time it takes to correct one, starting early makes all the sense. If you leave testing until the end, when all the resources have been committed to delivering massive quantities of unusable functionality, the project is lost.
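The arithmetic behind the limiting factor can be made concrete. A toy model, with all rates invented for illustration, in which defects are found roughly ten times faster than they are fixed:

```python
# Toy model (all numbers invented) of the limiting factor described above:
# testers find defects far faster than developers can fix them, so the
# schedule is bounded by fix capacity, not by test capacity.

find_rate_per_day = 20   # defects found per day while tests keep failing
fix_rate_per_day = 2     # defects the developers can fix per day
total_defects = 200      # latent defects in the release

days_to_find = total_defects / find_rate_per_day   # 10 days
days_to_fix = total_defects / fix_rate_per_day     # 100 days

# Finding finishes long before fixing catches up: starting test execution
# early buys the fixers calendar time that cannot be recovered later.
print(days_to_find, days_to_fix)
```

Under these assumed rates, delaying testing by a week costs the project a week of fixing capacity it can never get back.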
You cannot stress enough that quality cannot be tested into a product. Yes, you can test the kinks out of a product, but quality is a fundamental, quintessential, holistic characteristic. User-friendliness is not a requirement; it is a general statement. The (derived) requirement will have to be testable, as in the number of buttons, the number of clicks to get the job done, the feedback received, the time to do the job, etc. User-friendliness is, surprisingly, very unfriendly to the tester. It isn’t even a usability statement! It probably, but not always, draws from usability, but performance and fitness for purpose are more important. You might want to have reliability numbers, but you can’t unless you have profiled scenarios of usage, with probabilities attached.
It is time to think about pre-scheduling. Will this strategy fly? Mainly: will the people be available, will there be time to perform the tests (and the fixes), will the life-cycle model accommodate the strategy, or will you have to change the strategy to accommodate the model? For example, suppose you have set a high coverage goal for the unit tests and the architecture is an OO framework. Will you have to adjust the goals to fit the architecture? Will high scenario coverage suffice?
Risk action planning turns risk information into decisions and actions. Planning involves developing actions to address individual risks, prioritizing risk actions, and creating an integrated risk management plan. Here are four key areas to address during risk action planning: Research. Do we know enough about this risk? Do we need to study the risk further to acquire more information and better determine the characteristics of the risk before we can decide what action to take? Accept. Can we live with the consequences if the risk were actually to occur? Can we accept the risk and take no further action? Manage. Is there anything the team can do to mitigate the impact of the risk should the risk occur? Is the effort worth the cost? Avoid. Can we avoid the risk by changing the project approach?
A contingency plan provides a fallback option in case all efforts to manage the risk fail. For example, suppose a new release of a particular tool is needed so that software can be placed on some platform, but the arrival of the tool is at risk. We may want to have a plan to use an alternate tool or platform. Simultaneous development may be the only contingency plan that ensures we hit the market window we seek. Deciding when to start the second parallel effort is a matter of watching the trigger value for the contingency plan. To determine when to launch the contingency plan, the team should select measures of risk handling or measures of impact that they can use to determine when their mitigation strategy is out of control. At that point, they need to start the contingency plan.
Trigger values for the contingency plan can often be established based on the type of risk or the type of project consequence that will be encountered. Trigger values help the project team determine when they need to spend the time, money, or effort on their contingency plan, since mitigation efforts are not working.
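The trigger-value idea reduces to a simple comparison that the team evaluates at each status check. A minimal sketch, with the measure, direction, and threshold all invented for illustration (the webinar does not prescribe specific values):

```python
# Illustrative sketch: watching a trigger value that tells the team
# mitigation has failed and the contingency plan must start. The measure
# tracked and its threshold are project-specific assumptions.

def contingency_triggered(observed, trigger, higher_is_worse=True):
    """Return True once the tracked measure crosses its trigger value."""
    return observed >= trigger if higher_is_worse else observed <= trigger

# Example: mitigation assumed the vendor tool would arrive by week 10;
# once week 10 passes without it, launch the parallel alternate effort.
weeks_elapsed, trigger_week = 10, 10
print(contingency_triggered(weeks_elapsed, trigger_week))  # True
```

The value of writing the check down is that the decision to spend contingency money stops being a judgment call made under pressure and becomes a pre-agreed rule.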
The action plan addresses the risk in a way that allows us to apply resources or other assistance to remove the potential problem. The contingency action is our fallback plan, for the possibility that the action does not work. Here we see a case where there is probably no viable option other than the one being developed. If it doesn’t get to us on time, we may need to ship without the feature. The product may have other capabilities for which the customer needs the release on the original date planned, whether or not it has the Web interface.
Another way to think of it is to divide the universe of test suites into mandatory, supplementary, and complementary test cases, and to rank the suites into “must run”, “good to run”, and “optional”.
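The ranking above can be turned into a time-budgeting rule: always run the “must run” suites, then fill whatever time remains with “good to run” and finally “optional” suites. A sketch under assumed suite names, durations, and budget (all invented for illustration):

```python
# Hypothetical sketch: budgeting test execution time across ranked suites.
RANK_ORDER = {"must run": 0, "good to run": 1, "optional": 2}

def select_suites(suites, budget_minutes):
    """Pick suites in rank order until the time budget is exhausted.
    "must run" suites are always included, even over budget."""
    chosen, spent = [], 0
    for name, rank, minutes in sorted(suites, key=lambda s: RANK_ORDER[s[1]]):
        if rank == "must run" or spent + minutes <= budget_minutes:
            chosen.append(name)
            spent += minutes
    return chosen, spent

suites = [
    ("login regression", "must run", 30),
    ("report layout", "optional", 45),
    ("payment flow", "must run", 60),
    ("search filters", "good to run", 40),
]
print(select_suites(suites, 120))
```

Because the sort is stable, suites within the same rank keep their listed order, so the ranking alone decides what gets dropped when the budget shrinks.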
Our focus is to help build effective business processes, leveraging the best products in the marketplace, to solve customer problems quickly.