The objective of this presentation is to give you an introduction to Test Automation, its importance in the context of today's agile methodologies, what automation involves, and its complexity and limitations. If you are a student or at an early stage of your career, I intend to present a career option to you and to generate curiosity, so that you will research it further based on the material listed at the end of the presentation.
Done programmatically, and far more efficiently: a mature test automation regime allows testing at the 'touch of a button', with tests run overnight when machines would otherwise be idle. Automated tests are repeatable, using exactly the same inputs in the same sequence time and again, something that cannot be guaranteed with manual testing. Automated testing enables even the smallest of maintenance changes to be fully tested with minimal effort. At first glance, it seems easy to automate testing: just buy one of the popular test execution tools, record the manual tests, and play them back whenever you want to. Unfortunately, as those who have tried it have discovered, it doesn't work like that in practice.
Before we go into the details of automation, I would like to highlight that automation is different from testing.

Testing is a skill, and it depends on quality test cases. A test case has four attributes:
- Effectiveness: whether or not it finds defects, or at least is likely to find defects.
- Exemplary: an exemplary test case tests more than one thing, thereby reducing the total number of test cases required.
- Economic: how economical the test case is to perform, analyze, and debug.
- Evolvable: how much maintenance effort is required on the test case each time the software changes.

Automation is a skill of a different kind.

Manual vs. automated testing with respect to these four attributes: whether a test is automated or performed manually affects neither its effectiveness nor how exemplary it is. It doesn't matter how clever you are at automating a test or how well you do it; if the test itself achieves nothing, the end result is a test that achieves nothing faster. Once implemented, an automated test is generally much more economic, the cost of running it being a mere fraction of the effort to perform it manually. However, automated tests generally cost more to create and maintain.

Roles of the tester vs. the test automator: the person who builds and maintains the artifacts associated with the use of a test execution tool is the test automator. A test automator may or may not also be a tester, and may or may not be a member of a test team. For example, there may be a test team consisting of user testers with business knowledge and no technical software development skills.
Other than efficiency, let's quickly see the other benefits of automation.

Regression testing: in an environment where many programs are frequently modified, the effort involved in performing a set of regression tests should be minimal. A clear benefit of automation is the ability to run more tests in less time, and therefore to run them more often. This leads to greater confidence in the system.

Attempting to perform a full-scale live test of an online system with, say, 200 users may be impossible, but the input from 200 users can be simulated using automated tests (see the sketch below).

Better use of resources: automating menial and boring tasks, such as repeatedly entering the same test inputs, gives greater accuracy as well as improved staff morale, and frees skilled testers to put more effort into designing better test cases to be run. Machines that would otherwise lie idle overnight or at the weekend can be used to run automated tests.
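To make the 200-user point concrete, here is a minimal sketch (my addition, not from the original deck) of simulating concurrent users with threads; login_and_query() is a hypothetical stand-in for the real client actions against the system under test:

```python
# Minimal sketch: simulate N concurrent users, something impractical to
# arrange manually. login_and_query() is a hypothetical placeholder.
import threading

def login_and_query(user_id):
    # Stand-in for the real client actions (log in, run a query, log out).
    pass

def simulate_users(n):
    threads = [threading.Thread(target=login_and_query, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    simulate_users(200)  # 200 simulated users at the touch of a button
```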
Consistency and repeatability of tests: tests that are repeated automatically will be repeated exactly, every time. This gives a level of consistency to the tests which is very difficult to achieve manually. The same tests can be executed on different hardware configurations, using different operating systems, or using different databases. This gives a consistency of cross-platform quality for multi-platform products which is virtually impossible to achieve with manual testing.

Reuse of tests: the effort put into deciding what to test, designing the tests, and building the tests can be spread over many executions of those tests. Tests which will be reused are worth spending time on to make sure they are reliable (a sketch follows below).

Once a set of tests has been automated, it can be repeated far more quickly than it would be manually, so the testing elapsed time can be shortened. Knowing that an extensive set of automated tests has run successfully, there can be greater confidence that there won't be any unpleasant surprises when the system is released (provided that the tests being run are good, effective tests!).
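As an example of test reuse (again my addition): with pytest, a single test definition can be reused across several configurations. The CONFIGS list and connect() helper below are invented for illustration:

```python
# Minimal pytest sketch: one test definition reused across configurations.
import pytest

CONFIGS = ["postgres-linux", "oracle-windows", "sqlite-macos"]  # assumed targets

def connect(config):
    """Hypothetical stand-in for a real driver; returns a canned record."""
    class Conn:
        def lookup(self, key):
            return {"name": "Alice"}
    return Conn()

@pytest.mark.parametrize("config", CONFIGS)
def test_customer_lookup(config):
    conn = connect(config)
    assert conn.lookup("C1001")["name"] == "Alice"
```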
As agile development becomes more prevalent, automation becomes more important. Continuous integration depends on test automation: regression tests are run every day, if not more often. The automation also needs to be responsive to change, just as agile development is, so the testware architecture becomes even more critical. Test automation is successful in traditional as well as agile development, but agile development cannot succeed without test automation.
Let's look at the test activities, because these are the activities that we may want to automate:
- Identify: determine what can be tested. This could be done in parallel with the development activity.
- Design: determine how to test. Test case design will produce a number of tests comprising specific input values, expected outcomes, and any other information needed for the test to run, such as environment prerequisites.
- Build: implement the test scripts, test inputs, test data, and expected outcomes for comparison.
- Execute: run the test cases.
- Check: compare the test case outcomes to the expected outcomes.

As shown here, the first two activities, identifying test conditions and designing test cases, are mainly intellectual in nature. The last two, executing test cases and comparing test outcomes, are more clerical in nature. It is the intellectual activities that govern the quality of the test cases. The clerical activities are particularly labor intensive and are therefore well worth automating. Execution and comparison are repeated many times, while identifying test conditions and designing test cases are performed only once (except for rework due to errors in those activities). Tests are rerun, for example, if a test finds an error in the software, if a test fails for an environmental reason such as incorrect test data being used, or if tests are to be run on different platforms. It is in automating the latter activities that there is most to gain (see the sketch below).
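A minimal sketch (my addition) of automating the two clerical activities, execute and check: run the program under test and compare its output against a stored expected outcome. The ./billing program and tests/ layout are assumptions for illustration:

```python
# Automate execute + compare: run a command, diff output against a file.
import subprocess
from pathlib import Path

def run_test(case: str) -> bool:
    actual = subprocess.run(
        ["./billing", f"tests/{case}.in"],       # execute the test case
        capture_output=True, text=True
    ).stdout
    expected = Path(f"tests/{case}.expected").read_text()
    return actual == expected                     # check against expected outcome

for case in ["add_customer", "delete_customer"]:
    print(case, "PASS" if run_test(case) else "FAIL")
```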
A test script is the data and/or instructions with a formal syntax, used by a test execution automation tool and typically held in a file. A test script can implement one or more test cases, navigation, set-up or clear-up procedures, or verification. The test scripts you produce should be properly engineered; writing scripts is much like writing a computer program. Although test scripts cannot be done away with altogether, using different scripting techniques can reduce the size, number, and complexity of the scripts. One of the benefits of editing and coding scripts is to reduce the amount of scripting necessary to automate a set of test cases. This is achieved in two ways (see the sketch below):
- Code relatively small pieces of script that each perform a specific action or task common to several test cases. Each test case that needs to perform one of the common actions can then use the same script.
- Insert control structures into the scripts to make the tool repeat sequences of instructions without having to code multiple copies of those instructions.
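A minimal sketch (my addition) of the two techniques just described. The Session class is a hypothetical stand-in for a test execution tool's API, and the field and button names are invented:

```python
# Technique 1: a small shared script (log_in) reused by many test cases.
# Technique 2: a control structure replacing repeated instructions.
class Session:
    """Hypothetical stand-in for a test execution tool's API."""
    def type(self, field, value): print(f"type {value!r} into {field}")
    def press(self, button): print(f"press {button}")

def log_in(session, user, password):
    session.type("username", user)
    session.type("password", password)
    session.press("OK")

def test_create_order(session):
    log_in(session, "tester1", "secret")    # reuse the common action script
    for item in ["A100", "B200", "C300"]:   # loop instead of three copies
        session.type("item_code", item)     # of the same instructions
        session.press("Add")

test_create_order(Session())
```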
Attributes of a good script set:
- Number of scripts: fewer (less than one script for each test case)
- Size of scripts: small, with annotation, no more than two pages
- Function: each script has a clear, single purpose
- Documentation: specific documentation for users and maintainers; clear, succinct, and up to date
- Reuse: many scripts reused by different test cases
- Structured: easy to see and understand the structure, and therefore to make changes; follows good programming practices and well-organized control constructs
- Maintenance: easy to maintain; changes to the software require only minor changes to a few scripts
Test verification is the process of checking whether or not the software has produced the correct outcome. This is achieved by performing one or more comparisons between an actual outcome of a test and the expected outcome of that test (i.e. the outcome when the software is performing correctly). Some tests require only a single comparison to verify their outcome, while other tests may require several. For example, a test case that has entered new information into a database may require at least two comparisons: one to check that the information is displayed on the screen correctly, and the other to check that the information is written to the database successfully.

When automating test cases, the expected outcomes have either to be prepared in advance or generated by capturing the actual outcomes of a test run. In the latter case the captured outcomes must be verified manually and saved as the expected outcomes for subsequent runs of the automated tests. This is called reference testing.

An automated comparison tool, normally referred to as a 'comparator,' is a computer program that detects differences between two sets of data. For test automation, this data is usually the outcome of a test run and the expected outcome. The two comparison styles are contrasted in the sketch below.

Dynamic comparison
==================
Dynamic comparison is comparison performed while a test case is executing. Test execution tools normally include comparator features specifically designed for dynamic comparison. Dynamic comparison is perhaps the most popular approach because it is much better supported by commercial test execution tools, particularly those with capture/replay facilities. It is best used to check things as they appear on the screen, in much the same way as a human tester would. Dynamic comparison can also be used to program some intelligence into a test case, making it act differently depending on the output as it occurs. For example, if an unexpected output occurs, it may suggest that the test script has become out of step with the software under test, so the test case can be aborted rather than allowed to continue; letting test cases continue when the expected outcome has not been achieved can be wasteful. The drawback is complexity: test cases that use many dynamic comparisons take more effort to create, are more difficult to write correctly (more errors are likely, so more script debugging will be necessary), and incur a higher maintenance cost.

Post-execution comparison
=========================
Post-execution comparison is comparison performed after a test case has run. It is mostly used to compare outputs other than those sent to the screen, such as files that have been created and the updated content of a database.

Passive
-------
If we simply look at whatever happens to be available after the test case has been executed, this is a passive approach.

Active
------
If we intentionally save particular results that we are interested in during a test case, for the express purpose of comparing them afterwards, this is an active approach to post-execution comparison.
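A minimal sketch (my addition) contrasting the two styles. The App class is a hypothetical stand-in for a test execution tool's API, and the file paths are invented:

```python
# Dynamic vs. post-execution comparison, side by side.
from pathlib import Path

class App:
    """Hypothetical stand-in for the application/tool API."""
    def enter_customer(self, name): self.last = f"Customer {name} created"
    def screen_shows(self, text): return getattr(self, "last", "") == text
    def save_and_exit(self): pass

def test_new_customer(app):
    app.enter_customer("Alice")
    # Dynamic comparison: checked while the test executes, so the test can
    # abort early if the script is out of step with the software.
    assert app.screen_shows("Customer Alice created")
    app.save_and_exit()

def post_execution_check() -> bool:
    # Post-execution comparison: performed after the run, typically on
    # files or database extracts rather than on the screen.
    actual = Path("out/customers.dump").read_text()
    expected = Path("expected/customers.dump").read_text()
    return actual == expected

test_new_customer(App())
```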
When a test case requires one or more post-execution comparisons, it is usually a different tool that performs them. In this situation the test execution tool may not run the post-execution comparator(s) of its own accord, so we have to run the comparator(s) ourselves. Figure 4.1 shows this situation in terms of the manual and automated tasks necessary to complete a set of 'automated' test cases. It does not look much like efficient automated testing, and indeed it is not. It would be nice if the test execution tool were responsible for running the comparator, but unless we tell it to do so, and tell it how to do so, it is not. To make the test execution tool perform the post-execution comparisons, we have to add the necessary instructions to the end of the test script (see the sketch below). This can amount to a significant amount of work, particularly if there are a good number of separate comparisons to be performed.
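A sketch (my addition) of what "adding the necessary instructions to the end of the test script" might look like, here invoking the standard Unix diff utility as the comparator; the file names are invented:

```python
# Fold post-execution comparisons into the test script itself, so the
# overall pass/fail status covers them too.
import subprocess

def run_post_execution_comparisons(pairs):
    failures = []
    for actual, expected in pairs:
        # `diff` exits non-zero when the files differ
        result = subprocess.run(["diff", actual, expected], capture_output=True)
        if result.returncode != 0:
            failures.append(actual)
    return failures

# ...appended at the end of the test script:
failed = run_post_execution_comparisons([
    ("out/report.txt", "expected/report.txt"),
    ("out/audit.log", "expected/audit.log"),
])
print("TEST PASS" if not failed else f"TEST FAIL: {failed}")
```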
Even when we have added the instructions to perform the post-execution comparison we may not have solved the whole problem. Figure 4.2 shows why. The test execution tool will probably be able to tell us that the test case ran successfully (or not) but it may not tell us anything about the results of the post-execution comparisons. Assessing the results of the post-execution comparison is then a manual task. We have to look in two places to determine the final status of the test case run: the execution tool's log or summary report and the output from the comparator tool(s). In an ideal world the interface between the test execution tool and the post-execution comparators would be seamless, but there is usually a gap that we have to fill ourselves.
Testware is the term we use to describe all of the artifacts required for testing, including documentation, scripts, data, and expected outcomes, and all the artifacts generated by testing, including actual outcomes, difference reports, and summary reports. Architecture is the arrangement of all of these artifacts: where they are stored and used, how they are grouped and referenced, and how they are changed and maintained.

Testware:
- Test materials: inputs, scripts, data, documentation, expected outcomes
- Test results:
  - Products: actual outcomes
  - By-products: logs, statistics, reports
We divide the test materials into logical sets that we call Test Sets. Each Test Set contains one or more test cases; normally a Test Set contains a few tens of test cases, but it may contain a few hundred or, at the other extreme, a single test case. A Test Suite is simply a collection of Test Sets, and therefore contains all the test materials required to run the test cases contained within those Test Sets. There are two alternative ways of managing the configuration of the testware. The method that we favor is for the Testware Sets to be stored in the Testware Library as configuration items (that is, having a version number). The individual testware artifacts that make up the content of each type of set do not have their own version numbers. The effect of this is that whenever anything in a Testware Set is changed, a new version of the whole Testware Set is created, containing both the changed and the unchanged artifacts (see the sketch below).
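The set-level versioning idea can be illustrated with a small sketch (my addition): derive one version identifier from the content of every artifact in a Testware Set, so a change to any artifact yields a new version of the whole set. Real configuration management tools assign sequential version numbers rather than hashes, and the directory name here is invented:

```python
# One version identifier per Testware Set, derived from all its artifacts.
import hashlib
from pathlib import Path

def testware_set_version(root: str) -> str:
    digest = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode())   # artifact name
            digest.update(path.read_bytes())    # artifact content
    return digest.hexdigest()[:12]

print(testware_set_version("testware/billing_set"))
```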
Automation – Testing which can be done programmatically
Far more efficient than manual testing
More complex than it appears
Testing and Automation are Different
[Chart: the first run of an automated test costs more than a manual run; over many runs the cost per automated run falls below manual]
Promises of Test Automation
Run existing tests on a new version of a program
Run more tests more often
Perform tests which would be difficult or impossible
to do manually
Better use of resources
Promises of Test Automation…
Consistency and repeatability of tests
Reuse of tests
Earlier time to market
Automation and Agile
Continuous Integration
What to Automate?
Governs the quality of tests
Good to automate
Test Script – data and/or instructions with a formal
syntax, used by a test execution automation tool,
typically held in a file.
Writing scripts is much like writing a computer program
Reduce amount of scripting.
Attributes of a Script Set
Number of Scripts
Size of Scripts
Verification by comparison
Integration of test execution and post-execution comparison
[Diagram: tester manually starts the test tool and selects tests; the tool runs the test cases; post-execution comparison remains a manual step]
[Diagram: tester manually starts the test tool and selects tests; the tool runs the test cases (including comparisons) and post-execution comparisons]
Testware – all the artifacts required for testing
Architecture – arrangement of all of these artifacts
Test Sets – logical collection of testware artifacts
Test Suite – collection of Test Sets to meet a given test objective
Testware Library – a repository of the master
versions of all Testware Sets
Types of Testware Sets: Script Set, Test Set, Data Set, Utility Set
Automating Pre & Post Processing
Pre & Post Processing
Select/Identify test cases to run
Set up test environment:
• Create test environment
• Load test data
Repeat for each test case:
• Set up test prerequisites
• Execute the test case
• Compare results
• Log results
• Clear up after test case
Clean up test environment:
• Delete unwanted data
• Save important data
Analyze test failures
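A minimal sketch (my addition) of automating the pre- and post-processing steps listed above, using a pytest fixture: set-up before the test, clean-up afterwards. The data file and paths are invented:

```python
# Pre-processing (create environment, load data) and post-processing
# (save important data, delete the rest) wrapped around each test.
import shutil
import pytest
from pathlib import Path

@pytest.fixture
def test_environment(tmp_path):
    # Pre-processing: create the test environment and load test data
    data = tmp_path / "data"
    data.mkdir()
    (data / "customers.csv").write_text("C1001,Alice\n")
    yield data
    # Post-processing: save important data, then clean up
    shutil.copy(data / "customers.csv", tmp_path / "saved_customers.csv")
    shutil.rmtree(data)

def test_load_customers(test_environment):
    assert (test_environment / "customers.csv").exists()
```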
Automating Pre & Post Processing…
Pre-processing tasks – Create, Check, Reorganize, Convert
Post-processing tasks – Delete, Check, Reorganize, Convert
Processing at different stages
What should happen after test case execution?
Limitations of Automation
Does not replace manual testing
Manual tests find more defects than automated tests
Greater reliance on the quality of the tests
Test automation does not improve effectiveness
Test automation may limit software development
Tools have no imagination
Test Automation Architect – designs the overall structure
of the automation
Test Automator – responsible for designing, writing,
and maintaining the automation software
Bridge between the Tester and the Tool
Good Programming Skills – SDET
Scripting – Perl, Python, Shell, sed, AWK etc
Debugging and Analysis
Software Test Automation - Dorothy Graham and Mark Fewster
Experiences of Test Automation - Dorothy Graham and Mark Fewster
Presentations and White Papers from cigital.com