AUTOMATED TEST TOOLS
EVALUATION CRITERIA




        Terry Horwath
    Version 1.02 (1/18/07)




                                                    Table of Contents

1.     INTRODUCTION
1.1    Author's Background
1.2    Allocate Reasonable Resources and Talent
1.3    Establish Reasonable Expectations
2.     RECOMMENDED EVALUATION CRITERIA
2.1    GUI Object Recognition
2.2    Platform Support
2.3    Recording Browser Objects
2.4    Cross-browser Playback
2.5    Recording Java Objects
2.6    Java Playback
2.7    Visual Testcase Recording
2.8    Scripting Language
2.9    Recovery System
2.10   Custom Objects
2.11   Technical Support
2.12   Internationalization Support
2.13   Reports
2.14   Training & Hiring Issues
2.15   Multiple Test Suite Execution
2.16   Testcase Management
2.17   Debugging Support
2.18   User Audience








1. INTRODUCTION
This document provides a list of evaluation criteria that have proven useful to me over the last
several years when evaluating automated test tools such as Mercury Interactive's QuickTest
Professional and WinRunner and Segue's Silk for a variety of clients. I hope some readers will
find this information useful and that it reduces their evaluation effort.
The specific criteria used for each project differ based on the client's:
•   testing environment,
•   test engineers' programming backgrounds and skill sets,
•   type of software being tested [especially the software development tool, such as Visual
    Basic, PowerBuilder, Java, or a browser-based application framework], and
•   application testing requirements.

The remainder of this chapter provides a variety of miscellaneous thoughts I have on automating
the testing process, while Chapter 2 contains my list of potential evaluation criteria. Note that
some of the Chapter 2 evaluation criteria are oriented toward Java and web application testing.
Substitute your own application development tool—for example Visual Basic or PowerBuilder—in
the Java-related evaluation criteria items.


1.1 Author’s Background
I have designed custom frameworks as well as hundreds of test cases using Silk/QaPartner from
1994 (version 1.0) through 2004 (version 5.5), with WinRunner (version 5) and Test Director in
1999 and 2000, and with QuickTest Professional since 2006 (versions 8 and 9).


1.2 Allocate Reasonable Resources and Talent
Most software testing projects do not fail because of the selected test tools—virtually all of the
top automated testing tools on the market can be used to do an adequate job, even when the test
tool is not well matched with the software development environment. Rather, I believe that most
failures are due to a combination of the following reasons:
1. Test engineers fail to treat the effort to develop a large number of complex test cases and test
   suites as a large software development project—it is crucial to apply good software
   development methodology to produce a test product, which includes defining requirements,
   developing a schedule, implementing each test suite using a shared custom framework of
   well-known libraries and guidelines, and using a software version control system.
2. Sufficient manpower and time are not allocated early enough in the application development
   cycle. Along with incomplete testing, this also leads to the phenomenon of test automation
   targeted for use with Release N actually being delivered and used with Release N+1.
3. Test technicians with improper skills are assigned to use these automated test tools. Users of
   these tools must have strong test mentalities, and in all but a few situations they must also
   possess solid programming skills in the automation tool's scripting language.








1.3 Establish Reasonable Expectations
Through their promotional literature, automated test tool vendors often establish unrealistic
expectations in one or more of the following areas:
•   What application features and functions can truly be tested with the tool.
•   The skill level required to effectively use the tool.
•   How useful the tool’s automatic recording capabilities are.
•   How quickly effective testcases can be produced.

This is unfortunate because, in the hands of test engineers possessing the proper skill set, all of
the top automated test tools can be used to test significant portions of virtually any GUI-centric
application. Use the following assumptions when reviewing this document and planning your
evaluation effort:
1. Even when a test tool is well matched with a software development tool, the test tool will still
   only be able to recognize a subset of the application's objects—windows, buttons, controls,
   etc.—without taking special programming actions. This subset will be large when the
   development engineers create window objects using the development tool's standard class
   libraries. The related issue of cross-browser playback also rears its head when testing web
   applications.
2. If the test engineer wants to unleash the full power of the test tool, they will need to have, or
   develop, solid programming skills with the tool's scripting language.
3. With few exceptions, recording utilities—those tools which capture user interaction and insert
   validation functions—are only effective for roughing out a testcase. Thereafter, captured
   sequences will most often need to be cleaned up and/or generalized using the scripting
   language.
4. If an application has functionality which can't be tested through the GUI, you will need to do
   one of the following (a sketch of option (a) appears at the end of this list):
   (a) use the tool's ability to interface to DLLs, for Windows-based applications;
   (b) use its SDK (software developer's kit) or API, if it supports one of these mechanisms;
   (c) use optional tools—at an additional cost—offered by the test tool vendor; or
   (d) use other 3rd-party non-GUI test tools more appropriate to the testing task.
5. If you are currently testing the application manually, you will need to initially increase the
   size of the test team by at least one or two test engineers who possess good programming
   backgrounds. After a significant portion of the testcases have been written and debugged you
   can start removing some of the manual test engineers. Payback comes at the end of the
   automation effort, not during the initial implementation.
6. If the test team does not contain at least one member previously involved with automating the
   test process, coming up to speed is no small task—no matter which tool is selected. Budget
   dollars and time for training classes and consulting offered by the tool vendor to get your test
   team up and running.
7. Budget 80 hours to do a detailed evaluation of each vendor's automated test tool against
   your selected evaluation criteria, using one of your own applications. While you might
   initially recoil from this significant investment of time, keep in mind that the selected tool
   will likely be part of your department's testing effort for many years—selecting the wrong
   tool will cost far more than 80 hours in lost productivity.
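
For option (a) of item 4, the hookup typically looks like the sketch below. It is written in
Python purely for illustration; the DLL name and exported function are hypothetical stand-ins
for whatever non-GUI interface your application actually exposes, and a given test tool will
provide its own DLL-calling mechanism.

    import ctypes

    # Hypothetical DLL and export; substitute your application's actual
    # non-GUI interface. This illustrates the style of direct DLL testing
    # that item 4(a) refers to, for Windows-based applications.
    engine = ctypes.WinDLL("OrderEngine.dll")
    engine.ComputeTotal.argtypes = [ctypes.c_int, ctypes.c_double]
    engine.ComputeTotal.restype = ctypes.c_double

    def test_compute_total():
        # Exercise business logic directly, bypassing the GUI entirely.
        total = engine.ComputeTotal(12, 10.0)
        assert abs(total - 120.0) < 1e-9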








2. RECOMMENDED EVALUATION CRITERIA

2.1 GUI Object Recognition
Does the tool:
(a) Provide the ability to record each object in a window—or on a browser page—such that a
    logical object identifier, used in the script, is definable independently of the operating-
    system-dependent property [or properties] used by the tool to access that object at runtime?
(b) Provide the ability to associate (i.e. map) the logical object identifier with more than one
    operating-system-dependent property? And does the tool offer some technique for defining
    properties that supports internationalization [if language localization is a testing
    requirement]?
(c) Provide the ability to record—and deal effectively with—dynamically generated objects
    [often encountered when testing web applications]?
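
To make criteria (a) and (b) concrete, the following Python sketch models the separation
between logical identifiers and runtime properties. The names and structure are hypothetical;
WinRunner's GUI map, Silk's GUI declarations, and QuickTest Professional's object repository
each implement their own variant of this idea.

    # Minimal sketch of a logical-to-physical object map (illustrative only).
    OBJECT_MAP = {
        "LoginWindow.OkButton": {
            "class": "Button",                       # multiple properties per
            "automation_id": "btnOK",                # logical name (criterion b)
            "label": {"en": "OK", "fr": "Valider"},  # localized property
        },
        "LoginWindow.UserField": {
            "class": "TextField",
            "automation_id": "txtUser",
        },
    }

    def resolve(logical_name, locale="en"):
        """Return the runtime properties for a stable logical identifier.

        Scripts reference only the logical name; the OS- and locale-dependent
        properties live in one place, which is the separation criterion (a)
        asks about and the basis for the internationalization support in (b).
        """
        props = dict(OBJECT_MAP[logical_name])
        labels = props.pop("label", None)
        if labels is not None:
            props["label"] = labels[locale]
        return props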


2.2 Platform Support
Are all of the required platforms [e.g. NT 4.0, Windows XP, Windows Vista] supported for:
(a) testcase playback?
(b) testcase recording?
(c) testcase development [programming without recording support]?


2.3 Recording Browser Objects
Does the tool provide the ability to record against web applications under test, correctly
recognizing all browser page HTML objects, using the following browsers:
(a) IE7?
(b) IE6?
(c) Firefox?


2.4 Cross-browser Playback
Does the tool reliably and repeatedly play back test scripts against browsers that were not used
during object capture and testcase creation, with little or no:
(a) changes to the GUI map (WinRunner), GUI declarations (Silk), or the equivalent in other
    tools?
(b) changes to testcase code?
(c) Does the tool provide some type of generic capability [without using sleep-like commands
    in the code] to deal with "browser not ready" conditions and correctly synchronize code
    execution—such as access to a web page over a slow internet connection?
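
The generic capability asked for in (c) is usually a bounded poll against a readiness condition
rather than a fixed sleep. A minimal sketch in Python, assuming a hypothetical tool-supplied
page_is_ready predicate:

    import time

    def wait_until(condition, timeout=30.0, poll_interval=0.5):
        # Poll a readiness predicate instead of sleeping a fixed amount;
        # return as soon as the condition holds, fail if the timeout elapses.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if condition():
                return
            time.sleep(poll_interval)
        raise TimeoutError("page did not become ready within %.0f seconds" % timeout)

    # Usage with a hypothetical tool-supplied predicate:
    #   wait_until(lambda: browser.page_is_ready())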








2.5 Recording Java Objects
Does the tool:
(a) Provide the ability to record objects against, and see, all standard Swing, AWT, and JFC
    1.1.8 and 1.2 objects when running the Java application under test?
(b) Provide the ability to record objects against [and interact with] non-standard Java classes
    required by the Java application under test (e.g. the KLGroup's 3rd-party controls, when the
    application under test uses that 3rd-party toolset)?
(c) Require that the platform's static classpath environment variable be set with tool-specific
    classes, or can this be set within the tool on a test-suite-by-test-suite basis?


2.6 Java Playback
Does the tool:
(a) Reliably and repeatedly play back the evaluation testcases?
(b) Provide some type of generic capability [without using sleep-like commands in the code]
    to deal with "application not ready" conditions and correctly synchronize code execution?
    [This may or may not be an issue, depending on the application being tested.]


2.7 Visual Testcase Recording
Does the tool:
(a) Provide the ability to visually record testcases by interacting with the application under test as
    a real user would?
(b) Provide the ability, while visually recording a testcase, to interactively insert—without
    resorting to programming—validation statements?
(c) Provide the ability, while interactively inserting a validation statement, to
    visually/interactively select validation properties (e.g. contents of a text field, focus on a
    control, control enabled)?
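
For a sense of what these criteria produce in practice, a recorded testcase with interactively
inserted validations typically looks something like the sketch below. The API names are
invented for illustration; each vendor's recorder emits its own syntax.

    # Hypothetical output of a visual recording session, lightly cleaned up.
    # The verify_* lines correspond to validation statements inserted
    # interactively during recording (criteria b and c).
    def test_create_order(app):
        app.window("Main").menu("File").select("New Order")
        order = app.window("New Order")
        order.textfield("Customer").set("ACME Corp")
        order.verify_enabled("Submit", expected=True)   # control enabled
        order.textfield("Quantity").set("12")
        order.verify_text("Total", expected="$120.00")  # contents of a field
        order.button("Submit").click()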


2.8 Scripting Language
Is the test tool’s underlying scripting language:
(a) object-oriented?
(b) proprietary?


2.9 Recovery System
Does the tool support some type of built-in recovery system, which the programmer can
control/define, that drives the application under test back to a known state? This matters
especially when modal dialogs were left open at the point a testcase failure occurred.
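
The shape of such a recovery hook is sketched below in Python with hypothetical calls; a tool's
built-in recovery system would run logic like this automatically between testcases.

    def recover(app):
        # Drive the application under test back to a known base state.
        for dialog in app.open_modal_dialogs():   # dismiss leftover modals
            dialog.close()
        app.window("Main").activate()             # return to the base window

    def run_testcase(app, testcase):
        try:
            testcase(app)
        except Exception:
            recover(app)    # a built-in recovery system hooks in here
            raise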






2.10 Custom Objects
What capabilities does the tool provide to deal with unrecognized objects in a window or on a
browser page? [Spend a fair amount of time evaluating this capability, as it is quite important.]


2.11 Technical Support
What was the quality and timeliness of technical support received during product evaluation?
[Remember—it won't get any better after you purchase the product, but it might get worse.]


2.12 Internationalization Support
Evaluate the support for internationalization [also referred to as language localization] in the
following areas [if this is a testing requirement]:
(a) Object recognition
(b) Object content (such as text fields, text labels, etc.).
(c) Evaluate and highlight any built-in or add-on multi-language support offered by the vendor.


2.13 Reports
What type of reporting and logging capabilities does the tool provide?


2.14 Training & Hiring Issues
(a) What is your [not the vendor's] estimated learning curve to become competent (i.e. able to
    write useful test scripts which may need to be rewritten later)?
(b) What is your estimated learning curve to become skilled (i.e. able to write test scripts which
    rarely need to be rewritten)?
(c) What is your estimated learning curve to become an expert (i.e. able to design frameworks)?
(d) What is the estimated availability of potential (i) employees and (ii) expert consultants
    skilled with this tool in your geographic area?


2.15 Multiple Test Suite Execution
(a) Can multiple test suites be driven entirely from the tool [or from a command-line
    interface], thereby allowing any number of unrelated suites/projects to be executed under a
    cron-like job or shell (for true unattended operation)?
(b) …including the ability to save the results log, as text, prior to or during termination/exit?
(c) …including the ability to return a reliable pass/fail status on termination/exit?
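
A rough sketch of the unattended operation described above: a driver that runs each suite from
the command line, saves the results log as text, and exits with a pass/fail status a cron job or
shell script can test. The testtool command and suite names are hypothetical.

    import subprocess
    import sys

    SUITES = ["smoke_suite", "regression_suite", "reports_suite"]

    def main():
        failed = 0
        with open("results.log", "w") as log:
            for suite in SUITES:
                # Hypothetical CLI; a real tool's command line will differ.
                result = subprocess.run(["testtool", "run", suite],
                                        capture_output=True, text=True)
                log.write(result.stdout)        # save results as text (b)
                if result.returncode != 0:
                    failed += 1
        sys.exit(1 if failed else 0)            # reliable pass/fail status (c)

    if __name__ == "__main__":
        main()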








2.16 Testcase Management
Does the tool support some type of test case management facility (either built-in or as an add-on)
that allows each test engineer to execute any combination of tests out of the full test suite for a
given project? How difficult is it to integrate manual testing results with automated test results?


2.17 Debugging Support
What type of debugging capabilities does the tool support to help isolate scripting and/or runtime
errors?


2.18 User Audience
Which of the following groups of users does the tool primarily target?
•   Test technicians possess good test mentalities, but often lack much, if any, background in
    programming or software development methodologies. They are the backbone of many test
    groups and have often spent years developing and executing manual testcases.
•   Test developers possess all of the test technician's skill set, plus they have had some formal
    training in programming and limited experience working on a software development
    project and/or automated testcases.
•   Test architects possess all of the test developer's skill set, plus they have many years of
    experience developing and maintaining automated test cases, as well as experience defining
    and implementing the test framework under which multiple automated test suites are
    developed. They are recognized experts with at least one automated tool.




