What is software testing

Software testing is the process of executing a program or system with the intent of finding errors. Alternatively, it
involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it
meets its required results. Software is not unlike other physical processes where inputs are received and outputs are
produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and
reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different
failure modes for software is generally infeasible.

Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software
does not suffer from corrosion or wear and tear; generally it will not change until it is upgraded or becomes obsolete.
So once the software is shipped, the design defects, or bugs, remain buried and latent until activation.

Software bugs will almost always exist in any software module of moderate size, not because programmers are
careless or irresponsible, but because the complexity of software is generally intractable and humans have only a
limited ability to manage complexity. It is also true that for any complex system, design defects can never be
completely ruled out.

Discovering the design defects in software is equally difficult, for the same reason: complexity. Because software
and other digital systems are not continuous, testing boundary values alone is not sufficient to guarantee correctness.
All of the possible values would need to be tested and verified, but complete testing is infeasible. Exhaustively testing
a simple program that adds just two 32-bit integer inputs (yielding 2^64 distinct test cases) would take hundreds of
thousands of years, even if tests were performed at a rate of a million per second. Obviously, for a realistic software
module, the complexity can be far beyond this example. If inputs from the real world are involved, the problem gets
worse, because timing, unpredictable environmental effects and human interactions are all possible input
parameters under consideration.
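
To make the scale concrete, here is a small back-of-the-envelope calculation of the figure quoted above. It is only a
sketch; the assumed execution rate of one million tests per second is an illustrative assumption, not a measured value.

```python
# Rough estimate: exhaustive testing of a program that adds two 32-bit integers.
values_per_operand = 2 ** 32              # distinct values of one 32-bit input
total_cases = values_per_operand ** 2     # every pair of inputs = 2**64 cases

tests_per_second = 1_000_000              # assumed execution rate (illustrative)
seconds_per_year = 60 * 60 * 24 * 365

years = total_cases / (tests_per_second * seconds_per_year)
print(f"{total_cases:.3e} cases -> about {years:,.0f} years at {tests_per_second:,} tests/sec")
# prints roughly: 1.845e+19 cases -> about 584,942 years
```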

Objectives of testing:

First of all, the objectives should be clear.

•  Testing is a process of executing a program with the intent of finding errors. To perform testing, test cases are
   designed. A test case is a particular, artificially constructed situation to which a program is exposed in order to
   find errors. So a good test case is one that finds undiscovered errors.
•  If testing is done properly, it uncovers errors, and after fixing those errors we have software that is being
   developed according to specifications.

•  The above objectives imply a dramatic change in viewpoint. They run counter to the commonly held view that a
   successful test is one in which no errors are found. In fact, our objective is to design tests that systematically
   uncover different classes of errors and do so with a minimum amount of time and effort.
Testing principles:

Before applying methods to design effective test cases, a software engineer must understand the basic principles that
guide the software testing process. Some of the most commonly followed principles are:

All tests should be traceable to customer requirements. As the objective of testing is to uncover errors, it follows that
the most severe defects (from the customer's point of view) are those that cause the program to fail to meet its
requirements.




Tests should be planned long before the testing begins. Test planning can begin as soon as the requirements model is
complete. Detailed definition of test cases can begin as soon as the design model has been consolidated. Therefore, all
tests can be planned and designed before any code has been generated.

Exhaustive testing is not possible. The number of path permutations in even a moderately sized program is so large
that it is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover
program logic and to ensure that all conditions in the procedural design have been exercised.

To be most effective, testing should be conducted by an independent third party. By "most effective", we mean testing
that has the highest probability of finding errors (the primary objective of testing).

Test Information Flow:

Testing is a complete process. For testing we need two types of inputs:

•  Software configuration – it includes the software requirements specification, design specification and source code
   of the program. The software configuration is required so that testers know what is to be expected and tested.
•  Test configuration – it is basically the test plan and procedure. The test configuration is the testing plan, that is,
   the way the testing will be conducted on the system. It specifies the test cases and their expected values, and it
   also specifies whether any tools for testing are to be used.

Test cases are required to know what specific situations need to be tested. When tests are evaluated, the expected
results are compared with the actual results, and if there is some error, debugging is done to correct it. Testing is a
way to know about quality.



Different types of testing
   1. Black box testing
   2. White box testing
   3. Unit testing
   4. Incremental integration testing
   5. Integration testing
   6. Functional testing
   7. System testing
   8. End-to-end testing
   9. Sanity testing
   10. Regression testing
   11. Acceptance testing
   12. Load testing
   13. Stress testing
   14. Performance testing
   15. Usability testing
   16. Install/uninstall testing
   17. Recovery testing
   18. Security testing
   19. Compatibility testing
   20. Comparison testing
   21. Alpha testing
   22. Beta testing
   23. Smoke testing
   24. Monkey testing
   25. Ad hoc testing


1. Black box testing

   Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

2. White box testing

   This testing is based on knowledge of the internal logic of an application’s code. Also known as Glass box Testing.
   Internal software and code working should be known for this type of testing. Tests are based on coverage of code
   statements, branches, paths, conditions.

3. Unit testing

    Testing of individual software components or modules. Typically done by the programmer and not by testers, as
   it requires detailed knowledge of the internal program design and code. May require developing test driver
   modules or test harnesses.
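
   As a minimal sketch of such a test driver, the following Python example uses the standard unittest module. The
   function under test (a small discount calculator) is invented purely for illustration and is not part of the project
   described in this report.

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Module under test: return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    """Test driver that exercises the module in isolation."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```
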
4. Incremental integration testing

   Bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application
   functionality and modules should be independent enough to test separately. Done by programmers or by testers.

5. Integration testing

   Testing of integrated modules to verify combined functionality after integration. Modules are typically code
   modules, individual applications, client and server applications on a network, etc. This type of testing is especially
   relevant to client/server and distributed systems.



6. Functional testing

   This type of testing ignores the internal parts and focuses on whether the output is as per requirement. It is
   black-box type testing geared to the functional requirements of an application.

7. System testing –

   Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements
   specifications, covers all combined parts of a system.
8. End-to-end testing –

   Similar to system testing, involves testing of a complete application environment in a situation that mimics real-
   world use, such as interacting with a database, using network communications, or interacting with other
   hardware, applications, or systems if appropriate.
9. Sanity testing –

    Testing to determine whether a new software version is performing well enough to accept it for a major testing
    effort. If the application crashes during initial use, the system is not stable enough for further testing, and the
    build or application is assigned back to be fixed.
10. Regression testing –

   Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the
   whole system in regression testing, so automation tools are typically used for this type of testing.




11. Acceptance testing

   Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The
   user or customer does this testing to determine whether to accept the application.

12. Load testing

    It is performance testing to check system behavior under load. Testing an application under heavy loads, such as
    testing a web site under a range of loads to determine at what point the system's response time degrades or
    fails.

13. Stress testing –

   The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such
   as putting in numbers beyond storage capacity, complex database queries, or continuous input to the system or
   database load.

14. Performance testing

   A term often used interchangeably with 'stress' and 'load' testing. It checks whether the system meets performance
   requirements. Different performance and load tools are used to do this.

15.Usability testing

    User-friendliness check. The application flow is tested: can a new user understand the application easily, and is
    proper help documented wherever the user gets stuck at any point? Basically, system navigation is checked in
    this testing.


16. Install/uninstall testing

    Tested for full, partial, or upgrade install/uninstall processes on different operating systems under different
    hardware and software environments.

17. Recovery testing : Testing how well a system recovers from crashes, hardware failures, or other catastrophic
   problems.

18. Security testing

   Can the system be penetrated by any hacking technique? Testing how well the system protects against unauthorized
   internal or external access, and checking whether the system and database are safe from external attacks.

19. Compatibility testing

   Testing how well software performs in a particular hardware/software/operating system/network environment
   and in different combinations of the above.

20.Comparison testing

   Comparison of product strengths and weaknesses with previous versions or other similar products.

21. Alpha testing –

   An in-house virtual user environment can be created for this type of testing. Testing is done at the end of
   development. Minor design changes may still be made as a result of such testing.


22. Beta testing: Testing typically done by end-users or others. It is the final testing before releasing the application
       for commercial use.

23. Smoke testing

      It is a term used in plumbing, woodwind repair, electronics, computer software development, infectious disease
      control, and the entertainment industry. It refers to the first test made after repairs or first assembly to provide
      some assurance that the system under test will not catastrophically fail. After a smoke test proves that "the pipes
      will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright," the
      assembly is ready for more stressful testing.

24. Monkey testing

      It is random testing performed by automated testing tools (after the latter are developed by humans). These
      automated testing tools are considered "monkeys", if they work at random. We call them "monkeys" because it is
      widely believed that if we allow six monkeys to pound on six typewriters at random, for a million years, they will
      recreate all the works of Isaac Asimov.

       a) Smart monkeys – valuable for load and stress testing; they will find a significant number of bugs, but are
          also very expensive to develop.

       b) Dumb monkeys – inexpensive to develop and able to do some basic testing, but they will find few bugs.




25. Ad hoc testing

   A commonly used term for software testing performed without planning and documentation. The tests are
   intended to be run only once, unless a defect is discovered. Ad hoc testing is a part of exploratory testing, being
   the least formal of test methods. In this view, ad hoc testing has been criticized because it isn't structured, but this
   can also be a strength: important things can be found quickly. It is performed with improvisation; the tester seeks
   to find bugs with any means that seem appropriate. It contrasts with regression testing, which looks for a specific
   issue with detailed reproduction steps and a clear expected result.

Testing types:

   a) Manual testing
   b) Automation testing

Manual testing

It is the process of manually testing software for defects. It requires a tester to play the role of an end user, and use
most or all of the features of the application to ensure correct behavior. To ensure completeness of testing, the tester often
follows a written test plan that leads them through a set of important test cases.


Test automation

It is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the
setting up of test preconditions, and other test control and test reporting functions[1]. Commonly, test automation
involves automating a manual process already in place that uses a formalized testing process.


Software quality assurance

Software quality assurance (SQA) consists of a means of monitoring the software engineering processes and methods
used to ensure quality. The methods by which this is accomplished are many and varied, and may include ensuring
conformance to one or more standards, such as ISO 9000, or a model such as CMMI.

SQA encompasses the entire software development process, which includes processes such as requirements
definition, software design, coding, source code control, code reviews, change management, configuration
management, testing, release management, and product integration. The American Society for Quality offers a
Certified Software Quality Engineer (CSQE) certification with exams held a minimum of twice a year.

SQA includes:

● Defect prevention
  – prevents defects from occurring in the first place
  – activities: training, planning, and simulation
● Defect detection
  – finds defects in a software artifact
  – activities: inspections, testing, or measuring
● Defect removal
  – isolation, correction, and verification of fixes
  – activities: fault isolation, fault analysis, regression testing
● Verification
  – are we building the product right?
  – performed at the end of a phase to ensure that the requirements established during the previous phase have been met
● Validation
  – are we building the right product?
  – performed at the end of the development process to ensure compliance with the product requirements

Objective of SQA

 Quality is a key measure of project success. Software producers want to be assured of the product quality before
delivery. For this, they need to plan and perform a systematic set of activities called Software Quality Assurance
(SQA).

SQA helps ensure that quality is incorporated into a software product. It aims at preventing errors and detecting
them as early as possible. SQA provides confidence to software producers that their product meets the quality
requirements. SQA activities include setting up processes and standards, detecting and removing errors, and
ensuring that every project performs its SQA activities.


Importance of Software Quality

● Several historic disasters have been attributed to software:
  – 1988 shooting down of an Airbus 320 by the USS Vincennes: cryptic and misleading output displayed by
    tracking software
  – 1991 Patriot missile failure: inaccurate calculation of time due to computer arithmetic errors
  – London Ambulance Service Computer Aided Dispatch System: several deaths
  – On June 3, 1980, the North American Aerospace Defense Command (NORAD) reported that the U.S. was under
    missile attack.
  – The first operational launch attempt of the space shuttle, whose real-time operating software consists of about
    500,000 lines of code, failed because of a synchronization problem among its flight-control computers.
  – A 9-hour breakdown of AT&T's long-distance telephone network was caused by an untested code patch.

● Ariane 5 crash, June 4, 1996
  – The maiden flight of the European Ariane 5 launcher crashed about 40 seconds after takeoff.
  – The loss was about half a billion dollars.
  – The explosion was the result of a software error: an uncaught exception due to a floating-point error, a
    conversion from a 64-bit integer to a 16-bit signed integer applied to a larger than expected number.
  – The module was reused without proper testing from Ariane 4; the error was not supposed to happen with
    Ariane 4, and there was no exception handler.

● Mars Climate Orbiter, September 23, 1999
  – The Mars Climate Orbiter disappeared as it began to orbit Mars.
  – Cost: about US$125 million.
  – The failure was due to an error in a transfer of information between a team in Colorado and a team in
    California: one team used English units (e.g., inches, feet and pounds) while the other used metric units for a
    key spacecraft operation.

● Mars Polar Lander, December 1999
  – The Mars Polar Lander disappeared during landing on Mars.
  – The failure was most likely due to the unexpected setting of a single data bit: a defect not caught by testing,
    because independent teams tested separate aspects.

● Internet viruses and worms
  – Blaster worm (US$525 million); Sobig.F (US$500 million to 1 billion)
  – These exploit well-known software vulnerabilities: software developers do not devote enough effort to applying
    lessons learned about the causes of vulnerabilities, and the same types of vulnerabilities continue to be seen in
    newer versions of products that were present in earlier versions.

● Usability problems

● Monetary impact of poor software quality (Standish Group, 1995)
  – 175,000 software projects per year; average cost per project: large companies US$2,322,000, medium companies
    US$1,331,000, small companies US$434,000
  – 31.1% of projects cancelled before completion, at a cost of $81 billion
  – 52.7% of projects exceed their budget, costing 189% of the original estimates, at a cost of $59 billion
  – 16.2% of software projects completed on time and on budget (9% for larger companies)




What are test cases

 A test case is a set of conditions or variables under which a tester will determine whether an application or software
system is working correctly or not. The mechanism for determining whether a software program or system has
passed or failed such a test is known as a test oracle. In some settings, an oracle could be a requirement or use case,
while in others it could be a heuristic. It may take many test cases to determine that a software program or system is
functioning correctly. Test cases are often referred to as test scripts, particularly when written. Written test cases are
usually collected into test suites.

Test cases can be:

1. Formal test cases

In order to fully test that all the requirements of an application are met, there must be at least two test cases for each
requirement: one positive test and one negative test unless a requirement has sub-requirements. In that situation,
each sub-requirement must have at least two test cases. Keeping track of the link between the requirement and the
test is frequently done using a traceability matrix. Written test cases should include a description of the functionality
to be tested, and the preparation required to ensure that the test can be conducted.
What characterizes a formal, written test case is that there is a known input and an expected output, which are
worked out before the test is executed. The known input should test a precondition and the expected output should
test a postcondition.
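
The same idea can be captured as a simple record: each formal test case pairs a known input (exercising a
precondition) with an expected output (checking a postcondition), both worked out before execution. The sketch
below is illustrative only; the field names and the login example are assumptions, not part of any standard or of the
project's own test repository.

```python
from dataclasses import dataclass


@dataclass
class FormalTestCase:
    test_id: str
    description: str        # functionality to be tested
    preparation: str        # setup required before the test can be conducted
    known_input: dict       # exercises the precondition
    expected_output: str    # checks the postcondition


# One positive and one negative case for a single (hypothetical) requirement:
# "login requires a non-empty username".
cases = [
    FormalTestCase("TC_LOGIN_001", "Valid username is accepted", "User 'alice' exists",
                   {"username": "alice"}, "home page is shown"),
    FormalTestCase("TC_LOGIN_002", "Empty username is rejected", "None",
                   {"username": ""}, "error message is shown"),
]

for case in cases:
    print(f"{case.test_id}: input={case.known_input} -> expect '{case.expected_output}'")
```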




2. Informal test cases

 For applications or systems without formal requirements, test cases can be written based on the accepted normal
operation of programs of a similar class. In some schools of testing, test cases are not written at all but the activities and
results are reported after the tests have been run.

In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These
scenarios are usually not written down in any detail. They can be as simple as a diagram for a testing environment or
they could be a description written in prose. The ideal scenario test is a story that is motivating, credible, complex, and
easy to evaluate. They are usually different from test cases in that test cases are single steps while scenarios cover a
number of steps.




Test cases for different modules in ‘Alprus’.


           1. Test cases for 'Home' Page:

Test case 1
   Test ID: TC_Home Page_001        Module: Homepage
   Test summary: Verify that the homepage is displayed after successful login
   Test description: Ensure that the user is able to see the home page after login
   Prerequisite: Valid user should exist in the application
   Steps to follow: 1. Login to the application  2. Verify that the home page is displayed
   Expected results: User should be able to see the home page

Test case 2
   Test ID: TC_Home Page_002        Module: Homepage
   Test summary: Verify the availability of all tasks in the "home page"
   Test description: Ensure that all the tasks should be available at the "home page"
   Prerequisite: Valid user should exist in the application
   Steps to follow: 1. Login to the application  2. Verify that all the tasks are displayed in the application
   Expected results: User should be able to see all the tasks that are available at the "home page"

Test case 3
   Test ID: TC_Home Page_003        Module: Homepage
   Test summary: Verify that the homepage will display the user name
   Test description: Ensure that the user name should be available throughout the application
   Prerequisite: Valid user name should exist in the application
   Steps to follow: 1. Login to the application  2. Verify that the user name is displayed on the "Home page"
   Expected results: The "Home page" should display the name of the existing user

Test case 4
   Test ID: TC_Home Page_004        Module: Homepage
   Test summary: Verify the availability of links in the "home page"
   Test description: Ensure that all links should be available at the "home page"
   Prerequisite: Valid user should exist in the application
   Steps to follow: 1. Login to the application  2. Verify all links are available at the "home page" throughout the application
   Expected results: Links should be available at the home page

Test case 5
   Test ID: TC_Home Page_005        Module: Homepage
   Test summary: Verify the functionality of the "logout" button throughout the application
   Test description: Ensure that the "logout" button should be clickable and should log the user out of the application
   Prerequisite: Valid user should exist in the application
   Steps to follow: 1. Login to the application  2. Click on the "logout" button
   Expected results: User should be logged out of the application

Test case 6
   Test ID: TC_Home Page_006        Module: Homepage
   Test summary: Verify the availability of the "logout" button throughout the application
   Test description: Ensure that the logout button should be there and should log the user out of the application
   Prerequisite: Valid user should exist in the application
   Steps to follow: 1. Login to the application  2. Verify the availability of the "logout" button
   Expected results: The "Logout" button should be displayed and should log the user out of the application

Test case 7
   Test ID: TC_Home Page_007        Module: Homepage
   Test summary: Verify the functionality of the "home" link
   Test description: Ensure that after clicking on the "home" link it should take the user to the home page
   Prerequisite: Valid user should exist in the application
   Steps to follow: 1. Login to the application  2. Click on "home" and navigate the user to the "home page"
   Expected results: User should be able to see the "home page"

Test case 8
   Test ID: TC_Home Page_008        Module: Homepage
   Test summary: Verify the availability of the "search button"
   Test description: Ensure that the "search button" should be available at the home page
   Prerequisite: Valid user should exist in the application
   Steps to follow: 1. Login to the application  2. Click on the "home" link and navigate the user to the "home page"
   Expected results: The "Search button" should be displayed on the "Home page"
Tools and Technologies used in ALPRUS:

Tools Used:

•  TCMS (Test Case Management System)
•  Bugzilla
•  QTP

TCMS:
A Test Case Management System (TCMS) is meant to be a communications medium through which engineering
teams coordinate their efforts. More specifically, it allows for BlackBox QA, WhiteBox QA, Automation, and
Development to be a cohesive force in ensuring testing completeness with minimal effort or overhead. The end result
is higher-quality deliverables in the same time frame, and better visibility into the testing efforts on a given
project.

A TCMS will only help coordinate the process; it does not implement the process itself. This document details the
individual groups directly involved in this process and how they interact together. This will set up the high-level
concepts which the effective usage of the TCMS relies upon, and give a better overall understanding of the
requirements for the underlying implementation.


Requirements

The TCMS has a concept of scenarios and configurations. In this context, scenario is a physical topology, and a
configuration is the software and/or hardware a given test case will be executed on. This information must come
from a Requirements document that specifies the expected scenarios, configurations, and functionality that the
product deliverable will be expected to support. A Requirements Document with this information is a necessity for
the TCMS to be used effectively by BlackBox QA and Development.


BlackBox QA

BlackBox QA creates test cases based upon their high level knowledge of the product, and executes test cases. Test
cases also come from Development, WhiteBox QA, and elsewhere that BlackBox QA also executes. All test cases are
funneled into the TCMS, a central repository for this information. On a given build, a BlackBox QA Engineer will
execute the test cases assigned to him or her, and update the Last Build Tested information to reflect that work.
With this information, management can create a simple query to gauge the testing status of a given project, and
redeploy effort as necessary. If a given test case fails, the Engineer can then submit a defect containing the test case
information easily. If a reported defect has a test case that is not in the TCMS, a BlackBox QA engineer can transfer
the test case information from the defect tracking system into the TCMS.

Automation

The main job of the Automation team is to automate execution of test cases for the purpose of increasing code
coverage per component. Once a given project has entered the "alpha" stage (functionality/code complete), release
milestones (betas, release candidates, etc) are then based upon the amount of code coverage per component in the
automated test suite. For instance, a goal is set for a minimum of 50% code coverage per component before a beta
candidate can be considered. This may seem as though the Automation team would then be the bottleneck for release
milestones, but this is not the case. Automation requires that test cases be supplied that sufficiently exercise code,
and works from there. As was stated before, all sections of engineering supply test cases; if Automation has
automated all test cases and has not met the goal for a given milestone, other sections of engineering (WhiteBox QA,
BlackBox QA, Development) need to supply more test cases to be automated. This is not to say that Automation is
helpless; they can supply test cases as well. The three groups mentioned so far (BlackBox QA, WhiteBox QA, and
Automation) are given a synergy by the TCMS whereby a feedback loop is created. For clarity, here is a diagram:

1. BlackBox QA (and development) record test cases into the TCMS, which the Automation team then automates
and generates code coverage data for.
2. When BlackBox testing yields no more code coverage, WhiteBox QA analyses output from the code coverage tool
to supply test cases to exercise heretofore untested codepaths.
3. The test cases supplied by WhiteBox QA are then approved by BlackBox QA and the cycle begins again.

This feedback loop has the "snowball rolling downhill" effect in regard to code coverage, which is why it is logical
to partially base release milestones upon those metrics.


Development

Development's role in the TCMS is simply to supply and critique test cases. The owner of a given component should
review the test cases in the TCMS for their component and supply test cases or information/training to QA to fill in
any gaps she/he sees. Component owners should also have a goal of supplying a given number of test cases for the
milestone of alpha release. This way, BlackBox QA and Automation have something to work from initially and can
provide more immediate results.




Roles in a Cycle

This table documents all of the aforementioned groups' roles in a given product release cycle. The only solid
definitions necessary are that the "alpha" release is functionality complete, and that each release milestone has an
incremental code coverage goal.
Milestone: pre-Alpha
   Development: designing; implementing design, functionality
   BlackBox QA: research/study on product technologies
   WhiteBox QA: documenting design; reviewing code; providing feedback
   Automation: N/A

Milestone: Alpha
   Development: supplies initial test cases; provides architecture/product overview; fixes bugs
   BlackBox QA: manual execution of initially supplied test cases; test case creation; reporting defects
   WhiteBox QA: running code/runtime analysis tools; reporting defects
   Automation: begins automating test cases in the TCMS

Milestone: Beta
   Development: bug fixing; test case creation
   BlackBox QA: manual execution of test cases; test case creation; defect reporting
   WhiteBox QA: integrating code/runtime analysis tools in the automated test suite; reporting defects; ensuring
   adherence to documented design; test case creation
   Automation: must report at least X percent code coverage per component; repeat the cycle until met

Milestone: Release
   Development: bug fixing; test case creation
   BlackBox QA: manual execution of test cases; test case creation; defect reporting
   WhiteBox QA: analysing output of code/runtime analysis tools in the automated test suite; reporting defects;
   ensuring adherence to documented design; code review; test case creation
   Automation: must report at least X plus 20 percent code coverage per component; repeat the cycle until met


Bugzilla:

Bugzilla is a Web-based general-purpose bug tracker and testing tool originally developed and used by the Mozilla
project, and licensed under the Mozilla Public License. Released as open source software by Netscape
Communications in 1998, it has been adopted by a variety of organizations for use as a defect tracker for both free
and open source software and proprietary products.


Bugzilla's system requirements include:

A compatible database management system
A suitable release of Perl 5
An assortment of Perl modules
A compatible web server
A suitable mail transfer agent, or any SMTP server

Bugzilla boasts many advanced features:

•  Powerful searching
•  User-configurable email notifications of bug changes
•  Full change history
•  Inter-bug dependency tracking and graphing
•  Excellent attachment management
•  Integrated, product-based, granular security schema
•  Fully security-audited, and runs under Perl's taint mode
•  A robust, stable RDBMS back-end
•  Web, XML, email and console interfaces
•  Completely customisable and/or localisable web user interface
•  Extensive configurability
•  Smooth upgrade pathway between versions

                                      The life cycle of a Bugzilla bug

                  [Diagram of the Bugzilla bug life cycle not reproduced here]


QTP

 Quick Test Professional is automated testing software designed for testing various software applications and
environments. It performs functional and regression testing through a user interface such as a native GUI or web
interface. It works by identifying the objects in the application user interface or a web page and performing desired
operations (such as mouse clicks or keyboard events); it can also capture object properties like name or handler ID.
QuickTest Professional uses a VBScript scripting language to specify the test procedure and to manipulate the
objects and controls of the application under test. To perform more sophisticated actions, users may need to
manipulate the underlying VBScript.
Although QuickTest Professional is usually used for "UI Based" Test Case Automation, it also can automate some
"Non-UI" based Test Cases such as file system operations and database testing.

QTP performs the following tasks:

   • Verification

      Checkpoints verify that an application under test functions as expected. You can add a checkpoint to check if a
      particular object, text or a bitmap is present in the automation run. Checkpoints verify that during the course
      of test execution, the actual application behavior or state is consistent with the expected application behavior
      or state. QuickTest Professional offers 10 types of checkpoints, enabling users to verify various aspects of an
       application under test, such as: the properties of an object, data within a table, records within a database, a
      bitmap image, or the text on an application screen. The types of checkpoints are standard, image, table, page,
      text, text area, bitmap, database, accessibility and XML checkpoints. Users can also create user-defined
      checkpoints.
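
       QuickTest Professional expresses checkpoints in VBScript against recorded objects; the snippet below is only a
       language-neutral Python sketch of the underlying idea (compare an actual property, text or value against an
       expected one and record pass/fail). The property names and values are made up for illustration and do not
       reflect QuickTest Professional's actual API.

```python
def checkpoint(name: str, actual, expected) -> bool:
    """Record whether the value observed in the application matches the expected value."""
    passed = actual == expected
    status = "PASSED" if passed else "FAILED"
    print(f"Checkpoint '{name}': {status} (expected={expected!r}, actual={actual!r})")
    return passed


# Hypothetical values that a test run might read from the application under test.
checkpoint("login button enabled", actual=True, expected=True)
checkpoint("page title text", actual="Welcome, alice", expected="Welcome, alice")
checkpoint("rows in orders table", actual=11, expected=12)   # this one fails
```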




• Exception handling

        QuickTest Professional manages exception handling using recovery scenarios; the goal is to continue running
       tests if an unexpected failure occurs. For example, if an application crashes and a message dialog appears,
       QuickTest Professional can be instructed to attempt to restart the application and continue with the rest of the
       test cases from that point. Because QuickTest Professional hooks into the memory space of the applications
       being tested, some exceptions may cause QuickTest Professional to terminate and be unrecoverable.

   •   Data-driven testing

       QuickTest Professional supports data-driven testing. For example, data can be output to a data table for reuse
       elsewhere. Data-driven testing is implemented as a Microsoft Excel workbook that can be accessed from
       QuickTest Professional. QuickTest Professional has two types of data tables: the Global data sheet and Action
       (local) data sheets. The test steps can read data from these data tables in order to drive variable data into the
       application under test, and verify the expected result.
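
       QuickTest Professional implements this with Excel-backed Global and Action data sheets driven from VBScript;
       as a generic sketch of the data-driven idea (each row of a data table drives one execution of the same test steps
       and supplies the expected result), here is a Python version. The data values and the add() function standing in
       for the application are invented for illustration.

```python
import csv
import io


def add(a: int, b: int) -> int:
    """Stand-in for the application behaviour being exercised by each data row."""
    return a + b


# The data table: one row per test iteration. In QuickTest Professional this
# would live in the Global or Action data sheet; here it is inline CSV.
data_table = io.StringIO("a,b,expected\n1,2,3\n10,-4,6\n0,0,0\n")

for row in csv.DictReader(data_table):
    actual = add(int(row["a"]), int(row["b"]))
    expected = int(row["expected"])
    status = "PASS" if actual == expected else "FAIL"
    print(f"a={row['a']} b={row['b']} expected={expected} actual={actual} -> {status}")
```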

   •   Automating custom and complex UI objects

       QuickTest Professional may not recognize customized user interface objects and other complex objects. Users
       can define these types of objects as virtual objects. QuickTest Professional does not support virtual objects for
       analog recording or recording in low-level mode.

   •   Extensibility

       QuickTest Professional can be extended with separate add-ins for a number of development environments that
       are not supported out-of-the-box. QuickTest Professional add-ins include support for Web, .NET, Java, and
       Delphi. QuickTest Professional and the QuickTest Professional add-ins are packaged together in HP
Functional Testing software.

   •   Test results

        At the end of a test, QuickTest Professional generates a test result. Using XML schema, the test result
       indicates whether a test passed or failed, shows error messages, and may provide supporting information that
       allows users to determine the underlying cause of a failure. Release 10 lets users export QuickTest
       Professional test results into HTML, Microsoft Word or PDF report formats. Reports can include images and
       screen shots for use in reproducing errors.

       User interface

       QuickTest Professional provides two views of a test script (and two ways to modify it): Keyword View and
       Expert View. These views enable QuickTest Professional to act as an IDE for the test, and QuickTest Professional
       includes many standard IDE features, such as breakpoints to pause a test at predetermined places.

   •   Keyword view

       Keyword View lets users create and view the steps of a test in a modular, table format. Each row in the table
       represents a step that can be modified. The Keyword View can also contain any of the following columns: Item,
       Operation, Value, Assignment, Comment, and Documentation. For every step in the Keyword View,
       QuickTest Professional displays a corresponding line of script based on the row and column value. Users can
       add, delete or modify steps at any point in the test.

   •   Expert view

       In Expert View, QuickTest Professional lets users display and edit a test's source code using VBScript. Expert
       View is designed for more advanced users; they can edit all test actions except for the root Global action, and
       changes are synchronized with the Keyword View.

   •   Languages

       QuickTest Professional uses VBScript as its scripting language. VBScript supports classes but not
       polymorphism and inheritance. Compared with Visual Basic for Applications (VBA), VBScript lacks the
       ability to use some Visual Basic keywords, does not come with an integrated debugger, lacks an event handler,
       and does not have a forms editor. QuickTest Professional adds a debugger, but the functionality is more limited
       when compared with testing tools that integrate a full-featured IDE, such as those provided with VBA, Java, or
       VB.NET.


Technologies QTP Supports


          1. Web
          2. Java(Core and Advanced)
          3. .Net
          4. WPF
          5. SAP
          6. Oracle
          7. Siebel
          8. PeopleSoft
          9. Delphi
           10. Power Builder
           11. Stingray 1
           12. Terminal Emulator
           13. Flex
           14. Mainframe terminal emulators

Versions

  1. 10.0 - Released in 2009
  2. 9.5 - Released in 2007
  3. 9.2 - Released in 2007
  4. 9.0 - Released in 2006
  5. 8.2 - Released in 2005
  6. 8.0 - Released in 2004
  7. 7.0 - Never released.
  8. 6.5 - Released in 2003
  9. 6.0 - Released in 2002
 10. 5.5 - First release. Released in 2001

Technologies used in ALPRUS:

Manual testing:

It is the process of manually testing software for defects. It requires a tester to play the role of an end user, and use
most or all of the features of the application to ensure correct behavior. To ensure completeness of testing, the tester often
follows a written test plan that leads them through a set of important test cases.



For small scale engineering efforts (including prototypes), exploratory testing may be sufficient. With this informal
approach, the tester does not follow any rigorous testing procedure, but rather explores the user interface of the
application using as many of its features as possible, using information gained in prior tests to intuitively derive
additional tests. The success of exploratory manual testing relies heavily on the domain expertise of the tester,
because a lack of knowledge will lead to incompleteness in testing. One of the key advantages of an informal
approach is to gain an intuitive insight into how it feels to use the application.

Large scale engineering projects that rely on manual software testing follow a more rigorous methodology in order
to maximize the number of defects that can be found. A systematic approach focuses on predetermined test cases
and generally involves the following steps.[1]

1. Choose a high-level test plan where a general methodology is chosen, and resources such as people, computers, and
   software licenses are identified and acquired.
2. Write detailed test cases, identifying clear and concise steps to be taken by the tester, with expected outcomes.
3. Assign the test cases to testers, who manually follow the steps and record the results.
4. Author a test report, detailing the findings of the testers. The report is used by managers to determine whether the
   software can be released, and if not, it is used by engineers to identify and correct the problems.



Automation Testing:

An automated software testing tool is able to play back pre-recorded and predefined actions, compare the results to
the expected behavior, and report the success or failure of these manual tests to a test engineer. Once automated tests
are created they can easily be repeated, and they can be extended to perform tasks impossible with manual testing.
Because of this, savvy managers have found that automated software testing is an essential component of successful
development projects. Automated software testing has long been considered critical for big software development
organizations but is often thought to be too expensive or difficult for smaller companies to implement.
AutomatedQA's TestComplete is affordable enough for single-developer shops and yet powerful enough that its
customer list includes some of the largest and most respected companies in the world.
Companies like Corel, Intel, Adobe, Autodesk, Intuit, McDonalds, Motorola, Symantec and Sony all use
TestComplete.

What makes automated software testing so important to these successful companies?

Automated Software Testing Saves Time and Money

Software tests have to be repeated often during development cycles to ensure quality. Every time source code is
modified software tests should be repeated. For each release of the software it may be tested on all supported
operating systems and hardware configurations. Manually repeating these tests is costly and time consuming. Once
created, automated tests can be run over and over again at no additional cost and they are much faster than manual
tests. Automated software testing can reduce the time to run repetitive tests from days to hours, a time savings that
translates directly into cost savings.

Automated Software Testing Improves Accuracy

Even the most conscientious tester will make mistakes during monotonous manual testing. Automated tests perform
the same steps precisely every time they are executed and never forget to record detailed results.

Automated Software Testing Increases Test Coverage

Automated software testing can increase the depth and scope of tests to help improve software quality. Lengthy tests
that are often avoided during manual testing can be run unattended. They can even be run on multiple computers
with different configurations. Automated software testing can look inside an application and see memory contents,
data tables, file contents, and internal program states to determine if the product is behaving as expected.
Automated software tests can easily execute thousands of different complex test cases during every test run
providing coverage that is impossible with manual tests. Testers freed from repetitive manual tests have more time
to create new automated software tests and deal with complex features.

Automated Software Testing Does What Manual Testing Cannot

Even the largest software departments cannot perform a controlled web application test with thousands of users.
Automated testing can simulate tens, hundreds or thousands of virtual users interacting with network or web
software and applications.
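
As a minimal sketch of the virtual-user idea, the snippet below runs many concurrent workers against the same
operation and summarizes the observed response times. The simulated request (a short random sleep) stands in for a
real network call, and the user count and latency range are arbitrary illustrative assumptions.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor


def simulated_request(user_id: int) -> float:
    """One virtual user's interaction with the system under test (simulated)."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))   # pretend network/server latency
    return time.perf_counter() - start


VIRTUAL_USERS = 100

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    latencies = sorted(pool.map(simulated_request, range(VIRTUAL_USERS)))

average_ms = sum(latencies) / len(latencies) * 1000
p95_ms = latencies[int(0.95 * len(latencies))] * 1000
print(f"virtual users={VIRTUAL_USERS}  average={average_ms:.1f} ms  95th percentile={p95_ms:.1f} ms")
```
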
Implementation:

Software Testing Life Cycle:

Software Testing Life Cycle consists of the following (generic) phases:

•  Test Planning
•  Test Analysis
•  Test Design
•  Construction and Verification
•  Testing Cycles
•  Final Testing and Implementation
•  Post Implementation

Software testing has its own life cycle that intersects with every stage of the SDLC. The basic requirement in the
software testing life cycle is to control and deal with software testing – Manual, Automated and Performance.



Software Testing                                                  Page 37
Test Planning:

This is the phase where the Project Manager has to decide what needs to be tested and whether the appropriate
budget is available, among other things. Naturally, proper planning at this stage greatly reduces the risk of
low-quality software. This planning will be an ongoing process with no end point.

Activities at this stage would include preparation of a high-level test plan. (According to the IEEE test plan template,
the Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and schedule of all testing
activities. The plan must identify the items to be tested, the features to be tested, the types of testing to be performed,
the personnel responsible for testing, the resources and schedule required to complete testing, and the risks
associated with the plan.) Almost all of the activities done during this stage are included in this software test plan
and revolve around the test plan.

Test Analysis:

Once the test plan is made and decided upon, the next step is to delve a little more into the project and decide what
types of testing should be carried out at different stages of the SDLC, whether we need or plan to automate (and if so,
when the appropriate time to automate is), and what type of specific documentation is needed for testing.

Proper and regular meetings should be held between the testing teams, project managers, development teams and
business analysts to check the progress of things. This gives a fair idea of the movement of the project, ensures the
completeness of the test plan created in the planning phase, and helps in refining the testing strategy created earlier.
We will start creating test case formats and the test cases themselves. In this stage we need to develop a functional
validation matrix based on the business requirements to ensure that all system requirements are covered by one or
more test cases, identify which test cases to automate, and begin review of documentation, i.e. Functional Design,
Business Requirements, Product Specifications, Product Externals etc. We also have to define areas for Stress and
Performance testing.

Test Design:

Test plans and cases which were developed in the analysis phase are revised. The functional validation matrix is also
revised and finalized. In this stage the risk assessment criteria are developed. If you have decided on automation, you
have to select which test cases to automate and begin writing scripts for them. Test data is prepared. Standards for
unit testing and pass/fail criteria are defined here. The schedule for testing is revised (if necessary) and finalized, and
the test environment is prepared.

Construction and verification:

In this phase we have to complete all the test plans and test cases, complete the scripting of the automated test cases,
and complete the stress and performance testing plans. We have to support the development team in their unit
testing phase, and obviously bug reporting is done as and when bugs are found. Integration tests are performed and
errors (if any) are reported.

Testing Cycles:

In this phase we have to complete testing cycles until test cases are executed without errors or a predefined condition
is reached. Run test cases --> Report Bugs --> revise test cases (if needed) --> add new test cases (if needed) --> bug
fixing --> retesting (test cycle 2, test cycle 3….).

Final Testing and Implementation:



In this phase we have to execute the remaining stress and performance test cases, complete and update the testing
documentation, and provide and complete the different matrices for testing. Acceptance, load and recovery testing
will also be conducted, and the application needs to be verified under production conditions.

Post Implementation:

In this phase, the testing process is evaluated and the lessons learnt from that testing process are documented. An
approach to prevent similar problems in future projects is identified, and plans are created to improve the processes.
The recording of new errors and enhancements is an ongoing process. Cleaning up of the test environment is done
and the test machines are restored to baselines in this stage.



Bug

A software bug is the common term used to describe an error, flaw, mistake, failure, or fault in a computer
program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways. Most
bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are
caused by compilers producing incorrect code. A program that contains a large number of bugs, and/or bugs that
seriously interfere with its functionality, is said to be buggy. Reports detailing bugs in a program are commonly
known as bug reports, fault reports, problem reports, trouble reports, change requests, and so forth.


Arithmetic bugs

  * Division by zero
  * Arithmetic overflow or underflow
  * Loss of arithmetic precision due to rounding or numerically unstable algorithms


Logic bugs

  * Infinite loops and infinite recursion
  * Off by one error, counting one too many or too few when looping
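
As a concrete illustration of an off-by-one logic bug, and of the boundary-value test that exposes it, consider the
sketch below; the summing function is invented purely for illustration.

```python
def sum_first_n(values, n):
    """Intended to sum the first n elements, but the loop bound is off by one."""
    total = 0
    for i in range(n - 1):   # BUG: should be range(n); the last element is skipped
        total += values[i]
    return total


# Boundary-value test: with n equal to the length of the list,
# the expected sum of [1, 2, 3] is 6, but the buggy loop returns 3.
data = [1, 2, 3]
expected, actual = 6, sum_first_n(data, len(data))
print("PASS" if actual == expected else f"FAIL: off-by-one bug, expected {expected}, got {actual}")
```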

Syntax bugs

  * Use of the wrong operator, such as performing assignment instead of equality test. In simple cases often warned
by the compiler; in many languages, deliberately guarded against by language syntax

Resource bugs

   * Null pointer dereference
   * Using an uninitialized variable
   * Using an otherwise valid instruction on the wrong data type (see packed decimal/binary coded decimal)
   * Access violations
   * Resource leaks, where a finite system resource such as memory or file handles is exhausted by repeated
allocation without release.
   * Buffer overflow, in which a program tries to store data past the end of allocated storage. This may or may not
lead to an access violation or storage violation. These bugs can form a security vulnerability.
   * Excessive recursion which though logically valid causes stack overflow

Multi-threading programming bugs

  * Deadlock
* Race condition
  * Concurrency errors in Critical sections, Mutual exclusions and other features of concurrent processing. Time-of-
check-to-time-of-use (TOCTOU) is a form of unprotected critical section.
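
As a rough illustration of a race condition and its usual fix, the following C/pthreads sketch protects a shared counter
with a mutex; removing the lock/unlock pair reintroduces the race described above:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Without the mutex, "counter++" is a read-modify-write that two threads
   can interleave, silently losing updates -- a classic race condition. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                      /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}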

Teamworking bugs

  * Unpropagated updates; e.g. programmer changes "myAdd" but forgets to change "mySubtract", which uses the
same algorithm. These errors are mitigated by the Don't Repeat Yourself philosophy.
  * Comments out of date or incorrect: many programmers assume the comments accurately describe the code
  * Differences between documentation and the actual product
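
A minimal sketch of the myAdd/mySubtract situation mentioned above (the function names and the clamping rule are
purely illustrative): keeping the shared logic in one helper means an update cannot be applied to one function and
forgotten in the other:

#include <stdio.h>

/* Shared helper: the clamping rule lives in exactly one place, so a later
   change to it cannot be propagated to "add" and forgotten in "subtract". */
static int clampToRange(int value) {
    if (value > 100) return 100;
    if (value < -100) return -100;
    return value;
}

static int myAdd(int a, int b)      { return clampToRange(a + b); }
static int mySubtract(int a, int b) { return clampToRange(a - b); }

int main(void) {
    printf("%d %d\n", myAdd(90, 20), mySubtract(-90, 20));
    return 0;
}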

Bugs in popular culture

   * In the 1968 novel 2001: A Space Odyssey (and the corresponding 1968 film), a spaceship's onboard computer,
HAL 9000, attempts to kill all its crew members. In the follow-up 1982 novel, 2010: Odyssey Two, and the
accompanying 1984 film, 2010, it is revealed that this action was caused by the computer having been programmed
with two conflicting objectives: to fully disclose all its information, and to keep the true purpose of the flight secret
from the crew; this conflict caused HAL to become paranoid and eventually homicidal.
   * In the 1984 song 99 Red Balloons (though not in the original German version), "bugs in the software" lead to a
computer mistaking a group of balloons for a nuclear missile and starting a nuclear war.
   * The 2004 novel The Bug, by Ellen Ullman, is about a programmer's attempt to find an elusive bug in a database
application.

Effects of Bugs:




Bugs trigger Type I and type II errors that can in turn have a wide variety of ripple effects, with varying levels of
inconvenience to the user of the program. Some bugs have only a subtle effect on the program's functionality, and
may thus lie undetected for a long time. More serious bugs may cause the program to crash or freeze leading to a
denial of service. Others qualify as security bugs and might for example enable a malicious user to bypass access
controls in order to obtain unauthorized privileges.

The results of bugs may be extremely serious. Bugs in the code controlling the Therac-25 radiation therapy machine
were directly responsible for some patient deaths in the 1980s. In 1996, the European Space Agency's US$1 billion
prototype Ariane 5 rocket was destroyed less than a minute after launch, due to a bug in the on-board guidance
computer program. In June 1994, a Royal Air Force Chinook crashed into the Mull of Kintyre, killing 29. This was
initially dismissed as pilot error, but an investigation by Computer Weekly uncovered sufficient evidence to convince
a House of Lords inquiry that it may have been caused by a software bug in the aircraft's engine control computer.
[1]

In 2002, a study commissioned by the US Department of Commerce's National Institute of Standards and Technology
concluded that software bugs, or errors, are so prevalent and so detrimental that they cost the US economy an
estimated $59 billion annually, or about 0.6 percent of the gross domestic product.


How to prevent bugs

   •   Programming style: While typos in the program code are often caught by the compiler, a bug usually appears
       when the programmer makes a logic error. Various innovations in programming style and defensive
       programming are designed to make these bugs less likely, or easier to spot. In some programming languages,
       so-called typos, especially of symbols or logical/mathematical operators, actually represent logic errors, since
       the mistyped constructs are accepted by the compiler with a meaning other than that which the programmer
       intended.
   •   Programming techniques: Bugs often create inconsistencies in the internal data of a running program.
       Programs can be written to check the consistency of their own internal data while running (see the sketch
       after this list). If an inconsistency is encountered, the program can immediately halt, so that the bug can be
       located and fixed. Alternatively, the program can simply inform the user, attempt to correct the inconsistency,
       and continue running.
   •   Development methodologies: There are several schemes for managing programmer activity so that fewer
       bugs are produced. Many of these fall under the discipline of software engineering (which addresses software
       design issues as well). For example, formal program specifications are used to state the exact behavior of
       programs, so that design bugs can be eliminated. Unfortunately, formal specifications are impractical or
       impossible for anything but the shortest programs, because of problems of combinatorial explosion and
       indeterminacy.

   •   Programming language support: Programming languages often include features which help programmers
       prevent bugs, such as static type systems, restricted namespaces and modular programming, among others.
       For example, when a programmer writes (pseudocode) LET REAL_VALUE PI = "THREE AND A BIT",
       although this may be syntactically correct, the code fails a type check. Depending on the language and
       implementation, this may be caught by the compiler or at runtime. In addition, many recently invented
       languages have deliberately excluded features which can easily lead to bugs, at the expense of making code
       slower than it need be: the general principle being that, because of Moore's law, computers get faster while
       software engineers get slower, it is almost always better to write simpler, slower code than "clever",
       inscrutable code, especially considering that maintenance cost is considerable. For example, the Java
       programming language does not support pointer arithmetic, and implementations of some languages such as
       Pascal and of scripting languages often have runtime bounds checking of arrays, at least in a debugging build.




   •   Code analysis: Tools for code analysis help developers by inspecting the program text beyond the compiler's
       capabilities to spot potential problems. Although in general the problem of finding all programming errors
       given a specification is not solvable (see halting problem), these tools exploit the fact that human programmers
       tend to make the same kinds of mistakes when writing software.

   •   Instrumentation: Tools to monitor the performance of the software as it is running, either specifically to find
       problems such as bottlenecks or to give assurance as to correct working, may be embedded in the code
       explicitly (perhaps as simple as a statement saying PRINT "I AM HERE"), or provided as tools. It is often a
       surprise to find where most of the time is taken by a piece of code, and this removal of assumptions may
       cause the code to be rewritten.
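
As a concrete example of the self-checking idea under "Programming techniques" above, here is a minimal C sketch
(the account structure and its invariant are invented for illustration) in which the program validates its own internal
data while running and halts as soon as an inconsistency appears:

#include <assert.h>
#include <stdio.h>

/* A self-checking data structure: the program verifies its own internal
   consistency while running, so a bug is caught close to where it occurs. */
struct Account {
    int credits;
    int debits;
    int balance;   /* invariant: balance == credits - debits */
};

static void checkInvariant(const struct Account *a) {
    assert(a->balance == a->credits - a->debits);
}

static void deposit(struct Account *a, int amount) {
    a->credits += amount;
    a->balance += amount;
    checkInvariant(a);   /* halt immediately if the invariant is broken */
}

int main(void) {
    struct Account acc = {0, 0, 0};
    deposit(&acc, 50);
    printf("balance = %d\n", acc.balance);
    return 0;
}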


   Bug management:
   It is common practice for software to be released with known bugs that are considered non-critical, that is, that
   do not affect most users' main experience with the product. While software products may, by definition, contain
   any number of unknown bugs, measurements during testing can provide an estimate of the number of likely bugs
   remaining; this becomes more reliable the longer a product is tested and developed ("if we had 200 bugs last
   week, we should have 100 this week"). Most big software projects maintain two lists of "known bugs": those
   known to the software team, and those to be told to users. This is not dissimulation, but users are not concerned
   with the internal workings of the product. The second list informs users about bugs that are not fixed in the
   current release, or not fixed at all, and a workaround may be offered.

   There are various reasons for not fixing bugs:

         • The developers often don't have time, or it is not economical, to fix all non-severe bugs.
         • The bug could be fixed in a new version or patch that is not yet released.
         • The changes to the code required to fix the bug could be large, expensive, or delay finishing the project.
         • Even seemingly simple fixes bring the chance of introducing new unknown bugs into the system. At the
           end of a test/fix cycle some managers may only allow the most critical bugs to be fixed.
         • Users may be relying on the undocumented, buggy behavior, especially if scripts or macros depend on it;
           fixing it may introduce a breaking change.
         • It is "not a bug": a misunderstanding has arisen between the expected and the provided behavior.

   It is often considered impossible to write completely bug-free software of any real complexity. So bugs are
   categorized by severity, and low-severity non-critical bugs are tolerated, as they do not affect the proper
   operation of the system for most users. NASA's SATC managed to reduce the number of errors to fewer than 0.1
   per 1,000 lines of code (SLOC), but this was not felt to be feasible for real-world projects.

   The severity of a bug is not the same as its importance for fixing, and the two should be measured and managed
   separately. On a Microsoft Windows system a blue screen of death is rather severe, but if it only occurs in
   extreme circumstances, especially if they are well diagnosed and avoidable, it may be less important to fix than an
   icon not representing its function well, which though purely aesthetic may confuse thousands of users every single
   day. This balance, of course, depends on many factors; expert users have different expectations from novices, a
   niche market is different from a general consumer market, and so on.

   A school of thought popularized by Eric S. Raymond as Linus's Law says that popular open-source software has
   more chance of having few or no bugs than other software, because "given enough eyeballs, all bugs are shallow".
   This assertion has been disputed, however: computer security specialist Elias Levy wrote that "it is easy to hide
   vulnerabilities in complex, little understood and undocumented source code," because, "even if people are
   reviewing the code, that doesn't mean they're qualified to do so."




Bug management must be conducted carefully and intelligently because "what gets measured gets done" and
   managing purely by bug counts can have unintended consequences. If, for example, developers are rewarded by
   the number of bugs they fix, they will naturally fix the easiest bugs first leaving the hardest, and probably most
   risky or critical, to the last possible moment.

   Debugging:-

Finding and fixing bugs, or "debugging", has always been a major part of computer programming. As computer
programs grow more complex, bugs become more common and more difficult to fix. Often programmers spend more
time and effort finding and fixing bugs than writing new code. Software testers are professionals whose primary task
is to find bugs, or to write code to support testing. On some projects, more resources may be spent on testing than on
developing the program.

Usually, the most difficult part of debugging is finding the bug in the source code. Once it is found, correcting it is
usually relatively easy. Programs known as debuggers exist to help programmers locate bugs by executing code line
by line, watching variable values, and offering other features to observe program behavior. Without a debugger, code
can be added so that messages or values are written to a console (for example with printf in the C language) or to a
window or log file to trace program execution or show values.
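
A minimal example of this kind of console tracing in C (the function and values are made up); without a debugger,
such messages show how far the program got and what the values were at each step:

#include <stdio.h>

/* Trace-to-console debugging: messages written at key points reveal the
   program's path and the data it is working on. */
static int parsePercentage(int value) {
    fprintf(stderr, "DEBUG: parsePercentage called with value=%d\n", value);
    if (value < 0 || value > 100) {
        fprintf(stderr, "DEBUG: value out of range, clamping\n");
        value = value < 0 ? 0 : 100;
    }
    return value;
}

int main(void) {
    printf("result = %d\n", parsePercentage(120));
    return 0;
}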

However, even with the aid of a debugger, locating bugs is something of an art. It is not uncommon for a bug in one
section of a program to cause failures in a completely different, apparently unrelated part of the system, which makes
it especially difficult to track down (for example, an error in a graphics rendering routine causing a file I/O routine to
fail).

Sometimes, a bug is not an isolated flaw, but represents an error of thinking or planning on the part of the
programmer. Such logic errors require a section of the program to be overhauled or rewritten. As a part of code
review, stepping through the code and modelling the execution process in one's head or on paper can often find these
errors without ever needing to reproduce the bug as such, if it can be shown that there is some faulty logic in its
implementation.

But more typically, the first step in locating a bug is to reproduce it reliably. Once the bug is reproduced, the
programmer can use a debugger or some other tool to monitor the execution of the program in the faulty region, and
find the point at which the program went astray.

It is not always easy to reproduce bugs. Some are triggered by inputs to the program which may be difficult for the
programmer to re-create. One cause of the Therac-25 radiation machine deaths was a bug (specifically, a race
condition) that occurred only when the machine operator very rapidly entered a treatment plan; it took days of
practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to
duplicate it. Other bugs may disappear when the program is run with a debugger; these are heisenbugs (humorously
named after the Heisenberg uncertainty principle).

Debugging is still a tedious task requiring considerable effort. Since the 1990s, particularly following the Ariane 5
Flight 501 disaster, there has been a renewed interest in the development of effective automated aids to debugging.

There are also classes of bugs that have nothing to do with the code itself. For example, if one relies on faulty
documentation or hardware, the code may be written perfectly properly to what the documentation says, but the bug
truly lies in the documentation or hardware, not the code. However, it is common to change the code instead of the
other parts of the system, as the cost and time to change it is generally less. Embedded systems frequently have
workarounds for hardware bugs, since making a new version of a ROM is much cheaper than remanufacturing the
hardware, especially if they are commodity items.




Bug tracking tools

AceProject (Websystems) - Bug tracking software designed for project managers and developers.
AdminiTrack (AdminiTrack) - Hosted issue and bug tracking application.
ADT Web (Borderwave) - Designed for small, medium and large software companies to simplify their defect,
suggestion and feature request tracking. It allows tracking of defects, feature requests and suggestions by version,
customer, etc.
Agility (AgileEdge) - Features an easy-to-use web-based interface. It includes fully customizable field lists, a
workflow engine, and email notifications.
Bug/Defect Tracking Expert (Applied Innovation Management) - Web-based bug tracking software.
BugAware (bugaware) - Installed and ASP-hosted service available. Email alert notification, knowledge base,
dynamic reporting, team management, user discussion threads, file attachment, searching.
bugcentral (bugcentral.com) - Web-based defect tracking service.
BUGtrack (SkyeyTech, Inc.) - Web-based defect tracking system.
BugHost (Active-X.COM) - Ideal for small- to medium-sized companies who want a secure, web-based issue and
bug management system. There is no software to install and it can be accessed from any Internet connection; the
system is easy to use, powerful, and customizable.
BugImpact (Avna Int.) - Unlimited projects and entries/bugs/issues; web access through a standard browser; a
default workflow configuration that can easily be changed or replaced; file attachments (screenshots, spreadsheets,
internal documents or any binary files); e-mail notification when new bugs are assigned or status changes; builds,
so projects may have a specific 'fix-for' version with an optional deadline; priority colorizing, where custom colors
may be associated with different priorities.
BugStation (Bugopolis) - Designed to make Bugzilla easier and more secure. A centralized system for entering,
assigning and tracking defects. Configurable and customizable.
Bug Tracker Software (Bug Tracker Software) - Web-based defect tracking and data sharing.
Bug Tracking (Bug-Track.com) - Offers email notification, file attachment, tracking history, bilingual pages,
128-bit encrypted connections and advanced customization.
Bugvisor (softwarequality, Inc.) - Enterprise solution for capturing, managing and communicating feature requests,
bug reports, changes and project issues from emergence to resolution, with a fully customizable and controllable
workflow.
Bugzero (WEBsina) - Web-based, easy-to-install, cross-platform defect tracking system.
Bugzilla (Bugzilla.org) - Highly configurable open-source defect tracking system developed originally for the
Mozilla project.
Census BugTrack (MetaQuest) - Includes VSS integration, notifications, workflow, reporting and change history.
DefectTracker (Pragmatic Software) - Subscription-based bug/problem tracking solution.
Defectr (Defectr) - Defect tracking and project management tool developed using IBM Lotus Domino and the Dojo
Ajax framework.
Dragonfly (Vermont Software Testing Group) - Web-based, cross-browser, cross-platform issue tracking and change
management for software development, testing, debugging, and documentation.
ExDesk (ExDesk) - Bug and issue tracking software, remotely hosted; allows tracking software bugs and routing
them to multiple developers or development groups for repair, with reporting and automatic notification.
FogBUGZ (Fog Creek S/W) - Web-based defect tracking.
Fast BugTrack (AlceaTech) - Web-based bug tracking.
Footprints (Unipress) - Web-based issue tracking and project management tool.
IssueTrak (Help Desk Software Central) - Offers issue tracking, customer relationship and project management
functions.
JIRA (Atlassian) - J2EE-based issue tracking and project management application.
Jitterbug (Samba) - Freeware defect tracking.
JTrac - Generic issue-tracking web application that can be easily customized by adding custom fields and
drop-downs. Features include customizable workflow, field-level permissions, e-mail integration, file attachments
and a detailed history view.
Mantis - Lightweight and simple bug tracking system. Easily modifiable, customizable, and upgradeable. Open
source.
MyBugReport (Bug Tracker) - Allows the different participants working on the development of a software or
multimedia application to detect new bugs, ensure their follow-up, give them a priority and assign them within the
team.
Ozibug (Tortuga Technologies) - Written in Java, it utilizes servlet technology and offers features such as reports,
file attachments, role-based access, audit trails, email notifications, full internationalization, and a customizable
appearance.
Perfect Tracker (Avensoft) - Web-based defect tracking.
ProblemTracker (NetResults) - Web-based collaboration software for issue tracking, automated support, and
workflow, process, and change management.
ProjectLocker (ProjectLocker) - Hosted source control (CVS/Subversion), web-based issue tracking, and web-based
document management solutions.
PR Tracker (Softwise Company) - Records problem reports in a network- and web-based database that supports
access by multiple users. Includes classification, assignment, sorting, searching, reporting, access control, and more.
QEngine (AdventNet) - Offers tracking and managing of bugs, issues, improvements, and features. It provides
role-based access control, attachment handling, schedule management, automatic e-mail notification, workflow,
resolution, worklogs, attaching screenshots, easy reporting, and extensive customization.
SpeeDEV (SpeeDEV) - A complete visual design of a multi-level role-based process can be defined for different
types of issues, with conditional branching and automated task generation.
Squish (Information Management Systems, Inc.) - Web-based issue tracking.
Task Complete (Smart Design Te) - TaskComplete enables a team to organize and track software defects, with
integrated calendar, discussion, and document management capabilities. Can easily be customized to meet the needs
of any software development team.
teamatic (Teamatic) - Defect tracking system.
TrackStudio (TrackStudio) - Supports workflow, multi-level security, rule-based email notification, email
submission, subscribable filters and reports. Has a skin-based user interface. Supports Oracle, DB2, MS SQL,
Firebird, PostgreSQL and Hypersonic SQL.
VisionProject (Visionera AB) - Designed to make projects more efficient and profitable.
Woodpecker IT (AVS GmbH) - For request, version or bug management. Its main function is recording and tracking
issues within a freely defined workflow.
yKAP (DCom Solutions) - Uses XML to deliver a powerful, cost-effective, web-based bug/defect tracking, issue
management and messaging product. Features include support for unlimited projects, test environments, attachments,
exporting data into PDF/RTF/XLS/HTML/text formats, rule-based email alerts, exhaustive search options, saved
searches (public/private), auto-complete for user names, extensive reports, history, custom report styles, extensive
data/trend analysis, printing, and role-based security. yKAP allows the user to add custom values for system
parameters such as status, defect cause, defect type, priority, etc., and is installed with complete help documentation.


assyst (Axios Systems) - Offers a unique lifecycle approach to IT Service Management through the integration of all
ITIL processes in a single application.
BridgeTrak (Kemma Software) - Record and track development or customer issues, assign issues to development
teams, create software release notes and more.
BugRat (Giant Java Tree) - Provides a defect reporting and tracking system, with bug reporting by web and email.
BugSentry (IT Collaborate) - Automatically and securely reports errors in .NET and COM applications. BugSentry
provides a .NET DLL (a COM interop version is available too) that developers ship with their products.
Bug Trail (Osmosys) - Easy-to-use tool that allows attaching screenshots, automatically captures system parameters
and creates well-formatted MS Word and HTML output reports. A customizable defect status flow allows small to
large organizations to configure it to match their existing structure.
BugZap (Cybernetic Intelligence GmbH) - For small or medium-size projects; easy to install, small and requires no
server-side installation.
Defect Agent (Inborne Software) - Defect tracking, enhancement suggestion tracking, and development team
workflow management software.
Defect Manager (Tiera Software) - Manages defects and enhancements through the entire life cycle of product
development, through to field deployment.
Fast BugTrack (Alcea) - Bug tracking / defect tracking / issue tracking and change management software
(workflow/process flow).
GNATS (GNU) - Freeware defect tracking software.
Intercept (Elsinore Technologies) - Bug tracking system designed to integrate with Visual SourceSafe and the rest of
your Microsoft development environment.
IssueView (IssueView) - SQL Server based bug tracking with an Outlook-style user interface.
JIRA (Atlassian) - Browser-based J2EE defect tracking and issue management software. Supports any platform that
runs Java 1.3.x.
QAW (B.I.C Quality) - Developed to assist all quality assurance measurements within ICT projects. The basis of
QAW is a structured way of registering and tracking issues (defects).
QuickBugs (Excel Software) - Tool for reporting, tracking and managing bugs, issues, changes and new features
involved in product development. Key attributes include ease of use and flexibility, a shared XML repository
accessible to multiple users, and multiple projects with assigned responsibilities and configurable access and
privileges for users on each project. Virtually everything in QuickBugs is configurable to the organization and
specific user needs, including data collection fields, workflow, views, queries, reports, security and access control.
Highly targeted email messages notify people when specific events require their attention.
Support Tracker (Acentre) - Web-enabled defect tracking application, one of the modules of the Tracker Suite
software package. Support Tracker is based on Lotus Notes, allowing customers to leverage their existing Notes
infrastructure for this bug tracking solution. Because Tracker Suite is server-based, Support Tracker installs with
zero impact on the desktop. Users can create, track, and manage requests through Notes or over the Web. Requests
are assigned, routed, and escalated automatically via Service Level Agreements, for proper prioritization and
resource allocation. Support Tracker also features FAQ and knowledge base functionality.
SWBTracker (Software With Brains) - Bug tracking system.
TestTrack Pro (Seapine Software) - Delivers time-saving features that keep everyone involved with the project
informed and on schedule. TestTrack Pro is a scalable solution with Windows and Web clients, server support for
Windows, Linux, Solaris, and Mac OS X, integration with MS Visual Studio (including .NET), and interfaces with
most major source code managers including Surround SCM, as well as the automated software testing tool QA
Wizard and other Seapine tools.
Track (Soffront) - Defect tracking system.
ZeroDefect (ProStyle) - Issue management.



Bug report





Weitere ähnliche Inhalte

Was ist angesagt?

St & internationalization
St & internationalizationSt & internationalization
St & internationalization
Sachin MK
 
Types of Software testing
Types of  Software testingTypes of  Software testing
Types of Software testing
Makan Singh
 
Testing terms & definitions
Testing terms & definitionsTesting terms & definitions
Testing terms & definitions
Sachin MK
 
Testing artifacts test cases
Testing artifacts   test casesTesting artifacts   test cases
Testing artifacts test cases
Petro Chernii
 

Was ist angesagt? (20)

Software Testing: History, Trends, Perspectives - a Brief Overview
Software Testing: History, Trends, Perspectives - a Brief OverviewSoftware Testing: History, Trends, Perspectives - a Brief Overview
Software Testing: History, Trends, Perspectives - a Brief Overview
 
Concept of Failure, error, fault and defect
Concept of Failure, error, fault and defectConcept of Failure, error, fault and defect
Concept of Failure, error, fault and defect
 
Software testing
Software testingSoftware testing
Software testing
 
St & internationalization
St & internationalizationSt & internationalization
St & internationalization
 
Manual software-testing-interview-questions-with-answers
Manual software-testing-interview-questions-with-answersManual software-testing-interview-questions-with-answers
Manual software-testing-interview-questions-with-answers
 
A COMPOSITION ON SOFTWARE TESTING
A COMPOSITION ON SOFTWARE TESTINGA COMPOSITION ON SOFTWARE TESTING
A COMPOSITION ON SOFTWARE TESTING
 
Manual testing interview questions and answers
Manual testing interview questions and answersManual testing interview questions and answers
Manual testing interview questions and answers
 
Software testing
Software testingSoftware testing
Software testing
 
Types of Software testing
Types of  Software testingTypes of  Software testing
Types of Software testing
 
Software Engineering- Types of Testing
Software Engineering- Types of TestingSoftware Engineering- Types of Testing
Software Engineering- Types of Testing
 
Testing terms & definitions
Testing terms & definitionsTesting terms & definitions
Testing terms & definitions
 
11 steps of testing process - By Harshil Barot
11 steps of testing process - By Harshil Barot11 steps of testing process - By Harshil Barot
11 steps of testing process - By Harshil Barot
 
Introduction & Manual Testing
Introduction & Manual TestingIntroduction & Manual Testing
Introduction & Manual Testing
 
functional testing
functional testing functional testing
functional testing
 
Chapter 3 SOFTWARE TESTING PROCESS
Chapter 3 SOFTWARE TESTING PROCESSChapter 3 SOFTWARE TESTING PROCESS
Chapter 3 SOFTWARE TESTING PROCESS
 
Fundamentals of software testing
Fundamentals of software testingFundamentals of software testing
Fundamentals of software testing
 
Testing artifacts test cases
Testing artifacts   test casesTesting artifacts   test cases
Testing artifacts test cases
 
What is smoke testing
What is smoke testingWhat is smoke testing
What is smoke testing
 
System testing
System testingSystem testing
System testing
 
Regression testing
Regression testingRegression testing
Regression testing
 

Ähnlich wie Testing

How to Make the Most of Regression and Unit Testing.pdf
How to Make the Most of Regression and Unit Testing.pdfHow to Make the Most of Regression and Unit Testing.pdf
How to Make the Most of Regression and Unit Testing.pdf
Abhay Kumar
 
unit 4.pptx very needful and important p
unit 4.pptx very needful and important punit 4.pptx very needful and important p
unit 4.pptx very needful and important p
20EC040
 
Introduction to software testing
Introduction to software testingIntroduction to software testing
Introduction to software testing
Venkat Alagarsamy
 
Software testing.ppt
Software testing.pptSoftware testing.ppt
Software testing.ppt
Komal Garg
 
softwaretesting-140721025833-phpapp02.pdf
softwaretesting-140721025833-phpapp02.pdfsoftwaretesting-140721025833-phpapp02.pdf
softwaretesting-140721025833-phpapp02.pdf
SHAMSHADHUSAIN9
 

Ähnlich wie Testing (20)

Software unit4
Software unit4Software unit4
Software unit4
 
Software testing
Software testingSoftware testing
Software testing
 
Software testing
Software testingSoftware testing
Software testing
 
How to Make the Most of Regression and Unit Testing.pdf
How to Make the Most of Regression and Unit Testing.pdfHow to Make the Most of Regression and Unit Testing.pdf
How to Make the Most of Regression and Unit Testing.pdf
 
Object Oriented Testing
Object Oriented TestingObject Oriented Testing
Object Oriented Testing
 
Software testing
Software testingSoftware testing
Software testing
 
unit 4.pptx very needful and important p
unit 4.pptx very needful and important punit 4.pptx very needful and important p
unit 4.pptx very needful and important p
 
Functional Testing- All you need to know (2).pptx
Functional Testing- All you need to know (2).pptxFunctional Testing- All you need to know (2).pptx
Functional Testing- All you need to know (2).pptx
 
Software testing techniques
Software testing techniquesSoftware testing techniques
Software testing techniques
 
Istqb v.1.2
Istqb v.1.2Istqb v.1.2
Istqb v.1.2
 
Introduction to software testing
Introduction to software testingIntroduction to software testing
Introduction to software testing
 
Types of software testing
Types of software testingTypes of software testing
Types of software testing
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
Chapter 9 Testing Strategies.ppt
Chapter 9 Testing Strategies.pptChapter 9 Testing Strategies.ppt
Chapter 9 Testing Strategies.ppt
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
Software testing.ppt
Software testing.pptSoftware testing.ppt
Software testing.ppt
 
softwaretesting-140721025833-phpapp02.pdf
softwaretesting-140721025833-phpapp02.pdfsoftwaretesting-140721025833-phpapp02.pdf
softwaretesting-140721025833-phpapp02.pdf
 
Software testing
Software testingSoftware testing
Software testing
 
Software testing
Software testingSoftware testing
Software testing
 
UNIT 2.pptx
UNIT 2.pptxUNIT 2.pptx
UNIT 2.pptx
 

Kürzlich hochgeladen

EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
Earley Information Science
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
Joaquim Jorge
 

Kürzlich hochgeladen (20)

EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your Business
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 

Testing

  • 1. What is software testing Software Testing is the process of executing a program or system with the intent of finding errors.Or, it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes for software is generally infeasible. Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion, wear and tear .Generally it will not change until upgrades, or until obsolescence. So once the software is shipped, the design defects or bugs will be buried in and remain latent until activation. Software bugs will almost always exist in any software module with moderate size not because programmers are careless or irresponsible, but because the complexity of software is generally intractable and humans have only limited ability to manage complexity. It is also true that for any complex systems, design defects can never be completely ruled out. Discovering the design defects in software, is equally difficult, for the same reason of complexity. Because software and any digital systems are not continuous, testing boundary values are not sufficient to guarantee correctness. All the possible values need to be tested and verified, but complete testing is infeasible. Exhaustively testing a simple program to add only two integer inputs of 32-bits (yielding 2^64 distinct test cases) would take hundreds of years, even if tests were performed at a rate of thousands per second. Obviously, for a realistic software module, the complexity can be far beyond the example mentioned here. If inputs from the real world are involved, the problem Software Testing Page 1
  • 2. will get worse, because timing and unpredictable environmental effects and human interactions are all possible input parameters under consideration. Objectives of testing:- First of all objectives should be clear.  Testing as a process of executing a program with the intent of finding errors.To perform testing, test cases are designed. A test case is a particular made up of artificial situation upon which a program is exposed so as to find errors. So a good test case is one that finds undiscovered errors.  If testing is done properly, it uncovers errors and after fixing those errors we have software that is being developed according to specifications.  The above objective implies a dramatic change in viewpoint .The move counter to the commonly held view than a successful test is one in which no errors are found. In fact, our objective is to design tests that a systematically uncover different classes of errors and do so with a minimum amount of time and effort. Testing principles: Before applying methods to design effective test cases, software engineer must understand the basic principles that guide the software testing process. Some of the most commonly followed principles are: All test should be traceable to customer requirements as the objective of testing is to uncover errors, it follows that the most severe defects (from the customers point of view) are those that causes the program to fail to meet its requirements. Software Testing Page 2
  • 3. Tests should be planned long before the testing begins. Test planning can begin as soon as the requirement model is complete. Detailed definition of test cases can begin as soon as the design model has been solidated. Therefore, all tests can be planned and designed before any code can be generated. Exhaustive testing is not possible. The number of paths permutations for impossible to execute every combination of paths during testing. It is possible however to adequately cover program logic and to ensure that all conditions in the procedural design have been exercised. To be most effective, an independent third party should conduct testing. By “most effective”, we mean testing that has the highest probability of finding errors (the primary objective of testing). Test Information Flow: Testing is a complete process. For testing we need two types of inputs:  Software configuration –it includes software requirement specification, design specification and source code of program. Software configuration is required so that testers know what is to be expected and tested.  Test configuration – it is basically test plan and procedure. Test configuration is testing plan that is, the way how the testing will be conducted on the system. It specifies the test cases and their expected value. It also specifies if any tools for testing are to be used.  Test cases are required to know what specific situations need to be tested. When tests are evaluated, test results are compared with actual results and if there is some error, then debugging is done to correct the error. Testing is a way to know about quality.  Software Testing Page 3
  • 4. Different types of testing 1. White box testing 2. Black box testing 3. Unit testing 4. Incremental integration testing 5. Integration testing 6. Functional testing 7. System testing 8. End-to-end testing 9. Sanity testing 10.Regression testing 11.Acceptance testing 12.Load testing 13.Stress testing 14.Performance testing 15.Usability testing 16.Install/uninstall testing 17.Recovery testing 18.Security testing 19.Compatibility testing 20.Comparison testing 21.Beta testing 22.Alpha testing 23.Smoke testing 24.Monkey testing 25.Ad hoc testing Software Testing Page 4
  • 5. 1. Black box testing Internal system design is not considered in this type of testing. Tests are based on requirements and functionality. 2. White box testing This testing is based on knowledge of the internal logic of an application’s code. Also known as Glass box Testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, conditions. 3. Unit testing Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses. 4. Incremental integration testing Bottom up approach for testing i.e continuous testing of an application as new functionality is added; Application functionality and modules should be independent enough to test separately. done by programmers or by testers. 5. Integration testing Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems. Software Testing Page 5
  • 6. 6. Functional testing This type of testing ignores the internal parts and focus on the output is as per requirement or not. Black-box type testing geared to functional requirements of an application. 7. System testing – Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers all combined parts of a system. 8. End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real- world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. 9. Sanity testing – Testing to determine if a new software version is performing well enough to accept it for a major testing effort. If application is crashing for initial use then system is not stable enough for further testing and build or application is assigned to fix. 10. Regression testing – Testing the application as a whole for the modification in any module or functionality. Difficult to cover all the system in regression testing so typically automation tools are used for these testing types. Software Testing Page 6
  • 7. 11. Acceptance testing Normally this type of testing is done to verify if system meets the customer specified requirements. User or customer do this testing to determine whether to accept application. 12. Load testing Its a performance testing to check system behavior under load. Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails. 13. Stress testing – System is stressed beyond its specifications to check how and when it fails. Performed under heavy load like putting large number beyond storage capacity, complex database queries, continuous input to system or database load. 14. Performance testing Term often used interchangeably with ’stress’ and ‘load’ testing. To check whether system meets performance requirements. Used different performance and load tools to do this. 15.Usability testing User-friendliness check. Application flow is tested, Can new user understand the application easily, Proper help documented whenever user stuck at any point. Basically system navigation is checked in this testing. Software Testing Page 7
  • 8. 16. Install/uninstall testing Tested for full, partial, or upgrade install/uninstall processes on different operating systems under different hardware, software environment. 17. Recovery testing : Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems. 18. Security testing Can system be penetrated by any hacking way. Testing how well the system protects against unauthorized internal or external access. Checked if system, database is safe from external attacks. 19. Compatibility testing Testing how well software performs in a particular hardware/software/operating system/network environment and different combination s of above. 20.Comparison testing Comparison of product strengths and weaknesses with previous versions or other similar products. 21. Alpha testing – In house virtual user environment can be created for this type of testing. Testing is done at the end of development. Still minor design changes may be made as a result of such testing. Software Testing Page 8
  • 9. 22. Beta testing: Testing typically done by end-users or others. Final testing before releasing application for commercial purpose. 23. 24. Smoke testing It is a term used in plumbing, woodwind repair, electronics, computer software development, infectious disease control, and the entertainment industry. It refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail. After a smoke test proves that "the pipes will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright," the assembly is ready for more stressful testing. 25. Monkey testing It is random testing performed by automated testing tools (after the latter are developed by humans). These automated testing tools are considered "monkeys", if they work at random. We call them "monkeys" because it is widely believed that if we allow six monkeys to pound on six typewriters at random, for a million years, they will recreate all the works of Isaac Asimov. a) Smart monkeys- are valuable for load and stress testing they will find a significant number of bugs, but are also very expensive to develop. (b) Dumb monkeys- are inexpensive to develop, are able to do some basic testing, but they will find few bugs. Software Testing Page 9
  • 10. 26.Ad hoc testing Its a commonly used term for software testing performed without planning and documentation.The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is a part of exploratory testing, being the least formal of test methods. In this view, ad hoc testing has been criticized because it isn't structured, but this can also be a strength: important things can be found quickly. It is performed with improvisation, the tester seeks to find bugs with any means that seem appropriate. It contrasts to regression testing that looks for a specific issue with detailed reproduction steps, and a clear expected result. . Testing types_ a) Manual testing b) Automation testing Manual testing It is the process of manually testing software for defects. It requires a tester to play the role of an end user, and use most of all features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases. Test automation Its the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions[1]. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process. Software Testing Page 10
  • 11. Software quality assurance Software quality assurance (SQA) consists of a means of monitoring the software engineering processes and methods used to ensure quality. The methods by which this is accomplished are many and varied, and may include ensuring conformance to one or more standards, such as ISO 9000 or a model such as CMMI SQA encompasses the entire software development process, which includes processes such as requirements definition, software design, coding, source code control, code reviews, change management, configuration management, testing, release management, and product integration.The American Society for Quality offers a Certified Software Quality Engineer (CSQE) certification with exams held a minimum of twice a year. SQA includes ● Defect Prevention – prevents defects from occurring in the first place – Activities: training, planning, and simulation ● Defects detection – finds defects in a software artifact – Activities: inspections, testing or measuring ● Defects removal – isolation, correction, verification of fixes – Activities: fault isolation, fault analysis, regression testing ● Verification – are we building the product right ? – performed at the end of a phase to ensure that requirements established during Software Testing Page 11
• 12. previous phase have been met ● Validation – are we building the right product? – performed at the end of the development process to ensure compliance with product requirements. Objective of SQA: Quality is a key measure of project success. Software producers want to be assured of the product quality before delivery. For this, they need to plan and perform a systematic set of activities called Software Quality Assurance (SQA). SQA helps ensure that quality is incorporated into a software product. It aims at preventing errors and detecting them as early as possible. SQA provides confidence to software producers that their product meets the quality requirements. SQA activities include setting up processes and standards, detecting and removing errors, and ensuring that every project performs project SQA activities. Introduction to Software Quality. Importance of Software Quality ● Several historic disasters attributed to software – 1988 shooting down of an Iran Air Airbus A300 by the USS Vincennes: cryptic and misleading output displayed by tracking software – 1991 Patriot missile failure: inaccurate calculation of time due to Software Testing Page 12
• 13. computer arithmetic errors – London Ambulance Service Computer Aided Dispatch System: several deaths – On June 3, 1980, the North American Aerospace Defense Command (NORAD) reported that the U.S. was under missile attack. – First operational launch attempt of the space shuttle, whose real-time operating software consists of about 500,000 lines of code, failed because of a synchronization problem among its flight-control computers. – 9-hour breakdown of AT&T's long-distance telephone network caused by an untested code patch. Importance of Software Quality ● Ariane 5 crash, June 4, 1996 – maiden flight of the European Ariane 5 launcher crashed about 40 seconds after takeoff – the loss was about half a billion dollars – the explosion was the result of a software error ● Uncaught exception due to a floating-point error: conversion from a 64-bit integer to a 16-bit signed integer applied to a larger than expected number ● Module was reused from Ariane 4 without proper testing Software Testing Page 13
• 14. – Error was not supposed to happen with Ariane 4 – No exception handler ● Mars Climate Orbiter, September 23, 1999 – Mars Climate Orbiter disappeared as it began to orbit Mars. – Cost about $US 125 million – Failure due to an error in a transfer of information between a team in Colorado and a team in California ● One team used English units (e.g., inches, feet and pounds) while the other used metric units for a key spacecraft operation. ● Mars Polar Lander, December 1999 – Mars Polar Lander disappeared during landing on Mars – Failure most likely due to the unexpected setting of a single data bit. ● defect not caught by testing ● independent teams tested separate aspects ● Internet viruses and worms – Blaster worm ($US 525 million) – Sobig.F ($US 500 million – 1 billion) ● Exploit well-known software vulnerabilities – Software developers do not devote enough effort to applying lessons learned about the causes of vulnerabilities. Software Testing Page 14
  • 15. – Same types of vulnerabilities continue to be seen in newer versions of products that were in earlier versions. ● Usability problems ● Monetary impact of poor software quality (Standish group 1995) ● 175,000 software projects/year Average Cost per project – Large companies $ US 2,322,000 – Medium companies $ US 1,331,000 – Small companies $ US 434,000 ● 31.1% of projects canceled before completed – cost $81 billion ● 52.7% of projects exceed their budget costing 189% of original estimates – cost $59 billion ● 16.2% of software projects completed ontime and onbudget (9% for larger companies) Software Testing Page 15
  • 16. What are test cases A test case is a set of conditions or variables under which a tester will determine whether an application or software system is working correctly or not. The mechanism for determining whether a software program or system has passed or failed such a test is known as a test oracle. In some settings, an oracle could be a requirement or use case, while in others it could be a heuristic. It may take many test cases to determine that a software program or system is functioning correctly. Test cases are often referred to as test scripts, particularly when written. Written test cases are usually collected into test suites Test cases can be:- 1.Formal test cases In order to fully test that all the requirements of an application are met, there must be at least two test cases for each requirement: one positive test and one negative test unless a requirement has sub-requirements. In that situation, each sub-requirement must have at least two test cases. Keeping track of the link between the requirement and the test is frequently done using a traceability matrix. Written test cases should include a description of the functionality to be tested, and the preparation required to ensure that the test can be conducted. What characterizes a formal, written test case is that there is a known input and an expected output, which is worked out before the test is executed. The known input should test a precondition and the expected output should test a post condition. Software Testing Page 16
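For example, a formal pair of test cases for a single hypothetical requirement, "a username must be between 3 and 15 characters", could look like the sketch below: one positive case and one negative case, each with a known input and an expected output decided before execution. The requirement and the is_valid_username function are invented for illustration.

```python
# Hypothetical requirement: "a username must be between 3 and 15 characters".
# One positive and one negative test case, each with a known input and an
# expected output worked out before the test is executed.
import unittest

def is_valid_username(name):
    """Invented example implementation of the requirement."""
    return 3 <= len(name) <= 15

class UsernameRequirementTests(unittest.TestCase):
    def test_positive_valid_username_is_accepted(self):
        # known input satisfying the precondition; expected output: accepted
        self.assertTrue(is_valid_username("alice"))

    def test_negative_too_short_username_is_rejected(self):
        # known invalid input; expected output: rejected
        self.assertFalse(is_valid_username("ab"))

if __name__ == "__main__":
    unittest.main()
```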
  • 17. 2. Informal test cases For applications or systems without formal requirements, test cases can be written based on the accepted normal operation of programs of a similar class. In some schools of testing, test cases are not written at all but the activities and results are reported after the tests have been run. In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These scenarios are usually not written down in any detail. They can be as simple as a diagram for a testing environment or they could be a description written in prose. The ideal scenario test is a story that is motivating, credible, complex, and easy to evaluate. They are usually different from test cases in that test cases are single steps while scenarios cover a number of steps. Software Testing Page 17
• 18. Test cases for different modules in 'Alprus'.
1. Test cases for the 'Home' page:

Test case 1 - TC_HomePage_001 (Module: Homepage)
Summary: Verify that the homepage is displayed after successful login.
Description: Ensure that the user should be able to see the home page after login.
Prerequisite / test data: A valid user should exist in the application.
Steps to follow: 1. Login to the application. 2. Verify that the home page is getting displayed.
Expected result: User should be able to see the home page.

Test case 2 - TC_HomePage_002 (Module: Homepage)
Summary: Verify the availability of all tasks in the "home page".
Description: Ensure that all the tasks should be available at the "home page".
Prerequisite / test data: A valid user should exist in the application.
Steps to follow: 1. Login to the application. 2. Verify all the tasks that are displayed in the application.
Expected result: User should be able to see all the tasks that are available at the "home page".

Test case 3 - TC_HomePage_003 (Module: Homepage)
Summary: Verify that the homepage will display the user name.
Description: Ensure that the user name should be available throughout the application.
Prerequisite / test data: A valid user name should exist in the application.
Steps to follow: 1. Login to the application. 2. Verify that the user name is displayed on the "Home page".
Expected result: The "Home page" should display the name of the existing user.

Test case 4 - TC_HomePage_004 (Module: Homepage)
Summary: Verify the availability of links in the "home page".
Description: Ensure that all the links should be available at the "home page".
Prerequisite / test data: A valid user should exist in the application.
Steps to follow: 1. Login to the application. 2. Verify all links are available at the "home page" throughout the application.
Expected result: Links should be available at the home page.

Test case 5 - TC_HomePage_005 (Module: Homepage)
Summary: Verify the functionality of the "logout" button throughout the application.
Description: Ensure that the "logout" button should be clickable and log the user out of the application.
Prerequisite / test data: A valid user should exist in the application.
Steps to follow: 1. Login to the application. 2. Click on the "logout" button.
Expected result: The user should be logged out of the application.

Test case 6 - TC_HomePage_006 (Module: Homepage)
Summary: Verify the availability of the "logout" button throughout the application.
Description: Ensure that the logout button should be there and log the user out of the application.
Prerequisite / test data: A valid user should exist in the application.
Steps to follow: 1. Login to the application. 2. Verify the availability of the "logout" button.
Expected result: The "logout" button should be displayed and log the user out of the application.

Test case 7 - TC_HomePage_007 (Module: Homepage)
Summary: Verify the functionality of the "home" link.
Description: Ensure that after clicking on the "home" link it should take the user to the home page.
Prerequisite / test data: A valid user should exist in the application.
Steps to follow: 1. Login to the application. 2. Click on the "home" link and navigate to the "home page".
Expected result: The user should be able to see the "home page".

Test case 8 - TC_HomePage_008 (Module: Homepage)
Summary: Verify the availability of the "search" button.
Description: Ensure that the "Search" button should be available at the home page.
Prerequisite / test data: A valid user should exist in the application.
Steps to follow: 1. Login to the application. 2. Click on the "home" link and navigate to the "home page".
Expected result: The "Search" button should be displayed on the "Home page".

Software Testing Page 21
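For illustration only, the sketch below shows how the first of these test cases (TC_HomePage_001) might be automated with a WebDriver-style tool in Python; in ALPRUS the automation tool actually used is QTP. The URL and the element locators are hypothetical and would have to be replaced with the application's real identifiers.

```python
# Sketch of automating TC_HomePage_001 with Selenium WebDriver. The URL and the
# element locators (username, password, login, home-page) are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_homepage_is_displayed_after_login():
    driver = webdriver.Chrome()
    try:
        # Step 1: log in to the application with a valid user
        driver.get("https://example.com/login")          # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("valid_user")
        driver.find_element(By.ID, "password").send_keys("valid_password")
        driver.find_element(By.ID, "login").click()

        # Step 2: verify that the home page is displayed (expected result)
        assert driver.find_element(By.ID, "home-page").is_displayed()
    finally:
        driver.quit()

if __name__ == "__main__":
    test_homepage_is_displayed_after_login()
```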
• 23. Tools and Technologies used in ALPRUS: Tools used: TCMS (Test Case Management System), Bugzilla, QTP. TCMS: A Test Case Management System (TCMS) is meant to be a communications medium through which engineering teams coordinate their efforts. More specifically, it allows BlackBox QA, WhiteBox QA, Automation, and Development to be a cohesive force in ensuring testing completeness with minimal effort or overhead. The end result is higher quality deliverables in the same time frame, and better visibility into the testing efforts on a given project. A TCMS only helps coordinate the process; it does not implement the process itself. This document details the individual groups directly involved in this process and how they interact. This sets up the high-level concepts which the effective usage of the TCMS relies upon, and gives a better overall understanding of the requirements for the underlying implementation. Requirements: The TCMS has a concept of scenarios and configurations. In this context, a scenario is a physical topology, and a configuration is the software and/or hardware a given test case will be executed on. This information must come from a Requirements document that specifies the expected scenarios, configurations, and functionality that the Software Testing Page 23
  • 24. product deliverable will be expected to support. A Requirements Document with this information is a necessity for the TCMS to be used effectively by BlackBox QA and Development. BlackBox QA BlackBox QA creates test cases based upon their high level knowledge of the product, and executes test cases. Test cases also come from Development, WhiteBox QA, and elsewhere that BlackBox QA also executes. All test cases are funneled into the TCMS, a central repository for this information. On a given build, a BlackBox QA Engineer will execute the test cases assigned to him or her, and update the Last Build Tested information to reflect that work. With this information, management can create a simple query to gauge the testing status of a given project, and redeploy effort as necessary. If a given test case fails, the Engineer can then submit a defect containing the test case information easily. If a reported defect has a test case that is not in the TCMS, a BlackBox QA engineer can transfer the test case information from the defect tracking system into the TCMS. Automation The main job of the Automation team is to automate execution of test cases for the purpose of increasing code coverage per component. Once a given project has entered the "alpha" stage (functionality/code complete), release milestones (betas, release candidates, etc) are then based upon the amount of code coverage per component in the automated test suite. For instance, a goal is set for a minimum of 50% code coverage per component before a beta candidate can be considered. This may seem as though the Automation team would then be the bottleneck for release milestones, but this is not the case. Automation requires that test cases be supplied that sufficiently exercise code, and works from there. As was stated before, all sections of engineering supply test cases; if Automation has automated all test cases and has not met the goal for a given milestone, other sections of engineering (WhiteBox QA, BlackBox QA, Development) need to supply more test cases to be automated. This is not to say that Automation is Software Testing Page 24
• 25. helpless; they can supply test cases as well. The three groups mentioned so far (BlackBox QA, WhiteBox QA, and Automation) are given a synergy by the TCMS whereby a feedback loop is created. For clarity, the loop is: 1. BlackBox QA (and Development) record test cases into the TCMS, which the Automation team then automates and generates code coverage data for. 2. When BlackBox testing yields no more code coverage, WhiteBox QA analyses output from the code coverage tool to supply test cases that exercise heretofore untested codepaths. 3. The test cases supplied by WhiteBox QA are then approved by BlackBox QA and the cycle begins again. This feedback loop has the "snowball rolling downhill" effect in regard to code coverage, which is why it is logical to partially base release milestones upon those metrics. Development: Development's role in the TCMS is simply to supply and critique test cases. The owner of a given component should review the test cases in the TCMS for their component and supply test cases or information/training to QA to fill in any gaps she/he sees. Component owners should also have a goal of supplying a given number of test cases for the milestone of alpha release. This way, BlackBox QA and Automation have something to work from initially and can provide more immediate results. Software Testing Page 25
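A rough sketch of the milestone gate implied above: a release candidate is only considered when every component meets the code-coverage goal (for example, 50% per component for a beta). The coverage figures below are invented; in practice they would come from the coverage tool driven by the automated test suite.

```python
# Sketch of a per-component coverage gate for a release milestone.
# The goal and the coverage report data are assumed/hypothetical values.
COVERAGE_GOAL_PERCENT = 50  # assumed goal for a beta candidate

coverage_per_component = {   # hypothetical report data from a coverage tool
    "parser": 72.4,
    "scheduler": 55.1,
    "ui": 41.8,
}

def meets_milestone(report, goal):
    """Return (ok, failing): failing maps components below the goal to their coverage."""
    failing = {c: pct for c, pct in report.items() if pct < goal}
    return len(failing) == 0, failing

ok, failing = meets_milestone(coverage_per_component, COVERAGE_GOAL_PERCENT)
if ok:
    print("Milestone goal met: candidate can be considered")
else:
    print("More test cases needed for:", failing)
```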
• 26. Roles in a Cycle
This table documents all of the aforementioned groups' roles in a given product release cycle. The only solid definitions necessary are that the "alpha" release is functionality complete, and that each release milestone has an incremental code coverage goal.

Pre-Alpha milestone
- Development: designing; documenting design; implementing design, functionality
- BlackBox QA: research/study on product technologies
- WhiteBox QA: reviewing code; providing feedback
- Automation: N/A

Alpha milestone
- Development: supplies initial test cases; provides architecture/product overview; fixes bugs
- BlackBox QA: manual execution of supplied test cases; test case creation; reporting defects
- WhiteBox QA: initially running code/runtime analysis tools; reporting defects
- Automation: begins automating test cases in the TCMS

Beta milestone
- Development: bug fixing; test case creation
- BlackBox QA: manual execution of test cases; test case creation; defect reporting
- WhiteBox QA: integrating code/runtime analysis tools in the automated test suite; reporting defects; ensuring adherence to documented design; test case creation
- Automation: must report at least X percent code coverage per component; repeat cycle until met

Release milestone
- Development: bug fixing; test case creation
- BlackBox QA: manual execution of test cases; test case creation; defect reporting
- WhiteBox QA: analysing output of code/runtime analysis tools in the automated test suite; reporting defects; ensuring adherence to documented design; code review; test case creation
- Automation: must report at least X plus 20 percent code coverage per component; repeat cycle until met

Bugzilla:
Bugzilla is a Web-based general-purpose bug tracker and testing tool originally developed and used by the Mozilla project, and licensed under the Mozilla Public License. Released as open source software by Netscape Communications in 1998, it has been adopted by a variety of organizations for use as a defect tracker for both free and open source software and proprietary products.
Bugzilla's system requirements include:
- A compatible database management system
- A suitable release of Perl 5
Software Testing Page 27
• 28. - An assortment of Perl modules
- A compatible web server
- A suitable mail transfer agent, or any SMTP server
Bugzilla boasts many advanced features:
- Powerful searching
- User-configurable email notifications of bug changes
- Full change history
- Inter-bug dependency tracking and graphing
- Excellent attachment management
- Integrated, product-based, granular security schema
- Fully security-audited, and runs under Perl's taint mode
- A robust, stable RDBMS back-end
- Web, XML, email and console interfaces
- Completely customisable and/or localisable web user interface
Software Testing Page 28
• 29. - Extensive configurability
- Smooth upgrade pathway between versions
The life cycle of a Bugzilla bug
Software Testing Page 29
• 30. QTP: QuickTest Professional is automated testing software designed for testing various software applications and environments. It performs functional and regression testing through a user interface such as a native GUI or web interface. It works by identifying the objects in the application user interface or a web page and performing desired operations (such as mouse clicks or keyboard events); it can also capture object properties like name or handler ID. QuickTest Professional uses the VBScript scripting language to specify the test procedure and to manipulate the objects and controls of the application under test. To perform more sophisticated actions, users may need to manipulate the underlying VBScript. Although QuickTest Professional is usually used for "UI based" test case automation, it can also automate some "non-UI" based test cases such as file system operations and database testing. QTP performs the following tasks: • Verification: Checkpoints verify that an application under test functions as expected. You can add a checkpoint to check whether a particular object, text or bitmap is present in the automation run. Checkpoints verify that during the course of test execution, the actual application behavior or state is consistent with the expected application behavior or state. QuickTest Professional offers 10 types of checkpoints, enabling users to verify various aspects of an application under test, such as the properties of an object, data within a table, records within a database, a bitmap image, or the text on an application screen. The types of checkpoints are standard, image, table, page, text, text area, bitmap, database, accessibility and XML checkpoints. Users can also create user-defined checkpoints. Software Testing Page 30
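The checkpoint idea can be sketched in a tool-agnostic way: compare the actual properties of an object under test with the expected properties recorded when the checkpoint was created. QTP provides this through its built-in checkpoint types and VBScript; the dictionary-based "object" below is purely illustrative, with invented names and values.

```python
# A tool-agnostic sketch of the checkpoint idea: compare actual object
# properties with the expected properties recorded beforehand.
def checkpoint(actual, expected):
    """Return a list of (property, expected, actual) mismatches."""
    mismatches = []
    for prop, expected_value in expected.items():
        actual_value = actual.get(prop)
        if actual_value != expected_value:
            mismatches.append((prop, expected_value, actual_value))
    return mismatches

# Actual state captured from the application (hypothetical values)
login_button = {"name": "Login", "enabled": True, "visible": True}

# Expected state defined when the checkpoint was created
expected_login_button = {"name": "Login", "enabled": True, "visible": True}

failures = checkpoint(login_button, expected_login_button)
print("Checkpoint passed" if not failures else f"Checkpoint failed: {failures}")
```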
• 31. • Exception handling: QuickTest Professional manages exception handling using recovery scenarios; the goal is to continue running tests if an unexpected failure occurs. For example, if an application crashes and a message dialog appears, QuickTest Professional can be instructed to attempt to restart the application and continue with the rest of the test cases from that point. Because QuickTest Professional hooks into the memory space of the applications being tested, some exceptions may cause QuickTest Professional to terminate and be unrecoverable. • Data-driven testing: QuickTest Professional supports data-driven testing. For example, data can be output to a data table for reuse elsewhere. Data-driven testing is implemented as a Microsoft Excel workbook that can be accessed from QuickTest Professional. QuickTest Professional has two types of data tables: the Global data sheet and Action (local) data sheets. The test steps can read data from these data tables in order to drive variable data into the application under test and verify the expected result. • Automating custom and complex UI objects: QuickTest Professional may not recognize customized user interface objects and other complex objects. Users can define these types of objects as virtual objects. QuickTest Professional does not support virtual objects for analog recording or recording in low-level mode. • Extensibility: QuickTest Professional can be extended with separate add-ins for a number of development environments that are not supported out-of-the-box. QuickTest Professional add-ins include support for Web, .NET, Java, and Delphi. QuickTest Professional and the QuickTest Professional add-ins are packaged together in HP Software Testing Page 31
  • 32. Functional Testing software. • Test results At the end of a test, QuickTest Professional generates a test result. Using XML schema, the test result indicates whether a test passed or failed, shows error messages, and may provide supporting information that allows users to determine the underlying cause of a failure. Release 10 lets users export QuickTest Professional test results into HTML, Microsoft Word or PDF report formats. Reports can include images and screen shots for use in reproducing errors. User interface QuickTest Professional provides two views--and ways to modify-- a test script: Keyword View and Expert View. These views enable QuickTest Professional to act as an IDE for the test, and QuickTest Professional includes many standard IDE features, such as breakpoints to pause a test at predetermined places. • Keyword view Keyword View lets users create and view the steps of a test in a modular, table format. Each row in the table represents a step that can be modified. The Keyword View can also contain any of the following columns Item, Operation, Value, Assignment, Comment, and Documentation. For every step in the Keyword View, QuickTest Professional displays a corresponding line of script based on the row and column value. Users can add, delete or modify steps at any point in the test. • Expert view In Expert View, QuickTest Professional lets users display and edit a test's source code using VBScript. Software Testing Page 32
  • 33. Designed for more advanced users, users can edit all test actions except for the root Global action, and changes are synchronized with the Keyword View. • Languages QuickTest Professional uses VBScript as its scripting language. VBScript supports classes but not polymorphism and inheritance. Compared with Visual Basic for Applications (VBA), VBScript lacks the ability to use some Visual Basic keywords, does not come with an integrated debugger, lacks an event handler, and does not have a forms editor. It has added a debugger, but the functionality is more limited when compared with testing tools that integrate a full-featured IDE, such as those provided with VBA, Java, or VB.NET. Technologies QTP Supports 1. Web 2. Java(Core and Advanced) 3. .Net 4. WPF 5. SAP 6. Oracle 7. Siebel 8. PeopleSoft 9. Delphi 10.Power Builder Software Testing Page 33
  • 34. 11. Stingray 1 12.Terminal Emulator 13. Flex 14. Mainframe terminal emulators Versions 1. 10.0 - Released in 2009 2. 9.5 - Released in 2007 3. 9.2 - Released in 2007 4. 9.0 - Released in 2006 5. 8.2 - Released in 2005 6. 8.0 - Released in 2004 7. 7.0 - Never released. 8. 6.5 - Released in 2003 9. 6.0 - Released in 2002 10. 5.5 - First release. Released in 2001 Technologies used in ALPRUS: Manual testing: It is the process of manually testing software for defects. It requires a tester to play the role of an end user, and use most of all features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases. Software Testing Page 34
  • 35. For small scale engineering efforts (including prototypes), exploratory testing may be sufficient. With this informal approach, the tester does not follow any rigorous testing procedure, but rather explores the user interface of the application using as many of its features as possible, using information gained in prior tests to intuitively derive additional tests. The success of exploratory manual testing relies heavily on the domain expertise of the tester, because a lack of knowledge will lead to incompleteness in testing. One of the key advantages of an informal approach is to gain an intuitive insight to how it feels to use the application. Large scale engineering projects that rely on manual software testing follow a more rigorous methodology in order to maximize the number of defects that can be found. A systematic approach focuses on predetermined test cases and generally involves the following steps.[1] Choose a high level test plan where a general methodology is chosen, and resources such as people, computers, and software licenses are identified and acquired. Write detailed test cases, identifying clear and concise steps to be taken by the tester, with expected outcomes. Assign the test cases to testers, who manually follow the steps and record the results. Author a test report, detailing the findings of the testers. The report is used by managers to determine whether the software can be released, and if not, it is used by engineers to identify and correct the problems. Automation Testing: Automated software testing tool is able to playback pre-recorded and predefined actions, compare the results to the expected behavior and report the success or failure of these manual tests to a test engineer. Once automated tests are created they can easily be repeated and they can be extended to perform tasks impossible with manual testing. Because of this, savvy managers have found that automated software testing is an essential component of successful Software Testing Page 35
  • 36. development projects.Automated software testing has long been considered critical for big software development organizations but is often thought to be too expensive or difficult for smaller companies to implement. AutomatedQA’s TestComplete is affordable enough for single developer shops and yet powerful enough that our customer list includes some of the largest and most respected companies in the world. Companies like Corel, Intel, Adobe, Autodesk, Intuit, McDonalds, Motorola, Symantec and Sony all use TestComplete. What makes automated software testing so important to these successful companies? Automated Software Testing Saves Time and Money Software tests have to be repeated often during development cycles to ensure quality. Every time source code is modified software tests should be repeated. For each release of the software it may be tested on all supported operating systems and hardware configurations. Manually repeating these tests is costly and time consuming. Once created, automated tests can be run over and over again at no additional cost and they are much faster than manual tests. Automated software testing can reduce the time to run repetitive tests from days to hours. A time savings that translates directly into cost savings. Automated Software Testing Improves Accuracy Even the most conscientious tester will make mistakes during monotonous manual testing. Automated tests perform the same steps precisely every time they are executed and never forget to record detailed results. Automated Software Testing Increases Test Coverage Automated software testing can increase the depth and scope of tests to help improve software quality. Lengthy tests that are often avoided during manual testing can be run unattended. They can even be run on multiple computers Software Testing Page 36
• 37. with different configurations. Automated software testing can look inside an application and see memory contents, data tables, file contents, and internal program states to determine if the product is behaving as expected. Automated software tests can easily execute thousands of different complex test cases during every test run, providing coverage that is impossible with manual tests. Testers freed from repetitive manual tests have more time to create new automated software tests and deal with complex features. Automated Software Testing Does What Manual Testing Cannot: Even the largest software departments cannot perform a controlled web application test with thousands of users. Automated testing can simulate tens, hundreds or thousands of virtual users interacting with network or web software and applications. Implementation: Software Testing Life Cycle: The Software Testing Life Cycle consists of the following (generic) phases: Test Planning, Test Analysis, Test Design, Construction and Verification, Testing Cycles, Final Testing and Implementation, and Post Implementation. Software testing has its own life cycle that intersects with every stage of the SDLC. The basic requirement in the software testing life cycle is to control/deal with software testing – Manual, Automated and Performance. Software Testing Page 37
  • 38. Test Planning: This is the phase where Project Manager has to decide what things need to be tested, do I have the appropriate budget etc. Naturally proper planning at this stage would greatly reduce the risk of low quality software. This planning will be an ongoing process with no end point. Activities at this stage would include preparation of high level test plan-(according to IEEE test plan template The Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and schedule of all testing activities. The plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan.). Almost all of the activities done during this stage are included in this software test plan and revolve around a test plan. Test Analysis: Once test plan is made and decided upon, next step is to delve little more into the project and decide what types of testing should be carried out at different stages of SDLC, do we need or plan to automate, if yes then when the appropriate time to automate is, what type of specific documentation I need for testing. Proper and regular meetings should be held between testing teams, project managers, development teams, Business Analysts to check the progress of things which will give a fair idea of the movement of the project and ensure the completeness of the test plan created in the planning phase, which will further help in enhancing the right testing strategy created earlier. We will start creating test case formats and test cases itself. In this stage we need to develop Functional validation matrix based on Business Requirements to ensure that all system requirements are covered by one or more test cases, identify which test cases to automate, begin review of documentation, i.e. Functional Design, Software Testing Page 38
  • 39. Business Requirements, Product Specifications, Product Externals etc. We also have to define areas for Stress and Performance testing. Test Design: Test plans and cases which were developed in the analysis phase are revised. Functional validation matrix is also revised and finalized. In this stage risk assessment criteria is developed. If you have thought of automation then you have to select which test cases to automate and begin writing scripts for them. Test data is prepared. Standards for unit testing and pass / fail criteria are defined here. Schedule for testing is revised (if necessary) & finalized and test environment is prepared. Construction and verification: In this phase we have to complete all the test plans, test cases, complete the scripting of the automated test cases, Stress and Performance testing plans needs to be completed. We have to support the development team in their unit testing phase. And obviously bug reporting would be done as when the bugs are found. Integration tests are performed and errors (if any) are reported. Testing Cycles: In this phase we have to complete testing cycles until test cases are executed without errors or a predefined condition is reached. Run test cases --> Report Bugs --> revise test cases (if needed) --> add new test cases (if needed) --> bug fixing --> retesting (test cycle 2, test cycle 3….). Final Testing and Implementation: Software Testing Page 39
  • 40. In this we have to execute remaining stress and performance test cases, documentation for testing is completed / updated, provide and complete different matrices for testing. Acceptance, load and recovery testing will also be conducted and the application needs to be verified under production conditions. Post Implementation: In this phase, the testing process is evaluated and lessons learnt from that testing process are documented. Line of attack to prevent similar problems in future projects is identified. Create plans to improve the processes. The recording of new errors and enhancements is an ongoing process. Cleaning up of test environment is done and test machines are restored to base lines in this stage Bug A software bug is the common term used to describe an error, flaw, mistake, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways. Most bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports detailing bugs in a program are commonly known as bug reports, fault reports, problem reports, trouble reports, change requests, and so forth. Arithmetic bugs * Division by zero * Arithmetic overflow or underflow * Loss of arithmetic precision due to rounding or numerically unstable algorithms Software Testing Page 40
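Two of the arithmetic bugs listed above, shown in miniature. Both functions are invented examples: average() crashes on an empty list (division by zero), and add_cents() silently loses precision to floating-point rounding.

```python
# Illustrations of two arithmetic bugs: division by zero and loss of precision.
def average(values):
    return sum(values) / len(values)   # ZeroDivisionError when values == []

def add_cents():
    return 0.1 + 0.2                   # 0.30000000000000004, not 0.3

if __name__ == "__main__":
    print(add_cents() == 0.3)          # False: floating-point precision bug
    try:
        average([])
    except ZeroDivisionError:
        print("division-by-zero bug triggered")
```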
  • 41. Logic bugs * Infinite loops and infinite recursion * Off by one error, counting one too many or too few when looping Syntax bugs * Use of the wrong operator, such as performing assignment instead of equality test. In simple cases often warned by the compiler; in many languages, deliberately guarded against by language syntax Resource bugs * Null pointer dereference * Using an uninitialized variable * Using an otherwise valid instruction on the wrong data type (see packed decimal/binary coded decimal) * Access violations * Resource leaks, where a finite system resource such as memory or file handles are exhausted by repeated allocation without release. * Buffer overflow, in which a program tries to store data past the end of allocated storage. This may or may not lead to an access violation or storage violation. These bugs can form a security vulnerability. * Excessive recursion which though logically valid causes stack overflow Multi-threading programming bugs * Deadlock Software Testing Page 41
  • 42. * Race condition * Concurrency errors in Critical sections, Mutual exclusions and other features of concurrent processing. Time-of- check-to-time-of-use (TOCTOU) is a form of unprotected critical section. Teamworking bugs * Unpropagated updates; e.g. programmer changes "myAdd" but forgets to change "mySubtract", which uses the same algorithm. These errors are mitigated by the Don't Repeat Yourself philosophy. * Comments out of date or incorrect: many programmers assume the comments accurately describe the code * Differences between documentation and the actual product Bugs in popular culture * In the 1968 novel 2001: A Space Odyssey (and the corresponding 1968 film), a spaceship's onboard computer, HAL 9000, attempts to kill all its crew members. In the followup 1982 novel, 2010: Odyssey Two, and the accompanying 1984 film, 2010, it is revealed that this action was caused by the computer having been programmed with two conflicting objectives: to fully disclose all its information, and to keep the true purpose of the flight secret from the crew; this conflict caused HAL to become paranoid and eventually homicidal. * In the 1984 song 99 Red Balloons (though not in the original German version), "bugs in the software" lead to a computer mistaking a group of balloons for a nuclear missile and starting a nuclear war. * The 2004 novel The Bug, by Ellen Ullman, is about a programmer's attempt to find an elusive bug in a database application. Effects of Bugs; Software Testing Page 42
  • 43. Bugs trigger Type I and type II errors that can in turn have a wide variety of ripple effects, with varying levels of inconvenience to the user of the program. Some bugs have only a subtle effect on the program's functionality, and may thus lie undetected for a long time. More serious bugs may cause the program to crash or freeze leading to a denial of service. Others qualify as security bugs and might for example enable a malicious user to bypass access controls in order to obtain unauthorized privileges. The results of bugs may be extremely serious. Bugs in the code controlling the Therac-25 radiation therapy machine were directly responsible for some patient deaths in the 1980s. In 1996, the European Space Agency's US$1 billion prototype Ariane 5 rocket was destroyed less than a minute after launch, due to a bug in the on-board guidance computer program. In June 1994, a Royal Air Force Chinook crashed into the Mull of Kintyre, killing 29. This was initially dismissed as pilot error, but an investigation by Computer Weekly uncovered sufficient evidence to convince a House of Lords inquiry that it may have been caused by a software bug in the aircraft's engine control computer. [1] In 2002, a study commissioned by the US Department of Commerce' National Institute of Standards and Technology concluded that software bugs, or errors, are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product. How to prevent bug • Programming style While typos in the program code are often caught by the compiler, a bug usually appears when the programmer makes a logic error. Various innovations in programming style and defensive programming are designed to make these bugs less likely, or easier to spot. In some programming languages, so-called typos, especially of symbols or logical/mathematical operators, actually represent logic errors, since Software Testing Page 43
  • 44. the mistyped constructs are accepted by the compiler with a meaning other than that which the programmer intended. • Programming techniques Bugs often create inconsistencies in the internal data of a running program. Programs can be written to check the consistency of their own internal data while running. If an inconsistency is encountered, the program can immediately halt, so that the bug can be located and fixed. Alternatively, the program can simply inform the user, attempt to correct the inconsistency, and continue running. • Development methodologies There are several schemes for managing programmer activity, so that fewer bugs are produced. Many of these fall under the discipline of software engineering (which addresses software design issues as well). For example, formal program specifications are used to state the exact behavior of programs, so that design bugs can be eliminated. Unfortunately, formal specifications are impractical or impossible for anything but the shortest programs, because of problems of combinatorial explosion and indeterminacy. • Programming language support Programming languages often include features which help programmers prevent bugs, such as static type systems, restricted name spaces and modular programming, among others. For example, when a programmer writes (pseudocode) LET REAL_VALUE PI = "THREE AND A BIT", although this may be syntactically correct, the code fails a type check. Depending on the language and implementation, this may be caught by the compiler or at runtime. In addition, many recently-invented languages have deliberately excluded features which can easily lead to bugs, at the expense of making code slower than it need be: the general principle being that, because of Moore's law, computers get faster and software engineers get slower; it is almost always better to write simpler, slower code than "clever", inscrutable code, especially considering that maintenance cost is considerable. For example, the Java programming language does not support pointer arithmetic; implementations of some languages such as Pascal and scripting languages often have runtime bounds checking of arrays, at least in a debugging build. Software Testing Page 44
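The "check the consistency of internal data while running" technique mentioned under programming techniques above can be sketched as follows. The account example is hypothetical; the point is that the invariant is verified on every change, so the program halts close to where the bug happens instead of failing much later.

```python
# Sketch of an internal consistency check (defensive programming): the
# invariant is re-verified after every state change. Account is a made-up example.
class Account:
    def __init__(self, balance_cents):
        self.balance_cents = balance_cents
        self._check_invariant()

    def withdraw(self, amount_cents):
        self.balance_cents -= amount_cents
        self._check_invariant()   # halt immediately if the data became inconsistent

    def _check_invariant(self):
        assert self.balance_cents >= 0, f"inconsistent state: {self.balance_cents}"

if __name__ == "__main__":
    account = Account(100)
    account.withdraw(40)       # fine
    account.withdraw(100)      # AssertionError: bug detected at the point of failure
```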
  • 45. Code analysisTools for code analysis help developers by inspecting the program text beyond the compiler's capabilities to spot potential problems. Although in general the problem of finding all programming errors given a specification is not solvable (see halting problem), these tools exploit the fact that human programmers tend to make the same kinds of mistakes when writing software. • Instrumentation Tools to monitor the performance of the software as it is running, either specifically to find problems such as bottlenecks or to give assurance as to correct working, may be embedded in the code explicitly (perhaps as simple as a statement saying PRINT "I AM HERE"), or provided as tools. It is often a surprise to find where most of the time is taken by a piece of code, and this removal of assumptions might cause the code to be rewritten. Bug management: It is common practice for software to be released with known bugs that are considered non-critical, that is, that do not affect most users main experience with the product. While software products may, by definition, contain any number of unknown bugs, measurements during testing can provide an estimate of the number of likely bugs remaining; this becomes more reliable the longer a product is tested and developed ("if we had 200 bugs last week, we should have 100 this week"). Most big software projects maintain two lists of "known bugs" those known to the software team, and those to be told to users. This is not dissimulation, but users are not concerned with the internal workings of the product. The second list informs users about bugs that are not fixed in the current release, or not fixed at all, and a workaround may be offered. There are various reasons for not fixing bugs: • The developers often don't have time or it is not economical to fix all non-severe bugs. • The bug could be fixed in a new version or patch that is not yet released. Software Testing Page 45
  • 46. • The changes to the code required to fix the bug could be large, expensive, or delay finishing the project. • Even seemingly simple fixes bring the chance of introducing new unknown bugs into the system. At the end of a test/fix cycle some managers may only allow the most critical bugs to be fixed. • Users may be relying on the undocumented, buggy behavior, especially if scripts or macros rely on a behavior; it may introduce a breaking change. • It's "not a bug". A misunderstanding has arisen between expected and provided behavior It is often considered impossible to write completely bug-free software of any real complexity. So bugs are categorized by Severity, and Low-Severity non-critical bugs are tolerated, as they do not affect the proper operation of the system for most users. NASA's SATC managed to reduce the number of errors to fewer than 0.1 per 1000 lines of code (SLOC) but this was not felt to be feasible for any real world projects. The severity of a bug is not the same as its importance for fixing, and the two should be measured and managed separately. On a Microsoft Windows system a blue screen of death is rather severe, but if it only occurs in extreme circumstances, especially if they are well diagnosed and avoidable, it may be less important to fix than an icon not representing its function well, which though purely aesthetic may confuse thousands of users every single day. This balance, of course, depends on many factors; expert users have different expectations from novices, a niche market is different from a general consumer market, and so on. A school of thought popularized by Eric S. Raymond as Linus's Law says that popular open-source software has more chance of having few or no bugs than other software, because "given enough eyeballs, all bugs are shallow". This assertion has been disputed, however: computer security specialist Elias Levy wrote that "it is easy to hide vulnerabilities in complex, little understood and undocumented source code," because, "even if people are reviewing the code, that doesn't mean they're qualified to do so." Software Testing Page 46
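One way to keep severity and fix priority separate, as argued above, is to record them as independent attributes and drive triage by priority alone; the sample bugs below are invented and the numeric scales are only an assumption for the sketch.

```python
# Sketch of managing severity and fix priority as separate attributes.
from dataclasses import dataclass

@dataclass
class Bug:
    bug_id: int
    summary: str
    severity: int   # 1 = cosmetic ... 5 = crash/data loss (assumed scale)
    priority: int   # 1 = fix last ... 5 = fix first (assumed scale)

bugs = [
    Bug(101, "blue screen in an extreme, well-diagnosed corner case", severity=5, priority=2),
    Bug(102, "misleading toolbar icon confuses users every day", severity=1, priority=4),
]

# Triage order is driven by priority, not by severity
for bug in sorted(bugs, key=lambda b: b.priority, reverse=True):
    print(f"#{bug.bug_id} (severity {bug.severity}, priority {bug.priority}): {bug.summary}")
```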
  • 47. Bug management must be conducted carefully and intelligently because "what gets measured gets done" and managing purely by bug counts can have unintended consequences. If, for example, developers are rewarded by the number of bugs they fix, they will naturally fix the easiest bugs first leaving the hardest, and probably most risky or critical, to the last possible moment Debugging:- Finding and fixing bugs, or "debugging" has always been a major part of computer programming.. As computer programs grow more complex, bugs become more common and difficult to fix. Often programmers spend more time and effort finding and fixing bugs than writing new code. Software testers are professionals whose primary task is to find bugs, or write code to support testing. On some projects, more resources can be spent on testing than in developing the program. Usually, the most difficult part of debugging is finding the bug in the source code. Once it is found, correcting it is usually relatively easy. Programs known as debuggers exist to help programmers locate bugs by executing code line by line, watching variable values, and other features to observe program behavior. Without a debugger, code can be added so that messages or values can be written to a console (for example with printf in the c language) or to a window or log file to trace program execution or show values. However, even with the aid of a debugger, locating bugs is something of an art. It is not uncommon for a bug in one section of a program to cause failures in a completely different section, thus making it especially difficult to track (for example, an error in a graphics rendering routine causing a file I/O routine to fail), in an apparently unrelated part of the system. Sometimes, a bug is not an isolated flaw, but represents an error of thinking or planning on the part of the programmer. Such logic errors require a section of the program to be overhauled or rewritten. As a part of Code Software Testing Page 47
• 48. review, stepping through the code and modelling the execution process in one's head or on paper can often find these errors without ever needing to reproduce the bug as such, if it can be shown there is some faulty logic in its implementation. But more typically, the first step in locating a bug is to reproduce it reliably. Once the bug is reproduced, the programmer can use a debugger or some other tool to monitor the execution of the program in the faulty region, and find the point at which the program went astray. It is not always easy to reproduce bugs. Some are triggered by inputs to the program which may be difficult for the programmer to re-create. One cause of the Therac-25 radiation machine deaths was a bug (specifically, a race condition) that occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to duplicate it. Other bugs may disappear when the program is run with a debugger; these are heisenbugs (humorously named after the Heisenberg uncertainty principle). Debugging is still a tedious task requiring considerable effort. Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, there has been a renewed interest in the development of effective automated aids to debugging. There are also classes of bugs that have nothing to do with the code itself. For example, if the code relies on faulty documentation or hardware, it may be written perfectly properly to what the documentation says, but the bug truly lies in the documentation or hardware, not the code. However, it is common to change the code instead of the other parts of the system, as the cost and time to change it is generally less. Embedded systems frequently have workarounds for hardware bugs, since making a new version of a ROM is much cheaper than remanufacturing the hardware, especially for commodity items. Software Testing Page 48
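The trace-message technique described in this section (writing values to a console or log to follow program execution when a debugger is not available) can be sketched as follows; the discount calculation is an invented example containing a deliberate bug that the trace output exposes.

```python
# Sketch of tracing program execution with log messages. discounted_price()
# is a made-up example containing a deliberate bug.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("trace")

def discounted_price(price, discount_percent):
    log.debug("discounted_price(price=%r, discount_percent=%r)", price, discount_percent)
    factor = 1 - discount_percent          # bug: should be 1 - discount_percent / 100
    log.debug("factor=%r", factor)
    result = price * factor
    log.debug("result=%r", result)
    return result

if __name__ == "__main__":
    # The trace output shows factor going negative, pointing at the faulty line.
    print(discounted_price(200.0, 15))
```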
  • 50. Bug tracking tools Tools Vendor Description AceProject Websystems Bug tracking software designed for project managers and developers. AdminiTrack AdminiTrack Hosted issue and bug tracking application ADT Web Borderwave It is designed for small, medium and large software companies to simplify their defect, suggestion and feature request tracking. It allows to track defects, feature requests and suggestions by version, customer etc. Agility AgileEdge . Agility features a easy to use web-based interface. It includes fully customizable field lists, workflow engine, and email notifications. Bug/Defect Applied Innovation Web-based bug tracking software Tracking Expert Management BugAwar bugaware Installed and ASP hosted service available. Email alert notification, knowledge base, dynamic reporting, team management, user discussion threads, file attachment, searching. bugcentral. bugcentral.com Web based defect tracking service BUGtrack SkyeyTech, Inc. Web based defect tracking system BugHost Active-X.COM Ideal for small- to medium-sized companies who want a secure, Web- based issue and bug management system. There is no software to install Software Testing Page 50
  • 51. and can be accessed from any Internet connection. Designed from the ground up, the system is easy to use, extremely powerful, and customizable to meet your needs. BugImpact Avna Int. . Unlimited: projects, entries/bugs/issues Web access -users access their BugImpact service through a standard Web browser Workflow configurations control: BugImpact installs with a default workflow configuration that can easily be changed or replaced entirely File attachment: details thread may contain attachments, such as screenshots, Excel spreadsheets, internal documents or just any binary files. E-mail notification: the system sends e-mail notification to users when new bugs are assigned or status changes Builds : project(s) may have a specific 'fix-for' version with optional deadline Priority Colorize: custom colors may be associated with different priorities BugStation Bugopolis It is designed to make Bugzilla easier and more secure. A centralized system for entering, assigning and tracking defects. Configurable and customizable. Bug Tracker Bug Tracker Web based defect tracking and data sharing Software Software Bug Tracking Bug-Track.com It offers email notification, file attachment, tracking history, bilingual Software Testing Page 51
  • 52. pages, 128-bit encryption connection and advance customization. . Bugvisor softwarequality, Inc. Enterprise solution for capturing, managing and communicating feature requests, bug reports, changes and project issues from emergence to resolution with a fully customizable and controllable workflow Bugzero WEBsina Web-based, easy-to-install, cross-platform defect tracking system Bugzilla Bugzilla.org Highly configurable Open source defect tracking system developed originally for the Mozilla project Census BugTrack MetaQuest . Includes VSS integration, notifications, workflow, reporting and change history. DefectTracker Pragmatic Software Subscription-based bug/problem tracking solution Defectr Defectr Defect tracking and project management tool developed using IBM Lotus Domino and Dojo Ajax framework. Dragonfly Vermont Software Web-based, cross-browser, cross-platform issue tracking and change Testing Group management for software development, testing, debugging, and documentation. ExDesk ExDesk Bug and issue tracking software, remotely hosted, allows to tracking software bugs and route them to multiple developers or development groups for repair with reporting and automatic notification FogBUGZ Fog Creek S/W Web-based defect tracking. Fast BugTrack AlceaTech Web-based bug tracking Software Testing Page 52
  • 53. Footprints Unipress Web-based issue tracking and project management tool IssueTrak Help Desk Software Offers issue tracking, customer relationship and project management Central functions. JIRA Atlassian J2EE-based, issue tracking and project management application. Jitterbug Samba Freeware defect tracking JTrac Generic issue-tracking web-application that can be easily customized by adding custom fields and drop-downs. Features include customizable workflow, field level permissions, e-mail integration, file attachments and a detailed history view. Mantis Lightweight and simple bugtracking system. Easily modifiable, customizable, and upgradeable. Open Source. MyBugReport Bug Tracker It allows the different participants working on the development of a software or multimedia application to detect new bugs, to ensure their follow-up, to give them a priority and to assign them within the team. Ozibug Tortuga Written in Java, it utilizes servlet technology and offers features such as Technologies reports, file attachments, role-based access, audit trails, email notifications, full internationalization, and a customizable appearance. Perfect Tracker Avensoft Web-based defect tracking ProblemTracker NetResults Web-based collaboration software for issue tracking; automated support; and workflow, process, and change management. Software Testing Page 53
  • 54. ProjectLocker ProjectLocker Hosted source control (CVS/Subversion), web-based issue tracking, and web-based document management solutions. PR Tracker Softwise Company Records problem reports in a network and web-based database that supports access by multiple users. It include classification, assignment, sorting, searching, reporting, access control, & more. QEngine AdventNet Offers the facility of tracking and managing bugs, issues, improvements, and features. It provides role based access control, attachment handling, schedule management, automatic e-mail notification, workflow, resolution, worklogs, attaching screenshots, easy reporting, and extensive customization. SpeeDEV SpeeDEV A complete visual design of a multi level rol based process can be defined for different types of issues with conditional branching and automated task generation. Squish Information Web based issue tracking Management Systems, Inc. Task Complete Smart Design Te TaskComplete enables a team to organize and track software defects using with integrated calendar, discussion, and document management capabilities. Can easily be customized to meet the needs of any software development team. teamatic Teamatic Defect tracking system TrackStudio TrackStudio Supports workflow, multi-level security, rule-based email notification, Software Testing Page 54
  • 55. email submission, subscribe-able filters, reports. Has skin-based user interface. Supports ORACLE, DB2, MS SQL, Firebird, PostgreSQL, Hypersonic SQL . VisionProject Visionera AB Designed to make projects more efficient and profitable. Woodpecker IT AVS GmbH It is for performing request, version or bug management. Its main function is recording and tracking issues, within a freely defined workflow. yKAP DCom Solutions Uses XML to deliver a powerful, cost effective, Web based Bug/Defect tracking, Issue Management and Messaging product. , yKAP features include support for unlimited projects, test environments, attachments, exporting data into PDF/RTF/XLS/HTML/Text formats, rule-based email alerts, exhaustive search options, saving searches (public/ private), Auto- complete for user names, extensive reports, history, custom report styles, exhaustive data/trends analysis, printing, role-based security. yKAP allows the user to add custom values for system parameters such as Status, Defect cause, Defect type, priority, etc. yKAP is installed with complete help documentation. Tools Vendor Description assyst Axios Systems Offers a unique lifecycle approach to IT Service Management through the integration of all ITIL processes in a single application. BridgeTrak Kemma Software Record and track development or customers issues, assign issues to development teams, create software release notes and more. Software Testing Page 55
• 56. BugRat (Giant Java Tree): Provides a defect reporting and tracking system. Bug reporting by the Web and email.
BugSentry (IT Collaborate): Automatically and securely reports errors in .NET and COM applications. BugSentry provides a .NET DLL (a COM interop version is available too) that developers ship with their products.
Bug Trail (Osmosys): This easy-to-use tool allows attaching screenshots, automatically captures system parameters, and creates well-formatted MS Word and HTML output reports. A customizable defect status flow lets small to large organizations configure it to match their existing structure.
BugZap (Cybernetic Intelligence GmbH): For small or medium-size projects; easy to install, small, and requires no server-side installation.
Defect Agent (Inborne Software): Defect tracking, enhancement suggestion tracking, and development team workflow management software.
Defect Manager (Tiera Software): Manages defects and enhancements through the entire life cycle, from product development through field deployment.
FastBugTrack (Alcea): Bug tracking / defect tracking / issue tracking and change management software (workflow/process flow).
GNATS (GNU): Freeware defect tracking software.
Intercept (Elsinore Technologies): Bug tracking system designed to integrate with Visual SourceSafe and the rest of your Microsoft development environment.
• 57. IssueView (IssueView): SQL Server-based bug tracking with an Outlook-style user interface.
JIRA (Atlassian): Browser-based J2EE defect tracking and issue management software. Supports any platform that runs Java 1.3.x.
QAW (B.I.C Quality): Developed to assist all quality assurance measurements within ICT projects. The basis of QAW is a structured way of registering and tracking issues (defects).
QuickBugs (Excel Software): Tool for reporting, tracking and managing bugs, issues, changes and new features involved in product development. Key attributes include extreme ease of use and flexibility, a shared XML repository accessible to multiple users, multiple projects with assigned responsibilities, and configurable access and privileges for users on each project. Virtually everything in QuickBugs is configurable to the organization's and specific users' needs, including data collection fields, workflow, views, queries, reports, security and access control. Highly targeted email messages notify people when specific events require their attention.
Support Tracker (Acentre): Web-enabled defect tracking application, one of the modules of the Tracker Suite software package. Support Tracker is based on Lotus Notes, allowing customers to leverage their existing Notes infrastructure for this bug tracking solution. Because Tracker Suite is server-based, Support Tracker installs with zero impact on the desktop. Users can create, track, and manage requests through Notes or over the Web. Requests are assigned, routed, and escalated automatically via Service Level Agreements, for proper prioritization and resource allocation. Support Tracker also features FAQ and knowledge base functionality.
SWBTracker (Software With Brains): Bug tracking system.
• 58. TestTrack Pro (Seapine Software): Delivers time-saving features that keep everyone involved with the project informed and on schedule. TestTrack Pro is a scalable solution with Windows and Web clients; server support for Windows, Linux, Solaris, and Mac OS X; integration with MS Visual Studio (including .NET); and interfaces with most major source code managers, including Surround SCM, as well as the automated software testing tool QA Wizard and other Seapine tools. A free evaluation download is available.
Track (Soffront): Defect tracking system.
ZeroDefect (ProStyle): Issue management and bug reporting.
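Although the trackers listed above differ in platform, licensing and vendor, most of them converge on a similar core record: a defect with an identifier, summary, priority, status, assignee, attachments and a change history, moving through a configurable workflow with rule-based notifications. The sketch below is a minimal, tool-agnostic illustration of such a record in Python; the field names, status values and the transition helper are assumptions chosen for the example and do not correspond to any particular product above.

# Minimal, tool-agnostic sketch of the kind of defect record and workflow
# that most of the trackers above manage. All names, status values and
# transitions here are illustrative assumptions, not any vendor's schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Assumed workflow: which status changes are allowed (purely illustrative).
ALLOWED_TRANSITIONS = {
    "NEW": {"ASSIGNED", "REJECTED"},
    "ASSIGNED": {"FIXED", "REJECTED"},
    "FIXED": {"VERIFIED", "REOPENED"},
    "REOPENED": {"ASSIGNED"},
    "VERIFIED": {"CLOSED"},
    "CLOSED": set(),
    "REJECTED": {"REOPENED"},
}

@dataclass
class Defect:
    defect_id: int
    summary: str
    priority: str                       # e.g. "High", "Medium", "Low"
    status: str = "NEW"
    assignee: Optional[str] = None
    attachments: List[str] = field(default_factory=list)
    history: List[str] = field(default_factory=list)    # audit trail

    def transition(self, new_status: str, who: str) -> None:
        """Move the defect to a new status if the workflow allows it."""
        if new_status not in ALLOWED_TRANSITIONS[self.status]:
            raise ValueError(f"{self.status} -> {new_status} not allowed")
        self.history.append(f"{datetime.now():%Y-%m-%d %H:%M} {who}: "
                            f"{self.status} -> {new_status}")
        self.status = new_status

# Example usage: report a bug, assign it, mark it fixed.
bug = Defect(defect_id=101, summary="Crash when saving report", priority="High")
bug.transition("ASSIGNED", who="lead")
bug.assignee = "dev1"
bug.transition("FIXED", who="dev1")
print(bug.status, bug.history)

Features such as rule-based email alerts, reports and audit trails in the commercial tools are typically hooks on exactly these status transitions and history entries; what distinguishes the products is how far the workflow, fields and notifications can be customized.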