As software has gained importance within systems, development practices have evolved to improve the quality of the work produced. Given that software development methods are constantly changing, from the waterfall method to agile methods, this evolution is hardly surprising. The basic practices of software development remain essentially the same, but the form in which they are applied, as described in various sources, has changed. The basic software development cycle consists of requirements determination, design, implementation, testing, and deployment phases.
Agile methods have moved the testing activity, traditionally placed near the end of the development cycle, to nearly the beginning, and the importance given to it continues to grow. In this study, Test-Driven Development, one of the agile development practices, is applied and its effectiveness as a testing approach is examined.
Software Testing with a TDD Application
1. Prepared by Yelda Erdoğan
Supervised by Egemen Özden
Bahçeşehir University
Graduate School of Natural and Applied Sciences
Engineering Management
Master of Science Project
2. Contents
Introduction
Overview of Software Development Models
Predecessors of Agile Development by Examples
Software Testing
Traditional Testing Process
Test-Driven Development
Testing in Agile Software Development
Application Data and Method
Result and Discussion
Conclusion
References
3. Introduction
There are two mainstream methodologies in the
software development world: traditional and agile
software development methodologies.
People started to find the traditional steps heavy. Industry
veterans demanded lighter processes that could be easily
followed. A group of industry experts calling themselves
the Agile Alliance came together in 2001.
In this study, Test-Driven Development (TDD) will be
discussed. The TDD cycle and mantra give importance
to testing even before any code is written. Moreover,
TDD plays a central role in communication among the
customer, business, and software project teams.
4. Introduction – Cont.
Traditional software development employs extensive
facilities and methods to make sure that the final product
matches the requirements defined at the beginning of the
project.
Software testing is placed at the end of the traditional way
of software development. On the other hand new
lightweight methodologies have emerged in the last decade
and contradicting the traditional way, these agile
methodologies increasingly focus on software testing.
In the TDD methodology, software testing is even used
as an integral part of the design process.
5. Overview of Software
Development Models
The evolution of the SDLC has continued since it
began.
Software is a major and important component of systems.
Software development originated largely in defense
systems.
Spending on software grew to a very significant portion
of budgets; it was claimed to be 12% per year in the 1980s
(Humphrey, 1989). Software often adversely affects the
schedules and effectiveness of weapon systems.
Quality and maturity have become key subjects in software
development.
6. Overview of Software
Development Models - Cont.
The first known presentation describing the use of
waterfall-like phases in software engineering was given by
Herbert D. Benington at the Symposium on Advanced
Programming Methods for Digital Computers on 29
June 1956.
7. Overview of Software
Development Models - Cont.
The waterfall model was drawn by Royce (1970):
A model of the software
development process in which the
constituent activities, typically a
concept phase, requirements phase,
design phase, implementation
phase, test phase, and installation
and checkout phase, are performed
in that order, possibly with overlap
but with little or no iteration.
Contrast with: incremental
development; rapid prototyping,
spiral model. (IEEE, 1990)
8. Overview of Software
Development Models - Cont.
In 1971, Gerald M. Weinberg proposed Egoless Programming,
which advocates more technical review and a more
preventive approach in software development
(Weinberg, 1998).
In 1974, the SDLC was described in seven steps by
Wolverton (1974).
10. Overview of Software
Development Models - Cont.
Barry W. Boehm suggested the spiral model for the software
development process at the International Workshop on the
Software Process and Software Environments in 1986.
Spiral model was presented as a candidate for improving
software process model situation. The major distinguishing
feature of the spiral model is that it creates a risk-driven
approach to the software process rather than a primarily
document-driven or code-driven process. (Boehm, August
1986).
As a result, risk management started to occupy a very
important place in projects.
11. Overview of Software
Development Models - Cont.
Boehm describes a typical cycle of the
spiral: each cycle of the spiral
begins with the identification of
The objectives of the portion of the
product being elaborated (performance,
functionality, ability to accommodate
change, etc.);
The alternative means of implementing
this portion of the product (design A,
design B, reuse, buy, etc.)
The constraints imposed on the
application of the alternatives (cost,
schedule, interface, etc.).
12. Overview of Software
Development Models - Cont.
Spiral Model is described as: A model of the software
development process in which the constituent
activities, typically requirements analysis, preliminary
and detailed design, coding, integration, and testing,
are performed iteratively until the software is
complete. Contrast with: waterfall model. (IEEE, 1990)
13. Overview of Software
Development Models - Cont.
‘DOD-STD-2167A Military Standard: Defense System Software
Development’ was published in 1988.
The standard suggests the waterfall software development model.
The following statement contains a conflict: the contract schedule
for formal reviews and audits must be strictly respected, even if
iterative development is selected:
‘…each CSCI shall be compatible with the contract schedule for formal
reviews and audits. The software development process shall include
the following activities, which may overlap and may be applied
iteratively or recursively:
System Requirements Analysis
System Design,
Software Requirements Specification,
Preliminary Design,
….’
14. Overview of Software
Development Models - Cont.
ISO/IEC 12207 was first published in 1995.
IEEE/EIA 12207.0 was first published in 1996. Over time,
they were merged into one standard, ISO/IEC 12207:2008.
The standard states, in sections 6.1.2.3.4.8 and 7.1.1.3.1
respectively, the following:
‘The supplier shall monitor and control the progress and the
quality of the software products or services of the project
throughout the contracted life cycle for an ongoing, iterative
task ’
‘Software implementation strategy related activities and tasks
may overlap or interact and may be performed iteratively or
recursively’.
15. Overview of Software
Development Models - Cont.
The Agile Alliance published the Manifesto for Agile Software
Development, together with twelve principles, in February 2001;
both are given below (Agile Alliance, 2001):
Manifesto for Agile Software Development
We are uncovering better ways of developing software by doing
it and helping others do it. Through this work we have come
to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, we value
the items on the left more.
16. Overview of Software
Development Models - Cont.
12 Principles behind the Agile Manifesto
1. Our highest priority is to satisfy the customer through early
and continuous delivery of valuable software.
2. Welcome changing requirements, even late in development.
Agile processes harness change for the customer's
competitive advantage.
3. Deliver working software frequently, from a couple of weeks
to a couple of months, with a preference to the shorter
timescale.
4. Business people and developers must work together daily
throughout the project.
5. Build projects around motivated individuals. Give them the
environment and support they need, and trust them to get
the job done.
17. Overview of Software
Development Models - Cont.
6. The most efficient and effective method of conveying information
to and within a development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors,
developers, and users should be able to maintain a constant pace
indefinitely.
9. Continuous attention to technical excellence and good design
enhances agility.
10. Simplicity--the art of maximizing the amount of work not done--is
essential.
11. The best architectures, requirements, and designs emerge from
self-organizing teams.
12. At regular intervals, the team reflects on how to become more
effective, then tunes and adjusts its behavior accordingly.
18. Predecessors of Agile Development
by Examples
Agile software development became popular at the beginning
of this century. People perceive agile as new, modern,
and a replacement for the waterfall model. Although it is a
new approach, there are some predecessors. Over the years,
a number of projects were executed using
iterative and incremental development (IID).
1950 – X-15 hypersonic jet used IID (not just a software project)
1960 – Project Mercury ran with very short, half-day iterations: a technical review for every change involving all team members, with tests written
before development
1972 – IBM FSD application with IID, majorly documented
1972 – TRW Ballistic Missile Defense project with IID
1977 - 1980, NASA’s space shuttle software, built by IBM FSD, was an application of IID.
1977 - 1980, an air defense system built by System Development Corp. was an application of IID
1982 – IBM built a military command and control project using evolutionary prototyping
1987 – TRW launched a four-year project to build the Command Center Processing and Display System Replacement, a command and
control system, using IID methods
19. Software Testing
Software test is an activity in which a system or
component is executed under specified conditions, the
results are observed or recorded, and an evaluation is
made of some aspect of the system or component.
The software test activity is conducted to find bugs and
faults in order to ensure the software product meets
customer needs.
It is expected that bugs and faults are fixed in adequate
time, e.g., before customer acceptance.
20. Software Testing – Cont.
Test results provide information on the degree to which
the system requirements expected by the customer
are met.
The software test activity interacts with the whole software
development life cycle. Finding bugs in an early stage of
the software development life cycle is very important.
21. Software Testing – Cont.
In waterfall software
product development, a
traditional and very old
approach, the test
activity is executed
after requirements
allocation, design
allocation, and coding.
22. Software Testing – Cont.
Another, relatively newer,
traditional approach is the
V-model, which is based on
the verification and
validation concept.
While a phase is executed, the
testability of its items is
considered.
For example, in requirement
specification phase, while
requirements are specified
their testability is considered.
Related test cases are
prepared and documented.
Execution of test cases is
performed later.
23. Software Testing – Cont.
Importance of early phase bug fix
The more bugs you can fix immediately, the less
technical debt your application generates and the less
“defect” inventory you have.
Defects are also cheaper to fix the sooner they are
discovered.
The cost to fix an error found after product release was
four to five times as much as one uncovered during
design, and an error identified in the maintenance phase
can cost up to 100 times more.
25. Software Testing – Cont.
Verification and validation by Boehm in 1979:
Verification: "Am I building the product right?"
Validation: "Am I building the right product?”
Verification and validation is the name given to the
checking and analysis processes that ensure that the
software conforms to its specification and meets the needs
of the customers who are paying for that software.
26. Software Testing – Cont.
Static testing is a test activity that examines items
without executing the code, using reviews, walkthroughs,
and inspections.
The items subjected to review can be documents
or code.
Examples include syntax correctness checks and code
complexity analysis.
By using the static testing method, faults can be found in
an early phase of software development, because the
testing can be started before any code is written.
27. Software Testing – Cont.
Dynamic testing is a test activity that executes the code
with a set of test cases.
In dynamic testing, test activities start once code
writing is almost finalized.
Static testing and dynamic testing are thus opposites
of each other.
28. Software Testing – Cont.
A functional test is a testing activity that intends to
verify the behavior of the software test item as
specified for the software system.
Features of the software system are verified against
entered inputs and expected outputs.
The concept of functional testing is quite similar across
systems, even though the inputs and outputs differ
from system to system.
29. Software Testing – Cont.
Non-functional test is a testing activity that intends to
verify quality attributes of software test item as
specified for software system.
Some quality attributes are usability, security,
portability, reliability, performance, stress, load,
maintainability, modularity, etc.
Non-functional tests are executed case by case. Test
outputs give information about product quality.
30. Software Testing – Cont.
White-box test is to verify the correct behavior of
internal structural elements. When code is known and
accessible for testing activity and code is inspected
internally, line by line, white-box test activity is
performed.
Black-box testing verifies that the software item works as
specified when the software under test is given inputs.
The software item is seen as a closed box; only its
functionality/features are known.
Gray-box test is a hybrid approach using both white-
box and black-box test.
31. Software Testing – Cont.
The objective for any review meeting is to identify
problems with the design. It is not to solve them.
Review meetings should be small (about seven
people). They should include people who did not work
on the design.
Reviewers should read design documents in advance
and challenge or question them in the meeting.
Many companies don't consider a design complete
until it is approved in a formal review. A design is
reworked and re-reviewed until it is finally abandoned
or accepted.
32. Software Testing – Cont.
Three common types of review meetings (Kaner, et al., 1999):
Walkthrough:
The designer simulates the program. She shows, step by step, what the
program will do with test data supplied by the reviewers. The simulation
shows how different pieces of the system interact and can expose
awkwardness, redundancy, and many missed details.
Inspection:
Reviewers check every line of the design against each item in a checklist.
An inspection might focus on error handling, conformity with standards,
or some other single, tightly defined area. If time permits, an inspection
checklist might cover a second area of concern.
Technical Review:
Reviewers bring a list of issues to the meeting. During the meeting, they
describe their objections and point out things that are ambiguous or
confusing. The purpose of the review is to generate a list of problems and
make sure that the designer understands each one. Deciding what
changes to make, and designing them, are not part of this meeting
33. Software Testing – Cont.
Generally, software testing is divided into unit testing,
integration testing, system testing, and acceptance
testing, according to the aim and objective of the testing
activities.
Test-level activities are performed in both the software
development and maintenance phases.
35. Software Testing – Cont.
Verification of the low-level design is performed mostly in
the unit testing activity.
Unit testing is the most effective test level for
finding failures in the early phases of software
development. At the end of the unit test phase, verified
units are ready for integration testing.
36. Software Testing – Cont.
Verification of high level design is performed mostly
in integration testing activity. At the end of integration
test phase, verified subsystems are ready for system
testing.
37. Software Testing – Cont.
Verification of system requirements is performed
mostly in system testing activity. At the end of system
test phase, verified software system is ready for
acceptance testing.
38. Software Testing – Cont.
The software system is tested to determine whether it meets
the customer’s requirements. The main idea of acceptance
testing is evaluation of the software system in terms of
customer expectations within the installed/deployed environment.
39. Software Testing – Cont.
The regression testing activity is mostly performed after
bug fixing. The purpose of regression testing is to ensure
that, after the bug fix activity, the software part still
functions as specified and expected.
Regression testing can be performed in every testing
phase of software development.
41. TDD
Test-driven development (TDD) is described by
Beck (2002):
'In Test-Driven Development, you:
Write new code only if you first have a failing automated test.
Eliminate duplication.'
The motivation of TDD relates to the second value of the
Manifesto for Agile Software Development: 'Working software
over comprehensive documentation’. Writing tests first helps
define functionality, a key task of software development.
42. TDD – Cont.
TDD was named Test First Programming in the
beginning. Tests are considered first. Tests are
prepared from the user’s perspective, run, and usually
fail before coding, because the code does not yet exist.
43. TDD – Cont.
TDD can also extend beyond the unit or ‘developer
facing’ test. There are many teams that use ‘customer
facing’ or ‘story’ tests to help drive coding. These tests
and examples, written in a form understandable to
both business and technical teams, illustrate
requirements and business rules. Customer facing
tests might include functional, system, end-to-end,
performance, security, and usability tests.
Programmers write code to make these tests pass,
which shows the product owners and stakeholders
that delivered code meets their expectations.
44. TDD – Cont.
In TDD, a programmer prepares automated unit tests
that define the code requirements, then immediately
writes the code. The tests include assertions that are
either true or false. Passing the tests confirms correct
behavior as developers evolve and refactor the code.
Programmers mostly use testing frameworks, for
example JUnit, in order to create and automatically
run sets of test cases.
The TDD approach is about specification of needs and
functionality rather than validation.
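As a minimal sketch of the assertion-based style described above (a hypothetical `Calculator` class, not from the project; plain Java stands in for what would normally be a JUnit `@Test` method with `assertEquals`):

```java
// Hypothetical class under test, written only after its test exists.
class Calculator {
    static int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {
    // In JUnit this would be an @Test method; a boolean check
    // stands in for the assertion here.
    static boolean testAddReturnsSum() {
        return Calculator.add(2, 3) == 5;
    }

    public static void main(String[] args) {
        // A passing assertion confirms the expected behavior.
        System.out.println(testAddReturnsSum() ? "PASS" : "FAIL");
    }
}
```

A testing framework such as JUnit would collect many such assertions into suites and run them automatically on every change.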
45. TDD – Cont.
TDD cycle sequence
Quickly add a test
Run all tests and see the
new one fail
Write code and make a
little change
Run all tests and see them
all succeed
Remove duplication by
refactoring
46. TDD – Cont.
Quickly add a test
A test is written by the programmer for every new feature or functionality.
The new feature or functionality must be clearly understood by the programmer.
In this stage the programmer focuses on the requirement.
Run all tests and see the new one fail
The prepared tests are run; the new test fails.
Write code and make a little change
The developer writes or changes just enough code to address the failing test.
Run all tests and see them all succeed
All prepared and updated tests are run until they all pass.
Remove duplication by refactoring
In this stage code is cleaned, duplications are removed, some
internal changes are done. Changes do not affect code functionality.
Refactoring is changing code, without changing its functionality, to
make it more maintainable, easier to read, easier to test, or easier to
extend.
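The five steps above can be sketched end to end in one small, hypothetical example (a `Counter` class in plain Java; the comments mark where each TDD step falls):

```java
// Red: the test in main() below is written first; it fails
// (won't even compile) until the minimal Counter class exists.
class Counter {
    private int value = 0;

    // Green: the simplest code that makes the test pass.
    void increment() {
        value++;
    }

    int value() {
        return value;
    }
    // Refactor: once green, duplication is removed and names are
    // cleaned up without changing the observable behavior.
}

public class CounterTddDemo {
    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        // The test that drove the code above:
        if (c.value() != 2) throw new AssertionError("expected 2");
        System.out.println("all tests pass");
    }
}
```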
47. TDD – Cont.
The TDD mantra names the tasks
performed during the TDD cycle:
Red – write a little test that
doesn’t work – perhaps doesn’t
even compile – first.
Green – Make the test work
quickly, committing “whatever
sins” are necessary in the
process.
Refactor – Eliminate all the
duplication created in merely
getting the test to work.
48. TDD and Design
“Code for tomorrow, design for today.”
TDD is not just about the testing activity. By using TDD,
the software system is detailed down to its smallest parts,
the units. Software units are detailed by considering
functionality, so design activity is performed in the process.
User stories are gathered, and every story is investigated
with regard to its functions. In this way, the software
system is designed at a very low level. Code implementation
is done by applying TDD. If failures appear in a following
phase, they can be traced back.
49. TDD and Automation
The test-driven development approach requires writing
automated tests prior to developing functional code in
small, rapid iterations. Test automation is a core agile
practice; agile projects depend on automation. Good
automation practice keeps teams working within a consistent
framework. Source code control, automated builds
and test suites, deployment, and monitoring increase the
functionality and quality of the product.
Tests are the new documentation; moreover, they are a living
document of how a piece of functionality works. When the
TDD process passes through ‘green’ and then ‘refactor’,
up-to-date documentation exists, since the tests are updated
every time the code is.
50. Testing in Agile Software
Development
The purpose of agile software testing is the same as that of
traditional software testing: finding failures of the
software in order to have a reliable software product.
The main characteristic of agile testing is that the test
activity is considered almost from the beginning of
software development.
In agile testing, there are two roles in development:
programmer and tester.
52. Testing in Agile Software
Development – Cont.
The agile testing quadrants
show how each of the four
quadrants reflects a
different reason for testing.
On one axis, the matrix is
divided into tests that
support the team and tests
that critique the product.
The other axis divides
them into business-facing
and technology-facing
tests.
53. Testing in Agile Software
Development – Cont.
In Quadrants 1 and 2,
supporting the development
team is the main objective.
Units, components, and the
higher-level parts of the
software system are tested
automatically (except the GUI),
and these tests support the
development teams.
Quadrants 1 and 2 relate
more to requirement
specification and design
aids. Tests are run
automatically on every code
change and addition. Tests
guide development of
functionality.
54. Testing in Agile Software
Development – Cont.
In Quadrants 3 and 4, critiquing
the product is the main objective.
Gathered customer
requirements should be
understood by programmers.
A product critique should
include praise and
suggestions for improvement;
the product is reviewed in a
constructive manner.
Tests are run to verify the
whole system in every aspect,
including quality attributes.
The results obtained increase
the team's knowledge of
customer requirements, and
the degree to which customer
needs are met is increased.
55. Testing in Agile Software
Development – Cont.
Quadrant 1 represents test-driven development, which
is a core agile development practice:
Tests are derived from customer examples.
Unit tests are done in order to verify functionality.
Component tests are done in order to verify the behavior
of the system.
TDD is the main test practice. TDD helps programmers
design code to deliver a story’s intended feature.
Software system functionality is implemented at a very
detailed level: the unit and component level.
56. Testing in Agile Software
Development – Cont.
Quadrant 2 supports the work of the development
team at a high level.
Tests are derived from customer examples.
Examples are produced from gathered user stories.
Gathered stories are verified.
Prototypes and simulations are prepared. They help
programmers to understand customer needs.
User interfaces are verified.
57. Testing in Agile Software
Development – Cont.
The major objective of Quadrant 3 is to find the most serious
bugs. Tests are performed to ensure that customer
needs are understood. Misunderstandings are removed by
using examples. At the end of UAT, most stories are
finalized.
User acceptance testing (UAT) is mostly performed by users
and customers.
During UAT execution, the customer still has the chance to
request new features and express future expectations.
Requirements gathering continues in this phase.
Gathering information from focus groups also continues;
it provides advantages in usability test execution.
Defined constraints are tested in the exploratory testing phase.
Each story and scenario is also tested in exploratory
testing, and new scenarios are considered.
Test results are analyzed.
58. Testing in Agile Software
Development – Cont.
Quadrant 4 tests are intended to critique quality attributes
of the product, such as performance, robustness,
and security.
Customer requirements can be specified as functional and
non-functional requirements. When requirements are
prioritized, non-functional requirements might be more
important than functional requirements. For example, even
when most functional requirements are implemented, the
absence of one non-functional requirement,
such as security, might be more critical for the customer. In
Quadrant 4, tests help verify that all kinds of
non-functional requirements are met.
59. Application Data and Method
A real world scenario is implemented using TDD. The
Monitoring and Control module of a Risk
Management System has been developed using TDD
methodology. The other modules of this risk
management system are not implemented in the scope
of this work.
60. Application Data and Method
PROBLEM DEFINITION
A fully functional Risk Management application may have the
following modules:
Risk creation, classification, and responsible-party definition
user interface,
Mitigation Plan/Contingency plan creation user interface,
Risk Impact Analysis module,
Activity Reporting interface,
Monitoring and Control module,
Warning publishing module, to send e-mails or integrate with
the office productivity suite.
The focus of this work is the implementation of test modules and
thus developing the "Monitoring and Control" module according
to these tests developed. The "Monitoring and Control" module
is implemented at a basic level to provide enough member
functions and variables to cover the tests defined, leaving
detailed functionality incomplete.
61. Application Data and Method
REQUIREMENTS
Six major requirements were gathered for the "monitoring and
control" module; these requirements are as follows:
Risks with "HIGH" classification should be monitored weekly. A
warning should be published to the user for each monitoring
request.
Risks with "MEDIUM" or "LOW" classification should be
monitored upon milestones. A warning should be published to
the user for each monitoring request.
The risks are to be reviewed at Risk Review Meetings, and until
a risk is reviewed a warning should be published to the user for
each newly identified risk.
The risks should have a Mitigation/Contingency plan ready and
if not a warning should be published to the user for new risks.
A warning will be published to the user if a risk takes place.
A warning will be published to the user if the contingency plan,
for a risk that has occurred, is executed.
62. Application Data and Method
TEST CASES
A total of six tests were designed to cover these six
requirements.
The tests were implemented using the JUnit classes of
JAVA and merged under a single test suite.
63. Application Data and Method
The JUnit classes and the "Monitoring and Control" class
64. Application Data and Method
"new_risk_introduced_case" test case:
In this test case the monitoring and control module is tested
against the emergence of a new risk. The module is
supposed to publish a warning and set a warning published
flag. The test checks for this flag to be true.
"new_risk_not_introduced_case" test case:
In this test case the monitoring and control module is tested
against the emergence of a new risk. The module is
supposed to not publish a warning when there are no new
risks. The test checks for this flag to be false.
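A hedged sketch of how these two cases might look (the class, method, and flag names here are hypothetical; the project's actual JUnit code is not reproduced in the source):

```java
// Hypothetical stand-in for the "Monitoring and Control" module.
class MonitoringAndControl {
    private boolean warningPublished = false;

    // Called when the risk register changes; publishes a warning
    // only if a new risk has been introduced.
    void evaluateNewRisk(boolean newRiskIntroduced) {
        warningPublished = newRiskIntroduced;
    }

    boolean isWarningPublished() {
        return warningPublished;
    }
}

public class NewRiskTests {
    // "new_risk_introduced_case": a new risk must set the flag.
    static boolean newRiskIntroducedCase() {
        MonitoringAndControl mc = new MonitoringAndControl();
        mc.evaluateNewRisk(true);
        return mc.isWarningPublished();
    }

    // "new_risk_not_introduced_case": no new risk, flag stays false.
    static boolean newRiskNotIntroducedCase() {
        MonitoringAndControl mc = new MonitoringAndControl();
        mc.evaluateNewRisk(false);
        return !mc.isWarningPublished();
    }

    public static void main(String[] args) {
        boolean ok = newRiskIntroducedCase() && newRiskNotIntroducedCase();
        System.out.println(ok ? "PASS" : "FAIL");
    }
}
```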
65. Application Data and Method
"mitigation_plan_check_test_case" test case:
In this test case the monitoring and control module is tested
against the availability of a mitigation plan for new risks. The
module is supposed to publish a warning when there are no
mitigation or contingency plans related with the risk. The test
fails if there is a new risk without a plan and the module does not
publish a warning and set the flag to true.
"periodic_risk_warnings_check" test case:
In this test case the monitoring and control module is tested
against the first two requirements. If a risk has a "HIGH" priority
then the module is supposed to publish weekly warnings for the
user to monitor the risk. If a risk has a "MEDIUM" or "LOW"
priority then the module is supposed to publish warnings for the
user upon milestones to monitor the risk. Two separate flags are
set for weekly periodic warnings and milestone based warnings.
66. Application Data and Method
"identified_risk_takes_place" test case:
In this test case the monitoring and control module is tested
against the occurrence information of the identified risks. If
a risk occurs the module is supposed to publish a warning to
the user.
"plan_performed" test case:
In this test case the monitoring and control module is tested
against the execution information of the contingency plan.
If a risk occurs and the plan is put under motion the module
supposedly publishes a warning for the user to monitor the
plan execution.
67. Application Data and Method
APPLICATION
Once the tests were complete the implementation of the
"Monitoring and Control" module was started. This module is
designed to enable the previously defined tests to succeed.
The data type "RISK" is developed to match the requirements of the
test functions. The Boolean variables defined in this class are defined
to represent the information needed by the test classes.
The "Monitoring and Control" module implemented the necessary
calculation and warning functions to match the functionality
defined in the test classes.
This module is designed to call the evaluation function once every
period but the calling mechanism is not implemented.
The handling of the various flags and the processing of an identified
risk are collected in a single function called the "evaluate_risk"
function.
The periodic processing of this module is supposed to be called by
another mechanism that has not been implemented.
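Under the same caveat (hypothetical names; the project's actual "evaluate_risk" implementation is not shown in the source), the flag handling for the periodic-monitoring requirements might be sketched as:

```java
// Hypothetical sketch of the periodic-warning part of "evaluate_risk".
class Risk {
    enum Priority { HIGH, MEDIUM, LOW }

    Priority priority;
    boolean weeklyWarningFlag = false;     // set for HIGH risks
    boolean milestoneWarningFlag = false;  // set for MEDIUM/LOW risks

    Risk(Priority p) { priority = p; }
}

public class EvaluateRiskSketch {
    // HIGH risks get weekly warnings; MEDIUM/LOW get milestone warnings.
    static void evaluateRisk(Risk r, boolean weeklyTick, boolean milestoneReached) {
        if (r.priority == Risk.Priority.HIGH && weeklyTick) {
            r.weeklyWarningFlag = true;      // publish weekly warning
        } else if (r.priority != Risk.Priority.HIGH && milestoneReached) {
            r.milestoneWarningFlag = true;   // publish milestone warning
        }
    }

    public static void main(String[] args) {
        Risk high = new Risk(Risk.Priority.HIGH);
        evaluateRisk(high, true, false);

        Risk low = new Risk(Risk.Priority.LOW);
        evaluateRisk(low, false, true);

        boolean ok = high.weeklyWarningFlag && low.milestoneWarningFlag;
        System.out.println(ok ? "PASS" : "FAIL");
    }
}
```

As in the source, the periodic caller that would drive this function is assumed to exist elsewhere and is not implemented.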
68. Result and Discussion
Even though the application implemented in
this work is not a major one, the aim is to show that by
designing the tests beforehand, the requirements are
better matched to the tests and functions. This
practice enabled the developer to better define the
work to be done even when all the requirements are
not known. Upon defining the test cases and
implementing the source code to work with these
tests, even the smallest functions required by the
application are handled in a result oriented manner.
69. Result and Discussion
When all the tests pass, the software under development
covers every requirement expressed as a test, so no
requirement is left out of scope. By applying the TDD method
the software is thoroughly tested at every development step,
which ensures that no unnecessary development is performed at
any point in the cycle. Although the code is already compact
and working, the final step of TDD is to refactor the
complete software into even better source code. This final
pass preserves the desired behavior while removing any
unnecessary variables, loops, controls, etc.
70. Conclusion
It has been observed in this work that software
development methodologies are continuously improving
and the developments in the software lifecycle area are far
from completion. The same observation holds for software
testing, whose importance is increasingly emphasized by the
methodologies proposed in the last decade. Software testing is now not
only an integral part of the software development lifecycle,
but also an emerging new way to develop software. While
traditional software methodology employs software testing
at the end of the development lifecycle, Agile Software
Development methodologies evolved to employ software
testing from the beginning of development.
71. Conclusion
In this work, as the test cases were developed incrementally
to better cover the atomic functions defined by the
requirements, it has been observed that the "Working
Software over Comprehensive Documentation" item of the
Agile Manifesto holds an important role in the TDD
practice. The documentation process is transformed from
writing enormous documents up front to maintaining
up-to-date test functions that are easily readable and serve
as living documentation. The test cases thus both document
the software under development and test it. Improvements in
CASE tools and the growing importance of unit tests point the
future of software development toward fewer bugs and higher
quality standards.
72. References
Abran, Alain and Moore, James W. SWEBOK [Book]. - Los Alamitos, California : IEEE, 2004.
Abran, Alain, Moore, James W., Bourque, Pierre and Dupuis, Robert (Eds.) SWEBOK [Book]. - Los Alamitos, California : IEEE Computer Society Press, 2005. - Chapter 5, p. 3.
Beck, Kent. Test-Driven Development by Example [Book]. - [s.l.] : Addison-Wesley Professional, 2002. - pp. viii-ix.
Benington, Herbert D. Production of Large Computer Programs [Journal] // IEEE Annals of the History of Computing. - 1983.
Boehm, Barry. A Spiral Model of Software Development and Enhancement [Journal] // ACM SIGSOFT Software Engineering Notes. - August 1986.
Boehm, Barry W. Guidelines for Verifying and Validating Software Requirements and Design Specifications [Journal] // Euro IFIP 79. - 1979. - pp. 711-719.
Crispin, Lisa and Gregory, Janet. Agile Testing: A Practical Guide for Testers and Agile Teams [Book]. - Boston : Pearson, 2009.
Crispin, Lisa. Driving Software Quality: How Test-Driven Development Impacts Software Quality [Journal] // IEEE Software. - 2006. - pp. 70-71.
Humphrey, Watts S. Managing the Software Process [Book]. - [s.l.] : Addison-Wesley, 1989.
IEEE. IEEE Standard Glossary of Software Engineering Terminology [Book]. - New York : IEEE, 1990.
Iivari, J. A Hierarchical Spiral Model for the Software Process [Journal] // ACM SIGSOFT Software Engineering Notes. - January 1987. - Vol. 12, Issue 1.
Kaner, Cem, Falk, Jack L. and Nguyen, Hung Quoc. Testing Computer Software [Book]. - [s.l.] : Wiley, 1999.
Larman, Craig and Basili, Victor R. Iterative and Incremental Development: A Brief History [Journal] // IEEE Software. - [s.l.] : IEEE Computer Society, 2003. - pp. 47-56.
Manifesto for Agile Software Development [Online]. - February 2001. - http://agilemanifesto.org/.
Royce, Winston W. Managing the Development of Large Software Systems [Journal]. - [s.l.] : TRW, 1970. - pp. 328-338.
Sommerville, Ian. Software Engineering, 6th Edition [Book]. - Edinburgh : Pearson, 2000.
Waterfall Model [Online] // Wikipedia. - http://en.wikipedia.org/wiki/Waterfall_development.
Weinberg, Gerald M. The Psychology of Computer Programming: Silver Anniversary Edition [Book]. - [s.l.] : Dorset House, 1998.
Wolverton, Ray W. The Cost of Developing Large-Scale Software [Journal] // IEEE Transactions on Computers. - 1974. - Vol. C-23, No. 6.