Preparing for an interview as a manual tester? Here is some help. For a technical edge, it is useful to know what most interviewers ask. I have prepared a short list of the most commonly asked manual-testing interview questions, along with their answers. Let's jump right in!
3. Advantages of black box testing
● Testing from the end user's point of view.
● No knowledge of programming languages required for testing.
● Identifying functional issues in the system.
● Mutual independence of the tester's and developer's work.
● Possible to design test cases as soon as specifications are complete.
4. Statement coverage
White box testing involves the use of a metric called statement coverage to ensure that every statement in the program is tested at least once.
It is calculated as:
Statement Coverage = No. of statements tested / Total no. of statements
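As a quick illustration with made-up numbers, the formula can be sketched in Python:

```python
def statement_coverage(tested, total):
    """Statement coverage as defined above: statements tested / total statements."""
    return tested / total

# Hypothetical example: a test suite executes 45 of the program's 50 statements.
ratio = statement_coverage(45, 50)
print(f"{ratio:.0%}")  # 90%
```

In practice a coverage tool reports the counts for you; the division itself is all the metric is.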
5. Bug life cycle
● NEW or OPEN, when the bug is found by a tester.
● REJECTED, if the project manager finds the bug invalid.
● POSTPONED, if the bug is valid but not in the scope of the current release.
● DUPLICATE, if the tester knows of a similar bug that has already been raised.
● IN-PROGRESS, when the bug is assigned to a developer.
● FIXED, when the developer has fixed the bug.
● CLOSED, if the tester retests the code and the bug has been resolved.
● RE-OPENED, if the test case fails again.
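The states above can be sketched as a small state machine; the transition rules here are one reasonable reading of the list, not a fixed standard:

```python
# Illustrative bug life cycle: which states each state may move to.
TRANSITIONS = {
    "NEW": {"REJECTED", "POSTPONED", "DUPLICATE", "IN-PROGRESS"},
    "IN-PROGRESS": {"FIXED"},
    "FIXED": {"CLOSED", "RE-OPENED"},
    "RE-OPENED": {"IN-PROGRESS"},
}

def move(state, new_state):
    """Advance the bug to new_state, rejecting transitions the cycle forbids."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A bug that is fixed, fails retesting once, then is fixed for good.
state = "NEW"
for step in ("IN-PROGRESS", "FIXED", "RE-OPENED", "IN-PROGRESS", "FIXED", "CLOSED"):
    state = move(state, step)
print(state)  # CLOSED
```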
6. Agile testing
Agile testing follows an iterative and incremental testing process, aiming for adaptability and customer satisfaction through rapid delivery of the product. The product is broken down into incremental builds which are delivered iteratively.
7. Monkey testing
In monkey testing, the tester enters random input to check whether it leads to a system crash. Monkey testing involves two variants: the Smart Monkey and the Dumb Monkey. A Smart Monkey knows enough about the application to carry out load and stress testing, but is expensive to develop; a Dumb Monkey feeds in purely random input with no knowledge of the application.
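A Dumb Monkey can be sketched in a few lines; `parse_quantity` here is a hypothetical system under test that expects a decimal integer:

```python
import random
import string

def parse_quantity(text):
    # Hypothetical system under test: expects a decimal integer.
    return int(text)

def dumb_monkey(trials=100, seed=42):
    """Dumb Monkey: feed purely random strings to the system and record crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        payload = "".join(rng.choice(string.printable)
                          for _ in range(rng.randint(0, 12)))
        try:
            parse_quantity(payload)
        except ValueError:  # a "crash" from the monkey's point of view
            crashes.append(payload)
    return crashes

print(f"{len(dumb_monkey())} of 100 random inputs crashed the parser")
```

A Smart Monkey would instead generate inputs that are valid enough to exercise deep paths, which is why it costs more to build.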
8. Verification vs. validation
● Verification is a static process of checking documents, design, code and programs without executing code; validation is the dynamic process of testing the actual product by executing code.
● Verification uses inspections, reviews, walkthroughs, desk-checking, etc.; validation uses black box testing, gray box testing, white box testing, etc.
● Verification checks whether the software conforms to specifications; validation tests whether the software meets client requirements.
9. Baseline testing
Baseline testing involves running test cases to analyze software performance. The feedback collected after baseline testing is used to set a benchmark for future tests, comparing current performance with previous results.
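A minimal sketch of the idea, assuming a baseline value stored from an earlier run and a hypothetical `checkout` operation:

```python
import time

def checkout():
    # Hypothetical operation under test; stands in for real application work.
    time.sleep(0.01)

def measure(fn, runs=5):
    """Average wall-clock time per call over several runs."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Benchmark recorded by an earlier baseline run (assumed value for illustration).
BASELINE_SECONDS = 0.05

current = measure(checkout)
# Fail the test if the current build is noticeably slower than the baseline.
assert current <= BASELINE_SECONDS * 1.2, "performance regressed against baseline"
```

Each release can then update the stored baseline, so the benchmark tightens as performance improves.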
10. Retesting vs. regression testing
● Retesting verifies whether a previously reported defect has been fixed; regression testing verifies whether a recent bug fix has caused other components to work incorrectly.
● Retesting specifically targets the fixed bug; regression testing covers all components possibly affected by the fix.
● Retesting reruns test cases that failed earlier; regression testing reruns test cases that passed in previous builds.
11. Severity and priority of bugs
Priority defines the importance of a bug from a business point of view, while severity is the extent to which it affects the application's functionality.
● Error in displaying the company logo: high priority, low severity.
● A rare test case leading to a system crash: low priority, high severity.
● Failure of online payments: high priority, high severity.
12. Alpha vs. beta testing
● Alpha testing is performed by in-house developers; beta testing is performed by end users.
● Alpha testing aims to find bugs and issues before product release; beta testing releases a beta version to end users to obtain feedback.
● Alpha testing occurs in a lab environment; beta testing occurs in a real-world environment.
13. Test driver and test stub
A test driver is a dummy software component that calls the module under test with dummy inputs during bottom-up testing.
A test stub is a dummy software component that is called by the module under test during top-down testing, returning canned output in place of a component that is not yet implemented.
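A hand-rolled sketch of both roles (no particular framework; `discount`, `pricing_stub` and all the values are made up):

```python
def pricing_stub(customer_id):
    """Test stub: stands in for a pricing service that is not ready yet,
    returning canned output to the module under test."""
    return 100.0  # fixed price for any customer

def discount(customer_id, rate, get_price=pricing_stub):
    """Module under test: depends on a lower-level pricing component."""
    return get_price(customer_id) * (1 - rate)

def test_driver():
    """Test driver: calls the module under test with dummy inputs
    and checks the results."""
    assert discount("c-1", 0.5) == 50.0
    assert discount("c-2", 0.0) == 100.0
    return "ok"

print(test_driver())  # ok
```

The driver sits above the module (bottom-up testing); the stub sits below it (top-down testing).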
14. Need for a test strategy
A test strategy is an official, finalized document containing the testing methods, plan and test cases. It is needed for:
● Understanding the testing process.
● Reviewing the test plan.
● Identifying roles and responsibilities.
● Early identification of possible testing issues to be resolved.
15. Error guessing and error seeding
Both are methods of test case design. In error guessing, the tester guesses the errors that might occur in the system and designs test cases to catch them. Error seeding, on the other hand, involves intentionally adding known faults to estimate the rate of detection and the number of remaining errors.
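The classic seeding estimate assumes real faults are found at roughly the same rate as seeded ones; with hypothetical numbers:

```python
def estimate_remaining(seeded, seeded_found, real_found):
    """Error-seeding estimate: if testing found seeded_found of seeded planted
    faults, assume the same detection rate for real faults, so the estimated
    total of real faults is real_found * seeded / seeded_found."""
    total_real = real_found * seeded / seeded_found
    return total_real - real_found  # estimated real faults still undetected

# Hypothetical run: 20 faults seeded, tests found 16 of them plus 8 real bugs.
print(estimate_remaining(20, 16, 8))  # 2.0
```

Here the estimate is 10 real faults in total, of which 2 remain undetected.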
16. Benchmark testing
In benchmark testing, the application's performance is compared to an accepted industry standard. It differs from baseline testing in that baseline testing is intended to improve application performance with each version, whereas benchmarking shows where the performance stands relative to others in the industry.
17. Inspection in software testing
Inspection is a verification process that is more formalized than a walkthrough. The inspection team has 3-8 members, including a moderator, a reader and a recorder. The target is usually a document, such as a requirements specification or a test plan, and the intention is to find flaws and gaps in the document. The result is a written report.
18. Conclusion
While you cannot know exactly what the interviewer may ask you about, the questions described above are among the most common ones. Hope they help!