Rahul Verma and Pradeep Soundararajan, then young testers, got together in 2010 at Cuppa, JP Nagar, Bangalore, and decided to spend time together to help themselves and other testers. This is one of the outputs they produced. It is from 2010, so some ideas could be outdated or wrong even for 2010. Use it as a trigger for your own thought process, not as something useful handed to you. Shared in 2019, when Rahul and Pradeep went back in memory over a beer, talking about how they got to this point of having a beer together after so many years.
Heuristics of performance testing
Rahul Verma & Pradeep Soundararajan

No single person can conquer the world of performance testing because it is hard, vast and growing. Here we bring in our experience and practices of performance testing.
Don’t assume you know what it means
People to refer, their books and blogs: Scott Barber, Alberto Savoia, Subramanya, Chris Loosley, Jared Spool, Jakob Nielsen, Connie & Lloyd
Understand the client-side and server-side aspects of performance
Understand Networking
Be thorough about the communication protocol of your current context
Understand the performance improvements in HTTP 1.1 versus HTTP 1.0
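One concrete improvement worth knowing here: HTTP 1.1 keeps the TCP connection open across requests (keep-alive), while HTTP 1.0 typically opened a new one per request. A minimal Python sketch of the difference, assuming a hypothetical host you are allowed to test against:

```python
import http.client
import time

HOST = "example.com"   # hypothetical target; use a server you may test
N = 10

# HTTP 1.0 style: a fresh TCP connection per request, paying setup cost each time.
start = time.perf_counter()
for _ in range(N):
    conn = http.client.HTTPConnection(HOST, timeout=10)
    conn.request("GET", "/")
    conn.getresponse().read()
    conn.close()
fresh = time.perf_counter() - start

# HTTP 1.1 style: one persistent connection reused for all requests (keep-alive).
start = time.perf_counter()
conn = http.client.HTTPConnection(HOST, timeout=10)
for _ in range(N):
    conn.request("GET", "/")
    conn.getresponse().read()   # body must be fully read before reuse
conn.close()
persistent = time.perf_counter() - start

print(f"new connection each time: {fresh:.2f}s; reused connection: {persistent:.2f}s")
```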
Google for general performance issues
Web 2.0 performance
Read basics of Queuing Theory
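Little’s Law is the piece of queuing theory a performance tester reaches for most: average concurrency = throughput × response time. A tiny worked example:

```python
# Little's Law: L = lambda * W
arrival_rate = 50.0     # requests per second arriving at the server
response_time = 0.4     # average seconds per request
concurrency = arrival_rate * response_time
print(f"~{concurrency:.0f} requests are in the system at any instant")
# So to sustain 50 req/s at 400 ms, a tool must keep ~20 requests in flight.
```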
Join performance testing groups on LinkedIn, STC, TR
Dig out presentations on perf testing by experts on conference websites
HTTP versus HTTPS
Don’t ask “What is the difference between load, perf and stress?” in any forum and get confused
Relationship of image size to load times in browser
Relationship to browser plugins
Conferences on performance testing, webinars, podcasts, interviews
Learn UCML, Web usage signature
Write customized tools (don’t be lazy)
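A minimal sketch of what a home-grown load tool can look like, using only the Python standard library; the URL is hypothetical, and a real tool would add think time, ramp-up and richer reporting:

```python
import concurrent.futures
import time
import urllib.request

URL = "http://example.com/"   # hypothetical; point at your own test server
VUSERS = 10
REQUESTS_PER_USER = 5

def virtual_user(user_id):
    """One simulated user: issue requests and record each response time."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

# Run all virtual users concurrently on a thread pool.
with concurrent.futures.ThreadPoolExecutor(max_workers=VUSERS) as pool:
    results = list(pool.map(virtual_user, range(VUSERS)))

flat = [t for user in results for t in user]
print(f"{len(flat)} requests, average response {sum(flat) / len(flat):.3f}s")
```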
Server simulation: e.g., Fiddler
Fuzzing
Get yourself trained on perf testing and tools
Meet other performance testers & talk to them often
Create a utility library
Carve out time to learn performance engineering
Learn Python, Perl, Ruby, Java, AutoIt
Start perf testing in parallel with all other testing
Performance testing is not a stage in the project lifecycle (PDLC)
Throughput versus latency
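The distinction matters because one can be raised without improving the other; a quick illustration:

```python
# Throughput and latency are different axes: a system can serve each
# request slowly yet still push many requests per second in parallel.
latency_s = 2.0      # every request takes 2 seconds end to end
in_flight = 100      # 100 requests being processed concurrently
throughput = in_flight / latency_s
print(f"{throughput:.0f} req/s even though each user waits {latency_s:.0f}s")
```

Users feel the 2 seconds; a throughput-only report hides it.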
Perf testing is not just about speed of response
Performance benchmarking – how do we know it’s good enough?
Read book: Turning Numbers into Knowledge by Jonathan G. Koomey
Cloud perf testing: are you testing something inside the cloud or from the cloud?
Talk to customers and users and ask them what they would consider poor, good and bad performance
Read the Service Level Agreement to understand performance test objectives
Negotiate what is achievable
Checks for a product’s readiness for perf testing
Understand the architecture of your application - web server, app server, DB server: documentation, limitations, live implementations, companies using it, problems reported, testers dealing with it, perf-related config
Caching: Client and Server
Understand that security versus performance is a trade-off
Is your client thin or thick?
Read code, code comments, integration test results
Read through bug tracking system and look for clues
Know the business value of transactions
Spend time on your competitor’s website/product
List the risks you are taking because of the limitations you have
Reproducibility of perf bugs is crucial, plan for it
The scope mentioned in the plan should be practical
Wishful thinking harms the project
Talk to developers daily
Meet and update all stakeholders
Availability of lab
Have acceptance /rejection checks in place
Consider the cost of performance testing against its value
Perf testing is iterative (not a blue moon activity)
Hiring performance testers can impact what you can achieve. Don’t hire only tool runners
Performance testing is a mindset
Constraining your budget on a key investment towards perf testing could cost you more
Testers who code need to be on the team
All testers need to understand perf lab setup well
Subject matter experts need to be paired with testers
Ask often: Is there a problem here?
If you want to ignore perf test results, don’t invest in it
Perf testers need to talk to other testers on the team
Pair perf testers with functional testers occasionally
Conduct surveys
Assessment of your perf testing strategy is needed
Play with tools: sniffers, Perfmon, JMeter, Pylot, LoadRunner, WebLOAD, QALoad, NeoLoad
Alexa for live portal traffic details of users and distribution
Browser plugins and proxies that watch HTTP traffic on the wire
Real-time downtime reporting: Pingdom
Assess the effectiveness of user traffic simulation
Types of user access: admin, guest, authenticated, logged in, not logged in, limited privileges, hackers
Emotions of users using the product
Probing clients: Absolute experience
Consider the impact of bandwidth on performance measurements
Understand the real user’s system configuration and everything they have installed on it
Identify top-priority machines, OS, browsers, flavors, devices and platforms
Distribution of traffic, loads, data
Types of content being accessed: media, office documents, HTML, FLV, DRM, compressed, encrypted
Concurrency, multithreads
Virtual and simulated users are not equal to actual concurrency
Record-and-playback is not THE thing
Parameterize, don’t hardcode
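A minimal sketch of parameterization: drive virtual users from a file of test accounts instead of a hardcoded login, so every iteration exercises different data. The users.csv file and its columns are hypothetical:

```python
import csv
import itertools

# Assumed hypothetical file: users.csv with "username,password" rows.
with open("users.csv", newline="") as f:
    accounts = itertools.cycle(list(csv.DictReader(f)))

for i in range(5):              # five virtual-user iterations
    account = next(accounts)    # each iteration gets its own row
    print(f"iteration {i}: logging in as {account['username']}")
    # ... issue the login with account['username'] / account['password']
```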
Calculate the memory footprint of a virtual user
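A rough standard-library sketch: measure process memory before and after spawning virtual users and divide. ru_maxrss tracks the peak, and its unit is kilobytes on Linux (bytes on macOS), so treat the number as an estimate and cross-check with OS tools:

```python
import resource   # Unix-only; on Windows use an OS tool instead
import threading
import time

def virtual_user(stop):
    # Stand-in for real request/response work.
    while not stop.is_set():
        time.sleep(0.1)

stop = threading.Event()
before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
threads = [threading.Thread(target=virtual_user, args=(stop,)) for _ in range(100)]
for t in threads:
    t.start()
time.sleep(1)
after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
stop.set()
for t in threads:
    t.join()
print(f"roughly {(after - before) / 100:.0f} KB per virtual user (Linux units)")
```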
Benchmark the test environment
Study your application in the light of ramp-up patterns
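A minimal sketch of one common pattern, the stepped ramp: add a batch of users, hold, repeat, and watch at which step response times break down. start_user is a hypothetical hook into your own load generator (see the sketch earlier in this list):

```python
import threading
import time

def start_user(user_id):
    # Hypothetical hook: launch one virtual user here.
    pass

STEP_USERS = 10   # users added per step
STEP_HOLD = 60    # seconds to hold load before the next step
STEPS = 5

started = 0
for step in range(STEPS):
    for _ in range(STEP_USERS):
        threading.Thread(target=start_user, args=(started,)).start()
        started += 1
    print(f"step {step + 1}: {started} users running; holding {STEP_HOLD}s")
    time.sleep(STEP_HOLD)
```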
Load Balancing
Spike, soak, memory leaks
Min specifications test
Test your test environment
Have tool assessment plans in place
Maintain tools, upgrade them regularly
Time synchronization amongst test machines
Size of database, populating it with data
Be careful while performing tests on production server
Reliability tests and measurement
Choose the frequency of measurement carefully in a test
Log files
Limit the level to which logging happens
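Verbose logging itself costs time and disk I/O under load. In Python’s standard logging, for example, a level set above DEBUG means debug calls are never formatted or written, which keeps logging cost out of your measurements; a minimal sketch:

```python
import logging

logging.basicConfig(level=logging.WARNING)   # only WARNING and above
log = logging.getLogger("perftest")

log.debug("per-request detail")            # suppressed: never formatted or written
log.warning("error rate above threshold")  # still recorded
```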
Not all perf tests can be done on virtual machines
Test data generators, tools
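A minimal sketch of a home-grown generator that writes bulk user rows to a CSV for loading into the database under test; all names and fields here are made up:

```python
import csv
import random
import string

def random_word(n):
    return "".join(random.choices(string.ascii_lowercase, k=n))

with open("test_users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["username", "email", "age"])
    for i in range(100_000):
        name = f"{random_word(8)}{i}"   # suffix keeps usernames unique
        writer.writerow([name, f"{name}@example.com", random.randint(18, 80)])
```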
Frequently get user data from production to use in the test environment
Don’t keep your test data outdated
Version control everything
Bring in actual users at regular intervals: subjective analysis
Desktop app performance: boot time, load time, response time, install time, uninstall time, internal processing time, memory allocation time, buffering time
Test lab: planning, organizing, procurement, setting up, documenting, emulation, troubleshooting
Separating out browser (or any other app) perf problems from what is being tested
You would need a lot of test data – keep it ready.
Relationship to Antivirus and Firewalls
Relationship to error handling
Consider Geographical distribution of users
Security testing at high load
Functional tests at high load
Why, what & how you measure
The act of measurement impacts the measurement.
Accurate measurements at times need instrumentation of code / testability of code
Knowing what you can’t measure and knowing you don’t have the right budget
Measurement without a goal is dangerous
Failures in proper measurement can lead to bad decisions
Do a stage-by-stage analysis of response times
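A rough sketch of splitting one fetch into stages with the standard library: DNS, TCP connect, time to headers (roughly first byte), and body download. The host is hypothetical, and HTTPS would add a TLS handshake stage:

```python
import http.client
import socket
import time

HOST = "example.com"   # hypothetical target

t0 = time.perf_counter()
addr = socket.gethostbyname(HOST)                  # DNS lookup
t1 = time.perf_counter()
conn = http.client.HTTPConnection(addr, 80, timeout=10)
conn.connect()                                     # TCP connect
t2 = time.perf_counter()
conn.request("GET", "/", headers={"Host": HOST})
resp = conn.getresponse()                          # status line + headers arrive
t3 = time.perf_counter()
body = resp.read()                                 # rest of the body
t4 = time.perf_counter()
conn.close()

print(f"dns={t1 - t0:.3f}s connect={t2 - t1:.3f}s "
      f"first-byte={t3 - t2:.3f}s download={t4 - t3:.3f}s")
```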
Analysis of users moving away from the product
Understand how performance relates to hard disk, RAM, processor, Java version, Flash player, parallel apps running, RAID, network, and user needs
Making inferences and conjectures from the statistics collected
Depth and breadth of unit testing
Bottleneck analysis, Memory Leaks, Paging
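For memory-leak hunting in Python itself, the standard tracemalloc module can diff allocation snapshots taken before and after a workload; a minimal sketch with a deliberately leaky cache:

```python
import tracemalloc

leaky_cache = []

def workload():
    # Simulated leak: objects accumulate and are never released.
    leaky_cache.extend(object() for _ in range(10_000))

tracemalloc.start()
before = tracemalloc.take_snapshot()
for _ in range(10):
    workload()
after = tracemalloc.take_snapshot()

# The top growth sites point at the leak.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```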
OS process blocking resources
Analyze the consistency of performance across platforms, OS, browsers, devices
Analyzing validation time as against what number a tool spits out
Users don’t connect the way your tool connects
Users don’t react as immediately as your tools do
Focus on timing coverage
Don’t always extrapolate results
Garbage collection
Anticipate Microsoft releasing a patch that reboots all systems in the world, which then connect to your system at the same time
Test for recoverability of a server after a crash
Adding hardware is not necessarily a way to solve problems
Performance delays can occur when hardware overheats
Not rebooting systems for a long time causes a slowdown
List all interrupts from other processes, apps and OS
Don’t think robots use production systems
Humans will click on something for a long time – don’t rule that out
Constant low resource usage of a server could mean an unnecessarily high configuration
Don’t jump to conclusions when looking at test results
The story behind a number is important
Averages are misleading
Remove outliers & analyze
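A tiny illustration of why: one outlier drags the mean far away from what most users experienced, while the median and a percentile still tell the story:

```python
import statistics

# Nine fast responses and one 9.5 s outlier (seconds).
times = [0.20, 0.21, 0.19, 0.22, 0.20, 0.21, 0.20, 0.19, 0.22, 9.50]

print(f"mean   = {statistics.mean(times):.2f}s")    # inflated by the outlier
print(f"median = {statistics.median(times):.2f}s")  # what a typical user saw
p95 = sorted(times)[int(0.95 * len(times))]         # crude p95 for illustration
print(f"p95    = {p95:.2f}s")                       # the outlier surfaces here
```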
Denial of Service
If you do not know from which build performance started to degrade, you are definitely lost
Show a proper error message when the system is overloaded
Study processor load sharing (affinity)
Conduct single user acceptance / rejection checks
Check if something works for a single (set of) user
Reports are not just numbers
Understand the audience of your reports
When testers need time to investigate, give it
Measurements shouldn’t be analyzed in isolation
When numbers look good, they aren’t always good
Have a redundant server to handle failure events
Happy path testing is a myth; happy users are the reality
Forcing users to do a task increases load
Correlate speed of response to patience of target users
Users perceive bad usability as bad performance