Planning Test Cases for Android Apps

By Erik Nijkamp
A reliable user interface (UI) is essential for an app's success in today's competitive market. Therefore, extensive testing of the UI, with special attention to functionality and user experience, is indispensable. The challenge becomes even more complex on the Android platform, which poses a number of unique issues. The keyword "fragmentation" symbolizes the biggest obstacle to broad testing of mobile applications and refers to the difficulties caused by Android devices being released onto the market in all shapes, sizes, and configurations. This article describes how Android emulators, combined with a few tricks and simple practices, can offer broad testing coverage across a wide range of device types.
Introduction – Testing in a fragmented device landscape
One of the biggest challenges in an Android developer's daily routine is the wide range of end devices and operating system versions. According to a study conducted by OpenSignal, in July 2013 there were more than 11,828 different Android end-user devices available in the market, all of which differed in type, size, screen resolution, and specific configuration. Compared with the previous year's survey, which recorded only 3,997 different devices, this is a rapidly growing obstacle.
Figure 1. Distribution of 11,828 Android device types (OpenSignal Study, July 2013) [1]

From a mobile app development point of view, there are four fundamental characteristics defining an end device:

1. OS: The Android operating system version (1.1 to 4.3), which is technically defined by the "API level" (1 to 18).
2. Display: The screen is predominantly defined by the screen resolution (measured in pixels), screen density (measured in DPI), and/or screen size (measured diagonally in inches).
3. CPU: The "application binary interface" (ABI) defines the instruction set of the CPU. The main distinction here is made between ARM and Intel-based CPUs.
4. Memory: A device has working memory (RAM) and the predefined heap memory of the Dalvik VM (VM heap).

It is the first two characteristics, OS and display, where special care needs to be taken, as they are directly noticeable by the end user and should be continuously and rigorously covered by testing. As for Android versions, in July 2013 there were eight different versions running simultaneously in the market, leading to inevitable fragmentation. Three versions together accounted for almost 90 % of devices: 34.1 % were running Gingerbread (2.3.3–2.3.7), 32.3 % Jelly Bean (4.1.x), and 23.3 % Ice Cream Sandwich (4.0.3–4.0.4).
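As an illustration, these four characteristics can be captured in a small data structure that test tooling can work with. This is only a sketch: the field names are made up, and the ABI and VM-heap values in the example are typical assumptions rather than figures from the article.

```python
from dataclasses import dataclass

# Sketch of the four characteristics that define an Android end device.
# Field names are illustrative, not taken from any framework.
@dataclass(frozen=True)
class DeviceProfile:
    api_level: int       # OS: Android version, expressed as API level (1-18 in 2013)
    resolution: tuple    # Display: width x height in pixels
    density_dpi: int     # Display: screen density in dpi
    size_inches: float   # Display: diagonal screen size
    abi: str             # CPU: application binary interface, e.g. "armeabi-v7a" or "x86"
    ram_mb: int          # Memory: working memory (RAM)
    vm_heap_mb: int      # Memory: Dalvik VM heap limit

# Example: a Galaxy S II-like configuration (Android 2.3.4 corresponds to API level 10;
# ABI and VM heap are assumed values).
galaxy_s2 = DeviceProfile(api_level=10, resolution=(480, 800), density_dpi=240,
                          size_inches=4.3, abi="armeabi-v7a", ram_mb=1024, vm_heap_mb=48)
```

A profile like this can later serve as the parameter set for creating a matching emulator configuration.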
Testing Experience – 24/2013
Figure 2. Distribution of 16 Android Versions (OpenSignal Study, July 2013 [1])
Considering the device’s displays, a study from April 2013 conducted by
TechCrunch showed that the vast majority (79.9 %) of active devices are
using the “normal” screen size that is anywhere between 3 and 4.5 inches
in size. On these devices the screen densities vary between “mdpi” (~160
dpi), “hdpi” (~240 dpi) and “xhdpi” (~320 dpi). The exception to the rule,
with 9.5 %, is the category of devices with a low density “ldpi” (~120 dpi)
and a small screen size.
With the help of the 2013 Handset Detection Study, it is easy to find a list of representative devices. An interesting piece of trivia is that 30 % of Android users across India have a device with a very low resolution of 240 × 320 pixels, as seen with the Samsung Galaxy Y S5360 in the device list in Figure 5. In addition, the 480 × 800 pixel resolution is currently the most widely used (seen in the Samsung Galaxy S II).
Figure 3. Distribution of common screen sizes and densities ("buckets") (Google Study, April 2013) [2]

If this diversity is ignored during the quality assurance process, it can safely be expected that bugs will sneak into the app, followed by a storm of bug reports and ending with negative user reviews in the Google Play Store. Thus, the question at hand is: how can you practically tackle this challenge with a reasonable level of testing effort? Defining test cases and an accompanying testing process is an effective weapon for dealing with this challenge.

Test cases – "Where", "What", "How", and "When" to test?

The "What"

Mobile apps must provide the best user experience as well as be displayed correctly (UI testing) on various smartphones and tablets of different sizes and resolutions (keyword: "responsive design"). At the same time, apps must be functional and compatible (compatibility testing) with as many device specifications as possible (memory, CPU, sensors, etc.).
The “Where”
To save expensive testing time, we recommend first reducing the previously mentioned set of 32 combinations of Android versions and display types to 5–10 variations representing the top devices used in the market. When choosing these reference devices, you should ensure that a sufficiently wide range of versions and screen types is covered.
As a reference, you can use OpenSignal's survey to help select the most widely used devices, or use the infographic from Handset Detection [3]. For curious minds, the mapping of a screen's resolution and size to the density buckets ("ldpi", "mdpi", etc.) and size buckets ("small", "normal", etc.) used in the statistics above can be found in the Android documentation [5].
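As a rough sketch of that mapping, the generalized densities from the Android documentation can be used (ldpi ≈ 120, mdpi ≈ 160, tvdpi ≈ 213, hdpi ≈ 240, xhdpi ≈ 320 dpi). The nearest-bucket rule below is a simplification for illustration, not the platform's exact assignment logic.

```python
# Generalized densities per bucket, as listed in the Android screens documentation.
BUCKETS = {"ldpi": 120, "mdpi": 160, "tvdpi": 213, "hdpi": 240, "xhdpi": 320}

def density_bucket(dpi):
    # Pick the bucket whose generalized density is closest to the actual dpi.
    # (A simplification: the platform itself assigns buckets, this is only
    # a plausible approximation for reasoning about device statistics.)
    return min(BUCKETS, key=lambda name: abs(BUCKETS[name] - dpi))
```

For example, `density_bucket(240)` yields `"hdpi"` and `density_bucket(160)` yields `"mdpi"`.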
Device model                 | Display spec. (W × H, size) | Display buckets (size, density) | Lowest Android version | RAM
Samsung I9300 Galaxy S III   | 720 × 1280, 4.8″            | normal, xhdpi                   | 4.0.4                  | 1024 MB
Samsung I9100 Galaxy S II    | 480 × 800, 4.3″             | normal, hdpi                    | 2.3.4                  | 1024 MB
Samsung Galaxy Y S5360       | 240 × 320, 3.0″             | small, ldpi                     | 2.3.5                  | 290 MB
LG Nexus 4 E960              | 768 × 1280, 4.7″            | normal, xhdpi                   | 4.2                    | 2048 MB
Asus Google Nexus 7          | 800 × 1280, 7.0″            | large, tvdpi                    | 4.1                    | 1024 MB
HTC Wildfire S               | 320 × 480, 3.2″             | small, ldpi                     | 2.3                    | 512 MB

Figure 5. Six examples of Android end devices with high diversity and distribution (Handset Detection Study, February 2013) [3]
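Such a reference list is also useful as plain data that automated test runs can iterate over, and it makes coverage gaps easy to spot. A minimal sketch (the dict keys are illustrative, the values come from the table above):

```python
# The reference-device matrix from Figure 5 as data for automated runs.
REFERENCE_DEVICES = [
    {"model": "Samsung I9300 Galaxy S III", "bucket": ("normal", "xhdpi"), "android": "4.0.4"},
    {"model": "Samsung I9100 Galaxy S II",  "bucket": ("normal", "hdpi"),  "android": "2.3.4"},
    {"model": "Samsung Galaxy Y S5360",     "bucket": ("small",  "ldpi"),  "android": "2.3.5"},
    {"model": "LG Nexus 4 E960",            "bucket": ("normal", "xhdpi"), "android": "4.2"},
    {"model": "Asus Google Nexus 7",        "bucket": ("large",  "tvdpi"), "android": "4.1"},
    {"model": "HTC Wildfire S",             "bucket": ("small",  "ldpi"),  "android": "2.3"},
]

def covered_densities(devices):
    # Which density buckets does the current selection cover?
    return {d["bucket"][1] for d in devices}
```

Running `covered_densities(REFERENCE_DEVICES)` shows that "mdpi" is not covered by this particular selection, a gap worth weighing against the usage statistics cited earlier.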
Together with the previously derived “direct” fragmentation issues (concerning Android versions and screen characteristics), the “contextual”
fragmentation has its own pivotal role. This role involves the variety of
different situations or contexts in which the user is using the end device
in his or her own environment. As an example, you should consider stress
[4] and exploratory testing to ensure flawless performance if there is an
unsteady network connection, interruptions of incoming calls, locking
of the screen, etc.
Hardware: GPS, camera, coverage, attitude sensor, SD card, battery, memory, CPU, display
Software: API level, vendor modifications, installed apps, system settings
Environment: accuracy of position, slow data connection, reception breaks, empty battery, offline capabilities
User: rotation, phone calls, modify volume, device buttons, screen (un)lock

Figure 6. Different aspects of testing Android devices
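Several of the environment and user aspects listed above can be simulated through the Android emulator console. The sketch below collects standard console commands into a reusable scenario; the phone number and battery level are made-up examples.

```python
# A scripted "contextual" scenario, expressed as Android emulator console
# commands (the commands are standard; number and percentage are examples).
SCENARIO_UNSTEADY_CONNECTION = [
    "network speed edge",    # throttle bandwidth to an EDGE-like profile
    "network delay gprs",    # add GPRS-like latency
    "gsm call +4915551234",  # simulate an incoming-call interruption
    "gsm cancel +4915551234",
    "power capacity 5",      # nearly empty battery
]

def console_script(commands):
    # Join the commands into a script that could be piped into the console,
    # e.g. via `telnet localhost 5554` for the first emulator instance.
    return "\n".join(commands) + "\n"
```

Keeping such scenarios as data makes it easy to replay the same "unsteady" conditions during every exploratory session.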
It is necessary to prepare in advance a list of all possible test scenarios covering the app's most common functions. Early detection of bugs and easy modification of the source code are only possible through continuous testing.
The “How”
One pragmatic way to take this broad variety into account is the Android
emulator – offering an adjustable tool that can virtually imitate Android
end-user devices on a standard PC. Briefly, the Android emulator is the ideal tool in the QA process for performing continuous regression testing (UI, unit, and integration tests) with various device configurations (compatibility testing). During exploratory testing, the emulator can be configured for a wide range of scenarios; for example, it can be set up to simulate changes in connection speed or quality.
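As a sketch of such a configuration, the emulator's standard -netspeed and -netdelay command-line options can be passed when launching a virtual device. The AVD name used here is a made-up example.

```python
# Build an emulator launch command with a degraded network profile.
# -netspeed and -netdelay are standard emulator options; "nexus4_api17"
# is a hypothetical AVD name.
def emulator_cmd(avd, netspeed="full", netdelay="none"):
    return ["emulator", "-avd", avd, "-netspeed", netspeed, "-netdelay", netdelay]

cmd = emulator_cmd("nexus4_api17", netspeed="edge", netdelay="gprs")
# subprocess.Popen(cmd) would start the emulator with EDGE-like bandwidth
# and GPRS-like latency for the whole session.
```

Constructing the command as a list keeps the launch parameters per reference device in one place, ready to be varied across regression runs.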
Nevertheless, QA on real devices is indispensable. In practice, the virtual devices used as references can still differ in some small (yet, for certain apps, very essential) aspects, such as the lack of provider-specific adjustments in the Android operating system or missing support for headphones and Bluetooth. Performance on real hardware also plays a significant role in the evaluation process and should be tested (usability testing) on as many end devices as possible, taking into account aspects such as touch hardware support and the physical form of the device.
The "When"

Now that we have defined where (reference devices), what (test scenarios), and how (Android emulator and real devices) to test, it is crucial to sketch a process and define when to execute which test scenarios. We therefore suggest the following two-stage process:
1. Regression testing with virtual devices. This consists of continuous
and automated regression testing on virtual reference devices to
identify essential errors early on. The rationale here is to identify
bugs quickly and cost-efficiently.
2. Acceptance testing with real devices. This involves intensive (predominantly manual) testing on real devices before releasing the app to the Google Play Store in a "staged rollout" (for example, with alpha and beta tester groups in Google Play [6]).
In the first stage, test automation greatly helps to implement this strategy in an affordable manner. In this stage, only test cases which can be easily automated (i.e., executed on a daily basis) should be included.
In the ongoing development of an app, this automated testing provides a
safety net for the developers and testers. The daily test runs ensure that
core functionality is working properly, the overall stability and quality
of the app is transparently reflected by the test statistics, and identified
regressions can be easily correlated with recent changes. Such tests can
be easily designed and directly recorded from the tester’s computer using
SaaS solutions such as TestObject’s UI mobile app testing in the cloud.
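The test statistics mentioned above can be produced by condensing each daily run into a short per-device summary, so overall stability is visible at a glance. A minimal sketch (the input format, device names, and test names are illustrative):

```python
# Condense a daily regression run into per-device pass statistics.
def summarize(results):
    # results: {device_name: {test_name: passed (bool)}}
    lines = []
    for device, tests in sorted(results.items()):
        passed = sum(1 for ok in tests.values() if ok)
        lines.append("%-25s %d/%d passed" % (device, passed, len(tests)))
    return "\n".join(lines)

# Hypothetical results from one nightly run on two virtual reference devices.
daily = {
    "Galaxy S II (AVD)": {"login": True, "checkout": True, "rotate": False},
    "Nexus 7 (AVD)":     {"login": True, "checkout": True, "rotate": True},
}
print(summarize(daily))
```

Tracking such a summary day over day makes it straightforward to correlate a regression with the changes that went in since the last green run.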
If and only if this stage has been executed successfully, the process will
continue with labor-intensive testing in the second stage. The idea here
is to only invest testing resources if the core functionality passes the
automated test, enabling the testers to focus on advanced scenarios. This
stage might include test cases such as performance testing, usability
testing, or compatibility testing. Combining the two approaches yields
a powerful QA strategy for mobile apps [7].
Conclusion – Doing Testing Right
Used in the right manner, testing can be a powerful tool in the fight
against the fragmented Android landscape. The crucial component of
an effective testing strategy is to define custom-tailored test cases for
the application at hand and to define a workflow or process that streamlines testing. Testing a mobile app is a major challenge, but it can be solved efficiently with a structured approach and the right set of tools and expertise.
References

[1] http://opensignal.com/reports/fragmentation-2013/
[2] http://techcrunch.com/2013/04/03/android-activations-tweak/
[3] http://www.handsetdetection.com/blog/where-in-the-world-are-android-devices-showing-up-infographic/
[4] http://testobject.com/blog/2013/08/find-bugs-automatically-with-random-testing-in-continuous.html
[5] http://developer.android.com/guide/practices/screens_support.html
[6] https://support.google.com/googleplay/android-developer/answer/3131213?hl=en
[7] http://testobject.com/blog/2013/11/a-testing-process-that-fits-your-mobile-app.html
◼
> about the author

Erik Nijkamp (erik.nijkamp@testobject.com) is the CEO of TestObject GmbH (based in Hennigsdorf, a suburb of Berlin). TestObject specializes in QA solutions for the mobile sector and offers a cloud-based app testing service which radically simplifies UI testing, offering test automation with an intuitive test recorder and ready-to-use tests for any mobile app.

As the product owner, he focuses on the strategic alignment of TestObject's business solutions. During his time in Silicon Valley (IBM Research US) and in consulting (IBM Deutschland GmbH) he gained highly valuable experience in the high-tech sector.