Common Mobile Testing Pitfalls You Ought to Know About

Whether this is your first foray into mobile application development or you've spent years in the field, expecting the unexpected works in your favour here. Even with a robust testing strategy in place, you can (and eventually will) run into stumbling blocks that just aren't as prevalent when testing web or desktop applications.

Now we're not saying web or desktop testing experience doesn't translate to mobile app testing – it does. But certain key areas are often missed where the two mediums overlap, giving rise to a host of new and unique issues you have to deal with.

Did you check the bandwidth?

A long time ago (what seems like aeons at this point), testers had to worry about how much bandwidth their web pages consumed. To get around this, JavaScript files were often minified and images compressed to the point where they were barely recognisable. This is less of a concern today for desktop apps and websites, and the same could be said for mobile apps with 5G on the horizon. And yet QA technicians still need to keep one eye on a mobile app's bandwidth usage. Even in busy, densely populated cities it's not uncommon to come across pockets of awful connectivity, and if you dare to travel outside the metropolis, your signal will very quickly be clinging on for dear life.

Developers need to understand this problem and design their applications to either fail gracefully or adapt to poor connectivity. Tests then need to exercise those capabilities thoroughly. For instance, how does your software behave if a crucial download is interrupted by a network breakdown? What does it do when it detects a slow connection? Does it fall back to downloading smaller versions of assets? These are all important considerations.
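To make those questions concrete, here is a minimal sketch of the two behaviours worth testing: picking a smaller asset variant on a slow connection, and resuming an interrupted download instead of failing. Everything here is illustrative – the function names, tiers, and thresholds are hypothetical, not from any real SDK.

```python
# Hypothetical adaptive-download logic. Thresholds and tier names are
# made up for illustration; a real app would tune these for itself.

def choose_asset_variant(bandwidth_kbps: float) -> str:
    """Pick an asset quality tier based on measured bandwidth."""
    if bandwidth_kbps < 256:       # very poor connection
        return "thumbnail"
    if bandwidth_kbps < 2048:      # slow mobile data
        return "compressed"
    return "full"

def resume_download(fetch_range) -> bytes:
    """Download a file, resuming from the last received byte on failure.

    `fetch_range(start)` stands in for an HTTP Range request: it returns
    the next chunk starting at byte `start`, an empty chunk at end of
    file, and may raise ConnectionError when the network drops.
    """
    received = b""
    while True:
        try:
            chunk = fetch_range(len(received))
        except ConnectionError:
            continue  # network dropped; retry from the current offset
        if not chunk:
            return received  # empty chunk signals end of file
        received += chunk
```

A test for this logic would simulate a flaky connection (a `fetch_range` that raises every other call) and assert that the final bytes match the original file – exactly the "interrupted download" scenario described above.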


And even if network speed or availability weren't an issue, the application could still download more than it should. Bytes can still be expensive. During testing, it is important to monitor how much and how often data is downloaded.
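One way to act on this during a test run is to record every request and compare the total against a data budget. In a real lab you'd pull these numbers from an intercepting proxy or the platform's network-stats APIs; the recorder below is a hypothetical stand-in to show the shape of the check.

```python
# Hypothetical data-usage recorder for a test session. A real setup
# would feed this from a proxy or platform network-stats APIs.
from collections import Counter

class DataUsageMonitor:
    def __init__(self, budget_bytes: int):
        self.budget_bytes = budget_bytes
        self.requests = []          # (url, size) pairs seen during the test

    def record(self, url: str, size: int) -> None:
        self.requests.append((url, size))

    @property
    def total_bytes(self) -> int:
        return sum(size for _, size in self.requests)

    def within_budget(self) -> bool:
        return self.total_bytes <= self.budget_bytes

    def duplicates(self) -> list:
        """URLs fetched more than once -- prime candidates for caching."""
        counts = Counter(url for url, _ in self.requests)
        return [url for url, n in counts.items() if n > 1]
```

Beyond the raw total, the duplicate check is often the more revealing one: an asset fetched twice in one session usually means a missing cache, and that's wasted money on a metered connection.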

Or the screen resolution madness?

Screen real estate has become a problem of late. A lot of people are moving to 4K monitors as the industry churns out affordable options. There, the problem is finding things to fill all the space you now have, rather than fitting everything in.

Mobile applications, on the other hand, are heading back towards the 640×480 VGA days, as that acts as a sweet spot for most devices. When testing software on mobile devices, ensure that the application behaves properly under the most restrictive of screen layouts. The same applies to "responsive" web applications. How does the application look in landscape orientation, or with font sizes cranked all the way up? Many devices support multi-window and "floating windows" – does the app look and behave properly in them?

Unlike a desktop application, a mobile app can switch orientation, screen share, or font size mid-operation. It's important to flush out these deficiencies during testing, define the acceptable combinations, and make sure the rest are locked out.
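Defining "the acceptable combinations" is easier when you enumerate them explicitly. The sketch below builds a test matrix over orientation, font scale, and window mode, and tags each combination as either expected to render or expected to be locked out; the particular values and the supported/locked-out split are illustrative, since each team defines its own.

```python
# Illustrative display-configuration matrix for layout testing.
# Values are examples; real projects define their own dimensions.
from itertools import product

ORIENTATIONS = ["portrait", "landscape"]
FONT_SCALES = [1.0, 1.3, 2.0]             # 2.0 ~ fonts cranked all the way up
WINDOW_MODES = ["fullscreen", "split", "floating"]

def layout_matrix(supported):
    """Yield ((orientation, font_scale, window_mode), verdict) pairs.

    `supported` decides whether a combination should render normally or
    be locked out -- either way, it must be exercised during testing.
    """
    for combo in product(ORIENTATIONS, FONT_SCALES, WINDOW_MODES):
        yield combo, ("render" if supported(combo) else "locked-out")
```

For example, a team that rules out floating windows at the largest font scale would pass `supported = lambda c: not (c[2] == "floating" and c[1] == 2.0)` and still run the two locked-out combinations to confirm the app refuses them gracefully.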

What versions to support?

While there are still systems running Windows XP and Internet Explorer, their numbers are limited. With desktop apps, you can at least draw a line in the sand, support only specific browser or OS versions, and restrict your testing to a reasonable set of platforms.

This isn't the case on mobile. iOS fares relatively well, supporting devices several years old at the time of writing, but the Android world is the wild west. Even today, a large share of active Android devices run OS versions that are four or five years old.

This is a giant pain in the cushions for testers. You could restrict your app to a few versions of the OS, especially in the case of iOS, but there's no easy solution for Android. By trying to be more inclusive, you are stuck supporting the far more heterogeneous Android ecosystem and have to test across many devices and OS versions. It's no surprise, then, that most mobile testing labs bear a striking resemblance to a mobile phone store.

Fingers and modern UI

Developers of web and desktop applications don't worry too much about how large buttons and other key UI elements are, because those platforms are predominantly driven with a mouse – an excellent pointing device. Things aren't the same on mobile. Your fingers do most of the interacting with the UI, and they aren't the most precise pointing instruments.

As a mobile app tester, it would be wise to test how easy UI elements are to reach and select. After all, it is very easy to accidentally press the "delete all" button in the gallery when you wanted to press "copy" or "move". In cases like these, you might want to ask the development team to add countdowns or confirmations to avoid disaster.

Now while this is predominantly for the design and development team to sort out, as a tester it is your job to ensure none of these errors slips through the cracks. As a handy guide, most mobile OS makers publish detailed specifications on the minimum size touch targets should occupy, and it's generally a good idea to follow them. If nothing else, targets that are large enough tend to pass accessibility testing easily as well.
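Those published minimums lend themselves to an automated check. Material Design recommends touch targets of at least 48×48dp, and Apple's Human Interface Guidelines at least 44×44pt; the sketch below flags tappable elements that fall short. The element structure here is hypothetical – adapt it to whatever your UI test driver actually reports.

```python
# Sketch of a touch-target size check against the published platform
# minimums (Material Design: 48x48dp; Apple HIG: 44x44pt).
# The element dictionaries are a hypothetical test-driver format.

MIN_TARGET = {"android": 48, "ios": 44}   # dp and pt respectively

def undersized_targets(elements, platform: str) -> list:
    """Return ids of tappable elements smaller than the platform minimum."""
    minimum = MIN_TARGET[platform]
    return [
        e["id"] for e in elements
        if e.get("tappable") and (e["width"] < minimum or e["height"] < minimum)
    ]
```

Run against the gallery example above, a 40×40 "delete all" button would be flagged on both platforms – exactly the kind of mis-tap magnet this check is meant to catch before it ships.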

Adapting to the unique challenges of mobile application testing has its own rewards. Perhaps one day you'll encounter somebody who waxes lyrical about an app you helped shepherd out the door. But if you don't rise to the challenge, that same individual could be cursing it instead. So test well!