6 Most Common QA Test Lead/Manager Interview Questions (with Answers and our Tips)

STH is back with yet another interview series. This one is for the QA/Test lead position. We are going to cover a few of the most common but important QA test lead/manager interview questions and answers.

As always, we will follow the pattern of explanation based answers rather than politically correct ones. Let’s begin.

Typically QA interviewers test all interviewees in 3 major areas:

#1) Core technical knowledge and expertise
#2) Attitude
#3) Communication

Now that we are talking about a QA test lead interview, the process is similar and the way to assess communication remains the same.

Overall cohesiveness, conviction and clarity are a few of the factors that contribute to effective communication. When it comes to evaluating the first two areas for a QA test lead, we can divide the areas the QA lead interview questions might come from into 3 categories:

1) Technical Expertise
2) Team player attitude
3) Management skills

We will take a look at each of these and elaborate further.

Test Lead Interview Question on Technical Expertise:

This can be further divided into process and tools based skills. A few sample questions that can be asked are:

Q #1. What were your roles and responsibilities and how was your time divided between tasks in a project?

Normally a test lead works on the project just the way the other team members do. Only about 10% (an industry standard; this might differ from project to project) of the time is spent on coordination activities.

You can further break this down into saying:

  • 50%: testing activities (depending on the stage the project is in, this might be test planning, design or execution)
  • 20%: review
  • 10%: coordination
  • 20%: client communication and delivery management
STH’s tip:
Prepare ahead. Have all the numbers figured out ahead of time.

Read also => Test Lead Responsibilities

Q #2. What QA process do you use in your project and why?

When this question is asked of a QA team member, the idea is to assess their familiarity and comfort with the process in place. But when this question comes to the team lead, it is to assess your expertise in establishing the said process. The best way to go about this is to brainstorm.

A sample answer could go this way: currently, we follow a mix of traditional and Agile practices. We handle releases in short sprints, but within the sprints we still create a test plan and test scenarios (though not test cases) and report defects as we would in the waterfall model. To track progress we use a scrum board, and for defects we use Bugzilla. Even though our sprints are short, we make sure that all reviews, reports and metrics happen on time.

You can add more to this: if it is an onsite-offshore model project, if the dev and QA sprints are separated and lag behind one another, etc.

See also => QA processes in end to end real projects

Q #3. What do you consider to be your key accomplishments/initiatives?

Everyone wants a successful manager, not just a manager; hence this question.

Awards, performance ratings and company-wide recognition (a pat on the back, employee of the month, etc.) are all great. But do not discount the day-to-day accomplishments:

Maybe you have streamlined the reporting process, simplified a test plan, or created a document that can be used to sanity test a complex system with very minimal supervision, etc.

Q #4. Have you been involved in test estimation and how do you do it?

Test estimation gives an approximate idea of how much time, effort and resources are required to test. This will help determine the cost, schedules and feasibility for most projects. Test leads are approached for test estimation at the beginning of every project. Therefore, the answer to the question of whether test estimation was part of the job profile for a QA lead is “Yes”.

The ‘How’ part differs from team to team and lead to lead. If you have used function points or any other techniques, be sure to mention that.

Also, if you have not used those methods and based the estimation totally on historical data, intuition and experience- say so and provide a rationale for doing so.

For example: when I have to estimate my projects or CRs, I simply create basic, high-level test scenarios to get an idea of how many test cases I might be working with and their complexities. Field- or UI-level test cases can be written and run at a pace of about 50-100 per day per person. Medium-complexity test cases (with 10 or more steps) can be written at about 30 per day per person. High-complexity or end-to-end ones go at a rate of 8-10 per day per person. All of this is an approximation, and other factors such as contingencies, the team’s proficiency, available time, etc. have to be taken into consideration, but this has worked for me in most cases. So this would be my answer to this question.
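The back-of-the-envelope arithmetic above can be turned into a small calculator. This is only a sketch: the throughput rates are the rough figures quoted above, and the contingency factor is an assumption you should tune for your own team.

```python
# Rough test-design effort calculator based on per-person daily throughput.
# The rates below are the approximations quoted above -- adjust for your team.
RATES_PER_PERSON_DAY = {
    "simple": 75,   # field/UI-level cases: ~50-100 per day, midpoint used
    "medium": 30,   # ~10+ step cases
    "complex": 9,   # end-to-end cases: ~8-10 per day, midpoint used
}

def estimate_days(case_counts, team_size=1, contingency=0.2):
    """Return estimated working days, padded by a contingency factor."""
    days = sum(count / RATES_PER_PERSON_DAY[level]
               for level, count in case_counts.items())
    return round(days * (1 + contingency) / team_size, 1)

# Example: 150 simple, 60 medium and 18 complex cases, shared by 2 testers
print(estimate_days({"simple": 150, "medium": 60, "complex": 18}, team_size=2))  # 3.6
```

Padding with a contingency factor is deliberate: overestimating slightly is safer for a testing project than underestimating.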

STH Tips:

Estimations are approximations and are not always accurate. There will always be some give and take. But it is always better for a testing project to overestimate than underestimate.
It is also a good idea to talk about how you have sought the help of your team members in coming up with test scenarios and identifying complexities because this will establish you as a mentor, which every team lead should be.
Read also => How To Be a Good Team Mentor, Coach and a True Team-Defender in an Agile Testing World? – The Inspiration

Q #5. What tools do you use and why?

You should be proficient with QA process tools such as HP ALM (Quality Center), bug tracking software and automation software, along with all your team members.

In addition to that, if you use any management software such as MS Project or Agile management tools, highlight that experience and talk about how the tool has helped your day-to-day tasks.

For example: talk about how you use JIRA for simple defect and task management in your QA project. In addition, if you can talk about the JIRA Agile add-on and how it has helped with scrum board creation, planning your user stories, sprint planning, work tracking, reporting, etc., that would be great.
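As a small illustration of that kind of tool proficiency, a defect summary for a status report can be pulled straight from JIRA's REST search endpoint. The base URL and project key below are made-up placeholders; the `jql` query parameter is the standard way to filter issues.

```python
from urllib.parse import urlencode

# Hypothetical JIRA instance and project key -- replace with your own.
BASE_URL = "https://jira.example.com/rest/api/2/search"

def defect_search_url(project, sprint=None):
    """Build a JIRA REST search URL for open bugs, optionally per sprint."""
    jql = f"project = {project} AND issuetype = Bug AND status != Closed"
    if sprint:
        jql += f' AND sprint = "{sprint}"'
    return BASE_URL + "?" + urlencode({"jql": jql, "maxResults": 50})

print(defect_search_url("QA", sprint="Sprint 12"))
```

A lead who can show they pull their metrics from the tool, rather than assembling them by hand, makes a stronger case for tool expertise.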

Q #6. Process familiarity and mastery – if the process you follow at your workplace is waterfall, onsite-offshore, Agile or anything to that effect, expect detailed Q&A about its implementation, success, metrics, best practices and challenges, among other things.

For details check out the below links:

Onsite offshore software testing
Agile testing tutorials

There goes the first section. In the next test lead or test manager interview questions article, we will deal with team player attitude and management related questions.

As a parting note, I would like to bring to your attention that when answering questions in an interview, do not look at it as an examination. Look at it as a platform to brainstorm and put forth your point of view and your individual experiences.

About the author: This article is written by STH team member Swati S.

The Formula for Usability Testing: Part 2

In Part One of our series on usability testing (http://design.org/blog/the-formula-for-usability-testing-part-1), we discussed what testing is all about, why it’s important and the best times to do it. Now, let’s explore some of our UX agency’s (http://drawbackwards.com/) favorite strategies and best practices for gathering the data needed to make smart, user-centered product decisions.

Step 1: Determine User and Business Objectives

Why are customers using your product? What tasks are they trying to complete, and what problems are they trying to solve? These are your user objectives.

What are your organization’s goals for usability testing? Why are you considering doing it in the first place? What are you hoping to learn? These are your business objectives.


Documenting both sets of goals is key. If you focus solely on business objectives, you may not know whether users actually want or need your product. But if you focus solely on user objectives, you may build an amazing product that customers love but isn’t viable for your business. By outlining what both stakeholders are trying to accomplish and why, your team will be able to ask the right questions during testing and determine if your product is meeting those goals.

Step 2: Screen Usability Testing Participants

Before diving into user testing, it’s crucial to make sure that the participants are part of your product’s target audience so you can get relevant feedback. For example, if you’re designing a website hosting and management product, you wouldn’t want to test someone who has never managed a website before because that person wouldn’t be able to relate to the day-to-day tasks, challenges and feelings of your target audience. Instead, you’d want to screen a group of participants to ensure you’re talking to the right people in the first place.


During this screening phase, distribute a short screener survey to help find users who represent your ideal customers. We like SurveyMonkey (https://www.surveymonkey.com/), Google Forms (https://www.google.com/forms/about/) and Google Consumer Surveys (https://www.google.com/insights/consumersurveys/home) because they make the survey creation, distribution and analysis process easy. Plus, if you don’t already have a list of potential participants to survey, you can use these services to purchase responses from users who fit your target demographic.

As you’re gathering your survey group, start brainstorming questions that focus on behaviors (not demographics), such as:

  • Occupation
  • Work responsibilities
  • Personal interests
  • Digital behaviors and proficiency
  • Purchase-related intentions
  • Workspace and environmental conditions
  • Cultural norms and biases

The answers to these questions will help you home in on the group of people who are the best fit for your test. Although you can usually identify 80% of major UX issues with as few as 6 participants, the more data you can gather, the better.
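If you collect the screener answers in a structured form, the filtering itself can be mechanical. A minimal sketch, where the field names and thresholds are invented for the example rather than recommended criteria:

```python
# Screen survey respondents on behavior, not demographics.
# The criteria here are illustrative assumptions, not a recommended set.
def is_qualified(response):
    """Keep respondents who match the product's target behaviors."""
    return (
        response.get("manages_websites", False)          # relevant experience
        and response.get("digital_proficiency", 0) >= 3  # 1-5 self-rating
        and response.get("purchase_intent") in ("soon", "researching")
    )

respondents = [
    {"manages_websites": True, "digital_proficiency": 4, "purchase_intent": "soon"},
    {"manages_websites": False, "digital_proficiency": 5, "purchase_intent": "soon"},
]
qualified = [r for r in respondents if is_qualified(r)]
print(len(qualified))  # 1 -- only the first respondent qualifies
```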

Step 3: Conduct Usability Tests and Analyze Results

As you design your usability test, keep in mind that your primary goal is to observe how real people would use your product in a real environment. These tests can be conducted remotely or in person, as long as the user can perform tasks naturally.

There are plenty of tools that make it easy to moderate the test, record sessions and analyze results. Some of our favorites include:

  • Remote testing: GoToMeeting (http://www.gotomeeting.com/)
  • In-person testing: TechSmith Morae (https://www.techsmith.com/morae.html)
  • Testing prototypes: InVision (https://www.invisionapp.com/) for designing clickable prototypes, Lookback (https://lookback.io/) for recording sessions during testing
  • Testing existing sites and apps: FullStory (https://fullstory.com/) and Hotjar (https://www.hotjar.com/)

Before the test begins, set the user’s expectations about what the test is for and approximately how long it will take. It’s also important to reassure the user that the interface is being tested, not them. Otherwise your results may be skewed because they won’t admit when they can’t find something, or they’ll give you the answer they think you’re looking for.

During user testing, use a “think aloud” protocol (https://www.nngroup.com/articles/thinking-aloud-the-1-usability-tool/), where you ask participants non-leading questions and have them talk through how they would complete basic tasks without any guidance. Tell them to wait for permission to click through to the next step. Then, ask follow-up questions to learn more about whether their hopes aligned with their expectations, how they’re feeling at that moment and more.


For instance, let’s say you’re designing an app and want to test if the navigation is clear. Your conversation would go something like this:

Researcher: “Show me how you would change your notification settings. Where would you look first?”

Participant: “I would click on my avatar because that’s usually where settings are.”

Researcher: “Ok, please do that.”

::Participant clicks on avatar::

Researcher: “Is that what you expected to see?”

Participant: “Kind of. It has my contact and billing information, but not my notification settings.”

Researcher: “What did you hope to see?”

Participant: “I hoped I could update other settings here, like my notifications and password.”

During this test, the researchers would see that the interface did not work as they intended or meet the hopes of the user, and that the experience caused frustration. If they hear similar feedback from other users, they should consider moving the notification settings to the main account settings screen, making the navigation clearer so users know where to find them, or using a completely different tactic to meet their hopes and expectations. As a designer or product owner, it may not be the feedback you wanted to hear, but it’s the feedback you needed to hear.


Once the testing is complete, review the recorded sessions and data with your team, map the findings back to the objectives outlined in Step 1 and see where there are gaps. Then, you’ll have the information you need to improve the experience and guide your product to success.

Best Practices From the Pros

The process of facilitating usability tests is a science in and of itself. A well-designed and executed test will produce useful feedback, but making common mistakes could lead to biased or inadequate results. Here are some of the best practices we’ve learned over the years.

When conducting usability testing, DO:

  • Contact way more users than you will need. You won’t hear back from everyone you contact, so it’s good to have backup testers.
  • Provide a gift to thank users for their participation. However, be careful not to bribe them, or they may give skewed answers. We’ve found that something like a $25-50 Amazon gift card usually strikes the right balance.
  • Write a script to follow during testing, but be prepared to deviate from it to follow the user’s natural path. There is a lot of gold and many a-ha moments here, so don’t be a slave to the script.
  • Use a “think aloud” protocol (https://www.nngroup.com/articles/thinking-aloud-the-1-usability-tool/) during testing, where you encourage users to talk through their thought process and share whatever comes to mind.
  • Ask open-ended follow-up questions during testing (e.g., “Can you tell us more about that?”).

When conducting usability testing, DON’T:

  • Focus too much on demographics in the screening process. Things like age and gender don’t impact a user’s decisions as much as their context and behavior.
  • Lead users down a certain path or explain the interfaces during user testing (e.g., “This is helping you, right?” or “Let me explain what this does…”). This will skew the data.
  • Have different user segments (e.g., current customers and potential customers) complete the same test. Different segments often have unique needs and workflows, so separating them makes it easier to get focused results.
  • Force users to perform tasks that aren’t relevant to them. Irrelevant questions lead to irrelevant answers that cloud the rest of the data.
  • Make users feel like they are the ones being tested. It’s crucial to keep them in a positive, productive mindset and avoid making them feel like they need to tell you what you want to hear or lie when they don’t know an answer.
  • Go longer than 45 minutes per test. Users get fatigued after that amount of time and may stop providing quality feedback.


Now You Know…and Knowing is Half the Battle

Don’t you want to know that users love your product and that it meets their needs? Wouldn’t you want to realize sooner rather than later if you’re drifting off course so you can avoid wasting resources and putting your company at risk?

You may be able to find out if your product functionally works just by trying it yourself, but you can’t know the answers to these questions without putting on your lab coat and testing it with real users. Only then can you validate your hypothesis and know without a doubt that you’re designing products that will lead your users and business to success.

Thinking about doing usability testing for your product? Gathered some initial results, but not sure what to do with them? Reach out to our UX team at Drawbackwards (http://drawbackwards.com/contact) for help navigating the process and taking your product from good to great.

The Formula for Usability Testing: Part 1

Science is all about formulating a hypothesis, gathering data to test it and validating if it’s true or not. Whether scientists want to develop something new or make an improvement to an existing formula, they would never release it to the public without testing it first. No matter how great the idea is, they know that testing is the only way to prove their theory and know with reasonable certainty that it will work.

If scientists have proven the value of a testing process, like usability testing, why don’t more designers and business professionals do it?

In this two-part story, we’ll explore what usability testing is all about, why it’s a crucial step in the product design process and how to conduct tests like a pro.

An Ounce of Prevention is Worth a Pound of Cure

Usability testing is all about testing your product with real users to see if it works as you intended. Although this may seem like a logical idea that everyone can agree with, many organizations come up with endless excuses not to do it:

“It’s too expensive.”

“It takes too much time.”

“We don’t need it. We know what we’re doing.”

“We’ve already invested a ton of time and money in this direction. It wouldn’t make sense to go back now.”

These are real constraints that almost any company can relate to. But at the end of the day, they’re not good reasons to skip usability testing — they’re excuses. At Drawbackwards (http://www.drawbackwards.com/), we believe “Design Success = User Success.” That means no matter how certain you are about your product’s viability, there’s no substitute for real user feedback.


For example, as the music app Spotify (https://www.spotify.com/us/) grew, they made a big push to improve retention and user engagement. They had a hypothesis that one of the things holding them back from reaching their goals was the app’s navigation.

At the time, the iOS app had much of its navigation hidden behind a “hamburger” icon (three horizontal lines in the corner of the screen), and Spotify wondered if users were having a hard time finding the features they needed. After extensive testing with both new and existing users, Spotify reported (http://techcrunch.com/2016/05/03/spotify-ditches-the-controversial-hamburger-menu-in-ios-app-redesign/) that those who had access to a clearer tab bar menu that wasn’t hidden “ended up clicking 9% more in general and 30% more on actual menu items. The tests also revealed that reducing the number of options in the tab bar to five increased the reach of Spotify’s programmed content.” This higher engagement will boost retention and revenue, which is sure to pay for the usability testing many times over.

As Spotify experienced firsthand, when you skip testing, you run the risk of:

  • Self-designing a product that may satisfy you and your team, but doesn’t solve your users’ problems or reach your business goals
  • Seeing customer satisfaction and sales decline, potentially leading to failure
  • Missing huge user experience problems that could have been easily fixed if caught earlier
  • Abandoning good ideas that could have worked if the real UX problems had been properly identified in the first place
  • Losing opportunities for product or service improvements to competitors
  • Contributing to users’ overall frustration that nobody ever thinks about them when designing experiences

On the other hand, with product testing baked into your process, your team can:

  • Reveal usability problems and fix them before they turn into massive issues
  • Learn not only WHAT users do, but also WHY they do it
  • Align your team and executives around creating experiences for users, not themselves
  • Identify opportunities to improve the product using a task-oriented approach to testing, rather than a traditional focus group
  • Increase the likelihood of seeing a return on investment for money spent on design and development

So, would you rather spend tons of resources building a product and crossing your fingers that it will work, or confidently design products that you know will attract loyal fans?


It’s Always a Good Time for Usability Testing

Many designers and product managers conduct user testing after building their product. This is a good step to get feedback and confirm whether they hit the mark.

But if a scientist developed a new medicine, released it to the market and only then tested whether it worked, it could be too late to make corrections. They would have already spent significant time and money developing a formula that may not work — or worse, one that harms their patients.


The most successful scientists, UX designers and professionals test their product before, during and after development.

  • Before: Sets a benchmark or baseline for what the experience is currently like (Even if you’re developing a brand new product, it’s helpful to understand a user’s current workflow and identify ways your product will improve their experience.)
  • During: Test ideas to see how real users respond and make course corrections to avoid wasting resources
  • After: Validate the hypothesis, measure ROI and identify opportunities for improvement

Any testing is better than none, but to get the best results, find a way to weave it into every stage of the product lifecycle.

Next Up: User Testing Tips and Best Practices

So you’ve traded your suit for a lab coat and realized the value usability testing will have for your product. Now, how do you do it — and do it well? Stay tuned for part two of this story, where we’ll share tips on how to conduct successful screening and testing, plus our favorite tools and best practices to help you gather the best data possible.

In the meantime, if you have questions about usability testing in general or a specific testing challenge you’re facing, get in touch with our team at Drawbackwards (http://drawbackwards.com/contact).

10 Interview Questions to Hire a Mobile QA Manager

If you develop mobile apps, you will need to hire a QA manager at some point. They will help you manage test cases, improve and structure your test process, and ultimately grow your business. Here are 10 questions you should ask candidates during your job interview.

Let’s talk about what a mobile QA manager does first

Let’s start here before moving on to the questions a qualified QA manager should be able to answer. Ideally, a QA manager is the one person who ensures that a product meets the required quality standards, mainly in terms of reliability, usability and performance. This person should also be able to create a straightforward testing strategy as well as organise the test cases.

Now we can move on to the questions that should be asked during an interview. You can add the questions below to your list. The interview should not consist only of these questions, and they can be asked in the order you prefer or that fits your interview schedule best.

1. In a complex mobile testing environment, what do you think are the biggest challenges?

Ideally your candidate should mention four “typical” mobile testing challenges like:

  • Android device fragmentation, mainly regarding OS and OS versions, caused by manufacturer-specific differences based on hardware, price and local market
  • Screen size fragmentation
  • Importance of localization
  • Importance of different mobile networks and user mobility

2. As a QA manager, what do you think is the best approach to test apps extensively?

Here you would expect the candidate to tell you about their extensive knowledge of manual testing and automated testing, and how it is a good idea to combine both of them to save time and money.

3. Staying on the topic of the previous question (manual and automated testing), which test cases should be automated and which ones performed manually, in your opinion?

At this point there are certain guidelines to decide which test cases should be automated and which shouldn’t.

The candidate should talk about a few of these points when talking about test automation:

  • Automate the most frequent test cases: all the use cases that are frequently performed manually could be automated instead
  • Automate test cases that are easy to automate
  • Automate test cases that have predictable and easily verifiable results
  • Automate the most tedious manual test cases
  • Automate test cases that are impossible to perform manually
  • Automate test cases that run on several different hardware or software platforms and configurations

When talking about manual testing these are important points:

  • Manual testing should be performed whenever the test case is too complicated and would take a lot of time to establish an automated test case for.
  • If certain parts of the app will be changed in the near future, it is advisable not to start writing scripts for these test cases, because they will be unusable very soon.
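The guidelines above can be condensed into a simple decision helper a candidate might describe. The weights and thresholds here are illustrative assumptions, not an industry standard:

```python
# Decide whether a test case is a good automation candidate, following the
# guidelines above. Field names and thresholds are illustrative assumptions.
def should_automate(case):
    if case.get("ui_changing_soon"):        # scripts would go stale quickly
        return False
    if case.get("impossible_manually"):     # e.g. load tests, many configs
        return True
    score = 0
    score += 2 if case.get("runs_per_release", 0) >= 5 else 0   # frequent
    score += 1 if case.get("result_easily_verified") else 0     # predictable
    score += 1 if case.get("tedious_manually") else 0
    return score >= 2

smoke_login = {"runs_per_release": 20, "result_easily_verified": True}
print(should_automate(smoke_login))        # True
one_off_exploratory = {"runs_per_release": 1}
print(should_automate(one_off_exploratory))  # False
```

The point of the exercise is the ordering of the checks: imminent UI churn vetoes automation outright, and impossibility of manual execution mandates it, before any cost/benefit scoring applies.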

4. What mobile test automation frameworks are you familiar with and could you tell us about their pros and cons?

This question follows directly from automated testing, so questions on a connected topic should be asked in sequence. If the candidate nominates a testing framework, they should be able to talk about its pros and cons. It would be beneficial if they mentioned well-known frameworks like UI Automation, Robotium, Calabash, Appium, and Espresso, even if they will not be the ones writing the scripts.


5. How can mobile test automation be leveraged effectively?

There is no textbook answer here, because it depends on the implemented strategy, but a good answer should highlight some of these aspects:

  • Test automation can deliver information fast, but it does not have great value if the test results do not reach a developer
  • A QA manager’s job would be to provide the right developer with the right information (as much of it as possible, e.g. screenshots, logs and videos) so they can fix the issues
  • CI integration would be a valuable tool: it would help you integrate the test infrastructure into your own development pipeline to help information flow faster
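
The “right information to the right developer” idea can be sketched as a small routing step in the test pipeline. The component names, owner addresses and report shape below are invented for illustration; they are not part of any particular CI tool:

```python
# Hypothetical sketch: route a failed test's artifacts (screenshot,
# logs, video) to the owner of the component under test. The mapping
# and addresses are invented examples.

COMPONENT_OWNERS = {
    "login": "alice@example.com",
    "checkout": "bob@example.com",
}

def build_failure_report(test_name, component, artifacts):
    """Bundle everything a developer needs to reproduce the failure."""
    return {
        "test": test_name,
        # Unowned components fall back to a triage queue.
        "assignee": COMPONENT_OWNERS.get(component, "qa-triage@example.com"),
        "artifacts": artifacts,  # e.g. screenshots, device logs, videos
    }

report = build_failure_report(
    "test_login_wrong_password",
    "login",
    ["screenshot.png", "device.log", "run.mp4"],
)
print(report["assignee"])  # alice@example.com
```

In a real CI setup this report would be attached to a ticket or chat notification so the failure reaches the developer without a manual triage step.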

6. How would you define a good test case? And a bad test case? Can you give us examples?

Your future QA Manager should be able to give you a clear example of a bad and a good test case. A good test case is defined by a short but clear description of what is expected from the app.

The example for both could look like this:


There are two main kinds of bad test case. The first is very undescriptive, e.g. “install, launch and login”: you don’t really know what exactly to expect from the test. The second is a test case that tries to test many different functionalities at the same time: you don’t concentrate on just one aspect of the app, and you might miss other important functionality.


Here is an example of a good test case: install and launch the application. Try to log in with user info@interview.com and password “abcd”, then wait 5 seconds for the app to respond and show you the app interface after the login.
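
To make the structure of that good test case concrete, here is a runnable sketch against a fake in-memory app. The `FakeApp` class and its methods are invented stand-ins for a real driver (e.g. an Appium session); only the shape of the test is the point:

```python
# Hypothetical sketch: the "good" login test case above, written against
# a fake app object so it runs standalone. FakeApp is an invented stand-in.

class FakeApp:
    def __init__(self):
        self.installed = self.launched = self.logged_in = False

    def install(self):
        self.installed = True

    def launch(self):
        assert self.installed, "install before launch"
        self.launched = True

    def login(self, user, password):
        # The fake accepts exactly the credentials from the test case.
        self.logged_in = (user == "info@interview.com" and password == "abcd")
        return self.logged_in

def test_login_shows_app_interface():
    """One focused goal, explicit steps, a verifiable expected result."""
    app = FakeApp()
    app.install()
    app.launch()
    assert app.login("info@interview.com", "abcd")
    # A real run would now wait up to 5 seconds for the main
    # interface to appear before asserting on it.

test_login_shows_app_interface()
print("good test case passed")
```

Note how the test checks exactly one behaviour with explicit inputs; a “bad” version would either omit the expected result or pile several unrelated checks into the same case.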

7. How do you decide on mobile devices, OS and OS versions for testing?

The device choice is always tough, but there are some ways to decide which mobile devices are best suited for your app testing. The candidate’s answer should include the fact that most apps nowadays are global apps: ideally you should use global sources if your app is meant to be available worldwide, and otherwise research the local market. That can be done by analyzing the most used/sold devices, either globally or in the local market.

One other criterion the candidate could point to is analyzing the app’s download statistics, which contain information about the most popular devices. They should also think of checking the app’s reviews to find out whether users report issues on specific devices. If the app hasn’t been released yet, you obviously don’t have any data for this strategy.

Ideally the candidate will tell you to combine both popular devices and app store statistics, based on the functionality of the app and the target audience.
Device trends are also a good criterion, as is looking at which devices are being announced and how much user expectation there is for them.
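
That combination of global market share and the app’s own download statistics can be sketched as a simple weighted ranking. All device names and figures below are invented for illustration:

```python
# Hypothetical sketch of the device-selection criteria above: combine
# market-share data with the app's own download statistics and pick the
# devices that cover the most users. All figures are invented.

MARKET_SHARE = {"Galaxy S6": 0.12, "iPhone 6": 0.18, "Nexus 5": 0.05}
APP_DOWNLOADS = {"Galaxy S6": 4200, "iPhone 6": 6100, "Moto G": 900}

def pick_test_devices(market, downloads, n=2):
    """Rank devices by market share plus share of our own downloads."""
    total_downloads = sum(downloads.values()) or 1
    devices = set(market) | set(downloads)
    # Weight global popularity and our own audience equally.
    scored = {
        d: market.get(d, 0.0) + downloads.get(d, 0) / total_downloads
        for d in devices
    }
    return sorted(scored, key=scored.get, reverse=True)[:n]

print(pick_test_devices(MARKET_SHARE, APP_DOWNLOADS))
```

For a released app the download data will usually dominate; for an unreleased app only the market-share term is available, which matches the caveat above.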

8. What do you think about Continuous Integration in the development process?

The candidate’s answer should include that continuous integration is an important part of mobile application testing, because the development process should be an agile one. Ideally they have already had experience with Jenkins, Travis CI or another CI tool, and can give you their opinion.


9. Imagine you are the first QA manager joining our company. What would be the first three things you’d do?

Many companies, especially smaller ones, do not have a formal testing process, so it is important for the QA Manager to ask questions about how testing works now. They should then outline what they would do to make the process straightforward, and how they would start introducing a proper testing plan and strategy.

Small companies can start with a spreadsheet of about 20 test cases (depending on how complicated the app is and how comprehensively it needs to be tested) and build from there. The test cases will grow over time, leading to the inevitable adoption of test management tools, tracking tools, CI ^(http://testosoft.com/goto/https://saucelabs.com/blog/10-tools-every-mobile-tester-should-know), etc.

10. If you found yourself working on a project for which no test automation has been done yet, and you only had a very limited amount of resources, what kind of tests would you aim to set up?

In this case the candidate should think about the most important things to test and suggest covering critical features, or known issues the app already has (if it has been released). They should also advise doing smoke testing, which will reveal failures simple but severe enough to show that the app or the new feature is not ready for release yet.
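
That prioritisation under a limited budget can be sketched as a small ranking function. The feature names, flags and budget are invented examples:

```python
# Hypothetical sketch: with limited resources, pick the features to
# smoke-test first - critical features and known problem areas before
# anything else. All names and counts are invented.

FEATURES = [
    {"name": "launch",   "critical": True,  "known_issues": 0},
    {"name": "login",    "critical": True,  "known_issues": 2},
    {"name": "settings", "critical": False, "known_issues": 0},
    {"name": "payments", "critical": True,  "known_issues": 1},
    {"name": "themes",   "critical": False, "known_issues": 3},
]

def smoke_suite(features, budget=3):
    """Rank by criticality first, then by known issues, and cut at budget."""
    ranked = sorted(
        features,
        key=lambda f: (f["critical"], f["known_issues"]),
        reverse=True,
    )
    return [f["name"] for f in ranked[:budget]]

print(smoke_suite(FEATURES))  # ['login', 'payments', 'launch']
```

Anything a smoke run of these top features catches is, by construction, severe enough to block a release.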

What are Mobile App Testing Challenges?

Mobile app testing is more complex than testing traditional desktop and web applications and therefore developers and testers have to face a whole set of challenges. These grow daily since the mobile testing field is still very fresh. In this article, we will focus on the main ones. Finding a solution to overcome all of them depends on the testing strategy that testers and developers apply to their agile development cycle.


Based on our experience, we have highlighted the most common issues mobile testers face daily. However, there are many other case-specific challenges that we did not include in our list.

Device fragmentation

As of August 2015, there were more than 24,000 different Android devices, and we can only imagine how that number has grown over the past year. While waiting for this year’s Android device fragmentation report, we can show you how it looked in 2015.

Click here ^(http://testosoft.com/goto/http://opensignal.com/reports/2015/08/android-fragmentation/embed/fragmentation-main.php) to see the detailed Android fragmentation image.

All of these devices have different sizes, shapes and hardware, as well as different software, which brings a whole new set of challenges.

Screen size and OS fragmentation

Although we were talking about Android device fragmentation above, mainly because Apple has fewer devices, both platforms count when talking about screen size fragmentation. Many of those 24,000 different Android devices have different screen sizes, but Apple has many screen sizes as well, ranging from the 3.5” of the iPhone 4, which is still a popular device, to the 12.9” of the iPad Pro. So it’s not just Android that is fragmented.


Together with screen size fragmentation we also have to talk about OS fragmentation and the different versions in use, which vary by geolocation and manufacturer. People can still be using very “old” mobile devices, and they can buy devices with older OS versions and no means of updating them to a newer one. Android still has 11 different OS versions circulating ^(http://testosoft.com/goto/https://developer.android.com/about/dashboards/index.html).
Modifications to the OS made by manufacturers make everything more complicated. Apple has fewer obstacles in this department, but developers and testers still have to face mobile devices on iOS 6.0.1, which no longer gets updates.

Manufacturer fragmentation

Another difficult thing to master is manufacturer fragmentation. In 2012 there were “just” about 500 mobile device manufacturers, but by 2015 that number had more than doubled, reaching 1,200 different manufacturers.


This manufacturer fragmentation feeds into device fragmentation overall, which is not just about how a device looks but also about the hardware and software changes manufacturers use to distinguish themselves from each other. These small changes will affect how any app works, making testing even more important.


Localisation

Localisation can be counted as one of the main mobile app testing challenges. It’s not just the language of the app that can be changed, but also how the app interacts with the rest of the mobile device and how users feel about being able to switch the app to their own language. Every language added to the app should be seen as a new opportunity to penetrate a new market, but it still remains an obstacle to overcome.

Mobile network operators and users mobility

Directly connected to localisation is actual user mobility. People nowadays travel very often, changing networks and roaming, so making sure that apps work when users need them is crucial, whether online or offline, with a weak signal, etc. A typical example is mobile plane boarding passes. Let’s say you have a ticket/boarding pass wallet app, your user has saved a plane boarding pass in it, and the app won’t open without mobile data: that user has no means to board their plane!

Different application types

Apart from all the mobile app testing challenges mentioned above, there is one that is specific not to the mobile device but to the app itself. Apps need to be tested in different ways, because mobile apps can be built differently: they can be native, hybrid or web applications (mobile-designed web pages). All of these elements play an important role in how the app will be tested.

Many different testing tools

Another obstacle for developers and testers is choosing the right tools for mobile app testing. There are so many tools out there that it is nearly impossible to choose unless you have a mobile testing strategy and know exactly what you need to make your testing plan happen. You can read about our favourite testing tools here ^(http://testosoft.com/goto/https://saucelabs.com/blog/10-tools-every-mobile-tester-should-know).

Agile Development

Agile development is not a mobile testing challenge per se, but it is a challenging aspect of developing a mobile app, because the development process should be agile in order to react quickly to changes and to users’ requirements. Becoming agile means gaining agility across the whole process, day after day: improving the process, running small sprints for every new feature or bug fix, and turning the whole process into a more dynamic one.

What Is Mobile Application Testing?

The number of consumer and enterprise mobile apps has grown exponentially over the last few years, leaving the end user with a humongous number of apps to choose from. But how does the user choose the app that will take up the precious space on their device? App quality is the key to any app’s success, and it can only be achieved through mobile application testing.

What is mobile application testing and why is it important?

App success can be measured by the number of downloads and positive comments, as well as by rapid implementation of new features and bug fixes. Above all, not to be underestimated: word of mouth. But how can you ensure an app’s success?

With mobile application testing.

This practice allows you to deliver better software and helps your app to be successful by testing its functionality, usability and consistency, growing your user base.

Testing is in fact an important part of every software development process, and with mobile apps it has become even more important. The growing number of mobile devices is leading to massive fragmentation of OSes, screen sizes, variations of the standard OS and more. With agile processes, software testing is performed frequently to assure the best possible quality. New features and bug fixes need to be released at short intervals, so that users don’t lose interest, and new features should not bring new bugs. Testing becomes vital for an app’s survival.

Main challenges of mobile application testing

Mobile application testing is more complex than testing traditional desktop and web applications and has its own set of challenges.

The biggest challenge is the sheer number of different mobile devices. As of August 2015 there were more than 24,000 different Android devices, and that number has only been growing over time. All of these devices have different sizes, shapes, software and software versions, as well as hardware, and you should test on enough devices to ensure that the majority of your users are happy.

People nowadays travel more often than they used to, taking their mobile devices with them: changing network and roaming is one more challenge that needs to be overcome. You definitely want to make sure that your app works when your users need it. This may be online or offline, with a weak signal etc.

Approaches for mobile application testing

As with software testing in general, you have two main approaches for mobile application testing: manual testing and automated testing. We will briefly explain both.


Manual testing relies on human input, analysis and evaluation. This approach is user-centric, focusing on explorative ways of monitoring whether a mobile application meets user requirements and expectations. You should test your app for look & feel and for usability, making sure that it is user-friendly. You should not use manual testing for everything, but only for about 20% of your tests; for the rest you can use automated testing.


Automated testing is the other mobile application testing approach. You should ideally set up as many test cases as possible, allowing you to automate about 80% of your testing. Certain kinds of test cases are particularly well suited to automation:

  • Automate the most frequent test-cases
  • Automate test cases that are easy to automate
  • Automate test cases that have predictable results
  • Automate the most tedious manual test cases
  • Automate test cases that are impossible to perform manually
  • Automate test cases that run on several different hardware or software platforms and configurations
  • Automate frequently used functionality

Mobile Testing Strategies to Build Better Products

As we find ourselves in an increasingly mobile-first environment, the need for ongoing mobile compatibility evaluation has increased, making a sound mobile testing strategy critical for application developers and site owners. With nearly every American owning a personal cell phone ^(http://testosoft.com/goto/http://www.pewinternet.org/fact-sheets/mobile-technology-fact-sheet/), the risk of a poor cross-device user experience warrants extensive testing. However, many companies and developers still don’t understand the value of testing, or of having an overarching strategy for how to test their systems for mobile capability and ensure a consistent, error-free user experience. Many Chief Information Officers do not have a formal testing strategy, leaving every new product open to interpretation in terms of what will be tested and how. With so many crucial display and functional details to verify, this lack of rigor can be very costly.


Mobile testing strategies should encompass many devices

Users should be able to interact with your product no matter how their phone or tablet operates. Ensuring this, however, has become more complex as it has become more crucial. For instance, basic details for application or software delivery, such as the user’s screen size or resolution specs, become quite complicated to manage with device proliferation ^(http://testosoft.com/goto/http://opensignal.com/reports/fragmentation.php). Though most phones are becoming totally touch-reliant, some users may still own BlackBerry phones or older-model Nokia phones with non-touch screens, so your mobile testing strategy has to include approaches for devices with touch screens, QWERTY keypads and trackballs. While building new software to support old features is never easy, a solid framework of mobile testing strategies can help companies plan, save time and create better user experiences.


Abundance of Android Screen Sizes

Also, as operating systems have evolved, proliferated, and become more sophisticated, understanding your product’s performance across these various systems has proven difficult and important. Many systems these days vary in terms of processing speed and memory size, which will alter the product’s performance. Having a system and strategy for how you test current mobile operating systems and how you incorporate the additional formats that launch every day is paramount. You need to plan and have a clear strategy for how often and in what manner you will test and retest your applications to ensure your product will provide a desired user experience on virtually any phone in the market. Otherwise, you risk increased acquisition costs and poor user retention from the “gaps” you missed with an incomplete mobile testing strategy.

So what should you consider when testing? If you don’t currently have a mobile testing strategy there are a few strategies that you should think about:

  • Know your environment. While there are many new phones coming out every season, there is always an upper echelon that a majority of users are purchasing and using. For example, Apple and Android are currently the front runners ^(http://testosoft.com/goto/http://appleinsider.com/articles/14/03/14/how-android-lost-global-open-market-share-to-apples-integrated-ios) among devices, so you are at a huge disadvantage if your product does not function on iOS or Android.
  • Test for now, but prepare for the future. With new phones released nearly every six months, there’s no way to know what the mobile market will look like even a couple of years out. It is therefore important to use the most current technologies while keeping the flexibility to adapt to major changes, so that your product stands the test of time. This includes upgrades or work-arounds that let users keep using your application even after changing phones.
  • Pick what’s popular. With so many different options available, there’s virtually no way to test everything. But ensuring that your product will operate properly on most of the main devices currently around is well worth the time.

In this day and age, it’s absolutely critical that tech companies, and any company interested in development, have testing strategies for their products. Cutting-edge companies like Google are testing mobile apps ^(http://testosoft.com/goto/http://googletesting.blogspot.com/2013/08/how-google-team-tests-mobile-apps.html) and seeing fantastic results. Without testing the right things at the right time, errors will easily be missed at launch, and users will have a bad experience, ultimately deleting your app and missing out on your service. Develop mobile testing strategies for your applications now, or risk building products that create bad user experiences for your customers.

Mobile Apps Market For The Enterprise

By 2016, the number of mobile app downloads is estimated to reach 44 billion, and the worldwide online app market is expected to grow from approximately $6.8 billion in 2010 to $25 billion by 2015.

As we get ready to launch the Awetest Native App Testing Module, we have been working closely with our enterprise (B2B) customers to understand their mobile roadmaps and, as expected, almost everyone we speak to has BIG plans for mobile. One of our customer teams, a large enterprise expense reporting app, was surprised to find out that approximately 5% of its traffic was coming through iPads – a platform that wasn’t on its support roadmap until 12 months ago (when its customer support lines started blowing up with iPad issues!).

Here’s a great infographic from Zendesk that breaks this down for you:


Some interesting facts from this infographic:

  • iPhone is being deployed/tested at over 80% of the Fortune 500
  • iPad is being deployed/tested at 65% of the Fortune 500
  • North American market for mobile office applications will grow from $1.7B in 2010 to $6.85B in 2015.

So, it’s no secret, and no one’s disputing, the dominant role mobile and mobile apps are (and will be) playing in our lives. As we started researching market trends and stats, we came across another great presentation ^(http://testosoft.com/goto/http://www.slideshare.net/chetansharma/annual-state-ofglobalmobileindustry2012chetansharmaconsulting) that puts together hundreds of stats and data points to further reinforce the “mobile is taking over the world” mantra that pundits have been pushing for the last couple of years.

Some of the interesting facts from this presentation:
  • 70% of all US mobile device purchases are Smartphones
  • Apple already has over 25 Billion downloads from its mobile appstore
  • Globally, Mobile has higher penetration than Electricity

Obviously, the need to test mobile apps is going to grow alongside the application ecosystem. Furthermore, HTML5 blurs the lines between “native” and “browser” and will fuel mobile app adoption and usage. Tim Cook said on the last analyst call that iPad “certification” wasn’t enough (80% of the Global 500 are certifying/testing the iPad); instead Apple is “shifting our focus to penetration in enterprise”. Many would argue that Apple has already achieved significant penetration inside the enterprise, but a focused push by the world’s most valuable company into the enterprise will surely change the enterprise computing landscape over the coming years.

Automation Best Practices: Building To Get the Job Done

Build: Building Automated Test Cases to Get the Job Done

With maintenance on automated test cases taking up a good portion of our clients’ time, we thought it would be a good idea to lay out some best practices for building rock-solid automation that is both easy to understand at a glance and maintainable with minimal effort. This section is a continuation of this post ^(http://testosoft.com/goto/http://www.3qilabs.com/2012/04/best-practices-for-achieving-automated-regression-testing-within-the-enterprise-building-automated-test-cases-to-stand-the-test-of-time-section-2/), in which we discussed the build-out phase of a proper test automation implementation. For links to all articles in the series, use this link: Best Practices for Achieving Automated Regression Testing Within the Enterprise ^(http://testosoft.com/goto/http://3qilabs.com/2012/04/blog-series-best-practices-for-achieving-automated-regression-testing-within-the-enterprise/)

2.11 Revert back to original Data Conditions (wherever possible)

Data consistency is the bane of every tester’s existence. Let’s say you kick off a job and it dies halfway through setting up the data it needed, well before the script could reset the data it had created. That’s why Awetest is set up to deliver “smart” cleanup scripts built to reset data and ensure maximum consistency.
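
The underlying pattern, snapshot the data, run the test, restore the data even if the run dies, can be sketched in plain Python. This is not Awetest’s own cleanup-script mechanism, just an illustration of the idea with invented data:

```python
# Hypothetical sketch of "revert to original data conditions":
# snapshot the data, run the test, and restore the snapshot in a
# finally block so even a crashed run leaves the data consistent.

test_db = {"users": ["alice"]}

def run_with_cleanup(test_fn, db):
    """Run a test and restore the data afterwards, even on failure."""
    snapshot = {k: list(v) for k, v in db.items()}
    try:
        test_fn(db)
    finally:
        db.clear()
        db.update(snapshot)

def flaky_test(db):
    db["users"].append("temp_user")      # test data created mid-run
    raise RuntimeError("died halfway")   # the job dies before tidying up

try:
    run_with_cleanup(flaky_test, test_db)
except RuntimeError:
    pass

print(test_db["users"])  # ['alice'] - data restored despite the crash
```

The same shape shows up as setup/teardown hooks or fixtures in most test frameworks; the key point is that the restore step runs unconditionally.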


2.12 Build App-Specific Library

Sometimes custom functions might be needed to properly test a given test case. The Awetest framework is fully expandable by allowing you to write your own custom test methods using raw Watir, Selenium, or our native Awetest language and then implement your customizations through an external application utility file managed in the ‘Assets Tab’ of the Awetest Regression Module.


Here is an example of the login method stored in this project’s utility file.


Make custom code reusable: reusable custom functions save time later. Regularly move any custom methods into a project utility file so they can be called in one line from your test scripts. This builds a robust custom library for the project and makes future scripting much faster and easier. (Awetest lets you manage project utility files from the ‘Assets Tab’ in a given project.)

2.13 Test & Deploy

Building scripts that seem to work is easy; building scripts that do work can be hard. Check the logs and view the run manually while tests are being executed. Awetest allows you to watch your tests run via a VNC viewer to help catch on-screen errors that might be hard for a computer to recognize.


Figure 2: Clicking the VNC Link will show you the desktop of the agent (Shamisen) Machine

Results can be deceiving if all you are doing is looking at the logs. Awetest logs are different. They include helpful extras that help cut down on the tedious task of digging through debug messages to get to the true error.


These include features like hide/show pass or fail, and screen captures before, during, and after an error is recorded.

With Awetest you also have the ability to watch the log update in real time via the jobs page. Simply navigate to the jobs page, click the link for the job you just started, and watch as the log is populated. This is great for first runs of scripts, when you know something is bound to go wrong.


2.14 Keep it Simple!

When building test cases it is critical to keep things as simple as possible. Overly complicated test cases that test wide variations of the application take much longer to execute and are much harder to maintain. Keep it simple by keeping test cases as small as possible: a maximum of ten unit tests per test case.

To make things even simpler, Awetest allows you to easily refactor encapsulated unit tests (e.g. Successful Login, Create Account, Delete Account) into a project utility file, where they can be referenced in a single line in the test case. This means building new test cases can be as simple as three or four lines of code. And you can do this across the board, not only with Awetest scripts. In the example below we are leveraging a Sikuli script and a variable spreadsheet stored in the project’s Assets library. The highlighted line of script is the single line that executes the entire Sikuli portion of the test case.
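
The idea of composing a test case from encapsulated, reusable steps can be sketched in plain Python (not Awetest’s own scripting language; the step names and the session list are invented stand-ins for shared utility methods):

```python
# Hypothetical sketch of 2.14: encapsulated unit steps live in a shared
# utility module, so a new test case is just a few one-line calls.
# Step names and the session object are invented for illustration.

def successful_login(session):
    session.append("logged_in")

def create_account(session, name):
    session.append(f"created:{name}")

def delete_account(session, name):
    session.append(f"deleted:{name}")

def test_account_lifecycle():
    """Three reusable steps compose a whole test case in three lines."""
    session = []
    successful_login(session)
    create_account(session, "demo")
    delete_account(session, "demo")
    return session

print(test_account_lifecycle())  # ['logged_in', 'created:demo', 'deleted:demo']
```

Because each step is defined once in the utility layer, a change to the login flow is fixed in one place rather than in every test case that logs in.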


2.15 Clean the reports

If you notice a run has prematurely failed and you want to rerun the job immediately, you can do so directly from the jobs page. There is a restart button in the same row as the job, next to a delete button.

Clicking the restart button will clean out the reports logged for that job and then restart it.


This is very important functionality, designed to keep garbage data out of the reports and keep the error-rate overview analytics clean and true. (More on this in the upcoming publication of our best practices for reporting.) ^(http://testosoft.com/goto/http://3qilabs.com/2012/04/blog-series-best-practices-for-achieving-automated-regression-testing-within-the-enterprise/)