Manual QA Interview Questions II

Continue building your knowledge in Manual QA practices and ace interviews with frequently asked questions.

Q1. What do you understand by severity and priority? Are these two terms interlinked?

Ans: Severity and priority are two distinct terms that are often confused with each other. With this question, the interviewer is trying to gauge whether you are aware of the fine distinction between them, so keep your answer short and precise. Severity describes the gravity of a bug from the application’s point of view: it indicates the impact the defect has on the system. Priority, on the other hand, reflects the user’s (or business’s) point of view and indicates the urgency with which a defect needs to be fixed, i.e. the order in which bugs must be addressed.

The severity of a defect is decided by the QA team and depends on the complexity and criticality of the defect.

The priority of a defect is defined by business stakeholders such as the project manager or business analyst.

The order in which fixes are developed is based on both the severity and the priority of the defects.

A broad classification of defect priority is given below (a short sketch after the list shows how severity and priority can be used together to decide the fix order):

  • Priority 1 (P1) - Critical: These defects need an immediate fix, typically within 24 hours. This usually happens when functionality is completely blocked and no testing can take place. Some memory leaks are also categorized as P1.
  • Priority 2 (P2) - High: These defects come next in order, after the critical defects are fixed. They must be resolved for the test activity to meet the exit criteria. Common P2 situations include a feature not being usable as expected or a defect that requires new code to be written.
  • Priority 3 (P3) - Medium: Functionality issues that do not meet the expected standards qualify as P3 and are resolved only after the more serious bugs have been fixed.
  • Priority 4 (P4) - Low: These defects do not need an immediate fix as part of the exit criteria. Common examples include suggestions for enhancing the existing design or minor features that would deliver a better user experience.
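
To show how severity and priority work together in practice, here is a minimal Python sketch that orders a small defect backlog by priority first and severity second. The defect records, field names and the P1-P4/S1-S4 scales are assumptions made up for illustration, not taken from any specific defect-tracking tool.

```python
# Hypothetical sketch of ordering a defect backlog by priority, then severity.
from dataclasses import dataclass

PRIORITY_ORDER = {"P1": 1, "P2": 2, "P3": 3, "P4": 4}
SEVERITY_ORDER = {"S1": 1, "S2": 2, "S3": 3, "S4": 4}

@dataclass
class Defect:
    defect_id: str
    summary: str
    priority: str  # P1..P4, set by business stakeholders
    severity: str  # S1..S4, set by the QA team

backlog = [
    Defect("D-102", "Spelling mistake in error message", "P4", "S4"),
    Defect("D-101", "Application crashes on login", "P1", "S1"),
    Defect("D-103", "Report totals calculated incorrectly", "P2", "S2"),
]

# Fix order: highest priority first, severity breaks ties.
fix_order = sorted(
    backlog,
    key=lambda d: (PRIORITY_ORDER[d.priority], SEVERITY_ORDER[d.severity]),
)
for defect in fix_order:
    print(defect.defect_id, defect.priority, defect.severity, defect.summary)
```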

Q23. Are you aware of the various types of severity? List a few types of severity.

Ans: Severity refers to the extent of a defect’s impact on an application or system. How severity levels are labelled can vary with the size and structure of the team developing the software. The broad classifications of defect severity, listed from highest to lowest, are:

  • Critical (S1): The defect completely blocks testing of the product, for example when the application crashes or is no longer usable.
  • Major (S2): A major feature, after implementation, either behaves differently than expected or does not meet requirements. This includes defects that cause data issues or incorrect application behaviour.
  • Minor/Moderate (S3): A feature behaves differently from the expected behaviour, but its impact on the application or system is not serious.
  • Low (S4): These defects have no impact on the functionality of the application but are still defects and need correction, for example spelling mistakes in error messages.

Some common examples mapped to severity levels are listed below.

  • Load conditions - High severity
  • Control flow defects - High severity
  • Hardware failures - High severity
  • Calculation errors - High severity
  • Errors from misinterpreted data - High severity
  • Compatibility issues - High severity
  • Errors while handling defects - Medium severity
  • Boundary-related defects - Medium severity
  • User unable to perform an action - Medium severity
  • User interface defects - Low severity
  • Accessibility issues - Low severity

The image below summarizes the classification levels of defect priority and defect severity.

Q3. What is your understanding of the term ‘Quality’ in the context of Testing?

Ans: The term ‘quality’ is admittedly subjective and means different things to different customers. Generally, quality software (in the context of testing) refers to a product that is largely bug free, delivered on time and created within budget.

Q4. Do you think the terms ‘Quality Control’ and ‘Quality Assurance’ mean the same? Can you explain the difference between these two terms?

Ans: The best answer to this question lies in the terms themselves; be precise and clear, as the answer is quite straightforward. Quality Control is product-centric: it aims to detect defects and ensure that the software fulfils the requirements. Quality Assurance is process-centric: it ensures that the methods, techniques and processes that play a critical role in creating a quality product are followed diligently.

Q5. Do you think the terms ‘Verification’ and ‘Validation’ are the same? Are there any differences in these terms?

Ans: The difference between verification and validation lies in the testing technique used. Verification is a static analysis method in which the code is not executed during the process; reviews and inspections are typical examples.

Validation, on the other hand, is a dynamic analysis method in which the code is executed during testing; functional and non-functional testing techniques are examples.


Q6. How do you explain the difference between Static Testing and Dynamic Testing?

Ans: This is a follow-up to the previous question. Static testing examines work products such as requirement documents, design documents and source code without executing the code; reviews, walkthroughs and inspections are typical techniques, and it can begin early in the life cycle. Dynamic testing executes the code and checks the actual behaviour of the software against the expected behaviour, using functional and non-functional testing techniques, and therefore requires an executable build.

Q7. Can you explain top-down and bottom-up approach in the context of testing?

Ans: Both top-down and bottom-up are popular testing approaches. As the name suggests, in the top-down approach testing begins with the high-level modules and moves on to the low-level modules. At the end of the process, the low-level modules are integrated with the high-level ones to make sure the framework works as expected.

The bottom-up approach tests the low-level modules first and then moves on to the high-level modules, finally integrating the high-level modules with the low-level ones. This is done to ensure the framework behaves as designed.

Q8. Is there a difference between a Test Stub and Test Driver?

Ans: Both test stubs and test drivers are small programs used in software testing as replacements for modules that are not available for testing. Their aim is to recreate the functionality of those inaccessible modules, and they play a very important role in the testing process.

While testing a module or component in isolation, a simulated environment is needed. Test drivers and test stubs provide this: both act as dummy modules created specifically for the purpose of testing.

Test stubs are commonly used in top-down testing; they enable testing of high-level code while the low-level code is still under development.

Test drivers, on the other hand, are commonly used in bottom-up testing; they enable testing of low-level code while the high-level code is still under development.

Stubs are also referred to as ‘called programs’, while drivers are referred to as ‘calling programs’. A short sketch below illustrates both.
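
To make the distinction concrete, here is a minimal Python sketch; the module and function names (payment_gateway_stub, checkout, calculate_total) are hypothetical. The stub stands in for an unfinished low-level payment module so the high-level checkout logic can be tested top-down, while the driver is a throwaway calling program that exercises a low-level function bottom-up.

```python
# --- Stub: stands in for a low-level module that is not built yet, so the
# --- high-level checkout logic can be tested top-down.
def payment_gateway_stub(amount):
    """Pretends to charge the card and always succeeds."""
    return {"status": "approved", "amount": amount}

def checkout(cart_total, charge=payment_gateway_stub):
    # High-level code under test; the real gateway is replaced by the stub.
    result = charge(cart_total)
    return result["status"] == "approved"

# --- Driver: a throwaway 'calling program' that exercises a low-level
# --- function bottom-up, because the real caller does not exist yet.
def calculate_total(prices, tax_rate):
    """Low-level code under test."""
    return round(sum(prices) * (1 + tax_rate), 2)

def run_driver():
    assert checkout(100.0) is True                      # stub in action
    assert calculate_total([10.0, 20.0], 0.1) == 33.0   # driver in action
    print("stub and driver checks passed")

if __name__ == "__main__":
    run_driver()
```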

Q9. How will you explain the term Parameterization?

Ans: Parameterization is a technique for feeding data into an application without hard-coding the values in the test script. There are several ways to parameterize a test; some of them are listed below, followed by a short sketch:

  • Using values from a data table.
  • Using environment variables.
  • Using randomly generated numbers.
  • Using test or action parameters.
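
As an illustration, here is a minimal sketch of data-driven parameterization using pytest. The function under test, the usernames and the QA_USERNAME environment variable are assumptions made up for the example; the same values could equally come from an external data table.

```python
# A hypothetical data-driven test: the values live in a table, not in the
# test body, so new cases can be added without touching the script logic.
import os
import pytest

def is_valid_username(name: str) -> bool:
    """Toy function under test (stands in for real application logic)."""
    return name.isalnum() and 3 <= len(name) <= 12

@pytest.mark.parametrize(
    "username, expected",
    [
        ("alice", True),      # typical valid value
        ("ab", False),        # too short
        ("bob!", False),      # invalid character
        (os.environ.get("QA_USERNAME", "tester1"), True),  # from an env variable
    ],
)
def test_username_validation(username, expected):
    assert is_valid_username(username) is expected
```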

Q10. Can you explain the difference between Release and Build?

Ans: A release refers to software that has been certified by the testing team and is then handed over to the end user as an installable product. When software is released to the client, release notes accompany it; these contain information about any open defects, change requests and, most importantly, the version of the release.

A build, in contrast, is an executable handed over to the testing team by the development team so that the application can be tested. From this stage onwards there are a number of fix-and-test iterations until the application performs as expected. The application is released to the market only after it becomes stable and is ready for end users.

Q11. Are you aware of the difference between bug leakage and bug release? 

Ans: During testing, some bugs may be missed by the testing team. When such bugs are discovered by end users after the software has been released to the market, it is called bug leakage.

A bug release, on the other hand, occurs when a particular version of the software is released with some known, low-priority bugs that are meant to be fixed in subsequent versions. These known issues are stated in the release notes shared with end users.

Q12. Is there a way to determine when to stop testing? 

Ans: This is quite a tricky question; it is never easy to decide when to stop testing. Most modern applications are highly complex and run in interdependent environments, which makes ‘complete’ testing practically impossible. However, a few factors that can be considered when deciding to stop testing are:

  • When the test cases are completed with a defined pass percentage.
  • When the test budget is close to being depleted.
  • At the end of the alpha or beta testing period.
  • When the release and testing deadlines are reached.
  • Lastly, when the bug rate drops below a specified level.

Q13. How would you manage a situation where the software has a lot of bugs and cannot be tested? 

Ans: This is a very common interview question and a good opportunity to showcase your experience. It is not a rare situation for a testing team: builds sometimes arrive so buggy that meaningful testing is blocked. The most common approach is to focus on the most critical bugs first and report the blocking issues as soon as they appear. It is also important to understand the root cause of the situation, for example insufficient unit or integration testing, or an inadequate build and release procedure resulting from a poor design. Managers must be made aware of the situation, supported by at least a basic level of documentation.

Q14. Is there a way to ensure that the code has met the specifications?

Ans: This is another commonly asked interview question and requires a thorough answer. Although such questions are aimed at candidates with testing experience, it is desirable for freshers to know how to approach them as well.

Generally, code is considered ‘good’ when it is bug free, readable, maintainable and works as per the client’s expectations. Every development team has to adhere to coding standards during development, and there are plenty of tools available to track whether the code meets its goals. For example, a traceability matrix maps each requirement to the appropriate test cases; when all of those test cases have been executed successfully, it demonstrates that the code complies with the specifications.

Q15. How do you explain ‘traceability matrix’? 

Ans: This is a follow-up question which the interviewer may ask based on your answer to the previous question. It is a direct question, so keep the answer precise and to the point. In the simplest terms, a traceability matrix is a document that maps requirements to the test cases that cover them.

During the development of a new product this document is essential: it keeps the development process transparent and ensures the product is complete with respect to the requirements. It helps the team confirm that the customer’s requirements are carried through every stage of the SDLC and that each requirement is covered by test cases, which makes it much easier to spot any missed functionality.

From the client’s perspective, the document is important because it provides evidence that the delivered product meets all the specified requirements.

Some of the important parameters that should be included in a traceability matrix are listed below (a small sketch at the end of this answer illustrates the idea):

  • Requirement ID
  • Test case ID/serial number
  • Coverage of the requirement across multiple test cases
  • Test design status along with test execution status
  • Links to other test case levels such as unit, integration and system test cases
  • Any defects identified and their status
  • User acceptance test status

Below is an image depicting Traceability Matrix Workflow.
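
In code terms, the core idea of a traceability matrix can be sketched very simply. The requirement IDs, test case IDs and statuses below are hypothetical; the point is only to show how mapping requirements to test cases makes coverage gaps visible.

```python
# Hypothetical requirement-to-test-case mapping; real projects usually keep
# this in a test management tool or spreadsheet rather than in code.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],   # requirement covered by two test cases
    "REQ-002": ["TC-201"],
    "REQ-003": [],                      # no test case yet: a coverage gap
}

test_results = {"TC-101": "Pass", "TC-102": "Pass", "TC-201": "Fail"}

for req, cases in traceability.items():
    if not cases:
        print(f"{req}: NOT COVERED")
        continue
    statuses = [test_results.get(tc, "Not executed") for tc in cases]
    verdict = "Pass" if all(s == "Pass" for s in statuses) else "Attention needed"
    print(f"{req}: {', '.join(cases)} -> {verdict}")
```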

Q16. Do you know the term ‘pesticide paradox’? Are there ways to overcome it?

Ans: According to the pesticide paradox, when the same tests are carried out repeatedly, they eventually stop detecting new bugs. The team tends to pay more attention to the areas where the most defects were found earlier and may ignore other aspects of the application.

The best way to overcome the pesticide paradox is to keep writing new and different test cases that exercise different parts of the software, and to review the existing test cases regularly, adding new ones as and when needed. This helps uncover more defects in areas where the defect count had previously dropped.

Q17. As a software tester, what do you understand by high availability testing?

Ans: ‘High availability’ refers to the ability of a component or system to remain continuously operational under high load without failing. High availability testing therefore involves thorough testing of the system and its sub-systems; in some cases failures are deliberately simulated to verify that the redundancy mechanisms behave as expected.

Q18. What do you understand by the terms MR and ER?

Ans: MR stands for Modification Request, which is raised by the client to request a change in the existing functionality of the software. ER stands for Enhancement Request, which asks for a new feature to be added to the software; it is also usually raised by the client.

Q19. Can you explain the difference between Retesting and Regression Testing?

Ans: Regression testing aims to ensure that the parts of the application that were working earlier remain unaffected by new code changes. It checks whether the behaviour of previously tested software has changed, by repeating functional and non-functional tests, and it can be performed manually or through automation. Regression testing is also referred to as generic testing: the style of testing is kept generic and the test cases are good candidates for automation.

Retesting, in contrast, checks whether a reported defect has actually been fixed. It is performed for specific bugs after the development team has fixed them, and its aim is to ensure that the previously failing test cases now pass. Retesting is sometimes referred to as planned testing because, unlike regression testing, it verifies known bugs; the testing is highly planned, but the test cases are generally not automated.

Q20. Are you aware of the software testing tool ‘phantom’? Can you explain its function?

Ans: In order to answer this question, it is advisable to read a little about the tool ‘phantom’. 

Official website- http://www.phantomtest.com/

Phantom is a widely used freeware scripting language for Windows GUI automation. It works by taking control of Windows and can automatically simulate combinations of keystrokes, mouse clicks and other user actions.

Q21. How will you explain the difference between performance testing and Monkey Testing?

Ans: Both these types of testing are extremely common in software testing and are done with specific aims. 

Monkey testing uses random inputs and aims to check whether the application crashes. It is generally conducted as randomly automated test runs and does not follow any specific rules, test cases or strategy.

Monkey testing can be done in different ways; the common techniques are listed below.

  • Dumb Monkey Testing: The tester knows nothing about the module or application being tested and simply enters random data, behaving like a non-tech-savvy user trying to use the application. The tester may also feed in random, invalid data to check whether the application still behaves as expected. This approach can exercise conditions that experienced testers, who know the application and follow the standard testing procedure, might have skipped.
  • Brilliant Monkey Testing: Here the application is tested by someone who has domain knowledge. Testers without domain knowledge tend to follow an expected sequence of steps and enter well-understood data, which differs from real life, where a user with domain knowledge may perform tasks in many different ways with varied data. For instance, a tester who understands the banking domain may feed random but domain-relevant data while testing a banking application. This type of testing is valuable because it exercises the application from the perspective of a specific domain with random inputs.

Numerous tools help automate monkey testing: they generate random data, enter it into the application, execute random actions, and observe and report the application’s output. The Monkey Runner tool used for monkey testing Android applications is one example. A small code sketch of dumb monkey testing follows.
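
To illustrate dumb monkey testing at the unit level, here is a minimal, hypothetical Python sketch that throws random strings at a function and only checks that nothing crashes; the parse_age function is invented for the example.

```python
# Dumb monkey testing sketch: random inputs, no expected values, the only
# assertion is "the code must not blow up with an unhandled exception".
import random
import string

def parse_age(text: str) -> int:
    """Toy function under test: returns an age, or -1 for bad input."""
    try:
        age = int(text.strip())
    except ValueError:
        return -1
    return age if 0 <= age <= 150 else -1

def random_input(max_len: int = 12) -> str:
    chars = string.ascii_letters + string.digits + string.punctuation + " "
    return "".join(random.choice(chars) for _ in range(random.randint(0, max_len)))

def monkey_test(iterations: int = 1000) -> None:
    for _ in range(iterations):
        value = random_input()
        try:
            parse_age(value)           # we only care that this never crashes
        except Exception as exc:       # any unhandled exception is a defect
            print(f"Crash on input {value!r}: {exc}")
            raise

if __name__ == "__main__":
    monkey_test()
    print("no crashes observed")
```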

Performance testing checks the performance of a system in terms of speed, stability and responsiveness under a specific workload. These tests reveal how reliable and robust an application is, and their results are analysed against benchmarks. Performance tests are critical to ensure that users get a high-quality experience and that performance requirements are met.

For example, consider an e-commerce website that expects heavy traffic during a Black Friday sale. To make sure the website can handle unusually high traffic without slowing down, a spike test can be run to simulate above-average levels of traffic; it checks for any slowdown and helps determine where the slowdown occurs.

Performance testing has two main categories:

  • Protocol-based tests: Traffic is simulated at the protocol level (for example HTTP) and the response time is measured. For instance, a test may issue an HTTP GET request and measure how long the server takes to respond with the payload (a minimal sketch follows this list).
  • Browser-based tests: Actual web browsers are used to make the requests, just as a real user would, and the test measures both the response time and the time the browser takes to fully render the response.
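
Here is a minimal, hypothetical sketch of a protocol-based timing check in Python using the requests library. The URL, the number of samples and the 2-second threshold are assumptions for illustration; real performance testing would use a dedicated tool with many concurrent virtual users.

```python
# Single-request timing sketch: measures server response time for an HTTP GET.
# This only illustrates the idea; it is not a load test by itself.
import time
import requests

URL = "https://example.com/"   # placeholder endpoint
THRESHOLD_SECONDS = 2.0        # assumed acceptable average response time

def timed_get(url: str) -> float:
    start = time.perf_counter()
    response = requests.get(url, timeout=10)
    elapsed = time.perf_counter() - start
    response.raise_for_status()            # fail if the server returned an error
    print(f"GET {url} -> {response.status_code} in {elapsed:.3f}s")
    return elapsed

if __name__ == "__main__":
    samples = [timed_get(URL) for _ in range(5)]
    average = sum(samples) / len(samples)
    assert average < THRESHOLD_SECONDS, f"average {average:.3f}s exceeds threshold"
```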

Q22. What do you understand by Quick Test Professional? 

Ans: Quick Test Professional (QTP) is a functional and regression testing tool from HP Software. It records the user actions performed on the application and replays them when the test is run; during regression testing it verifies the functionality of the application.

Q23. Can you explain the working of QTP (Quick Test Professional)? 

Ans: QTP works with two main components: the VBScript language and the Object Repository.

QTP generates VBScript statements as the user’s actions on the application are recorded; these statements describe the actions it needs to perform during playback.

The Object Repository stores the objects present in the application, such as windows and checkboxes. For a test to run successfully, a corresponding object must exist in the Object Repository.

Q24. Talking about recording in QTP, how many modes of recording are there in QTP?

Ans: To answer this question to the interviewer’s satisfaction, it is not enough to just name the recording modes; you should also explain them. QTP has three recording modes, described below:

  • Context-sensitive mode: This is the most popular mode, and about 99% of testing is done using it. QTP records the objects in the application along with their properties so that it can identify them when the script is run.
  • Analog mode: QTP records mouse movements and keystrokes as continuous tracks, either relative to the screen or to a window. This is very helpful for testing things such as handwriting or signature capture.
  • Low-level recording mode: QTP records objects based on their location, capturing the X and Y coordinates of the objects on the screen.

You can also mention that the analog and low-level recording modes can be selected only after recording has started, i.e. you first press the record button and then switch modes from the automation menu.

Q25. What do you understand by standard Object Class? 

Ans: Every object in the application is assigned properties by the developers. The object class is the industry-standard way of identifying a particular type of object in the application. Some of the common standard classes are:

  • Dialogue box
  • Window
  • Menu
  • Edit Box
  • Check box
  • Radio button
  • List box

Saurabh Dhingra

DevOps Trainer & Consultant

Saurabh has conducted enterprise transformation drives and trained 50,000+ trainees in DevOps, QA and Agile. He is on a mission to support professionals with the skills they need to move ahead in their careers.
