Manual Testing Interview Questions

Software testing is the process of evaluating a system to check whether it satisfies its business requirements. It measures the overall quality of the system in terms of attributes like correctness, completeness, usability and performance. In essence, it assures the stakeholders of the quality of the application.

Following are the reasons for testing:

  • Testing provides an assurance to the stakeholders that the product works as intended.
  • Avoidable defects leaked to the end user/customer due to inadequate testing damage the reputation of the development company.
  • Defects detected in an earlier phase of the SDLC cost less time and fewer resources to correct.
  • Detecting issues in an earlier phase of development saves development time.
  • The testing team adds another dimension to software development by providing a different viewpoint on the product development process.

Testing (both manual and automated) can be stopped when one or more of the following conditions are met:

  • After test case execution – Testing can be stopped when one complete cycle of test cases has been executed after the last known bug fix, with an agreed-upon pass percentage.
  • Once the testing deadline is met – Testing can be stopped once the deadline is reached, provided no high-priority issues are left in the system.
  • Based on Mean Time Between Failures (MTBF) – MTBF is the time interval between two inherent failures. Based on stakeholder decisions, if the MTBF is sufficiently large, the testing phase can be stopped.
  • Based on code coverage value – Testing can be stopped when automated code coverage reaches a specific threshold with a sufficient pass percentage and no critical bugs outstanding.

There are three types of defects: wrong, missing, and extra.

Wrong: These defects occur when a requirement has been implemented incorrectly.

Missing: A specification was not implemented, or a requirement of the customer was not noted properly.

Extra: An extra facility incorporated into the product that was not requested by the end customer. It is always a variance from the specification, though it may be an attribute the customer would have desired. It is still considered a defect because it varies from the user requirements.

Quality assurance is a process-driven approach which checks whether the process of developing the product is correct and conforms to all the standards. It is considered a preventive measure, as it identifies weaknesses in the process used to build the software. It involves activities like document review, test case review, walk-throughs, inspections etc.

Quality control is a product-driven approach which checks that the developed product conforms to all the specified requirements. It is considered a corrective measure, as it tests the built product to find defects. It involves different types of testing like functional testing, performance testing, usability testing and so on.

Simultaneous test design and execution against an application is called exploratory testing. In this type of testing, testers use their domain knowledge and testing experience to predict where and under what conditions the system might behave unexpectedly.

Exploratory testing is often performed as a final check before the software is released. It is a complementary activity to automated regression testing.

Functional Testing vs Non-Functional Testing:

  • Functional testing is performed to determine the system's behaviour as per the client's functional requirements; non-functional testing determines the system's performance as per the client's expectations.
  • Functional testing is performed first, with the help of manual and automation testing tools; non-functional testing is performed after functional testing, with the appropriate tools as required.
  • Functional testing is easy to perform manually, as client requirements are its input; non-functional testing is difficult to perform manually, as scalability, reliability, speed and other performance parameters are its input.
  • Functional testing includes unit testing, smoke testing, sanity testing, integration testing, user acceptance testing and regression testing; non-functional testing includes performance testing, load/stress/volume testing, security testing and compatibility testing.

A build is an executable file referring to the part of an application that is handed over to a tester to test the implemented functionality, along with some bug fixes. The build can be rejected by the testing team if it does not pass a critical checklist covering the major functionality of the application.

There can be multiple builds in the testing cycle of an application.

A release refers to the software application that is no longer in the testing phase; after completion of testing and development, the application is handed over to the client. One release has several builds associated with it.

Verification vs Validation:

  • Verification is the process of evaluating the artifacts and the process of software development to ensure that the product being developed will comply with the standards; validation is the process of checking that the developed software product conforms to the specified business requirements.
  • Verification is a static process of analyzing the documents, not the actual end product; validation involves dynamic testing of the software product by running it.
  • Verification is a process-oriented approach; validation is a product-oriented approach.
  • Verification answers the question "Are we building the product right?"; validation answers the question "Are we building the right product?"
  • Errors found during verification require less cost and fewer resources to fix than those found during validation; the later an error is discovered, the higher the cost to fix it.
  • New: When the defect or bug is logged for the first time, its state is New.
  • Assigned: After the tester has logged a bug, it is reviewed by the test lead and then assigned to the corresponding developer team.
  • Open: The bug remains in the Open state until the developer has worked on it.
  • Resolved/Fixed: When the developer has resolved the bug, i.e. the application now produces the desired output for the issue, the developer changes its status to Resolved/Fixed.
  • Verified/Closed: Once the developer has changed the status to Resolved/Fixed, the tester retests the issue; if it is fixed, the tester changes the status of the bug to Verified/Closed.
  • Reopen: If the tester is able to reproduce the bug again, i.e. the bug still exists even after the developer's fix, its status is marked as Reopened.
  • Not a bug/Invalid: A bug can be marked as Invalid or Not a Bug by the developer when the reported issue matches the intended functionality and was logged due to misinterpretation.
  • Deferred: When a bug has minimal priority for the release and time is short, it is deferred to the next release.
  • Cannot Reproduce: The developer is unable to reproduce the bug by following the steps mentioned in the issue.

SDLC stands for Software Development Life Cycle. It refers to all the activities performed during software development – requirement gathering, requirement analysis, designing, coding or implementation, testing, deployment and maintenance.


Data-driven testing is a methodology where a series of test scripts containing test cases is executed repeatedly using data sources such as Excel spreadsheets, XML files, CSV files or SQL databases for input values, and the actual output is compared to the expected output in the verification process.

For example, Test Studio can be used for data-driven testing.

Some advantages of data-driven testing are:

  • Reusability.
  • Repeatability.
  • Test data separation from test logic.
  • The number of test cases is reduced.
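The idea can be sketched in a few lines. This is a minimal illustration, not a tool-specific recipe: the `add` function stands in for the real system under test, and the CSV data is inlined here to keep the sketch self-contained, whereas in practice it would live in an external file separate from the test logic.

```python
import csv
import io

def add(a, b):
    # Stand-in for the real system under test.
    return a + b

# Test data kept separate from test logic; inlined here for a
# self-contained sketch, but normally an external CSV/Excel/XML file.
TEST_DATA = """a,b,expected
1,2,3
0,0,0
-5,5,0
"""

def run_data_driven_tests():
    # Execute the same test logic once per data row and compare
    # the actual output against the expected output.
    results = []
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = add(int(row["a"]), int(row["b"]))
        results.append(actual == int(row["expected"]))
    return results

print(run_data_driven_tests())  # one pass/fail entry per data row
```

Adding a new test case is then a matter of adding a data row, without touching the test logic, which is where the reusability and repeatability benefits come from.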

Acceptance testing is done to enable a user/customer to determine whether to accept a software product. It also validates whether the software follows a set of agreed acceptance criteria. At this level, the system is tested for user acceptability.

Types of acceptance testing are:

  • User acceptance testing: Also known as end-user testing, this type of testing is performed after the product has been tested by the testers. It is performed against the user's needs, requirements and business processes to determine whether the system satisfies the acceptance criteria.
  • Operational acceptance testing: Operational acceptance testing is performed before the product is released to the market, but after user acceptance testing.
  • Contract and regulation acceptance testing: In contract acceptance testing, the system is tested against criteria laid down in a contract. In regulation acceptance testing, the software application is checked against government regulations.
  • Alpha and beta testing: Alpha testing is performed in the development environment before the product is released to the customer. Input is taken from the alpha testers, and the developers then fix bugs to improve the quality of the product. Unlike alpha testing, beta testing is performed in the customer's environment. The customer performs the testing and provides feedback, which is then implemented to improve the quality of the product.

Accessibility testing is used to verify whether a software product is accessible to people with disabilities (visual, hearing, cognitive and so on).

Ad-hoc testing is a testing phase where the tester tries to ‘break’ the system by randomly trying the system’s functionality.

Agile testing is a testing practice that uses agile methodologies, i.e. it follows the test-first design paradigm.

Testing by using software tools which execute tests without manual intervention is known as automated testing. Automated testing can be used for GUI, performance, API testing, etc.

Bottom-up testing is an integration testing approach in which the lowest-level components are tested first, followed by the higher-level components. The process is repeated until the top-level component has been tested.

In baseline testing, a set of tests is run to capture performance information. Baseline testing improves the performance and capabilities of the application by using the information collected to make changes in the application. It compares the present performance of the application with its previous performance.

Benchmark testing is the process of comparing application performance with respect to industry standards set by other organizations.

It is a standard form of testing which specifies where our application stands with respect to others.

Regression vs Retesting:

  • Regression testing checks that a code change does not affect the existing features and functions of an application; retesting checks the test cases that failed in the last execution.
  • The main purpose of regression testing is to ensure that changes made to the code do not affect existing functionality; retesting is applied to defect fixes.
  • Defect verification is not an element of regression testing; defect verification is an element of retesting.
  • Regression testing can be automated (manual regression testing can be expensive and time-consuming); retesting cannot be automated.
  • Regression testing is also known as generic testing; retesting is also known as planned testing.
  • Regression testing executes test cases that passed in earlier builds; retesting executes test cases that failed earlier.
  • Regression testing can be performed in parallel with retesting; retesting has a higher priority than regression testing.
Alpha Testing vs Beta Testing:

  • Alpha testing is done by developers at the software development site; beta testing is performed by customers at their own site.
  • Alpha testing may also be performed by an independent testing team; beta testing is not performed by an independent testing team.
  • Alpha testing is not open to the market and public; beta testing is open to the market and public.
  • Alpha testing is performed in a virtual environment; beta testing is performed in a real-time environment.
  • Alpha testing is used for software applications and projects; beta testing is used for software products.
  • Alpha testing falls under both white box and black box testing; beta testing is only a kind of black box testing.
  • Alpha testing is not known by any other name; beta testing is also known as field testing.

Black box testing: The strategy of black box testing is based on requirements and specifications. It requires no knowledge of the internal paths, structure or implementation of the software being tested.

White box testing: White box testing is based on the internal paths, code structure and implementation of the software being tested. It requires full and detailed programming skills.

Gray box testing: In this type of testing, we look into the box being tested just long enough to understand how it has been implemented. After that, we close the box and apply black box testing.

Black Box vs Gray Box vs White Box Testing:

  • Black box testing does not need knowledge of a program's implementation; gray box testing requires limited knowledge of the program's internals; white box testing fully requires the implementation details of the program.
  • Black box testing has low granularity; gray box testing has medium granularity; white box testing has high granularity.
  • Black box testing is also known as opaque box, closed box, input-output, data-driven, behavioral or functional testing; gray box testing is also known as translucent testing; white box testing is also known as glass box or clear box testing.
  • Black box testing is a form of user acceptance testing done by end users; gray box testing can also serve as user acceptance testing; white box testing is mainly done by testers and programmers.
  • In black box testing, test cases are made from the functional specifications, as internal details are not known; in gray box and white box testing, test cases are made from the internal details of the program.

The software testing life cycle refers to all the activities performed during the testing of a software product. The phases include:

  • Requirement analysis and validation – In this phase the requirement documents are analysed and validated, and the scope of testing is defined.
  • Test planning – In this phase the test plan strategy is defined, the test effort is estimated, and the automation strategy and tool selection are decided.
  • Test design and analysis – In this phase test cases are designed, test data is prepared and automation scripts are implemented.
  • Test environment setup – A test environment closely simulating the real-world environment is prepared.
  • Test execution – The test cases are executed, and bugs are reported and retested once resolved.
  • Test closure and reporting – A test closure report is prepared containing the final test results summary, learnings and test metrics.

A test bed is a test environment used for testing an application. A test bed configuration can consist of the hardware and software requirements of the application under test, including the operating system, hardware configuration, software configuration, application server (e.g. Tomcat), database etc.

A test plan is a formal document describing the scope of testing, the approach to be used, the resources required and the time estimate for carrying out the testing process. It is derived from the requirement documents (Software Requirement Specification).

A test scenario is derived from a use case. It is used for end-to-end testing of a feature of an application. A single test scenario can cover multiple test cases. Scenario testing is particularly useful when there is a time constraint on testing.

A test case is used to test the conformance of an application with its requirement specifications. It is a set of conditions with prerequisites, input values and expected results in a documented form.

  • TestCaseId – A unique identifier of the test case.
  • Test Summary – One liner summary of the test case.
  • Description – Detailed description of the test case.
  • Prerequisite or pre-condition – A set of prerequisites that must be followed before executing the test steps.
  • Test Steps – Detailed steps for performing the test case.
  • Expected result – The expected result in order to pass the test.
  • Actual result – The actual result after executing the test steps.
  • Test Result – Pass/Fail status of the test execution.
  • Automation Status – Whether the test case is automated or not.
  • Date – The test execution date.
  • Executed by – Name of the person executing the test case.

A test script is an automated test case written in any programming or scripting language. These are basically a set of instructions to evaluate the functioning of an application.

A bug is a fault in a software product detected at the time of testing, causing it to function in an unanticipated manner.

A defect is a non-conformance with the requirements of the product, detected in production (after the product goes live).

Defect density is the measure of the density of defects in the system. It can be calculated by dividing the number of defects identified by the total number of lines of code (or methods or classes) in the application or program.
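As a quick arithmetic sketch (the per-KLOC normalisation below is a common convention, not the only one; organisations may normalise per method or per class instead):

```python
def defect_density(defects_found, lines_of_code, per=1000):
    # Conventionally reported as defects per KLOC (thousand lines of code).
    return defects_found / lines_of_code * per

# 30 defects found in a 15,000-line application:
print(defect_density(defects_found=30, lines_of_code=15000))  # 2.0 defects per KLOC
```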

A defect priority is the urgency of fixing the defect. Normally the defect priority is set on a scale of P0 to P3, with P0 defects having the most urgency.

Defect severity is the degree to which the defect impacts functionality. Depending on the organisation, there can be different levels of defect severity, ranging from minor to critical or show-stopper.

A blocker is a bug of high priority and high severity. It prevents or blocks the testing of some other major portion of the application as well.

A critical bug is a bug that impacts major functionality of the application, and the application cannot be delivered without fixing it. It differs from a blocker in that it does not affect or block the testing of other parts of the application.

A bug goes through the following phases in software development:

  • New – A bug or defect, when detected, is in the New state
  • Assigned – The newly detected bug, when assigned to the corresponding developer, is in the Assigned state
  • Open – While the developer works on the bug, it lies in the Open state
  • Rejected/Not a bug – A bug lies in the Rejected state if the developer feels the bug is not genuine
  • Deferred – A deferred bug is one whose fix is postponed (to a later release) based on the urgency and criticality of the bug
  • Fixed – When a bug is resolved by the developer, it is marked as Fixed
  • Test – Once fixed, the bug is assigned to the tester, and during this time it is marked as In Test
  • Reopened – If the tester is not satisfied with the issue's resolution, the bug is moved to the Reopened state
  • Verified – After the Test phase, if the tester finds the bug resolved, it is marked as Verified
  • Closed – After the bug is verified, it is moved to the Closed status.

Equivalence class partitioning is a specification-based black box testing technique. In equivalence class partitioning, the set of input data defining different test conditions is partitioned into logically similar groups, such that using even a single test value from a group can be considered equivalent to using all the other data in that group. E.g. for testing a Square program (a program that prints the square of a number), the equivalence classes can be: the set of negative numbers, whole numbers, decimal numbers, the set of large numbers, and so on.
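Using the Square program from the example, the technique can be sketched by picking one representative per partition (the representative values below are arbitrary choices for illustration):

```python
def square(x):
    # Hypothetical program under test from the example above.
    return x * x

# One representative value (with its expected square) per equivalence class;
# testing the representative is taken as equivalent to testing the whole class.
partitions = {
    "negative numbers": (-7, 49),
    "whole numbers": (4, 16),
    "decimal numbers": (2.5, 6.25),
    "large numbers": (10**6, 10**12),
}

results = {name: square(value) == expected
           for name, (value, expected) in partitions.items()}
print(results)  # one pass/fail per equivalence class
```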

Boundary value analysis is a software testing technique for designing test cases wherein the boundary values of the classes from equivalence class partitioning are taken as input to the test cases, e.g. if the test data lies in the range 0–100, boundary value analysis will include the test data 0, 1, 99 and 100.
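For the 0–100 range above, a boundary value check might look like this (the `in_range` validator is a hypothetical system under test; the just-outside values -1 and 101 are added as a fuller analysis would typically include them):

```python
def in_range(value):
    # Hypothetical validator for the 0-100 range from the example above.
    return 0 <= value <= 100

# Boundary values from the text (0, 1, 99, 100) plus the just-outside
# values (-1 and 101), each paired with its expected verdict.
cases = {-1: False, 0: True, 1: True, 99: True, 100: True, 101: False}
failures = [v for v, expected in cases.items() if in_range(v) != expected]
print(failures)  # [] when the validator honours both boundaries
```

An off-by-one bug such as `0 < value <= 100` would be caught immediately by the `0` case, which is exactly the class of defect this technique targets.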

Decision table testing is a specification-based test design technique (black box testing technique) in which testing is carried out using decision tables showing the application's behavior for different combinations of input values. Decision tables are particularly helpful in designing test cases for complex business scenarios involving verification of the application with multiple combinations of inputs.
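A small sketch of the technique, using a hypothetical login rule (the rule and the outcome strings are assumptions for illustration, not from the text):

```python
# Decision table: each condition combination maps to an expected action.
# Columns: (valid_username, valid_password) -> expected outcome
decision_table = {
    (True,  True):  "grant access",
    (True,  False): "show password error",
    (False, True):  "show username error",
    (False, False): "show username error",
}

def login_outcome(valid_username, valid_password):
    # Hypothetical system under test for this sketch.
    if not valid_username:
        return "show username error"
    if not valid_password:
        return "show password error"
    return "grant access"

# Execute one test per rule (column) of the decision table.
failures = [combo for combo, expected in decision_table.items()
            if login_outcome(*combo) != expected]
print(failures)  # [] when behaviour matches every rule in the table
```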

State transition testing is a black box test design technique based on the state machine model. It is based on the concept that a system can be defined as a collection of states, and that the transition from one state to another happens because of some event.
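The states-and-events idea can be sketched with a toy ATM PIN check (the three-strikes blocking rule and the state names are assumptions for this sketch):

```python
# (current_state, event) -> next_state
transitions = {
    ("idle", "insert_card"): "awaiting_pin",
    ("awaiting_pin", "correct_pin"): "authenticated",
    ("awaiting_pin", "wrong_pin"): "awaiting_pin",       # retry allowed
    ("awaiting_pin", "third_wrong_pin"): "blocked",      # assumed blocking rule
}

def run(events, state="idle"):
    # Drive the state machine through a sequence of events and
    # return the final state reached.
    for event in events:
        state = transitions[(state, event)]
    return state

print(run(["insert_card", "wrong_pin", "correct_pin"]))  # authenticated
print(run(["insert_card", "third_wrong_pin"]))           # blocked
```

Test cases are then derived by choosing event sequences that cover every transition (and, for negative testing, invalid events in each state).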

Cause-effect graph testing is a black box test design technique in which a graphical representation of inputs (causes) and outputs (effects) is used for test design. This technique uses notations representing the AND, OR, NOT etc. relations between the input conditions that lead to an output.

Use case testing is a black box testing approach in which testing is carried out using use cases. A use case scenario is seen as an interaction between the application and the actors (users). Use cases are used for depicting requirements and hence can also serve as a basis for acceptance testing.

Statement testing is a white box testing approach in which test scripts are designed to execute code statements.
Statement coverage is the measure of the percentage of statements of code executed by the test scripts out of the total code statements in the application. The statement coverage is the least preferred metric for checking test coverage.

Decision testing or branch testing is a white box testing approach in which test coverage is measured by the percentage of decision points(e.g. if-else conditions) executed out of the total decision points in the application.

Testing can be performed at different levels during the development process. Performing testing activities at multiple levels helps in the early identification of bugs. The different levels of testing are:

  • Unit Testing
  • Integration Testing
  • System Testing
  • Acceptance Testing

Unit testing is the first level of testing and it involves testing of individual modules of the software. It is usually performed by developers.

Integration testing is performed after unit testing. In integration testing, we test the group of related modules. It aims at finding interfacing issues between the modules.

  • Big bang Integration Testing – In big bang integration testing, testing starts only after all the modules are integrated.
  • Top-down Integration Testing – In top down integration, testing/integration starts from top modules to lower level modules.
  • Bottom-up Integration Testing – In bottom up integration, testing starts from lower level modules to higher level module up in the hierarchy.
  • Hybrid Integration Testing – Hybrid integration testing is the combination of both top-down and bottom-up integration testing. In this approach, the integration starts from the middle layer and testing is carried out in both directions.

In top-down integration testing, the lower-level modules are often not yet developed when testing/integration of the top-level modules begins. In those cases, stubs (dummy modules) are used; they simulate the working of the missing modules by providing hard-coded or expected output based on the input values.

In bottom-up integration testing, drivers are used to simulate the working of the top-level modules in order to test the related modules lower in the hierarchy.
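The stub/driver idea can be sketched as follows (the checkout/payment-gateway scenario is a hypothetical example, not from the text):

```python
def payment_gateway_stub(amount):
    # Stub: returns a hard-coded response in place of the real,
    # not-yet-developed lower-level payment module.
    return {"status": "success", "charged": amount}

def checkout(amount, gateway):
    # Module under test: depends on the lower-level gateway module.
    response = gateway(amount)
    return response["status"] == "success"

def driver():
    # Driver: a throwaway caller standing in for the not-yet-developed
    # top-level module that would normally invoke checkout().
    return checkout(100, payment_gateway_stub)

print(driver())  # True
```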

A test harness is a collection of test scripts and test data usually associated with unit and integration testing. It involves stubs and drivers that are required for testing software modules and integrated components.

System testing is the level of testing where the complete software is tested as a whole. The conformance of the application with its business requirements is checked in system testing.

Monkey testing is a type of testing that is performed randomly without any predefined test cases or test inputs.

Performance testing is a type of non-functional testing in which the performance of the system is evaluated under expected or higher load. The performance parameters evaluated during performance testing include response time, reliability, resource usage, scalability etc.

Load testing is a type of performance testing which aims at finding the application's performance under the expected workload. During load testing we evaluate parameters such as the response time, throughput and error rate of the application.

Stress testing is a type of performance testing in which the application's behavior is monitored under a higher workload than expected. Stress testing is done to find memory leaks and assess the robustness of the application.

Volume testing is a type of performance testing in which the performance of the application is evaluated with a large amount of data. It checks the scalability of the application and helps in identifying bottlenecks with high volumes of data.

Spike testing is a type of performance testing in which the application’s performance is measured while suddenly increasing the number of active users during the load test.

Usability testing is the type of testing that aims at determining the ease of using the application. It aims at uncovering the usability defects in the application.

Accessibility testing is the type of testing which aims at determining the ease of use or operation of the application, specifically for people with disabilities.

Compatibility testing checks whether the software is compatible with a given operating system, platform or hardware.

Configuration testing is the type of testing used to evaluate the configurational requirements of the software along with effect of changing the required configuration.

Localisation testing is a type of testing in which we evaluate the application's customization (the localized version of the application) for a particular culture, locale or country.

Globalization testing is a type of testing in which application is evaluated for its functioning across the world in different cultures, languages, locale and countries.

Negative testing is a type of testing in which the application’s robustness(graceful exiting or error reporting) is evaluated when provided with invalid input or test data.

Security testing is a type of testing which aims at evaluating the integrity, authentication, authorization, availability, confidentiality and non-repudiation of the application under test.

Penetration testing, or pen testing, is a type of security testing in which the application is evaluated (safely exploited) for the kinds of vulnerabilities that a hacker could exploit.

Robustness testing is a type of testing that is performed to find the robustness of the application i.e. the ability of the system to behave gracefully in case of erroneous test steps and test input.

A/B testing is a type of testing in which the two variants of the software product are exposed to the end users and on analyzing the user behaviour on each variant, the better variant is chosen and used thereafter.

Concurrency testing is a multi-user testing in which an application is evaluated by analyzing application’s behavior with concurrent users accessing the same functionality.

All-pairs testing is a type of testing in which the application is tested with all possible discrete combinations of each pair of input parameter values, rather than with every combination of all parameters at once, which is usually infeasible.

Failover testing is a type of testing used to verify the application's ability to allocate more resources (more servers) in case of failure and to transfer the processing to a back-up system.

Fuzz testing is a type of testing in which large amounts of random data are provided as input to the application in order to find security loopholes and other issues in the application.

UI or user interface testing is a type of testing that aims at finding Graphical User Interface defects in the application and checks that the GUI conforms to the specifications.

Risk analysis is the analysis of the risk identified and assigning an appropriate risk level to it based on its impact over the application.

  • Smoke testing is a type of testing in which all the major functionalities of the application are tested before carrying out exhaustive testing, whereas sanity testing is a subset of regression testing carried out when there is a minor fix in the application in a new build.
  • In smoke testing, shallow-and-wide testing is carried out, while in sanity testing narrow-and-deep testing (of a particular functionality) is done.
  • Smoke tests are usually documented or automated, whereas sanity tests are generally undocumented or unscripted.

Code coverage is the measure of the amount of code covered by the test scripts. It gives the idea of the part of the application covered by the test suite.

Cyclomatic complexity is the measure of the number of independent paths in an application or program. This metric provides an indication of the amount of effort required to test the complete functionality. It is defined by the expression L – N + 2P, where:

  • L is the number of edges in the graph
  • N is the number of nodes
  • P is the number of disconnected parts
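The formula above can be applied directly; the if/else example below is an illustrative choice of control-flow graph, not from the text:

```python
def cyclomatic_complexity(edges, nodes, parts=1):
    # L - N + 2P from the definition above
    # (L edges, N nodes, P disconnected parts).
    return edges - nodes + 2 * parts

# Control-flow graph of a single if/else: decision node, then-block,
# else-block and merge node (4 nodes), with 4 edges between them and
# one connected part -- giving 2 independent paths to test.
print(cyclomatic_complexity(edges=4, nodes=4))
```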

Dynamic testing is testing performed by executing or running the application under test, either manually or using automation.

Exit criteria are a formal set of conditions specifying the agreed-upon features or state of the application required to mark the completion of a process or product.

In software testing, a traceability matrix is a table that relates the high-level requirements to detailed requirements, test plans or test cases. A requirement traceability matrix (RTM) helps in ensuring 100% test coverage.

Pilot testing is testing carried out as a trial by a limited number of users, who evaluate the system and provide their feedback before the complete deployment is carried out.

Backend testing is a type of testing that involves testing the backend of the system, which comprises the databases and the APIs of the application.

Mutation testing is a type of white box testing in which the source code of the application is mutated to introduce a defect into its working. The test scripts are then executed to check their correctness, by verifying that they fail against the mutated code.

A scrum is a process for implementing the Agile methodology. In scrum, time is divided into sprints, and a deliverable is shipped at the end of each sprint.

The different roles in scrum are:

  • Product Owner – The product owner owns the whole development of the product, assigns tasks to the team and acts as an interface between the scrum team (development team) and the stakeholders.
  • Scrum Master – The scrum master monitors that the scrum rules are followed in the team and conducts the scrum meetings.
  • Scrum Team – The scrum team participates in the scrum meetings and performs the assigned tasks.

A scrum meeting is the daily meeting of the scrum process. It is conducted by the scrum master; in it, each member gives an update on the previous day's work and defines the next day's tasks and context.

Following are some sample test cases for an ATM machine –

  1. Verify that the slot for ATM card insertion is as per the specification
  2. Verify that the user is presented with options when the card is inserted from the proper side
  3. Verify that no option to continue and enter credentials is displayed when the card is inserted incorrectly
  4. Verify that the font of the text displayed on the ATM screen is as per the specifications
  5. Verify that the touch of the ATM screen is smooth and operational
  6. Verify that the user is presented with the option to choose a language for further operations
  7. Verify that the user is asked to enter the PIN before any card/bank account detail is displayed
  8. Verify that there is a limited number of attempts up to which the user is allowed to enter the PIN
  9. Verify that if the total number of incorrect PIN attempts is exceeded, the user is not allowed to continue and operations like blocking of the card are initiated
  10. Verify that the PIN is encrypted/masked when entered
  11. Verify that the user is presented with different account type options like savings, current etc
  12. Verify that the user is allowed to get account details like available balance
  13. Verify that the same amount of money as entered by the user is dispensed for cash withdrawal
  14. Verify that the user is only allowed to enter amounts in multiples of the denominations as per the specifications
  15. Verify that the user is prompted to enter the amount again in case the amount entered is not as per the specification, with a proper message displayed for the same
  16. Verify that the user cannot withdraw more than the total available balance
  17. Verify that the user is provided the option to print the transaction/enquiry receipt
  18. Verify that the user's session timeout is maintained and is as per the specifications
  19. Verify that the user is not allowed to exceed the per-transaction amount limit
  20. Verify that the user is not allowed to exceed the per-day transaction amount limit
  21. Verify that the user is allowed to do only one transaction per PIN request
  22. Verify that the user is not allowed to proceed with an expired ATM card
  23. Verify that in case the ATM machine runs out of money, a proper message is displayed to the user
  24. Verify that in case of sudden electricity loss in between an operation, the transaction is marked as null and the amount is not deducted from the user's account
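The PIN-attempt behaviour from test cases 8 and 9 can also be sketched as logic under test; the attempt limit and PIN below are hypothetical values, not a real ATM implementation.

```python
# Sketch of the PIN-attempt rule: after MAX_ATTEMPTS wrong PINs the card
# is blocked. MAX_ATTEMPTS and CORRECT_PIN are hypothetical values.
MAX_ATTEMPTS = 3
CORRECT_PIN = "4321"

def try_pins(entered_pins):
    """Return 'ok' on a correct PIN, 'blocked' once attempts run out."""
    attempts = 0
    for pin in entered_pins:
        if pin == CORRECT_PIN:
            return "ok"
        attempts += 1
        if attempts >= MAX_ATTEMPTS:
            return "blocked"
    return "retry"  # wrong PIN(s) so far, but attempts still remain

print(try_pins(["0000", "4321"]))          # 'ok'
print(try_pins(["0000", "1111", "2222"]))  # 'blocked'
```

Writing the rule out like this makes the boundary cases explicit: exactly at the third wrong attempt the card must block, not one attempt earlier or later.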
Following are some sample test cases for a coffee machine –

  1. UI scenario – Verify that the dimensions of the coffee machine are as per the specification
  2. Verify that the outer body as well as the inner parts' material are as per the specification
  3. Verify that the machine's body colour as well as branding are correctly visible and as per the specification
  4. Verify the input mechanism for coffee ingredients – milk, water, coffee beans/powder etc
  5. Verify that the quantity of hot water, milk and coffee powder per serving is correct
  6. Verify the power/voltage requirements of the machine
  7. Verify the effect of suddenly switching off the machine or cutting the power – the machine should stop, and on power resumption the remaining coffee should not come out of the nozzle
  8. Verify that coffee does not leak when the machine is not in operation
  9. Verify that the amount of coffee in a single serving is as per the specification
  10. Verify that the digital display shows correct information
  11. Check if the machine can be switched on and off using the power buttons
  12. Check the indicator lights when the machine is switched on and off
  13. Verify that all the buttons function properly when pressed
  14. Verify that each button has an image/text with it, indicating the task it performs
  15. Verify that the complete quantity of coffee gets poured in a single operation, with no residual coffee left in the nozzle
  16. Verify that the mechanism to clean the system (e.g. the foamer) works correctly
  17. Verify that the coffee served has the same and correct temperature each time it is served by the machine
  18. Verify that the system displays an error when it runs out of ingredients
  19. Verify that pressing the coffee button multiple times leads to multiple servings of coffee
  20. Verify that there is a passage for residual/extra coffee in the machine
  21. Verify that the machine works correctly in different climatic, moisture and temperature conditions
  22. Verify that the machine does not make too much sound when in operation
  23. Performance test – Check the amount of time the machine takes to serve a single serving of coffee
  24. Performance test – Check the performance of the machine when used continuously until the ingredients run out
  25. Negative test – Check the functioning of the coffee machine when two or more buttons are pressed simultaneously
  26. Negative test – Check the functioning of the coffee machine with lower or higher voltage than required
  27. Negative test – Check the functioning of the coffee machine when the ingredient containers' capacity is exceeded
Following are some sample test cases for Google search –

  1. Verify that the response fetched for a particular keyword is correct and related to the keyword, containing links to the relevant web pages
  2. Verify that the responses are sorted by relevance in descending order, i.e. the most relevant results for the keyword are displayed on top
  3. Verify that the response for a multi-word keyword is correct
  4. Verify that the response for keywords containing alphanumeric and special characters is correct
  5. Verify that the link title, URL and description have the keyword highlighted in the response
  6. Verify auto-suggestion in Google, e.g. providing the input 'fac' should give suggestions like 'facebook', 'facebook messenger', 'facebook chat' etc
  7. Verify that the response fetched on selecting a suggested keyword and on providing the keyword directly is the same
  8. Verify that the suggestions provided by Google are sorted by most popular/relevant suggestions
  9. Verify that the user can search in different categories – web, images, videos, news, books etc – and the response corresponds to the keyword in that category only
  10. Verify that a misspelled keyword gets corrected and the response corresponding to the corrected keyword is displayed
  11. Verify that multi-word misspelled keywords also get corrected
  12. Verify the performance of search – check if the time taken to fetch the response is within an acceptable range
  13. Verify the total number of results fetched for a keyword
  14. Verify that the search response is localised, i.e. the response is more relevant to the country/area from which the search request is initiated
  15. Verify the Google calculator service – make any arithmetic request; the calculator should get displayed with the correct result
  16. Verify the Google converter service – make a request like "10 USD in INR" and check if the result is correct
  17. Verify the search response for large but valid strings
  18. Verify that incorrect keywords – keywords having no related results – lead to a "did not match any documents" response
  19. Verify that the user can search using different languages
  20. Verify that for a keyword, some related search terms are also displayed to aid the user's search
  21. Verify that when the number of results exceeds the limit for a single page, pagination is present, clicking on which the user can navigate to subsequent pages' results
  22. Verify Google's advanced search options like searching within a website or searching for files of a specific extension
  23. Verify whether the search is case-insensitive or not
  24. Verify the functionality of the "I'm Feeling Lucky" search – the topmost search result should get directly returned (though as of now the Google Doodle page is displayed)
    Front End – UI Test Cases of Google Search
  25. Verify that the Google logo is present and centre aligned
  26. Verify that the search textbox is centre aligned and editable
  27. Verify that the search request gets fired by clicking the search button or hitting Enter after typing the search term
  28. Verify that the search results contain each webpage's title, URL and description
  29. Verify that clicking a search result leads to the corresponding web page
  30. Verify that pagination is present in case the number of results is greater than the maximum results allowed on a page
  31. Verify that the user can navigate to a page number directly or move to the previous or next page using the links present
  32. Verify that links for different languages are present and get applied on clicking them
  33. Verify that the total number of results for the keyword is displayed
  34. Verify that the time taken to fetch the results is displayed
Following are some sample test cases for a lift (elevator) –

  1. Verify the dimensions of the lift
  2. Verify that the type of door of the lift is as per the specification
  3. Verify the type of metal used in the lift's interior and exterior
  4. Verify the capacity of the lift in terms of total weight
  5. Verify the buttons in the lift to close and open the door, and the numbered buttons matching the number of floors
  6. Verify that the lift moves to the particular floor when that floor's button is pressed
  7. Verify that the lift stops when the up/down buttons on a particular floor are pressed
  8. Verify that there is an emergency button to contact officials in case of any mishap
  9. Verify the performance of the lift – the time taken to reach a floor
  10. Verify that in case of power failure, the lift doesn't free-fall and gets halted at a floor
  11. Verify the lift's behaviour in case the button to open the door is pressed before reaching the destination floor
  12. Verify that in case the door is about to close and an object is placed between the doors, the doors sense the object and open again
  13. Verify the time duration for which the doors remain open by default
  14. Verify that the lift's interior has proper air ventilation
  15. Verify the lighting in the lift
  16. Verify that at no point do the lift's doors open while it is in motion
  17. Verify that in case of power loss, there is a backup mechanism to safely reach a floor, or a backup power supply
  18. Verify that in case multiple floor number buttons are pressed, the lift stops at each of those floors
  19. Verify that when the capacity limit is reached, users are prompted with a warning alert – audio/visual
  20. Verify that inside the lift, users are prompted with the current floor and the direction in which the lift is moving – audio/visual prompt
Following are some sample test cases for a login page –

  1. Verify that the login screen has options to enter the username and password, a submit button and a forgot-password option
  2. Verify that the user is able to log in with a valid username and password
  3. Verify that the user is not able to log in with an invalid username or password
  4. Verify that a validation message gets displayed in case the user leaves the username or password field blank
  5. Verify that a validation message is displayed in case the user exceeds the character limit of the username and password fields
  6. Verify that there is a reset button to clear the fields' text
  7. Verify whether there is a checkbox with the label "remember password" on the login page
  8. Verify that the password is masked/encrypted when entered
  9. Verify that there is a limit on the total number of unsuccessful attempts
  10. From a security point of view, verify that in case of incorrect credentials the user is shown a generic message like "incorrect username or password" instead of a message pointing at the exact field that is incorrect, since a message like "incorrect username" would aid a hacker in brute-forcing the fields one by one
  11. Verify the timeout of the login session
  12. Verify whether the password can be copy-pasted or not
  13. Verify that once logged in, clicking the back button doesn't log the user out
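Several of the checks above (valid/invalid credentials, blank fields, character limits and the deliberately generic error message) can be sketched against a hypothetical validator; the user store, messages and limit below are made-up fixtures.

```python
# Login-validation sketch. USERS, MAX_LEN and the messages are hypothetical.
MAX_LEN = 20
USERS = {"alice": "s3cret"}

def login(username, password):
    if not username or not password:
        return "field cannot be blank"
    if len(username) > MAX_LEN or len(password) > MAX_LEN:
        return "character limit exceeded"
    if USERS.get(username) == password:
        return "login successful"
    # Deliberately generic message: don't reveal which field was wrong,
    # to avoid helping brute-force attacks (see check 10 above).
    return "incorrect username or password"

print(login("alice", "s3cret"))  # login successful
print(login("", "s3cret"))       # field cannot be blank
print(login("alice", "wrong"))   # incorrect username or password
```

Each branch of the function corresponds to one of the manual test cases, which is a common way to turn a checklist like this into an automated regression suite.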
Following are some sample test cases for a music player app –

  1. Test the GUI first – check that the basic buttons (play, pause, resume, next, previous, volume up and volume down) are available
  2. Verify that the user is able to create playlists and that songs are played as per the playlist
  3. Verify saving music tracks to existing playlists or a new playlist
  4. Verify finding music by genre or other categories
  5. Check the performance of the app when connected to different sound systems
  6. Check how the app behaves when it is left idle for a long time
  7. Check how the app behaves when two or more apps of the same kind are available
  8. Verify legal/licensing compliance of the music content
  9. Verify any Facebook or Twitter capability for sharing the music being listened to
  10. Play continuously, non-stop, for longer durations
  11. Stress testing

Although there can be numerous test cases for a mobile phone – and considering smartphones, even more – in this document we will focus mainly on the calling, SMS and directory features.

  1. Verify that all the required buttons – number keys 0-9, calling buttons etc – are present
  2. Verify that the user can make a call by pressing numbers and hitting the calling (green) button
  3. Verify that the user can make a call by selecting a contact person from the phone directory
  4. Verify that the user can reject an incoming call
  5. Verify that the user can receive an SMS
  6. Verify that the user can type and send an SMS
  7. Verify that the dimensions of the mobile are as per the specification
  8. Verify the screen size of the mobile
  9. Verify that the weight of the mobile is as per the specification
  10. Verify the font type and size of the characters printed on the keypad
  11. Verify the colour of the mobile phone's outer body and of the characters printed on the keypad
  12. Verify the pressure required to press a key on the keypad
  13. Verify that the spacing between the keys on the keypad is adequate
  14. Check the type of mobile – smartphone or normal
  15. Check if the mobile screen is colour or black-and-white
  16. Check that the lighting of the mobile screen is adequate – verify in the dark as well as in daylight
  17. Check if the mobile phone can be locked without a password or PIN
  18. Check if the mobile phone can be locked with a password or PIN
  19. Verify that the mobile phone can be unlocked with/without the password
  20. Verify that the user can receive a call when the phone is locked
  21. Verify that receiving a call when the phone is locked doesn't leave it unlocked after the call completes
  22. Verify that the user can select ringtones for incoming call and SMS alerts
  23. Verify that the user can set silent or vibrate mode for incoming calls and SMS
  24. Verify the battery requirement of the mobile
  25. Verify the total time taken to charge the mobile completely
  26. Verify the total time for the mobile to get completely discharged when left idle
  27. Verify the total talk time for the mobile to get completely discharged when continuously used in conversation
  28. Verify the length of the charger wire
  29. Verify that the mobile can be switched off and on
  30. Verify that the user can store contact details in the phonebook directory
  31. Verify that the user can delete and update contact details in the phonebook directory
  32. Verify that call logs are maintained in the call log
  33. Verify that received and sent SMSs are saved on the mobile
  34. Verify that the user can silence the phone during an incoming call
  35. Verify that the auto-reject option can be applied to and removed from particular numbers
Following are some sample test cases for a pen –

  1. Verify the type of pen – whether it is a ballpoint pen, ink pen or gel pen
  2. Verify the outer body of the pen – whether it is metallic, plastic or any other material as per the specification
  3. Verify the length, breadth and other size specifications of the pen
  4. Verify the weight of the pen
  5. Verify whether the pen is with or without a cap
  6. Verify whether the pen has a rubber grip or not
  7. Verify the colour of the ink of the pen
  8. Verify the odour of the pen
  9. Verify the size of the tip of the pen
  10. Verify that the company name or logo of the maker is correct and at the desired place
  11. Verify that the pen writes smoothly
  12. Verify whether the pen's ink leaks in case it is tilted upside down
  13. Verify whether the pen leaks at higher altitudes
  14. Verify the types of surfaces the pen can write on
  15. Verify whether the text written by the pen is erasable or not
  16. Verify that the condition of the pen and its ink at extreme temperatures is as per the specification
  17. Verify the pressure up to which the pen's tip can resist and still work correctly
  18. Verify whether the pen breaks or not when dropped from a certain height as per the specification
  19. Verify that text written by the pen doesn't fade before a certain time as per the specification
  20. Verify the effect of water, oil and other liquids on the text written by the pen
  21. Verify whether the condition of the ink after a long period of time is within the permissible specification or not
  22. Verify the total amount of text that can be written by the pen in one go
  23. Verify whether the pen's ink is waterproof or not
  24. Verify whether the pen is able to write when used against gravity – upside down
  25. Verify that in case of an ink pen, the pen's ink can be refilled

Following are some sample test cases for credit card payment processing –

  1. Test card numbers using the correct length and range, as well as card numbers that are outside the correct length and range
  2. Test valid expiry dates, invalid expiry dates and invalid date formats
  3. Test valid CVV numbers, mismatched CVV numbers and blank CVV numbers
  4. Test entering AVS details in the configured numeric or alphanumeric formats
  5. Test swiping of cards from both sides, as well as chip-based cards
  6. Verify that captured card numbers are properly encrypted and decrypted
  7. Test that the correct amount is being authorized, and that merchant and customer copies of the receipts and any vouchers print properly
  8. Check that the receipts print the proper date, time, card details, authorized amount etc
  9. Test that the correct response codes are returned for approved, declined, on-hold and all other transactions
  10. Test that you can reprint the receipt for a closed transaction
  11. Check that you can void a payment before posting it, and that voiding is not allowed after the payment is posted
  12. Check that all information regarding each credit card transaction is reflected in reports, and that any adjustments made to closed checks are also reflected in the report
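For the length-and-range check in case 1, one standard complementary validation is the Luhn checksum, which catches most mistyped card numbers. The sketch below is a generic implementation; the sample numbers are well-known test values, not real cards.

```python
def luhn_valid(card_number):
    """Luhn checksum: detects most single-digit typos in card numbers."""
    digits = [int(d) for d in card_number]
    total = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4532015112830366"))  # True  - checksum is valid
print(luhn_valid("4532015112830367"))  # False - last digit altered
```

A payment-processing test suite would pair this structural check with the gateway-level tests above (expiry, CVV, AVS, response codes), since Luhn validity says nothing about whether the card is actually active.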
