Wednesday, 19 October 2011

TYPES OF TESTING





Software Testing-Testing Types
1. What are the different testing approaches in Software Testing?
A: Each of the following represents a different testing approach:
1. Black box testing
2. White box testing
3. Unit testing
4. Incremental testing
5. Integration testing
6. Functional testing
7. System testing
8. End-to-end testing
9. Sanity testing
10. Regression testing
11. Acceptance testing
12. Load testing
13. Performance testing
14. Usability testing
15. Install/uninstall testing
16. Recovery testing
17. Security testing
18. Compatibility testing
19. Exploratory testing
20. Ad-hoc testing
21. User acceptance testing
22. Comparison testing
23. Alpha testing
24. Beta testing
25. Mutation testing



2. What is glass box testing in Software Testing?
A: Glass box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

3. What is open box testing in Software Testing?
A: Open box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

4. What is black box testing in Software Testing?
A: Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.


5. What is unit testing in Software Testing?
A: Unit testing is the first level of dynamic testing and is primarily the responsibility of the developers, and then of the test engineers. Unit testing is considered complete when the expected test results are met, or when any differences are explainable/acceptable.
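To make this concrete, here is a minimal sketch using Python's built-in unittest framework. The calculate_discount function is a hypothetical unit under test, invented for this example; a real unit test would exercise your own smallest testable piece of code in the same way.

```python
import unittest

# Hypothetical unit under test: the smallest testable piece of code.
def calculate_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestCalculateDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertAlmostEqual(calculate_discount(200.0, 10), 180.0)

    def test_no_discount(self):
        self.assertEqual(calculate_discount(50.0, 0), 50.0)

    def test_invalid_percent_rejected(self):
        # Unit tests also cover error behavior, not just the happy path.
        with self.assertRaises(ValueError):
            calculate_discount(50.0, 150)

# Run the tests programmatically (instead of `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCalculateDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The test is "complete" in the sense of the answer above when the actual results match the expected results encoded in the assertions.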


6. What is system testing in Software Testing?
A: System testing is black box testing, performed by the Test Team; at the start of system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing simulates real life scenarios in a "simulated real life" test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input. Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at the unit and integration test levels.

7. What is parallel/audit testing in Software Testing?
A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly. Another Definition:
With parallel testing, users can choose to run batch tests or asynchronous tests depending on the needs of their test systems. Testing multiple units in parallel increases test throughput and lowers a manufacturer's cost of test.


8. What is functional testing in Software Testing?
A: Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers *should* perform functional testing.

9. What is usability testing in Software Testing?
A: Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

10. What is integration testing in Software Testing?
A: Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that the distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.
Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

11. What is end-to-end testing in Software Testing?
A: Similar to system testing, the 'macro' end of the test scale is testing a complete application in a situation that mimics real world use, such as interacting with a database, using network communication, or interacting with other hardware, applications, or systems.

12. What is regression testing in Software Testing?
A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
Another Definition
Re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
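The baseline comparison described above can be sketched as a simple expected-vs-actual diff. The test case names and result strings here are purely illustrative, not from any real system:

```python
# Expected results captured from the last known-good release (the baseline).
baseline = {
    "login":    "success",
    "search":   "10 results",
    "checkout": "order placed",
}

# Actual results from the software under test after the latest changes.
current_run = {
    "login":    "success",
    "search":   "9 results",   # a regression slipped in here
    "checkout": "order placed",
}

def find_regressions(expected, actual):
    """Return every test case whose result differs from the baseline."""
    return {
        case: (expected[case], actual[case])
        for case in expected
        if expected[case] != actual[case]
    }

discrepancies = find_regressions(baseline, current_run)
for case, (want, got) in discrepancies.items():
    print(f"REGRESSION in {case!r}: expected {want!r}, got {got!r}")
```

Every discrepancy is highlighted and must be accounted for before testing proceeds, exactly as the answer above describes.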

13. What is sanity testing in Software Testing?
A: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
Another Definition of Sanity testing
Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.



14. What is performance testing in Software Testing?
A: Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements.
Another Definition :-
Term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in the requirements documentation or QA or Test Plans.



15. What is load testing in Software Testing?
A: Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system response time will degrade or fail.
Another Definition
Load testing simulates the expected usage of a software program, by simulating multiple users that access the program's services concurrently. Load testing is most useful and most relevant for multi-user systems, client/server models, including web servers. For example, the load placed on the system is increased above normal usage patterns, in order to test the system's response at peak loads.
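A minimal sketch of this idea, using threads to simulate concurrent users, might look like the following. The handle_request function is a hypothetical stand-in; a real load test would issue HTTP requests against the system under test, typically with a dedicated tool rather than hand-rolled threads:

```python
import threading
import time

# Hypothetical service call; a real load test would hit the actual system.
def handle_request():
    time.sleep(0.01)  # simulate server-side processing time
    return "ok"

def run_load(concurrent_users):
    """Fire one request per simulated user concurrently and time the batch."""
    results = []
    lock = threading.Lock()

    def user():
        response = handle_request()
        with lock:  # guard the shared results list
            results.append(response)

    threads = [threading.Thread(target=user) for _ in range(concurrent_users)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return len(results), elapsed

# Increase the load step by step to observe how response time degrades.
for users in (10, 50):
    completed, elapsed = run_load(users)
    print(f"{users} users -> {completed} responses in {elapsed:.2f}s")
```

Stepping the user count upward, as in the loop above, is how you find the point at which response time degrades or the system fails.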

16. What is installation testing in Software Testing?
A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application's System Administrator, the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed, following installation testing.


17. What is security/penetration testing in Software Testing?
A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

18. What is recovery/error testing in Software Testing?
A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.



19. What is compatibility testing in Software Testing?
A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

20. What is comparison testing?
A: Comparison testing is testing that compares software weaknesses and strengths to those of competitors' products.

21. What is acceptance testing in Software Testing?
A: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager, however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.
22. What is alpha testing in Software Testing?
A: Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers, or software QA engineers.
Another Definition
Alpha testing is the final testing before the software is released to the general public. First (the first phase of alpha testing), the software is tested by in-house developers, using either debugger software or hardware-assisted debuggers. The goal is to catch bugs quickly. Then (the second phase of alpha testing), the software is handed over to the software QA staff for additional testing in an environment that is similar to the intended use.

23. What is beta testing in Software Testing? 
A: Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.
Another Definition:- Following alpha testing, "beta versions" of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public, in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users. 

24. What is stress testing in Software Testing? 
A: Stress testing is testing that investigates the behavior of software (and hardware) under extraordinary operating conditions. For example, when a web server is stress tested, testing aims to find out how many users can be on-line, at the same time, without crashing the server. Stress testing tests the stability of a given system or entity. It tests something beyond its normal operational capacity, in order to observe any negative results. For example, a web server is stress tested, using scripts, bots, and various denial of service tools. 
Another Definition
Term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

25. What is the difference between performance testing and load testing in Software Testing? 
A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing. 

26. What is the difference between reliability testing and load testing in Software Testing? 
A: Load testing is a blanket term that is used in many different ways across the professional software testing community, and it is often used synonymously with reliability testing. The distinction is one of emphasis: load testing subjects the system to many concurrent users or transactions, while reliability testing verifies that the system continues to function correctly over an extended period of operation.

27. What is the difference between volume testing and load testing in Software Testing? 
A: Load testing is a blanket term that is used in many different ways across the professional software testing community, and it is often used synonymously with volume testing. The distinction is one of emphasis: load testing subjects the system to many concurrent users or transactions, while volume testing subjects the system to large amounts of data, such as huge input files or a heavily populated database.

28. What is incremental testing in Software Testing? 
A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide an early feedback to software developers. 

29. What is software testing? 
A: Software testing is a process that identifies the correctness, completeness, and quality of software. Strictly speaking, testing cannot establish the correctness of software. It can find defects, but cannot prove there are no defects.

30. What is automated testing in Software Testing? 
A: Automated testing is a formally specified and controlled testing approach in which tests are executed by software tools rather than manually.

31. What is incremental integration testing in Software Testing? 
A: Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

32. What is the difference between alpha and beta testing in Software Testing? 
A: Alpha testing is performed by in-house developers and software QA personnel. Beta testing is performed by the public, a few select prospective customers, or the general public. 

33. What is clear box testing in Software Testing? 
A: Clear box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic. You CAN learn clear box testing, with little or no outside help. 

34. What is boundary value analysis in Software Testing? 
A: Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.
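A small sketch of this technique follows. The age-field requirement (valid range 18 to 65) is hypothetical, chosen only to show how the boundary values are derived from a range:

```python
def boundary_values(minimum, maximum):
    """Return the classic boundary-value test inputs for a numeric range."""
    return [
        minimum - 1,               # just outside the lower boundary (error value)
        minimum,                   # on the lower boundary
        minimum + 1,               # just inside the lower boundary
        (minimum + maximum) // 2,  # a typical in-range value
        maximum - 1,               # just inside the upper boundary
        maximum,                   # on the upper boundary
        maximum + 1,               # just outside the upper boundary (error value)
    ]

# Hypothetical requirement: an age field must accept 18..65 inclusive.
def is_valid_age(age):
    return 18 <= age <= 65

for value in boundary_values(18, 65):
    print(value, is_valid_age(value))
```

Seven test values cover both boundaries, their near neighbors on each side, and a typical value, instead of testing every age exhaustively.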

35. What is ad hoc testing in Software Testing? 
A: Ad hoc testing is a testing approach; it is the least formal testing approach. 
Another Definition
Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

36. What is gamma testing in Software Testing? 
A: Gamma testing is testing of software that has all the required features, but it did not go through all the in-house quality checks. Cynics tend to refer to software releases as "gamma testing". 

37. What is functional testing in Software Testing? 
A: Functional testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.

38. What is closed box testing in Software Testing? 
A: Closed box testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.

39. What is bottom-up testing in Software Testing? 
A: Bottom-up testing is a technique for integration testing. A test engineer creates and uses test drivers for components that have not yet been developed, because, with bottom-up testing, low-level components are tested first. The objective of bottom-up testing is to call low-level components first, for testing purposes. 
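The driver idea can be sketched as follows. Here compute_tax is an imaginary low-level component that already exists, and invoice_driver is the throwaway test driver standing in for a higher-level invoicing module that has not been written yet; both names are invented for this example:

```python
# Low-level component (already developed): a hypothetical tax calculator.
def compute_tax(amount, rate):
    if amount < 0 or rate < 0:
        raise ValueError("amount and rate must be non-negative")
    return round(amount * rate, 2)

# Test driver: a throwaway harness that calls the low-level component
# the way the future, not-yet-developed invoicing module eventually will.
def invoice_driver():
    cases = [
        (100.0, 0.20, 20.0),
        (50.0, 0.10, 5.0),
        (0.0, 0.20, 0.0),
    ]
    for amount, rate, expected in cases:
        actual = compute_tax(amount, rate)
        assert actual == expected, f"{amount=} {rate=}: {actual} != {expected}"
    return "driver passed"

print(invoice_driver())
```

Once the real invoicing module is developed, it replaces the driver and the integration between the two layers is tested for real.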

40. How do you perform integration testing in Software Testing? 
A: First, unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable/acceptable based on client input. 

41. When do you choose automated testing in Software Testing? 
A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But for small projects, the time needed to learn and implement the automated testing tools is usually not worthwhile. Automated testing tools sometimes do not make testing easier. One problem with automated testing tools is that if there are continual changes to the product being tested, the recordings have to be changed so often, that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task. You can learn to use automated tools, with little or no outside help. 

42. What is the difference between system testing and integration testing in Software Testing? 
A: System testing is high level testing, and integration testing is a lower level testing. Integration testing is completed first, not the system testing. In other words, upon completion of integration testing, system testing is started, and not vice versa. For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. For system testing, on the other hand, the complete system is configured in a controlled environment, and test cases are developed to simulate real life scenarios that occur in a simulated real life test environment. The purpose of integration testing in Software Testing is to ensure distinct components of the application still work in accordance to customer requirements. The purpose of system testing, on the other hand, is to validate an application's accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life. 

43. What are the parameters of performance testing in Software Testing? 
A: The term 'performance testing' is often used synonymously with stress testing, load testing, reliability testing, and volume testing. Performance testing is a part of system testing, but it is also a distinct level of testing. Performance testing verifies loads, volumes, and response times, as defined by requirements. 

44. What is disaster recovery testing in Software Testing? 
A: Disaster recovery testing is testing how well the system recovers from disasters, crashes, hardware failures, or other catastrophic problems.

45. Which software testing tools should I learn? 
A: Learn the most popular software testing tools (i.e. LabVIEW, LoadRunner, Rational Tools, WinRunner, etc.), and pay special attention to LoadRunner and the Rational toolset.

46.What is the objective of regression testing in Software Testing? 
A: The objective of regression testing is to test that the fixes have not created any other problems elsewhere. In other words, the objective is to ensure the software has remained intact. A baseline set of data and scripts are maintained and executed, to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level. 

47.Is the regression testing performed manually in Software Testing? 
A: It depends on the initial testing approach. If the initial testing approach is manual testing, then, usually the regression testing is performed manually. Conversely, if the initial testing approach is automated testing, then, usually the regression testing is performed by automated testing. 

48. What is Exploratory testing in Software Testing? 
A: Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

49. What is Volume testing in Software Testing? 
A: Volume testing involves testing a software or Web application using corner cases of "task size" or input data size. The exact volume tests performed depend on the application's functionality, its input and output mechanisms, and the technologies used to build the application. Sample volume testing considerations include, but are not limited to:
If the application reads text files as inputs, try feeding it both an empty text file and a huge (hundreds of megabytes) text file.
If the application stores data in a database, exercise the application's functions when the database is empty and when the database contains an extreme amount of data.
If the application is designed to handle 100 concurrent requests, send 100 requests simultaneously and then send the 101st request.
If a Web application has a form with dozens of text fields that allow a user to enter text strings of unlimited length, try populating all of the fields with a large amount of text and submit the form.
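The first consideration above (empty versus huge input file) can be sketched like this. The count_lines function is a hypothetical unit under test, and the file size is scaled down so the sketch runs quickly; a real volume test would use hundreds of megabytes:

```python
import os
import tempfile

# Hypothetical unit under test: counts the lines in a text file.
def count_lines(path):
    with open(path, "r", encoding="utf-8") as f:
        return sum(1 for _ in f)

def volume_check():
    results = {}
    with tempfile.TemporaryDirectory() as tmp:
        # Corner case 1: an empty input file.
        empty = os.path.join(tmp, "empty.txt")
        open(empty, "w").close()
        results["empty"] = count_lines(empty)

        # Corner case 2: a very large input file (scaled down here).
        huge = os.path.join(tmp, "huge.txt")
        with open(huge, "w", encoding="utf-8") as f:
            for i in range(100_000):
                f.write(f"record {i}\n")
        results["huge"] = count_lines(huge)
    return results

print(volume_check())
```

The point is that the same function is exercised at both extremes of input size, since failures often appear only at the corners.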

50. What is Sociability Testing in Software Testing? 
A: This means that you test an application in its normal environment, along with other standard applications, to make sure they all get along together; that is, that they don't corrupt each other's files, they don't crash, they don't consume system resources, they don't lock up the system, they can share the printer peacefully, etc.

51. What is Mutation testing in Software Testing? 
A: A method for determining whether a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine whether the 'bugs' are detected. Proper implementation requires large computational resources.
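A toy illustration of the idea follows. Real mutation testing tools generate mutants automatically; here one mutant (a flipped comparison operator) is planted by hand, and the function names and test data are invented for this example:

```python
def max_of_two(a, b):          # original implementation
    return a if a > b else b

def max_of_two_mutant(a, b):   # mutant: comparison operator flipped
    return a if a < b else b

# The existing test data whose usefulness we want to measure.
test_cases = [((3, 7), 7), ((10, 2), 10), ((5, 5), 5)]

def survives(candidate):
    """Return True if the candidate passes every test case."""
    return all(candidate(a, b) == expected for (a, b), expected in test_cases)

# The test data is useful if the original passes but the mutant is killed.
print("original passes:", survives(max_of_two))
print("mutant killed:  ", not survives(max_of_two_mutant))
```

If a mutant survives the full test suite, the suite is missing a test that would distinguish correct behavior from that particular bug.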

IEEE Standard for Software Test Documentation (ANSI/IEEE Standard 829-1983)


This is a summary of the ANSI/IEEE Standard 829-1983. It describes a test plan as:
"A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning."
This standard specifies the following test plan outline:
Test Plan Identifier:
1. A unique identifier
Introduction
1. Summary of the items and features to be tested
2. Need for and history of each item (optional)
3. References to related documents such as project authorization, project plan, QA plan, configuration management plan, relevant policies, relevant standards
4. References to lower level test plans
Test Items
1. Test items and their version
2. Characteristics of their transmittal media
3. References to related documents such as requirements specification, design specification, users guide, operations guide, installation guide
4. References to bug reports related to test items
5. Items which are specifically not going to be tested (optional)
Features to be Tested
1. All software features and combinations of features to be tested
2. References to test-design specifications associated with each feature and combination of features
Features Not to Be Tested
1. All features and significant combinations of features which will not be tested
2. The reasons these features won't be tested
Approach
1. Overall approach to testing
2. For each major group of features of combinations of features, specify the approach
3. Specify major activities, techniques, and tools which are to be used to test the groups
4. Specify a minimum degree of comprehensiveness required
5. Identify which techniques will be used to judge comprehensiveness
6. Specify any additional completion criteria
7. Specify techniques which are to be used to trace requirements
8. Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadline
Item Pass/Fail Criteria
1. Specify the criteria to be used to determine whether each test item has passed or failed testing
Suspension Criteria and Resumption Requirements
1. Specify criteria to be used to suspend the testing activity
2. Specify testing activities which must be redone when testing is resumed
Test Deliverables
1. Identify the deliverable documents: test plan, test design specifications, test case specifications, test procedure specifications, test item transmittal reports, test logs, test incident reports, test summary reports
2. Identify test input and output data
3. Identify test tools (optional)
Testing Tasks
1. Identify tasks necessary to prepare for and perform testing
2. Identify all task interdependencies
3. Identify any special skills required
Environmental Needs
1. Specify necessary and desired properties of the test environment: physical characteristics of the facilities including hardware, communications and system software, the mode of usage (i.e., stand-alone), and any other software or supplies needed
2. Specify the level of security required
3. Identify special test tools needed
4. Identify any other testing needs
5. Identify the source for all needs which are not currently available
Testing is performed using hardware with the following minimum system requirements:
1. 133 MHz Pentium
2. Microsoft Windows 98
3. 32 MB RAM
4. 10 MB available hard disk space
5. A display device capable of displaying 640x480 (VGA) or better resolution
6. Internet connection via a modem or network
Responsibilities
1. Identify groups responsible for managing, designing, preparing, executing, witnessing, checking and resolving
2. Identify groups responsible for providing the test items identified in the Test Items section
3. Identify groups responsible for providing the environmental needs identified in the Environmental Needs section
Staffing and Training Needs
1. Specify staffing needs by skill level
2. Identify training options for providing necessary skills
Schedule
1. Specify test milestones
2. Specify all item transmittal events
3. Estimate time required to do each testing task
4. Schedule all testing tasks and test milestones
5. For each testing resource, specify its periods of use
Testing scheduling and status reporting are performed by the Project Lead and Project Administrator to monitor progress towards meeting product testing schedules and release dates, as well as to identify any project scheduling risks. Each build will be tested before the next build date. Software testing schedules will coincide with module development and release schedules.
Risks and Contingencies
1. Identify the high-risk assumptions of the test plan
2. Specify contingency plans for each
Approvals
1. Specify the names and titles of all persons who must approve the plan
2. Provide space for signatures and dates

Saturday, 15 October 2011

Waterfall model

The waterfall model derives its name from the cascading effect from one phase to the next, as illustrated in the figure. In this model each phase has a well-defined starting and ending point, with identifiable deliverables to the next phase.
Note that this model is sometimes referred to as the linear sequential model or the software life cycle.

The model consists of five distinct phases, namely:

1. Requirements phase
2. Design phase
3. Coding phase
4. Testing phase
5. Maintenance phase

1. Requirements Phase: The first step is to identify the need for a new system. This includes determining whether a business problem or opportunity exists. You can ask the question "What do the users want?". The requirements should be recorded in a document.
2. Design Phase: After the requirements have been determined, the necessary specifications for the hardware, software, data resources and information products that will satisfy the functional requirements of the proposed system are determined. The design will serve as a blueprint for the system and helps detect errors before they are built into the final system.
3. Coding Phase: This is the phase of the SDLC where the design is translated into machine readable language. Coding is the act of creating the system.
4. Testing Phase: The system must be tested to evaluate its actual functionality in relation to expected or intended functionality. Testing is done to ensure the created programs work reliably in different environments.
5. Maintenance Phase: After the system is properly implemented, the developers' role does not end there. They will have to provide solutions to problems found by the end users.

Advantages of Waterfall Model
a) Testing is inherent to every phase of the waterfall model
b) It is an enforced disciplined approach
c) It is documentation driven, that is, documentation is produced at every stage
Disadvantages of Waterfall Model
The waterfall model is the oldest and the most widely used paradigm. However, few projects actually follow its sequential flow. This is due to the inherent problems associated with its rigid format.
Namely:
a) It only incorporates iteration indirectly, thus changes may cause considerable confusion as the project progresses.
b) As the client usually has only a vague idea of exactly what is required from the software product, the waterfall model has difficulty accommodating the natural uncertainty that exists at the beginning of a project.
c) The customer only sees a working version of the product after it has been coded. This may result in disaster if any undetected problems have been carried through to this stage.

V MODEL


The V model, while admittedly less well known, gives equal weight to testing and development, rather than treating testing as an afterthought. The V model shows the typical sequence of development activities on the left-hand side and the corresponding testing activities on the right-hand side.
In fact, the V model emerged in reaction to some waterfall models that showed testing as a single phase following the traditional development phases of requirements gathering, analysis, design and coding. The waterfall model did considerable damage by supporting the common impression that testing is merely a brief detour after most of the mileage has been gained by mainline development activities. Many managers still believe this, even though testing usually takes up half of the project time. The V model is one of the most popular models in organisations. It uses a do-check approach, that is, every development activity is checked by a corresponding testing activity. It is also called the Verification and Validation model.

Specification Attribute Checklist


  • Complete: Is anything missing or forgotten? Is it thorough? Does it include everything necessary to make it stand alone?
  • Accurate: Is the proposed solution correct? Does it properly define the goal? Are there any errors?
  • Precise, Unambiguous and Clear: Is the description exact and not vague? Is there a single interpretation? Is it easy to read and understand?
  • Consistent: Is the description of the feature written so that it doesn't conflict with itself or other items in the specification?
  • Relevant: Is the statement necessary to specify the feature? Is it extra information that should be left out? Is the feature traceable to an original customer need?
  • Feasible: Can the feature be implemented with the available personnel, tools and resources within the specified budget and schedule?
  • Code free: Does the specification stick with defining the product and not the underlying software design, architecture and code?
  • Testable: Can the feature be tested? Is enough information provided that a tester could create tests to verify its operation?

Friday, 14 October 2011

Principles of Software testing

1. Testing shows presence of defects: Every software product has defects; even after testing you cannot guarantee that your software is 100% bug free.
2. Exhaustive testing is impossible: Testing with all possible combinations of inputs is time consuming and cannot fit in the project schedule.
3. Early testing: Testing at early stages, like requirements gathering and analysis, helps prevent errors from entering the code.
4. Defect clustering: A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.
5. Pesticide paradox: If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome the pesticide paradox, test cases need to be regularly reviewed and revised.
6. Testing is context dependent: Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
7. Absence-of-errors fallacy: Finding and fixing defects does not help if the system built is unusable and does not fulfill the user's needs and expectations.

What is Software Testing? What is the role of a Software tester?

Software testing is the process of trying to discover every conceivable fault or weakness in a work product. Another way to look at it is that testing is a process of executing a program with the intent of finding an error.
The objective of testing is to find all possible bugs (defects) in the work product.
A mature view of software testing is that it is the process of reducing the risk of software failure in the field to an acceptable level.
As you can see, we take a destructive attitude towards the program when we test, but in a larger context our work is constructive.

The role of a software tester is to find defects, find them as early as possible, and make sure they get fixed.