The test schedule for a software development project will be the result of a planning process in which the overall test requirements are determined and broken down into phases. The usual approach is to carry out unit testing at an early stage, followed by integration testing, then system testing, and finally acceptance testing. Depending on the size of the project and the number and size of its subsystems, testing at different levels may take place concurrently.
High-level planning will normally occur during the early stages of a software development project, although detailed planning of some of the lower-level testing may occur much later. The timing of each phase of testing will obviously depend on the scheduled completion dates for each stage of software development, since testing can only take place once there is something to test.
Unit testing will probably be ongoing throughout the implementation stage of the project, as the various low-level software components will be completed at different times. Similarly, integration testing will be undertaken for each group of software components as and when all of the units involved in a specific set of interactions are complete.
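As a minimal sketch of what such a unit test might look like, the following uses Python's standard unittest module; the add function is invented purely for illustration, and in a real project the component under test would be imported from the module being implemented.

    import unittest

    def add(a, b):
        # Hypothetical low-level component under test; in practice this
        # would be imported from the module under development.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_adds_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_adds_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()

Tests of this kind can be re-run automatically each time a component changes, which is what makes ongoing unit testing throughout the implementation stage practical.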
The test schedule will identify, as accurately as possible, the start and end dates for each stage of software testing. The test schedule planning process will take as its inputs the requirements specification, including a full system description and detailed functional specifications. The outputs will be a set of criteria for each phase of software testing, details of the software and hardware environment in which each phase of testing is to be carried out, and a detailed plan for each phase of testing that identifies which testing activities are to be carried out, and when.
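One simple way to record these outputs in machine-readable form is sketched below; the phase names, dates, entry criteria, and environments are invented for illustration and would in practice come from the project plan itself.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class TestPhase:
        # One phase from the test schedule: its planned start and end
        # dates, the criteria for entering the phase, and the hardware
        # and software environment in which it is to run.
        name: str
        start: date
        end: date
        entry_criteria: list[str] = field(default_factory=list)
        environment: str = ""

    # Illustrative schedule only; real dates come from the project plan.
    schedule = [
        TestPhase("Unit testing", date(2024, 3, 1), date(2024, 5, 30),
                  ["Component code complete"], "Developer workstations"),
        TestPhase("Integration testing", date(2024, 4, 15), date(2024, 6, 30),
                  ["All units in the interaction group unit-tested"],
                  "Shared integration server"),
    ]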
For each test plan created, suitable test cases and test data must be identified, together with the expected outcomes. If test results are not as expected, the reasons should be investigated and the code amended as necessary. Once problems have been corrected, the tests should be run again, including any regression testing judged necessary to detect new errors introduced by the revised program code. Once all tests have been satisfactorily completed, the test results should be recorded and signed off by the project manager or an appropriate supervisor.
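A common way to pair test cases and test data with their expected outcomes is a table-driven test, sketched below; the discount function and its values are hypothetical. Because the whole table runs in a single step, it also makes the regression testing described above cheap to repeat after each correction.

    import unittest

    def discount(price, rate):
        # Hypothetical function under test.
        return price * (1 - rate)

    class TestDiscount(unittest.TestCase):
        # Each tuple pairs test data with its expected outcome.
        CASES = [
            (100.0, 0.10, 90.0),
            (100.0, 0.00, 100.0),
            (50.0, 0.50, 25.0),
        ]

        def test_expected_outcomes(self):
            for price, rate, expected in self.CASES:
                with self.subTest(price=price, rate=rate):
                    self.assertAlmostEqual(discount(price, rate), expected)

    if __name__ == "__main__":
        unittest.main()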