Wednesday, December 14, 2011

Software Testing Metrics

Test metrics are essential for measuring the quality, cost and effectiveness of a project and its processes. Without measuring these, a project cannot be managed or completed successfully.

Formulas for Calculating Metrics:

  1. % Test cases Executed: This metric is used to obtain the execution status of the test cases in terms of %.

% Test cases Executed = (No. of Test cases executed / Total no. of Test cases written) * 100.

  2. % Test cases not executed: This metric is used to obtain the pending execution status of the test cases in terms of %.

% Test cases not executed = (No. of Test cases not executed / Total no. of Test cases written) * 100.

  3. % Test cases Passed: This metric is used to obtain the Pass % of the executed test cases.

% Test cases Passed = (No. of Test cases Passed / Total no. of Test cases Executed) * 100.

  4. % Test cases Failed: This metric is used to obtain the Fail % of the executed test cases.

% Test cases Failed = (No. of Test cases Failed / Total no. of Test cases Executed) * 100.

  5. % Test cases Blocked: This metric is used to obtain the blocked % of the executed test cases. A detailed report can be submitted specifying the actual reason for blocking the test cases.

% Test cases Blocked = (No. of Test cases Blocked / Total no. of Test cases Executed) * 100
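The five execution-status formulas above can be sketched in a few lines of Python. The counts below are invented sample data for illustration, not figures from this article:

```python
def pct(part, whole):
    """Return part as a percentage of whole (0 if whole is 0)."""
    return (part / whole) * 100 if whole else 0.0

# Hypothetical test-cycle counts; executed = passed + failed + blocked.
total_written = 200
executed = 160
passed = 120
failed = 30
blocked = 10

pct_executed = pct(executed, total_written)                      # 80.0
pct_not_executed = pct(total_written - executed, total_written)  # 20.0
pct_passed = pct(passed, executed)                               # 75.0
pct_failed = pct(failed, executed)                               # 18.75
pct_blocked = pct(blocked, executed)                             # 6.25
```

Note that the pass/fail/blocked percentages are taken against *executed* test cases, while executed/not-executed are taken against the *total written*, exactly as in the formulas above.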

  6. Defect Density: A measure of the ratio of defects to the size of the development. Size is typically expressed in terms of Function Points or Impact Points.

Defect Density = No. of Defects identified / size

(Here ‘Size’ is considered as requirement. Hence here the Defect Density is calculated as number of defects identified per requirement).

  7. DRE (Defect Removal Efficiency): This metric is used to measure the test effectiveness of the software.

Defect Removal Efficiency (DRE) = (No. of Defects found during QA testing / (No. of Defects found during QA testing + No. of Defects found by End user)) * 100

  8. Defect Leakage: This metric is used to measure the efficiency of QA testing, i.e., how many defects were missed / slipped during QA testing.

Defect Leakage = (No. of Defects found in UAT / No. of Defects found in QA testing) * 100
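Defect Density, DRE and Defect Leakage can be expressed as small helper functions. The defect counts passed in below are hypothetical sample values, not data from this article:

```python
def defect_density(num_defects, size):
    # Defects per unit of size (here: per requirement, as in the note above).
    return num_defects / size

def dre(qa_defects, end_user_defects):
    # Defect Removal Efficiency as a percentage.
    return qa_defects / (qa_defects + end_user_defects) * 100

def defect_leakage(uat_defects, qa_defects):
    # Percentage of defects that slipped past QA into UAT.
    return uat_defects / qa_defects * 100

density = defect_density(30, 120)   # 0.25 defects per requirement
removal = dre(45, 5)                # 90.0 %
leakage = defect_leakage(5, 50)     # 10.0 %
```

A high DRE and a low leakage together indicate that most defects were caught before the software reached end users.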

  9. Defects by Severity/Priority: This metric counts the defects identified by Severity / Priority, which is used to judge the quality of the software.
  • % Critical Defects = No. of Critical Defects identified / Total no. of Defects identified * 100
  • % High Impact Defects = No. of High Impact Defects identified / Total no. of Defects identified * 100
  • % Medium Impact Defects = No. of Medium Impact Defects identified / Total no. of Defects identified * 100
  • % Low Impact Defects = No. of Low Impact Defects identified / Total no. of Defects identified * 100
  • % High Priority Defects = No. of High Priority Defects identified / Total no. of Defects identified * 100
  • % Medium Priority Defects = No. of Medium Priority Defects identified / Total no. of Defects identified * 100
  • % Low Priority Defects = No. of Low Priority Defects identified / Total no. of Defects identified * 100
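All of the severity/priority percentages above follow the same pattern, so they can be computed in one pass over a defect log. The severity values below are hypothetical sample data:

```python
from collections import Counter

# Hypothetical defect log; each entry is one defect's severity.
severities = ["Critical", "High", "High", "Medium", "Medium", "Low", "High", "Low"]

total = len(severities)
# Percentage of defects at each severity out of all defects identified.
distribution = {
    sev: count / total * 100
    for sev, count in Counter(severities).items()
}
# e.g. distribution["High"] == 37.5
```

The same approach works for priority: replace the severity labels with priority labels and the percentages fall out of the same dictionary comprehension.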
1. Cost of Finding a Defect in Testing (CFDT): It is calculated as

Total effort spent on testing / No. of defects found in testing

Note: Total effort spent on testing includes the time to create, review, rework, and execute the test cases and to record the defects. It should not include time spent fixing the defects.

2. Test Case Adequacy: This defines the number of actual test cases created vs. the number of test cases estimated at the end of the test case preparation phase. It is calculated as

No. of actual test cases / No. of test cases estimated

3. Test Case Effectiveness: This defines the effectiveness of the test cases, measured as the percentage of defects found using the test cases out of all defects detected. It is calculated as

No. of defects detected using test cases * 100 / Total no. of defects detected

4. Effort Variance: It can be calculated as

{(Actual Efforts - Estimated Efforts) / Estimated Efforts} * 100

5. Schedule Variance: It can be calculated as

{(Actual Duration - Estimated Duration) / Estimated Duration} * 100

6. Schedule Slippage: Slippage is defined as the amount of time a task has been delayed from its original baseline schedule. The slippage is the difference between the scheduled start or finish date for a task and the baseline start or finish date. It is calculated as

{(Actual End Date - Planned End Date) / (Planned End Date - Planned Start Date)} * 100
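Effort Variance and Schedule Variance share the same (actual - estimated) / estimated shape, and slippage just expresses the delay as a fraction of the planned duration. A sketch with hypothetical dates and efforts:

```python
from datetime import date

def variance_pct(actual, estimated):
    # Works for both Effort Variance and Schedule Variance.
    return (actual - estimated) / estimated * 100

def schedule_slippage_pct(actual_end, planned_start, planned_end):
    # Delay past the planned end date, as a % of the planned duration.
    planned_duration = (planned_end - planned_start).days
    delay = (actual_end - planned_end).days
    return delay / planned_duration * 100

effort_var = variance_pct(550, 500)   # 10.0 % over estimated effort
schedule_var = variance_pct(33, 30)   # 10.0 % over estimated duration
slippage = schedule_slippage_pct(
    date(2024, 2, 6),    # actual end (hypothetical)
    date(2024, 1, 1),    # planned start
    date(2024, 1, 31),   # planned end
)                                     # 20.0 %
```

A positive variance means the project overran its estimate; a negative one means it came in under.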

7. Rework Effort Ratio:
{(Actual rework efforts spent in that phase / Total actual efforts spent in that phase)} * 100

8. Review Effort Ratio:
(Actual review effort spent in that phase / Total actual efforts spent in that phase) * 100

9. Requirements Stability Index:
{1 - (Total No. of changes /No of initial requirements)}

10. Requirements Creep:
(Total No. of requirements added / No of initial requirements) * 100
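Metrics 7 through 10 above are again simple ratios; a compact sketch, using invented effort and requirement counts:

```python
def rework_effort_ratio(rework_effort, total_effort):
    # % of a phase's total effort spent on rework.
    return rework_effort / total_effort * 100

def review_effort_ratio(review_effort, total_effort):
    # % of a phase's total effort spent on reviews.
    return review_effort / total_effort * 100

def requirements_stability_index(total_changes, initial_requirements):
    # 1.0 means no requirement churn at all.
    return 1 - total_changes / initial_requirements

def requirements_creep(requirements_added, initial_requirements):
    # % growth of the requirement set over the baseline.
    return requirements_added / initial_requirements * 100

rework = rework_effort_ratio(15, 150)        # 10.0 %
review = review_effort_ratio(30, 150)        # 20.0 %
rsi = requirements_stability_index(10, 100)  # 0.9
creep = requirements_creep(5, 100)           # 5.0 %
```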

11. Weighted Defect Density:
WDD = (5*Count of fatal defects)+(3*Count of Major defects)+(1*Count of minor defects)

Note: The weights 5, 3 and 1 correspond to the severities as mentioned below:
Fatal - 5
Major - 3
Minor - 1
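Weighted Defect Density is a weighted sum over the severity counts. A sketch using the weights from the note and hypothetical defect counts:

```python
# Weights as given in the note above: Fatal 5, Major 3, Minor 1.
SEVERITY_WEIGHTS = {"Fatal": 5, "Major": 3, "Minor": 1}

def weighted_defect_density(defect_counts):
    # Sum of (weight * count) over all severities present.
    return sum(SEVERITY_WEIGHTS[sev] * count
               for sev, count in defect_counts.items())

wdd = weighted_defect_density({"Fatal": 2, "Major": 4, "Minor": 10})
# 2*5 + 4*3 + 10*1 = 32
```

Because fatal defects are weighted five times as heavily as minor ones, two releases with the same raw defect count can have very different WDD values.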

12. Defect Removal Efficiency:
The Defect Removable Efficiency (DRE) is the percentage of defects that have been removed during an activity, computed with the equation below:

DRE = (Number of Defects Removed / Number of Defects at Start of Process) * 100

The DRE can also be computed for each software development activity and plotted on a bar graph to show the relative defect removal efficiency of each activity. Alternatively, the DRE may be computed for a specific task or technique (e.g. design inspection, code walkthrough, unit test, 6 months of operation, etc.).

We can also calculate DRE as:
DRE = A / (A+B)

where A = Defects raised by the testing team and B = Defects raised by the customer.
If DRE >= 0.8, the product is considered good; otherwise it is not.
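The A / (A + B) form and the rule-of-thumb threshold can be sketched as follows; the defect counts are hypothetical, and 0.8 is treated here as a guideline rather than a hard rule:

```python
def dre_ratio(testing_defects, customer_defects):
    # A = defects raised by the testing team, B = by the customer.
    return testing_defects / (testing_defects + customer_defects)

def assess(dre, threshold=0.8):
    # A DRE at or above the threshold suggests the product is good.
    return "good" if dre >= threshold else "needs improvement"

ratio = dre_ratio(90, 10)   # 0.9
verdict = assess(ratio)     # "good"
```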

Automation Metrics:

  1. % Automatable: This metric helps you understand whether teams are prioritizing automation and which areas might still require manual validation.

    % Automatable = (# of automatable tests / # of total tests) * 100

  2. % Automation Progress: This metric will assist you in trending how you are progressing toward your automation goal over time.

    % Automation Progress = (# of automated tests / # of tests that are automatable) * 100

  3. % Automation Pass Rate: This metric is used to obtain how many of the executed automated tests passed.

    % Pass Rate = (# of cases that passed / # of test cases executed) * 100

  4. Automation Execution Time: This metric is used to obtain how long it takes your automation suite to run from beginning to end.

    Execution Time = End Time – Start Time

  5. % Automation Test Coverage: This metric helps you understand how much coverage your automation suite provides vs. how much manual testing is being done.

    % Automation Test Coverage = (# of automation tests / # of total tests) * 100
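The automation metrics above can be computed together from a handful of suite counts. The counts and timestamps below are invented sample data:

```python
from datetime import datetime

# Hypothetical suite counts.
total_tests = 500
automatable_tests = 400
automated_tests = 300
executed_tests = 280
passed_tests = 252

pct_automatable = automatable_tests / total_tests * 100              # 80.0
pct_automation_progress = automated_tests / automatable_tests * 100  # 75.0
pct_pass_rate = passed_tests / executed_tests * 100                  # 90.0
pct_coverage = automated_tests / total_tests * 100                   # 60.0

# Execution time is simply end minus start.
start = datetime(2024, 1, 1, 9, 0, 0)
end = datetime(2024, 1, 1, 9, 45, 0)
execution_time = end - start   # 45 minutes
```

Note the different denominators: % Automatable and % Automation Test Coverage divide by the total test count, while % Automation Progress divides by only the automatable subset.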