Insights

When using Testspace during the development cycle, the data generated from testing is automatically collected, stored, and continuously analyzed. This mined data is used to generate Insights: indicators and metrics used to assess the quality of the software development process and to inform process decisions.

The following is an overview of the indicators:

  • Quality Debt - helps assess the readiness of a release
  • Results Strength - measures the stability of results and infrastructure
  • Workflow Efficiency - tracks how quickly regressions are being resolved
  • Test Effectiveness - measures whether tests are effectively capturing side effects

Insights are input into decision-making and require interpretation based on the Project’s specific process.

Project Insights

Descriptions of the numbered areas:

  1. Time Period selection and display
  2. Quality Debt Indicator with associated metrics
  3. Results Strength Indicator with associated metrics
  4. Workflow Efficiency Indicator with associated metrics
  5. Test Effectiveness Indicator with associated metrics
  6. Results vs. File Churn column chart
  7. Results counter and chart
  8. Regressions counter and chart

Some metrics related to Spaces are specific to the Project Type: they are labeled and referred to as Spaces for standalone Projects and as Branches for Projects connected to Git repositories.


Quality Debt

Use Quality Debt to assess release readiness

The Quality Debt Indicator provides an assessment of outstanding regressions and liabilities. The Indicator and associated metrics are calculated from at most the five most recent results for each active Space within the selected time period.

The Quality Debt Indicator is derived from the Failing Rate and Health Rate metrics as defined by the table below.

Quality Debt

Indicator   Failing Rate   Health Rate
(icon)      Undetermined   Undetermined
(icon)      <5%            100%
(icon)      <5%            >80%
(icon)      >5%            <80%
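
As a rough illustration of how the thresholds above combine, a minimal sketch in Python (the band names and the handling of boundary values are assumptions, since the indicator icons themselves do not translate to text):

    def quality_debt_indicator(failing_rate, health_rate):
        """Map Failing Rate and Health Rate (percentages) to a Quality Debt band.

        A sketch of the thresholds in the table above, with None standing in
        for an Undetermined metric. Band names are assumed, not official.
        """
        if failing_rate is None or health_rate is None:
            return "undetermined"
        if failing_rate < 5 and health_rate == 100:
            return "strong"
        if failing_rate < 5 and health_rate > 80:
            return "moderate"
        return "weak"   # >5% failing, or health at or below 80%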

The metrics associated with Quality Debt are described in the following table:

Insight        Description
Failing Rate   The average failure rate for the N most recent results.
Health Rate    The % of healthy results among the N most recent results.
Liability      The total number of Exemptions and unexpected failures. Applies
               only to the most recent valid result per Space.
File Churn     The % of file changes out of the total file churn.
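
For concreteness, the two rate metrics could be computed along these lines (a minimal sketch; the Result fields and the per-result averaging are assumptions, not Testspace's documented API):

    from dataclasses import dataclass

    @dataclass
    class Result:
        failed: int     # non-exempt test case failures in this result set
        total: int      # total test cases in this result set
        healthy: bool   # health status after Exemptions and metric criteria
        valid: bool     # False for an invalid result

    def failing_rate(recent):
        """Average failure rate over the N most recent valid results."""
        counted = [r for r in recent if r.valid and r.total]
        if not counted:
            return None  # Undetermined
        return 100 * sum(r.failed / r.total for r in counted) / len(counted)

    def health_rate(recent):
        """% of healthy results among the N most recent valid results."""
        counted = [r for r in recent if r.valid]
        if not counted:
            return None  # Undetermined
        return 100 * sum(r.healthy for r in counted) / len(counted)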

Results Strength

Use Results Strength to assess the stability of results and infrastructure

By tracking the average rate of health – the percentage of healthy results – along with result invalidity, the Results Strength Indicator provides insight into the collective strength of all active Spaces or Branches (depending on Project Type) for the selected time period. Results Strength is based on the status of all result sets, each of which exists in one of three states.

Test case failures can be Exempted from the determination of result health; refer to How-to: Manage Health Status for more information.

  • Healthy – 0 nonexempt test failures, with all metric criteria met
  • Unhealthy – 1 or more nonexempt test failures, or unmet metric criteria
  • Invalid – excluded from the calculation of Health Rate; see How-to: Manage Health Status for details about an invalid result
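
In code terms, the state of a single result set might be decided roughly as follows (a sketch only; the field names and the specific invalidity checks are assumptions based on the Invalid Rate definition below):

    def classify(result):
        """Classify one result set as 'invalid', 'healthy', or 'unhealthy'."""
        # Invalid results are excluded from Health Rate entirely, e.g. when
        # required metrics are missing or the suite/case count drops sharply.
        if result.metrics_missing or result.case_count_dropped:
            return "invalid"
        # Exempted failures do not count against health.
        nonexempt = result.failures - result.exempted_failures
        if nonexempt == 0 and result.metric_criteria_met:
            return "healthy"
        return "unhealthy"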

The Results Strength Indicator is derived from the Health Rate and Invalid Rate as defined by the following table.

Results Strength

Indicator   Health Rate    Invalid Rate
(icon)      Undetermined   >15%
(icon)      >85%           <15%
(icon)      65-85%         <15%
(icon)      <65%           <15%

The metrics associated with Results Strength are defined as follows:

Insight                        Description
Health Rate                    The % of healthy results. Invalid results are not counted.
Invalid Rate                   The % of invalid results, caused by missing metrics or a
                               significant drop in test suite/case count.
File Churn                     The number of file changes.
Spaces/Branches with Results   The total number of Spaces (or Branches, depending on
                               Project Type) with results for the selected period.
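
Putting the table and metrics together, the derivation could look like this (a minimal sketch reusing the classify helper from the earlier sketch; band names are assumptions):

    def results_strength(results):
        """Derive the Results Strength band for all result sets in the period."""
        if not results:
            return "undetermined"
        states = [classify(r) for r in results]          # see earlier sketch
        invalid_rate = 100 * states.count("invalid") / len(states)
        counted = [s for s in states if s != "invalid"]  # invalids excluded
        health = 100 * counted.count("healthy") / len(counted) if counted else 0
        if invalid_rate > 15:
            return "undetermined"   # too many invalid results to judge
        if health > 85:
            return "strong"
        if health >= 65:
            return "moderate"
        return "weak"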

Workflow Efficiency

Use Workflow Efficiency to track how quickly regressions are being resolved

Regressions are a normal part of any automated test workflow and should be proportional to change as measured by file churn. Unresolved regressions, however, create a cost/liability that increases with each new commit and result set. The Workflow Efficiency Indicator tracks the time it takes for unhealthy results to return to a healthy state.

The Workflow Efficiency Indicator is derived from the Health Recovery Time and Results Strength Insights as defined by the table below.

Workflow Efficiency

Indicator   Health Recovery Time
(icon)      Undetermined
(icon)      <4 days
(icon)      4-7 days
(icon)      >7 days

The metrics associated with Workflow Efficiency are described as follows:

Metric                    Description
Health Recovery Time      The average time for unhealthy results to turn healthy again.
Unhealthy Drops           The number of times healthy results dropped to unhealthy.
                          Invalid results are not counted.
Drops Recovery Rate       The % of unhealthy results that have turned healthy again.
Health Improvement Rate   The % change in the Health Rate, based on the running average
                          over the previous N weeks for the selected period.
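
The bookkeeping behind Health Recovery Time could look roughly like this (a sketch over a chronological stream of (timestamp, state) pairs; invalid results are skipped, matching the Unhealthy Drops definition):

    def health_recovery_times(results):
        """Yield the duration of each unhealthy stretch that recovered.

        results: chronological (timestamp, state) pairs, where state is
        'healthy', 'unhealthy', or 'invalid'. Health Recovery Time is the
        average of the yielded durations. A sketch only.
        """
        drop_time = None
        for timestamp, state in results:
            if state == "invalid":
                continue                      # invalid results are not counted
            if state == "unhealthy" and drop_time is None:
                drop_time = timestamp         # an unhealthy drop begins
            elif state == "healthy" and drop_time is not None:
                yield timestamp - drop_time   # the drop has recovered
                drop_time = None
        # A stretch still unhealthy at the period's end has not recovered and
        # would count against the Drops Recovery Rate instead.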

Test Effectiveness

Use Test Effectiveness to measure whether tests are effective at capturing side effects

Automated testing is premised on capturing commits that result in regressions. Tracking test regressions, especially for developer-focused changes, is one of the primary indicators of how effective CI-based testing is.

Rules of Regression (sketched in code after this list):

  • A regression occurs when a Space's most recent result set has one or more test case failures.
  • A unique regression occurs when one or more new test case failures are reported as compared with the previous result set.
  • A new test case failure occurs if it follows five or more non-failing statuses (providing a level of hysteresis).
  • A recurring regression occurs when all current test case failures are not new.
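
These rules translate fairly directly into code. A minimal sketch, assuming each test case carries a chronological status history with the most recent status last (the data shapes and the "fail" status value are illustrative, not Testspace's API):

    HYSTERESIS = 5  # non-failing statuses required before a failure counts as new

    def is_new_failure(history):
        """True if the most recent failure follows >= 5 non-failing statuses."""
        streak = 0
        for status in reversed(history[:-1]):  # statuses before the failure
            if status == "fail":
                break
            streak += 1
        return streak >= HYSTERESIS

    def classify_regression(failing_case_histories):
        """Classify a result set as a unique regression, recurring, or none.

        failing_case_histories: one status history per currently failing case.
        """
        if not failing_case_histories:
            return None          # no test case failures, so no regression
        if any(is_new_failure(h) for h in failing_case_histories):
            return "unique"      # at least one new test case failure
        return "recurring"       # all current failures are not new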

The Test Effectiveness Indicator is derived from the Effective Regression Rate – the percentage of result sets with unique regressions – and the Results Strength as defined in the table below.

Test Effectiveness

Indicator   Effective Regression Rate
(icon)      Regression Failing Rate is greater than 30%, or the rate is 0%
(icon)      >5%
(icon)      1-5%
(icon)      <1%

As with all indicators, Test Effectiveness should be viewed in the context of code churn. The Insights published under Test Effectiveness are described below:

Insight                            Description
Effective Regression               The % of results with new test case failures. Invalid
                                   results are not counted.
Unique Regressions                 The % of unique regressions versus recurring regressions.
Regression Failing Rate            The average % of test failures per regression.
Spaces/Branches with Regressions   The % of Spaces (or Branches, depending on Project Type)
                                   that regressed at least once.
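
Building on the regression sketch above, the headline rate could be computed as follows (illustrative data shapes; invalid results are filtered out per the Effective Regression definition):

    def effective_regression_rate(result_sets):
        """% of valid result sets classified as a unique regression.

        result_sets: (valid, failing_case_histories) pairs per result set,
        feeding classify_regression from the earlier sketch. A sketch only.
        """
        counted = [histories for valid, histories in result_sets if valid]
        if not counted:
            return None  # Undetermined
        unique = sum(classify_regression(h) == "unique" for h in counted)
        return 100 * unique / len(counted)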

Results vs. File Churn Chart

The timeline column chart compares collective results status – healthy, unhealthy and invalid – against file churn as a quantitative measurement of change. The counts are based on all result sets for all Spaces active during the selected time period. The chart provides a 12-month view of the Project with the selected time period highlighted.

 Results vs. File Churn Chart


Results Counter and Chart

The results counter reports the total number of result sets analyzed from all Spaces that were active during the selected time period.

The chart provides a proportional view of the three result states – healthy, unhealthy, and invalid – for all result sets analyzed.

The percentage of healthy results (Health Rate) and the rate of invalid results (Invalid Rate) are reflected in the Project's Results Strength Indicator.


Regressions Counter and Chart

The regressions counter reports the total number of regressions analyzed from all Spaces that were active during the selected time period.

The chart provides a proportional view of the two types of regressions – unique vs. recurring – for all result sets analyzed.

  • A unique regression occurs when there are one or more new test case failures compared with the previous result set.
  • A recurring regression occurs when all test case failures have recurred from the previous result set.

Regressions do not include failing metrics. The percentage of results with unique vs. recurring regressions is reflected in the Project's Test Effectiveness Indicator as defined above.


Space/Branch Insights

Space/Branch Insights (depending on Project Type) publish Quality Debt metrics (highlighted in gray) from at most the five most recent results, plus rates and metrics calculated for the selected time period.

The Quality Debt metrics help when assessing the readiness of the software associated with each Space/Branch. The meaning of readiness is specific to the Project – anything from "is a feature or bug fix ready to be merged?" to "is a product ready for customer release?"

The Test Effectiveness metrics of Effective Regression Rate and the Regression Failing Rate provide insights into the value of the testing for each Space, but must be viewed in the context of change activity as measured by the number of Results, Merged Pull Requests and File Churn.

Quality improvements can be tracked via Test Growth, Code Coverage Growth, and decreases in Static Analysis Issues.

 Space Insights

The metrics published for each active Space are defined below:

Metric                         Description
Failing Rate (recent)          The average failing rate for the N most recent results.
                               Invalid results are not counted.
Health Rate (recent)           The % of healthy results among the N most recent results.
                               Invalid results are not counted.
Liability (current)            The total number of Exemptions and unexpected failures.
                               Applies only to the most recent valid result per Space/Branch.
File Churn (recent)            The % of file changes out of the total file churn.
Effective Regression           The % of results with new test case failures. Invalid
                               results are not counted.
Regression Failing Rate        The average % of test failures per regression.
Results                        The total number of results, including invalid ones.
File Churn                     The total number of files changed.
Merged Pull Requests           The number of merged pull requests.
Test Growth                    The growth in new test cases.
Code Coverage (lines) Growth   The growth in line coverage.
Issues Decrease                The net change in static analysis issues removed or added.
