Where there is no standard there can be no Kaizen.

—Taiichi Ohno

Notice: Adult supervision is required.

—Sign by a swimming pool

Metrics

Abstract

Good news: Agile is inherently measurable, far more measurable than our prior software development practices, and there are a variety of measures teams can use to understand and improve progress and outcomes. However, before we head down this path, we pause to note the unique challenges associated with measuring the software process in general, the implications for individual and team performance, and the way in which measures change behavior, both intended and unintended. (That is why adult supervision is required.) But of one thing we can be absolutely certain:

The primary metric for agile is whether or not working software actually exists and is demonstrably suitable for its intended purpose. This is determined empirically, by demonstration, at the end of every single iteration and Potentially Shippable Increment (PSI).

We encourage teams, release trains, and program and portfolio managers to devote most of their measurement attention to this critical fact. All other metrics, even the plethora of agile metric sets outlined below, are subordinate to that objective and to the overriding goal of keeping the focus on rapid delivery of quality, working software.

Details

Any discussion of metrics must begin with an understanding of the intent of each measure and where in the process the opportunity for that measure most naturally occurs. The Big Picture contains eight locations where useful measures occur; each is described below (M1–M8). As is typical, we’ll start at the Team Level, because that’s where all the code starts too.

M1: Iteration Metrics

The end of each Iteration is an opportune time for each Agile Team to collect whatever metrics the team has agreed to. This occurs in the quantitative part of the team Retrospective. One team’s metrics are illustrated in Figure 1.

Figure 1. One team’s sprint metrics

M2: SAFe ScrumXP Team Self-Assessment

Agile teams continuously assess and improve their process, often via a structured, periodic self-assessment. This gives the team time to reflect on and discuss the key practices that drive their results. One such assessment is a simple, 25-point basic SAFe ScrumXP practices assessment, delivered as a spreadsheet.

When the team completes the spreadsheet, it will automatically produce a radar chart such as Figure 2, which highlights relative strengths and weaknesses.

Figure 2. One team’s 25-pt Scrum self-assessment results
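For readers who want to see the mechanics, here is a minimal sketch of how such an assessment might be rolled up into a radar chart. The five category names, the five-questions-per-category layout, and the 0–5 scale are assumptions for illustration; the actual spreadsheet may group and score questions differently.

# A minimal sketch: average a 25-question self-assessment into five
# category scores and plot them as a radar chart (category names and
# 0-5 scale are illustrative assumptions, not the official spreadsheet).
import numpy as np
import matplotlib.pyplot as plt

categories = ["Product Ownership", "Release Planning", "Iteration Planning",
              "Iteration Execution", "Retrospectives"]
# 25 answers (0-5), five per category, in category order.
answers = [4, 5, 3, 4, 4,  3, 4, 4, 2, 3,  5, 4, 4, 5, 4,
           2, 3, 3, 4, 2,  4, 4, 5, 3, 4]

scores = [float(np.mean(answers[i * 5:(i + 1) * 5])) for i in range(len(categories))]

# Close the polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories)
ax.set_ylim(0, 5)
plt.title("ScrumXP self-assessment")
plt.show()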

M3: Agile Release Train Self-Assessment

Because program execution is a core value of SAFe, the Agile Release Train also works continuously to improve its performance. The following self-assessment form can be used for this purpose, at PSI boundaries or any time the teams want to pause and assess their organization and practices. Trending this data over time is a key performance indicator for the program.

A sample appears in Figure 3 below.


Figure 3. ART self-assessment results

M4: Release Progress Reports

Given the criticality of each release train’s PSI/Release timebox, it is important to be able to assess status in real time. One such measure is the overall PSI burndown, as illustrated in Figure 4.


Figure 4. Release Train (Program Level) PSI burndown

The burndown tells you the overall status, but it doesn’t tell you what to do about it, nor does it tell you which specific customer Features might be impacted. For that, we need a feature progress report, as illustrated in Figure 5. This report tells you which features are on track or behind at any point in time. Together, these reports provide actionable data for scope and resource management.


Figure 5. Release/PSI feature progress report, highlighting the status of each feature, compared to plan
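As a rough illustration of how these two reports relate to the underlying data, the sketch below computes a PSI burndown from the points completed in each iteration and classifies each feature as on track or behind by comparing accepted points with what was planned to be complete by now. The Feature fields, point values, and feature names are hypothetical, not taken from the figures.

# A minimal sketch (illustrative data): PSI burndown and per-feature status
# from planned vs. completed story points.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    planned_points: int      # total points planned for this PSI
    completed_points: int    # points accepted so far
    planned_by_now: int      # points expected to be complete by the current iteration

def psi_burndown(total_scope: int, completed_per_iteration: list[int]) -> list[int]:
    """Remaining points after each iteration (the line plotted in Figure 4)."""
    remaining, points_left = [], total_scope
    for done in completed_per_iteration:
        points_left -= done
        remaining.append(points_left)
    return remaining

def feature_status(f: Feature) -> str:
    """Classify a feature as on track or behind relative to plan (Figure 5)."""
    return "on track" if f.completed_points >= f.planned_by_now else "behind"

features = [Feature("Single sign-on", 40, 25, 24),
            Feature("Audit reporting", 30, 10, 18)]
print(psi_burndown(100, [18, 22, 15]))          # -> [82, 60, 45]
for f in features:
    print(f.name, feature_status(f))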

M5: PSI Metrics

The end of each PSI is a natural and significant measuring point. After all, that’s where the primary value is delivered. Figure 6 illustrates an example.


Figure 6. One program’s PSI metrics

To assess the overall predictability of the release train, the individual teams’ percentages of business value achieved, compared to plan, are aggregated into an overall Release Predictability Measure, as illustrated in Figure 7.


Figure 7. An illustrative release train Release Predictability Measure, showing two of the teams on the train

For more on this approach, see [1], Chapter 15.
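The arithmetic behind this measure is simple, and the sketch below shows one plausible way to compute it: a team’s predictability is the percent of planned business value actually achieved, and the train-level measure divides total actual business value by total planned business value across teams. Whether the aggregation is value-weighted (as here) or a simple average of team percentages is an assumption, as are the team names and numbers.

# A minimal sketch (illustrative, not the book's exact method) of a
# Release Predictability Measure.
def team_predictability(planned_bv: float, actual_bv: float) -> float:
    """Percent of planned business value actually achieved by one team."""
    return 100.0 * actual_bv / planned_bv if planned_bv else 0.0

def release_predictability(teams: dict[str, tuple[float, float]]) -> float:
    """Train-level aggregate: total actual BV over total planned BV."""
    total_planned = sum(planned for planned, _ in teams.values())
    total_actual = sum(actual for _, actual in teams.values())
    return 100.0 * total_actual / total_planned if total_planned else 0.0

# Hypothetical (planned, actual) business value per team, as scored by Business Owners.
teams = {"Team Unicorn": (50, 43), "Team Dragon": (40, 38)}
for name, (planned, actual) in teams.items():
    print(f"{name}: {team_predictability(planned, actual):.0f}% of plan")
print(f"Release Predictability Measure: {release_predictability(teams):.0f}%")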

M6: Epic Success Criteria

Epics are key economic drivers for any Release Train. As part of the lightweight business case that is developed in the Architecture or Portfolio Kanban, each epic should have success criteria that can be used to help establish scope and drive more detailed feature elaboration. An example for an architecture epic appears in Figure 8.


Figure 8. An example of success criteria for an architectural epic


M7: Lean Portfolio KPI Metrics

If your program portfolio aggregates into a single business unit, product, or solution set, you may need a comprehensive set of Key Progress Indicators (KPIs) that you can use to assess internal and external progress. In the spirit of “the simplest set of measures that can possibly work” at this level, we offer the set in Figure 9.


Figure 9. One business unit’s “Lean KPIs”

M8: Enterprise Balanced Scorecard

If yours is a larger enterprise, and your company focuses on comprehensive measurement, then you might want to gather a set of “balanced scorecard” measures for each business unit and then map them into an executive dashboard, as illustrated in Figures 10 and 11.


Figure 10. A “balanced scorecard” approach, dividing measures into four areas of interest


Figure 11. Converting the above metrics to an alphabetic rating and aggregating them provides a look at the bigger picture for the enterprise

For more on this approach, see [2], Chapter 22, “Measuring Business Performance.”
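As a rough sketch of the conversion described in Figure 11, the code below maps each measure, expressed as percent of target, to a letter grade and then averages the grades into a single rating for the business unit. The thresholds, grade points, and the four measure names are hypothetical placeholders, not values from the scorecard itself.

# A minimal sketch (hypothetical thresholds and measures): convert scorecard
# measures to letter grades, then aggregate them into one rating (Figure 11).
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1}

def letter_grade(percent_of_target: float) -> str:
    if percent_of_target >= 90: return "A"
    if percent_of_target >= 75: return "B"
    if percent_of_target >= 60: return "C"
    return "D"

def aggregate(grades: list[str]) -> str:
    avg = sum(GRADE_POINTS[g] for g in grades) / len(grades)
    # Map the average grade-point back to the nearest letter.
    return min(GRADE_POINTS, key=lambda g: abs(GRADE_POINTS[g] - avg))

measures = {"Efficiency": 95, "Value delivery": 72, "Quality": 88, "Agility": 64}
grades = [letter_grade(v) for v in measures.values()]
print(dict(zip(measures, grades)), "->", aggregate(grades))   # -> ... 'B'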


Learn More

[1] Leffingwell, Dean. Agile Software Requirements: Lean Requirements Practices for Teams, Programs and the Enterprise. Addison-Wesley, 2011.

[2] Leffingwell, Dean. Scaling Software Agility: Best Practices for Large Enterprises. Addison-Wesley, 2007.

Last update: 25 January, 2013

The information on this page is © 2010-2014 Leffingwell, LLC. and Pearson Education, Inc. and is protected by US and International copyright laws. Neither images nor text can be copied from this site without the express written permission of the copyright holder. For permissions, please contact permissions@ScaledAgile.com.