ART Metrics

Where there is no standard there can be no Kaizen.

—Taiichi Ohno

Notice: Adult supervision is required.

Abstract

Good news: Due to timeboxes and better-understood work physics, Agile development is more readily measurable than our prior software development practices, and there are a variety of measures teams can use to understand and improve progress and outcomes. However, before we head down this path, we pause to note the unique challenges associated with measuring the software process in general, the implications for individual and team performance, and the way in which measures change behavior, in both intended and unintended ways. (That is why adult supervision is required here.) But of one thing we can be absolutely certain:

The primary metric for agile is whether or not working software actually exists, and is demonstrably suitable for its intended purpose. This is determined empirically, by demonstration, at the end of every single iteration and Program Increment.

We encourage teams, release trains, and program and portfolio managers to focus most of their measurement attention on this critical fact. All other metrics—even the extensive set of Agile metrics that we outline below—are subordinate to that objective and to the overriding goal of keeping the focus on rapid delivery of quality, working software.

Details

Any discussion of Agile Release Train metrics must begin with an understanding of the intent of the measure and where the opportunity for that measure occurs. Figure 1 illustrates eight locations on the Big Picture where useful ART measures occur, each of which is described below:

Figure 1 – Big Picture program level with measures highlighted


M1: Iteration Metrics

The end of each Iteration is an opportune time for each Agile Team to collect whatever metrics the team has agreed to track. This occurs in the quantitative part of the team Retrospective. One team's metrics set is illustrated in Figure 2.


Figure 2. One team’s iteration metrics
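
Figure 2's exact contents are team-specific, but the shape of such a data set is easy to sketch. The following Python snippet is illustrative only; the field names and values are assumptions, not a prescribed SAFe format:

```python
# A representative (hypothetical) record of per-iteration team metrics,
# of the kind a team might review in the quantitative retrospective.
from dataclasses import dataclass

@dataclass
class IterationMetrics:
    iteration: str
    points_committed: int      # story points accepted into the iteration plan
    points_accepted: int       # story points actually accepted by the Product Owner
    defects_open: int          # open defects at iteration end
    unit_test_coverage: float  # percent, from the CI build

    @property
    def say_do_ratio(self) -> float:
        """Delivered versus committed work; a simple predictability signal."""
        return self.points_accepted / self.points_committed

history = [
    IterationMetrics("I1", 40, 32, 5, 71.0),
    IterationMetrics("I2", 38, 36, 3, 74.5),
]
for m in history:
    print(f"{m.iteration}: say/do = {m.say_do_ratio:.0%}, defects = {m.defects_open}")
```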

M2: SAFe ScrumXP Team Self-Assessment

Agile teams continuously assess and improve their process, often via a structured, periodic self-assessment. This gives the team time to reflect on and discuss the key practices that help yield results. One such assessment is a simple, 25-point SAFe ScrumXP practices assessment, as appears in the accompanying spreadsheet.

When the team completes the spreadsheet, it will automatically produce a radar chart such as Figure 3, which highlights relative strengths and weaknesses.

 


Figure 3. SAFe Team Agility Assessment radar chart
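
To make the mechanics concrete, here is a minimal Python sketch of how category averages from such an assessment can be rendered as a radar chart. The grouping of the 25 questions into five practice areas, and the category names, are assumptions for illustration; the authoritative groupings live in the spreadsheet itself:

```python
# Minimal radar-chart sketch: average a 5x5 self-assessment (scores 0-5)
# into per-category scores and plot them on a polar axis.
import numpy as np
import matplotlib.pyplot as plt

# Illustrative category names, not the official spreadsheet's.
categories = ["Product Ownership", "Planning", "Execution", "Quality", "Retrospection"]
# One row of five question scores (0-5) per category.
scores = np.array([
    [4, 3, 4, 5, 3],
    [3, 3, 4, 4, 2],
    [5, 4, 4, 3, 4],
    [2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4],
])
avg = scores.mean(axis=1)

# Close the polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False)
angles = np.concatenate([angles, angles[:1]])
values = np.concatenate([avg, avg[:1]])

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories)
ax.set_ylim(0, 5)
plt.title("Team self-assessment (illustrative)")
plt.show()
```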

M3: Agile Release Train Self-Assessment

As program execution is a core value of SAFe, the Agile Release Train also continuously works to improve its performance. The following self-assessment form can be used for this purpose, at Program Increment boundaries or any time the train wants to pause and assess its organization and practices. Trending this data over time is a key performance indicator for the program.

A sample appears in Figure 4 below.


Figure 4. ART Assessment radar chart

M4: Release Progress Reports

Given the critical nature of each release train's PI timebox, it is important to be able to assess status in real time. One such measure is the overall PI burn-down, as illustrated in Figure 5.

Figure 5. Release Train (Program Level) PI burndown
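
The arithmetic behind the burn-down is simple: at each iteration boundary, compare the actual remaining story points against a straight-line burn from the planned PI total down to zero. A minimal sketch, with illustrative numbers:

```python
# Minimal PI burn-down sketch: remaining story points at each iteration
# boundary versus an ideal straight-line burn to zero. Numbers are illustrative.
planned_total = 300               # points planned for the PI
iterations_in_pi = 5
remaining = [300, 255, 220, 170]  # actual remaining after each boundary so far

for i, actual in enumerate(remaining):
    ideal = planned_total * (1 - i / iterations_in_pi)
    delta = actual - ideal        # positive = behind plan
    print(f"After iteration {i}: remaining={actual}, ideal={ideal:.0f}, delta={delta:+.0f}")
```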

The burndown tells you the overall status, but it doesn’t tell you which specific customer Features might be impacted. For that, we need a feature progress report, such as that illustrated in Figure 6. This report tells you which features are on track or behind, at any point in time. Together, these reports provide actionable data for scope and resource management.

Figure 6. Feature progress report, highlighting the status of each feature compared to the PI plan
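
The feature report rests on an equally simple comparison: for each feature, work accepted to date versus what the PI plan called for by this point. A sketch follows; the data layout and the "on track" rule are illustrative assumptions:

```python
# Feature progress sketch: per-feature percent complete versus the PI plan.
features = [
    # (name, points planned for the PI, points planned by now, points accepted)
    ("Single sign-on", 40, 25, 24),
    ("Audit logging",  30, 20, 11),
    ("Export to CSV",  20, 10, 10),
]

for name, total, planned_now, done in features:
    status = "on track" if done >= planned_now else "behind"
    print(f"{name:16s} {done}/{total} pts ({done / total:.0%}), "
          f"plan-to-date {planned_now}: {status}")
```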

M5: PI Metrics

The end of each PI is a natural and significant measuring point.  Figure 7 illustrates an example set of metrics for a program.


Figure 7. One program’s PI metrics

To assess the overall predictability of the release train, the individual teams' percentages of business value achieved compared to plan are aggregated into an overall Release Predictability Measure, as illustrated in Figure 8.

Figure 8. An illustrative release train Release Predictability Measure, showing two of the teams on the train
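
Numerically, the measure is the ratio of business value achieved to business value planned, computed per team and aggregated across the train. A worked sketch with hypothetical team names and scores (see [1] for the full method):

```python
# Release Predictability Measure sketch: business value achieved versus
# planned, per team and aggregated across the train. Data is illustrative.
teams = {"Unicorns": (48, 41), "Dragons": (52, 50)}  # (planned BV, actual BV)

total_planned = sum(p for p, _ in teams.values())
total_actual = sum(a for _, a in teams.values())

for name, (planned, actual) in teams.items():
    print(f"{name}: {actual}/{planned} = {actual / planned:.0%}")
print(f"Train predictability: {total_actual}/{total_planned} = "
      f"{total_actual / total_planned:.0%}")
```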

For more on this approach, see [1], Chapter 15.

Learn More


 

[1] Leffingwell, Dean. Agile Software Requirements: Lean Requirements Practices for Teams, Programs and the Enterprise. Addison-Wesley, 2011.

[2] Leffingwell, Dean. Scaling Software Agility: Best Practices for Large Enterprises. Addison-Wesley, 2007.

 
