Assuring Architectural Integrity via Capacity Allocation

(Source note: extracted from a blog post by Dean Leffingwell)

I’ve been pretty busy with some international work (mixed with serious vacation, I must admit), as well as work on the next version of the Scaled Agile Framework, so I haven’t put up much brand-new content lately. But in my latest classes, I’ve been elaborating on one of those concepts I always use in my Agile programs, somewhat naturally, only to realize just now that I’ve never actually described it in writing. (No wonder I keep getting those puzzled looks…)

So in this post, I want to discuss a simple mechanism I use to help agile program teams resolve a big debate: how much architectural work (if any!) can be planned for over the course of time.

The Problem

More specifically, let me frame this debate as one of those fun “them vs. us” discussions that we all see so often early in enterprise rollouts. It goes like this:

The Dilemma: How Much Rearchitecting Can we Afford?

As we see from the diagram, in the context of the Scaled Agile Framework Big Picture, the challenge arises immediately, typically just prior to the first PSI/Release Planning event. There, the teams must establish an agreed-to, publicly declared set of objectives, based on an agreed-to, visible backlog.

Of course this problem is not new to Agile, but for some reason (maybe the zealotry around “design emerges,” coupled with PSI-cadence-forced decision making; see Chapter 21 of Agile Software Requirements, “Agile Architecture”), it sure seems to hit hard really early on.

Fortunately, with enterprise Agility, we have some new rules that we can apply to help resolve the discussion. Three of them are:

  • Rule 1: There is only one (program-level) backlog; nothing can be hidden. (The decision is explicit and forced into the light of day.)
  • Rule 2: The Product Manager (or equivalent content authority) owns the backlog and the new feature requests; the Architect (or equivalent design authority) owns the design. (We know who has to agree on what in order to get both done.)
  • Rule 3: All backlog items are estimated, so we have some sense of the effort required to get individual things done. (Prioritization decisions, both feature and architecture, will be driven by economics.)

Even then, the discussions are interesting. After all, how do you compare the business value and relative priorities of such unlike things? For example:

  • Product Manager: “We need to implement this new type of security for customer trading.”
  • System Architect: “We need to update the entire back office to 64-bit servers.”

In other words, we are forced to compare and contrast totally unlike things. So we shouldn’t be surprised that it can be very hard to gain agreement with such a model.

Capacity Allocation to the Rescue

Fortunately, we have other constructs in our lean thinking tool kit, including Capacity Allocation. As David Anderson notes in Kanban: “Once we establish WIP limits for the flow through the system, we can consider capacity allocation by work item type or class of service…capacity allocation allows us to guarantee service for each type of work received by the kanban system.”
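To make the quoted idea concrete, here is a minimal sketch (my own construction, not from Anderson’s book) of capacity allocation by work item type in a kanban system: each type gets its own WIP limit, and new work is admitted only while its type’s limit has room. The limits and item names are invented for illustration.

```python
# Per-type WIP limits: the kanban system reserves capacity for each
# class of work, so architecture items can't be crowded out entirely.
WIP_LIMITS = {"feature": 6, "architecture": 2}

# Work currently in progress, tracked per work item type.
in_progress = {"feature": [], "architecture": []}

def try_pull(item, work_type):
    """Admit an item only if its type's WIP limit still has room."""
    if len(in_progress[work_type]) < WIP_LIMITS[work_type]:
        in_progress[work_type].append(item)
        return True
    return False

# With an architecture limit of 2, a third architecture pull is refused,
# while feature capacity remains available.
assert try_pull("64-bit upgrade", "architecture")
assert try_pull("refactor persistence layer", "architecture")
assert not try_pull("new message bus", "architecture")  # limit reached
assert try_pull("customer trading security", "feature")
```

The point is only that the allocation is enforced by the system itself, rather than renegotiated item by item.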

In our case, this work consists of new features and architectural expansion. Moreover, since we assuredly have lots of demand for new features, and lots of demand to evolve the system to better support both current features (technical debt) and new ones (rearchitecting), we need plenty of both. That would be fine if it weren’t for the fact that the backlog can consume only 100% of our capacity, so significant tradeoffs have to be made.

Given this new paradigm, I’ve found that the decision as to how much of each we can afford is far easier via Rule 4:

  • Rule 4: We don’t have to be right forever; we only have to decide for the next PSI. After that, we can revisit the limits based upon the then-current business context.

While that still requires a decision, I’ve found that the presence of these rules, along with the hyper-transparency of both sets of needs, drives teams to good decisions, made far more easily. And since we know we MUST have new user-value features to sell (or to demo) at each PSI boundary, the decision really comes down to how much architectural refactoring work we can afford in the next PSI.

In practice, I’ve seen this allocation be as little as 15% or so. However, I’ve also seen it run as high as 60% for multiple PSIs. In any case, the decision is up to you; no one else can make it for you.

Finally, Things Get Easier

And finally, once that allocation is decided, we no longer have to compare unlike things:

  • The Product Manager has the full authority to define the features and priorities in the new feature allocation portion of the backlog.
  • The Architect has the same authority for the rearchitecting portion of the backlog.

And importantly, as illustrated in the picture below, both can use Lean ROI (see Weighted Shortest Job First) to make the decisions within their allocation based on business economics.
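Within each allocation, WSJF is typically computed as cost of delay divided by job size (or duration), with higher scores scheduled first. A hedged sketch, with invented item names and numbers loosely echoing the earlier Product Manager/Architect examples:

```python
def wsjf(cost_of_delay, job_size):
    """Weighted Shortest Job First score: higher means do it sooner."""
    return cost_of_delay / job_size

# (item, estimated cost of delay, job size in points) -- illustrative only
backlog = [
    ("customer trading security", 20, 8),
    ("64-bit back-office upgrade", 13, 13),
    ("reporting dashboard tweak", 8, 2),
]

# Rank by WSJF, highest first: small, high-cost-of-delay jobs win.
ranked = sorted(backlog, key=lambda item: wsjf(item[1], item[2]), reverse=True)
for name, cod, size in ranked:
    print(f"{name}: WSJF = {wsjf(cod, size):.2f}")
```

Note how the cheap dashboard tweak outranks the big upgrade even though its cost of delay is lower; that is the “shortest job first” weighting doing its work within a single allocation.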

Driving Solution Integrity with Capacity Allocation

In this way, product strategy and technology decisions are purposeful; everything is visible and based on lean economics. And finally, remember:

You don’t have to be right forever, you only have to agree on what commitments you’ll be making for the next 10 weeks or so!


© 2010-2012 Leffingwell, LLC.