It’s (Almost) Never Simple

The Stacey matrix can be a bit misleading because real-world problems aren’t evenly distributed across it. They cluster.

Jason Godesky
Better Programming

--

If you’ve ever been to a Scrum training, you’ve likely been introduced to the Stacey matrix — or at least, something you were told was a Stacey matrix. When he originally came up with this chart, Ralph Douglas Stacey was interested in how managers make decisions, so his chart showed how much the managers agreed on a given decision on the y-axis, and how certain they were about the decision on the x-axis. The version that’s been promulgated by Scrum trainers (and has generally become much better known) drifts from that original meaning a bit, and gives us a somewhat different chart:

[Image: A blank chart. The x-axis is labeled “Approach,” with “Known” on the left and “Unknown” on the right. The y-axis is labeled “Requirements,” with “Known” at the bottom, and “Unknown” at the top.]

In this version, instead of how certain the managers are about a decision, we track how well we know all of the details needed to fully implement and execute our solution. Instead of how much the managers agree on the decision, we have our requirements. Do we know exactly what we want our solution to do, in detail?

[Image: The same chart as above, with a region in the lower-left corner (where requirements and approach are both known) labeled as “Simple.”]

If both of these things are well-known, then we have a simple problem. We know what we need to do and how to do it, so all that’s left is to do it. This is where “Big Design Up-Front” (BDUF) really shines. We can write up an exhaustive requirements document because our requirements are all well known, and we can make a straightforward plan with reliable predictions because we know exactly how we’re going to implement every detail of it. We can set reliable schedules, follow prescribed checklists, and produce the exact end-product that we envisioned.

Simple problems aren’t always small problems. Building a battleship is usually in this category; we have a specific class of battleship that we’re making, so we have extremely precise specifications for all the parts and pieces. We’ve made battleships of the same class before, so we know exactly how it’s done. That doesn’t make the task small or easy, but that’s OK. Because the requirements and approach are both well known, it doesn’t matter that it’s going to take months to complete the work. The requirements aren’t going to change. The U.S. Navy isn’t going to suddenly change its mind on what an Iowa-class battleship is. If there’s a chance that the requirements might change, then they aren’t actually known; we’d be higher up the y-axis, and no longer in the “simple” region.

[Image: The same chart as above, but with another region highlighted — this one stretches from moderate to “Unknown” for “Requirements,” but stays within the “Known” region for “Approach.” It is labeled “Complicated (Political).”]

If we stick with a problem where we’re confident that we know all about the approach, but add a little less certainty about our requirements, we find another set of problems. These are complicated problems. In the original matrix, where this was about whether or not managers agreed on the decision, Ralph Stacey called these politically complicated problems — the complication lay in getting all of the decision-makers to agree. When we drift this to a broader concept of “requirements,” the reasons can similarly be a little broader. Politics can certainly be one reason why our requirements are less well known than we might like: if the decision-makers aren’t aligned, we may be called upon to start implementing a solution before the decision-makers have made all of the decisions about what they’d like the solution to do. This is a difficult and frustrating situation for the team responsible for implementing the solution, to say the least.

More often, requirements are uncertain because decision-makers may change their minds. Sometimes this is just capriciousness on the part of business leaders, but it can also be the fickleness of the market, and the fact that by the time you’ve gotten half-way through creating a solution, users’ attitudes and expectations have shifted.

[Image: The same chart as above, but with another region highlighted — this one stretches from moderate to “Unknown” for “Approach,” but stays within the “Known” region for “Requirements.” It is labeled “Complicated (Technical).”]

If we return to that space where requirements are well known, though, there’s another kind of complicated problem to consider: the technically complicated problem. Here the requirements are known, but we’re not entirely sure how we’re going to implement them. This is the sort of problem that probably springs to mind most readily when we say that something is complicated. We know what it is that we want to do; we’re just not sure yet how we’re going to do it.

[Image: The same chart as above, but with another region highlighted — this one is in the upper right corner, where “Requirements” and “Approach” are both “Unknown.” It is labeled “Chaos.”]

If we don’t know what we want to do or how we’re going to do it, we have chaos. There’s not a lot we can do with chaos, except get out of it. Do some research, ask some questions, and figure out at least a little bit about our requirements or our approach (or, preferably, both).

[Image: The same chart as above, but with the area around the other regions (composing most of the space in the chart) labeled as “Complex.”]

That leaves everything else in the chart: the domain of complex problems. With these problems, we’re dealing with at least some open questions about requirements and at least some open questions about our approach.
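The taxonomy above can be summarized as a small lookup. This is a hypothetical sketch, treating “known” on each axis as a simple boolean; on the real chart both axes are continuums, and “complex” fills the broad space between the labeled corners:

```python
def stacey_region(requirements_known: bool, approach_known: bool) -> str:
    """Classify a problem by the (simplified) Stacey matrix regions.

    Illustrative only: each axis is reduced to a boolean, so this can
    only name the four corners. "Complex" is everything in between,
    where requirements and approach are each only partly known.
    """
    if requirements_known and approach_known:
        return "simple"
    if requirements_known:  # approach unknown
        return "complicated (technical)"
    if approach_known:      # requirements unknown
        return "complicated (political)"
    return "chaos"          # neither is known
```

The booleans are the giveaway: if you find yourself unable to answer “known or unknown?” with a clean yes or no, you’re in the complex region that the corners leave out.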

On the y-axis, outside of a few particularly well-defined and well-regulated spaces, the requirements are never actually known. We’re building something for people. It takes time to build things, and people are constantly changing, so the longer it takes us to build something, the more likely it becomes that even if we had all of our requirements exactly right when we began, they won’t be our users’ requirements by the time we finish. Throw in the fact that no human being has ever had perfect knowledge of anything, and the likelihood that we get the requirements exactly right for exactly what our users will need in the future (which even our users themselves can’t tell us, because human beings are terrible at predicting what they’ll want) is about as close to zero as you can get. Then throw in the near inevitability that stakeholders will change their minds at some point, and the bottom half of the chart is left almost completely void of real-world examples.

The x-axis may be more evenly distributed in general, unless we’re talking about a solution that involves any amount of computer programming (which we usually are). This may be a property unique to software, but we really don’t spend time on things where the approach is known.

Pose this challenge to any programmers you know or work with: let’s say you spend a full, eight-hour day working on a piece of code. The next day, something terrible has happened, and for some reason, all of your work is lost. How long will it take you to restore your lost work? The answer is almost invariably far short of eight hours. The last time it happened to me, it took me about half an hour. Typing is not the bottleneck in software development: thinking is.

If we know what the solution is, then implementing it is trivial. Developers use package managers precisely so that they can easily import others’ solutions to well-known problems. Nearly all of the time spent in software development is devoted to figuring out the unique parts of the problem. They might not be spectacularly novel; it might be as simple as figuring out the details of how saving data works with these specific fields and this specific database. But if someone else had already figured out this exact problem before, we wouldn’t be spending time on it: we’d have imported, downloaded, or copied that solution and moved on to something else already. Which means that, with software development at least, we spend negligible time on anything on the left half of this chart. Nearly all of our time is spent on the right half.

Take these two facts together and you can see that, in a practical sense, complex problems are really the only problems.

Earlier, I said that “Big Design Up-Front” (BDUF) is a great solution for simple problems — but it’s pretty rare for any of us to face simple problems in the real world. Complicated and complex problems both deal with some amount of uncertainty. Solving problems like these is a bit like navigating a maze. If we knew everything, we could use our complete, top-down map of the maze to chart a course. Then we could just follow that course and find our way through the maze. That’s what BDUF is. The problem is that we don’t know everything. We don’t have a complete, top-down map of the maze, and if we think that we do, it’s almost certainly wrong. If we try to chart a course and follow it, we’re going to end up hopelessly lost in the space between whatever “map” we were following and the reality that we find ourselves in.

There are strategies to find your way through a maze (e.g., always keep your right hand on the wall), but they all come down to one way or another of exploring the space. Since we can’t know what the optimum path through will be, we need to accept that this work is about coming to a better understanding of the problem space just as much as it is about coming up with a solution.
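The right-hand rule is worth seeing concretely, because it captures the whole point: the solver never sees the map, only its immediate surroundings. This is a minimal sketch; the maze, start, and goal are invented for illustration:

```python
# "Right hand on the wall": at each step, try to turn right first,
# then go straight, then left, then back. No global map is consulted;
# every decision uses only the cells adjacent to the current position.

MAZE = [
    "#####",
    "#S..#",
    "###.#",
    "#G..#",
    "#####",
]

# Headings in clockwise order: north, east, south, west.
HEADINGS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def solve(maze):
    cells = {(r, c): ch for r, row in enumerate(maze) for c, ch in enumerate(row)}
    pos = next(p for p, ch in cells.items() if ch == "S")
    goal = next(p for p, ch in cells.items() if ch == "G")
    heading = 1  # start facing east
    path = [pos]
    for _ in range(10_000):  # safety bound rather than trusting termination
        if pos == goal:
            return path
        # Prefer a right turn, then straight, then left, then reversing.
        for turn in (1, 0, -1, 2):
            d = (heading + turn) % 4
            dr, dc = HEADINGS[d]
            nxt = (pos[0] + dr, pos[1] + dc)
            if cells.get(nxt, "#") != "#":
                heading, pos = d, nxt
                path.append(pos)
                break
    return None  # gave up: wall-following only works in simply connected mazes

path = solve(MAZE)
```

Note what the algorithm never needs: the shape of the maze, the location of the goal, or any plan beyond the next step. It trades the impossible (a correct map up front) for the possible (a local rule plus persistence) — which is exactly the trade the rest of this article argues for.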

In short, solving complex problems is much more like research and development than it is like building a wall, a car, or a battleship.

This is the point at which Scrum trainers will claim that, where old waterfall techniques will work well for simple problems, Scrum is better suited for complex and complicated problems. They’re not wrong, but it’s not just Scrum. Navigating an unknown problem space requires agility. Scrum can be agile (that is, when you actually follow it, and don’t turn it into a cargo cult), but it’s not the be-all and end-all of agility. What’s important is that core cycle that Dave Thomas talked about when he told us that “Agile is Dead” eight years ago:

  1. Figure out where you are.
  2. Take a small step towards your goal.
  3. Adjust your understanding based on what you learned.
  4. Go to step 1.
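That cycle is just a feedback loop, and it can be sketched generically. The function and parameter names here are illustrative placeholders, not anything from Thomas’s talk:

```python
def agile_loop(state, goal_reached, next_small_step, learn):
    """Generic sketch of the cycle above: observe, step, adjust, repeat.

    All four arguments are hypothetical callables supplied by the team:
    `goal_reached` checks step 1's answer against the goal,
    `next_small_step` picks step 2, and `learn` performs step 3.
    """
    while not goal_reached(state):   # 1. figure out where you are
        step = next_small_step(state)  # 2. choose a small step towards the goal
        outcome = step()               #    ...and take it
        state = learn(state, outcome)  # 3. adjust understanding from the result
    return state                       # 4. loop until the goal is reached
```

The important property is that planning horizon: the loop commits to one small step at a time, and everything learned from that step feeds into choosing the next one.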

Where this usually goes wrong is when different parts of the business misunderstand what kind of problem they’re dealing with. When managers and decision-makers mistake complex problems for simple ones, expecting the solutions to look like the ones they’re used to and the work to be more like building a house than R&D, they plan projects in a way that is almost guaranteed to fail and set up processes that hinder rather than help.

The first step, then, is to understand what kind of problem you’re facing and what that means. As a practical matter, it’s almost always a complex problem, which calls for an agile, exploratory approach. For problems like that, a lot of the tools and strategies that business people and project managers are used to range from useless to actively harmful. You need iterative, empirical processes to solve problems like these. They can’t predict the future, because no one knows what the future will be — that’s very much the crux of the problem you’re trying to solve. Instead, they focus on telling you exactly where you are right now, and what’s the best next step that you can take towards your goal. In a complex or complicated problem space, that’s the only thing that’s actually knowable.

--

I’m a product designer with full-stack development experience from Pittsburgh, Pennsylvania.