Every deliverable in a consulting engagement should pass two tests before it gets built: what decision does it enable, and who needs to see it to make that decision? If a deliverable does not enable a specific decision, it should not exist. If the audience is not defined, the format cannot be right. Most consulting artifacts that fail to land in the room fail because one or both of these questions were never asked.
Decision-driven filtering prevents artifact accumulation
Consulting engagements produce artifacts at volume: charters, roadmaps, risk registers, RACI matrices, governance models, operating model documents, and readout decks. In a three-month engagement, that volume can become its own problem. Many of these artifacts exist because the methodology calls for them, not because a specific decision requires them. A RACI matrix gets built because it’s part of the standard toolkit; a risk register gets built because governance expects one. Each is defensible on its own terms, but the real question is whether it enables a decision someone in the program needs to make right now. If it does not, it’s documentation, and documentation serves a different purpose than decision-making.

Applying the two questions as a filter means some artifacts don’t get built. A process map for a workstream where no process decision is pending doesn’t get built; a current-state assessment for a function the program doesn’t touch doesn’t get built. The filter overrides methodology defaults with the engagement’s actual needs: does this specific program, with this specific leadership team, facing these specific decisions, need this artifact right now? The complete deliverable map shows every artifact the methodology can produce, but not every engagement needs all of them.
Effective deliverables give decision-makers what they need to act
A deliverable enables a decision when its absence would leave the decision-maker without the information they need to commit to a course of action. In practice, this connection between artifact and decision shows up clearly across the most common consulting outputs:
- The risk register enables decisions about what to mitigate. When the steering committee sees a high-likelihood, high-impact risk with no mitigation plan, they can decide to allocate resources or accept the risk. Without the register, the risk exists but is invisible to the people who have authority to act on it.
- The roadmap enables sequencing decisions. When workstream leads see that two critical-path activities are scheduled for the same window and depend on the same shared resource, they can decide which one goes first.
- The operating model enables governance decisions. When leadership sees a proposed cadence of reviews, escalation paths, and decision rights, they can decide whether the governance structure matches how they want to run the program after the engagement ends.
In each case, the artifact exists because a specific person needs to make a specific decision, and the artifact provides what they need. A good workstream charter passes this test because it enables scope and ownership decisions; the principle of content before slides ensures that the decision-enabling substance comes first.
Defining the audience shapes format, detail, and emphasis
The audience question is a design question, not a distribution question. A risk register designed for the steering committee looks different from one designed for workstream leads. The steering committee needs program-level risks, their potential impact on timeline and budget, and the decisions requiring executive authority; workstream leads need risks within their scope, the dependencies that create exposure for their team, and the mitigations they own. The same data produces different artifacts because the audiences need different things to make different decisions.

When the audience is not defined before the artifact is built, the default is to build for comprehensiveness. The deliverable tries to serve everyone and ends up optimized for no one. A risk register that includes both program-level and workstream-level risks in a single view forces the steering committee to sort through operational detail they don’t need, while workstream leads have to search for their items in a broader document. The result is an artifact that’s thorough but not useful.

The discipline of critique before you create helps our teams test whether a deliverable’s format matches its audience before the artifact is finalized. In a twelve-week engagement, every hour spent on an artifact that does not enable a decision is an hour not spent on one that does. The practical habit is simple: before opening a template, write down the decision and the audience. If you can’t name both, the deliverable is not ready to build.