After years of running cross-functional planning engagements in retail and life sciences, we’ve seen enough program failures to identify the pattern underneath them. The specific circumstances vary (different industries, different organizational structures), but the root causes do not.
Dependencies Surface During Execution, Not Planning
The commercial team commits to a field force launch date without knowing Medical Affairs has a PDUFA advisory committee that week. The first payer submission requires a market access analysis that depends on a label the regulatory team hasn’t received. These are not unforeseeable problems. The information existed in the organization before planning started. The planning process didn’t surface it because it didn’t create the conditions for cross-functional visibility. Most planning processes are organized by function: each workstream plans its own scope and timeline. The integration points between workstreams are acknowledged in theory but not mapped in detail; during execution, those integration points are where the program breaks.
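The conflicts described above are detectable in principle before planning ends, because the information already exists in each function. As a minimal sketch (hypothetical data and field names, not the methodology's actual tooling), compiling each function's commitments into one structure and scanning for cross-functional overlaps makes the collisions visible:

```python
from datetime import date

# Hypothetical cross-functional calendar: each function lists its
# commitments as (start, end) date ranges.
calendar = {
    "Commercial": {"field force launch": (date(2025, 6, 9), date(2025, 6, 13))},
    "Medical Affairs": {"PDUFA advisory committee": (date(2025, 6, 10), date(2025, 6, 12))},
    "Regulatory": {"label review": (date(2025, 6, 2), date(2025, 6, 20))},
}

def overlaps(a, b):
    """True if two (start, end) date ranges intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def find_conflicts(cal):
    """Return every cross-functional pair of overlapping commitments."""
    flat = [(fn, name, window)
            for fn, items in cal.items()
            for name, window in items.items()]
    return [(f1, n1, f2, n2)
            for i, (f1, n1, w1) in enumerate(flat)
            for f2, n2, w2 in flat[i + 1:]
            if f1 != f2 and overlaps(w1, w2)]

for f1, n1, f2, n2 in find_conflicts(calendar):
    print(f"Conflict: {f1} '{n1}' overlaps {f2} '{n2}'")
```

The point is not the code but the data structure: a single shared view of commitments, rather than one plan per function, is what lets the conflict surface during planning instead of execution.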
Ambiguity About Who Decides What
A cross-functional program generates decisions that don’t fit within any single function’s authority. When the roadmap requires a tradeoff between the technology timeline and the operations readiness window, who decides? When two workstream leads disagree about a shared milestone date, where does the decision get made? In programs without clear decision rights, these questions stall. The decision either doesn’t get made, or it gets made in the wrong forum by someone without full context. Steering committees designed for executive oversight become the default escalation point for operational decisions. The symptom: recurring status meetings where the same issues appear week after week without resolution.
Alignment Decays Without a Structured Cadence
A cross-functional program produces alignment at the moment the plan is completed: everyone has seen the roadmap and understands the dependencies. That alignment decays as new information arrives and workstreams make adjustments that affect each other. Without a structured cadence to resurface the cross-functional view, each team drifts back into its functional silo. An operating rhythm prevents this drift through:
- Integration reviews where dependency status is tracked and escalation paths are clear
- Quarterly reviews where the roadmap is updated against reality
Programs without this rhythm rely on the program lead’s individual effort to keep alignment alive.
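As an illustration of the two-tier rhythm above (a hypothetical scheme with assumed intervals, not the methodology's actual calendar), the cadence can be laid out programmatically:

```python
from datetime import date, timedelta

def cadence(start, weeks):
    """Generate a two-tier rhythm: an integration review every week
    (assumed interval) and a roadmap review every 13th week (one quarter)."""
    schedule = []
    for w in range(weeks):
        day = start + timedelta(weeks=w)
        schedule.append((day, "integration review"))
        if w and w % 13 == 0:
            schedule.append((day, "quarterly roadmap review"))
    return schedule

plan = cadence(date(2025, 1, 6), 27)
```

The value of generating the schedule up front, rather than booking meetings ad hoc, is that the rhythm exists independently of the program lead's individual effort.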
How the Nine Steps Address These Failure Modes
The nine-step methodology is designed around these failure modes; tracing how the steps connect shows which root cause each step addresses.

Steps 1 through 4 (intake, stakeholder mapping, architecture, and the pre-mortem) surface dependencies before they become surprises. The architecture maps workstream boundaries and shared resources. The pre-mortem forces the team to name failure modes, many of which are dependency conflicts. The constraints calendar compiles operational blackouts into a single cross-functional view. By the time the team reaches roadmapping, the dependencies are on the table.

Steps 3 and 6 (architecture and operating model design) establish decision rights. The governance model defines who decides and at what level; the operating model installs the escalation framework that routes decisions to the right forum at the right speed.

Step 6 also installs the operating rhythm. The meeting cadence and integration review format are designed and practiced during the engagement, so the team rehearses the governance rhythm while the consulting team is still in the room and the rhythm survives the handoff.

These three failure modes reinforce each other: unmapped dependencies create decisions that unclear decision rights can’t resolve, and without an operating rhythm, the resulting misalignment compounds over time. Programs that fail with good plans almost always fail on all three dimensions.