Most workstream charters are written once and never referenced again. They get built during the first two weeks of a program to satisfy a governance checkbox rather than to guide work. A useful charter is a reference document the workstream lead and their team use weekly to answer recurring questions: what are we responsible for, what’s outside our scope, who do we depend on, and what does success look like. When the charter lives in somebody’s email, the team has already lost it.
What Makes a Charter Useful
A useful charter does two things:
- It draws a clear boundary around scope (in and out, explicitly stated) and names the dependencies in specific terms. “This workstream owns the patient services hub build-out for the launch market” is useful. “This workstream is responsible for patient support enablement” is not. The boundary must be specific enough that when someone asks “does this workstream own the specialty pharmacy contracting?” the charter answers without interpretation. Dependencies need dates, owners, and escalation paths; without those, they’re wishes, not commitments.
- It defines the workstream’s success criteria (not the program’s, but the workstream’s). “Successful hub onboarding” is vague. “Patient enrollment workflow processing through all specialty pharmacy partners with less than 2% error rate by March 31” is a gate the team can plan against and leadership can evaluate.
What Ineffective Charters Have in Common
Ineffective charters tend to be comprehensive and vague at the same time. They fill every field in the template with language that sounds specific but commits to nothing. The scope section says “drive technology enablement across the enterprise.” The timeline says “aligned with program milestones.” The success criteria say “on-time delivery of key capabilities.” Each of those statements could mean almost anything; a workstream lead reading their own charter can’t answer the question “what am I on the hook for this quarter?” because the charter wasn’t written to answer it.
The other common failure is charters that are too detailed on activities and silent on boundaries. A charter that lists forty-seven tasks but doesn’t state what the workstream will not do leaves scope ambiguous. When a new request arrives adjacent to the workstream’s domain, there’s no document that says “that’s outside our boundary.” The workstream absorbs the request, scope expands, and the timeline slips.
How Good Charters Get Built
Good charters are built in a working session that puts the workstream lead, the program lead, and representatives from dependent workstreams in the same room, run the way we’d approach any collaborative roadmap session.
The session starts with scope. The workstream lead proposes what’s in and what’s out, and the room pressure-tests the boundaries: “if your workstream doesn’t own the payer submission strategy, who does?” Because the charters are built together rather than in isolation, gaps and overlaps between workstreams become visible, and the program’s architecture becomes legible in a way that isolated planning never achieves.
Dependencies get mapped against the other workstreams in the room. When the Market Access workstream says they need final label language from Regulatory before filing payer submissions, the Regulatory lead confirms the timeline or flags a conflict; the dependency is negotiated in real time rather than assumed.
Success criteria get defined with input from the program lead, who connects them to program-level milestones. The workstream’s criteria are specific enough to measure and connected to the program’s criteria clearly enough that the workstream lead understands how their work fits into the whole.
The Reference Test
The test for a good charter is whether it gets referenced after it’s written. If the workstream lead pulls up the charter in a weekly planning meeting to check a scope question, it’s doing its job; if it lives in a folder nobody has opened since the chartering session, it was homework.
Charters that pass the reference test share one quality: they were built to be useful, not to be complete. They answer the questions every workstream faces (what’s in scope and what does done look like) at the level of specificity the team actually needs.